How to send multiple variables from Processing to Arduino

Categories Interaction, Programming

I wanted to be able to send a set of 4 integers to an Arduino from a Processing application that I am working on. I also wanted to be able to prefix each set with a tag, so that I could deal with different sets of integers in specific ways, for specific purposes.

For instance, I have an LCD screen and some LEDs connected to my Arduino, or maybe I have a servo motor and a speaker that I want to control based on events happening in software running via Processing.

I modified* the code below from the code shown here (Arduino Cookbook @

*I simply expanded the number of variables being sent per packet.

Processing Code:

// Processing Sketch

/* SendingBinaryToArduino
 * Language: Processing
 */
import processing.serial.*;

Serial myPort;  // Create object from Serial class
public static final char HEADER = 'H';
public static final char A_TAG  = 'M';
public static final char B_TAG  = 'X';

void setup() {
  size(512, 512);
  String portName = Serial.list()[1];
  myPort = new Serial(this, portName, 9600);
}

void draw() {
}

void serialEvent(Serial p) {
  // handle incoming serial data
  String inString = myPort.readStringUntil('\n');
  if (inString != null) {
    print(inString);   // echo text string from Arduino
  }
}

void mousePressed() {
  sendMessage(A_TAG, 8, 5, 20, 33);
}

void sendMessage(char tag, int a, int b, int c, int d) {
  // send the header and tag, then each integer as two bytes
  myPort.write(HEADER);
  myPort.write(tag);
  myPort.write((char)(a / 256)); // msb
  myPort.write(a & 0xff);        // lsb
  myPort.write((char)(b / 256)); // msb
  myPort.write(b & 0xff);        // lsb
  myPort.write((char)(c / 256)); // msb
  myPort.write(c & 0xff);        // lsb
  myPort.write((char)(d / 256)); // msb
  myPort.write(d & 0xff);        // lsb
}

You can see in the sendMessage() function that each integer is split into two bytes, while the header and tag are each sent as a single byte – this means we are sending 10 bytes all up (we will need to reference this in the Arduino code in TOTAL_BYTES).
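To make the packet structure concrete, here's a quick Python sketch (not part of either program, just an illustration) that builds the same 10-byte message:

```python
def split_int(value):
    """Split a 16-bit integer into its most and least significant bytes."""
    msb = value // 256      # same as the (char)(a / 256) in the Processing code
    lsb = value & 0xff
    return msb, lsb

def build_packet(tag, a, b, c, d):
    """Build the 10-byte packet: header, tag, then four 2-byte integers."""
    packet = [ord('H'), ord(tag)]
    for value in (a, b, c, d):
        packet.extend(split_int(value))
    return packet

print(len(build_packet('M', 8, 5, 20, 33)))  # 10, matching TOTAL_BYTES
```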

Arduino Code:

// BinaryDataFromProcessing
// These defines must mirror the sending program:
const char HEADER       = 'H';
const char A_TAG        = 'M';
const char B_TAG        = 'X';
const int  TOTAL_BYTES  = 10; // the total bytes in a message

void setup() {
  Serial.begin(9600);
}

void loop(){
  if (Serial.available() >= TOTAL_BYTES) {
    if ( == HEADER) {
      char tag =;
      if (tag == A_TAG) {
        //Collect integers
        int a = * 256;
        a = a +;
        int b = * 256;
        b = b +;
        int c = * 256;
        c = c +;
        int d = * 256;
        d = d +;
        Serial.print("Received integers | a:");
        Serial.print(a);
        Serial.print(", b:");
        Serial.print(b);
        Serial.print(", c:");
        Serial.print(c);
        Serial.print(", d:");
        Serial.println(d);
      }
      else {
        Serial.print("got message with unknown tag ");
        Serial.println(tag);
      }
    }
  }
}

You can see after the comment “//Collect integers” that the first byte of each pair is stored in a variable (multiplied by 256 to reverse the division on the Processing side) and the second byte is then added to it (thus reversing the split that took place on the Processing side) – in this way we reassemble the data that we sent into a useable format.
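The reassembly can be illustrated in Python too (again, just a sketch, not part of the actual sketches above):

```python
def join_bytes(msb, lsb):
    """Reverse the msb/lsb split: multiply the high byte back up, add the low byte."""
    return msb * 256 + lsb

# an example 10-byte packet: 'H', 'M', then the integers 8, 5, 20, 33
packet = [72, 77, 0, 8, 0, 5, 0, 20, 0, 33]
values = [join_bytes(packet[i], packet[i + 1]) for i in range(2, 10, 2)]
print(values)  # [8, 5, 20, 33]
```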

From here you can do whatever you like with the data: use it for conditionals or write it to analog outputs. The applications are nearly endless.

Make a custom USB button pad [Pt III]

Categories Interaction, Uncategorized

In my previous posts I was modifying a traditional plastic-housed USB Keyboard that had a chip inside it like the one below:

These contact points had to be scraped or sanded clean in order to solder to them; however, scraping made them quite thin on the surface and therefore quite brittle.

The chips inside the cheap, flexible silicone keyboards (above) are actually much easier to work with because they have a line of female headers instead of contact strips. This allowed me to solder up a vero board to act as a breakout shield (below).

I bought some female-female jumper cables which I will be cutting, stripping and placing an n/o button between. This allows me to plug and play the buttons as needed with low risk. The layout of the header pins on the vero board was determined through shorting sections of the chip with test leads and figuring out the combinations necessary to make a minimum of 25 unique combinations/keystrokes.


Getting ready for User Testing

Categories Design, Interaction

I am user testing my interactive instrument tomorrow. I have four participants coming in throughout the day, all of them musicians, each with varying modes of musical expression ranging from more new school beat-makers to borderline analog purists. I am excited to see what they do with the device and whether they could see it fitting into their musical workflow. I will be recording all video, audio and screen activity from the day (as much as I can manage) and will then begin wading through it all to try and analyse the sessions. More to come soon.

Make a custom button pad for Raspberry Pi [Pt II]

Categories Interaction, RMIT

In my previous post I was using a testing cable with an alligator clip on one end and a probe on the other, shorting the circuit board that I took out of a cheap USB Keyboard. The small distance between each contact point made using the alligator clip awkward and prone to bridging nearby contacts. I decided that I would use a section of jumper ribbon cable that I salvaged from an old computer to form a sort of breakout board that I could use as a more robust, exploratory interface.

Firstly, I started stripping two sets of 13 wires for each bank of contacts on the USB circuit.

From there I started soldering each wire to the circuit board. Initially, creating a successful solder joint was difficult and I found that a light sanding of the contact points (to remove some surface coating and presumably add texture) made soldering much easier.

After all the contact points (in actuality, only those that were necessary) were soldered to the ribbon cable, I used jumper cables to open out the pins onto a small breadboard, allowing me to hook multiple jumpers up between points. I used this setup to create a document that outlines which connections produced a useable keystroke within the context of my project (in this case letters, numbers and symbols – no function or conditional keys). In total, I needed at least 25 different keystrokes for use in my 5 x 5 button matrix, with 8 additional keys to account for function keys.
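The document essentially amounts to a lookup table from pin pairs to keystrokes. In Python it might look something like this (the pin numbers here are made up for illustration, not my actual wiring):

```python
# Hypothetical map from (bank A pin, bank B pin) to the keystroke it produces.
keymap = {
    (1, 14): 'a',
    (1, 15): 'b',
    (2, 14): 'c',
    # ... one entry per useable connection, at least 25 in total
}

def keystroke_for(pair):
    """Look up the keystroke for a pin pair; None means the pair does nothing useful."""
    return keymap.get(pair, None)

print(keystroke_for((1, 14)))  # a
```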

In a later post I’ll show the process of creating the actual 25-key button matrix, solder it all up and test it to make sure that every button on the custom pad triggers a unique keystroke. It’s been a while since I’ve done a great deal of soldering, and my connections on the USB circuit – though functional – aren’t very clean looking, so I’m worried about soldering up 25+ buttons. Ultimately all the ugly electronics will be hidden inside a 3D Printed housing anyway, so no one will ever need to know.

 to be continued

Make a custom button pad for Raspberry Pi [Pt I]

Categories Interaction, RMIT

I needed a button matrix to use as an interface for a new digital instrument that I’m making as part of my undergraduate major project. Something like the monome would probably work quite well, but I cannot justify being lazy and paying for one when I feel like I could probably hack some existing things together and end up with a custom result.

In earlier posts I talked about establishing a serial connection between the Raspberry Pi and Arduino, because initially I intended to read a button array with the Arduino and then send little data packets via USB that told the Raspberry Pi which buttons were pressed.

Previous design exploration: 16-Digit Keypad connected to Arduino, connected via USB serial to Raspberry Pi using a 3.5″ TFT LCD Screen as display.

For a few reasons, particularly the latency issues, this wasn’t a viable solution. I am vaguely aware that the Arduino could be cut out of the process and the button array read by the Raspberry Pi directly, but I’m also vaguely aware that trying to figure out how to do that may set me back a month.

The solution was to buy a cheap USB Keyboard, gut it, rip the chip out and hook it up to a custom button array made up of a 5 x 5 grid of N/O buttons.

I like this solution in that it doesn’t really matter which contacts I solder the buttons to, so long as each button triggers a different keystroke. I can then assign each button its specific keystroke and map each keystroke to its corresponding position in my Processing/Python sketch.

Ultimately, the keyboard is controlled by a small chip that communicates via USB. The chip has two banks of 13 contact points. To get any of the ~101 keystrokes available you have to close the circuit between two specific contact points.
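Assuming each keystroke closes a circuit between one contact in each bank (which is how these keyboard matrices typically work), a quick back-of-the-envelope count shows there are plenty of candidate pairs to cover the ~101 keystrokes:

```python
from itertools import product

# Two banks of 13 contact points; each candidate keystroke is one pin from each bank.
bank_a = range(13)
bank_b = range(13)

pairs = list(product(bank_a, bank_b))
print(len(pairs))  # 169 candidate pairs, more than enough for ~101 keystrokes
```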

So I started blindly shorting the board with alligator clips and a probe with notepad open on my laptop to capture the keystrokes.

Ended up with some glitch poetry

Now I just need to perform the time-consuming task of soldering up an array of N/O buttons to the keyboard chip, based on the combinations that result in 25 useful keystrokes (letters, numbers, punctuation – no function or navigation keys). Then I need to house it all in a 3D printed case.

continue to Part II

Generative Art & Interaction Design

Categories Interaction, Programming

I find generative art to be a strange concept.


I feel the concept is at best too broad in that it would essentially describe all Art and at worst too narrow in that it describes only certain artistic movements in which a generative approach is apparent at a surface level.


Marius Watz: Three stills from Electroplastique, 2005.


EvoEco, Jon McCormack, 2010.


Generative art is created by a seemingly autonomous system, usually a computer algorithm that has room for variance informed by some random element – though analog systems also apply. Some systems are completely chaotic, others enforce various over-arching rules in order to create an artistic focal point, perhaps in order to more faithfully represent existing artistic conventions; abstract paintings, ambient music and light-shows all lend themselves well to this sort of contained generative methodology.


Some digital generative art is so complex – or perhaps, some forms of digital art have exhausted themselves into predictability – that we can produce a near infinite amount of “unique” pieces on command from a single source (see “The Infinite Music Machine”). In this way, the algorithm-system itself is the artwork and its offspring a kind of sub-art. It’s an interesting development that has parallels in consumer culture: the movement from the naturally imbued variance of artisan-produced objects, through the precision and conformity of mass-produced objects, and now into an era of mass-produced bespoke objects. Though I believe that there will always be a territory of Art that is unable to be mass-produced simply by virtue of the fact that it isn’t – there may be less of a distinction between the lifeless-via-duplication and the lively-via-mutation in regards to new forms of generative Art.


We can create things that create things. But the things that we create can only create things that we essentially let them create.


If any of the above sounds radical or pointless or removed from reality, consider the humble wind chime.

The wind chime is a generative art object: its enforced over-arching rules are defined by the tonality that the relationships between each chime create (as in, the musical scale that the wind chime adheres to) and its random element is the wind itself, an autonomous system (which to discuss further is a philosophical conversation of its own).


The wind chime is a great example of how simple constructs can lead to complex, “embedded” or “reactionary” behaviours. It’s beautiful, transitory, unpredictable and fairly non-invasive to its environment. Besides sporadic melodies, its behaviour also contains information about the wind patterns outside, encoded within its output.
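To make the idea concrete, here's a toy Python sketch of a wind-chime-like generative system. The scale is the fixed, over-arching rule; the random choice stands in for the wind (the note names are just illustrative):

```python
import random

# The over-arching rule: every sound the "chime" makes must come from this scale.
scale = ['C', 'D', 'E', 'G', 'A']  # a pentatonic tuning

def gust(strength):
    """The random element: a stronger gust strikes more chimes."""
    return [random.choice(scale) for _ in range(strength)]

print(gust(3))  # three random notes, never one outside the scale
```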


I’d like to explore what the wind chime employs for its random element – that is, nature – in the context of a complex digital structure. The everyday world is a constant, ever-changing source of variables with intrinsic relationships between them. These related variables may yield beauty simply as a result of having to conform to the physical laws of existence (leading to ubiquitous relationships such as the Golden Ratio and Pi). If we data-mine the world around us using computer systems and use that data to inform generative art objects, we could fabricate simulated synaesthetic experiences.


The exploration of this idea is what I am basing my major undergraduate project on throughout 2013. I will be posting about that project on this blog as it progresses.


Following from my earlier post Embracing the Digital Landscape, I think generative procedures will be invaluable for creating modern computer-human interfaces and technologies. As defined above, generative art need not be chaotic. It can be a way to condense, rearrange, reform and “perceptify” data. Rather than presenting data as numbers (or equivalent), with singular meanings, we could attempt to create shapes, colours and sounds that contain multiple threads of data, to be understood at a glance.


David Eagleman (PhD) talking about colour perception in relation to real-world information

Interaction design and UX design often strive to streamline data, present a “cleaner” interface and occasionally to reduce the “depth” of an interface, bringing all the information as close to the user as possible. Generative methods might be useful in this regard.


Displaying webcam video on Raspberry Pi using pygame

Categories Open-source, Programming, RMIT, Visuals

Another small exploratory project related to my undergraduate Major Project had me trying to stream video onto a small 3.5″ TFT LCD display using a webcam connected to a Raspberry Pi mini computer.


Firstly, not all USB webcams work with the Raspberry Pi, so after some random trial and error I discovered this list of Raspberry Pi verified peripherals and bought myself a Logitech C100 for $20 on eBay.

Secondly, USB webcams seem to be horribly slow on the Raspberry Pi, pulling dodgy framerates of what might be <10fps at their full resolution (in my case 640×480 pixels). Originally, I was using the custom imgproc library (available here) but found either a lack of simple documentation for extended functionality like scaling, or that the library wasn’t built to perform those sorts of tasks. Eventually I settled upon using the pygame library (which you can get here). The documentation for pygame is thorough and easy to navigate, making troubleshooting and extension or exploration of the library very easy.

In order to combat the low framerate, and because resolution is not a priority (for the purposes of the concept I will be testing), I decided to pull the webcam feed in at a much lower resolution (32×24 pixels) and then scale it up to full screen (640×480 pixels). I’m unsure if the webcam can actually capture at 32×24 pixels; I have to look into that – and perhaps some pixel dropping techniques.
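The simplest form of pixel dropping is just keeping every n-th pixel in each direction, with no filtering at all. A rough sketch of the idea (pure Python, no pygame, purely illustrative):

```python
def pixel_drop(frame, step):
    """frame is a list of rows of pixels; keep every step-th row and pixel."""
    return [row[::step] for row in frame[::step]]

# a fake 640x480 "frame" where each pixel records its own (x, y) coordinate
full = [[(x, y) for x in range(640)] for y in range(480)]
small = pixel_drop(full, 20)   # 640x480 down to 32x24
print(len(small), len(small[0]))  # 24 32
```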

I wrote the code, copied it over, connected the webcam to the Raspberry Pi and ran the following python script:

import sys
import pygame
import pygame.camera

#initialise pygame and the camera module

#create 640x480 display
screen = pygame.display.set_mode((640,480),0)

#find, open and start low-res camera
cam_list =
webcam =[0],(32,24))

while True:
    #grab image, scale and blit to screen
    imagen = webcam.get_image()
    imagen = pygame.transform.scale(imagen,(640,480))
    screen.blit(imagen,(0,0))

    #draw all updates to display

    # check for quit events
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            webcam.stop()
            pygame.quit()
            sys.exit()

After a brief pause a pygame window pops up and the webcam feed is shown on the screen, scaled to fullscreen. The framerate is better than the full resolution webcam feed, but is still very slow by modern standards.
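To actually quantify “very slow”, a frame counter can be dropped into the loop. Here's a minimal hand-rolled version (pygame also ships pygame.time.Clock, which can track this for you; the class below is my own hypothetical helper):

```python
import time

class FPSCounter:
    """Estimate instantaneous framerate from the time between tick() calls."""
    def __init__(self):
        self.last = time.time()

    def tick(self):
        now = time.time()
        dt = now - self.last
        self.last = now
        return 1.0 / dt if dt > 0 else float('inf')

# inside the while loop you would call: fps = counter.tick()
```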


Drawing in Python using Arduino data via serial

Categories Open-source, Programming, Visuals

Following on from my previous post where I established serial communication between an Arduino and a Raspberry Pi, I needed to find out if I could store the data being sent from the Arduino in a variable on the Raspberry Pi and use it to alter some functions in a Python script.

I wrote up a piece of code that tries to read the data from the serial and use it to change the colour of a circle drawn in the center of a 640 x 480 pixel screen:

# Title: Arduino2RPiSerial
# Goal: to draw a circle based on colour values sent from the arduino

import pygame
import sys
import serial

#create a screen
screen = pygame.display.set_mode((640,480))

#create colours
redValue = 255
fgColour = (redValue,0,0)
bgColour = (0,0,255)

#open serial communication with the Arduino
ser = serial.Serial('/dev/ttyACM0',9600)

while 1 :
    #update colour value from serial (strip the newline, convert to int)
    redValue = int(ser.readline().strip())
    fgColour = (redValue,0,0)

    #draw the circle and update the display
    screen.fill(bgColour), fgColour, (320,240), 200, 0)

    # check for quit events
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit(); sys.exit()
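One caveat with the script above: ser.readline() returns a raw line of text (something like "128\r\n"), so before using it as a colour component it should be stripped, converted and clamped, or pygame will choke on out-of-range values. A sketch of a safer parse (parse_colour_value is my own hypothetical helper, not part of pyserial):

```python
def parse_colour_value(line):
    """Turn a raw serial line into a 0-255 colour component, or None if unparseable."""
    try:
        value = int(line.strip())
    except ValueError:
        return None
    return max(0, min(255, value))  # clamp into pygame's 0-255 colour range

print(parse_colour_value("300\r\n"))  # 255
```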

Getting the Raspberry Pi to talk to Arduino

Categories Open-source, Programming

As part of an exploratory design project I am undertaking within my undergraduate Major Project, I needed to establish serial communication between a Raspberry Pi (a miniature, credit-card sized computer) and an Arduino Uno (an electronic prototyping board). I found that Raspberry Pi could both supply power and talk to the Arduino Uno via USB cable.

I also found that, as a beginner in many aspects of Python programming, Linux commands and serial communication, much of the documentation on the internet seemed daunting, so I’ve written a very thorough walkthrough here to help anyone who is starting with the same amount of knowledge that I had, or less. Hopefully it can aid in the creation of more open-source Raspberry Pi / Arduino projects.

I based my efforts on the steps outlined here.




Arduino Uno, Raspberry Pi and Laptop.
Arduino Uno connected to the laptop, with a Raspberry Pi bundled with its own 3.5″ LCD Screen running below.


Firstly, I needed to install pyserial onto my Raspberry Pi. I have no ethernet cable or Wi-Fi dongle connected to my Raspberry Pi and so I had to transfer the pyserial-2.6.tar file via a USB Flash drive. The file can be downloaded here.

After transferring the file to the desktop (home/pi/desktop) and unzipping its contents to its own folder on the desktop (home/pi/desktop/pyserial-2.6) I opened up LXTerminal to execute the following commands:

cd /home/pi/desktop/pyserial-2.6

The cd command directs the terminal to a folder on the Raspberry Pi (in this case, the pyserial folder on the desktop), to give future commands a context of operation. Basically, the command tells the terminal where we are working from and where to look for any files called upon.


sudo python install

This command tells Python to install pyserial. I hit enter and the terminal spat out line after line of output, completing the install process shortly after.

From this point, the Raspberry Pi was ready for serial communication via the pyserial module.

I needed to find out what the device name for the Arduino Uno was on the Raspberry Pi so that I could reference it in the python shell when opening up serial communications. For this I used a command that lists the devices:

ls /dev/tty*

The ls command lists files within a directory; the second part ‘/dev/tty*’ is a filter. The use of the wildcard character ‘*’ means that we are looking for any files that begin with ‘/dev/tty’ and end with any other combination of characters. I ran the command without the Arduino Uno connected and then again with the Arduino Uno connected so that I could see the difference between the two lists, and find out what its device name was.
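This before/after comparison can also be scripted. Here's a small Python sketch of the same idea (the two device lists below are hard-coded examples of what the two runs might return):

```python
import glob

def tty_devices():
    """List everything matching /dev/tty* right now."""
    return set(glob.glob('/dev/tty*'))

def new_devices(before, after):
    """Return device files present in `after` but not in `before`."""
    return sorted(set(after) - set(before))

# In practice: before = tty_devices() with the Arduino unplugged,
# after = tty_devices() with it plugged in.
before = ['/dev/tty0', '/dev/ttyAMA0']
after  = ['/dev/tty0', '/dev/ttyAMA0', '/dev/ttyACM0']
print(new_devices(before, after))  # ['/dev/ttyACM0']
```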




Using ls /dev/tty* to display a list of devices


It’s hard to see in the picture above because the 3.5″ TFT LCD screen has such a horrible resolution (and has a strange flickering colour issue) but in the second list positioned at 3 items from the bottom of the rightmost column, a file appeared that wasn’t present in the first list (/dev/ttyACM0). I made note of that file because I would need to reference it later.

Then, the code below needed to be uploaded to the Arduino Uno to generate a test signal:

//Code originally by simon monk (
//Commented by Daniel Kerris

//This code makes use of the Arduino's built-in LED which is located on Pin 13
const int ledPin = 13;

//This sets up the code and defines the baud rate as 9600 - it's important to use the same baud rate on the Raspberry Pi
void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

//This part of the code loops continuously with a delay of 1000 milliseconds between each loop
//It sends the message "Hello Pi" over and over again via USB
void loop() {
  Serial.println("Hello Pi");
  delay(1000);

  //This section checks to see if the Raspberry Pi is talking back
  //and then makes a call to the flash() function which is defined below
  if (Serial.available()) {
    flash( - '0');
  }
}

//This flash() function takes the number that the Raspberry Pi has sent over via USB (stored in the variable n)
//and uses it to determine how many times the LED should flash
void flash(int n) {
  for (int i = 0; i < n; i++) {
    digitalWrite(ledPin, HIGH);
    delay(100);
    digitalWrite(ledPin, LOW);
    delay(100);
  }
}

After the code was uploaded to the Arduino Uno, I connected it via USB to the Raspberry Pi. I could see that it was powered and that the code was working because the LED for Tx (serial communication) was flashing, showing that it was being used. The Raspberry Pi still needed some code to tell it to listen to the messages coming through the USB port.

I booted up the Raspberry Pi and opened up the application called IDLE, highlighted the Python Shell Window and entered the following commands, hitting enter after each line:

import serial
ser = serial.Serial('/dev/ttyACM0', 9600)

You’ll remember that ‘/dev/ttyACM0’ was the name of the Arduino that I had found earlier. 9600 is the baud rate, which is set the same as in the Arduino code above.


while 1 :
    print ser.readline()

The while statement is used to do a set of actions while a condition is true. In computer languages 1 and 0 are stand-ins for true and false, or on and off. Basically, this line of code is saying that while 1 is equal to true (which it always is) do something. This creates a loop that executes until the program quits, just like the void loop() in the above Arduino code. The ser.readline() is telling our serial object ser that we want to execute the function readline(), which reads from the serial device (‘/dev/ttyACM0’) that we established earlier.

I had to hit enter twice after the ser.readline() command to let Python know that I am finished writing commands within the while statement’s block (you can read more about code blocks here if you don’t understand).





Finally, the Raspberry Pi successfully read from the Arduino.

I couldn’t get return communication happening successfully, though luckily for me the project I am working on only requires one-way communication from Arduino Uno to Raspberry Pi and not the other way round. I’m going to be using the Arduino Uno to handle sensors and tangible controls as I am much more comfortable coding and prototyping in that environment. I will write code that analyses and packages the information from sensors and tangible controls and sends it via USB to the Raspberry Pi, which will handle all of the GUI and Sound output (which the Arduino would otherwise struggle to handle).

Transience in Social Technologies

Categories Interaction, Social, Technology

Real life is dynamic and transitory.


Day to day communication exists in its moment of context and (for the most part) that’s it. Luckily, everything you’ve ever said or done hasn’t been recorded and archived. It doesn’t need to be. [Edit: Unless you’re the NSA, in which case you suck communications straight from the faucet to be filed away]


Conversely, social networks are mostly static and perpetual.


This means that our social tools and our social behaviours meet at an awkward divide. We don’t necessarily want to interact in an environment of forever.


Communication in real life can be flippant, idle, playful, not-for-archival, transitory, momentary, uncrafted and raw – it doesn’t have to be, but it does serve a certain function to be at times. The truth can be best delivered by the spoken word. It’s why a phone-call is often a better way to discuss serious or emotionally heated matters than the perpetual, stilted, everlasting, re-assessable world of the text message. The text message can exist within its temporal context and outside it; it can contain subtext that only grows with its permanence.


There aren’t many social tools that acknowledge this divide. What results is warped behaviour.



The Guardian, UK. The future of social interaction.


It’s long been apparent that a person’s online persona (such as a Facebook profile, Tumblr page or Twitter account) is a curated version of their own life and not an accurate reflection of that person – not a new concept; consider the useful social “masks” we all wear on a daily basis. The effect is, however, heavily amplified by the perceived (and actual) permanence of the “profile”: the existence of a social receipt of every interpersonal transaction you’ve made on a social network. Eventually, we learned to brand ourselves, to periodically check and bask in our own image, to actively compare ourselves to each other in an easily cross-referenced, standardised format.


Ever noticed how human behaviour changes around the presence of a camera? That’s because a sense of permanence has just been introduced to a transitory environment. Interactions become crafted, more curated and less a reflection of the usual.


And though permanence is incredibly useful, it doesn’t describe the rightful state of all social interactions.


“The internet is forever.”


Well, let’s pretend it isn’t.


Data can be useless. Delete it. Let it fall into non-storage. The lack of transience devalues some communication by valuing it too much. Non-permanent information and communication serves a different function to permanent information.


A great example of the validity of non-permanent communication is Snapchat, a photo-sharing application that enforces transience. The app works as follows:


  1. You take a photo
  2. You specify the photo’s lifespan (in seconds)
  3. You send the photo to another user
  4. The photo is received, viewed and auto-deleted after the specified lifespan.


Snapchat takes the existing functionality of a social tool (picture messaging) and adds the element of enforced transience (auto-deletion). What we see is that Snapchat is used for very different forms of communication than the standard MMS functionality of modern phones. The non-permanent nature of the communication leads to more playful and perhaps baser interactions such as frivolity and flirting – these modes of communication are no less socially enriching than others. Snapchat is successful because it caters to our need to communicate digitally outside of a permanent environment.
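The core mechanism – a message that carries its own expiry – is simple to model. Here's a toy Python sketch (the class and its names are made up for illustration, not Snapchat's actual implementation):

```python
import time

class TransientMessage:
    """A message that stops being readable once its lifespan has passed."""
    def __init__(self, content, lifespan_seconds):
        self.content = content
        self.expires_at = time.time() + lifespan_seconds

    def view(self):
        if time.time() >= self.expires_at:
            return None   # auto-"deleted": the content is no longer served
        return self.content

snap = TransientMessage("a frivolous photo", 10)
print(snap.view())  # readable now; returns None once 10 seconds have passed
```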


Transient content, self-destructing media and perhaps degradable social connections are surely things to consider implementing in future social media environments. Ubiquitous or wearable computers and near-field communicators will hopefully bring about the data we will need to create honest social network technologies.