

A Shared Instrument Using Game Controllers and a Laptop

On April 20, 2014, I held my first public performance using an instrument I built and programmed.

It was a gamepad-controlled musical instrument, combining off-the-shelf hardware and open source software.

The performance was held for Sunday Assembly Atlanta (SAA), a secular organization whose motto is: Live Better, Help Often, Wonder More.

(For the folks at SAA – I apologize for taking so long to write this, but here it is – along with the source code on GitHub – for any geeks in the crowd who want to play with it themselves.)

After finishing a critical phase of a work-related project, I now have time to write an overview of what I did, why I did it, how it turned out, and what I learned.

Why do this?

I started thinking about this project after attending the first two meetings of Sunday Assembly Atlanta. Several parts of those meetings involved karaoke, and I wondered whether there was another form of musical interaction – perhaps even one unique to the organization – that we could do as a body. That got me thinking about new ways of looking at how music could be made and how it could be experienced.

Thus began this experiment in collaborative music.

What did I build?

Over the course of a few months I built a “shared” instrument that used wireless PlayStation 3 controllers to assemble loops of user-selected samples. The audience would pass around the controllers to add and remove samples from musical tracks that had predefined beats and time signatures. The patterns they created would be projected on a screen as a sequencer-like display.

[youtube=http://www.youtube.com/watch?v=el22V6u63Uw]

Why did I build it this way?

I know a lot of different programming languages, but really didn’t have a good idea of how to create sounds (much less music) starting from raw code. I needed to use something that met the following criteria:

  1. Didn’t take too long to learn
    I wanted to start and finish within a reasonable period of time.
  2. Cheap/free to learn
    I’m thankful that in this day and age, you can learn pretty much any programming language for free (more on this later).
  3. Had a community of users who could offer suggestions and sample code
    No one wants to reinvent the wheel if they don’t have to.

I narrowed it down to two languages: ChucK and Processing. I had recently learned ChucK through an instructor-led Coursera class, and I found the language easy enough to use for the types of projects I had in mind.

Processing I learned almost at the last minute, because it had a framework for receiving Open Sound Control (OSC) signals. The language wasn’t that hard to grasp, since I’d used something like it when I was playing with Arduino micro-controllers.

So…how does it work?

I used five PS3 DualShock Sixaxis controllers as the physical part of the instrument. I chose these controllers because (1) my son had one that I could experiment with; and (2) other than a Bluetooth adapter and a USB cable, they don’t require any additional hardware to connect to the computer.

The sound generators are a collection of scripts written in ChucK. Some scripts assign a collection of samples to each gamepad that’s linked to the computer. Other scripts define a common time signature and set of note lengths.

As mentioned above, a Processing sketch handles the visual display.

Each gamepad controls a track. A track has a collection of samples and a time signature assigned to it.

Once the sound generators start, they listen for presses of the four buttons on the right side of the pad. Pressing those buttons assigns a sample to a beat, and that sample gets played during the next pass through the loop.
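
To make that concrete, here’s a minimal sketch of the idea in ChucK. This is not the actual Track.ck code – the sample path, the eight-step loop, the tempo, and the “any button toggles the current step” mapping are all stand-ins of mine:

    Hid hi;
    HidMsg msg;

    // open the first gamepad ChucK can see
    if( !hi.openJoystick( 0 ) ) me.exit();
    <<< "gamepad ready:", hi.name() >>>;

    // one sample for the demo (hypothetical path)
    SndBuf buf => dac;
    me.dir() + "audio/snare.wav" => buf.read;
    buf.samples() => buf.pos;  // don't play it at load time

    8 => int STEPS;
    int pattern[STEPS];  // 1 = play the sample on this beat
    0 => int step;

    // watch for button presses on a separate shred
    fun void listen()
    {
        while( true )
        {
            hi => now;
            while( hi.recv( msg ) )
            {
                // button numbering varies by driver; here any
                // button toggles the sample on the current step
                if( msg.isButtonDown() )
                    1 - pattern[step] => pattern[step];
            }
        }
    }
    spork ~ listen();

    // the loop: step through the beats and fire flagged samples
    while( true )
    {
        if( pattern[step] == 1 ) 0 => buf.pos;
        ( step + 1 ) % STEPS => step;
        0.25::second => now;  // eighth notes at 120 BPM
    }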

Code summary

Here’s the high-level summary of what each of the scripts in the instrument does.

initialize.ck

  • calls the global classes

score.ck

  • creates the tracks
  • assigns sample groups to each track
  • sets the address and port number the tracks send OSC signals on (sketched below)
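
The OSC plumbing itself is only a few lines of ChucK. Here’s a sketch of the sending side – the host, port, message address, and layout below are my assumptions rather than what score.ck actually uses (port 12000 is the usual oscP5 default on the Processing side):

    // set up an OSC transmitter (hypothetical host/port)
    OscSend xmit;
    xmit.setHost( "localhost", 12000 );

    // tell the display which beat of which track just changed
    fun void sendBeat( int track, int beat, int on )
    {
        xmit.startMsg( "/track/beat", "i i i" );
        track => xmit.addInt;
        beat => xmit.addInt;
        on => xmit.addInt;
    }

    // example: track 0 just turned on beat 3
    sendBeat( 0, 3, 1 );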

BPM.ck

  • global time signature
  • sets beats per minute (that is, the tempo of the piece)
  • sets durations for quarter, eighth, 16th, and 32nd notes
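
The heart of a class like this is tiny. Here’s a sketch patterned on the BPM class from the Coursera course mentioned above (treat the names as assumptions):

    public class BPM
    {
        // durations every track shares
        static dur quarterNote, eighthNote, sixteenthNote, thirtysecondNote;

        fun void tempo( float beat )
        {
            // seconds per beat at the given beats-per-minute
            60.0 / beat => float SPB;
            SPB::second => quarterNote;
            quarterNote * 0.5 => eighthNote;
            eighthNote * 0.5 => sixteenthNote;
            sixteenthNote * 0.5 => thirtysecondNote;
        }
    }

Once one shred calls tempo() – say, BPM bpm; bpm.tempo( 120.0 ); – every other shred can stay in step by advancing time with BPM.quarterNote => now;.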

SoundChain.ck

  • sets the master volume

BaseTrack.ck

  • Defines the properties and methods that all track types have in common.

Track.ck

  • A type of BaseTrack
  • Adds more properties and methods specific to what I want this instrument to do.

AutoTrack.ck

  • Another type of BaseTrack – not implemented.
  • It was meant to take information about what was played so far and use some rules to automatically generate sounds or kick off samples.

joystick_test.ck

  • Joystick testing code borrowed from one of the examples on the ChucK site

rec-audio-stereo.ck

  • Records audio from the tracks
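
This one is close to the stereo recording example that ships with ChucK – the output file name here is my own placeholder:

    // tap the dac and write everything to a stereo WAV file
    dac => WvOut2 w => blackhole;
    me.dir() + "session.wav" => w.wavFilename;

    // keep this shred alive; recording stops when all shreds are removed
    while( true ) 1::second => now;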

TrackSequence.pde

  • Processing sketch file for the instrument’s visual display.
  • Listens for OSC signals from the tracks and renders them as a sequencer-like grid

How do you hook up the game controllers?

I’m running an Ubuntu Linux system, so my process for getting the PS3 controllers hooked up via Bluetooth will differ from that on other systems. Once you’ve got one set up, hooking up the others is easy – it’s the same process.

I hooked up five controllers for the Sunday Assembly event, and from what I’ve read in various places, you can hook up as many as seven. I didn’t have time (or money) to buy a bunch of PS3 controllers, so I asked folks to lend me any they could spare.

One thing to be aware of: if you leave the controllers idle for too long, the Bluetooth connection drops – so keep an eye on their usage.

How did it go?

Despite a slight hiccup – and what public demo doesn’t have one? – we got the setup working so that people could play with it after the meeting. People seemed to find it interesting, and I got a lot of questions about how I built it. I also got suggestions from the audience and from people outside of Sunday Assembly for new rules, new samples/instruments, and new interfaces.

Here’s the raw audio (converted to MP3 format):

[soundcloud url="https://api.soundcloud.com/tracks/150582520" params="color=00cc11&auto_play=false&hide_related=false&show_artwork=true" width="100%" height="166" iframe="true" /]

It’s a long file – I kept it running while people were eating brunch after the meeting.

I really appreciate all the feedback I’ve gotten – it will definitely influence not only the next iteration of this project, but future projects as well.

Lessons learned

  • When proposing a project that you’ve never done before, either keep your mouth shut until you have a proof-of-concept already done, or give yourself a *lot* of time to develop and test the project. Given what I do for a living, I should’ve known better, but I let the “holy crap, this is so cool!” aspect of it cloud my judgement of the level of effort involved.
  • Nothing is more motivating than making an announcement in front of people, and then striving to meet their expectations.
  • To make this work better in the future, I need to use a more ubiquitous device to act as a user interface. The most obvious would be smartphones, and I’m looking into how I could incorporate smartphones into future shared instruments.

Update: Source Code Is Available

I’ve uploaded the source code to GitHub for those who want to play with the code.

Have fun with it, and let me know if you get anything useful out of it.


The Fungi from Yuggoth Project – Programmatic (and Problematic) Composition

I had never heard the phrase “You get what you get and you don’t get upset” until I was listening to a lecture on poetry on CD with my son Jack. It made me think about this project of mine – creating an audio book of H.P. Lovecraft’s Fungi from Yuggoth. Not only was I recording my spoken version of it, but I was adding original soundtracks. And to put the cherry on this Geek Sundae, I was going to write code that would “render” the music for me.

The task was – and still is – daunting, and I’m uneasy about how it’s coming out. I can tell right now that more than half of this project will prove very difficult for a lot of people to listen to.

But you know what? To hell with it – this is fun for me…

My criterion for success is pretty simple: the project will be complete when all 36 poems are posted on Bandcamp.

The project consists of two versions of each poem – “compressed” and “uncompressed”. More on that later…

The music is created as the poem is typed. Each key pressed creates a note with a duration. Vowel keys and the space bar kick off samples or percussion instruments.

I’m using a programming language called ChucK for creating the music. I discovered the language while browsing for online classes at Coursera. The site had a class called “Introduction to Programming for Musicians and Digital Artists”. If you’re interested in using programming to create music, I recommend this course – it’s well-organized and you learn something regardless of whether you start as a coder or a musician.

To use this language, you’ll need to install ChucK and its development environment, miniAudicle. You can get them both here. I’m not going to get into the installation process – the ChucK website has a page devoted to that.

I use five scripts to create the music:

  • initialize.ck – this calls the master script, score.ck
  • score.ck – this calls three scripts needed to create and record the music
  • BPM.ck – this program defines Beats Per Minute (BPM) as well as named note durations (from whole note to 32nd note)
  • mechanical-typist.ck – this script is the heart of the music “rendering” system. It defines the rules and the instruments used, and it listens for the keyboard input that plays the instruments and effects (a minimal sketch of the idea follows this list).
  • rec-audio-stereo.ck – this is the recording script. It records until you shut off all the “shreds”, or pieces of code, running in ChucK.
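
Here’s that sketch of the mechanical-typist idea. The key-to-sound mapping and the sample path below are invented for illustration – the real rules live in mechanical-typist.ck:

    // a droning sine for pitched notes, plus one sample (hypothetical path)
    SinOsc s => dac;
    0.2 => s.gain;
    SndBuf buf => dac;
    me.dir() + "audio/hit.wav" => buf.read;
    buf.samples() => buf.pos;  // don't play it at load time

    // ASCII codes for the space bar and the vowels a, e, i, o, u
    [ 32, 97, 101, 105, 111, 117 ] @=> int triggers[];

    KBHit kb;
    while( true )
    {
        kb => now;  // wait for a keystroke
        while( kb.more() )
        {
            kb.getchar() => int c;

            // is this key one of the sample triggers?
            false => int hit;
            for( 0 => int i; i < triggers.size(); i++ )
                if( c == triggers[i] ) true => hit;

            if( hit ) 0 => buf.pos;                  // vowels/space fire the sample
            else Std.mtof( 40 + c % 36 ) => s.freq;  // other keys pick a pitch
            150::ms => now;  // every keystroke gets a duration
        }
    }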

There is also a folder called “audio” containing all the audio samples used by the scripts.

Each of these scripts was based on either the examples used in the Coursera class or examples on the ChucK website.

I’m making the files I used to create the music available as a zip file on my Google Drive, so feel free to play with them and create your own pieces.

Here’s an example of what a “rendered” composition sounds like:

[soundcloud params="color=33e040&theme_color=80e4a0"]https://soundcloud.com/bryant-ohara/sonnet-xi-the-well-test[/soundcloud]

I’ve taken these initial renderings and done additional processing in Audacity.

Here are some examples of a “compressed” and “uncompressed” version of the poem, “Night-Gaunts”:

[soundcloud params="color=33e040&theme_color=80e4a0"]https://soundcloud.com/bryant-ohara/the-fungi-from-yuggoth-sonnet[/soundcloud]

[soundcloud params="color=33e040&theme_color=80e4a0"]https://soundcloud.com/bryant-ohara/the-fungi-from-yuggoth[/soundcloud]

You may have noticed that the uncompressed version is significantly longer than the compressed version. I was initially at a loss for how best to present the poems. I didn’t want to use the rendered music solely as raw material – the rendering is the actual text of the poem, just transformed into sound. Each rendering is a tone poem in a very literal sense.

That still doesn’t make it any easier to listen to, which is why I’m adding a heavily processed version of the vocal track to the uncompressed pieces. As I progress in the project, I’ll think about what else, if anything, to add.

Stay tuned for more updates on the project!


The Fungi from Yuggoth Project – Origin Story

Why am I sitting at my desk, banging my head, trying to create a hard-to-make, hard-to-listen-to album of H.P. Lovecraft’s poetry?

For my peeps, that’s why.

Let me explain: I’ve been reading and listening to a lot of H.P. Lovecraft stories in both written and audio form for a few years now, and I started wondering whether there was some common ground between T.S. Eliot, Franz Kafka, and Lovecraft…but that is for another post.

Then one day I discovered that Lovecraft had written poetry as well as prose. The Fungi from Yuggoth consists of 36 sonnets that embody more or less the same elements of “cosmic horror” that run throughout his stories. There have been a few print editions of the poems; the most recent, from 2013, is illustrated by D.M. Mitchell.

There have been a few audio recordings done as well. The most recent that I can find is from 2009, by Pixyblink & Rhea Tucanae. They used electronica soundtracks and soundscapes as background music, to wonderful effect. I bought it, and I thoroughly enjoyed their version – but it only covered eleven of the 36 poems.

There are some older audio CDs of the complete set of sonnets – I found one by Colin Timothy Gagnon in the Internet Archive – and there are more than likely others. The poems are in the public domain, so there should be quite a few versions out there.

So I got this idea in my head to make my own version – and I was going to use my programming skills to create the music.

I also needed cheap birthday/Xmas gifts that were made from the heart…for my peeps…

For the sake of keeping this post short, here’s the high-level overview of what I hope to do:

  1. Create a program that turns a poem into a musical piece as it is typed. I’m calling this the “rendered” part.
  2. Record the spoken version of each poem.
  3. Combine the rendered part with the spoken part.
  4. Perform additional audio processing (I’m using Audacity).

As of this writing, I’ve already worked on ten of the poems. I’ll talk about how they came out in a later post.
