Hey there, folks! It’s been a while, but I’m restarting this website after being away doing…well, life-stuff.
I promise there will be new content soon – and more frequently from now on.
This is the second project of two that were assigned as part of the online course “Survey of Music Technology”. The course is (was, really – it’s almost over) available on Coursera.org and is taught by Dr. Jason Freeman from the Georgia Institute of Technology, with assistance from TA Brad Short.
The course covered a lot of ground – if you’re curious about the syllabus or the project descriptions, check out the links. Many students have been posting their projects to SoundCloud if you want to hear them:
[soundcloud url="https://api.soundcloud.com/groups/172269" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="450" iframe="true" /]
I liked using this method – and this framework – for creating compositions. I was able to do multiple runs and render multiple takes in WAV or MP3. You can even export your work as a project collection that can be opened in the Reaper desktop DAW. My only complaint was not being able to use more of Python’s power. I understand why: EarSketch is in a sense a “sandboxed solution” – it gives users tools to work with but tries not to give users power that could harm the application as a whole.
The script became musical crack for me after the first couple of renderings.
The core of it – the hardest piece to build – was the function for building the beat patterns. I wanted to use Euclidean rhythms, which more often than not are quite funky and represent much of the world’s grooves. There are a couple of algorithms for creating these rhythms – one from E. Bjorklund and one from Jack Elton Bresenham – and I had a hard time finding a Python version of either. In the end I reverse-engineered some ChucK code from the electro-music.com discussion forum that used the Bresenham algorithm. Here’s what my function looks like:
from random import randint

def EuclideanGenerator(pulses, steps):
    # Euclidean rhythm generator based on Bresenham's algorithm
    # pulses - number of pulses (onsets)
    # steps - number of discrete timing intervals
    # generates a beat string pattern where
    # 0 indicates the beginning of a sample,
    # - indicates silence,
    # + indicates continued play of the sample
    seq = ['0'] * steps
    error = 0
    # pick once whether non-onset steps are silence ('-') or sustain ('+')
    breakorcontinue = ['+', '-'][randint(0, 1)]
    for i in range(steps):
        error = error + pulses
        if error > 0:
            seq[i] = '0'
            error = error - steps
        else:
            seq[i] = breakorcontinue
    return ''.join(seq)
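A quick sanity check (this call isn’t part of the EarSketch script itself – it’s just illustrative): three pulses over eight steps gives the classic tresillo shape, with the fill character picked at random on each run.

print(EuclideanGenerator(3, 8))   # e.g. '0-0--0--' or '0+0++0++'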
Hopefully, if anyone else wants to do this in Python, it won’t be so hard to get started. Other people have done this in Python, but I had a hard time finding links that were still active. As of this writing, I did find someone who posted some code using the Bjorklund algorithm.
It’s probably not perfect – but hey, it’s funky enough for government work 🙂
So, back to the musical crack…
The final script produces unique works every time you render it: four tracks of Euclidean rhythms, and four sections of what I call “soundscape clouds.” Each soundscape cloud consists of four tracks containing bits and pieces of samples scattered across a set number of measures using a Gaussian distribution. A couple of effects are applied to almost every track. There are a lot of random decisions made, but there’s enough method to the madness to keep it interesting.
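To give a feel for the “cloud” idea, here’s a minimal sketch of the scatter logic in plain Python – the function name, event count, and quantization below are illustrative stand-ins, not the actual EarSketch calls from my script:

import random

# Hypothetical sketch: scatter sample slices across a span of measures
# using a Gaussian distribution centered on the middle of the section.
def scatter_cloud(start_measure, num_measures, events=16):
    center = start_measure + num_measures / 2.0
    spread = num_measures / 4.0   # most hits land inside the section
    placements = []
    for _ in range(events):
        where = random.gauss(center, spread)
        # clamp to the section and quantize to a sixteenth of a measure
        where = min(max(where, start_measure), start_measure + num_measures)
        placements.append(round(where * 16) / 16.0)
    return sorted(placements)

# measure positions at which the script would drop bits of samples
print(scatter_cloud(9, 8))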
I won’t bore you with the details, but if you want to run the code in EarSketch, here it is.
I’ve posted fifteen “Rendings” to my SoundCloud page under a single playlist to give you an idea of the range of what this one program could generate:
[soundcloud url="https://api.soundcloud.com/playlists/59129470" params="auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true" width="100%" height="300" iframe="true" /]
Personally, I think it came out better than I’d hoped. I had a lot of ideas running around in my head, most of which were unworkable given the constraints of the EarSketch web interface and its architecture. In the end, after a lot of math and programming, I got something that works, works consistently, and works reasonably well.
The most important thing I got out of this exercise – and the course as a whole – is that algorithmic composition is not just about generating some “bloop-bleep” computer music. It’s about applying processes that can make composition faster, provide creative pressure, and open up new possibilities in creative expression.
There will be more to come…
I’ve decided to return to my “Fungi From Yuggoth” project – I want to finish it by the end of this month and put it on Bandcamp.
But before I get back to it, I’m taking a little detour with an experiment in time-stretching. I’ve taken the first line from T.S. Eliot’s “The Hollow Men”, and done a couple of things to it.
For starters, I’ve used the singing synthesis component of the Festival package to create a couple of vocal parts around the first line:
We are the hollow men, we are the stuffed men.
I’ve decided to call these synthesized choral pieces The Bot Chorale – I’ve used them (with difficulty) in the past. I hope to use them more going forward.
These pieces – sliced, diced, time-stretched (with the Paulstretch algorithm), granulated, and mangled – came together in ways that I hope to use for “The Fungi From Yuggoth.”
And, yes, I will do “The Hollow Men” – eventually.
Consider this a teaser…
[soundcloud url="https://api.soundcloud.com/tracks/175582192" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]
I’ve been taking this course now for about three weeks, and it’s already filled in a few gaps in my knowledge of audio processing. For instance, I was always fuzzy on the physics and psychoacoustics of sound – now I’ve got a “less clueless” level of knowledge.
I’m also less afraid of MIDI than I was before. Setting up connections to hardware and/or software was, and is, still a little bit of a pain, but it’s getting easier.
The course has also forced me to learn a new Digital Audio Workstation (DAW) – Reaper. I’m running it on a Windows machine because it’s easier to do it that way. For the sake of the class, I want to spend time learning the tool instead of how to make it work in my other environments. I’ll run it in Linux and Mac OS X eventually – just not yet. I’ve also got a Mac Mini my brother-in-law gave me, but I haven’t taken the time to get comfortable with that system.
My first assignment is due by the end of this weekend, and I’ve chosen to use this as an opportunity to revisit an old project. I’m going to do a new recording of “The Love Song of J. Alfred Prufrock” by T.S. Eliot. I really like this poem, and it was among the first pieces I uploaded to SoundCloud. Here’s the first version:
[soundcloud url="https://api.soundcloud.com/tracks/27485009" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]
This time, I feel I’m relying more on my own musical composition chops. Here’s how I’m starting:
[soundcloud url="https://api.soundcloud.com/tracks/173747350" params="color=ff5500&inverse=false&auto_play=false&show_user=true" width="100%" height="20" iframe="true" /]
The assignment constrains the length of the piece (60 to 120 seconds) so the reading will only cover the first section. I’ll post the completed assignment to this article once it’s done.
UPDATE: Here’s the completed assignment:
[soundcloud url="https://api.soundcloud.com/tracks/174026783" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]
I’m pleased with how it came out. Now to finish the rest of the poem…
On Sunday, August 31, 2014, I did a presentation on the intersection of technology and poetry at art|DBF, an art-oriented segment of the Decatur Book Festival. The presentation was the culmination of several months of coding to develop a system that allowed a poet and an audience to create an interactive soundscape.
Most people, when they think of poetry, think of it as this fundamentally human, often life-affirming activity.
Most people, when they think of technology, think of it as this inhumane, if not inhuman, often soul-crushing process.
This is a false dichotomy, of course. Poetry and technology are both artifacts of what humans do. They are both profoundly human acts.
From the campfire to the cathedral, from the crystal AM radio to the liquid crystal display, our technology has affected what form poetry takes, who creates it, who listens to it, where it is experienced, and how it is distributed.
My intent was to build a demonstration of one possible way to enhance the experience of poetry for both poet and audience.
I built a web-based audio application that used smartphones to control sounds.
The phones accessed a web server running on my laptop; through its pages, the audience could read along with the poems being performed and manipulate sounds using one of three instruments.
The audience accesses a website via smartphone. The site offers a view of the current poem, a dropdown selection of poems, and links to one of three musical interfaces, the first of which displays by default.
The three interfaces do the following things:
The interface for the poet has six options – unfortunately only four worked at the time of performance, and only three worked without issues.
The speaker had a separate interface for adding vocal effects and a background beat.
The web pages sent messages to a set of ChucK scripts running on my laptop. The scripts generated the sounds and altered the vocals as well as recorded the presentation.
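As a rough illustration of the plumbing (not my actual server code), this is the kind of OSC message the web layer could relay to the ChucK scripts, sketched here with the python-osc library – the port number and address names are made-up placeholders:

from pythonosc.udp_client import SimpleUDPClient

# Hypothetical example: the web server relays a button press from a phone
# to the ChucK synthesis scripts listening for OSC on the same laptop.
client = SimpleUDPClient("127.0.0.1", 6449)        # ChucK listening port (placeholder)
client.send_message("/instrument/1/trigger", 0.8)  # start a sound at a given level
client.send_message("/vocal/reverb", 0.3)          # set a vocal-effect amount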
The presentation itself was well-received. It was in the tent for Eyedrum, an Atlanta-based, non-profit organization developing contemporary art, music and new media in its gallery space.
I did my presentation outside with a set of powered PC speakers attached to the laptop. Later, I borrowed a PA and mixer from my friend and fellow poet Kevin Sipp. By the way, check out his debut graphic novel, The Amazing Adventures of David Walker Blackstone.
The laptop was attached to a wireless router that passersby could use to connect to the website. Everyone was able to connect and interact with the site. There were some glitches – which I’ll talk about later – but for the most part, people seemed intrigued by the possible uses of mobile and web technology for poetic performances.
A couple of components either did not perform as expected or did not work at all. Of the audience-specific pages, Instrument 3 did not play or was at too low a volume to be heard over the ambient sounds of the festival. There were also some issues with switching between poems.
The poet-specific pages had issues with two of the six effects: “Multicomb” and “Spectacle”. The multicomb filter had a problem with feedback and was too loud. The spectacle effect didn’t work at all. In addition, the audio started suffering from latency issues. The recording of the first twenty minutes of the presentation was plagued by unintended glitching and was pretty much ruined. The recording of the last fifteen minutes was a little better (I had stopped the recording to switch to Kevin’s PA setup), but ran into the same issue not long into the presentation.
Overall, I think the presentation was well-received, and people were intrigued by what they heard. The issues with the setup became clear when I reviewed the recordings. There’s definitely room for improvement, and I will definitely build upon this design for future performances.
Despite the issues, I consider the project a success. This is a prototype, so I expected some problems. Luckily, none of the problems were catastrophic. There were lots of bloops and bleeps, but nothing went “boom”. It would only have been a failure if I had learned nothing from the experience.
Until next time, check out the code, play with it, and let me know if you use it or modify it.
I’ve put together a proof of concept for enhancing poetry with ChucK scripts. However, I soon realized that I wasn’t actually doing pitch-following; instead, the code I put together was something called an “envelope follower”. I’ve uploaded the code to GitHub in case anyone wants to play around with it (you’ll need ChucK and the Audicle or miniAudicle IDE).
My physics is really rusty, so the best way I can explain it is that instead of checking the pitch of the voice to determine whether to kick off an effect, the script checks the *power* of the voice. I interpret this as more of a measurement of inflection or stress.
Not exactly what I’d planned, but it’s in the right direction.
This first draft of the script taught me a few things about how to build ChucK scripts that would respond to vocal input. For starters, I now have a new dimension to the vocals that I can use to kick off effects. Currently the threshold used to determine when the effects start has to be manually adjusted, but that could be dynamically changed through some other criteria like external data feeds or input by other people.
I also found that I needed to have a means to stop as well as start effects. When I first put the code together without having a means to stop an effect, the result got noisier and louder until I manually stopped the program.
I also wanted to vary the duration of the effects, so I did the following: (1) I included a global class for setting tempo and note durations; then (2) I added an array of time durations and looped through them each time an effect got kicked off.
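The actual scripts are in ChucK, but the core idea is easy to sketch in Python (purely illustrative – the frame size, threshold, and duration values here are made up): follow the RMS power of the incoming audio, fire an effect when it crosses the hand-tuned threshold, and cycle through a fixed list of durations so every triggered effect has a definite length and end.

import numpy as np

FRAME = 1024                         # samples per analysis frame (illustrative)
THRESHOLD = 0.1                      # hand-tuned trigger level, as described above
DURATIONS = [0.25, 0.5, 1.0, 2.0]    # effect lengths in seconds (made up)

def follow_envelope(signal):
    # Walk the signal frame by frame, measure its RMS "power", and return
    # (frame_index, effect_duration) pairs. Cycling the duration list gives
    # successive effects different lengths, and every effect has an end,
    # so nothing can pile up and get louder indefinitely.
    events = []
    hit_count = 0
    for start in range(0, len(signal) - FRAME, FRAME):
        frame = signal[start:start + FRAME]
        rms = np.sqrt(np.mean(frame ** 2))
        if rms > THRESHOLD:
            duration = DURATIONS[hit_count % len(DURATIONS)]
            events.append((start // FRAME, duration))
            hit_count += 1
    return events

# quick test: a burst of noise in the middle of silence
test = np.zeros(44100)
test[20000:24000] = np.random.uniform(-1, 1, 4000)
print(follow_envelope(test))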
Most of the resulting code is cobbled together from existing code examples found on the internet. My coding philosophy for the most part is based on what I call “the thieving magpie”: find components that do what I want (or close to it), slap them together, then modify as needed until I get the desired result.
The poem I used for the demo is “The Seekim”, by Sidney H. Sime. It comes from the book “Bogey Beasts”, which is out-of-print and hard-to-find. Each poem was written and illustrated by Sime; each poem also had a musical score written by Joseph Holbrooke. I’ve never heard the music performed, but the book fascinated me. I’m still kicking myself for having sold it at a used book store almost twenty years ago.
[soundcloud url="https://api.soundcloud.com/tracks/157563219" params="auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true" width="100%" height="450" iframe="true" /]
So even though I didn’t exactly know what I was coding, I got some results I liked, and learned enough to start thinking about next steps.
I’ve got a presentation coming up in a month and some change (more on that in another post), and I want to blend spoken word poetry, music, and programming. Currently I’m thinking of using something called a “pitch follower” as the core program.
Based on what value(s) I look for, I want to have different inflections of my voice trigger a process that could, for instance…
These are some of the possible uses running through my head. Hopefully I’ll have a demo file ready some time within the next couple of days.
On April 20, 2014, I held my first public performance using an instrument I built and programmed.
It was a gamepad-controlled musical instrument, combining off-the-shelf hardware and open source software.
The performance was held for Sunday Assembly Atlanta (SAA), a secular organization whose motto is: Live Better, Help Often, Wonder More.
(For the folks at SAA – I apologize for taking so long to write this, but here it is – along with the source code on GitHub – for any geeks in the crowd who want to play with it themselves.)
After finishing a critical phase of a work-related project, I now have time to write an overview of what I did, why I did it, how it turned out, and what I learned.
I started thinking about this project after attending the first two meetings of Sunday Assembly Atlanta. There were several parts of the meeting where we did karaoke, and I was wondering whether there was another form of musical interaction – perhaps even unique to the organization – that we could do as a body. That got me thinking about new ways of looking at how music could be made and how it could be experienced.
Thus began this experiment in collaborative music.
Over the course of a few months I built a “shared” instrument that used wireless PlayStation 3 controllers to build loops of user-selected samples. The audience would pass around the controllers to add and remove samples from musical tracks that had predefined beats and time signatures. The patterns they created would be projected on a screen as a sequencer-like display.
I know a lot of different programming languages, but really didn’t have a good idea of how to create sounds (much less music) starting from raw code. I needed to use something that met the following criteria:
I narrowed it down to two languages: ChucK and Processing. I had recently learned ChucK through an instructor-led Coursera class, and by the end of it I found the language easy enough to use for the types of projects I had in mind.
Processing I learned almost at the last minute because it had a framework for receiving Open Sound Control (OSC) signals. The language wasn’t that hard to grasp since I’d used something like it when I was playing with Arduino micro-controllers.
I used five PS3 DualShock Sixaxis controllers as the physical part of the instrument. I chose these controllers because (1) my son had one that I could experiment with; and (2) other than a Bluetooth adapter and USB cable, they don’t require any additional hardware to connect to the computer.
The sound generators are a collection of scripts written in ChucK. Some scripts assign a collection of samples to each gamepad that’s linked to the computer. Other scripts define a common time signature and set of note lengths.
Again, the Processing sketch handles the visual display.
Each gamepad controls a track. A track has a collection of samples and a time signature assigned to it.
Once the sound generators start, they listen for the four buttons on the right side of the pad being pressed. Pressing those buttons assigns a sample to a beat that gets played during the next loop.
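Conceptually, each track behaves like a small step sequencer. Here’s a minimal Python sketch of that idea (the real logic lives in the ChucK scripts on GitHub; the sample names and step count below are placeholders):

# Hypothetical sketch of one track of the shared instrument.
class Track:
    def __init__(self, samples, steps=16):
        self.samples = samples        # one sample per right-hand button
        self.steps = [None] * steps   # one slot per beat subdivision in the loop

    def press(self, button, current_step):
        # Pressing one of the four right-hand buttons assigns that button's
        # sample to the current beat; it plays on the next pass of the loop.
        self.steps[current_step] = self.samples[button]

    def clear(self, current_step):
        self.steps[current_step] = None   # remove a sample from a beat

    def play_step(self, step):
        return self.steps[step]           # whatever the loop should play right now

track = Track(["kick.wav", "snare.wav", "hat.wav", "clap.wav"])
track.press(button=1, current_step=4)     # second button pressed on beat 5
print(track.play_step(4))                 # -> snare.wav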
Here’s the high-level summary of what each of the scripts in the instrument does.
I’m running an Ubuntu Linux system, so my process for getting the PS3 controllers hooked up via Bluetooth will differ from other systems. Once you’ve got one set up, hooking up the others is easy – it’s the same process.
I hooked up five controllers for the Sunday Assembly event, and from what I’ve read in various places, you can hook up as many as seven. I didn’t have time (or money) to buy a bunch of PS3 controllers, so I asked for folks to let me borrow any they could spare.
One thing to be aware of is that if you leave the controllers idle for too long, the Bluetooth connection drops – so keep an eye on their usage.
Despite a slight hiccup – and what public demo doesn’t have one? – we got the setup working so that people could play with it after the meeting. People seemed to find it interesting, and I got a lot of questions about how I built it. I also got suggestions from the audience and from people outside of Sunday Assembly for new rules, new samples/instruments, and new interfaces.
Here’s the raw audio (converted to MP3 format):
[soundcloud url="https://api.soundcloud.com/tracks/150582520" params="color=00cc11&auto_play=false&hide_related=false&show_artwork=true" width="100%" height="166" iframe="true" /]
It’s a long file – I kept it running while people were eating brunch after the meeting.
I really appreciate all the feedback I’ve gotten – it will definitely influence not only the next iteration of this project, but future projects as well.
I’ve uploaded the source code to GitHub for those who want to play with the code.
Have fun with it, and let me know if you get anything useful out of it.
I had never heard the phrase, “You get what you get and you don’t get upset” until I was listening to a lecture on poetry on CDs with my son Jack. It made me think about this project of mine – creating an audio book of H.P. Lovecraft’s Fungi from Yuggoth. Not only was I recording my spoken version of it, but I was adding original soundtracks. And to put the cherry on this Geek Sundae, I was going to write code that would “render” the music for me.
The task was – and still is – daunting, and I’m uneasy about how it’s coming out. I can tell right now that more than half of this project will prove very difficult for a lot of people to listen to.
But you know what? To hell with it – this is fun for me…
My criterion for success is pretty simple: the project will be complete when all 36 poems are posted on Bandcamp.
The project consists of two versions of each poem – “compressed” and “uncompressed”. More on that later…
The music is created as the poem is typed. Each key pressed creates a note with a duration. Vowel keys and the space bar kick off samples or percussion instruments.
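The scripts themselves are written in ChucK (more on that below), but here’s a rough Python illustration of the mapping idea – the specific pitches, durations, and sample choices are placeholders, not the values my scripts use:

import random

VOWELS = set("aeiou")
DURATIONS = [0.125, 0.25, 0.5]          # note lengths in seconds (illustrative)

def key_to_event(ch):
    # Every key press becomes a note with a duration; vowels and the
    # space bar additionally kick off a sample or percussion hit.
    note = 48 + (ord(ch.lower()) % 24)  # map the character to a MIDI-style pitch
    dur = random.choice(DURATIONS)
    sample = None
    if ch == " ":
        sample = "kick.wav"
    elif ch.lower() in VOWELS:
        sample = "shaker.wav"
    return note, dur, sample

for ch in "Night-Gaunts":
    print(ch, key_to_event(ch))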
I’m using a programming language called ChucK for creating the music. I discovered the language while browsing for online classes at Coursera. The site had a class called “Introduction to Programming for Musicians and Digital Artists”. If you’re interested in using programming to create music, I recommend this course – it’s well-organized and you learn something regardless of whether you start as a coder or a musician.
To use this language, you’ll need to install ChucK and its development environment, miniAudicle.
You can get them both here. I’m not going to get into the installation process – the ChucK website has a page devoted to that.
I use five scripts to create the music:
There is also a folder called “audio” containing all the audio samples used by the scripts.
Each of these scripts was based on either the examples used in the Coursera class or examples on the ChucK website.
I’m making the files I used to create the music available as a zip file on my Google Drive, so feel free to play with them and create your own pieces.
Here’s an example of what a “rendered” composition sounds like:
I’ve taken these initial renderings and done additional processing in Audacity.
Here are some examples of a “compressed” and “uncompressed” version of the poem, “Night-Gaunts”:
You may have noticed that the uncompressed version is significantly longer than the compressed version. I was initially at a loss for how best to present the poems. I didn’t want to use the rendered music solely as raw material – the rendering is the actual text of the poem, just transformed into sound. Each rendering is a tone poem in a very literal sense.
That still doesn’t make it any easier to listen to, which is why I’m adding a heavily processed version of the vocal track to the uncompressed pieces. As I progress in the project, I’ll think about what else, if anything, to add.
Stay tuned for more updates on the project!
Why am I sitting at my desk, banging my head, trying to create a hard-to-make, hard-to-listen-to album of H.P. Lovecraft’s poetry?
For my peeps, that’s why.
Let me explain: I’ve been reading and listening to a lot of H.P. Lovecraft stories in both written and audio form for a few years now, and I started wondering whether there was some common ground between T.S. Eliot, Franz Kafka, and Lovecraft…but that is for another post.
Then one day I discovered that Lovecraft had written poetry as well as prose. The Fungi from Yuggoth consists of 36 sonnets that embody more or less the same elements of “cosmic horror” that run throughout his stories. There have been a few print editions of the poems; the most recent, from 2013, is illustrated by D.M. Mitchell.
There have been a few audio recordings done as well. The most recent that I can find is from 2009 by Pixyblink & Rhea Tucanae. They used electronica soundtracks and soundscapes for background music to wonderful effect. I bought it, and I thoroughly enjoyed their version – but it only covered eleven of the 36 poems.
There are some older audio CDs of the complete set of sonnets – I found one by Colin Timothy Gagnon in the Internet Archive – and there are more than likely others. The poems are in the public domain, so there should be quite a few versions out there.
So I got this idea in my head to make my own version – and I was going to use my programming skills to create the music.
I also needed cheap birthday/Xmas gifts that were made from the heart…for my peeps…
For the sake of keeping this post short, here’s the high-level overview of what I hope to do:
As of this writing, I’ve already worked on ten of the poems. I’ll talk about how they came out in a later post.