

Euclidean Rhythms, Python, and EarSketch

This is the second of two projects assigned as part of the online course “Survey of Music Technology”. The course is (was, really – it’s almost over) available on Coursera.org and is taught by Dr. Jason Freeman from the Georgia Institute of Technology, with assistance from TA Brad Short.

The course covered a lot of ground – if you’re curious about the syllabus or the project descriptions, check out the links. Many students have been posting their projects to SoundCloud if you want to hear them:

[soundcloud url="https://api.soundcloud.com/groups/172269" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="450" iframe="true" /]

This project was all about algorithmic composition. We were to use EarSketch, a musical programming framework that is another creation from the folks at Georgia Tech. EarSketch is also a web-based Digital Audio Workstation (DAW): you write Python against the EarSketch API to build tracks and add effects. Apparently you can use JavaScript, too.
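
If you’ve never seen an EarSketch script, here’s a bare-bones sketch of the shape of one. The sound constant is a placeholder – swap in any real clip from the EarSketch sound browser:

from earsketch import *

init()                      # start a new project
setTempo(120)               # project tempo in BPM

# Lay a clip on track 1 from measure 1 to measure 9
# (SOME_DRUM_LOOP is a stand-in for a real sound constant).
fitMedia(SOME_DRUM_LOOP, 1, 1, 9)

# Pull the track's volume down a touch.
setEffect(1, VOLUME, GAIN, -6)

finish()                    # render the project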

I liked using this method – and this framework – for creating compositions. I was able to do multiple runs and render multiple takes in WAV or MP3. You can even export your work as a project collection that can be opened in the Reaper desktop DAW. My only complaint was not being able to use more of Python’s power. I understand why: EarSketch is in a sense a “sandboxed solution” – it gives users tools to work with while holding back anything that could harm the application as a whole.

The script became musical crack for me after the first couple of renderings.

The core of it – the hardest piece to build – was the function for building the beat patterns. I wanted to use Euclidean rhythms, which more often than not are quite funky and represent much of the world’s grooves. There are a couple of algorithms for creating Euclidean rhythms, and I had a hard time finding a version in Python. The two algorithms used to make these rhythms come from E. Bjorklund and Jack Elton Bresenham. I reverse-engineered some ChucK code from the electro-music.com discussion forum that used the Bresenham algorithm. Here’s what my function looks like:

 

from random import randint   # for the random choice of fill character

def EuclideanGenerator(pulses, steps):
    # Euclidean rhythm generator based on Bresenham's algorithm.
    #   pulses - number of pulses (onsets)
    #   steps  - number of discrete timing intervals
    # Returns a beat-pattern string where
    #   0 marks the beginning of a sample,
    #   - marks silence, and
    #   + marks continued play of the sample.
    seq = ['0'] * steps
    error = 0
    breakorcontinue = ['+', '-'][randint(0, 1)]
    for i in range(steps):
        error = error + pulses
        if error > 0:
            seq[i] = '0'
            error = error - steps
        else:
            seq[i] = breakorcontinue
    return ''.join(seq)
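
Once the pattern string exists, putting it on a drum track is the easy part. Here’s a minimal sketch of the kind of call involved – the sound constant is a placeholder, and it assumes EarSketch’s makeBeat(), which reads the string sixteen steps to the measure, triggering the clip on 0, sustaining on +, and resting on -:

# Five pulses spread over sixteen 16th-note steps (one measure).
pattern = EuclideanGenerator(5, 16)

# Repeat the same pattern on track 2 for the first eight measures
# (SOME_DRUM_SAMPLE is a stand-in for a real sound constant).
for measure in range(1, 9):
    makeBeat(SOME_DRUM_SAMPLE, 2, measure, pattern)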

 

Hopefully, if anyone else wants to do this in Python, it won’t be so hard to get started. Other people have done this in Python, but I had a hard time finding links that were still active. As of this writing, I did find someone who posted some code using the Bjorklund algorithm.

It’s probably not perfect – but hey, it’s funky enough for government work 🙂

So, back to the musical crack…

The final script produces unique works every time you render it: four tracks of Euclidean rhythms, and four sections of what I call “soundscape clouds.” Each soundscape cloud consists of four tracks containing bits and pieces of samples scattered across a set number of measures using a Gaussian distribution. A couple of effects are applied to almost every track. There are a lot of random decisions made, but there’s enough method to the madness to keep it interesting.
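
To make that concrete, here’s a stripped-down sketch of the idea behind one cloud – not the actual project code. The helper name and clip constant are just for illustration, it assumes fitMedia() will accept fractional measure positions, and it leans on Python’s random.gauss() the same way the generator above leans on randint():

from earsketch import *
from random import gauss

def soundscapeCloud(clip, track, center, spread, pieces, length):
    # Scatter `pieces` short slices of `clip` onto `track`, centered on
    # measure `center` with a standard deviation of `spread` measures.
    for _ in range(pieces):
        start = max(1.0, gauss(center, spread))
        fitMedia(clip, track, start, start + length)

# One cloud: a dozen half-measure fragments clustered around measure 17
# (called from inside a normal EarSketch script, between init() and finish()).
soundscapeCloud(SOME_AMBIENT_CLIP, 5, center=17, spread=2.0, pieces=12, length=0.5)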

I won’t bore you with the details, but if you want to run the code in EarSketch, here it is.

I’ve posted fifteen “Rendings” to my SoundCloud page under a single playlist to give you an idea of the range of what this one program could generate:

[soundcloud url="https://api.soundcloud.com/playlists/59129470" params="auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true" width="100%" height="300" iframe="true" /]

Personally, I think it came out better than I’d hoped. I had a lot of ideas running around in my head, most of which were unworkable given the constraints of the EarSketch web interface and its architecture. In the end, after a lot of math and programming, I got something that works, works consistently, and works reasonably well.

The most important thing I got out of this exercise – and the course as a whole – is that algorithmic composition is not just about generating some “bloop-bleep” computer music. It’s about applying processes that can make composition faster, provide creative pressure, and open up new possibilities in creative expression.

There will be more to come…


Experimenting With Paulstretch and Singing Synthesis

I’ve decided to return to my “Fungi From Yuggoth” project – I want to finish it by the end of this month and put it on Bandcamp.

But before I get back to it, I’m taking a little detour with an experiment in time-stretching. I’ve taken the first line from T.S. Eliot’s “The Hollow Men”, and done a couple of things to it.

For starters, I’ve used the singing synthesis component of the Festival package to create a couple of vocal parts around the first line:

We are the hollow men, we are the stuffed men.
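
For the curious: Festival’s singing mode takes an XML score that assigns a pitch and a duration to each syllable, and text2wave renders it to audio. The snippet below is only a rough sketch of the idea, not my actual score – the notes, beat values, and file names are placeholders, and the exact markup can vary between Festival versions (see the singing-mode examples that ship with Festival):

import subprocess

# A placeholder score for the opening words.
score = """<?xml version="1.0"?>
<!DOCTYPE SINGING PUBLIC "-//SINGING//DTD SINGING mark up//EN" "Singing.v0_1.dtd" []>
<SINGING BPM="60">
<DURATION BEATS="1"><PITCH NOTE="D3">we</PITCH></DURATION>
<DURATION BEATS="1"><PITCH NOTE="D3">are</PITCH></DURATION>
<DURATION BEATS="1"><PITCH NOTE="F3">the</PITCH></DURATION>
<DURATION BEATS="2"><PITCH NOTE="A3">hol</PITCH></DURATION>
<DURATION BEATS="2"><PITCH NOTE="G3">low</PITCH></DURATION>
<DURATION BEATS="3"><PITCH NOTE="D3">men</PITCH></DURATION>
</SINGING>
"""

with open("hollow.xml", "w") as f:
    f.write(score)

# Render the part with Festival's text2wave front end in singing mode.
subprocess.call(["text2wave", "-mode", "singing", "hollow.xml", "-o", "hollow.wav"])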

I’ve decided to call these synthesized choral pieces The Bot Chorale – I’ve used them (with difficulty) in the past. I hope to use them more going forward.

These pieces – sliced, diced, time-stretched (with the Paulstretch algorithm), granulated, and mangled – came together in ways that I hope to use for “The Fungi From Yuggoth.”

And, yes, I will do “The Hollow Men” – eventually.

Consider this a teaser…

[soundcloud url="https://api.soundcloud.com/tracks/175582192" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]
