
Author: bryantohara

Taking Part in National Poetry Writing (and Generation) Month

This year I’m giving myself a double challenge: I’m taking part in both National Poetry Writing Month (aka NaPoWriMo) and National Poetry Generation Month (aka NaPoGenMo).

Basically I’ll spend April writing poems and code that generates poems.

Pieces that are particularly interesting will start getting posted here, and I’ll talk about the code I create as part of the challenge.


I’m Back

It’s been way too long since I’ve updated this site. With all the attention paid to social media, I’ve let the site wither.

I have ideas I want to go into great detail on, and I believe this is where I can best describe them.

So going forward, I’ll strive to post here first, and social media second.

Stay tuned.


Poem: The Garden of the Patrons

Took me a while, but I’ve created a little video of my poem, “The Garden of the Patrons,” which was published in Pandemic Atlanta 2020 magazine, an assortment of artwork, literature, poetry, and photography documenting the experiences of Atlanta-based artists during the COVID-19 pandemic.

Hope you enjoy.


Euclidean Rhythms, Python, and EarSketch

This is the second of two projects assigned as part of the online course “Survey of Music Technology”. The course is (was, really – it’s almost over) available on Coursera.org and is taught by Dr. Jason Freeman from the Georgia Institute of Technology, with assistance from TA Brad Short.

The course covered a lot of ground – if you’re curious about the syllabus or the project descriptions, check out the links.  Many students have been posting their projects to SoundCloud if you want to hear them:

[soundcloud url="https://api.soundcloud.com/groups/172269" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="450" iframe="true" /]

This project was all about algorithmic composition. We were to use EarSketch, a musical programming framework that is another creation from the folks at Georgia Tech. EarSketch is also a web-based Digital Audio Workstation (DAW): you write Python against the EarSketch API to build tracks and add effects. Apparently you can use JavaScript, too.
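
If you’ve never seen an EarSketch script, it’s just a short Python program. Here’s a minimal, hedged sketch of the general shape, as I remember the API – the clip constant, tempo, and effect settings are placeholders for illustration, not values from my actual project:

from earsketch import *

init()           # start a new EarSketch project
setTempo(120)    # project tempo in BPM

# Put a clip on track 1 from measure 1 to measure 5.
# HOUSE_BREAKBEAT_001 stands in for any clip in the sound library.
fitMedia(HOUSE_BREAKBEAT_001, 1, 1, 5)

# Add a little delay to track 1.
setEffect(1, DELAY, DELAY_TIME, 250)

finish()         # render the project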

I liked using this method – and this framework – for creating compositions. I was able to do multiple runs and render multiple takes in WAV or MP3. You can even export your work as a project collection that can be opened in the Reaper desktop DAW. My only complaint was not being able to use more of Python’s power. I understand why: EarSketch is in a sense a “sandboxed solution” – it gives users tools to work with but tries not to give users power that could harm the application as a whole.

The script became musical crack for me after the first couple of renderings.

The core of it – the hardest piece to build – was the function for building the beat patterns. I wanted to use Euclidean rhythms, which more often than not are quite funky and underpin much of the world’s grooves. There are a couple of algorithms for creating Euclidean rhythms – one from E. Bjorklund and one from Jack Elton Bresenham – and I had a hard time finding a Python version of either. I ended up reverse-engineering some ChucK code from the electro-music.com discussion forum that used the Bresenham algorithm. Here’s what my function looks like:

 

from random import randint

def EuclideanGenerator(pulses, steps):
    # Euclidean rhythm generator based on Bresenham's algorithm
    #
    # pulses - number of pulses (onsets)
    # steps  - number of discrete timing intervals
    #
    # Generates a beat string pattern where
    #   0 indicates the beginning of a sample,
    #   - indicates silence,
    #   + indicates continued play of the sample

    seq = ['0'] * steps
    error = 0
    breakorcontinue = ['+', '-'][randint(0, 1)]
    for i in range(steps):
        error = error + pulses
        if error > 0:
            seq[i] = '0'
            error = error - steps
        else:
            seq[i] = breakorcontinue

    return ''.join(seq)
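
In EarSketch, a string in that format can be handed straight to makeBeat(). A quick, hedged usage sketch, assuming the usual EarSketch setup is already in place – the clip constant, track, and measure are placeholders:

# Lay a generated pattern onto track 2 starting at measure 1.
pattern = EuclideanGenerator(5, 16)    # e.g. 5 pulses spread over 16 steps
makeBeat(HOUSE_BREAKBEAT_003, 2, 1, pattern)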

 

Hopefully, if anyone wants to do this in Python, it won’t be so hard to get started. Other people have done this in Python, but I had a hard time finding links that were still active. As of this writing, I did find someone who posted some code using the Bjorklund algorithm.

My function probably isn’t perfect – but hey, it’s funky enough for government work 🙂
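
For anyone who’d rather try the Bjorklund route mentioned above, here’s a hedged sketch adapted from the recursive implementation that gets passed around online – note that it returns a list of 1s and 0s rather than the 0/+/- string my function builds:

def bjorklund(pulses, steps):
    # Distribute `pulses` onsets as evenly as possible across `steps` slots.
    # Example: bjorklund(3, 8) -> [1, 0, 0, 1, 0, 0, 1, 0]
    if pulses <= 0 or pulses > steps:
        raise ValueError("need 0 < pulses <= steps")
    counts, remainders = [], [pulses]
    divisor = steps - pulses
    level = 0
    while True:
        counts.append(divisor // remainders[level])
        remainders.append(divisor % remainders[level])
        divisor = remainders[level]
        level += 1
        if remainders[level] <= 1:
            break
    counts.append(divisor)

    pattern = []
    def build(lvl):
        if lvl == -1:
            pattern.append(0)
        elif lvl == -2:
            pattern.append(1)
        else:
            for _ in range(counts[lvl]):
                build(lvl - 1)
            if remainders[lvl] != 0:
                build(lvl - 2)

    build(level)
    first_onset = pattern.index(1)   # rotate so the pattern starts on an onset
    return pattern[first_onset:] + pattern[:first_onset]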

So, back to the musical crack…

The final script produces unique works every time you render it: four tracks of Euclidean rhythms, and four sections of what I call “soundscape clouds.” Each soundscape cloud consists of four tracks containing bits and pieces of samples scattered across a set number of measures using a Gaussian distribution. A couple of effects are applied to almost every track. There are a lot of random decisions made, but there’s enough method to the madness to keep it interesting.

I won’t bore you with the details, but if you want to run the code in EarSketch, here it is.
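
To make the “soundscape cloud” idea a little more concrete, here’s a hedged sketch of just the scattering step, assuming the usual EarSketch setup (from earsketch import *, init(), setTempo(), and so on) is already in place – the clip constants, track numbers, measure range, and fragment count are placeholders, not the values from my script:

from random import gauss, choice

# Placeholder clip constants standing in for sounds from the EarSketch library.
CLOUD_CLIPS = [HOUSE_BREAKBEAT_001, HOUSE_BREAKBEAT_002, HOUSE_BREAKBEAT_004]

def soundscapeCloud(track, startMeasure, endMeasure, fragments=12):
    # Scatter short fragments of random clips across the measure range,
    # clustered around the middle via a Gaussian distribution.
    center = (startMeasure + endMeasure) / 2.0
    spread = (endMeasure - startMeasure) / 4.0
    for _ in range(fragments):
        where = gauss(center, spread)
        where = max(startMeasure, min(endMeasure - 0.25, where))   # clamp to the section
        fitMedia(choice(CLOUD_CLIPS), track, where, where + 0.25)  # quarter-measure fragment

# One cloud: four tracks spread over measures 9 through 17.
for track in range(1, 5):
    soundscapeCloud(track, 9, 17)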

I’ve posted fifteen “Rendings” to my SoundCloud page under a single playlist to give you an idea of the range of what this one program could generate:

[soundcloud url="https://api.soundcloud.com/playlists/59129470" params="auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true" width="100%" height="300" iframe="true" /]

Personally, I think it came out better than I’d hoped. I had a lot of ideas running around in my head, most of which were unworkable given the constraints of the EarSketch web interface and its architecture. In the end, after a lot of math and programming, I got something that works, works consistently, and works reasonably well.

The most important thing I got out of this exercise – and the course as a whole – is that algorithmic composition is not just about generating some “bloop-bleep” computer music. It’s about applying processes that can make composition faster, provide creative pressure, and open up new possibilities in creative expression.

There will be more to come…


Experimenting With Paulstretch and Singing Synthesis

I’ve decided to return to my “Fungi From Yuggoth” project – I want to finish it by the end of this month and put it on Bandcamp.

But before I get back to it, I’m taking a little detour with an experiment in time-stretching. I’ve taken the first line from T.S. Eliot’s “The Hollow Men”, and done a couple of things to it.

For starters, I’ve used the singing synthesis component of the Festival package to create a couple of vocal parts around the first line:

We are the hollow men, we are the stuffed men.

I’ve decided to call these synthesized choral pieces The Bot Chorale – I’ve used them (with difficulty) in the past. I hope to use them more going forward.

These pieces – sliced, diced, time-stretched (with the Paulstretch algorithm), granulated, and mangled – came together in ways that I hope to use for “The Fungi From Yuggoth.”
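
For the curious, Festival’s singing mode works from a small XML score of notes, durations, and syllables. Here’s a hedged sketch of the kind of thing I mean – the note values are made up, and the exact text2wave invocation and DTD reference may vary with your Festival install:

import subprocess

# A tiny, hypothetical score: two notes carrying "we are" from the opening line.
score = """<?xml version="1.0"?>
<!DOCTYPE SINGING PUBLIC "-//SINGING//DTD SINGING mark up//EN" "Singing.v0_1.dtd" []>
<SINGING BPM="60">
  <PITCH NOTE="A3"><DURATION BEATS="1">we</DURATION></PITCH>
  <PITCH NOTE="E3"><DURATION BEATS="1">are</DURATION></PITCH>
</SINGING>
"""

with open("hollow.xml", "w") as f:
    f.write(score)

# Render with text2wave in singing mode (assumes the singing-mode support
# files that ship with Festival are installed).
subprocess.call(["text2wave", "-mode", "singing", "hollow.xml", "-o", "hollow.wav"])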

And, yes, I will do “The Hollow Men” – eventually.

Consider this a teaser…

[soundcloud url="https://api.soundcloud.com/tracks/175582192" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]


Taking a Coursera Class: Survey of Music Technology

I’ve been taking this course now for about three weeks, and it’s already filled in a few gaps in my knowledge of audio processing. For instance, I was always fuzzy on the physics and psychoacoustics of sound – now I’ve got a “less clueless” level of knowledge.

I’m also less afraid of MIDI than I was before. Setting up connections to hardware and/or software was, and is, still a little bit of a pain, but it’s getting easier.

The course has also forced me to learn a new Digital Audio Workstation (DAW) – Reaper. I’m running it on a Windows machine because it’s easier to do it that way. For the sake of the class, I want to spend time learning the tool instead of how to make it work in my other environments. I’ll run it in Linux and Mac OS X eventually – just not yet. I’ve also got a Mac Mini my brother-in-law gave me, but I haven’t taken the time to get comfortable with that system.

My first assignment is due by the end of this weekend, and I’ve chosen to use this as an opportunity to revisit an old project. I’m going to do a new recording of “The Love Song of J. Alfred Prufrock” by T.S. Eliot. I really like this poem, and it was among the first pieces I uploaded to SoundCloud. Here’s the first version:

[soundcloud url="https://api.soundcloud.com/tracks/27485009" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]

This time, I feel I’m relying more on my own musical composition chops. Here’s how I’m starting:

[soundcloud url="https://api.soundcloud.com/tracks/173747350" params="color=ff5500&inverse=false&auto_play=false&show_user=true" width="100%" height="20" iframe="true" /]

The assignment constrains the length of the piece (60 to 120 seconds) so the reading will only cover the first section. I’ll post the completed assignment to this article once it’s done.

UPDATE: Here’s the completed assignment:

[soundcloud url="https://api.soundcloud.com/tracks/174026783" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]

I’m pleased with how it came out. Now to finish the rest of the poem…


Poetic Tech at the Decatur Book Festival: The “Bloop-Bleep” Stage

On Sunday, August 31, 2014, I did a presentation on the intersection of technology and poetry at art|DBF, an art-oriented segment of the Decatur Book Festival. The presentation was the culmination of several months of coding to develop a system that allowed a poet and an audience to create an interactive soundscape.

 

Why did I do this?

Most people, when they think of poetry, think of it as a fundamentally human, often life-affirming activity.

Most people, when they think of technology, think of it as an inhumane, if not inhuman, often soul-crushing process.

This is a false dichotomy, of course. Poetry and technology are both artifacts of what humans do. They are both profoundly human acts.

From the campfire to the cathedral, from the crystal AM radio to the liquid crystal display, our technology has affected what form poetry takes, who creates it, who listens to it, where it is experienced, and how it is distributed.

My intent was to build a demonstration of one possible way to enhance the experience of poetry for both poet and audience.

 

How did it work?

I built a web-based audio application that let smartphones control sounds.

The phones accessed a web server running on my laptop. The audience-facing pages let people read through the poems being performed and manipulate sounds using one of three instruments.

Audience UI

The audience accesses a website via smartphone. The site offers a view of the current poem, a dropdown selection of poems, and links to three musical interfaces, the first of which displays by default.

The three interfaces do the following things:

  • Instrument 1: a rain-stick-like sound with different effects based on moving a point within a small window;
  • Instrument 2: a set of four percussion pads;
  • Instrument 3: a text area that creates sounds for each word typed.

Poet UI

The interface for the poet has six options – unfortunately only four worked at the time of performance, and only three worked without issues.

  • Effect 1: pitch follower creating an audio effect a fifth above the detected frequency
  • Effect 2: pitch follower creating an audio effect a seventh above the detected frequency
  • Effect 3: Multicomb filter
  • Effect 4: Spectacle filter
  • Effect 5: Hypnodrone – a drone effect kicked off by detected amplitude
  • Effect 6: Stutter – a warbling bass line using a sine oscillator, originally intended to create a glitch effect


 

The speaker had a separate interface for adding vocal effects and a background beat.

The web pages sent messages to a set of ChucK scripts running on my laptop. The scripts generated the sounds, altered the vocals, and recorded the presentation.
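
I’ll spare you the plumbing, but the bridge boils down to turning incoming web requests into control messages for ChucK. Here’s a hedged sketch of that idea, assuming OSC as the transport – python-osc, the port, and the address pattern here are purely illustrative, not the details of my actual setup:

from pythonosc.udp_client import SimpleUDPClient

# Hypothetical bridge: forward an audience gesture to a ChucK patch
# listening for OSC on the same laptop (the port number is arbitrary).
chuck = SimpleUDPClient("127.0.0.1", 6449)

def on_instrument_gesture(instrument, x, y):
    # Called by the web app whenever a phone reports a gesture;
    # the OSC address pattern is made up for illustration.
    chuck.send_message("/audience/instrument/%d" % instrument, [float(x), float(y)])

on_instrument_gesture(1, 0.42, 0.87)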

 

How did it go?

The presentation itself was well-received. It was in the tent for Eyedrum, an Atlanta-based, non-profit organization developing contemporary art, music and new media in its gallery space.

I did my presentation outside with a set of powered PC speakers attached to the laptop. Later, I borrowed a PA and mixer from my friend and fellow poet Kevin Sipp. By the way, check out his debut graphic novel, The Amazing Adventures of David Walker Blackstone.

 


The laptop was attached to a wireless router that passersby could use to connect to the website. Everyone was able to connect and interact with the site. There were some glitches – which I’ll talk about later – but for the most part, people seemed intrigued by the possible uses of mobile and web technology for poetic performances.

A couple of components either did not perform as expected or did not work at all. Of the audience-specific pages, Instrument 3 either did not play or was too quiet to be heard over the ambient sounds of the festival. There were also some issues with switching between poems.

The poet-specific pages had issues with two of the six effects: “Multicomb” and “Spectacle”. The multicomb filter had a feedback problem and was too loud. The spectacle effect didn’t work at all. In addition, the audio started suffering from latency issues. The recording of the first twenty minutes of the presentation suffered from unintended glitching and was pretty much ruined. The recording of the last fifteen minutes (I stopped the recording to switch to Kevin’s PA setup) was a little better, but ran into the same issue not long after it started.

 

Conclusion

Overall, I think the presentation was well-received, and people were intrigued by what they heard. The issues with the setup became clear when I reviewed the recordings. There’s definitely room for improvement, and I will build upon this design for future performances.

So good, bad, or ugly, I’m posting both recordings (Part 1 and Part 2) and the code for all to see.

Despite the issues, I consider the project a success. This is a prototype, so I expected some problems. Luckily, none of the problems were catastrophic. There were lots of bloops and bleeps, but nothing went “boom”. It would only have been a failure if I had learned nothing from the experience.

Until next time, check out the code, play with it, and let me know if you use it or modify it.

 
