This is it! The first box of copies of The Ghettobirds, my first published poetry collection!
I’d like to announce that my science fiction poetry collection, “The Ghettobirds”, will be published by Frayed Edge Press in the spring of 2021. Please go to frayededgepress.com and sign up for the mailing list so you’ll be alerted when the book is available for discounted pre-sale pricing and when it’s officially published.
Hey there, folks! It’s been a while, but I’m restarting this website after being away doing…well, life-stuff.
I promise new content is coming soon – and more frequently.
I’ve decided to return to my “Fungi From Yuggoth” project – I want to finish it by the end of this month and put it on Bandcamp.
But before I get back to it, I’m taking a little detour with an experiment in time-stretching. I’ve taken the first line from T.S. Eliot’s “The Hollow Men”, and done a couple of things to it.
For starters, I’ve used the singing synthesis component of the Festival package to create a couple of vocal parts around the first line:
We are the hollow men, we are the stuffed men.
I’ve decided to call these synthesized choral pieces The Bot Chorale – I’ve used them (with difficulty) in the past. I hope to use them more going forward.
These pieces – sliced, diced, time-stretched (with the Paulstretch algorithm), granulated, and mangled – came together in ways that I hope to use for “The Fungi From Yuggoth.”
And, yes, I will do “The Hollow Men” – eventually.
Consider this a teaser…
[soundcloud url="https://api.soundcloud.com/tracks/175582192" params="color=00cc11&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]
I’ve put together a proof of concept for enhancing poetry with ChucK scripts. However, I soon realized that I wasn’t actually doing pitch-following; the code I put together was something called an “envelope follower”. I’ve uploaded the code to GitHub in case anyone wants to play around with it (you’ll need ChucK and the Audicle or miniAudicle IDE).
My physics is really rusty, so the best way I can explain it is that instead of checking the pitch of the voice to determine whether to kick off an effect, the script checks the *power* of the voice. I interpret this as more of a measurement of inflection or stress.
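To make that concrete, here’s a minimal sketch of an envelope follower in ChucK. It’s adapted from the standard Gain/OnePole follower pattern rather than lifted straight from my script, and the threshold value is just a placeholder you’d tune by hand:

```
// mic -> squared signal -> leaky integrator = a rough power estimate
adc => Gain g => OnePole p => blackhole;
adc => g;          // second connection: g now has two inputs...
3 => g.op;         // ...and op 3 tells it to multiply them (signal squared)
0.999 => p.pole;   // closer to 1.0 = smoother, slower-moving estimate

0.01 => float threshold;   // placeholder trigger level

while( true )
{
    if( p.last() > threshold )
    {
        <<< "power:", p.last() >>>;
        // this is where an effect would get kicked off
    }
    10::ms => now;   // poll the follower every 10 ms
}
```

The Gain multiplies the microphone signal by itself, and the OnePole smooths that squared signal into a slowly moving power estimate – which is what I’ve been loosely calling inflection or stress.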
Not exactly what I’d planned, but it’s in the right direction.
This first draft of the script taught me a few things about how to build ChucK scripts that would respond to vocal input. For starters, I now have a new dimension to the vocals that I can use to kick off effects. Currently the threshold used to determine when the effects start has to be manually adjusted, but that could be dynamically changed through some other criteria like external data feeds or input by other people.
I also found that I needed to have a means to stop as well as start effects. When I first put the code together without having a means to stop an effect, the result got noisier and louder until I manually stopped the program.
I also wanted to vary the duration of the effects, so I did the following: (1) I included a global class for setting tempo and note durations; then (2) I added an array of time durations and looped through them each time an effect got kicked off.
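Here’s a rough sketch of that structure. The class and variable names are mine rather than the ones in the actual script, and the envelope trigger is faked with a fixed delay:

```
// a global tempo class for setting note durations
class Tempo
{
    static float bpm;
    fun static dur quarter() { return (60.0 / bpm)::second; }
}
120.0 => Tempo.bpm;

// durations to cycle through, one per triggered effect
[ Tempo.quarter(), Tempo.quarter() * 2, Tempo.quarter() * 0.5 ] @=> dur lengths[];
0 => int i;

// stand-in for a real effect: make sound for a given duration, then stop
fun void triggerEffect( dur length )
{
    SinOsc s => dac;
    length => now;
    s =< dac;   // disconnecting is the "means to stop" mentioned above
}

while( true )
{
    spork ~ triggerEffect( lengths[i] );   // start an effect as its own shred
    (i + 1) % lengths.size() => i;         // advance through the duration array
    1::second => now;                      // stand-in for the envelope trigger
}
```

Sporking each effect as its own shred means a long effect can still be ringing while the next one starts.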
Most of the resulting code is cobbled together from existing code examples found on the internet. My coding philosophy for the most part is based on what I call “the thieving magpie”: find components that do what I want (or close to it), slap them together, then modify as needed until I get the desired result.
The poem I used for the demo is “The Seekim”, by Sidney H. Sime. It comes from the book “Bogey Beasts”, which is out of print and hard to find. Each poem in it was written and illustrated by Sime and set to a musical score by Joseph Holbrooke. I’ve never heard the music performed, but the book fascinated me. I’m still kicking myself for having sold it to a used book store almost twenty years ago.
[soundcloud url="https://api.soundcloud.com/tracks/157563219" params="auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true" width="100%" height="450" iframe="true" /]
So even though I didn’t exactly know what I was coding, I got some results I liked, and learned enough to start thinking about next steps.
I’ve got a presentation coming up in a month and some change (more on that in another post), and I want to blend spoken word poetry, music, and programming. Currently I’m thinking of using something called a “pitch follower” as the core program.
Based on what value(s) I look for, I want different inflections of my voice to trigger different processes – a number of possible uses are running through my head. Hopefully I’ll have a demo file ready some time within the next couple of days.
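In the meantime, here’s the kind of thing I mean by “pitch follower”, sketched in ChucK by tracking the loudest FFT bin of the mic input. Real pitch trackers are much smarter than this, so treat it as a crude stand-in:

```
// mic -> FFT, polled once per frame
adc => FFT fft => blackhole;
1024 => fft.size;
Windowing.hann( 1024 ) => fft.window;
fft.size() / 2 => int N;   // number of spectrum bins

while( true )
{
    fft.upchuck() @=> UAnaBlob blob;   // take the spectrum
    0 => int peak;
    for( 1 => int k; k < N; k++ )
        if( blob.fval(k) > blob.fval(peak) ) k => peak;

    // convert the loudest bin to a frequency in Hz
    peak * (second / samp) / 1024.0 => float freq;
    <<< "loudest frequency:", freq, "Hz" >>>;

    1024::samp => now;   // hop one frame forward
}
```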
On April 20, 2014, I held my first public performance using an instrument I built and programmed.
It was a gamepad-controlled musical instrument, combining off-the-shelf hardware and open source software.
The performance was held for Sunday Assembly Atlanta (SAA), a secular organization whose motto is: Live Better, Help Often, Wonder More.
(For the folks at SAA – I apologize for taking so long to write this, but here it is – along with the source code on GitHub – for any geeks in the crowd who want to play with it themselves.)
After finishing a critical phase of a work-related project, I now have time to write an overview of what I did, why I did it, how it turned out, and what I learned.
I started thinking about this project after attending the first two meetings of Sunday Assembly Atlanta. There were several parts of the meeting where we did karaoke, and I was wondering whether there was another form of musical interaction – perhaps even unique to the organization – that we could do as a body. That got me thinking about new ways of looking at how music could be made and how it could be experienced.
Thus began this experiment in collaborative music.
Over the course of a few months I built a “shared” instrument that used wireless PlayStation 3 controllers to build loops of user-selected samples. The audience would pass around the controllers to add and remove samples from musical tracks that had predefined beats and time signatures. The patterns they created would be projected on a screen as a sequencer-like display.
I know a lot of different programming languages, but I really didn’t have a good idea of how to create sounds (much less music) starting from raw code. I needed something that met the following criteria:
I narrowed it down to two languages: ChucK and Processing. I had recently learned ChucK through an instructor-led Coursera class, and by the end of it I found the language easy enough for the types of projects I was thinking about.
Processing I learned almost at the last minute because it had a framework for receiving Open Sound Control (OSC) signals. The language wasn’t hard to grasp, since I’d used something like it when playing with Arduino micro-controllers.
I used five PS3 DualShock Sixaxis controllers as the physical part of the instrument. I chose these controllers because (1) my son had one that I could experiment with; and (2) other than a Bluetooth adapter and USB cable, they don’t require any additional hardware to connect to the computer.
The sound generators are a collection of scripts written in ChucK. Some scripts assign a collection of samples to each gamepad that’s linked to the computer. Other scripts define a common time signature and set of note lengths.
The Processing sketch handles the visual display.
Each gamepad controls a track. A track has a collection of samples and a time signature assigned to it.
Once the sound generators start, they listen for the four buttons on the right side of the pad being pressed. Pressing those buttons assigns a sample to a beat that gets played during the next loop.
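To give a rough idea of how that works, here’s a stripped-down, single-gamepad sketch using ChucK’s Hid interface. The button mapping, beat length, and sample path are placeholders, not the values from the actual scripts:

```
// open the first connected gamepad
Hid hid;
HidMsg msg;
if( !hid.openJoystick( 0 ) ) me.exit();

8 => int BEATS;
int pattern[ BEATS ];   // one on/off slot per beat in the loop

// listener shred: button presses toggle beats (mapping is assumed)
fun void listen()
{
    while( true )
    {
        hid => now;   // wait for a gamepad event
        while( hid.recv( msg ) )
            if( msg.isButtonDown() && msg.which < BEATS )
                1 - pattern[msg.which] => pattern[msg.which];   // toggle
    }
}
spork ~ listen();

// playback loop: a toggled-on beat fires its sample on the next pass
SndBuf buf => dac;
me.dir() + "audio/snare.wav" => buf.read;   // placeholder sample path
while( true )
{
    for( 0 => int b; b < BEATS; b++ )
    {
        if( pattern[b] ) 0 => buf.pos;   // retrigger the sample
        0.25::second => now;             // fixed beat length (placeholder)
    }
}
```

The real instrument runs one of these tracks per gamepad, each with its own sample set and time signature.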
Here’s the high-level summary of what each of the scripts in the instrument does.
I’m running an Ubuntu Linux system, so my process for getting the PS3 controllers hooked up via Bluetooth will differ from other systems. Once you’ve got one set up, hooking up the others is easy – it’s the same process.
I hooked up five controllers for the Sunday Assembly event, and from what I’ve read in various places, you can hook up as many as seven. I didn’t have time (or money) to buy a bunch of PS3 controllers, so I asked folks to lend me any they could spare.
One thing to be aware of is that if you leave the controllers idle for too long, the Bluetooth connection drops – so keep an eye on their usage.
Despite a slight hiccup – and what public demo doesn’t have one? – we got the setup working so that people could play with it after the meeting. People seemed to find it interesting, and I got a lot of questions about how I built it. I also got suggestions from the audience and from people outside of Sunday Assembly for new rules, new samples/instruments, and new interfaces.
Here’s the raw audio (converted to MP3 format):
[soundcloud url="https://api.soundcloud.com/tracks/150582520" params="color=00cc11&auto_play=false&hide_related=false&show_artwork=true" width="100%" height="166" iframe="true" /]
It’s a long file – I kept it running while people were eating brunch after the meeting.
I really appreciate all the feedback I’ve gotten – it will definitely influence not only the next iteration of this project, but future projects as well.
I’ve uploaded the source code to GitHub for those who want to play with the code.
Have fun with it, and let me know if you get anything useful out of it.
I had never heard the phrase, “You get what you get and you don’t get upset” until I was listening to a lecture on poetry on CDs with my son Jack. It made me think about this project of mine – creating an audio book of H.P. Lovecraft’s Fungi from Yuggoth. Not only was I recording my spoken version of it, but I was adding original soundtracks. And to put the cherry on this Geek Sundae, I was going to write code that would “render” the music for me.
The task was – and still is – daunting, and I’m uneasy about how it’s coming out. I can tell right now that more than half of this project will prove very difficult for a lot of people to listen to.
But you know what? To hell with it – this is fun for me…
My criterion for success is pretty simple: the project will be complete when all 36 poems are posted on Bandcamp.
The project consists of two versions of each poem – “compressed” and “uncompressed”. More on that later…
The music is created as the poem is typed. Each key pressed creates a note with a duration. Vowel keys and the space bar kick off samples or percussion instruments.
I’m using a programming language called ChucK for creating the music. I discovered the language while browsing for online classes at Coursera. The site had a class called “Introduction to Programming for Musicians and Digital Artists”. If you’re interested in using programming to create music, I recommend this course – it’s well-organized and you learn something regardless of whether you start as a coder or a musician.
To use this language, you’ll need to install ChucK and its development environment, miniAudicle.
You can get them both here. I’m not going to get into the installation process – the ChucK website has a page devoted to that.
I use five scripts to create the music:
There is also a folder called “audio” containing all the audio samples used by the scripts.
Each of these scripts was based on either the examples used in the Coursera class or examples on the ChucK website.
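To give a feel for the core idea without wading through all five scripts, here’s a condensed sketch of the typing-to-music scheme described above: each keystroke sets a pitch, and vowels or the space bar retrigger a sample. The key-to-pitch mapping, the fixed note length, and the sample path are stand-ins, not the actual values:

```
// returns true if the character is a vowel or the space bar
fun int isTrigger( int c )
{
    "aeiou " => string t;
    for( 0 => int i; i < t.length(); i++ )
        if( t.charAt( i ) == c ) return 1;
    return 0;
}

KBHit kb;            // keyboard input from the terminal
SinOsc s => dac;     // one voice for the note-per-key part
0.2 => s.gain;
SndBuf hit => dac;   // sample fired by vowels and the space bar
me.dir() + "audio/kick.wav" => hit.read;   // placeholder sample path

while( true )
{
    kb => now;   // wait for a keystroke
    while( kb.more() )
    {
        kb.getchar() => int c;
        Std.mtof( 40 + (c % 48) ) => s.freq;   // map the char code to a pitch
        if( isTrigger( c ) ) 0 => hit.pos;     // vowels/space fire the sample
        100::ms => now;   // fixed note length (the real one varies per key)
    }
}
```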
I’m making the files I used to create the music available as a zip file on my Google Drive, so feel free to play with them and create your own pieces.
Here’s an example of what a “rendered” composition sounds like:
I’ve taken these initial renderings and done additional processing in Audacity.
Here are some examples of a “compressed” and “uncompressed” version of the poem, “Night-Gaunts”:
You may have noticed that the uncompressed version is significantly longer than the compressed version. I was initially at a loss for how best to present the poems. I didn’t want to use the rendered music solely as raw material – the rendering is the actual text of the poem, just transformed into sound. Each rendering is a tone poem in a very literal sense.
That still doesn’t make it any easier to listen to, which is why I’m adding a heavily processed version of the vocal track to the uncompressed pieces. As I progress in the project, I’ll think about what else, if anything, to add.
Stay tuned for more updates on the project!
Hello, and welcome to Intimate and Intricate. I am your host, Bryant O’Hara.
This is a blog that will feature my thoughts about – and experiments in – science fiction poetry, music, programming, making, and the intersections of these subjects.
Currently I have a SoundCloud page where I post my music and poetry, and this blog will serve – at least initially – as a “behind the scenes” look at what goes into the making of the pieces. Going forward I hope to start describing what I’m doing as I do it.
I look forward to all feedback.
Hope you enjoy!