The audio comprises two realizations of “These are They”: one made locally by Bhob Rainey and one externally by Eric Laska. Rainey’s realization includes material contributions from Chris Cooper, Ernst Karel, Leila Bordreuil, James Ilgenfritz, Kate Czajkowski, Matt Mitchell, Andie Springer, Carrie Frey, Vasko Dukovski, MinKyung Ji, Gibi ASMR, GentleWhispering, Creative Calm, and Unintentional ASMR. Both realizations were produced autonomously by the software.

EL: You describe “These are They” as an algorithmic “machine.” Did you conceive the project as a way for other artists to re-arrange and manipulate their original audio?

BR: First, I should mention “machine”. The README file where that description appears isn’t exactly a paragon of precise terminology, but I was somewhat deliberate in choosing “machine”, for two main reasons:

1. It is metaphorically sound to the point of being on-the-nose. The piece is structured around different sources of “energy” that are gathered by a few loosely coupled “storage centers” that eventually release a transformed version of that energy when certain thresholds are exceeded. It’s an amateur steam engine or maybe a slightly dangerous circuit (a rough sketch of this mechanism follows the list).

2. There is currently a rush to label any software that does minorly complex things quickly and without assistance as AI. Aside from this being an annoying tic of contemporary society, it is also completely false regarding this piece, which – intentionally – is not “intelligent” and does not “learn”. So, just “machine”, no “learning”.
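To make the gather-and-release idea in (1) slightly more concrete, here is a minimal, purely hypothetical SuperCollider sketch – not code from “These are They” – of an accumulator that collects small amounts of “energy” and releases a transformed burst once a threshold is crossed:

    (
    var threshold = 1.0, store = 0;
    // hypothetical illustration only: gather energy in small increments,
    // release something once the storage center exceeds its threshold
    r = Routine({
        loop {
            store = store + rrand(0.05, 0.2);          // gather energy from some source
            if (store > threshold) {
                "release: % units transformed".format(store.round(0.01)).postln;
                store = 0;                             // the storage center empties
            };
            0.25.wait;                                 // check again shortly
        }
    }).play;
    )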

I’m fascinated by many aspects of machine learning (the algorithms and math involved are often strikingly elegant), but it is still largely a practice of statistical imitation. If you want it to produce an unforeseeable aesthetic outcome (without being too fast and loose with the meaning of “aesthetic”), you have to either deliberately introduce some kind of noise or design your experiment badly (and get really lucky). People who are doing the hard work in this area will justifiably have a bone to pick with my terse critique, but, ultimately, it still requires a lot of data and a lot of real, heat-releasing energy to get any kind of aesthetically surprising output from machine learning at this time. As someone who is primarily interested in the aesthetic process and not in furthering the field of artificial intelligence, I consider this energy wasted.

So, lately, rather than making programs that are intelligent, I’ve been focusing on programs that are “mysterious”, “inscrutable”, “other”. While these programs, once executed, usually operate without assistance, they still require some form of human interaction to achieve resonance, the intention being that there is a productive encounter between the person(s) and the “other”.

Which brings me to your actual question – did I conceive this as a way for other artists to manipulate, etc., their original audio? Ultimately, no. I conceived this as an encounter that produces questions and problems analogous to the interpretation of traditional scores but in a contemporary technical environment that draws on different skill sets. It is ultimately a distinct piece that will always have a particular, if aleatoric, form, where a person can have dramatic effects on its outcome by developing intuitions regarding how the piece’s rigid and flexible components interact. It is a piece that can be learned and interpreted.

Typically, when an audio program isn’t a fixed or entirely self-generating piece, it is either a compositional tool (lloop, UPIC, Ableton, etc.), an instrument (virtual or augmented), or an electronic partner / accompanist (Voyager, ImproteK). While these all introduce “opinions” that affect the outcome of the music produced with them, they largely function more like “environments” than compositions requiring unique realization. I greatly appreciate that the best of these environments offer a well-tuned set of options and limitations, but I also long for the more specific challenges of interpreting and executing compositions. So, how might a piece of software idiomatically raise those challenges without merely generating something that is, more or less, a score? Here, I’ve tried tapping a relatively common contemporary skill set – “production”, or, more generally, creating and organizing sound files to mesh with and enhance a given context – and placing it in an ornery and sometimes adversarial situation. It becomes a bit of a puzzle, and to fully realize this piece, you have to get a feel for its transformation over time, and to get that feel, you have to feed the algorithm sounds that “work”, and to do that, you need a combination of intuition, analytical insight, and luck, as well as a feel for the transformation over time, etc. To me, this looks a lot like old-school music learning, and it’s kinda funny that the medium usually used to make things easier is deployed here to keep them difficult in a slightly altered way.

EL: The SuperCollider program includes five subfolders for audio that represent different dynamic components of the playback. Though there are quite a few variables to consider – length of the audio files, number of audio files, type of audio material, etc. – the resulting material is structured in a particular way. What was your process regarding the structure of the program?

BR: The core components for me were the “talkers” and the synthetic events that interrupt them. I was riffing on the attention strain of 24/7 information and how existing and aspiring autocrats use it to their advantage. I sympathize with the person who combats the exhaustion of staring at a screen – one that indiscriminately shifts from spreadsheets to social media stats to news of tragedy and outrage to friendly if ill-timed messages to “stand up and breathe deeply” notifications that pop up while the Arctic melts – by listening to a Spotify “Chill Out” playlist. But I thought that I’d take more of a “ride the wave” approach to not being decimated by the assault of undifferentiated garbage that rather suddenly became the norm for our first-world waking hours. This was not an effort to have Art show us Reality as it Really Is, but to find some kind of awareness or even joy that encompasses the mess without minimizing it.

These sorts of ideas are always on the verge of becoming painfully trite. So, I listened to the “talkers” and their rude synthetic complements to see if they had any wise “requests”. I think that this is a pretty standard way of making music, even if talking about it usually gets sloppy – someone lays down a groove and then wonders, What next? To find out, they essentially ask the groove. Anyway, the material here wanted to be blurred in some places, sharpened in others, and it didn’t want to be confined to my laptop-smartphone anxieties. It also didn’t want to get boring after two minutes. My own intentions, which were to make this a type of algorithmic composition like I just described, complicated things. So, if I felt that I needed a blurry contrast, I had to think about it in a general way that could be expressed as a process and maybe, at most, a category of sound. And I had to think about how another person could navigate through all of this to make their own realization of the piece. I wanted their experience to be challenging but doable.

Five (folders, classes of sounds, types of transformations) emerged as a reliable number both for fun / horrific / perplexing musical outcomes and for keeping someone engaged in the process of realizing the piece. Each class engenders a different kind of focus, so, when you’ve tweaked “chips” so many times that you don’t know right from wrong, you can switch to “ludes” or “sonks” and refresh your approach.
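For readers unfamiliar with this kind of setup, the following is a speculative SuperCollider fragment showing how audio files sorted into named subfolders might be gathered into buffers. Only “chips”, “ludes”, and “sonks” are named above; the root path and everything else here is invented for illustration and is not the piece’s actual code:

    (
    // hypothetical folder layout; assumes the server (s) has been booted
    var root = "~/TheseAreThey/audio".standardizePath;
    ~classes = Dictionary.new;
    ["chips", "ludes", "sonks"].do { |name|
        var files = (root +/+ name +/+ "*.wav").pathMatch;   // all wav files in the subfolder
        ~classes[name] = files.collect { |path| Buffer.read(s, path) };
        "% file(s) found in %".format(files.size, name).postln;
    };
    )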

EL: “These are They” is a work in progress. Aside from minor tweaks to the SC code, what do you envision may change?

BR: Compositionally, the main change I want to make is allowing files within the audio folders to be organized into subsets, so that the piece proceeds from subset to subset after each “lude”. This would allow an individual to plan for a longer compositional arc, and it would also allow for multiple realizations to be organized into a kind of generative compilation.
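As a very rough idea of what that could look like – purely speculative, since this feature does not exist yet – a folder’s file list could be grouped into subsets and stepped through in SuperCollider along these lines:

    (
    // speculative sketch of the planned subset behavior, not existing functionality
    ~chipFiles = (1..12).collect { |i| "chip_%.wav".format(i) };   // placeholder file names
    ~subsets = ~chipFiles.clump(4);      // four files per subset; the grouping size is arbitrary
    ~index = 0;
    ~nextSubset = {
        var current = ~subsets.wrapAt(~index);
        ~index = ~index + 1;
        current;
    };
    ~nextSubset.value.postln;            // each "lude" would advance the piece to the next subset
    )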

The code itself is relatively well-organized, but I would like to give it a once-over for clarity and a second once-over for poetry, because, why not let the code itself be part of the expressive universe of the piece?

Installation and usage could be more user-friendly. There are plenty of people who could do amazing work on this piece but who freeze up when confronted with GitHub and plugin installation and folder structure and a “piece” that looks like an inert slab of characters with no “On” button. I could at least make it so that changing the audio folder and then re-executing the code is a simpler-looking process. At the same time, none of these things are particularly hard. Most of it comes down to careful reading of easily accessible instructions, and I’m a little dismayed at how quickly the majority of people who use computers and software every day are reduced to utter helplessness when something is not a two-click process.

That said, I do need to make the notes for the score (largely, the role each folder plays) more score-like. Currently, I’m talking directly with whoever is using the program (to my knowledge). I’m very happy to do that, and it helps me understand better what information gets lost or botched in communication. Eventually, though, I should find language that is precise yet open enough for people to proceed with a pleasurable ambiguity.
