In my last two articles, Transplatforming and Generative Music Culture – Then, Now, & Why, I provided a broad overview of generative music: its history, its social complexities, and the reasons it has become so exciting in 2016. Now it’s time to feature some of the tools music makers can incorporate into their workflow to start generating music and, as Brian Eno famously said, “become an audience to themselves.” Even in the few days between that article and this one, Google announced a new project at this year’s Moogfest called Magenta, which will be part of the DeepDream group, described on the Google Research Blog:
“The question Magenta asks is, ‘Can machines make music and art? If so, how? If not, why not?’ The goal of Magenta is to produce open-source tools and models that help creative people be even more creative.”
Google’s ambitious project is of the highest generative order: the machine is unleashed to generate of its own accord. Still, its stated goal of helping creative people be more creative is the purview that applies here. Within this purview, of course, there is already an ever-growing set of tools with generative qualities. I say qualities because there is always room for discussion as to what constitutes generative music. For me, an app is nominally generative if it uses algorithms somewhere under the hood. But, like Google, I broaden that definition a bit to include tools that let creatives make sounds they couldn’t reverse engineer. By this I mean: you may use a preset, a sound file, or synthesize from the ground up – a generative tool takes your creation and runs with it, to a sonic place you couldn’t get to on your own.
Within this framework I’ve selected three generative tools that fall variously along this spectrum, and I spoke with their creators about their ideas of generative music and how those ideas shaped their applications.
An excellent example to start with is Polyphylla. Released in late 2015, it’s developed by Berlin-based Mei-Fang Liau with Melllisonic. Polyphylla is a fractal-based additive synthesizer, and these fractals, operating beneath the user interface, are its essence. While researching Polyphylla I was reading broadly about fractal-based algorithms and found the above picture of the rare and aptly titled Aloe Polyphylla, which bears a strong resemblance to Liau’s design; she confirmed it as an inspiration for Polyphylla. “Some of the algorithms,” Liau notes, “share conceptual traits with the cactus.” This desire to model the natural world seems more important than ever to artistic and computational creation. Now, if you’ve never attempted additive synthesis, it’s very challenging. Polyphylla removes those challenges with its simple interface – simple, yes, but within minutes of opening it you’ll be creating unique and fascinating sounds. With its motion-based interface you can watch your sounds modulating as you adjust parameters. This real-time visual feedback is helpful, but also mesmerizing and beautiful.
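Polyphylla’s actual algorithms aren’t public, but the underlying technique – additive synthesis, where a tone is built by summing sine partials – can be sketched in a few lines. Here the partial amplitudes follow a simple self-similar rule (each recursion layer overlays a scaled copy of the same decay curve), purely as an illustrative stand-in for whatever fractal math Liau uses:

```python
import math

SAMPLE_RATE = 44100

def fractal_partial_amps(n_partials, ratio=0.5, depth=3):
    """Illustrative self-similar amplitude curve: each recursion layer
    overlays a scaled copy of the same 1/n decay onto the partials.
    (Invented for this sketch; not Polyphylla's actual algorithm.)"""
    amps = [0.0] * n_partials
    scale = 1.0
    for _ in range(depth):
        for i in range(n_partials):
            amps[i] += scale / (i + 1)     # harmonic-style 1/n decay
        scale *= ratio                     # each layer contributes less
    peak = max(amps)
    return [a / peak for a in amps]        # normalize to peak 1.0

def additive_tone(freq, seconds, n_partials=8):
    """Sum sine partials at integer multiples of the fundamental."""
    amps = fractal_partial_amps(n_partials)
    n_samples = int(SAMPLE_RATE * seconds)
    out = []
    for t in range(n_samples):
        time = t / SAMPLE_RATE
        sample = sum(a * math.sin(2 * math.pi * freq * (k + 1) * time)
                     for k, a in enumerate(amps))
        out.append(sample / n_partials)    # crude headroom scaling
    return out

tone = additive_tone(220.0, 0.1)           # 100 ms at A3
```

The point is only the shape of the idea: a tiny recursive rule decides the spectrum, and the user hears the result without ever touching individual partials – which is exactly the chore Polyphylla’s interface hides.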
Generally, I would describe Polyphylla’s sounds as deep, dark, and lush, but I’ll qualify that by saying you can synthesize drum sounds and everything in between. It comes with a host of presets as well, and Liau has even enlisted highly regarded sound shapers like Christian Kleine, developer of OSCiLLOT – the complete Modular System for Ableton Live – to develop Polyphylla sounds. All these sounds and presets are available for inspiration and download (see below). To see Polyphylla at work, check out the second demo video. And as a last example, I created a pack of five preset sounds – these took less than an hour in total to develop – a testament to Polyphylla’s excellence as a highly accessible, generative, and dynamic sound-shaper. You can pick up Polyphylla at the Ableton website.
Next up is Ideas, created by Nick Dwyer, aka Flintpope. Also released in 2015, Ideas is significantly different from Polyphylla – less about algorithms, but still generative in nature. Dwyer describes it as such:
“Thinking of randomising sounds in an Eno-esque way I put four samples into a rack (naming them as vaguely as possible: After / Before / During / Earlier) and used Max’s brilliant Device Randomizer on three of them. Put them in a rack to make a combination instrument that sweeps from an upfront keyboard sound to a distant trippy pad.”
Dropping Ideas into a rack, you’re met with eight parameters to start moving about and creating sounds; behind the scenes, the mappings are at work. As you see below, the knobs After, Before, During, and Earlier greet you. As I noted in my earlier article on the then and now of generative music, the notion of time seems always curiously tied to generative studies. Sonic creations come from the natural world, like Polyphylla, along with our reception of it, our interpretation, our accumulated selves, and our output. All these streams, past and present, come to bear on musical creation, informing and affecting us – as our ideas. Generative music, like our thoughts, is asynchronous – out of time – just as the arrangement of After, Before, During, and Earlier is within Ideas. Below are some sounds created with Ideas.
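Ideas itself is a Max for Live rack, not something you script against, but the randomise-and-listen gesture Dwyer describes can be sketched in plain Python: a handful of macro values drift in a seeded random walk, producing parameter snapshots you could audition one after another. The knob names beyond Dwyer’s four are placeholders, and the 0–127 range is just a MIDI-style convention assumed for the sketch:

```python
import random

MACROS = ["After", "Before", "During", "Earlier",
          "Macro5", "Macro6", "Macro7", "Macro8"]   # last four are placeholder names

def drift(values, step=5, rng=random):
    """One randomisation pass: nudge each macro by up to +/-step,
    clamped to a MIDI-style 0-127 range."""
    return [min(127, max(0, v + rng.randint(-step, step))) for v in values]

rng = random.Random(42)                  # fixed seed -> repeatable experiments
snapshot = [64] * len(MACROS)            # start every knob at noon
history = [snapshot]
for _ in range(10):                      # ten audition passes
    snapshot = drift(snapshot, rng=rng)
    history.append(snapshot)
```

The design choice worth noting is the small step size: rather than jumping to a wholly random patch each time, the settings wander, so each audition is a near neighbour of the last – much closer to the slow sweep “from an upfront keyboard sound to a distant trippy pad” that Dwyer describes.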
One of the nicest features of Ideas: it’s FREE! Alongside Ideas there is a treasure trove of free creation tools at Flintpope’s website. Evocatively named tools like TensionCollision, Spotfield, and Analogika are there for your sonic experiments. Often I find that free isn’t really free, or isn’t really great; these are authentically both. Of course, you can probably buy Nick a beer – I’m sure he wouldn’t mind! Go grab them at his website. As Dwyer noted to me, “change the time signatures, move the key around and see what clashes, what harmonises, and what sounds unexpectedly interesting – get the feeling of an orchestra playing in your own studio!”
COORD – by Jay Hardesty
From the fractal-based algorithms of Polyphylla to the Eno-esque asynchronous creations of Ideas, we now come to another fascinating generative concept: COORD, by Jay Hardesty. COORD is unique in relation to our discussion above, both in theory and in that it’s still under development – so this is an exciting preview! Hardesty describes COORD as both generative and adaptive:
“Coord is generative because a tiny set of rules governing note relationships is recursively applied to create note patterns in bottom-up fashion. It is therefore not tied to specific source material and strategies, unlike most adaptive music, which uses prefab musical elements and/or explicit top-down compositional rules.
Coord is adaptive because its generative approach can be biased to regenerate desired rhythms and melodies. The results are human-sounding variations, unlike most generative approaches, which only create music that sounds “generative” because there is no means of steering the output toward and between human-composed inputs.”
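Hardesty hasn’t published COORD’s rules in full, but “a tiny set of rules governing note relationships … recursively applied to create note patterns in bottom-up fashion” can be illustrated with a rewrite system in the L-system family. The two rules below are invented for this sketch, not COORD’s actual rules; the point is how little machinery is needed for self-similar rhythmic structure to emerge:

```python
# Illustrative only: a tiny rewrite system applied recursively,
# in the spirit of COORD's "small set of rules, bottom-up".
# The rules themselves are invented for this sketch.
RULES = {
    "x": ["x", "."],      # a hit spawns a hit plus a rest
    ".": ["x"],           # a rest flips into a hit
}

def generate(seed, depth):
    """Recursively rewrite each symbol until depth is exhausted."""
    if depth == 0:
        return list(seed)
    out = []
    for sym in seed:
        out.extend(generate(RULES[sym], depth - 1))
    return out

pattern = generate(["x"], 4)          # grow a rhythm from a single hit
rhythm = "".join(pattern)             # "x" = hit, "." = rest
```

Four rewrites of a single hit already yield an eight-step pattern with a lopsided, human-feeling groove, and successive depths grow in Fibonacci-like lengths (2, 3, 5, 8) – a hint at why such bottom-up systems produce structure rather than noise.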
One compositional strategy often given to newer composers is to listen to music you like and try to duplicate it. In Ableton Live, this means loading up a track you like, studying all of its compositional parts, and using them as a framework for your own creations. COORD does somewhat the same, allowing you to change one piece of music by using another as a specification, as Hardesty explains:
“Two sorts of ecosystems result from Coord’s approach. The first is an ecosystem of musical material, where new variations are created based on the selection of inputs. Preferred musical inputs will have more musical offspring over time, and musical family-trees arise that trace the influences reflected in various parts of a particular piece. (There are already recording artists releasing “albums” that take the form of apps with variable playback, but so far these have mostly been algorithmic remixes – not the exposure and exercise of musical influences in unpredictable combinations of unforeseen music inputs.)
The second ecosystem is one of software components. A content-creation system enables production of music with embedded “moving parts” that are open to influences from each other or from external sources. Currently this takes the form of a desktop app that controls parts within Ableton Live:
“Pieces created in such a form are ready for adaptive playback by other software, such as location-based apps, data auralization, fitness apps, mobile entertainment, game engines. In each case the musical output is affected by user actions or parameters that control the relative influence of musical inputs as the larger piece unfolds in time.
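The adaptive half of Hardesty’s description – generation “biased to regenerate desired rhythms” – can be sketched as a selection step: generate candidate patterns, then keep whichever lies closest to a target piece used as the specification. The Hamming-distance scoring and the step-pattern notation here are assumptions for illustration, not COORD’s method:

```python
def hamming(a, b):
    """Mismatch count between two equal-length step patterns."""
    return sum(x != y for x, y in zip(a, b))

def bias_toward(candidates, target):
    """Pick the generated pattern closest to the target specification."""
    return min(candidates, key=lambda p: hamming(p, target))

# Hypothetical generated candidates ("x" = hit, "." = rest)
candidates = ["x.x.x.x.", "xx..xx..", "x.xx.x.x", "....xxxx"]
target = "x.xx.xxx"                      # the piece used as specification
best = bias_toward(candidates, target)
```

However the real system works, this captures the contrast Hardesty draws: the output is steered toward a human-composed input rather than wandering wherever the generator happens to go.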
Check out the video below of COORD in action within Ableton Live – it’s fascinating to see how you can morph between sounds. Be one of the first to see it!
You can learn more about Hardesty’s work at Coord.fm. And if you’d like a deep dive into the math and algorithmic background of COORD, his recently published article, “A self-similar map of rhythmic components,” in the Journal of Mathematics and Music is here.
So we’ve seen how issues of time, the natural world, and self-directed composition all have generative connections and inspirations. We continue trying to musically recreate these ecosystems while maintaining creativity and individual style. For music producers and listeners alike, generative methods are rewarding and endlessly variable – modeled after the human experience.