The term “generative music” is attributable to Brian Eno, who in the liner notes to his 1975 album Discreet Music wrote:
“Since I have always preferred making plans to executing them, I have gravitated towards situations and systems that, once set into operation, could create music with little or no intervention on my part. That is to say, I tend towards the roles of planner and programmer, and then become an audience to the results.”
Eno’s words seem more prescient than ever in the current music-production landscape, and for the creative artistry that lends such a strong voice to cultural transmission as much as to sound and technology. At Moogfest this year, Gary Numan was named recipient of the 2016 Moog Innovation Award, as Eno was in 2011. The festival described Numan and electronic culture this way:
“Gary Numan is a manifestation of electronic culture’s progressive nature to explore the limits of traditional sound and develop new mechanisms for expression.”
This progressive electronic music culture, pioneered by Eno, Numan, and others in the 1970s, has now, 40 years on, joined a highly charged “post” or “trans” movement that spans silos from computer programming to identity politics, platforms, and various media transmitting bespoke, modular, and agonistic experiences and forms of expression.
The unleashing of Google’s DeepDream to the web in the summer of 2015, along with an aptly titled post on the Google Research Blog, “Inceptionism: Going Deeper into Neural Networks,” seemed to instantiate a generative modality across cultures and platforms. The article explains that by applying the algorithm iteratively to its own outputs, and applying some zooming after each iteration, we get an endless stream of new impressions, exploring and refining the set of things the network knows about. We can even start this process from a random-noise image, so that the result becomes purely the product of the neural network, as seen in the following images:
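The iterate-and-zoom feedback loop the post describes can be sketched in a few lines. The sketch below is not DeepDream itself: the `enhance` function is a hypothetical stand-in for the network’s gradient-ascent step, and the zoom is a simple center-crop-and-resample. The point is the loop structure, in which each output is fed back in as the next input, starting from pure noise.

```python
import numpy as np

def enhance(img):
    """Stand-in for DeepDream's gradient-ascent step, which nudges the
    image toward patterns the network 'sees'. Here we merely boost
    contrast slightly as a placeholder."""
    return np.clip(img + 0.1 * (img - img.mean()), 0.0, 1.0)

def zoom(img, factor=1.1):
    """Crop the center and resample back to the original size
    (nearest-neighbor), approximating the per-iteration zoom."""
    h, w = img.shape
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    rows = np.arange(h) * ch // h
    cols = np.arange(w) * cw // w
    return crop[np.ix_(rows, cols)]

# Start from random noise, as the blog post describes, and iterate:
# enhance, then zoom, feeding each output back in as the next input.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
for _ in range(10):
    img = zoom(enhance(img))
print(img.shape)  # the stream of "new impressions" keeps the same size
```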
When I first saw these images generating live, it was as if another field of consciousness were out there, comparable to my own, both childlike and terrifyingly sophisticated. It became clear to me for the first time that the science fiction of artificial intelligence was becoming scientific fact. And significantly, the Google blog post ends with a call to artists:
“It also makes us wonder whether neural networks could become a tool for artists – a new way to remix visual concepts – or perhaps even shed a little light on the roots of the creative process in general.”
Musically oriented artists, too, can begin with just noise, form a sine wave, and then become an audience to its output, or remix an experience live. Generative music has an algorithmic nature, in which some input-output schema is acted on by a precise set of rules defining a sequence of operations. Note here the difference between algorithm and algorithmic: the latter takes in a broader swath of input-output schemas than an algorithm, which is defined solely by mathematical syntax. While a generative algorithm is likely generative due to some programmatic, mathematical instantiation of randomness, an algorithmic schema can bring randomness to bear on the input-output mechanism from other sources: biological, environmental, or otherwise external to its input-output medium.
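To make that distinction concrete, here is a minimal generative sketch: a precise rule set (a step-wise random walk over a scale, rendered as raw sine waves) acting on a seeded random source. The scale frequencies are my own rough approximations for illustration, not anything from a particular tool.

```python
import math
import random

SAMPLE_RATE = 44100
# Approximate A-minor pentatonic frequencies in Hz (illustrative values)
PENTATONIC = [220.0, 247.5, 277.2, 330.0, 371.25]

def sine_note(freq, dur=0.25):
    """Render one note as a raw sine wave (a list of float samples)."""
    n = int(SAMPLE_RATE * dur)
    return [math.sin(2 * math.pi * freq * t / SAMPLE_RATE) for t in range(n)]

def generate(n_notes=8, seed=42):
    """A precise rule set acting on a random source: a seeded random
    walk over the scale, stepping one degree up or down per note."""
    rng = random.Random(seed)
    idx, samples = 0, []
    for _ in range(n_notes):
        idx = max(0, min(len(PENTATONIC) - 1, idx + rng.choice([-1, 1])))
        samples.extend(sine_note(PENTATONIC[idx]))
    return samples

audio = generate()
print(len(audio))  # 8 notes x 0.25 s x 44100 Hz = 88200 samples
```

Swapping the seeded `random.Random` for a biological or environmental signal (a sensor reading, a heartbeat) would turn the same rule set from an algorithm with programmatic randomness into the broader algorithmic schema described above.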
Yet the history of generative computing dates to the 1950s, when devices like the U.S. Navy’s Perceptron, developed by Frank Rosenblatt, were already being trained on learning sets for image recognition. The New York Times article below, from July 8, 1958, elaborates on the Perceptron, discussing self-replicating machines, translation, and “the embryo of an electronic computer” that the Navy “expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence.”
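Rosenblatt’s learning rule itself is short enough to sketch. Below is a minimal, illustrative Python version; the logical AND function stands in as a toy “learning set” (that example is mine, not from the 1958 article). The rule is simply: when the output is wrong, nudge each weight by the error times its input.

```python
def train_perceptron(data, epochs=10, lr=1):
    """Rosenblatt's perceptron learning rule on 2-input examples:
    weights move toward inputs on false negatives, away on false
    positives, until the threshold unit classifies correctly."""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out           # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy learning set: the logical AND function
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

The 1958 device embodied essentially this procedure in hardware, with photocells as inputs and motor-driven potentiometers as weights.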
As the above example shows, these possibilities are not just philosophical. The modular community of programmers has leaned far enough toward an open-source model of collaboration to generate a DIY culture of Arduino developers and 3D printers, bringing self-replicating machines and new forms of expression to the consumer level. And in areas like Physical Modeling Synthesis, developers are continuously shrinking the answer space on heuristic challenges like the direct algorithmic modeling of physical (musical) instruments. In Physical Modeling Synthesis, waveforms of sound are computationally generated by algorithms that simulate physical sources of sound, including instruments but also biological and other non-traditional sources of noise. The real-time replication of all the nuance of a live instrument continues on its asymptotic curve toward accuracy. Will this open creative possibilities, or ultimately homogenize individualism and culture?
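For an elementary taste of physical modeling, the Karplus-Strong algorithm (my choice of example; the text names no specific technique) simulates a plucked string by feeding a burst of noise through a delay line whose length sets the pitch, with a two-point average acting as the damping filter.

```python
import random

def karplus_strong(freq, duration, sample_rate=44100, seed=0):
    """Karplus-Strong plucked-string synthesis: a noise burst (the
    'pluck') circulates through a delay line; averaging adjacent
    samples low-pass filters the loop, mimicking a string's energy
    loss, so the tone decays toward silence like a real pluck."""
    rng = random.Random(seed)
    n = int(sample_rate / freq)                       # delay length sets pitch
    buf = [rng.uniform(-1.0, 1.0) for _ in range(n)]  # initial noise burst
    out = []
    for _ in range(int(sample_rate * duration)):
        first = buf.pop(0)
        avg = 0.5 * (first + buf[0])                  # damping filter
        out.append(avg)
        buf.append(avg)                               # feed back into the loop
    return out

# Half a second of a 220 Hz "string"; the noise decays as a pluck does
note = karplus_strong(220.0, 0.5)
```

Full physical models of real instruments go far beyond this, solving wave equations for strings, bodies, and air columns, but the principle is the same: the algorithm models the physics, and the waveform falls out.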
These questions are dominating leading creative electronic events this year. Moogfest, hosted by the pioneering modular-synthesizer company Moog, this year included themes like Transhumanism and Art & Artificial Intelligence. Moog’s recognition and early success can be attributed to Wendy (then Walter) Carlos’s Switched-On Bach (1968), with its now-iconic cover of the classical composer connected to wires.
Other innovative festivals are similarly switched on to social and technological feedback, to the noise of culture and electronic creativity. Berlin’s CTM is a leading example, with its 2016 theme, New Geographies:
“Polar constellations between local and global practices, regional identity and cosmopolitan aspirations, physical locations and new social spaces housed within the global communication network, and between human agency and autonomous processes in nature and technology all feedback one into another, creating short-circuits that extend the possibilities and repertoire of current music even further.”
The question of human agency is well played in the musical environment, by artists from Wendy Carlos to Holly Herndon, whose 2015 album Platform was described by Heather Phares of AllMusic as “nuanced in how it combines political, technological and structural and ideological concepts.” Herndon opens for Radiohead tonight (May 20-21) in Amsterdam.
The importance of interconnected platforms has risen among theorists of all stripes. Johns Hopkins University political theorist William Connolly writes of distributed agency in his 2011 A World of Becoming:
“It is to appreciate multiple degrees and sites of agency, flowing from simple natural processes, through higher processes, to human beings and collective social assemblages. Each level and site of agency also contains traces and remnants from the levels from which it evolved, and these traces affect its operation.”
Agency is bespoke and modular, and it carries the traces of individual, musical, and biological source codes of opposition. Agency is marked by agonism: a discourse of conflict, but with deep respect for the other, a mutual admiration.
This flow of agency, or what Randall McLeod described as “transformission,” where both complexity and culture are transmitted in equal measure, underpins the generative music-making environment, where hosts of tools and technologies are in the air. While selfie-stick culture embodies the isolation of interconnectivity, it also reflects the sampled transmission of on-demand identity: the environment of the agonist, audience to self, and selfless to audience.
The tidal shift in the ontology of electronic information is creating feedback networks. Gender identity and the challenge of public restrooms have risen to a global conversation, with the U.S. Department of Justice’s Civil Rights Division issuing a guidance letter on transgender students. And similarly, the decades-long struggle among musical profits, copyright, and creative freedom continues. Distributed architectures like blockchain, a decentralized and ideally fair-trade structure, are being looked at to address these issues. But blockchain, too, is fraught with bad actors, perverse incentives, and as-yet-unknown unknowns.
From the Perceptron of the 1950s to Google’s Inceptionism, art, and music specifically, is brokering these challenging conversations, bringing sound closer to the variances and vacillations of behavior, always deeply impressing on media from the physical to the virtual to the biological.
Next, I’ll take a look at three different examples of generative music-making tools, each enabling you to become an audience to your own results, though through differing algorithmic schemas. I speak with their respective creators about generative music and the conceptual and technical underpinnings of their applications:
Flintpope’s Broken Orchestra & others
Jay Hardesty’s Coord