Machine Learning & Improvisational Jazz: A Syntactic Connection
Machine learning, a subset of artificial intelligence (AI), is the study of algorithms that perform tasks without explicit instructions, relying instead on patterns and inference. Jazz improvisation (creativity) also relies on patterns and inferred information (knowledge), which results in awareness or concept building:
Scientists assert that creativity is a twofold process, in which an initial improvisatory phase, characterized by spontaneous generation of novel material, is followed by a period of focused re-evaluation and revision. James A. Snead came to a similar conclusion in his essay on black cultural production.
I’m applying this development to computational aesthetics, which appears across a number of fields, such as AI, machine learning, and computer vision, and involves digital images or videos, sensing devices, interpreting devices, and interpretation stages. Once we have the data, information, and knowledge, we can make a computational assessment of beauty. This assessment can include what Alexander Weheliye refers to in “Phonographies” as sonic blackness, or the “interplay between the musical apparatus and the materiality of sound.”
With visualization you take data of some kind and make compelling images from it, as with Deep Dream style transfer (see above), a technique from machine learning. Sonification is visualization for the ears: the information is translated into pitch, volume, stereo position, brightness, and so on (see above). Both images are data visualizations. Here’s the sonification of data coming from outer space:
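The pitch-and-volume mapping described above can be sketched in a few lines. This is a minimal illustration, not the pipeline behind the space-data sonification: the frequency range and the toy input values are my own assumptions.

```python
def sonify(values, low_hz=220.0, high_hz=880.0):
    """Linearly map each data value to a frequency (pitch) and an
    amplitude (volume), returning a list of (freq, amp) pairs.
    Larger values become higher and louder."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    events = []
    for v in values:
        t = (v - lo) / span                     # normalize to 0..1
        freq = low_hz + t * (high_hz - low_hz)  # larger value -> higher pitch
        amp = 0.2 + 0.8 * t                     # larger value -> louder
        events.append((round(freq, 1), round(amp, 2)))
    return events

# A toy three-sample signal, mapped across the octave range:
print(sonify([0.0, 0.5, 1.0]))  # [(220.0, 0.2), (550.0, 0.6), (880.0, 1.0)]
```

The same scheme extends to stereo position or brightness by adding further per-value mappings alongside pitch and amplitude.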
How this resembles jazz improvisation lies in the structure of the algorithm: someone provides the information, such as a personal store of melodic ideas. Next, the musician adapts those ideas to meet rhythmic and harmonic constraints imposed by the rhythm section (the downbeat), while concurrently constructing and performing an improvised solo.
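The structure just described can be caricatured in code: a stored bank of melodic ideas, filtered against the harmonic constraint the rhythm section supplies. This is a hedged sketch of the analogy, not a real music system; the note names, ideas, and chord are all illustrative placeholders.

```python
import random

# A personal "store of melodic ideas" -- the data the improviser draws on.
MELODIC_IDEAS = [
    ["C", "E", "G", "A"],
    ["D", "F", "A", "C"],
    ["E", "G", "B", "D"],
]

def improvise(chord_tones, ideas=MELODIC_IDEAS, seed=0):
    """Pick a stored idea, then adapt it: keep only the notes that fit
    the current chord, mirroring how a soloist bends stored material
    to the constraints imposed by the rhythm section."""
    rng = random.Random(seed)  # seeded so the "solo" is reproducible
    idea = rng.choice(ideas)
    return [note for note in idea if note in chord_tones]

# Over a C major 7 chord (C, E, G, B), a stored idea is reshaped to fit:
print(improvise({"C", "E", "G", "B"}))
```

Changing the seed changes which idea is chosen, a crude stand-in for the spontaneous-generation phase; the chord filter plays the role of the focused revision that follows.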
Improvisation depends on the ability to extemporize new melodies (the training data) that fit the chord sequence (the model). Call-and-response participation is happening, too, as the medium (the machine) responds to the data input. For me this is just another way to understand the work I’m doing with AI.