Artificial Intelligence is one of the most groundbreaking technologies to rise to popularity in recent years. It has worked its way into nearly every piece of technology we use, from assistants like Siri and Google Now to self-driving cars and buses. It was inevitable that it would reach electronic music, as artists in the scene are always looking for new tech to use. Everyone’s favorite company, Google, has made strides in musical AI tools, and a research group in Spain has even created its own composer.
NSynth is a new synthesizer created by the amazingly smart folks at Google that uses machine learning to combine sounds into entirely new ones that would be extremely difficult to craft with normal synths. You can try it yourself here. It’d be cool to see artists in the electronic music scene using this sort of tech; I can just imagine the beautiful sound of a Sitar Cow in one of Hardwell’s drops. Magenta, the Google Brain branch that specializes in applying machine learning to music, has also crafted a fun little experiment that lets you play a duet with its AI. Its response time is fairly slow and it’s difficult to listen to, but I can’t help but think that Pretty Lights might implement it in his live setup one day. Jokes aside, it’s fascinating how artificial intelligence can aid our creative process and allow artists to create never-before-heard compositions.
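To give a rough feel for how that sound-combining works: NSynth encodes each input sound into a learned embedding, mixes the embeddings, and decodes the result into a new sound. The sketch below is a toy illustration of that latent-interpolation idea only; the `encode` function, the random projection, and the toy waveforms are stand-ins I made up, not the real NSynth model.

```python
import numpy as np

def encode(sound: np.ndarray) -> np.ndarray:
    """Stand-in encoder: project a waveform down to a small embedding.
    (NSynth uses a trained neural network here; this is just a demo.)"""
    rng = np.random.default_rng(0)  # fixed projection so the demo is repeatable
    projection = rng.standard_normal((sound.size, 16))
    return sound @ projection

def blend(embedding_a: np.ndarray, embedding_b: np.ndarray, mix: float = 0.5) -> np.ndarray:
    """Linear interpolation in latent space: the core blending idea."""
    return (1 - mix) * embedding_a + mix * embedding_b

# Toy "sitar" and "cow" waveforms (a sine and a square wave).
sitar = np.sin(np.linspace(0, 8 * np.pi, 1024))
cow = np.sign(np.sin(np.linspace(0, 3 * np.pi, 1024)))

# Halfway between the two embeddings: our hypothetical Sitar Cow.
z = blend(encode(sitar), encode(cow), mix=0.5)
print(z.shape)
```

In the real system a decoder network would turn `z` back into audio; blending in embedding space rather than simply layering the two waveforms is what lets the result sound like a genuinely new instrument instead of two sounds played at once.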
However, computers quickly got bored of aiding those pesky artists and started making music on their own. Here’s where we define a new type of electronic music: instead of an artist using electronic instruments or software to create, it is the program itself that composes. The electronics are creating the music (haha). We once thought music to be an innately human quality, but nowadays it’s difficult to tell the difference between computer-generated music and music created by a human, putting a new spin on what “electronic music” means.
Iamus, a computer located in Málaga, Spain, is considered the first original artist that isn’t human. Its programmers gave it a few rules, such as how many notes a person can play at a time on a piano, along with some basic music theory. They then built an algorithm that lets Iamus take these rules and create compositions that people can perform. Listen to its first composition here:
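The rules-plus-algorithm recipe described above can be sketched in a few lines: encode some constraints (a scale, and a cap on how many notes one hand can play at once) and generate material that satisfies them. This is only a hypothetical illustration of rule-constrained composition; the scale, the chord-size limit, and the `compose` function are my own inventions, and Iamus’s actual evolutionary algorithm is far more sophisticated.

```python
import random

# Illustrative "rules" a pianist imposes on the machine.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches, one octave
MAX_NOTES_PER_CHORD = 4                     # roughly one hand on a piano

def compose(bars: int, seed: int = 42) -> list[list[int]]:
    """Generate one chord per bar, obeying the rules above."""
    rng = random.Random(seed)
    piece = []
    for _ in range(bars):
        size = rng.randint(1, MAX_NOTES_PER_CHORD)  # playable chord size
        chord = sorted(rng.sample(C_MAJOR, size))   # notes drawn from the scale
        piece.append(chord)
    return piece

piece = compose(bars=4)
print(piece)
```

Even this toy version shows the division of labor: the humans supply the constraints, the program supplies the choices, and the output is something a person could sit down and play.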
Iamus mainly composes avant-garde classical music, but its creators, Melomics Media, have spawned a new system, Melomics109, that composes pop and many other genres. One could argue that the programmer is the real artist, or that Iamus isn’t truly creative because it “simply follows theory”, but the computer clearly has its own unique style. At the very least, it is extremely thought-provoking, challenges our fragile assumptions about human creativity, and inspires artists to create more. We don’t want computers stealing our artists’ jobs too, huh?
Can you tell the difference between computationally composed music and human music? Here’s a little musical Turing Test:
Another interesting AI that composes jazz: