Artificial intelligence and robotics can now compose music and write songs without human intervention. We have seen futuristic movies where robots play a dominant role in society and often dismissed them as fiction, but it seems we are taking small steps toward that reality after all.
A few decades back (in the 70s), synthesizers caused a sensation when they were first introduced, because no one had heard such sounds before; they sounded very futuristic back then. Ever since, the number of such synthesized sounds has only increased, with musicians now having access to equipment like samplers, sequencers, powerful outboard effects, and more.
So how easy will it be from here on to introduce new sounds? Aren't they all going to start sounding similar, given the current music-production equipment and technologies?
Google says artificial intelligence is the way forward if musicians want access to new musical sounds. Google claims its AI program has invented sounds that humans have never heard before.
Jesse Engel is playing an instrument that's somewhere between a clavichord and a Hammond organ—18th-century classical crossed with 20th-century rhythm and blues. Then he drags a marker across his laptop screen. Suddenly, the instrument is somewhere else between a clavichord and a Hammond. Before, it was, say, 15 percent clavichord. Now it's closer to 75 percent. Then he drags the marker back and forth as quickly as he can, careening through all the sounds between these two very different instruments.
“This is not like playing the two at the same time,” says one of Engel’s colleagues, Cinjon Resnick, from across the room. And that’s worth saying. The machine and its software aren’t layering the sounds of a clavichord atop those of a Hammond. They’re producing entirely new sounds using the mathematical characteristics of the notes that emerge from the two. And they can do this with about a thousand different instruments—from violins to balafons—creating countless new sounds from those we already have, thanks to artificial intelligence.
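The "15 percent clavichord, 75 percent Hammond" slider can be pictured as interpolation between learned representations of the two instruments. This is only a rough sketch of that idea: real systems learn these vectors from thousands of recorded notes, whereas the numbers and names below are made up for illustration.

```python
import numpy as np

# Hypothetical "sound fingerprints" for two instruments. In a real AI
# system these vectors are learned from audio; these values are invented.
clavichord = np.array([0.2, -1.0, 0.5, 0.0])
hammond = np.array([1.0, 0.3, -0.4, 0.8])

def blend(a, b, t):
    """Linear interpolation: t=0.0 is all a, t=1.0 is all b."""
    return (1 - t) * a + t * b

# Dragging the marker from 15% Hammond to 75% Hammond:
print(blend(clavichord, hammond, 0.15))
print(blend(clavichord, hammond, 0.75))
```

The point of the demo is that every position of the marker decodes to a genuinely new sound, not a mix of two recordings layered on top of each other.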
Technology has made it so much easier for musicians to compose music, to use different sounds, to organize things, and to produce the final mix. However, decisions like the song's melody and which instruments to use are still the brainchild of the composer (a human being).
Classical Music Composition by an Artificial Brain, Not a Human
There's another kind of technology that is also developing at a rapid rate – robotics and artificial intelligence – where AI writes songs instead of humans.
Some time back, an AI system crafted a Beatles-inspired track called 'Daddy's Car'.
Here's one case where an artificial brain has composed classical music.
Daniel Johnson fed a recurrent neural network (RNN) – a relatively primitive artificial brain running on a powerful cloud computer – a series of short compositions and music excerpts to teach it the language of music. On the surface the result may sound like a simple MIDI tune, but that training process is what lies behind it, and Johnson also altered the algorithm to ensure that the output was not messy.
When the RNN was allowed to play, the result was as shown in the video below, and it's indeed impressive (barring a few bars in between when it plays the same chord over and over for some time).
There were a lot of glitches along the way of getting the RNN to play; for example, it had not learned how long to hold a chord, which caused it to keep playing the same chords for a long time. But this is only a minor setback; the larger picture here is that the RNN is capable of this much, and probably even more.
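The core loop of such a system is simple to sketch: the network keeps a "memory" of the notes played so far and picks the next note one step at a time. The toy below uses random, untrained weights purely to show the shape of that loop – it is not Johnson's actual code, and a real RNN would learn its weights from MIDI training data rather than use made-up values.

```python
import numpy as np

NOTES = ["C", "D", "E", "F", "G", "A", "B"]  # toy note vocabulary
VOCAB = len(NOTES)
HIDDEN = 16  # size of the network's "memory"

rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.1, (HIDDEN, VOCAB))   # current note -> memory
Whh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))  # memory -> memory (recurrence)
Why = rng.normal(0, 0.1, (VOCAB, HIDDEN))   # memory -> next-note scores

def sample_melody(length=8, start="C"):
    """Generate a note sequence one step at a time."""
    h = np.zeros(HIDDEN)            # hidden state carries musical context
    note = NOTES.index(start)
    melody = [start]
    for _ in range(length - 1):
        x = np.zeros(VOCAB)
        x[note] = 1.0               # one-hot encoding of the current note
        h = np.tanh(Wxh @ x + Whh @ h)                 # update the memory
        scores = Why @ h
        probs = np.exp(scores) / np.exp(scores).sum()  # softmax
        note = rng.choice(VOCAB, p=probs)              # pick the next note
        melody.append(NOTES[note])
    return melody

print(sample_melody())
```

The "same chord over and over" glitch mentioned above is easy to see in this framing: if the network's memory never learns when to move on, the most probable next note keeps being the one it just played.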
Be it visual art, music, or writing TED talks, recurrent neural networks are not quite there yet, but these virtual brains are creating fantastic new things by the day, and with a little help from humans they may soon achieve much more.
There's an AI-created Christmas carol from researchers at the University of Toronto. They feed the computer images, and it can create songs based on what it "sees." The AI system writes lyrics based on the images it is fed, and also composes and sings back music for those lyrics.
While this is definitely not the most beautiful music, it is indeed commendable, considering there’s no human brain behind it.
And over time, it's only going to get better. Who knows, one day you might even have an AI channel on Pandora or Spotify for AI-created music, or an AI that takes people's pictures and videos and writes songs on the fly?
I guess it would be tough to be an artist then.
Also Read: Robots playing music (they have recorded albums too).
KeytarHQ editorial team includes musicians who write and review products for pianists, keyboardists, guitarists & other musicians. KeytarHQ is the best online resource for information on keyboards, pianos, synths, keytars, guitars and music gear for musicians of all abilities, ages and interests.