Just how big a threat is AI for musicians? In part 2 of our deep dive, we take a further look. Catch part 1 on Attack.
The Turning Point For AI And Music
“The real turning point for automatic music generation was brought by the OpenAI release in 2021 of GPT-3, a 175-billion parameter pre-trained language model for language generation,” explains Musi. “GPT-3 is an autoregressive language model based on deep learning where predictions are made step-by-step and the result of one prediction is used as input for the next prediction. GPT-3 constitutes a learning revolution (as) there is no need for annotated training data.”
This has allowed for a new style of AI-based services that compose music automatically, essentially replacing composers.
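To see what “autoregressive” means in practice, it helps to strip the idea down to its loop. The sketch below is purely illustrative: a toy “model” that picks the next note of a melody from a hand-written table rather than from 175 billion learned parameters. The note table and function names are invented, but the feedback structure, where each prediction is appended to the input for the next one, is the same one Musi describes.

```typescript
// Illustrative only: a toy autoregressive loop, not GPT-3.
// The "model" is a hand-written table of likely next notes;
// the point is the feedback structure, not the predictions.

type Note = "C" | "D" | "E" | "F" | "G" | "A" | "B";

const continuations: Record<Note, Note[]> = {
  C: ["E", "G"], D: ["F", "A"], E: ["G", "C"], F: ["A", "D"],
  G: ["C", "E"], A: ["C", "F"], B: ["D", "G"],
};

// Predict the next note from the sequence so far.
function predictNote(context: Note[]): Note {
  const last = context[context.length - 1];
  const options = continuations[last];
  return options[Math.floor(Math.random() * options.length)];
}

// The autoregressive part: each prediction is appended to the
// context and becomes part of the input for the next prediction.
function generate(seed: Note, length: number): Note[] {
  const sequence: Note[] = [seed];
  while (sequence.length < length) {
    sequence.push(predictNote(sequence));
  }
  return sequence;
}

console.log(generate("C", 8).join(" ")); // e.g. "C G E C E G C E"
```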
Music At The Push Of A Button
Generative AI music is here, mostly. You can’t press a button that says Club Banger and spawn a track that will shoot to the top of the Beatport charts yet, but you can make background music for your next piece of video content. Soundful, an AI music generator company that we featured earlier this year, offers such a service. Choose a genre, customize it how you want, and let the AI write the track for you.
This is both exciting for the content creator who needs original music for a YouTube or TikTok video and scary for established composers who make a living doing exactly that. While this may seem like a quick way to put library producers out of work, Soundful claims it’s fulfilling a need: with an estimated two billion-plus content creators worldwide, there isn’t enough library content to go around. The fact that you can customize the track to fit your video is a further benefit.
[quote align=right text=”The answer to any fear of technology is to remember that we want technology to do work that humans do so that humans don’t have to do work machines can do.”]
[advert]
AI And Wellness
AI is also helping personalize music for the wellness industry. Endel is an app that uses AI to create custom music based on user data such as heart rate and immediate goals, like concentration or relaxation. It’s not one hundred percent machine-driven, though. The AI acts like a composer – or even a DJ – arranging samples provided by humans. These humans include well-known artists like James Blake as well as Richie Hawtin, who appears on the app in his Plastikman guise.
“The music I make at the moment has a lot of randomization and generative LFOs making things happen,” he told us last year. “(Endel) is an extension of that, allowing the final decisions to be made by the AI. I knew what textures and modulations I could bake into the samples. I had to know exactly where I could give up control.”
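Endel’s engine is proprietary, so the following is only a rough sketch of the “AI as arranger” idea: map listener data (here, heart rate and a stated goal) to a target intensity, then pick which human-made stems to layer. Every type, name and threshold below is hypothetical, invented for illustration rather than taken from Endel.

```typescript
// A minimal, hypothetical sketch of machine logic arranging
// human-made stems from listener data. Not Endel's actual system.

type Goal = "focus" | "relax" | "sleep";

interface Stem {
  file: string;       // sample supplied by a human artist
  intensity: number;  // 0 (sparse) to 1 (dense), tagged in advance
}

interface ListenerState {
  heartRate: number;  // beats per minute, e.g. from a wearable
  goal: Goal;
}

// Map listener state to a target intensity: an elevated heart rate
// pushes a relaxation session calmer and a focus session busier.
function targetIntensity(state: ListenerState): number {
  const base: Record<Goal, number> = { focus: 0.5, relax: 0.3, sleep: 0.1 };
  const arousal = Math.min(Math.max((state.heartRate - 60) / 60, 0), 1);
  return state.goal === "focus"
    ? base[state.goal] + 0.3 * arousal
    : base[state.goal] - 0.2 * arousal;
}

// "Arrange" by choosing the stems whose intensity best matches the target.
function arrange(stems: Stem[], state: ListenerState, layers = 3): Stem[] {
  const target = targetIntensity(state);
  return [...stems]
    .sort((a, b) => Math.abs(a.intensity - target) - Math.abs(b.intensity - target))
    .slice(0, layers);
}

// e.g. arrange(plastikmanStems, { heartRate: 92, goal: "relax" });
```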
The AI Musical Experience
Another service offering customized music at the intersection of AI and dance music artists is Aimi.fm. Where Endel targets wellness, Aimi.fm aims to let users interact with music at the composition level. The outfit uses AI to “analyze music, curate music, and to configure and program algorithms that compose music,” says CEO Edward Balassanian.
As with other AI-based services, the machine starts with human-created sounds, here provided by artists like Catz ’n Dogz and Space Dimension Controller. “We refer to the mixing of musical ideas (loops) as an experience,” explains Balassanian. “The instructions for how to create these experiences are contained in scores – JavaScript-like programs that are created in partnership with top producers around the world to capture specific styles of music. Humans create the loops and humans create the score. AI in turn populates the experiences with loops that have been curated based on properties extracted by our machine learning algorithms.”
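Aimi’s score format isn’t public, so the sketch below is an invented approximation of the division of labour Balassanian describes: humans write the loops and the score, machine learning tags the loops with properties, and the engine fills each slot in the score with the best-matching loop. All of the types and field names are hypothetical.

```typescript
// Hypothetical types only; not Aimi's real score format.

type Role = "drums" | "bass" | "pads" | "texture";

interface Loop {
  id: string;
  artist: string;                 // human-created material
  role: Role;
  tags: Record<string, number>;   // properties extracted by ML, e.g. { energy: 0.7 }
}

interface Slot {
  role: Role;
  wants: Record<string, number>;  // the producer's intent, written into the score
}

// A "score": an ordered description of a style, authored with producers.
const exampleScore: Slot[] = [
  { role: "drums", wants: { energy: 0.6 } },
  { role: "bass",  wants: { energy: 0.5, darkness: 0.6 } },
  { role: "pads",  wants: { energy: 0.3, darkness: 0.7 } },
];

// How far a loop's machine-extracted tags sit from what the score asks for.
function mismatch(wants: Record<string, number>, tags: Record<string, number>): number {
  return Object.entries(wants).reduce(
    (sum, [key, value]) => sum + Math.abs(value - (tags[key] ?? 0)), 0);
}

// Populate the experience: for each slot, pick the closest-matching loop.
function populate(score: Slot[], library: Loop[]): Loop[] {
  return score.map(slot =>
    library
      .filter(loop => loop.role === slot.role)
      .sort((a, b) => mismatch(slot.wants, a.tags) - mismatch(slot.wants, b.tags))[0]
  );
}
```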
[advert]
A Storm In A Teacup?
Despite a general unease around the idea of AI, it’s clear that – at least for now – it isn’t going to replace human composers anytime soon. Whenever there’s a new breakthrough in music-making technology, there’s a period of fretting and pearl-clutching until musicians realize that what they have in front of them is not a threat but another tool for composition. It happened with the synthesizer, the drum machine, and MIDI, each of which was greeted with howls of protest from musicians afraid of being replaced.
Aimi.fm’s Balassanian agrees with this. “My mom bought me a Yamaha DX-7 when they first came out. I remember musicians bemoaning the death of music when synthesizers like this first came to market. Now those same musicians have elevated music to a whole new level. In short, the answer to any fear of technology is to remember that we want technology to do work that humans do so that humans don’t have to do work machines can do.”
Rise Of The Man-Machines
Human and AI collaboration will likely be the next big thing. Much like we learned to program synthesizers and then make music with computer-based DAWs, the next frontier will be incorporating AI into our workflows.
“Soundful is not going to take anything away from a good producer,” said its founder, Diaa El Ali. “A good musician or producer can leverage new tools, like ours, in any way they see fit.” In fact, this is already happening, with some producers using it as a stem generator to supplement their own music – a sort of bespoke Splice.
Richie Hawtin sees the possibilities of human/AI collaboration as endless. “You have human and AI interaction, then you have the interaction of AI and AI; there are unlimited permutations that can come out. It’s exciting and scary at the same time. We’re heading towards augmented reality and real-time simulations where we can live in a world where experiences just unfold in front of us. This can’t be done by humans alone; the collaboration between man and machine could bring us into a whole new world of clubs.”
[quote align=right text=”My mom bought me a Yamaha DX-7 when they first came out. I remember musicians bemoaning the death of music when synthesizers like this first came to market”]
[advert]
Can We Ever Embrace Full Machine Music?
Human and AI collaborations are already underway. Will we ever find ourselves enjoying music made entirely by AI with no human input at all, though?
“It is always difficult to make this type of prediction, especially when Gen Z has very different behaviors towards the digital, in comparison to those of Millennials (like me),” says Musi. “Focusing on Millennials, my gut feeling is that AI-generated music will become more popular but confined to certain activities distinct from live music performances. At the end of the day, when you ask a conversational agent (e.g. Amazon Alexa) to put on some jazz music while you are cooking, you do not really care about the author, since the music you are listening to is not in the foreground. However, if you go to listen to live music, or you go to a music concert, or you listen to your favorite artist on Spotify in your room, there are an array of human aspects that are part and parcel of the experience: the body language, the persona that the artist represents, that mimesis process that makes you admire the artists. I mean, would you become a fan of a randomly generated AI song? I have doubts.”
For now, there’s collaboration and the excitement of an expanding tool palette. And don’t worry, despite what Blake Lemoine may say, fully conscious AI with independent agency is still a science fiction dream. “I believe we are quite far off (from conscious AI),” agrees Musi. “We still must figure out what consciousness amounts to in humans (at least from a neuroscience perspective). As to intentionality, for sure not in a human sense: to be intentional we need to make sense of reality, understanding its meaning in context. This is not what AI is ready to do at the moment.”
Read Part 1 on Attack.
Read our interview with Richie Hawtin.
Read our interview with Soundful.
[product-collection id=”75025″]