Music And Artificial Intelligence: A Bond That’s Growing By Leaps And Bounds

By Anya Wassenberg on October 21, 2022

Over the last decade or so, artificial intelligence (AI) has become more and more prevalent in everyday life, from online ads that seem to know just what you’re looking for to music composition and other creative applications.

The very notion of making music with AI raises questions about the nature of creativity, and of the future for human composers. From useful tools to pioneering prototypes, here’s a look at some of the most recent innovations that use AI in the music writing process.

ScoreCloud Songwriter

DoReMIR Music Research AB recently announced the launch of ScoreCloud Songwriter, a tool that turns original music into lead sheets. The software works from a recording made with a single microphone, which can include both vocals and instruments. AI models separate out the vocals, then transcribe the music, including melody and chords, along with the lyrics in English. The result is a lead sheet with the melody, lyrics, and chord symbols.
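To get a feel for one small piece of what such a transcription tool automates, here is a toy sketch in Python that detects the pitch of a single sustained tone with an FFT and maps it to a note name. This is only an illustration of the melody-detection step, not ScoreCloud's actual method, which also separates vocals, tracks chords, and transcribes lyrics.

```python
import numpy as np

# Toy pitch detection: find the strongest frequency in a recording and
# name the note. A real transcription system handles polyphony, timing,
# vocals, and lyrics; this just shows the core audio-to-notation idea.

SAMPLE_RATE = 44100
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def detect_note(audio, sample_rate=SAMPLE_RATE):
    """Return (note name, octave) of the strongest frequency in `audio`."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(audio.size, d=1.0 / sample_rate)
    peak = freqs[spectrum.argmax()]
    # MIDI note number from frequency (A4 = 440 Hz = MIDI 69).
    midi = int(round(69 + 12 * np.log2(peak / 440.0)))
    return NOTE_NAMES[midi % 12], midi // 12 - 1

# One second of a pure middle C (C4, 261.63 Hz) stands in for a recording.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
recording = np.sin(2 * np.pi * 261.63 * t)

name, octave = detect_note(recording)
print(f"detected note: {name}{octave}")   # → detected note: C4
```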

“Many established and emerging songwriters are brilliant musicians but struggle with notating their music to make it possible for others to play,” explained Sven Ahlback, DoReMIR CEO, in a media release. “Our vision is that ScoreCloud Songwriter will help songwriters, composers, and other music professionals, such as educators and performers. It may even inspire playful use by lovers of music who never thought they could write a song. Our hope is that it will become an indispensable tool for creating, sharing, and preserving musical works.”

Harmonai’s Dance Diffusion

Harmonai is a company that creates open-source models for the music industry, and Dance Diffusion is its latest innovation in AI audio generation. It uses a combination of publicly available models to generate short audio clips, so far about one to three seconds long, essentially from nothing; these can then be interpolated into longer recordings. Since it's AI, the more audio files users feed it to learn from, the more it will evolve and develop. If you're interested in how Dance Diffusion came together, there's a video interview with the creators.
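The core idea behind diffusion models like this one is to start from pure noise and iteratively denoise it into a waveform. The Python sketch below shows the shape of that sampling loop; the "oracle" denoiser, which already knows the target signal, is an invented stand-in for the network a real system learns from training audio.

```python
import numpy as np

# Toy diffusion-style sampling loop: begin with random noise and remove a
# predicted fraction of it at each step. A trained model would predict the
# noise from data; this oracle stands in purely to show the loop structure.

SAMPLE_RATE = 8000
CLIP_SECONDS = 1   # Dance Diffusion's clips are roughly 1-3 seconds long
t = np.linspace(0, CLIP_SECONDS, SAMPLE_RATE * CLIP_SECONDS, endpoint=False)
target = 0.5 * np.sin(2 * np.pi * 220.0 * t)   # the "clean" audio: a 220 Hz tone

def oracle_denoiser(noisy, clean):
    """Stand-in for the learned network: estimate the noise in `noisy`."""
    return noisy - clean

rng = np.random.default_rng(0)
x = rng.standard_normal(t.shape)   # generation starts from pure noise

steps = 50
for step in range(steps):
    predicted_noise = oracle_denoiser(x, target)
    x = x - predicted_noise / (steps - step)   # strip away part of the noise

error = np.max(np.abs(x - target))
print(f"max deviation from target after denoising: {error:.6f}")
```

With the oracle, the loop lands exactly on the target; the interesting part in practice is that a learned denoiser can play the same role with no target in hand.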

Here’s one of their projects, an infinite AI bass solo that has been playing since January 27, 2021. It’s based on the playing of musician Adam Neely.

It’s still in the testing stages, but its implications are profound.

Google’s AudioLM

Google’s new AudioLM bases its approach to audio generation on the way language is processed. Given a short excerpt of input, it can continue it, generating piano music, for example. Speech combines sounds into words and sentences, much as music combines individual notes into melody and harmony, so Google engineers used concepts from advanced language modelling as their guide. The AI captures the melody and overall structure as well as the fine details of the audio waveform, reconstructing sound in layers designed to capture the nuances.
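That layered reconstruction can be illustrated with a toy in Python: quantize a waveform coarsely to capture its broad shape, then quantize the residual finely to recover detail. This is not AudioLM's actual tokenizer, which is learned, but it shows why stacking a coarse layer and a fine layer beats the coarse layer alone.

```python
import numpy as np

# Toy two-layer audio coding: a coarse layer for broad structure plus a
# fine layer for residual detail. AudioLM uses learned tokenizers and a
# language model over the tokens; this only illustrates the layering idea.

def quantize(signal, levels):
    """Snap each sample to the nearest of `levels` evenly spaced values in [-1, 1]."""
    grid = np.linspace(-1.0, 1.0, levels)
    idx = np.abs(signal[:, None] - grid[None, :]).argmin(axis=1)
    return grid[idx]

t = np.linspace(0, 1, 800, endpoint=False)
# A slow wave (the "melody") with a faster wave (the "detail") on top.
audio = 0.6 * np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 40 * t)

coarse = quantize(audio, 4)             # coarse layer: rough overall shape
fine = quantize(audio - coarse, 64)     # fine layer: encodes what coarse missed
reconstruction = coarse + fine

coarse_err = np.mean((audio - coarse) ** 2)
layered_err = np.mean((audio - reconstruction) ** 2)
print(f"coarse-only error: {coarse_err:.5f}, coarse+fine error: {layered_err:.5f}")
```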

Meta’s AudioGen

Meta’s new AudioGen uses a text-to-audio AI model to create sounds as well as music. The user enters a text prompt, such as “wind blowing”, or even a combination, such as “wind blowing and leaves rustling”, and the AI responds with a corresponding sound. The system was developed by Meta and the Hebrew University of Jerusalem, and it is able to generate sound from scratch. The AI can separate different sounds in a complex scene, such as several people speaking at once. Researchers trained it on a mix of audio samples, and it can generate new audio beyond its training dataset. Along with sounds, it can generate music, but that part of its functionality is still in its infancy.
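The prompt-to-waveform contract can be sketched in a few lines of Python. Everything below is invented for illustration: instead of a learned generator, each known phrase maps to a crude synthesized noise texture, and a combined prompt simply mixes them. AudioGen itself generates audio with a neural network, not a lookup table.

```python
import numpy as np

# Toy text-to-audio interface: map phrases in a prompt to synthesized
# noise textures and mix them. The sound "recipes" here are made up;
# they only demonstrate the prompt -> waveform shape of the interface.

SAMPLE_RATE = 16000
rng = np.random.default_rng(42)

def wind(duration):
    """Heavily smoothed noise: a rough 'wind blowing' texture."""
    n = int(SAMPLE_RATE * duration)
    noise = rng.standard_normal(n)
    kernel = np.ones(400) / 400           # moving average pushes energy low
    return np.convolve(noise, kernel, mode="same")

def rustle(duration):
    """Sparse bursts of noise: a rough 'leaves rustling' texture."""
    n = int(SAMPLE_RATE * duration)
    noise = rng.standard_normal(n)
    envelope = (rng.random(n) > 0.7).astype(float)   # crackly on/off bursts
    return noise * envelope * 0.3

SOUNDS = {"wind blowing": wind, "leaves rustling": rustle}

def text_to_audio(prompt, duration=2.0):
    """Mix the texture of every known phrase that appears in the prompt."""
    parts = [synth(duration) for phrase, synth in SOUNDS.items() if phrase in prompt]
    if not parts:
        raise ValueError(f"no known sounds in prompt: {prompt!r}")
    mix = np.sum(parts, axis=0)
    return mix / np.max(np.abs(mix))      # normalize to [-1, 1]

clip = text_to_audio("wind blowing and leaves rustling")
print(f"generated {clip.size} samples, peak {np.max(np.abs(clip)):.2f}")
```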

What’s next?

With AI music generation in its infancy, it’s easy to dismiss its future impact on the industry. But it can’t be ignored.

The electronic band YACHT recorded a full album with AI in 2019, using technology that has already been surpassed. Essentially, they taught an AI how to be YACHT, let it write the music, and turned the results into their next album.

“I’m not interested in being a reactionary,” said YACHT member and tech writer Claire L. Evans in a documentary about their 2019 AI-assisted album Chain Tripping (as quoted in TechCrunch). “I don’t want to return to my roots and play acoustic guitar because I’m so freaked out about the coming robot apocalypse, but I also don’t want to jump into the trenches and welcome our new robot overlords either.”

The onslaught of new technology is relentless. The only choice is to hop on the train.

© 2022, Museland Media, Inc., All Rights Reserved.