Human creativity was once thought to be safe from major advances in artificial intelligence (AI), but new generative AI systems are already competing with the imaginations of our most creative people, including musicians.
Google’s AI research project MusicLM has impressed listeners in recent weeks with its ability to turn simple text prompts into music that can sometimes be almost indistinguishable from works created by humans.
So we’ve collected five pieces of music created by humans and five created by the MusicLM AI, to see if you can spot differences between the two in the quiz below.
After the interactive questionnaire, we ask an expert to discuss how we might be able to better identify AI music, and what its advancements may mean for artists and the listening public.
For now, let’s get listening.
Was this music created by humans or AI?
All snippets are 10 seconds long.
How can we tell if music was made by AI?
Google’s MusicLM project was trained on 280,000 hours of music, which taught it how to create audio that its creators said had “significant complexity”.
Musician and University of New South Wales associate professor Oliver Bown says that while AI-generated music is improving, there are still some telltale signs that audio was created by an algorithm rather than a person, including:
- Grainy sound
- Time warping and general “wonkiness”
- Inaudible vocals
- Lyrics that don’t make sense
“With a text-to-music system, you’re very likely to get mangled words, not coherent English,” Dr Bown says. “But you can still do some very nice, elegant algorithmic design that guarantees you’re going to get reasonably nice musical effects.”
Dr Bown says music made by humans is usually more refined, with clearer shifts in energy or momentum, but AI can sometimes do quirky and interesting things, such as complex guitar solos and drum fills, that make its audio harder to tell apart from the real thing.
“When I listen, if it sounds like music, then we’re talking about a really good, powerful system,” he says.
Creative opportunities for artists, but also copyright concerns
Dr Bown says that as AI systems improve and become publicly available, we are likely to see more musical artists use them for inspiration.
“You are still in the driving seat, composing, but if you want to ask the AI to get in there and give it a style or do some variation for you, or come up with some candidates, then that’s part of the process,” he says.
He says non-musicians will also have a chance to play around with the technology.
“You’re suddenly in a world where we can flick completely reasonable music compositions to each other without even a passing thought,” he says.
“All of the weirdness and idiosyncrasies of those systems will just become part of the fun of playing with them.”
Despite the excitement, there are still concerns about the copyright implications of training AI on music created by humans. As in the art world, the law is yet to catch up.
Google says it doesn’t plan to release its MusicLM software publicly yet, as there remains a “risk of potential misappropriation of creative content”.
AI systems have also been controversially used in recent years to create deepfakes of singers’ voices and songs, whether those singers are living or dead.
“It’s a double-edged sword, I think, for musicians because whichever way it goes there’s a risk,” Dr Bown says.
“There might potentially be fruitful revenue in having your data used to train an AI, but it’s just hard to see forward.
“One thing that is clear, though — there’s quite an issue globally with all AI implementations when it’s just being wrapped into a business model. It’s usually pretty disruptive and it needs to adopt a slightly more socially conscious, go-slow approach.”