Spotify Pulls Viral Track 'I Run' Amid AI Controversy: What Are The Boundaries Of AI In Music?

Artificial Intelligence (AI) is everywhere these days. What started quietly around three years ago as a simple tool for answering questions or solving small problems has worked its way into nearly every part of our lives. Usually, the most accepted use of AI is when it cuts time and costs compared to its human counterparts: productivity. But what happens when a tool designed to solve productivity constraints makes its way into the artistic side of humanity? We're starting to see some examples, and the challenges that come with them.
The freshest, and perhaps most striking, example is artist HAVEN.'s hit single 'I Run'. As EDM.com reports, the breakout dance track, which had climbed to No. 11 on Spotify's US charts and amassed over 13 million streams, was removed from major streaming platforms over suspicions that it used AI-generated vocals mimicking a real artist. Following its removal from Spotify and Apple Music, a Spotify spokesperson commented: "Spotify strictly prohibits artist impersonation. This track was detected and removed, and no royalties were paid out for any streams generated."
'I Run', released in October by the previously unknown artist HAVEN., drew suspicion after users on social media flagged the vocals' striking similarity to those of British singer-songwriter Jorja Smith, who publicly denied any involvement. London-based producer Harrison Walker has since identified himself as HAVEN., insisting that the vocals are his own voice, albeit heavily processed with effects and filters. The controversy deepened when audio engineer Matt Cahill, who was tagged in a post by Walker, shared a video discussing the process of recording vocals on a phone and then feeding them through Suno, a generative AI platform focused on music.
How The Music Industry Will Have To Adapt To Safely Integrate AI

While the song is back online, with a real vocalist featured and credited on the new version of 'I Run', the case raises some points worth considering. Chiefly, the track was flagged and removed because the algorithm determined it was impersonating someone else, which is the expected outcome under current copyright legislation. The problem is that many similar tracks are never found. This one was likely only caught because it exploded in popularity, entered royalty territory, and blew up on social media, where human listeners noticed the similarities and pushed streaming platforms into taking action.
This begs the question: what is the future for AI-generated music? Perhaps the music industry will need to create a separate category where AI music can roam freely, kept apart from the standard music categories, and perhaps such tracks shouldn't generate any royalties when an artist is found to be impersonated, with those royalties going to the impersonated artist instead. And what does that mean for the teams behind huge artists? Wouldn't they want to actively seek royalties from other tracks, even those that are not true impersonations of their artists? And what if they themselves released fully AI-generated tracks impersonating their own artists, to put out an amount of music the artist could never normally produce?
What Now? The Dilemma Surrounding The Future
The debate opens up a genuine Pandora's Box. Currently, we are primarily dealing with vocal AI-generated content, which is creating various disputes across the internet. But what happens when the issue moves to instrumentation?
Let's say a track is prompted to have a guitar that sounds just like Slash's, bass slaps that imitate Flea of the Red Hot Chili Peppers, a trumpet like Miles Davis', or the iconic saxophone from Gerry Rafferty's 'Baker Street'. How does detection play out in that scenario? It's arguably harder to spot, or at least fewer human listeners would be able to recognize such a thing and flag it to the streaming platforms. The path of least resistance is probably for the platforms themselves to strengthen their algorithms to analyze most elements in a track, using stem separation prior to the analysis (a rough sketch of that idea follows below). This will, of course, be expensive, which in turn could mean higher subscription costs for users.
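To make the idea concrete, here is a minimal sketch of what a stem-separation-first pipeline could look like, built on the open-source Spleeter library. Everything beyond the separation step is an assumption for illustration: the analyze_stem function, the similarity threshold, and the file names are hypothetical placeholders, not any platform's actual detection system.

```python
# Minimal sketch of a stem-separation-first detection pipeline.
# Uses the open-source Spleeter library (pip install spleeter) to split
# a track into stems; the per-stem "analysis" is a hypothetical stub.
from pathlib import Path

from spleeter.separator import Separator


def analyze_stem(stem_path: Path) -> float:
    """Hypothetical placeholder: would compare a stem's audio
    fingerprint against a database of known artists and return a
    similarity score in [0, 1]. Streaming platforms do not publish
    how (or whether) they do this."""
    return 0.0


def scan_track(track_path: str, workdir: str = "stems") -> dict[str, float]:
    # Split the track into four stems: vocals, drums, bass, other.
    separator = Separator("spleeter:4stems")
    separator.separate_to_file(track_path, workdir)

    # Spleeter writes stems to <workdir>/<track name>/<stem>.wav.
    stem_dir = Path(workdir) / Path(track_path).stem
    return {wav.stem: analyze_stem(wav) for wav in stem_dir.glob("*.wav")}


if __name__ == "__main__":
    scores = scan_track("i_run.mp3")  # hypothetical file name
    for stem, score in scores.items():
        flag = "FLAG" if score > 0.9 else "ok"  # 0.9 cutoff is arbitrary
        print(f"{stem}: similarity={score:.2f} [{flag}]")
```

Analyzing each stem separately is what would let a system catch, say, a cloned saxophone buried under human vocals, but running separation plus per-stem fingerprinting on every upload is exactly the kind of compute cost the paragraph above alludes to.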
And if that headscratcher weren't enough: at what point does a track stop being real music and become AI music? How many elements must be AI-generated for the work to be considered AI: one, two, five? Do different elements weigh differently? For example, what if a song has AI vocals but human instrumentation, or the other way around, or if only the drums are AI, or the drums and the bass?
To conclude, the 'I Run' controversy is a big one, but it merely opened a window onto a whole world of legal and ethical questions that will have to be addressed in the coming years, possibly even months, now that artificial intelligence has entered the music industry. AI has rightfully bled into a plethora of industries for optimization, but what happens when it dives into art? Art is a historically human endeavor that serves no purpose other than entertainment. It doesn't optimize anything, it doesn't cut down time, and it doesn't produce anything quantifiable in numbers.
Music, visual arts, cinema, and comedy are all just different branches of something that is instinctively human. What is going to happen to all of these creative fields in the coming years?


