WTF Is Going On With AI Music? (Spoiler: The Industry Just Did a Complete 180)

An AI-generated country song mimicking Blanco Brown's voice hit #1 on Billboard without his consent, exposing how music labels went from suing AI companies like Suno to partnering with them—even as the platform generates 7 million songs daily and the industry still can't figure out fair compensation for artists.

Imagine waking up to discover an AI-generated song that sounds exactly like you just topped the Billboard charts—except it's credited to a fictional white cowboy avatar and you had no idea it existed.

That's what happened to Blanco Brown, the Grammy-nominated Black country artist behind "The Git Up." An AI track called "Walk My Walk" hit #1 on Billboard's country digital sales chart this month, credited to "Breaking Rust"—a completely fictional artist with an AI-generated persona. But the vocal style, phrasing, and musical DNA? All Blanco Brown.

Here's the song so you can listen for yourself:

"I didn't even know about the song until people hit me up," Brown told AP. "My phone just kept blowing up. Somebody said: 'Man, somebody done typed your name in the AI and made a white version of you.'"

The song was apparently created by Abraham Abushmais, a former collaborator who co-wrote tracks on Brown's 2019 album. Brown wasn't notified, credited, or compensated. "It's a white AI man with a Black voice," Brown said. "And he's singing like a Negro spiritual."

How is this possible? Doesn't the music industry hate AI music? Aren't these companies getting sued to high heaven? Not anymore.

Warner Music Just Went From Suing an AI Music Company to Partnering With It

The major record labels were in full panic mode over AI music companies allegedly training on their catalogs without permission. Warner Music Group was one of the loudest voices in that chorus, filing a copyright lawsuit against AI music generator Suno last summer.

Well, plot twist.

Warner Music and Suno just announced a partnership, flipping from courtroom adversaries to business collaborators in less than a year. The lawsuit? Settled and dismissed. Here's the wild part: since January 2024, Suno has spent $32 million on compute power to train its model. How much did it spend on data costs, the actual music used for training? $2,000.

Now, of course this settlement is going to lead to some sort of payment to the labels, which may or may not be disclosed. But more importantly, the settlement has led to a new plan: building what the labels and Suno are calling "next-generation licensed AI music" that creates "new revenue opportunities" for Warner's artists.

Here's How It Works:

Suno will launch a music generation tool that's specifically trained on Warner Music's catalog—but this time, with permission and proper licensing. Artists signed to Warner labels can opt in to have their music included in the training data, and they'll get paid when people use the AI to create music inspired by their style.

Think of it like this: instead of fans just streaming your music, they could use AI to create their own songs "in the style of" their favorite Warner artists. The original artists get compensated, fans get creative tools, and Warner gets a cut of a new revenue stream.

The partnership includes:

  • Artist control: Warner artists can choose whether to participate or not.
  • Commercial licensing: The AI-generated music can be used for commercial purposes.
  • Revenue sharing: Artists and songwriters get paid when their "style" is used.
  • Attribution technology: Systems to track and credit the original artists who influenced the AI outputs.
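Neither Warner nor Suno has published the mechanics behind any of these bullet points, so here's a rough sketch of how the opt-in and revenue-sharing pieces might fit together (every name, rate, and structure below is hypothetical, not anything from the announced deal):

```python
from dataclasses import dataclass

@dataclass
class OptInRecord:
    """Hypothetical per-artist licensing record: whether the artist's
    catalog is included in the training set, and their share of the
    fee when a generation is attributed to their style."""
    artist_id: str
    opted_in: bool
    royalty_rate: float  # e.g. 0.15 = 15% of the per-generation fee

def payable_artists(records, attributions):
    """Given attribution weights from the model (artist_id -> influence
    weight, summing to at most 1.0), return payouts per $1 of
    generation fee -- but only for artists who opted in."""
    return {
        r.artist_id: round(attributions.get(r.artist_id, 0.0) * r.royalty_rate, 4)
        for r in records
        if r.opted_in and r.artist_id in attributions
    }

records = [
    OptInRecord("artist_a", opted_in=True, royalty_rate=0.15),
    OptInRecord("artist_b", opted_in=False, royalty_rate=0.15),
]
# artist_b influenced the output but opted out, so they earn nothing
print(payable_artists(records, {"artist_a": 0.6, "artist_b": 0.4}))
```

The point of the sketch: "artist control" and "revenue sharing" only mean something if the attribution weights feeding this calculation are trustworthy, which is exactly the part no one has explained yet.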

It's not just Suno, either. Sony, Warner, and Universal all signed deals with AI startup Klay to create what they're calling "large music models" trained exclusively on licensed catalogs.

These partnerships will expand beyond music generation—Warner and Suno are exploring how AI could help with music discovery, personalized playlists, and even new ways for fans to interact with their favorite artists' catalogs. This could be innovative and interesting, or it could be cringe and weird. For example, AI music videos of your K-pop faves starring you as the love interest? Kinda weird... but I bet fans would love it.

Two Perspectives on What's Happening:

Music producer Rick Beato called "Walk My Walk" pure "AI slop" that only topped charts because someone spent $3,000 buying downloads to game Billboard's system. The song has 2.2 million Spotify listeners but a 1.5-star rating from actual humans.

Developer Theo argues the real problem is that AI music companies are trying to replace musicians entirely instead of building tools that work alongside professional software. To be fair, Suno Studio aims to correct this: it lets you generate stems (vocals, drums, synths) from any audio you upload, then layer and edit them in a timeline.

He then broke down why AI music generation feels different from AI code tools: when you generate text or code, value is created beyond mere consumption. But when you generate media (art, or more specifically, music), the only value is in the consumption itself. Humans like music because we listen to it; music has zero value outside of being heard by humans. Code's value, by contrast, comes from running it, not reading it. And writing has value even when machines read it, because the information gets indexed by Google and points readers to your content.

But when Suno generates a song, it's not helping musicians work faster... it's trying to replace them entirely. It's not like robots get value from consuming AI music. The only value of music is humans listening to it... so why do we need robots doing it if it's not helping musicians in their process?

He also dropped insider knowledge from his Twitch days: Warner once extracted a billion-dollar licensing deal from Facebook, then watched Facebook do nothing with it. That set the precedent that "accessing Warner's IP is worth billions." When Twitch tried negotiating, the cost would've bankrupted them, so they built a two-track audio system that deletes music after streams end: a technical workaround to dodge Warner's pricing. As a result, he believes these deals with the AI companies won't result in artists getting paid fairly; the middlemen will extract more value for themselves.

Think about how you can barely use a licensed song on YouTube today without getting flagged by YouTube's copyright system. Or worse: if you're Rick Beato, you get constant legal threats from the music labels for using snippets of songs that are clearly fair use in context (talking to the original artist about a song they wrote while playing it in the background).

Why This Matters

These deals between the music labels and the AI companies could be the template for how the music industry makes peace with AI. Instead of trying to ban the technology or sue it into oblivion, Warner's taking the "if you can't beat 'em, join 'em" approach: but with guardrails that protect artist interests and create new income streams.

The big question is whether artists will actually opt in. Some might see it as expanding their creative influence and generating passive income. Others might view it as cheapening their artistry or training their own competition. Either way, Warner's at least giving them the appearance of choice rather than having their music scraped without consent and fed to an infinite conveyor belt churning out derivative content at incredible scale.

What kind of scale are we talking about? According to Suno's investor pitch deck, users on the platform generate 7 million songs a day—enough to recreate Spotify's entire catalog every two weeks. The company just raised $250M at a $2.45B valuation, projecting it'll become a "$500 billion company" by powering a "new, bigger music ecosystem."

If artists revolt or consumers don't care, it'll be a cautionary tale about trying to monetize AI too aggressively.

The “Walk My Walk” situation exposes the messy reality of AI-generated media, too: it can mimic an artist's sound with eerie accuracy, but the legal framework around consent, attribution, and compensation barely exists.

Blanco Brown, for example, isn't anti-AI; he's just asking the obvious question: “If someone is going to sing like me, it should be me.” Or the subtext: if it's a robot singing like him, HE should get paid.

The Warner/Suno partnership might be the template for how the industry makes peace with AI, but it only works if artists actually opt in. And based on the Blanco Brown situation, we're still figuring out where the line is between inspiration, imitation, and straight-up theft.

Our Take

Whatever ultimate form these new partnerships take, they need to incorporate an automated compensation and licensing system so everyone can use music fairly and get compensated fairly. The current system is byzantine: if you're allowed to use music in your TikTok videos, why can't you on YouTube? Why do you get flagged and demonetized in one place (or worse, sued by the labels) when another instance is fair use?

The music labels have been on a litigation spree: Warner sued Crumbl Cookies for $23.85M over 159 songs in TikTok posts (April 2025), then DSW Designer Shoe Warehouse for $30M over 200+ songs (May 2025). Sony hit USC, and UMG went after Chili's (you are NOT welcome to Chili's, UMG!). The pattern is clear: labels aggressively monetize social media usage via lawsuits while platforms like TikTok tell brands "use our music library!" but the licenses only cover personal use.

And it's not just brands getting squeezed, but TikTok itself. In early 2024, Universal Music pulled all its music from TikTok for three months after licensing negotiations collapsed. Millions of videos featuring Taylor Swift, Drake, and Billie Eilish went silent. UMG wanted more money (TikTok represented just 1% of their revenue) and stronger AI protections. TikTok accused UMG of "greed," while UMG said TikTok was trying to "bully" them into accepting less than fair market value. They settled in May 2024 with improved artist pay and AI safeguards—but the entire standoff showed how fragile and opaque these licensing deals really are.

The system is broken because it's binary: either you get nothing, or the rights holder (not even the artists, but the labels who own the artists) takes everything.

So clearly there needs to be a technical solution to this, maybe something closer to Twitch's solution: a system that separates music from other audio, tracks usage automatically, and splits revenue proportionally. If a song plays for 1 minute in a 20-minute video, the artist gets compensated for that percentage. If music plays throughout, creator and artist split it. Automate the calculations and compensation on the backend, and programmatically distribute as the ad revenue comes in.
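As a minimal sketch of that duration-proportional split (the 50/50 creator/artist division for full-length music use and the cent rounding are assumptions, not anything a platform has announced):

```python
def split_ad_revenue(video_seconds: int, music_seconds: int,
                     ad_revenue: float, artist_share: float = 0.5) -> dict:
    """Split a video's ad revenue between creator and artist in
    proportion to how long licensed music actually plays.

    If music runs the whole video, creator and artist split the
    revenue (50/50 by default, an assumed rate); if music covers only
    a fraction, the artist's cut shrinks proportionally."""
    if video_seconds <= 0 or not 0 <= music_seconds <= video_seconds:
        raise ValueError("music_seconds must fit within the video length")
    music_fraction = music_seconds / video_seconds
    artist_cut = ad_revenue * music_fraction * artist_share
    return {
        "artist": round(artist_cut, 2),
        "creator": round(ad_revenue - artist_cut, 2),
    }

# The article's example: 1 minute of music in a 20-minute video
print(split_ad_revenue(video_seconds=20 * 60, music_seconds=60, ad_revenue=10.00))
```

Under these assumed rates, one minute of music in a 20-minute video earning $10 of ad revenue pays the artist $0.25 and the creator $9.75; wall-to-wall music would split it $5.00/$5.00. The hard part isn't this arithmetic, it's the detection and the will to deploy it.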

The technology to do this already exists. Platforms can detect copyrighted audio down to the second. They just need to use that precision for fair compensation instead of all-or-nothing takedowns.

Until that happens, these AI partnerships are just moving deck chairs around. Artists need more than opt-in checkboxes. They need systems that actually track, attribute, and pay them fairly when their work (or their AI-cloned voice) gets used. Otherwise, we're just building a faster, shinier way to not pay musicians or creators what they are fairly owed for creating content on online platforms that the platforms and labels benefit from.


See you cool cats on X!
