
The Future of Music and AI: A Conversation with Kevin Erickson

A discussion with Kevin Erickson from Future of Music Coalition about AI’s threats to musicians, the gatekeeping status of streaming platforms, and recommendations for lawmakers, regulators, and AI developers.

Giliann Karon, Dec 16, 2024

No one becomes a musician for the money. Spotify famously pays less than half a penny per stream, and its algorithm favors musicians who already have a sizeable platform. Artists struggle to break even on touring, and some venues squeeze out as much as they can by taking up to 25% of merch sales. Unless you’ve struck gold with TikTok’s algorithm or have rich parents, the current economic climate makes it untenable for musicians to quit their day jobs.

Musicians aren’t Luddites. They’ve always adapted to, rather than resisted, the latest technological developments. When was the last time anyone listened to a gramophone? Recent digital advancements have allowed musicians to weave together samples from other artists, create drum sounds without a drum kit, and configure pedals until an instrument sounds nothing like its physical form.

However, the rapid evolution and widespread deployment of generative AI have taken everyone by surprise. On top of existing challenges within the industry, artists must take greater steps to protect themselves from fraud. Earlier this year, over 200 musicians signed an open letter from the Artist Rights Alliance urging Big Tech companies, digital service providers (DSPs, like Apple Music and Spotify), and AI developers to “cease the use of artificial intelligence to infringe upon and devalue the rights of human artists.”

AI fraud still runs rampant and threatens musicians’ already precarious livelihoods. I spoke with Kevin Erickson of Future of Music Coalition about AI’s threats to artists and how streaming platforms, developers, and lawmakers can ensure a fruitful creative economy.

Given the challenges artists already face with streaming platforms, how will generative AI technologies widen the gap between an artist’s commercial popularity and their compensation?

Much of the public conversation about music and AI has fixated on celebrity impersonation, but that’s only a small part of the problem. There are a few distinct concerns coming from different parts of the musician community, and it can be helpful to disentangle them.

One concern is that streaming platforms might seek to license AI-generated music at discount rates (or create their own) and then use their platform power to direct listeners to that music to save on licensing fees. DSPs have already been doing this with human-made discount library music, so this is a reasonable concern, particularly for functional “playlist” music: tracks for sleep, working out, lo-fi beats for studying, and so on.

In the absence of transparency requirements, DSPs have the profound power to reorient people’s listening habits in ways that maximize their earnings, and the incentive to nudge people toward cheaper music is very real. A separate concern is AI-assisted fraud, as in the charges the DOJ recently filed against a musician named Michael Smith.

The music was entirely AI-generated, and the “listeners” were automated bots. But because of the way streaming royalties are set up to use play count as a proxy for value, Smith managed to siphon $10M away from artists who’d actually earned it. It wasn’t the streaming services getting ripped off. It was the artists, and there are fears that this is the tip of a massive iceberg of fraud.
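To make the mechanics concrete, below is a stylized sketch of the pro-rata royalty model most streaming services use, in which a fixed pool of subscription revenue is divided by share of total plays. The artist names, play counts, and pool size are all hypothetical, not any platform’s actual rates; the point is that because the pool is fixed, whatever bot plays capture comes directly out of real artists’ shares.

```python
# A stylized pro-rata royalty pool: each artist is paid in proportion to
# their share of total plays. Numbers are illustrative only.

def prorata_payouts(play_counts: dict[str, int], pool: float) -> dict[str, float]:
    """Split a fixed royalty pool by each artist's share of total plays."""
    total = sum(play_counts.values())
    return {artist: pool * plays / total for artist, plays in play_counts.items()}

honest = {"artist_a": 800_000, "artist_b": 200_000}
print(prorata_payouts(honest, pool=5_000.0))
# {'artist_a': 4000.0, 'artist_b': 1000.0}

# Add a fraudster bot-streaming AI-generated filler: the pool doesn't grow,
# so the bot's $1,000 comes straight out of the real artists' payouts.
with_bots = honest | {"bot_uploader": 250_000}
print(prorata_payouts(with_bots, pool=5_000.0))
# {'artist_a': 3200.0, 'artist_b': 800.0, 'bot_uploader': 1000.0}
```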

The platforms’ responses to problems caused by AI may themselves end up harming artists. Earlier this year, Spotify announced that, in an effort to discourage fraud and disincentivize the uploading of a glut of filler tracks, it would stop paying out royalties on tracks that receive fewer than 1,000 plays a year. The company figures each such track generates only a few dollars, so who’s going to miss it? But for artists and independent labels with deep catalogs, those dollars add up.

We need a music marketplace capable of reflecting and meaningfully remunerating diversity of practice, one that recognizes the value contributed by niches and music scenes operating at small scales. As it stands, the marketplace seems oriented toward maximizing the extraction of value, and AI seems to be accelerating that problem.

Finally, there is a concern that media companies that employ musicians and composers will seek to replace them with AI to save money on up-front creation and licensing costs. This is a potentially serious issue for film and TV music and for library music used in podcasts and advertising. To their credit, record labels don’t really appear interested in pursuing this at the moment: because recording expenses are usually recouped from artists’ earnings anyway, there’s not much savings to be had, and labels also have an incentive to protect the value of human-created art, since that’s what fills their back catalogs and what they know how to market. So both indie and major labels have tended to be broadly on the same side as most artists and composers in AI fights, despite other disagreements.

The music industry has adapted to new forms of media, from radio to peer-to-peer file sharing to streaming. New technology has allowed musicians to get even more creative and collaborate with artists across the world. How do you think the industry can best operate with new AI technologies to preserve artistic integrity, creativity, and compensation?

The most important factor isn’t technology itself. To paraphrase Astra Taylor, most debates about specific technologies are really about power. We do need to get the details of specific technologies right so we can arrive at the appropriate public policy response. But first, we need to get the power analysis right.

That includes questions about who has a say in which technologies are adopted and which are left in the dustbin of history with the Zune and the 8-track. The lazy neoliberal view is to assume that new music technologies emerge and fade naturally in response to demand. I try not to be lazy or neoliberal, so I suggest asking: “Who does this actually serve?” “Who asked for this?” “Is this solving a real problem?” “Who was consulted?” And then: “How do we orient technological development so it focuses on the problems workers are experiencing?”

It’s stunning how companies will drop unfathomable sums of money on campaigns to convince people that their novel technology is good for artists and citizens but rarely bother to spend a little time and money asking creative workers if the idea sounds good before they build it. That’s a reflection of power. New technology can be empowering, but it’s helpful to draw a distinction between generative AI and other forms of AI.

A number of AI applications have been in use for years and would, in the past, simply have been called “machine learning.” Analytic AI powers relatively uncontroversial tools that are standard parts of the recording workflow: beat detection, pitch shifting, noise reduction, equalization, and the like. These tools can certainly be useful. For a mix engineer, it can be handy to use AI to separate a guitar and voice track recorded with a single mic so she can apply effects to the voice recording alone.
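As one concrete illustration, here is a minimal sketch of that kind of source separation using the open-source Spleeter library; the filenames are hypothetical, and Spleeter is just one of several tools that do this job.

```python
# A minimal source-separation sketch with the open-source Spleeter library
# (pip install spleeter). Filenames here are hypothetical.
from spleeter.separator import Separator

# The pretrained "2stems" model splits a mix into vocals and accompaniment,
# turning a single-mic voice-and-guitar take into two editable tracks.
separator = Separator("spleeter:2stems")
separator.separate_to_file("duo_take.wav", "separated/")
# Writes separated/duo_take/vocals.wav and separated/duo_take/accompaniment.wav;
# effects (EQ, reverb, etc.) can then be applied to the vocal stem alone.
```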

In contrast to analytic AI, generative AI raises a broader set of legal and ethical issues. There’s creative labor in the training data on the input side, and there’s the potential for a range of labor market impacts on the output side. Credit, consent, and compensation for anyone whose work is ingested into training datasets is a straightforward place to start, and there appears to be broad alignment around those three principles, which is as close to consensus as diverse musicians get about anything. You saw this with the thousands of artists speaking out against unlicensed training.

In addition to the public policy and legal debates, there are internal debates within the music community, which ultimately can be a healthy process by which the community defines its interests and protects itself. If artists decide they don’t want to use OpenAI’s Sora to make a music video because they understand how their peers regard that company, that isn’t a rejection of AI’s potential but a community development and enforcement of standards and norms. In doing so, we exercise some agency in creating the industry we want to be a part of. It’s one way we take our power back.

What concerns do you have for generative AI regarding copyright law? How can we work today to try to mitigate these concerns proactively?

There’s a popular view that copyright really only benefits large corporations. I think that view has mostly benefited large corporations by letting companies that violate copyright at a mass scale off the hook. AI debates have been a helpful corrective because they’ve reminded us that copyright, properly understood and properly enforced, exists to protect creative workers, and not just the famous ones.

Many past debates about copyright policy have centered on the wrong questions: Is copyright too expansive or too weak? Should we have less copyright or more? Really, the policy goal ought to be better copyright, aligned with its public purpose: ensuring that creators can benefit from what they make and, in turn, that the public benefits from creators being able to keep making work.

The good news with AI is that US copyright law is currently pretty good on this front: it only protects human creation. In my opinion, the widespread ingestion of works into training datasets in the absence of a license is straightforward infringement. A number of lawsuits in progress may end up testing this issue (the RIAA is suing Suno and Udio, for example), but even many of us who are generally skeptical of major-label lawsuits think this one is clear-cut.

We can’t overlook that AI firms, their investors, and the armies of lobbyists they fund are aggressively arguing in various forums that they don’t need permission to ingest existing works in bulk, twisting the important principle of fair use beyond recognition. We have to hold the line: we just learned that the UK is considering expanding a text and data mining exemption to broadly allow AI ingestion, forcing creators to actively opt out if they don’t want their work included.

At the same time, licensed training and compliance with copyright law are not the only indicators of whether the use of AI tech is ethical. You can have a licensed training set and still have an unethical business model—one that’s not really about facilitating expression but about the extraction of resources from creative communities. 

Apart from copyright, there’s a principle called “right of publicity,” which bridges privacy rights and a right to protection from economic exploitation. This right gives individuals a degree of protection against the misuse of their voice, image, and likeness, which has special importance for vocalists. In the US, numerous states have some kind of protection around this, and Tennessee, Illinois, and California have so far updated those protections to address some of the emerging concerns associated with AI.

But there’s a need for a federal right of publicity to ensure that artists in every state have the same protections from AI-fueled exploitation. This is not a new problem, but it has been made more acute by what we’ve seen with “digital replicas.”

AI algorithms deliver targeted content to users by queuing songs they might like or generating playlists automatically. I’ve found some great artists through Spotify’s automatic queue. How can streaming services algorithmically recommend similar music to listeners in a more transparent and ethical way? How are artists affected by these recommendation systems?

Recommendation engines are not new. The Music Genome Project, the system behind Pandora’s discovery engine, dates back to 1999. It was called “machine learning,” and many DSPs have offered some kind of recommendation engine ever since. These engines can benefit artists who make particular kinds of functional music or tracks that perform well in them. DSPs argue that recommendation services help artists circumvent the gatekeepers of traditional print and broadcast media.

In the absence of clear rules, there are concerns about platforms steering audiences toward certain listening behaviors, behaviors that reinforce the platforms’ own gatekeeper status and their leverage in negotiations. Pandora quickly figured out it could save money in licensing discussions by cutting deals that make certain music more likely to surface in its algorithm. Now we have Spotify’s “Discovery Mode,” which boosts a track’s likelihood of appearing in feeds if the rightsholder agrees to a massive reduction in already low rates of pay.

Congressional leaders have criticized this practice as an unfair method of competition. It’s evidence of the monopsony problem that emerges when a handful of firms control so much of music streaming. TikTok is even less transparent and has gone a step further than Spotify by refusing to negotiate collectively with independent record labels. No one should feel bad for liking what comes up in their feed, though; sometimes you want to hear new stuff! Keeping money out of recommendation systems and respecting the complexity of human taste may be a path forward for services that want to do recommendations right.

One thing Bandcamp gets right is recommendations based on purchase information. On a particular release page, you can see other releases also purchased by people who bought that album. There are no paid recommendations anywhere on the platform.
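For the curious, here is a toy sketch of that co-purchase style of recommendation: count how often two releases share a buyer, then surface the most frequent companions. The album names and data are made up, and this is not Bandcamp’s actual implementation, just the general shape of the technique.

```python
# A toy "people who bought this also bought" recommender built from
# co-purchase counts (made-up data; not Bandcamp's real system).
from collections import Counter
from itertools import combinations

# Each basket is the set of releases one buyer owns.
purchases = [
    {"album_a", "album_b"},
    {"album_a", "album_b", "album_c"},
    {"album_b", "album_c"},
]

# Count how often each pair of releases shares a buyer.
co_counts = Counter(
    frozenset(pair)
    for basket in purchases
    for pair in combinations(sorted(basket), 2)
)

def also_bought(album: str, top_n: int = 5) -> list[str]:
    """Releases most often purchased alongside `album`, best first."""
    scores = Counter()
    for pair, n in co_counts.items():
        if album in pair:
            (other,) = pair - {album}
            scores[other] = n
    return [title for title, _ in scores.most_common(top_n)]

print(also_bought("album_a"))  # ['album_b', 'album_c']
```
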
I’m glad you mentioned transparency because, right now, listeners can’t really know whether something shows up in their feed because of an algorithmic choice, a human curatorial choice, or a financial deal. The FTC could adopt the same level of disclosure that the FCC currently mandates when payola or other sponsored content airs on broadcast radio: an announcement at the time of broadcast. The FTC could do this via a Section 5 rulemaking applied to all DSPs. Transparent labeling of any AI-generated content would be a helpful step toward a healthier market.

Is there anything else you want to share about how you and artists are affected by the rapid development and deployment of AI?

First, I should note the music community is stepping up to address climate and sustainability issues through initiatives like Music Declares Emergency. The environmental cost of large language models is an important consideration for many musicians.

Beyond that, it’s important to draw connections between the AI issues and other concerns about tech accountability. Some of the same organizations, like the Chamber of Progress and NetChoice, defend AI firms against even the most modest regulatory interventions and also defend monopolists like Google and Meta as they face antitrust and consumer protection scrutiny. They’ve consistently opposed efforts by musicians and independent venues to bring some sensible reforms to concert ticketing at federal and state levels.

Some of the more creative proposed policy responses connect to other attempts to remedy the power imbalances that have allowed large tech firms to make the rules. One example is the Protect Working Musicians Act, which would allow independent musicians who own some of their own catalogs to collectively bargain with dominant streaming and AI firms. It’s built around an antitrust exemption, a policy tool with serious potential.

Given the power disparities and political challenges we’re all facing, communication and collaboration between tech critics, anti-monopolists, and creative workers is a central goal. We also must be clear-eyed about the barriers to that collaboration. As critical perspectives on Big Tech, including AI, have moved to the center of the regulatory conversation, some of the loudest voices that opposed tech regulation for years are belatedly rebranding themselves as tech critics, purporting to share the interests of workers while ultimately opposing the interventions that creative workers have coalesced around. We’ve seen this on platform regulation, on ticketing reform, and now on AI ingestion. The solution, ideally, is to listen to the voices of the many workers with diverse business models.
