Meta sued for allegedly using copyrighted work to train AI
The views expressed by contributors are their own and not the view of The Hill
by Lindsey Granger, opinion contributor - 05/06/26 2:26 PM ET
Yesterday, a major legal battle put AI right in the center of a debate that goes far beyond technology — it really goes to the heart of creativity, ownership, and what it means to be an author in 2026.
Five major publishers, along with bestselling novelist Scott Turow, have filed a class-action lawsuit against Meta and its founder, Mark Zuckerberg. The accusation is serious: that Meta used millions of copyrighted books and journal articles to train its AI model, Llama, without permission, and in some cases allegedly pulled from pirated sites like LibGen and Sci-Hub.
The complaint even claims Zuckerberg himself “personally authorized and actively encouraged the infringement.”
Meta has not yet publicly responded to these allegations, but the implications here are already reverberating across the publishing world.
At the center of this case is a question I keep coming back to: what happens when the work of writers, the people who spend years building stories, ideas and entire worlds, is used to teach machines how to recreate them in seconds?
Turow didn’t hold back in his response, calling it “shameless, damaging and unjust behavior,” saying he finds it “distressing and infuriating” that one of the richest corporations in the world would allegedly use pirated versions of his work to build a system that can then produce “competing material, including works supposedly in my style.”
That’s a genuine fear for so many people — not just copying, but replacement.
The lawsuit argues that AI-generated books are already flooding marketplaces like Amazon, potentially pushing out human authors. And even more unsettling, these systems can summarize entire novels so well that, in theory, readers might not need to buy the original at all.
One example in the filing described how Llama was prompted to mimic a travel writer’s voice and produced what the complaint called a “convincing rendition” of that style. When asked how it did it, the system essentially admitted it had been trained on vast amounts of text, including that author’s published work.
Listen, AI is not going away. It can be powerful, efficient, even transformative. It has been for my business. It can open doors for creativity, for access, for new kinds of storytelling we haven’t even imagined yet.
But the argument from authors and publishers is that none of that should come at the expense of consent.
And that’s really the line being drawn right now: innovation versus ownership.
Should companies be able to move fast and build powerful systems using whatever data they can access? Or does that speed come with responsibilities: to compensate and protect the people whose work built the foundation in the first place, and to accept regulation along the way?
This isn’t just about Meta. It’s part of a growing wave of lawsuits against AI companies like OpenAI, Anthropic, Google and others. In fact, Anthropic recently agreed to a $1.5 billion settlement with writers over similar claims.
And Congress is now being pulled into the conversation, with growing pressure to define what fair use looks like in the age of artificial intelligence, and whether human creativity is something that can be trained on without limits.
My thought is simple.
AI can absolutely be a great thing. It can create jobs, expand access and reshape industries in ways we’re just beginning to understand. But it also has the power to dismantle livelihoods if it moves without guardrails. And I think the excuse that we need to “move fast to compete” cannot come at the expense of the people who have spent their lives creating the very content these systems are built on.
We need regulation and oversight from Congress so that the theft of intellectual property and the erosion of creative work don't become the norm in the name of innovation. At its core, this is about respect for creative work and deciding, collectively, where we draw the line between inspiration and appropriation in the age of AI.
Lindsey Granger is a NewsNation contributor and co-host of The Hill’s commentary show “Rising.” This column is an edited transcription of her on-air commentary.
Copyright 2026 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.