America is in a global race to lead in artificial intelligence, and the stakes couldn’t be higher.
President Trump’s recent AI Action Plan gave American competitiveness a welcome boost. But one major threat remains: legal challenges to how AI models are trained. At the center of these lawsuits is the question of how we treat available data—the books, articles, websites, and other materials that AI uses not to copy, but to learn.
Policymakers on both sides of the aisle understand this. As the President recently put it: “You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for. When a person reads a book or an article, you’ve gained great knowledge. That does not mean that you’re violating copyright laws or have to make deals with every content provider.”
We couldn’t agree more. The same legal principles that led to the Supreme Court’s Sony Betamax decision more than 40 years ago, which allowed the sale of video recorders, remain just as true in the AI era.
Training AI models on data is not about copying content; it’s about building something new. Just as students learn by reading a wide range of materials, AI learns from data to generate insight, not imitation. This kind of transformative learning is foundational to how these systems work—and to how America has always led in innovation.
The problem? A wave of opportunistic lawsuits is now threatening to strangle innovation in its crib. These legal challenges don’t just target the biggest tech firms. They threaten startups, researchers, and every entrepreneur trying to build the next big breakthrough. If we let the past derail the future, we won’t just slow progress—we’ll forfeit American leadership in the most consequential technology of our time.
At CES, we have a front-row seat to the power of AI. We see it revolutionizing healthcare, transportation, manufacturing, agriculture, and more. AI is helping doctors diagnose diseases earlier, enabling farmers to grow more sustainably, and giving small businesses the tools to compete globally. These breakthroughs don’t just appear. They’re built on access to data, strong R&D, and a legal framework that supports innovation.
That framework is now at risk. If we lock down data used to train AI, we’ll lock out competition. Only large, well-resourced companies will be able to navigate the maze of permissions and licensing. Startups and independent creators will be out of the game. And with them, we’ll lose the diversity, ingenuity, and dynamism that define America’s innovation ecosystem.
Our courts and policymakers must protect the legal foundation that has fueled every great American leap forward. AI training is a transformative use of data—rooted in learning, not copying. U.S. law has long recognized the value of this kind of use. It must continue to do so.
As part of this effort, CTA recently joined several technology trade associations in filing a Ninth Circuit amicus brief against class certification in an AI case. Class certification in copyright suits involving fair use is generally disfavored because such cases are fact-specific and vary by circumstance. Where an innovative technology is involved and the law is unsettled, the threat of mandatory statutory damages (even in the absence of real harm) puts unbearable pressure on innovators to settle rather than have their rights determined in court: “Pressures imposed by erroneous class certification can distort our legal system [and] those distortions are magnified in cases alleging copyright infringement because statutory damages can reach $150,000 per work with no showing of actual harm.”
Of course, creators and rights holders should be part of this conversation. America thrives when we balance innovation with protection. But we must not let legacy industries rewrite the rules in ways that favor the status quo and stifle progress.
AI isn’t just about faster machines or better software. It’s about solving problems that impact real people every day. With one-third of Americans living in a healthcare desert, AI tools trained on data could be the difference between diagnosis and delay, between treatment and tragedy. And that’s just one example.
America leads when we protect openness, empower the next generation, and embrace the future with confidence. We must act now to preserve the conditions that make that leadership possible.
AI is here. The race is on. Winning isn’t optional. Let’s keep our edge.