Take It Down Act marks a key 'inflection point' in US internet regulation, expert says

On Monday, President Donald Trump signed the Take It Down Act into law, making it a federal crime to publish non-consensual intimate imagery, including AI-generated deepfakes.
John Wihbey, associate professor of media innovation and technology at Northeastern University, calls the law a welcome step in addressing online harm, but warns that poor implementation could lead to unintended consequences.
"The reporting system for triggering takedowns is in some ways the whole ball game," Wihbey says. "You have to construct a system where it's fair to victims and doesn't re-traumatize them. It should be relatively straightforward and easy to use, but is not susceptible to being gamified or weaponized or have unintended consequences. That's really tricky."
The law requires social media companies to remove flagged material within 48 hours of notification by victims. Individuals convicted of intentionally posting—or threatening to post—this kind of content could face prison time, fines or both.
Critics argue the law is too broad and could violate free speech protections. Some fear that legitimate content might be wrongly removed.
"I generally support experimentation with addressing online harms through public policy and legislation," Wihbey says, "and it's probably a good thing to at least initiate the process and ideally then iterate if it turns out there are unintended downstream consequences that outweigh some of the benefits of keeping people safe from non-consensual intimate imagery."
While several states have enacted similar laws, this is the first federal legislation of its kind. The bill was co-sponsored by Sens. Ted Cruz, R-Texas, and Amy Klobuchar, D-Minn., and received backing from first lady Melania Trump.
Wihbey says the legislation reflects a broader shift in how the United States relates to the internet. He points to Section 230 of the 1996 Communications Decency Act—a key law protecting platforms from liability for user-generated content—as an example of regulation that may need to evolve.
"We need to experiment as a society with new rules that don't necessarily overturn Section 230, but layer new kinds of approaches on top of it," he says.
This argument is central to Wihbey's forthcoming book, "Governing Babel: The Debate Over Social Media Platforms and Free Speech—and What Comes Next."
He notes that regulating older technologies such as radio and television took decades, a timeline worth keeping in mind as lawmakers begin addressing harms created by the modern internet.
"We're maybe only 20 years into the social web, but really only 10 years into the algorithmic social web," he says. "We should start to experiment in new ways, and this bill seems like a well-intentioned experiment that could really help people who are victims and prevent further victimization. If it turns out to be an idea in need of modification, well, that's what we have a representative democracy for, which is to iterate and modify rules and pass new laws."
Provided by Northeastern University
This story is republished courtesy of Northeastern Global News.