The Take It Down Act, nearing final approval after a strong House vote, aims to combat nonconsensual intimate images (NCII), including AI-generated deepfakes. President Trump is expected to sign the bill, which mandates that social media platforms remove flagged NCII content within 48 hours. While proponents champion it as crucial for online safety and protecting individuals, critics fear potential abuse and unforeseen repercussions.
The bill criminalizes publishing NCII, whether real or AI-generated. Critics' chief concern is that the takedown process could be weaponized to remove content that someone simply dislikes. The Cyber Civil Rights Initiative (CCRI) welcomes the criminalization but worries that the takedown provision is susceptible to misuse and could ultimately harm the victims it is meant to protect.
CCRI fears the FTC will enforce the law selectively, favoring platforms aligned with the administration. The likely result is uneven compliance: some platforms ignoring takedown reports while others are overwhelmed by false ones. The Electronic Frontier Foundation (EFF) echoes these concerns, warning that the 48-hour deadline could force smaller platforms to rely on flawed automated filters, censoring legitimate content in the process.
The EFF also stresses that encrypted services are not exempt: compliance could force providers to abandon encryption, transforming private conversations into surveilled spaces. Despite these concerns, the Take It Down Act enjoys broad support, including backing from First Lady Melania Trump, advocacy groups, and tech companies such as Google and Snap. Internet Works praised the bill's passage as empowering victims.
Rep. Thomas Massie (R-KY), a dissenting voice, called the bill a "slippery slope, ripe for abuse, with unintended consequences." The Take It Down Act thus presents a complex dilemma: balancing protection from NCII harms against the risks of censorship, privacy violations, and selective enforcement.