Jess Miers's BestNetTech Profile

Jess Miers

About Jess Miers

Posted on BestNetTech - 28 July 2025 @ 10:48am

Oh My God, TAKE IT DOWN Kills Parody

Donald Trump is a notorious media bully. He uses lawsuits, executive power, and political pressure to punish critics and bend institutions to his will. Disney, Meta, and Paramount have all paid out multi-million-dollar settlements over content disputes. CBS News leaders resigned. Colbert’s show was canceled. The AP was barred from the White House. Even Rupert Murdoch is now being sued over unflattering coverage. Trump targets law firms, universities, and online services like TikTok the moment they stop serving his interests.

Despite this well-established pattern of silencing dissent, lawmakers handed him the Take It Down Act: a sweeping censorship weapon he has openly vowed to wield against his critics.

Recently, South Park fired back in its first new episode in years, with a bold, refreshing, and unapologetically crude parody of the Christian “He Gets Us” campaign—featuring a deepfaked, fully nude Donald Trump wandering the desert as a solemn narrator asks, “When things heat up, who will deliver us from temptation?” The “public service announcement” ends with a glowing political endorsement from Trump’s wide-eyed, “teeny tiny penis.”

It’s brilliant satire that cuts right to the heart of American political delusion. It’s also potentially criminal under the law Trump championed. Welcome to the new reality: mocking the President with AI could now land you in prison.


The Take It Down Act criminalizes the non-consensual publication of “intimate visual depictions,” including depictions generated using AI. Intimate visual depictions include images showing uncovered genitals. To qualify, the depiction must appear, in the eyes of a reasonable person, indistinguishable from a real image. The identifiable individual must not have provided consent, nor voluntarily exposed the depicted material in prior public or commercial settings. The publisher of the material must have either intended to cause harm or actually caused harm to the depicted individual. The law leaves some room to assess whether the depiction is a matter of public concern, but there are no express carveouts for lawful speech such as commentary, satire, or parody. Violations of this provision can incur both financial penalties and jail time.

The broadcast and streaming versions of the South Park PSA are likely out of scope. Take It Down applies only to publishing through the use of an interactive computer service (as defined under Section 230). However, South Park also uploaded the PSA to YouTube and the site HeTrumpedUs.com, which could be a problem. The depiction includes a full, graphic display of Trump’s “teeny tiny” member. And it’s probably safe to assume that neither Trump, nor anyone on his behalf, consented to it. 

From the live broadcast, it might be unclear at first whether the depiction of Donald Trump was real or AI generated. On the one hand, it’s absolutely a line South Park would cross, and their fans know that. On the other hand, we might wonder whether Trey Parker and Matt Stone, the creators of South Park, would so willingly adopt AI given the controversy surrounding its use in the entertainment industry (though we know now that they are quite enthusiastic about it). Upon first impression, it’s possible they merely spliced a real video of Trump walking. South Park does this all the time—taking real images of public figures and effectively pasting them onto cartoon bodies. The way Trump swings his arms, and his gait, seemed typical of the President. It’s only when he starts stripping off his clothes that the use of AI becomes apparent. Even then, there are legitimate videos and images of Trump in his most natural form circulating online. We’ll spare you that evidence, though this might also leave open the question of whether the PSA depicts any materials that Trump himself has voluntarily exposed in a public or commercial setting.

The point is one could plausibly argue that, in the eyes of a reasonable person, the depiction of Trump in the PSA is indistinguishable from reality. Sure, the South Park of it all might tip off viewers that the content is likely fake. South Park is notorious for precisely this type of raunchy, over-the-top political satire. But outside that context, it depends. For instance, if you search for nude images of Trump (which we don’t recommend at all), you will find out-of-context screenshots from the PSA of nude Trump. Plus, the creators recruited “the best deepfake artists in the world” for this project. Does that matter in terms of making the content indistinguishable? It’s one of many open questions for Trump-friendly prosecutors: in the eyes of a reasonable person, is the depiction indistinguishable? Maybe.

This also leaves open whether the online services hosting and spreading the video and screenshots of AI nude Trump could be on the hook. The Take It Down Act imposes civil penalties on online services that fail to remove intimate deepfake content upon request. The White House could send takedown requests to social media companies that currently make the content available. This could potentially erase the content from existence, especially if the episode is ever banned from streaming services. As fans might recall, television and streaming companies banned South Park episodes 200 and 201 for merely depicting the Prophet Muhammad.

Ultimately, whether the PSA violates the law will come down to whether it’s a matter of public interest. Most criticism of public figures, especially elected officials, is a matter of public concern. The mere fact that the White House weighed in on the episode suggests its importance. The case is especially strong when you consider the underlying messages Parker and Stone are trying to convey about the Trump Administration to the public: a cutting commentary on how the MAGA movement holds Trump out as their god-king in hopes he one day leads them to eternal salvation (i.e., a promised land devoid of minorities and woke-ness), illustrating the evaporating line between church and state. More obviously, it’s a riff on The Emperor’s New Clothes—the tale of a vain ruler duped into believing he’s draped in invisible finery while parading around naked. The fable endures as a parable of mass delusion, where truth is swallowed for fear of offending power. And that’s precisely the dynamic at play today, as media empires continue to buckle under Trump’s relentless bullying, pretending not to see what’s right in front of them. In that context, the public undeniably has a compelling interest in knowing that the President is lying to them.

It’s especially significant that South Park was the one to take this shot. The show has long been known for skewering both the left and the right, cultivating an audience that prides itself on rejecting political correctness and ideological rigidity. That ethos even inspired the term “South Park Republican”—a loosely defined label for those who mock partisanship from the sidelines. The show’s core demographic—predominantly men aged 18 to 49—overlaps meaningfully with the audiences of figures like Joe Rogan and, to a lesser extent, Andrew Tate. So, unlike overtly partisan media, South Park holds a rare cultural position in that it can potentially speak directly to groups adjacent to the MAGA movement without preaching, pandering, or being immediately dismissed. That gives its political commentary a unique kind of weight, with real potential to move the needle in shaping public opinion and, by extension, the direction of the country’s leadership.

While the broader message is undeniably important, some might ask whether the commentary on the size of Trump’s penis is really a matter of public interest. Could the creators have made their point without the deepfaked, talking genitalia? From a First Amendment perspective, it shouldn’t matter. The depiction—however crude—is unlikely to fall into any of the narrow exceptions to protected speech, such as obscenity. And under the logic of the Take It Down Act, Trump’s endowment might well qualify as a matter of public concern. After all, he made it one. During the 2016 campaign, he famously implied that his penis was larger than Marco Rubio’s, citing their respective hand sizes as evidence. Once a candidate brings his genitals into the public discourse, this kind of satire seems obviously fair game.

Realistically, Parker and Stone will be fine if the DOJ comes knocking. They’ve got the weight of mainstream media credibility, an army of Paramount lawyers, and—at least for now—that pesky First Amendment the Trump administration hasn’t quite managed to extinguish. But their case also serves as a useful illustration of what happens when AI regulations, especially those targeting deepfakes, are crafted without any real regard for the lawful, valuable, and politically vital speech that will inevitably get caught in the dragnet.


This is especially troubling given the increasingly precarious status of First Amendment protections for AI-generated content. Recall that in the NetChoice cases, Justice Barrett floated the idea that certain uses of AI in publishing might fall outside the scope of the First Amendment. Not long after, a federal judge concluded that outputs from Character AI don’t qualify as protected speech. Legal scholars are arguing much the same.

It may seem absurd to suggest that South Park’s latest episode—a brazen, satirical, political public service announcement—might not count as protected expression. But under the emerging logic of AI speech exceptionalism, that outcome is far from unthinkable.

Which is dangerous. AI-generated speech is increasingly dismissed as unworthy of constitutional protection. As a result, laws like the Take It Down Act are sailing through Congress with little regard for the types of lawful, socially valuable, and politically consequential expression they risk sweeping away. As AI becomes ever more entangled in creative production, and the imaginary line between human and machine expression continues to blur, this blind spot becomes a powerful tool for censorship. If policymakers can’t ban the message, they may decide to ban the method—the use of AI—instead.

Hence, South Park also offers a timely reminder that deepfakes aren’t inherently exploitative. They can be powerful tools for criticism, commentary, and satire, particularly when aimed at public figures. That nuance is often lost in deepfake proposals. The No Fakes Act, for example, gestures toward protecting parody, commentary, and satire, but explicitly withdraws that protection if the content is sexual in nature. It is also notably silent about cases where that content targets public figures. The carveout, then, would do nothing to shield South Park.

Plus, sexual satire has long been a potent vehicle for confronting power. Consider Borat—one of the most talked-about films of the early 2000s. Its infamous nude wrestling scene was grotesque, jarring, and undeniably effective. It sparked debate, shattered taboos, and forced audiences to examine their own cultural assumptions. The provocation was the point.

South Park belongs to that same lineage. Its creators have made a career of using shock to expose hypocrisy. They understand that good satire isn’t supposed to comfort but to unsettle, provoke, and push people to reflect. Our elected officials may not always appreciate that. But that’s why we have the First Amendment. 

Perhaps most troubling is the emergence of a two-tiered system for political satire. South Park and Paramount can afford to take this risk (and a big one at that). But what about an anonymous Redditor using AI? Can the average person realistically challenge the king—especially when jail time is on the table?

If everyday creators are too afraid to speak, and the few with power keep backing down—Paramount included—then who’s left to confront authority? Who will be left to say the unsayable?

The Trump Administration, and those who follow, will always pose the gravest threat to speech and democracy. South Park dared to say it out loud. But in doing so, they revealed something deeper: that the fight over AI-generated content isn’t just about technology. It’s about power. It’s about who gets to speak, and who gets silenced.

AI is the next great battlefront for free expression. Like the early Internet, it is messy, disruptive, and often uncomfortable. But that’s exactly why it matters. And that’s exactly why it must be protected. Because if we allow fear, moral panic, or political convenience to strip AI-generated speech of First Amendment protection, then we’ve handed censors the easiest tool they’ve ever had.

And when that happens, it won’t just be the machines that go quiet. It’ll be us.

Jess Miers is an Assistant Professor of Law at the University of Akron School of Law. Kerry Smith is a rising second-year law student at the University of Akron School of Law.

Posted on BestNetTech - 17 June 2025 @ 03:41pm

Yes, The FTC Wants You To Think The Internet Is The Enemy To The Great American Family

This is a combo piece, with the first half written by law student Elizabeth Grossman, giving her take on the recent FTC moral panic about the Internet, and the second half offering additional commentary and notes from her professor, Jess Miers.

The FTC is fanning the flames of a moral panic. On June 4, 2025, the Commission held a workshop called The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families. I attended virtually from the second panel until the end of the day. Panelists discussed how the FTC could “help” parents, age verification as the “future,” and “what can be done outside of Washington DC.” But the workshop’s true goal was to reduce the Internet to only content approved by the Christian Right, regardless of the Constitution—or the citizens of the United States.

Claim #1: The FTC Should Prevent Minors From Using App Stores and Support Age Verification Laws

FTC panelists argued that because minors lack the legal capacity to contract, app stores must obtain parental consent before allowing them to create accounts or access services. That, in turn, requires age verification to determine who is eligible. This contractual framing isn’t new—but it attempts to sidestep a well-established constitutional concern: that mandatory age verification can burden access to lawful speech. In Brown v. Entertainment Merchants Association, the Supreme Court reaffirmed minors’ rights to access protected content, while Reno v. ACLU struck down ID requirements that chilled adult access to speech. Today, state-level attempts to mandate age verification across the Internet have repeatedly failed on First Amendment grounds.

But by recasting the issue as a matter of contract formation rather than speech, proponents seek to sidestep those constitutional questions. This is the same argument at the heart of Free Speech Coalition v. Paxton, a case the FTC appears to be watching closely. FTC staff repeatedly described a ruling in favor of Texas as a “good ruling,” while suggesting a decision siding with the Free Speech Coalition would run “against” the agency’s interests. The case challenges Texas’ H.B. 1181, which mandates age verification for adult content sites.

The FTC now insists that age verification isn’t about restricting access to content, but about ensuring platforms only contract with legal adults. But this rationale collapses under scrutiny. Minors can enter into contracts—the legal question is whether and when they can disaffirm them. The broader fallacy about minors’ contractual incapacity aside, courts have repeatedly rejected similar logic. Most recently, NetChoice v. Yost reaffirmed that age verification mandates can still violate the First Amendment, no matter how creatively they’re framed. In other words, there is no contract law exception to the First Amendment.

Claim #2: Chatbots Are Dangerous To Minors

The panel’s concerns over minors using chatbots to access adult content felt like a reboot of the violent video game panic. Jake Denton, Chief Technology Officer of the FTC, delivered an unsubstantiated tirade about an Elsa-themed chatbot allegedly engaging in sexual conversations with children, but offered no evidence to support the claim. In practice, inappropriate outputs from chatbots like those on Character.AI generally occur only when users—minors or adults—intentionally steer the conversation in that direction. Even then, the platform enforces clear usage policies and deploys guardrails to keep bots within fictional contexts and prevent unintended interactions.

Yes, teens will test boundaries, as they always have, but that doesn’t eliminate their constitutional rights. As the Supreme Court held in Brown v. Entertainment Merchants Association, minors have a protected right to access legal expressive content. Then, it was video games. Today, it’s chatbots. 

FTC Commissioner Melissa Holyoak adopted a more cautious tone, suggesting further study before regulation. But even then, the agency failed to offer meaningful evidence that chatbots pose widespread or novel harm to justify sweeping intervention.

Claim #3: Pornography Is Not Protected Speech

Several panelists called for pornography to be stripped of First Amendment protection and for online pornography providers to be denied Section 230 immunity. Joseph Kohm, of Family Policy Alliance, in particular, delivered a barrage of inflammatory claims, including: “No one can tell me with any seriousness that the Founders had pornography in mind […] those cases were wrongly decided. We can chip away […] it is harmful.” He added that “right-minded people have been looking for pushback against the influence of technology and pornography,” and went so far as to accuse unnamed “elites” of wanting children to access pornography, without offering a shred of evidence.

Of course, pornography predates the Constitution, and the Founders drafted the First Amendment to forbid the government from regulating speech, not just the speech it finds moral or comfortable. Courts have consistently held that pornography, including online adult content, is protected expression under the First Amendment. Whether panelists find that inconvenient or not, it is not the FTC’s role to re-litigate settled constitutional precedent, much less redraw the boundaries of our most fundamental rights.

During the final panel, Dr. Mehan said that pornography “is nothing to do with the glorious right of speech and we have to get the slowest of us, i.e. judges to see it as well.” He succeeds in disrespecting a profession he is not a part of and misunderstanding the law in one fell swoop. He also said “boys are lustful” because of pornography and “girls are vain” because of social media. Blatant misogyny aside, it’s absurd to blame pornography and social media for “lust” and “vanity.” After all, Shakespeare was writing about both long before XXX videos and Instagram. And even if it weren’t absurd, teenage lust is not a problem for the government to solve.

Panelist Terry Schilling from the American Principles Project—known for his vehemently anti-LGBT positions—called for stripping Section 230 protections from pornography sites that fail to implement age verification. As discussed, the proposal not only contradicts longstanding First Amendment precedent but also reveals a fundamental misunderstanding of what Section 230 does and whom it protects.

Claim #4: The Internet Is Bad For Minors

FTC Commissioner Mark Meador compared Big Tech to Big Tobacco and said that letting children on the Internet is like dropping children off in the red light district. “This is not what Congress envisioned,” he said, “when enacting Section 230.” Commissioner Melissa Holyoak similarly blamed social media for the rise in depression and anxiety diagnoses in minors. Yet, as numerous studies on social media and mental health have consistently demonstrated, this rise stems from a complex mix of factors—not social media.

Bizarrely, Dr. Mehan claimed that “Powerpoints are ruining the humanities.” And he compared online or text communication to home invasion: if his daughter were talking on the phone to a boy at 11 o’clock at night, he said, that boy would be invading his home.

This alarmist narrative ignores both the many benefits of Internet access for minors and the real harms of cutting them off. For young people, especially LGBTQ youth in unsupportive environments or those with niche interests, online spaces can be essential sources of community, affirmation, and safety. Just as importantly, not all parents share the same values or concerns as the government (or Dr. Mehan). It is the role of parents, not the government, to decide when and how their children engage with the Internet.

In the same vein, the Court in NetChoice v. Uthmeier rejected the idea that minors are just “mere people-in-waiting,” affirming their full participation in democracy as “citizens-in-training.” The ruling makes clear that social media access is a constitutional right, and attempts to strip minors of First Amendment protections are nothing more than censorship disguised as “safety.”

Conclusion

The rhetoric at this event mirrored the early pages of Project 2025, pushing for the outright criminalization of pornography and a fundamental rewrite of Section 230. Speakers wrapped their agenda in the familiar slogan of “protecting the kids,” bringing up big right-wing talking points like transgender youth in sports and harping on good old family values—all while advocating for sweeping government control over the Internet.

This movement is not about safety. It is about power. It seeks to dictate who can speak, what information is accessible, and whose identities are deemed acceptable online. The push for broad government oversight and censorship undercuts constitutional protections not just for adults, but for minors seeking autonomy in digital spaces. These policies could strip LGBTQ youth in restrictive households of the only communities where they feel safe, understood, and free to exist as themselves.

This campaign is insidious. If successful, it won’t just reshape the Internet. It will undermine free speech, strip digital anonymity and force every American to comply with a singular, state-approved version of “family values.”

The First Amendment exists to prevent exactly this kind of authoritarian overreach. The FTC should remember that.

Elizabeth Grossman is a first-year law student at the University of Akron School of Law in the Intellectual Property program, with a goal of working in tech policy.

Prof. Jess Miers’ Comments

Elizabeth’s summary makes it painfully clear: this wasn’t a serious workshop run by credible experts in technology law or policy. The title alone, “How Big Tech Firms Exploit Children and Hurt Families,” telegraphed the FTC’s predetermined stance and signaled a lack of interest in genuine academic inquiry. More tellingly, the invocation of “families” serves as a dog whistle, gesturing toward the narrow, heteronormative ideals typically championed by the religious Right: white, patriarchal, Christian, and straight. The FTC may not say the quiet part out loud, but it doesn’t have to.

Worse still, most of the invited speakers weren’t experts in the topics they were pontificating on. At best, they’re activists. At worst, they’re ideologues—people with deeply partisan agendas who have no business advising a federal agency, let alone shaping national tech policy.

Just a few additional observations from me.

Chair Ferguson opened by claiming the Internet was a “fundamentally different place” 25 years ago, reminiscing about AOL Instant Messenger, Myspace Tom, and using a family computer his parents could monitor. The implication: the Internet was safer back then, and parents had more control. As someone who also grew up in that era, I can’t relate.

I, too, had a family computer in the living room and tech-savvy parents. It didn’t stop me from stumbling into adult AOL chatrooms, graphic porn, or violent videos, often unintentionally. I remember the pings of AIM just as vividly as the cyberbullying on Myspace and anonymous cruelty on Formspring. Parental controls were flimsy, easy to bypass, and rarely effective. My parents tried, but the tools of the time simply weren’t up to the task. The battle over my Internet use was constant, and my experience was hardly unique.

Still, even then, the Internet offered real value, especially for a queer kid who moved often and struggled to make “IRL” friends. But it also forced me to grow up fast in ways today’s youth are better shielded from. Parents now have far more effective tools to manage what their kids see and who they interact with. And online services have a robust toolbox for handling harmful content, not just because advertisers demand it, but thanks to Section 230, a uniquely forward-thinking law that encourages cleanup efforts. It built safety into the system before “trust and safety” became a buzzword. Contrary to Mark Meador’s baseless claims, that result was precisely its authors’ intent. 

A more serious conversation would focus on what we’ve learned and how the FTC can build on that progress to support a safer Internet for everyone, rather than undermining it. 

That aside, what baffles me most about these “protect the kids” conversations, which almost always turn out to be about restricting adults’ access to disfavored content, is how the supposed solution is more surveillance of children. The very services the FTC loves to criticize are being told to collect more sensitive information about minors—biometrics, ID verification, detailed behavioral tracking—to keep them “safe.” But as Eric Goldman and many other scholars who were notably absent from the workshop have extensively documented, there is no current method of age verification that doesn’t come at the expense of privacy, security, and anonymity for both youth and adults.

A discussion that ignores these documented harms, that fails to engage with the actual expert consensus around digital safety and privacy, is not a serious discussion about protecting kids. 

Which is why I find it especially troubling that groups positioning themselves as privacy champions are treating this workshop as credible. In particular, IAPP’s suggestion that the FTC laid the groundwork for “improving” youth safety online is deeply disappointing. Even setting aside the numerous privacy issues associated with age verification, does the IAPP really believe that a digital ecosystem shaped by the ideological goals of these panelists will be an improvement for kids, especially those most in need of support? For queer youth, for kids in intolerant households, for those seeking information about reproductive health or gender-affirming care? 

This workshop made the FTC’s agenda unmistakable. They’re not pursuing a safer Internet for kids. As Elizabeth said, the FTC is pushing a Christian nationalist vision of the web, built on censorship and surveillance, with children as the excuse and the collateral. 

Just as the playbook commands. 

Jess Miers is an Assistant Professor of Law at the University of Akron School of Law.

Posted on BestNetTech - 5 June 2025 @ 01:00pm

A Takedown Of The Take It Down Act

This is a cross post from Prof. Eric Goldman’s blog, mostly written by Prof. Jess Miers, with additional commentary at the end from Eric.

Two things can be true: Non-consensual intimate imagery (NCII) is a serious and gendered harm. And, the “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act,” a/k/a the TAKE IT DOWN Act, is a weapon of mass censorship.

Background

In October 2023, two high school students became the victims of AI-generated NCII. Classmates had used “nudify” tools to create fake explicit images using public photos pulled from their social media profiles. The incident sparked outrage, culminating in a hearing last June where the students’ families called for federal action.

Congress responded with the TAKE IT DOWN Act, introduced by Senator Ted Cruz and quickly co-sponsored by a bipartisan group of lawmakers. On its face, the law targets non-consensual intimate imagery, including synthetic content. In practice, it creates a sweeping speech-removal regime with few safeguards.

Keep in mind, the law was passed under an administration that has shown little regard for civil liberties or dissenting speech. It gives the government broad power to remove online content of which it disapproves and opens the door to selective enforcement. Trump made his intentions clear during his March State of the Union:

And I’m going to use that bill for myself too, if you don’t mind—because nobody gets treated worse than I do online.

Some interpreted this as a reference to a viral AI-generated video of Trump kissing Elon Musk’s feet—precisely the kind of political satire that could be subject to removal under the Act’s broad definitions.

The bill moved unusually fast compared to previous attempts at online speech regulation. It passed both chambers without a single amendment, despite raising serious First Amendment and due process concerns. Following the TikTok ban, it marks another example of Congress enacting sweeping online speech restrictions with minimal debate and virtually no public process.

Senator Booker briefly held up the bill in mid-2024, citing concerns about vague language and overbroad criminal penalties. After public backlash, including pressure from victims’ families, Senator Booker negotiated a few modest changes. The revised bill passed the Senate by unanimous consent in February 2025. The House advanced it in April, ignoring objections from civil liberties groups and skipping any real markup.

President Trump signed the TAKE IT DOWN Act into law in May. The signing ceremony made it seem even more like a thinly veiled threat toward online services that facilitate expression, rather than a legitimate effort to curb NCII. At the ceremony, First Lady Melania Trump remarked:

Artificial Intelligence and social media are the digital candy of the next generation—sweet, addictive, and engineered to have an impact on the cognitive development of our children. But unlike sugar, these new technologies can be weaponized, shape beliefs, and sadly, affect emotions and even be deadly.

And just recently, FTC Chair Andrew Ferguson—handpicked by Trump and openly aligned with his online censorship agenda—tweeted his enthusiasm about enforcing the TAKE IT DOWN Act in coordination with the Department of Homeland Security. Yes, the same agency that has been implicated in surveilling protestors and disappearing U.S. citizens off the streets during civil unrest. 

Statutory Analysis

Despite overwhelming (and shortsighted) support from the tech industry, the TAKE IT DOWN Act spells trouble for any online service that hosts third-party content. 

The law contains two main provisions: one criminalizing the creation, publication, and distribution of authentic, manipulated, and synthetic NCII, and another establishing a notice-and-takedown system for online services hosting NCII that extends to a potentially broader range of online content. 

Section 2: Criminal Prohibition on Intentional Disclosure of Nonconsensual Intimate Visual Depictions

Section 2 of the Act creates new federal criminal penalties for the publication of non-consensual “intimate visual depictions,” including both real (“authentic”) and technologically manipulated or AI-generated imagery (“digital forgeries”). These provisions are implemented via amendments to Section 223 of the Communications Act and took effect immediately upon enactment.

Depictions of Adults

The statute applies differently depending on whether the depiction involves an adult or a minor. With respect to depictions of adults, it is a federal crime to knowingly publish an intimate visual depiction via an interactive computer service (as defined under Section 230) if the following are met: (1) the image was created or obtained under circumstances where the subject had a reasonable expectation of privacy; (2) the content was not voluntarily exposed in a public or commercial setting; (3) the image is not of public concern; and (4) the publication either was intended to cause harm or actually caused harm (defined to include psychological, financial, or reputational injury).

The statute defines “intimate visual depictions” via 15 U.S.C. § 6851. The definition includes images showing uncovered genitals, pubic areas, anuses, or post-pubescent female nipples, as well as depictions involving the display or transfer of sexual fluids. Images taken in public may still qualify as “intimate” if the individual did not voluntarily expose themselves or did not consent to the sexual conduct depicted.
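For readers who find it easier to see the structure of that test laid out explicitly, below is a rough sketch of the adult-depiction elements in code. It is purely illustrative: the field and function names are ours, and it paraphrases the prose above rather than restating the statutory text.

```python
# Illustrative sketch only; invented names, not the statutory text.
from dataclasses import dataclass

@dataclass
class AdultDepictionFacts:
    knowingly_published_via_ics: bool        # published via an interactive computer service
    is_intimate_visual_depiction: bool       # per the 15 U.S.C. § 6851 definition
    reasonable_expectation_of_privacy: bool  # element (1)
    voluntarily_exposed_publicly: bool       # element (2) requires this to be False
    matter_of_public_concern: bool           # element (3) requires this to be False
    intended_or_caused_harm: bool            # element (4): psychological, financial, or reputational

def section_2_elements_met(facts: AdultDepictionFacts) -> bool:
    # Every element must be satisfied; the absence of any one defeats liability.
    return (facts.knowingly_published_via_ics
            and facts.is_intimate_visual_depiction
            and facts.reasonable_expectation_of_privacy
            and not facts.voluntarily_exposed_publicly
            and not facts.matter_of_public_concern
            and facts.intended_or_caused_harm)
```

The conjunctive structure is the point: as discussed below, nearly every one of these inputs turns on an undefined or contested judgment call.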

In theory, the statute exempts pornography that was consensually produced and distributed online. In practice, the scope of that exception is far from clear. One key requirement for triggering criminal liability in cases involving adults is that “what is depicted was not voluntarily exposed by the identifiable individual in a public or commercial setting.” The intent seems to be to exclude lawful adult content from the law’s reach.

But the language is ambiguous. The statute refers to what is depicted—potentially meaning the body parts or sexual activity shown—rather than to the image itself. Under this reading, anyone who has ever publicly or commercially shared intimate content could be categorically excluded from protection under the law, even if a particular image was created or distributed without their consent. That interpretation would effectively deny coverage to adult content creators and sex workers, the very individuals who are often most vulnerable to nonconsensual republishing and exploitation of their content.

Depictions of Children

With respect to depictions of minors, the TAKE IT DOWN Act criminalizes the distribution of any image showing uncovered genitals, pubic area, anus, or female-presenting nipple—or any depiction of sexual activity—if shared with the intent to abuse, humiliate, harass, degrade, or sexually gratify. 

Although the Act overlaps with existing federal child sexual abuse material (CSAM) statutes, it discards the constitutional boundaries that have kept those laws from being struck down as unconstitutional. Under 18 U.S.C. § 2256(8), criminal liability attaches only to depictions of “sexually explicit conduct,” a term courts have narrowly defined to include things like intercourse, masturbation, or lascivious exhibition of genitals. Mere nudity doesn’t typically qualify, at least not without contextual cues. Even then, prosecutors must work to show that the image crosses a clear, judicially established threshold.

TAKE IT DOWN skips the traditional safeguards that typically constrain speech-related criminal laws. It authorizes felony charges for publishing depictions of minors that include certain body parts if done with the intent to abuse, humiliate, harass, degrade, arouse, or sexually gratify. But these intent standards are left entirely undefined. A family bathtub photo shared with a mocking or off-color caption could be framed as intended to humiliate or, in the worst-case reading, arouse. A public beach photo of a teen, reposted with sarcastic commentary, might be interpreted as degrading. Of course, these edge cases should be shielded by traditional First Amendment defenses.

We’ve seen this before. Courts have repeatedly struck down or narrowed CSAM laws that overreach, particularly when they criminalize nudity or suggestive content that falls short of actual sexual conduct, such as family photos, journalism, documentary film, and educational content. 

TAKE IT DOWN also revives the vagueness issues that have plagued earlier efforts to curb child exploitation online. Terms like “harass,” “humiliate,” or “gratify” are inherently subjective and undefined, which invites arbitrary enforcement. In effect, the law punishes speakers based on perceived motive rather than the objective content itself.  

Yes, the goal of protecting minors is laudable. But noble intentions don’t save poorly drafted laws. Courts don’t look the other way when speech restrictions are vague or overbroad just because the policy behind them sounds good. If a statute invites constitutional failure, it doesn’t end up protecting anyone. In short, the TAKE IT DOWN Act replicates the very defects that have led courts to limit or strike down earlier child-protection laws. 

Digital Forgeries

The statute also criminalizes the publication of “digital forgeries” without the depicted person’s consent, which differs from the “reasonable expectation of privacy” element for authentic imagery. A digital forgery is defined as any intimate depiction created or altered using AI, software, or other technological means such that it is, in the eyes of a reasonable person, indistinguishable from an authentic image. This standard potentially sweeps in a wide range of synthetic and altered content, regardless of whether a viewer actually believed the image was real or whether the underlying components were independently lawful.

Compared to existing CSAM laws, the TAKE IT DOWN Act also uses a more flexible visual standard when it comes to “digital forgeries.” Under CSAM law, synthetic or computer-generated depictions are only criminalized if they are “indistinguishable from that of a real minor engaging in sexually explicit conduct.” That standard makes it difficult to prosecute deepfakes or AI nudes unless they are photorealistic and sexually explicit. But under TAKE IT DOWN, a digital forgery is covered if it “when viewed as a whole by a reasonable person, is indistinguishable from an authentic visual depiction of the individual.” The focus isn’t on whether the depiction looks like a real child in general, but whether it looks like a real, identifiable person. This makes the law far more likely to apply to a broader range of AI-generated depictions involving minors, even if the underlying content wouldn’t meet the CSAM threshold. As discussed in the implications section, this too invites First Amendment scrutiny. 

There are several exceptions. The statute does not apply to disclosures made as part of law enforcement or intelligence activity, nor to individuals acting reasonably and in good faith when sharing content for legitimate legal, medical, educational, or professional purposes. The law also exempts people sharing intimate content of themselves (as long as it contains nudity or is sexual in nature) and content already covered by federal CSAM laws.

Penalties include fines and up to two years’ imprisonment for adult-related violations, and up to three years for violations involving minors. Threats to publish such material can also trigger criminal liability.

Finally, the Act leaves unanswered whether online services could face criminal liability for failing to remove known instances of authentic or AI-generated NCII. Because Section 230 never applies to federal criminal prosecutions, intermediaries cannot rely on it as a defense against prosecution. If a service knowingly hosts unlawful material, including not just NCII itself, but threats to publish it, such as those made in private messages, the government may claim the service is “publishing” illegal content in violation of the statute.

The Supreme Court’s decision in Taamneh provides some insulation. It held that general awareness of harmful conduct on a service does not amount to the kind of specific knowledge required to establish aiding-and-abetting liability. But the TAKE IT DOWN Act complicates that picture. Once a service receives a takedown request for a particular image, it arguably acquires actual knowledge of illegal content. If the service fails to act within the Act’s 48-hour deadline, it’s not clear whether that inaction could form the basis for a criminal charge under the statute’s separate enforcement provisions.

As Eric discusses below, there’s also no clear answer to what happens when someone re-uploads content that had previously been removed (or even new violating content). Does prior notice of a particular individual’s bad acts create the kind of ongoing knowledge that turns continued hosting into criminal publication? That scenario falls into a legal gap narrower than Taamneh might account for, but the statute doesn’t clarify how courts should treat repeat violations.

Section 3: Notice and Removal of Nonconsensual Intimate Visual Depictions

Alongside its criminal provisions, the Act imposes new civil compliance obligations on online services that host user-generated content. Covered services must implement a notice-and-takedown process to remove intimate visual depictions (real or fake) within one year of the law’s enactment. The process must allow “identifiable individuals” or their authorized agents to request removal of non-consensual intimate images. Once a valid request is received, the service has 48 hours to remove the requested content. Failure to comply subjects the service to enforcement by the Federal Trade Commission under its unfair or deceptive practices authority. 

The law applies to any public-facing website, app, or online service that primarily hosts user-generated content—or, more vaguely, services that “publish, curate, host, or make available” non-consensual intimate imagery as part of their business. This presumably includes social media services, online pornography services, file-sharing tools, image boards, and arguably even private messaging apps. It likely includes search engines as well, and the “make available” standard could apply to user-supplied links to other sites. Notably, the law excludes Internet access providers, email services, and services where user-submitted content is “incidental” to the service’s primary function. This carveout appears designed to protect online retailers, streaming services like Netflix, and news media sites with comment sections. However, the ambiguity around what qualifies as “incidental” will likely push services operating in the gray zone toward over-removal or disabling functionality altogether. 

Generative AI tools likely fall within the scope of the law. If a system generates and displays intimate imagery, whether real or synthetic, at a user’s direction, it could trigger takedown obligations. However, the statute is silent on how these duties apply to services that don’t “host” content in the traditional sense. In theory, providers could remove specific outputs if stored, or even retrain the model to exclude certain images from its dataset. But this becomes far more complicated when the model has already “memorized” the data and internalized it into its parameters. As with many recent attempts to regulate AI, the hard operational questions—like how to unwind learned content—are left unanswered, effectively outsourced to developers to figure out later.

Though perhaps inspired by the structure of existing notice-and-takedown regimes, such as the DMCA’s copyright takedown framework, the implementation here veers sharply from existing content moderation norms. A “valid” TAKE IT DOWN request requires four components: a signature, a description of the content, a good faith statement of non-consent, and contact information. But that’s where the rigor ends.

There is no requirement to certify a takedown request under penalty of perjury, nor any legal consequence for impersonating someone or falsely claiming to act on their behalf. The online services, not the requester, bear the burden of verifying the identity of both the requester and the depicted individual, all within a 48-hour window. In practice, most services will have no realistic option other than to take the request at face value and remove the content, regardless of whether it’s actually intimate or non-consensual. This lack of verification opens the door to abuse, not just by individuals but by third-party services. There is already a cottage industry emerging around paid takedown services, where companies are hired to scrub the Internet of unwanted images by submitting removal requests on behalf of clients, whether authorized or not. This law will only bolster that industry. 
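To make concrete how little the statute demands of a requester, here is a minimal sketch of what an intake flow built around those four components might look like. The names and types are invented for illustration; no real service’s API, and none of the statute’s exact wording, is implied.

```python
# Hypothetical intake sketch for a TAKE IT DOWN removal request; illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta

REMOVAL_WINDOW = timedelta(hours=48)  # statutory deadline once a valid request is received

@dataclass
class TakedownRequest:
    signature: str             # physical or electronic signature
    content_description: str   # "reasonably sufficient" identification; no URL required
    good_faith_statement: str  # assertion of non-consent, not made under penalty of perjury
    contact_info: str          # how the service can reach the requester
    received_at: datetime

def is_facially_valid(req: TakedownRequest) -> bool:
    # The statute asks only whether the four components are present. Nothing here
    # verifies the requester's identity, their authority to act for the depicted
    # person, or whether the content is actually an intimate visual depiction.
    return all([req.signature, req.content_description,
                req.good_faith_statement, req.contact_info])

def removal_deadline(req: TakedownRequest) -> datetime:
    # The clock runs from receipt, leaving little practical room for adversarial review.
    return req.received_at + REMOVAL_WINDOW
```

Everything a service might actually want to check before deleting speech sits outside that validity test, which is precisely the problem.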

The law also requires only a “reasonably sufficient” identification of the content. There’s no obligation to include URLs, filenames, or specific asset identifiers. It’s unclear whether vague descriptions like “nudes of me from college” are sufficient to trigger a takedown obligation. Under the DMCA, this level of ambiguity would likely invalidate a request. Here, it might not only be acceptable; ignoring it could be legally actionable.

The statute’s treatment of consent is equally problematic. A requester must assert that the content was published without consent but need not provide any evidence to support the claim, other than a statement of good faith belief. There is no adversarial process, no opportunity for the original uploader to dispute the request, and no mechanism to resolve conflicts where the depicted person may have, in fact, consented. In cases where an authorized agent submits a removal request on someone’s behalf (say, a family member or advocacy group), it’s unclear what happens if the depicted individual disagrees. The law contemplates no process for sorting this out. Services are expected to remove first and ask questions never.

Complicating matters further, the law imposes an obligation to remove not only the reported content but also any “identical copies.” While framed as a measure to prevent whack-a-mole reposting, this provision effectively creates a soft monitoring mandate. Even when the original takedown request is vague or incomplete—which the statute permits—services are still required to scan their systems for duplicates. This must be done despite often having little to no verification of the requester’s identity, authority, or the factual basis for the alleged lack of consent. Worse, online services must defer to the requester’s characterization of the content, even if the material in question may not actually qualify as an “intimate visual depiction” under the statutory definition.
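It is worth spelling out how mechanical the resulting scan is. The sketch below checks for byte-identical copies with an ordinary cryptographic hash; the function and variable names are invented, and real services typically lean on perceptual-hashing systems to catch re-encoded or lightly edited copies, which this sketch does not attempt.

```python
# Minimal sketch of an "identical copies" scan; illustrative only.
import hashlib

def digest(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def find_identical_copies(reported_image: bytes, hosted_images: dict[str, bytes]) -> list[str]:
    """Return the IDs of hosted files that are byte-identical to the reported image."""
    target = digest(reported_image)
    return [image_id for image_id, data in hosted_images.items()
            if digest(data) == target]
```

Exact matching of this kind misses any repost that has been cropped, compressed, or watermarked, so compliance pressure will push services toward fuzzier matching, and fuzzier matching widens the over-removal risks discussed throughout this piece.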

Lastly, the law grants immunity to online services that remove content in good faith, even if the material doesn’t meet the definition of an intimate visual depiction. This creates a strong incentive to over-remove rather than assess borderline cases, especially when the legal risk for keeping content up outweighs any penalty for taking it down.

(Notably, neither the criminal nor civil provisions of the law expressly carve out satirical, parody, or protest imagery that happens to involve nudity or sexual references.)

* * *

Some implications of the law:

Over-Criminalization of Legal Speech

The law creates a sweeping new category of criminalized speech without the narrow tailoring typically required for content-based criminal statutes. Language surrounding “harm,” “public concern,” and “reasonable expectation of privacy” invites prosecutorial overreach and post-hoc judgments about whether a given depiction implicates privacy interests and consent, even when the speaker may have believed the content was lawful, newsworthy, or satirical.

The statute allows prosecution not only where the speaker knew the depiction was private, but also where they merely should have known. This is a sharp departure from established First Amendment doctrine, which requires at least actual knowledge or reckless disregard for truth in civil defamation cases, let alone criminal ones.

The law’s treatment of consent raises unresolved questions. It separates consent to create a depiction from consent to publish it, but says nothing about what happens when consent to publish is later withdrawn. A person might initially agree to share a depiction with a journalist, filmmaker, or content partner, only to later revoke that permission. The statute offers no clarity on how that revocation must be communicated and whether it must identify specific content versus a general objection.  

To be clear, the statute requires that the speaker “knowingly” publish intimate imagery without consent. So absent notice of revocation, criminal liability likely wouldn’t attach. But what counts as sufficient notice? Can a subject revoke consent to a particular use or depiction? Can they revoke consent across the board? If a journalist reuses a previously approved depiction in a new story, or a filmmaker continues distributing a documentary after one subject expresses discomfort, are those “new” publications requiring fresh consent? The law provides no mechanism for resolving these questions. 

Further, for adult depictions, the statute permits prosecution where the publication either causes harm or was intended to cause harm. This opens the door to criminal liability based not on the content itself, but on its downstream effects, regardless of whether the speaker acted in good faith. The statute includes no explicit exception for newsworthiness, artistic value, or other good-faith purposes, nor does it provide any formal opportunity for a speaker to demonstrate the absence of malicious intent. In theory, the First Amendment (and Taamneh) should cabin the reach, but the text itself leaves too much room for prosecutorial discretion.

The law also does not specify whether the harm must be to the depicted individual or to someone else, leaving open the possibility that prosecutors could treat general moral offense, such as that invoked by anti-pornography advocates, as sufficient. The inclusion of “reputational harm” as a basis for criminal liability is especially troubling. The statute makes no distinction between public and private figures and requires neither actual malice nor reckless disregard, setting a lower bar than what’s required even for civil defamation.

Further, because the law criminalizes “digital forgeries,” and defines them broadly to include any synthetic content indistinguishable, to a reasonable person, from reality, political deepfakes are vulnerable to prosecution. A video of a public official in a compromising scenario, even if obviously satirical or critical, could be treated as a criminal act if the depiction is deemed sufficiently intimate and the official claims reputational harm. [FN] The “not a matter of public concern” carveout is meant to prevent this, but it’s undefined and thus subject to prosecutorial discretion. Courts have repeatedly struggled to draw the line between private and public concern, and the statute offers no guidance.

[FN: Eric’s addition: I call this the Anthony Weiner problem, where his sexting recipients’ inability to prove their claims by showing the receipts would have allowed Weiner to lie without accountability.]

This creates a meaningful risk that prosecutors, particularly those aligned with Trump, could weaponize the law against protest art, memes, or critical commentary. Meta’s prior policy, for example, permitted images of a visible anus or close-up nudity if photoshopped onto a public figure for commentary or satire. Under the TAKE IT DOWN Act, similar visual content could become a target for prosecution or removal, especially when it involves politically powerful individuals. The statute provides plenty of wiggle room for selective enforcement, producing a chilling effect for creators, journalists, documentarians, and artists who work with visual media that is constitutionally protected but suddenly carries legal risk under this law.

With respect to depictions of minors, the law goes further: a person can be prosecuted for publishing an intimate depiction if they did so with the intent to harass or humiliate the minor or arouse another individual. As discussed, the definition of intimate imagery covers non-sexually explicit content, sweeping more broadly than existing CSAM or obscenity laws. This means the law creates a lower-tier criminal offense for visual content involving minors, even if the images are not illegal under current federal law.

For “authentic” images, the law could easily reach innocent but revealing photos of minors shared online. As discussed, if a popular family content creator posts a photo of their child in the bathtub (content that arguably shouldn’t be online in the first place) and the government concludes the poster intended to arouse someone else, that could trigger criminal liability under the TAKE IT DOWN Act. Indeed, family vloggers have repeatedly been accused of curating “innocent” content to appeal to their adult male followers as a means of increasing engagement and revenue, despite pushback from parents and viewers. (Parents may be part of the problem). While the underlying content itself is likely legal speech to the extent it doesn’t fall within CSAM or obscenity laws, it could still qualify as illegal content, subject to criminal prosecution, under the Act. 

For AI-generated images, the law takes an even more aggressive approach for minors. Unlike federal CSAM laws, which only cover synthetic images that are “indistinguishable” from a real minor, the TAKE IT DOWN Act applies to any digital forgery that, in the eyes of a reasonable person, appears to depict a specific, identifiable child. That’s a significant shift. The higher standard in CSAM law was crafted to comply with Ashcroft v. Free Speech Coalition, where the Supreme Court struck down a federal ban on virtual CSAM that wasn’t tied to real individuals. The Court’s rationale protected fictional content, including cartoon imagery (think a nude depiction of South Park’s Eric Cartman) as constitutionally protected speech. By contrast, the TAKE IT DOWN Act abandons that distinction and criminalizes synthetic content based on how it appears to a reasonable viewer, not whether it reflects reality or actual harm. That standard is unlikely to survive Ashcroft-level scrutiny and leaves the law open to serious constitutional challenge.

Disproportionate Protections & Penalties For Vulnerable Groups

The TAKE IT DOWN Act is framed as a measure to protect vulnerable individuals, such as the high school students victimized by deepfake NCII. Yet its ambiguities risk leaving some vulnerable groups unprotected, or worse, exposing them to prosecution.

The statute raises the real possibility of criminalizing large numbers of minors. Anytime we’re talking about high schoolers and sharing of NCII, we have to ask whether the law applies to teens who forward nudes—behavior that is unquestionably harmful and invasive, but also alarmingly common (see, e.g., 1, 2, 3). While the statute is framed as a tool to punish adults who exploit minors, its broad language easily sweeps in teenagers navigating digital spaces they may not fully understand. Yes, teens should be more careful with what they share, but that expectation doesn’t account for the impulsiveness, peer pressure, and viral dynamics that often define adolescent behavior online. A nude or semi-nude image shared consensually between peers can rapidly spread beyond its intended audience. Some teens may forward it not to harass or humiliate, but out of curiosity or simply because “everyone else already saw it.” Under the TAKE IT DOWN Act, that alone could trigger federal criminal liability.

With respect to depictions of adults, the risks are narrower but still present. The statute specifies that consent to create a depiction does not equate to consent to publish it, and that sharing a depiction with someone else does not authorize them—or anyone else—to republish it. These provisions are intended to close familiar NCII loopholes, but they also raise questions about how the law applies when individuals post or re-share depictions of themselves. There is no broad exemption for self-publication by adults, only the same limited carveout for depictions involving nudity or sexual conduct. That may cover much of what adult content creators publish, but it leaves unclear how the law treats suggestive or partial depictions that fall short of statutory thresholds. In edge cases, a prosecutor could argue that a self-published image lacks context-specific consent or causes general harm, especially if the prosecutor is inclined to target adult content as a matter of policy.

At the same time, the law seems to also treat adult content creators and sex workers as effectively ineligible for protection. As discussed, prior public or commercial self-disclosure potentially disqualifies someone from being a victim of non-consensual redistribution. Instead of accounting for the specific risks these communities face, the law appears to treat them as discardable (as is typical for these communities). 

This structural asymmetry is made worse by the statute’s sweeping exemption for law enforcement and intelligence agencies, despite their well-documented misuse of intimate imagery. Police have used real sex workers’ photos in sting operations without consent, exposing individuals to reputational harm, harassment, and even false suspicion. A 2021 DOJ Inspector General report found that FBI agents, while posing as minors online, uploaded non-consensual images to illicit websites. This is conduct that violated agency policy but seems to be fully exempt under Take It Down. This creates a feedback loop: the state appropriates private images, recirculates them, and then uses the fallout as investigative justification. 

Over-Removal of Political Speech, Commentary, and Adult Content

Trump and his allies have a long track record of attempting to suppress unflattering or politically inconvenient content. Under the civil takedown provisions of the TAKE IT DOWN Act, they no longer need to go through the courts to do it. All it takes is an allegation that a depiction violates the statute. Because the civil standard is more permissive, that allegation doesn’t have to be well-founded; it just has to assert that the content is an “intimate visual depiction.” A private photo from a political fundraiser, a photoshopped meme using a real image, or an AI-generated video of Trump kissing Elon Musk’s feet could all be flagged under the law, even if they don’t meet the statute’s actual definition. But here’s the catch: services have just 48 hours to take the content down. That’s not 48 hours to investigate, evaluate, or push back; it’s 48 hours to comply or risk FTC enforcement. In practice, that means the content is far more likely to be removed than challenged, especially when the requester claims the material is intimate. Services will default to caution, pulling content that may not meet the statutory threshold just to avoid regulatory risk. As we saw after FOSTA-SESTA, that kind of liability pressure drives entire categories of speech offline.

Moreover, the provision requiring online services to remove identical copies of reported content might, in practice, encourage a scorched-earth approach to removals: deleting entire folders, wiping user accounts, pulling down all images linked to a given name or metadata tag, or even removing the contents of an entire website. It’s easy to see how this could be weaponized, especially against adult content sites, where third-party uploads often blur the line between lawful adult material and illicit content.

Further, automated content moderation tools designed to efficiently remove content while shielding human workers from exposure harms may exacerbate the issue. Many online services use automated classifiers, blurred previews, and image hashing systems to minimize human exposure to disturbing content. But the TAKE IT DOWN Act requires subjective judgment calls that automation may not be equipped to make. Moderators must decide whether a depiction is truly intimate, whether it falls under an exception, whether the depicted individual voluntarily exposed themselves, and whether the requester is legitimate. These are context-heavy determinations that require viewing the content directly. In effect, moderators are now pushed back into front-line exposure just to determine whether a depiction meets the statute’s definition.
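
To make the mismatch concrete, here is a minimal, illustrative sketch of hash-based duplicate matching, the kind of automation services lean on for copy detection. The function and variable names are hypothetical and this is not any service’s actual pipeline; real systems typically use perceptual hashes (PhotoDNA-style) rather than the exact-match hash shown here.

    import hashlib

    def fingerprint(image_bytes: bytes) -> str:
        # Exact-match fingerprint; production systems generally use perceptual
        # hashes that also catch re-encoded or resized copies.
        return hashlib.sha256(image_bytes).hexdigest()

    def find_identical_copies(reported: bytes, library: dict[str, bytes]) -> list[str]:
        # Returns the IDs of stored items whose bytes exactly match the report.
        target = fingerprint(reported)
        return [item_id for item_id, data in library.items()
                if fingerprint(data) == target]

A hash can say “these files match.” It cannot say whether the depiction is intimate, whether consent exists, whether the material was voluntarily exposed, or whether the requester is who they claim to be, which is exactly why those calls land back on human reviewers.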

The enforcement provisions of the TAKE IT DOWN Act give the federal government—particularly a politicized FTC delighting in its newfound identity as a censorship board—broad discretion to target disfavored online services. A single flagged depiction labeled a digital forgery can trigger invasive investigations, fines, or even site shutdowns. Recall that The Heritage Foundation’s Project 2025 mandate explicitly calls for the elimination of online pornography. This law offers a ready-made mechanism to advance that agenda, not only for government officials but also for aligned anti-pornography groups like NCOSE. Once the state can reframe consensual adult content as non-consensual or synthetic, regardless of whether that claim holds, it can begin purging lawful material from the Internet under the banner of victim protection. 

This enforcement model will also disproportionately affect LGBTQ+ content, which is already subject to heightened scrutiny and over-removal. Queer creators routinely report that their educational, artistic, or personal content is flagged as adult or explicit, even when it complies with existing community guidelines. Under the TAKE IT DOWN Act, content depicting queer intimacy, gender nonconformity, or bodies outside heteronormative standards could be more easily labeled as “intimate visual depictions,” especially when framed by complainants as inappropriate or harmful. For example, a shirtless trans-identifying person discussing top surgery could plausibly be flagged for removal. Project 2025 and its enforcers have already sought to collapse LGBTQ+ expression into a broader campaign against “pornography.” The TAKE IT DOWN Act gives that campaign a fast-track enforcement mechanism, with no real procedural safeguards to prevent abuse.

Selective Enforcement By Trump’s FTC 

The Act’s notice-and-takedown regime is enforced by the FTC, an agency with no meaningful experience or credibility in content moderation. That’s especially clear from its attention economy workshop, which appears stacked with ideologically driven participants and conspicuously devoid of legitimate experts in Internet law, trust and safety, or technology policy.

The Trump administration’s recent purge and re-staffing of the agency only underscores the point. With internal dissenters removed and partisan loyalists installed, the FTC now functions less as an independent regulator and more as an enforcement tool aligned with the White House’s speech agenda. The agency is fully positioned to implement the law exactly as Trump intends: by punishing political enemies.

We should expect enforcement will not be applied evenly. X (formerly Twitter), under Elon Musk, continues to host large volumes of NCII with little visible oversight. There is no reason to believe a Trump-controlled FTC will target Musk’s services. Meanwhile, smaller, less-connected sites, particularly those serving LGBTQ+ users and marginalized creators, will remain far more exposed to aggressive, selective enforcement.

Undermining Encryption

The Act does not exempt private messaging services, encrypted communication tools, or electronic storage providers. That omission raises significant concerns. Services that offer end-to-end encrypted messaging simply cannot access the content of user communications, making compliance with takedown notices functionally impossible. These services cannot evaluate whether a reported depiction is intimate, harmful, or duplicative because, by design, they cannot see it. See the Doe v. Apple case.
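
A toy example illustrates the bind. This sketch uses the Python cryptography library’s Fernet primitive as a stand-in for a real end-to-end protocol; the function name and stored data are hypothetical, and the point is only that when keys live on users’ devices, the provider holds ciphertext it cannot evaluate.

    from cryptography.fernet import Fernet

    # In an end-to-end design, this key exists only on the users' devices.
    client_key = Fernet.generate_key()
    client = Fernet(client_key)

    stored_blob = client.encrypt(b"private image bytes")  # all the provider ever holds

    def evaluate_takedown_notice(blob: bytes) -> str:
        # Without the key, the provider cannot tell whether this blob is an
        # "intimate visual depiction," a duplicate of a reported item, or anything else.
        return "opaque ciphertext: cannot assess the notice"

    print(evaluate_takedown_notice(stored_blob))

The only ways out of that dead end are to break the encryption, scan on the client, or ignore the notice, which is the pressure described next.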

Faced with this dilemma, providers may feel pressure to weaken or abandon encryption entirely in order to demonstrate “reasonable efforts” to detect and remove reported content. This effectively converts private, secure services into surveillance systems, compromising the privacy of all users, including the very individuals the law claims to protect.

The statute’s silence on what constitutes a “reasonable effort” to identify and remove copies of reported imagery only increases compliance uncertainty. In the absence of clear standards, services may over-correct by deploying invasive scanning technologies or abandoning encryption altogether to minimize legal risk. Weakening encryption in this way introduces systemic security vulnerabilities, exposing user data to unauthorized access, interception, and exploitation. This is particularly concerning as AI-driven cyberattacks become more sophisticated, and as the federal government is actively undermining our nation’s cybersecurity infrastructure. 

Conclusion 

Trump’s public support for the TAKE IT DOWN Act should have been disqualifying on its own. But even setting that aside, the law’s political and institutional backing should have raised immediate red flags for Democratic lawmakers. Its most vocal champion, Senator Ted Cruz, is a committed culture warrior whose track record includes opposing same-sex marriage, attacking DEI programs, and using students as political props—ironically, the same group this law claims to protect.

The law’s support coalition reads like a who’s who of Christian nationalist and anti-LGBTQ+ activism. Among the 120 organizations backing it are the National Center on Sexual Exploitation (NCOSE), Concerned Women for America Legislative Action Committee, Family Policy Alliance, American Principles Project, and Heritage Action for America. These groups have long advocated for expanded state control over online speech and sexual expression, particularly targeting LGBTQ+ communities and sex workers.

Civil liberties groups and digital rights organizations quickly flagged the law’s vague language, overbroad enforcement mechanisms, and obvious potential for abuse. Even groups who typically support online speech regulation warned that the law was poorly drafted and structurally dangerous, particularly in the hands of the Trump Administration.

At this point, it’s not just disappointing, it’s indefensible that so many Democrats waved this law through, despite its deep alignment with censorship, discrimination, and religious orthodoxy. The Democrats’ support represents a profound failure of both principle and judgment. Worse, it reveals a deeper rot within the Democratic establishment: legislation that is plainly dangerous gets waved through not because lawmakers believe in it, but because they fear bad headlines more than they fear the erosion of democracy itself.

In a FOSTA-SESTA-style outcome, Mr. Deepfakes—one of the Internet’s most notorious hubs for AI-generated NCII and synthetic abuse—shut down before the TAKE IT DOWN Act even took effect. More recently, the San Francisco City Attorney’s Office announced a settlement with one of the many companies it sued for hosting and enabling AI-generated NCII. That litigation has already triggered the shutdown of at least ten similar sites, raising the age-old Internet law question: was this sweeping law necessary to address the problem in the first place?

__

Eric’s Comments

I’m going to supplement Prof. Miers’ comments with a few of my own focused on the titular takedown provision. 

The Heckler’s Veto

If a service receives a takedown notice, the service must work through all of the following within 48 hours:

  • Can the service find the targeted item?
  • Is anyone identifiable in the targeted item?
  • Is the person submitting the takedown notice identifiable in the targeted item?
  • Does the targeted item contain an intimate visual depiction of the submitter?
  • Did the submitting person consent to the depiction?
  • Is the depiction otherwise subject to some privilege? (For example, the First Amendment)
  • Can the service find other copies of the targeted item?
  • [repeat all of the above steps for each duplicate. Note that the copies may warrant different conclusions; for example, a copy embedded in a larger item of content (such as a still image in a documentary) may call for a different analysis]

Alternatively, instead of navigating this gauntlet of short-turnaround tasks, the service can just immediately honor a takedown without any research at all. What would you do if you were running a service’s removals operations? This is not a hard question.
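
If you sketch that compliance decision as code, the incentive structure is plain. This is a deliberately crude, hypothetical model (no service formalizes its queue this way), but it captures the asymmetry: a wrong refusal risks FTC enforcement, while a wrong removal carries no liability.

    def handle_takedown_notice(can_complete_review_in_48_hours: bool) -> str:
        # Investigating identity, consent, privilege, and duplicates takes time,
        # and a wrong "keep" decision invites an FTC enforcement action.
        # Removing on receipt costs the service nothing it can be held liable for.
        if can_complete_review_in_48_hours:
            return "review, then remove unless the content is clearly lawful"
        return "remove immediately"

At any realistic notice volume, the first branch almost never executes.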

Because takedown notices are functionally unverifiable and services have no incentive to invest any energy in diligencing them, they operate as heckler’s vetoes. Anyone can submit them knowing that the service will honor them blindly and thereby scrub legitimate content from the Internet. This is a powerful and very effective form of censorship. As Prof. Miers explains, the most likely victims of heckler’s vetoes are communities that are otherwise marginalized.

One caveat: after Moody, it seems likely that laws reducing or eliminating the discretion of editorial services to remove or downgrade non-illegal content, like those contained in the Florida and Texas social media censorship laws, are unconstitutional. If not, the Take It Down Act sets up services for an impossible challenge: they would have to make the right call on the legality of each and every targeted item. Failing to remove illegal content would support a Take It Down FTC enforcement action; removing legal content would set up a claim under the must-carry law. Prof. Miers and I discussed the impossibility of perfectly discerning this border between legal and illegal content.

Bad Design of a Takedown System

The takedown system was clearly designed in reference to the DMCA’s 512 notice-and-takedown scheme. This is not a laudatory attribute. The 512 scheme was poorly designed, which has led to overremovals and consolidated the industry due to the need to achieve economies of scale. The Take It Down Act’s scheme is even more poorly designed. Congress has literally learned nothing from 25 years of experience with the DMCA’s takedown procedures. 

Here are some of the ways that the Take It Down Act’s takedown scheme is worse than the DMCA’s:

  • As Prof. Miers mentioned, the DMCA requires a high degree of specificity about the location of the targeted item. The Take It Down Act puts more of an onus on the service to find the targeted item in response to imprecise takedown notices.
  • The DMCA does not require services to look for and remove identical items, so the Take It Down Act requires services to undertake substantially more work that increases the risk of mistakes and the service’s legal exposure.
  • As Prof. Miers mentioned, DMCA notices require the sender to declare, under penalty of perjury, that they are authorized to submit the notice. As a practical matter, I am unaware of any perjury prosecutions actually being brought for DMCA overclaims. Nevertheless, the perjury threat might still motivate some senders to tell the truth. The Take It Down Act doesn’t require such declarations at risk of perjury, which encourages illegitimate takedown notices.
  • Further to that point, the DMCA created a new cause of action (512(f)) for sending bogus takedown notices. 512(f) has been a complete failure, but at least it provides some reason for senders to consider if they really want to submit the takedown notice. The Take It Down Act has no analogue to 512(f), so Take It Down notice senders who overclaim may not face any liability or have any reason to curb their actions. This is why I expect lots of robo-notices sent by senders who have no authority at all (such as anti-porn advocates with enough resources to build a robot and a zeal to eliminate adult content online), and I expect many of those robo-notices will be honored without question. This sounds like a recipe for mass chaos…and mass censorship.
  • Failure to honor a DMCA takedown notice doesn’t create liability; it just removes a safe harbor. The Take It Down Act imposes liability for failure to honor a takedown notice in two ways: the FTC can enforce the non-removal, and the failed removal may also support a federal criminal prosecution.
  • The DMCA tried to motivate services to provide an error-correction mechanism to uploaders who are wrongly targeted by takedown notices–the 512(g) putback mechanism provides the service with an immunity for restoring targeted content. The Take It Down Act has no error-correction mechanism, either via carrots or sticks, so any correction of bogus removals will be based purely on the service’s good graces.
  • The Take It Down Act tries to motivate services to avoid overremovals by providing an immunity for removals “based on facts or circumstances from which the unlawful publishing of an intimate visual depiction is apparent.” Swell, but as Prof. Miers and I documented, services aren’t liable for content removals they make (subject to my point above about must-carry laws), whether it’s in response to heckler’s veto notices or otherwise. So the Take It Down immunity won’t motivate services to be more careful with their removal determinations because it does not provide any additional legal protection the services value. 

The bottom line: however bad you think the DMCA encourages or enables overremovals, the Take It Down Act is 1000x worse due to its poor design. 

One quirk: the DMCA expects services to do two things in response to a copyright takedown notice: (1) remove the targeted item, and (2) assign a strike to the uploader, and terminate the uploader’s account if the uploader has received too many strikes. (The statute doesn’t specify how many strikes is too many, and it’s an issue that is hotly litigated, especially in the IAP context). The Take It Down Act doesn’t have a concept of recidivism. In theory, a single uploader could upload a verboten item, the service could remove it in response to a takedown notice, the uploader could reupload the identical item, and the service could wait for another Take It Down notice before doing anything. In fact, the Take It Down Act seemingly permits this process to repeat infinitely (though the service might choose to terminate such rogue accounts voluntarily based on its own editorial standards). Will judges consider that infinite loop unacceptable and, after too many strikes (whenever that is), assume some kind of actionable scienter on services dealing with recidivists?

The FTC’s Enforcement Leverage

The Take It Down Act enables the FTC to bring enforcement actions for a service’s “failure to reasonably comply with the notice and takedown obligations.” There is no minimum quantity of failures; a single failure to honor a takedown notice might support the FTC’s action. This gives the FTC extraordinary leverage over services. The FTC has unlimited flexibility to exercise its prosecutorial discretion, and services will be vulnerable if they’ve made a single mistake (which every service will inevitably do). The FTC can use this leverage to get services to do pretty much whatever the FTC wants–to avoid a distracting, resource-intensive, and legally risky investigation. I anticipate the FTC will receive a steady stream of complaints from people who sent takedown notices that weren’t honored (especially the zealous anti-porn advocates), and each of those complaints to the FTC could trigger a massive headache for the targeted services.

The fact that the FTC has turned into a partisan enforcement agency makes this discretionary power even more risky to the Internet’s integrity. For example, imagine that the FTC wants to do an anti-porn initiative; the Take It Down Act gives the FTC a cudgel to ensure that services are overremoving pornography in response to takedown notices–or perhaps even to anticipatorily reduce the availability of “adult” content on their service to avoid potential future entanglements. Even if the FTC doesn’t lean into an anti-porn crackdown, Chairman Ferguson has repeatedly indicated that he thinks he works for President Trump, not for the American people who pay his salary, and is on standby to use the weapons at the FTC’s disposal to do the president’s bidding.  

As I noted in my pieces on editorial transparency (1, 2), the FTC’s investigatory powers can take the agency deep into a service’s “editorial” operations. The FTC can investigate if a service’s statutorily required reporting mechanism is properly operating; the FTC can ask to see all of the Take It Down notices submitted to the service and their disposition; the FTC can ask why each and every takedown notice refusal was made and question if that was a “correct” choice; the FTC can ask the service about its efforts to find identical copies that should have been taken down and argue that any missed copies were not reasonable. In other words, the FTC now becomes an omnipresent force in every service’s editorial decisions related to adult content–a newsroom partner that no service wants. These kinds of close dialogues between editorial publishers and government censors are common in repressive and authoritarian regimes, and the Take It Down Act reinforces that we are one of them.

The Death of Due Process

Our country is experiencing a broad-based retrenchment in support for procedures that follow due process. I mean, our government is literally disappearing people without due process and arguing that it has every right to do so–and a nontrivial number of Americans are cheering this on. President Trump even protested that he couldn’t depopulate the country of immigrants he doesn’t like and comply with due process because it would take too long and cost too much.

Yes, due process is slow and expensive, but countries that care about the rule of law require it anyway because it reduces errors that can be pernicious/life-changing and provides mechanisms to correct any errors. Because of the powers in the hands of government and the inevitability that governments make mistakes, we need more due process, not less.

The Take It Down Act is another corner-cut on due process. Rather than requiring people to take their complaints about intimate visual depictions to court, which would take a lot of time and cost a lot of money, the Take It Down Act contemplates a removal system that bears no resemblance to due process. As I discussed, the Take It Down Act puts a heavy thumb on the scale in favor of removing content (legitimate or not) in response to heckler’s vetoes, ensuring many erroneous removals, with no meaningful mechanism to correct those errors.

It’s like the old adage in technology development circles (sometimes called the “Iron Triangle”): you can’t have good, fast, and cheap outcomes, at best you can pick two attributes of the three. By passing the Take It Down Act, Congress picked fast and cheap decisions and sacrificed accuracy. When the act’s takedown systems go into effect, we’ll find out how much that choice cost us.

Can Compelled Takedowns Survive a Court Challenge?

I’d be interested in your thoughts about whether the takedown notice procedures (separate from the criminal provisions) violate the First Amendment and Section 230. On the surface, it seems like the takedown requirements conflict with the First Amendment. The Take It Down Act requires the removal of content that isn’t obscene or CSAM, and government regulation of non-obscene/non-CSAM content raises First Amendment problems because it overrides the service’s editorial discretion. The facts that the censorship is structured as a notice-and-takedown procedure rather than a categorical ban, and that the FTC can enforce violations per its unfair/deceptive authority, strike me as immaterial to the First Amendment analysis.

(Note: I could make a similar argument about the DMCA’s takedown requirements, which routinely lead to the removal of non-infringing and Constitutionally protected material, but copyright infringement gets a weird free pass from Constitutional scrutiny).

Also, Take It Down’s takedown procedures obviously conflict with Section 230 by imposing liability for continuing to publish third-party content. However, I’m not sure if Take It Down’s status as a later-passed law means that it implicitly amends Section 230. Furthermore, by anchoring enforcement in the FTC Act, the law may take advantage of cases like FTC v. LeadClick which basically said that the FTC Act punishes defendants for their first-party actions, not for third-party content (though that seems like an objectively unreasonable interpretation in this context). So I’m unsure how the Take It Down/Section 230 conflict will be resolved.

Note that it’s unclear who will challenge the Take It Down Act prospectively. It seems like all of the major services will do whatever they can to avoid triggering a Trump brain fart, which sidelines them from prospective challenges to the law. So we may not get more answers about the permissibility of the Take It Down Act scheme for years, until there’s an enforcement action against a service with enough money and motivation to fight.

Posted on BestNetTech - 25 March 2025 @ 03:49pm

How Democrats’ Attack On Section 230 Plays Right Into Trump’s Censorial Plans

Like clockwork, lawmakers are once again rallying around the idea of eliminating Section 230. That Republicans are leading this charge is hardly surprising—repealing Section 230 is explicitly laid out in the Project 2025 playbook. But what’s surprising, and increasingly reckless, is the willingness of Democratic lawmakers to join forces with Republicans in dismantling one of the few remaining legal safeguards standing between the Trump Administration and unchecked control over online speech. In doing so, they are handing the Trump Administration a powerful tool to execute its long-standing goal: total control over online discourse. And in a political climate where Trump is already targeting law firms that oppose him, loss of access to the skilled attorneys needed to defend online speech without Section 230 isn’t a side effect; it’s the entire point.

Perhaps Democrats don’t fully grasp the strategic importance of Section 230. For years, many on the left have believed that repealing the law would pressure online services into “cleaning up” their spaces by removing hate speech, conspiracy theories, and other content deemed anti-social. The assumption is that without 230’s liability shield, companies will err on the side of caution and engage in more content moderation. But in reality, that outcome is far from guaranteed. The more likely result is either an explosion of harmful content (the stated goal of Project 2025) or aggressive over-moderation that silences all user speech: an “own goal” that would severely undermine the progressive causes Democrats claim to support.

But the most dangerous consequence of repealing Section 230 has nothing to do with content moderation policies themselves but rather the ability to defend those policies. Section 230 doesn’t grant new speech rights; the First Amendment already protects a website’s editorial decisions. What Section 230 does is provide a procedural “fastlane,” allowing websites and users to dismiss meritless lawsuits early—often at the motion to dismiss stage. That’s a big deal. With Section 230, defendants don’t need elite law firms or millions of dollars. Legal advocacy groups, and particularly those less susceptible to political pressure, can take on these cases pro bono, knowing they won’t be buried in years of litigation or financial ruin.

Without Section 230, the calculus changes drastically. Now, any lawsuit over a content decision, whether it’s removing Trump’s posts or leaving up white nationalist propaganda, typically requires a First Amendment defense. And unlike Section 230, First Amendment claims are fact-intensive, expensive, and slow-moving. Courts are reluctant to resolve them at the pleading stage. Instead, they often allow discovery, depositions, and extended litigation to explore whether a platform was acting as a state actor, or whether the content decisions were truly editorial in nature. These cases can drag on for years and cost defendants six or seven figures. Only the most well-resourced defendants with access to high-powered legal talent stand a fighting chance.

And that’s where things get even more sinister.

The Trump movement has made it abundantly clear: law firms that represent his political opponents are targets. And the pressure campaign is working. Paul Weiss, a major law firm, reportedly backed off representation of Trump-opposed clients. Perkins Coie “discovered” a conflict of interest mere days after being singled out in a Trump executive order. Other firms are falling in line too, particularly those with longstanding ties to litigation over online speech.

In a post-230 world, tech companies and individuals will face a flood of lawsuits over content moderation decisions—many of which will require expensive, high-stakes constitutional defenses. Large law firms, increasingly wary of political retaliation, will be even less willing to represent clients challenging Trump-aligned speech or policies. Under normal circumstances, independent attorneys and advocacy groups that are typically less susceptible to political pressure would be the ones to step in and defend these cases. But without Section 230’s early procedural protections, even they will struggle to absorb the financial and time burdens of full-blown constitutional litigation.

Imagine then a scenario where an online service removes Trump, or moderates rhetoric aligned with his Administration’s agenda. The Trump Administration could respond with retaliatory executive action or lawsuits. Who’s going to step up to defend that service? Which firms are willing to risk executive orders, client loss, and political scrutiny to protect editorial discretion? Increasingly, the answer is: no one.

The combined effect is devastating. Faced with mounting legal risk and an eroding pool of legal help, online services will begin moderating content in line with the Administration’s interests, not out of ideological sympathy, but self-preservation. They’ll leave up speech they would have otherwise removed. They’ll take down speech that powerful actors deem objectionable. This won’t just preserve the exact kind of content the Democrats oppose; it will erase the speech of those pushing back against Trump.

The result is chilling: speech that offends those in power, particularly Trump, is suppressed not by law, but by lawsuit. Not by censorship orders, but by fear of retaliation and now the inability to find legal representation.

And yet here we are. Democrats are handing over the keys to this censorship machine, thinking they’re striking a blow for safer online spaces. But what they’re really doing is dismantling the only law that makes resistance possible. Unlike newspapers, cable, or legacy media—which are vulnerable to political coercion—Section 230 is authoritarian-proof. It’s the last structural safeguard we have to protect the essential free exchange of ideas online.

Repealing Section 230 won’t lead to the “better” Internet that Democrats envision. It will pave the way for the most powerful voices to dominate the conversation and make sure those who speak out against them can’t fight back.

Jess Miers is currently Visiting Assistant Professor of Law, University of Akron School of Law.

Posted on BestNetTech - 19 September 2024 @ 12:30pm

SB 1047: California’s Recipe For AI Stagnation

As California edges closer to enacting SB 1047, the state risks throwing the entire AI industry into turmoil. The bill has already cleared the legislative process and now sits on Governor Newsom’s desk, leaving him with a critical decision: veto this ill-conceived policy or sign away the U.S.’ future in AI. While Newsom appears skeptical of 1047, he still has not made it clear if he’ll actually veto the bill.

SB 1047 Summary

SB 1047 establishes an overly rigid regulatory framework that arbitrarily divides artificial intelligence systems into two categories: “covered models” and “derivative models.” Both are subject to extensive requirements, though at different stages of development. Developers of covered models face strict pre-training and pre-release requirements, while those developing derivative models are burdened with the responsibility of ensuring the model’s long-term safety, anticipating future hazards, and mitigating potential downstream abuses.

The bill also imposes a reasonableness standard on developers to demonstrate that they have exercised “reasonable care” in preventing their models from causing critical risks. This includes the implementation and adherence to extensive safety protocols before and after development. In practice, the standard merely introduces legal ambiguity. The vague nature of what constitutes “reasonable care” opens the door to costly litigation, with developers potentially stuck in endless legal battles over whether they’ve done enough to comply not only with the ever-evolving standards of care and best practices for AI development, but also their own extensive state-mandated safety protocols.

It’s no surprise that industry experts have raised serious concerns about SB 1047’s potential to stifle innovation, limit free expression through restrictions on coding, and undermine the future of U.S. AI development.

SB 1047 Will Cede U.S. AI Lead to Foreign Competitors

Under the bill, a covered model is defined as any advanced AI system that meets certain thresholds of computing power and cost. Models trained before January 1, 2027, are classified as covered if they use more than 10^26 integer or floating-point operations and cost more than $100 million to develop.

But these thresholds are inherently flawed. Even cutting-edge AI systems like GPT-4, which are among the most advanced in the world, were trained using significantly less computing power than the bill’s benchmark. For example, estimates suggest that GPT-3 required around 10^23 operations—far below the bill’s threshold. This highlights a key problem: the bill’s requirements for covered models primarily target large, resource-intensive AI labs today, but as AI technologies and hardware improve, even smaller developers could find themselves ensnared by these requirements.
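
A back-of-the-envelope check shows how far a GPT-3-scale run sits from that line. The sketch below uses the common rule of thumb that training compute is roughly six times the parameter count times the number of training tokens; the parameter, token, and cost figures are illustrative estimates, not official disclosures.

    SB1047_FLOP_THRESHOLD = 1e26
    SB1047_COST_THRESHOLD_USD = 100_000_000

    def estimated_training_flops(params: float, tokens: float) -> float:
        # Rough rule of thumb: compute ~ 6 * N parameters * D training tokens.
        return 6 * params * tokens

    def is_covered_model(flops: float, training_cost_usd: float) -> bool:
        # SB 1047's pre-2027 definition requires exceeding BOTH thresholds.
        return flops > SB1047_FLOP_THRESHOLD and training_cost_usd > SB1047_COST_THRESHOLD_USD

    # Illustrative GPT-3-scale run: ~175B parameters, ~300B tokens -> roughly 3e23 FLOPs.
    gpt3_flops = estimated_training_flops(175e9, 300e9)
    print(is_covered_model(gpt3_flops, training_cost_usd=10_000_000))  # False

The conjunctive test in is_covered_model mirrors the bill’s “and”; as hardware improves and training runs grow, more developers will clear both prongs, which is exactly the creep described above.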

However, there’s a deeper irony here: scaling laws in AI suggest that larger AI models generally perform better. The more computational power used to train a model, the better it tends to handle complex tasks, reduce errors like hallucinations, and generate more reliable results. In fact, larger AI models could actually reduce societal harms, making AI systems safer and more accurate over time—a result for which the California Legislature is supposedly striving.

This is why larger AI firms, like OpenAI and Google, are pushing for more computationally intensive models. While it may seem that the covered model requirements exclude startup companies for now, current advancements in hardware—such as specialized AI chips and quantum computing—suggest that even smaller commercial AI developers could potentially surpass this threshold within the next 5-10 years (i.e. Moore’s Law). In other words, as time goes on, we can expect more market entrants to fall under the bill’s regulatory framework sooner than expected.

What’s worse, the threshold component seems to discourage companies from pushing the limits of AI. Instead of harnessing high-computing power to build truly transformative systems, businesses might deliberately scale down their models just to avoid falling under the bill’s scope. This short-sighted approach won’t just slow down AI innovation; it could stifle progress in computing power as a whole. If companies are reducing their need for cutting-edge processors and hardware, the broader tech ecosystem—everything from next-gen chips to data centers—will stagnate. The very innovation we need to lead the world in technology could grind to a halt, all because we’ve made it too risky for AI labs to aim big.

Pre-Training Requirements & Commercial Use Restrictions for Covered Models

Before training (i.e. developing) a covered model, developers must first decide whether they can make a “positive safety determination” about the model. Developers must also implement a detailed “safety and security protocol,” including cybersecurity protections, testing procedures to assess potential harms, and the ability to enact a full shutdown if needed. Developers are prohibited from releasing their models for any purpose beyond training unless they can certify that the models pose no unreasonable risks of harm, either now or in the future.

The bill’s vague language around “hazardous capabilities” opens a Pandora’s box of potential issues. While it primarily aims to address catastrophic risks like cyberattacks or mass casualties, it includes a broad catch-all provision for other risks to public safety or infrastructure. Given the many “black-box” aspects of AI model development, developers will struggle to confidently rule out any unforeseen hazards, especially those arising from third-party developed derivatives. The reality is that most developers will find themselves constantly worried about potential legal and regulatory risks, chilling progress at a time when the global AI race is at full throttle.

SB 1047’s Reporting Requirements Will Bring AI Innovation to A Grinding Halt

Developers must also maintain and regularly update their safety and security protocols for both covered models and derivative models. Several additional requirements follow:

  • Model developers must conduct an annual review of their safety and security protocols to ensure that protocols are kept current with evolving risks and industry standards. This includes any rules adopted per the bill’s requirements after January 1, 2027. Developers must also update their protocols based on these reviews.
  • Beginning in 2026, developers must hire a third-party auditor to independently verify compliance with the safety protocols. The auditor’s report must include an assessment of the steps taken by the developer to meet SB 1047’s requirements (and any additional guidelines post-enactment) and identify any areas of non-compliance. Developers are required to address any findings by updating their protocols to resolve issues identified during these audits.
  • Model developers must retain an unredacted copy of the safety and security protocols for as long as the covered model is in commercial or public use, plus five years. They are also required to provide the Attorney General with an updated copy of the safety protocol upon request.
  • A conspicuously redacted copy of the safety and security protocols must be made publicly available.

In practice, the process of releasing new or updated models will be bogged down with arbitrary bureaucratic delays. This will demand significant resource allocation well before companies can even gauge the success of their products.

Not only that, the mandatory assessments will effectively derail essential safety practices, especially when it comes to red teaming—where teams simulate attacks to uncover vulnerabilities. The reality is that red teaming works best behind closed doors and with minimal friction, empowering developers to quickly (and honestly) address security issues. Yet, with the added layers of auditing and mandatory updates, developers may avoid these rigorous safety checks, fearing that each vulnerability discovered could generate legal liability and trigger more scrutiny and further delays.

In the same vein, the mandatory reporting component adds a layer of government scrutiny that will discourage timely security updates and continued transparency about discovered vulnerabilities. Knowing that every security flaw might be scrutinized by regulators, developers may hesitate to disclose issues or rapidly iterate on their models for fear of legal or regulatory backlash. Worse, developers may simply try their hardest to not “know” about the vulnerabilities. Instead of fostering collaboration, the mandatory reporting requirement pits developers against the California AG. 

As Eric Goldman observed, government-imposed reporting inherently chills expression, where companies become more conservative (i.e. even less transparent) to avoid regulatory scrutiny. The same applies to SB 1047. 

SB 1047 Will Insulate Established AI Companies at the Expense of Startups

In contrast to covered models, derivative models—those fine-tuned or modified from existing covered models—are subject to safety assessments post-modification. Fine-tuning, a routine process where a model is adapted using new data, empowers AI to perform better on targeted tasks without requiring full retraining. But SB 1047 places undue burdens on developers of derivative models, forcing them to conduct safety assessments every time they make updates.

The lifeblood of AI innovation is this iterative, adaptive process. Yet, SB 1047 effectively punishes it, creating significant hurdles for developers looking to refine and improve their models. This not only flies in the face of software engineering principles—where constant iteration is key—but also discourages innovation in AI, where flexibility is essential to keeping pace with technological progress.

Worse, SB 1047 shifts liability for derivative models to the original developers. This means companies like Google or OpenAI could be held liable for risks introduced by third-party developers who modify or fine-tune their models. This liability doesn’t just extend to the original version of the model but also to all subsequent changes, imposing a continuous duty of oversight. Such a framework not only contradicts longstanding legal principles governing third-party liability for online platforms but also makes the AI marketplace unworkable for startups and independent developers.

Derivative Models Fuel the Current AI Marketplace

Derivative models are integral to the AI ecosystem. For example, Google’s BERT model—a covered model under SB 1047—has been fine-tuned by countless companies for specialized tasks like sentiment analysis and question answering. Similarly, OpenAI’s GPT-3 has been adapted for chatbots, writing tools, and automated customer service applications. OpenAI even operates a marketplace for third-party developers to customize GPT models for specific needs, similar to an app store for AI. While these derivative models serve legitimate purposes, there’s a real risk that third-party modifications could lead to abuse, potentially resulting in harmful or malicious applications anticipated by the bill. 

Drawing on lessons learned from online platform regulation, SB 1047’s framework risks making the AI marketplace inaccessible to independent developers and startups. Companies like Google, Meta, and OpenAI, which develop powerful covered models, may become hesitant to allow any modifications, effectively dismantling a growing ecosystem that thrives on the ability to adapt and refine existing AI technologies. For venture capitalists, the message is clear: open models come with significant legal risk, turning them into liability-laden investments. The repercussions of this would be profound. Just as a diverse media landscape is crucial for maintaining a well-rounded flow of information, a variety of AI models is essential to ensuring the continued benefit of different methodologies, data sets, and fine-tuning strategies. Limiting innovation in this space would stifle the dynamic evolution of AI, reducing its potential to meet varied societal needs.

Ironically, for a state that has been increasingly hellbent on destroying “big tech,” California’s approach to AI will (once again) ensure that only the largest, most well-funded AI companies—those capable of developing their own powerful covered models—will not only dominate, but single-handedly shape the future of AI, while smaller applications that currently build on and refine models from the larger players evaporate.

SB 1047 Will Drag California Into More Costly Litigation Over Ill-Conceived Tech Regulations

California is already mired in legal battles over poorly crafted tech regulations. Now, with SB 1047, the state risks plunging into yet another costly, uphill legal fight. The bill’s restrictions on the development and release of AI models could infringe on the constitutional right to code, which courts have recognized as a form of protected expression. For instance, in Bernstein v. U.S. Department of State, export controls on encryption code were deemed to violate the First Amendment, affirming that code is a form of speech. More broadly, courts have consistently upheld the rights of developers to code, underscoring that limitations on innovation through code can encroach on constitutional protections.

This debate is strikingly similar to the legal battles over social media regulation. Just as social media platforms are fundamentally speech products, entitled to editorial discretion under the First Amendment, so too are today’s Generative AI services. Many of these AI systems center around the processing and production of expression, making them direct facilitators of speech. As with the algorithms that curate social media content, regulations targeting these models will inevitably raise serious First Amendment concerns, challenging the constitutionality of such measures.

SB 1047, far from being a model for “responsible” AI innovation, risks debilitating the U.S.’s leadership in AI, reinforcing the dominance of existing tech firms, and punishing developers for improving and iterating upon their models. Governor Newsom has a choice: veto this bill and support the growth of AI innovation, or sign it and watch California lead the charge in destroying the very industry it claims to protect.

Jess Miers is currently Visiting Assistant Professor of Law, University of Akron School of Law.

Posted on BestNetTech - 25 July 2024 @ 12:05pm

The Messy Reality Behind Trying To Protect The Internet From Terrible Laws

The recent Supreme Court case, Moody v. NetChoice & CCIA, confronted a pivotal question: Do websites have the First Amendment right to curate content they present to their global audiences? While the opinion has been dissected by many, this post peeks behind the Silicon curtain to address the practical aftermath of tech litigation. 

Well before this case, there has been significant discord within the federal government about how to regulate the Internet. Democrats criticize the Silicon Valley elite for failing to shield Americans from harmful content, while Republicans decry “censorship” and revere the notion of a ‘digital public square,’ a concept lacking in both legal precision and technological reality. Despite a shared disdain for Section 230—a statute that protects websites and their users from liability for third-party content—the two sides can’t agree on a “solution,” forcing a legislative deadlock.

This impasse empowered state legislators to act independently. Initially dismissed as ‘messaging bills’ designed merely to garner political favor, legislation in Texas and Florida soon crystallized into laws that significantly curtailed the editorial discretion of social media platforms. This prompted legal challenges from two trade associations, NetChoice and the Computer & Communications Industry Association, questioning the constitutional merits (or lack thereof) of these laws.

The prolonged conflict led to a, perhaps anticlimactic, Supreme Court decision last month, focused more on the procedural nuances of facial challenges. This outcome has led observers to question why the responsibility of defending Internet freedoms fell to trade associations instead of the platforms themselves. Having been involved with these cases from the outset and drawing on my experience in the tech industry, I may have some answers. 

Gone are the days when tech companies stood united for the good of the industry and the underlying internet. Recall over a decade ago when some of the biggest tech companies in the world darkened their home pages in protest of the Stop Online Piracy Act (SOPA) and the Protect IP Act (PIPA). These bills posed a serious threat not just to individual companies, but to the entire tech industry and everyday internet users. Other notable examples of collective industry protest include the battles over net neutrality, SESTA-FOSTA, and the EARN IT Act. But despite a recent influx of legislative threats, tech companies are noticeably absent.

There are several reasons for the silence. First, the sheer volume of bad bills threatening the tech sector has outpaced the resources available to fight them. California alone has introduced a flurry of legislation targeting social media companies and AI in recent years. When you add in efforts from other states, fighting these laws becomes an internal numbers game. Each bill requires a dedicated team to analyze its impact, meet with lawmakers, organize grassroots campaigns, and, as a last resort, litigate. As a result, companies must make tough decisions about where to invest their resources, often prioritizing bills that directly impact their own products, services, and users.

When bad bills reach the governor’s desk, two strategies typically unfold. The first is the veto strategy, where companies and their lobbyists work tirelessly to secure the coveted governor’s veto. The second is litigation, once the governor inevitably signs the bill into law. Litigation is a significant decision, involving a long, costly process that directly affects company shareholders and therefore typically requires executive approval. And that’s just for one bad bill. Imagine making these decisions for multiple bills across several states all year long. Companies are understandably reluctant to rush into litigation, especially when other companies could take up the fight. Why should Meta challenge a law that also impacts Google, Amazon, or Apple?

This leads to a game of chicken, where companies wait, hoping another will take action. Of course, not all legislation impacts companies equally. A law targeting Facebook, for example, may not affect others enough to justify the expense of a legal challenge. If the most impacted company decides compliance is a cheaper and safer alternative, the law may just go unchallenged. This leaves smaller companies, for whom litigation was never a realistic option, to fend for themselves—and for some of the major players, that might just be an added bonus

Litigation also incurs significant political and public costs. Companies and lawmakers navigate a complex interplay during the legislative season, where companies vie for a seat at closed-door meetings to influence bill drafting, while politicians attempt to manage the influence of these corporate lobbyists to achieve legislative gains for their constituents. Consequently, challenging a law—particularly one backed by politicians with deep corporate ties—could be perceived as a declaration of war, potentially alienating companies from future legislative discussions.

Beyond political capital, public perception is equally critical and increasingly fragile. Contemporary portrayals in the media often depict tech advocacy as self-serving or even harmful. This is particularly evident in the discourse surrounding new youth online safety laws, where tech companies face backlash for opposing measures like parental consent and age verification—mandates that many experts claim actually harm children

This growing disdain towards the tech industry (“techlash”) also shapes how companies assess the risks of contesting contentious laws. Which brings us to trade associations, like NetChoice and CCIA. 

Trade associations manage the interests of their industry members by engaging with lawmakers on key bills, testifying at hearings, submitting comments, and initiating legal challenges. These associations can vary in structure. For example, Chamber of Progress, where I previously worked, does not allow Partner companies to vote or veto, making it a relatively independent and agile organization compared to others. NetChoice operates under a similar model, facilitating quicker legal actions without the bureaucratic hurdles often encountered by other associations. The contrasting vote/veto structure was notably a factor in the dissolution of the Internet Association (IA).

However, trade associations are not a panacea for the complexities of litigation. For starters, cost is similarly a barrier. But to initiate legal challenges, the trade associations must establish standing to sue on behalf of their industry, a task complicated by recent judicial rulings like Murthy v. Missouri. Courts are reluctant to grant standing to trade associations without explicit declarations from member companies about the specific harms they would face under the law in question. But these declarations are public, compelling companies to openly oppose the law, and thus exposing them to the same political and public scrutiny they might seek to avoid by leaving it to their associations.

Moreover, filing a declaration exposes companies to legal risks. It enables state defendants to request discovery into the declaring company, potentially leading to invasive examinations of the company’s operations—a deterrent for many. Given the proliferation of problematic laws across the U.S., a company that files a declaration once may be hesitant to do so repeatedly, especially if other companies remain reluctant to expose themselves similarly. And while it may seem like NetChoice is everywhere when it comes to the laws they have successfully challenged, there still remain several unconstitutional tech laws on the books today that have yet to be challenged, like the New York SAFE For Kids Act, possibly due to many of these lingering concerns. 

Even for non-declarant companies, the strategy of using trade associations like NetChoice to shield companies from public scrutiny is becoming less effective. Media coverage often portrays challenges brought by associations like NetChoice as if they are directly initiated by the member tech companies themselves. This occurs regardless of whether all of NetChoice’s members actually support the legal actions or not. The perception often leads to public backlash against the companies which can then manifest as company dissatisfaction with their own trades—a risk that all successful trade associations must constantly weigh. 

Another downside to litigation is the tremendous burden placed on third parties responsible for crafting amicus briefs. These briefs, written by entities wholly independent from the litigants, are not merely echoes of a plaintiff’s arguments; they provide courts with varied legal and policy perspectives that could be influenced by the law under challenge. Yet, crafting these briefs is an expensive and time-consuming endeavor. A single brief can cost between $20,000 to $50,000 or more, depending on the law firm and the depth required. The effort to rally additional signatories for a brief further multiplies these costs. For organizations like my previous employer, the investment in amicus briefs across multiple legal challenges and at various judicial levels, such as in the cases of NetChoice & CCIA v. Moody/Paxton, represents a significant strain. And though not obligatory (like company declarations), these briefs often play a crucial role in the success or failure of a legal challenge.

Furthermore, litigation may prove to be a flawed strategy simply because it arms lawmakers with insights on how to refine their legislation against future challenges. Each legal victory for groups like NetChoice reveals to state lawmakers how to craft more resilient laws. For example, the recent Moody v. NetChoice & CCIA decision detailed all the ways in which NetChoice’s facial challenge was deficient. Of all the reasons, the biggest was that NetChoice failed to articulate, for every requirement in the Texas and Florida legislation, how that requirement impacted each of the products and services offered by each of NetChoice’s tech company members. In many ways, this could be a near impossible task, especially considering the drafting limits for a party’s brief and time spent at oral arguments. In turn, what this tells lawmakers is that their bills may just survive if they write laws with immense and convoluted requirements that make a facial challenge nearly impossible to thoroughly plead.

The protracted nature of these legal battles further underscores their inefficiency. Years after the initial filing, with one Supreme Court hearing behind us, the merits of the constitutional challenge by NetChoice and CCIA have yet to be addressed. Moving forward, just refining their challenge for appellate consideration might necessitate another Supreme Court review. With states continually enacting problematic laws, the prospect of reaching substantive judicial review seems ever more distant, potentially dragging on for decades.

All this means is that tech litigation is neither a reliable nor sustainable method to address the rising hostility towards the tech industry and the degradation of our rights to access information and express ourselves online. To truly protect online expression—which, yes, means also preserving the technology companies that empower it—we must vigilantly monitor and respond to problematic legislation from its inception. If left unchecked, even seemingly innocuous messaging bills from states like Texas and Florida will gradually erode the foundations of our digital freedoms.

Jess Miers is currently Visiting Assistant Professor of Law at the University of Akron School of Law. She has previously worked for Chamber of Progress, Google, TechFreedom, and Twitter.

Posted on BestNetTech - 21 May 2024 @ 03:25pm

Five Section 230 Cases That Made Online Communities Better

The House Energy and Commerce Committee is holding a hearing tomorrow on “sunsetting” Section 230.

Despite facing criticism, Section 230 has undeniably been a cornerstone in the architecture of the modern web, fostering a robust market for new services, and enabling a rich diversity of ideas and expressions to flourish. Crucially, Section 230 empowers platforms to maintain community integrity through the moderation of harmful content.

With that, it’s somewhat surprising that the proposal to sunset Section 230 has garnered Democratic support, given that Section 230 has historically empowered social media services to actively remove content that perpetuates racism and bigotry, thus protecting marginalized communities, including individuals identifying as LGBTQ+ and people of color.

As the hearing approaches, I wanted to highlight five instances where Section 230 swiftly and effectively shielded social media platforms from lawsuits that demanded they host harmful content contrary to their community standards. Without Section 230, online services would face prolonged and costlier legal battles to uphold their right to moderate content — a right guaranteed by the First Amendment.

Section 230 Empowered Vimeo to Remove ‘Conversion Therapy’ Content

Christian Pastor James Domen and Church United sued Vimeo after the platform terminated their account for posting videos promoting Sexual Orientation Change Efforts (SOCE) (i.e. ‘conversion therapy’), which Vimeo argued violated its content policies.

Plaintiffs argued that Vimeo’s actions were not in good faith and discriminated based on sexual orientation and religion. However, the court found that the plaintiffs failed to demonstrate Vimeo acted in bad faith or targeted them discriminatorily.

The District Court initially dismissed the lawsuit, ruling that Vimeo was protected under Section 230 for its content moderation decisions. On appeal, the Second Circuit upheld the lower court's dismissal. The appellate court emphasized that Vimeo's actions fell within the protections of Section 230, particularly noting that decisions about content moderation are at the platform's discretion when conducted in good faith. [Note: a third revision of the Court's opinion omitted Section 230; however, the case remains a prominent example of how Section 230 ensures the initial dismissal of content removal cases.]

In upholding Vimeo’s decision to remove content promoting conversion therapy, the Court reinforced that Section 230 protects platforms when they choose to enforce community standards that aim to maintain a safe and inclusive environment for all users, including individuals who identify with LGBTQ+ communities.

Notably, the case also illustrates how platforms can be safeguarded against lawsuits that may attempt to reinforce the privilege of majority groups under the guise of discrimination claims.

Case: Domen v. Vimeo, Inc., No. 20-616-cv (2d Cir. Sept. 24, 2021).

Section 230 Empowered Twitter to Remove Intentional Dead-Naming & Mis-Gendering

Meghan Murphy, a self-proclaimed feminist writer from Vancouver, ignited controversy with a series of tweets in January 2018 targeting Hailey Heartless, a transgender woman. Murphy’s posts, which included referring to Heartless as a “white man” and labeling her a “trans-identified male/misogynist,” clearly violated Twitter’s guidelines at the time by using male pronouns and mis-gendering Heartless.

Twitter responded by temporarily suspending Murphy’s account, citing violations of its Hateful Conduct Policy. Despite this, Murphy persisted in her discriminatory rhetoric, posting additional tweets that challenged and mocked the transgender identity. This pattern of behavior led to a permanent ban in November 2018, after Murphy repeatedly engaged in what Twitter identified as hateful conduct, including dead-naming and mis-gendering other transgender individuals.

In response, Murphy sued Twitter alleging, among other claims, that Twitter had engaged in viewpoint discrimination. Both the district and appellate courts held that the actions taken by Twitter to enforce its policies against hateful conduct were consistent with Section 230.

The case of Meghan Murphy underscores the pivotal role of Section 230 in empowering platforms like Twitter to maintain safe and inclusive environments for all users, including those identifying as LGBTQ+.

Case: Murphy v. Twitter, Inc., 2021 WL 221489 (Cal. App. Ct. Jan. 22, 2021).

Section 230 Empowered Twitter to Remove Hateful & Derogatory Content

In 2018, Robert M. Cox tweeted a highly controversial statement criticizing Islam, which led to Twitter suspending his account.

“Islam is a Philosophy of Conquests wrapped in Religious Fantasy & uses Racism, Misogyny, Pedophilia, Mutilation, Torture, Authoritarianism, Homicide, Rape . . . Peaceful Muslims are Marginal Muslims who are Heretics & Hypocrites to Islam. Islam is . . .”

To regain access, Cox was required to delete the offending tweet and others similar in nature. Cox then sued Twitter, seeking reinstatement and damages, claiming that Twitter had unfairly targeted his speech. The South Carolina District Court, however, upheld the suspension, citing Section 230:

“the decision to furnish an account, or prohibit a particular user from obtaining an account, is itself publishing activity. Therefore, to the extent Plaintiff seeks to hold the Defendant liable for exercising its editorial judgment to delete or suspend his account as a publisher, his claims are barred by § 230(c) of the CDA.”

In other words, actions taken upon third-party content, such as content removal and account termination, are wholly within the scope of Section 230 protection.

Like the Murphy case, Cox v. Twitter emphasizes the importance of Section 230 in empowering platforms like Twitter to decisively and swiftly remove hateful content, maintaining a healthier online environment without getting bogged down in lengthy legal disputes.

Case: Cox v. Twitter, Inc., 2:18–2573-DCN-BM (D.S.C.).

Section 230 Empowered Facebook to Remove Election Disinformation

In April 2018, Facebook took action against the Federal Agency of News (FAN) by shutting down their Facebook account and page. Facebook cited violations of its community guidelines, emphasizing that the closures were part of a broader initiative against accounts controlled by the Internet Research Agency (IRA), a group accused of manipulating public discourse during the 2016 U.S. presidential elections. This action was part of Facebook’s ongoing efforts to enhance its security protocols to prevent similar types of interference in the future.

In response, FAN filed a lawsuit against Facebook, which led to a legal battle centered on whether Facebook's actions violated the First Amendment or other legal rights of FAN. The Court, however, determined that Facebook was not a state actor, nor had it engaged in any joint action with the government that would make it subject to First Amendment constraints. The Court also dismissed FAN's claims for damages, citing Section 230.

In an attempt to avoid Section 230, FAN argued that Facebook’s promotion of FAN’s content via Facebook’s recommendation algorithms converts FAN’s content into Facebook’s content. The Court didn’t buy it:

Plaintiffs make a similar argument — that recommending FAN’s content to Facebook users through advertisements makes Facebook a provider of that content. The Ninth Circuit, however, held that such actions do not create “content in and of themselves.”

The FAN case illustrates the critical role Section 230 plays in empowering platforms like Facebook to decisively address and mitigate election-related disinformation. By shielding platforms that act swiftly against entities that violate their terms of service, particularly those involved in spreading divisive or manipulative content, Section 230 ensures that social media services can remain vigilant guardians against the corruption of public discourse.

Case: Federal Agency of News LLC v. Facebook, Inc., 2020 WL 137154 (N.D. Cal. Jan. 13, 2020).

Section 230 Empowered Facebook to Ban Hateful Content

Laura Loomer, an alt-right activist, filed lawsuits against Facebook (and Twitter) after her account was permanently banned. Facebook labeled Loomer as “dangerous,” a designation that she argued was both wrongful and harmful to her professional and personal reputation. Facebook's classification of Loomer under this term was based on its assessment that her activities and statements online were aligned with behaviors that promote or engage in violence and hate:

“To the extent she alleges Facebook called her “dangerous” by removing her accounts pursuant to its DIO policy and describing its policy generally in the press, the law is clear that calling someone “dangerous” — or saying that she “promoted” or “engaged” in “hate” — is a protected statement of opinion. Even if it were not, Ms. Loomer cannot possibly meet her burden to prove that it would be objectively false to describe her as “dangerous” or promoting or engaging in “hate” given her widely reported controversial public statements. To the extent Ms. Loomer is claiming, in the guise of a claim for “defamation by implication,” that Facebook branded her a “terrorist” or accused her of conduct that would also violate the DIO policy, Ms. Loomer offers no basis to suggest (as she must) that Facebook ever intended or endorsed that implication.”

Loomer challenged Facebook’s decision on the grounds of censorship and discrimination against her political viewpoints. However, the Court ruled in favor of Facebook, citing Section 230 among other reasons. The Court’s decision emphasized that as a private company, Facebook has the right to enforce its community standards and policies, including the removal of users it deems as violating these policies.

Case: Loomer v. Zuckerberg, 2023 WL 6464133 (N.D. Cal. Sept. 30, 2023).

Jess Miers is Senior Counsel to the Chamber of Progress and a Section 230 expert. This post originally appeared on Medium and is republished here with permission.

Posted on BestNetTech - 16 August 2023 @ 10:44am

California’s SB 680: Social Media ‘Addiction’ Bill Heading For A First Amendment Collision

Similar to the “Age Appropriate Design Code” (AADC) legislation that became law last year, California’s latest effort to regulate online speech comes in the form of SB 680, a bill by Sen. Nancy Skinner targeting the designs, algorithms, and features of online services that host user-created content, with a specific focus on preventing harm or addiction risks to children.

SB 680 prohibits social media platforms from using a design, algorithm, or feature that causes a child user, 16 years or younger, to inflict harm on themselves or others, develop an eating disorder, or experience addiction to the social media platform. Proponents of SB 680 claim that the bill does not seek to restrict speech but rather addresses the conduct of the Internet services within its scope.

However, as Federal Judge Beth Labson Freeman pointed out during a recent court hearing challenging last year’s age-appropriate design law, if content analysis is required to determine the applicability of certain restrictions, it becomes content-based regulation. SB 680 faces a similar problem.

Designs, Algorithms, and Features are Protected Expression

To address the formidable obstacle presented by the First Amendment, policymakers often resort to “content neutrality” arguments to support their policing of expression. California’s stance in favor of AADC hinges on the very premise that AADC regulates conduct over content. Sen. Skinner asserted the same about SB 680, emphasizing that the bill is solely focused on conduct and not content.

“We used our best legal minds available […] to craft this in a way that did not run afoul of those other either constitutional or other legal jurisdictional areas. [T]hat is why [SB 680] is around the design features and the algorithms and such.”

However, the courts have consistently held otherwise, and precedent reveals that these bills are inextricably intertwined with content despite such claims.

The Supreme Court has long held that private entities such as bookstores (Bantam Books, Inc. v. Sullivan (1963)), cable companies (Manhattan Community Access Corporation v. Halleck (2019)), newspapers (Miami Herald Publishing Co. v. Tornillo (1974)), video game distributors (Brown v. Entertainment Merchants Association (2011)), parade organizers (Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston (1995)), pharmaceutical companies (Sorrell v. IMS Health, Inc. (2011)), and even gas & electric companies (Pacific Gas and Electric Co. v. Public Utilities Commission (1986)) have a First Amendment right to choose how they curate, display, and deliver preferred messages. This principle extends to online publishers as well, as the Court affirmed in Reno v. ACLU in 1997, emphasizing the First Amendment protection for online expression.

Moreover, courts have explicitly recognized that algorithms themselves constitute speech and thus deserve full protection under the First Amendment. In cases like Search King, Inc. v. Google Technology, Inc. and Sorrell, the courts held that search engine results and data processing are expressive activities, and algorithms used to generate them are entitled to constitutional safeguards.

In a more recent case, NetChoice v. Moody (2022), the U.S. Court of Appeals for the Eleventh Circuit declared certain provisions of Florida’s social media anti-bias law as unconstitutional, affirming that social media services’ editorial decisions — even via algorithm — constitute expressive activity.

Further, the Supreme Court's stance in Twitter, Inc. v. Taamneh (2023) supports the idea that algorithms are merely one aspect of an overall publication infrastructure, warranting protection under the First Amendment.

This precedent underscores a general reluctance of the courts to differentiate between the methods of publication and the underlying messages conveyed. In essence, the courts have consistently acknowledged that the medium of publication is intricately linked to its content. Laws like SB 680 and the AADC are unlikely to persuade the courts to draw any lines.

SB 680’s Not-So-Safe Harbor Provision is Prior Restraint

Sen. Skinner also suggested at a legislative hearing that SB 680 is not overly burdensome for tech companies due to the inclusion of a “safe harbor” provision. This provision offers protection to companies conducting quarterly audits of their designs, algorithms, and features that may potentially harm users under 16. Companies that “correct” any problematic practices within 60 days of the audit are granted the safe harbor.

However, the safe harbor provision is yet another violation of the First Amendment. In practice, this provision acts as a prior restraint, compelling tech companies to avoid publication decisions that could be seen as violations for users under 16. The requirement to “correct” practices before publication restricts their freedom to operate.

Recall that the AADC also includes a similar requirement for mandatory data privacy impact assessments (DPIAs). Although the State of California defended this provision by arguing that it doesn’t mandate companies to alter the content they host, Judge Freeman disagreed, noting that the DPIA provision in the AADC forces social media services to create a “timed-plan” to “mitigate” their editorial practices.

In reality, both the “safe harbor” provisions of the AADC and SB 680 lead to services refraining from implementing certain designs, algorithms, or features that could potentially pose risks to individuals under 16. This cautious approach even extends to features that may enhance the online environment for parents and children, such as kid-friendly alternatives to products and services offered to the general public.

The online world, like the offline world, carries inherent risks, and services continually strive to assume and mitigate those risks. However, laws like the AADC and SB 680 make it too risky for services to make meaningful efforts in creating a safer online environment, ultimately hindering progress towards a safer web.

SB 680 is a Solution in Search of a Lawsuit

In a manner akin to newspapers making decisions about the content they display above the fold, letters to the editor they choose to publish, or the stories and speakers they feature, social media services also make choices regarding the dissemination of user-created content. While newspapers rely on human editors to diligently apply their editorial guidelines, social media companies use algorithms to achieve a similar objective.

However, it is puzzling that newspapers rarely face the kind of political scrutiny experienced by their online counterparts today. The idea of the government telling the New York Times how to arrange their stories in print editions seems inconceivable. But for some reason, we don’t react with similar concern when the government attempts to dictate how websites should display user content.

Despite an abundance of legal precedents upholding First Amendment protections for the publication tools that enable the delivery of protected expression, California lawmakers persist with SB 680. The federal courts’ skepticism toward the AADC law should be a warning light: If SB 680 becomes law this Fall, California will once again find itself embroiled in an expensive legal battle over online expression.

Jess Miers is Legal Advocacy Counsel at Chamber of Progress. This article was originally published on Medium and republished here with permission.

Posted on BestNetTech - 17 March 2023 @ 11:59am

Yes, Section 230 Should Protect ChatGPT And Other Generative AI Tools

Question Presented: Does Section 230 Protect Generative AI Products Like ChatGPT?

As the buzz around Section 230 and its application to algorithms intensifies in anticipation of the Supreme Court's response, 'generative AI' has soared in popularity among users and developers, raising the question: does Section 230 protect generative AI products like ChatGPT? Matt Perault, a prominent technology policy scholar and expert, thinks not, as he discussed in his recently published Lawfare article: Section 230 Won't Protect ChatGPT.

Perault's main argument is as follows: because of the nature of generative AI, ChatGPT operates as a co-creator (or material contributor) of its outputs and therefore could be considered the 'information content provider' of problematic results, ineligible for Section 230 protection. The co-authors of Section 230, former Representative Chris Cox and Sen. Ron Wyden, have also suggested that their law doesn't grant immunity to generative AI.

I respectfully disagree with both the co-authors of Section 230 and Perault, and offer the counterargument: Section 230 does (and should) protect products like ChatGPT.

It is my opinion that generative AI does not demand exceptional treatment, especially since, as it currently stands, generative AI is not exceptional technology (an understandably provocative take to which we'll soon return).

But first, a refresher on Section 230.

Section 230 Protects Algorithmic Curation and Augmentation of Third-Party Content 

Recall that Section 230 says websites and users are not liable for the content they did not create, in whole or in part. To evaluate whether the immunity applies, the Barnes v. Yahoo! Court provided a widely accepted three-part test:

  1. The defendant is an interactive computer service; 
  2. The plaintiff’s claim treats the defendant as a publisher or speaker; and
  3. The plaintiff’s claim derives from content the defendant did not create. 
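
Purely as an illustration of the structure of the three prongs above (my own schematic, not any court's actual reasoning, and every name in it is hypothetical), the test reads like a simple all-or-nothing checklist:

    from dataclasses import dataclass

    @dataclass
    class Claim:
        # Hypothetical booleans standing in for the three Barnes v. Yahoo! prongs.
        defendant_is_interactive_computer_service: bool  # prong 1
        claim_treats_defendant_as_publisher: bool        # prong 2
        content_created_by_someone_else: bool            # prong 3

    def section_230_likely_bars(claim: Claim) -> bool:
        # Immunity is only in play when all three prongs are satisfied; in real
        # litigation each prong is a contested legal question, not a boolean.
        return (
            claim.defendant_is_interactive_computer_service
            and claim.claim_treats_defendant_as_publisher
            and claim.content_created_by_someone_else
        )

    # Example: a defamation claim against a website over a user's post.
    print(section_230_likely_bars(Claim(True, True, True)))  # True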

The first prong is not typically contested. Indeed, the latter prongs are usually the flashpoint(s) of most Section 230 cases. And in the case of ChatGPT, the third prong seems especially controversial. 

Section 230’s statutory language states that a website becomes an information content provider when it is “responsible, in whole or in part, for the creation or development” of the content at issue. In their recent Supreme Court case challenging Section 230’s boundaries, the Gonzalez Petitioners assert that the use of algorithms to manipulate and display third-party content precludes Section 230 protection because the algorithms, as developed by the defendant website, convert the defendant into an information content provider. But existing precedent suggests otherwise.

For example, the Court in Fair Housing Council of San Fernando Valley v. Roommates.com (aka 'the Roommates case'), a case often invoked to evade Section 230, held that it is not enough for a website to merely augment the content at issue to be considered a co-creator or developer. Rather, the website must have materially contributed to the content's alleged unlawfulness. Or, as the majority put it, "[i]f you don't encourage illegal content, or design your website to require users to input illegal content, you will be immune."

The majority also expressly distinguished Roommates.com from "ordinary search engines," noting that unlike Roommates.com, search engines like Google do not use unlawful criteria to limit the scope of searches conducted (or results delivered), nor are they designed to achieve illegal ends. In other words, the majority suggests that websites retain immunity when they provide neutral tools to facilitate user expression.

While “neutrality” brings about its own slew of legal ambiguities, the Roommates Court offers some clarity suggesting that websites with a more hands-off approach to content facilitation are safer than websites that guide, encourage, coerce, or demand users produce unlawful content. 

For example, while the Court rejected Roommates.com's Section 230 defense for its allegedly discriminatory drop-down options, the Court simultaneously upheld Section 230's application to the "additional comments" option offered to Roommates.com users. The "additional comments" were separately protected because Roommates did not solicit, encourage, or demand their users provide unlawful content via the web form. In other words, a blank web form that simply asks for user input is a neutral tool, eligible for Section 230 protection, regardless of how the user actually uses the tool.
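
To make that distinction concrete, here is a schematic of my own (not drawn from the opinion) contrasting a form that requires users to pick from site-authored criteria with a neutral, blank input field; the field names and options are illustrative placeholders:

    # Schematic only: two ways a site might collect user input.
    # A form that requires choosing from site-authored options shapes the content;
    # a blank free-text field leaves creation entirely to the user.
    GUIDED_FORM = {
        "field": "roommate_preferences",
        "type": "dropdown",
        "options": ["placeholder choice A", "placeholder choice B"],  # authored by the site
        "required": True,
    }

    NEUTRAL_FORM = {
        "field": "additional_comments",
        "type": "free_text",   # blank box; whatever appears here is the user's own creation
        "required": False,
    }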

The Barnes Court would later reiterate the neutral tools argument, noting that the provision of neutral tools to carry out what may be unlawful or illicit content does not amount to ‘development’ for the purposes of Section 230. Hence, while the ‘material contribution’ test is rather nebulous (especially for emerging technologies), it is relatively clear that a website must do something more than just augmenting, curating, and displaying content (algorithmically or otherwise) to transform into the creator or developer of third-party content.

The Court in Kimzey v. Yelp offers further clarification: 

"[T]he material contribution test makes a 'crucial distinction between, on the one hand, taking actions (traditional to publishers) that are necessary to the display of unwelcome and actionable content and, on the other hand, responsibility for what makes the displayed content illegal or actionable.'"

So, what does this mean for ChatGPT?

The Case For Extending Section 230 Protection to ChatGPT

In his line of questioning during the Gonzalez oral arguments, Justice Gorsuch called into question Section 230’s application to generative AI technologies. But before we can even address the question, we need to spend some time understanding the technology. 

Products like ChatGPT use large language models (LLMs) to produce reasonable, human-sounding continuations of text. In other words, as discussed here by Stephen Wolfram, renowned computer scientist, mathematician, and creator of WolframAlpha, ChatGPT's core function is to "continue text in a reasonable way, based on what it's seen from the training it's had (which consists in looking at billions of pages of text from the web, etc)."

While ChatGPT is impressive, the science behind it is not necessarily remarkable. Computing technology reduces complex mathematical computations into step-by-step functions that the computer can then solve at tremendous speeds. As humans, we do this all the time, just much slower than a computer. For example, when we’re asked to do non-trivial calculations in our heads, we start by breaking up the computation into smaller functions on which mental math is easily performed until we arrive at the answer.
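
As a toy illustration of that decomposition (my example, not Wolfram's), here is how a "hard" mental multiplication reduces to a handful of easy steps a machine can grind through instantly:

    def multiply_by_place_value(a: int, b: int) -> int:
        # Break b into its place values and sum the easy partial products,
        # e.g. 47 * 23 = 47*3 + 47*20 = 141 + 940 = 1081.
        total = 0
        place = 1
        while b > 0:
            digit = b % 10
            total += a * digit * place
            b //= 10
            place *= 10
        return total

    print(multiply_by_place_value(47, 23))  # 1081, same as 47 * 23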

Tasks that we assume are fundamentally impossible for computers to solve are said to involve ‘irreducible computations’ (i.e. computations that cannot be simply broken up into smaller mathematical functions, unaided by human input). Artificial intelligence relies on neural networks to learn and then ‘solve’ said computations. ChatGPT approaches human queries the same way. Except, as  Wolfram notes, it turns out that said queries are not as sophisticated to compute as we may have thought: 

“In the past there were plenty of tasks—including writing essays—that we’ve assumed were somehow “fundamentally too hard” for computers. And now that we see them done by the likes of ChatGPT we tend to suddenly think that computers must have become vastly more powerful—in particular surpassing things they were already basically able to do (like progressively computing the behavior of computational systems like cellular automata).

But this isn’t the right conclusion to draw. Computationally irreducible processes are still computationally irreducible, and are still fundamentally hard for computers—even if computers can readily compute their individual steps. And instead what we should conclude is that tasks—like writing essays—that we humans could do, but we didn’t think computers could do, are actually in some sense computationally easier than we thought.

In other words, the reason a neural net can be successful in writing an essay is because writing an essay turns out to be a “computationally shallower” problem than we thought. And in a sense this takes us closer to “having a theory” of how we humans manage to do things like writing essays, or in general deal with language.”

In fact, ChatGPT is even less sophisticated when it comes to its training. As Wolfram asserts:

“[In] ChatGPT as it currently is, the situation is actually much more extreme, because the neural net used to generate each token of output is a pure “feed-forward” network, without loops, and therefore has no ability to do any kind of computation with nontrivial “control flow.””

Put simply, ChatGPT uses predictive algorithms and an array of data made up entirely of publicly available information online to respond to user-created inputs. The technology is not sophisticated enough to operate outside of human-aided guidance and control, which means that ChatGPT (and similarly situated generative AI products) is functionally akin to "ordinary search engines" and predictive technology like autocomplete.
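
As a rough sketch of that mechanism (a toy of my own, not OpenAI's actual architecture), a next-token predictor does nothing more than repeatedly score candidate continuations of the text so far and append a likely one:

    import random

    # Toy "model": maps the last token to weighted candidates for the next token.
    # In a real LLM these weights come from a trained neural network; here they
    # are hard-coded purely to illustrate the predict-the-next-token loop.
    TOY_MODEL = {
        "<start>": {"section": 1.0},
        "section": {"230": 1.0},
        "230": {"protects": 0.7, "says": 0.3},
        "protects": {"websites": 1.0},
        "says": {"websites": 1.0},
        "websites": {"<end>": 1.0},
    }

    def generate(prompt, max_tokens=10):
        # The output is driven entirely by the user's prompt plus training data;
        # the loop just continues the text "in a reasonable way."
        tokens = ["<start>"] + list(prompt)
        for _ in range(max_tokens):
            candidates = TOY_MODEL.get(tokens[-1], {})
            if not candidates:
                break
            choices, weights = zip(*candidates.items())
            next_token = random.choices(choices, weights=weights, k=1)[0]
            if next_token == "<end>":
                break
            tokens.append(next_token)
        return tokens[1:]

    print(" ".join(generate(["section"])))  # e.g. "section 230 protects websites"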

Now we apply Section 230. 

For the most part, the courts have consistently applied Section 230 to algorithmically generated outputs. For example, the Sixth Circuit in O’Kroley v. Fastcase Inc. upheld Section 230 for Google’s automatically generated snippets that summarize and accompany each Google result. The Court notes that even though Google’s snippets could be considered a separate creation of content, the snippets derive entirely from third-party information found at each result. Indeed, the Court concludes that contextualization of third-party content is in fact a function of an ordinary search engine. 

Similarly, in Obado v. Magedson, Section 230 applies to search result snippets. The Court says: 

Plaintiff also argues that Defendants displayed through search results certain “defamatory search terms” like “Dennis Obado and criminal” or posted allegedly defamatory images with Plaintiff’s name. As Plaintiff himself has alleged, these images at issue originate from third-party websites on the Internet which are captured by an algorithm used by the search engine, which uses neutral and objective criteria. Significantly, this means that the images and links displayed in the search results simply point to content generated by third parties. Thus, Plaintiff’s allegations that certain search terms or images appear in response to a user-generated search for “Dennis Obado” into a search engine fails to establish any sort of liability for Defendants. These results are simply derived from third-party websites, based on information provided by an “information content provider.” The linking, displaying, or posting of this material by Defendants falls within CDA immunity.

The Court also nods to Roommates:

“None of the relevant Defendants used any sort of unlawful criteria to limit the scope of searches conducted on them; “[t]herefore, such search engines play no part in the ‘development’ of the unlawful searches” and are acting purely as an interactive computer service…”

The Court goes further, extending Section 230 to autocomplete (i.e. when the service at issue uses predictive algorithms to suggest and preempt a user’s query): 

“suggested search terms auto-generated by a search engine do not remove that search engine from the CDA’s broad protection because such auto-generated terms “indicates only that other websites and users have connected plaintiff’s name” with certain terms.”

Like Google Search, ChatGPT is entirely driven by third-party input. In other words, ChatGPT does not invent, create, or develop outputs absent any prompting from an information content provider (i.e. a user). Further, nothing on the service expressly or impliedly encourages users to submit unlawful queries. In fact, OpenAI continues to implement guardrails that force ChatGPT to ignore requests that would demand problematic and/or unlawful responses. Compare this to Google Search, which may actually still provide a problematic or even unlawful result. Perhaps ChatGPT actually improves the baseline for ordinary search functionality.
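
That guardrail idea can also be sketched in code; this is a hypothetical pre-filter of my own invention, not OpenAI's actual moderation pipeline, and the blocked topics are illustrative placeholders:

    # Hypothetical guardrail: screen a user's prompt before it ever reaches the
    # model, and refuse rather than generate a response for flagged requests.
    BLOCKED_TOPICS = {"build a weapon", "dox", "defame"}  # illustrative placeholders

    def guarded_answer(prompt: str) -> str:
        lowered = prompt.lower()
        if any(topic in lowered for topic in BLOCKED_TOPICS):
            # The service declines the query instead of producing a risky output.
            return "Sorry, I can't help with that request."
        # In a real service the prompt would be handed to the LLM here.
        return f"[model response to: {prompt!r}]"

    print(guarded_answer("Please dox my neighbor"))  # refusal
    print(guarded_answer("Summarize Section 230"))   # passes through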

Indeed, ChatGPT essentially functions like the “additional comments” web form in Roommates. And while ChatGPT may “transform” user input into a result that responds to the user-driven query, that output is entirely composed of third-party information scraped from the web. Without more, this transformation is simply an algorithmic augmentation of third-party content (much like Google’s snippets). And as discussed, algorithmic compilations or augmentations of third-party content are not enough to transform the service into an information content provider (e.g. Roommates; Batzel v. Smith; Dyroff v. The Ultimate Software Group, Inc.; Force v. Facebook). 

The Limit Does Exist

Of course, Section 230's coverage is not without its limits. There's no doubt that future generative AI defendants, like OpenAI, will face an uphill battle in persuading a court. Not only do defendants face the daunting challenge of explaining generative AI technologies to less technologically savvy judges, but the current judicial swirl around Section 230 and algorithms also does them no favors.

For example, the Supreme Court could very well hand down a convoluted opinion in Gonzalez that introduces ambiguity as to when Section 230 applies to algorithmic curation and augmentation. Such an opinion would only serve to undermine the precedent discussed above. Indeed, future defendants may find themselves embroiled in convoluted debates about AI's capacity for neutrality. In fact, it would be intellectually dishonest to ignore emerging common law developments that preclude Section 230 protection for claims alleging dangerous or defective product designs (e.g. Lemmon v. Snap, A.M. v. Omegle, Oberdorf v. Amazon).

Further, the Fourth Circuit’s recent decision in Henderson v. Public Data could also prove to be problematic for future AI defendants as it imposes contributive liability for publisher activities that go beyond those of “traditional editorial functions” (which could include any and all publisher functions done via algorithms). 

Lastly, as we saw in the Meta/DOJ settlement regarding Meta's discriminatory practices involving algorithmic targeting of housing advertisements, AI companies cannot easily avoid liability when they materially contribute to the unlawfulness of the result. If OpenAI were to hard-code ChatGPT with unlawful responses, Section 230 would likely be unavailable. However, as you might imagine, this is a non-trivial distinction.

Public Policy Demands Section 230 Protections for Generative AI Technologies

Section 230 was initially established with the recognition that the online world would undergo frequent advancements, and that the law must accommodate these changes to promote a thriving digital ecosystem. 

Generative AI is the latest iteration of web technology that has enormous potential to bring about substantial benefits for society and transform the way we use the Internet. And it’s already doing good. Generative AI is currently used in the healthcare industry, for instance, to improve medical imaging and to speed up drug discovery and development. 

As discussed, courts have developed precedent in favor of Section 230 immunity for online services that solicit or encourage users to create and provide content. Courts have also extended the immunity to online services that facilitate the submission of user-created content. From a legal standpoint, generative AI tools are no different from any other online service that encourages user interaction and contextualizes third-party results.

From a public policy perspective, it is crucial that courts uphold Section 230 immunity for generative AI products. Otherwise, we risk foreclosing on the technology’s true potential. Today, there are tons of variations of ChatGPT-like products offered by independent developers and computer scientists who are likely unequipped to deal with an inundation of litigation that Section 230 typically preempts. 

In fact, generative AI products are arguably more vulnerable to frivolous lawsuits because they depend entirely upon whatever queries or instructions their users may provide, malicious or otherwise. Without Section 230, developers of generative AI services must anticipate and guard against every type of query that could cause harm.

Indeed, thanks to Section 230, companies like OpenAI are doing just that by providing guardrails that limit ChatGPT’s responses to malicious queries. But those guardrails are neither comprehensive nor perfect. And like with all other efforts to moderate awful online content, the elimination of Section 230 could discourage generative AI companies from implementing said guardrails in the first place; a countermove that would enable users to prompt LLMs with malicious queries to bait out unlawful responses subject to litigation. In other words, plaintiffs could transform ChatGPT into their very own personal perpetual litigation machine. 

And as Perault rightfully warns: 

“If a company that deploys an LLM can be dragged into lengthy, costly litigation any time a user prompts the tool to generate text that creates legal risk, companies will narrow the scope and scale of deployment dramatically. Without Section 230 protection, the risk is vast: Platforms using LLMs would be subject to a wide array of suits under federal and state law. Section 230 was designed to allow internet companies to offer uniform products throughout the country, rather than needing to offer a different search engine in Texas and New York or a different social media app in California and Florida. In the absence of liability protections, platforms seeking to deploy LLMs would face a compliance minefield, potentially requiring them to alter their products on a state-by-state basis or even pull them out of certain states entirely…

…The result would be to limit expression—platforms seeking to limit legal risk will inevitably censor legitimate speech as well. Historically, limits on expression have frustrated both liberals and conservatives, with those on the left concerned that censorship disproportionately harms marginalized communities, and those on the right concerned that censorship disproportionately restricts conservative viewpoints.

The risk of liability could also impact competition in the LLM market. Because smaller companies lack the resources to bear legal costs like Google and Microsoft may, it is reasonable to assume that this risk would reduce startup activity.”

Hence, regardless of how we feel about Section 230’s applicability to AI, we will be forced to reckon with the latest iteration of Masnick’s Impossibility Theorem: there is no content moderation system that can meet the needs of all users. The lack of limitations on human awfulness mirrors the constant challenge that social media companies encounter with content moderation. The question is whether LLMs can improve what social media cannot.

Posted on BestNetTech - 2 November 2020 @ 09:35am

Your Problem Is Not With Section 230, But The 1st Amendment

Everyone wants to do something about Section 230. It's baffling how seldom we talk about what happens next. What if Section 230 is repealed tomorrow? Must Twitter cease fact-checking the President? Must Google display all search results in chronological order? Perhaps PragerU would finally have a tenable claim against YouTube; and Jason Fyk might one day return to showering the Facebook masses with his prized collection of pissing videos.

Suffice to say, that's not how any of this works.

Contrary to what seems to be popular belief, Section 230 isn't what's stopping the government from pulling the plug on Twitter for taking down NY Post tweets or exposing bloviating, lying, elected officials. Indeed, without Section 230, plaintiffs with a big tech axe to grind still have a significant hurdle to overcome: The First Amendment.

As private entities, websites have always enjoyed First Amendment (freedom of speech) protections for the content they choose (and choose not) to carry. What many erroneously (and ironically) declare as "censorship" is really no different from the editorial discretion enjoyed by newspapers, broadcasters, and your local bookstore. When it comes to the online world, we simply call it content moderation. The decision to fact-check, remove, reinstate, or simply leave content up is wholly within the First Amendment's purview. On the flip side, as private, non-government actors, websites do not owe their users the same First Amendment protection for their content.

Or, as TechFreedom's brilliant Ashkhen Kazaryan wisely puts it, the First Amendment protects Twitter from Trump, but not Trump from Twitter.

What then is Section 230's use if the First Amendment already stands in the way? Put simply, Section 230 says websites are not liable for third-party content. In practice, Section 230 merely serves as a free speech fast-lane. Under Section 230, websites can reach the same inevitable conclusions they would reach under the First Amendment, only faster and cheaper. Importantly, Section 230 grants websites and users peace of mind, knowing that plaintiffs are less likely to sue them for exercising their editorial discretion, and even if they do, websites and users are almost always guaranteed a fast, cheap, and painless win. That peace of mind is especially crucial for market entrants poised to unseat the big tech incumbents.

With that, it seems that Americans haven't fallen out of love with Section 230; rather, alarmingly, they've fallen out of love with the First Amendment. In case you're wondering if you too have fallen out of love with the freedom of speech, consider the following:

If you’re upset that Twitter and Facebook keep removing content that favors your political viewpoints,

Your problem is with the First Amendment, not Section 230.

If you’re upset that your favorite social media site won’t take down content that offends you,

Your problem is with the First Amendment, not Section 230.

If you’re mad at search engines for indexing websites you don’t agree with,

Your problem is with the First Amendment, not Section 230.

If you're mad at a website for removing your posts, even when it seems unreasonable,

Your problem is with the First Amendment, not Section 230.

If you don’t like the way a website aggregates content on your feed or in your search results,

Your problem is with the First Amendment, not Section 230.

If you wish websites had to carry and remove only specific pre-approved types of content,

Your problem is with the First Amendment, not Section 230.

If you wish social media services had to be politically neutral,

Your problem is with the First Amendment, not Section 230.

If someone wrote a negative online review about you or your business,

Your problem is with the First Amendment, not Section 230.

If you hate pornography,

Your problem is with the First Amendment, not Section 230.

If you hate Trump's Tweets,

Your problem is with the First Amendment, not Section 230.

If you hate fact-checks,

Your problem is with the First Amendment, not Section 230.

If you love fact-checks and wish Facebook had to do more of them,

Your problem is with the First Amendment, not Section 230.

And at the end of the day, if you hate editorial discretion and free speech,

You probably just hate the First Amendment… not Section 230.