Eric Goldman's BestNetTech Profile

Eric Goldman

About Eric Goldman

Posted on BestNetTech - 5 June 2025 @ 01:00pm

A Takedown Of The Take It Down Act

This is a cross post from Prof. Eric Goldman’s blog, mostly written by Prof. Jess Miers, with additional commentary at the end from Eric.

Two things can be true: Non-consensual intimate imagery (NCII) is a serious and gendered harm. And, the “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act,” a/k/a the TAKE IT DOWN Act, is a weapon of mass censorship.

Background

In October 2023, two high school students became the victims of AI-generated NCII. Classmates had used “nudify” tools to create fake explicit images using public photos pulled from their social media profiles. The incident sparked outrage, culminating in a hearing last June where the students’ families called for federal action.

Congress responded with the TAKE IT DOWN Act, introduced by Senator Ted Cruz and quickly co-sponsored by a bipartisan group of lawmakers. On its face, the law targets non-consensual intimate imagery, including synthetic content. In practice, it creates a sweeping speech-removal regime with few safeguards.

Keep in mind, the law was passed under an administration that has shown little regard for civil liberties or dissenting speech. It gives the government broad power to remove online content of which it disapproves and opens the door to selective enforcement. Trump made his intentions clear during his March State of the Union:

And I’m going to use that bill for myself too, if you don’t mind—because nobody gets treated worse than I do online.

Some interpreted this as a reference to a viral AI-generated video of Trump kissing Elon Musk’s feet—precisely the kind of political satire that could be subject to removal under the Act’s broad definitions.

The bill moved unusually fast compared to previous attempts at online speech regulation. It passed both chambers without a single amendment, despite raising serious First Amendment and due process concerns. Following the TikTok ban, it marks another example of Congress enacting sweeping online speech restrictions with minimal debate and virtually no public process.

Senator Booker briefly held up the bill in mid-2024, citing concerns about vague language and overbroad criminal penalties. After public backlash, including pressure from victims’ families, Senator Booker negotiated a few modest changes. The revised bill passed the Senate by unanimous consent in February 2025. The House advanced it in April, ignoring objections from civil liberties groups and skipping any real markup.

President Trump signed the TAKE IT DOWN Act into law in May. The signing ceremony made it seem even more like a thinly veiled threat toward online services that facilitate expression, rather than a legitimate effort to curb NCII. At the ceremony, First Lady Melania Trump remarked:

Artificial Intelligence and social media are the digital candy of the next generation—sweet, addictive, and engineered to have an impact on the cognitive development of our children. But unlike sugar, these new technologies can be weaponized, shape beliefs, and sadly, affect emotions and even be deadly.

And just recently, FTC Chair Andrew Ferguson—handpicked by Trump and openly aligned with his online censorship agenda—tweeted his enthusiasm about enforcing the TAKE IT DOWN Act in coordination with the Department of Homeland Security. Yes, the same agency that has been implicated in surveilling protestors and disappearing U.S. citizens off the streets during civil unrest. 

Statutory Analysis

Despite overwhelming (and shortsighted) support from the tech industry, the TAKE IT DOWN Act spells trouble for any online service that hosts third-party content. 

The law contains two main provisions: one criminalizing the creation, publication, and distribution of authentic, manipulated, and synthetic NCII, and another establishing a notice-and-takedown system for online services hosting NCII that extends to a potentially broader range of online content. 

Section 2: Criminal Prohibition on Intentional Disclosure of Nonconsensual Intimate Visual Depictions

Section 2 of the Act creates new federal criminal penalties for the publication of non-consensual “intimate visual depictions,” including both real (“authentic”) and technologically manipulated or AI-generated imagery (“digital forgeries”). These provisions are implemented via amendments to Section 223 of the Communications Act and took effect immediately upon enactment.

Depictions of Adults

The statute applies differently depending on whether the depiction involves an adult or a minor. With respect to depicting adults, it is a federal crime to knowingly publish an intimate visual depiction via an interactive computer service (as defined under Section 230) if the following are met: (1) the image was created or obtained under circumstances where the subject had a reasonable expectation of privacy; (2) the content was not voluntarily exposed in a public or commercial setting; (3) the image is not of public concern; and (4) the publication either was intended to cause harm or actually caused harm (defined to include psychological, financial, or reputational injury).

The statute defines “intimate visual depictions” via 15 U.S.C. § 6851. The definition includes images showing uncovered genitals, pubic areas, anuses, or post-pubescent female nipples, as well as depictions involving the display or transfer of sexual fluids. Images taken in public may still qualify as “intimate” if the individual did not voluntarily expose themselves or did not consent to the sexual conduct depicted.

In theory, the statute exempts pornography that was consensually produced and distributed online. In practice, the scope of that exception is far from clear. One key requirement for triggering criminal liability in cases involving adults is that “what is depicted was not voluntarily exposed by the identifiable individual in a public or commercial setting.” The intent seems to be to exclude lawful adult content from the law’s reach.

But the language is ambiguous. The statute refers to what is depicted—potentially meaning the body parts or sexual activity shown—rather than to the image itself. Under this reading, anyone who has ever publicly or commercially shared intimate content could be categorically excluded from protection under the law, even if a particular image was created or distributed without their consent. That interpretation would effectively deny coverage to adult content creators and sex workers, the very individuals who are often most vulnerable to nonconsensual republishing and exploitation of their content.

Depictions of Children

With respect to depictions of minors, the TAKE IT DOWN Act criminalizes the distribution of any image showing uncovered genitals, pubic area, anus, or female-presenting nipple—or any depiction of sexual activity—if shared with the intent to abuse, humiliate, harass, degrade, or sexually gratify. 

Although the Act overlaps with existing federal child sexual abuse material (CSAM) statutes, it discards the constitutional boundaries that have kept those laws from being struck down as unconstitutional. Under 18 U.S.C. § 2256(8), criminal liability attaches only to depictions of “sexually explicit conduct,” a term courts have narrowly defined to include things like intercourse, masturbation, or lascivious exhibition of genitals. Mere nudity doesn’t typically qualify, at least not without contextual cues. Even then, prosecutors must work to show that the image crosses a clear, judicially established threshold.

TAKE IT DOWN skips the traditional safeguards that typically constrain speech-related criminal laws. It authorizes felony charges for publishing depictions of minors that include certain body parts if done with the intent to abuse, humiliate, harass, degrade, arouse, or sexually gratify. But these intent standards are left entirely undefined. A family bathtub photo shared with a mocking or off-color caption could be framed as intended to humiliate or, in the worst-case reading, arouse. A public beach photo of a teen, reposted with sarcastic commentary, might be interpreted as degrading. Of course, these edge cases should be shielded by traditional First Amendment defenses.

We’ve seen this before. Courts have repeatedly struck down or narrowed CSAM laws that overreach, particularly when they criminalize nudity or suggestive content that falls short of actual sexual conduct, such as family photos, journalism, documentary film, and educational content. 

TAKE IT DOWN also revives the vagueness issues that have plagued earlier efforts to curb child exploitation online. Terms like “harass,” “humiliate,” or “gratify” are inherently subjective and undefined, which invites arbitrary enforcement. In effect, the law punishes speakers based on perceived motive rather than the objective content itself.  

Yes, the goal of protecting minors is laudable. But noble intentions don’t save poorly drafted laws. Courts don’t look the other way when speech restrictions are vague or overbroad just because the policy behind them sounds good. If a statute invites constitutional failure, it doesn’t end up protecting anyone. In short, the TAKE IT DOWN Act replicates the very defects that have led courts to limit or strike down earlier child-protection laws. 

Digital Forgeries

The statute also criminalizes the publication of “digital forgeries” without the depicted person’s consent, which differs from the “reasonable expectation of privacy” element for authentic imagery. A digital forgery is defined as any intimate depiction created or altered using AI, software, or other technological means such that it is, in the eyes of a reasonable person, indistinguishable from an authentic image. This standard potentially sweeps in a wide range of synthetic and altered content, regardless of whether a viewer actually believed the image was real or whether the underlying components were independently lawful.

Compared to existing CSAM laws, the TAKE IT DOWN Act also uses a more flexible visual standard when it comes to “digital forgeries.” Under CSAM law, synthetic or computer-generated depictions are only criminalized if they are “indistinguishable from that of a real minor engaging in sexually explicit conduct.” That standard makes it difficult to prosecute deepfakes or AI nudes unless they are photorealistic and sexually explicit. But under TAKE IT DOWN, a digital forgery is covered if it “when viewed as a whole by a reasonable person, is indistinguishable from an authentic visual depiction of the individual.” The focus isn’t on whether the depiction looks like a real child in general, but whether it looks like a real, identifiable person. This makes the law far more likely to apply to a broader range of AI-generated depictions involving minors, even if the underlying content wouldn’t meet the CSAM threshold. As discussed in the implications section, this too invites First Amendment scrutiny. 

There are several exceptions. The statute does not apply to disclosures made as part of law enforcement or intelligence activity, nor to individuals acting reasonably and in good faith when sharing content for legitimate legal, medical, educational, or professional purposes. The law also exempts people sharing intimate content of themselves (as long as it contains nudity or is sexual in nature) and content already covered by federal CSAM laws.

Penalties include fines and up to two years’ imprisonment for adult-related violations, and up to three years for violations involving minors. Threats to publish such material can also trigger criminal liability.

Finally, the Act leaves unanswered whether online services could face criminal liability for failing to remove known instances of authentic or AI-generated NCII. Because Section 230 never applies to federal criminal prosecutions, intermediaries cannot rely on it as a defense against prosecution. If a service knowingly hosts unlawful material, including not just NCII itself, but threats to publish it, such as those made in private messages, the government may claim the service is “publishing” illegal content in violation of the statute.

The Supreme Court’s decision in Taamneh provides some insulation. It held that general awareness of harmful conduct on a service does not amount to the kind of specific knowledge required to establish aiding-and-abetting liability. But the TAKE IT DOWN Act complicates that picture. Once a service receives a takedown request for a particular image, it arguably acquires actual knowledge of illegal content. If the service fails to act within the Act’s 48-hour deadline, it’s not clear whether that inaction could form the basis for a criminal charge under the statute’s separate enforcement provisions.

As Eric discusses below, there’s also no clear answer to what happens when someone re-uploads content that had previously been removed (or even new violating content). Does prior notice of a particular individual’s bad acts create the kind of ongoing knowledge that turns continued hosting into criminal publication? That scenario presents a narrower question than Taamneh addressed, and the statute doesn’t clarify how courts should treat repeat violations.

Section 3: Notice and Removal of Nonconsensual Intimate Visual Depictions

Alongside its criminal provisions, the Act imposes new civil compliance obligations on online services that host user-generated content. Covered services must implement a notice-and-takedown process to remove intimate visual depictions (real or fake) within one year of the law’s enactment. The process must allow “identifiable individuals” or their authorized agents to request removal of non-consensual intimate images. Once a valid request is received, the service has 48 hours to remove the requested content. Failure to comply subjects the service to enforcement by the Federal Trade Commission under its unfair or deceptive practices authority. 

The law applies to any public-facing website, app, or online service that primarily hosts user-generated content—or, more vaguely, services that “publish, curate, host, or make available” non-consensual intimate imagery as part of their business. This presumably includes social media services, online pornography services, file-sharing tools, image boards, and arguably even private messaging apps. It likely includes search engines as well, and the “make available” standard could apply to user-supplied links to other sites. Notably, the law excludes Internet access providers, email services, and services where user-submitted content is “incidental” to the service’s primary function. This carveout appears designed to protect online retailers, streaming services like Netflix, and news media sites with comment sections. However, the ambiguity around what qualifies as “incidental” will likely push services operating in the gray zone toward over-removal or disabling functionality altogether. 

Generative AI tools likely fall within the scope of the law. If a system generates and displays intimate imagery, whether real or synthetic, at a user’s direction, it could trigger takedown obligations. However, the statute is silent on how these duties apply to services that don’t “host” content in the traditional sense. In theory, providers could remove specific outputs if stored, or even retrain the model to exclude certain images from its dataset. But this becomes far more complicated when the model has already “memorized” the data and internalized it into its parameters. As with many recent attempts to regulate AI, the hard operational questions—like how to unwind learned content—are left unanswered, effectively outsourced to developers to figure out later.

Though perhaps inspired by the structure of existing notice-and-takedown regimes, such as the DMCA’s copyright takedown framework, the implementation here veers sharply from existing content moderation norms. A “valid” TAKE IT DOWN request requires four components: a signature, a description of the content, a good faith statement of non-consent, and contact information. But that’s where the rigor ends.

There is no requirement to certify a takedown request under penalty of perjury, nor any legal consequence for impersonating someone or falsely claiming to act on their behalf. The online services, not the requester, bear the burden of verifying the identity of both the requester and the depicted individual, all within a 48-hour window. In practice, most services will have no realistic option other than to take the request at face value and remove the content, regardless of whether it’s actually intimate or non-consensual. This lack of verification opens the door to abuse, not just by individuals but by third-party services. There is already a cottage industry emerging around paid takedown services, where companies are hired to scrub the Internet of unwanted images by submitting removal requests on behalf of clients, whether authorized or not. This law will only bolster that industry. 

The law also only requires a “reasonably sufficient” identification of the content. There’s no obligation to include URLs, filenames, or specific asset identifiers. It’s unclear whether vague descriptions like “nudes of me from college” are sufficient to trigger a takedown obligation. Under the DMCA, this level of ambiguity would likely invalidate a request. Here, such a description might not only be acceptable; ignoring it could be legally actionable.
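To make the contrast with the DMCA concrete, here is a minimal, hypothetical sketch (in Python) of what checking a notice for facial validity could look like on the intake side. The `TakedownRequest` fields simply mirror the Act’s four required components; the class and function names are our own illustration, not anything the statute prescribes. The point is what is absent: nothing in the four components gives the service a way to verify the requester’s identity, their authority, or the truth of the non-consent claim.

```python
from dataclasses import dataclass


@dataclass
class TakedownRequest:
    """Hypothetical intake record mirroring the Act's four required components."""
    signature: str              # physical or electronic signature of the requester or agent
    content_description: str    # "reasonably sufficient" identification -- no URL required
    good_faith_statement: str   # bare assertion of non-consent; not made under penalty of perjury
    contact_info: str           # how the service can reach the requester


def is_facially_valid(req: TakedownRequest) -> bool:
    # The statute asks only whether the four components are present. There is
    # no hook here for verifying who the requester is, whether they are the
    # depicted person (or an authorized agent), or whether the non-consent
    # claim is true -- the service bears that burden separately, inside the
    # same 48-hour window.
    return all(field.strip() for field in (
        req.signature,
        req.content_description,
        req.good_faith_statement,
        req.contact_info,
    ))


# A vague request sails through this check, and this is the only check a
# service can realistically complete before the clock runs out.
req = TakedownRequest(
    signature="J. Doe",
    content_description="nudes of me from college",
    good_faith_statement="I did not consent to this being published.",
    contact_info="jdoe@example.com",
)
assert is_facially_valid(req)
```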

The statute’s treatment of consent is equally problematic. A requester must assert that the content was published without consent but need not provide any evidence to support the claim, other than a statement of good faith belief. There is no adversarial process, no opportunity for the original uploader to dispute the request, and no mechanism to resolve conflicts where the depicted person may have, in fact, consented. In cases where an authorized agent submits a removal request on someone’s behalf (say, a family member or advocacy group), it’s unclear what happens if the depicted individual disagrees. The law contemplates no process for sorting this out. Services are expected to remove first and ask questions never.

Complicating matters further, the law imposes an obligation to remove not only the reported content but also any “identical copies.” While framed as a measure to prevent whack-a-mole reposting, this provision effectively creates a soft monitoring mandate. Even when the original takedown request is vague or incomplete—which the statute permits—services are still required to scan their systems for duplicates. This must be done despite often having little to no verification of the requester’s identity, authority, or the factual basis for the alleged lack of consent. Worse, online services must defer to the requester’s characterization of the content, even if the material in question may not actually qualify as an “intimate visual depiction” under the statutory definition.
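The “identical copies” obligation has an operational shape as well. Even under the narrowest reading (byte-for-byte duplicates), compliance implies some form of system-wide scanning. Below is a minimal sketch assuming the service already indexes uploads by cryptographic hash; that index, and the function names, are our assumptions, not anything the statute specifies.

```python
import hashlib


def sha256_of(blob: bytes) -> str:
    """Digest used as the exact-match key for stored items."""
    return hashlib.sha256(blob).hexdigest()


def find_identical_copies(reported_blob: bytes,
                          hash_index: dict[str, list[str]]) -> list[str]:
    """Return the IDs of every stored item that is byte-identical to the reported image.

    `hash_index` maps a SHA-256 digest to the item IDs carrying that digest.
    This baseline catches only exact duplicates: a recompressed, cropped, or
    resized repost evades it entirely, which is what pushes services toward
    fuzzier perceptual matching and, with it, more false positives.
    """
    return hash_index.get(sha256_of(reported_blob), [])
```

Anything beyond this exact-match baseline requires judgment calls about what counts as the “same” image, and the statute offers no guidance on where that line sits.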

Lastly, the law grants immunity to online services that remove content in good faith, even if the material doesn’t meet the definition of an intimate visual depiction. This creates a strong incentive to over-remove rather than assess borderline cases, especially when the legal risk for keeping content up outweighs any penalty for taking it down.

(Notably, neither the criminal nor civil provisions of the law expressly carve out satirical, parody, or protest imagery that happens to involve nudity or sexual references.)

* * *

Some implications of the law:

Over-Criminalization of Legal Speech

The law creates a sweeping new category of criminalized speech without the narrow tailoring typically required for content-based criminal statutes. Language surrounding “harm,” “public concern,” and “reasonable expectation of privacy” invites prosecutorial overreach and post-hoc judgments about whether a given depiction implicates privacy interests and consent, even when the speaker may have believed the content was lawful, newsworthy, or satirical.

The statute allows prosecution not only where the speaker knew the depiction was private, but also where they merely should have known. This is a sharp departure from established First Amendment doctrine, which requires at least actual knowledge or reckless disregard for truth in civil defamation cases, let alone criminal ones.

The law’s treatment of consent raises unresolved questions. It separates consent to create a depiction from consent to publish it, but says nothing about what happens when consent to publish is later withdrawn. A person might initially agree to share a depiction with a journalist, filmmaker, or content partner, only to later revoke that permission. The statute offers no clarity on how that revocation must be communicated and whether it must identify specific content versus a general objection.  

To be clear, the statute requires that the speaker “knowingly” publish intimate imagery without consent. So absent notice of revocation, criminal liability likely wouldn’t attach. But what counts as sufficient notice? Can a subject revoke consent to a particular use or depiction? Can they revoke consent across the board? If a journalist reuses a previously approved depiction in a new story, or a filmmaker continues distributing a documentary after one subject expresses discomfort, are those “new” publications requiring fresh consent? The law provides no mechanism for resolving these questions. 

Further, for adult depictions, the statute permits prosecution where the publication either causes harm or was intended to cause harm. This opens the door to criminal liability based not on the content itself, but on its downstream effects, regardless of whether the speaker acted in good faith. The statute includes no explicit exception for newsworthiness, artistic value, or other good-faith purposes, nor does it provide any formal opportunity for a speaker to demonstrate the absence of malicious intent. In theory, the First Amendment (and Taamneh) should cabin the reach, but the text itself leaves too much room for prosecutorial discretion.

The law also does not specify whether the harm must be to the depicted individual or to someone else, leaving open the possibility that prosecutors could treat general moral offense, such as that invoked by anti-pornography advocates, as sufficient. The inclusion of “reputational harm” as a basis for criminal liability is especially troubling. The statute makes no distinction between public and private figures and requires neither actual malice nor reckless disregard, setting a lower bar than what’s required even for civil defamation.

Further, because the law criminalizes “digital forgeries,” and defines them broadly to include any synthetic content indistinguishable, to a reasonable person, from reality, political deepfakes are vulnerable to prosecution. A video of a public official in a compromising scenario, even if obviously satirical or critical, could be treated as a criminal act if the depiction is deemed sufficiently intimate and the official claims reputational harm. [FN] The “not a matter of public concern” carveout is meant to prevent this, but it’s undefined and thus subject to prosecutorial discretion. Courts have repeatedly struggled to draw the line between private and public concern, and the statute offers no guidance.

[FN: Eric’s addition: I call this the Anthony Weiner problem, where his sexting recipients’ inability to prove their claims by showing the receipts would have allowed Weiner to lie without accountability.]

This creates a meaningful risk that prosecutors, particularly those aligned with Trump, could weaponize the law against protest art, memes, or critical commentary. Meta’s prior policy, for example, permitted images of a visible anus or close-up nudity if photoshopped onto a public figure for commentary or satire. Under the TAKE IT DOWN Act, similar visual content could become a target for prosecution or removal, especially when it involves politically powerful individuals. The statute provides plenty of wiggle room for selective enforcement, producing a chilling effect for creators, journalists, documentarians, and artists who work with visual media that is constitutionally protected but suddenly carries legal risk under this law.

With respect to depictions of minors, the law goes further: a person can be prosecuted for publishing an intimate depiction if they did so with the intent to harass or humiliate the minor or arouse another individual. As discussed, the definition of intimate imagery covers non-sexually explicit content, reaching material that is likely broader than what existing CSAM or obscenity laws cover. This means the law creates a lower-tier criminal offense for visual content involving minors, even if the images are not illegal under current federal law.

For “authentic” images, the law could easily reach innocent but revealing photos of minors shared online. As discussed, if a popular family content creator posts a photo of their child in the bathtub (content that arguably shouldn’t be online in the first place) and the government concludes the poster intended to arouse someone else, that could trigger criminal liability under the TAKE IT DOWN Act. Indeed, family vloggers have repeatedly been accused of curating “innocent” content to appeal to their adult male followers as a means of increasing engagement and revenue, despite pushback from parents and viewers. (Parents may be part of the problem). While the underlying content itself is likely legal speech to the extent it doesn’t fall within CSAM or obscenity laws, it could still qualify as illegal content, subject to criminal prosecution, under the Act. 

For AI-generated images, the law takes an even more aggressive approach for minors. Unlike federal CSAM laws, which only cover synthetic images that are “indistinguishable” from a real minor, the TAKE IT DOWN Act applies to any digital forgery that, in the eyes of a reasonable person, appears to depict a specific, identifiable child. That’s a significant shift. The higher standard in CSAM law was crafted to comply with Ashcroft v. Free Speech Coalition, where the Supreme Court struck down a federal ban on virtual CSAM that wasn’t tied to real individuals. The Court’s rationale protected fictional content, including cartoon imagery (think a nude depiction of South Park’s Eric Cartman) as constitutionally protected speech. By contrast, the TAKE IT DOWN Act abandons that distinction and criminalizes synthetic content based on how it appears to a reasonable viewer, not whether it reflects reality or actual harm. That standard is unlikely to survive Ashcroft-level scrutiny and leaves the law open to serious constitutional challenge.

Disproportionate Protections & Penalties For Vulnerable Groups

The TAKE IT DOWN Act is framed as a measure to protect vulnerable individuals, such as the high school students victimized by deepfake NCII. Yet its ambiguities risk leaving some vulnerable groups unprotected, or worse, exposing them to prosecution.

The statute raises the real possibility of criminalizing large numbers of minors. Anytime we’re talking about high schoolers and sharing of NCII, we have to ask whether the law applies to teens who forward nudes—behavior that is unquestionably harmful and invasive, but also alarmingly common. While the statute is framed as a tool to punish adults who exploit minors, its broad language easily sweeps in teenagers navigating digital spaces they may not fully understand. Yes, teens should be more careful with what they share, but that expectation doesn’t account for the impulsiveness, peer pressure, and viral dynamics that often define adolescent behavior online. A nude or semi-nude image shared consensually between peers can rapidly spread beyond its intended audience. Some teens may forward it not to harass or humiliate, but out of curiosity or simply because “everyone else already saw it.” Under the TAKE IT DOWN Act, that alone could trigger federal criminal liability.

With respect to depictions of adults, the risks are narrower but still present. The statute specifies that consent to create a depiction does not equate to consent to publish it, and that sharing a depiction with someone else does not authorize them—or anyone else—to republish it. These provisions are intended to close familiar NCII loopholes, but they also raise questions about how the law applies when individuals post or re-share depictions of themselves. There is no broad exemption for self-publication by adults, only the same limited carveout for depictions involving nudity or sexual conduct. That may cover much of what adult content creators publish, but it leaves unclear how the law treats suggestive or partial depictions that fall short of statutory thresholds. In edge cases, a prosecutor could argue that a self-published image lacks context-specific consent or causes general harm, especially if the prosecutor is inclined to target adult content as a matter of policy.

At the same time, the law seems to also treat adult content creators and sex workers as effectively ineligible for protection. As discussed, prior public or commercial self-disclosure potentially disqualifies someone from being a victim of non-consensual redistribution. Instead of accounting for the specific risks these communities face, the law appears to treat them as discardable (as is typical for these communities). 

This structural asymmetry is made worse by the statute’s sweeping exemption for law enforcement and intelligence agencies, despite their well-documented misuse of intimate imagery. Police have used real sex workers’ photos in sting operations without consent, exposing individuals to reputational harm, harassment, and even false suspicion. A 2021 DOJ Inspector General report found that FBI agents, while posing as minors online, uploaded non-consensual images to illicit websites. That conduct violated agency policy, yet it appears to be fully exempt under Take It Down. The result is a feedback loop: the state appropriates private images, recirculates them, and then uses the fallout as investigative justification.

Over-Removal of Political Speech, Commentary, and Adult Content

Trump and his allies have a long track record of attempting to suppress unflattering or politically inconvenient content. Under the civil takedown provisions of the TAKE IT DOWN Act, they no longer need to go through the courts to do it. All it takes is an allegation that a depiction violates the statute. Because the civil standard is more permissive, that allegation doesn’t have to be well-founded; it just has to assert that the content is an “intimate visual depiction.” A private photo from a political fundraiser, a photoshopped meme using a real image, or an AI-generated video of Trump kissing Elon Musk’s feet could all be flagged under the law, even if they don’t meet the statute’s actual definition. But here’s the catch: services have just 48 hours to take the content down. That’s not 48 hours to investigate, evaluate, or push back; it’s 48 hours to comply or risk FTC enforcement. In practice, that means the content is far more likely to be removed than challenged, especially when the requester claims the material is intimate. Services will default to caution, pulling content that may not meet the statutory threshold just to avoid regulatory risk. As we saw after FOSTA-SESTA, that kind of liability pressure drives entire categories of speech offline.

Moreover, the provision requiring online services to remove identical copies of the reported content might, in practice, encourage services to take a scorched-earth approach to removals: deleting entire folders, wiping user accounts, pulling down all images linked to a given name or metadata tag, or even removing the contents of an entire website. It’s easy to see how this could be especially weaponized against adult content sites, where third-party uploads often blur the line between lawful adult material and illicit content.

Further, automated content moderation tools that are designed to efficiently remove content while shielding human workers from exposure harms may exacerbate the issue. Many online services use automated classifiers, blurred previews, and image hashing systems to minimize human exposure to disturbing content. But the TAKE IT DOWN Act requires subjective judgment calls that automation may not be equipped to make. Moderators must decide whether a depiction is truly intimate, whether it falls under an exception, whether the depicted individual voluntarily exposed themselves, and whether the requester is legitimate. These are subjective, context-heavy determinations that require viewing the content directly. In effect, moderators are now pushed back into front line exposure just to determine if a depiction meets the statute’s definition.

The enforcement provisions of the TAKE IT DOWN Act give the federal government—particularly a politicized FTC delighting in its newfound identity as a censorship board—broad discretion to target disfavored online services. A single flagged depiction labeled a digital forgery can trigger invasive investigations, fines, or even site shutdowns. Recall that The Heritage Foundation’s Project 2025 mandate explicitly calls for the elimination of online pornography. This law offers a ready-made mechanism to advance that agenda, not only for government officials but also for aligned anti-pornography groups like NCOSE. Once the state can reframe consensual adult content as non-consensual or synthetic, regardless of whether that claim holds, it can begin purging lawful material from the Internet under the banner of victim protection. 

This enforcement model will also disproportionately affect LGBTQ+ content, which is already subject to heightened scrutiny and over-removal. Queer creators routinely report that their educational, artistic, or personal content is flagged as adult or explicit, even when it complies with existing community guidelines. Under the TAKE IT DOWN Act, content depicting queer intimacy, gender nonconformity, or bodies outside heteronormative standards could be more easily labeled as “intimate visual depictions,” especially when framed by complainants as inappropriate or harmful. For example, a shirtless trans-identifying person discussing top surgery could plausibly be flagged for removal. Project 2025 and its enforcers have already sought to collapse LGBTQ+ expression into a broader campaign against “pornography.” The TAKE IT DOWN Act gives that campaign a fast-track enforcement mechanism, with no real procedural safeguards to prevent abuse.

Selective Enforcement By Trump’s FTC 

The Act’s notice-and-takedown regime is enforced by the FTC, an agency with no meaningful experience or credibility in content moderation. That’s especially clear from its attention economy workshop, which appears stacked with ideologically driven participants and conspicuously devoid of legitimate experts in Internet law, trust and safety, or technology policy.

The Trump administration’s recent purge and re-staffing of the agency only underscores the point. With internal dissenters removed and partisan loyalists installed, the FTC now functions less as an independent regulator and more as an enforcement tool aligned with the White House’s speech agenda. The agency is fully positioned to implement the law exactly as Trump intends: by punishing political enemies.

We should expect enforcement will not be applied evenly. X (formerly Twitter), under Elon Musk, continues to host large volumes of NCII with little visible oversight. There is no reason to believe a Trump-controlled FTC will target Musk’s services. Meanwhile, smaller, less-connected sites, particularly those serving LGBTQ+ users and marginalized creators, will remain far more exposed to aggressive, selective enforcement.

Undermining Encryption

The Act does not exempt private messaging services, encrypted communication tools, or electronic storage providers. That omission raises significant concerns. Services that offer end-to-end encrypted messaging simply cannot access the content of user communications, making compliance with takedown notices functionally impossible. These services cannot evaluate whether a reported depiction is intimate, harmful, or duplicative because, by design, they cannot see it. See the Doe v. Apple case.

Faced with this dilemma, providers may feel pressure to weaken or abandon encryption entirely in order to demonstrate “reasonable efforts” to detect and remove reported content. This effectively converts private, secure services into surveillance systems, compromising the privacy of all users, including the very individuals the law claims to protect.

The statute’s silence on what constitutes a “reasonable effort” to identify and remove copies of reported imagery only increases compliance uncertainty. In the absence of clear standards, services may over-correct by deploying invasive scanning technologies or abandoning encryption altogether to minimize legal risk. Weakening encryption in this way introduces systemic security vulnerabilities, exposing user data to unauthorized access, interception, and exploitation. This is particularly concerning as AI-driven cyberattacks become more sophisticated, and as the federal government is actively undermining our nation’s cybersecurity infrastructure. 

Conclusion 

Trump’s public support for the TAKE IT DOWN Act should have been disqualifying on its own. But even setting that aside, the law’s political and institutional backing should have raised immediate red flags for Democratic lawmakers. Its most vocal champion, Senator Ted Cruz, is a committed culture warrior whose track record includes opposing same-sex marriage, attacking DEI programs, and using students as political props—ironically, the same group this law claims to protect.

The law’s support coalition reads like a who’s who of Christian nationalist and anti-LGBTQ+ activism. Among the 120 organizations backing it are the National Center on Sexual Exploitation (NCOSE), Concerned Women for America Legislative Action Committee, Family Policy Alliance, American Principles Project, and Heritage Action for America. These groups have long advocated for expanded state control over online speech and sexual expression, particularly targeting LGBTQ+ communities and sex workers.

Civil liberties groups and digital rights organizations quickly flagged the law’s vague language, overbroad enforcement mechanisms, and obvious potential for abuse. Even groups who typically support online speech regulation warned that the law was poorly drafted and structurally dangerous, particularly in the hands of the Trump Administration.

At this point, it’s not just disappointing, it’s indefensible that so many Democrats waved this law through, despite its deep alignment with censorship, discrimination, and religious orthodoxy. The Democrats’ support represents a profound failure of both principle and judgment. Worse, it reveals a deeper rot within the Democratic establishment: legislation that is plainly dangerous gets waved through not because lawmakers believe in it, but because they fear bad headlines more than they fear the erosion of democracy itself.

In a FOSTA-SESTA-style outcome, Mr. Deepfakes—one of the Internet’s most notorious hubs for AI-generated NCII and synthetic abuse—shut down before the TAKE IT DOWN Act even took effect. More recently, the San Francisco City Attorney’s Office announced a settlement with one of the many companies it sued for hosting and enabling AI-generated NCII. That litigation has already triggered the shutdown of at least ten similar sites, raising the age-old Internet law question: was this sweeping law necessary to address the problem in the first place?

__

Eric’s Comments

I’m going to supplement Prof. Miers’ comments with a few of my own focused on the titular takedown provision. 

The Heckler’s Veto

If a service receives a takedown notice, the service must resolve all of the following tasks within 48 hours:

  • Can the service find the targeted item?
  • Is anyone identifiable in the targeted item?
  • Is the person submitting the takedown notice identifiable in the targeted item?
  • Does the targeted item contain an intimate visual depiction of the submitter?
  • Did the submitting person consent to the depiction?
  • Is the depiction otherwise subject to some privilege? (For example, the First Amendment)
  • Can the service find other copies of the targeted item?
  • [repeat all of the above steps for each duplicate. Note the copies may be subject to a different conclusion; for example, a copy may be in a different context, like embedded in a larger item of content (like a still image in a documentary) where the analysis might be different]

Alternatively, instead of navigating this gauntlet of short-turnaround tasks, the service can just immediately honor a takedown without any research at all. What would you do if you were running a service’s removals operations? This is not a hard question.
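To put the economics of that choice in code: below is a hypothetical triage function (the questions track the checklist above; the names and structure are ours, not the statute’s) in which any question that cannot be resolved inside the 48-hour budget defaults to removal.

```python
from typing import Optional


def takedown_decision(answers: dict[str, Optional[bool]]) -> str:
    """Illustrative triage of the checklist above.

    `answers` maps each question to True, False, or None when the service
    cannot resolve it before the 48-hour deadline expires.
    """
    for question, answer in answers.items():
        if answer is None:
            # Unverifiable in time: removal is the cheap, safe default, since
            # keeping the item up risks FTC enforcement (or worse).
            return f"remove (could not resolve: {question})"
        if answer is False:
            # Refusing requires the service to trust its own call and defend it later.
            return f"refuse (answered no: {question})"
    return "remove"


# In practice, almost every answer is None within 48 hours, so the loop
# collapses to "remove" -- the blind-honor strategy described above.
print(takedown_decision({
    "item located": True,
    "requester is the depicted person": None,   # unverifiable without identity checks
    "meets the 'intimate visual depiction' definition": None,
    "published without consent": None,
    "no privilege (e.g., First Amendment) applies": None,
}))
```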

Because takedown notices are functionally unverifiable and services have no incentive to invest any energy in diligencing them, the notices function as heckler’s vetoes. Anyone can submit them knowing that the service will honor them blindly and thereby scrub legitimate content from the Internet. This is a powerful and very effective form of censorship. As Prof. Miers explains, the most likely victims of heckler’s vetoes are communities that are otherwise marginalized.

One caveat: after Moody, it seems likely that laws reducing or eliminating the discretion of editorial services to remove or downgrade non-illegal content, like those contained in the Florida and Texas social media censorship laws, are unconstitutional. If not, the Take It Down Act sets up services for an impossible challenge: they would have to make the right call on the legality of each and every targeted item. Failing to remove illegal content would support a Take It Down FTC enforcement action; removing legal content would set up a claim under the must-carry law. Prof. Miers and I discussed the impossibility of perfectly discerning this border between legal and illegal content.

Bad Design of a Takedown System

The takedown system was clearly designed in reference to the DMCA’s 512 notice-and-takedown scheme. This is not a laudatory attribute. The 512 scheme was poorly designed, which has led to overremovals and consolidated the industry due to the need to achieve economies of scale. The Take It Down Act’s scheme is even more poorly designed. Congress has literally learned nothing from 25 years of experience with the DMCA’s takedown procedures. 

Here are some of the ways that the Take It Down Act’s takedown scheme is worse than the DMCA’s:

  • As Prof. Miers mentioned, the DMCA requires a high degree of specificity about the location of the targeted item. The Take It Down Act puts more of an onus on the service to find the targeted item in response to imprecise takedown notices.
  • The DMCA does not require services to look for and remove identical items, so the Take It Down Act requires services to undertake substantially more work that increases the risk of mistakes and the service’s legal exposure.
  • As Prof. Miers mentioned, DMCA notices require the sender to declare, under penalty of perjury, that they are authorized to submit the notice. As a practical matter, I am unaware of any perjury prosecutions actually being brought for DMCA overclaims. Nevertheless, the perjury threat might still motivate some senders to tell the truth. The Take It Down Act doesn’t require such declarations at risk of perjury, which encourages illegitimate takedown notices.
  • Further to that point, the DMCA created a new cause of action (512(f)) for sending bogus takedown notices. 512(f) has been a complete failure, but at least it provides some reason for senders to consider if they really want to submit the takedown notice. The Take It Down Act has no analogue to 512(f), so Take It Down notice senders who overclaim may not face any liability or have any reason to curb their actions. This is why I expect lots of robo-notices sent by senders who have no authority at all (such as anti-porn advocates with enough resources to build a robot and a zeal to eliminate adult content online), and I expect many of those robo-notices will be honored without question. This sounds like a recipe for mass chaos…and mass censorship.
  • Failure to honor a DMCA takedown notice doesn’t confer liability; it just removes a safe harbor. The Take It Down Act imposes liability for failing to honor a takedown notice in two ways: the FTC can enforce the non-removal, and the failed removal may also support a federal criminal prosecution.
  • The DMCA tried to motivate services to provide an error-correction mechanism to uploaders who are wrongly targeted by takedown notices–the 512(g) putback mechanism provides the service with an immunity for restoring targeted content. The Take It Down Act has no error-correction mechanism, either via carrots or sticks, so any correction of bogus removals will be based purely on the service’s good graces.
  • The Take It Down Act tries to motivate services to avoid overremovals by providing an immunity for removals “based on facts or circumstances from which the unlawful publishing of an intimate visual depiction is apparent.” Swell, but as Prof. Miers and I documented, services aren’t liable for content removals they make (subject to my point above about must-carry laws), whether it’s in response to heckler’s veto notices or otherwise. So the Take It Down immunity won’t motivate services to be more careful with their removal determinations because it does not provide any additional legal protection the services value. 

The bottom line: however bad you think the DMCA encourages or enables overremovals, the Take It Down Act is 1000x worse due to its poor design. 

One quirk: the DMCA expects services to do two things in response to a copyright takedown notice: (1) remove the targeted item, and (2) assign a strike to the uploader, and terminate the uploader’s account if the uploader has received too many strikes. (The statute doesn’t specify how many strikes is too many, and it’s an issue that is hotly litigated, especially in the IAP context). The Take It Down Act doesn’t have a concept of recidivism. In theory, a single uploader could upload a verboten item, the service could remove it in response to a takedown notice, the uploader could reupload the identical item, and the service could wait for another Take It Down notice before doing anything. In fact, the Take It Down Act seemingly permits this process to repeat infinitely (though the service might choose to terminate such rogue accounts voluntarily based on its own editorial standards). Will judges consider that infinite loop unacceptable and, after too many strikes (whenever that is), assume some kind of actionable scienter on services dealing with recidivists?

The FTC’s Enforcement Leverage

The Take It Down Act enables the FTC to bring enforcement actions for a service’s “failure to reasonably comply with the notice and takedown obligations.” There is no minimum quantity of failures; a single failure to honor a takedown notice might support the FTC’s action. This gives the FTC extraordinary leverage over services. The FTC has unlimited flexibility to exercise its prosecutorial discretion, and services will be vulnerable if they’ve made a single mistake (which every service will inevitably do). The FTC can use this leverage to get services to do pretty much whatever the FTC wants–to avoid a distracting, resource-intensive, and legally risky investigation. I anticipate the FTC will receive a steady stream of complaints from people who sent takedown notices that weren’t honored (especially the zealous anti-porn advocates), and each of those complaints to the FTC could trigger a massive headache for the targeted services.

The fact that the FTC has turned into a partisan enforcement agency makes this discretionary power even more risky to the Internet’s integrity. For example, imagine that the FTC wants to do an anti-porn initiative; the Take It Down Act gives the FTC a cudgel to ensure that services are overremoving pornography in response to takedown notices–or perhaps even to anticipatorily reduce the availability of “adult” content on their service to avoid potential future entanglements. Even if the FTC doesn’t lean into an anti-porn crackdown, Chairman Ferguson has repeatedly indicated that he thinks he works for President Trump, not for the American people who pay his salary, and is on standby to use the weapons at the FTC’s disposal to do the president’s bidding.  

As I noted in my pieces on editorial transparency (1, 2), the FTC’s investigatory powers can take the agency deep into a service’s “editorial” operations. The FTC can investigate if a service’s statutorily required reporting mechanism is properly operating; the FTC can ask to see all of the Take It Down notices submitted to the service and their disposition; the FTC can ask why each and every takedown notice refusal was made and question if that was a “correct” choice; and the FTC can ask the service about its efforts to find identical copies that should have been taken down and argue that any missed copies were not reasonable. In other words, the FTC now becomes an omnipresent force in every service’s editorial decisions related to adult content–a newsroom partner that no service wants. These kinds of close dialogues between editorial publishers and government censors are common in repressive and authoritarian regimes, and the Take It Down Act reinforces that we are one of them.

The Death of Due Process

Our country is experiencing a broad-based retrenchment in support for procedures that follow due process. I mean, our government is literally disappearing people without due process and arguing that it has every right to do so–and a nontrivial number of Americans are cheering this on. President Trump even protested that he couldn’t depopulate the country of immigrants he doesn’t like and comply with due process because it would take too long and cost too much.

Yes, due process is slow and expensive, but countries that care about the rule of law require it anyway because it reduces errors that can be pernicious/life-changing and provides mechanisms to correct any errors. Because of the powers in the hands of government and the inevitability that governments make mistakes, we need more due process, not less.

The Take It Down Act is another corner-cut on due process. Rather than requiring people to take their complaints about intimate visual depictions to court, which would take a lot of time and cost a lot of money, the Take It Down Act contemplates a removal system that bears no resemblance to due process. As I discussed, the Take It Down Act massively puts the thumb on the scale of removing content (legitimate or not) in response to heckler’s vetoes, ensuring many erroneous removals, with no meaningful mechanism to correct those errors. 

It’s like the old adage in technology development circles (sometimes called the “Iron Triangle”): you can’t have good, fast, and cheap outcomes; at best you can pick two of the three. By passing the Take It Down Act, Congress picked fast and cheap decisions and sacrificed accuracy. When the act’s takedown systems go into effect, we’ll find out how much that choice cost us.

Can Compelled Takedowns Survive a Court Challenge?

I’d be interested in your thoughts about whether the takedown notice procedures (separate from the criminal provisions) violate the First Amendment and Section 230. On the surface, it seems like the takedown requirements conflict with the First Amendment. The Take It Down Act requires the removal of content that isn’t obscene or CSAM, and government regulation of non-obscene/non-CSAM content raises First Amendment problems because it overrides the service’s editorial discretion. The facts that the censorship is structured as a notice-and-takedown procedure rather than a categorical ban, and the FTC can enforce violations per its unfair/deceptive authority, strike me as immaterial to the First Amendment analysis.

(Note: I could make a similar argument about the DMCA’s takedown requirements, which routinely lead to the removal of non-infringing and Constitutionally protected material, but copyright infringement gets a weird free pass from Constitutional scrutiny).

Also, Take It Down’s takedown procedures obviously conflict with Section 230 by imposing liability for continuing to publish third-party content. However, I’m not sure if Take It Down’s status as a later-passed law means that it implicitly amends Section 230. Furthermore, by anchoring enforcement in the FTC Act, the law may take advantage of cases like FTC v. LeadClick which basically said that the FTC Act punishes defendants for their first-party actions, not for third-party content (though that seems like an objectively unreasonable interpretation in this context). So I’m unsure how the Take It Down/Section 230 conflict will be resolved.

Note that it’s unclear who will challenge the Take It Down Act prospectively. It seems like all of the major services will do whatever they can to avoid triggering a Trump brain fart, which sidelines them from prospective challenges to the law. So we may not get more answers about the permissibility of the Take It Down Act scheme for years, until there’s an enforcement action against a service with enough money and motivation to fight.

Posted on BestNetTech - 18 October 2024 @ 01:47pm

Five Decisions Illustrate How Section 230 Is Fading Fast

Professor Eric Goldman continues to be the best at tracking any and all developments regarding internet regulations. He recently covered a series of cases in which the contours of Section 230’s liability immunity are getting chipped away in all sorts of dangerous ways. As it’s unlikely that I would have the time to cover any of these cases myself, Eric has agreed to let me repost it here. That said, his post is written for an audience that already understands Section 230 and its nuances, so be aware that it doesn’t go as deep into the details. If you’re just starting to understand Section 230, here’s a good place to start, though, as Eric notes, the old knowledge may be increasingly less important.

Section 230 cases are coming faster than I can blog them. This long blog post rounds up five defense losses, riddled with bad judicial errors. Given the tenor of these opinions, how are any plaintiffs NOT getting around Section 230 at this point?

District of Columbia v. Meta Platforms, Inc., 2024 D.C. Super. LEXIS 27 (D.C. Superior Ct. Sept. 9, 2024)

The lawsuit alleges Meta addicts teens and thus violates DC’s consumer protection act. Like other cases in this genre, it goes poorly for Facebook.

Section 230

The court distills and summarizes the conflicting precedent: “The immunity created by Section 230 is thus properly understood as protection for social media companies and other providers from “intermediary” liability—liability based on their role as mere intermediaries between harmful content and persons harmed by it…. But-for causation, however, is not sufficient to implicate Section 230 immunity….Section 230 provides immunity only for claims based on the publication of particular third-party content.”

I don’t know what “particular” third-party content means, but the statute doesn’t support any distinction based on “particular” and “non-particular” third-party content. It refers to information provided by another information content provider, which divides the world into first-party content and third-party content. Section 230 applies to all claims based on third-party content, whether that’s an individual item or the entire class.

Having manufactured the requirement that the claim must be based on “particular” content to trigger Section 230, the court says none of the claims do that.

With respect to the deceptive omissions claims, Section 230 doesn’t apply because “Meta can simply stop making affirmative misrepresentations about the nature of the third-party content it publishes, or it can disclose the material facts within its possession to ensure that its representations are not misleading or deceptive within the meaning of the CPPA.”

With respect to a different deceptive omissions claim, the court says Facebook “could avoid liability for such claims in the future without engaging in content moderation. It could disclose the information it has about the prevalence of sexual predators operating on its platforms, and it could take steps to block adult strangers from contacting minors over its apps.” I’d love for the court to explain how blocking users from contacting each other on apps differs from “content moderation.”

With respect to yet other deceptive omissions claims, the court says “If the claim seeks to hold Meta liable for omissions that make its statements about eating disorders misleading, then, as with the omissions regarding the prevalence of harmful third-party content on Meta’s platforms, the claim seeks to hold Meta liable for its own false, incomplete, and otherwise misleading representations, not for its publication of any particular third-party content. If the claim seeks to hold Meta liable for breaching a duty to disclose the harms of its platforms’ features, including the plastic surgery filter, then the claim is based on Meta’s own conduct, not on any third-party content published on its platforms.”

First Amendment

“Meta’s counsel was unable to articulate any message expressed or intended through Meta’s implementation and use of the challenged design features.” The court distinguishes a long list of precedents that it says don’t apply because they “involved state action that interfered with messaging or other expressive conduct—a critical element that is not present in the case before this court.” I don’t see how the court could possibly say that a government agency suing Facebook for not complying with government rules about the design of speech venues isn’t state action that interferes with expressive conduct. (Also, the “expressive conduct” phrase doesn’t apply here. It’s called “publishing”).

The court distinguishes the Moody case:

Deprioritizing content relates to “the organizing and presenting” of content, as do the design features at issue here. But the reason deprioritizing specific content or content providers can be expressive is not that it affects the way content is displayed; it can be expressive because it indicates the provider’s relative approval or disapproval of certain messages.

I don’t understand how the court can acknowledge that Facebook’s design features relate to the “organizing and presenting” of content and still conclude that those features are not expressive.

The court continues with its odd reading of Moody:

The Supreme Court, moreover, expressly limited the reach of its holding in Moody to algorithms and other features that broadly prioritize or deprioritize content based on the provider’s preferences, and it emphasized that it was not deciding whether the First Amendment applies to algorithms that display content based on the user’s preferences

Huh? Every algorithm encodes the “provider’s preferences.” If the court is trying to say that Facebook didn’t intend to preference harmful content, that ignores the inevitability that the algorithm will make Type I/Type II errors. The court sidesteps this:

the District’s unfair trade practice claims challenge Meta’s use of addictive design features without regard to the content Meta provides, and Meta has failed to articulate even a broad or vague message it seeks to convey through the implementation of its design features. So although regulations of community norms and standards sometimes implicate expressive choices, the design features at issue here do not.

Every “design feature” implicates expressive choices. Perhaps Facebook should have done a better job articulating this, but the judge was far too eager to disrespect the editorial function.

The court adds that even if the First Amendment applied, the enforcement action would be subject to, and would survive, intermediate scrutiny. “The District’s stated interest in prosecuting its claims is the protection of children from the significant adverse effects of the addictive design features on Meta’s social media platforms. The District’s interest has nothing to do with the subject matter or viewpoint of the content displayed on Meta’s platforms; indeed, the complaint alleges that the harms arise without regard to the content served to any individual user.”

It’s impossible to say with a straight face that the district is uninterested in the subject matter or viewpoint of the content displayed on Meta’s platforms. Literally, other parts of the complaint target specific subject matters.

Prima Facie Elements

The court says that the provision of Internet services constitutes a “transfer” for purposes of the consumer protection statute, “even though Meta does not charge a fee for the use of its social media platforms.”

The court says that the alleged health injuries caused by the services are sufficient harm for statutory purposes, even if no one lost money or property.

The court says some of Meta’s public statements may have been puffery, and other statements may not have been issued publicly, but “many of the statements attributed to Meta and its top officials in the complaint are not so patently hyperbolic that it would be implausible for a reasonable consumer to be misled by them. Others are sufficiently detailed, quantifiable, and capable of verification that, if proven false, they could support a deceptive trade practice claim.”

State v. Meta Platforms, Inc., 2024 Vt. Super. LEXIS 146 (Vt. Superior Ct. July 29, 2024)

Similar to the DC case, the lawsuit alleges Meta addicts teens and thus violates Vermont’s consumer protection act. This goes as well for Facebook as it did in DC.

With respect to Section 230, the court says:

Meta may well be insulated from liability for injuries resulting from bullying or sexually inappropriate posts by Instagram users, but the State at oral argument made clear that it asserts no claims on those grounds….

The State is not seeking to hold Meta liable for any content provided by another entity. Instead, it seeks to hold the company liable for intentionally leading Young Users to spend too much time on-line. Whether they are watching porn or puppies, the claim is that they are harmed by the time spent, not by what they are seeing. The State’s claims do not turn on content, and thus are not barred by Section 230.

The State’s deception claim is also not barred by Section 230 for the same reason—it does not depend on third party content or traditional editorial functions. The State alleges that Meta has failed to disclose to consumers its own internal research and findings about Instagram’s harms to youth, including “compulsive and excessive platform use.”  The alleged failure to warn is not “inextricably linked to [Meta’s] alleged failure to edit, monitor, or remove [] offensive content.”

Facebook’s First Amendment defense fails because it “fails to distinguish between Meta’s role as an editor of content and its alleged role as a manipulator of Young Users’ ability to stop using the product. The First Amendment does not apply to the latter.” Thus, the court characterizes the claims as targeting conduct, not content, so they get only rational basis scrutiny. “Unlike Moody, where the issue was government restrictions on content…it is not the substance of the speech that is at issue here.”

T.V. v. Grindr, LLC, 2024 U.S. Dist. LEXIS 143777 (M.D. Fla. Aug. 13, 2024)

This is an extremely long (116 pages), tendentious, and very troubling opinion. The case involves a minor, TV, who used Grindr’s services to match with sexual abusers and then committed suicide. The estate sued Grindr for the standard tort claims plus a FOSTA claim. The court dismisses the FOSTA claim but rejects Grindr’s Section 230 defense for the remaining claims. It’s a rough ruling for Grindr and for the Internet generally, twisting many standard industry practices and statements into reasons to impose liability and doing a TAFS-judge-style reimagining of Section 230. Perhaps this ruling will be fixed in further proceedings, or perhaps this is more evidence we are nearing the end of the UGC era.

FOSTA

The court dismissed the FOSTA claim:

T.V., like the plaintiffs in Red Roof Inns, fails to allege facts to make Grindr’s participation in a sex trafficking venture plausible. T.V. alleges in a conclusory manner that the venture consisted of recruiting, enticing, harboring, transporting, providing, or obtaining by other means minors to engage in sex acts, without providing plausible factual allegations that Grindr “took part in the common undertaking of sex trafficking.”… [T]he allegations that Grindr knows minors use Grindr, knows adults target minors on Grindr, and knows about the resulting harms are insufficient.

This is the high-water mark of the opinion for Grindr. It’s downhill from here.

Causation

The court says the plaintiff adequately alleged that Grindr was the proximate cause of TV’s suicide:

reasonable persons could differ on whether Grindr’s conduct was a substantial factor in producing A.V.’s injuries or suicide or both and whether the likelihood adults would engage in sexual relations with A.V. and other minors using Grindr was a hazard caused by Grindr’s conduct

Strict Liability

The court doesn’t dismiss the strict liability claim, concluding that the Grindr “service” was a “product.” (The plaintiff literally called Grindr a service). The court says:

Like Lyft in Brookes, Grindr designed the Grindr app for its business; made design choices for the Grindr app; placed the Grindr app into the stream of commerce; distributed the Grindr app in the global marketplace; marketed the Grindr app; and generated revenue and profits from the Grindr app….

Grindr designed and distributed the Grindr app, making Grindr’s role different from a mere service provider, putting Grindr in the best position to control the risk of harm associated with the Grindr app, and rendering Grindr responsible for any harm caused by its design choices in the same way designers of physically defective products are responsible

This is a bad ruling for virtually every Internet service. You can see how problematic this is from this passage:

T.V. is not trying to hold Grindr liable for “users’ communications,” about which the pleading says nothing. T.V. is trying to hold Grindr liable for Grindr’s design choices, like Grindr’s choice to forego age detection tools, and Grindr’s choice to provide an interface displaying the nearest users first

These “design choices” are Grindr’s speech, and they facilitate user-to-user speech. The court’s anodyne treatment of the speech considerations doesn’t bode well for Grindr.

The court says TV adequately pleaded that Grindr’s design choices were “unreasonably dangerous”:

Grindr designed its app so anyone using it can determine who is nearby and communicate with them; to allow the narrowing of results to users who are minors; and to forego age detection tools in favor of a minor-based niche market and resultant increased market share and profitability, despite the publicized danger, risk of harm, and actual harm to minors. At a minimum, those allegations make it plausible that the risk of danger in the design outweighs the benefits.

Remember, this is a strict liability claim, and these alleged “defects” could apply to many UGC services. In other words, the court’s analysis raises the spectre of industry-wide strict liability–an unmanageable risk that will necessarily drive most or all players out of the industry. Uh oh.

Also, every time I see the argument that services didn’t deploy age authentication tools, when the legal compulsion to do so has been in conflict with the First Amendment for over a quarter-century, I wonder how we got to the point where the courts so casually disregard the constitutional limits on their authority.

Grindr tried a risky argument: everyone knows it’s a dangerous app, so basically, caveat user. But with the offline analogy now cutting in Grindr’s favor, all of a sudden the court doesn’t find such analogies so persuasive:

Grindr fails to offer convincing reasons why this Court should liken the Grindr app to alcohol and tobacco—products used for thousands of years—and rule that, as a matter of Florida law, there is widespread public knowledge and acceptance of the dangers associated with the Grindr app or that the benefits of the Grindr app outweigh the risk to minors.

Duty of Care

The court says TV adequately alleged that Grindr violated its duty of care:

Grindr’s alleged conduct created a foreseeable zone of risk of harm to A.V. and other minors. That alleged conduct, some affirmative in nature, includes launching the Grindr app “designed to facilitate the coupling of gay and bisexual men in their geographic area”; publicizing users’ geographic locations; displaying the image of the geographically nearest users first; representing itself as a “safe space”; introducing the “Daddy” “Tribe,” as well as the “Twink” “Tribe,” allowing users to “more efficiently identify” users who are minors; knowing through publications that minors are exposed to danger from using the Grindr app; and having the ability to prevent minors from using Grindr Services but failing to take action to prevent minors from using Grindr Services. These allegations describe a situation in which “the actor”—Grindr—”as a reasonable [entity], is required to anticipate and guard against the intentional, or even criminal, misconduct of others….

considering the vulnerabilities of the potential victims, the ubiquitousness of smartphones and apps, and the potential for extreme mental and physical suffering of minors from the abuse of sexual predators, the Florida Supreme Court likely would rule that public policy “lead[s] the law to say that [A.V. was] entitled to protection,” and that Grindr “should bear [the] given loss, as opposed to distributing the loss among the general public.”…Were Grindr a physical place people could enter to find others to initiate contact for sexual or other mature relationships, the answer to the question of duty of care would be obvious. That Grindr is a virtual place does not make the answer less so.

That last sentence is so painful. There are many reasons why a “virtual” place may have different affordances and warrant different legal treatment than “physical” space. For example, every aspect of a virtual space is defined by editorial choices about speech, which isn’t true in the offline world. The court’s statement implicates Internet Law Exceptionalism 101, and this judge–who was so thorough in other discussions–oddly chose to ignore this critical question.

IIED/NIED

It’s almost never IIED, and here there’s no way Grindr intended to inflict emotional distress on its users…right?

Wrong. The court says Grindr engaged in outrageous conduct based on the allegation that Grindr “served [minors] up on a silver platter to the adult users of Grindr Services intentionally seeking to sexually groom or engage in sexual activity with persons under eighteen.” I understand the court was making all inferences in favor of the plaintiff, but “silver platter”–seriously? The court ought to push back on such rhetorical overclaims rather than rubberstamp them to discovery.

The court also says that Grindr directed the emotional distress at TV and never discusses Grindr’s intent at all. I’m not sure how it can be IIED without that intent, but the court didn’t seem perturbed.

The NIED claim isn’t dismissed because of the assailants’ physical contact with TV, however distant that is from Grindr.

Negligent Misrepresentations

The court says that Grindr’s statement that it “provides a safe space where users can discover, navigate, and interact with others in the Grindr Community” isn’t puffery, especially when combined with Grindr’s express “right to remove content.” Naturally, this is a troubling legal conclusion: every TOS reserves the right to remove content (and the First Amendment provides that right as well), while the word “safe” has no well-accepted definition, could mean pretty much anything, and certainly doesn’t act as a guarantee that no harm will ever befall a Grindr user. Grindr’s TOS also expressly said that it didn’t verify users, yet the court said it was still justifiable to rely on the word “safe” over the express statements about why the site might not be safe.

Section 230

The prior discussion shows just how impossible it will be for Internet services to survive their tort exposure without Section 230 protection. If Section 230 doesn’t apply, then plaintiffs’ lawyers can always find a range of legal doctrines that might apply, with existential damages at stake if any of the claims stick. Because services can never plaintiff-proof their offerings to the plaintiff lawyers’ satisfaction, they have to settle up quickly to prevent those existential damages, or they have to exit the industry because any profit will be turned over to the plaintiffs’ lawyers.

Given the tenor of the court’s discussion about the prima facie claims, any guess how the Section 230 analysis goes?

The court starts with the premise that it’s not bound by any prior decisions:

The undersigned asked T.V. to state whether binding precedent exists on the scope of § 230(c)(1). T.V. responded, “This appears to be an issue of first impression in the Eleventh Circuit[.]” Grindr does not dispute that response.

The court is playing word games here. The court is discounting a well-known precedential case, Almeida v. Amazon from 2006. The court says Almeida’s 230(c)(1) discussion–precisely on point–was dicta. That ruling focused primarily on 230(e)(2), the IP exception to 230, but the case only reaches that issue based on the initial applicability of 230(c)(1). In addition, there are at least three non-precedential 11th Circuit cases interpreting Section 230(c)(1), including McCall v. Zotos, Dowbenko v. Google, and Whitney v. Xcentric (the court acknowledges the first two and ignores the Whitney case). These rulings may not be precedential, but they are indicators of how the 11th Circuit thinks of Section 230 and deserved some engagement rather than being ignored. The Florida federal court might also apply Florida state law, which includes the old Doe v. AOL decision from the Florida Supreme Court and numerous Florida intermediate appellate court rulings.

The court acknowledges an almost identical case from a Florida district court, Doe v. Grindr, where Grindr prevailed on Section 230 grounds. This court says that judge relied on “non-binding cases”–but if there are no binding 11th Circuit rulings, what else was that court supposed to do? And this court has already established that it will also rely on non-binding cases, so doesn’t pointing this out also undercut the court’s own opinion? The court also acknowledges MH v. Omegle, not quite identical to Grindr but pretty close and also a 230 defense-side win. This court also disregards it because it relied on “non-binding cases.”

This explains how the court treats ALL precedent as presumptively irrelevant so that it can treat Section 230 as a blank interpretative slate despite hundreds of precedent cases. The court thus forges its own path, redoes 230 analyses that have been done in superior fashion previously dozens of times, and cherrypicks precedent that supports its predetermined conclusion–a surefire recipe for problematic decisions. So unfortunate.

The court says “The meaning of § 230(c)(1) is plain. The provision, therefore, must be enforced according to its terms.” Because the language is so plain 🙄, the court uses dictionary definitions of “publisher” and “speaker” (seriously). It says that the CDA “sought to protect minors and other users from offensive content and internet-based crimes” (basically ignoring the legislative history), and because the CDA exhibited schizophrenia about its goals (something fully explained in the literature–extensively–but the court didn’t look), the court says that to “avoid the predominance of some congressional purposes over others, the provision should be interpreted neither broadly nor narrowly.”

Reminder: the Almeida opinion, in language this court chooses to ignore, said “The majority of federal circuits have interpreted the CDA to establish broad ‘federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service’” (citing Zeran, emphasis added).

Having gone deeply rogue, the court says none of the plaintiff’s common law claims treat Grindr as the publisher of third-party content. “Grindr is responsible, in whole or in part, for the “Daddy” “Tribe,” the “Twink” “Tribe,” the filtering code, the “safe space” language, and the geolocation interface. To the extent the responsible persons or entities are unclear, discovery, not dismissal, comes next.”

The court acknowledges that “Grindr brings to the Court’s attention many cases” supporting Grindr’s Section 230 arguments, including the Fifth Circuit’s old Doe v. MySpace case. To “explain” why these “many cases” don’t count, the court marshals the following citations: Justice Thomas’ statement in Malwarebytes, Justice Thomas’ statement in Doe v. Snap, Judge Katzmann’s dissent in Force v. Facebook, Judge Gould’s concurrence/dissent in Gonzalez v. Google (which was likely rendered moot by the Supreme Court’s punt on the case), and, randomly, a single district court case from Oregon (AM v. Omegle). Notice a theme here? The court is relying exclusively on non-binding precedent–indeed, other than the Omegle ruling, not even “precedent” at all.

With zero trace of irony, after this dubious stack of citations, the court says it can ignore Grindr’s citations because “MySpace and the other cases on which Grindr relies are non-binding and rely on non-binding precedent.” Hey judge…the call is coming from inside the house…

(I could have sworn this was the work of a TAFS judge, especially with the shoutouts to Justice Thomas’ non-binding statements, the poorly researched conclusions, and cherrypicked citations. But no, Magistrate Judge Barksdale appears to be an Obama appointee).

Because this is a magistrate report, it will be reviewed by the supervising judge. For all of its prolixity, it’s shockingly poorly constructed and has many sharp edges. Grindr has unsurprisingly filed objections to the report. I’m sure this case will be appealed to the 11th Circuit regardless of what the supervising judge says.

A.S. v. Salesforce, Inc., 2024 WL 4031496 (N.D. Tex. Sept. 3, 2024)

Another FOSTA sex trafficking case against Salesforce for providing services to Backpage. The court previously rejected the Section 230 defense in a factually identical case (SMA v. Salesforce) and summarily rejects it this time.

In yet another baroque and complex opinion that’s typical for FOSTA cases, the court greenlights one claim of tertiary liability against Salesforce but rejects a different tertiary liability claim. If I thought there was value to trying to reconcile those conclusions, I would do it to benefit my readers. Instead, I was baffled by the court’s razor-thin distinctions about the various ecosystem players’ mens rea and actus reus (another common attribute of FOSTA decisions).

ProcureNet Ltd. v. Twitter, Inc., 2024 WL 4290924 (Cal. App. Ct. Sept. 25, 2024)

The plaintiffs were heavy Twitter advertisers, spending over $1M promoting their accounts. Twitter suspended all of the accounts in 2022 (pre-Musk) for alleged manipulation and spam. The plaintiffs claim they were targeted by a brigading attack, but allegedly Twitter disregarded their evidence of that. Eventually, the brigading attack took out the plaintiffs’ personal accounts too. The plaintiffs claim Twitter breached its implied covenant of good faith and fair dealing. Twitter filed an anti-SLAPP motion to strike.

The court says that Twitter’s actions related to a matter of public interest. However, the court says the plaintiffs’ claims have enough merit to overcome the anti-SLAPP motion.

Twitter argued that Section 230 protected its decisions. The court disagrees: “the duty Twitter allegedly violated derives from its Advertising Contracts with plaintiffs, not from Twitter’s status as a publisher of plaintiffs’ content.”

Twitter cited directly relevant California state court decisions in Murphy and Prager that said Section 230 could apply to contract-based claims that would override the service’s editorial discretion, but the court distinguishes them: “These cases, however, do not address claims that a provider breached a separate enforceable agreement for which consideration was paid, like the Advertising Contracts here.” This makes no sense. Whether or not cash was involved, the Murphy and Prager cases involved mutual promises supported by contract consideration. In other words, in each case, the defendant had a contract agreeing to provide services to the plaintiff that the plaintiff valued, so I don’t see any basis to distinguish among these cases. The court might have found better support by citing the also-on-point Calise and YOLO Ninth Circuit cases, but neither case was cited.

Beyond the Section 230 argument, Twitter said that its contracts reserved the unrestricted discretion to deny services. The court says that the unrestricted discretion might still be subject to the implied covenant of good faith and fair dealing: “the purpose of the Advertising Contracts here was not to give Twitter discretion—its purpose, as alleged in plaintiffs’ complaint, was to buy advertising for plaintiffs’ accounts on Twitter’s platform.” In other words, the court effectively reads the reservation of discretion out of the contract entirely.

How bad a loss is this? The plaintiffs had moved to voluntarily dismiss the case while it was on appeal, so they no-showed at the appeal and the court ruled on uncontested papers filed only by Twitter. Ouch. The voluntary dismissal also makes this decision into something of an advisory opinion, and I’m surprised the court decided to issue it rather than deem the appeal moot.

BONUS: Corner Computing Solutions v. Google LLC, 2024 WL 4290764 (W.D. Wash. Sept. 25, 2024). This is also an implied covenant of good faith and fair dealing case. The plaintiff thinks Google should have removed some allegedly fake reviews. The court says Google’s TOS never promised the removal of those reviews, but some ancillary disclosures might have implied that Google would. Thus, despite dismissing the case, the court has some sharp words for Google:

It may be misleading for Defendant to state in a policy that fake engagement will be removed while admitting in its briefing that its policies are merely aspirational. But that does not make Defendant’s actions here a breach of contract.

Posted on BestNetTech - 26 June 2023 @ 11:55am

California’s Journalism Protection Act Is An Unconstitutional Clusterfuck Of Terrible Lawmaking

The California legislature is competing with states like Florida and Texas to see who can pass laws that will be more devastating to the Internet. California’s latest entry into this Internet death-spiral is the California Journalism Protection Act (CJPA, AB 886). CJPA has passed the California Assembly and is pending in the California Senate.

The CJPA engages with a critical problem in our society: how to ensure the production of socially valuable journalism in the face of the Internet’s changes to journalists’ business models? The bill declares, and I agree, that a “free and diverse fourth estate was critical in the founding of our democracy and continues to be the lifeblood for a functioning democracy…. Quality local journalism is key to sustaining civic society, strengthening communal ties, and providing information at a deeper level that national outlets cannot match.” Given these stakes, politicians should prioritize developing good-faith and well-researched ways to facilitate and support journalism. The CJPA is none of that.

Instead, the CJPA takes an asinine, ineffective, unconstitutional, and industry-captured approach to this critical topic. The CJPA isn’t a referendum on the importance of journalism; instead, it’s a test of our legislators’ skills at problem-solving, drafting, and helping constituents. Sadly, the California Assembly failed that test.

Overview of the Bill

The CJPA would make some Big Tech services pay journalists for using snippets of their content and providing links to the journalists’ websites. This policy approach is sometimes called a “link tax,” but that’s a misnomer. Tax dollars go to the government, which can then allocate the money to (in theory) advance the public good—such as funding journalism.

The CJPA bypasses the government’s intermediation and supervision of these cash flows. Instead, it pursues a policy worse than socialism. CJPA would compel some bigger online publishers (called “covered platforms” in the bill) to transfer some of their wealth directly to other publishers—intended to be journalistic operations, but most of the dollars will go to vulture capitalists’ stockholders and MAGA-clickbait outlets like Breitbart.

In an effort to justify this compelled wealth transfer, the bill manufactures a new intellectual property right—sometimes called an “ancillary copyright for press publishers”—in snippets and links and then requires the platforms to pay royalties (euphemistically called “journalism usage fee payments”) for the “privilege” of publishing ancillary-copyrighted material. The platforms aren’t allowed to reject or hide DJPs’ content, so they must show the content to their audiences and pay royalties even if they don’t want to.

The wealth-transfer recipients are called “digital journalism providers” (DJPs). The bill contemplates that the royalty amounts will be set by an “arbitrator” who will apply baseball-style “arbitration,” i.e., the valuation expert picks one of the parties’ proposals. “Arbitrator” is another misnomer; the so-called arbitrators are just setting valuations.

DJPs must spend 70% of their royalty payouts on “news journalists and support staff,” but that money won’t necessarily fund NEW INCREMENTAL journalism. The bill explicitly permits the money to be spent on administrative overhead instead of actual journalism. With the influx of new cash, DJPs can divert their current spending on journalists and overhead into the owners’ pockets. Recall how the COVID stimulus programs directly led to massive stock buybacks that put the government’s cash into the hands of already-wealthy stockholders—same thing here. Worse, journalist operations may become dependent on the platforms’ royalties, which could dry up with little warning (e.g., a platform could drop below CJPA’s statutory threshold). We should encourage journalists to build sustainable business models. CJPA does the opposite.

Detailed Analysis of the Bill Text

Who is a Digital Journalism Provider (DJP)? 

A print publisher qualifies as a DJP if it:

  • “provide[s] information to an audience in the state.” Is a single reader in California an “audience”? By mandating royalty payouts despite limited ties to California, the bill ensures that many/most DJPs will not be California-based or have any interest in California-focused journalism.
  • “performs a public information function comparable to that traditionally served by newspapers and other periodical news publications.” What publications don’t serve that function?
  • “engages professionals to create, edit, produce, and distribute original content concerning local, regional, national, or international matters of public interest through activities, including conducting interviews, observing current events, analyzing documents and other information, or fact checking through multiple firsthand or secondhand news sources.” This is an attempt to define “journalists,” but what publications don’t “observe current events” or “analyze documents or other information”?
  • updates its content at least weekly.
  • has “an editorial process for error correction and clarification, including a transparent process for reporting errors or complaints to the publication.”
  • has:
    • $100k in annual revenue “from its editorial content,” or
    • an ISSN (good news for me; my blog ISSN is 2833-745X), or
    • is a non-profit organization
  • 25%+ of content is about “topics of current local, regional, national, or international public interest.” Again, what publications don’t do this?
  • is not foreign-owned, terrorist-owned, etc.

If my blog qualifies as an eligible DJP, the definition of DJPs is surely over-inclusive.

Broadcasters qualify as DJPs if they:

  • have the specified FCC license,
  • engage journalists (like the factor above),
  • update content at least weekly, and
  • have error correction processes (like the factor above).

Who is a Covered Platform?

A service is a covered platform if it:

  • Acquires, indexes, or crawls DJP content,
  • “Aggregates, displays, provides, distributes, or directs users” to that content, and
  • Either
    • Has 50M+ US-based MAUs or subscribers, or
    • Its owner has (1) net annual sales or a market cap of $550B+ OR (2) 1B+ worldwide MAUs.

(For more details about the problems created by using MAUs/subscribers and revenues/market cap to measure size, see this article).
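For illustration only, here’s a minimal sketch of how the size thresholds above appear to combine. The function name, parameters, and example numbers are my own simplifications of the bill text, not anything the bill itself specifies:

```python
# A rough, hypothetical sketch of CJPA's "covered platform" test as summarized above.
# All names and thresholds here are simplified paraphrases, not statutory text.

def is_covered_platform(
    handles_djp_content: bool,         # acquires, indexes, or crawls DJP content
    surfaces_djp_content: bool,        # aggregates/displays/distributes/directs users to it
    us_maus: int,                      # US-based monthly active users or subscribers
    owner_sales_or_market_cap: float,  # owner's net annual sales or market cap, in dollars
    owner_worldwide_maus: int,         # owner's worldwide MAUs
) -> bool:
    size_test = (
        us_maus >= 50_000_000
        or owner_sales_or_market_cap >= 550_000_000_000
        or owner_worldwide_maus >= 1_000_000_000
    )
    return handles_djp_content and surfaces_djp_content and size_test

# Example: a service with 60M US MAUs that crawls and displays DJP content.
print(is_covered_platform(True, True, 60_000_000, 0, 0))  # True
```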

How is the “Journalism Usage Fee”/Ancillary Copyright Royalty Computed?

The CJPA creates a royalty pool of the “revenue generated through the sale of digital advertising impressions that are served to customers in the state through an online platform.” I didn’t understand the “impressions” reference. Publishers can charge for advertising in many ways, including ad impressions (CPM), clicks, actions, fixed fee, etc. Does the definition only include CPM-based revenue? Or all ad revenue, even if impressions aren’t used as a payment metric? There’s also the standard problem of apportioning ad revenue to “California.” Some readers’ locations won’t be determinable or will be wrong; and it may not be possible to disaggregate non-CPM payments by state.

Each platform’s royalty pool is reduced by a flat percentage, nominally to convert ad revenues from gross to net. This percentage is determined by a valuation-setting “arbitration” every 2 years (unless the parties reach an agreement). The valuation-setting process is confusing because it contemplates that all DJPs will coordinate their participation in a single “arbitration” per platform, but the bill doesn’t provide any mechanisms for that coordination. As a result, it appears that DJPs can independently band together and initiate their own customized “arbitration,” which could multiply the proceedings and possibly reach inconsistent results.

The bill tells the valuation-setter to:

  • Ignore any value conferred by the platform to the DJPs due to the traffic referrals, “unless the covered platform does not automatically access and extract information.” This latter exclusion is weird. For example, if a user posts a link to a third-party service, the platform could argue that this confers value to the DJP only if the platform doesn’t show an automated preview.
    • Note: In a typical open-market transaction, the parties always consider the value they confer on each other when setting the price. By unbalancing those considerations, the CJPA guarantees the royalties will overcompensate DJPs.
  • “Consider past incremental revenue contributions as a guide to the future incremental revenue contribution” by each DJP. No idea what this means.
  • Consider “comparable commercial agreements between parties granting access to digital content…[including] any material disparities in negotiating power between the parties to those commercial agreements.” I assume the analogous agreements will come from music licensing?

Each DJP is entitled to a percentage, called the “allocation share,” of the “net” royalty pool. It’s computed using this formula: (the number of pages linking to, containing, or displaying the DJP’s content to Californians) / (the total number of pages linking to, containing, or displaying any DJP’s content to Californians). Putting aside the problems with determining which readers are from California, this formula ignores that a single page may have content from multiple DJPs. Accordingly, the allocation share percentages can cumulatively add up to more than 100% of the net royalty pool calculated by the valuation-setters. In other words, the formula ensures the unprofitability of publishing DJP content. For-profit companies typically exit unprofitable lines of business.
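To see the arithmetic problem, here’s a minimal sketch with made-up pages and hypothetical DJP names, showing how the allocation shares can cumulatively exceed 100% of the net pool whenever pages display content from more than one DJP:

```python
# Hypothetical illustration of CJPA's allocation-share formula.
# Each page is represented by the set of DJPs whose content it links to,
# contains, or displays to Californians.
pages = [
    {"DJP_A"},           # page showing one DJP's content
    {"DJP_A", "DJP_B"},  # page showing two DJPs' content
    {"DJP_B", "DJP_C"},
    {"DJP_C"},
]

# Denominator: total pages showing *any* DJP's content.
total_pages = sum(1 for page in pages if page)

# Numerator per DJP: pages showing *that* DJP's content.
djps = sorted(set().union(*pages))
shares = {d: sum(1 for page in pages if d in page) / total_pages for d in djps}

for d, share in shares.items():
    print(f"{d}: {share:.0%}")                    # 50% each
print(f"Cumulative: {sum(shares.values()):.0%}")  # 150% -- more than the whole pool
```

Under these made-up numbers, the platform would owe payouts totaling 150% of its “net” California royalty pool, which is the unprofitability point described above.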

Elimination of Platforms’ Editorial Discretion

The CJPA has an anti-“retaliation” clause that nominally prevents platforms from reducing their financial exposure:

(a) A covered platform shall not retaliate against an eligible digital journalism provider for asserting its rights under this title by refusing to index content or changing the ranking, identification, modification, branding, or placement of the content of the eligible digital journalism provider on the covered platform.

(b) An eligible digital journalism provider that is retaliated against may bring a civil action against the covered platform.

(c) This section does not prohibit a covered platform from, and does not impose liability on a covered platform for, enforcing its terms of service against an eligible journalism provider.

This provision functions as a must-carry mandate. It forces platforms to carry content they don’t want to carry and don’t think is appropriate for their audience—at peril of being sued for retaliation. In other words, any editorial decision that is adverse to any DJP creates a non-trivial risk of a lawsuit alleging that the decision was retaliatory. It doesn’t really change the calculus if the platform might ultimately prevail in the lawsuit; the costs and risks of being sued are enough to prospectively distort the platform’s decision-making.

[Note: section (c) doesn’t negate this issue at all. It simply converts a litigation battle over retaliation into a battle over whether the DJP violated the TOS. Platforms could try to eliminate the anti-retaliation provision by drafting TOS provisions broad enough to provide them with total editorial flexibility. However, courts might consider such broad drafting efforts to be bad faith non-compliance with the bill. Further, unhappy DJPs will still claim that broad TOS provisions were selectively enforced against them due to the platform’s retaliatory intent, so even tricky TOS drafting won’t eliminate the litigation risk.]

Thus, CJPA rigs the rules in favor of DJPs. The financial exposure from the anti-retaliation provision, plus the platform’s reduced ability to cater to the needs of its audience, further incentivizes platforms to drop all DJP content entirely or otherwise substantially reconfigure their offerings.

Limitations on DJP Royalty Spending

DJPs must spend 70% of the royalties on “news journalists and support staff.” Support staff includes “payroll, human resources, fundraising and grant support, advertising and sales, community events and partnerships, technical support, sanitation, and security.” This indicates that a DJP could spend the CJPA royalties on administrative overhead, spend a nominal amount on new “journalism,” and divert all other revenue to its capital owners. The CJPA doesn’t ensure any new investments in journalism or discourage looting of journalist organizations. Yet, I thought supporting journalism was CJPA’s raison d’être.
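Here’s a minimal back-of-the-envelope sketch, using invented numbers, of how a DJP could satisfy the 70% requirement without funding a single new story:

```python
# Hypothetical numbers showing how the 70% rule can be met with zero new journalism.
royalties = 1_000_000                 # CJPA payout to a DJP (invented figure)
required_on_staff = 0.70 * royalties  # $700k must go to "news journalists and support staff"

existing_staff_budget = 900_000       # what the DJP already spends on staff/overhead

# Relabel $700k of existing staff/overhead spending as CJPA-funded...
cjpa_funded_staff = required_on_staff
# ...freeing the same amount of the old budget, plus the unrestricted 30%, for owners.
freed_for_owners = min(existing_staff_budget, cjpa_funded_staff)
unrestricted_royalties = royalties - required_on_staff

print("New journalism funded: $0")
print(f"Cash freed up for owners: ${freed_for_owners + unrestricted_royalties:,.0f}")  # $1,000,000
```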

Why CJPA Won’t Survive Court Challenges

If passed, the CJPA will surely be subject to legal challenges, including:

Restrictions on Editorial Freedom. The CJPA mandates that the covered platforms must publish content they don’t want to publish—even anti-vax misinformation, election denialism, clickbait, shill content, and other forms of pernicious or junk content.

Florida and Texas recently imposed similar must-carry obligations in their social media censorship laws. The Florida social media censorship law specifically restricted platforms’ ability to remove journalist content. The 11th Circuit held that the provision triggered strict scrutiny because it was content-based. The court then said the journalism-protection clause failed strict scrutiny—and would have failed even lower levels of scrutiny because “the State has no substantial (or even legitimate) interest in restricting platforms’ speech… to ‘enhance the relative voice’ of… journalistic enterprises.” The court also questioned the tailoring fit. I think CJPA raises the same concerns. For more on this topic, see Ashutosh A. Bhagwat, Why Social Media Platforms Are Not Common Carriers, 2  J. Free Speech L. 127 (2022).

Note: the Florida bill required platforms to carry the journalism content for free. CJPA would require platforms to pay for the “privilege” of being forced to carry journalism content, wanted or not. CJPA’s skewed economics denigrate editorial freedom even more grossly than Florida’s law.

Copyright Preemption. The CJPA creates copyright-like protection for snippets and links. Per 17 USC 301 (the copyright preemption clause), only Congress has the power to provide copyright-like protection for works, including works that do not contain sufficient creativity to qualify as an original work of authorship. Content snippets and links individually aren’t original works of authorship, so they do not qualify for copyright protection at the federal or state level; and any compilation copyright is within federal copyright’s scope and therefore is also off-limits to state protection.

The CJPA governs the reproduction, distribution, and display of snippets and links, and federal copyright law governs those same activities in 17 USC 106. CJPA’s provisions thus overlap with Section 106’s scope, and the works at issue fall within the subject matter of federal copyright law. That combination is not permitted by federal copyright preemption.

Section 230. Most or all of the snippets/links governed by the CJPA will constitute third-party content, including search results containing third-party content and user-submitted links where the platform automatically fetches a preview from the DJP’s website. Thus, CJPA runs afoul of Section 230 in two ways. First, it treats the covered platforms as the “publishers or speakers” of those snippets and links for purposes of the allocation share. Second, the anti-retaliation claim imposes liability for removing/downgrading third-party content, which courts have repeatedly said is covered by Section 230 (in addition to the First Amendment).

DCC. I believe the Dormant Commerce Clause should always apply to state regulation of the Internet. In this case, the law repeatedly contemplates that platforms will determine the location of California’s virtual borders, a determination that will always have an error rate that cannot be eliminated. Those errors guarantee that the law reaches activity outside of California.

Takings. I’m not a takings expert, but a government-compelled wealth transfer from one private party to another sounds like the kind of thing our country’s founders would have wanted to revolt against.

Conclusion

Other countries have attempted “link taxes” like CJPA. I’m not aware of any proof that those laws have accomplished their goal of enhancing local journalism. Knowing the track record of global futility, why do the bill’s supporters think CJPA will achieve better results? Because of their blind faith that the bill will work exactly as they anticipate? Their hatred of Big Tech? Their desire to support journalism, even if it requires using illegitimate means?

Our country absolutely needs a robust and well-functioning journalism industry. Instead of making progress towards that vital goal, we’re wasting our time futzing with crap like CJPA.

For more reasons to oppose the CJPA, see the Chamber of Progress page.

A Few More CJPA Memes

Originally posted to Eric Goldman’s Technology & Marketing Law Blog, reposted here with permission, and (thankfully, for the time being) without having to pay Eric to link back to his original even though he qualifies as a “DJP” under this law.

Posted on BestNetTech - 18 May 2023 @ 01:42pm

Two Common But Disingenuous Phrases About Section 230

This blog post is about the following two phrases:

  • “[T]he Communications Decency Act was not meant to create a lawless no-man’s-land on the Internet.” This phrase originated in Kozinski’s Roommates.com en banc opinion. Including Roommates.com, I found 22 cases (25 opinions total) using the phrase (see Appendix A).
  • “Congress has not provided an all purpose get-out-of-jail-free card for businesses that publish user content on the internet.” This phrase originated in the Doe v. Internet Brands opinion. Including the Internet Brands case, I found six cases using the phrase (see Appendix B).

Both phrases suffer from the same defect: they “refute” strawmen arguments. In fact, no one has ever advanced the propositions the phrases disagree with; and if anyone did advance those propositions, the speakers would demonstrate their ignorance about Section 230. So the rhetorical flourishes in the phrases are just that; they aren’t substantive arguments.

Why are the refuted arguments strawmen? Let me explain:

OF COURSE Section 230 did not create a “lawless no-man’s-land” zone. Section 230, from day 1, always (1) retained all legal obligations for content originators, who definitely do not operate in a lawless zone, and (2) contained statutory exclusions for IP, ECPA, and federal criminal prosecutions, which means Internet services have always faced liability pursuant to those doctrines.

[Note: The “no-man’s-land” reference is also weird. Putting aside the gender skew, this concept usually refers to territory between opposing forces’ trenches in World War I, where any person entering the zone would be machine-gunned to death by the opposing force and thus humans cannot survive there for long. How does that relate to Section 230? It doesn’t. If anything, Section 230’s immunity creates a zone that’s overpopulated with content that might not otherwise exist online–the opposite of a “no-man’s-land.” The metaphor makes no sense.]

Similarly, OF COURSE Section 230 does not provide an “all purpose get-out-of-jail-free card.” In fact, Section 230 has always excluded federal criminal prosecutions and their potential risk of jailtime, so Section 230 literally is the opposite of a “get-out-of-jail-free” card. (The First Amendment significantly limits prosecutions against publishers, so the First Amendment acts more like a get-out-of-jail-free card than Section 230). Section 230 does block state criminal prosecutions, but that’s not an “all purpose” card and it’s the entire point of a statutory immunity (i.e., to remove liability that otherwise exists to facilitate other socially beneficial activity while preserving criminal liability for the primary wrongdoer).

[Note: the phrase “get-out-of-jail-free card” is generally associated with the board game Monopoly, but the Wikipedia page lists a reference back to the 16th Century.]

I hope this post makes clear why I get so irritated whenever I see the phrases referenced in a court opinion or invoked by a grandstanding politician. By attacking a strawman argument, they confirm the weakness of their argumentation and that they don’t have more persuasive arguments to make–a good tipoff that they are embracing a dubious result and are grasping for any justification, no matter how tenuous. Indeed, many of the cases enumerated below are best remembered for their contorted reasoning to reach questionable rulings for the plaintiffs (and the fact that several opinions show up in both appendices is a strong indicator of how shaky those specific rulings were).

In my dream world, a post like this proves the folly of using these phrases and discourages their further invocation. If you’ve ever uttered one of these phrases, I hope that ends today.

* * *

BONUS: Along the lines of the Internet as a “lawless” zone, I am similarly perplexed by characterizations of the Internet as a “Wild West.” I found 11 cases in Westlaw for the search query “internet /s ‘wild west’” (see, e.g., Ganske v. Mensch), though the usages vary, and I find this metaphor is more commonly used in popular rhetoric. The reference makes no sense because, if anything, there is too much law governing the Internet, not too little. Further, the “Wild West” metaphor more accurately suggests an underenforcement of existing laws, i.e., there were laws governing communities in the Western US, but sometimes they were practically unenforceable due to the scarcity of law enforcement officials and the difficulties of gathering evidence. This led to the development of alternative means of enforcing rules, including vigilantism. If you are using the “Wild West” metaphor, I assume you are implicitly calling for greater enforcement of existing laws. If you are invoking it to suggest that the Internet lacks governing laws, I strongly disagree.

* * *

Appendix A: Cases Using the Phrase “Lawless No-Man’s-Land”

Fair Housing Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157 (9th Cir. April 3, 2008)

Milo v. Martin, 311 S.W.3d 210 (Tex. Ct. App. April 29, 2010)

Hill v. StubHub, Inc., 2011 WL 1675043 (N.C. Superior Ct. Feb. 28, 2011)

Hare v. Richie, 2012 WL 3773116 (D. Md. Aug. 29, 2012)

Ascend Health Corp. v. Wells, 2013 WL 1010589 (E.D.N.C. March 14, 2013)

Jones v. Dirty World Entertainment Recordings, LLC, 965 F.Supp.2d 818 (E.D. Ky. Aug. 12, 2013)

J.S. v. Village Voice Media Holdings, L.L.C., 184 Wash.2d 95 (Wash. Sept. 3, 2015)

Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. May 31, 2016)

Fields v. Twitter, Inc., 200 F.Supp.3d 964 (N.D. Cal. Aug. 10, 2016)

Fields v. Twitter, Inc., 217 F.Supp.3d 1116 (N.D. Cal. Nov. 18, 2016)

Gonzalez v. Google, Inc., 282 F.Supp.3d 1150 (N.D. Cal. Oct. 23, 2017)

Daniel v. Armslist, LLC, 382 Wis.2d 241 (Wis. Ct. App. April 19, 2018)

Gonzalez v. Google, Inc., 335 F.Supp.3d 1156 (N.D. Cal. Aug. 15, 2018)

HomeAway.com, Inc. v. City of Santa Monica, 918 F.3d 676 (9th Cir. March 13, 2019)

Daniel v. Armslist, LLC, 386 Wis.2d 449 (Wis. April 30, 2019)

Airbnb, Inc. v. City of Boston, 386 F.Supp.3d 113 (D. Mass. May 3, 2019)

Turo Inc. v. City of Los Angeles, 2020 WL 3422262 (C.D. Cal. June 19, 2020)

Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. May 4, 2021)

In re Facebook, Inc., 625 S.W.3d 80 (Tex. June 25, 2021)

Doe v. Twitter, Inc., 555 F.Supp.3d 889 (N.D. Cal. Aug. 19, 2021)

Doe v. Mindgeek USA Inc., 574 F.Supp.3d 760 (C.D. Cal. Dec. 2, 2021)

Lee v. Amazon.com, Inc., 76 Cal.App.5th 200 (Cal. App. Ct. March 11, 2022)

Al-Ahmed v. Twitter, Inc., 603 F.Supp.3d 857 (N.D. Cal. May 20, 2022)

In re Apple Inc. App Store Simulated Casino-Style Games Litigation, 2022 WL 4009918 (N.D. Cal. Sept. 2, 2022)

Dangaard v. Instagram, LLC, 2022 WL 17342198 (N.D. Cal. Nov. 30, 2022)

* * *

Appendix B: Cases Using the Phrase “Get-Out-of-Jail-Free Card”

Doe No. 14 v. Internet Brands, Inc., 767 F.3d 894 (9th Cir. 2014), amended by Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016)

Airbnb, Inc. v. City and County of San Francisco, 217 F.Supp.3d 1066 (N.D. Cal. 2016)

Daniel v. Armslist, LLC, 382 Wis.2d 241 (Wis. Ct. App. 2018)

Doe v. Mindgeek USA Inc., 574 F.Supp.3d 760 (C.D. Cal. 2021)

Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. 2021)

In re Apple Inc. App Store Simulated Casino-Style Games, 2022 WL 4009918 (N.D. Cal. 2022)

Posted on BestNetTech - 12 April 2023 @ 01:44pm

Recent Case Highlights How Age Verification Laws May Directly Conflict With Biometric Privacy Laws

California passed the California Age-Appropriate Design Code (AADC) nominally to protect children’s privacy, but at the same time, the AADC requires businesses to do an age “assurance” of all their users, children and adults alike. (Age “assurance” requires the business to distinguish children from adults, but the methodology to implement it has many of the same characteristics as age verification–it just needs to be less precise for anyone who isn’t around the age of majority. I’ll treat the two as equivalent).

Doing age assurance/age verification raises substantial privacy risks. There are several ways of doing it, but the two primary options for quick results are (1) requiring consumers to submit government-issued documents, or (2) requiring consumers to submit to face scans that allow the algorithms to estimate the consumer’s age.

[Note: the differences between the two techniques may be legally inconsequential, because a service may want a confirmation that the person presenting the government documents is the person requesting access, which may essentially require a review of their face as well.]

But are face scans really an option for age verification, or will they conflict with other privacy laws? In particular, face scanning seemingly conflicts directly with biometric privacy laws, such as Illinois’ BIPA, which provide substantial restrictions on the collection, use, and retention of biometric information. (California’s Privacy Rights Act, CPRA, which the AADC supplements, also provides substantial protections for biometric information, which is classified as “sensitive” information). If a business purports to comply with the CA AADC by using face scans for age assurance, will that business simultaneously violate BIPA and other biometric privacy laws?

Today’s case doesn’t answer the question, but boy, it’s a red flag.

The court summarizes BIPA Sec. 15(b):

Section 15(b) of the Act deals with informed consent and prohibits private entities from collecting, capturing, or otherwise obtaining a person’s biometric identifiers or information without the person’s informed written consent. In other words, the collection of biometric identifiers or information is barred unless the collector first informs the person “in writing of the specific purpose and length of term for which the data is being collected, stored, and used” and “receives a written release” from the person or his legally authorized representative

Right away, you probably spotted three potential issues:

  • The presentation of a “written release” slows down the process. I’ve explained how slowing down access to a website can constitute an unconstitutional barrier to content.
  • Will an online clickthrough agreement satisfy the “written release” requirement? Per E-SIGN, the answer should be yes, but standard requirements for online contract formation are increasingly demanding more effort from consumers to signal their assent. In all likelihood, BIPA consent would require, at minimum, a two-click process to proceed. (Click 1 = consent to the BIPA disclosures. Click 2 = proceeding to the next step; see the rough sketch after this list.)
  • Can minors consent on their own behalf? Usually contracts with minors are voidable by the minor, but even then, other courts have required the contracting process to be clear enough for minors to understand. That’s no easy feat when it relates to complicated and sensitive disclosures, such as those seeking consent to engage in biometric data collection. This raises the possibility that at least some minors can never consent to face scans on their own behalf, in which case it will be impossible to comply with BIPA with respect to those minors (and services won’t know which consumers are unable to self-consent until after they do the age assessment #InfiniteLoop).
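For the two-click point in the second bullet, here’s a minimal, purely hypothetical sketch of the flow. The function names and prompts are invented, and a real implementation would also need to record the disclosures and the signed release:

```python
# Hypothetical two-click gate: click 1 accepts the BIPA disclosures,
# click 2 proceeds to the age-assurance step. Not a real product flow.

def click_one_bipa_consent() -> bool:
    """Show the specific purpose and retention term, then capture an affirmative release."""
    print("We collect a face scan to estimate age; data retained for N days.")  # placeholder disclosure text
    return input("I consent to the biometric disclosures (y/n): ").strip().lower() == "y"

def click_two_proceed() -> None:
    """Only reachable after a recorded consent from click 1."""
    print("Proceeding to the age assurance step...")

if click_one_bipa_consent():
    click_two_proceed()
else:
    print("Cannot proceed without BIPA consent.")
```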

[Another possible tension is whether the business can retain face scans, even with BIPA consent, in order to show that each user was authenticated if challenged in the future, or if the face scans need to be deleted immediately, regardless of consent, to comply with privacy concerns in the age verification law.]

The primary defendant at issue, Binance, is a cryptocurrency exchange. (There are two Binance entities at issue here, BCM and BAM, but BCM drops out of the case for lack of jurisdiction). Users creating an account had to go through an identity verification process run by Jumio. The court describes the process:

Jumio’s software…required taking images of a user’s driver’s license or other photo identification, along with a “selfie” of the user to capture, analyze and compare biometric data of the user’s facial features….

During the account creation process, Kuklinski entered his personal information, including his name, birthdate and home address. He was also prompted to review and accept a “Self-Directed Custodial Account Agreement” for an entity known as Prime Trust, LLC that had no reference to collection of any biometric data. Kuklinski was then prompted to take a photograph of his driver’s license or other state identification card. After submitting his driver’s license photo, Kuklinski was prompted to take a photograph of his face with the language popping up “Capture your Face” and “Center your face in the frame and follow the on-screen instructions.” When his face was close enough and positioned correctly within the provided oval, the screen flashed “Scanning completed.” The next screen stated, “Analyzing biometric data,” “Uploading your documents”, and “This should only take a couple of seconds, depending on your network connectivity.”

Allegedly, none of the Binance or Jumio legal documents make the BIPA-required disclosures.

The court rejects Binance’s (BAM) motion to dismiss:

  • Financial institution. BIPA doesn’t apply to a GLBA-regulated financial institution, but Binance isn’t one of those.
  • Choice of Law. BAM is based in California, so it argued CA law should apply. The court says no because CA law would foreclose the BIPA claim, plus some acts may have occurred in Illinois. Note: as a CA company, BAM will almost certainly need to comply with the CA AADC.
  • Extraterritorial Application. “Kuklinski is an Illinois resident, and…BIPA was enacted to protect the rights of Illinois residents. Moreover, Kuklinski alleges that he downloaded the BAM application and created the BAM account while he was in Illinois.”
  • Inadequate Pleading. BAM claimed the complaint lumped together BAM, BCM, and Jumio. The court says BIPA doesn’t have any heightened pleading standards.
  • Unjust Enrichment. The court says this is linked to the BIPA claim.

Jumio’s motion to dismiss also goes nowhere:

  • Retention Policy. Jumio says it now has a retention policy, but the court says that it may have been adopted too late and may not be sufficient.
  • Prior Settlement. Jumio already settled a BIPA case, but the court says that settlement only protects Jumio for the period before June 23, 2019.
  • First Amendment. The court says the First Amendment argument against BIPA was rejected in Sosa v. Onfido, and it finds that decision persuasive.

[The Sosa v. Onfido case also involved face-scanning identity verification for the service OfferUp. I wonder if the court would conduct the constitutional analysis differently if the defendant argued it had to engage with biometric information in order to comply with a different law, like the AADC?]

The court properly notes that this was only a motion to dismiss; defendants could still win later. Yet, this ruling highlights a few key issues:

1. If California requires age assurance and Illinois bans the primary methods of age assurance, there may be an inter-state conflict of laws that ought to support a Dormant Commerce Clause challenge. Plus, other states beyond Illinois have adopted their own unique biometric privacy laws, so interstate businesses are going to run into a state patchwork problem where it may be difficult or impossible to comply with all of the different laws.

2. More states are imposing age assurance/age verification requirements, including Utah and likely Arkansas. Often, like the CA AADC, those laws don’t specify how the assurance/verification should be done, leaving it to businesses to figure it out. But the legislatures’ silence on the process truly reflects their ignorance–the legislatures have no idea what technology will work to satisfy their requirements. It seems obvious that legislatures shouldn’t adopt requirements when they don’t know if and how they can be satisfied–or if satisfying the law will cause a different legal violation. Adopting a requirement that may be unfulfillable is legislative malpractice and ought to be evidence that the legislature lacked a rational basis for the law because they didn’t do even minimal diligence.

3. The clear tension between the CA AADC and biometric privacy is another indicator that the CA legislature lied to the public when it claimed the law would enhance children’s privacy.

4. I remain shocked by how many privacy policy experts and lawyers stay publicly quiet about age verification laws, or even tacitly support them, despite the OBVIOUS and SIGNIFICANT privacy problems they create. If you care about privacy, you should be extremely worried about the tsunami of age verification requirements being embraced around the country/globe. The invasiveness of those requirements could overwhelm and functionally moot most other efforts to protect consumer privacy.

5. Mandatory online age verification laws were universally struck down as unconstitutional in the 1990s and early 2000s. Legislatures are adopting them anyway, essentially ignoring the significant adverse caselaw. We are about to have a high-stakes society-wide reconciliation about this tension. Are online age verification requirements still unconstitutional 25 years later, or has something changed in the interim that makes them newly constitutional? The answer to that question will have an enormous impact on the future of the Internet. If the age verification requirements are now constitutional despite the legacy caselaw, legislatures will ensure that we are exposed to major privacy invasions everywhere we go on the Internet–and the countermoves of consumers and businesses will radically reshape the Internet, almost certainly for the worse.

Reposted with permission from Eric Goldman’s Technology & Marketing Law Blog.

Posted on BestNetTech - 18 November 2022 @ 01:44pm

My Testimony To The Colombian Constitutional Court Regarding Online Account Terminations And Content Removals

This week, I testified remotely before the Colombian Constitutional Court in the case of Esperanza Gómez Silva c. Meta Platforms, Inc. y Facebook Colombia S.A.S. Expediente T-8.764.298. In a procedure I don’t understand, the court organized a public hearing to discuss the issues raised by the case. (The case involves Instagram’s termination of an adult film star’s account despite her account content allegedly never violating the TOS). My 15 minutes of testimony was based on this paper.

* * *

My name is Eric Goldman. I’m a professor at Santa Clara University School of Law, located in California’s Silicon Valley, where I hold the titles of Associate Dean for Research, Co-Director of the High Tech Law Institute, and Supervisor of the Privacy Law Certificate. I started practicing Internet Law in 1994 and first started teaching Internet Law in 1996. I thank the court for this opportunity to testify.

My testimony makes two points. First, I will explain the status of lawsuits regarding online account terminations and content removals in the United States. Second, I will explain why imposing legal restrictions on the ability of Internet services that gather, organize, and disseminate user-generated content (which I’ll call “UGC publishers”) to terminate accounts or remove content leads to unwanted outcomes.

Online Account Terminations and Content Removals in the US

In 2021, Jess Miers and I published an article in the Journal of Free Speech Law (run by the UCLA Law School) entitled “Online Account Terminations/Content Removals and the Benefits of Internet Services Enforcing Their House Rules.” The article analyzed all of the U.S. legal decisions we could find that addressed UGC publishers’ liability for terminating users’ accounts or removing users’ content. When we finalized our dataset in early 2021, we found 62 opinions. There have been at least 15 more rulings since then.

What’s remarkable is how consistently plaintiffs have lost. No plaintiff has won a final ruling in court imposing liability on UGC publishers for terminating users’ accounts or removing users’ content. Though some recent regulatory developments in the U.S. seek to change this legal rule, those developments are currently being challenged in court and, in my opinion, will not survive the challenges.

It’s also remarkable why the plaintiffs have lost. Plaintiffs have attempted a wide range of common law, statutory, contract, and Constitutional claims, and courts have rejected those claims based on one or more of the following four grounds:

Prima Facie Elements. First, the claims may fail because the plaintiff cannot establish the prima facie elements of the claim. In other words, the law simply wasn’t designed to redress the plaintiffs’ concerns.

Contract. Second, the claims may fail because of the UGC publishers’ terms of service (called the “TOS”). TOSes often contain several provisions reinforcing the UGC publishers’ editorial freedom, including provisions expressly saying that (1) the UGC publisher can terminate accounts or remove content in its sole discretion, (2) it may freely change its editorial policies at any time, and (3) it doesn’t promise to apply its editorial policies perfectly. In the US, courts routinely honor such contract provisions, even if the TOS terms are non-negotiable and may seem one-sided.

Section 230. Third, the claims may fail because of a federal statute called “Section 230,” which says that websites aren’t liable for third-party content. Courts have treated the user-plaintiff content as “third-party” content to the UGC publisher, in which case Section 230 applies.

Constitution. Fourth, the claims may fail on Constitutional grounds. In the US, the Constitution only restricts the action of government actors, not private entities. Therefore, users do not have any Constitutional protections from the editorial decisions of UGC publishers. Instead, the Constitution protects the UGC publishers’ freedoms of speech and press, and any government intrusion into their speech or press decisions must comport with the Constitution. Accordingly, in the US, UGC publishers do not “censor” users. Instead, any government effort to curtail UGC publishers’ account termination or content removal constitutes censorship. This means the court cannot Constitutionally rule in favor of the plaintiffs.

This point bears emphasis. Any effort to force UGC publishers to publish accounts or content against their wishes would override the UGC publishers’ Constitutional protections. Unless the Supreme Court changes the law, this compelled publication is not permitted.

The Merits of UGC Publishers’ Editorial Discretion

I now turn to my second point. Giving unrestricted editorial discretion to UGC publishers may sometimes seem unfair. There is often a significant power imbalance between the “Tech Giants” and individual users, and this imbalance can leave aggrieved users without any apparent recourse for what may feel like capricious or arbitrary decisions by the UGC publisher.

I’m sympathetic to those concerns, and I hope UGC publishers continue to voluntarily adopt additional user-friendly features to reduce users’ feeling of powerlessness. However, government intrusion into the editorial process is not the solution.

When UGC publishers are no longer free to exercise editorial discretion, it means that the government hinders the UGC publisher’s ability to cater to the needs of its audience. In other words, the audience expects a certain experience from the UGC publisher, and government regulation prevents the UGC publisher from meeting those expectations.

This becomes an existential threat to UGC publishers if spammers, trolls, and other malefactors are provided mandatory legal authorization to reach the publisher’s audience despite the publisher’s wishes. That creates a poor reader experience that jeopardizes the publisher’s relationship with its audience.

If the publisher cannot sufficiently curb bad actors from overrunning the service, then advertisers will flee, users will not pay to access low-quality content, and UGC publishers will lack a tenable business model, which puts the entire enterprise at risk. When UGC publishers are compelled to publish unwanted content, many UGC publishers will have to leave the industry.

Other UGC publishers will continue to publish content—just not user content, because they can’t sufficiently ensure it meets their audience’s needs. In its place, the publishers will substitute professionally-produced content, which the publishers can still fully control.

This switch from UGC to professionally-produced content will fundamentally change the Internet as we know it. Today, we take for granted that we can talk with each other online; in a future where publication access is mandated, we will talk to each other less, and more frequently publishers will be talking at us.

To have enough money to pay for the professionally-produced content, publishers will increasingly adopt subscription models to access the content (sometimes called “paywalls”), which means we will enjoy less free content online. This also exacerbates the digital divide, where wealthier users will get access to more and better content than poorer users can afford, perpetuating the divisions between these groups. Finally, professionally-produced content and paywalls will entrench other divisions in our society. Those in power with majority attributes will be the most likely to get the ability to publish their content and reach audiences; those without power won’t have the same publication opportunities, and that will leave them in a continually marginalized position.

This highlights the unfortunate irony of mandatory publication obligations. Instead of expanding publication opportunities, government-compelled online publication is far more likely to consolidate the power to publish content in a smaller number of hands that do not include the less wealthy or powerful members of our society. If the court seeks to vindicate the rights of less powerful authors online, counterintuitively, protecting publishers’ editorial freedom is the best way to do so.

Closing

I keep using the term UGC publishers, and this may have created some semantic confusion. In an effort to denigrate the editorial work of UGC publishers, they are often called anything but “publishers.” Indeed, the setup for today’s event uses several euphemisms for the publishing function, such as “content intermediation platforms” and “social network management.” (I understand there may have been something lost in translation).

The nomenclature matters a lot here. Using alternative descriptors downplays the seemingly obvious consequence that compelling UGC publishers’ publication decisions is government censorship. Reduced editorial freedom provides another way for the government to abuse its power to control how its citizens talk with each other.

Thank you for the opportunity to provide this input into your important efforts.

* * *

The judges asked three questions in the Q&A:

  • can Colombian courts reach transborder Internet services? [My answer: yes, if they have a physical presence in Colombia]
  • can content moderation account for off-service activity? [My answer: yes, this is no different than publishers deciding the identity of the authors they want to publish]
  • must Internet services follow due process? [My answer: no, “due process” only applies to government actors].

Reposted with permission.

Posted on BestNetTech - 19 October 2022 @ 12:22pm

The Word ‘Emoji’ Is A Protectable Trademark?

Emoji Co. GmbH has registered trademarks in the dictionary word “Emoji.” It is mostly a licensing organization, and its registrations are in a wide range of classes: “from articles of clothing and snacks to ‘orthopaedic foot cushions’ and ‘[p]atient safety restraints.’” (Raise your hand if you’ve ever seen Emojico-branded patient safety restraints.) Indeed, the court essentially questions the entire basis of Emojico’s licensing business, saying:

Given the ubiquity of the word “emoji” as a reference to the various images and icons used in electronic communications, it is especially important that Plaintiff come forward with evidence demonstrating that the term is also known as an identifier of Plaintiff as a source of goods….Other than its say-so, Plaintiff offers no evidence demonstrating, for instance, that consumers actually associate Plaintiff with emoji products such as those offered for sale by Defendants

(The absence of secondary meaning sounds like a major problem with Emojico’s case, one of several problems the court spots and then essentially ignores).

As I previously documented, Emojico has likely sued about 10,000 defendants for trademark infringement. Many defendants are small-time Amazon vendors (often from China) selling items depicting emojis, who Emojico claims are infringing by using the term “emoji” in their product listings. Defendants often no-show in court, making the rulings vulnerable to obvious mistakes that never will be appealed.

Without the defendants in court to defend themselves, the court rules that the defendants violated Emojico’s trademark rights and grants a permanent injunction. The judge then turns to Emojico’s request for statutory damages, including Emojico’s assertion that infringement was willful. The court says it

finds the nature of Plaintiff’s trademark to be relevant to the willfulness inquiry, as it raises the concern that many persons might innocently use the word “emoji” in commerce without awareness of Plaintiff’s intellectual property rights. Indeed, the various images and icons commonly referred to as “emojis” have become a staple of modern communication, such that the term “emoji” is even defined in many dictionaries.

This means the term “emoji” is generic with respect to its dictionary definitions, and Emojico’s litigation empire should crumble. The trademark registrations, however, discourage courts from reaching that outcome.

Otherwise, “emoji” is at most descriptive of the goods in question, so there should be an air-tight descriptive fair use defense. The court says:

Fair use, however, is an affirmative defense, and none of the defaulting Defendants have appeared to assert it. But the Court believes the principle underlying the defense, “that no one should be able to appropriate descriptive language through trademark registration,” is relevant to its willfulness analysis. If Plaintiff’s mark can legitimately be used for a substantial number of descriptive purposes, it suggests that any particular Defendant might not have knowingly or recklessly disregarded Plaintiff’s rights.

That’s how the court sidesteps the elephant in the room. The defendants did not “disregard Plaintiff’s rights” because it’s completely permissible to use “emoji” in a descriptive fair use sense. But the court didn’t consider descriptive fair use in flatly declaring infringement because… well, I’m not sure why not, other than this judge apparently thinks courts can’t raise screamingly obvious defenses sua sponte?

This next passage may require tissues:

many Defendants are using the word “emoji” to describe a product that depicts one of the many digital icons commonly used in electronic communications. For example, one Defendant offered for sale a jewel encrusted pendant in the shape of the “Fire” emoji under the listing “2.00 Ct. Round Diamond Fire Emoji Charm Piece Pendant Men’s 14k Yellow Gold Over.” Especially since “Emoji” was used in conjunction with the word “Fire,” it would be reasonable to conclude that this particular Defendant honestly believed that they were using the word “Emoji” to identify the product as depicting a specific emoji, namely the Fire Emoji. Another Defendant offered for sale a pillow depicting a smiley face emoji with the listing reading “1PC 32cm Emoji Smiley Emoticon Pillow Plush Toy Doll Pillow Soft Sofa Cushion.” Again, the word emoji is used to describe the product as depicting a smiley face emoji. Further, the listing uses another word, “Emoticon,” that is commonly associated with digital representations of facial expressions. The listing’s inclusion of a word describing a similar concept to an emoji suggests that both words are simply being used to describe the product being offered.

[We now know what happens if you yell “Fire Emoji” in a crowded online marketplace. TRADEMARK INFRINGEMENT. 🔥]

The court seemingly understands the problem perfectly. Any person looking at the listings in question would instantly interpret “emoji” as describing the product’s physical attributes – AS TRADEMARK LAW PERMITS IT TO DO. Yet, somehow, the court creates a Schrödinger’s fair use defense – the usages may be descriptive trademark use for damages purposes, but apparently not descriptive enough to defeat the infringement finding. That’s messed up.

How messed up? The court says:

Plaintiff suggests that any person who sells a product depicting a familiar emoji is forbidden from using the one word that most closely describes the image depicted. Plaintiff’s right cannot be so expansive.

💯 How did the court find infringement again?

After questioning the foundation of Emojico’s trademark empire and reaching the obvious conclusion that the defendants engaged in descriptive fair use, the court nevertheless awards $25k of statutory damages per defendant. The court treats this as benevolence towards the defendants:

That figure is below Plaintiff’s requested awards because it accounts for the many possible fair uses of Plaintiff’s mark as well as Plaintiff’s failure to present sufficient evidence concerning many key factors relevant to the statutory damages determination. On the other hand, the award is greater than the minimum authorized by § 1117(c)(1) in light of the need for deterrence, the fact that Defendants’ infringing conduct occurred online, and Plaintiff’s evidence of its licensing efforts and efforts at enforcing its trademark rights.

So, was justice served in this case? On the one hand, it’s all for show, because Emojico will almost certainly collect zero dollars of this damages award. On the other hand, it’s a terrifying reminder of how things can go wrong in default proceedings, when the court is hearing only the plaintiff’s unrebutted advocacy. The true victims of this court’s error, and of Emojico’s litigation campaign, are consumers who love emoji-themed items but increasingly will find it harder to acquire those products in online marketplaces because Emojico keeps lawfaring vendors out of the marketplace or forcing vendors to use terms that consumers don’t recognize. Even if the defendants didn’t make the arguments, the judge should have listened to her instincts and intervened on the consuming public’s behalf. All of us, except possibly for Emojico and its lawyers, are poorer because she didn’t.

Reposted with permission from the Technology & Marketing Law Blog.

Posted on BestNetTech - 16 September 2022 @ 09:40am

California’s Age Appropriate Design Code Is Radical Anti-Internet Policy

When a proposed new law is sold as “protecting kids online,” regulators and commenters often accept the sponsors’ claims uncritically (because… kids). This is unfortunate because those bills can harbor ill-advised policy ideas. The California Age-Appropriate Design Code (AADC / AB2273, just signed by Gov. Newsom) is an example of such a bill. Despite its purported goal of helping children, the AADC delivers a “hidden” payload of several radical policy ideas that sailed through the legislature without proper scrutiny. Given the bill’s highly experimental nature, there’s a high chance it won’t work the way its supporters think–with potentially significant detrimental consequences for all of us, including the California children that the bill purports to protect.

In no particular order, here are five radical policy ideas baked into the AADC:

Permissioned innovation. American business regulation generally encourages “permissionless” innovation. The idea is that society benefits from more, and better, innovation if innovators don’t need the government’s approval.

The AADC turns this concept on its head. It requires businesses to prepare “impact assessments” before launching new features that kids are likely to access. Those impact assessments will be freely available to government enforcers at their request, which means the regulators and judges are the real audience for those impact assessments. As a practical matter, given the litigation risks associated with the impact assessments, a business’ lawyers will control those processes–with associated delays, expenses, and prioritization of risk management instead of improving consumer experiences.

While the impact assessments don’t expressly require government permission to proceed, they have some of the same consequences. They put the government enforcer’s concerns squarely in the room during the innovation development (usually as voiced by the lawyers), they encourage businesses to self-censor if they aren’t confident that their decisions will please the enforcers, and they force businesses to make the cost-benefit calculus before the business has gathered any market feedback through beta or A/B tests. Obviously, these hurdles will suppress innovations of all types, not just those that might affect children. Alternatively, businesses will simply route around this by ensuring their features aren’t available at all to children–one of several ways the AADC will shrink the Internet for California children.

Also, to the extent that businesses are self-censoring their speech (and my position is that all online “features” are “speech”) because of the regulatory intervention, then permissioned innovation raises serious First Amendment concerns.

Disempowering parents. A foundational principle among regulators is that parents know their children best, so most child protection laws center around parental decision-making (e.g., COPPA). The AADC turns that principle on its head and takes parents completely out of the equation. Even if parents know their children best, per the AADC, parents have no say at all in the interaction between a business and their child. In other words, despite the imbalance in expertise, the law obligates businesses, not parents, to figure out what’s in the best interest of children. Ironically, the bill cites evidence that “In 2019, 81 percent of voters said they wanted to prohibit companies from collecting personal information about children without parental consent” (emphasis added), but then the bill drafters ignored this evidence and stripped out the parental consent piece that voters assumed. It’s a radical policy for the AADC to essentially tell parents “tough luck” if parents don’t like the Internet that the government is forcing on their children.

Fiduciary obligations to a mass audience. The bill requires businesses to prioritize the best interests of children above all else. For example: “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.” Although the AADC doesn’t use the term “fiduciary” obligations, that’s functionally what the law creates. However, fiduciary obligations are typically imposed in 1:1 circumstances, like a lawyer representing a client, where the professional can carefully consider and advise about an individual’s unique needs. It’s a radical move to impose fiduciary obligations towards millions of individuals simultaneously, where there is no individualized consideration at all.

The problems with this approach should be immediately apparent. The law treats children as if they all have the same needs and face the same risks, but “children” are too heterogeneous to support such stereotyping. Most obviously, the law lumps together 17 year-olds and 2 year-olds, even though their risks and needs are completely different. More generally, consumer subpopulations often have conflicting needs. For example, it’s been repeatedly shown that some social media features provide net benefit to a majority or plurality of users, but other subcommunities of minors don’t benefit from those features. Now what? The business is supposed to prioritize the best interests of “children,” but the presence of some children who don’t benefit indicates that the business has violated its fiduciary obligation towards that subpopulation, and that creates unmanageable legal risk–despite the many other children who would benefit. Effectively, if businesses owe fiduciary obligation to diverse populations with conflicting needs, it’s impossible to serve that population at all. To avoid this paralyzing effect, services will screen out children entirely.

Normalizing face scans. Privacy advocates actively combat the proliferation of face scanning because of the potentially lifelong privacy and security risks created by those scans (i.e., you can’t change your face if the scan is misused or stolen). Counterproductively, this law threatens to make face scans a routine and everyday occurrence. Every time you go to a new site, you may have to scan your face–even at services you don’t yet know if you can trust. What are the long-term privacy and security implications of routinized and widespread face scanning? What does that do to people’s long-term privacy expectations (especially kids, who will infer that face scans are just what you do)? Can governments use the face scanning infrastructure to advance goals that aren’t in the interests of their constituents? It’s radical to motivate businesses to turn face scanning of children into a routine activity–especially in a privacy bill.

(Speaking of which–I’ve been baffled by the low-key response of the privacy community to the AADC. Many of their efforts to protect consumer privacy won’t likely matter in the long run if face scans are routine).

Frictioned Internet navigation. The Internet thrives in part because of the “seamless” nature of navigating between unrelated services. Consumers are so conditioned to expect frictionless navigation that they respond poorly when modest barriers are erected. The Ninth Circuit just explained:

The time it takes for a site to load, sometimes referred to as a site’s “latency,” is critical to a website’s success. For one, swift loading is essential to getting users in the door…Swift loading is also crucial to keeping potential site visitors engaged. Research shows that sites lose up to 10% of potential visitors for every additional second a site takes to load, and that 53% of visitors will simply navigate away from a page that takes longer than three seconds to load. Even tiny differences in load time can matter. Amazon recently found that every 100 milliseconds of latency cost it 1% in sales.

After the AADC, before you can go to a new site, you will have to either scan your face or upload age-authenticating documents. This adds many seconds or minutes to the navigation process, plus there’s the overall inhibiting effect of concerns about privacy and security. How will these barriers change people’s web “surfing”? I expect it will fundamentally change people’s willingness to click on links to new services. That will benefit incumbents–and hurt new market entrants, who have to convince users to do age assurance before users trust them. It’s radical for the legislature to make such a profound and structural change to how people use and enjoy an essential resource like the Internet.
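
As a back-of-the-envelope illustration only (the research quoted above measured page-load latency, not verification flows, so extrapolating the roughly 10%-per-second figure to age-assurance friction is my assumption, not the Ninth Circuit’s), the compounding drop-off would look something like this:

```typescript
// Rough visitor drop-off if each extra second of friction loses ~10% of the
// remaining would-be visitors. The rate and the example delays are assumptions
// used only to show the compounding effect.

function visitorsRemaining(extraSeconds: number, lossPerSecond = 0.10): number {
  // Each additional second keeps ~90% of whoever was still left.
  return Math.pow(1 - lossPerSecond, extraSeconds);
}

for (const delay of [1, 3, 10, 30]) {
  const remaining = visitorsRemaining(delay);
  console.log(`${delay}s of added friction -> ~${(remaining * 100).toFixed(0)}% of visitors remain`);
}
// 1s -> ~90%, 3s -> ~73%, 10s -> ~35%, 30s -> ~4%
```

Even if the true sensitivity is a fraction of that, a verification step measured in tens of seconds dwarfs the load-time differences the Ninth Circuit described.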

A final irony. All new laws are essentially policy experiments, and the AADC is no exception. But to be clear, the AADC is expressly conducting these experiments on children. So what diligence did the legislature do to ensure the “best interest of children,” just like it expects businesses to do post-AADC? Did the legislature do its own impact assessment like it expects businesses to do? Nope. Instead, the AADC deploys multiple radical policy experiments without proper diligence and basically hopes for the best for children. Isn’t it ironic?

I’ll end with a shoutout to the legislators who voted for this bill: if you didn’t realize how the bill was packed with radical policy ideas when you voted yes, did you even do your job?

Posted on BestNetTech - 2 August 2022 @ 01:38pm

Is the California Legislature Addicted to Performative Election-Year Stunts That Threaten the Internet?

It’s an election year, and like clockwork, legislators around the country want to show they care about protecting kids online. This pre-election frenzy leads to performative bills that won’t actually help any kids. Today I’m blogging about one of those bills, California AB 2408, “Social media platform: child users: addiction.” (For more on how the California legislature is working to eliminate the Internet, see my posts on the pending bills AB587 and AB2273).

This bill assumes that social media platforms are intentionally addicting kids, so it creates business-ending liability to thwart those alleged addictions. The consequences depend on how platforms choose to play it.

The platforms are most likely to toss all kids overboard. This is almost certainly what the legislators actually want given their antipathy towards the Internet, but it’s not a good outcome for anyone. It hurts the kids by depriving them of valuable social outlets and educational resources; it hurts adults by requiring age (and likely identity) verification to sort the kids from adults; and the age/identity verification hurts both kids and adults by exposing them to greater privacy and security risks. I explain all of this in my post on AB 2273 (the AADC), which redundantly also would require platforms to authenticate all users’ ages to avoid business-ending liability.

If platforms try to cater to kids, they would have to rely on an affirmative defense that hands power over to a censor (euphemistically called an “auditor” in the bill) who can declare that any feature is addictive, requiring the platform to promptly remove the feature or face business-ending liability. Handing control of publication decisions to a government-designated censor is as disrespectful to the Constitution as it sounds.

What the Bill Says

Who’s Covered? 

The bill defines “social media platform” as

a public or semipublic internet-based service or application that has users in California and that meets all of the following criteria:

(A) A substantial function of the service or application is to connect users in order to allow users to interact socially with each other within the service or application.

(B) A service or application that provides email or direct messaging services shall not be considered to meet this criterion on the basis of that function alone.

(C) The service or application allows users to do all of the following:

(i) Construct a public or semipublic profile for purposes of signing into and using the service.

(ii) Populate a list of other users with whom an individual shares a social connection within the system.

(iii) Create or post content viewable by other users, including, but not limited to, on message boards, in chat rooms, or through a landing page or main feed that presents the user with content generated by other users.

I critiqued similar language in my AB 587 blog post. Putting aside its clunky drafting, I assume this definition reaches all UGC services, subject to the statutory exclusions for:

  • email and direct messaging services (the bill doesn’t define either type).
  • tools that allow employees and affiliates to talk with each other (Slack, perhaps?).
  • businesses earning less than $100M/year in gross revenues. See my article on defining Internet service size for critiques about the pros and (mostly) cons of revenue metrics.
  • “A social media platform whose primary function is to allow users to play video games.” This is so interesting because video games have been accused of addicting kids for decades, but this bill would give them a free pass. Or maybe the legislature plans to target them in a bill sequel? If the legislature is willing to pass this bill, no business is safe.

What’s Restricted?

This is the bill’s core restriction:

A social media platform shall not use a design, feature, or affordance that the platform knew, or which by the exercise of reasonable care should have known, causes child users to become addicted to the platform.

Child = under 18.

Addiction is defined as: “(A) Indicates preoccupation or obsession with, or withdrawal or difficulty to cease or reduce use of, a social media platform despite the user’s desire to cease or reduce that use. and (B) Causes physical, mental, emotional, developmental, or material harms to the user.”

The restriction excludes third-party content and “passively displaying” that content (as we’ve discussed repeatedly, “passively publishing content” is an oxymoron). Parents cannot waive the bill’s liability for their kids.

The Affirmative Defense. The bill provides an affirmative defense against civil penalties if the platform: “(1) Instituted and maintained a program of at least quarterly audits of its practices, designs, features, and affordances to detect practices or features that have the potential to cause or contribute to the addiction of child users. [and] (2) Corrected, within 30 days of the completion of an audit described in paragraph (1), any practice, design, feature, or affordance discovered by the audit to present more than a de minimis risk of violating this subdivision.” Given that the defense would negate some, but not all, potential remedies, this defense doesn’t really help as much as it should.

Problems with the Bill

Social Media Benefits Minors

The bill enumerates many “findings” about social media’s evilness. The purported “findings” are mockably sophomoric because each fact claim is easily rebutted or disproven. However, they are a tell about the drafters’ mindset. The drafters approached the bill as if social media is never legitimate, which explains why the bill would nuke social media. Thus, with zero self-awareness, the findings say: “California should take reasonable, proportional, and effective steps to ensure that its children are not harmed by addictions of any kind.” The bill’s response is neither reasonable nor proportional–and it would be “effective” only in the sense of suppressing all social media activity, good and bad alike.

Of course, everyone (other than the bill drafters) knows that social media has many benefits for its users, both adults and children alike. For example, the infamous slide showing that Instagram harmed 20% of teenage girls’ self-image also showed that it benefited 40% of teenage girls. Focusing on the 20% by eliminating the 40% is a policy choice, I guess. However, millions of unhappy Californian voters will be shocked by the legislature’s casual disregard towards something they value highly and care passionately about.

The Age Authentication Problem

The bill imposes liability for addicting children, but it doesn’t define when a platform knows that a user is a child. As I’ve discussed with other performative protect-kids-online bills, any attempt to segment kids from adults online doesn’t work because there’s no great method for age authentication. Any age authentication solution will set up barriers to moving around the Internet for both adults and children (i.e., welcome to our site, but we don’t really want you here until we’ve authenticated your age), will make errors in the classifications, and will expose everyone to greater privacy and security risks (which counterproductively puts kids at greater risk). If users have a persistent identity at a platform (necessary to avoid redundantly authenticating users’ ages each visit), then age authentication requires identity authentication, which expands the privacy and security risks (especially for minors) and subverts anonymous/pseudonymous Internet usage, which hurts users with minority characteristics and discourages critical content and whistleblowing. So protecting “kids” online comes with a huge package of unwanted consequences and tradeoffs, none of which the bill acknowledges or attempts to mitigate.

Another option is that the platform treats adults like kids, which I’m sure the bill drafters would be just fine with. However, that highlights the bill’s deceptive messaging. It isn’t really about protecting “kids.” It’s really about censoring social media.

Holding Manufacturers Liable for Addiction

This bill would hold platforms liable for addicting their customers–a very, very rare liability allocation in our legal system. Consider other addictions in our society. Cigarette manufacturers and retailers aren’t liable for the addictive nature of nicotine. Alcohol manufacturers and retailers aren’t liable for alcohol addiction. Casinos aren’t liable for gambling addiction. Those vices may be restricted to adults (but remember parents can’t waive 2408 for their kids), but virtually every marketplace product or service can “addict” some of its most fervent customers without facing liability. This bill seemingly opens up a major new frontier in tort law.

The Causation Problem

The bill sidesteps a key causation problem. If a practice is standard in the industry and a user uses multiple platforms, how do we know which platform caused the addiction? Consider something like infinite scrolling, which is used by many platforms.

This problem is easy to see by analogy. Assume that a gambling addict started gambling at Casino A, switched loyalty to Casino B, but occasionally gambles at Casino C. Which casino caused the addiction?

One possible answer is to hold all of the casinos liable. Or, in the case of this bill, hold every platform liable so long as the plaintiff can show the threshold condition of addiction (“preoccupation or obsession with, or withdrawal or difficulty to cease or reduce use of, a social media platform despite the user’s desire to cease or reduce that use”). But this also means platforms could be liable for addictions they didn’t “cause,” at least not initially.

The Impossibility of Managing the Liability Risk

There’s a fine line between standard product marketing – where the goal is to increase consumer demand for the product – and causing customers to become addicted. This bill erases the line. Platforms have no idea which consumers might become addicted and which won’t. There’s no way to segregate the addiction-vulnerable users and treat them more gently.

This means the platform must treat all of its customers as eggshell victims. Other than the affirmative defense, how can a platform manage its legal exposure to a customer base of possibly millions of California children, any one of whom may be an eggshell? The answer: it can’t.

The unmanageable risk is why platforms’ dominant countermove to the bill will be to toss children off their service.

The Affirmative Defense

Platforms that don’t toss children overboard will rely on the affirmative defense. The affirmative defense is predicated on an audit, but the bill provides no details about the auditor’s credentials. Auditor-censors don’t need to have any specific certification or domain expertise. In theory, this permits self-auditing. More likely, it sets up a race to the bottom where the platforms retain auditor-censors based on their permissiveness. This would turn the audit into a form of theater: everyone plays their statutory part, but a permissive auditor-censor nevertheless greenlights most features. In other words, auditing without certification doesn’t create any benefits for anyone.

If the auditor-censor’s report comes back clean, the platform has satisfied the defense. If the auditor-censor’s report doesn’t come back clean, the 30 day cure period is too short to fix or remove many changes. As a result, platforms will necessarily run all potential site changes by their auditor-censor before launch to preempt getting flagged in the next quarterly report. Thus, every quarterly report should come back clean because any potential auditor-censor concerns were resolved beforehand.

The affirmative defense mitigates civil penalties, but it does not address any other potential remedies created by the bill, including injunctive relief and criminal sanctions. As a result, the incomplete nature of the affirmative defense doesn’t really provide the legal protection that platforms need. This will further motivate platforms to toss kids overboard.

Section 230 Preemption

The bill has a savings clause to exclude any claims covered by Section 230, the First Amendment, and the CA Constitution equivalent. That’s great, but what’s left of the bill after Section 230’s preemption? At their core, platforms are remixing third-party content, and any “addiction” relates to the consumption of that content. This bill tries to avoid reaching third-party content, but in practice third-party content is all there is to reach. Thus, the bill’s claims should squarely fall within Section 230’s preemption.

Constitutionality

If platforms conduct the audit theater, the auditor functions as a government-designated censor. The auditor-censor’s report is the only thing potentially protecting platforms from business-ending liability, so platforms must do whatever the auditor-censor says. This gives the auditor power to decide what features the platforms publish and what they don’t. For example, imagine a government-designated censor at a newspaper, deciding if the newspaper can add a new column or feature, add a new topical section, or change the size and layout of the paper. That censor overrides the publisher’s editorial choices of what content to present and how to present it. This bill does the same.

There are also the standard problems about who is and isn’t covered by the bill and why they were included/excluded, plus the typical Dormant Commerce Clause concern.

I’ll also note the serious tort doctrine problems (like the causation problem) and questions about whether the bill actually benefits any constituency (especially with the audit theater). Even if the bill gets lesser constitutional scrutiny, it still may not survive.

Conclusion

Numerous lawsuits have been filed across the country premised on the same theory underlying this bill, i.e., social media addicts kids. Those lawsuits will run into tort law, Section 230, and constitutional challenges very soon. It would make sense for the California legislature to see how that litigation plays out and discover what, if any, room is left for the legislature to regulate. That would save taxpayers the costs of the inevitable, and quite possibly successful, court challenge to this bill if passed.

Originally posted to the Technology & Marketing Law Blog. Reposted here with permission.

Posted on BestNetTech - 29 June 2022 @ 11:55am

California Legislators Seek To Burn Down The Internet — For The Children

I’m continuing my coverage of dangerous Internet bills in the California legislature. This job is especially challenging during an election year, when legislators rally behind the “protect the kids” mantra to pursue bills that are likely to hurt, or at least not help, kids. Today’s example is AB 2273, the Age-Appropriate Design Code Act (AADC).

Before we get overwhelmed by the bill’s details, I’ll highlight three crucial concerns:

First, the bill pretextually claims to protect children, but it will change the Internet for EVERYONE. In order to determine who is a child, websites and apps will have to authenticate the age of ALL consumers before they can use the service. NO ONE WANTS THIS. It will erect barriers to roaming around the Internet. Bye bye casual browsing. To do the authentication, businesses will be forced to collect personal information they don’t want to collect and consumers don’t want to give, and that data collection creates extra privacy and security risks for everyone. Furthermore, age authentication usually also requires identity authentication, and that will end anonymous/unattributed online activity.

Second, even if businesses treated all consumers (i.e., adults) to the heightened obligations required for children, businesses still could not comply with this bill. That’s because this bill is based on the U.K. Age-Appropriate Design Code. European laws are often aspirational and standards-based (instead of rule-based), because European regulators and regulated businesses engage in dialogues, and the regulators reward good tries, even if they aren’t successful. We don’t do “A-for-Effort” laws in the U.S., and generally we rely on rules, not standards, to provide certainty to businesses and reduce regulatory overreach and censorship.

Third, this bill reaches topics well beyond children’s privacy. Instead, the bill repeatedly implicates general consumer protection concerns and, most troublingly, content moderation topics. This turns the bill into a trojan horse for comprehensive regulation of Internet services and would turn the privacy-centric California Privacy Protection Agency (CPPA) into the general purpose Internet regulator.

So the big takeaway: this bill’s protect-the-children framing is designed to mislead everyone about the bill’s scope. The bill will dramatically degrade the Internet experience for everyone and will empower a new censorship-focused regulator who has no interest or expertise in balancing complex and competing interests.

What the Bill Says

Who’s Covered

The bill applies to a “business that provides an online service, product, or feature likely to be accessed by a child.” “Child” is defined as under-18, so the bill treats teens and toddlers identically.

The phrase “likely to be accessed by a child” means “it is reasonable to expect, based on the nature of the content, the associated marketing, the online context, or academic or internal research, that the service, product, or feature would be accessed by children.” Compare how COPPA handles this issue; it applies when services know (not anticipate) users are under-13 or direct their services to an under-13 audience. In contrast, the bill says that if it’s reasonable to expect ONE under-18 user, the business must comply with its requirements. With that overexpansive framing, few websites and apps can reasonably expect that under-18s will NEVER use their services. Thus, I believe all websites/apps are covered by this law so long as they clear the CPRA quantitative thresholds for being a “business.” [Note: it’s not clear how this bill fits within the CPRA, but I think the CPRA’s “business” definition applies.]

What’s Required

The bill starts with this aspirational statement: “Companies that develop and provide online services, products, or features that children are likely to access should consider the best interests of children when designing, developing, and providing that service, product, or feature.” The “should consider” grammar is the kind of regulatory aspiration found in European law. Does this statement have legal consequences or not? I vote it does not because “should” is not a compulsory obligation. So what is it doing here?

More generally, this provision tries to anchor the bill in the notion that businesses owe a “duty of loyalty” or fiduciary duty to their consumers. This duty-based approach to privacy regulation is trendy in privacy circles, but if adopted, it would exponentially expand regulatory oversight of businesses’ decisions. Regulators (and private plaintiffs) can always second-guess a business’ decision; a duty of “loyalty” gives the regulators the unlimited power to insist that the business made wrong calls and impose punishments accordingly. We usually see fiduciary/loyalty obligations in the professional services context where the professional service provider must put an individual customer’s needs before its own profit. Expanding this concept to mass-market businesses with millions of consumers would take us into uncharted regulatory territory.

The bill would obligate regulated businesses to:

  • Do data protection impact assessments (DPIAs) for any features likely to be accessed by kids (i.e., all features), provide a “report of the assessment” to the CPPA, and update the DPIA at least every 2 years.
  • “Establish the age of consumers with a reasonable level of certainty appropriate to the risks that arise from the data management practices of the business, or apply the privacy and data protections afforded to children to all consumers.” As discussed below, this is a poison pill for the Internet. This also exposes part of the true agenda here: if a business can’t do what the bill requires (a common consequence), the bill drives businesses to adopt the most restrictive regulation for everyone, including adults.
  • Configure default settings to a “high level of privacy protection,” whatever that means. I think this meant to say that kids should automatically get the highest privacy settings offered by the business, whatever that level is, but it’s not what it says. Instead, this becomes an aspirational statement about what constitutes a “high level” of protection.
  • All disclosures must be made “concisely, prominently, and using clear language suited to the age of children likely to access” the service. The disclosures in play are “privacy information, terms of service, policies, and community standards.” Note how this reaches all consumer disclosures, not just those that are privacy-focused. This is the first of several times we’ll see the bill’s power grab beyond privacy. Also, if a single toddler is “likely” to access the service, must all disclosures be written at toddlers’ reading level?
  • Provide an “obvious signal” if parents can monitor their kids’ activities online. How does this intersect with COPPA?
  • “Enforce published terms, policies, and community standards established by the business, including, but not limited to, privacy policies and those concerning children.” 🚨 This language unambiguously governs all consumer disclosures, not just privacy-focused ones. Interpreted literally, it’s ludicrous to mandate businesses enforce every provision in their TOSes. If a consumer breaches a TOS by scraping content or posting violative content, does this provision require businesses to sue the consumer for breach of contract? More generally, this provision directly overlaps AB 587, which requires businesses to disclose their editorial policies and gives regulators the power to investigate and enforce any perceived or alleged deviations in how services moderate content. See my excoriation of AB 587. This provision is a trojan horse for government censorship that has nothing to do with protecting the kids or even privacy. Plus, even if it weren’t an unconstitutional provision, the CPPA, with its privacy focus, lacks the expertise to monitor/enforce content moderation decisions.
  • “Provide prominent, accessible, and responsive tools to help children, or where applicable their parent or guardian, exercise their privacy rights and report concerns.” Not sure what this means, especially in light of the CPRA’s detailed provisions about how consumers can exercise privacy rights.

The bill would also obligate regulated businesses not to:

  • “Use the personal information of any child in a way that the business knows or has reason to know the online service, product, or feature more likely than not causes or contributes to a more than de minimis risk of harm to the physical health, mental health, or well-being of a child.” This provision cannot be complied with. It appears that businesses must change their services if a single child might suffer any of these harms, which is always? This provision especially seems to target UGC features, where people always say mean things that upset other users. Knowing that, what exactly are UGC services supposed to do differently? I assume the paradigmatic example is the concern about kids’ social media addiction, but like the 587 discussion above, the legislature is separately considering an entire bill on that topic (AB 2408), and this one-sentence treatment of such a complicated and censorial objective isn’t helpful.
  • “Profile a child by default.” “Profile” is not defined in the bill. The term “profile” is used 3x in the CPRA but also not defined. So what does this mean?
  • “Collect, sell, share, or retain any personal information that is not necessary to provide a service, product, or feature with which a child is actively and knowingly engaged.” This partially overlaps COPPA.
  • “If a business does not have actual knowledge of the age of a consumer, it shall not collect, share, sell, or retain any personal information that is not necessary to provide a service, product, or feature with which a consumer is actively and knowingly engaged.” Note how the bill switches to the phrase “actual knowledge” about age rather than the threshold “likely to be accessed by kids.” This provision will affect many adults.
  • “Use the personal information of a child for any reason other than the reason or reasons for which that personal information was collected. If the business does not have actual knowledge of the age of the consumer, the business shall not use any personal information for any reason other than the reason or reasons for which that personal information was collected.” Same point about actual knowledge.
  • Sell/share a child’s PI unless needed for the service.
  • “Collect, sell, or share any precise geolocation information of children by default” unless needed for the service–and only if providing “an obvious sign to the child for the duration of that collection.”
  • “Use dark patterns or other techniques to lead or encourage consumers to provide personal information beyond what is reasonably expected for the service the child is accessing and necessary to provide that service or product to forego privacy protections, or to otherwise take any action that the business knows or has reason to know the online service or product more likely than not causes or contributes to a more than de minimis risk of harm to the child’s physical health, mental health, or well-being.” No one knows what the term “dark patterns” means, and now the bill would also restrict “other techniques” that aren’t dark patterns? Also see my earlier point about the “de minimis risk of harm” requirement.
  • “Use any personal information collected or processed to establish age or age range for any other purpose, or retain that personal information longer than necessary to establish age. Age assurance shall be proportionate to the risks and data practice of a service, product, or feature.” The bill expressly acknowledges that businesses can’t authenticate age without collecting PI–including PI the business would choose not to collect but for this bill. This is like the CCPA/CPRA’s problems with “verifiable consumer request”–to verify the consumer, the business has to ask for PI, sometimes more invasively than the PI the consumer is making the request about. ¯\_(ツ)_/¯

New Taskforce

The bill would create a new government entity, the “California Children’s Data Protection Taskforce,” composed of “Californians with expertise in the areas of privacy, physical health, mental health, and well-being, technology, and children’s rights” as appointed by the CPPA. The taskforce’s job is “to evaluate best practices for the implementation of this title, and to provide support to businesses, with an emphasis on small and medium businesses, to comply with this title.”

The scope of this taskforce likely exceeds privacy topics. For example, the taskforce is charged with developing best practices for “Assessing and mitigating risks to children that arise from the use of an online service, product, or feature”–this scope isn’t limited to privacy risks. Indeed, it likely reaches services’ editorial decisions. The CPPA is charged with constituting and supervising this taskforce even though it lacks expertise on non-privacy-related topics.

New Regulations

The bill obligates the CPPA to come up with regulations supporting this bill by April 1, 2024. Given the CADOJ’s and CPPA’s track record of missing statutorily required timelines for rule-making, how likely is this schedule? 🤣

Problems With the Bill

Unwanted Consequences of Age and Identity Authentication. Structurally, the law tries to sort the online population into kids and adults for different regulatory treatment. The desire to distinguish between children and adults online has a venerable regulatory history. The first Congressional law to crack down on the Internet, the Communications Decency Act, had the same requirement, and it was struck down as unconstitutional in part because that sorting was infeasible. Yet, 25 years later, age authentication remains a vexing technical and social challenge.

Counterproductively, age-authentication processes are generally privacy invasive. There are two primary ways to do it: (1) demand the consumer disclose lots of personal information, or (2) use facial recognition and collect highly sensitive face information (and more). Businesses don’t want to invade their consumers’ privacy in these ways, and COPPA doesn’t require such invasiveness either.

Also, it’s typically impossible to do age authentication without also doing identity authentication, so that the consumer can establish a persistent identity with the service. Otherwise, every consumer (kids and adults alike) will have to authenticate their age each time they access a service, which will create friction and discourage usage. But if businesses authenticate identity, and not just age, then the bill creates even greater privacy and security risks, because consumers will have to disclose even more PI.

Furthermore, identity authentication functionally eliminates anonymous online activity and all unattributed activity and content on the Internet. This would hurt many communities, such as minorities concerned about revealing their identity (e.g., LGBTQ), pregnant women seeking information about abortions, and whistleblowers. This also raises obvious First Amendment concerns.

Enforcement. The bill doesn’t specify its enforcement mechanisms. Instead, it wades into an obvious and avoidable tension in California law. On the one hand, the CPRA expressly negates private rights of action (except for certain data security breaches), and the CADOJ and CPPA have exclusive enforcement authority over it. If this bill is part of the CPRA–which the introductory language implies–then it should be subject to the CPRA’s enforcement limits. On the other hand, California B&P 17200 allows private rights of action (PRAs) for any legal violation, including violations of other California statutes. So unless the bill is cabined by the CPRA’s enforcement limits, it will be subject to PRAs through 17200. So which is it? ¯\_(ツ)_/¯

Adding to the CPPA’s Workload. The CPPA is already overwhelmed. It won’t make its July 1, 2022 rule-making deadline (likely missing it by months), which means businesses will have to comply with the voluminous rules on an inadequate compliance timeline. Once that initial rule-making is done, the CPPA will have to build a brand-new administrative enforcement function and start bringing, prosecuting, and adjudicating enforcement actions. That will be another demanding, complex, and time-consuming project for the CPPA. So it’s preposterous that the California legislature would add MORE to the CPPA’s agenda when it clearly cannot handle the work that California voters have already instructed it to do.

Trade Secret Problems. Requiring businesses to report on their DPIAs for every feature they launch potentially discloses lots of trade secrets–which may blow their trade secret protection. It certainly provides a rich roadmap for plaintiffs to mine.

Conflict with COPPA. The bill does not provide any exceptions for parental consent to the business’ privacy practices. Instead, the bill takes power away from parents. Does this conflict with COPPA such that COPPA would preempt it? No doubt the bill’s basic scheme rejects COPPA’s parental control model.

I’ll also note that any PRA may compound the preemption problem. “Allowing private plaintiffs to bring suits for violations of conduct regulated by COPPA, even styled in the form of state law claims, with no obligation to cooperate with the FTC, is inconsistent with the treatment of COPPA violations as outlined in the COPPA statute.” Hubbard v. Google LLC, 546 F. Supp. 3d 986 (N.D. Cal. 2021).

Conflict with CPRA’s Amendment Process. The legislature may amend the CPRA by majority vote only if it enhances consumer privacy rights. As I’ve explained before, this is a trap because I believe the amendments must uniformly enhance consumer privacy rights. In other words, if some consumers get greater privacy rights but other consumers get lesser privacy rights, then the legislature cannot make the amendment via majority vote. In this case, the AADC undermines consumer privacy by exposing both children and adults to new privacy and security risks through the authentication process. Thus, the bill, if passed, could be struck down as exceeding the legislature’s authority.

In addition, the bill says “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.” A reminder of what the CPRA actually says: “The rights of consumers and the responsibilities of businesses should be implemented with the goal of strengthening consumer privacy, while giving attention to the impact on business and innovation.” By disregarding the CPRA’s instruction to consider impacts on businesses, this provision also exceeds the legislature’s authority.

Dormant Commerce Clause. The bill creates numerous potential DCC problems. Most importantly, businesses will necessarily have to authenticate the age of all consumers, both inside and outside of California. That means the bill would govern how businesses based outside of California interact with non-Californians, which the DCC does not permit.

Conclusion

Due to its scope and likely impact, this bill is one of the most consequential bills in the California legislature this year. The Internet as we know it hangs in the balance. If your legislator isn’t paying proper attention to those consequences (spoiler: they aren’t), you should give them a call.

Originally posted to Eric Goldman’s Technology & Marketing Law blog. Reposted with permission.