riana.pfefferkorn's BestNetTech Profile

Posted on BestNetTech - 18 September 2025 @ 12:00pm

The World’s Most Popular Porn Site Is a Government Agent Now. Does It Matter?

On Monday, I published a two-part blog post about the Federal Trade Commission (FTC) settlement with Aylo, parent company of Pornhub. The FTC’s complaint alleged that Aylo violated federal consumer protection law by allowing child sex abuse material (CSAM) and non-consensual pornography (which I’ll call NCII) on its various sites, despite claiming it didn’t. The resulting order, now approved by a Utah federal judge, imposes a bunch of requirements to make Aylo clean up its act. 

In part 1, I discussed the lurking Fourth Amendment problem with the “content review” provisions of that order. (Part 2 explained why this isn’t really about fighting CSAM and NCII; it’s a power grab over free speech online by the Trump FTC.) The tl;dr: by forcing Aylo to scan every uploaded file to check if it’s CSAM or NCII, the FTC has turned Aylo into an agent of the government for purposes of the Fourth Amendment, making all those scans warrantless searches. 

Warrantless searches are typically considered unreasonable and thus unconstitutional, unless consent or some other exception to the warrant requirement applies. The usual remedy for unconstitutional searches is suppression. Consequently, I said in part 1, any evidence turned up in the scans ought to be inadmissible in any resulting prosecutions of the accused uploaders. 

A couple of readers challenged my assumption about the outcome by raising a provocative question: Doesn’t the order also force waiver of the reasonable expectation of privacy in file uploads, dooming any motion to suppress? That is, even if the world’s most popular porn site – one of the world’s most popular websites, period – is now an agent of the U.S. federal government: does it matter? 

The FTC Order Purports to Make Aylo Users Waive All Privacy Rights in Uploads

In response to a suppression motion based on the content review mandate I quoted in part 1, prosecutors will point out a different provision that requires Aylo to (1) notify users that uploaded files will be searched for CSAM and NCII, and (2) include a waiver of “any privacy rights” in that notice.

Per the order (at pp. 13-16), for any file uploaded by “Content Partners” (meaning professional porn companies) or “Models” (meaning any other “third-party individual or entity that uploads” content to an Aylo site besides Content Partners), Aylo must not make the content available unless they:

Provide a notice and a consent checkbox for each piece of Content to the uploader of the Content, which the uploader must review and endorse prior to submitting Content for review. The notice and checkbox will inform the uploader that Defendants will review Content prior to its publication and may report actual or suspected CSAM or [NCII] to the National Center for Missing and Exploited Children or to relevant law enforcement. The notice and consent checkbox will inform the uploader that if the Content is approved for publication it will be made public and that the uploader is waiving any privacy rights they may have previously had in the Content by submitting Content for Defendants’ review…

The FTC is trying to use Aylo to do something the government would have a very hard time doing directly. Via a consent order, it’s making Aylo force its users (models and content partners) to consent to a search of their uploaded files and waive all privacy rights therein. This would allow future prosecutors to invoke either the consent exception to the warrant requirement, or to argue that Aylo’s scans aren’t a Fourth Amendment “search” in the first place, even if there’s no dispute that Aylo is a government agent. (In Fourth Amendment law, a “search” only “occurs when the government infringes upon ‘an expectation of privacy that society is prepared to consider reasonable.’”)

The question, then, is: Can they do that? Will that work? I think there are good arguments for “no,” but the real answer is probably “I guess we’ll find out once CSAM defendants start filing motions to suppress.”

The notice-and-consent language that Aylo ultimately implements will be subject to a fact-specific analysis if it’s ever challenged in court. As the Second Circuit recently noted, courts have shied away from the question of “whether terms of service pertaining to content review might ever be so broadly and emphatically worded as to categorically extinguish internet service users’ reasonable expectations of privacy in the contents of their [files], even as against the government.” “It may well be that such terms, as parts of ‘[p]rivate contracts[,] have little effect in Fourth Amendment law because the nature of those [constitutional] rights is against the government rather than private parties,’” that court continued, quoting from a recent law review article by my Stanford colleague Orin Kerr. But, in the case before it, there was no need for “categorical conclusions,” because the specific terms in question didn’t extinguish the defendant’s “reasonable expectation of privacy in that content as against the government.”

In that article, Kerr argues that “Terms of Service can define relationships between private parties, but private contracts cannot define Fourth Amendment rights.” He is skeptical that language purporting to authorize a service provider to act as the government’s agent and search the user’s data would be effective, even assuming the user saw and understood that language (and users typically don’t read TOS). He thinks court decisions to the contrary are wrongly decided.

The Aylo situation has some twists that set it apart from the cases and hypotheticals Kerr discusses. Which is to say that I don’t think this particular fact pattern has, uh, happened before. (Because, as my first post discussed, the government usually tries very hard to avoid the impression that it’s making platforms scan for CSAM!) What is the result where the private platform is already an agent of the government thanks to the FTC order? What if the user didn’t know that? Does it affect the “reasonableness” analysis if the user thinks they’re giving consent to a private company, not to the government? After all, the required disclosures do not make Aylo tell users that the company is under an FTC order compelling it to review their uploads, which is why users are being shown the notice-and-consent flow in the first place.

Is the notice-and-consent language the order requires “emphatically worded” enough to “categorically extinguish” Aylo uploaders’ reasonable expectation of privacy? Does it procure valid consent to an otherwise problematic search? Or is the wording irrelevant, because the dispositive factor is that the uploader intended the file to be publicly viewable on a porn site, not attached to a private email message or stored in a private cloud account?

This is all complicated. Needlessly complicated. None of this was necessary.

The Aylo Order Will Add Needless Work in Criminal Cases

Maybe a future court will decide that the “make your users waive their privacy rights” language in one part of the Aylo order cures the Fourth Amendment problem created by the content review mandate in another part of the order. Maybe suppression motions will ultimately fail when made by defendants accused of uploading CSAM/NCII to Aylo. But criminal defense lawyers will still file them (as they must, ethically, and should, to make the government meet its burden). Prosecutors will have to make specific arguments in every case for why the defendant had no reasonable expectation of privacy. There will probably be arguments over whether the “waiver of privacy” language in the Aylo order actually holds up. There may be discovery involved. Courts will have to decide all those motions.

We can also expect to see suppression motions citing the Aylo order in other CSAM/NCII cases that didn’t originate on Aylo sites. In my previous blog posts, I talked about how the FTC regulates by consent decree; the Aylo order signals to other platforms (and not just adult sites) that they’d better scan uploads for CSAM/NCII, or they might catch a case too. The Aylo order opens the door for criminal defendants caught by scans on other platforms to argue that those scans aren’t voluntary (even if they used to be), but are instead induced by the FTC. They’ll try to subpoena documents and witnesses from the platform, looking for proof. And in those cases, there won’t be any order that Department of Justice (DOJ) prosecutors can cite that purports to make that platform make its users waive their privacy rights. Will those suppression motions work? Maybe, maybe not. But criminal defense attorneys will try, because, god love ‘em, they’ll throw a lot of stuff at the wall to see what sticks, and sometimes, bless them, something does.

All of this is work nobody would need to do if the FTC hadn’t put all this problematic language into the order with Aylo. When drafting the terms of that order, it would have been so easy not to manufacture any Fourth Amendment issues.

Erase the Fourth Amendment Online with This One Weird Trick!

But then, maybe that’s the point. The FTC apparently believes it has the power to enter orders making online platforms search every single file uploaded to the service and report any illegal material that turns up (as per pp. 34-35 of the Aylo order, duplicating what’s statutorily required for CSAM anyway)… and, because they’d also be forced to notify users of the searches and obtain users’ “consent,” that’s A-OK. Government-mandated disclosures would be all that’s needed to wipe away users’ constitutional rights not to be subjected to warrantless surveillance conducted, at the FTC’s behest, by what looks like a private company but is actually an agent of the government (likely unbeknownst to the user). 

Having used this theory on a major porn site, the FTC can later apply the same approach the next time it goes after a Big Tech company – many of which are already under decades-long consent decrees with the FTC over prior incidents (often alleged privacy or data security issues), making them potentially susceptible to additional enforcement actions. And that’s how the Trump FTC will try to use its orders with companies, not just to control speech online, but to get rid of Americans’ Fourth Amendment rights online in an era when the Supreme Court has been deeply skeptical of the third-party doctrine. I sure hope Professor Kerr is right.

Conclusion

Maybe the Aylo order won’t end up letting a bunch of accused CSAM and NCII defendants go free, like I feared. Maybe, instead, it’s how the Trump administration tees up a future court challenge with the goal of getting a ruling that severely harms our Fourth Amendment rights online. If that’s the order’s secret purpose, then the FTC’s power grab is even worse than I thought.

The DOJ has spent years making its “terms of service beat the Fourth Amendment” argument in response to CSAM suppression motions. Hanlon’s Razor says not to ascribe to malice that which can be explained by incompetence. That’s what I did in my first blog post, assuming the FTC order was the work of attorneys who know consumer protection law but not the niceties of the Fourth Amendment. But now I wonder whether the DOJ’s fingerprints aren’t actually all over this order. It might be time to grudgingly come around to a remark someone made to me: that the FTC’s order is a work of “evil genius.”

Posted on BestNetTech - 15 September 2025 @ 01:42pm

The FTC’s Settlement With Aylo: This Isn’t Really About Fighting CSAM And Revenge Porn

This is part two of a two-part series about the recent settlement between the Federal Trade Commission (FTC), the Utah Consumer Protection Division (CPD), and Aylo, the parent company of Pornhub. The order (which has now been approved by a federal court) settled allegations that Aylo let child sex abuse material (CSAM) and non-consensual intimate imagery (NCII) such as revenge porn and rape videos run rampant on Pornhub and other adult sites under Aylo’s umbrella. 

Reducing the incidence of such abysmal content online is certainly a good thing. However, as part one explained, the way the FTC went about it is ultimately self-defeating. Plus, as this post will explain, fighting CSAM and NCII is only the surface goal of the Aylo settlement. Make no mistake: The FTC’s and Utah’s real agenda here is to attack free speech on the Internet – legal speech that, unlike CSAM, is protected by the Constitution – under the guise of “protecting consumers.”

This investigation was initiated under the Biden FTC, but it concluded under Trump’s FTC, and it must be understood in that context: as part of a larger right-wing effort to ban even legal, constitutionally-protected pornography and to control what Americans say and read online. 

Neither of those goals is a secret. The Heritage Foundation’s Project 2025 roadmap says, point-blank, “Pornography should be outlawed”: “It has no claim to First Amendment protection. Its purveyors are child predators and misogynistic exploiters of women. … The people who produce and distribute it should be imprisoned. … And telecommunications and technology firms that facilitate its spread should be shuttered.” 

A former Heritage Foundation fellow now sits on the FTC: Commissioner Mark Meador, one of the three Republican commissioners remaining at the agency after Trump illegally fired the Democratic ones. True to the Project 2025 vision, Meador told the conservative Washington Examiner that the Aylo settlement is just “the first step” in going after porn. “There’s a much bigger problem with even the quote-unquote ‘consensual’ pornography that’s out there,” he said, adding, “It poses a grave threat to children, but this is the first step that we could take under the powers that we have.” That is, the FTC is cannily using illegal CSAM and NCII to lay the groundwork for achieving the Project 2025 goal of banning all pornography.

And they won’t stop at porn. Before his confirmation as the current FTC chair, Andrew Ferguson promised to use his power to “fight wokeness” such as “DEI,” “ESG,” and “the trans agenda,” and to go after Big Tech companies for “engag[ing] in unlawful censorship.” In February, he launched an “inquiry” into that last topic (which is going just great), despite that being exactly the kind of government pressure on platforms that Republicans had decried as unconstitutional under Biden. It’s all part of the larger right-wing playbook against content moderation, which is best summarized by journalist Adam Serwer’s mordant adage that “free speech is when conservatives can say what they want and when you can say what they want.” 

That effort suffered a setback last year when the Supreme Court struck down two red-state laws that would have overridden social media sites’ moderation choices and forced them to carry content. In a rare unanimous decision, the Supreme Court held that the First Amendment protects online platforms’ decisions about which content to display and how. Nevertheless, since the 2024 election, multiple major platforms have backed off from their prior content moderation approaches, even though that’s not what users actually want. The pressure campaign worked; Big Tech bent the knee. Meanwhile, the Aylo order perverts that Supreme Court decision: If the government can’t outlaw platforms’ content moderation policies, why not weaponize those policies against them instead?

In Counts I-IV, VI, and VII, the agency’s complaint against Aylo takes the position that imperfect content moderation is both an unfair and a deceptive practice: If a platform’s policies say “XYZ types of content are not allowed on our service, and we remove it and ban accounts that post it,” but some instances of such content nevertheless slip through and/or some non-compliant accounts remain, that’s a violation of Section 5 of the FTC Act. That is, it’s unfair for a platform to distribute certain illegal content to consumers, and it’s deceptive for it not to live up to its representations about what it does with content its policies prohibit. 

Of course, per Masnick’s Impossibility Theorem, content moderation at scale is impossible to do well. That means every platform (certainly every large one) will inevitably leave some content up and take some content down that its stated policies say it shouldn’t. Under the FTC’s theory, they’re all breaking the law. 

Again, it’s no coincidence that the FTC chose categories of content that are already illegal as a starting point. But there’s no limiting principle to the FTC’s theory; any “lawful but awful” content could be slotted in instead. What stops the FTC from alleging that it’s an unfair business practice for a short-form video app’s algorithm to unexpectedly display animal torture content (which is protected speech) to users? What stops the agency from alleging that it’s deceptive for a social media platform to contain perfectly legal pornography despite its stated policy to remove “sexual imagery”?

This isn’t even the first time an FTC settlement has claimed the power to police content moderation practices: the Biden FTC already did that last year when settling with a different platform called NGL, as BestNetTech discussed at the time. The Commission is no stranger to pushing the limits of its authority, to put it lightly. But that impulse has become much more ominous now that we live in a country where Calvinball has replaced outmoded concepts like “arbitrary and capricious” and “selective enforcement.” 

In Calvinball America, Andrew Ferguson and Mark Meador can decide which platforms they want to target for mistakes every platform makes. This time, it was Pornhub’s corporate parent – which, to be fair, comes off terribly in the FTC’s complaint. (If the complaint’s awful allegations are true, why was this exclusively an FTC matter, not a DOJ criminal investigation? Where was the DOJ?) Next time, the FTC’s target may be a platform that has nothing to do with porn at all, but whose policies allow speech this administration disfavors or disallow speech it does favor. Or maybe the platform competes with a Trump-owned service, or it’s headed by someone who’s fallen out of Trump’s favor or isn’t enough of a bootlicker, or it refuses to sell Trump a stake in the business or pay him a bribe.

Or maybe the reason will be pure pretext. The FTC could go after a platform for allegedly failing to follow its stated policies with respect to content the government couldn’t care less about, just because that serves as a convenient excuse to target it. We’ve already seen this dynamic in the punishment of universities ostensibly for not doing enough about antisemitism on campus, by an administration that’s rife with open antisemites in high-ranking positions. Even in the Aylo case, it’s hard not to smell a strong whiff of hypocrisy in the Trump administration’s supposed concern for children and sex trafficking victims.

This is exactly what Mike warned of when he dissected the FTC’s NGL order in July 2024: “Just think of whichever presidential candidate you dislike the most, and what would happen if they could have their FTC investigate any platform they dislike for not fairly living up to their public promises on moderation. It would be a dangerous, free speech-attacking mess.”

The Aylo settlement should be viewed as part of an ongoing attempt by the FTC to exceed its statutory and constitutional authority in trying to regulate platforms’ content moderation practices, now turbocharged under Republican leadership hellbent on exerting control over online speech more generally. No matter who’s President, the FTC does not have that power, any more than did the red states whose unconstitutional laws prompted that unanimous Supreme Court decision I mentioned. But by rolling over instead of challenging the FTC’s position, Aylo – like NGL before it – gave the FTC’s stance undeserved weight, which will bolster future agency actions against other platforms on the same theory. 

Sometimes, FTC targets have litigated instead of settling. In those cases, the FTC’s claims get tested through adversarial proceedings in a neutral court, creating case law that can serve as precedent for everyone. Fighting back is all the more important when there are major weaknesses in the FTC’s theory. Indeed, when private litigants sue platforms for allegedly not abiding by their content policies, they tend to lose (mostly). And when the FTC hit a journalism organization with a transparently retaliatory investigation earlier this year, the org sued the FTC and recently won a preliminary injunction. Still, faced with an intrusive and expensive federal government investigation, companies usually choose to settle rather than fight, even if they might have a strong case (and even after the Supreme Court killed court deference to agencies’ interpretations of their enabling statutes). 

Usually, settlements have no precedential value. What sets FTC consent orders apart from most settlement agreements in civil court, which do not bind anyone beyond the parties, is that the FTC is a regulatory agency and consent orders are a primary way it regulates. So long as they’re sufficiently specific, the allegations against the settling company put other companies on notice of what practices the FTC considers unlawful, and the conditions imposed in the consent order tell other companies what the agency thinks are good practices to follow. 

This is why the FTC’s settlement with Aylo will have ramifications beyond Aylo’s services. Other platforms – not just other porn sites, but other services that allow users to upload content (and thus run the risk that users will try to upload CSAM or NCII) – will look to the Aylo matter for guidance on how to stay out of trouble with the Commission. And what the Commission is signaling to all those platforms is that (1) if they say they moderate XYZ content, they’d better do what they say they do, and they might catch a deceptive-practices case if they don’t do it perfectly; (2) even if they amend their policies to remove any statements about content moderation, they can still get hit with an unfair-practices complaint if CSAM or NCII appears where consumers can see it; and (3) to keep CSAM and NCII off their services, the FTC expects them to affirmatively check whether any uploaded content is CSAM or NCII before making the content available, like it’s making Aylo do from now on.

This is how the Heritage Foundation pursues its goal of obliterating online pornography (good luck with that) and controlling what Americans say and hear online: by using the FTC’s powers to go after an unsympathetic actor for abhorrent content that’s not even protected speech. Having poked its hard-right nose into the free-expression tent, that camel will keep trying to nudge its way in. We can expect the FTC to keep expanding the categories of legal speech it deems “unfair” for platforms to show to consumers, and to use its “deceptiveness” theory to try to scare platforms into backing off even further from content moderation policies the GOP considers “censorship,” unanimous Supreme Court decision be damned. 

The FTC got its nose in the tent with consent orders. Consent orders aren’t an inevitable conclusion to an FTC investigation; they’re just a more popular option than litigating. Aylo (and NGL before it) decided to go along to get along. But that’s not how bullies work. Aylo’s deluding itself if it believes this settlement is going to satisfy agency commissioners who think pornography should be illegal, its producers should be imprisoned, and sites carrying it should be shuttered. Their ultimate goal is scorched earth, not a slap on the wrist. This is an agency that’s openly declared war on tech platforms and their supposed “censorship.” Whichever UGC platform gets targeted next had better understand the existential stakes and make a decision: do they want to be the next Media Matters, or the next CBS? Whether it’s the FTC or any other part of this government, appeasement is not the answer.

Posted on BestNetTech - 15 September 2025 @ 12:03pm

The Trump FTC’s War On Porn Just Ensured That Accused CSAM Offenders Will Walk Free

Well, they finally did it. A federal agency finally shattered the precarious base that upholds the edifice of prosecutions for child sex abuse material (CSAM) in America. That agency is the Federal Trade Commission (FTC), which just entered into a deeply problematic settlement with a major online content platform for “doing little to block” CSAM and got a federal judge to approve it. On its face, the order may sound like a win. But, in fact, it will help accused offenders walk free, perversely undermining its own stated purpose.

I’m going to discuss the settlement order in a two-part series. In this post, I’ll describe what the order requires Aylo to do and explain why that’s going to create a huge headache for prosecutors in CSAM cases. In the next post, I’ll discuss why the settlement isn’t really about fighting CSAM anyway; it’s a stalking horse for the Trump FTC’s ulterior agenda, which comes straight out of Project 2025.

The FTC is the nation’s consumer protection watchdog. It lacks criminal enforcement authority; that’s the Department of Justice’s job. As such, the FTC is not in the business of investigating reports of CSAM from online platforms and prosecuting suspected CSAM purveyors. Yet the Commission decided to push the envelope of its authority by dipping its toe into those unfamiliar waters. By refusing to stay in its lane, the FTC just made it harder for the people who actually do prosecute crimes to bring CSAM offenders to justice. Meanwhile, CSAM’s illegality and obloquy will let the Commission disguise a power grab that’s really about controlling legal speech online.

As I know from my own research, everyone who is familiar with the ecosystem of reporting, investigating, and prosecuting online CSAM steers well clear of disturbing the fragile edifice underpinning CSAM prosecutions in the U.S. – to wit, that platforms’ common practice of scanning their services for CSAM is completely voluntary, not the result of government compulsion. That voluntariness is sacrosanct, as I last explained in BestNetTech scarcely a year ago, because of the Fourth Amendment. Typically, its prohibition against unreasonable searches and seizures applies only to the government, not to private actors. But if platforms search users’ uploaded files for CSAM at the government’s behest, not of their own volition, they stop being private entities and become agents of the government. That turns all those scans into unconstitutional warrantless searches, meaning anything they turn up isn’t admissible as evidence in court, making it harder to convict anyone caught by the scans. The government’s compulsion ends up being self-defeating. 

That’s why it’s crucial to avoid government interference with online platforms’ decisionmaking about whether and how to search their services for illegal content. But the FTC just fucked it all up. 

What Happened?

On September 8, a federal judge in Utah approved a proposed stipulated order against Aylo that had been filed a few days earlier by the FTC and the Utah Consumer Protection Division (CPD). You may not know Aylo’s name, but you know its product: it operates Pornhub, the world’s most popular porn website, along with numerous other NSFW properties. To announce the settlement, the FTC issued a press release that linked to the proposed order and the complaint against Aylo. (The full court docket is here, thanks to the folks at the Free Law Project.)

The FTC and Utah CPD allege that Aylo was committing unfair and deceptive trade practices by hosting thousands of pieces of CSAM and non-consensual pornography (revenge porn, rape videos, etc.), despite saying it prohibited those kinds of content. (The complaint calls the latter “NCM,” but I’ll call it by the more-common acronym NCII, for non-consensual intimate imagery.) The investigation stemmed from a December 2020 New York Times opinion column wherein columnist Nick Kristof asserted that CSAM and NCII were rampant problems on Pornhub and other adult sites under the same corporate umbrella (then MindGeek, now Aylo since 2023). 

To settle the allegations, Aylo agreed to make a ton of changes, including significant reforms aimed at removing and preventing CSAM and NCII on its sites. It also agreed to pay the Utah CPD $5 million now, and another $10 million if it fails to comply with the order, which the Utah court retains jurisdiction to enforce. 

Reducing the availability of CSAM and NCII on major porn sites is a worthwhile aim. But by mandating that Aylo monitor all files uploaded to its various services, the settlement will undermine its own ostensible purpose by making it harder to convict anybody caught trying to upload CSAM or NCII to Aylo’s sites.

What Does the Order Require?

The order is over 60 pages long, and it requires a lot of things that fall outside the scope of this discussion (but some of which I’ll cover in part two). The part that’s a problem for CSAM prosecutions comes in Section III of the order, which requires Aylo to “establish, implement, and thereafter maintain” a “Mandated Program to Prevent the Posting and Proliferation of CSAM and NCM.” (That starts at page 11 of the order.) One of the requirements of this program (at p. 17) is that Aylo must start “[u]tilizing available tools and technologies to review Content [defined as ‘any depiction of sexually explicit conduct’] to determine whether it is actual or suspected CSAM or NCM prior to its publication or otherwise making it available to a consumer on [Aylo’s sites], including, but not limited to … [c]omparing Content, via internal or external tools, to Content previously identified and/or fingerprinted or otherwise marked (whether by any Defendant or another entity) as actual or suspected CSAM or NCM.” 

Other elements of the mandated program include requiring human moderators who review content to watch/listen to each file in its entirety, or alternatively read an entire transcript of the content (p. 18), and requiring Aylo to implement “[p]olicies, practices, procedures, and technical measures designed to ensure the consistent and thorough review of Content to determine whether it is actual or suspected CSAM or NCM, … before that Content is published on any Covered Service” (p. 19). 

Put simply, these provisions constitute a mandate to scan all uploaded files for CSAM or NCII. That’s what “content review” means here.1 As required by the FTC and the Utah CPD, agreed to by Aylo, and endorsed by a federal court, Aylo must search all uploaded files to check if they’re a match to known CSAM or NCII, whether the files are uploaded by a user, a content partner, or a model who contributes content to the site. Noncompliance with the order will cost Aylo a $10 million penalty that’s currently suspended (see p. 56).

Scanning for CSAM is a standard practice that is already widespread among many adult sites and user-generated content (UGC) sites generally. The best-known example is Microsoft’s PhotoDNA software, which finds matches to known CSAM. Companies like Google and Meta also have tools for detecting new, previously unseen instances of CSAM. (Some platforms also search for known NCII, a practice likely to expand under the TAKE IT DOWN Act.) Crucially, however, that widespread standard practice is voluntary. The reason it has never before, to my knowledge, been compelled by any U.S. authority is because of the Fourth Amendment. 
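
To make the mechanics concrete, here is a minimal sketch of the hash-matching workflow that tools like PhotoDNA implement: compute a fingerprint of each upload and compare it against a list of fingerprints of previously identified material. This is an illustration only; PhotoDNA itself is a proprietary perceptual hash with fuzzy matching, and every name in the snippet (the hash-list variable, the function names) is a hypothetical stand-in rather than any real tool’s API.

```python
# Illustrative sketch of hash-based matching against a list of known fingerprints.
# Real systems use a perceptual hash with a distance threshold so that resized or
# re-encoded copies still match; this stand-in uses an exact cryptographic hash
# purely to show the workflow. All names here are hypothetical.
import hashlib

# Hypothetical set of fingerprints of previously identified material,
# populated from whatever hash-sharing source the platform participates in.
KNOWN_FINGERPRINTS: set[str] = set()

def fingerprint(file_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded file (stand-in for a perceptual hash)."""
    return hashlib.sha256(file_bytes).hexdigest()

def review_upload(file_bytes: bytes) -> str:
    """Decide what happens to a single upload before it is published."""
    if fingerprint(file_bytes) in KNOWN_FINGERPRINTS:
        # Match to known material: withhold publication and escalate for reporting.
        return "block_and_escalate"
    # No match: proceed to whatever additional review the platform applies.
    return "continue_review"
```

The point of the sketch is the architecture, not the hash function: the platform, not the government, decides whether this check runs at all, and that voluntariness is what the next paragraph turns on.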

The government cannot force a private actor to carry out a search the government could not constitutionally conduct itself; if it could, the Fourth Amendment would be a dead letter. When the government coerces a private actor to carry out a search, the private entity becomes an agent of the government, and its searches must comport with the Fourth Amendment. Generally, a search requires a warrant to be reasonable. Of course, privately-owned online platforms can’t get warrants. So when they search users’ files at the government’s behest, all those scans become mass warrantless searches of the contents of users’ files, in violation of the Constitution. 

All of this was laid out in a landmark 2016 ruling written by then-Judge Neil Gorsuch of the Tenth Circuit — the appeals court for the very same federal district court in Utah that just rubber-stamped the Aylo order. That would’ve been great ammo for Aylo if they’d fought back instead of settling.

Why Care About the Privacy Rights of Alleged Pedophiles?

Who cares about the rights of accused CSAM offenders? Beyond the fact that even the worst among us have constitutional rights, this matters because violating the accused’s Fourth Amendment rights makes it harder to convict them for the terrible things they’re accused of.

The remedy for an unconstitutional search is suppression of the evidence obtained via the illegal search, including any evidence turned up as a result of the initial unlawful search. That is, if the government compels a private platform to scan user files, and the scan turns up CSAM, the CSAM cannot then be used against the user in a prosecution for that CSAM, which, needless to say, makes securing a conviction more difficult. This is why the federal statute requiring platforms to report CSAM when they find it on their services (namely, to a clearinghouse called the National Center for Missing and Exploited Children, or NCMEC) is very, very explicit in saying the statute does not require any service to monitor the contents of user files for CSAM.

Mandating that private platforms scan for CSAM is a completely self-defeating policy. That is why CSAM scans must be voluntary. The entire ecosystem of online platforms scanning for CSAM — which results in tens of millions of reports to NCMEC per year — depends entirely on the voluntariness of those searches. If they’re not voluntary, that whole system comes crashing down.

This “Fourth Amendment agency dilemma” (to borrow the title of a great 2021 paper) is very well-understood among the people who, unlike the Federal Trade Commission, actually work on fighting online CSAM as their daily jobs. As I wrote in a research publication last year, every actor in that ecosystem — platforms, law enforcement, the federal government, NCMEC — is excruciatingly aware of the Fourth Amendment government agent doctrine and takes great care to respect the voluntary nature of platforms’ scanning choices. Now the FTC and the state of Utah have waltzed in, slapped all those people in the face, and gotten a federal court to order Aylo to do the very thing that everyone who knew what they were doing had scrupulously avoided for years.

Thanks to the FTC, all the scans that Aylo conducts under this stipulated order will be mass warrantless searches of the contents of other people’s files. If anyone gets arrested and prosecuted due to CSAM or NCII that’s turned up in those scans, they are, ironically, likelier to walk free precisely because of the FTC’s order turning Aylo into an agent of the government.

This will also affect cases that don’t involve Aylo. As I’ll explain more in part two, the FTC regulates via consent decrees. Inserting a scanning mandate into the Aylo order, based on allegations that Aylo did little to block CSAM or NCII uploads, signals to other UGC-driven sites (not just adult sites) that the FTC expects CSAM/NCII scanning as a baseline. After the FTC put Aylo’s head on a pike to serve as a warning to others, future criminal defendants ensnared by scanning can argue that even if their platform’s scans used to be voluntary, they aren’t anymore. The argument will be stronger for any platforms that only began scanning after the Aylo order. Even if those motions to suppress ultimately fail, they’ll still be a headache for prosecutors, and if any of them do succeed, those defendants will have the FTC to thank.

How Did This Happen, and What Comes Next?

How did this ticking constitutional time bomb make it into the final order? The simplest explanation is that these are consumer protection attorneys who didn’t have the necessary Fourth Amendment knowledge to spot the government agent problem. As any lawyer knows, having deep expertise in one area of the law doesn’t mean you can necessarily issue-spot all the other things lurking in a matter you’re handling. But that’s why you either loop in teammates who have different skills or stay in your damn lane.

Frustratingly, I think the FTC did talk to criminal prosecutors. The Utah court approved the Aylo settlement on the very same day a criminal defendant was sentenced in a different federal court on sex trafficking charges for his role in GirlsDoPorn, a one-time content partner of Aylo’s to which a whole page of the Aylo complaint is dedicated. That’s quite the coincidence, so maybe the FTC lawyers coordinated the timing with the GirlsDoPorn prosecutors. Still, why expect them to find and fix a problem buried partway into a 60-page order in someone else’s case? 

In any event, here we are. Now that the order has been approved by the court, it’s not clear what can be done to fix it. Aylo, the FTC, and the Utah CPD have “waive[d] all rights to appeal or otherwise challenge or contest the validity of this Order” (p. 4). Maybe if there’s a congressional hearing where Aylo, the FTC, and the Utah CPD are asked to explain what “content review” means, the court might sua sponte reconsider the order. I’m not holding my breath. In the meantime, Christmas has come early for the criminal defense bar. Heckuva job, FTC.

  1. Or maybe the mandate to “review content” doesn’t really require Aylo to search all uploads. But if it doesn’t mean that, what does it mean? If Aylo isn’t on notice of what it must do to comply, then the language is vague, which is a separate legal problem.

Posted on BestNetTech - 12 August 2025 @ 11:06am

Another Look At The STOP CSAM Act

Amidst the current batch of child safety bills in Congress, a familiar name appears: the STOP CSAM Act. It was previously introduced in 2023, when I wrote about the threat the bill posed to the availability of strong encryption and consequently to digital privacy in the post-Roe v. Wade era. Those problems endure in the 2025 version (which has passed out of committee), as explained by the EFF, the Internet Society, and many more civil society orgs. To their points, I’ll just add that following the Salt Typhoon hack, no politician in any party has any business ever again introducing a bill that in any way disincentivizes encryption.

With all that said, the encryption angle is not the only thing worth discussing about the reintroduced bill. In this post, I’d like to focus on some other parts of STOP CSAM – specifically, how the bill addresses online platforms’ removal and reporting of child sex abuse material (CSAM), including new language concerning AI-generated CSAM. The bill would make platforms indicate whether reported content is AI – something my latest research finds platforms are not all consistently doing. However, the language of the requirement is overbroad, going well beyond generative AI. What’s more, forcing platforms to indicate whether content is real or AI overlooks the human toll of making that evaluation, risks punishing platforms for inevitable mistakes, and assumes too much about the existence, reliability, and availability of technological tools for synthetic content provenance detection.

STOP CSAM Would Make Platforms Report Whether Content Is AI-Generated

One of the many things the STOP CSAM bill would do is amend the existing federal statute that requires platforms to report apparent CSAM on their services to the CyberTipline operated by the nonprofit clearinghouse the National Center for Missing and Exploited Children (NCMEC). The 2025 version of the bill dictates several new requirements to platforms for how to fill out CyberTipline reports. One is that, “to the extent the information is within the custody or control of a provider,” every CyberTipline report “shall include, to the extent that it is applicable and reasonably available,” “an indication as to whether” each item of reported content “is created in whole or in part through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means, including by adapting, modifying, manipulating, or altering an authentic visual depiction” (i.e., real abuse material). If a platform knowingly omits that information when it’s “reasonably available” – or knowingly submits a report that “contains materially false or fraudulent information” – STOP CSAM permits the federal government to impose a civil penalty of $50,000 to $250,000.

This provision is pertinent to the findings of a paper about AI-generated CSAM that my colleague Shelby Grossman and I published at the end of May. Based on our interviews with platforms (including some AI companies), we find that platforms are generally confident in their ability to detect AI CSAM, and they’re reporting AI CSAM to the CyberTipline (as they must), but it appears platforms aren’t all consistently and accurately labeling the content as being AI-generated when submitting the CyberTipline reporting form (which includes a checkbox marked “Generative AI”). When we interviewed NCMEC employees as part of our research, they confirmed to us that they receive CyberTipline reports with AI-generated files that aren’t labeled as AI. Our paper urges platforms to (1) invest resources in assessing whether newly identified CSAM is AI-generated and accurately labeling AI CSAM in CyberTipline reports, and (2) communicate to NCMEC the platform’s policy for assessing whether CSAM is AI-generated and labeling it as such in its reports.

In short, current practice for AI CSAM seems to be to remove it and report it to NCMEC, but our sense is that most platforms are not prioritizing labeling CSAM as AI-generated in CyberTipline reports. Presently, reporting CSAM (irrespective of whether it’s AI or real) is mandatory, but the statute doesn’t give that many specifics about what information must be included, meaning most parts of the CyberTipline reporting form are optional. Thus there’s currently no incentive to spend extra time trying to figure out whether an image is AI and checking another box (all while the neverending moderation queue keeps piling up). STOP CSAM would change that, and would likely lead platforms to spend more time filling out CyberTipline reports about the content they’d quickly remove.

The $250,000 question is: How accurate does an “indication as to whether” a reported file is partially/wholly AI-generated have to be – and how much effort do platforms have to put into it? Can platforms rely on a facial assessment by a front-line content moderator, or is some more intensive analysis required? At what point is information about a file not “reasonably available” to the platform, even if it’s technically within the platform’s “custody or control”? Also, a lot of CyberTipline reports are submitted automatically without human review at the platform, typically where a platform’s CSAM detection system flags a hash match to known imagery that’s been confirmed as CSAM. How would this AI “indication” requirement interact with automated reporting? 
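
For illustration, here is a hypothetical sketch of the kind of internal record a platform’s reporting pipeline might assemble, showing where an AI indication would sit alongside the rest of a report. The field names and structure are my own invention for this post, not NCMEC’s actual CyberTipline schema, and the comments flag where the bill’s ambiguities (effort, availability, automated reports) would surface in practice.

```python
# Hypothetical internal structure a platform might use when assembling a
# CyberTipline submission. Field names are illustrative only and are not
# NCMEC's actual reporting schema.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ReportedFile:
    file_id: str
    hash_match: bool                 # flagged by a match to a known-CSAM hash list
    reviewed_by_human: bool          # False for fully automated reports
    # Today this is effectively optional and often left unset; under STOP CSAM,
    # knowingly omitting it when the answer is "reasonably available" could
    # carry a civil penalty.
    generative_ai: Optional[bool] = None
    assessment_notes: str = ""       # free-text context on how the call was made

def build_report(files: list[ReportedFile]) -> dict:
    """Assemble the payload the platform's tooling would hand to its submission step."""
    return {
        "files": [asdict(f) for f in files],
        # An automated hash-match report may have had no human look at the file,
        # so there may be no basis for any AI determination at all.
        "has_unassessed_ai_indications": any(f.generative_ai is None for f in files),
    }
```

Even in this toy version, the hard part isn’t the extra boolean; it’s deciding when that boolean must be filled in, by whom, and at what cost.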

The Reporting Requirement Goes Beyond “AI”

STOP CSAM’s new reporting provision doesn’t require the reporting only of AI-generated imagery. Read the language again: when submitting a CyberTipline report, platforms must include “an indication as to whether the apparent [CSAM] is created in whole or in part through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means, including by adapting, modifying, manipulating, or altering an authentic visual depiction.”

That goes well beyond the “Generative AI” checkbox currently included in the reporting form (which can already mean multiple different things if it’s checked, according to our interview with NCMEC). Indeed, this language is so broad that it seems like it would apply even to very minor changes to real abuse images, like enhancing an image’s brightness and color saturation, or flipping it so it’s a mirror image. I’m not sure why or how a platform could reasonably be expected to know what edits have been made to an image. Plus, it’s strange to equate a fully AI-generated image with a real image that’s merely had the color saturation tweaked in a photo editing app. Yet the bill language treats those two things as the same.

This broad language would turn that “Generative AI” checkbox into a catch-all. Checking the checkbox could equally likely mean (1) “this is a digital image of a child who’s actively being abused which has been converted from color to grayscale,” (2) “this is an image from a years-old known abuse image series that’s been altered with Photoshop,” (3) “this is a morphed image of a real kid that’s been spit out by an AI-powered nudify app,” or (4) “this is a fully virtual image of an imaginary child who does not exist.” How is that useful to anyone? Until NCMEC adds more granularity to the reporting form, how is NCMEC, or law enforcement, supposed to triage all the reports with the “Generative AI” box checked? Is Congress’s expectation that platforms must also include additional details elsewhere (i.e. the free text entry box also included in the CyberTipline form)? Will they be fined if they don’t? 

It’s not a speculative concern that platforms would comply with STOP CSAM by reporting that an image has an AI element even if it merely has minor edits. In both this AI CSAM paper and our previous paper on the CyberTipline, we found that platforms are incentivized to “kick the can down the road” when reporting and let NCMEC and law enforcement sort it out. As one platform employee told us, “All companies are reporting everything to NCMEC for fear of missing something.” The burden then falls to NCMEC and law enforcement to deal with the deluge of reports of highly variable quality. Congress reinforces this incentive to over-report whenever it ups the ante for platforms by threatening to punish them more for not complying with increased reporting requirements – such as by fining them up to a quarter of a million dollars for omitting information that was “reasonably available.” The full Senate should keep that in mind should the bill ever be brought to the floor.

The Human Cost of the “Real or AI?” Determination

Although our report urges platforms to try harder to indicate in CyberTipline reports whether content is AI-generated, there are downsides if Congress forces platforms to do so. In adding that mandate to platforms’ CyberTipline reporting requirements, the STOP CSAM bill does not seem to contemplate the human factors involved in making the call as to whether particular content is AI-generated. 

As our paper discusses, there are valid reasons why platforms might hesitate to make the assessment that a file is AI-generated or convey that in a CyberTipline report. For one, platforms may not want to make moderators spend additional time scrutinizing deeply disturbing images or videos. Doing content moderation for CSAM was already psychologically harmful work even before generative AI, and we heard from respondents that AI-generated CSAM tends to be more violent or extreme than other material. One platform employee memorably called it “nightmarescape” content: “It’s images out of nightmares now, and they’re hyperrealistic.” By requiring an indication of whether reported content is AI, the STOP CSAM Act would incentivize platforms to make moderators spend longer analyzing content that’s particularly traumatic to view. Congress should not ignore the human toll of its child-safety bill.

Platforms may also fear making the wrong call: What if a platform reports an image as AI CSAM when it’s actually of a real child in need of rescue? What if the law enforcement officer who receives that report deprioritizes it for action out of the mistaken belief that it’s “just” AI, thereby letting the harm continue? Besides the weight of that mistake on platform personnel’s conscience, there’s also the specter of potential corporate liability for the error. (Platforms are supposed to be immune from liability for their CyberTipline reports, but that isn’t always the case.)

STOP CSAM would exacerbate the fear of getting the “real or AI?” assessment wrong. Platforms could incur stiff fines if a CyberTipline report knowingly omits required information or knowingly includes “materially false or fraudulent” information. That is, a platform could get fined both for failing to indicate that content is AI-generated when in fact it is, and for wrongly indicating that it is when in fact it isn’t, if the government concludes the conduct was knowing. (Even if the platform ends up getting absolved, the path to reaching that outcome will likely be costly and intrusive.)

Forcing platforms to make this assessment, while threatening to fine them for getting it wrong, could improve the consistency and accuracy of platforms’ CyberTipline reporting for AI-generated content. But it won’t come without a human cost, and it won’t guarantee 100% accuracy. There will inevitably be errors where real abuse imagery is mistakenly indicated to be AI (potentially delaying a child’s rescue), or where, as now, AI imagery is mistakenly indicated to be real (potentially wasting investigators’ time). 

To try to comply while mitigating their potential liability for errors, platforms might submit more CyberTipline reports with that “Generative AI” box checked, but add a disclaimer: that this is the platform’s best guess based on reasonably available information, but the platform is not guaranteeing the assessment’s accuracy and the assessment should not be relied on for legal purposes, etc. If platforms hedge their bets, what’s the point of making them check the box?

What’s the State of the Art for AI CSAM Detection?

Congress seems to believe that platforms know for a fact whether any given image they encounter is AI-generated or not, or at least that they can conclusively determine the ground truth. I’m not sure that’s true yet, based on our interviews for the AI CSAM paper.

A respondent from a company that does include AI labels in its CyberTipline reports told us that they still use a manual process of determining whether CSAM is AI-generated. For now, most of our respondents believe the AI CSAM they’re seeing still has obvious tells that it’s synthetic. But moderators will need new strategies as AI CSAM becomes increasingly photorealistic. Already, one platform employee said that even with significant effort, it remains extremely difficult to determine whether AI-generated CSAM is entirely synthetic or based on the likeness of a real child. 

When it comes to content provenance, Congress should take care not to impose reporting requirements without understanding the current state of the technology for detecting AI content as well as the availability of such tools. True, there are already hash lists for AI CSAM that platforms are implementing, and tools do exist for AI CSAM detection. One respondent said that general AI-detection models are often sufficient to determine whether CSAM is AI-generated; we heard from a couple of respondents that existing machine learning classifiers do decently well at detecting AI CSAM, about as well as they do at detecting traditional CSAM. However, we also heard that the results vary by tool and tend to decline when the AI content is less photorealistic. And even currently performant tools can’t remain static, since the cat-and-mouse game of content generation and detection will continue as long as determined bad actors keep exploiting advances in generative AI. 
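
To illustrate where the judgment calls land, here is a hypothetical sketch of how a platform might map a classifier’s output onto the indication the bill demands. Everything here (the thresholds, the deferral logic) is an assumption for illustration; no specific detection tool or vendor is implied, and the ambiguous middle band is precisely where accuracy drops and human review gets pulled back in.

```python
# Hypothetical mapping from a classifier score to the "AI or not?" indication.
# The thresholds are placeholders; real tools vary in accuracy, especially on
# less photorealistic content, so a deferral band for human review is assumed.
from typing import Optional

LIKELY_AI_THRESHOLD = 0.90    # at or above: treat as likely AI-generated
LIKELY_REAL_THRESHOLD = 0.10  # at or below: treat as likely authentic

def ai_indication(ai_score: float) -> Optional[bool]:
    """Turn a model's estimated probability that content is AI-generated into a
    report indication, or return None to route the file to human review."""
    if ai_score >= LIKELY_AI_THRESHOLD:
        return True
    if ai_score <= LIKELY_REAL_THRESHOLD:
        return False
    # The ambiguous middle is where "real or AI?" mistakes happen, and where
    # the bill's penalties make platforms most nervous.
    return None
```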

There’s also the issue of scale. Congress shouldn’t expect every entity that reports CSAM to NCMEC to have the same resources as a massive tech company that submits hundreds of thousands of CyberTipline reports annually. Implementing AI CSAM detection tools might not be appropriate for a small platform that submits only a handful of reports each year and does everything manually. This goes back to the question of how much effort a platform must put into indicating whether reported material is AI, and how accurate that indication is expected to be. Even for big platforms, it is a challenge to determine conclusively whether highly realistic-looking material is real or AI-generated, much less for small ones. Congress should not lose sight of that.

Conclusion

The reboot of STOP CSAM is just one of several bills introduced in this Congress that involve AI and child safety, of which the TAKE IT DOWN Act is the most prominent. Having devoted most of my work over the past two years to the topic of AI-generated CSAM, I find it gratifying to see Congress pay it so much attention. That said, it’s dismaying when legislators’ alleged concern about child sex abuse manifests as yet another plan to punish online platforms unless they “do better,” without reckoning with the counterproductive incentives that creates, the resources available for compliance (especially to different-size platforms), or the technological state of the art. In that regard, unfortunately, the new version of STOP CSAM is the same as the old.

Riana Pfefferkorn (writing in her personal capacity) is a Policy Fellow at the Stanford Institute for Human-Centered AI.

Posted on BestNetTech - 19 August 2024 @ 12:55pm

Suing Apple To Force It To Scan iCloud For CSAM Is A Catastrophically Bad Idea

There’s a new lawsuit in Northern California federal court that seeks to improve child safety online but could end up backfiring badly if it gets the remedy it seeks. While the plaintiff’s attorneys surely mean well, they don’t seem to understand that they’re playing with fire.

The complaint in the putative class action asserts that Apple has chosen not to invest in preventive measures to keep its iCloud service from being used to store child sex abuse material (CSAM) while cynically rationalizing the choice as pro-privacy. This decision allegedly harmed the Jane Doe plaintiff, a child whom two unknown users contacted on Snapchat to ask for her iCloud ID. They then sent her CSAM over iMessage and got her to create and send them back CSAM of herself. Those iMessage exchanges went undetected, the lawsuit says, because Apple elected not to employ available CSAM detection tools, thus knowingly letting iCloud become “a safe haven for CSAM offenders.” The complaint asserts claims for violations of federal sex trafficking law, two states’ consumer protection laws, and various torts including negligence and products liability.

Here are key passages from the complaint:

[Apple] opts not to adopt industry standards for CSAM detection… [T]his lawsuit … demands that Apple invest in and deploy means to comprehensively … guarantee the safety of children users. … [D]espite knowing that CSAM is proliferating on iCloud, Apple has “chosen not to know” that this is happening … [Apple] does not … scan for CSAM in iCloud. … Even when CSAM solutions … like PhotoDNA[] exist, Apple has chosen not to adopt them. … Apple does not proactively scan its products or services, including storages [sic] or communications, to assist law enforcement to stop child exploitation. … 

According to [its] privacy policy, Apple had stated to users that it would screen and scan content to root out child sexual exploitation material. … Apple announced a CSAM scanning tool, dubbed NeuralHash, that would scan images stored on users’ iCloud accounts for CSAM … [but soon] Apple abandoned its CSAM scanning project … it chose to abandon the development of the iCloud CSAM scanning feature … Apple’s Choice Not to Employ CSAM Detection … Is a Business Choice that Apple Made. … Apple … can easily scan for illegal content like CSAM, but Apple chooses not to do so. … Upon information and belief, Apple … allows itself permission to screen or scan content for CSAM content, but has failed to take action to detect and report CSAM on iCloud. … 

[Questions presented by this case] include: … whether Defendant has performed its duty to detect and report CSAM to NCMEC [the National Center for Missing and Exploited Children]. … Apple … knew or should have known that it did not have safeguards in place to protect children and minors from CSAM. … Due to Apple’s business and design choices with respect to iCloud, the service has become a go-to destination for … CSAM, resulting in harm for many minors and children [for which Apple should be held strictly liable] … Apple is also liable … for selling defectively designed services. … Apple owed a duty of care … to not violate laws prohibiting the distribution of CSAM and to exercise reasonable care to prevent foreseeable and known harms from CSAM distribution. Apple breached this duty by providing defective[ly] designed services … that render minimal protection from the known harms of CSAM distribution. … 

Plaintiff [and the putative class] … pray for judgment against the Defendant as follows: … For [an order] granting declaratory and injunctive relief to Plaintiff as permitted by law or equity, including: Enjoining Defendant from continuing the unlawful practices as set forth herein, until Apple consents under this court’s order to … [a]dopt measures to protect children against the storage and distribution of CSAM on the iCloud … [and] [c]omply with quarterly third-party monitoring to ensure that the iCloud product has reasonably safe and easily accessible mechanisms to combat CSAM….”

What this boils down to: Apple could scan iCloud for CSAM, and has said in the past that it would and that it does, but in reality it chooses not to. The failure to scan is a wrongful act for which Apple should be held liable. Apple has a legal duty to scan iCloud for CSAM, and the court should make Apple start doing so.

This theory is perilously wrong.

The Doe plaintiff’s story is heartbreaking, and it’s true that Apple has long drawn criticism for its approach to balancing multiple values such as privacy, security, child safety, and usability. It is understandable to assume that the answer is for the government, in the form of a court order, to force Apple to strike that balance differently. After all, that is how American society frequently remedies alleged shortcomings in corporate practices. 

But this isn’t a case about antitrust, or faulty smartphone audio, or virtual casino apps (as in other recent Apple class actions). Demanding that a court force Apple to change its practices is uniquely infeasible, indeed dangerous, when it comes to detecting illegal material its users store on its services. That’s because this demand presents constitutional issues that other consumer protection matters don’t. Thanks to the Fourth Amendment, the courts cannot force Apple to start scanning iCloud for CSAM; even pressuring it to do so is risky. Compelling the scans would, perversely, make it way harder to convict whoever the scans caught. That’s what makes this lawsuit a catastrophically bad idea.

(The unconstitutional remedy it requests isn’t all that’s wrong with this complaint, mind. Let’s not get into the Section 230 issues it waves away in two conclusory sentences. Or how it mistakes language in Apple’s privacy policy that it “may” use users’ personal information for purposes including CSAM scanning, for an enforceable promise that Apple would do that. Or its disingenuous claim that this isn’t an attack on end-to-end encryption. Or the factually incorrect allegation that “Apple does not proactively scan its products or services” for CSAM at all, when in fact it does for some products. Let’s set all of that aside. For now.)

The Fourth Amendment to the U.S. Constitution protects Americans from unreasonable searches and seizures of our stuff, including our digital devices and files. “Reasonable” generally means there’s a warrant for the search. If a search is unreasonable, the usual remedy is what’s called the exclusionary rule: any evidence turned up through the unconstitutional search can’t be used in court against the person whose rights were violated. 

While the Fourth Amendment applies only to the government and not to private actors, the government can’t use a private actor to carry out a search it couldn’t constitutionally do itself. If the government compels or pressures a private actor to search, or the private actor searches primarily to serve the government’s interests rather than its own, then the private actor counts as a government agent for purposes of the search. The search must then abide by the Fourth Amendment; otherwise, the remedy is exclusion. 

If the government – legislative, executive, or judicial – forces a cloud storage provider to scan users’ files for CSAM, that makes the provider a government agent, meaning the scans require a warrant, which a cloud services company has no power to get, making those scans unconstitutional searches. Any CSAM they find (plus any other downstream evidence stemming from the initial unlawful scan) will probably get excluded, but it’s hard to convict people for CSAM without using the CSAM as evidence, making acquittals likelier. Which defeats the purpose of compelling the scans in the first place. 

Congress knows this. That’s why, in the federal statute requiring providers to report CSAM to NCMEC when they find it on their services, there’s an express disclaimer that the law does not mean they must affirmatively search for CSAM. Providers of online services may choose to look for CSAM, and if they find it, they have to report it – but they cannot be forced to look. 

Now do you see the problem with the Jane Doe lawsuit against Apple?

This isn’t a novel issue. BestNetTech has covered it before. It’s all laid out in a terrific 2021 paper by Jeff Kosseff. I have also discussed this exact topic over and over and over and over and over and over again. As my latest publication (based on interviews with dozens of people) describes, all the stakeholders involved in combating online CSAM – tech companies, law enforcement, prosecutors, NCMEC, etc. – are excruciatingly aware of the “government agent” dilemma, and they all take great care to stay very far away from potentially crossing that constitutional line. Everyone scrupulously preserves the voluntary, independent nature of online platforms’ decisions about whether and how to search for CSAM. 

And now here comes this lawsuit like the proverbial bull in a china shop, inviting a federal court to destroy that carefully maintained and exceedingly fragile dynamic. The complaint sneers at Apple’s “business choice” as a wrongful act to be judicially reversed rather than something absolutely crucial to respect.

Fourth Amendment government agency doctrine is well-established, and there are numerous cases applying it in the context of platforms’ CSAM detection practices. Yet Jane Doe’s counsel don’t appear to know the law. For one, their complaint claims that “Apple does not proactively scan its products or services … to assist law enforcement to stop child exploitation.” But scanning to serve law enforcement’s interests is exactly what would make Apple a government agent; the complaint faults Apple for not doing the very thing that would trigger the constitutional problem. Similarly, the complaint claims Apple “has failed to take action to detect and report CSAM on iCloud,” and asks “whether Defendant has performed its duty to detect and report CSAM to NCMEC.” This conflates two critically distinct actions. Apple does not and cannot have any duty to detect CSAM, as expressly stated in the statute imposing a duty to report CSAM. It’s like these lawyers didn’t even read the entire statute, much less any of the Fourth Amendment jurisprudence that squarely applies to their case. 

Any competent plaintiff’s counsel should have figured this out before filing a lawsuit asking a federal court to make Apple start scanning iCloud for CSAM, thereby making Apple a government agent, thereby turning the compelled iCloud scans into unconstitutional searches, thereby making it likelier for any iCloud user who gets caught to walk free, thereby shooting themselves in the foot, doing a disservice to their client, making the situation worse than the status quo, and causing a major setback in the fight for child safety online. 

The reason nobody’s filed a lawsuit like this against Apple to date, despite years of complaints from left, right, and center about Apple’s ostensibly lackadaisical approach to CSAM detection in iCloud, isn’t because nobody’s thought of it before. It’s because they thought of it and they did their fucking legal research first. And then they backed away slowly from the computer, grateful to have narrowly avoided turning themselves into useful idiots for pedophiles. But now these lawyers have apparently decided to volunteer as tribute. If their gambit backfires, they’ll be the ones responsible for the consequences.

Riana Pfefferkorn is a policy fellow at Stanford HAI who has written extensively about the Fourth Amendment’s application to online child safety efforts.

Posted on BestNetTech - 1 May 2023 @ 12:11pm

The STOP CSAM Act Is An Anti-Encryption Stalking Horse

Recently, I wrote for Lawfare about Sen. Dick Durbin’s new STOP CSAM Act bill, S.1199. The bill text is available here. There are a lot of moving parts in this bill, which is 133 pages long. (Mike valiantly tries to cover them here.) I am far from done with reading and analyzing the bill language, but already I can spot a couple of places where the bill would threaten encryption, so those are what I’ll discuss today.

According to Durbin, online service providers covered by the bill would have “to produce annual reports detailing their efforts to keep children safe from online sex predators, and any company that promotes or facilitates online child exploitation could face new criminal and civil penalties.” Child safety online is a worthy goal, as is improving public understanding of how influential tech companies operate. But portions of the STOP CSAM bill pose risks to online service providers’ ability to use end-to-end encryption (E2EE) in their service offerings. 

E2EE is a widely used technology that protects everyone’s privacy and security by encoding the contents of digital communications and files so that they’re decipherable only by the sender and intended recipients. Not even the provider of the E2EE service can read or hear its users’ conversations. E2EE is built in by default to popular apps such as WhatsApp, iMessage, FaceTime, and Signal, thereby securing billions of people’s messages and calls for free. Default E2EE is also set to expand to Meta’s Messenger app and Instagram direct messages later this year. 
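
To make the core idea concrete, here is a minimal sketch in Python using the PyNaCl library. It illustrates the principle only, and is not how Signal, WhatsApp, or any other app actually implements E2EE (real messaging protocols add forward secrecy, key verification, group messaging, and much more):

```python
# A minimal sketch of the end-to-end encryption principle using PyNaCl
# (pip install pynacl). Illustrative only; real protocols are far richer.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device. Private keys never
# leave the device and are never shared with the service provider.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at 6")

# The provider relays `ciphertext` but holds no key that can open it.
# Only Bob can decrypt, using his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"see you at 6"
```

The property that matters for the policy debate is visible in the middle step: the service carrying the message sees only opaque bytes, which is exactly why E2EE frustrates provider-side content scanning.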

E2EE’s growing ubiquity seems like a clear win for personal privacy, security, and safety, as well as national security and the economy. And yet E2EE’s popularity has its critics – including, unfortunately, Sen. Durbin. Because it’s harder for providers and law enforcement to detect malicious activity in encrypted environments than unencrypted ones (albeit not impossible, as I’ll discuss), law enforcement officials and lawmakers often demonize E2EE. But E2EE is a vital protection against crime and abuse, because it helps to protect people (children included) from the harms that happen when their personal information and private conversations fall into the wrong hands: data breaches, hacking, cybercrime, snooping by hostile foreign governments, stalkers and domestic abusers, and so on.

That’s why it’s so important that national policy promote rather than dissuade the use of E2EE – and why it’s so disappointing that STOP CSAM has turned out to be just the opposite: yet another misguided effort by lawmakers in the name of online safety that would only make us all less safe. 

First, STOP CSAM’s new criminal and civil liability provisions could be used to hold E2EE services liable for CSAM and other child sex offenses that happen in encrypted environments. Second, the reporting requirements look like a sneaky attempt to tee up future legislation to ban E2EE outright.

STOP CSAM’s New Civil and Criminal Liability for Online Service Providers

Among the many, many things it does in 133 pages, STOP CSAM creates a new federal crime, “liability for certain child exploitation offenses.” It also creates new civil liability by making a carve-out from Section 230 immunity to allow child exploitation victims to bring lawsuits against the providers of online services, as well as the app stores that make those services available. Both of these new forms of liability, criminal and civil, could be used to punish encrypted services in court.

The new federal crime is for a provider of an interactive computer service (an ICS provider, as defined in Section 230) “to knowingly (1) host or store child pornography or make child pornography available to any person; or (2) otherwise knowingly promote or facilitate a violation of” certain federal criminal statutes that prohibit CSAM and child sexual exploitation (18 U.S.C. §§ 2251, 2251A, 2252, 2252A, or 2422(b)). 

This is rather duplicative: It’s already illegal under those laws to knowingly possess CSAM or knowingly transmit it over the Internet. That goes for online service providers, too. So if there’s an online service that “knowingly hosts or stores” or transmits or “makes available” CSAM (whether on its own or by knowingly letting its users do so), that’s already a federal crime under existing law, and the service can be fined.

So why propose a new law that says “this means you, online services”? It’s the huge size of the fines that could be imposed on providers: up to $1 million, or $5 million if the provider’s conduct either causes someone to be harmed or “involves a conscious or reckless risk of serious personal injury.” Punishing online service providers specifically with enormous fines, for their users’ child sex offenses, is the point of re-criminalizing something that’s already a crime.

The new civil liability for providers comes from removing Section 230’s applicability to civil lawsuits by the victims of CSAM and other child sexual exploitation crimes. There’s a federal statute, 18 U.S.C. § 2255, that lets those victims sue the perpetrator(s). Section 230 currently bars those lawsuits from being brought against providers. That is, Congress has heretofore decided that if online services commit the aforementioned child sex offenses, the sole enforcer should be the Department of Justice, not civil plaintiffs. STOP CSAM would change that. (More about that issue here.) 

Providers would now be fair game for 2255 lawsuits by child exploitation victims. Victims could sue for “child exploitation violations” under an enumerated list of federal statutes. They could also sue for “conduct relating to child exploitation.” That phrase is defined with respect to two groups: ICS providers (as defined by Section 230), and “software distribution services” (think: app stores, although the definition is way broader than that). 

Both ICS providers and software distribution services could be sued for one type of “conduct relating to child exploitation”: “the intentional, knowing, reckless, or negligent promotion or facilitation of conduct that violates” an enumerated list of federal child exploitation statutes. And, ICS providers alone (but not software distribution services) could be sued for a different type of conduct: “the intentional, knowing, reckless, or negligent hosting or storing of child pornography or making child pornography available to any person.” 

So, to sum up: STOP CSAM

(1) creates a new crime when ICS providers knowingly promote or facilitate CSAM and child exploitation crimes, and 

(2) exposes ICS providers to civil lawsuits by child exploitation victims if they intentionally, knowingly, recklessly, or negligently (a) host/store/make CSAM available, or (b) promote or facilitate child exploitation conduct (for which app stores can be liable too).

Does E2EE “Promote or Facilitate” Child Exploitation Offenses?

Here, then, is the literally million-dollar question: Do E2EE service providers “promote or facilitate” CSAM and other child exploitation crimes, by making their users’ communications unreadable by the provider and law enforcement? 

It’s not clear what “promote or facilitate” even means! That same phrase is also found in a 2018 law, SESTA/FOSTA, that carved out sex trafficking offenses from providers’ general immunity against civil lawsuits and state criminal charges under Section 230. And that same phrase is currently being challenged in court as unconstitutionally vague and overbroad. Earlier this year, a panel of federal appeals judges appeared skeptical of its constitutionality at oral argument, but they haven’t issued their written opinion yet. Why Senator Durbin thinks it’s a great idea to copy language that’s on the verge of being held unconstitutional, I have no clue.

If a court were to hold that E2EE services “promote or facilitate” child sex offenses (whatever that means), then the E2EE service provider’s liability would turn on whether the case was criminal or civil. If it’s criminal, then federal prosecutors would have to prove the service knowingly promoted or facilitated the crime by being E2EE. “Knowing” is a pretty high bar to prove, which is appropriate for a crime. 

In a civil lawsuit, however, there are four different mental states the plaintiff could choose from. Two of them – recklessness or negligence – are easier to prove than the other two (knowledge or intent). They impose a lower bar to establishing the defendant’s liability in a civil case than the DOJ would have to meet in a federal criminal prosecution. (See here for a discussion of these varying mental-state standards, with helpful charts.)

Is WhatsApp negligently facilitating child exploitation because it’s E2EE by default? Is Zoom negligently facilitating child exploitation because users can choose to make a Zoom meeting E2EE? Are Apple and Google negligently facilitating child exploitation by including WhatsApp, Zoom, and other encrypted apps in their app stores? If STOP CSAM passes, we could expect plaintiffs to immediately sue all of those companies and argue exactly that in court.

That’s why STOP CSAM creates a huge disincentive against offering E2EE. It would open up E2EE services to a tidal wave of litigation by child exploitation victims for giving all their users a technology that is indispensable to modern cybersecurity and data privacy. The clear incentive would be for E2EE services to remove or weaken their encryption, making it easier to detect child exploitation conduct by their users, in the hope of avoiding a finding that they were “negligent” on child safety for, ironically, using a bog-standard cybersecurity technology to protect their users.

It is no accident that STOP CSAM would open the door to punishing E2EE service providers. Durbin’s February press release announcing his STOP CSAM bill paints E2EE as antithetical to child safety. The very first paragraph predicts that providers’ continued adoption of E2EE will cause a steep reduction in the volume of (already mandated) reports of CSAM they find on their services. It goes on to suggest that deploying E2EE treats children as “collateral damage,” framing personal privacy and child safety as flatly incompatible. 

The kicker is that STOP CSAM never even mentions the word “encryption.” Even the EARN IT Act – a terrible bill that I’ve decried at great length, which was reintroduced in the Senate on the same day as STOP CSAM – has a weak-sauce provision that at least kinda tries halfheartedly to protect encryption from being the basis for provider liability. STOP CSAM doesn’t even have that!

Teeing Up a Future E2EE Ban

Even leaving aside the “promote or facilitate” provisions that would open the door to an onslaught of litigation against the providers of E2EE services, there’s another way in which STOP CSAM is sneakily anti-encryption: by trying to get encrypted services to rat themselves out to the government.

The STOP CSAM bill contains mandatory transparency reporting provisions, which, as my Lawfare piece noted, have become commonplace in the recent bumper crop of online safety bills. The transparency reporting requirements apply to a subset of the online service providers that are required to report CSAM they find under an existing federal law, 18 U.S.C. § 2258A. (That law’s definition of covered providers has a lot of overlap, in practice, with Section 230’s “ICS provider” definition. Both of these definitions plainly cover apps for messaging, voice, and video calls, whether they’re E2EE or not.) In addition to reporting the CSAM they find, those covered providers would also separately have to file annual reports about their efforts to protect children. 

Not every provider that has to report CSAM would have to file these annual reports, just the larger ones: specifically, those with at least one million unique monthly visitors/users and over $50 million in annual revenue. That’s a distinction from the “promote or facilitate” liability discussed above, which doesn’t just apply to the big guys.

Covered providers must file an annual report with the Attorney General and the Federal Trade Commission that provides information about (among other things) the provider’s “culture of safety.” This means the provider must describe and assess the “measures and technologies” it employs for protecting child users and keeping its service from being used to sexually abuse or exploit children. 

In addition, the “culture of safety” report must also list “[f]actors that interfere with the provider’s ability to detect or evaluate instances of child sexual exploitation and abuse,” and assess those factors’ impact.

That provision set off alarm bells in my head. I believe this reporting requirement is intended to force providers to cough up internal data and create impact assessments, so that the federal government can then turn around and use that information as ammunition to justify a subsequent legislative proposal to ban E2EE. 

This hunch arises from Sen. Durbin’s own framing of the bill. As I noted above, his February press release about STOP CSAM spends its first two paragraphs claiming that E2EE would “turn off the lights” on detecting child sex abuse online. Given this framing, it’s pretty straightforward to conclude that the bill’s “interfering factors” report requirement has E2EE in mind. 

So: In addition to opening the door to civil and/or criminal liability for E2EE services without ever mentioning the word “encryption” (as explained above), STOP CSAM is trying to lay the groundwork for justifying a later bill to more overtly ban providers from offering E2EE at all.

But It’s Not That Simple, Durbin

There’s no guarantee this plan will succeed, though. If this bill passes, I’m skeptical that its ploy to fish for evidence against E2EE will play out as intended, because it rests on a faulty assumption: the oft-repeated premise that online service providers can’t fight abuse unless they can access the contents of users’ files and communications at will, a capability E2EE impedes. My own research has shown that premise to be untrue. 

Last year, I published a peer-reviewed article analyzing the results of a survey I conducted of online service providers, including some encrypted messaging services. Many of the participating providers would likely be covered by the STOP CSAM bill. The survey asked participants to describe their trust and safety practices and rank how useful they were against twelve different categories of online abuse. Two categories pertained to child safety: CSAM and child sexual exploitation (CSE) such as grooming and enticement.

My findings show that CSAM is distinct from other kinds of online abuse. What currently works best to detect CSAM isn’t what works best against other abuse types, and vice versa. For CSAM, survey participants considered scanning for abusive content to be more useful than other techniques (user reporting and metadata analysis) that — unlike scanning — don’t rely on at-will provider access to user content. However, that wasn’t true of any other category of abuse — not even other child safety offenses.

For detecting CSE, user reporting and content scanning were considered equally useful. In most of the remaining 10 abuse categories, user reporting was deemed more useful than any other technique. Many of those categories (e.g., self-harm and harassment) affect children as well as adults online. In short, user reports are a critically important tool in providers’ trust and safety toolbox.

Here’s the thing: User reporting — the best weapon against most kinds of abuse, according to providers themselves — can be, and is, done in E2EE environments. That means rolling out E2EE doesn’t nuke a provider’s abuse-fighting capabilities. My research debunks that myth.

My findings show that E2EE does not affect a provider’s trust and safety efforts uniformly; rather, E2EE’s impact will likely vary depending on the type of abuse in question. Even online child safety is not a monolithic problem (as was cogently explained in another recent report by Laura Draper of American University). There’s simply no one-size-fits-all answer to solving online abuse. 

From these findings, I conclude that policymakers should not pass laws regulating encryption and the Internet based on the example of CSAM alone, because CSAM poses such a unique challenge. 

And yet that’s just what I suspect Sen. Durbin has in mind: to collect data about one type of abusive content as grounds to justify a subsequent law banning providers from offering E2EE to their users. Never mind that such a ban would affect all content and all users, whether abusive or not.

That’s an outcome we can’t afford. Legally barring providers from offering strong cybersecurity and privacy protections to their users wouldn’t keep children safe; it would just make everybody less safe, children included. As a recent report from the Child Rights International Network and DefendDigitalMe describes, while E2EE can be misused, it is nevertheless a vital tool for protecting the full range of children’s rights, from privacy to free expression to protection from violence (including state violence and abusive parents). That’s in addition to the role strong encryption plays in protecting the personal data, financial information, sensitive secrets, and even bodily safety of domestic violence victims, military servicemembers, journalists, government officials, and everyone in between. 

Legislators’ tunnel-vision view of E2EE as nothing but a threat requires casting all those considerations aside — treating them as “collateral damage,” to borrow Sen. Durbin’s phrase. But the reality is that billions of people use E2EE services every day, of whom only a tiny sliver use them for harm — and my research shows that providers have other ways to deal with those bad actors. As I conclude in my article, anti-E2EE legislation just makes no sense. 

Given the crucial importance of strong encryption to modern life, Sen. Durbin shouldn’t expect the providers of popular encrypted services to make it easy for him to ban it. Those major players covered by the STOP CSAM bill? They have PR departments, lawyers, and lobbyists. Those people weren’t born yesterday. If I can spot a trap, so can they. The “culture of safety” reporting requirements are meant to give providers enough rope to hang themselves. That’s like a job interviewer asking a candidate what their greatest weakness is and expecting a raw and damning response. The STOP CSAM bill may have been crafted as a ticking time bomb for blowing up encryption, but E2EE service providers won’t be rushing to light the fuse. 

From my research, I know that providers’ internal child-safety efforts are too complex to be reduced to a laundry list of positives and negatives. If forced to submit the STOP CSAM bill’s mandated reports, providers will seize upon the opportunity to highlight how their E2EE services help protect children and describe how their panoply of abuse-detection measures (such as user reporting) help to mitigate any adverse impact of E2EE. While its opponents try to caricature E2EE as a bogeyman, the providers that actually offer E2EE will be able to paint a fuller picture. 

Will It Even Matter What Providers’ “Culture of Safety” Reports Say?

Unfortunately, given how the encryption debate has played out in recent years, we can expect Congress and the Attorney General (a role recently held by vehemently anti-encryption individuals) to accuse providers of cherry-picking the truth in their reports. And they’ll do so even while they themselves cherry-pick statistics and anecdotes that favor their pre-existing agenda. 

I’m basing that prediction on my own experience of watching my research, which shows that online trust and safety is compatible with E2EE, get repeatedly cherry-picked by those trying to outlaw E2EE. They invariably highlight my anomalous findings regarding CSAM while leaving out all the other findings and conclusions that are inconvenient to their false narrative that E2EE wholly precludes trust and safety enforcement. As an academic, I know I can’t control how my work product gets used. But that doesn’t mean I don’t keep notes on who’s misusing it and why.

Providers can offer E2EE and still effectively combat the misuse of their services. Users do not have to accept intrusive surveillance as the price of avoiding untrammeled abuse, contrary to what anti-encryption officials like Sen. Durbin would have us believe. 

If the STOP CSAM bill passes and its transparency reporting provisions go into effect, providers will use them to highlight the complexity of their ongoing efforts against online child sex abuse, a problem that is as old as the Internet. The question is whether that will matter to congressmembers who have already made up their minds about the supposed evils of encryption and the tech companies that offer it — or whether those annual reports were always intended as an exercise in futility.

What’s Next for the STOP CSAM Bill?

It took two months after that February press release for Durbin to actually introduce the bill in mid-April, and even longer for the bill text to appear on the congressional bill tracker. Durbin chairs the Senate Judiciary Committee, where the bill was slated for consideration in each of the last two weeks’ committee meetings, but it got punted both times. Now, the best guess is that it will be discussed and marked up this coming Thursday, May 4. However, it’s quite possible it will get delayed yet again. On the one hand, Durbin as the committee chair has a lot of power to move his own bill along; on the other hand, he hasn’t garnered a single co-sponsor yet, and might take more time to get other Senators on board before bringing it to markup.

I’m heartened that Durbin hasn’t gotten any co-sponsors and has had to slow-roll the bill. STOP CSAM is very dense, it’s very complicated, and in its current form, it poses a huge threat to the security and privacy of the Internet by dissuading E2EE. There may be some good things in the bill, as Mike wrote, but at 133 pages long, it’s hard to figure out what the bill actually does and whether those would be good or bad outcomes. I’m sure I’ll be writing more about STOP CSAM as I continue to read and digest it. Meanwhile, if you have any luck making sense of the bill yourself, and your Senator is on the Judiciary Committee, contact their office and let them know what you think.

Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory. A version of this piece originally appeared on the Stanford CIS blog.

Posted on BestNetTech - 9 March 2023 @ 01:03pm

Another Casualty If Section 230 Gets Repealed: Food Safety Data

I’m a latecomer to the whole “podcasts” phenomenon. I didn’t start listening to them until 2020, when the pandemic suddenly gave me the free time and the incentive to get out of my small apartment and go on long walks. That’s my excuse for only recently discovering “Maintenance Phase,” a terrific podcast that “debunks and decodes” the wellness and weight-loss industries. 

Last week, I (finally) listened to a November episode about a food-poisoning outbreak last year among customers of meal-kit startup the Daily Harvest. I had good timing, as it happens: the FDA just released a report about the incident. According to FDA data (which may understate the outbreak’s extent), nearly 400 people were sickened nationwide, of whom a whopping one-third were hospitalized; some even had to have their gallbladders removed. 

How did this outbreak come to light? The Internet. News outlets covered the incident after numerous people posted to Reddit and a social-media influencer posted to TikTok, all with similar horror stories. According to NPR, Twitter and Instagram users also contributed to the groundswell of data that all pointed to a particular dish (a lentil & leek “crumble”). By sharing their stories on social media, those affected were able to put two and two together, and to call on the company and the FDA to respond.

One thing I learned from the podcast was that, even though the FDA is the federal agency responsible for the safety of the nation’s food supply, when Americans report being sickened by food they’ve consumed, those reports go to state agency investigators. As “Maintenance Phase” pointed out, that means it can be difficult to see the nationwide forest for the state-by-state trees: Food-safety incidents in different states may all come from the same source, but the links between different states’ outbreaks may be non-obvious or slow to come to light.

That’s part of what made the postings to Reddit and other social media sites so important. Those platforms allowed affected Daily Harvest customers to share their experiences, connect with others – and warn everyone else not to eat the culprit dish, beating the company and the FDA to the punch. Important food-safety information reached the general public quickly, while providing data points about the scope, reach, and severity of the outbreak. 

And Section 230 is the reason that could happen.

Businesses hate negative online reviews. That’s true whether it’s a major multinational corporation or your local dentist’s office. And for years, they’ve shown their contempt for their customers: first by providing subpar products, services, or customer experiences, then by trying to stop dissatisfied customers from telling others. They’ve pulled every nasty legal trick in the book, from claiming that “gripe sites” infringe their trademarks to inserting “non-disparagement” clauses in their customer contracts. It got so bad that Congress, which can’t even reliably keep the government running, took action in 2016 by outlawing these unconscionable contract clauses. Federal government intervention was necessary because businesses have zero scruples about suing their own customers. 

Fortunately, however, they can’t sue the platforms that host those customers’ complaints. Section 230 is the law that immunizes consumer review outlets, social media platforms, and any other website that hosts user-generated content from (potentially ruinous) liability for their users’ postings. Yelp, Amazon reviews, Angi (f/k/a Angie’s List), Google Maps reviews: all of those platforms are protected by Section 230, which applies to both user reviews they leave up and those they take down. 

Without 230, consumer review sites would be on the hook whether they moderated user postings or not. Businesses could sue for libel over negative reviews that stayed online. (Indeed, Section 230 was Congress’s response to a court decision over a defamation claim.) But if the site took down all negative reviews in order to avoid libel suits, then harmed consumers could sue them for negligence for removing information that might otherwise have warned them away from an unsanitary restaurant or incompetent construction firm. 

Make no mistake, Reddit is notorious for hosting a ton of awful speech. When playing amateur detective, its users have been terribly wrong in the past. But this time, they were right. An ingredient in the Daily Harvest’s lentil & leek crumble really did sicken hundreds of people. And thanks to Section 230, Reddit, Twitter, Instagram, and TikTok didn’t have to fear the Daily Harvest might sue them, so they didn’t have any incentive to take those posts down.

Online reviews aren’t just guideposts for other consumers. They’re data that can be very valuable in the hands of government agencies charged with protecting the public and doing scientific research. That might be food safety regulators investigating consumer complaints; it can also be the USGS tracking Twitter mentions of earthquakes.

If Section 230 gets repealed, the San Andreas Fault won’t threaten to sue Twitter when a user mistakes a dump truck for a temblor. But users’ accounts of suffering harm from consumer products and services would suddenly become a liability for the websites hosting them – so that valuable data might be silenced. 

Without 230, as said, content moderation becomes “damned if you do, damned if you don’t.” The only surefire way to avoid getting sued is to shut down the forum hosting all of that user speech, just in case some of the speech might expose the forum owner to liability. That’s exactly what happened the moment Congress passed a carve-out from Section 230 for certain material back in 2018. If Congress (or the Supreme Court) keeps tinkering with the statute, we can expect a lot more user speech forums, carrying a lot of socially-beneficial speech, to be shuttered for good. And that’s what really makes me feel sick to my stomach.

Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory. She promises this post is not spon-con for “Maintenance Phase” (or The Daily Harvest).

Posted on BestNetTech - 16 May 2022 @ 01:37pm

The End Of Roe Will Bring About A Sea Change In The Encryption Debate

With the Supreme Court poised to rip away a constitutional right that’s been the law of the land for nearly half a century by overturning Roe v. Wade, it’s time for the gloves to come off in the encryption debate. For a quarter of a century, it has been an unspoken prerequisite for “serious” discussion that American laws and law enforcement must be given a default presumption of legitimacy, respect, and deference. That was always bullshit, the end of Roe confirms it, and I’m not playing that game anymore.

Weirdly, there are a lot of similarities between encryption and abortion. Encryption is a standard cybersecurity measure, just like abortion is a standard medical procedure. Encryption is just one component of a comprehensive data privacy and security program, just like abortion is just one component of reproductive health care. They both save lives. They both support human dignity. They’re both deeply bound up with the right to autonomy privacy, no matter what a hard-right Supreme Court says. (Ironically, the way things are going, the Supreme Court’s position will soon be that we have more privacy rights in our phones than in our own bodies.) And finally, both encryption and abortion keep being framed as something “controversial” rather than something that you and I have every damn right to – something that should be ubiquitously available without encumbrance. 

It would be nice if both of these things were settled questions, but as we’ve seen in both cases, the opponents of each will never let them be. The opponents of bodily autonomy are about to score a victory they’ve been working towards for decades. The immediate result will be total bans and criminalization of abortion in large swaths of the United States. We absolutely cannot afford for the opponents of encryption to prevail as well, whether in the U.S., the EU, its member states, or anywhere else.

The only reason there’s still any “debate” over encryption is because law enforcement refuses to let it drop. For over a quarter of a century, they’ve constantly insisted on the primacy of their interests. They demand to be centered in every discussion about encryption. They frame encryption as a danger to public safety and position themselves as having a monopoly on protecting public safety. They’ve insisted that all other considerations – cybersecurity, privacy, free expression, personal safety – must be made subordinate to their priorities. They expect everyone else to make trade-offs in the name of their interests but refuse to make trade-offs themselves. Nothing trumps the investigation of crime.

Why should law enforcement’s interests outweigh everything else? Because they’re “the good guys.” In debates about whether law enforcement should get “exceptional access” (i.e., a backdoor) to our encrypted communications and files, we pretend that American (and other Western democracies’) law enforcement are “the good guys,” positioned in contrast to “the bad guys”: criminals, hackers, foreign adversaries. When encryption advocates talk about how encryption is vital for protecting people from the threat posed by abusive, oppressive governments, we engage in the polite fiction that we’re talking about “that other country, over there.” It’s China, or Russia, or Ethiopia, not the U.S. If we talk about the threats posed by U.S.-based law enforcement at all, it’s the “a few bad apples” framing: we hypothesize about the occasional rogue cop who’d abuse an encryption backdoor in order to steal money or stalk his ex-wife. 

We don’t confront the truth: that law enforcement in the U.S. is rife with institutional rot. Law enforcement does not have a monopoly on protecting public safety. In fact, they’re often its biggest threat. When encryption advocates play along with framing law enforcement as “the good guys,” we’re agreeing to avert our eyes from the fact that one-third of all Americans killed by strangers are killed by police, the fact that police kill three Americans a day, and the staggering rates of domestic violence by cops. When actual horrific crimes get reported to them – the very crimes they say they need encryption backdoors to investigate – they turn a blind eye and slander the victims. Law enforcement is a scourge on Americans’ personal safety. The same is true of our privacy as well: as a brand-new report from Georgetown underscores, law enforcement agencies don’t hesitate to flout the law with impunity in the pursuit of their perfect surveillance state. 

U.S. law enforcement officers and agencies have shown us with their own actions that they don’t deserve any deference whatsoever in discussions about encryption policy. They aren’t entitled to any presumption of legitimacy. They are just another one of the threats that encryption protects people from. With the demise of Roe, we can no longer ignore that the same is also true of American laws. 

Of course, this has always been the case. “Crimes” are whatever a group of lawmakers at some point in time decide they are, and “criminals” are whoever law enforcement selectively decides to enforce those laws against: Black and brown people, undocumented immigrants, homeless people, sex workers, parents of trans kids, drug users. Now that we’re rolling back the clock on social progress by half a century, “criminals” once again will include people who have abortions (which, don’t you ever forget, does not just mean cisgender women) and those who provide them. Already, some deeply conservative states are plotting to make using contraception a crime again too. People in consensual same-sex relationships or interracial marriages may be next. All of these “crimes” are what should come to your mind whenever you hear somebody tout “fighting crime” as a reason to outlaw strong encryption. 

If you’re an encryption advocate in the United States, it’s time to stop pretending that encryption’s protection against oppressive governments is only about Uighurs in Xinjiang or gay people in Uganda. Americans also need strong encryption to protect ourselves from our own domestic governments and their abominable laws. The impending end of Roe has laid that bare. The threat is coming from inside the house. “China” was really a euphemism for “Alabama” this whole time. Encryption advocates in the U.S. just usually aren’t willing to say so. 

Why not? Because we’ve internalized that unless we treat American laws (and the people who enforce them) as unimpeachably legitimate and supreme, we won’t be treated as “serious.” We’ll be derided as “zealots” and “absolutists” who aren’t willing to have a “mature conversation” about “finding a balance” and “working together” to find a “middle-ground solution” on encryption. Our views and demands will be dismissed out of hand. We won’t get invited anymore to events put on by universities and foundations. We won’t get to talk in endless circles while sitting in fancy conference rooms far away from the jailhouses where Purvi Patel and Lizelle Herrera were held.  

The loss of Roe will unavoidably usher in a new phase of the encryption debate in the U.S., because Roe has been the law of the land throughout the entire time that strong encryption has been generally available. Roe was decided in 1973, and the landmark Diffie/Hellman paper “New Directions in Cryptography” didn’t come out until 1976. In the decades since, strong encryption went from a niche concern of the military and banks to being in widespread use by average consumers – while, simultaneously, the constitutional right to abortion was slowly and systematically chipped away. Nevertheless, Roe still stood. In all the years since 1976, encryption policy discussions about “balancing” privacy rights and criminal enforcement have never had to seriously grapple with what it means for abortion to be a crime rather than a right. That’s about to change.

Encryption advocates: It’s time to stop playing along with U.S. law enforcement’s poisonous expectation to exempt them from the threat model. The next time you’re at yet another fruitless roundtable event to “debate” encryption and some guy from the FBI complains that law enforcement must always be the star of the show, ask him to defend his position now that abortion will be against the law across much of the country. If he whines that that’s states’ laws, not federal, ask him what the FBI is going to do once a tide of investigators from those states start asking the FBI for help unlocking the phones of people being prosecuted for seeking, having, or performing an abortion. 

Tech companies: Do you want to help put your users behind bars by handing over the data you hold about them in response to legal demands by law enforcement? Do you not really care if they go to prison, but do care about the bad PR you’ll get if the public finds out about it? Then start planning now for what you’re going to do when – not if – those demands start coming in. Data minimization and end-to-end-encryption are more important than ever. And start worrying about internal access controls and insider threats, too: don’t assume that none of your employees would ever dream of quietly digging through users’ data looking for people they could dox to the police in anti-abortion jurisdictions. Protecting your users is already so hard, and it’s going to get a lot harder. Update your threat models.

Lawmakers: You can no longer be both pro-choice and anti-encryption. The treasure troves of Americans’ digital data are about to be weaponized against us by law enforcement to imprison people for having abortions, stillbirths, and miscarriages. If you believe that Americans are entitled to bodily autonomy and decisional privacy, if you believe that abortion is a right and not a crime, then I don’t want to hear you advocate ever again for giving law enforcement the ability to read everyone’s communications and unlock anyone’s phone. Whether or not you manage to codify Roe or to crack down on data brokers that sell information about abortion clinic visitors, you need to stop talking out of both sides of your mouth by claiming you care about privacy and abortion rights while also voting for bills like the EARN IT Act that would weaken encryption. The midterms are coming, and we are watching.

As I wrote recently for Brookings, encryption protects our privacy where the law falls short. Once Roe is overturned, the law will fall short for tens of millions of people. We no longer have the luxury of indulging in American exceptionalism. The enforcement of American laws isn’t a justification for weakening encryption. It’s an urgent argument in favor of strengthening it. 

Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory.

This post originally published to Stanford’s Cyberlaw blog.

Posted on BestNetTech - 11 March 2022 @ 01:29pm

Ignoring EARN IT’s Fourth Amendment Problem Won’t Make It Go Away

A month ago, the controversial EARN IT Act sailed through a markup hearing in the Senate Judiciary Committee. If enacted, the bill would strip the providers of online services of Section 230 immunity for their users’ child sexual exploitation offenses, meaning they could be subject to civil suit by private plaintiffs and criminal charges under state law. The idea is that providers aren’t presently doing enough to combat child sex abuse material (CSAM) on their services, and that exposing them to more liability would goad them into better behavior.

A handful of committee members—Senators Lee, Coons, Ossoff, Booker, and Padilla (plus Leahy, kind of)—voiced concerns that, as written, the bill would have negative consequences for encryption, privacy, security, free speech, and human rights, and would further harm already at-risk populations at home and abroad, such as domestic abuse survivors, LGBTQ individuals, and journalists. (Two weeks after the hearing, Russia’s invasion of Ukraine underscored these high stakes.)

Despite their reservations, all of those senators joined their colleagues to vote the bill unanimously out of committee without any changes. They expressed confidence, however, that these issues could be addressed before the bill reaches the Senate floor (if it ever does).

I don’t share their professed optimism, whether it’s sincere or not. Either they truly believe the bill will receive amendments that assuage their concerns… or they don’t, but they’ll keep voting “yes” anyway for fear of being branded a sympathizer to pedophiles (or worse, Big Tech). If they’re sincere, they’re setting themselves up for disappointment when EARN IT’s sponsors refuse to fix its problems. If not, their failure to stand up against the bill may come back to haunt them later if, perversely, its passage helps set pedophiles free.

EARN IT’s Problems Aren’t an Accident, They’re the Point

The bill’s issues regarding encryption and privacy (to say nothing of all the other stuff) are unlikely to be fixed, because to its sponsors, they’re not bugs, they’re features.

As I explained when EARN IT was reintroduced, the current bill’s encryption-related language (which originated in the House) is less protective of encryption than the version (drafted by Leahy) that was in the bill the last time it passed out of this committee in mid-2020. And even that was skim milk, not full-fat.

Will this language be strengthened? Doubtful. On the eve of markup, and again during the hearing, bill sponsor Senator Richard Blumenthal finally dispensed with his years-long pretense that his bill is not about punishing providers that offer strong encryption to their users. He made it clear that he will not agree to an amendment that truly protects encryption, because, in his own words, he doesn’t want encryption to be a “get out of jail free card” for providers. (This is the same senator who, at the same time he was pushing EARN IT, got mad at Zoom for not using the strong encryption he wants companies to get punished for using.) Blumenthal’s recalcitrance bodes poorly for the other senators’ encryption concerns.

Their worries about privacy will fare no better: Blumenthal wants more surveillance by online service providers of their users’ files and communications. That’s the whole point of his bill, after all: hey, that CSAM isn’t gonna find itself. In the hearing, Blumenthal didn’t shy away from admitting that his goal is to get providers to start scanning all user content for CSAM if they aren’t doing so already. (Side note: “Scan all the things” is the constant drumbeat of surveillance-happy policymakers, but as my newly-published article describes, it’s not the only way to detect abuse online, or even the most effective in most contexts.) This objective had already been emphasized in a “myths vs. facts” document that was circulated with the bill language.

(Perplexingly, Blumenthal, along with several other EARN IT sponsors, endorsed a bill in 2015 that would have strengthened privacy protection for Americans’ online communications instead of undermining it as EARN IT would. Between that and his anger at Zoom, it seems the gentleman from Connecticut is happily unencumbered by the proverbial hobgoblin of little minds.)

In apparent reference to that “myths vs. facts” document, committee chair Dick Durbin (who’s also a bill co-sponsor) asked whether the bill requires providers to proactively inspect content for CSAM. Blumenthal replied that while there’s no “express” duty, there’s a “moral responsibility.” (Note the failure to deny that there’s an implied duty.) When the bill’s other lead author, Senator Lindsey Graham, suggested there should be an affirmative duty to inspect, Blumenthal proposed to work with Graham and Durbin on a future bill that would expressly contain such a mandate.

Those remarks were a mistake.

The Problem with Admitting on the Record to Wanting a CSAM Scanning Mandate

As I’ve explained before (and as Professor Jeff Kosseff explained far better in a brisk 2021 paper), forcing tech companies to scan for CSAM would upset the delicate arrangement that presently enables online service providers to find and report CSAM without running afoul of Americans’ legal privacy rights. Blumenthal and Graham are dissatisfied with the current setup’s shortcomings, and they claim Section 230 is to blame. But it’s not really Section 230 they’re mad at. It’s the Fourth Amendment.

Fourth Amendment government agency doctrine

The Fourth Amendment prohibits unreasonable searches and seizures by the government. Like the rest of the Bill of Rights, the Fourth Amendment doesn’t apply to private entities—except where the private entity gets treated like a government actor in certain circumstances. Here’s how that happens: The government may not make a private actor do a search the government could not lawfully do itself. (Otherwise, the Fourth Amendment wouldn’t mean much, because the government could just do an end-run around it by dragooning private citizens.) When a private entity conducts a search because the government wants it to, not primarily on its own initiative, then the otherwise-private entity becomes an agent of the government with respect to the search. (This is a simplistic summary of “government agent” jurisprudence; for details, see the Kosseff paper.) And government searches typically require a warrant to be reasonable. Without one, whatever evidence the search turns up can be suppressed in court under the so-called exclusionary rule because it was obtained unconstitutionally. If that evidence led to additional evidence, that’ll be excluded too, because it’s “the fruit of the poisonous tree.”

Fourth Amendment government agency doctrine is why lawmakers and law enforcement must tread very carefully when it comes to CSAM scanning online. Many online service providers already choose voluntarily to scan all (unencrypted) content uploaded to their services, using tools such as PhotoDNA. But it must be a voluntary choice, not one induced by government pressure. (Hence the disclaimer in the federal law requiring providers to report CSAM on their services that they know about, which makes clear that they do not have to go looking for it.) If the provider counts as a government agent, then its CSAM scans constitute warrantless mass surveillance. Whatever CSAM they find could get thrown out in court should a user thus ensnared raise a Fourth Amendment challenge during a resulting prosecution. But that’s often a key piece of evidence in CSAM prosecutions; without it, it’s harder to convict the accused. In short, government pressure to scan for CSAM risks letting offenders off the hook.  
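
(For readers curious what “scanning” actually involves, here is a deliberately simplified sketch of hash-list matching at upload time. It is a stand-in rather than PhotoDNA itself, which uses proprietary perceptual hashes that survive resizing and re-compression; the plain SHA-256 digest and the empty hash set below are placeholders for illustration only.)

```python
# Simplified illustration of provider-side hash matching. This is not
# PhotoDNA, which uses perceptual hashing rather than a SHA-256 digest.
import hashlib

# In practice, providers receive hash sets of known abuse imagery from
# clearinghouses such as NCMEC. This placeholder set is empty.
KNOWN_ABUSE_HASHES: set[str] = set()

def matches_known_abuse(file_bytes: bytes) -> bool:
    """Fingerprint an uploaded file and compare it against the hash list."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_ABUSE_HASHES

# Whether a provider runs a check like this voluntarily, or because the
# government leaned on it, is what the government-agent analysis turns on.
```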

EARN IT has a “government agent” problem – and the Senate Judiciary Committee knows it

This brittle state of affairs is what’s at stake when lawmakers try to pressure private tech companies into looking harder for CSAM—the very thing Blumenthal and Graham are openly doing. As NetChoice’s Kir Nuthi explained in Slate after the markup hearing, Blumenthal’s phrasing is apropos: EARN IT is indeed a “get out of jail free card”… for CSAM offenders.

Lawmakers have been warned for years that EARN IT could end up backfiring in this way. Yet the senators on the Judiciary Committee unanimously decided to entrust Senators Blumenthal and Graham with fixing the problem. Fat chance: Blumenthal and Graham had just said during the hearing that they favor an affirmative monitoring requirement.

I doubt that’s the answer Chairman Durbin was fishing for when he asked them if their bill requires proactive content monitoring. As a fellow sponsor of the bill, he was presumably angling to get remarks on the record that EARN IT isn’t pushing providers to scan. But they said what they said – remarks that (along with their “myths vs. facts” document) will be attached as exhibits to countless motions to suppress if EARN IT passes. Durbin’s question showed that the committee is well aware of the bill’s Fourth Amendment problem. And Graham and Blumenthal’s responses showed they have no intention of fixing it.

And yet, despite that clear signal not to expect any meaningful amendments, the entire Judiciary Committee voted unanimously to advance the bill. In doing so, the committee members indicated that they’re willing to roll the dice in actual CSAM cases. Should the bill pass as-is, every CSAM defendant who walks free will have Congress to thank for getting them off the hook.

Senators Are Putting the Fate of Their Bill in the Hands of the Very People They’re Trying to Compel

By consciously leaving the bill vulnerable to the Fourth Amendment government agency argument, lawmakers will be putting the power to decide their bill’s fate into the hands of the very people whose behavior the bill is targeting. I’m not sure Congress has thought about who those people are, or how they might react if pulled into litigation over the “government agent” question.

As noted above, the legality of CSAM scanning comes down to voluntariness. So far, major online service providers (many of whose legal departments are chock-full of former federal prosecutors) all choose to scan. When defendants move to suppress evidence found via a CSAM scan, companies such as Microsoft, Yahoo, Google, Facebook, Dropbox, and AOL routinely have their personnel file declarations supporting the government, not the defendant. Courts tend to give a lot of weight to those declarations attesting that the providers scan for their own business reasons. These providers have an interest in not arguing that they’re being forced to scan, because they don’t want to be declared government agents: if they were, they would have to shut off their scanning programs, which help them find unwelcome content. The providers, not just the government, have an interest in upholding the narrative that has sustained the legality of their CSAM scanning regimes for many years: that all of this is totally, 100% voluntary. (Kosseff’s explanation is especially barbed on this point.)

The EARN IT bill would explode that polite fiction, even in its current version, whose supposedly nonbinding “best practices” tell providers what they really ought to do to avoid liability. (In actuality, those best practices are hardly voluntary, as cogently explained here.) The bill would undermine the very assertion—”we want to scan, we choose to scan, we are not being forced to scan”—that has been key to persuading courts to deny so many CSAM defendants’ suppression motions.

Should EARN IT pass, we may start seeing provider affidavits that tell a different story: that the provider never chose to scan before, and that EARN IT is the only reason it scans now. Take, for example, Signal and Telegram, which offer end-to-end encrypted messaging (by default in Signal’s case, and via opt-in “secret chats” in Telegram’s); Telegram also offers non-E2EE “channels” for broadcasting public messages. Both apps’ founders are ideologically opposed to censorship and surveillance (and both have historically taken a light-touch approach to trust and safety). Or consider right-wing “alt” social networks Parler and Gettr: as this run-down of the EARN IT markup hearing explains, both have refused to use PhotoDNA because they just don’t want to know about CSAM on their apps.

Put simply, the providers of online services don’t all fall into the “lawful good” box on the D&D alignment chart. That’s exactly why Blumenthal and Graham feel the need to regulate them with EARN IT: to get them to do more than they do now. And yet the senators are implicitly assuming that if the bill passes, not only would everyone knuckle under and start scanning for CSAM (as opposed to, say, leaving the U.S. market, as Signal has threatened to do), but they’d also behave themselves if called into court on any resulting motions to suppress.

But if EARN IT is the thing that finally pushes Telegram or Parler to start scanning for fear of potential criminal and civil liability, why would their representatives say otherwise in court? Why play along and pretend the government isn’t forcing them to do searches they don’t want to do? What incentive would they have to pretend to a court that they freely chose to scan (which would, after all, border on perjury), especially knowing that if they tell the truth, they won’t have to keep scanning anymore? The fate of those CSAM prosecutions, and therefore of EARN IT itself, could very well hinge on what the representatives of such apps—apps that are known for not playing nicely with governments—decide they want to say in a sworn declaration. Congress will have placed all the power in the hands of the very providers it ostensibly deplores.

If, as its sponsors have candidly admitted, the goal of EARN IT is to goad more providers to start scanning, then the success of that pressure campaign (if any) will, paradoxically, also be the law’s downfall.

True, providers that were already scanning can probably keep doing so, and they have a solid argument that EARN IT didn’t suddenly convert them into government agents. The EARN IT sponsors’ “myths vs. facts” document claims that “little will change under this bill” for providers like that. Assuming that’s true (and I’m dubious), EARN IT doesn’t move the needle as to those providers; if its one and only goal is to bring about universal CSAM scanning, it is, at best, superfluous for them.

But providers that weren’t already scanning are the ones most likely to be deemed government agents in Fourth Amendment challenges. So the CSAM cases where defendants walk free will be precisely the ones where the scans were conducted by the only providers whose behavior EARN IT actually changed. The net effect would be that fewer CSAM offenders are brought to justice, because Congress will have shattered the “voluntariness” premise on which so many CSAM convictions currently rest. The law would make the problem it aims to solve worse, not better.

Mess? What Mess?

If Congress punts on EARN IT’s Fourth Amendment problem and the law ends up backfiring, will they have the intestinal fortitude to admit it, confront the harm they’ve done, and repeal EARN IT? Or will they deny there even was any harm, like they’ve done with the utterly disastrous FOSTA law that amended Section 230 in 2018—a law that multiple senators actually referred to as a victory during the markup hearing?

When it’s federal judges publishing opinions saying “EARN IT is to blame for this outcome,” not just marginalized voices like the sex workers harmed by FOSTA, whom those in power can dismiss out of hand, will Congress listen then? If EARN IT prompts court rulings highlighting Congress’s role in helping people who victimize children evade accountability, will Congress own up to its mistake?

With EARN IT’s risks just as foreseeable now as FOSTA’s were in 2018, just how badly does Congress want to pass the EARN IT Act? What’s it willing to sacrifice—and whom?

* * *

Postscript: The “Binary Search” and “No Fourth Amendment Rights Online Anyway” Theories

In the past, the government has made some arguments in CSAM cases that we might see it make again if EARN IT passes and defendants move to suppress based on the “government agency” theory. These aren’t slam-dunk winners, so it would be very risky for Congress to bet that one of these arguments will prevail in court every single time. (Remember, children are the bargaining chips.)

One argument contends that a scanning program that discloses nothing more than the presence or absence of CSAM might not count as a “search” at all for Fourth Amendment purposes, because CSAM is contraband and the courts have ruled that there’s no protectable privacy interest in contraband. In that case, providers’ CSAM scans would be permissible even if they were compelled rather than voluntary.

But opinions differ as to the viability of this “binary search” theory. A 2021 student note on EARN IT gives it more credence than do two student notes from 2018. Kosseff’s paper consigns it to a skeptical footnote. Even top Fourth Amendment scholar Orin Kerr has admitted that it’s not certain a court would endorse this theory to uphold a provider’s warrantless CSAM scanning at government behest. (If it were certain, why does that aforementioned disclaimer in the CSAM reporting law keep coming up whenever courts rule that online service providers aren’t government agents?) While the idea has been raised occasionally in CSAM cases, it hasn’t caught on: the courts in those cases didn’t rely on it, and neither did the government.

In an even more extreme argument, discussed by Elizabeth Banker in Medium, the government has occasionally asserted in court that Americans don’t even have Fourth Amendment privacy rights in our online files and communications: because we hand them over to third-party providers, we can’t reasonably expect them to stay private. (A related argument claims there’s no reasonable expectation of privacy because, in agreeing to a provider’s terms of service, users consent to the provider’s search of their account contents—even when the provider is acting as a government agent. But this argument only works if the language of the TOS backs it up. Apropos of nothing, here’s a link to Telegram’s TOS.)

“You have no Fourth Amendment rights online anyway” was Blumenthal’s previous retort to concerns about an earlier iteration of EARN IT. As I said then, this frightening position goes against the grain of the Supreme Court’s contemporary digital-era jurisprudence. Nor has it been universally adopted among the lower courts: while Banker cites some cases embracing it, others have vigorously rejected it. Even the Wolfenbarger opinion Banker cites, which agreed with the government’s argument, was later vacated and replaced with an opinion that avoided addressing the issue. In short, like the “binary search” theory, this is hardly a slam-dunk argument—but per Banker, we might see the government press it more often if EARN IT passes.

It’s still shocking to me that EARN IT’s authors endorsed this view. I wonder how many other members of Congress are willing to stand up in front of their constituents and tell them they can’t reasonably expect their emails, DMs, cloud storage, etc. to be private. If they really believe that, the proper response isn’t to use it to justify EARN IT. Rather (as Banker notes), it’s for them to strengthen our federal electronic communications privacy statutes.

That’s just what the bill I said Blumenthal supported back in 2015 would have done. It’s passed again and again in the House (once unanimously, 419-0), only to grind to a halt in the Senate. If the Senate has repeatedly refused to pass an online privacy bill that’s been incredibly popular on the House side, why should we expect it will have much appetite for fixing EARN IT’s constitutional pitfalls before it reaches the Senate floor?

Maybe it will never get there, though: floor time is precious, this bill is controversial, and there are other things (COVID, inflation, budget, Russia/Ukraine…) on Congress’s plate. For senators who dislike the bill but don’t want to look soft on crimes against children, not having to make a decision about how to vote might be the best outcome they can hope for. Stay tuned.

Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory. This post originally appeared on the Stanford CIS blog.

Posted on BestNetTech - 19 January 2022 @ 10:45am

The UK Has A Voyeuristic New Propaganda Campaign Against Encryption

Over the weekend, Rolling Stone reported on a new propaganda campaign the United Kingdom’s government is rolling out to try to turn public opinion against end-to-end encryption (E2EE). It’s the latest salvo in the UK’s decades-long war against encryption, which in the past has relied on censorious statements from the Home Office and legislation such as the Snooper’s Charter rather than ad campaigns. According to the report, the plans for the PR blitz (which is funded by UK taxpayers’ money) include “a striking stunt — placing an adult and child (both actors) in a glass box, with the adult looking ‘knowingly’ at the child as the glass fades to black.”

This stunt, devised by ad agency M&C Saatchi, is remarkably similar to one of Leopold Bloom’s advertising ideas in James Joyce’s Ulysses: “…a transparent show cart with two smart girls sitting inside writing letters, copybooks, envelopes, blotting paper. I bet that would have caught on. Everyone dying to see what she’s writing. … Curiosity.” (U154) 

A century ago, Bloom the ad man cannily intuited how to achieve an agenda by manipulating humans’ nosy nature. And now the UK government — possibly the nosiest humans on earth — is betting it can do the same.

The evil genius of this bit of propaganda is that it works on two levels. The link between them turns on an observation from my Stanford Internet Observatory colleague David Thiel: an opaque box with people inside is what’s otherwise known as “a house.”

On one level, the opaque room represents encrypted messaging. The audience’s inability to see what happens inside is meant to provoke sympathy for the child, who, it’s leeringly implied, is about to be victimized by the adult. This is supposed to turn the audience’s opinion against encryption: Wouldn’t it be better if someone could see in?

But focusing on this shallow symbolism ignores what’s right there on the surface. On a different level, the opaque room isn’t a metaphor at all. It is just what it seems to be: an opaque room — that is, a house. 

A home. 

The audience isn’t meant to sympathize with the people inside the home, people just like them, who can shield themselves from prying eyes. Rather, they’re meant to sympathize with the would-be watcher: the UK government. On this level, it’s the frustrated voyeurs who are the victims. Their desire to watch what happens inside has been stymied by that demonic technology known as “walls.” Wouldn’t it be better if someone could see in?

To be sure, the glass room is indeed the unsubtle allegory it appears to be: one meant to build public support for banning encryption, which allows people to have private spaces in the virtual world. E2EE protects children’s and adults’ communications alike, and by focusing on adult/child interactions, the stunt hides the fact that removing E2EE from children’s conversations necessarily means removing it from adults’ conversations too. So on one level, it normalizes the idea that adults aren’t entitled to have private conversations online.

But the campaign’s more insidious message is literally hiding in plain sight. By portraying the transparent room as desirable and the opaque room as a sinister deviation from the norm, the government is peddling the idea that it is suspect for people to have private spaces of their own in the physical world.

The goal of this propaganda campaign is to turn the UK public’s opinion against their own privacy, not just in their electronic conversations, but even in the home, where the right to privacy is strongest and most ancient. Were the Home Office to say that overtly, many people would immediately reject it as outrageous, and rightly so. But through this campaign, the UK government can get its citizens to come up with that idea all on their own. The hook for this hard-to-swallow notion is the more readily accepted premise that children should have less privacy and be under more surveillance than adults. But if it’s adults who harm children, then the conclusion follows naturally: adults had better be watched as well. Even inside their own homes.

This isn’t a new idea; it’s a longstanding fantasy of the British government, given voice over the centuries by authors from Bentham to Orwell. Heck, general warrants were among the grievances that sparked the American Revolution against British rule. But the new twist of hiring an ad agency to sell people their own subjugation, using their own tax money, is just insulting. Here’s hoping the Home Office’s anti-privacy ulterior motive will be like that glass box: people will see right through it.

Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory.
