Whose job is it to provide consequences when someone breaks the law?
It seems like this issue shouldn’t be that complicated. We expect law enforcement to deal with it when someone breaks the law. Not private individuals or organizations. Because that’s vigilantism.
Yet, on the internet, over and over again, we keep seeing people set the expectation that tech companies need to provide the consequences, even when those who actually violate the law already face legal consequences.
None of this is to say that tech companies shouldn’t be focused on trying to minimize the misuse of their products. They have trust & safety teams for a reason. They know that if they don’t, they will face all sorts of reasonable backlash from advertisers or users leaving, due to negative media coverage and more. But demanding that they face legal consequences, while ignoring the legal consequences facing the actual users who violated the law… is weird.
For years, one of the cases that we kept hearing about as an example of why Section 230 was bad and needed to be done away with was Herrick v. Grindr. In that case, a person who was stalked and harassed sued Grindr for supposedly enabling such harassment and stalking.
What’s left out of the discussion is that the guy who stalked Herrick was arrested and ended up pleading guilty to criminal contempt, identity theft, falsely reporting an incident, and stalking. He was then sentenced to over a year in prison. Indeed, it appears he was arrested a few weeks before the lawsuit was filed against Grindr.
So, someone broke the law and faced the legal consequences. Yet some people are still much more focused on blaming the tech companies for not somehow “dealing” with these situations. Hell, much of the story around the Herrick case was about how there were no other remedies that he could find, even as the person who wronged him was, for good reason, in prison.
We’re now seeing a similar sort of thing with a new case you might have heard about recently. A few weeks ago, a high school athletic director, Dazhon Darien, was arrested in Baltimore after using some AI tools to mimic the principal at Pikesville High School, Eric Eiswert. Now Darien may need to use his AI tools to conjure up a lawyer.
A Maryland high school athletic director is facing criminal charges after police say he used artificial intelligence to duplicate the voice of Pikesville High School Principal Eric Eiswert, leading the community to believe Eiswert said racist and antisemitic things about teachers and students.
“We now have conclusive evidence that the recording was not authentic,” Baltimore County Police Chief Robert McCullough told reporters during a news conference Thursday. “It’s been determined the recording was generated through the use of artificial intelligence technology.”
Dazhon Darien, 31, was arrested Thursday on charges of stalking, theft, disruption of school operations and retaliation against a witness after a monthslong investigation from the Baltimore County Police Department.
This received plenty of attention as an example of the kind of thing people are worried about regarding “deepfakes” and whatnot: where someone is accused of doing something they didn’t do, based on “proof” faked via AI tools.
However, every time this comes up, the person seems to be caught. And, in this case, they’ve been arrested and could face some pretty serious consequences including prison time and a conviction on their record.
And yet, in that very same article, NPR quotes professor Hany Farid complaining about the lack of consequences.
After following this story, Farid is left with the question: “What is going to be the consequence of this?”
[….]
Farid said there remains, generally, a lackluster response from regulators reluctant to put checks and balances on tech companies that develop these tools or to establish laws that properly punish wrongdoers and protect people.
“I don’t understand at what point we’re going to wake up as a country and say, like, why are we allowing this? Where are our regulators?”
I guess “getting arrested and facing being sentenced to prison” aren’t consequences? I mean, sure, maybe it doesn’t have the same ring to it as “big tech bad!!” but, really, how could anyone say with a straight face that there are no consequences here? How could anyone in the media print that without noting what the focus of the very story is?
This already breaks the law and is a criminal matter, and we let law enforcement handle those. If there were no consequences, and we were allowing this as a society, Darien would not have been arrested and would not be facing a trial next month.
I understand that there’s anger from some corners that this happened in the first place, but this is the nature of society. Some people break the law, and we deal with them accordingly. Wishing to live in a world in which no one could ever break the law, or in which companies were somehow magically responsible for guaranteeing that no one would ever misuse their products, is not a good outcome. It would lead to a horrific mess of mostly useless tools, ruined by the small group of people who might misuse them.
We have a system to deal with criminals. We can use it. We shouldn’t be deputizing tech companies, which are problematic enough already, to also take on Minority Report-style “pre-crime” policing.
I understand that this is kinda Farid’s thing. Last year we highlighted him blaming Apple for CSAM online. Farid constantly wants to blame tech for the fact that some people will misuse the tech. And, I guess that gets him quoted in the media, but it’s pretty silly and disconnected from reality.
Yes, tech companies can put in place some safeguards, but people will always find some ways around them. If we’re talking about criminal behavior, the way to deal with it is through the criminal justice system. Not magically making tech companies go into excess surveillance mode to make sure no one is ever bad.
His heart is probably in the right place. That’s the best thing I can say about Berkeley professor Dr. Hany Farid, who has spent the last couple of years being wrong about CSAM (child sexual abuse material) detection.
That he’s been wrong has done little to shut him up. But he appears to deeply feel he’s right. And that’s why I’m convinced his heart is in the right place: right up there in the chest cavity where most hearts are located.
Physical heart location aside, he’s pretty much always wrong. He’s always happy to offer his (non-expert) opinion and deploy presentations that preach to the converted. He’s sure the CSAM problem is the fault of service providers, rather than those who create and upload CSAM.
So, he’s spent a considerable amount of time going after Apple. Apple, at one point, considered client-side scanning to be an acceptable solution to this problem, even if it meant making Apple less secure than its competitors. Shortly thereafter — following plenty of unified criticism — Apple decided it was better off protecting millions of innocent customers and users, rather than sacrificing them on the altar of “for the children” just because it might make it easier for the government to locate and identify the extremely small percentage of users engaged in illicit activity.
Why are there so many images of child abuse stored on iCloud? Because Apple allows it
There’s a difference between “allows” and “this kind of thing happens.” That’s the difference Farid hopes to obscure. No matter what platform is involved, a certain number of users will attempt to use it to share illicit content. That Apple’s cloud service hosts a minimal amount of CSAM says nothing about Apple’s internal attitude towards CSAM, much less about its so-called “allowing” of this content to be hosted and shared via its services.
But Farid insists Apple is complicit in the sharing of CSAM, something he attempts to prove by highlighting recent convictions aided by (wait for it) evidence obtained from Apple itself.
Earlier this year, a man in Washington state was sentenced to 22 years in federal prison for sexually abusing his girlfriend’s 7-year-old stepdaughter. As part of their investigation, authorities also discovered the man had been storing known images and videos of child sexual abuse on his Apple iCloud account for four years.
Why was this man able to maintain a collection of illegal and sexually exploitative content of children for so long? Because Apple wasn’t looking for it.
The first paragraph contains facts. The second paragraph contains conjecture. The third paragraph of this op-ed again mixes both, presenting both conjecture and a secured conviction as evidence of Apple’s unwillingness to police iCloud for CSAM.
What goes ignored is the fact that the evidence used to secure these convictions was derived from iCloud accounts. If Apple indeed has no desire to rid the world of CSAM, it seems it might have put up more of a fight when asked to hand over this content.
What this does show is something that runs contrary to Farid’s narrative: Apple is essential in securing convictions of CSAM producers and distributors. The content stored in these iCloud accounts was essential to the success of these prosecutions. If Apple was truly more interested in aiding and abetting in the spread of CSAM, it would have done more to prevent prosecutors from accessing this evidence.
And that’s the problem with disingenuous arguments like the ones Farid is making. Farid claims Apple isn’t doing enough to stymie CSAM distribution. But then he tries to back his claims by detailing all the times Apple has been instrumental in securing convictions of child abusers.
Not content with ignoring this fatal flaw in his argument, Farid moves on to make arguably worse arguments using his version of known facts.
Back in the summer of 2021, Apple announced a plan to use innovative methods to specifically identify and report known images and videos of the rape and molestation of a child — without compromising the privacy that its billions of users expect.
This is a huge misrepresentation of Apple’s client-side scanning plan. It definitely would “compromise the privacy that its billions of users expect.” Apple’s proposed scanning of all content on user devices that might be hosted (however temporarily) by its iCloud service very definitely compromised their privacy. Worse, it compromised their security by introducing a new attack vector for malicious governments and malicious hackers that could have allowed anyone to access content phone users (incorrectly, in this case) assumed was only accessible to them.
That misrepresentation is followed by another false assertion from Farid: that Apple “quietly” abandoned the plan.
Apple did not “quietly” abandon this plan. It publicly announced this reversal, something that led almost immediately to a number of government figures, talking heads, and special interest groups publicly expressing their displeasure with this move by Apple. It was anything but “quiet.”
Adding to this wealth of misinformation are Farid’s unsupported claims about hash-matching, a technique that has been repeatedly shown to be easily circumvented and, even worse, easily manipulated to create false positives capable of causing irreparable damage to innocent people.
Detecting known images is a tried and true way many companies, including Apple’s competitors, have detected this content for years. Apple could deploy this same technique to find child sexual abuse images and videos on its platforms.
Translation: A parent innocently taking pictures of their infant in the bathtub will not be reported to law enforcement because those images have not previously been determined to be illicit. This critical distinction ensures that innocent users’ privacy remains intact while empowering Apple to identify and report the presence of known child sexual abuse images and videos on iCloud.
While it’s true hash-matching works to a certain extent, pretending innocent people won’t be flagged and/or the system can’t be easily defeated is ridiculous. But Farid has an ax to grind, and he’s obviously not going to be deterred by the reams of evidence that contradict what he obviously considers to be foregone conclusions.
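To make that concrete, here’s a minimal sketch of how perceptual hash-matching works, using a simple “average hash.” This is an illustration only, not how PhotoDNA or Apple’s proposed system actually works (those use far more robust, proprietary hashes); it assumes the Pillow imaging library is available, and the file names are hypothetical. What it shows is that matching is fuzzy by design: the same tolerance that lets it catch re-encoded copies of known images is what makes both evasion and false positives possible.

```python
# Minimal perceptual hashing sketch (an "average hash"), for illustration only.
# Assumes the Pillow library is installed; the file names below are hypothetical.
from PIL import Image


def average_hash(path, hash_size=8):
    """Downscale to a tiny grayscale image, then set one bit per pixel
    depending on whether that pixel is brighter than the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(h1, h2):
    """Count how many bits differ between two hashes."""
    return bin(h1 ^ h2).count("1")


if __name__ == "__main__":
    known = average_hash("known_image.jpg")         # hash from a known-image list
    candidate = average_hash("uploaded_image.jpg")  # hash of a newly uploaded file
    # A "match" is a distance below some threshold, not exact equality.
    # Crops, filters, or deliberate perturbations can push a known image past
    # the threshold (evasion), while unrelated images can occasionally land
    # inside it (a false positive).
    if hamming_distance(known, candidate) <= 5:
        print("Possible match against the known-image list")
```

Production systems are far better at surviving resizes and re-encodes than this toy, but the fundamental trade-off is the same: tighten the threshold and you miss altered copies; loosen it and you flag innocent images.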
The ultimate question is this: is it better to be wrong but loud about stuff? Or is it better to be right, even if it means some of the worst people in the world will escape immediate detection by governments or service providers?
Or, if those aren’t the questions you like, consider this: is it more likely that Apple desires to be a host of illicit images, or is it more likely that Apple isn’t willing to intrude on the privacy of its users because it wishes to earn the trust of non-criminal users, who make up the largest percentage of Apple customers?
People like Professor Farid aren’t willing to consider the most likely explanation. Instead, they insist — without evidence — big tech companies are willfully ignoring illegal activity so they can increase their profits. That’s just stupid. Companies that ignore illegal activity may enjoy brief bumps in profit margin but the long-term profitability of relying (as Farid insists they are) on illegal activity is something no tech company, no matter how large, would consider to be a solid business model.
Hany Farid is a computer science professor at Berkeley who insists that his students should all delete Facebook and YouTube because they often recommend things you might like (the horror, the horror).
Farid once did something quite useful, in that he helped Microsoft develop PhotoDNA, a tool that has been used to help websites find and stop child sexual abuse material (CSAM) and report it to NCMEC. Unfortunately, though, he now seems to view much of the world through that lens. A few years back he insisted that we could also tackle terrorism videos with a PhotoDNA-style tool, despite the fact that such videos are not at all the same as the CSAM content PhotoDNA can identify, which carries strict liability under the law. Terrorism videos, on the other hand, are often not actually illegal, and can provide useful information, including evidence of war crimes.
Anyway, over the years, his views have tended towards what appears to be hating the entire internet because there are some people who use the internet for bad things. He’s become a vocal supporter of the EARN IT Act, despite its many, many problems. Indeed, he’s so committed to it that he appeared at a “Congressional briefing” on EARN IT organized by NCOSE, the group of religious fundamentalist prudes formerly known as “Morality in Media” who believe that all pornography should be illegal because nekked people scare them. NCOSE has been a driving force behind both FOSTA and EARN IT, and they celebrate how FOSTA has made life more difficult for sex workers. At some point, when you’re appearing on behalf of NCOSE, you probably want to examine some of the choices that got you there.
Last week, Farid took to the pages of Gizmodo to accuse me and professor Eric Goldman of “fearmongering” on AB 2273, the California “Age Appropriate Design Code” which he insists is a perfectly fine law that won’t cause any problems at all. California Governor Gavin Newsom is still expected to sign 2273 into law, perhaps sometime this week, even though that would be a huge mistake.
Before I get into some of the many problems with Farid’s article, I’ll just note that both Goldman and I have gone through the bill and explained in great detail the many problems with it, and even highlighted some fairly straightforward ways that the California legislature could have, but chose not to, limit many of its most problematic aspects (though probably not fix them, since the core of the bill makes it unfixable). Farid’s piece does not cite anything in the law (it literally quotes not a single line in the bill) and makes a bunch of blanket statements without much willingness to back them up (and where it does back up the statements, it does so badly). Instead, he accuses Goldman of not substantiating his arguments, which is hilarious.
The article starts off with his “evidence” that the internet is bad for kids.
Leaders have rightly taken notice of the growing mental health crisis among young people. Surgeon General Vivek Murthy has called out social media’s role in the crisis, and, earlier this year, President Biden addressed these concerns in his State of the Union address.
Of course, saying that “there is no longer any question” about the “nature of the harm to children” displays a profound sense of hubris and ignorance. There are in fact many, many questions about the actual harm. As we noted, just recently, there was a big effort to sort through all of the research on the “harms” associated with social media… and it basically came up empty. That’s not to say there’s no harm, because I don’t think anyone believes that. But the actual research and actual data (which Hany apparently doesn’t want to talk about) is incredibly inconclusive.
For each study claiming one thing, there are equally compelling studies claiming the opposite. To claim that “there is no longer any question” is, empirically, false. It is also fearmongering, the very thing Farid accuses me and Prof. Goldman of doing.
Just for fun, let’s look at each of the studies or stories Farid points to in the opening paragraphs of his article. The study about “body image issues” that was the centerpiece of the WSJ’s “Facebook Files” reporting left out an awful lot of context. The actual study was, fundamentally, an attempt by Meta to better understand these issues and look for ways to mitigate the negatives (which, you know, seems like a good thing, and actually the kind of thing that the AADC would require). But, more importantly, the very survey that is highlighted around body image looked at 12 different issues regarding mental health, of which “body image” was just one, and notably it was the only issue out of the 12 where teen girls said Instagram made them feel worse, not better (teen boys felt better, not worse, on all 12). The slide was headlined with “but, we make body image issues worse for 1 in 3 teen girls” because that was the only one of the categories where that was true.
And, notably, even as Farid claims that it’s “no longer a question” that Facebook “heightened body image issues,” it also made many of them feel better about body image. And, again, many more felt better on every other issue, including eating, loneliness, anxiety, and family stress. That doesn’t sound quite as damning when you put it that way.
The “TikTok challenges” thing is just stupid, and it’s kind of embarrassing. First of all, it’s been shown that a bunch of the moral panics about “TikTok challenges” have actually been about parents freaking out over challenges that didn’t exist. Even the few cases where someone doing a “TikTok challenge” has come to harm — including the one Farid links to above — involved challenges that kids have done for decades, including before the internet. To magically blame that on the internet is the height of ridiculousness.
I mean, here’s the CDC warning about it in 2008, where they note it goes back to at least 1995 (with some suggestion that it might actually go back decades earlier).
But, yeah, sure, it’s TikTok that’s to blame for it.
The link on the “sexualization of children on YouTube” appears to show that there have been pedophiles trying to game YouTube comments through a variety of sneaky moves, which is something that YouTube has been trying to fight. But it’s not exactly an example of something that is widespread or mainstream.
As for the last two, fearmongering and moral panics by politicians are kind of standard and hardly proof of anything. Again, the actual data is conflicting and inconclusive. I’m almost surprised that Farid didn’t also toss in claims about suicide, but maybe even he has read the research suggesting you can’t actually blame youth suicide on social media.
So, already we’re off to a bad start, full of questionable fearmongering, moral panics, and cherry-picking of data.
From there, he gives his full-throated support to the Age Appropriate Design Code, and notes that “nine-in-ten California voters” say they support the bill. But, again, that’s meaningless. I’m surprised it’s not 10-in-10. Because if you ask people “do you want the internet to be safe for children” most will say yes. But no one answering this survey actually understands what this bill does.
Then we get to his criticisms of myself and Professor Goldman:
In a piece published by Capitol Weekly on August 18, for example, Eric Goldman incorrectly claims that the AADC will require mandatory age verification on the internet. The following week, Mike Masnick made the bizarre and unsubstantiated claim in BestNetTech that facial scans will be required to navigate to any website.
So, let’s deal with his false claim about me first. He says that I made the “bizarre and unsubstantiated claim” that facial scans will be required. But, that’s wrong. As anyone who actually read the article can see quite clearly, it’s what the trade association for age verification providers told me. The quote literally came from the very companies who provide age verification. So, the only “bizarre and unsubstantiated” claims here are from Farid.
As for Goldman’s claims, unlike Farid, Goldman actually supports them with an explanation using the language from the bill. AB 2273 flat out says that “a business that provides an online service, product, or feature likely to be accessed by children shall… estimate the age of child users with a reasonable level of certainty.” I’ve talked to probably a half a dozen actual privacy lawyers about this, and basically all of them say that they would recommend to clients who wish to abide by this that they invest in some sort of age verification technology. Because, otherwise, how would they show that they had achieved the “reasonable level of certainty” required by the law?
Anyone who’s ever paid attention to how lawsuits around these kinds of laws play out knows that this will lead to lawsuits in which the Attorney General of California will insist that websites have not complied unless they’ve implemented age verification technology. That’s because sites like Facebook will implement that, and the courts will note that’s a “best practice” and assume anyone doing less than that fails to abide by the law.
Even should that not happen, the prudent decision by any company will be to invest in such technology to avoid even having to make that argument in court.
Farid insists that sites can do age verification by much less intrusive means, including simple age “estimation.”
Age estimation can be done in a multitude of ways that are not invasive. In fact, businesses have been using age estimation for years – not to keep children safe – but rather for targeted marketing. The AADC will ensure that the age-estimation practices are the least invasive possible, will require that any personal information collected for the purposes of age estimation is not used for any other purpose, and, contrary to Goldman’s claim that age-authentication processes are generally privacy invasive, require that any collected information is deleted after its intended use.
Except the bill doesn’t just call for “age estimation”; it requires “a reasonable level of certainty,” which is not defined in the bill. And getting age estimation for targeted ads wrong means basically nothing to a company. They target an ad wrong, big deal. But under the AADC, a false estimation is now a legal liability. That, by itself, means that many sites will have strong incentives to move to true age verification, which is absolutely invasive.
And, also, not all sites engage in age estimation. BestNetTech does not. I don’t want to know how old you are. I don’t care. But under this bill, I might need to.
Also, it’s absolutely hilarious that Farid, who has spent many years trashing all of these companies, insisting that they’re pure evil, that you should delete their apps, and insisting that they have “little incentive” to ever protect their users… thinks they can then be trusted to “delete” the age verification information after it’s been used for its “intended use.”
On that, he’s way more trusting of the tech companies than I would be.
Goldman also claims – without any substantiation – that these regulations will force online businesses to close their doors to children altogether. This argument is, at best, disingenuous, and at worst fear-mongering. The bill comes after negotiations with diverse stakeholders to ensure it is practically feasible and effective. None of the hundreds of California businesses engaged in negotiations are saying they fear having to close their doors. Where companies are not engaging in risky practices, the risks are minimal. The bill also includes a “right to cure” for businesses that are in substantial compliance with its provisions, therefore limiting liability for those seeking in good faith to protect children on their service.
I mean, a bunch of website owners I’ve spoken to over the last month have asked me whether or not they should close off access to children altogether (or just close off access to Californians), so it’s hardly an idle thought.
Also, the idea that there were “negotiations with diverse stakeholders” appears to be bullshit. Again, I keep talking to website owners who were not contacted, and the few I’ve spoken to who have been in contact with legislators who worked on this bill have told me that the legislators told them, in effect, to pound sand when they pointed out the flaws in the bill.
I mean, Prof. Goldman pointed out tons of flaws in the bill, and it appears that the legislators made zero effort to fix them or to engage with him. No one in the California legislature spoke to me about my concerns either.
Exactly who are these “hundreds of California businesses engaged in negotiations”? I went through the list of organizations that officially supported the bill, and there are not “hundreds” there. I mean, there is the guy who spread COVID disinfo. Is that who Farid is talking about? Or the organizations pushing moral panics about the internet? There are the California privacy lawyers. But where are the hundreds of businesses who are happy with the law?
We should celebrate the fact that California is home to the giants of the technology sector. This success, however, also comes with the responsibility to ensure that California-based companies act as responsible global citizens. The arguments in favor of AADC are clear and uncontroversial: we have a responsibility to keep our youngest citizens safe. Hyperbolic and alarmist claims to the contrary are simply unfounded and unhelpful.
The only one who has made “hyperbolic and alarmist” claims here is the dude who insists that “there is no longer any question” that the internet harms children. The only one who has made “hyperbolic and alarmist” claims is the guy who tells his students that recommendations are so evil you should stop using apps. The only one who is “hyperbolic and alarmist” is the guy who insists the things that age verification providers told me directly are “bizarre and unsubstantiated.”
Farid may have built an amazing tool in PhotoDNA, but it hardly makes him an expert on the law, policy, how websites work, or social science about the supposed harms of the internet.