tarleton.gillespie's BestNetTech Profile


About tarleton.gillespie

Posted on BestNetTech - 4 September 2018 @ 11:59am

There's A Reason That Misleading Claims Of Bias In Search And Social Media Enjoy Such Traction

President Trump’s tweets charging that Google search results are biased against him and against conservatives are the loudest and latest version of a growing attack on search engines and social media platforms. It is potent, and it’s almost certainly wrong. But it comes at an unfortunate time, just as a more thoughtful and substantive challenge to the impact of Silicon Valley tech companies has finally begun to emerge. If someone were truly concerned about free speech, news, and how platforms subtly reshape public participation, they would be engaging these deeper questions. But these simplistic and ill-informed claims of deliberate political bias raise the wrong questions, and they risk undermining and crowding out the right ones. Trump’s charges against Google, Twitter, and Facebook reveal a basic misunderstanding of how search and social media work, and they continue to confuse “fake news” with bad news, all in the service of scoring political points. However, even if these companies are not responsible for silencing conservative speech, they may be partly responsible for allowing this charge to gain purchase, by being so secretive for so long about how their algorithms and moderation policies work.

So what do search engines actually do when users access them for information or news? Search engines deliver relevant results, nothing more. That judgment of relevance is based on hundreds of factors, including popularity, topic relevance, and timeliness. Results are fluid and personalized. There’s plenty of room in this complex process for some sources to be overemphasized and others overlooked, and these are important questions to examine. But serious researchers who already study this are careful to take into account the effects of personalization, changes over time, and the powerful feedback effects of users. This is a far cry from looking at your own search results and being troubled by what you see. (Even the author of the report Trump was likely reacting to acknowledges that it was unscientific and disagrees with the suggestion that regulation of search should follow.)
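To make the multi-signal point concrete, here is a deliberately toy sketch, in Python, of how a ranking might blend a few signals with per-user weights. The signal names, weights, and structure are illustrative assumptions of mine, not Google’s actual formula.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    popularity: float     # e.g., normalized link and click volume, 0 to 1
    topical_match: float  # how well the page matches the query, 0 to 1
    freshness: float      # how recently it was published or updated, 0 to 1

# Hypothetical weights; a real engine tunes hundreds of signals, and
# personalization effectively gives different users different weights.
DEFAULT_WEIGHTS = {"popularity": 0.4, "topical_match": 0.4, "freshness": 0.2}

def relevance(r: Result, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Blend a handful of signals into a single score."""
    return (weights["popularity"] * r.popularity
            + weights["topical_match"] * r.topical_match
            + weights["freshness"] * r.freshness)

def rank(results: list, weights: dict = DEFAULT_WEIGHTS) -> list:
    # The same query with different weights (different users, different
    # moments) yields different orderings, which is why results are fluid
    # and personalized rather than fixed.
    return sorted(results, key=lambda r: relevance(r, weights), reverse=True)
```

Even in this toy version, the ordering shifts whenever the weights or the underlying signals shift, which is why one person’s search results are weak evidence of anything.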

To understand, for instance, the results for “Trump” in Google News, or “Trump news” in Google — different things, by the way — we would need to consider some much more likely explanations than deliberate political manipulation: major outlets like CNN may publish a lot more content a lot more often; more users may click on, read, and forward links from these sources; outspoken right-wing sites like Gateway Pundit may have much less trust outside of their devoted base than they imagine; CNN may be much more congruent with centrist political leanings than Trump and conservative critics admit; well-established news sources may already circulate more widely and successfully on social media platforms like Facebook and Twitter, boosting their rankings on search engines; users may simply be more convinced by these news sources, “voting” for them with their clicks and links in ways that Google picks up on.

In truth, there are important questions to be asked about search engines, social media platforms, and the circulation of news online. There are profound concerns about the economic sustainability of journalism itself when it has to compete on social media platforms. There are profound concerns about the subtle effects of how algorithms work. But the noise that right-wing critics are stirring up is not subtle, it is not helpful, it is not well informed — and more than that, it is clearly about scoring political points. Those claiming political bias seem wholly uninterested in acknowledging the inquiries already underway.

Charges of left-leaning bias are not new, of course. They come from a very old playbook conservatives have used against newspapers and broadcasters for decades. Unfortunately, Silicon Valley is partly to blame for why it is working so well today. Search engines and social media platforms have been too secretive about how their algorithms work, and too secretive about how content moderation works. In the absence of substantive explanations, users have been left to wonder why search results look the way they do, or why some posts get removed and others don’t. This uncertainty breeds suspicion, and that suspicion goes looking for other explanations. This leaves room for trolls, conspiracy mongers, and demagogues to suggest that the platforms are silencing them for their political speech — conveniently overlooking the fact that they were suspended for making hateful threats, or can’t reach the first page of search results because readers trust other sources. And Silicon Valley has bruised its users’ trust for so long that even its genuine explanations sound suspect.

Some of the press coverage, when it’s not careful, can inadvertently make the very same easy assumptions that these critics do. Search results, trending lists, and content moderation are not the same thing, they are not managed by the same people, and they are not handled in the same way. Too often, a critic will take ill-informed charges against search, one outdated incident regarding trending, and continued uncertainty about moderation practices, and lace them together into a blanket charge of bias. But they are simply different things.

It is unnerving to feel like an apologist for these tech companies. There are real and concerning questions about how search and social media work. I ask some of these questions in my own research, and my field has been thinking about them for years. The ways these companies have addressed, or often failed to address, the public ramifications of search algorithms and moderation policies have been deeply problematic. But these questions of bias distract us from the deeper problems.

It is also disconcerting, just as the public is finally grasping the subtle ways in which search and social media platforms matter, that we are ready to fall back on so simplistic a charge as deliberate political bias. I feel a bit like critics of mainstream news media, who for years have tried to highlight the way contemporary US news organizations are subtly centrist, structurally cautious, founded on commercial imperatives, and underattentive to marginalized voices — who now have to bracket those critiques and come to the defense of CNN when the President dismisses them as “fake news.” Those of us who ask hard questions about search and social media should do so, but we must also steadfastly refuse to lump these real concerns in with facile, politically motivated charges of bias that miss the deeper point.

Tarleton Gillespie is the author of Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. He is a principal researcher at Microsoft Research and an affiliated associate professor at Cornell University.

Posted on BestNetTech - 6 February 2018 @ 12:02pm

Moderation Is The Commodity

Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written essays about the questions discussed at the event, which we are publishing here. This one is excerpted from Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, forthcoming from Yale University Press, May 2018.

Content moderation is such a complex and laborious undertaking that, all things considered, it’s amazing that it works at all, and as well as it does. Moderation is hard. This should be obvious, but it is easily forgotten. It is resource intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes. And we are partly to blame for having put platforms in this untenable situation, by asking way too much of them. We sometimes decry the intrusion of platform moderation, and sometimes decry its absence. We cannot expect platforms to be hands-off and expect them to solve problems perfectly and expect them to get with the times and expect them to be impartial and automatic.

Even so, as a society we have once again handed over to private companies the power to set and enforce the boundaries of appropriate public speech for us. That is an enormous cultural power, held by a few deeply invested stakeholders, and it is being done behind closed doors, making it difficult for anyone else to inspect or challenge. Platforms frequently, and conspicuously, fail to live up to our expectations; in fact, given the enormity of the undertaking, most platforms’ own definition of success includes failing users on a regular basis.

The companies that have profited most from our commitment to platforms have done so by selling back to us the promises of the web and participatory culture. But as those promises have begun to sour, and the reality of their impact on public life has become more obvious and more complicated, these companies are now grappling with how best to be stewards of public culture, a responsibility that was not evident to them at the start.

It is time for the discussion about content moderation to shift, away from a focus on the harms users face and the missteps platforms sometimes make in response, to a more expansive examination of the responsibilities of platforms. For more than a decade, social media platforms have presented themselves as mere conduits, obscuring and disavowing the content moderation they do. Their instinct has been to dodge, dissemble, or deny every time it becomes clear that, in fact, they produce specific kinds of public discourse. The tools matter, and our public culture is in important ways a product of their design and oversight. While we cannot hold platforms responsible for the fact that some people want to post pornography, or mislead, or be hateful to others, we are now painfully aware of the ways in which platforms invite, facilitate, amplify, and exacerbate those tendencies: weaponized and coordinated harassment; misrepresentation and propaganda buoyed by their algorithmically calculated popularity; polarization as a side effect of personalization; bots speaking as humans, humans speaking as bots; public participation emphatically figured as individual self-promotion; the tactical gaming of platforms in order to simulate genuine cultural participation and value. In all of these ways, and others, platforms invoke and amplify particular forms of discourse, and they moderate away others, all in the name of being impartial conduits of open participation. The controversies around content moderation over the last half decade have helped spur this slow recognition, that platforms now constitute powerful infrastructure for knowledge, participation, and public expression.

~ ~ ~

All this means that our thinking about platforms must change. It is not just that all platforms moderate, or that they have to moderate, or that they tend to disavow it while doing so. It is that moderation, far from being occasional or ancillary, is in fact an essential, constant, and definitional part of what platforms do. I mean this literally: moderation is the essence of platforms, it is the commodity they offer.

First, moderation is a surprisingly large part of what they do, in a practical, day-to-day sense, and in terms of the time, resources, and number of employees they devote to it. Thousands of people, from software engineers to corporate lawyers to temporary clickworkers scattered across the globe, all work to remove content, suspend users, craft the rules, and respond to complaints. Social media platforms have built a complex apparatus, with innovative workflows and problematic labor conditions, just to manage this, nearly all of it invisible to users. Moreover, moderation shapes how platforms conceive of their users, and not just the ones who break the rules or seek their help. By shifting some of the labor of moderation back to us, through flagging, platforms deputize users as amateur editors and police. From that moment, platform managers must in part think of, address, and manage users as such. This adds another layer to how users are conceived of, along with seeing them as customers, producers, free labor, and commodity. And it would not be this way if moderation were handled differently.
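To illustrate just the flagging piece, here is a minimal, hypothetical sketch of how user reports might feed a human review queue; the names and the threshold are placeholders of mine, not any platform’s actual system.

```python
from collections import defaultdict

# Hypothetical threshold; real systems weigh flags far more elaborately,
# factoring in the flagger's track record, the content category, legal
# exposure, and more.
REVIEW_THRESHOLD = 3

flags = defaultdict(list)  # post_id -> list of (reporter_id, reason)
review_queue = []          # posts awaiting a human moderator's decision

def flag(post_id: str, reporter_id: str, reason: str) -> None:
    """Record a user's report; enough reports escalate the post to review."""
    flags[post_id].append((reporter_id, reason))
    if len(flags[post_id]) == REVIEW_THRESHOLD:
        # A moderator, often a contract worker far from the user, decides.
        review_queue.append(post_id)
```

Even in this stripped-down form, the reporting user is doing a piece of the platform’s work, which is precisely the deputizing described above.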

But in an even more fundamental way, content moderation is precisely what platforms offer. Anyone could make a website on which any user could post anything he pleased, without rules or guidelines. Such a website would, in all likelihood, quickly become a cesspool of hate and porn, and then be abandoned. But it would not be difficult to build, requiring little in the way of skill or financial backing. To produce and sustain an appealing platform requires moderation of some form. Content moderation is an elemental part of what makes social media platforms different, what distinguishes them from the open web. It is hiding inside every promise social media platforms make to their users, from the earliest invitations to “join a thriving community” or “broadcast yourself,” to Mark Zuckerberg’s promise to make Facebook “the social infrastructure to give people the power to build a global community that works for all of us.”

Content moderation is part of how platforms shape user participation into a deliverable experience. Platforms moderate (removal, filtering, suspension), they recommend (news feeds, trending lists, personalized suggestions), and they curate (featured content, front page offerings). Platforms use these three levers together to, actively and dynamically, tune the participation of users in order to produce the “right” feed for each user, the “right” social exchanges, the “right” kind of community. (“Right” here may mean ethical, legal, and healthy; but it also means whatever will promote engagement, increase ad revenue, and facilitate data collection.)
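A schematic way to see those three levers acting together on a single feed; the function and parameter names below are illustrative placeholders, not any platform’s real pipeline.

```python
from typing import Callable

Post = dict  # e.g., {"id": ..., "text": ..., "author": ...}

def build_feed(posts: list,
               violates_policy: Callable[[Post], bool],
               score: Callable[[Post], float],
               featured: list) -> list:
    # Moderate: remove or filter whatever breaks the rules.
    allowed = [p for p in posts if not violates_policy(p)]
    # Recommend: order what remains by whatever the platform optimizes for,
    # whether "relevance," engagement, ad revenue, or some blend.
    ranked = sorted(allowed, key=score, reverse=True)
    # Curate: pin or feature what the platform chooses to foreground.
    return featured + ranked
```

What counts as a violation, what the scoring function rewards, and what gets featured are exactly the kinds of judgments the word “right” is doing in the paragraph above.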

Too often, social media platforms discuss content moderation as a problem to be solved, and solved privately and reactively. In this “customer service” mindset, platform managers understand their responsibility primarily as protecting users from the offense or harm they are experiencing. But now platforms find they must answer also to users who find themselves implicated in and troubled by a system that facilitates the reprehensible, even if they never see it. Whether I ever saw, clicked on, or ‘liked’ a fake news item posted by Russian operatives, I am still worried that others have; I am troubled by the very fact of it and concerned for the sanctity of the political process as a result. Protecting users is no longer enough: the offense and harm in question is not just to individuals, but to the public itself, and to the institutions on which it depends. This, according to John Dewey, is the very nature of a public: “The public consists of all those who are affected by the indirect consequences of transactions to such an extent that it is deemed necessary to have those consequences systematically cared for.” What makes something of concern to the public is the potential need for its inhibition.

So, despite the safe harbor provided by U.S. law and the indemnity enshrined in their terms of service contracts as private actors, social media platforms now inhabit a new position of responsibility, not only to individual users, but to the public they powerfully affect. When an intermediary grows this large, this entwined with the institutions of public discourse, this crucial, it has an implicit contract with the public that, whether platform management likes it or not, may be quite different from the contract it required users to click through. The primary and secondary effects these platforms have on essential aspects of public life, as they become apparent, now lie at their doorstep.

~ ~ ~

If content moderation is the commodity, if it is the essence of what platforms do, then it makes no sense for us to treat it as a bandage to be applied or a mess to be swept up. Rethinking content moderation might begin with this recognition, that content moderation is part of how platforms tune the public discourse they purport to host. Platforms could be held responsible, at least partially so, for how they tend to that public discourse, and to what ends. The easy version of such an obligation would be to require platforms to moderate more, or more quickly, or more aggressively, or more thoughtfully, or to some accepted minimum standard. But I believe the answer is something more. Their implicit contract with the public requires that platforms share this responsibility with the public: not just the work of moderating, but the judgment as well. Social media platforms must be custodians, not in the sense of quietly sweeping up the mess, but in the sense of being responsible guardians of their own collective and public care.

Tarleton Gillespie is a Principal Researcher at Microsoft Research and an Adjunct Associate Professor in the Department of Communication at Cornell University.
