Bluesky Plans Decentralized Composable Moderation

from the power-to-the-people dept

We just wrote about Substack’s issue with content moderation and the Nazi bar problem. As I highlighted in that piece, any centralized service is going to be defined by its moderation choices. If you cater to terrible, abusive people, you become “the site that caters to terrible, abusive people.” That’s not a comment on “free speech,” because it has nothing to do with free speech. It has to do with how you keep your own corner of the internet, and what people will associate with you and your brand.

This is why I’ve argued for years that any one particular private service cannot be “the town square.” The internet itself is the town square built on decentralized protocols that allow everyone to communicate. Each centralized service is a private plot of land on that wider open vista, and if you want it to be unkempt and full of terrible people, you are free to do so, but don’t be surprised, or act offended, when lots of people decide they don’t want to associate with you.

This is also why I’ve spent so many years talking up the importance of a protocols not platforms approach to free speech. With decentralized protocols, the questions are different, and the ability to speak freely is retained, but the central issue of abusers, harassers, nonsense peddlers and more can be dealt with in different ways, rather than looking to a centralized nexus of control to handle it.

This is why I remain encouraged about Bluesky, the decentralized social media protocol which was ever so slightly influenced by my paper. It’s been in beta testing over the past few months, and has plenty of promise, including in overcoming some of the limitations of the ActivityPub-driven fediverse.

Around the same time that Substack’s Chris Best was melting down in response to fairly straightforward content moderation questions, Bluesky put up a blog post explaining its philosophy around content moderation: Composable Moderation.

Moderation is a necessary feature of social spaces. It’s how bad behavior gets constrained, norms get set, and disputes get resolved. We’ve kept the Bluesky app invite-only and are finishing moderation before the last pieces of open federation because we wanted to prioritize user safety from the start.

Just like our approach to algorithmic choice, our approach to moderation allows for an ecosystem of third-party providers. Moderation should be a composable, customizable piece that can be layered into your experience. For custom feeds, there is a basic default (only who you follow), and then many possibilities for custom algorithms. For moderation as well, there should be a basic default, and then many custom filters available on top.

The basics of our approach to moderation are well-established practices. We do automated labeling, like centralized social sites, and make service-level admin decisions, like many federated networks. But the piece we’re most excited about is the open, composable labeling system we’re building that both developers and users can contribute to. Under the hood, centralized social sites use labeling to implement moderation — we think this piece can be unbundled, opened up to third-party innovation, and configured with user agency in mind. Anyone should be able to create or subscribe to moderation labels that third parties create.

The actual details of how this will be implemented matter, but this seems like the right approach. There is certain content that needs to get taken down: generally child sex abuse material, outright commercial spam, and copyright infringement. But, beyond that, there are many different directions one can go, and allowing third parties to join in the process opens up some really interesting vectors of competition to explore alternative forms of moderation and create different views of content.

Here’s the way we’re designing an open, composable labeling system for moderation:

  • Anyone can define and apply “labels” to content or accounts (i.e. “spam”, “nsfw”). This is a separate service, so they do not have to run a PDS (personal data server) or a client app in order to do so.
  • Labels can be automatically generated (by third-party services, or by custom algorithms) or manually generated (by admins, or by users themselves)
  • Any service or person in the network can choose how these labels get used to determine the final user experience.

So how will we be applying this on the Bluesky app? Automated filtering is a commoditized service by now, so we will be taking advantage of this to apply a first pass to remove illegal content and label objectionable material. Then we will apply server-level filters as admins of bsky.social, with a default setting and custom controls to let you hide, warn, or show content. On top of that, we will let users subscribe to additional sets of moderation labels that can filter out more content or accounts.

Let’s dig into the layers here. Centralized social platforms delegate all moderation to a central set of admins whose policies are set by one company. This is a bit like resolving all disputes at the level of the Supreme Court. Federated networks delegate moderation decisions to server admins. This is more like resolving disputes at a state government level, which is better because you can move to a new state if you don’t like your state’s decisions — but moving is usually difficult and expensive in other networks. We’ve improved on this situation by making it easier to switch servers, and by separating moderation out into structurally independent services.

We’re calling the location-independent moderation infrastructure “community labeling” because you can opt-in to an online community’s moderation system that’s not necessarily tied to the server you’re on.
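The layering quoted above (merge labels from every labeler you subscribe to, then let server defaults and user preferences map each label to hide/warn/show) can be sketched in a few lines. This is purely an illustrative sketch, not Bluesky's actual API: the names, data shapes, and label sets here are invented for the example.

```python
from enum import IntEnum

class Action(IntEnum):
    """Possible treatments for labeled content, ordered by severity."""
    SHOW = 0
    WARN = 1
    HIDE = 2

def decide(post_id, labelers, server_policy, user_policy):
    """Merge labels from every subscribed labeler, then return the most
    restrictive action any label demands. A user's setting for a label
    overrides the server default for that same label."""
    labels = set().union(*(labeler.get(post_id, set()) for labeler in labelers))
    action = Action.SHOW
    for label in labels:
        policy = user_policy.get(label, server_policy.get(label, Action.SHOW))
        action = max(action, policy)
    return action

# Hypothetical label sources: one automated service, one community-run.
auto_labels = {"post1": {"nsfw"}}
community_labels = {"post1": {"spam"}, "post2": {"gore"}}

# Server-level defaults as bsky.social admins, plus one stricter user override.
server_policy = {"nsfw": Action.WARN, "spam": Action.HIDE, "gore": Action.WARN}
user_policy = {"gore": Action.HIDE}
```

Taking the maximum (most restrictive) action across all applied labels means any single "hide" wins, which matches the stated design: subscribing to additional label sets can only filter out more content, never surface more.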

This, combined with Bluesky’s plan to allow anyone to create their own algorithms and offer up a kind of marketplace of algorithms, is what makes Bluesky such an interesting project to me: it creates a much more decentralized social media ecosystem, but without the philosophical issues that often seem to hold back Mastodon (some top-down decisions, norms against any algorithms or search, and a continued reliance on individual instances to handle moderation issues).

I’ve seen some people complain about the initial implementation of Bluesky’s content moderation system, which lives in user settings and presents a window of content-filter categories with preset defaults.

The negative feedback I heard was that setting things up this way suggests that Bluesky is “okay with Political Hate-Groups,” but I actually think it’s much more interesting, and much more nuanced, than that. Again, remember the idea here is that it’s allowing lots of people to put in place their own moderation rules and systems, allowing for there to be competition over them.

This approach actually has some pretty clear advantages, in that it gets us somewhat past the nonsense about “censorship” and basically says “look, we’re not taking down your speech, but it’s not going to be seen as the default.” And, on top of that, it takes away the excuses from huffy nonsense consumers who whine about not being able to get content from their favorite nonsense peddlers. You can argue that nonsense peddlers should never be able to find a space to spew their nonsense, but that’s never going to work. Beyond being an affront to general principles of free speech, it is simply impossible to stop.

We’ve seen that already: people banned from Twitter or Facebook found their own places to speak and spew their nonsense. That’s always going to happen. With a setup like this, it actually does help limit the biggest concern: the discoverability that drives more users down the path from reality to nonsense.

But, also importantly, a system like this gives those who need to monitor the nonsense peddlers, from law enforcement to academic researchers to the media, better visibility into what’s happening, so they can have better responses prepared.

Again, these are early days, but I’m encouraged by this approach, and think it’s going to be much more interesting than lots of other approaches out there for content moderation.

Companies: bluesky


Comments on “Bluesky Plans Decentralized Composable Moderation”

This comment has been deemed insightful by the community.
PaulT (profile) says:

“The negative feedback I heard was that setting things up this way suggests that Bluesky is “okay with Political Hate-Groups” but I actually think it’s much more interesting, and much more nuanced than that”

The key there is going to be how that term is defined – and it’s not ever going to please everyone. Some would consider that to include BLM or any random pride event for various reasons, while if you just stick to the ones confirmed by the likes of the SPLC, well those are half the people who were whining about unfair bias in the first place.

The devil is always in the details, and the ultimate problem is that there’s a type of person who wants to be in the popular club but feels they shouldn’t need to conform in order to be accepted there (or is deliberately trying to disrupt it). That’s a human issue, not a tech issue, and not one that was ever truly solved offline. It’s just that if someone got kicked out of a bar for abusive language, they’d go elsewhere after being intimidated by a bouncer, not try to force the government to force the bar to let them back in.

The approach sounds good, but the devil is always in the details.

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re:

The key there is going to be how that term is defined – and it’s not ever going to please everyone. Some would consider that to include BLM or any random pride event for various reasons, while if you just stick to the ones confirmed by the likes of the SPLC, well those are half the people who were whining about unfair bias in the first place.

Exactly, and there are going to be a lot of edge cases vis-à-vis hate speech. I mean, you know this emoji 👌? It has come to be used as hate-speak meaning “White Power” (the middle, ring, and pinky fingers being the “W” and the arm and the circle being the “P”), but many fans of the awesome show MST3K (which calls out bigotry in the old movies they watch) recognize it as the “it stinks!” symbol (if you don’t get it, watch the episode “Pod People”). Also, the number “88” can have some cryptically hateful connotations, as it can mean the eighth letter of the alphabet, “H”, twice: “HH,” for “Heil Hitler.” But “8” is also pronounced “ba” in Mandarin Chinese, so “88” means “ba ba” in that language, approximating the English valediction “Bye-bye!” It can also mean the 88 keys on a piano, as in the musician 88bit.

It just shows you, there are a lot of edge cases with hate speech, and it’s not always as clear-cut as with Stormfront, Kiwifarms, or Tucker Carlson.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re:

Yeah, I was thinking more about how the group is defined, but even that’s variable – half the Jan 6th crowd were called antifa the moment they were facing consequences. A lot of recent movements are genuinely grass roots, so there’s no central definition or control of membership.

But, yeah, language is also a problem. One of the tendencies of the far right in recent years has been to try to redefine positive words to mean something different, and to co-opt otherwise innocent symbols to mean something negative (as you suggested). If you moderate that stuff one way, you run the risk of people being unfairly moderated (the Scunthorpe problem, etc.), while if you allow it, users might go elsewhere because they’re being attacked in ways that don’t get flagged.

That’s the root issue: checking a box is one thing, but what’s behind it? Does allowing political hate groups mean you’ll be flooded with neo-Nazis, while blocking them means you can’t host speech in support of gun control or LGBTQ rights? It depends on who is in charge of those definitions. Hide explicit sexual images but allow non-sexual nudity? Who defines the parameters, and how do they classify the statue of David? And so on…

It can make a good discussion, I’m just wary of who controls what happens behind the scenes and where their biases lie.

Anonymous Coward says:

Re: Re: Re:3

The reporter being a part of the offended minority might give it more weight. It reminds me of a scene from Seinfeld.

Seinfeld: I have a suspicion that he’s converted to Judaism purely for the jokes.
Priest: And this offends you as a Jewish person?
Seinfeld: No, it offends me as a comedian.

I’m likely underestimating the actual work involved, but I can’t believe this is impossible.

Anonymous Coward says:

Re: Re: Re:

Who defines the parameters

I believe the idea here is that you do. You know, subscribe to filtering that works for you, or write/modify your own.

Break out those top-level categories. Even grouping self-harm with gore and torture seems weird to me (define those, and in what contexts), but you can’t really have 200 top-level categories.

ninbura (profile) says:

Re:

I think this somewhat hilariously misses the point.

You can use a 3rd party moderator to determine what any of these terms mean. Which is why social media “needs” to be decentralized in the first place. No one is going to agree on a definition for “hate speech”, “hate group”, etc.

Bluesky gives you the ability to create your own definitions, and ultimately moderate your own content feed with the option to outsource the work if desired.

I guess I could see a problem with the default settings/definitions, but that seems a bit superfluous.

This comment has been flagged by the community.

Koby (profile) says:

Has Potential

The negative feedback I heard was that setting things up this way suggests that Bluesky is “okay with Political Hate-Groups” but I actually think it’s much more interesting, and much more nuanced than that. Again, remember the idea here is that it’s allowing lots of people to put in place their own moderation rules and systems, allowing for there to be competition over them.

For the leftists, simply allowing any speech with which they disagree is considered an affront. Filters are insufficient; they want to control what others see.

Anyhow, it’s a good start. The next battle will be over where users get categorized. For example, conservatives consider antifa to be a terroristic political hate group, while leftists consider anyone who votes Republican to be part of a political hate group. This platform could stand to have some more categories, but those wouldn’t be so difficult to add.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

For the leftists, simply allowing any speech with which they disagree is considered an affront. Filters are insufficient; they want to control what others see.

It is not the left that is trying to control speech; they usually refer hate speakers to forums that are more accepting of their speech. It is the right complaining that the forums where their speech is acceptable have a low user count, and that therefore they should be allowed to speak in all forums.

This comment has been flagged by the community.

Matthew M Bennett says:

It has everything to do with free speech, you moron

If you cater to terrible, abusive people, you become “the site that caters to terrible abusive people.” That’s not a comment on “free speech” because it has nothing to do with free speech. It has to do with how you keep your own corner of the internet and what people will associate with you and your brand.

Incorrect. “Terrible, abusive” is just your opinion, probably a shitty one, at that.

And moderation is anti-free speech. So yeah, it has everything to do with that.

Just admit you hate free speech.

Spunjji (profile) says:

Re: Methinks the crybaby troll doth protest too much

If you think that Nazi ideology being terrible and abusive is “just [someone’s] opinion”, then you’re telling us all we need to know about you.

That’s the entire point here, of course. You have a right to say things, other people have a right to not want to share space with people who say the things you want to say. If you cannot stomach the latter then you do not truly believe in the former, and can GTFO.

HotHead (profile) says:

We’re calling the location-independent moderation infrastructure “community labeling” because you can opt-in to an online community’s moderation system that’s not necessarily tied to the server you’re on.

Woah. I’m almost sold.

I’m just unsure about how the labels will work. I hope that there’ll be a way for each user and third-party algorithm to figure out which labels are the most relevant and which labels are likely to be accurate. What happens if I maliciously add a “dog” to a post about only a cat? The viewers of the post need to be able to figure out that the dog label shouldn’t be there, and algorithms need the opportunity to ignore the inaccurate “dog” label.

David says:

More than labels

Hopefully there’s more to it than the labels themselves.

The idea itself sounds like a moderation version of uBlock Origin. You can have the “for dummies” setup to start with, but the beauty is in how well you can refine the results, and share your own or use other people’s moderation setup.

I’d also expect it to be able to tag content to the level of the Danbooru sites — hundreds of potential tags, each of which can be contained within broader categories. For example, Naruto (the anime) is different from Naruto (the character) is different than naruto (the ramen). Probably don’t show all of them at the typical visual level, but you should be able to refine your personal moderation filter based on something far more complex than “Hate Groups”.

Samuel Abram (profile) says:

Re:

“for dummies”

Nota Bene: The “For Dummies” IP is now held by John Wiley & Sons, as in the company that attempted (but fortunately lost) to get rid of first sale in the SCOTUS case Kirtsaeng v. Wiley and they also are a party to the lawsuit that (so far successfully) sued the Internet Archive for having the temerity to loan books in their inventory. How dare a library loan books!

This comment has been deemed insightful by the community.
Bobson Dugnutt (profile) says:

There Will Always Be Nazis

I raised this issue in the Substack Nazi bar thread.

When Patel and Best had the heated exchange about clamping down on hate speech, I noted that the problem Patel and Best would have goes beyond denying a platform to, or tolerating hate speech from, far-right figures.

The problem is this: There Will Always Be Nazis.

Even if you have policies denying certain kinds of speech, as Patel seeks, the only way to ensure a completely Nazi-sterile environment is to require permission to speak. This defeats the purpose of the internet.

Remember that Nazis are determined. Even if you block Nazi speech, the smarter Nazis understand code-switching and disguising their symbolism and opinions in a way to engage the mainstream in their place, rather than dragging the mainstream to their viewpoint (i.e., sealioning, “just asking questions”/JAQing off, tu quoque, false equivalences, lawyering/working the refs/playing the rules).

Rules and moderation don’t deter them. On the contrary, they look forward to the challenge.

The other problem comes from free-speech absolutism. Much of the ethics of a free-speech culture like ours developed over centuries in the analog world, where there was a slow build toward freer, but never truly absolute, free speech. What limited speech were gatekeepers (who had discretion over what to publish, and who financed content), the cost of producing content, the time delay of publication, and the skill required to produce content (written text favored writers and editors; audio-visual content favored people who are telegenic and have public-speaking ability).

What’s happened with the internet is that content costs relatively little to nothing to produce, the internet is the least gatekeepered medium, information can reach its maximum potential audience instantaneously and with a near-zero marginal cost to replicate, and now it takes very little skill to produce text, audio or video.

Because there’s so much information, by so many participants, and coming so fast, the internet creates an ecosystem that allows the worst information to stick out and thrive.

Communication is in a state of perpetual war, which is the kind of habitat where Nazis thrive.

Nazis are as energetic as they are determined. Just as much as they are determined to thwart rules, Nazis are driven by a “to the last man, to the last hour” ethos and will play to win or die trying.

This is what I like about Bluesky’s content moderation system. It puts quality control in the hands of users, who can set limits on what content they can engage with and set the dials on how safe they want their spaces to be. Yes, “safe spaces” are going to be an issue, but a mechanism like this allows for a community-guided space without leaving free speech debates in the hands of a tech company’s equivalent of a football chain gang.

Anonymous Coward says:

Re:

Perhaps I’m being more charitable to Patel than you think I should be, but I thought his implication was that people don’t like to be hanging around Nazis.

The trick to dealing with Nazis is to be aware of whether the debate is set on their terms. For example, putting the rights of vulnerable minorities up for debate tends to presuppose that their rights are in question. Debating whether a post is offensive, and thus deserving of the offensive label, instead accepts the precondition that vulnerable minorities deserve respect and good manners. That inversion is refreshing enough that I’m optimistic about this. At the least, it would make people more conscious of prejudice by doing the work of defining it.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

I think the real issue is that this is a red herring. The issue is that the motherfucker would not answer the goddamned question. A question which is already covered by existing terms of service. He was simply being a jackass, trying not to alienate nazis at large, even though they would get kicked individually for TOS violations. Or, if the policy will change, he can say yeah, nazis or whoever else are totes welcome, and risk alienating a whole bunch of other people.

He isn’t being nuanced, or thoughtful, or realistic, or trying to promote free speech. he’s being a bloody coward.

No shit you can never get rid of all the nazis. Even if you did, the equivalent would re-invent itself later. Most people, especially here, get that.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:2

Captains of industry also find Nazism and other hate ideologies useful, in that they allow the anger to be directed to where it does them no damage, rather than at their greed and wealth and their need to control more and more of the wealth and resources of society.

This comment has been deemed insightful by the community.
Bobson Dugnutt (profile) says:

Re: Re: Re:3 Palingenetic ultranationalism

Roger Griffin’s theory on fascism says the origins of fascism aren’t monocausal, and it wasn’t a single class that was the kernel for fascist theory or action.

Griffin describes fascism as a cycle of “palingenetic ultranationalism.” The word has nothing to do with you-know-who; it means rebirth.

First, there is a widespread sense of decadence: the economy, government, or culture is in decline, dead (a sudden collapse of government or economy, or a military defeat), or zombified (society was never able to heal from a “death” event).

Second, society feels the present is the worst of all possible worlds, and the future will be even worse still. So there’s a sentimental appeal to an idealized, mythologized past. “A great future is only possible through people with a great past.” All fascist movements want to Make Great Again.

Third, fascism has a two-sided coin of eclecticism and syncretism to overcome its inherent contradictions. Eclecticism means the mass politics that makes fascism possible allows classes in struggle to see in fascism whatever they want to see: aristocrats a restoration of the old order, the bourgeoisie a muzzled and leashed underclass, religious fundamentalists the temporal and spiritual power they have been refused, a working class a strongman to keep the aristocracy and bourgeoisie in check, futurists an unburdening from all of those previous institutions, etc.

Syncretism is used to flatten these eclectic contradictions through a slapdash mix of modernity and tradition, fact and fable, myth and reality, popular and high culture. In Nazi Germany, the Aryan race mythology was largely taken from Hitler’s love of Wagnerian opera. Today, America’s fascists are likely drawing their history lessons from too-closely watching “The Matrix” and “Fight Club.”

Bobson Dugnutt (profile) says:

Re: Re: Re:

Best answered the question in the way a CEO is conventionally expected to answer questions, and not to Patel’s satisfaction.

A CEO with loose lips can change the material fortunes of their company through something as small as a quote. If “free speech” is a dog-whistle for the Peter Thiel kind of free speech — and most of the venture capital chieftains are just like Thiel — then Best has his eye on keeping his insiders happy.

Besides, Patel and Best can’t really hash out and settle the banning or accommodation of Nazis in the interview because There Will Always Be Nazis. Banning Nazis only makes them more clever, and the smarter and savvier will sneak through. Letting “free speech take the wheel” also lets Nazis colonize the space, like 4Chan and Twitter.

Substack has pretty good bulkheads — newsletters, pay mechanisms, and hopefully something like this Bluesky mechanism that allows users and communities to set their own speech thresholds to keep a few steps ahead of the true malefactors.

This comment has been flagged by the community.

Anonymous Coward says:

Re:

People not wanting to listen to you, and telling their friends, or anybody else for that matter, not to bother listening to you is not a violation of your right to speak freely. Nor are publication platforms refusing to carry your speech a violation of your rights.

So long as there are ways for you to publish your speech, and people can make up their own minds, including ignoring advice, as to whether they want to listen to you or not, your free speech rights are intact. Just because the platforms that allow you to speak have small audiences, and/or you cannot attract an audience in the sea of voices on the Internet, does not grant you the right to insist that more popular platforms carry your words.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re:

Who is talking about the right to speak freely? These are all privately owned platforms, so the only people with any rights are the people who own them. They may moderate and censor as they wish, and their users have no say in the matter unless the owners want to grant them one.

What I’m talking about, and what people here will willfully refuse to understand, is that Masnick says that a good thing about Bluesky is that (he thinks) it will hinder the discoverability of material he does not like (what he calls “nonsense”). But the purpose of free speech is to convince others of the positions it takes. A system that is designed to make speech difficult to find for people who might welcome it if they heard it is antithetical to free speech. That’s fine for people who already hate the freedoms – speech, religion, petition – granted by the 1st Amendment, but Masnick claims to support free speech. He doesn’t, of course, when that speech is both in favor of viewpoints he hates and is widely popular, but he has to twist and spin to try to find his way out of the dilemma. As he claimed when he was speaking about Best (of Substack), I would just like him to admit what he is instead of pretending to be a friend of freedom. Mike, just admit that you want to silence people who hold popular views that you disagree with, and we can be done – you’ll still be wrong, but at least not a hypocrite about it.

Anonymous Coward says:

One thing that wasn’t clear to me: will labels be directly attached to the content by third parties, or will you also be able to subscribe to “label streams” or whatever?

This makes a difference because I am sure there will be major disagreements about how stuff should be labeled. And surely there will be people who label stuff in dishonest/deceptive ways.

Anonymous Coward says:

Except that’s not what Masnick said. He said

With a setup like this, it actually does help limit the biggest concern: the discoverability that drives more users down the path from reality to nonsense.

So he’s not talking about people choosing not to hear speech they don’t like. He’s talking about preventing people from encountering speech they might like if they heard it, so that they are not “driven from reality to nonsense” where those are determined by the censors, not the listeners.

bhull242 (profile) says:

Re:

Both Mike and I have explained that you’re wrong. I even went into detail explaining why you’re wrong about what was meant.

Again, this is about allowing users to choose not to hear content that falls into certain categories of their own choosing. It’s limiting discoverability in that people who have said they don’t want to discover such content are far less likely to discover it. If you want to see it, there is nothing stopping you from doing so by turning off those filters. This isn’t forcing anyone not to hear anything; it’s allowing them to choose not to hear it, which includes not being exposed to it in the first place.

You can continue to misstate what the words you quote mean, but the fact is that they don’t mean what you say they mean.

Anonymous Coward says:

Re: Re:

No. Here is the quote again

it actually does help limit the biggest concern: the discoverability that drives more users down the path from reality to nonsense

First of all, to discover something means to find something new, that you didn’t know was there. Choosing to not see something you don’t want to see is not “limiting discoverability”.

Second, the truly relevant part is “drives … users … from reality to nonsense”. The only possible meaning of this is that Mike holds that a benefit of Bluesky is preventing people from seeing material that they would believe if they saw it. Having third parties be the arbiters of reality and nonsense such that they get to censor content so that people will not get a chance to decide for themselves is the opposite of freedom of speech. (Not of the 1st Amendment. Of freedom of speech as a concept and as a value.)

Anonymous Coward says:

Okay, so what happens when the government of India calls up Bluesky like it does Twitter and says “block this guy, remove those tweets, or we take our own measures”? And by what standards are they determining which content is illegal and thus subject to automatic moderation/removal?

I don’t understand enough about the distinctions between “platform” and “protocol” to grasp how Bluesky being the latter will allow it to avoid messy stuff like that. And if they can’t avoid that stuff, I’m not sure why it doesn’t seem to be explicitly part of the conversation about their content policies.
