Bluesky Plans Decentralized Composable Moderation
from the power-to-the-people dept
We just wrote about Substack’s issue with content moderation and the Nazi bar problem. As I highlighted in that piece, any centralized service is going to be defined by its moderation choices. If you cater to terrible, abusive people, you become “the site that caters to terrible, abusive people.” That’s not a comment on “free speech,” because it has nothing to do with free speech. It has to do with how you keep your own corner of the internet, and with what people will associate with you and your brand.
This is why I’ve argued for years that any one particular private service cannot be “the town square.” The internet itself is the town square built on decentralized protocols that allow everyone to communicate. Each centralized service is a private plot of land on that wider open vista, and if you want it to be unkempt and full of terrible people, you are free to do so, but don’t be surprised, or act offended, when lots of people decide they don’t want to associate with you.
This is also why I’ve spent so many years talking up the importance of a protocols-not-platforms approach to free speech. With decentralized protocols, the questions are different: the ability to speak freely is retained, but the central problem of abusers, harassers, nonsense peddlers, and more can be dealt with in different ways, rather than looking to a centralized nexus of control to handle it.
This is why I remain encouraged about Bluesky, the decentralized social media protocol which was ever so slightly influenced by my paper. It’s been in beta testing over the past few months, and has plenty of promise, including in overcoming some of the limitations of the ActivityPub-driven fediverse.
Around the same time that Substack’s Chris Best was melting down in response to fairly straightforward content moderation questions, Bluesky put up a blog post explaining its philosophy around content moderation: Composable Moderation.
Moderation is a necessary feature of social spaces. It’s how bad behavior gets constrained, norms get set, and disputes get resolved. We’ve kept the Bluesky app invite-only and are finishing moderation before the last pieces of open federation because we wanted to prioritize user safety from the start.
Just like our approach to algorithmic choice, our approach to moderation allows for an ecosystem of third-party providers. Moderation should be a composable, customizable piece that can be layered into your experience. For custom feeds, there is a basic default (only who you follow), and then many possibilities for custom algorithms. For moderation as well, there should be a basic default, and then many custom filters available on top.
The basics of our approach to moderation are well-established practices. We do automated labeling, like centralized social sites, and make service-level admin decisions, like many federated networks. But the piece we’re most excited about is the open, composable labeling system we’re building that both developers and users can contribute to. Under the hood, centralized social sites use labeling to implement moderation — we think this piece can be unbundled, opened up to third-party innovation, and configured with user agency in mind. Anyone should be able to create or subscribe to moderation labels that third parties create.
The actual details of how this will be implemented matter, but this seems like the right approach. There is certain content that needs to be taken down: generally child sexual abuse material, outright commercial spam, and copyright infringement. But beyond that, there are many different directions one can go, and allowing third parties to join in the process opens up some really interesting vectors of competition to explore alternative forms of moderation and create different views of content.
Here’s the way we’re designing an open, composable labeling system for moderation:
- Anyone can define and apply “labels” to content or accounts (i.e. “spam”, “nsfw”). This is a separate service, so they do not have to run a PDS (personal data server) or a client app in order to do so.
- Labels can be automatically generated (by third-party services, or by custom algorithms) or manually generated (by admins, or by users themselves)
- Any service or person in the network can choose how these labels get used to determine the final user experience.
So how will we be applying this on the Bluesky app? Automated filtering is a commoditized service by now, so we will be taking advantage of this to apply a first pass to remove illegal content and label objectionable material. Then we will apply server-level filters as admins of bsky.social, with a default setting and custom controls to let you hide, warn, or show content. On top of that, we will let users subscribe to additional sets of moderation labels that can filter out more content or accounts.
Let’s dig into the layers here. Centralized social platforms delegate all moderation to a central set of admins whose policies are set by one company. This is a bit like resolving all disputes at the level of the Supreme Court. Federated networks delegate moderation decisions to server admins. This is more like resolving disputes at a state government level, which is better because you can move to a new state if you don’t like your state’s decisions — but moving is usually difficult and expensive in other networks. We’ve improved on this situation by making it easier to switch servers, and by separating moderation out into structurally independent services.
We’re calling the location-independent moderation infrastructure “community labeling” because you can opt-in to an online community’s moderation system that’s not necessarily tied to the server you’re on.
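Reading between the lines of that post, here is a minimal sketch of how such a layered labeling system might look on the client side. To be clear, this is my own illustration, not Bluesky’s actual schema or API: the type shapes, the field names, and the “strictest action wins” rule are all assumptions.

```typescript
// Hypothetical sketch of composable moderation; NOT the actual AT Protocol
// schema. A label is just an assertion from some source about some content;
// the client decides what each label means for the final user experience.

type LabelAction = "hide" | "warn" | "show";

interface Label {
  src: string; // identifier of the labeler that emitted this label (assumed field)
  uri: string; // the content or account being labeled (assumed field)
  val: string; // the label value, e.g. "spam", "nsfw"
}

// A user's moderation preferences: which labelers they subscribe to, and
// what each label value should do. Both are user-configurable, per the post.
interface ModerationPrefs {
  subscribedLabelers: Set<string>;
  actions: Map<string, LabelAction>;
}

// Resolve the final treatment of one post from all labels applied to it.
// Assumption: the strictest action from any subscribed labeler wins.
function resolveAction(labels: Label[], prefs: ModerationPrefs): LabelAction {
  const severity: Record<LabelAction, number> = { show: 0, warn: 1, hide: 2 };
  let result: LabelAction = "show";
  for (const label of labels) {
    if (!prefs.subscribedLabelers.has(label.src)) continue; // skip labelers the user hasn't opted into
    const action = prefs.actions.get(label.val) ?? "show";
    if (severity[action] > severity[result]) result = action;
  }
  return result;
}
```

Even in this toy version, the key property the post describes survives: labels can be generated anywhere (automated services, admins, users), while the mapping from label to hide/warn/show stays in the user’s hands.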
This composable approach, combined with Bluesky’s plan to let anyone create their own algorithms and to offer up a kind of marketplace of algorithms, is what makes Bluesky such an interesting project to me. It creates a much more decentralized social media, but without the philosophical issues that often seem to hold back Mastodon (some top-down decisions, norms against any algorithms or search, and a continued reliance on individual instances to handle moderation issues).
I’ve seen some people complain about the initial implementation of Bluesky’s content moderation system, which lives in user settings and pops up a window like this (with these defaults):
[Screenshot: Bluesky’s content moderation settings window, showing the default filters]
The negative feedback I heard was that setting things up this way suggests that Bluesky is “okay with Political Hate-Groups,” but I actually think it’s much more interesting, and much more nuanced, than that. Again, remember: the idea here is to allow lots of people to put in place their own moderation rules and systems, allowing for there to be competition over them.
This approach actually has some pretty clear advantages, in that it gets us somewhat past the nonsense about “censorship” and basically says: “look, we’re not taking down your speech, but it’s not going to be seen by default.” And, on top of that, it takes away the excuses from huffy nonsense consumers who whine about not being able to get content from their favorite nonsense peddlers. You can argue that nonsense peddlers should never be able to find a space to spew their nonsense, but that’s never going to work. Beyond being an affront to general principles of free speech, it is also simply impossible to stop.
We’ve seen that already: people banned from Twitter or Facebook found their own places to speak and spew their nonsense. That’s always going to happen. With a setup like this, it actually does help limit the biggest concern: the discoverability that drives more users down the path from reality to nonsense.
But, also, importantly, a system like this gives those who need to monitor the nonsense peddlers, from law enforcement to academic researchers to the media, great visibility into what’s happening, and lets them have better responses prepared.
Again, these are early days, but I’m encouraged by this approach, and think it’s going to be much more interesting than lots of other approaches out there for content moderation.
Filed Under: at protocol, composable moderation, content moderation, decentralized, protocols
Companies: bluesky
Comments on “Bluesky Plans Decentralized Composable Moderation”
The key there is going to be how a term like “Political Hate-Groups” is defined – and it’s never going to please everyone. Some would consider that to include BLM or any random pride event, for various reasons, while if you just stick to the groups confirmed by the likes of the SPLC, well, those include half the people who were whining about unfair bias in the first place.
The devil is always in the details, and the ultimate problem is that there’s a type of person who wants to be in the popular club but feels they shouldn’t need to conform in order to be accepted there (or is deliberately trying to disrupt). That’s a human issue, not a tech issue, and not one that was ever truly solved offline. It’s just that if someone got kicked out of a bar for abusive language, they’d go elsewhere after being intimidated by a bouncer, not try to force the government to force the bar to let them back in.
The approach sounds good, but the devil is always in the details.
Re:
Exactly, and there are going to be a lot of edge cases vis-à-vis hate speech. I mean, you know this emoji 👌? It has come to be used as hate-speak to mean “White Power” (the middle, ring, and pinky fingers being the “W,” and the arm and the circle being the “P”), but many fans of the awesome show MST3K (which calls out bigotry in the old movies they watch) recognize it as the “it stinks!” symbol (if you don’t get it, watch the episode “Pod People”). Also, the number “88” can have some cryptically hateful connotations: it means the eighth letter of the alphabet, “H,” twice, for “HH,” for “Heil Hitler.” But “8” is also pronounced “ba” in Mandarin Chinese, so “88” means “ba ba” in that language, approximating the English valediction “Bye-bye!” It can also mean the 88 keys on a piano, as in the musician 88bit.
It just shows you, there are a lot of edge cases with hate speech, and it’s not always as clear-cut as with Stormfront, Kiwifarms, or Tucker Carlson.
Re: Re:
Yeah, I was thinking more about how the group is defined, but even that’s variable – half the Jan 6th crowd were called antifa the moment they were facing consequences. A lot of recent movements are genuinely grassroots, so there’s no central definition or control of membership.
But, yeah, language is also a problem. One of the tendencies of the far right in recent years has been to try to redefine positive words to mean something different, and to co-opt otherwise innocent symbols to mean something negative (as you suggested). If you moderate that stuff one way, you run the risk of people being unfairly moderated (the Scunthorpe problem, etc.), while if you allow it, users might go elsewhere because they’re being attacked in ways that don’t flag up.
That’s the root issue: checking a box is one thing, but what’s behind it? Does allowing political hate groups mean you’ll be flooded with neo-Nazis, while blocking them means you can’t host speech in support of gun control or LGBTQ rights? It depends on who is in charge of those definitions. Hide explicit sexual images but allow non-sexual nudity? Who defines the parameters, and how do they classify the statue of David? And so on…
It can make a good discussion, I’m just wary of who controls what happens behind the scenes and where their biases lie.
Re: Re: Re:
I suspect a good direction would be to get even more granular with the tags, like “Racial Hate”, “Transphobic”, “Ableism”, etc. Pragmatically speaking, each of those is going to hit each individual person less personally, for obvious reasons.
Re: Re: Re:2
The bigger problem is how you ensure that the tags are reasonably accurate, and that they do not become a means for trolls to suppress speech.
Re: Re: Re:3
The reporter being a part of the offended minority might give it more weight. It reminds me of a scene from Seinfeld.
I’m likely underestimating the actual work involved, but I can’t believe this is impossible.
Re: Re: Re:
Who defines the parameters…
I believe the idea here is that you do. You know, subscribe to filtering that works for you, or write/modify your own.
Break out those top-level categories. Even grouping self-harm with gore and torture seems weird to me (define those, and in what contexts), but you can’t really have 200 top-level categories.
Re:
That’s the whole point of this system, I assume? You don’t have to accept Bluesky’s definition of “hate group.” Presumably the mid-term goal is to allow you to say “block all interaction with anyone Org X designates as a hate group.” And you can swap out “moderation vendors” until you find someone you like.
Re:
I think this somewhat hilariously misses the point.
You can use a 3rd party moderator to determine what any of these terms mean. Which is why social media “needs” to be decentralized in the first place. No one is going to agree on a definition for “hate speech”, “hate group”, etc.
Bluesky gives you the ability to create your own definitions, and ultimately moderate your own content feed with the option to outsource the work if desired.
I guess I could see a problem with the default settings/definitions, but that concern seems a bit overblown.
This comment has been flagged by the community.
Has Potential
For the leftists, simply allowing any speech with which they disagree is considered to be an affront. Filters are insufficient; they want to control what others see.
Anyhow, it’s a good start. The next battle will be over where users get categorized. For example, conservatives consider antifa to be a terroristic political hate group, while leftists consider anyone who votes Republican to be part of a political hate group. This platform could stand to have some more categories, but those wouldn’t be so difficult to add.
Re:
Koby, I know you’re not the sharpest knife in the drawer, but you read this entire article without realizing the point: it’s not one organization that makes those decisions. It’s anyone, and then anyone can subscribe to the moderation of those they think will do a good job. If you don’t like someone else’s moderation, then just pick a different moderation setup.
Re:
It is not the left that is trying to control speech; they usually refer hate speakers to forums that are more accepting of their speech. It is the right complaining that the forums where their speech is acceptable have a low user count, and that therefore they should be allowed to speak in all forums.
Re:
“For the lefists, simply allowing any speech of which they disagree is considered to be an affront.”
incorrect
Re:
Koby.
U still don’t see any commies in power.
Re:
…projected nobody not on hallucinogens, ever.
This comment has been flagged by the community.
It has everything to do with free speech, you moron
Incorrect. “Terrible, abusive” is just your opinion, probably a shitty one, at that.
And moderation is anti-free speech. So yeah, it has everything to do with that.
Just admit you hate free speech.
Re:
FTFY
Re:
Why do people like you keep forcing their way in where they are not welcome? Is it because few people want to listen to you when they can avoid you?
Re:
” you hate free speech”
incorrect
Re:
You know who tends to hold this opinion?
Assholes.
Re: Re: Opinions are just like assholes ...
… if nobody wants to hear yours, they sure as hell aren’t going to smell yours either.
Re:
free speech is cool! your speech sucks!
Re: Methinks the crybaby troll doth protest too much
If you think that Nazi ideology being terrible and abusive is “just [someone’s] opinion”, then you’re telling us all we need to know about you.
That’s the entire point here, of course. You have a right to say things, other people have a right to not want to share space with people who say the things you want to say. If you cannot stomach the latter then you do not truly believe in the former, and can GTFO.
Woah. I’m almost sold.
I’m just unsure about how the labels will work. I hope that there’ll be a way for each user and third-party algorithm to figure out which labels are the most relevant and which labels are likely to be accurate. What happens if I maliciously add a “dog” label to a post that’s only about a cat? The viewers of the post need to be able to figure out that the dog label shouldn’t be there, and algorithms need the opportunity to ignore the inaccurate “dog” label.
Re:
I imagine it’d be namespaced or grouped somehow. For example, the tag could be something like #dog@foo.org and only members of foo.org are allowed to tag.
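For what it’s worth, a namespacing scheme like that could be as simple as checking that a label’s declared authority matches its source. This is purely a sketch of the commenter’s idea; the “value@authority” tag format here is invented, not anything Bluesky has specified:

```typescript
// Invented "value@authority" tag format, per the parent comment's suggestion.
interface AppliedTag {
  tag: string; // e.g. "dog@foo.org"
  src: string; // the service that actually applied the tag
}

// Trust a namespaced tag only when the applier is its own authority, so a
// random account can't attach "dog@foo.org" to a post about a cat.
function isAuthorized(applied: AppliedTag): boolean {
  const at = applied.tag.lastIndexOf("@");
  if (at < 0) return false; // unnamespaced tags get no inherent trust
  return applied.src === applied.tag.slice(at + 1);
}
```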
Re: Re:
Yeah, don’t let the fail tag the dog.
More than labels
Hopefully there’s more to it than the labels themselves.
The idea itself sounds like a moderation version of uBlock Origin. You can have the “for dummies” setup to start with, but the beauty is in how well you can refine the results, and share your own or use other people’s moderation setup.
I’d also expect it to be able to tag content to the level of the Danbooru sites — hundreds of potential tags, each of which can be contained within broader categories. For example, Naruto (the anime) is different from Naruto (the character), which is different from naruto (the ramen topping). You probably don’t show all of them at the typical visual level, but you should be able to refine your personal moderation filter based on something far more complex than “Hate Groups.”
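To sketch what that might look like (the taxonomy below is invented for illustration, combining this comment’s Danbooru point with the granular tags suggested upthread):

```typescript
// Invented example taxonomy: fine-grained tags grouped under broad categories,
// so a user can filter at either level of detail.
const taxonomy: Record<string, string[]> = {
  "hate-groups": ["racial-hate", "transphobia", "ableism"],
  "naruto": ["naruto-anime", "naruto-character", "naruto-ramen"],
};

// Subscribing to a broad category expands to all of its fine-grained tags,
// while power users can pick individual tags directly.
function expandFilter(selected: string[]): Set<string> {
  return new Set(selected.flatMap((tag) => taxonomy[tag] ?? [tag]));
}
```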
Re:
Nota Bene: The “For Dummies” IP is now held by John Wiley & Sons, as in the company that attempted (but fortunately lost) to get rid of first sale in the SCOTUS case Kirtsaeng v. Wiley and they also are a party to the lawsuit that (so far successfully) sued the Internet Archive for having the temerity to loan books in their inventory. How dare a library loan books!
Re:
That will work, so long as tags do not become like the Usenet hierarchy system: abused to the point that they become useless.
I know it’s getting late in the day when I read the headline as…”Bluesky Plans Decentralized Compostable Moderation”.
Yeesh!
Re:
It’s early here and I just did the same thing 🙂
There Will Always Be Nazis
I raised this issue in the Substack Nazi bar thread.
When Patel and Best had the heated exchange about clamping down on hate speech, I noted that the problem Patel and Best would have goes beyond denying a platform to, or tolerating hate speech from, far-right figures.
The problem is this: There Will Always Be Nazis.
Even if you have policies denying certain kinds of speech, as Patel seeks, the only way to ensure a completely Nazi-sterile environment is to require permission to speak. And that defeats the purpose of the internet.
Remember that Nazis are determined. Even if you block Nazi speech, the smarter Nazis understand code-switching, disguising their symbolism and opinions in ways that engage the mainstream on its own ground rather than dragging the mainstream to their viewpoint (i.e., sealioning, “just asking questions”/JAQing off, tu quoque, false equivalences, lawyering/working the refs/playing the rules).
Rules and moderation don’t deter them. On the contrary, they look forward to the challenge.
The other problem comes from free-speech absolutism. Much of the ethics that a free-speech culture like ours has developed over centuries was developed in the analog world, where there was a slow build toward freer, but never truly absolute, speech. What limited speech were gatekeepers (who exercised publishing discretion and financed content), the cost of producing content, the time delay of publication, and the skill required to produce content (written text favored writers and editors; audio-visual content favored people who are telegenic and have public-speaking ability).
What’s happened with the internet is that content costs relatively little to nothing to produce, the internet is the least gatekeepered medium, information can reach its maximum potential audience instantaneously and with a near-zero marginal cost to replicate, and now it takes very little skill to produce text, audio or video.
Because there’s so much information, by so many participants, and coming so fast, the internet creates an ecosystem that allows the worst information to stick out and thrive.
Communication is in a state of perpetual war, which is the kind of habitat where Nazis thrive.
Nazis are as energetic as they are determined. Just as they are determined to thwart rules, Nazis are driven by a “to the last man, to the last hour” ethos and will play to win or die trying.
This is what I like about Bluesky’s content moderation system. It puts quality control in the hands of users, who can set limits on what content they can engage with and set the dials on how safe they want their spaces to be. Yes, “safe spaces” are going to be an issue, but a mechanism like this allows for a community-guided space without leaving free speech debates in the hands of a tech company’s equivalent of a football chain gang.
Re:
Perhaps I’m being more charitable to Patel than you think I should be, but I thought his implication was that people don’t like to be hanging around Nazis.
The trick to dealing with Nazis is to be aware of whether the terms of engagement are set by them. For example, putting the rights of vulnerable minorities up for debate presumes those rights are in question. Debating whether a post is offensive, and thus deserving of the offensive label, instead accepts the precondition that vulnerable minorities deserve respect and good manners. That inversion is refreshing enough that I’m optimistic about this. At the very least, it would make people more conscious of prejudice by doing the work of defining it.
Re: Re:
I think the real issue is that this is a red herring. The issue is that the motherfucker would not answer the goddamned question, a question which is already covered by existing terms of service. He was simply being a jackass, trying not to alienate nazis at large, even though they would get kicked individually for TOS violations. Or, if the policy will change, he can say yeah, nazis or whoever else are totes welcome, and risk alienating a whole bunch of other people.
He isn’t being nuanced, or thoughtful, or realistic, or trying to promote free speech. He’s being a bloody coward.
No shit you can never get rid of all the nazis. Even if you did, the equivalent would re-invent itself later. Most people, especially here, get that.
Re: Re: Re:
I read somewhere that poverty is a huge contributor to the influence of fascist ideology. Captains of industry have the resources to reduce poverty levels, and yet many do not; much money is instead spent influencing legislation in ways that worsen the problems.
Re: Re: Re:2
Captains of industry also find Nazism and other hate ideologies useful, in that they allow the anger to be directed to where it does them no damage, rather than at their greed and wealth and their need to control more and more of the wealth and resources of society.
Re: Re: Re:3
“hate ideologies useful”
Short term only
Re: Re: Re:4
They have become short-term thinkers in the drive to continuously increase their profits and wealth.
Re: Re: Re:3 Palingenetic ultranationalism
Roger Griffin’s theory on fascism says the origins of fascism aren’t monocausal, and it wasn’t a single class that was the kernel for fascist theory or action.
Griffin describes fascism as a cycle of “palingenetic ultranationalism.” The word has nothing to do with you-know-who; it means rebirth.
First, there is a widespread sense of decadence: the economy, government, or culture is in decline, has died (a sudden collapse of government or economy, or a military defeat), or has been zombified (society was never able to heal from a “death” event).
Second, society feels the present is the worst of all possible worlds, and the future will be even worse still. So there’s a sentimental appeal to an idealized, mythologized past. “A great future is only possible through people with a great past.” All fascist movements want to Make Great Again.
Third, fascism uses the two-sided coin of eclecticism and syncretism to overcome its inherent contradictions. Eclecticism means the mass politics that makes fascism possible allows classes in struggle to see in fascism what they want to see — aristocrats a restoration of the old order, the bourgeoisie a muzzled and leashed underclass, religious fundamentalists a restoration of temporal and spiritual power they had been refused, a working class that wants a strongman to keep the aristocracy and bourgeoisie in check, futurists who want to be unburdened of all of those previous institutions, etc.
Syncretism is used to flatten these eclectic contradictions through a slapdash mix of modernity and tradition, fact and fable, myth and reality, popular and high culture. In Nazi Germany, the Aryan race mythology was largely taken from Hitler’s love of Wagnerian opera. Today, America’s fascists are likely drawing their history lessons from too-closely watching “The Matrix” and “Fight Club.”
Re: Re: Re:
Best answered the question in the way a CEO is conventionally expected to answer questions, and not to Patel’s satisfaction.
A CEO with loose lips can change the material fortunes of their company through something as small as a quote. If “free speech” is a dog-whistle for the Peter Thiel kind of free speech — and most of the venture capital chieftains are just like Thiel — then Best has his eye on keeping his insiders happy.
Besides, Patel and Best can’t really hash out and settle the banning or accommodation of Nazis in the interview because There Will Always Be Nazis. Banning Nazis only makes them more clever, and the smarter and savvier will sneak through. Letting “free speech take the wheel” also lets Nazis colonize the space, like 4Chan and Twitter.
Substack has pretty good bulkheads — newsletters, pay mechanisms, and hopefully something like this Bluesky mechanism that allows users and communities to set their own speech thresholds to keep a few steps ahead of the true malefactors.
This comment has been flagged by the community.
As usual, the hypocrite of free speech thinks that free speech he doesn’t like is just great as long as nobody can hear it. Do you even hear yourself?
Re:
Sorry, I can’t hear him over the voices in your head.
Re:
I do hear myself, which is why I know that that’s not at all what I said or what I’m arguing for. The whole idea of this is that you actually solve the very problem you’re always so mad about: you don’t like any moderation. So on Bluesky, you can choose to go with a different moderation option, and get all the crap you want to see.
I’m saying let’s make it more of a free market, where people get to choose the moderation they want. And then we’ll see what the market thinks: do they want nonsense peddlers or not? Do they want Nazis or not?
I’m literally pushing this because it enables exactly what you claim you want: more “speech.” And, on top of that, a marketplace of ideas. And then let’s see what the market says.
This comment has been flagged by the community.
Re: Re:
And this composable moderation from many sources is also the same thing I have said many times is good – that large generic speech platforms should facilitate subgroups that can moderate as their members wish, and should provide a variety of moderation that those users can opt in to. If this moderation can be pulled off in a distributed way, terrific.
Which does not address the pull quote from you that I commented on; you said
In what way have I misinterpreted you? How does “limiting discoverability” differ from silencing speech? You want people who speak things you hate to have the ability to speak, but only where people who might be convinced by them will not hear them.
Re: Re: Re:
Hyman.
Again…
You have been told to stop acting like a transphobic fuck. Many, many times.
From both the community AND the owner.
If you don’t want to take the fucking hint then Mike will have to do worse than ignore your harassment.
And to answer your shitty question…
Because moderation, i.e., filtering out transphobic assholes like you, is different from discoverability, i.e., finding a transphobic asshole like you.
Re: Re: Re:
Think of it like the difference between offering people earplugs and taping people’s mouths shut.
Re: Re: Re:2
No. People put in earplugs to avoid hearing sounds that they know are there. “Limiting discoverability” means forcibly putting earplugs in people’s ears so that they will not realize that there may be sounds that they want to hear.
Re: Re: Re:3 Broken analogy
It actually means something more like having a sound-dampened lobby where people can choose what they want to hear in peace, instead of having the most vocal dickhead with a megaphone drown everyone else out with their incessant one-note blathering. In an accurate analogy, nobody is being forced to do anything.
Of course, that’s never really good enough for the kind of people who scream and thrash and cry whenever they’re not handed first go at the megaphone.
Re: Re: Re:4
Again, that is not what Masnick said. He said that Bluesky would have the advantage of limiting discoverability. When discoverability is limited, by definition that is not someone choosing for themselves what to hear and what to ignore, it’s someone else making the choice for them, so that they never get to see the information that’s been concealed from them. And again, this is coming from a supposed advocate of free speech. An advocate of free speech who thinks that it’s good for speech to be hidden away from people who might be convinced that it’s true if they hear it. That hypocrisy goes down to the bone.
Re: Re: Re:5
Dude. I told you to fuck off until you can understand basic fucking English. You decided to continue showing your whole ass instead.
Limiting discoverability FOR THOSE WHO CHOOSE TO TURN ON THE FILTERS BECAUSE THEY DON’T WANT TO DEAL WITH ABSOLUTE FUCKING FOOLS like yourself.
Learn to read.
Also: fuck off. Go away. No one wants you here. You’re a perverted idiot obsessed with other people’s genitals, and I’ve asked you to leave. Go away.
Re: Re: Re:5
You need to stop otherwording people. Mike was saying that it limits discoverability to only those who are willing to discover such things in the first place! That doesn’t place any limits or restrictions on anyone who is fine with discovering them. It’s like having a filter to prevent children from watching mature-rated content; it prevents unintentional discovery by those who don’t want it.
What you call “limiting discoverability” would be more properly “eliminating discoverability” or maybe “minimizing discoverability”. “Limiting” just means “placing limits on”, with nothing suggested about how much they are limited by. In this case, the limits are customizable by individual users, so no one is being forced to not hear anything not barred by law.
Re: Re: Re:3
Again, literally nothing here is about forcing anyone to do anything. The whole fucking point is that people get to choose what ear plugs they want to put in.
Re: Re: Re:3
Nope. “Limiting discoverability” means giving people the option to put earplugs in, not forcing them to wear them. This should be obvious in this context, where Bluesky’s system doesn’t force anyone to filter at all. And as for “not hearing sounds they know are there”: people know that stuff they don’t like is going to be there, so that isn’t a difference between the two. What the earplugs do is allow you to not hear it, and you aren’t exactly aware of the details of what you are missing either way, only the general stuff, so that isn’t an actual difference.
Granted, it’s not a perfect analogy, since earplugs aren’t exactly selective, but the idea is the same. It’s the difference between giving people the option to not hear what is being said and forcing people not to speak to anyone. Basically, this is about limiting discovery to those who want to hear, or preventing accidental discovery by people who don’t want to hear, depending on which is the default. Either way, those who want to hear it are not prevented from or substantially inconvenienced when trying to do so.
Now, you can attack the strawman position that “limiting discoverability” in this context means “not allowing anyone to discover it,” but that isn’t what anyone here means by it, so you’re attacking a position no one here is advocating for, meaning that, at best, you’re wasting time. (Really, you seem to be confusing “limiting discoverability” with “eliminating discoverability,” because only the latter is consistent with forcing people not to hear. We’re talking about the former.)
Re: Re: Nazi bar
The Nazi bar analogy doesn’t reflect the way that things work in the real world. If somebody wants to open a Nazi bar in San Francisco, they are free to do so. The city has no authority to moderate a Nazi bar and would not try to. But very few barhoppers are going to say to themselves, “there’s a Nazi bar in San Francisco, so I am not going to go out for drinks in San Francisco.” They will simply not go to the Nazi bar.
If a Nazi bar pops up on Twitter or Substack, people who don’t like Nazi bars can simply choose to not go to that bar. They can patronize the bars they like and avoid the ones they don’t like.
Re: Re: Re:
That works on the existing Substack, but with their proposed Notes, Twitter, etc., the problem is that the Nazis, bigots, and trolls will not stay in their own bars, but insist on being able to use all the bars. It has a lot to do with their need for victims.
Re: Re: Re:2
This has, indeed, been happening. I keep seeing transphobic twerps pop up in replies to people I follow. They have a pathological need for other people to smell their farts.
Re: Re: Re:
Huh? You’re just repeating the point I was making and saying it doesn’t reflect the way things work in the real world. So you’re… disagreeing with me, by restating exactly what I was saying?
Re:
People not wanting to listen to you, and telling their friends, or anybody else for that matter, not to bother listening to you, is not a violation of your right to speak freely. Nor is a platform refusing to carry your speech a violation of your rights.
So long as there are ways for you to publish your speech, and people can make up their own minds, including ignoring advice, as to whether they want to listen to you or not, your free speech rights are intact. Just because the platforms that allow you to speak have small audiences, and/or you cannot attract an audience in the sea of voices on the Internet, does not grant you the right to insist that more popular platforms carry your words.
This comment has been flagged by the community.
Re: Re:
Who is talking about the right to speak freely? These are all privately owned platforms, so the only people with any rights are the people who own them. They may moderate and censor as they wish, and their users have no say in the matter unless the owners want to grant them one.
What I’m talking about, and what people here will willfully refuse to understand, is that Masnick says that a good thing about Bluesky is that (he thinks) it will hinder the discoverability of material he does not like (what he calls “nonsense”). But the purpose of free speech is to convince others of the positions it takes. A system that is designed to make speech difficult to find for people who might welcome it if they heard it is antithetical to free speech. That’s fine for people who already hate the freedoms – speech, religion, petition – granted by the 1st Amendment, but Masnick claims to support free speech. He doesn’t, of course, when that speech is both in favor of viewpoints he hates and is widely popular, but he has to twist and spin to try to find his way out of the dilemma. Just as he demanded of Best (of Substack), I would just like him to admit what he is instead of pretending to be a friend of freedom. Mike, just admit that you want to silence people who hold popular views that you disagree with, and we can be done – you’ll still be wrong, but at least not a hypocrite about it.
Re: Re: Re:
Your reading comprehension still sucks dude.
Come back when you can comprehend basic English. Until then, will you fuck off already?
This comment has been flagged by the community.
Re: Re: Re:2
You literally said
What do you think I’m not comprehending? You are saying that a benefit of Bluesky is that it will prevent people from seeing material that they would like if they saw it. What else is limiting discoverability supposed to mean? What kind of supposed advocate of free speech thinks that speech is better served by a system that prevents people from knowing it’s there?
You know my favorite bugaboo. People certainly don’t agree on what is “nonsense”, and they can’t even agree on what is “reality”. And yet you want a system that makes it hard to hold conversations to bring people around in their views. Is it free speech when you speak in the forest and there’s no one there to hear you?
Re: Re: Re:3
“yet you want a system that makes it hard to hold conversations to bring people around in their views”
Explain? Seems like he wants a system that makes it difficult for bullshit merchants to manipulate a feed to get their wares forcibly injected into the eyeballs of rubes, as per YouTube and Facebook’s algorithms.
You seem really keen to make this about restricting people from speech “they would like if they saw it”, and part of the problem with that view is that if we’re going off what people like, most people don’t really like free speech at all – and yet we accept that it’s a social good, so we try to protect it. A lot of people really like some really objectively bad stuff, like pogroms. They’re super popular across societies and time periods.
Why are you so averse to the idea of discussing disinformation and misinformation as social harms, instead preferring to refer to them as just a form of speech that some people enjoy, like a kind of intellectual junk food? It’s a busted analogy and it kind of makes you look like you’re not so much advocating for speech in general, as speech that you know sucks ass but serves a purpose for you and yours.
This comment has been flagged by the community.
Re: Re: Re:4
What counts as “bullshit” is in the eye of the beholder, obviously. Misinformation and disinformation might be social harms, but since we have no consensus in our society as to which viewpoints are true and which are false, what constitutes misinformation and disinformation has to be an individual matter.
If you decide that some viewpoint is misinformation and disinformation and elect not to see it, that’s great. If systems like Bluesky help you with the tech to filter out things that you consider nonsense, that’s great. If someone is using that tech to hide viewpoints from you that you have not seen, might be interested in seeing, and might believe are correct if you saw them, that’s not so great. And that’s what Masnick said was a positive feature – that Bluesky could be used to prevent people from discovering viewpoints that they might like if they saw them.
Re: Re: Re:5
Does that mean you do not believe doctors, or news reports that do not reinforce your personal belief system? Are you safe to do anything without adult supervision?
Re: Re: Re:6
Of course. Are you telling me that you believe news reported by Fox News, or medical opinions delivered by Dr. Mehmet Oz? All claims should be filtered through your life experience for evaluation, not blindly accepted on the basis of credentialism.
Re: Re: Re:5
Funny comment for the same guy who has spent months in the comments here insisting that we all MUST hear about that your own views on the genitals of children are “truth”.
This comment has been flagged by the community.
Re: Re: Re:6
It’s up to the owner of the site to decide whether my comments appear. It is up to me to state the truth as I believe it to be. It is your choice to believe or not what I say, and to respond or not to what I say.
The “genitals of children” (and their genetics) are the only things that will ever determine their sex. No matter how much you mock and despise the physical world, the physical world stubbornly remains exactly the same and does not care one whit about your wishes that it change to conform to your mistaken beliefs.
Re: Re: Re:7
What shape is your nose in? Because in the real world, ignoring repeated requests to leave leads to a race between someone forcing you out of the premises and whoever you are offending taking more direct action.
This comment has been flagged by the community.
Re: Re: Re:8
The owner of the site is free to take whatever direct action he likes. You making vague threats from your mother’s basement is rather pathetic.
Re: Re: Re:9
What threats? I just observed that you are ignoring a request to leave, and wonder if you act the same in the real world.
Re: Re: Re:9
Nobody cares what you straight white males think. You had your time in the spotlight and you ruined it for yourselves
cope
stay mad die mad
get fucked
Re: Re: Re:3
You seem to want a system where bigoted people can butt in on conversations taking place in a social group that doesn’t approve of bigoted views and ask for “debate” or “reasonable discussion” about those bigoted views. You also seem to think that those groups telling those people who barged in to “fuck off” is the same thing as those groups “silencing” the people who barged in.
A trans-inclusive social group should have the freedom to deny people with anti-trans views from being in, or making demands of, that group. No one in that group should be forced to “debate” whether trans people deserve to exist or educate someone with anti-trans views on every aspect of being transgender.
The same logic applies when that social group is a social media service such as a Mastodon instance or a Discord server. To believe otherwise is to believe in compelled association. I doubt you’d enjoy being forced to associate with people who don’t share your views—and who might actively question whether you deserve to exist. What makes you think trans people would (or should) accept that?
Re: Re: Re:3
You keep misunderstanding what is meant. It means “limiting discoverability so that only those who want to hear this sort of thing will hear it”. This was explained to you multiple times.
Re: Re: Re:4
I think that misunderstanding is deliberate, as that troll thinks that discoverability means listening to their speech before rejecting it, even when you have heard them say the same thing repeatedly.
Re: Re: Re:5
You don’t “discover” something you already know is there. Discovery means finding something new. Limiting discoverability means preventing people from finding something new, not people choosing to silence things they already know they don’t want to hear. As with “man” and “woman”, you don’t get to torture words into meaning whatever you want them to mean.
Re: Re: Re:6
Let me put it to you another way. There are lots of kinds of pseudoscientific naturopathy, and new such claims are being made all the time. Say I decide I don’t want to see anything in that realm because I know that it’s basically all nonsense. By turning on a filter that prevents such a thing from being shown to me, I will be preventing myself from discovering any new posts made that are labeled such, including claims I’ve never heard of.
Whether you agree or disagree with my choice, it’s still my choice to ask the filter to remove certain kinds of posts from my feed before I can see them. By choosing to wear the earplugs, I know that there will be things I never heard before that I will not be able to hear, and I’m okay with that. If you have a problem with that, you are demanding a particular audience who doesn’t want to hear, and you don’t have any right to that, nor is rejecting that demand silencing anyone.
So for this:
It is people choosing to silence things they don’t want to hear, even if they haven’t heard that specific post before. It’s the post’s discoverability being limited, not necessarily the ideas expressed within the post, though the limitation is done based on the ideas expressed. Moreover, they are also saying they don’t want to be exposed to any ideas in a certain category, including things they don’t already know. Again, you don’t have the right to force a particular audience to hear, especially when they expressly say they don’t want to.
The transphobic attack is irrelevant and uncalled for.
Re: Re: Re:6
Also, people have the right to choose not to hear an idea without hearing it first. It’s still them making the choice, not someone imposing it on them. Just as the right to speak comes with it the right not to speak, the right to hear comes with it the right not to hear. You are not entitled to make people listen to what you have to say before rejecting it if they don’t want to.
Re: Yes, that is how facts work
Some speech smells of roses and some of rancid farts. Always will it be so, no matter how hard you want people to join you in huffing farts and proclaiming them to be Free Range Farts That Smell of Roses, Ackshually.
Interesting
Mike,
Thanks for the update.
This looks like it has the potential to be something really interesting and dare I say, possibly even useful.
Take care and do keep us informed on this front.
O how I wish for it to be real.
I would like to request a “nonsense peddlers” tag/filter I could apply to both social media/Internet and real life please.
One thing that wasn’t clear to me: will labels be directly attached to the content by third parties, or will you also be able to subscribe to “label streams” or whatever?
This makes a difference because I am sure there will be major disagreements about how stuff should be labeled. And surely there will be people who label stuff in dishonest/deceptive ways.
Re:
Somewhat answering my own question: it appears that labeling will be the moderation? I could envision a separation of labeling services/subscriptions from moderation “subscriptions.” However, maybe I’m not fully seeing how they envision it being designed.
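If it helps, the separation this comment envisions could look something like the following. This is speculative: “label streams” and “moderation policies” as distinct subscriptions are the commenter’s framing, not anything Bluesky has committed to.

```typescript
// Speculative separation of concerns: a label stream asserts facts
// ("source S says content X is spam"), while a moderation policy is a
// separately subscribable opinion about what each label should do.
interface LabelEvent {
  uri: string; // the content being labeled
  val: string; // the label value
  src: string; // who emitted the label
}

type LabelStream = AsyncIterable<LabelEvent>;
type ModerationPolicy = (label: LabelEvent) => "hide" | "warn" | "show";

// A client could mix and match: subscribe to foo.org's labels while
// applying bar.net's policy (or your own) to them.
async function applyPolicy(stream: LabelStream, policy: ModerationPolicy): Promise<void> {
  for await (const event of stream) {
    const action = policy(event);
    if (action !== "show") {
      console.log(`${action}: ${event.uri} (labeled "${event.val}" by ${event.src})`);
    }
  }
}
```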
Except that’s not what Masnick said. He said
So he’s not talking about people choosing not to hear speech they don’t like. He’s talking about preventing people from encountering speech they might like if they heard it, so that they are not “driven from reality to nonsense” where those are determined by the censors, not the listeners.
Re:
Dude. You’re wrong. I’ve told you a dozen times you’re wrong. You read it wrong. I explained why you’re wrong and you continue to read it wrong. Give it a fucking rest you absolute numbskull.
Re:
Both Mike and I have explained that you’re wrong. I even went into detail explaining why you’re wrong about what was meant.
Again, this is about allowing users to choose not to hear content that falls into certain categories of their own choosing. It’s limiting discoverability in that people who have said they don’t want to discover such content are far less likely to discover it. If you want to see it, there is nothing stopping you from doing so by turning off those filters. This isn’t forcing anyone not to hear anything; it’s allowing them to choose not to hear it, which includes not being exposed to it in the first place.
You can continue to misstate what the words you quote mean, but the fact is that they don’t mean what you say they mean.
Re: Re:
No. Here is the quote again
First of all, to discover something means to find something new, that you didn’t know was there. Choosing to not see something you don’t want to see is not “limiting discoverability”.
Second, the truly relevant part is “drives … users … from reality to nonsense”. The only possible meaning of this is that Mike holds that a benefit of Bluesky is preventing people from seeing material that they would believe if they saw it. Having third parties be the arbiters of reality and nonsense such that they get to censor content so that people will not get a chance to decide for themselves is the opposite of freedom of speech. (Not of the 1st Amendment. Of freedom of speech as a concept and as a value.)
Who Tags the Tags?
As Archive Of Our Own discovered, tagging itself can become a source of spam and annoyance. Distributed tagging is going to have the same sorts of issues that the distributed content itself has with respect to authenticity and verification. Perhaps the tags will need to have supertags 🙂
Okay, so what happens when the government of India calls up Bluesky like it does Twitter and says “block this guy, remove those tweets, or we take our own measures”? And by what standards are they determining which content is illegal and thus subject to automatic moderation/removal?
I don’t understand enough about the distinctions between “platform” and “protocol” to grasp how Bluesky being the latter will allow it to avoid messy stuff like that. And if they can’t avoid that stuff, I’m not sure why it doesn’t seem to be explicitly part of the conversation about their content policies.