This comment thread should be interesting
Lots of folks here on BestNetTech seem to have had no problem with the Federal government strong-arming tech companies to suppress speech which (1) questioned the "expert consensus" on COVID origins, lockdowns, masking, or vaccine efficacy, (2) reported the actual content of Hunter Biden's actual laptop, rather than dismissing it as "Russian disinformation", or (3) said anything else mildly supportive of Donald Trump's electoral prospects in any of his runs for the Presidency that some Democrat characterized as "misinformation". Moreover, those supportive of this indirect censorship via content moderation have vociferously defended the tech companies' kowtowing to such pressure as the companies exercising their First Amendment rights. Well, at least when it's a state government doing the strong-arming, the SCOTUS thinks otherwise, and unanimously so. Hints of the final disposition of Murthy v. Missouri, perhaps?
How about just requiring a warrant?
The simplest reform would be to require any law enforcement agency to obtain a warrant from a Federal judge before the NSA can share anything with it. The warrant would issue from an ordinary Federal court, not the FISA court, since its basis would be an ordinary criminal investigation, not a classified matter; it would meet the same evidentiary standards needed to obtain a wiretap warrant; and it would name the specific individual(s) or corporate entities whose overseas communications are sought from the NSA.
Government schools, government rules.
As the title says. And while we're at it: curriculum decisions to remove books formerly in the curriculum, or to move them to a later grade when children are more mature, are not censorship. They are curriculum decisions, which, depending on the laws of the particular state, are a matter for some level of government, be it the state legislature or an elected school board. If one of those has delegated its authority to teachers or professional administrators, it can still take back that authority and make curriculum decisions itself.

Free speech is about the government not being able to control what citizens say, not about the government not being able to control what agents of the government say.
A recent example
As to stealth edits by the NYTimes, we have a recent notorious example:

Israeli Strike Kills Hundreds in Hospital, Palestinians Say.

replaced with

At Least 500 Dead in Strike on Gaza Hospital, Palestinians Say.

and then with

At Least 500 Dead in Blast at Gaza Hospital, Palestinians Say.

All in the space of a few hours. Now, since all of these are reports of a claim made by Palestinians (meaning Hamas in this instance), on one reading every headline was a true statement.

However, one must wonder about the editorial judgement of not merely reporting claims made by an organization that had mere days earlier carried out the deliberate mass murder of Jews on a scale not seen since the Holocaust (and with the sort of glee at the murders for which even the SS rebuked the Croatian Ustashe running the Jasenovac death camp), but running with them as a headline on an event cloaked in the fog of war. And then trying to cover up the bad judgement with stealth edits, even as they repeated it by again using in the headline the Hamas-supplied casualty figure of 500, seriously doubted by all the Western intelligence services, which think the actual casualty count from the blast was 100 or less.
A "guard-rail" for text-based AI?
I am old enough to have been assigned term papers in high school in which not only every direct quotation but every assertion of fact had to be supported by a footnote, with a bibliography of sources used provided. Perhaps text-based AI should be subjected to the same rigor: a footnote to an actually existing source (book, journal article, judicial ruling, website) for every factual assertion and every quotation, plus a bibliography of sources cited. If that would be annoying for users to see as a matter of course, allow two versions of any output, one without the textual apparatus and one with, which the user could toggle between.
Suggesting a possible guard-rail for text-based AI
Some of us are old enough to have been assigned term papers as exercises in high school, in which we were required to provide footnotes not just for direct quotations but for all factual assertions that were not common knowledge, along with a bibliography of the sources cited in those footnotes. One could envision a regulation requiring that text-based AIs which make factual assertions provide footnotes and a bibliography, which would have to accurately direct readers of the resulting text to books, journal articles, or at least websites from which each asserted fact was drawn. Perhaps two versions of the text would always be generated, one with and one without the textual apparatus, which a user could toggle between. Yes, the AI could still produce erroneous information, but the frequency would then be comparable to that of erroneous information in human-generated writing under the same citation requirement, since it would only occur when a pre-existing error was repeated, rather than the AI making up "facts" out of whole cloth.
And this differed from America how?
ChatGPT is censored to keep its "speech" from addressing topics our pro-government (at least when a Democrat is in the White House) bien-pensants regard as "misinformation", even in the context of requests to write fiction. So China is different how?
Being like the military
If cops want to be like the military, I'm fine with that, if they're really like the military:
- Not allowed to unionize.
- Rules of engagement like the ones we give soldiers in counter-insurgency theaters.
- Severe punishments for killing civilians by breaching their rules of engagement.
- Lots of de-escalation training like MPs get.
- Being removed from a police-force for cause having the same sort of consequences as a dishonorable discharge from the military.
If the cops are subject to those conditions, give them all the fancy hardware they want.
But is it really piracy?
Copyright being a government-granted monopoly, those outside the granting country are under no obligation not to copy, reprint, or even sell reprints of copyrighted works unless their own government has entered into a treaty giving the granting country's copyrights the force of law. Treaties can always be withdrawn from or abrogated by one of the parties. Belarusians taking advantage of this move are not pirates, but privateers.
What is really needed
Commercial academic publishers used to provide a meaningful service by typesetting and distributing scientific results, as well as organizing their vetting by peer review. They no longer do. They should be competed out of business.

What is needed is for every scientific (and social scientific and humanistic) discipline and sub-discipline to start free online peer-reviewed journals the way category theorists did with Theory and Applications of Categories: free to publish in, free to access, fully peer reviewed, and now arguably the premier category theory journal. All that is needed is a university willing to provide the webhosting space for the journal's soft copy and someone to print a couple of copies to put in the host university's library and the Library of Congress (or the equivalent in another country: the two print copies of TAC reside at Mount Allison University in New Brunswick and the National Library of Canada).
Law and AI
This particular "robot lawyer" is plainly a publicity stunt, at best useless and at worst harmful if applied to anything beyond its original few easy applications of contesting parking tickets and lodging minor consumer complaints. However, the practice of law is a lot more like a formal system than are most human activities outside the mathematical sciences. It really is ripe for a genuinely good AI system to do much of the work we usually hire lawyers to undertake. This one just isn't a genuinely good AI system.
Legal merit v. good sense
While the lawsuit may be completely without legal merit, the view that social media constitutes a public nuisance strikes me as entirely sensible. Just think: without social media, conspiracy theories would not spread so easily; regardless of your politics, the folks you regard as yahoos on the other side would not be able to organize so easily; and we wouldn't waste untold time debating when content moderation is too lax, or so strict as to constitute censorship.
No magic needed
No, activists don't assume there's some magic that can determine when content is "harmful". They assume that people who agree with them, or who are cowed by a baying social media mob that agrees with them, will be making the decision, so that content, whether true or false, that would discomfit their point of view will be deemed "harmful" and removed.
Monopolies
An unsurprising finding. When an enterprise holds a monopoly (in this case created artificially by a government edict called a patent) on a product for which demand is inelastic (as with the only, or by far the most effective, treatment for a life-threatening illness), it will raise prices.

This is why "natural monopolies" called utilities, which provide goods (water, natural gas for heating and cooking, electricity) for which demand is inelastic, have their prices regulated. This is not seen as an affront to the free market, even by us resolute Hayekians, because there is no free market in goods whose supply is controlled by a monopoly. Plainly the same principle should be applied to pharmaceuticals for which there are no good substitutes (and measures taken to improve competition among producers of interchangeable drugs where substitutes do exist).
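The economic logic can be made concrete with a toy constant-elasticity demand model (the function and all the numbers are my own illustration, not from any cited study): when demand is inelastic, raising the price always raises revenue, so an unconstrained monopolist keeps raising it.

```python
# Toy constant-elasticity demand: Q = scale * P**elasticity, elasticity < 0.
# When |elasticity| < 1 (inelastic demand), revenue P*Q grows as P grows,
# so a monopolist facing inelastic demand has every incentive to raise prices.

def revenue(price: float, elasticity: float, scale: float = 100.0) -> float:
    """Revenue at a given price under constant-elasticity demand."""
    quantity = scale * price ** elasticity
    return price * quantity

# With elasticity -0.3 (quite inelastic, plausible for a life-saving drug),
# doubling the price repeatedly keeps increasing revenue.
for p in (10, 20, 40):
    print(f"price {p}: revenue {revenue(p, -0.3):.0f}")
```

The numbers are illustrative only; the point is the monotone rise in revenue with price whenever |elasticity| < 1, which is exactly the case regulation of utilities (and, arguably, patented drugs) is meant to address.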
Surely some version of fuck can be a trademark
Surely the word "fuck" in a distinctive font with some distinctive background elements could be trademarked. But any competent trademark lawyer should be warning clients off attempts to trademark bare words in common use. (Cf. the impossibility of trademarking "ugg", absent distinctive trade dress, in Australia, where the word simply means a fleece-inward sheepskin boot.)
Queen of Christmas?
Wouldn't that be the Most Holy Theotokos and Ever-Virgin Mary?
Zoom
No, not the teleconferencing program: the feature on almost all cell-phone cameras. Fine, you can't stand within 8 feet of a cop and record him. I'm sure zooming in by a factor of, say, 1.2 from 9 feet away will capture enough detail to either convict or exonerate the cop(s) in question. Heck, zooming in by a larger factor from 25 feet out will probably work, too. Not that these laws shouldn't be overturned as violations of the First Amendment; just a reminder in the meantime.
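The arithmetic behind those zoom factors is simple, assuming the angular size of a subject falls off linearly with distance (a back-of-the-envelope model; the function name and distances are my own illustration):

```python
# To capture the same level of detail from farther away, zoom must scale
# linearly with distance: a subject at distance d subtends 1/d the angle
# it would at distance 1, so matching the view from the 8-foot baseline
# requires a zoom factor of d / 8.

BASELINE_FT = 8  # the statutory minimum recording distance assumed above

def required_zoom(distance_ft: float, baseline_ft: float = BASELINE_FT) -> float:
    """Zoom factor needed at distance_ft to match detail captured at baseline_ft."""
    return distance_ft / baseline_ft

for d in (9, 12, 25):
    print(f"{d} ft -> {required_zoom(d):.2f}x zoom")
```

From 9 feet the required factor is 9/8 ≈ 1.13, comfortably under the 1.2x in the comment, and even 25 feet only demands about 3.1x, well within an ordinary phone camera's range.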
Content Moderation and Austrian School Economics
It just occurred to me that content moderation at scale is impossible for the same reasons Hayek and von Mises gave for the impossibility of centrally planned economies: one can't gather all the needed information (von Mises), and one can't process it in real time (Hayek).

Thus, the solution is the same: a content-moderation analogue of the free market. Let individual users, or voluntarily formed small communities of users, moderate the content they see, rather than have a central authority moderate content for all.

Well, that's the solution for those of us who want an open internet. For those who see content moderation as a means of censoring political opponents and letting the professional managerial class define the bounds of allowable discussion, I guess it's a non-starter, the way free-market economics is a non-starter for a devoted Stalinist.
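The user-level alternative sketched above can be made concrete: the platform serves everything, and each user (or small community) applies its own filter. A minimal sketch, with every class, rule, and name invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class UserFilter:
    """Moderation rules owned by one user or one small community;
    no central authority decides what everyone sees."""
    blocked_authors: set = field(default_factory=set)
    blocked_words: set = field(default_factory=set)

    def allows(self, post: Post) -> bool:
        if post.author in self.blocked_authors:
            return False
        return not any(w in post.text.lower() for w in self.blocked_words)

def personal_feed(posts, user_filter):
    # Filtering happens at the edge, per user, not at the center.
    return [p for p in posts if user_filter.allows(p)]

posts = [Post("alice", "Cat pictures"), Post("bob", "SPAM offer")]
f = UserFilter(blocked_words={"spam"})
print([p.author for p in personal_feed(posts, f)])  # ['alice']
```

The design point, not the code, is what matters: the information problem shrinks because each filter only needs the preferences of its own user, exactly as each market participant only needs local knowledge.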
Plagiarism and copyright
Moody writes, "I think there are more interesting questions here than what exactly is plagiarism, which arises from copyright’s obsessions with ownership." I'm not sure why copyright comes into the question of what constitutes plagiarism. Plagiarism as a condemnable notion predates, and is independent of, any copyright regime, as does its misunderstanding as theft of words or ideas rather than as what it really is: a fraudulent claim (most often implicit) to have originated them.
Amount vs. Importance
Plainly very little content moderation has to do with politics. However, some does: case in point, the suppression of all posts mentioning the New York Post report on Hunter Biden's laptop, because Trump. A survey of Biden voters suggests that 16% of them were unaware of the content of that report (which, I remind you, was based on Hunter-Biden-generated content on the laptop, not Russian disinformation) and moreover would have voted differently had they known.

Very little has to do with politics, but enough to swing a national election does, and that's a problem, unless of course you support censorship on behalf of whatever political tendency is driving most of the small part of content moderation that does involve politics.