Glyn Moody's BestNetTech Profile

Posted on BestNetTech - 23 December 2025 @ 08:01pm

40 Years Of Copyright Obstruction To Human Rights And Social Justice

One of the little-known but extremely telling episodes in the history of modern copyright, discussed in Walled Culture the book (free digital versions available), concerns the Marrakesh Treaty. A post on the Corporate Europe Observatory (CEO) site from 2017 has a good summary of what the treaty is about, and why it is important:

It sets out exceptions and limits to copyright rules so that people unable to use print media (including blind, visually impaired, and dyslexic people) can access a far greater range of books and other written materials in accessible formats. These exceptions to copyright law are important in helping to combat the ‘book famine’ for print-disabled readers. The Marrakesh Treaty is particularly important in global south countries where the range of materials in an accessible format – usually expensive to produce and disseminate – can be extremely limited.

Its importance was recognised long ago, as indicated by a timeline on the Knowledge Ecology International (KEI) site:

In 1981, the governing bodies of WIPO and UNESCO agreed to create a Working Group on Access by the Visually and Auditory Handicapped to Material Reproducing Works Protected by Copyright. This group meeting took place on October 25-27, 1982 in Paris, and produced a report that included model exceptions for national copyright laws. (UNESCO/WIPO/WGH/I/3). An accessible copy of this report is available here.

And yet it was only in 2013 – 31 years after the original report – that the treaty was finally agreed. The reason for this extraordinary delay in making it easier for the visually impaired to enjoy even a fraction of the material that most have access to is simple: copyright. As KEI’s director, James Love, told Walled Culture in an interview three years ago: “the initial opposition was from the publishers, and the publishers did everything you can imagine to derail this [treaty]”. The CEO post explains why:

Industry’s lobby efforts have attempted to re-frame the Marrakesh Treaty away from being a matter of human rights, education, and social justice, towards a copyright agenda by portraying it as a threat to business’ interests.

Indeed, even industries well outside publishing lobbied hard against the treaty. For example:

Caterpillar, the machinery manufacturer, joined the campaign to oppose it, apparently convinced that the Treaty would act as a slippery slope towards weaker intellectual property rules elsewhere.

As the CEO article noted, after the Marrakesh Treaty was agreed, several EU member states insisted on it being watered down further:

contrary to the obvious benefits of the ratification and implementation of the Marrakesh Treaty for the 30 million blind or visually-impaired people in Europe (and 285 million worldwide), several EU member state governments have instead bought the business line that these issues should be viewed through the lens of copyright.

That was eight years ago. And yet – incredibly – the pushback against providing the visually impaired with at least minimal rights to convert print and digital material into forms that they could access has continued unabated. A recent post on the International Federation of Library Associations and Institutions (IFLA) blog analyses the ways in which the already diluted benefits of the Marrakesh Treaty have been diminished further:

it has become clear that there are a number of ways in which it is possible to undermine the goals and intent of the Marrakesh Treaty, ultimately limiting the progress on access to information that would otherwise be possible.

This article highlights examples from countries that are arguably getting Marrakesh implementation wrong. The list below illustrates provisions (or a lack of provisions) to avoid because they undermine the purpose of the treaty and create barriers to access for people with disabilities.

One extraordinary failure to implement the Marrakesh Treaty properly, a full 40 years after it was first discussed, is “where laws have set out that authorised entities need to be registered in order to use Marrakesh provisions, but then there is no way of registering.” According to the IFLA, this is the case in Brazil and Argentina. Just slightly better is the situation where “only certain institutions and libraries should count as authorised entities.” Clearly, this “may have the effect of limiting the number of service providers, and place an additional burden on institutions.” Another problem concerns remuneration:

The Marrakesh Treaty includes an optional provision for remuneration of rightholders. This non-compulsory clause was added in order to secure support during negotiations, but undermines the Treaty’s purpose by allowing the payment of a royalty for an inaccessible work, and creates a financial and administrative burden, ultimately drawing resources away from services to persons with disabilities.

Germany is a disappointing example of how new barriers can be placed in the way of the visually impaired by adding unjustified and exorbitant costs:

a fee of at least €15 is charged for each transfer of a book for each individual format. Fees (approx. 15 cents) are also charged for each download or stream of a book. Additionally, fees are charged for obtaining books from other German-speaking countries and for borrowing them. This leads to considerable costs, which inevitably result in a decline in purchases and the range of services offered.

Another obstacle is the requirement in some countries for “a commercial availability check for a work in an accessible format, when the very purpose of the Marrakesh Treaty was to address a market failure.” As the IFLA post rightly points out:

A commercial availability check is unnecessary – libraries will buy books in accessible formats where they can, as it is far more cost effective to purchase the work than produce it in accessible format. Yet Canada has introduced such a provision, and indeed even requires a second check when exporting books. It is burdensome to expect a library to conduct a search in a foreign market and be 100% sure that a book is not available in a given format there. Often the information simply is not available. Such provisions therefore create unacceptable liability, chilling the sharing of books. 

Finally, there are countries that have joined the Marrakesh Treaty, but have done little or nothing to implement it:

a recent piece from Bangladesh highlights how delays in reforming domestic copyright laws, coupled with underinvestment, have meant that three years on from ratifying the Treaty, persons with print disabilities are still waiting for change. Similarly in South Africa, despite a judgement from the Constitutional Court, the necessary reforms to implement the Treaty are still being held up.

The Marrakesh Treaty saga shows the copyright industry and its friends in governments around the world at their very worst. Unashamedly placing copyright’s intellectual monopoly above other fundamental human rights, these groups have selfishly done all they can to block, hinder, delay and dilute the idea that granting the visually impaired ready access to books and other material is a matter of social justice and basic compassion.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Posted on BestNetTech - 16 December 2025 @ 12:02pm

When People Realize How Good The Latest Chinese Open Source Models Are (And Free), The GenAI Bubble Could Finally Pop

Although the field of artificial intelligence (AI) goes back more than half a century, its latest incarnation, generative AI, is still very new: ChatGPT was launched just three years ago. During that time a wide variety of issues have been raised, ranging from concerns about the impact of AI on copyright and on people’s ability to learn or even think, to job losses, the flood of AI slop on the Internet, the environmental harms of massive data centers, and the question of whether the creation of a super-intelligent AI will lead to the demise of humanity. Recently, a more mundane worry has emerged: that the current superheated generative AI market is a bubble about to pop. In the last few days, Google’s CEO, Sundar Pichai, has admitted that there is some “irrationality” in the current AI boom, while the Bank of England has warned about the risk of a “sharp correction” in the value of major players in the sector.

One element that may not yet be factored into this situation is the rising sophistication of open source models from China. Back in April, BestNetTech wrote about how the release of a single model from the Chinese company DeepSeek had wiped a trillion dollars from US markets. Since then, DeepSeek has not been standing still. It has just launched its V3.2 model, and a review on ZDNet is impressed by the improvements:

the fact that a company — and one based in China, no less — has built an open-source model that can compete with the reasoning capabilities of some of the most advanced proprietary models currently on the market is a huge deal. It reiterates growing evidence that the “performance gap” between open-source and closed-source models isn’t a fixed and unresolvable fact, but a technical discrepancy that can be bridged through creative approaches to pretraining, attention, and posttraining.

It is not just one open source Chinese model that is close to matching the best of the leading proprietary offerings. An article from NBC News notes that other freely downloadable Chinese models like Alibaba’s Qwen were also “within striking distance of America’s best.” Moreover, these are not merely theoretical options: they are already being put to use by AI startups in the US.

Over the past year, a growing share of America’s hottest AI startups have turned to open Chinese AI models that increasingly rival, and sometimes replace, expensive U.S. systems as the foundation for American AI products.

NBC News spoke to over 15 AI startup founders, machine-learning engineers, industry experts and investors, who said that while models from American companies continue to set the pace of progress at the frontier of AI capabilities, many Chinese systems are cheaper to access, more customizable and have become sufficiently capable for many uses over the past year.

As well as being free to download and completely configurable, these open source models from Chinese companies have another advantage over many of the better-known US products: they can be run locally without needing to pay any fees (a minimal sketch of what that looks like follows below). This also means no data leaves the local system, which offers enhanced privacy and control over sensitive business data. However, as the NBC article notes, there are still some worries about using Chinese models:

In late September, the U.S. Center for AI Standards and Innovation released a report outlining risks from DeepSeek’s popular models, finding weakened safety protocols and increased pro-Chinese outputs compared to American closed-source models.
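
Returning to the local-deployment point: here is a minimal sketch of running an open-weight model on your own machine, using the Hugging Face transformers library. The model identifier is an illustrative assumption (any open-weight checkpoint small enough for your hardware will do); the point is simply that nothing here touches a paid API or sends data anywhere.

```python
# Minimal sketch: running an open-weight model entirely locally.
# Requires: pip install transformers torch
# The model ID is an illustrative assumption; substitute any open-weight
# checkpoint that fits in memory. No API key, no per-token fees, and no
# prompt or output ever leaves this machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"  # small open-weight model (assumed ID)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Explain, in one sentence, what an open-weight model is."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation runs on local hardware; sensitive data stays put.
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```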

And the success of China’s open source models is prompting US efforts to catch up:

In July, the White House released an AI Action Plan that called for the federal government to “Encourage Open-Source and Open-Weight AI.”

In August, ChatGPT maker OpenAI released its first open-source model in five years. Announcing the model’s release, OpenAI cited the importance of American open-source models, writing that “broad access to these capable open-weights models created in the US helps expand democratic AI.”

And in late November, the Seattle-based Allen Institute released its newest open-source model called Olmo 3, designed to help users “build trustworthy features quickly, whether for research, education, or applications,” according to its launch announcement.

The open source approach to generative AI is evidently growing in importance, driven by enhanced capabilities, low price, customizability, reduced running costs and better privacy. The free availability of these open source and open weight models, whether from China or the US, is bound to call into question the underlying assumption of today’s generative AI companies that there will be a commensurate payback for the trillions of dollars they are currently investing. Maybe it will be the realization that today’s open source models are actually good enough for most applications that finally pops the AI bubble.

Follow me @glynmoody on Mastodon and on Bluesky.

Posted on BestNetTech - 9 December 2025 @ 07:54pm

Public AI, Built On Open Source, Is The Way Forward In The EU

A quarter of a century ago, I wrote a book called “Rebel Code”. It was the first – and is still the only – detailed history of the origins and rise of free software and open source, based on interviews with the gifted and generous hackers who took part. Back then, it was clear that open source represented a powerful alternative to the traditional proprietary approach to software development and distribution. But few could have predicted how completely open source would come to dominate computing. Alongside its role in running every aspect of the Internet, and powering most mobile phones in the form of Android, it has been embraced by startups for its unbeatable combination of power, reliability and low cost. It’s also a natural fit for cloud computing because of its ability to scale. It is no coincidence that for the last ten years, pretty much 100% of the world’s top 500 supercomputers have run an operating system based on the open source Linux kernel.

More recently, many leading AI systems have been released as open source. That raises the important question of what exactly “open source” means in the context of generative AI software, which involves much more than just code. The Open Source Initiative, which drew up the original definition of open source, has extended this work with its Open Source AI Definition. It is noteworthy that the EU has explicitly recognized the special role of open source in the field of AI. In the EU’s recent Artificial Intelligence Act, open source AI systems are exempt from the potentially onerous obligation to draw up a range of documentation that is generally required.

That could provide a major incentive for AI developers in the EU to take the open source route. European academic researchers working in this area are probably already doing that, not least for reasons of cost. Paul Keller points out in a blog post that another piece of EU legislation, the 2019 Copyright in the Digital Single Market Directive (CDSM), offers a further reason for research institutions to release their work as open source:

Article 3 of the CDSM Directive enables these institutions to text and data-mine all “works or other subject matter to which they have lawful access” for scientific research purposes. Text and data mining is understood to cover “any automated analytical technique aimed at analysing text and data in digital form in order to generate information, which includes but is not limited to patterns, trends and correlations,” which clearly covers the development of AI models (see here or, more recently, here).
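
For readers unfamiliar with the term, a toy example may help pin down how broad that definition is: any automated pass over text that generates information such as patterns, trends or correlations qualifies. Here is a minimal sketch (the corpus is invented for illustration):

```python
# Toy illustration of text and data mining in the Article 3 sense: an
# automated analytical technique applied to text in digital form in order
# to generate information (here, a simple frequency pattern). The corpus
# is invented for the example.
import re
from collections import Counter

corpus = [
    "Research institutions may mine works to which they have lawful access.",
    "Text and data mining surfaces patterns, trends and correlations.",
    "Mining text at scale is also how AI models learn from training data.",
]

term_counts = Counter()
for document in corpus:
    term_counts.update(re.findall(r"[a-z]+", document.lower()))

# The "information generated": the most frequent terms across the corpus.
print(term_counts.most_common(5))
```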

Keller’s post goes through the details of how that feeds into AI research, but the end-result is the following:

as long as the model is made available in line with the public-interest research missions of the organisations undertaking the training (for example, by releasing the model, including its weights, under an open-source licence) and is not commercialised by these organisations, this also does not affect the status of the reproductions and extractions made during the training process.

This means that Article 3 does cover the full model-development pathway (from data acquisition to model publication under an open source license) that most non-commercial Public AI model developers pursue.

As that indicates, the use of open source licensing is critical to this application of Article 3 of EU copyright legislation for the purpose of AI research.

What’s noteworthy here is how two different pieces of EU legislation, passed some years apart, work together to create a special category of open source AI systems that avoid most of the legal problems of training AI systems on copyright materials, as well as the bureaucratic overhead imposed by the EU AI Act on commercial systems. Keller calls these “public AI”, which he defines as:

AI systems that are built by organizations acting in the public interest and that focus on creating public value rather than extracting as much value from the information commons as possible.

Public AI systems are important for at least two reasons. First, their mission is to serve the public interest, rather than focusing on profit maximization. That’s obviously crucial at a time when today’s AI giants are intent on making as much money as possible, presumably in the hope that they can do so before the AI bubble bursts.

Secondly, public AI systems provide a way for the EU to compete with both US and Chinese AI companies – by not competing with them. It is naive to think that Europe can ever match the levels of venture capital investment that big-name US AI startups currently enjoy, or that the EU is prepared and able to support local industries for as long and as deeply as the Chinese government evidently plans to do for its home-grown AI firms. But public AI systems, which are fully open source, and which take advantage of the EU right of research institutions to carry out text and data mining, offer a uniquely European take on generative AI that might even make such systems acceptable to those who worry about how they are built, and how they are used.

Follow me @glynmoody on Mastodon and on Bluesky. Originally published to the Walled Culture blog.

Posted on BestNetTech - 19 November 2025 @ 01:35pm

Fans Of Open Access, Unite: You Have Nothing To Lose But Your Chained Libraries

When books were rare and extremely expensive, they were often chained to the bookcase to prevent people walking off with them, in what were known as “chained libraries”. Copyright serves a similar purpose today, even though, thanks to the miracle of perfect, zero-cost digital copies, it is possible simultaneously to take an ebook home and yet leave the original behind. For a quarter of a century, the open access movement has been fighting to break those virtual chains for academic works, and to allow anyone freely to read and make copies of the knowledge contained in online virtual libraries.

The detailed history of the movement can be found in Chapter 3 of Walled Culture the book (free digital versions available). As the timeline there and posts on this blog both make clear, the open access movement has made only limited progress despite the enormous effort expended by many dedicated individuals. Moreover, the open access idea has been embraced and then subverted by the academic publishers whose greed and selfishness it was meant to fight.

One version of open access, known as the “diamond” variant, still offers hope that the goal of free access to knowledge for everyone could be achieved. But even this minimalist approach to academic publishing requires funding, which raises questions about its long-term sustainability. Economic issues also lie at the heart of wider discussions about what could replace copyright, which was born in the analogue world, and whose dysfunctional nature in the digital environment is evident every day.

Walled Culture the book concludes with a look at perhaps the most promising alternative model, whereby “true fans” directly support the creators whose work they value. This approach can also be applied to open access. In this case, the “true fans” of the research work published in papers and books are the academic libraries, acting on behalf of the people who use them. There are various ways for them to support the journals their academics want to access, but one of the most promising is “subscribe to open” (S2O), which helps publishers convert traditional journals into open access. The idea was formalized by Raym Crow, Richard Gallagher, and Kamran Naim in 2019. Here’s their explanation of how it works:

S2O offers a journal’s current subscribers continued access at a discount off the regular subscription price. If current subscribers participate in the S2O offer, the publisher opens the content covered by that year’s subscription. If participation is not sufficient – for example, if some subscribers delay renewing in the expectation that they can gain access without participating – then the content remains gated. Because the publisher does not guarantee that the content will be opened unless all subscribers participate in the offer, institutions that value access to the content – demonstrably, the journal’s current subscribers – must either subscribe conventionally (at full price) or participate in S2O (at a discount) to ensure continued access. The offer is repeated every year, with the opening of each year’s content contingent on sufficient participation.
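
The yearly mechanism is simple enough to capture in a few lines of code. Here is a minimal sketch of a single S2O round; the prices, discount, and participation threshold are all invented for illustration, since real offers define “sufficient participation” on their own terms:

```python
# Minimal sketch of one yearly "subscribe to open" round. All figures
# are invented for illustration; real S2O offers set their own terms.
FULL_PRICE = 1000.0   # conventional subscription price
DISCOUNT = 0.05       # assumed discount for joining the S2O offer
THRESHOLD = 1.0       # assumed: every current subscriber must take part

def run_s2o_round(participates: dict[str, bool]) -> bool:
    """Run one year's offer; return True if the content is opened."""
    rate = sum(participates.values()) / len(participates)
    opened = rate >= THRESHOLD
    for library, joined in participates.items():
        # Participants pay the discounted price; holdouts who still want
        # guaranteed access must subscribe conventionally at full price.
        price = FULL_PRICE * (1 - DISCOUNT) if joined else FULL_PRICE
        print(f"{library} pays {price:.0f}")
    print("content opened to all" if opened else "content stays gated")
    return opened

# The offer repeats every year, with each year's opening contingent on
# that year's participation.
run_s2o_round({"Library A": True, "Library B": True, "Library C": True})
```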

As with the “true fans” model, supporting S2O journals is in the self-interest of libraries, which receive subscriptions to journals their academics want, and for a lower price. But there is a collateral benefit for society, because everyone else also receives access to the knowledge contained in those titles. Publishers receive a guaranteed subscription income up front, and as a consequence of the open access route, they can also reach a larger audience. For example, when the Annual Review of Public Health tried out the S2O model, its monthly usage went up by a factor of eight. Since that successful trial, the S2O model has gone from strength to strength, as a review article published at the end of last year explains:

As of 2024, thanks to the Subscribe to Open model, over 180 journals have been able to publish entire volumes in open access, which would never have been possible otherwise because of the shortcomings of the [article processing charge] models for these journals and their respective disciplines. The S2O model continues to grow, with more publishers set to launch their S2O offerings in 2025.

In August, the prestigious Royal Society announced that it would be moving eight of its subscription journals to S2O. Among those titles is Philosophical Transactions of the Royal Society, the world’s longest-running scientific journal. In an article reflecting on that move, Rod Cookson, publishing director of The Royal Society, explained why he and other forward-thinking publishers are fans of S2O:

It is cost neutral and a relatively small change through which libraries can enable entire journals to become open access. This combination of simplicity and transparency has generated enthusiasm for S2O among librarians the world over. Publishers now need to demonstrate to those librarians that in addition to being aligned with their missions, S2O delivers a return on investment that justifies their expenditure. With sensible features that make the S2O proposition work well for both libraries and publishing houses—like multi-year agreements, “premium benefits” for S2O supporters, and collective sales packages—S2O will continue to grow as a trusted and durable model for delivering open access.

S2O represents a successful application of the true fans idea in the context of academic publishing. But perhaps supporters of open access should embrace even more of the true fan spirit and look to the example of fan fiction to help re-imagine scholarly publishing. That, at least, is the bold idea of Caroline Ball, who is the community engagement lead for the Open Book Collective, and whose advocacy work appeared in Walled Culture four years ago. Here’s why she thinks academic research should be more like fan fiction:

At first glance, fanfiction—non-commercial works created by fans who reimagine and remix existing stories, characters, and worlds—and academic research may seem worlds apart. But look closer, and both are practices of deep engagement, intertextual interpretation, and knowledge creation.

Fanfiction doesn’t just regurgitate stories; it interrogates, reinvents, and expands on them, often filling in gaps and exclusions left offscreen. Likewise, scholarship builds on prior work, challenges assumptions, and contributes new insights. Both are iterative, dialogic, and community based. And both, at their best, come from a place of passion and curiosity.

Her post explains how Archive of Our Own (AO3), a community-run digital repository for fan fiction, works, and why it could be a model for a new kind of open access:

Archive of Our Own (AO3) is a community-run digital repository for fanfiction. Launched in 2008 by the nonprofit Organization for Transformative Works (OTW), AO3 is entirely open access. It charges nothing to publish, nothing to read, and is powered by open-source code and volunteer labor. As of May 2025 (according to the OTW Communications Committee), it hosts over 15 million works across 71,880 fandoms and sees a daily average of 94 million hits.

Ball goes on to suggest ways in which scholarly publishing could learn from that evident success. Specific areas include AO3’s flexible metadata system; its innovative approach to reviews and comments; its “format agnosticism”, accepting any kind of contribution; and the way it re-imagines recognition and reputation. In summary, she writes:

AO3 reminds us that platforms can be built by and for communities, without extractive profit models or exclusionary hierarchies. It shows what’s possible when infrastructure is treated as a public good, and when participation is scaffolded, not gated. And crucially, AO3 demonstrates how practices that have been piloted in isolation across the scholarly landscape—open peer commentary, volunteer governance, flexible metadata, inclusive formats—can be woven together into a single, sustainable system.

The S2O model described above is a welcome addition to the ways in which sustainable open access can be brought in by publishers. But ultimately Ball is right in emphasizing that universal and unconstrained access to knowledge will only be achieved when the entire scholarly publishing system is re-invented with that goal in mind. It’s well past time for all the fans of open access to unite in this endeavor, and to do away with today’s digital chained libraries forever.

Follow me @glynmoody on Mastodon and on Bluesky. Originally published to Walled Culture.

Posted on BestNetTech - 14 November 2025 @ 11:56am

Copyright Is The Wrong Tool To Deal With Deepfake Harms

A key theme of Walled Culture the book (free digital versions available) is that copyright, born in an analogue age of scarcity, works poorly in today’s digital world of abundance. One manifestation of that is how lawmakers struggle to adapt the existing copyright rules to deal with novel technological developments, like the new generation of AI technologies. The EU’s AI Act marks a major step in regulating artificial intelligence, but it touches on copyright only briefly, leaving many copyright-related questions still open. The process of aligning national copyright laws with the AI Act provides an opportunity for EU Member States to flesh out some of the details, and that is what Italy has done with its new “Disposizioni e deleghe al Governo in materia di intelligenza artificiale” (Provisions and delegations to the Government regarding artificial intelligence). The Communia blog explains the two main provisions. The first specifies that only works of human creativity are eligible for protection under Italian copyright law:

It codifies a crucial principle: while AI can be a tool in the creative process, copyright protection remains reserved for human-generated intellectual effort. This positions Italian law in alignment with the broader international trend, seen in the EU, U.S., and UK, of rejecting full legal authorship rights for non-human agents such as AI systems. In practice, this means that works solely generated by AI without significant human input will likely fall outside the scope of copyright protection.

The second provision deals with the legality of text and data mining (TDM) activities used in the training of AI models:

This provision essentially reaffirms that text and data mining (TDM) is permitted under certain conditions, namely where access to the source materials is lawful and the activity complies with the existing TDM exceptions under EU copyright law.

The Italian AI law is about clarifying existing copyright law to deal with issues raised by AI. But some EU countries want to go much further in their response to generative AI, and bring in an entirely new kind of copyright. Both Denmark and the Netherlands are proposing to give people the copyright to their body, facial features, and voice. The move is intended as a response to the rising number of AI-generated deepfakes, where aspects such as someone’s face, body and voice are used without their permission, often for questionable purposes, and sometimes for criminal ones. There are good reasons for tackling deepfakes, as noted in an excellent commentary by P. Bernt Hugenholtz regarding the proposed Danish and Dutch laws:

Fake porn and other deepfake content is causing serious, and sometimes irreversible, harm to a person’s integrity and reputation. Fake audio or video content might deceive or mislead audiences and consumers, poison the public sphere, induce hatred, manipulate political discourse and undermine trust in science, journalism, and the public media. Like misinformation more generally, deepfakes pose a threat to our increasingly fragile democracies.

The problem is not that new laws are being brought in, but that the Danish and Dutch governments are proposing to use the wrong legal framework – copyright – to do so:

If concerns over privacy and reputation are the main reasons for regulating deepfakes, any new rules should be grounded in the law of privacy. If preserving trust in the media or safeguarding democracy are the dominant concerns, deepfakes ought to be addressed in media regulation or election laws. The Danish and Dutch bills address and alleviate none of these concerns.

It’s a classic example of copyright maximalism, where wider and stronger copyright laws are seen as the solution to everything. As well as being a poor fit for the problem, taking this approach would bring with it a real harm:

both deepfake bills conceive the new right to control deepfakes as a marketable, exploitable right, subject to monetization by way of licensing.

The message both bills convey is not that deepfakes are taboo, but that deepfakes amount to a new licensing opportunity.

In other words, the copyright maximalist approach makes everything about money, not morals. Ironically, taking such an approach would weaken copyright itself, as Communia’s submission to the Danish consultation on the deepfake proposal explains:

the proposal risks undermining the coherence of copyright law itself by introducing doctrinal inconsistencies. Copyright protects original expressive works, not a person’s indicia of personal identity, such as their image, voice or other physical characteristics. It is awarded for a limited duration in order to incentivise the creation of new works, and the existing corpus of limitations and exceptions has been designed with this premise in mind. Extending copyright to subject matter of an entirely different nature, for which marketisation is not an intended objective, will inevitably create legal uncertainty.

Communia points out a further reason not to take the copyright route for protecting people against deepfakes. The Danish bill would grant performing artists a new and wide-ranging copyright in their performances that would have a negative impact on the public domain:

the proposed extension of protection to subject matter that does not constitute a performance of an artistic or literary work raises significant concerns as to scope and proportionality. The introduction of a new exclusive right with such a wide scope would unduly restrict the Public Domain, interfering with the lawful access and reuse of subject matter that is currently out-of-copyright and that should remain as such, in the absence of clear economic evidence that such expansion is needed.

Moreover, as Communia notes:

The recitals of the draft [Danish] bill themselves acknowledge that multiple legal bases for acting against deepfakes already exist, including within criminal law. If individuals face difficulties in asserting their rights under the current framework, the appropriate course of action would be for the legislator to clarify the existing legal position. Introducing an additional and conceptually flawed layer of protection risks creating confusion and may ultimately prove counterproductive.

There’s no doubt that the harms caused by AI-generated deepfakes need tackling. The situation is made worse by advanced AI apps explicitly designed to make deepfake generation as easy as possible, such as OpenAI’s Sora, which are currently entering the market. But introducing a new kind of copyright is the wrong way to do it.

Follow me @glynmoody on Mastodon and on Bluesky. Originally published to Walled Culture.

Posted on BestNetTech - 27 October 2025 @ 03:20pm

Are Web Browsers With Integrated Chatbots A Paradigm Shift – Or Just Privacy And Security Disasters Waiting To Happen?

In a further sign of where the generative AI world is heading, OpenAI has launched ChatGPT Atlas, “a new web browser built with ChatGPT at its core.” It’s not the first to do something like this: earlier browsers incorporating varying degrees of AI include Microsoft Edge (with Copilot), Opera (with Aria), Brave (with Leo), The Browser Company’s Dia, Perplexity’s Comet, and Google’s Gemini in Chrome. Aside from a desire to jump on the genAI bandwagon, a key reason for this sudden flowering of browsers with built-in chatbots is summarized by Sam Altman in the video introducing ChatGPT Atlas. Right at the beginning, Altman says:

We think that AI represents a rare once a decade opportunity to rethink what a browser can be about and how to use one, and how to most productively and pleasantly use the Web.

AI is a disruptive force that could allow new sectoral leaders to emerge in the digital world, and the browser is clearly a key market. Chatbots are already popular as an alternative way to search for and access information, so it makes sense to embrace that by fully integrating them into the browser. Moreover, as OpenAI writes in its post about Atlas: “your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals.” The intent to supplant Google’s browser at the heart of the digital world is clear.

Given its leading role in AI, OpenAI’s offering is of particular interest as a guide to how this new kind of browser might work and be used. There are two main elements to Atlas. One is “browser memories”:

If you turn on browser memories, ChatGPT will remember key details from content you browse to improve chat responses and offer smarter suggestions—like creating a to-do list from your recent activity or continuing to research holiday gifts based on products you’ve viewed.

Browser memories are private to your ChatGPT account and under your control. You can view them all in settings, archive ones that are no longer relevant, and clear your browsing history to delete them. Even when browser memories are on, you can decide which sites ChatGPT can or can’t see using the toggle in the address bar. When visibility is off, ChatGPT can’t view the page content, and no memories are created from it.

Browser memories are potentially a privacy nightmare, since they can hold all kinds of sensitive information about users — and their browsing habits. OpenAI is clearly aware of this, hence the numerous options to control exactly what is remembered. The problem is that many users can’t be bothered making privacy-preserving tweaks to how they browse. Browser memories could certainly make online activities easier and more efficient, which is likely to encourage people to turn them on without much thought for possible consequences later on. The same is true of the other important optional feature of Atlas: agent mode.

In agent mode, ChatGPT can complete end to end tasks for you like researching a meal plan, making a list of ingredients, and adding the groceries to a shopping cart ready for delivery. You’re always in control: ChatGPT is trained to ask before taking many important actions, and you can pause, interrupt, or take over the browser at any time.

Once again, OpenAI is aware of the risks such a powerful agent mode brings with it, and has tried to minimize these in the following ways:

It cannot run code in the browser, download files, or install extensions

It cannot access other apps on your computer or file system

It will pause to ensure you’re watching it take actions on specific sensitive sites such as financial institutions

You can use agent in logged out mode to limit its access to sensitive data and the risk of it taking actions as you on websites

Even so, the company emphasizes bad stuff can still happen:

Besides simply making mistakes when acting on your behalf, agents are susceptible to hidden malicious instructions, which may be hidden in places such as a webpage or email with the intention that the instructions override ChatGPT agent’s intended behavior. This could lead to stealing data from sites you’re logged into or taking actions you didn’t intend.

Someone who is still skeptical about this new kind of browser is AI expert Simon Willison. Writing on his blog about OpenAI Atlas, Willison says:

The security and privacy risks involved here still feel insurmountably high to me – I certainly won’t be trusting any of these products until a bunch of security researchers have given them a very thorough beating.

Web browsers with chatbots built in are an interesting development, and may represent a paradigm shift for working online. Done properly, their utility could range from handy to life-changing. But the danger is that FOMO and pressure from investors will cause companies to rush the release of products in this sector, before they are really safe for ordinary users to deploy with real, deeply private information, and with agent access to critically important online accounts — and real money.

Follow me @glynmoody on Mastodon and on Bluesky.

Posted on BestNetTech - 14 October 2025 @ 07:58pm

Research: Italy’s Piracy Shield Is Just As Big A Disaster As Everyone Predicted

Walled Culture first wrote about Piracy Shield, Italy’s automated system for tackling alleged copyright infringement in the streaming sector, two years ago. Since then, we have written about the serious problems that soon emerged. But instead of fixing those issues, the government body that runs the scheme, Italy’s AGCOM (the Italian Authority for Communications Guarantees), has extended it. The problems may be evident, but they have not been systematically studied, until now: a peer-reviewed study from a group of (mostly Italian) researchers has just been published as a preprint (found via TorrentFreak). It’s particularly welcome as perhaps the first rigorous analysis of Piracy Shield and its flaws.

The paper begins with a good introduction to the general area of IP and DNS blocking, also discussed in Walled Culture the book (free digital versions available), before detailing the history of Piracy Shield. As the paper notes, one of the major concerns about the system is the lack of transparency: AGCOM does not publish a list of IP addresses or domain names that are subject to its blocking. That not only makes it extremely difficult to correct mistakes, it also – conveniently – hides those mistakes, as well as the scope and impact of Piracy Shield. To get around this lack of transparency, the researchers had to resort to a dataset leaked on GitHub, which contained 10,918 IPv4 addresses and 42,664 domain names (more precisely, the latter were “fully qualified domain names” – FQDN) that had been blocked. As good academics, the researchers naturally verified the dataset as best they could:

While this dataset may not be exhaustive … it nonetheless provides a conservative lower-bound estimate of the platform’s blocking activity, which serves as the foundation for the subsequent analyses.

Much of the paper is devoted to the detailed methodology. One important result is that many of the blocked IP addresses belonged to leased IP address space. As the researchers explain:

This suggests that illegal streamers may attempt to exploit leased address space more intensively, even if just indirectly, by obtaining them by hosting companies that leases them, leading to more potential collateral damages for new lessees.

This particular collateral damage arises from the fact that even after the leased IP address is released by those who were using it for allegedly unauthorized streaming, it is still blocked on the Piracy Shield system. That means whoever is allocated that leased IP address subsequently is also blocked by AGCOM, but is probably unaware of that fact, because of the opaque nature of the blocking process. More generally, collateral damage arose from the wrongful blocking of a wide range of completely legitimate sites:

During our classification process, we observed a wide range of website types across these collaterally affected domains, including personal branding pages, company profiles, and websites for hotels and restaurants. One notable case involves 19 Albanian websites hosted on a single IP address assigned to WIIT Cloud. These sites are still unreachable from Italy.

Italian sites were also hit, including a car mechanic, several retail shops, an accountant, a telehealth missionary program – and a nunnery. More amusingly, the researchers write:

we found a case of collateral damage involving a Google IP. Closer inspection revealed the IP was used by Telecom Italia to serve a blocking page for FQDNs filtered by Piracy Shield. Although later removed from the blocklist, this case suggests that collateral damage may have affected the blocking infrastructure itself.
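
The basic check behind findings like these is straightforward to sketch: take the set of blocked IP addresses and test whether known-legitimate domains currently resolve into it. The sketch below uses invented addresses and domain names; the researchers’ actual methodology, described in the paper, is far more extensive:

```python
# Minimal sketch of a collateral-damage check: flag legitimate domains
# whose current A record falls inside a blocked IP set. All addresses
# and domain names are invented stand-ins for the leaked dataset.
import socket

blocked_ips = {"203.0.113.7", "198.51.100.42"}

legitimate_domains = ["hotel-example.it", "accountant-example.it"]

for domain in legitimate_domains:
    try:
        ip = socket.gethostbyname(domain)  # resolve from this vantage point
    except socket.gaierror:
        print(f"{domain}: does not resolve")
        continue
    if ip in blocked_ips:
        print(f"{domain}: collateral damage (resolves to blocked {ip})")
    else:
        print(f"{domain}: unaffected ({ip})")
```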

The academics summarize their work as follows:

Our results on the collateral damages of IP and FQDN blocking highlight a worrisome scenario, with hundreds of legitimate websites unknowingly affected by blocking, unknown operators experiencing service disruption, and illegal streamers continuing to evade enforcement by exploiting the abundance of address space online, leaving behind unusable and polluted address ranges. Still, our findings represent a conservative lower-bound estimate.

The paper distinguishes three ways in which Piracy Shield is harmful. Economically, because it disrupts legitimate businesses; technically, because it blocks shared infrastructure such as content delivery networks, while “polluting the IP address space” for future, unsuspecting users; and operationally, because it imposes a “growing, uncompensated burden on Italian ISPs forced to implement an expanding list of permanent blocks.” It concludes with some practical suggestions for improving a system that is clearly not fit for purpose, and poses a threat to national security, as discussed previously on Walled Culture. The researchers suggest that:

widespread and difficult-to-predict collateral damage suggests that IP-level blocking is an indiscriminate tool with consequences that outweigh its benefits and should not be used.

Instead, they point out that there are other legal pathways that can be pursued, since many of the allegedly infringing streams originate within the EU. If FQDN blocking is used, it should be regarded as “a last resort in tightly constrained time windows, i.e., only for the duration of the live event.” Crucially, more transparency is needed from AGCOM:

To mitigate damages, resource owners must be immediately notified when their assets are blocked, and a clear, fast unblocking mechanism must be in place.

This is an important piece of work, because it places criticisms of Piracy Shield on a firm footing, with rigorous analysis of the facts. However, AGCOM is unlikely to pay attention, since it is in the process of expanding Piracy Shield to apply to vast swathes of online streaming: amendments to the relevant law mean that automatic blocks can now be applied to film premieres, and even run-of-the-mill TV shows. Based on its past behavior, the copyright industry may well push to extend Piracy Shield to static Web material too, on the basis that the blocking infrastructure is already in place, so why not use it for every kind of material?

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Posted on BestNetTech - 7 October 2025 @ 10:57am

Google’s Requirement For All Android Developers To Register And Be Verified Threatens To Close Down Open Source App Store F-Droid

It would be something of an understatement to say that Alphabet, Google’s holding company, is big and successful. Some Wall Street analysts are even predicting it could become the world’s most valuable corporation. Of course, even for business giants, enough is never enough. They always want more: more money, more power. As part of that tendency, Google seems to have decided that F-Droid, the free and open source app store for the Android platform, is a threat to the official Google Play Store that needs to be neutralized. At least that is likely to be the effect of Google’s announcement that it will require all Android developers to register and be verified before their apps can be allowed to run on certified Android devices. A post on the F-Droid blog explains what the problem is:

In addition to demanding payment of a registration fee and agreement to their (non-negotiable and ever-changing) terms and conditions, Google will also require the uploading of personally identifying documents, including government ID, by the authors of the software, as well as enumerating all the unique “application identifiers” for every app that is to be distributed by the registered developer.

According to the blog post, the impact on the F-Droid project would be severe:

the developer registration decree will end the F-Droid project and other free/open-source app distribution sources as we know them today, and the world will be deprived of the safety and security of the catalog of thousands of apps that can be trusted and verified by any and all. F-Droid’s myriad users will be left adrift, with no means to install — or even update their existing installed — applications.

Google says registration is needed to “better protect users from repeat bad actors spreading malware and scams”. Registration “creates crucial accountability, making it much harder for malicious actors to quickly distribute another harmful app after we take the first one down.” Slightly less convenient, perhaps, but not much harder. The F-Droid blog post points out that its open source app store already has a far better approach to security than Google’s proposed registration and verification:

every [F-Droid] app is free and open source, the code can be audited by anyone, the build process and logs are public, and reproducible builds ensure that what is published matches the source code exactly. This transparency and accountability provides a stronger basis for trust than closed platforms, while still giving users freedom to choose. Restricting direct app installation not only undermines that choice, it also erodes the diversity and resilience of the open-source ecosystem by consolidating control in the hands of a few corporate players.
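
That last point is checkable by anyone, which is what makes reproducible builds a trust mechanism rather than a promise: rebuild the app from the audited source, then compare cryptographic digests with the binary that was actually published. A minimal sketch of that comparison, with illustrative file paths:

```python
# Minimal sketch of the verification reproducible builds enable: if the
# APK built locally from the audited source hashes to the same digest as
# the published APK, the two are byte-for-byte identical. File paths are
# illustrative.
import hashlib

def sha256sum(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

local_build = sha256sum("app-built-from-source.apk")
published = sha256sum("app-as-published.apk")

print("verified: published binary matches the source build"
      if local_build == published
      else "mismatch: published binary does NOT match the source build")
```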

Google is at pains to emphasize that “Verified developers will have the same freedom to distribute their apps directly to users through sideloading or through any app store they prefer.” But that’s not true: their “freedom” will soon be conditional, subject to Google’s whim and veto (as the company’s recent removal of the ICE-spotting app ‘Red Dot’ demonstrates). As a special concession, the company says:

we are also introducing a free developer account type that will allow teachers, students, and hobbyists to distribute apps to a limited number of devices without needing to provide a government ID.

But again that is subject to Google’s approval, and only allows distribution to a “limited number of devices” – a circumscribed “freedom”, in other words. And for F-Droid it’s not even an option, because of the following:

How many F-Droid users are there, exactly? We don’t know, because we don’t track users or have any registration: “No user accounts, by design”

As the F-Droid post comments, Google’s move is not credibly about “security”, but actually about “consolidating power and tightening control over a formerly open ecosystem”:

If you own a computer, you should have the right to run whatever programs you want on it. This is just as true with the apps on your Android/iPhone mobile device as it is with the applications on your Linux/Mac/Windows desktop or server. Forcing software creators into a centralized registration scheme in order to publish and distribute their works is as egregious as forcing writers and artists to register with a central authority in order to be able to distribute their creative works. It is an offense to the core principles of free speech and thought that are central to the workings of democratic societies around the world.

Google’s attack on F-Droid is ironic. At the heart of Android, and the key element that allowed it to become so successful so quickly, is the GPL-licensed Linux kernel. Over the years, Google has increased its control over Android by adding more non-free elements. If, as seems likely, its latest move leads to the shutdown of the 15-year-old F-Droid platform, it would represent a further betrayal of the open source world it once supported.

Posted on BestNetTech - 23 September 2025 @ 08:01pm

Anthropic’s AI Lawsuit Settlement May Not Go Through, But It Exposes A Truth About Copyright

The latest generation of AI systems, based on large language models (LLMs), is perceived as the biggest threat in decades to the established copyright order. The scale of that threat can be gauged by the flurry of AI lawsuits that publishers and others have launched against generative AI companies. Since the first of these, reported here on Walled Culture back in January 2023, there have been dozens of others, catalogued on Wikipedia, and represented visually on the Chat GPT is Eating the World site. One is against Anthropic. Three authors alleged in a class-action lawsuit that the company had used unauthorized copies of their works to train its AI-powered chatbot, Claude:

Anthropic has built a multibillion-dollar business by stealing hundreds of thousands of copyrighted books. Rather than obtaining permission and paying a fair price for the creations it exploits, Anthropic pirated them.

In June of this year, Anthropic won a partial victory. The federal judge considering the case ruled that the training of the company’s system on legally purchased copies of books was fair use, and did not need the authors’ permission. However, Judge Alsup also ruled that Anthropic should face trial for downloading millions of books from sites such as Library Genesis (LibGen) and the Pirate Library Mirror (PiLiMi), both of which held unauthorized copies of works. The potential penalty was huge. Under US law, the company might have to pay damages of up to $150,000 per work. With millions of books allegedly downloaded from the online sites, that could amount to many billions of dollars, even a trillion dollars. Faced with certain ruin if such a penalty were handed down, Anthropic had a strong incentive to settle out of court. On 5 September, the parties proposed just such a settlement. The New York Times had the following summary:

In a landmark settlement, Anthropic, a leading artificial intelligence company, has agreed to pay $1.5 billion to a group of authors and publishers after a judge ruled it had illegally downloaded and stored millions of copyrighted books.

The settlement is the largest payout in the history of U.S. copyright cases. Anthropic will pay $3,000 per work to 500,000 authors.

The agreement is a turning point in a continuing battle between A.I. companies and copyright holders that spans more than 40 lawsuits across the country. Experts say the agreement could pave the way for more tech companies to pay rights holders through court decisions and settlements or through licensing fees.

Some saw the $3,000 per work figure as setting a benchmark for future deals that other AI companies would need to follow in order to settle similar lawsuits (although a settlement would not set a legal precedent). Music publishers were hopeful they could point to the settlement with writers in order to win a similar deal for musicians. Others worried that the overall size of the settlement – $1.5 billion – meant that only the largest companies could afford to pay such sums, shutting out smaller startups and limiting competition in this nascent market. Indeed, big as the $1.5 billion settlement was, it paled in comparison to the $13 billion that Anthropic has recently raised, to say nothing of its nominal $183 billion valuation. But a post by Dave Hansen on the Authors Alliance blog puts all these breathless predictions and impressive numbers into perspective. For example, he points out:

The settlement isn’t a settlement with “authors.” Or at least not just authors. The moment Judge Alsup defined and certified the class in this case to include any rightsholder with an interest in the exclusive copyright right of reproduction in a LibGen/PiLiMi book downloaded by Anthropic, this case became at least as important for publishers as authors.

Crucially, that means only a portion of that $1.5 billion would go to the actual authors. Some of it would go to the usual suspects: the plaintiff’s lawyers. But there are other costs that must be covered too, and Hansen writes: “it’s easy to see that about a quarter to a third of this settlement is being used up before rightsholders see anything.” And then there is the question of who exactly those “rightsholders” are: the writers or the publishers? Probably both in many cases, with a variable split depending on the contract they signed.

Even before those complex questions are addressed, there is a huge assumption that the proposed settlement will go through in its present form. That’s by no means assured. As Bloomberg Law reported, Judge Alsup said he was worried that lawyers were striking a deal behind the scenes that will be forced “down the throat of authors,” and that the agreement is “nowhere close to complete.”

Judge William Alsup at the hearing said the motion to approve the deal was denied without prejudice, but in a minute order after the hearing said approval is postponed pending submission of further clarifying information.

During the first hearing since the deal was announced on Sept. 5, Alsup said he felt “misled” and needs to see more information about the claim process for class members.

Another important point underlined by Dave Hansen on the Authors Alliance blog is that even if the settlement goes through, it doesn’t really help to resolve any of the larger copyright issues raised by the new LLMs:

The settlement isn’t far-reaching. While the payment is record-setting for a copyright class action ($1.5 billion), the settlement terms are pretty narrow in scope. Anthropic simply gets a release from liability for past conduct – namely, use of the LibGen and PiLiMi datasets. It is therefore unlike the proposed settlement in the Google Books Settlement that would have created a novel licensing scheme for a wide variety of future uses.

The Google Books Settlement is discussed in Walled Culture the book (free digital versions available), as is another notable moment in copyright history. This concerns the fate of Jammie Thomas, a single mother of two. In 2007, she was found liable for $222,000 in damages for sharing twenty-four songs on the P2P service Kazaa. The judge, ordering a new trial for Thomas, called “the award of hundreds of thousands of dollars in damages unprecedented and oppressive”, and took the opportunity to “implore Congress to amend the Copyright Act to address liability and damages in peer-to-peer network cases such as the one currently before this Court.” On retrial, Thomas was found liable for even more: $1.92 million.

It is instructive to compare that $1.92 million in damages for sharing 24 songs – $80,000 per work – with the $3,000 per work that Anthropic is now offering to pay. This confirms once more that when it comes to copyright and its enforcement, there is one law for the rich corporations, and another law for the rest of us.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Posted on BestNetTech - 12 September 2025 @ 12:12pm

After 30+ Deaths In Protests Triggered By Nepal’s Social Media Ban, 145,000 People Debate The Country’s Future In Discord Chatroom

The Himalayan nation of Nepal has featured only rarely on BestNetTech. The first time was back in 2003, with a story about an early Internet user there. According to the post, he would spend five hours walking down the mountain to the main road, and then another four hours on a bus to get to the nearest town that had an Internet connection he could use. As a recent Ctrl-Alt-Speech podcast explained, Nepal’s digital society has moved on a long way since then, with massive street protests in the country’s capital, Kathmandu, triggered by a government order banning 26 social media platforms, later rescinded. Those protests turned violent, leaving more than 30 people dead in clashes with the police, key government buildings in flames, and the prime minister ousted. Although the attempt to block the main social media platforms for their failure to submit to governmental registration — and thus control — may have been the final spark that ignited the violence, the underlying causes lie deeper, as NPR explains:

Frustrations have been mounting among young people in Nepal over the country’s unemployment and wealth gap. According to the Nepal Living Standard Survey 2022-23, published by the government, the country’s unemployment rate was 12.6%.

Leading up to the protests, the hashtag #NepoBaby had been trending in the country, largely to criticize the extravagant lifestyles of local politicians’ children and call out corruption, NPR previously reported.

The use of popular digital platforms to criticize the government in this way was probably a key reason for the authorities’ botched clampdown on social media, which in turn led to the large-scale protests and ensuing chaos. And now another popular digital platform is being used in an attempt to find a way to move forward:

After the government’s collapse on Tuesday, the military imposed a curfew across the capital, Kathmandu, and restricted large gatherings. With the country in political limbo and no obvious next leader in place, Nepalis have taken to Discord, a platform popularized by video gamers, to enact the digital version of a national convention.

As one person participating in the discussions told the New York Times: “The Parliament of Nepal right now is Discord.” It is a parliament like no other: in just a few days, more than 145,000 people have joined a Discord server to discuss who should lead the country, at least for the moment:

The channel’s organizers are members of Hami Nepal, a civic organization, and many of those participating in the chat are the so-called Gen-Z activists who led this week’s protests. But since the prime minister’s abrupt resignation on Tuesday, power in Nepal effectively resides with the military. The army’s chiefs, who most likely will decide who next leads the country, have met with the channel’s organizers and asked them to put forth a potential nominee for interim leader.

Whether this unprecedented experiment in large-scale digital politics succeeds in bringing order and stability to Nepal remains to be seen. But it is certainly extraordinary to watch history being made as, once more, the online world rapidly and profoundly reshapes the offline world.

Follow me @glynmoody on Mastodon and on Bluesky.
