Joe Mullin's BestNetTech Profile

Posted on BestNetTech - 30 January 2026 @ 03:28pm

Search Engines, AI, And The Long Fight Over Fair Use

Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines—tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued.

Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use—and a necessary condition for a free and open internet.

Today, the same argument is being recycled against AI. The question, once again, is whether copyright owners should be allowed to control how others analyze, reuse, and build on existing works.

Fair Use Protects Analysis—Even When It’s Automated

U.S. courts have long recognized that copying for purposes of analysis, indexing, and learning is a classic fair use. That principle didn’t originate with artificial intelligence. It doesn’t disappear just because the processes are performed by a machine.

Copying works in order to understand them, extract information from them, or make them searchable is transformative and lawful. That’s why search engines can index the web, libraries can make digital indexes, and researchers can analyze large collections of text and data without negotiating licenses from millions of rightsholders. These uses don’t substitute for the original works; they enable new forms of knowledge and expression.

Training AI models fits squarely within that tradition. An AI system learns by analyzing patterns across many works. The purpose of that copying is not to reproduce or replace the original texts, but to extract statistical relationships that allow the AI system to generate new outputs. That is the hallmark of a transformative use. 
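
To make the point concrete, here is a deliberately tiny sketch of our own (not any actual AI system, and not code from the original post) showing what extracting statistical relationships can look like: a toy bigram model keeps only word-pair counts from a hypothetical corpus, then generates new text by sampling from those counts rather than reproducing the originals.

    # Toy illustration only: a bigram model retains word-pair statistics, not copies.
    import random
    from collections import Counter, defaultdict

    corpus = [  # hypothetical stand-in for a training set
        "the court found the use transformative",
        "the court weighed the market harm",
    ]

    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1  # only statistical relationships are stored

    def generate(start, length=6):
        word, output = start, [start]
        for _ in range(length):
            if word not in counts:
                break
            choices, weights = zip(*counts[word].items())
            word = random.choices(choices, weights=weights)[0]
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # emits new word sequences drawn from the learned statistics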

Attacking AI training on copyright grounds misunderstands what’s at stake. If copyright law is expanded to require permission for analyzing or learning from existing works, the damage won’t be limited to generative AI tools. It could threaten long-standing practices in machine learning and text-and-data mining that underpin research in science, medicine, and technology. 

Researchers already rely on fair use to analyze massive datasets such as scientific literature. Requiring licenses for these uses would often be impractical or impossible, and it would advantage only the largest companies with the money to negotiate blanket deals. Fair use exists to prevent copyright from becoming a barrier to understanding the world. The law has protected learning before. It should continue to do so now, even when that learning is automated. 

A Road Forward For AI Training And Fair Use 

One court has already shown how these cases should be analyzed. In Bartz v. Anthropic, the court found that using copyrighted works to train an AI model is a highly transformative use. Training is a way of studying how language works—not a way of reproducing or supplanting the original books. Any harm to the market for the original works was speculative.

The court in Bartz rejected the idea that an AI model might infringe because, in some abstract sense, its output competes with existing works. While EFF disagrees with other parts of the decision, the court’s ruling on AI training and fair use offers a good approach. Courts should focus on whether training is transformative and non-substitutive, not on fear-based speculation about how a new tool could affect someone’s market share. 

AI Can Create Problems, But Expanding Copyright Is the Wrong Fix 

Workers’ concerns about automation and displacement are real and should not be ignored. But copyright is the wrong tool to address them. Managing economic transitions and protecting workers during turbulent times may be core functions of government, but copyright law doesn’t help with that task in the slightest. Expanding copyright control over learning and analysis won’t stop new forms of worker automation—it never has. But it will distort copyright law and undermine free expression. 

Broad licensing mandates may also do harm by entrenching today’s biggest incumbents. Only the largest tech firms can afford to negotiate massive licensing deals covering millions of works. Smaller developers, research teams, nonprofits, and open-source projects will all get locked out. Copyright expansion won’t restrain Big Tech—it will give it a new advantage.

Fair Use Still Matters

Learning from prior work is foundational to free expression. Rightsholders cannot be allowed to control it. Courts have rejected that move before, and they should do so again.

Search, indexing, and analysis didn’t destroy creativity. Nor did the photocopier, nor the VCR. They expanded speech, access to knowledge, and participation in culture. Artificial intelligence raises hard new questions, but fair use remains the right starting point for thinking about training.

Republished from the EFF’s Deeplinks blog.

Posted on BestNetTech - 21 January 2026 @ 03:38pm

Congress Wants To Hand Your Parenting To Big Tech

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee recently held a hearing on “examining the effect of technology on America’s youth.” Witnesses warned about “addictive” online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and “empower parents.”

That’s a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill’s press release contains soothing language, KOSMA doesn’t actually give parents more control. 

Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That’s right—this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem.  

Kids Under 13 Are Already Banned From Social Media

One of the main promises of KOSMA is simple and dramatic: it would ban kids under 13 from social media. Based on the language of bill sponsors, one might think that’s a big change, and that today’s rules let kids wander freely into social media sites. But that’s not the case.   

Every major platform already draws the same line: kids under 13 cannot have an account. Facebook, Instagram, TikTok, X, YouTube, Snapchat, Discord, Spotify, and even blogging platforms like WordPress all say essentially the same thing—if you’re under 13, you’re not allowed. That age line has been there for many years, mostly because of how online services comply with a federal privacy law called COPPA.

Of course, everyone knows many kids under 13 are on these sites anyway. The real question is how and why they get access.

Most Social Media Use By Younger Kids Is Family-Mediated 

If lawmakers picture under-13 social media use as a bunch of kids lying about their age and sneaking onto apps behind their parents’ backs, they’ve got it wrong. Serious studies that have looked at this all find the opposite: most under-13 use is out in the open, with parents’ knowledge, and often with their direct help. 

A large national study published last year in Academic Pediatrics found that 63.8% of under-13s have a social media account, but only 5.4% of them said they were keeping one secret from their parents. That means roughly 90% of kids under 13 who are on social media aren’t hiding it at all. Their parents know. (For kids aged thirteen and over, the “secret account” number is almost as low, at 6.9%.) 

Earlier research in the U.S. found the same pattern. In a well-known study of Facebook use by 10-to-14-year-olds, researchers found that about 70% of parents said they actually helped create their child’s account, and between 82% and 95% knew the account existed. Again, this wasn’t kids sneaking around. It was families making a decision together.

A 2022 study by the UK’s media regulator Ofcom points in the same direction, finding that up to two-thirds of social media users below the age of thirteen had direct help from a parent or guardian getting onto the platform.

The typical under-13 social media user is not a sneaky kid. It’s a family making a decision together. 

KOSMA Forces Platforms To Override Families 

This bill doesn’t just set an age rule. It creates a legal duty for platforms to police families.

Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it “shall terminate any existing account or profile” belonging to that user. And “knows” doesn’t just mean someone admits their age. The bill defines knowledge to include what is “fairly implied on the basis of objective circumstances”—in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13. 

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won’t be kids sneaking around—it will be minors who are following their parents’ guidance, and the parents themselves. 

Imagine a child using their parent’s YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, “Cool video—I’ll show this to my 6th grade teacher!” and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn’t matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a “family” account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That’s more than enough legal risk to make platforms err on the side of cutting people off.

Platforms have no way to remove “just the kid” from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child’s use, KOSMA forces Big Tech to override that family decision.

Your Family, Their Algorithms

KOSMA doesn’t appoint a neutral referee. Under the law, companies like Google (YouTube), Meta (Facebook and Instagram), TikTok, Spotify, X, and Discord will become the ones who decide whose account survives, whose account gets locked, who has to upload ID, and whose family loses access altogether. They won’t be doing this because they want to—but because Congress is threatening them with legal liability if they don’t. 

These companies don’t know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems. 

What Families Lose 

This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet. It’s about a kid watching “How volcanoes work” on regular YouTube, instead of the stripped-down YouTube Kids. It’s about using a shared Spotify account to listen to music a parent already approves. It’s about piano lessons from a teacher who makes her living from YouTube ads.

These aren’t loopholes. They’re how parenting works in the digital age. Parents increasingly filter, supervise, and, usually, decide together with their kids. KOSMA will lead to more locked accounts, and more parents submitting to face scans and ID checks. It will also lead to more power concentrated in the hands of the companies Congress claims to distrust. 

What Can Be Done Instead

KOSMA also includes separate restrictions on how platforms can use algorithms for users aged 13 to 17. Those raise their own serious questions about speech, privacy, and how online services work, and need debate and scrutiny as well. But they don’t change the core problem here: this bill hands control over children’s online lives to Big Tech.

If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone. Limits on data collection, restrictions on behavioral tracking, and rules that apply to adults as well as kids would do far more to reduce harmful incentives than deputizing companies to guess how old your child is and shut them out.

But if lawmakers aren’t ready to do that, they should at least drop KOSMA and start over. A law that treats ordinary parenting as a compliance problem is not protecting families—it’s undermining them.

Parents don’t need Big Tech to replace them. They need laws that respect how families actually work.

Republished from the EFF’s Deeplinks blog.

Posted on BestNetTech - 18 December 2025 @ 01:47pm

Thousands Tell The Patent Office: Don’t Hide Bad Patents From Review

We filed our own comment with the USPTO regarding their attempt to weaken the important inter partes review (IPR) process that has been hugely helpful in getting rid of bad patents. Over at EFF, Joe Mullin wrote up an analysis of some of the comments to the USPTO, which we’re running here as well.

A massive wave of public comments just told the U.S. Patent and Trademark Office (USPTO): don’t shut the public out of patent review.

EFF submitted its own formal comment opposing the USPTO’s proposed rules, and more than 4,000 supporters added their voices—an extraordinary response for a technical, fast-moving rulemaking. Those supporter comments made up more than one-third of the 11,442 comments submitted. The message is unmistakable: the public wants a meaningful way to challenge bad patents, and the USPTO should not take that away.

The Public Doesn’t Want To Bury Patent Challenges

These thousands of submissions do more than express frustration. They demonstrate overwhelming public interest in preserving inter partes review (IPR), and undermine any broad claim that the USPTO’s proposal reflects public sentiment. 

The comments opposing the rulemaking include many from small business owners who have been wrongly accused of patent infringement, by both patent trolls and patent-abusing competitors. They also include computer science experts, law professors, and everyday technology users who are simply tired of patent extortion—abusive assertions of low-quality patents—and the harm it inflicts on their work, their lives, and the broader U.S. economy.

The USPTO exists to serve the public. The volume and clarity of this response make that expectation impossible to ignore.

EFF’s Comment To USPTO

In our filing, we explained that the proposed rules would make it significantly harder for the public to challenge weak patents. That undercuts the very purpose of IPR. The proposed rules would pressure defendants to give up core legal defenses, allow early or incomplete decisions to block all future challenges, and create new opportunities for patent owners to game timing and shut down PTAB review entirely.

Congress created IPR to allow the Patent Office to correct its own mistakes in a fair, fast, expert forum. These changes would take the system backward. 

A Broad Coalition Supports IPR

A wide range of groups told the USPTO the same thing: don’t cut off access to IPR.

Open Source and Developer Communities 

The Linux Foundation submitted comments warning that the proposed rules “would effectively remove IPRs as a viable mechanism for challenges to patent validity,” harming open-source developers and the users who rely on them. GitHub wrote that the USPTO proposal would increase “litigation risk and costs for developers, startups, and open source projects.” And dozens of individual software developers described how bad patents have burdened their work.

Patent Law Scholars

A group of 22 patent law professors from universities across the country said the proposed rule changes “would violate the law, increase the cost of innovation, and harm the quality of patents.” 

Patient Advocates

Patients for Affordable Drugs warned in their filing that IPR is critical for invalidating wrongly granted pharmaceutical patents. When such patents are invalidated, studies have shown “cardiovascular medications have fallen 97% in price, cancer drugs dropping 80-98%, and treatments for opioid addiction becom[e] 50% more affordable.” In addition, “these cases involved patents that had evaded meaningful scrutiny in district court.” 

Small Businesses 

Hundreds of small businesses weighed in with a consistent message: these proposed rules would hit them hardest. Owners and engineers described being targeted with vague or overbroad patents they cannot afford to litigate in court, explaining that IPR is often the only realistic way for a small firm to defend itself. The proposed rules would leave them with an impossible choice—pay a patent troll, or spend money they don’t have fighting in federal court. 

What Happens Next

The USPTO now has thousands of comments to review. It should listen. Public participation must be more than a box-checking exercise. It is central to how administrative rulemaking is supposed to work.

Congress created IPR so the public could help correct bad patents without spending millions of dollars in federal court. People across technical, academic, and patient-advocacy communities just reminded the agency why that matters. 

We hope the USPTO reconsiders these proposed rules. Whatever happens, EFF will remain engaged and continue fighting to preserve the public’s ability to challenge bad patents.

Republished from the EFF’s Deeplinks blog.

Posted on BestNetTech - 30 July 2025 @ 01:14pm

Canada’s Bill C-2 Opens The Floodgates To U.S. Surveillance

The Canadian government is preparing to give away Canadians’ digital lives—to U.S. police, to the Donald Trump administration, and possibly to foreign spy agencies.

Bill C-2, the so-called Strong Borders Act, is a sprawling surveillance bill with multiple privacy-invasive provisions. But the thrust is clear: it’s a roadmap to aligning Canadian surveillance with U.S. demands. 

It’s also a giveaway of Canadian constitutional rights in the name of “border security.” If passed, it will shatter privacy protections that Canadians have spent decades building. This will affect anyone using Canadian internet services, including email, cloud storage, VPNs, and messaging apps. 

A joint letter, signed by dozens of Canadian civil liberties groups and more than a hundred Canadian legal experts and academics, puts it clearly: Bill C-2 is “a multi-pronged assault on the basic human rights and freedoms Canada holds dear,” and “an enormous and unjustified expansion of power for police and CSIS to access the data, mail, and communication patterns of people across Canada.”

Setting The Stage For Cross-Border Surveillance 

Bill C-2 isn’t just a domestic surveillance bill. It’s a Trojan horse for U.S. law enforcement—quietly building the pipes to ship Canadians’ private data straight to Washington.

If Bill C-2 passes, Canadian police and spy agencies will be able to demand information about peoples’ online activities based on the low threshold of “reasonable suspicion.” Companies holding such information would have only five days to challenge an order, and blanket immunity from lawsuits if they hand over data. 

Police and CSIS, the Canadian intelligence service, will be able to find out whether you have an online account with any organization or service in Canada. They can demand to know how long you’ve had it, where you’ve logged in from, and which other services you’ve interacted with, with no warrant required.

The bill will also allow for the introduction of encryption backdoors. Forcing companies to surveil their customers is allowed under the law (see part 15), as long as these mandates don’t introduce a “systemic vulnerability”—a term the bill doesn’t even bother to define. 

The information gathered under these new powers is likely to be shared with the United States. Canada and the U.S. are currently negotiating a misguided agreement to share law enforcement information under the US CLOUD Act. 

The U.S. and U.K. put a CLOUD Act deal in place in 2020, and it hasn’t been good for users. Earlier this year, the U.K. Home Office ordered Apple to let it spy on users’ encrypted accounts. That security risk caused Apple to stop offering U.K. users certain advanced encryption features, and lawmakers and officials in the United States have raised concerns that the U.K.’s demands might have been designed to leverage its expanded CLOUD Act powers.

If Canada moves forward with Bill C-2 and a CLOUD Act deal, American law enforcement could demand data from Canadian tech companies in secrecy—no notice to users would be required. Companies could also expect gag orders preventing them from even mentioning they have been forced to share information with US agencies.

This isn’t speculation. Earlier this month, a Canadian government official told Politico that this surveillance regime would give Canadian police “the same kind of toolkit” that their U.S. counterparts have under the PATRIOT Act and FISA. The bill allows for “technical capability orders.” Those orders mean the government can force Canadian tech companies, VPNs, cloud providers, and app developers—regardless of where in the world they are based—to build surveillance tools into their products.

Under U.S. law, non-U.S. persons have little protection from foreign surveillance. If U.S. cops want information on abortion access, gender-affirming care, or political protests happening in Canada—they’re going to get it. The data-sharing won’t necessarily be limited to the U.S., either. There’s nothing to stop authoritarian states from demanding access to this new trove of Canadians’ private data that Canadian law enforcement agencies will be secretly doling out.

EFF joins the Canadian Civil Liberties Association, OpenMedia, researchers at Citizen Lab, and dozens of other Canadian organizations and experts in asking the Canadian federal government to withdraw Bill C-2. 

Originally posted to the EFF’s Deeplinks blog.

Posted on BestNetTech - 16 May 2025 @ 01:02pm

The Kids Online Safety Act Will Make The Internet Worse For Everyone

The Kids Online Safety Act (KOSA) is back in the Senate. Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a “duty of care,” and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people.

KOSA Still Forces Platforms to Police Legal Speech

At the center of the bill is a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” The bill claims to bar lawsuits over “the viewpoint of users,” but that’s a smokescreen. Its core function is to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to one of these harms. 

This bill won’t bother big tech. Large companies will be able to manage this regulation, which is why Apple and X have agreed to support it. In fact, X helped negotiate the text of the last version of this bill we saw. Meanwhile, those companies’ smaller competitors will be left scrambling to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

To avoid liability, platforms will over-censor. It’s not merely hypothetical. It’s what happens when speech becomes a legal risk. The list of harms in KOSA’s “duty of care” provision is so broad and vague that no platform will know what to do regarding any given piece of content. Forums won’t be able to host posts with messages like “love your body,” “please don’t do drugs,” or “here’s how I got through depression” without fearing that an attorney general or FTC lawyer might later decide the content was harmful. Support groups and anti-harm communities, which can’t do their work without talking about difficult subjects like eating disorders, mental health, and drug abuse, will get caught in the dragnet. 

When the safest legal option is to delete a forum, platforms will delete the forum.

There’s Still No Science Behind KOSA’s Core Claims

KOSA relies heavily on vague, subjective harms like “compulsive usage.” The bill defines it as repetitive online behavior that disrupts life activities like eating, sleeping, or socializing. But here’s the problem: there is no accepted clinical definition of “compulsive usage” of online services.

There’s no scientific consensus that online platforms cause mental health disorders, nor agreement on how to measure so-called “addictive” behavior online. The term sounds like settled medical science, but it’s legislative sleight-of-hand: an undefined concept given legal teeth, with major consequences for speech and access to information.

Carveouts Don’t Fix the First Amendment Problem

The bill says it can’t be enforced based on a user’s “viewpoint.” But the text of the bill itself preferences certain viewpoints over others. Plus, liability in KOSA attaches to the platform, not the user. The only way for platforms to reduce risk in the world of KOSA is to monitor, filter, and restrict what users say.

If the FTC can sue a platform because minors saw a medical forum discussing anorexia, or posts about LGBTQ identity, or posts discussing how to help a friend who’s depressed, then that’s censorship. The bill’s stock language that “viewpoints are protected” won’t matter. The legal incentives guarantee that platforms will silence even remotely controversial speech to stay safe.

Lawmakers who support KOSA today are choosing to trust the current administration, and future administrations, to define what youth—and to some degree, all of us—should be allowed to read online. 

KOSA will not make kids safer. It will make the internet more dangerous for anyone who relies on it to learn, connect, or speak freely. Lawmakers should reject it, and fast. 

Reposted from the EFF’s Deeplinks blog.

Posted on BestNetTech - 7 April 2025 @ 01:47pm

Site-Blocking Legislation Is Back. It’s Still A Terrible Idea.

More than a decade ago, Congress tried to pass SOPA and PIPA—two sweeping bills that would have allowed the government and copyright holders to quickly shut down entire websites based on allegations of piracy. The backlash was immediate and massive. Internet users, free speech advocates, and tech companies flooded lawmakers with protests, culminating in an “Internet Blackout” on January 18, 2012. Turns out, Americans don’t like government-run internet blacklists. The bills were ultimately shelved. 

Thirteen years later, as institutional memory fades and appetite for opposition wanes, members of Congress in both parties are ready to try this again. 

The Foreign Anti-Digital Piracy Act (FADPA), along with at least one other bill still in draft form, would revive this reckless strategy. These new proposals would let rights holders get federal court orders forcing ISPs and DNS providers to block entire websites based on accusations of infringing copyright. Lawmakers claim they’re targeting “pirate” sites—but what they’re really doing is building an internet kill switch.

These bills are an unequivocal and serious threat to a free and open internet. EFF and our supporters are going to fight back against them. 

Site-Blocking Doesn’t Work—And Never Will 

Today, many websites are hosted on cloud infrastructure or use shared IP addresses. Blocking one target can mean blocking thousands of unrelated sites. That kind of digital collateral damage has already happened in Austria, Russia, and in the US.

Site-blocking is both dangerously blunt and trivially easy to evade. Determined evaders can create the same content on a new domain within hours. Users who want to see blocked content can fire up a VPN or change a single DNS setting to get back online. 
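
As a rough illustration of how thin that technical barrier is (our own sketch, not anything from the original post), the snippet below resolves a placeholder domain through a different public DNS resolver using the third-party dnspython library; in practice, that is about all “changing a single DNS setting” amounts to.

    # Illustrative sketch: sidestepping DNS-based blocking by asking a different resolver.
    # Requires the third-party dnspython package (pip install dnspython).
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["9.9.9.9"]  # any public resolver outside the blocking ISP

    answer = resolver.resolve("example.org", "A")  # placeholder for a blocked hostname
    for record in answer:
        print(record.address)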

These workarounds aren’t just popular—they’re essential tools in countries that suppress dissent. It’s shocking that Congress is on the verge of forcing Americans to turn to the same workarounds that internet users in authoritarian regimes depend on, just to reach wrongly blocked content. It will force Americans to rely on riskier, less trustworthy online services.

Site-Blocking Silences Speech Without a Defense

The First Amendment should not take a back seat because giant media companies want the ability to shut down websites faster. But these bills wrongly treat broad takedowns as a routine legal process. Most cases would be decided in ex parte proceedings, with no one there to defend the site being blocked. This is more than a shortcut—it skips due process entirely.

Users affected by a block often have no idea what happened. A blocked site may just look broken, like a glitch or an outage. Law-abiding publishers and users lose access, and diagnosing the problem is difficult. Site-blocking techniques are the bluntest of instruments, and they almost always punish innocent bystanders. 

The copyright industries pushing these bills know that site-blocking is not a narrowly tailored fix for a piracy epidemic. The entertainment industry is booming right now, blowing past its pre-COVID projections. Site-blocking legislation is an attempt to build a new American censorship system by letting private actors get dangerous infrastructure-level control over internet access. 

EFF and the Public Will Push Back

FADPA is already on the table. More bills are coming. The question is whether lawmakers remember what happened the last time they tried to mess with the foundations of the open web. 

If they don’t, they’re going to find out the hard way. Again. 

Site-blocking laws are dangerous, unnecessary, and ineffective. Lawmakers need to hear—loud and clear—that Americans don’t support government-mandated internet censorship. Not for copyright enforcement. Not for anything.

Reposted from the EFF’s Deeplinks blog.

Posted on BestNetTech - 27 March 2025 @ 01:28pm

New USPTO Memo Makes Fighting Patent Trolls Even Harder

The U.S. Patent and Trademark Office (USPTO) just made a move that will protect bad patents at the expense of everyone else. In a memo released February 28, the USPTO further restricted access to inter partes review, or IPR—the process Congress created to let the public challenge invalid patents without having to wage million-dollar court battles.

If left unchecked, this decision will shield bad patents from scrutiny, embolden patent trolls, and make it even easier for hedge funds and large corporations to weaponize weak patents against small businesses and developers.

IPR Exists Because the Patent Office Makes Mistakes

The USPTO grants over 300,000 patents a year, but many of them should not have been issued in the first place. Patent examiners spend, on average, around 20 hours per patent, often missing key prior art or granting patents that are overly broad or vague. That’s how bogus patents on basic ideas—like podcasting, online shopping carts, or watching ads online—have ended up in court.

Congress created IPR in 2012 to fix this problem. IPR allows anyone to challenge a patent’s validity based on prior art, and it’s done before specialized judges at the USPTO, where experts can re-evaluate whether a patent was properly granted. It’s faster, cheaper, and often fairer than fighting it out in federal court.

The USPTO is Blocking Patent Challenges—Again

Instead of defending IPR, the USPTO is working to sabotage it. The February 28 memo reinstates a rule that allows for widespread use of “discretionary denials.” That’s when the Patent Trial and Appeal Board (PTAB) refuses to hear an IPR case for procedural reasons—even if the patent is likely invalid. 

The February 28 memo reinstates widespread use of the Apple v. Fintiv rule, under which the USPTO often rejects IPR petitions whenever there is an ongoing district court case about the same patent. This is backwards. If anything, an active lawsuit is proof that a patent’s validity needs to be reviewed—not an excuse to dodge the issue.

In 2022, former USPTO Director Kathi Vidal issued a memo making clear that the PTAB should hear patent challenges when “a petition presents compelling evidence of unpatentability,” even if there is parallel court litigation. 

That 2022 guidance essentially saved the IPR system. Once PTAB judges were told to consider all petitions that showed “compelling evidence,” the procedural denials dropped to almost nothing. This February 28 memo signals that the USPTO will once again use discretionary denials to sharply limit access to IPR—effectively making patent challenges harder across the board.  

Discretionary Denials Let Patent Trolls Rig the System

The top beneficiaries of this decision will be patent trolls: shell companies formed expressly for the purpose of filing patent lawsuits. Patent trolls often seek to extract a quick settlement before a patent can be challenged. With IPR becoming increasingly unavailable, that will be easier than ever.

Patent owners know that discretionary denials will block IPRs if they file a lawsuit first. That’s why trolls flock to specific courts, like the Western District of Texas, where judges move cases quickly and rarely rule against patent owners.

By filing lawsuits in these troll-friendly courts, patent owners can game the system—forcing companies to pay up rather than risk millions in litigation costs.

The recent USPTO memo makes this problem even worse. Instead of stopping the abuse of discretionary denials, the USPTO is doubling down—undermining one of the most effective ways businesses, developers, and consumers can fight back against bad patents.

Congress Created IPR to Protect the Public—Not Just Patent Owners

The USPTO doesn’t get to rewrite the law. Congress passed IPR to ensure that weak patents don’t become weapons for extortionary lawsuits. By reinforcing discretionary denials with minimal restrictions, and, as a result, blocking access to IPRs, the USPTO is directly undermining what Congress intended.

Leaders at the USPTO should immediately revoke the February 28 memo. If they refuse, as we pointed out the last time IPR denials spiraled out of control, it’s time for Congress to step in and fix this. They must ensure that IPR remains a fast, affordable way to challenge bad patents—not just a tool for the largest corporations. Patent quality matters—because when bad patents stand, we all pay the price.

Republished from the EFF’s Deeplinks blog.

Posted on BestNetTech - 24 March 2025 @ 12:21pm

A Win For Encryption: France Rejects Backdoor Mandate

In a moment of clarity after initially moving forward a deeply flawed piece of legislation, the French National Assembly has done the right thing: it rejected a dangerous proposal that would have gutted end-to-end encryption in the name of fighting drug trafficking. Despite heavy pressure from the Interior Ministry, lawmakers voted Thursday night (article in French) to strike down a provision that would have forced messaging platforms like Signal and WhatsApp to allow hidden access to private conversations.

The vote is a victory for digital rights, for privacy and security, and for common sense.

The proposed law was a surveillance wish list disguised as anti-drug legislation. Tucked into its text was a resurrection of the widely discredited “ghost” participant model—a backdoor that pretends not to be one. Under this scheme, law enforcement could silently join encrypted chats, undermining the very idea of private communication. Security experts have condemned the approach, warning it would introduce systemic vulnerabilities, damage trust in secure communication platforms, and create tools ripe for abuse.

The French lawmakers who voted this provision down deserve credit. They listened—not only to French digital rights organizations and technologists, but also to basic principles of cybersecurity and civil liberties. They understood that encryption protects everyone, not just activists and dissidents, but also journalists, medical professionals, abuse survivors, and ordinary citizens trying to live private lives in an increasingly surveilled world.

A Global Signal

France’s rejection of the backdoor provision should send a message to legislatures around the world: you don’t have to sacrifice fundamental rights in the name of public safety. Encryption is not the enemy of justice; it’s a tool that supports our fundamental human rights, including the right to have a private conversation. It is a pillar of modern democracy and cybersecurity.

As governments in the U.S., U.K., Australia, and elsewhere continue to flirt with anti-encryption laws, this decision should serve as a model—and a warning. Undermining encryption doesn’t make society safer. It makes everyone more vulnerable.

This victory was not inevitable. It came after sustained public pressure, expert input, and tireless advocacy from civil society. It shows that pushing back works. But for the foreseeable future, misguided lobbyists for police and national security agencies will continue to push similar proposals—perhaps repackaged, or rushed through quieter legislative moments.

Supporters of privacy should celebrate this win today. Tomorrow, we will continue to keep watch.

Republished from the EFF’s Deeplinks blog.

Posted on BestNetTech - 18 March 2025 @ 03:46pm

California’s A.B. 412: A Bill That Could Crush Startups And Cement A Big Tech AI Monopoly

California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush small AI startups and developers while giving big tech firms even more power.

A Burden That Small Developers Can’t Bear

The AI landscape is in danger of being dominated by large companies with deep pockets. These big names are in the news almost daily. But they’re far from the only ones – there are dozens of AI companies with fewer than 10 employees trying to build something new in a particular niche. 

This bill demands that creators of any AI model—even a two-person company or a hobbyist tinkering with a small software build—identify copyrighted materials used in training. That requirement will be incredibly onerous, even if limited just to works registered with the U.S. Copyright Office. The registration system is a cumbersome beast at best—neither machine-readable nor accessible, it’s more like a card catalog than a database—that doesn’t offer information sufficient to identify all authors of a work, much less help developers to reliably match works in a training set to works in the system.

Even for major tech companies, meeting these new obligations would be a daunting task. For a small startup, throwing on such an impossible requirement could be a death sentence. If A.B. 412 becomes law, these smaller players will be forced to devote scarce resources to an unworkable compliance regime instead of focusing on development and innovation. The risk of lawsuits—potentially from copyright trolls—would discourage new startups from even attempting to enter the field.

A.I. Training Is Like Reading And It’s Very Likely Fair Use 

A.B. 412 starts from a premise that’s both untrue and harmful to the public interest: that reading, scraping or searching of open web content shouldn’t be allowed without payment. In reality, courts should, and we believe will, find that the great majority of this activity is fair use. 

It’s now a bedrock principle of internet law that some forms of copying content online are transformative, and thus legal fair use. That includes reproducing thumbnail images for image search, or snippets of text to search books.

The U.S. copyright system is meant to balance innovation with creator rights, and courts are still working through how copyright applies to AI training. In most of the AI cases, courts have yet to consider—let alone decide—how fair use applies. A.B. 412 jumps the gun, preempting this process and imposing a vague, overly broad standard that will do more harm than good.

Importantly, those key court cases are all federal. The U.S. Constitution makes it clear that copyright is governed by federal law, and A.B. 412 improperly attempts to impose state-level copyright regulations on an issue still in flux. 

A.B. 412 Is A Gift to Big Tech

The irony of A.B. 412 is that it won’t stop AI development—it will simply consolidate it in the hands of the largest corporations. Big tech firms already have the resources to navigate complex legal and regulatory environments, and they can afford to comply (or at least appear to comply) with A.B. 412’s burdensome requirements. Small developers, on the other hand, will either be forced out of the market or driven into partnerships where they lose their independence. The result will be less competition, fewer innovations, and a tech landscape even more dominated by a handful of massive companies.

If lawmakers are able to iron out some of the practical problems with A.B. 412 and pass some version of it, they may be able to force programmers to research—and effectively, pay off—copyright owners before they even write a line of code. If that’s the outcome in California, Big Tech will not despair. They’ll celebrate. Only a few companies own large content libraries or can afford to license enough material to build a deep learning model. The possibilities for startups and small programmers will be so meager, and competition will be so limited, that profits for big incumbent companies will be locked in for a generation. 

If you are a California resident and want to speak out about A.B. 412, you can find and contact your legislators through this website.

Originally published to the EFF’s Deeplinks blog.

Posted on BestNetTech - 7 November 2024 @ 11:48am

Judge’s Investigation Into Patent Troll Results In Criminal Referrals

In 2022, three companies with strange names and no clear business purpose beyond patent litigation filed dozens of lawsuits in Delaware federal court, accusing businesses of all sizes of patent infringement. Some of these complaints claimed patent rights over basic aspects of modern life; one, for example, involved a patent that pertains to the process of clocking in to work through an app.

These companies – named Mellaconic IP, Backertop Licensing, and Nimitz Technologies – seemed to be typical examples of “patent trolls,” companies whose primary business is suing others over patents or demanding licensing fees rather than providing actual products or services. 

However, the cases soon took an unusual turn. The Delaware federal judge overseeing the cases, U.S. District Judge Colm Connolly, sought more information about the patents and their ownership. One of the alleged owners was a food-truck operator who had been promised “passive income,” but was entitled to only a small portion of any revenue generated from the lawsuits. Another owner was the spouse of an attorney at IP Edge, the patent-assertion company linked to all three LLCs. 

Following an extensive investigation, the judge determined that attorneys associated with these shell companies had violated legal ethics rules. He pointed out that the attorneys may have misled Hau Bui, the food-truck owner, about his potential liability in the case. Judge Connolly wrote: 

[T]he disparity in legal sophistication between Mr. Bui and the IP Edge and Mavexar actors who dealt with him underscore that counsel’s failures to comply with the Model Rules of Professional Conduct while representing Mr. Bui and his LLC in the Mellaconic cases are not merely technical or academic.

Judge Connolly also concluded that IP Edge, the patent-assertion company behind hundreds of patent lawsuits and linked to the three LLCs, was the “de facto owner” of the patents asserted in his court, but that it attempted to hide its involvement. He wrote, “IP Edge, however, has gone to great lengths to hide the ‘we’ from the world,” with “we” referring to IP Edge. Connolly further noted, “IP Edge arranged for the patents to be assigned to LLCs it formed under the names of relatively unsophisticated individuals recruited by [IP Edge office manager] Linh Deitz.” 

The judge referred three IP Edge attorneys to the Supreme Court of Texas’ Unauthorized Practice of Law Committee for engaging in “unauthorized practices of law in Texas.” Judge Connolly also sent a letter to the Department of Justice, suggesting an investigation into “individuals associated with IP Edge LLC and its affiliate Mavexar LLC.”

Patent Trolls Tried To Shut Down This Investigation

The attorneys involved in this wild patent trolling scheme challenged Judge Connolly’s authority to proceed with his investigation. However, because transparency in federal courts is essential and applicable to all parties, including patent assertion entities, EFF and two other patent reform groups filed a brief in support of the judge’s investigation. The brief argued that “[t]he public has a right—and need—to know who is controlling and benefiting from litigation in publicly-funded courts.” Companies targeted by the patent trolls, as well as the Chamber of Commerce, filed their own briefs supporting the investigation. 

The appeals court sided with us, upholding Judge Connolly’s authority to proceed, which led to the referral of the involved attorneys to the disciplinary counsel of their respective bar associations.

After this damning ruling, one of the patent troll companies and its alleged owner made a final effort at appealing this outcome. In July of this year, the U.S. Court of Appeals for the Federal Circuit ruled that investigating Backertop Licensing LLC and ordering its alleged owner to testify was “an appropriate means to investigate potential misconduct involving Backertop.” 

In EFF’s view, these types of investigations into the murky world of patent trolling are not only appropriate but should happen more often. Now that the appeals court has ruled, let’s take a look at what we learned about the patent trolls in this case. 

Patent Troll Entities Linked To French Government

One of the patent trolling entities, Nimitz Technologies LLC, asserted a single patent, U.S. Patent No. 7,848,328, against 11 companies. When the judge required Nimitz’s supposed owner, a man named Mark Hall, to testify in court, Hall could not describe anything about the patent or explain how Nimitz acquired it. He didn’t even know the name of the patent (“Broadcast Content Encapsulation”). When asked what technology was covered by the patent, he said, “I haven’t reviewed it enough to know,” and when asked how he paid for the patent, Hall replied, “no money exchanged hands.” 

The exchange between Hall and Judge Connolly went as follows: 

Q. So how do you come to own something if you never paid for it with money?

A. I wouldn’t be able to explain it very well. That would be a better question for Mavexar.

Q. Well, you’re the owner?

A. Correct.

Q. How do you know you’re the owner if you didn’t pay anything for the patent?

A. Because I have the paperwork that says I’m the owner.

(Nov. 27, 2023 Opinion, pages 8-9.) 

The Nimitz patent originated from the Finnish cell phone company Nokia, which later assigned it and several other patents to France Brevets, a French sovereign investment fund, in 2013. France Brevets, in turn, assigned the patent to a US company called Burley Licensing LLC, an entity linked to IP Edge, in 2021. Hau Bui (the food truck owner) signed on behalf of Burley, and Didier Patry, then the CEO of France Brevets, signed on behalf of the French fund. 

France Brevets was an investment fund formed in 2009 with €100 million in seed money from the French government to manage intellectual property. France Brevets was set to receive 35% of any revenue related to “monetizing and enforcement” of the patent, with Burley agreeing to file at least one patent infringement lawsuit within a year, and collect a “total minimum Gross Revenue of US $100,000” within 24 months, or the patent rights would be given back to France Brevets. 

Burley Licensing LLC, run by IP Edge personnel, then created Nimitz Technologies LLC— a company with no assets except for the single patent. They obtained a mailing address for it from a Staples in Frisco, Texas, and assigned the patent to the LLC in August 2021, while the obligations to France Brevets remained unchanged until the fund shut down in 2022.

The Bigger Picture

It’s troubling that patent lawsuits are often funded by entities with no genuine interest in innovation, such as private equity firms. However, it’s even more concerning when foreign government-backed organizations like France Brevets manipulate the US patent system for profit. In this case, a Finnish company sold its patents to a French government fund, which used US-based IP lawyers to file baseless lawsuits against American companies, including well-known establishments like Reddit and Bloomberg, as well as smaller ones like Tastemade and Skillshare.

Judges should enforce rules requiring transparency about third-party funding in patent lawsuits. When ownership is unclear, it’s appropriate to insist that the real owners show up and testify—before dragging dozens of companies into court over dubious software patents. 

Reposted from the EFF’s Deeplinks blog.