katharine.trendacosta's BestNetTech Profile

Posted on BestNetTech - 19 December 2025 @ 03:59pm

The Best Big Media Merger Is No Merger At All

The state of streaming is… bad. It’s very bad. The first step in watching anything is a web search: “Where can I stream X?” Then you have to scroll past an AI summary with no answers, and then past the sponsored links. After that, you find out that the thing you want to watch was made by a studio that doesn’t exist anymore or doesn’t have a streaming service. So, even though you subscribe to more streaming services than you could actually name, you will have to buy a digital copy to watch it. A copy that, despite your paying for it specifically, you do not actually own and that might vanish in a few years.

Then, after you’ve paid to see something multiple times in multiple ways (theater ticket, VHS tape, DVD, etc.), the mega-corporations behind this nightmare will try to get Congress to pass laws to ensure you keep paying them. In the end, this is easier than making a product that works. Or, as someone put it on social media, these companies have forgotten “that their entire existence relies on being slightly more convenient than piracy.”

It’s important to recognize this as we see more and more media mergers. These mergers are not about quality; they’re about control.

In the old days, studios made a TV show. If the show was a hit, they increased how much they charged companies to place ads during the show. And if the show was a hit for long enough, they sold syndication rights to another channel. Then people could discover the show again, and maybe come back to watch it air live. In that model, the goal was to spread access to a program as much as possible to increase viewership and the number of revenue streams.  

Now, in the digital age, studios have picked up a Silicon Valley trait: putting all their eggs into the basket of “increasing the number of users.” To do that, they have to create scarcity. There has to be only one destination for the thing you’re looking for, and it has to be their own. And you shouldn’t be able to control the experience at all. They should.  

They’ve also moved away from creating buzzy new exclusives to get you to pay them. That requires risk and also, you know, paying creative people to make them. Instead, they’re consolidating.  

Media companies keep announcing mergers and acquisitions. They’ve been doing it for a long time, but it’s really ramped up in the last few years. And these mergers are bad for all the obvious reasons. There are the speech and censorship reasons that came to a head in, of all places, late-night television. There are the labor issues. There are the concentration-of-power issues. There is the obvious problem that the fewer studios exist, the fewer chances good art has to escape Hollywood and make it to our eyes and ears. But when it comes specifically to digital life, there are two more: consumer experience and ownership.

First, the more content that comes under a single corporation’s control, the more they expect you to come to them for it. And the more they want to charge. And the less competition there is, the less they need to work to make their streaming apps usable. They then enforce their hegemony by using the draconian copyright restrictions they’ve lobbied for to cripple smaller competitors, critics, and fair use.

When everything is either Disney or NBCUniversal or Warner Brothers-Discovery-Paramount-CBS and everything is totally siloed, what need will they have to spend money improving any part of their product? Making things is hard; stopping others from proving how bad you are is easy, thanks to how broken copyright law is.

Furthermore, because every company is chasing increasing subscriber numbers instead of multiple revenue streams, they have an interest in preventing you from ever again “owning” a copy of a work. This was always sort of part of the business plan, but it was on a scale where a) it happened once every couple of years, b) it came, in theory, with some new features or enhanced quality, and c) you actually owned the copy you paid for. Now they want you to pay them every month for access to the same copy. And, hey, the price is going to keep going up the fewer options you have. Or you will see more ads. Or start seeing ads where there weren’t any before.

On the one hand, the increasing dependence on direct subscriber numbers does give users back some power. Jimmy Kimmel’s reinstatement by ABC was partly due to the fact that the company was about to announce a price hike for Disney+ and couldn’t handle losing users both to the new price and to popular outrage over Kimmel’s treatment.

On the other hand, well, there’s everything else. 

The latest kerfuffle is over the sale of Warner Brothers-Discovery, a company that was already the subject of a sale and merger resulting in the hyphen. Netflix was competing for it against another recently merged media megazord, Paramount Skydance.

Warner Brothers-Discovery accepted a bid from Netflix, enraging Paramount Skydance, which has now launched a hostile takeover.  

Now the optimum outcome is for neither of these takeovers to happen. There are already too few players in Hollywood. It does nothing for the health of the industry to allow either merger. A functioning antitrust regime would stop both the sale and the hostile takeover attempt, full stop. But Hollywood and the federal government are frequent collaborators, and the feds have little incentive to stop Hollywood’s behemoths from growing even further, as long as they continue to play their role pushing a specific view of American culture.    

The promise of the digital era was in part convenience. You never again had to look at TV listings to find out when something would be airing. Virtually unlimited digital storage meant everything would be at your fingertips. But then the corporations went to work to make sure it never happened. And with each and every merger, that promise gets further and further away.  

Republished from the EFF’s Deeplinks blog.

Posted on BestNetTech - 2 July 2025 @ 03:43pm

The NO FAKES Act Has Changed – And It’s So Much Worse

A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from here on out.

The Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act aims to address understandable concerns about generative AI-created “replicas” by creating a broad new intellectual property right. That approach was the first mistake: rather than giving people targeted tools to protect against harmful misrepresentations—balanced against the need to protect legitimate speech such as parodies and satires—the original NO FAKES just federalized an image-licensing system.

The updated bill doubles down on that initial mistaken approach by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them, with few safeguards against abuse.

The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say-so of the person who was allegedly “replicated.”

This bill would be a disaster for internet speech and innovation.

Targeting Tools

The first version of NO FAKES focused on digital replicas. The new version goes further, targeting tools that can be used to produce images that aren’t authorized by the individual, by anyone who owns the rights in that individual’s image, or by the law. Anyone who makes, markets, or hosts such tools is on the hook. There are some limits—the tools must be primarily designed for, or have only limited commercial uses other than, making unauthorized images—but those limits will offer cold comfort to developers, given that they can be targeted based on nothing more than a bare allegation. These provisions effectively give rightsholders the veto power over innovation they’ve long sought in the copyright wars, based on the same tech panics.

Takedown Notices and Filter Mandate

The first version of NO FAKES set up a notice-and-takedown system patterned on the DMCA, with even fewer safeguards. NO FAKES expands it to cover more service providers and requires those providers not only to take down targeted materials (or tools) but to keep them from being uploaded in the future. In other words: adopt broad filters or lose the safe harbor.

Filters are already a huge problem when it comes to copyright, and at least in that context all a filter should be doing is flagging an upload for human review if it appears to be a whole copy of a work. The reality is that these systems often flag things that are similar but not the same (like two different people playing the same piece of public domain music). They also flag things for infringement based on mere seconds of a match, and they frequently do not take into account context that would make the use authorized by law.
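
To make that failure mode concrete, here is a minimal sketch of a match-based filter. Everything in it is hypothetical (the windowing, the hashing, and the “any single window matches” rule are invented for illustration, not taken from any real vendor’s system), but it captures the core flaw: a short match is treated as infringement, with no check for context or fair use.

    # Hypothetical sketch of a naive match-based upload filter (Python).
    # The windowing, hashing, and decision rule are invented for illustration.

    def fingerprints(samples, window=5):
        # Hash each `window`-length chunk of audio into a crude fingerprint.
        return {hash(tuple(samples[i:i + window]))
                for i in range(0, len(samples) - window + 1, window)}

    def flag_upload(upload, reference_db, window=5):
        upload_prints = fingerprints(upload, window)
        for track, track_prints in reference_db.items():
            # A single shared window, a few seconds of overlap, is enough to flag.
            if upload_prints & track_prints:
                return track
        return None

    # Two independent performances of the same public domain piece produce the
    # same note sequence, so the second performer gets flagged as a "copy."
    performance_a = [440, 494, 523, 587, 659, 440, 494, 523, 587, 659]
    performance_b = list(performance_a)  # a different musician, same notes
    reference_db = {"label_recording": fingerprints(performance_a)}
    print(flag_upload(performance_b, reference_db))  # -> label_recording

The point of the sketch is the decision rule, not the hashing: any match above a trivial length is treated as infringement, which is exactly how two different people playing the same public domain music end up flagged.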

But copyright filters are not yet required by law. NO FAKES would create a legal mandate that will inevitably lead to hecklers’ vetoes and other forms of over-censorship.

The bill does contain carve-outs for parody, satire, and commentary, but those will also be cold comfort for those who cannot afford to litigate the question.

Threats to Anonymous Speech

As currently written, NO FAKES also allows anyone to get a subpoena from a court clerk—not a judge, and without any form of proof—forcing a service to hand over identifying information about a user.

We’ve already seen abuse of a similar system in action. In copyright cases, those unhappy with the criticisms being made against them get such subpoenas to silence critics. Often the criticism includes the complainant’s own words as proof, an ur-example of fair use. But the subpoena is issued anyway and, unless the service is incredibly on the ball, the user can be unmasked.

Not only does this chill further speech, but the unmasking itself can harm users, either reputationally or in their personal lives.

Threats to Innovation

Most of us are very unhappy with the state of Big Tech. It seems that not only are we increasingly forced to use the tech giants, but the quality of their services is actively degrading. By increasing the sheer amount of infrastructure a new service would need to comply with the law, NO FAKES makes it harder for any new service to challenge Big Tech. It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES.

Requiring removal of tools, apps, and services could likewise stymie innovation. For one, it would harm people using such services for otherwise lawful creativity.  For another, it would discourage innovators from developing new tools. Who wants to invest in a tool or service that can be forced offline by nothing more than an allegation?

This bill is a solution in search of a problem. Just a few months ago, Congress passed Take It Down, which targeted images involving intimate or sexual content. That deeply flawed bill pressures platforms to actively monitor online speech, including speech that is presently encrypted. But if Congress is really worried about privacy harms, it should at least wait to see the effects of that last piece of internet regulation before rushing into a new one. Its failure to do so makes clear that this is not about protecting victims of harmful digital replicas.

NO FAKES is designed to consolidate control over the commercial exploitation of digital images, not prevent it. Along the way, it will cause collateral damage to all of us.

Originally posted to the EFF’s Deeplinks blog, with a link to EFF’s Take Action page on the NO FAKES bill, which helps you tell your elected officials not to support this bill.

Posted on BestNetTech - 13 August 2021 @ 10:48am

Why Companies Keep Folding to Copyright Pressure, Even If They Shouldn't

The giant record labels, their association, and their lobbyists have succeeded in getting a number of members of the U.S. House of Representatives to pressure Twitter to pay money it does not owe, to labels who have no claim to it, against the interests of its users. This is a playbook we’ve seen before, and it seems to work almost every time. For once, let us hope a company sees this extortion attempt for what it is and stands up to it.

Here is the deal. Online platforms that host user content are not liable for copyright infringement done by those users so long as they fulfill the obligations laid out in the Digital Millennium Copyright Act (DMCA). One of those obligations is to give rightsholders an unprecedented ability to have speech removed from the internet, on demand, with a simple notice sent to a platform identifying the offending content. Another is that companies must have some policy to terminate the accounts of “repeat infringers.”

Not content with being able to remove content without a court order, the giant companies that hold the most profitable rights want platforms to do more than the law requires. They do not care that their demands result in other people’s speech being suppressed. Mostly, they want two things: automated filters, and to be paid. In fact, the letter sent to Twitter by those members of Congress asks Twitter to add “content protection technology”—for free—and heavily implies that the just course is for Twitter to enter into expensive licensing agreements with the labels.

Make no mistake, artists deserve to be paid for their work. However, the complaints that the RIAA and record labels make about platforms are less about what individual artists make, and more about labels’ control. In 2020, according to the RIAA, revenues rose almost 10% to $12.2 billion in the United States. And Twitter, whatever else it is, is not where people go for music.

But the reason the RIAA, the labels, and their lobbyists have gone with this tactic is that, up until now, it has worked. Google set the worst precedent possible in this regard. Trying to avoid a fight with major rightsholders, Google voluntarily created Content ID. Content ID is an automated filter that scans uploads to see if any part—even just a few seconds—of the upload matches the copyrighted material in its database. A match can result in either a user’s video being blocked, or monetized for the claiming rightsholder. Ninety percent of Content ID partners choose to automatically monetize a match—that is, claim the advertising revenue on a creator’s video for themselves—and 95 percent of Content ID matches made to music are monetized in some form. That gives small, independent YouTube creators only a few options for how to make a living. Creators can dispute matches and hope to win, sacrificing revenue while they do and risking the loss of their channel. Fewer than one percent of Content ID matches are disputed. Or, they can painstakingly edit and re-edit videos, or avoid including almost any music whatsoever and hope that Content ID doesn’t register a match on static or a cat’s purr.
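
The monetize-by-default dynamic is simple enough to sketch in a few lines. This is a hypothetical illustration only (the function and policy names are invented); the percentages come from the figures above: roughly 90 percent of partners choose to monetize, and fewer than one percent of matches are ever disputed.

    # Hypothetical sketch of the monetize-by-default routing described above.
    # The function and policy names are invented; the percentages come from
    # the figures cited in this post.

    def route_ad_revenue(claimant, partner_policy="monetize", disputed=False):
        if disputed:
            # Rare path (under 1% of matches): the creator contests the claim,
            # sacrificing revenue in the meantime and risking their channel.
            return "revenue held pending dispute"
        if partner_policy == "block":
            return "video blocked"
        # The common case (~90% of partners): the creator's ad revenue goes to
        # the claimant, whether or not the use would qualify as fair use.
        return "ad revenue goes to " + claimant

    print(route_ad_revenue("a major label"))  # -> ad revenue goes to a major label

The default matters: because disputing is rare and costly, whatever the filter decides is, in practice, final.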

While any creator has the right to use copyrighted material without paying rightsholders in circumstances where fair use applies, Content ID routinely diverts money away from creators like these to rightsholders in the name of policing infringement. Fair use is an exercise of your First Amendment rights, but Content ID forces you to pay for that right. WatchMojo, one of the largest YouTube channels, estimated that over six years, roughly two billion dollars in ads have gone to rightsholders instead of creators. YouTube does not shy away from this effect. In its 2018 report “How Google Fights Piracy,” the company declares that “the size and efficiency of Content ID are unparalleled in the industry, offering an efficient way to earn revenue from the unanticipated, creative ways that fans reuse songs and videos.” In other words, Content ID allows rightsholders to take money away from creators who are under no obligation to obtain a license for their lawful fair uses.

That doesn’t even include the times these filters just get things completely wrong. Just the other week, a programmer live-streamed his typing and a claim was made for the sound of “typing on a modern keyboard.” A recording of static got five separate notices placed on it by the automated filter. These things don’t work.

YouTube also encourages people to simply use only things they have a license for or that come from a library of free resources. That ignores the fair use right to use copyrighted material in certain cases, and it lets companies argue that no one has to use their work without paying, since these free options exist.

So, when the labels make a lot of disingenuous noise about how inadequate the DMCA is and how platforms need to do more, they have YouTube to point to as a “voluntary” system that should be replicated. And companies will fold, especially if they end up being inundated with DMCA takedowns—some bogus—and if they think the other option is being required to do it by law (the implicit threat of a letter like the one Twitter received).

This tactic works. Twitch found itself buried under DMCA takedowns last year, handled that poorly, and then found itself, like Twitter, blamed by the RIAA for taking money out of the hands of musicians. Twitch now makes removing music and claimed bits of videos easier, has adopted a repeat infringer policy similar to YouTube’s, and makes deleting clips easier for users. Snap, owner of Snapchat, went the route of getting a license, paying labels to make music available to its users.

Creating a norm of licensed or free music, monetization, or automated filters functionally eviscerates fair use. Even if people have the right to use something, they won’t be able to. On YouTube, reviewers don’t use the clips of the music or movies that are the best example of what they’re talking about—they pick whatever will satisfy the filter. That is not the model we want as a baseline. The baseline should be more protective of legal speech, not less.

Unfortunately, when the tech companies are facing off against the largest rightsholders, it’s users who most often lose. Twitter is only the latest target; we hope it becomes the one that stands up for its users.

Originally posted to the EFF Deeplinks blog.
