jeffrey.westling's BestNetTech Profile


Posted on BestNetTech - 23 October 2020 @ 01:36pm

Should Antitrust Protect Competitors Or Competition?

While there is plenty of breaking news to go around, tech junkies will not have missed the Department of Justice's long-awaited announcement of its antitrust lawsuit against Google. This is just the latest in a number of government moves aimed at applying more pressure to big tech. Congress is also reviewing potential reforms to antitrust law in order to make it easier to target online platforms. During recent hearings, the House Judiciary Committee examined how these companies compete and highlighted individual competitors that struggle to compete or work with dominant firms.

But in its rush to legislate market fairness into the tech world, Congress seems to be missing the point: we need to protect competition, not the competitors themselves.

The Supreme Court warned about this nearly two decades ago. As the Court explained of the Sherman Act, "[t]he purpose of the Act is not to protect businesses from the working of the market; it is to protect the public from the failure of the market. The law directs itself not against conduct which is competitive, even severely so, but against conduct which unfairly tends to destroy competition itself."

In other words, competition law doesn't care what happens to small competitors; it cares that these companies have a chance to compete.

This is where Congress is treading on dangerous ground. Earlier this month, the House Judiciary Committee released an extensive report detailing its investigation into online markets. In this report, Congress finds itself worried about what happens to individual competitors, not competition writ large.

For example, at one point the report states that "Google's preferential treatment of its own verticals, as well as its direct listing of information in the 'OneBox' that appears at the top of Google search results, has the net effect of diverting traffic from competing verticals and jeopardizing the health and viability of their business." Because of this, the report recommends Congress overturn judicial precedent on attempted monopolization, which currently requires that plaintiffs show that the company has a dangerous probability of monopolization.

But Google does not seem to be preventing vertical search engines from competing, and as far as we can tell, it hasn't monopolized this service. Its "OneBox" gives users a quick answer to the question or product they are looking for. True, a competing vertical search engine may lose traffic, but consumers get more search results overall. If Google could monopolize vertical search markets, then that would effectively prohibit competitors from offering better, rival services. But so long as a firm like Google can't actually achieve that monopoly in the new market, the competitive constraints on its behavior still exist.

And even if a general search feature and anticompetitive conduct led to monopolization of vertical search, antitrust law would act as a check to protect competition. The Department of Justice's long-predicted antitrust lawsuit against Google is evidence of this. If Google has illegally acquired, attempted to acquire, or maintained a monopoly, then current antitrust law will ensure that any anticompetitive harms are corrected without hurting consumers. But if Google simply outcompeted rivals by offering a more efficient product, then competition policy should not, and currently does not, worry about the individual competitors who can't keep up.

Competition protects consumers and is critical in the online marketplace. In the fast-moving technology sector, some companies will not keep up. But Congress cannot lose focus by worrying about individual competitors. Instead, it must keep an eye out for anticompetitive behavior that prevents competition because the firm controls the entire market. In the end, if we artificially prop up less efficient or less innovative competitors, then it will be consumers who end up suffering.

Jeffrey Westling is a technology and innovation resident fellow at the R Street Institute.

Posted on BestNetTech - 29 April 2020 @ 12:01pm

Game Of (Internet) Life: How Social Media Reacts To Questionable News

On April 11, John Horton Conway, the Princeton mathematician and inventor of the "Game of Life," passed away from the coronavirus. Conway was known as a "magical genius" whose curiosity extended beyond just mathematics, and his passing was a devastating blow to the many who loved him.

Yet as news of his passing broke, an interesting scenario developed. Instead of a formal statement from the institution or his family, the news first appeared on Twitter. With no verifiable proof of the claim, many were left struggling to determine whether to believe the story.

This scenario, a questionable story that can be proven true or false in time, presents a challenge for combating the spread of false information online. As we have seen many times before on social media, stories are often shared prior to the information being verified. Unfortunately, this will increasingly occur, especially in an election year and during a pandemic. Therefore, examining how social media responded during this particular event can help us better determine the rules and patterns that drive the spread of information online.

Around 2:00 pm Eastern on Saturday, April 11, news started to spread on social media that John Horton Conway had died. The main source was a tweet from a fellow mathematician, who expressed his condolences and shared a story of Conway writing a blog post for April Fool's Day.

As the news began to spread, most individuals who saw the tweets accepted the information as true and began expressing condolences themselves.

However, some started to question the news, mainly because the original tweet cited no source verifying the claim. As time went on, people began to speculate that the story might indeed be a hoax, and many began deleting and retracting earlier tweets. A void existed where a source should have been.

Users filled that void with Wikipedia, a platform where any individual can make changes to the information on any given page. However, this led to a series of citation conflicts, in which some users would add the news to the page and others would then delete it, citing the lack of a source.

The confusion eventually died down as more individuals who knew John Horton Conway explained what had happened, and how they knew. Indeed, the account that first broke the news followed up later with an explanation of what happened. But in that brief window where questions arose, we received a glimpse into how social media reacts to questionable news. And as if discovering the rules to a "Game of Misinformation," this teaches us a few important lessons about user behavior and how misinformation spreads over time.

First, most users quickly trusted the initial reports as the information filtered in. This is to be expected: research has shown that individuals tend to trust those in their social networks. And indeed, the mathematician whose tweet was the primary source, while not the closest person to the deceased, was in the same community. In other words, what he said had weight. Further, because the tweet linked to an article in Scientific American, users may have made a connection between the news and the article, even though the tweet specified that this was not the case.

Because of this level of trust within networks, individuals must carefully consider the content and the context in which they share information. Rushing to post breaking news can cause significant harm when that information is incorrect. At the same time, presentation can also have a drastic impact on how the reader digests the information. In this case, linking to the Scientific American story provided interesting context about the man behind the name, but it also could give the reader the impression that the article supported the claim that he had died. That is not to say that any tweets in this situation were hasty or ill-conceived, but individuals must remain mindful of how the information they share online is presented and how it may be perceived by the audience.

Second, people do read comments and replies. The original tweet or social media post may receive the most attention, but many users will scroll through the comments, especially those who posted the original material. This leads to two key conclusions. First, users should critically examine information and wait for additional verification before accepting assertions as truth. Second, when information seems incorrect, or at least unverified, users can and should engage with the content to point out the discrepancy. This can mean the difference between a false story reaching 1,000 people or 1,000,000 people before the information is verified or disproven. Again, while this will not stop the spread of false information outright, it can lead to retractions and a general awareness among other users, which will "flatten the misinformation curve," so to speak.

Finally, when a void of sources exists, individuals may try to use other mediums or hastily reported news to bolster their point of view. In this case, so-called "edit wars" developed on John Conway's Wikipedia page, with some writing that he had died while others removed the information. While it is impossible to say whether the same individuals who edited the Wikipedia page also used it as evidence to support the original tweet, it does highlight how easy it could be to use a similar method in the future. Users often have to rely on the word of a small number of individuals in the hours following the release of a questionable story. When this is the case, some may try to leverage the implicit trust we have in other institutions to bolster their claims and arguments. In this case, it was Wikipedia, but it could be others. Users must carefully consider the possible biases or exploits that exist with specific sources.

Like Conway's Game of Life, there are patterns to how information spreads online. Understanding these patterns and the rules by which false information changes and grows will be critical as we prepare for the next challenge. Sadly, the story that spread earlier this month turned out to be true, but the lessons we can learn from it can be applied to similar stories moving forward.

Jeffrey Westling is a technology and innovation policy fellow at the R Street Institute, a free-market think tank based in Washington, D.C.

Posted on BestNetTech - 9 April 2020 @ 12:37pm

The EARN IT Act Creates A New Moderator's Dilemma

Last month, a bipartisan group of U.S. senators unveiled the much discussed EARN IT Act, which would require tech platforms to comply with recommended best practices designed to combat the spread of child sexual abuse material (CSAM) or no longer avail themselves of Section 230 protections. While these efforts are commendable, the bill would cause significant problems.

Most notably, the legislation would create a Commission led by the Attorney General with the authority to draw up a list of recommended best practices. Many have rightly explained that AG Barr will likely use this new authority to craft best practices that effectively prohibit end-to-end encryption. However, less discussed is the recklessness standard the bill adopts. This bill would drastically reduce free speech online because it eliminates the traditional moderator’s dilemma and instead creates a new one: either comply with the recommended best practices, or open the legal floodgates.

Prior to the passage of the Communications Decency Act in 1996, under common law intermediary liability, platforms could only be held liable if they had knowledge of the infringing content. This meant that if a platform couldn’t survive litigation costs, it could simply choose not to moderate at all, avoiding the knowledge that would trigger liability. While not always a desirable outcome, this did provide legal certainty for smaller companies and start-ups that they wouldn’t be litigated into bankruptcy. This dilemma (moderate and risk liability, or don’t moderate at all) was eventually resolved thanks to Section 230 protections, which prevent companies from having to make that choice.

However, the EARN IT Act changes that equation in two key ways. First, it amends Section 230 by allowing civil and state criminal suits against companies that do not adhere to the recommended best practices. Second, for the underlying federal crime (which Section 230 doesn’t affect), the bill would change the scienter requirement from actual knowledge to recklessness. What does this mean in practice? Currently, under existing federal law, platforms must have actual knowledge of CSAM on their service before any legal requirement goes into effect. So if, for example, a user posts material that could be considered CSAM but the platform is not aware of it, the platform can’t be guilty of illegally transporting CSAM. Platforms must remove and report content when it is identified to them, but they are not held liable for any and all content on the website. However, a recklessness standard turns this dynamic on its head.

What actions are “reckless” is ultimately up to the jurisdiction, but the Model Penal Code can provide a general idea of what the standard entails: a person acts recklessly when he or she “consciously disregards a substantial and unjustifiable risk that the material element exists or will result from his conduct.” What’s worse, the bill opens the platform’s actions to civil cases. Federal criminal enforcement normally targets the really bad actors, and companies that comply with reporting requirements will generally be immune from liability. However, with these changes, if a user posts material that could potentially be considered CSAM, despite no knowledge on the part of the platform, civil litigants could argue that the company’s moderation and detection practices, or lack thereof, constituted a conscious disregard of the risk that CSAM would be shared by users.

When the law introduces ambiguity into liability, companies tend to err on the side of caution. In this case, that means the removal of potentially infringing content to ensure they cannot be brought before a court. For example, in the copyright context, a Digital Millennium Copyright Act safe harbor exists for internet service providers (ISPs) that “reasonably implement” policies for terminating repeat infringers on their service in “appropriate circumstances.” However, courts have refused to apply that safe harbor when a company didn’t terminate enough subscribers. This uncertainty about whether a safe harbor applies will undoubtedly lead ISPs to act on more complaints, ensuring they cannot be liable for the infringement. Is it “reckless” for a company not to investigate postings from an IP address if other postings from that IP address were CSAM? What if the IP address belongs to a public library with hundreds of daily users?

This ambiguity will likely force platforms to moderate user content and over-remove legitimate content to ensure they cannot be held liable. Large firms that have the resources to moderate more heavily and that can survive an increase in lawsuits may start to invest the majority of their moderation resources into CSAM out of an abundance of caution. As a result, this would leave fewer resources to target and remove other problematic content such as terrorist recruitment or hate speech. Mid-sized firms may end up over-removing user content that in any way features a child, or limiting posting to trusted sources, to insulate themselves from potential lawsuits that could cripple the business. And small firms, which likely can’t survive an increase in litigation, could ban user content entirely, ensuring nothing appears on the website without vetting. These consequences, and the general burden on the First Amendment, are exactly the type of harms that drove courts to adopt a knowledge standard for online intermediary liability, ensuring that the free flow of information was not unduly limited.

Yet the EARN IT Act ignores this. Instead, the bill assumes that companies will simply adhere to the best practices and therefore retain Section 230 immunity, avoiding these bad outcomes. After all, who wouldn’t want to comply with best practices? In reality, this could force companies to choose between vital privacy protections like end-to-end encryption and litigation. The fact is there are better ways to combat the spread of CSAM online that don’t require platforms to remove key privacy features for users.

As it stands now, the EARN IT Act solves the moderator’s dilemma by creating a new one: comply, or else.

Jeffrey Westling is a technology and innovation policy fellow at the R Street Institute, a free-market think tank based in Washington, D.C.

Posted on BestNetTech - 28 February 2019 @ 01:31pm

Deception & Trust: A Deep Look At Deep Fakes

With recent focus on disinformation and “fake news,” new technologies used to deceive people online have sparked concerns among the public. While in the past, only an expert forger could create realistic fake media, deceptive techniques using the latest research in machine-learning allow anyone with a smartphone to generate high-quality fake videos, or “deep fakes.”

Like other forms of disinformation, deep fakes can be designed to incite panic, sow distrust in political institutions, or produce myriad other harmful outcomes. Because of these potential harms, lawmakers and others have begun expressing concerns about deep-fake technology.

Underlying these concerns is the superficially reasonable assumption that deep fakes represent an unprecedented development in the ecosystem of disinformation, largely because deep-fake technology can create such realistic-looking content. Yet this argument assumes that the quality of the content carries the most weight in the trust evaluation. In other words, people making this argument believe that the highly realistic content of a deep fake will induce the viewer to trust it — and share it with other people in a social network — thus hastening the spread of disinformation.

But there are several reasons to be suspicious of that assumption. In reality, deep-fake technology operates similarly to other media that people use to spread disinformation. Whether content will be believed and shared may not be derived primarily from the content’s quality, but from psychological factors that any type of deceptive media can exploit. Thus, contrary to the hype, deep fakes may not be the techno-boogeyman some claim them to be.

Deceiving with a deep fake.

When presented with any piece of information — be it a photograph, a news story, a video, etc. — people do not simply take that information at face value. Instead, individuals in today’s internet ecosystem rely heavily on their network of social contacts when deciding whether to trust content online. In one study, for example, researchers found that participants were more likely to trust an article when it had been shared by people whom the individual already trusted.

This conclusion comports with an evolutionary understanding of human trust. In fact, humans likely evolved to believe information that comes from within their social networks, regardless of its content or quality.

At a basic level, one would expect such trust would be unfounded; individuals usually try to maximize their fitness (the likelihood they will survive and reproduce) at the expense of others. If an individual sees an incoming danger and fails to alert anyone else, that individual may have a better chance of surviving that specific interaction.

However, life is more complex than that. Studies suggest that in repeated interactions with the same individual, a person is more likely to place trust in the other individual because, without any trust, neither party would gain in the long term. When members of a group can rely on other members, individuals within the group gain a net benefit on average.

Of course, a single lie or selfish action could help an individual survive an individual encounter. But if all members of the group acted that way, the overall fitness of the group would decrease. And because groups with more cooperation and trust among their members are more successful, these traits were more likely to survive on an aggregate level.

Humans today, therefore, tend to trust those close to them in a social network because such behavior helped the species survive in the past. For a deep fake, then, the apparent authenticity of the video may be less of a factor in deciding whether to trust that information than whether the individual trusts the person who shared it.

Further, even the most realistic, truthful-sounding information can fail to produce trust when the individual holds beliefs that contradict the presented information. The theory of cognitive dissonance contends that when an individual’s beliefs contradict his or her perception, mental tension — or cognitive dissonance — is created. The individual will attempt to resolve this dissonance in several ways, one of which is to accept evidence that supports his or her existing beliefs and dismiss evidence that does not. This leads to what is known as confirmation bias.

One fascinating example of confirmation bias in action came in the wake of President Donald Trump’s press secretary claiming that more people watched Trump’s inauguration than any other inauguration in history. Despite the video evidence and a side-by-side photo comparison of the National Mall indicating the contrary, many Trump supporters claimed that a photo depicting turnout on Jan. 20, 2017, showed a fuller crowd than it actually did because they knew it was a photo of Trump’s inauguration (Sean Spicer later clarified that he was including the television audience as well as the in-person audience, but the accuracy of that characterization is also debatable). In other words, the Trump supporters either convinced themselves that the crowd size was larger despite observable evidence to the contrary, or they knowingly lied to support — or confirm — their bias.

The simple fact is that it does not require much convincing to deceive the human mind. For instance, multiple studies have shown that rudimentary disinformation can generate inaccurate memories in the targeted individual. In one study, researchers were able to implant fake childhood memories in subjects by simply providing a textual description of an event that never occurred.

According to these theories, then, when it comes to whether a person believes a deep fake is real, the quality matters less than whether an individual has pre-existing biases or trusts the person who shared it. In other words, existing beliefs, not the perceived “realness” of a medium, drive whether new information is believed. And, given the diminished role that the quality of a medium plays in the believability calculus, more rudimentary methods — like using Photoshop to alter photographs — can achieve the same results as a deep fake in terms of spreading disinformation. Thus, while deep fakes present a challenge generally, deep fakes as a class of disinformation do not present an altogether new problem as far as believability is concerned.

Sharing Deep Fakes Online.

With the rise of social media and the fundamental change in how we share information, some worry that the unique characteristics of deep fakes could make them more likely to be shared online regardless of whether they deceive the target audience.

People share information — whether it be in written, picture or video form — online for many different reasons. Some may share it because it is amusing or pleasing. Others may do so because it offers partisan political advantage. Sometimes the sharer knows the information is false. Other times, the sharer does not know whether the information is accurate but simply does not care enough to correct the record.

People also tend to display a form of herd behavior in which seeing others share content drives the individual to share the content themselves. This allows disinformation to spread across larger platforms like Facebook or Twitter as the content builds up a base of sharing. The number of people who receive disinformation, then, can grow exponentially at a very rapid pace. As the popularity of a given piece of content increases, so too does its credibility as it reaches the edges of a network, exploiting the trust that individuals have in their social networks. And even if the target audience does not believe a given deep fake, widespread propagation of the content can still cause damage; simply viewing false content can reinforce beliefs that the user already has, even if the individual knows that the content is an exaggeration or a parody.

Deep fakes, in particular, present the audience with rich sound and video that engage the viewer. A realistic deep fake that can target the user’s existing beliefs and exploit his or her social ties, therefore, may spread rapidly online. But so, too, do news articles and simple image-based memes. Even without the richness of a deep fake, still images and written text can target the psychological factors that drive content-sharing online. In fact, image-based memes already spread at alarming rates due to their simplicity and the ease with which they convey information. And while herd-behavior tendencies will drive more people to share content, this applies to all forms of disinformation, not just deep fakes.

Currently, a video still represents an undeniable record of events for many people. But as this technology becomes more commonplace and the limitations of video become more apparent, the psychological factors above will drive trust and sharing. And the tactics that bad actors use to deceive will exploit these social patterns regardless of medium.

When viewed in this context, deep fakes are not some unprecedented challenge society cannot adapt to; they are simply another tool of disinformation. We should of course remain vigilant and understand that deep fakes will be used to spread disinformation. But we also need to consider that deep fakes may not live up to the hype.

Jeffrey Westling (@jeffreywestling) is a Technology and Innovation Research Associate at the R Street Institute.

Posted on BestNetTech - 30 January 2019 @ 12:05pm

Deep Fakes: Let's Not Go Off The Deep End

In just a few short months, “deep fakes” have begun striking fear into technology experts and lawmakers. Already there are legislative proposals, a law review article, national security commentaries, and dozens of opinion pieces claiming that this new deep fake technology — which uses artificial intelligence to produce realistic-looking simulated videos — will spell the end of truth in media as we know it.

But will that future come to pass?

Much of the fear of deep fakes stems from the assumption that this is a fundamentally new, game-changing technology that society has not faced before. But deep fakes are really nothing new; history is littered with deceptive practices — from Hannibal’s fake war camp to Will Rogers’ too-real impersonation of President Truman to Stalin’s disappearing of enemies from photographs. And society’s reaction to another recent technological tool of media deception — digital photo editing and Photoshop — teaches important lessons that provide insight into deep fakes’ likely impact on society.

In 1990, Adobe released the groundbreaking Adobe Photoshop to compete in the quickly-evolving digital photograph editing market. This technology, and myriad competitors that failed to reach the eventual popularity of Photoshop, allowed the user to digitally alter real photographs uploaded into the program. While competing services needed some expertise to use, Adobe designed Photoshop to be user-friendly and accessible to anyone with a Macintosh computer.

With the new capabilities came new concerns. That same year, Newsweek published an article called, “When Photographs Lie.” As Newsweek predicted, the consequences of this rise in photographic manipulation techniques could be disastrous: “Take China’s leaders, who last year tried to bar photographers from exposing [the leaders’] lies about the Beijing massacre. In the future, the Chinese or others with something to hide wouldn’t even worry about photographers.”

These concerns were not entirely without merit. Fred Ritchin, a former picture editor of The New York Times Magazine and now Dean Emeritus of the International Center of Photography School, has continued to argue that trust in photography has eroded over the past few decades thanks to photo-editing technology:

There used to be a time when one could show people a photograph and the image would have the weight of evidence—the “camera never lies.” Certainly photography always lied, but as a quotation from appearances it was something viewers counted on to reveal certain truths. The photographer’s role was pivotal, but constricted: for decades the mechanics of the photographic process were generally considered a guarantee of credibility more reliable than the photographer’s own authorship. But this is no longer the case.

It is true that the “camera never lies” saying can no longer be sustained — the camera can and often does lie when the final product has been manipulated. Yet the crisis of truth that Ritchin and Newsweek predicted has not come to pass.

Why? Because society caught on and adapted to the technology.

Think back to June 1994, when Time magazine ran O.J. Simpson’s mugshot on its cover. Time had drastically darkened the mugshot, making Simpson appear much darker than he actually was. What’s worse, Newsweek ran the unedited version of the mugshot, and the two magazines sat side by side on supermarket shelves. While Time defended this as an artistic choice with no intended racial implications, the obviously edited photograph triggered a massive public outcry.

Bad fakes were only part of the growing public awareness of photographic manipulation. For years, fashion magazines have employed deceptive techniques to alter the appearance of cover models. Magazines with more attractive models on the cover generally sell more copies than those featuring less attractive ones, so editors retouch photos to make them more appealing to the public. Unfortunately, this practice created an unrealistic image of beauty in society and, once this was discovered, health organizations began publicly warning about the dangers this phenomenon caused — most notably eating disorders. And due to the ensuing public outcry, families across the country became aware of photo-editing technology and what it was capable of.

Does societal adaptation mean that no one falls for photo manipulation anymore? Of course not. But instead of prompting the death of truth in photography, awareness of the new technology has encouraged people to use other indicators — such as trustworthiness of the source — to make informed decisions about whether an image presented is authentic. And as a result, news outlets and other publishers of photographs have gone on to establish policies and make decisions regarding the images they use with an eye toward fostering their audience’s trust. For example, in 2003, the Los Angeles Times quickly fired a photographer who had digitally altered Iraq War photographs because the editors realized that publishing a manipulated image would diminish readers’ perception of the paper’s veracity.

No major regulation or legislation was needed to prevent the apocalyptic vision of Photoshop’s future; society adapted on its own.

Now, however, the same “death of truth” claims — mainly in the context of fake news and disinformation — ring out in response to deep fakes as new artificial-intelligence and machine-learning technology enter the market. What if someone released a deep fake of a politician appearing to take a bribe right before an election? Or of the president of the United States announcing an imminent missile strike? As Andrew Grotto, International Security Fellow at the Center for International Security and Cooperation at Stanford University, predicts, “This technology … will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions.” Perhaps even more problematic, if society has no means to distinguish a fake video from a real one, any person could have plausible deniability for anything they do or say on film: It’s all fake news.

But who is to say that societal response to deep fakes will not evolve similarly to the response to digitally edited photographs?

Right now, deep fake technology is far from flawless. While some fakes may appear incredibly realistic, others have glaring imperfections that can alert the viewer to their forged nature. As with Photoshop and digital photograph editing before it, poorly made fakes generated through cellphone applications can educate viewers about the existence of this technology. When the public becomes aware, the harms posed by deep fakes will fail to materialize to the extent predicted.

Indeed, new controversies surrounding the use of this technology are likewise increasing public awareness about what the technology can do. For example, the term “deep fake” actually comes from a Reddit user who began using this technology to generate realistic-looking fake pornographic videos of celebrities. This type of content rightfully sparked outrage as an invasion of the depicted person’s privacy rights. As public outcry began to ramp up, the platform publicly banned the deep fake community and any involuntary pornography from its website. As with the public outcry that stemmed from the use of Photoshop to create an unrealistic body image, the use of deep fake technology to create inappropriate and outright appalling content will, in turn, make the public more aware of the technology, potentially limiting harms.

Perhaps most importantly, many policymakers and private companies have already begun taking steps to educate the public about the existence and capabilities of deep fakes. Notable lawmakers such as Sens. Mark Warner of Virginia and Ben Sasse of Nebraska have recently made deep fakes a major talking point. Buzzfeed released a public service announcement from “President Obama,” which was in fact a deep fake video with a voice-over from Jordan Peele, to raise awareness of the technology. And Facebook recently announced that it is investing significant resources into deep fake identification and detection. With so much focus on educating the public about the existence and uses of this technology, it will be more difficult for bad actors to successfully spread harmful deep fake videos.

That is not to say deep fakes do not pose any new harms or threats. Unlike Photoshop, anyone with a smartphone can use deep fake technology, meaning that a larger number of deep fakes may be produced and shared. And unlike during the 1990s, significantly more people use the internet to share news and information today, facilitating the dissemination of content across the globe at breakneck speeds.

However, we should not assume that society will fall into an abyss of deception and disinformation if we do not take steps to regulate the technology. There are many significant benefits that the technology can provide, such as aging photos of children missing for decades or creating lifelike versions of historical figures for children in class. Instead of rushing to draft legislation, lawmakers should look to the past and realize that deep fakes are not some unprecedented problem. Instead, deep fakes simply represent the newest technique in a long line of deceptive audiovisual practices that have been used throughout history. So long as we understand this fact, we can be confident that society will come up with ways of mitigating new harms or threats from deep fakes on its own.

Jeffrey Westling is a Technology and Innovation policy associate at the R Street Institute, a free-market think tank based in Washington, D.C.
