

Jun. 20, 2019

Federal law is inadequate to address ‘deep fakes’


John H. Minan

Emeritus Professor of Law, University of San Diego School of Law

Professor Minan is a former attorney with the Department of Justice in Washington, D.C., and the former chairman of the San Diego Regional Water Quality Board.

In Packingham v. North Carolina, 137 S. Ct. 1730 (2017), the U.S. Supreme Court addressed for the first time the relationship between the First Amendment and social-networking websites. It invalidated a North Carolina statute that barred convicted sex offenders from accessing social-networking websites, including Facebook. The court reasoned that under the First Amendment everyone should have access to places where they can speak and listen. But social media platforms may also be used for more sinister and harmful purposes.

"Deep fake" technologies may be used to create fake videos and audios of real people saying and doing things they never said or did. The crudely doctored video of House Speaker Nancy Pelosi, which made her appear confused or drunk, recently posted on Facebook is simply the start of things to come. It has been viewed more than 2 million times. The technology is rapidly developing so that detecting the fakes is becoming more and more difficult. We are entering an era where realistic deep fake videos and audios will allow virtually anything and everything to be challenged as "fake." The risk to our democracy is profound.

One area of special concern is public confidence in our elections. The Russian-controlled Internet Research Agency, for example, used a variety of social media platforms to interfere with the 2016 presidential election. In his report on Russian interference, Special Counsel Robert Mueller found that this meddling occurred "in a sweeping and systematic fashion." The IRA's sinister purpose was to provoke political and social discord through "information warfare." The actual impact is impossible to determine, but it is estimated that the Russian disinformation and recruitment campaign reached tens of millions of Americans.

By deep fake technology standards, the IRA's efforts in 2016 were relatively unsophisticated. The IRA used fictitious social media accounts and personas to post false messages on Facebook, YouTube, Twitter, Tumblr and Instagram. Russian specialists falsely posed as anti-immigration groups, Tea Party activists, Black Lives Matter protestors, and other American social and political activists. In one typical message posted on Facebook, the defendants used a false persona to post the following message on "Florida for Trump," a real Facebook account: "I'm a member of Being Patriotic online community ... If we lose Florida, we lose America. We can't let that happen, right? What about organizing a YUGE pro-Trump flash mob in every Florida town? We are currently reaching out to local activists and we've got the folks who are okay to be in charge of organizing their events almost everywhere in FL. However, we still need your support. What do you think about that? Are you in?" Trump campaign affiliates used those false tweets, posts, and other political content created by the IRA to promote pro-Trump rallies.

The fact that the truth is already under constant attack by claims of "fake news" adds to mistrust and polarization. At a Veterans of Foreign Wars Convention in Missouri, President Donald Trump told the audience that they shouldn't believe what they are reading and seeing about his administration: "Just remember, what you are seeing and what you are reading is not what is happening."

Federal law currently is inadequate to address the problem. The Communications Decency Act, 47 U.S.C. Section 230(c)(1), provides immunity from civil liability for providers of interactive computer services, including social media providers, who publish information provided by third-party users. In short, no basis for legal accountability exists. One solution is to amend the CDA to condition immunity on a platform's identifying and removing "harmful" fake videos. Critics of this approach will undoubtedly argue that changing the law would weaken the safe harbor provisions of Section 230 and place unnecessary burdens on internet providers to take proactive action.

With respect to foreign interference in our elections, the criminal law is another option. But no publicly available evidence exists that the Mueller criminal indictments have thwarted or stopped Russian interference. Moreover, President Trump's failure to condemn foreign interference during his recent ABC News interview with George Stephanopoulos is likely to be seen as a "green light" invitation to use more sophisticated methods. Overt and covert technological countermeasures and economic sanctions by the federal government are likely to be more successful in dealing with foreign interference.

Social media platforms must also proactively adopt industry-wide technological standards for identifying and removing "harmful" fake videos and audios. Because the social media platforms are privately owned, First Amendment claims by users are not likely to succeed unless the platforms are treated as "state actors." Although the Supreme Court in Packingham analogized cyberspace to the 21st century equivalent of "public streets and parks," it did not hold that privately owned platforms are state actors for First Amendment purposes. Thus, the terms-of-service agreements that allow users access provide contract authority to deal with the problem. Clearly stated "post-and-remove" platform policies, including source verification, and increased technological and human methods of identifying deep fake videos and posts are also needed. Advertisers also have a role to play in ensuring that social media platforms take their responsibilities seriously.

The risk to our democracy from deep fake videos and audios is real and growing. It is encouraging that the House Intelligence Committee is currently holding hearings on the problem and has warned that deep fakes could have a disastrous effect on the 2020 election. But the owners and operators of social media platforms must act now, and not wait for the federal government to act.
