Is Public Relations Prepared for Deep Fakes?

The term “deepfake” sounds like a play call in football where the quarterback fakes a handoff and then throws deep to his wide receiver. I learned in a presentation during my Syracuse Immersion this spring that a deepfake is something much more sinister, one that can have big impacts on the communications industry, from journalism to advertising and public relations.

Simply put, deepfakes are fabricated videos that have been produced and edited to make a person appear to say or do something they never did. For example, video of political candidates can be manipulated to show them saying something they didn’t, and because the technology has become so sophisticated, the manipulation can be hard to detect. Creators of deepfakes use face-swapping, which means they edit one person’s face onto another person’s head (Villasenor, 2019). The most common method of creating deepfakes is with free AI software (Fagan, 2018). These videos rely on “deep learning,” or artificial neural networks.

How can you spot a deepfake? It can be difficult. Just check out this example from BuzzFeed that uses video of former President Obama. It sounds and looks real.

Deepfake example.

According to John Villasenor (2019), these deepfake videos exploit our inclination to trust the reliability of evidence we witness with our own eyes and therefore can turn fiction into fact.

When you think about it, a deepfake can ruin a candidate’s chances of getting elected, ruin a person’s career or ruin an entire company if the video shows your CEO doing or saying something inflammatory or detrimental. In our fast-paced digital world, it’s easy for anyone to obtain and manipulate video of deepfake targets.

As our presenter and media law professor Nina Brown (2019) pointed out, deepfake videos are not synonymous with fake videos. Fake videos, like the scenes in Forrest Gump that place the character amid events of the ’60s and ’70s, are harmless and not meant to deceive.

Imagine if someone made a deepfake video of your CEO talking negatively about your company, sharing fake financial information or appearing at a crime scene. Talk about a nightmare for public relations professionals. Such a video could ruin the company or send its stock crashing. How would you, as a PR practitioner, deal with such a crisis? In my opinion, we need to include these scenarios in our crisis plans, and we should work with organizations and companies to create awareness of how technology can harm our reputations.

Brown (2019) also discussed the implications of deepfakes under current law: the First Amendment protects freedom of speech, and digital and social media platforms have no legal duty to stop the spread of fake news. As citizens, though, we have a duty to search for the truth. The problem is that most people are not media savvy enough to detect fake news or deepfake videos. That’s why it’s critical for PR professionals to help create awareness.

According to Brown (2019), solutions currently in development include algorithms to detect deepfakes, but they must be implemented at the top of the distribution channel, before videos go viral or are shared. Another solution in the works is creating new laws for digital and social media, along with applying current doctrines such as false light, defamation and right of publicity, to name a few. Public awareness is also critical: publics need to understand that deepfakes exist and should not take what they see on the internet at face value, but rather be critical of information.

The bigger issue is that the internet is worldwide, and other countries do not have to abide by our laws. Also, under current law, internet service providers cannot be held accountable for third-party content or its sharing. I believe, though, that ISPs must also take steps to deter or ban such content from their platforms. Facebook and Twitter have implemented some policies and do delete content that violates them. Villasenor (2019) wrote that deepfake detection techniques will never be perfect and that even the best detection advances will not be able to keep pace with deepfake technology.

The bottom line is that AI is adding more complications to an already complicated digital and social media world, and as PR professionals we must be prepared to adapt and manage these issues and help create solutions.

References

Brown, N. (2019, March 29). Deepfakes and the law [Lecture]. Syracuse University.

BuzzFeedVideo. (2018, April 17). You won’t believe what Obama says in this video [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=cQ54GDm1eL0

Fagan, K. (2018, April 17). A viral video that appeared to show Obama calling Trump a ‘dips—’ shows a disturbing new trend called ‘deepfakes’. Business Insider. Retrieved from https://www.businessinsider.com/obama-deepfake-video-insulting-trump-2018-4

Villasenor, J. (2019, February 14). Artificial intelligence, deepfakes, and the uncertain future of truth. Brookings. Retrieved from https://www.brookings.edu/blog/techtank/2019/02/14/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/

Deepfake photo: Shutterstock.

Computer-Generated Influencers: Can they Benefit Public Relations?

For two years, Lil Miquela, a 19-year-old model from California, existed on Instagram as a social media influencer with more than 1 million followers (Lil Miquela, Instagram, 2018)—that is, before she revealed in April that she is not human but actually a computer-generated image, or “virtual” person. In a publicity stunt, another computer-generated woman named Bermuda supposedly took over Miquela’s account and outed her. Miquela then wrote on Instagram that she is not human but a robot (Yurieff, 2018).

Posing with real people does make her look more human.
Photo from Lil Miquela Instagram https://www.instagram.com/lilmiquela/

The success of this CGI influencer raises the question: are virtual personalities the next big thing in social media and technology? How will these computer-generated personalities impact business? How will they impact the public relations profession?

I think, first and foremost, that the creators of Miquela misled the public by not disclosing that she is computer generated. Sure, most of us could tell she’s not real, but from a public relations standpoint, my concern is that it violated the professional code of ethics that holds us accountable for truth, honesty and transparency. I would have revealed up front that Miquela was CGI. The story is interesting enough without misleading the public.

It’s possible she could be human in this photo.
Photo from Lil Miquela Instagram https://www.instagram.com/lilmiquela/

However, I do believe this was a great marketing tool and a creative way to showcase the talents of Brud, the company credited with creating Miquela along with her account hijacker, Bermuda.

Brud created a personality so popular that even high-end clothing and apparel brands partnered with Miquela to promote their products. The concern is whether she was paid for those endorsements.

As a Wired article (Katz, 2018) noted, this presents a challenge for the public relations profession: who is responsible for disclosing that an influencer was paid? The Federal Trade Commission requires influencers to include a highly visible hashtag such as #ad, #paid or #sponsored on their social media posts to disclose that they were paid for an endorsement. The Wired article asks: if the influencer is a robot, who is responsible? I would argue the company that created the CGI. Also, was she powered by machine learning (AI), or was a human posting for her?

Another concern with computer-generated influencers is the question of copyright and celebrity rights. I’m sure we will have further discussions about the laws surrounding this topic.

CGI personalities can also benefit the public relations profession. They could use their popularity to build awareness, encourage people to donate to nonprofits and causes, act as influencers to encourage change or even serve as online spokespersons. There is a multitude of positive uses.

She does look computer-generated in this photo.
Photo from Lil Miquela Instagram https://www.instagram.com/lilmiquela/

One final thought: if you think about it, Disney World (and Disneyland) may have started this evolution decades ago with the introduction of animatronics in its theme parks. It’s not on the scale of HBO’s Westworld or what we are seeing today with AI, but I do see a connection.

I’m interested to hear from other public relations professionals. How do you think AI and CGI influencers will impact PR?

References

Katz, M. (2018, May 1). CGI ‘influencers’ like Lil Miquela are about to flood your feeds. Wired. Retrieved from https://www.wired.com/story/lil-miquela-digital-humans/

Lil Miquela. (n.d.). [Instagram profile]. Retrieved July 24, 2018, from https://www.instagram.com/lilmiquela/

Yurieff, K. (2018, June 25). Instagram star isn’t what she seems. But brands are buying in. CNN Money. Retrieved from https://money.cnn.com/2018/06/25/technology/lil-miquela-social-media-influencer-cgi/index.html

Featured Photo Credit:  metamorworks/Shutterstock.com