The term “deepfake” sounds like a play call in football: the quarterback fakes a run, then throws deep to his wide receiver. But as I learned in a presentation at my Syracuse Immersion this spring, a deepfake is something much more sinister, with big implications for our communications industry, from journalism to advertising and public relations.
Simply put, deepfakes are fabricated videos that have been produced and edited to make a person appear to say or do something they never did. In a political campaign, for example, video of a candidate can be manipulated to show them saying something they never said, and the fakery can be hard to detect because the technology has become so sophisticated. Deepfake creators use face-swapping, editing one person’s face onto another person’s head (Villasenor, 2019). The most common way to create deepfakes is with free AI software (Fagan, 2018). These videos are built on “deep learning,” or artificial neural networks.
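To make the technology a little less abstract, here is a rough sketch of the idea behind the face-swapping approach described above: a shared encoder learns a generic representation of faces, each person gets their own decoder, and a swap is produced by decoding one person’s expression through the other person’s decoder. This is only an illustrative toy (written in PyTorch, with made-up layer sizes), not the actual software any deepfake creator uses.

```python
# Minimal sketch of the shared-encoder, two-decoder autoencoder idea behind
# face-swap deepfakes. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # latent "face code"
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per person

# Training (not shown) reconstructs each person's faces with their own decoder.
# At swap time, a frame of person A is re-rendered as person B:
face_a = torch.rand(1, 3, 64, 64)             # placeholder 64x64 face crop
swapped = decoder_b(encoder(face_a))          # person A's expression, person B's face
```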
How can you spot a deepfake? It can be difficult. Just check out this example from BuzzFeed featuring former President Obama; it sounds and looks real.
According to John Villasenor (2019), these deepfake videos exploit our inclination to trust the reliability of evidence we witness with our own eyes and therefore can turn fiction into fact.
When you think about it, a deepfake can ruin a candidate’s chances of getting elected, destroy a person’s career, or take down an entire company if the video shows your CEO doing or saying something inflammatory or detrimental. In our fast-paced digital world, it’s easy for anyone to obtain and manipulate video of deepfake targets.
As our presenter and media law professor Nina Brown (2019) pointed out, deepfake videos are not synonymous with fake videos. Fake videos, like the Forrest Gump scenes that place the title character in events from the 1960s and ’70s, are harmless and not meant to deceive.
Imagine if someone made a deepfake video of your CEO talking negatively about your company, sharing false financial information, or appearing at a crime scene. Talk about a nightmare for public relations professionals. Such a video could ruin the company or send its stock crashing. How would you, as a PR practitioner, deal with such a crisis? In my opinion, we need to include these scenarios in our crisis plans, and we should work with organizations and companies to create awareness of how technology can harm our reputations.
Brown (2019) also discussed the implications of deepfakes under current law: the First Amendment protects freedom of speech, and digital and social media platforms have no legal duty to stop the spread of fake news. As citizens, we have a duty to search for the truth, but the problem is that most people are not media savvy enough to detect fake news or deepfake videos. That’s why it’s critical for PR professionals to help create awareness.
According to Brown (2019), solutions currently in development include algorithms to detect deepfakes, but they must be applied at the top of the distribution channel, before a video goes viral or is widely shared. Another solution in the works is writing new laws for digital and social media while applying existing causes of action such as false light, defamation, and right of publicity, to name a few. Public awareness is also critical: publics need to understand that deepfakes exist and should not take what they see on the internet at face value, but rather evaluate it critically.
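For the curious, the detection algorithms Brown described generally boil down to a classifier that scores each face or frame as real or fake. The sketch below is a hypothetical, bare-bones illustration of that idea (again in PyTorch, with made-up names and sizes), not any platform’s actual system.

```python
# Toy illustration of a deepfake detector: a small convolutional network that
# labels a face crop as real (index 0) or fake (index 1). Illustrative only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),   # two scores: "real" vs. "fake"
)

frame = torch.rand(1, 3, 64, 64)          # placeholder face crop from a video frame
probs = detector(frame).softmax(dim=1)    # probabilities over [real, fake]
print("probability the frame is fake:", probs[0, 1].item())
```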
The bigger issue is that the internet is worldwide, and other countries do not have to abide by our laws. Also, under current law, internet service providers cannot be held accountable for third-party content or its sharing. I believe, though, that ISPs must also take steps to deter or ban such content from their platforms. Facebook and Twitter have implemented some policies and do delete content that violates them. Villasenor (2019) wrote that deepfake detection techniques will never be perfect, and even the best detection advances will not keep up with the pace of deepfake technology.
The bottom line is that AI is adding more complications to an already complicated digital and social media world, and as PR professionals we must be prepared to adapt and manage these issues and help create solutions.
References
Brown, N. (2019, March 29). Deepfakes and the law [Lecture]. Syracuse University.
BuzzFeedVideo. (2018, April 17). You won’t believe what Obama says in this video [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=cQ54GDm1eL0
Fagan, K. (2018, April 17). A viral video that appeared to show Obama calling Trump a ‘dips—’ shows a disturbing new trend called ‘deepfakes’. Business Insider. Retrieved from https://www.businessinsider.com/obama-deepfake-video-insulting-trump-2018-4
Villasenor, J. (2019, February 14). Artificial intelligence, deepfakes, and the uncertain future of truth. Brookings Institution. Retrieved from https://www.brookings.edu/blog/techtank/2019/02/14/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/
Deepfake photo: Shutterstock.