Opening Pandora's box: should deep fake technology be regulated?

Surfing the internet means coming across a wide variety of information, images and videos. It is crucial that we are able to distinguish genuine content from fake, and a strong legal framework could contribute to the effective protection of our rights online. The desire to stay relevant and share the latest trend or newsworthy story should not trump our need for truth and online safety.


Imagine now that you come across a video in which former US President Barack Obama is supposedly sharing his 'wisdom'. You are excited and you press 'share'. What if you were told that this was never Barack Obama, but merely a fake video created to mimic him accurately and deceive online viewers just like yourself? Would you believe it? Unfortunately, this is not fiction. It is a possible application of a new kind of technology called 'deep fake', and its effective regulation is an issue that needs to be properly examined.


What is Deep Fake Technology?

Deep fake technology is one of the most complex applications of a technological advancement called Generative Adversarial Networks. In short, Generative Adversarial Networks (GANs) are an artificial intelligence (AI) method in which two neural networks, a generator and a discriminator, are trained against each other. This technique was introduced by Ian J. Goodfellow and his colleagues in 2014 and has inspired AI researchers ever since. Deep fake technology can be used to create convincing videos. Public figures such as celebrities or politicians are the usual targets of deep fake videos, often depicted discussing a certain topic that was scripted by programmers.


An interesting example of a video created using this type of technology is the one featuring Mark Zuckerberg. This video circulated on Instagram, where his digital double was talking about a 'man with total control of billions of people's stolen data.'[1] It must be noted that the video was made for a documentary festival in the UK. A more recent example of a widely broadcast deep fake video is the one featuring Queen Elizabeth II delivering her Christmas speech on Channel 4, which caused controversy. The video was intended as a warning to the public about the abuse of technology. It also depicted the Queen expressing her 'honest opinion' on various events of 2020, such as the departure of Prince Harry and Meghan Markle from the UK, and it ended with a 'dance' from the Queen.


It should be acknowledged that GAN technologies have other applications, which fortunately are not as alarming. These include photo-editing mobile applications that transform photos to resemble certain painting styles, some of which use this type of technology to generate images in a range of artistic styles. Translating satellite images into Google Maps views is another implementation of this neural network, and is considered a breakthrough in navigation and logistics. It is helpful to explain how the technology works: a GAN learns the patterns in a set of training data and then uses those patterns to generate new outputs. The generated output is made to resemble original data (data created by a human). During training, the discriminator repeatedly attempts to distinguish whether a given sample is real (made by a human) or fake (generated by the machine), and the generator improves until its outputs reach high accuracy. The unique capability of this model is the generation of new outputs, such as images and videos, without the involvement of a human creator.
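The adversarial loop described above can be sketched in miniature. The toy example below is a deliberate simplification, not a real GAN: both 'networks' are single-parameter models, the data are one-dimensional numbers rather than images, and all names and learning rates are illustrative. Still, it follows the same pattern: a discriminator learns to tell real samples from generated ones, while a generator learns to fool it.

```python
import math
import random

random.seed(0)

# "Real" data: samples from a normal distribution centred on 4.
def real_sample():
    return random.gauss(4.0, 1.0)

class Generator:
    """Turns random noise into a sample by adding a learnable shift."""
    def __init__(self):
        self.shift = 0.0

    def sample(self):
        return random.gauss(0.0, 1.0) + self.shift

class Discriminator:
    """1-D logistic classifier: estimates P(sample is real)."""
    def __init__(self):
        self.w, self.b = 0.0, 0.0

    def prob_real(self, x):
        return 1.0 / (1.0 + math.exp(-(self.w * x + self.b)))

g, d = Generator(), Discriminator()
lr = 0.05

for step in range(3000):
    # Discriminator step: push P(real) toward 1 on real data and
    # toward 0 on generated data (cross-entropy gradient by hand).
    for x, label in ((real_sample(), 1.0), (g.sample(), 0.0)):
        grad = d.prob_real(x) - label
        d.w -= lr * grad * x
        d.b -= lr * grad

    # Generator step: nudge the shift so the discriminator becomes
    # more likely to call the generated sample "real".
    p = d.prob_real(g.sample())
    g.shift -= lr * (p - 1.0) * d.w

# After training, the generator's shift should drift toward the real
# data's mean (about 4), so its samples mimic the real distribution.
```

The alternation is the key design point: neither model is trained to completion on its own; each update gives the other a slightly harder task, which is what drives the generator's outputs toward realism.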


Possible Legal Paths

From a legal perspective, deep fakes can be divided into four categories. The first two, 'revenge porn and political deep fakes', can be defined as 'hard' cases.[2] The other two, those 'created in commercial or creative content', can be viewed as 'socially beneficial' and are therefore less alarming for our online safety.[3] Moreover, it has been aptly suggested that 'deepfake technology has earned its reputation as a threat to our already vulnerable information ecosystem'.[4] So, one might wonder what the options are for regulating deep fake technology effectively, so as to prevent its exploitation by malicious actors.


Notably, there are a number of legal paths available to individuals who have been targets of the unlawful use of deep fake technology. One of them is copyright law. The possible routes for an individual seeking some form of compensation from the creator of a deep fake video will vary depending on the circumstances. Where the deep fake took advantage of copyrighted material (such as photographs used in the making of a deep fake video without the consent of the copyright owner), a potential solution in terms of copyright law could be 'opening the door to monetary damages and a notice-and-takedown procedure that can result in removal of the offending content'.[5]


It should be noted that, in response, the creator of such a video could argue that the use was 'fair' according to the doctrine of fair dealing in the UK, or the doctrine of fair use in the US. Notably, no statutory definition of fair dealing exists; it has, however, been argued that fairness is 'a matter of fact, degree, and impression.' The significant question in these cases is how a fair-minded person would have dealt with the work in question. To answer it, the courts must examine in each case the fairness factors derived from case law.


Another possible solution is seeking protection through defamation law, in those cases where the material appears to harm the reputation of the targeted individual, whether a celebrity, politician or private person. Specifically, one could point out that in the UK, in comparison to other jurisdictions, 'a more conservative approach to the protection of publicity value and commercial magnetism embodied in the name, likeness or photograph of an individual' exists.[6] When examining the potential application of defamation law to deep fake disputes, it has been suggested that defamation could 'serve as a useful tool against non-commercial uses of deepfakes', especially in circumstances where there was 'a lowering of someone's image in the eyes of the digital public'.[7]


A similar legal path, in instances where the 'image' of a person has been harmed, is the right of publicity. This right is regarded 'as a property right in one's personality'.[8] However, not all countries offer legal protection under this right. Specifically, the right of publicity is available in the US but not in the UK, where protection in similar cases can be provided under intellectual property law, for example through the common law tort of passing off. The three elements that must be established in such cases are goodwill, misrepresentation and damage.


At present, in the UK 'no claims of passing off have been brought against deepfakes'.[9] However, 'celebrities have found success by using the tort of passing off to protect against unauthorised advertising and merchandising uses in the past', namely in Irvine v Talksport and Fenty and others v Arcadia Group Brands Ltd.[10] Consequently, passing off could be used by individuals to protect their 'commercial magnetism' from being exploited without permission in deep fake videos that involve them. For example, in the future advertisers could 'use deep fakes to send a positive message about their brand, thus conveying endorsement and satisfying the misrepresentation element' required in passing-off cases.[11]


A more controversial suggestion is the adoption of a complete ban on this kind of technology. However, it has been argued that a 'flat ban is not desirable', since 'digital manipulation is not inherently problematic'.[12] The problems arise from the purposes for which deep fake technology is used. It presents significant issues where it is used unethically (for example, to manipulate elections or damage national security), while its use for artistic purposes is generally harmless and even desirable.


On top of that, a complete ban is unconvincing because of its implications for the right to freedom of expression online. According to Article 10(1) of the European Convention on Human Rights, 'everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.' It has thus been suggested that banning deep fake technology would interfere with the exercise of this right, in the sense that it would constitute a form of censorship of online speech.


Final Thoughts

In conclusion, technology constantly challenges us and propels change in society. Lawmakers need to examine new technological advancements and enact regulations, or modify older ones, in order to address these developments effectively. It is important to stress that lawmakers should act as soon as changes emerge, since delayed regulation might cause irreversible damage to society. If action is not taken promptly, especially in the digital era, the difficulties originally presented can only be magnified.


Consequently, bearing in mind that the effective online enforcement of our rights is a particularly difficult task, unnecessary delay in regulating deep fake technology will only hinder our chances of ever achieving it. From our brief analysis it becomes apparent that deep fakes are not harmless, and their prompt regulation is therefore of paramount importance in order to prevent the dangers that emerge from their unlawful use.


So, what do you think? Are deep fakes really a 'menace on the horizon'?



Endnotes

[1] Lisa Eadicicco, 'There's a Fake Video Showing Mark Zuckerberg Saying He's in Control of 'Billions of People's Stolen Data,' as Facebook Grapples with Doctored Videos that Spread Misinformation' (Business Insider, June 2019).

[2] Edvinas Meskys, Aidas Liaudanskas, Julija Kalpokiene, Paulius Jurcys, 'Regulating Deep Fakes: Legal and Ethical Considerations' (2019), 5.

[3] ibid.

[4] Suyoung Baek, 'Free Speech in the Digital Age: Deepfakes and the Marketplace of Ideas', Honors Theses (PPE) Paper 42, University of Pennsylvania (2020), 2.

[5] Bobby Chesney, Danielle Citron, 'Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security' (2019) 107 Calif L Rev 1753, 1793.

[6] Emma Perot, Frederick Mostert, 'Fake It Till You Make It: An Examination of the US and English Approaches to Persona Protection as Applied to Deepfakes on Social Media' (2020) Journal of Intellectual Property Law & Practice, 8; Defamation Act 2013.

[7] ibid 9.

[8] Jennifer E. Rothman, 'The Inalienable Right of Publicity' (2012) 101 The Georgetown Law Journal 185, 187.

[9] Perot, Mostert (n 6) 12.

[10] ibid; Irvine v Talksport [2003] EWCA Civ 423; [2003] 2 All ER 881; [2003] EMLR 538; Fenty and others v Arcadia Group Brands Ltd [2013] EWHC 2310 (Ch).

[11] Perot, Mostert (n 6) 13.

[12] Chesney, Citron (n 5) 1788.

Bibliography

Brownlee J., 'A Gentle Introduction to Generative Adversarial Networks (GANs)' (Machine Learning Mastery, 19 July 2019)

Das S., 'The AI Behind Face App' (Analytics India Magazine, 29 June 2020)

Cade DL, 'Prisma Arrives on Android, Turns Your Photos into Painterly Works of Art' (PetaPixel, 25 July 2016)

Goodfellow I.J., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y., 'Generative Adversarial Networks' (2014) Université de Montréal
