Rules for the use of deepfake technologies in the law of the USA and China: adaptation of foreign experience of legal regulation
The article discusses possible ways to improve Russian legislation governing the use of deepfake technologies, based on an analysis of United States and Chinese legislation regulating legal relations in this field. The purpose of the study is to analyze the theoretical and practical aspects of protecting the rights of citizens and public interests in the creation and distribution of deepfakes in information and telecommunications networks. It is concluded that certain types of applications for creating deepfakes should be banned and, in particular, that the use of deepfake technologies in political advertising, including election campaigning and referendum campaigning, is inadmissible. Taking into account the risks associated with the deliberate use of deepfake technologies for illegal purposes, a proposal is formulated to introduce a separate criminal offense covering the dissemination of deepfakes in cyberspace.
Keywords: deepfake technologies, deepfake, image rights, content labeling, information intermediary, false information, information and telecommunication networks.
Introduction. Several years have passed since the advent of deepfake technology. Initially it was imperfect and posed no real danger to the rights of citizens, but from 2018, when applications based on this technology became widely available on the Internet, it became easily accessible to a practically unrestricted range of people.
A deepfake, created with special software, is a form of synthetic media in which a digital copy of the image or voice of a particular person is generated using artificial intelligence. In Russian judicial practice, deepfake technology is characterized as an additional tool for processing (technical editing of) video materials rather than a means of creating them [2].
Deepfake technology can be used in many areas of economic and social life: to create interactive educational content, to replace the face of a stunt double with the image of an actor in the film industry, or to provide consulting services to potential buyers in retail. However, despite the many positive examples of the use of deepfake technology in various sectors of the economy and the social sphere, the uncontrolled use of deepfakes can lead to unjustified risks and serious negative consequences, in particular political disinformation, deepfake pornography and financial fraud. In one known case, the CEO of a British energy company became a victim of fraud involving a deepfake: the technology was used to imitate the voice of the head of the parent company, who instructed the CEO to transfer 200 thousand euros to the account of a shell company in Hungary. Although the fraudulent scheme was soon uncovered, the company was unable to recover its losses owing to the lack of proper legal regulation in this area and the difficulty of tracing the origin of the deepfake [12].
Currently, the absence of legal regulation of the use of deepfake technology creates a risk that citizens' rights to their own image will be violated and can damage personal and professional reputation.
In order to form proposals and recommendations regarding the legal regulation of the use of deepfake technology in Russian legislation, an analysis of advanced foreign experience in this field is of interest.
In December 2019, the US Congress passed the National Defense Authorization Act (NDAA), which also addressed deepfakes by requiring the Director of National Intelligence to submit reports on the use of deepfakes by foreign governments, their ability to spread disinformation, and their potential impact on national security. The Eastern approach to the problem under consideration is represented by China, where the creation of deepfakes without the consent of the person depicted is strictly prohibited. At the moment, China is the only country that has introduced a legislative ban on the use of certain types of deepfakes.
Legal regulation of the use of deepfake technology in the USA. US legislation regulating the use and distribution of deepfakes was first adopted at the state level. Thus, since September 2019 the state of Texas has had a law prohibiting the distribution of misleading videos aimed at harming political candidates or influencing the electoral process during the 30 days preceding an election. In October 2019, a similar law was passed in California; it establishes an identical ban but with a longer validity period: deepfakes may not be distributed within 60 days before an election. In November 2020, the state of New York passed a law expressly prohibiting the use of a digital copy of a deceased performer in audiovisual content for 40 years after the performer's death if such use may give the public the false impression that the performer consented to it.
In July 2019, the state of Virginia passed a law criminalizing the distribution of fabricated images and videos aimed at coercing, harassing or intimidating a citizen. The Virginia Code prohibits the distribution of pornographic deepfakes, but not their creation. In addition, Virginia law prohibits only pornographic deepfakes created with the intention of depicting a person who can be identified as a real individual by their face [11].
It should be noted that there are two institutions in the United States that exercise authority in the field of deepfake technologies: the US National Science Foundation (NSF), which supports research in the field of creating deepfake technologies and verifying the authenticity of deepfakes, and the US National Institute of Standards and Technology (NIST), which carries out research on deepfake standards. Both institutions are collaborating with private companies to develop ways to detect deepfakes [8]. In addition, the US National Defense Authorization Act (NDAA) [13] obliges the US Department of Homeland Security (DHS) to submit an annual report on the falsification of digital content for five years. DHS is required to monitor the use of deepfakes by foreign countries, as well as the threats that deepfakes pose to society, and explore ways to create, detect and counter deepfakes.
Section 230 of the Communications Decency Act provides that a provider or user of an interactive computer service shall not be treated as the publisher or speaker of any information provided by another information content provider [4]. Individual users can potentially be held liable for defamation and the unlawful use of another person's image, but the platform on which a deepfake is posted faces no risk of liability. In addition, websites cannot be compelled to restrict particular forms of speech or self-expression, which reflects the principle of freedom of speech and permits all forms of self-expression in cyberspace.
However, this creates certain difficulties when it comes to preventing the spread of fake news and disinformation. Like the Communications Decency Act, the First Amendment to the US Constitution is an integral part of American democracy, guaranteeing and protecting freedom of speech and expression [5]. In a number of court decisions, this provision has been interpreted as applying to deepfakes as acts of self-expression. In Hustler Magazine, Inc. v. Falwell, the Supreme Court rejected a claim for damages for emotional distress arising from the publication of an article accusing the minister of incest, since no additional evidence of falsity and actual malice was provided, which is required before speech protected by the First Amendment concerning public figures or matters of public interest may be restricted [7]. This precedent demonstrates the protection of freedom of speech and expression but makes it difficult to restrict the use of deepfakes.
Under US law, a deepfake may be regarded as a derivative work whose creation can be protected by Section 107 of the US Copyright Act [6]. A derivative work is a new result of intellectual activity that differs significantly from the original work in its external form of expression or character. The doctrine of transformative use applied in judicial practice allows, under certain conditions, the creation of a deepfake to be recognized as fair use, in which the protected work is employed with a new meaning or a new aesthetic and style and therefore does not infringe the rights to the original. At the same time, the work must be reasonably perceived as the embodiment of a distinct artistic purpose, conveying a meaning entirely separate from the original. Courts apply a two-step test of the transformative character of a work: the meaning of the derivative work must be separate from that of the original, and that meaning must be evident to the user.
Legal regulation of the use of deepfake technology in China. The Chinese government is seeking to curb the spread of fake news. On January 1, 2020, a law came into force prescribing the mandatory labeling of any falsified video or audio content, as well as content created using artificial intelligence and neural networks. The obligation to label such content rests with application providers [9]. The law obliges platform operators to independently identify and mark or delete unmarked content. Under the law, the production and distribution of fake news is prohibited, and such content must be deleted immediately upon identification. The Cyberspace Administration of China (CAC) is responsible for enforcing these rules.
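By way of illustration only (the law itself does not prescribe any particular technical mechanism), the following minimal sketch shows how a platform operator might attach a machine-readable "synthetic media" label to a video file; it assumes the ffmpeg command-line tool is installed, and the file names are hypothetical.

```python
# Illustrative sketch: attaching a machine-readable "synthetic media" label to a
# video file's container metadata. This is NOT the mechanism prescribed by the
# Chinese law, only one possible way a platform could mark AI-generated content.
# Assumes ffmpeg is installed and available on PATH.
import subprocess

def label_as_synthetic(src: str, dst: str) -> None:
    """Copy the streams unchanged and add a metadata tag marking the file as AI-generated."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-c", "copy",  # streams are copied as-is, no re-encoding
            "-metadata", "comment=synthetic media: generated using AI",
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical file names, used purely for illustration.
    label_as_synthetic("original.mp4", "labeled.mp4")
```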
In order to combat illegal political content, the law establishes additional requirements and rules. First, users are obliged to register on platforms and provide personal data, including their state identity card number and mobile phone number. Second, each platform must have a built-in electronic service for filing complaints about the misuse of content. Third, audiovisual content distribution platforms must issue their own industry standards and guidelines. Finally, government departments are required to organize regular inspections to verify that platforms comply with the mechanism for regulating online content in accordance with their service agreements [3].
Since January 10, 2023, the Law on Deepfakes has been in force in China. The law contains three basic rules. First, individuals must give their consent before their image can be used in a video. Second, it is forbidden to use deepfake technologies to spread fake news. Finally, any content that contradicts current laws, damages the image of the country and national interests, or endangers national security and undermines the economy is prohibited. In addition, video content created using deepfake technology must carry a special mark.
On the one hand, the law allows the Chinese government to maintain control over deepfake technology; on the other, it leaves private companies the opportunity to benefit from its use. However, any company wishing to produce deepfakes must undergo a security audit.
Conclusions. Despite the threat posed by deepfakes, US law contains many obstacles to their comprehensive legislative regulation. These include Section 230 of the Communications Decency Act, the First Amendment, and copyright law, including the fair use doctrine.
Unlike the United States, China has implemented targeted legal regulation of the use of deepfakes and deepfake technologies. However, the practical implementation of the special law raises a number of problems. First, the procedure for obtaining consent from the persons whose image is used in a video is not clearly spelled out. Second, the labeling of deepfakes can easily be removed by transcoding the video, as the sketch below illustrates.
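A minimal sketch of this weakness, assuming the label is stored only in container metadata (ffmpeg assumed installed, file names hypothetical): a single re-encoding pass that discards global metadata removes the tag.

```python
# Illustrative only: a label stored solely in container metadata does not survive
# a transcode that drops metadata. Assumes ffmpeg is installed; file names are
# hypothetical.
import subprocess

def strip_label_by_transcoding(src: str, dst: str) -> None:
    """Re-encode the video and discard global metadata, losing any 'synthetic media' tag."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-map_metadata", "-1",  # drop all global metadata, including the label
            "-c:v", "libx264",      # re-encode video
            "-c:a", "aac",          # re-encode audio
            dst,
        ],
        check=True,
    )
```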
It is proposed to ban certain types of applications that create deepfakes without the consent of the person depicted. Given, on the one hand, the potential negative impact of specific deepfake applications, in particular those of a pornographic nature or intended for political disinformation, and, on the other, certain positive aspects of deepfake technology, a complete ban on the technology seems unreasonable. Banning certain types of deepfake applications may be one solution to this problem. Such proposals have already been put forward in some countries, for example the USA, the Netherlands and the UK [14].
Given the harm that can be caused by the deliberate use of deepfakes, it is necessary to expand the existing legal framework with respect to criminal offenses; the legislation of some US states already provides for offenses in this area [11]. Thus, in the United States criminal liability is provided for impersonating another person with the intent to harm, intimidate, threaten or deceive them.
Part 2 of Article 128.1 of the Criminal Code of the Russian Federation provides for criminal punishment for defamation (the dissemination of knowingly false information discrediting the honor and dignity of another person or undermining his reputation), including in a publicly displayed work or committed publicly using information and telecommunications networks. To apply this rule to the dissemination of a deepfake, it would be necessary to prove the falsity and defamatory nature of the information contained in the deepfake, which is not possible in all cases [1]. In this regard, it is necessary to provide for a special criminal offense covering the dissemination of deepfakes in cyberspace.
It is necessary to introduce a ban on the use of deepfakes in political advertising. Research shows that deepfakes combined with targeting can be used to manipulate users' political opinions. To mitigate this risk, amendments should be made to Federal Law No. 20-FZ of 22.02.2014 "On Elections of Deputies of the State Duma of the Federal Assembly of the Russian Federation", Federal Law No. 67-FZ of 12.06.2002 "On Basic Guarantees of Electoral Rights and the Right to Participate in a Referendum of Citizens of the Russian Federation", and Federal Law No. 19-FZ of 10.01.2003 "On the Election of the President of the Russian Federation", introducing a complete ban on the use of deepfake technologies in political advertising, including election campaigning and campaigning on referendum issues.