
The Creation and Detection of Deepfakes – A Socio-Legal Analysis

Deepfakes

Wherever you look, there is technology; even the device on which you are reading this article is technology. Technology has become so important in today's world that it is next to impossible to keep ourselves away from it. Technological development has made our lives far easier than they were ten years ago, putting everything at our fingertips: the world is just a click away. Society is changing dramatically every day due to the evolution of modern technology, and people now have more time for themselves because so many tasks are handled by it. Deepfakes are one such development.

Deepfakes Explained

Deepfake is a portmanteau of "deep learning" and "fake". It is a form of synthetic media, generated with artificial intelligence, in which an existing person in an image, audio clip, or video is replaced with someone else. Deepfakes refer to any media or digital representation that has been distorted using artificial intelligence and deep learning. The technology has been used in multiple fields, including entertainment, MedTech, and education. The most common form of deepfake involves the generation and manipulation of human imagery. The technology also has creative and productive applications, for example, realistic video dubbing of foreign films, education through the reanimation of historical figures, and virtually trying on clothes while shopping.

How are Deepfakes created?

Deepfakes rely on a type of neural network called an auto-encoder. An auto-encoder consists of an encoder, which compresses an image into a lower-dimensional latent space, and a decoder, which reconstructs the image from that latent representation. Deepfake creation uses this architecture with a universal encoder that encodes a person into the latent space; the latent representation captures key features of their face and body posture. This representation can then be decoded with a model trained specifically for the target, so that the target's detailed appearance is superimposed on the underlying facial and body features of the original video as represented in the latent space. A popular upgrade to this architecture attaches a generative adversarial network (GAN) to the decoder.
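The shared-encoder/per-identity-decoder idea described above can be sketched with plain linear maps. Everything here is illustrative, a toy stand-in rather than a real deepfake pipeline: the dimensions, random weights, and function names are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a tiny 64-pixel "face" compressed to an 8-dim latent vector.
IMG_DIM, LATENT_DIM = 64, 8

# One shared (universal) encoder projects any face into the latent space.
encoder = rng.normal(size=(LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)

# One decoder per identity reconstructs a face from a latent vector.
decoder_source = rng.normal(size=(IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
decoder_target = rng.normal(size=(IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def encode(face):
    # Latent features: pose, expression, lighting, etc.
    return encoder @ face

def decode(latent, decoder):
    # Reconstruct a face *for that decoder's identity* from the latent vector.
    return decoder @ latent

# The face swap: encode a frame of the source person, then decode with the
# TARGET's decoder, so the target's appearance is rendered with the source's
# pose and expression.
source_frame = rng.normal(size=IMG_DIM)
latent = encode(source_frame)
swapped = decode(latent, decoder_target)

assert latent.shape == (LATENT_DIM,)
assert swapped.shape == (IMG_DIM,)
```

In a real system both networks are deep and are trained on many frames of each person; the key point the sketch preserves is that the encoder is shared while each identity gets its own decoder.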

A GAN trains a generator (in this case, the decoder) and a discriminator in an adversarial relationship. The generator creates new images from the latent representation of the source material, while the discriminator attempts to determine whether an image is generated. Both algorithms improve constantly in a zero-sum game, which drives the generator to create images that mimic reality extremely well, since any defect would be caught by the discriminator. This makes deepfakes difficult to combat: they are constantly evolving, and any time a defect is detected, it can be corrected.
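The zero-sum objective behind a GAN can be shown in a deliberately tiny 1-D setting. This is only a sketch of the two opposing losses, not a working image GAN; the data distribution, model shapes, and parameter values are all assumptions chosen for illustration.

```python
import math
import random

random.seed(0)

# Toy setting: "real" samples cluster near 4.0; the generator maps noise z
# to wg*z + bg, and the discriminator is a 1-D logistic model.

def discriminator(x, wd, bd):
    """Probability (per the discriminator) that sample x is real."""
    return 1.0 / (1.0 + math.exp(-(wd * x + bd)))

def generator(z, wg, bg):
    return wg * z + bg

def gan_losses(reals, fakes, wd, bd):
    # Discriminator wants D(real) -> 1 and D(fake) -> 0; the generator
    # wants D(fake) -> 1. The two objectives oppose each other.
    d_real = -sum(math.log(discriminator(x, wd, bd)) for x in reals) / len(reals)
    d_fake = -sum(math.log(1 - discriminator(x, wd, bd)) for x in fakes) / len(fakes)
    g_loss = -sum(math.log(discriminator(x, wd, bd)) for x in fakes) / len(fakes)
    return d_real + d_fake, g_loss

reals = [4.0 + random.gauss(0, 0.1) for _ in range(32)]
noise = [random.gauss(0, 1) for _ in range(32)]
fakes = [generator(z, wg=1.0, bg=0.0) for z in noise]  # untrained generator

# A discriminator that separates "near 4" from "near 0" does well here,
# so its loss is low while the untrained generator's loss is high.
d_loss, g_loss = gan_losses(reals, fakes, wd=2.0, bd=-4.0)
assert d_loss < g_loss
```

Training alternates gradient steps on the two losses; as the generator's samples drift toward the real distribution, the discriminator's advantage shrinks, which is the dynamic that makes the resulting fakes hard to tell apart from reality.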

Can Deepfakes be detected?

The same technology that creates deepfakes also provides methods to detect them, and most academic research in this area seeks to detect such videos. The most popular technique is to use algorithms similar to the ones used to build the deepfake. By recognizing patterns in how deepfakes are created, the algorithm can pick up subtle inconsistencies. Researchers have developed automatic systems that examine videos for errors such as irregular blinking patterns or inconsistent lighting.

This technique has also been criticized for creating a moving goalpost: whenever the detection algorithms get better, so do the deepfakes. In a deepfake, a subject's face is modified to create convincingly realistic footage of events that never actually happened. As a result, typical deepfake detectors focus on the face in a video: first tracking it, then passing the cropped face data to a neural network that determines whether it is real or fake. For example, eye blinking is not reproduced well in deepfakes, so detectors examine eye movements as one way to make that determination. State-of-the-art deepfake detectors rely on machine learning models to identify fake videos.
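The blink cue mentioned above can be illustrated with a simple heuristic: humans blink every few seconds, so a clip whose per-frame "eye openness" signal shows almost no blinks is suspicious. Real detectors use trained neural networks; this hand-written rule, with its threshold and rate values, is only a hypothetical stand-in.

```python
def count_blinks(eye_open, threshold=0.2):
    """Count closed-eye events in a per-frame eye-openness signal (0..1)."""
    blinks, closed = 0, False
    for v in eye_open:
        if v < threshold and not closed:
            blinks += 1          # eye just closed: one blink event
            closed = True
        elif v >= threshold:
            closed = False       # eye reopened
    return blinks

def looks_fake(eye_open, fps=30, min_blinks_per_minute=2):
    """Flag a clip whose blink rate is implausibly low for a human."""
    minutes = len(eye_open) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(eye_open) / minutes < min_blinks_per_minute

# 10 seconds of video at 30 fps: a real face blinks twice; the fake never blinks.
real = [1.0] * 300
real[50] = real[51] = 0.05   # first blink (two closed frames)
real[200] = 0.05             # second blink
fake = [1.0] * 300

assert not looks_fake(real)
assert looks_fake(fake)
```

This also illustrates the moving-goalpost problem: once a cue like blinking is published, deepfake generators can be trained to reproduce natural blink rates, defeating any detector built on that cue alone.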

Application of the Technology

Deepfakes can be used in many areas of day-to-day life. It is important to note that the technology has both advantages and disadvantages, but it is mainly used negatively, to manipulate society at large. Some of its uses are discussed below.

Political Arena

Deepfakes have been widely used in politics around the globe to misrepresent political parties. Some parties use the technology to gather more votes and to defame rival parties. Examples include –

  1. In February 2020, this technology was used during the Delhi elections. A deepfake video circulated on WhatsApp in which Delhi BJP president Manoj Tiwari was seen criticizing Arvind Kejriwal's government in two different languages.
  2. In April 2018, Jordan Peele collaborated with Buzzfeed to create a deepfake of Barack Obama with Peele’s voice; it served as a public service announcement to increase awareness of Deepfakes.

Pornography

Pornography is the area where the technology is used the most. People use it to defame women, especially celebrities. Non-consensual deepfake pornography has prominently surfaced on the internet, and many incidents have been noted, including the following.

  1. In June 2019, a downloadable Windows and Linux application called DeepNude was released, which used neural networks, specifically generative adversarial networks, to remove clothing from images of women. The app had both a paid and an unpaid version; the paid version cost $50.
  2. In 2019, Mumbai police arrested a 20-year-old college student for blackmailing a teenage girl with fake pornography. The student had edited her face into an obscene video. He later contacted her on Instagram using an anonymous account and threatened to share it online.

Other areas of use include blackmail, movies, acting, sock puppets, etc.

Deepfakes – A Legal Analysis

Facial Recognition

Deepfake technology, examined closely, raises many legal considerations. It can infringe copyright, hamper data protection, and defame people; it raises questions of freedom of speech and expression; and it enables a great deal of digital fraud and crime against women, ranging from revenge pornography to harassment. The technology must be kept well within legal limits so that it does not hamper the credibility of an individual. The area affected the most is intellectual property, on which there have been several different stands:

World Intellectual Property Organisation

In December 2019, WIPO published its Draft Issues Paper on Intellectual Property Policy and Artificial Intelligence. The draft dealt with the problems that intellectual property rights face in relation to deepfake technology, addressing two specific questions –

  • Since deep fakes are created based on data that may be the subject of copyright, to whom should the copyright in a deep fake belong? 
  • Should there be a system of equitable remuneration for persons whose likenesses and “performances” are used in a deep fake?

While addressing the issues raised by deepfake technology, WIPO stated that the violation is not just copyright infringement; it can cause serious problems such as violation of the right to privacy and of human rights. On the question of who should own the copyright in a deepfake, WIPO's main concern was whether such work should be awarded copyright protection at all. The organization was of the view that if deepfakes are subject to copyright, it should belong to their creator.

EU General Data Protection Regulation

Under Article 17 of the EU General Data Protection Regulation, European citizens are provided with the right to erasure: the data subject has the right to obtain from the controller the erasure of personal data concerning him or her without undue delay, and the controller is obliged to erase that data. Since deepfake content is built on a person's data, it should be erased or rectified without delay, and a victim of the technology can avail of this right.

United States of America

The fair use doctrine in the United States, codified at 17 U.S. Code §107, is applied through the following four-factor test –

  • Purpose and character of the use,
  • Nature of the copyrighted work,
  • Amount and substantiality of the portion taken, and
  • The effect of the use on potential markets.

India

As of now, India has no legislation specifically addressing deepfake technology. Section 52 of the Indian Copyright Act is the closest provision that can tackle issues related to it: it gives an exhaustive list of acts that do not amount to infringement, thereby distinguishing bona fide from mala fide uses of a protected work. The section corresponds to Article 13 of the Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement, which reads: "Members shall confine limitations or exceptions to exclusive rights to certain special cases which do not conflict with a normal exploitation of the work and do not unreasonably prejudice the legitimate interests of the right holder."

In India, however, there is no explicit law banning deepfakes. Among the laws currently in force, Sections 67 and 67A of the Information Technology Act, 2000 punish the publication of sexually explicit material in electronic form, and under Section 79 of the same Act, orders for immediately taking down unlawful content can be obtained upon actual knowledge or by court order. Section 500 of the Indian Penal Code punishes defamation. These provisions, however, are insufficient to tackle the various forms in which deepfakes exist.

Deepfakes – A Social Analysis

The technology brings not only legal issues but also social ones. Deepfakes harm society at large: false information is served to the public and people's reputations are torn down.

Image and Dignity of Celebrities

Deepfakes are broadly used to make sexually explicit images or videos misrepresenting famous people. Popular actors and music stars are the usual targets of fake deepfake adult videos and images: creators swap the actors' faces into videos of porn performers, enabling the transfer of sexual fantasies from people's minds to the internet.

Manipulated Content on Social Media

People are constantly searching for engaging content on the web. Social media channels are among the most favoured online platforms, and such content is posted there without any verification, influencing audiences online. Social media companies employ staff or engage deepfake-detection service providers to spot such content and help remove it before it spreads widely.

Disturbing Political Campaigns

Deepfakes can have significant adverse consequences for democracies. Deepfake news reports can be aimed at attacking the reputation of specific people, depicting fake events, or affecting democratic processes such as electoral campaigns and other socially significant events. Deepfakes may be used as an instrument to erode trust in political institutions and deepen division among people. When deployed by hostile governments, they could even pose threats to national security or impede international relations. From a normative viewpoint, finding an effective response to deepfakes used to influence political processes is particularly difficult. Certain sanctions for disseminating false information could be imposed under criminal law, and politicians whose images are used in defamatory or false deepfakes could seek remedies rooted in tort law or intellectual property law.

Conclusion

With growth in the areas of artificial intelligence, data science, and high-speed networks, a new technology emerged using deep learning. Like other technological advances, deepfakes have been applied to illegal and immoral activities. The technology has the power to manipulate the working of an economy, the freedom of people, and the security of a nation. It is important that citizens are well aware of it and that the service providers of major social media platforms monitor such activities.

Deepfake technology needs a proper set of rules and regulations to protect people and their privacy. Its effects are not limited to data infringement; it can damage people's lives. It is also important to note that the technology can be used positively, and those using it should weigh the ethical and social implications of their work. As the saying goes, everything has both a positive and a negative side; we must come together to find more positive ways of using this technology.


Editor’s Note
The article focuses on the profound impact of deepfake technology on society. It also lays down how the technology is created and how it can be detected. The author gives a brief legal analysis of the technology from a global perspective and explains its applications. Along with the legal analysis, a social analysis has been done to give in-depth knowledge of the subject, and the author concludes by pointing out the technology's negatives as well as its positives.
