A deepfake is synthetic media that has been digitally manipulated to convincingly replace one person’s likeness with another. Built on advanced machine-learning techniques such as deep learning and Generative Adversarial Networks (GANs), deepfakes serve various purposes, including entertainment, education, art, and activism. While they can enhance film dubbing, facilitate engaging educational experiences, and spur artistic innovation, deepfakes also present ethical and social challenges, including spreading misinformation, violating privacy, and damaging reputations.
In a recent revelation by a fact-checking website, a viral video featuring an actor entering a lift was exposed as a deepfake, triggering widespread debate. The discourse gained momentum as actors and industry stakeholders called for legal regulations to curb the proliferation of such deceptive videos. In response, the Minister of State for Electronics and Information Technology pointed to the existing provisions of the IT Act, 2000 as a potential tool to combat the dissemination of deepfakes. However, addressing the deepfake challenge requires a comprehensive regulatory approach that not only considers the interplay between platform and Artificial Intelligence (AI) regulation but also incorporates safeguards for emerging technologies more broadly.
The use of deepfakes for commercial purposes is quite common today. Deepfake technology enables realistic lip-syncing for actors who speak different languages, enhancing accessibility and immersion for global audiences. It can also bring historical figures to life in the classroom, creating interactive simulations that make learning more engaging. Meanwhile, artists can use deepfake technology as a creative tool, experimenting with styles or collaborating with others to promote their work. It also empowers individuals to control their digital identity, protecting privacy and allowing diverse forms of self-expression. Deepfakes can amplify the voices of individuals facing discrimination or censorship, giving a platform to those who need it. The technology aids in reconstructing missing or damaged digital data and enhances public safety by creating realistic training materials for emergency responders. It can drive innovation in entertainment, gaming, marketing, and other industries, enabling new forms of storytelling and interaction. However, the technology poses challenges that extend far beyond its utility.
Deepfakes can be used to purposefully spread false information, influencing public opinion and potentially impacting elections. Designed to harass and undermine individuals, deepfakes can fuel unethical acts such as revenge porn, leading to privacy violations and psychological distress. They may be used to fabricate evidence, defrauding the public, harming state security, or manipulating legal proceedings. They can also be deployed to create false images or videos that tarnish an individual’s or organisation’s reputation, causing reputational and financial losses. The technology can further be exploited to impersonate executives or manipulate individuals into revealing sensitive information, again resulting in financial losses.
As deepfakes continue to evolve, a proactive, holistic, and multi-faceted regulatory strategy is imperative. India can draw insights from countries such as China and Canada, incorporating measures like mandatory consent for the use of deepfake technologies and public awareness campaigns to prevent harm. Watermarks on AI-generated videos can aid in detection and attribution, revealing the content’s origin and ownership. Online platforms should implement measures to educate users on content policies and deter the upload of inappropriate content. Enhancing deepfake detection technologies and utilising sophisticated algorithms and context-based identification methods are measures that should be taken at the community level. The government also needs to develop clear and consistent laws and policies that define and prohibit the malicious use of deepfakes, ensuring effective remedies and sanctions. Finally, there is a need to establish and enforce codes of conduct and standards for the creators and users of deepfake technology, promoting positive applications.
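To make the watermarking idea above concrete, here is a minimal, illustrative sketch of least-significant-bit (LSB) watermarking, one of the simplest provenance-marking schemes. Production systems (such as cryptographic content-provenance metadata or model-level watermarks) are far more robust; the function names and the `"AI-GEN"` tag below are purely hypothetical assumptions for the example.

```python
def embed_watermark(pixels, tag="AI-GEN"):
    """Hide `tag` in the least significant bits of a flat pixel array.

    Each pixel value changes by at most 1, so the mark is imperceptible
    but can later be read back for detection and attribution.
    """
    # Unpack the tag into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out


def extract_watermark(pixels, tag_len=6):
    """Read `tag_len` bytes back out of the least significant bits."""
    data = bytearray()
    for i in range(tag_len):
        byte = 0
        for bit in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (bit & 1)
        data.append(byte)
    return data.decode()


# Stand-in for real image data (flat list of 8-bit pixel values).
pixels = list(range(256)) * 4
marked = embed_watermark(pixels)
print(extract_watermark(marked))  # AI-GEN
```

A scheme this simple is trivially removed by re-encoding or resizing, which is precisely why regulators and standards bodies favour tamper-evident, signed provenance metadata over naive bit-level marks.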