In today’s continuously evolving internet landscape, where reality meets the virtual realm, the spectre of deepfake technology looms large, casting a shadow over the very foundations of truth. From fabricated political statements to manipulated celebrity appearances, the rise of AI-driven deepfakes has blurred the line between fact and fiction, raising serious concerns about misinformation, cybersecurity, and the safety of individuals online. Deepfake technology refers to a type of artificial intelligence used to create convincing image, audio, and video hoaxes.
The term “deepfake”, a portmanteau of “deep learning” and “fake”, describes both the technology and the resulting bogus content. Deepfakes often transform existing source content, swapping one person for another, or create entirely original content in which someone is shown doing or saying something they never did. The greatest danger posed by deepfakes is their ability to spread false information that appears to come from trusted sources, with serious implications for democracy and society.
Recall the infamous 2018 video in which Barack Obama allegedly labelled Donald Trump a ‘complete dipshit’, or the 2019 video featuring Mark Zuckerberg boasting about ‘total control of billions of people’s stolen data.’ Though debunked, these instances exemplify the potency of deepfake technology in sowing the seeds of deception. The recent surge in AI advancements, particularly in generative AI, has only exacerbated the problem, allowing virtually any piece of media to be morphed into a deepfake and unleashing a flood of misinformation on the unsuspecting public.
Deepfake technology not only has severe implications for misinformation, such as the deepfake video of Ukrainian president Volodymyr Zelenskyy appearing to surrender at the beginning of Russia’s invasion, but also poses a credible threat to cybersecurity and to the safety of women and children online. According to a 2021 report by law consultancy DAC Beachcroft, women make up 90% of deepfake victims; politicians and public figures, by comparison, accounted for only 5% of the cases it tracked. The most common such content was AI-generated pornography depicting women’s faces, a form of online sexual harassment. In its 2023 thematic intelligence report on social media, research company GlobalData forecasts that misinformation will steadily increase owing to the rise of easily available AI tools and geopolitical tensions.
India, with its 800 million internet users poised to surpass 1.2 billion in the next two years, finds itself at the epicentre of this technological storm. The recent uproar caused by AI-generated deepfake videos featuring Indian actors Rashmika Mandanna and Katrina Kaif, and even one of PM Narendra Modi playing garba, along with the morphed image of Sara Tendulkar, underscores the urgency of addressing this issue. The latest victim is actor Alia Bhatt: a video circulating on social platforms shows her face edited onto another woman’s. Another viral video made the dubious claim that international pop star Rihanna and her partner are expecting another baby, using fabricated video and audio of her announcing a third pregnancy.
Personality rights refer to the rights of individuals, particularly famous personalities and celebrities, to protect their image, name, voice, likeness, and other personal attributes that have commercial value and can be used to influence the public. While some aspects of personality rights are protected under various Indian laws, such as trademark, copyright, and privacy law, there is no single statute that specifically addresses all aspects of personality rights. The right to publicity, a key aspect of personality rights, flows from Articles 19 and 21 of the Constitution of India, which guarantee freedom of speech and expression and the right to life and personal liberty (from which the courts have derived the right to privacy), respectively. Personality rights matter for celebrities and public figures because their names, images, voices, and other personal attributes can be misused in commercial activities, such as advertisements, without their permission. The absence of a codified law specifically addressing personality rights in India has led to a lack of clarity and consistency in their protection. However, judicial pronouncements have recognized the importance of personality rights and provided some measure of protection.
Anil Kapoor recently won a significant victory in the Delhi High Court over the unauthorized AI use of his likeness: he obtained an interim order against 16 defendants, restraining them from using his name, likeness, image, deepfakes, voice, or any other aspect of his persona to create merchandise, ringtones, or other content, whether for monetary gain or otherwise.
While Section 66D of the Information Technology (IT) Act punishes cheating by personation using a ‘computer resource,’ Sections 499 and 500 of the Indian Penal Code (IPC) provide avenues for defamation cases. However, pursuing legal action requires victims to establish falsity, harm, and negligence on the part of the content creator. Deepfakes could also fall within the ambit of consumer protection laws where such synthetic content is used for fraudulent purposes that harm consumers.
In response to the rising tide of deepfake challenges, the Indian government has adopted a multifaceted approach. It is in the process of formulating regulations aimed at penalizing both individuals who upload deepfake content and the social media platforms that host it. A dedicated officer has been appointed to investigate deepfake videos on online platforms and assist citizens in filing cases against them. Social media platforms have also been warned to control the proliferation of AI-generated deepfake content, and the government has emphasized their legal obligation to promptly remove such content within 36 hours. Ongoing efforts are directed at creating sophisticated deepfake detection tools, with a focus on adopting new technology and establishing a robust legal framework to govern the creation and dissemination of deepfake content. Despite these measures, the need for a comprehensive regulatory framework for AI in India’s forthcoming Digital India law is palpable. With over 800 million internet users, the government acknowledges AI as a kinetic enabler of the digital economy but underscores the necessity of guardrails to navigate the challenges posed by deepfake technology.
President Joe Biden last month signed an executive order requiring developers of AI systems that pose risks to U.S. national security, the economy or public health or safety to share the results of safety tests with the U.S. government before they are released to the public. The United Nations too has created a 39-member advisory body to address issues in the governance of AI, while European lawmakers have prepared a draft set of rules which could be approved by next month.
Effectively addressing the deepfake challenge requires the Indian government to enact specific laws targeting the creation, distribution, and malicious use of deepfake content, with corresponding penalties for illegal activities. It is also crucial to establish guidelines for online platforms and social media sites to detect and promptly remove deepfake content, which will require collaboration with tech companies to enforce these regulations. A proactive approach involves investing in public awareness campaigns that educate people about the existence and dangers of deepfake technology, provide guidance on identifying and reporting suspicious content, and empower users to help curb the spread of misinformation. Allocating research and development funding for deepfake detection and prevention is equally important: by investing in technology development, the government can foster tools capable of identifying and mitigating the impact of deepfakes.
We must also look beyond national borders, as the transboundary nature of deepfake threats demands international collaboration. Governments should share information, exchange best practices, and coordinate their responses to this global challenge. India, with its abundant technological talent, is well placed to take a leadership role in spearheading such an initiative.
In conclusion, as we await the integration of AI regulations into India’s legal framework, the battle against deepfakes serves as a stark reminder of the imperative to protect the truth in the digital age. The government’s commitment to addressing these challenges reflects a crucial step toward ensuring a safer and more reliable online environment for all.
Authors: Abha Shah & Nitika Nagar