When Reality Fakes Itself: Deepfakes and the Imminent Crisis in Indian Law
Deepfakes, powered by AI and GAN technology, are blurring the line between truth and illusion. With real-world misuse—from fraud to defamation—India faces a legal void. While countries like China, the U.S., and the EU enforce deepfake-specific regulations, India still relies on outdated provisions. A forward-thinking approach rooted in transparency, ethical AI use, and legal reform is essential to protect digital integrity.
ARTIFICIAL INTELLIGENCE
Aayushi Kharb
4/21/2025 · 6 min read


Have you ever seen a video on the internet only to discover later that it wasn't genuine? The face, the voice, the movements, all synthetically created by artificial intelligence. Welcome to the unsettling age of deepfakes, where sight is no longer synonymous with truth.
A New Age of Deception
But how did this happen?
The seeds of deepfakes were planted in 2014 when computer scientist Ian Goodfellow introduced a groundbreaking machine learning algorithm known as Generative Adversarial Networks (GANs). GANs opened a new frontier in artificial intelligence: machines capable of producing images, sounds, and videos so convincing they could deceive even the most trained eye.
The term deepfake itself emerged in 2017 on Reddit, where a user posted manipulated pornographic videos replacing the faces of celebrities with jaw-dropping realism. What began as niche content quickly went viral, spreading to memes, entertainment, politics, scams, and malicious material.
The Tech Behind the Trick: What Are GANs and How Do They Work?
Before diving into the more alarming aspects of deepfakes and AI, let's first understand how the technology works and the basic technicalities you need to be aware of. At the heart of deepfake technology lies a deceptively simple idea: two neural networks locked in a creative battle.
Generative Adversarial Networks (GANs) consist of:
A generator, which tries to produce fake content, such as a human face that doesn't exist.
A discriminator, which attempts to determine whether the image is real or fake.
The generator produces synthetic outputs, and the discriminator judges them. Through thousands of iterations, the generator improves, learning to create fake content capable of deceiving the discriminator. This adversarial training results in highly realistic media that replicates human speech, movement, and appearance. Deepfakes run on this GAN-based model, which learns from vast datasets: thousands of clips or pictures of a single individual. The more it ingests, the more eerily realistic the impersonation becomes.
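The adversarial loop described above can be sketched in miniature. The toy example below is an illustrative sketch, not a real deepfake model: the "generator" is just a linear map of random noise, and the "discriminator" is logistic regression, trained on 1-D numbers instead of images. Even at this scale, the same push-and-pull applies: the discriminator learns to separate real samples from fakes, and the generator's output drifts toward the real data to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: a 1-D Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 3.0, 0.5

# Generator: G(z) = a*z + b (a toy stand-in for a deep network).
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c) (toy stand-in for a classifier net).
w, c = 0.1, 0.0

lr, batch = 0.05, 64

for step in range(2000):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    z = rng.normal(size=batch)
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    x_fake = a * z + b
    p_real, p_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    # Manual gradients of the cross-entropy GAN loss for the logistic D.
    grad_w = np.mean(-(1 - p_real) * x_real + p_fake * x_fake)
    grad_c = np.mean(-(1 - p_real) + p_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: non-saturating loss, push D(fake) toward 1 ---
    z = rng.normal(size=batch)
    x_fake = a * z + b
    p_fake = sigmoid(w * x_fake + c)
    dx = -(1 - p_fake) * w      # gradient of -log D(x) with respect to x
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# After training, generated samples should have drifted toward REAL_MEAN.
fake = a * rng.normal(size=1000) + b
print(round(float(np.mean(fake)), 2))
```

Real deepfake systems replace these two linear models with deep convolutional networks trained on thousands of face images, but the adversarial dynamic, generator versus discriminator, is exactly this loop at scale.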
"Deepfakes don't mimic; they copy the look of truth. In an economy of data, they forge reality."
With advancements like StyleGAN, DeepFaceLab, and First Order Motion Models, creating deepfakes no longer requires a tech lab, just an internet connection and a few clicks.
From Fascination to Fear: When Deepfakes Become Criminal
What happens when this fascinating tech becomes a weapon?
Releasing such a powerful tool into the hands of a diverse and largely unregulated population was bound to produce both innovation and chaos. People began manifesting their wildest fantasies through AI, acting out what reality wouldn't allow. AI became the escape door. Had it been used only for noble purposes, the rise of AI might merely have sparked debate; given the malicious uses people have found for it, the growing concern is more than justified.
Deepfakes have already crossed ethical and legal lines in actual cases. In Karnataka, authorities reported 12 deepfake-related cybercrimes within two years, ranging from blackmail to cyberstalking and reputational damage.
In Bengaluru, scammers used deepfaked videos of Narayana Murthy and Mukesh Ambani to defraud residents of nearly ₹95 lakh. These aren't just digital pranks; they're emotional, financial, and reputational weapons. Victims have discovered their faces placed on explicit content or seen themselves making statements they never uttered.
India has no deepfake-specific legislation as of now. Victims are forced to cobble together protections under:
• Section 66C of the IT Act – Identity theft
• Section 66E of the IT Act – Violation of privacy
• Section 67 of the IT Act – Distribution of obscene content
• Relevant sections of the Bharatiya Nyaya Sanhita (BNS) on defamation and criminal intimidation
However, these provisions weren’t drafted with AI-generated realities in mind. The threat is new, but our tools are old.
Global Examples: Learning from the World
While India struggles without dedicated deepfake legislation, several other nations have already moved decisively to regulate AI-generated content. Each legal response reflects the country's distinct political, cultural, and technological context. Let's take a closer look:
China: Mandatory Disclosure and Tough Platform Accountability
China has been one of the first major countries to regulate deepfakes directly. In 2022, its Cyberspace Administration issued the "Provisions on the Administration of Deep Synthesis Internet Information Services", which took effect in early 2023. These regulations mandate:
• All deepfake content to be clearly labeled or watermarked to distinguish it from real media.
• AI-generated avatars and voice clones to obtain explicit consent from the individuals being represented.
• Tech platforms and app developers to implement identity verification protocols, content review systems, and takedown mechanisms.
Violations can result in platform bans, penalties, or even criminal prosecution. China's approach is preventive and strict, seeking to manage both the creators and the distributors of synthetic media.
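In practice, a labeling mandate like China's means attaching a machine-readable provenance tag to every synthetic file so that platforms can detect and disclose it. The sketch below is purely illustrative: the field names and structure are assumptions for this article, not China's actual deep-synthesis labeling standard.

```python
import hashlib
import json

def disclosure_label(media_bytes: bytes, tool: str) -> str:
    """Return an illustrative, machine-readable disclosure tag for synthetic media.

    The schema here is hypothetical, chosen only to show the idea of a
    mandatory "this is AI-generated" flag bound to a specific file.
    """
    record = {
        "synthetic": True,                # the mandatory disclosure flag
        "generator_tool": tool,           # which model or app produced the file
        # A content hash binds the label to this exact file, so the tag
        # can't simply be copied onto unrelated media.
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)

label = disclosure_label(b"\x00fake-video-bytes", "hypothetical-deepfake-app")
print(label)
```

A visible on-screen watermark serves human viewers; a tag like this serves the automated review and takedown systems the regulations require platforms to operate.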
United States: State-Level Laws for Political and Sexual Deepfakes
The United States has not yet enacted unified federal legislation regulating deepfakes, but some individual states have passed specific laws:
• California has criminalized the creation and dissemination of deepfakes in both non-consensual pornography and election tampering. California's AB 602 prohibits the creation or distribution of deepfake pornographic content without consent.
• The Texas law (SB 751) prohibits the publication of deepfake videos intended to influence elections, especially within 30 days before voting.
• Additionally, some states allow victims to sue for civil damages if their likeness has been misused.
The U.S. approach is fragmented but focused, targeting the most immediate threats—election integrity and sexual abuse—while discussions continue at the federal level for broader legislation.
European Union: The AI Act and Transparency Mandates
The EU has taken a more holistic and forward-looking position with its AI Act, adopted in 2024. The Act classifies AI systems by risk tier (from unacceptable and high-risk down to limited- and minimal-risk) and brings deepfakes into this framework through transparency obligations.
Main points:
• Disclosure requirement: all synthetic material, including deepfakes, must be clearly disclosed when presented to the general public.
• Traceability, documentation, and transparency obligations for developers and platforms with respect to their AI systems.
• Non-compliance can result in significant fines—up to 6% of the global annual turnover for corporations.
The EU model is regarded as citizen- and rights-oriented, with a focus on data protection, consent, and human dignity—providing a robust starting point for India's upcoming rules.
South Korea: Ethical AI Guidelines and Broadcast Rules
South Korea, a country with robust media regulation, has made AI media guidelines that:
• Require broadcasters to identify AI-generated content during broadcasts.
• Ban the use of deepfake technology to manipulate political, financial, or social discourse without disclosure.
• Recommend ethical review boards for media outlets and platforms employing AI tools, with editorial accountability.
In contrast to punitive models, South Korea emphasizes media accountability and ethical AI use, finding a balance between innovation and public trust.
How India Can Implement These Principles
India, with its enormous digital space and fast-growing AI capabilities, can learn a great deal from South Korea's AI media guidelines. To begin with, India could adopt mandatory disclosure requirements for AI-generated content across all media, so that deepfakes and manipulated media are clearly labeled, ensuring transparency. Media houses could be encouraged to establish AI ethics councils, as South Korea proposes, to assess the ethical implications of AI-generated news, particularly in politically or socially sensitive contexts. Beyond this, India could prohibit the political exploitation or defamatory use of deepfakes, as South Korea's stringent regulations do, ensuring that political debate remains free of fake media.
Public education campaigns and AI literacy programs must be mainstreamed in Indian education and civil society, making citizens more discerning of AI-generated content. Lastly, there must be transparent content moderation responsibilities assigned to social media platforms, holding them accountable for detecting, marking, and promptly deleting harmful deep fake content. By incorporating these principles, India can build a more ethical and transparent digital media environment that successfully controls the emergence of AI-produced content without compromising public trust and people's rights.
Conclusion: Truth as a Skill
Deepfakes are not only a technological wonder; they are a reflection of the impermanence of truth in the digital era. As more people gain access to these tools, and as the results grow increasingly realistic, the responsibility to discern truth shifts from institutions to individuals.
In such a world, truth can no longer be assumed—it must be verified, protected, and taught.
About the Author: Aayushi Kharb is a first-year law student at the Faculty of Law, University of Delhi.
References
News Articles & Reports:
1. A Landmark Case in India on AI-Generated Avatars – DDG (https://www.ddg.fr/actualite/a-landmark-case-in-india-on-ai-generated-avatars)
2. Karnataka Reports 12 Deepfake-Related Cybercrime Cases – The Hindu
3. Bengaluru Residents Duped of ₹95 Lakh by Deepfake Videos – Times of India
Judicial & Legal Sources:
1. Justice K.S. Puttaswamy v. Union of India, Supreme Court of India – Right to Privacy as a Fundamental Right.
2. Information Technology Act, 2000 – Sections 66C, 66E, and 67.
3. Bharatiya Nyaya Sanhita (BNS) – provisions relevant to defamation and identity misuse.
