Politics
26 October 2025

India Moves To Regulate AI Deepfakes With New IT Rules

A sweeping government amendment seeks to rein in AI-generated misinformation and deepfakes, but critics warn it could undermine privacy and free speech for millions of digital creators.

On October 22, 2025, the Ministry of Electronics and Information Technology (MeitY) in India unveiled a draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The move, which takes aim at the rapidly expanding world of synthetically generated content—including AI-powered deepfakes—marks a watershed moment for digital governance in the world’s largest democracy. Yet, as with any sweeping reform, it has ignited a fierce debate over privacy, free speech, and the delicate balance between technological innovation and public interest.

The government’s motivation is clear. According to The Sentinel Assam, the proliferation of deepfakes and AI-generated misinformation has begun to erode the foundations of democratic discourse and public trust in India. Recent viral videos—one purporting to show the Prime Minister announcing new Rs 2000 notes, another featuring fabricated apologies by national leaders for a fictitious “Operation Sindoor,” and even fake endorsements of financial scams—have stoked alarm about the ease with which truth can be manipulated in the digital age. As Pallab Bhattacharyya, former director-general of police, put it, these incidents “signalled the dawn of a dangerous digital era.”

The 2025 amendment, therefore, is designed to counteract the menace of synthetic media that can defame individuals, manipulate elections, and threaten national security. Under the proposed rules, significant social media intermediaries (SSMIs) must now obtain declarations from content creators whenever they use synthetic tools—like ChatGPT—to generate, modify, or even refine content. Platforms are also required to label such material as “synthetically generated” and, crucially, to act within 36 hours of notification to remove flagged content. AI-altered visuals or voices must be clearly marked, reflecting the government’s determination to preserve authenticity in an online world increasingly shaped by algorithms and artificial intelligence.

But the amendment’s reach doesn’t stop there. As Maktoob Media notes, the definition of “synthetically generated information” under the proposed Rule 2(wa) is so broad that it encompasses any content—videos, images, or text—created, generated, modified, or altered using a computer resource, provided it appears authentic. This means that even something as innocuous as using a grammar improvement tool to polish an Instagram caption could require disclosure and labelling. SSMIs are further obliged to deploy technical measures to verify the accuracy of such declarations, raising the specter of an aggressive, automated takedown regime that may inadvertently stifle artistic freedom and legitimate expression.

Perhaps the most controversial aspect of the amendment is its requirement that intermediaries embed synthetic content with a permanent, unique metadata identifier. Metadata, as privacy advocates have long warned, is far from innocuous. It includes details such as provenance, size, device, date, and time of generation—seemingly mundane data points that, when pieced together, can reveal intimate personal details. The American Civil Liberties Union of California, in its report “Metadata: Piecing Together a Privacy Solution,” cautioned: “Metadata can reveal who we are, who we know, what we do and care about and plan to do next.” The risks are not just theoretical. In 2012, cybersecurity pioneer John McAfee was located by authorities after geographical coordinates embedded in a photo were published online by a magazine—an episode that underscores the very real privacy implications of metadata in the digital era.
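To see how little effort the McAfee-style deanonymisation takes, consider that a photo's EXIF block stores GPS coordinates as degree, minute, and second rationals. A short Python sketch (using hypothetical coordinate values, not those from the actual incident) shows how directly such raw tag values convert into a mappable decimal location:

```python
from fractions import Fraction

def dms_to_decimal(dms, ref):
    """Convert EXIF-style GPS rationals, given as (numerator, denominator)
    pairs for degrees, minutes, and seconds, into a signed decimal
    coordinate. ref is the hemisphere tag: "N", "S", "E", or "W"."""
    deg, mins, secs = (Fraction(*pair) for pair in dms)
    decimal = float(deg + mins / 60 + secs / 3600)
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical GPSInfo tag values, in the raw rational form a JPEG stores.
lat = dms_to_decimal([(15, 1), (39, 1), (2904, 100)], "N")
lon = dms_to_decimal([(88, 1), (42, 1), (2268, 100)], "W")
print(f"{lat:.5f}, {lon:.5f}")  # a point precise to roughly a metre
```

The conversion itself is the trivial part; the privacy risk lies in the fact that cameras and phones write these tags automatically, so anyone who obtains the file, as the magazine's readers did in 2012, obtains the location.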

India’s own legal framework adds another layer of complexity. The Digital Personal Data Protection Act, 2023 (DPDP Act), allows the government to exempt state agencies and private data fiduciaries from privacy provisions in the interest of sovereignty, integrity, security, or public order. This means that, even if metadata is considered personal data, authorities and certain private entities could still be exempt from compliance, raising concerns about misuse and unchecked surveillance. As Maktoob Media points out, the casual adoption of such intrusive methods in a diverse country like India could lead to the digital marginalization of genuine content creators—especially those from marginalized communities who use technology to protect their identities and safely share their stories.

Consider, for example, a Dalit woman using AI-generated graphics and voice modification to highlight caste-based atrocities on YouTube, or a Muslim journalist employing AI software to edit videos documenting crimes against minorities. The requirement to attach metadata to such content could enable authorities—or malicious actors—to track, censor, or intimidate these voices. This, critics argue, would violate the necessity and proportionality principles established in the landmark Supreme Court case K.S. Puttaswamy v. Union of India, which enshrined the right to privacy as a fundamental constitutional guarantee.

Proponents of the amendment, however, see things differently. They argue that the new rules are not arbitrary controls, but rather a necessary evolution in digital governance. As Bhattacharyya observes, the journey from the IT (Intermediaries Guidelines) Rules of 2011—drafted when social media was still in its infancy—through the more robust 2021 and 2025 frameworks, reflects India’s growing recognition that intermediaries are no longer passive carriers of content. Instead, they are active participants in the digital ecosystem, responsible for maintaining the integrity of online communication. Safe harbour protections, once unconditional, are now contingent on due diligence and compliance with lawful orders.

The amendments also draw legitimacy from constitutional principles. Articles 14, 19, and 21 of the Indian Constitution guarantee equality before the law, freedom of expression, and the right to life and personal liberty. Supreme Court rulings, from Shreya Singhal v. Union of India (2015) to Justice K.S. Puttaswamy v. Union of India (2017), have underscored that restrictions on digital speech must be reasonable, transparent, and procedurally fair. By introducing defined grievance procedures, faster complaint resolution, and avenues for appeal, the IT Rules aim to align regulatory oversight with these safeguards—not to silence dissent, but to prevent deception and harm.

India is not alone in this struggle. The 2025 amendment mirrors international efforts to rein in harmful online content. The European Union's Digital Services Act (2022) mandates algorithmic transparency and annual risk audits. The UK's Online Safety Act (2023) compels platforms to protect users, especially minors, from illegal or harmful material. Singapore's Protection from Online Falsehoods and Manipulation Act (POFMA) and Australia's eSafety model similarly empower regulators while safeguarding user rights. India's approach, however, is distinct in its emphasis on multilingual accessibility, user education, and proportional regulation, rather than blanket censorship.

Yet, concerns remain. Critics have decried the short 15-day public consultation period—from October 22 to November 6, 2025—as insufficient for such a consequential regulatory shift. Civil society groups and digital rights advocates are calling for deeper debate, urging policymakers to ensure that regulation evolves through dialogue, not decree. The challenge, as Maktoob Media and The Sentinel Assam both acknowledge, is to strike a balance between swift action against digital deception and the preservation of the freedoms that lie at the heart of India’s constitutional promise.

As technology continues its relentless advance, the ethical frontiers of artificial intelligence, virtual reality, and quantum communication will present new dilemmas. India’s digital governance must remain dynamic, adaptive, and consultative, evolving in step with both technological realities and democratic values. The 2025 amendment may yet be tested in the courts and the court of public opinion, but its legacy will depend on whether it can protect both the authenticity of truth and the vibrancy of free expression in the digital age.