Regulating synthetically generated information: India amends IT intermediary rules

Managing IP is part of Legal Benchmarking Limited, 1-2 Paris Gardens, London, SE1 8ND

Copyright © Legal Benchmarking Limited and its affiliated companies 2026



Sponsored by RNA, Technology and IP Attorneys

Ranjan Narula and Parth Bajaj of RNA, Technology and IP Attorneys analyse new obligations concerning synthetically generated information, accelerated takedown provisions, and safe harbour implications amid rising SGI-driven scams

The rapid proliferation of AI technologies has ushered in an era of synthetically generated information (SGI), encompassing deepfakes, AI-altered audiovisual content, and algorithmically manipulated media that blur the line between reality and fabrication. In response, the Indian Ministry of Electronics and Information Technology (MeitY) issued the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (the Amendment Rules) on February 10 2026. The Amendment Rules took effect on February 20 2026 and mark India’s first comprehensive statutory framework for SGI, prioritising transparency, swift enforcement, and accountability amid rising misinformation threats.

These rules respond to India’s exponential digital growth, from 250 million to over 1 billion internet users in less than a decade, which has outpaced digital literacy and fuelled SGI-driven scams. These range from financial fraud to celebrity impersonation, false endorsements, and unauthorised merchandise bearing celebrity names. In the past year, the Delhi and Bombay High Courts have witnessed a flurry of personality rights cases involving SGI impersonations, underscoring a judicial consensus on the harm and injury such impersonations cause.

Definition of synthetically generated information

The cornerstone of the Amendment Rules lies in the introduction of the definition of “synthetically generated information” under Rule 2(1)(wa), a novel definition absent from the original 2021 framework. SGI is defined as any audio, visual, or audiovisual content that is “artificially or algorithmically created, generated, modified or altered using a computer resource” in a manner that depicts or conveys information appearing authentic but which a reasonable person would perceive as realistically depicting persons, events, or scenes that have not occurred. Exclusions carve out bona fide uses, such as routine photo/video editing, academic/research content, watermarking for branding, or AI training data devoid of realistic impersonation.

This precise delineation addresses ambiguities in prior rules while targeting high-risk deepfakes, including realistic forgeries often used for electoral manipulation, defamation, or non-consensual pornography. MeitY further clarifies that the focus is multimodal (images, audio, video) and not just textual content, narrowing the scope to perceptible harms. However, platforms/intermediaries fear overbroad interpretations could interfere with legitimate generative AI tools, potentially stifling innovation without clear technical standards for the term “realistic depiction”.

Labelling and metadata obligations for intermediaries

The amendments to Rule 3(1) impose stringent due diligence obligations on intermediaries, particularly significant social media intermediaries (SSMIs) with over 5 million users. Rule 3(3) mandates prominent disclosure of SGI at upload, prohibiting users from removing or suppressing labels/metadata. Intermediaries must ensure all SGI is embedded with unalterable metadata or unique identifiers tracing the origin, provenance, and alterations, while deploying algorithms to detect and prevent non-disclosure.
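For illustration only, the provenance obligation described above could be approximated as a record that pairs a unique identifier with a cryptographic hash of the content, so that any later alteration of the bytes is detectable. This is a minimal sketch under assumed requirements, not a compliance implementation; all function and field names are hypothetical:

```python
import hashlib
import uuid
from datetime import datetime, timezone

def make_provenance_record(content: bytes, tool: str, alterations: list[str]) -> dict:
    """Build a tamper-evident provenance record for a piece of SGI (illustrative only)."""
    return {
        "identifier": str(uuid.uuid4()),                    # unique ID tracing the origin
        "sha256": hashlib.sha256(content).hexdigest(),      # fingerprint of the content bytes
        "generated_by": tool,                               # originating AI tool or service
        "alterations": alterations,                         # chain of modifications applied
        "created_at": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,                                  # the mandated SGI disclosure flag
    }

def verify(content: bytes, record: dict) -> bool:
    """Return True only if the content still matches its recorded hash."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

media = b"\x00example-synthetic-video-bytes"
rec = make_provenance_record(media, "example-gen-model", ["face swap"])
assert verify(media, rec)                 # untouched content verifies
assert not verify(media + b"edit", rec)   # any byte-level change is detected
```

A plain hash only makes tampering detectable; making the metadata genuinely "unalterable" across edits and sharing chains would require signed, embedded manifests of the kind contemplated by provenance standards such as C2PA, which the Amendment Rules do not themselves prescribe.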

Rule 4(1A), inserted for SSMIs, elevates user declarations from mere endeavours to mandatory verification; i.e., platforms are directed to confirm SGI disclosures before publication and reject non-compliant uploads. The rules emphasise that a clear and prominent marking must be visible to recipients, underscoring permanence and mandating that no intermediary may facilitate label stripping. Non-compliance risks forfeiture of safe harbour immunity under Section 79(1) of the Information Technology Act, 2000, exposing platforms to vicarious liability for user-generated SGI violations.

Accelerated takedown regime and grievance redressal

The enforcement timelines mark a paradigm shift, with a clear obligation on SSMIs to act upon actual knowledge of unlawful SGI received through:

  • An order of a court of competent jurisdiction; or

  • A reasoned intimation from the authorised officer of the appropriate government or its agency.

Upon such notice, SSMIs must remove or disable access to the unlawful SGI, or issue a warning to its publisher, within three hours, slashing the prior 36-hour window.

Grievance officers are also now required to acknowledge complaints within 24 hours and resolve them within seven days (reduced from 15). The rules further require intermediaries to deploy reasonable and appropriate technical measures, such as automated detection, to prevent unlawful SGI prohibited under Rule 3(3)(a)(i), including child sexual exploitative and abuse material (CSEAM), sexually explicit content, and non-consensual intimate imagery.
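One common building block for this kind of automated detection is hash matching against shared databases of known unlawful material. The sketch below assumes a hypothetical blocklist and is illustrative only; the rules themselves do not prescribe any particular technique:

```python
import hashlib

# Hypothetical blocklist of fingerprints of known unlawful content, standing in
# for the industry hash-sharing databases real platforms subscribe to.
KNOWN_UNLAWFUL_HASHES = {
    hashlib.sha256(b"known-bad-sample").hexdigest(),
}

def screen_upload(content: bytes) -> str:
    """Return a pre-publication moderation decision for an upload."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_UNLAWFUL_HASHES:
        return "block"   # exact match against known unlawful material
    return "allow"       # pass through to further (e.g. ML-based) review

assert screen_upload(b"known-bad-sample") == "block"
assert screen_upload(b"harmless-cat-video") == "allow"
```

Exact hashes break under trivial re-encoding or cropping, which is why production systems layer perceptual hashing (PhotoDNA-style fingerprints) and machine-learning classifiers on top of this kind of lookup.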

Challenges for intermediaries/businesses

The labelling requirement for SGI highlights a shift to proactive obligations, akin to the EU’s AI Act watermarking. However, the lack of grace periods for technology upgrades could burden SMEs disproportionately.

Many businesses believe the three-hour deadline is operationally unfeasible without round-the-clock human-AI hybrid moderation, risking erroneous takedowns that chill the free speech guaranteed under Article 19(1)(a) of the Constitution of India.

The amendments explicitly link SGI lapses to a breach of the due diligence requirement. The amendments to Rule 3(2) specify that failure to expeditiously remove flagged content post-notice voids safe harbour protection as an intermediary, thereby incentivising over-removal.

MeitY has clarified that the application of the rules is platform-wide, and not per post, which could lead to platforms pre-emptively censoring ambiguous content to mitigate risks.

While advancing transparency, the new rules raise certain implementation hurdles. The technical feasibility of metadata persistence across edits/sharing chains remains unaddressed, with no government-provided tools being offered under the Amendment Rules.

The cost of implementation and a shortage of compliance talent may also weigh on innovation, as resources are diverted towards compliance.

Final thoughts

While free speech curtailment and operational infeasibility challenges remain, these criticisms overlook India’s scam epidemic, exemplified by the use of SGI in ‘digital arrest’ frauds that exploit low-literacy users.

The 2026 amendments mark a proactive, SGI-centric evolution of India's intermediary liability framework, mandating labelling, verification, and ultra-swift takedowns to combat AI-fuelled misrepresentation and misinformation. By tethering compliance to safe harbour survival, the amended rules compel platforms to internalise harms, fostering a safer digital ecosystem.

However, the success of the new regime hinges on clarificatory guidance, technical aids, and judicial guardrails to balance innovation with rights. Policymakers must monitor efficacy through annual reports, refining ambiguities lest over-regulation stifle the very openness AI promises.
