
AI & Deepfake Law in India IT Rules 2026 Amendment Tightens Control Over AI-Generated Content

On 10 February 2026, the Government of India, through the Ministry of Electronics and Information Technology (MeitY), notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, amending the IT Rules, 2021. This development marks a major shift in AI and deepfake law in India, strengthening oversight of synthetic AI content and imposing stricter compliance obligations on social media platforms.

These new rules come into effect from 20 February 2026.

The core objective?
To ensure an open, safe, trusted, and accountable internet, especially in an era where AI can create hyper-realistic fake content within minutes.

Why MeitY Amended the IT Rules, 2021 to Regulate AI Content

Artificial Intelligence has made it extremely easy to generate realistic content — from cloned voices to deepfake videos of public figures. While AI supports innovation, education, and accessibility, it also carries serious risks when misused.

Misuse of AI-generated content can lead to:

  • Fake news and public misinformation
  • Identity fraud and impersonation
  • Non-consensual intimate content (NCII)
  • Reputation damage and blackmail
  • Mental and emotional harm
  • Forged documents and fraudulent records

Recognizing these risks, the government introduced stricter clarity, faster action timelines, and stronger accountability for online platforms.

What is Synthetically Generated Information (SGI)?

The amendment focuses heavily on regulating AI-generated synthetic content known as SGI. It refers to AI-created or AI-modified content that:

  • Looks or sounds real
  • Depicts fake people or events as genuine
  • Can mislead viewers into believing it is authentic

Examples of SGI:

  • Deepfake videos
  • Voice cloning
  • AI-generated realistic images
  • Fake riot or attack videos
  • AI-made fake ID cards or certificates

📌 If AI creates something that looks real and can fool people, it is SGI.

What Is NOT Considered SGI?

Not all AI use is treated as synthetic content. The following are not SGI, as long as they do not mislead:

Basic Editing

  • Increasing brightness
  • Noise reduction
  • Video compression
  • Adding subtitles
  • Stabilising shaky footage
  • Translating content without altering meaning

Normal AI Assistance

  • Creating PowerPoint slides
  • Generating training diagrams
  • Drafting sample notices
  • Formatting documents
  • Writing hypothetical case studies

📌 AI becomes SGI only when it creates realistic fake content that can deceive or cause harm.

Do the Amended IT Rules, 2021 Apply to AI-Generated Text?

The strict SGI framework mainly applies to:

  • Audio
  • Images
  • Videos

Pure text content alone is not classified as SGI.

📌 However, text can still be illegal under other applicable laws if it spreads misinformation or violates legal provisions.

How Fast Must Social Media Platforms Remove Content Under India’s New AI & Deepfake Rules?

One of the biggest changes is the drastic reduction in response time.

1. Court or Government Order

Content must be removed within 3 hours
Earlier: 36 hours

2. General Complaints

Must be resolved within 7 days
Earlier: 15 days

3. Urgent Cases (Impersonation / Deception)

Action within 36 hours
Earlier: 72 hours

4. Nudity / Morphed Intimate Images

Removal within 2 hours
Earlier: 24 hours

This signals a clear move toward victim-first digital protection.
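For platforms building compliance tooling, the four timelines above translate naturally into a deadline lookup. The sketch below is illustrative only, under the assumption of hypothetical category labels (the Rules do not prescribe these names or any particular schema):

```python
from datetime import datetime, timedelta

# Removal deadlines (in hours) under the 2026 amendment.
# Category keys are hypothetical labels chosen here for illustration.
REMOVAL_DEADLINES_HOURS = {
    "court_or_government_order": 3,    # earlier: 36 hours
    "general_complaint": 7 * 24,       # 7 days; earlier: 15 days
    "impersonation_or_deception": 36,  # earlier: 72 hours
    "intimate_imagery": 2,             # earlier: 24 hours
}

def removal_due_by(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which content must be removed
    for a complaint of the given category received at received_at."""
    hours = REMOVAL_DEADLINES_HOURS[category]
    return received_at + timedelta(hours=hours)
```

For example, a government order received at noon on 20 February 2026 would require removal by 3 p.m. the same day under the new three-hour rule.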

New IT Compliance Rules for Social Media Platforms in India, 2026

1. Quarterly User Warnings

Platforms must now inform users every 3 months about:

  • Prohibited content including deepfakes
  • Risk of account suspension
  • Legal consequences of misuse

Previously, this was required only once a year.

2. Mandatory Warnings for AI Tools

If a platform offers AI content creation tools such as image generators or voice cloning tools, it must:

  • Warn users against misuse
  • Inform users of legal penalties
  • Display warnings during sign-up and periodically

3. Immediate Action Against Violators

If a user misuses SGI:

  • Content must be removed
  • Account may be suspended or terminated
  • Evidence must be preserved
  • User details may be shared with authorities if legally required

AI Content Labelling Requirements Under India’s IT Rules 2026

Legal synthetic content is allowed, but it must be transparent.

Platforms must ensure:

  • Clear label stating “Synthetically Generated”
  • Audio disclosure for AI-generated voice
  • Embedded metadata or unique identifiers
  • Labels and metadata cannot be removed

This ensures responsible innovation without deception.
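As a rough illustration of what an embedded-metadata record might contain, the following sketch builds a label, a unique identifier, and a content hash for a piece of synthetic media. The field names and schema are assumptions for illustration; the Rules mandate a visible label and embedded metadata or identifiers, not this exact format:

```python
import hashlib
import uuid

def label_synthetic_content(content: bytes) -> dict:
    """Build an illustrative metadata record for AI-generated content.

    The schema here is hypothetical: the Rules require a clear
    "Synthetically Generated" label plus embedded metadata or a
    unique identifier, but do not prescribe these field names.
    """
    return {
        "label": "Synthetically Generated",
        # Unique identifier for tracing this piece of content.
        "identifier": str(uuid.uuid4()),
        # Hash ties the record to the exact bytes it labels, so
        # tampering with the content invalidates the record.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
```

Binding the record to a content hash is one way to make the label hard to strip without detection, in the spirit of the rule that labels and metadata cannot be removed.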

Additional Compliance Rules for Large Social Media Platforms (SSMIs)

Significant Social Media Intermediaries (SSMIs), including large platforms such as Meta-owned Facebook and Instagram, X, and Google-operated YouTube, face stricter obligations.

They must:

  • Require users to declare if content is AI-generated
  • Technically verify the declaration
  • Apply clear labels before publishing
  • Not rely solely on user honesty
  • Act strictly if unlawful SGI is promoted

If platforms knowingly allow harmful SGI to circulate, they risk losing safe harbour protection under Section 79 of the IT Act.

However, if they remove harmful content as required under the rules, including through automated tools, they retain legal protection.

Prevention Obligations Under Rule 3(3) of the IT Rules, 2021

Platforms offering AI tools must use technical safeguards to prevent:

  • Deepfake impersonation
  • Non-consensual intimate imagery
  • Child sexual abuse material
  • Fake government documents
  • Explosives or arms tutorials
  • Fabricated event videos

This shifts responsibility from reactive removal to proactive prevention.

How do AI regulations apply to satire, artistic expression, and educational content?

Creative and lawful uses remain allowed, including:

  • Satire
  • Art
  • Educational material
  • Accessibility tools

As long as:

  • The content does not violate the law
  • It is properly labelled if synthetic
  • It does not create fake official records or cause harm

Responsible AI use is protected.

How the Amended IT Rules, 2021 and the DPDP Act, 2023 Benefit Individuals

While the Digital Personal Data Protection Act 2023 (DPDP Act) focuses on personal data processing and privacy obligations, the MeitY IT Amendment Rules 2026 specifically regulate harmful AI-generated synthetic content and intermediary accountability.

Together, these frameworks strengthen India’s broader digital governance ecosystem.

When Do the Amended IT Rules, 2021 Take Effect?

The amendment becomes operational on 20 February 2026.

All platforms must comply from this date onward.

What the Amended IT Rules, 2021 Mean for Social Media Companies: Meta, YouTube, and X

The 2026 amendment represents a clear shift:

  • From slow response to rapid removal
  • From passive hosting to active prevention
  • From vague AI regulation to defined SGI classification
  • From annual warnings to continuous accountability

India’s IT Rules empower the government to order removal of content deemed illegal under any Indian law, including those related to national security and public order.

In recent years, thousands of takedown orders have been issued.

According to transparency disclosures, Meta alone restricted more than 28,000 pieces of content in India in the first six months of 2025 following government requests. Meta declined to comment on the latest changes, while X and Alphabet-owned Google, which operates YouTube, did not immediately respond to requests for comment.

There is also mounting global pressure on social media companies to police content more aggressively, with governments from Brussels to Brasilia demanding faster takedowns and greater accountability.

For companies like Meta, YouTube, and X, the new three-hour removal rule could present significant operational challenges, requiring near real-time legal review, automated detection systems, and round-the-clock compliance teams.

As Akash Karmakar, partner at Indian law firm Panag and Babu specialising in technology law, stated:

“It’s practically impossible for social media firms to remove content in three hours. This assumes no application of mind or real world ability to resist compliance.”

Keep yourself updated on India’s IT Rules 2026 and AI regulations. Reach out to Prime Infoserv for expert advice.
