Three-hour takedown, AI labels and stricter oversight: What the new rules mean for social media platforms

Government notifies changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, that formally define AI-generated and synthetic content

By Trisha Katyayan

Feb 10, 2026 18:50 IST

The government has set a three-hour deadline for social media platforms to remove content flagged by authorities as violating local laws, under a new order from the Ministry of Electronics and Information Technology. The deadline was previously 36 hours. The new rules take effect on February 20, 2026.

What do the new rules entail?

The government has tightened rules for handling AI-generated and "synthetic" content, including deepfakes, on social media platforms such as X and Instagram; the new rules also require such content to be labelled, news agency PTI reported.

The government notified changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, that formally define AI-generated and synthetic content.

Platforms must now clearly label such content and, wherever technically feasible, embed permanent identifiers or metadata to ensure traceability, the Ministry of Electronics and Information Technology (MeitY) said in a notification.
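For illustration only: the notification does not prescribe any particular labelling mechanism, but one way a platform might attach a persistent, machine-readable label is by writing it into a file's metadata so the tag travels with the content. The sketch below assumes the Pillow imaging library and uses hypothetical field names chosen for this example.

    # Hypothetical illustration: embedding a "synthetically generated" label
    # in a PNG file's metadata using the Pillow library.
    # The amended IT Rules do not mandate this specific mechanism; the field
    # names below are assumptions made for this sketch.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
        """Copy an image while attaching an AI-content label as PNG text chunks."""
        image = Image.open(src_path)

        metadata = PngInfo()
        metadata.add_text("synthetic-content", "true")  # hypothetical field name
        metadata.add_text("generator", generator)       # e.g. the model or tool used

        image.save(dst_path, pnginfo=metadata)

    # Usage (paths and tool name are placeholders):
    # label_as_synthetic("output.png", "output_labelled.png", "example-image-model")

More robust approaches in industry use cryptographically signed provenance metadata (for example, the C2PA standard), which is harder to strip than plain text chunks, but the rules leave the choice of mechanism to platforms wherever it is technically feasible.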

AI-generated content will be treated the same as any other information when determining whether it is unlawful under the IT Rules.

"Synthetically generated information means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event," the notification said.

Use of automated systems

Additionally, intermediaries must deploy automated systems to detect and block unlawful AI-generated content. The rules specifically target deceptive material, sexually exploitative or non-consensual content, false documentation, child abuse material, impersonation, and content related to explosives.

The government also prohibited platforms from removing or changing AI labels once they have been applied. This signals a tougher regulatory stance as concerns about the misuse of generative AI continue to grow.
