February 11, 2026

Government Intensifies Regulations on Deepfakes with New Digital Guidelines

The CSR Journal Magazine

In an effort to combat the growing influence of deepfakes and other AI-generated content, the Indian government has introduced significant amendments to its digital regulations. The revisions, effective from February 20, 2026, require mandatory labelling, traceability, and user declarations for AI-generated material. Through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, the government aims to establish a formal regulatory framework for synthetic content.

Regulatory Focus on AI-Generated Content

For the first time, the regulations specifically define “synthetically generated information” (SGI), encompassing deepfake videos, synthetic audio, and manipulated images. Under the new rules, all content created or altered using AI technologies must carry clear and prominent disclosures that are easily visible to users. This includes embedding explicit metadata and unique identifiers that allow the content’s source and the methods used in its creation to be traced. Importantly, once applied, these disclosures cannot be altered, hidden, or removed.

Platform Responsibilities and Compliance Deadlines

Social media platforms are now tasked with stringent responsibilities regarding AI-generated content. Before any content is published, intermediaries must ensure that users declare its AI origin. To facilitate this, platforms must deploy automated tools capable of verifying these declarations by analysing the content’s format, source, and other identifying characteristics. If content is determined to be synthetic, it must be labelled accordingly. Failure to act on unlabelled AI-generated content may result in the platform being classified as non-compliant with its due diligence obligations.

Significantly, the amended rules have also accelerated compliance timelines. Platforms are now mandated to respond to lawful takedown requests within three hours, a drastic reduction from the previous 36-hour period. Other response times have also been tightened, reducing a 15-day window to seven days and a 24-hour window to 12 hours. Furthermore, user grievances must be acknowledged within a two-hour timeframe and resolved within a week.

Oversight and User Protection Measures

The Ministry of Electronics and Information Technology will oversee the implementation and enforcement of these regulations. Users will also have the option to appeal decisions made by platforms to a grievance appellate committee, ensuring an additional layer of oversight.

Misuse of synthetic content, including but not limited to child sexual abuse material, fraudulent electronic records, impersonation, or content linked to explosives, will attract severe penalties under multiple criminal statutes. In addition, social media platforms must alert users at least once every three months about the penalties associated with the misuse of AI-generated content.

Through these measures, the government aims to safeguard digital spaces and protect individuals and communities from the harmful effects of unregulated AI-generated material.
