February 11, 2026

From Feb 20, social media must tag AI content or face action

The CSR Journal Magazine

To curb the spread of synthetically generated information, India is tightening its AI rules for audio, visual and audio-visual content. Notified on February 10, the amended rules come into full force on February 20. From that date, all social media platforms will be required to label AI-generated content properly, and any content found to be objectionable must be removed within a mandatory three-hour deadline.

Under the amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, any platform that enables the creation of AI content must label it prominently, clearly stating that the content has been created or altered using AI. Intermediaries must also remove derogatory and harmful content within three hours. The rules further make it mandatory for all social media platforms to use tools that verify user declarations; anyone who fails to disclose properly will be held responsible.

What content is being covered under the new rules?

Under the new rules, any AI-generated audio, video or audio-visual content that appears real enough that users cannot tell whether it is AI-generated or genuine must be clearly labelled as AI-generated. The rule comes amid growing concern over deepfakes and misinformation that are fuelling harassment and other unlawful activities.

Any unlawful content will fall under the category of illegal content. Material involving child sexual abuse, obscene or indecent content, impersonation, false electronic records, or weapons, explosives and other illegal activities will be monitored particularly strictly.

Why is mandatory labelling important?

The centrepiece of the new rules is mandatory labelling. The government has directed social media platforms and other digital intermediaries to ensure that synthetic content is prominently labelled as AI-generated. Platforms have also been directed to embed persistent metadata, such as unique identifiers, so that the platform where the content was first generated can be tracked and traced. The rules are strict on this point: intermediaries are expressly prohibited from removing or tampering with the metadata.

Social media platforms have also been directed to collect declarations at the time of upload: users will be asked whether the content they are posting has been created or altered using AI. Beyond that, platforms must deploy technical measures, including automated tools, to verify the authenticity and accuracy of these declarations. Under the new rules, platforms that fail to do so will be subject to liability.

What will be the timeline for content moderation?

Beyond disclosure, the new rules also shorten the timelines for content moderation. Platforms must now act on complaints within three hours, down from 36 hours. Response deadlines have likewise been cut from 15 days to just 7 days, and from 24 hours to 12 hours, depending on the gravity of the violation.
