On 10-2-2026, the Ministry of Electronics and Information Technology notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 to amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The provisions will come into effect on 20-2-2026.
Amendments in Rule 3 relating to “Due diligence by an intermediary”:
Rule 3 relates to “Due diligence by an intermediary” under which an intermediary, including a social media intermediary, a significant social media intermediary and an online gaming intermediary, will have to observe the following due diligence while discharging its duties:
- Now, an intermediary will have to inform its users, at least once every 3 months, in clear and simple language, about the following:
- Consequences of breaking the platform’s rules: If a user does not follow the platform’s rules, privacy policy, or user agreement, the platform has the right to:
✓ Immediately suspend or terminate the user’s access; and/or
✓ Remove or block access to any content that violates the rules.
- Legal consequences for unlawful content: If the user’s rule-breaking involves creating or sharing content that violates any law, then the user may face penalties or punishment under the relevant laws.
- Mandatory reporting of serious offences: If the violation involves committing an offence under laws such as:
✓ The Bharatiya Nagarik Suraksha Sanhita, 2023;
✓ The Protection of Children from Sexual Offences Act, 2012,
and if the law requires such offences to be reported, then the platform must report the offence to the appropriate authority.
- Where an intermediary offers a computer resource that provides tools to create, edit, or distribute synthetically generated content, then, in addition to its regular notice, it will also have to inform its users about the following:
✓ Misuse of AI-generated content can lead to legal punishment under: the Bharatiya Nyaya Sanhita, 2023; the Protection of Children from Sexual Offences Act, 2012; the Representation of the People Act, 1951; the Indecent Representation of Women (Prohibition) Act, 1986; the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013; and the Immoral Traffic (Prevention) Act, 1956.
✓ On violation, the intermediary can take these actions:
⮚ Immediately remove or disable access to the offending content.
⮚ Suspend or terminate the violating user’s account, in a manner that preserves evidence.
⮚ Identify and disclose the violator’s identity to the complainant, if required by law, especially when the complainant is the victim or is acting for the victim.
⮚ Report any offence under the Bharatiya Nyaya Sanhita, 2023 or the Protection of Children from Sexual Offences Act, 2012 to the appropriate authority, where the law requires such reporting.
- If an intermediary becomes aware, either on its own or through a complaint, grievance, or any information received, of a violation related to the creation or sharing of synthetic information, it must take quick and appropriate action. This includes removing the content, disabling access, suspending or terminating the user’s account, or taking other required measures.
- Where information residing on an intermediary’s computer resource is used to commit an unlawful act relating to contempt of court, defamation, or incitement to an offence relating to the above, or where the information is prohibited under any law, the intermediary will have to remove or disable access to it within 3 hours (reduced from the earlier 36 hours) of receiving actual knowledge.
- Due diligence in relation to synthetically generated information: If an intermediary provides tools that allow users to create or modify AI-generated content, then the intermediary will have to ensure the following:
- The intermediary must use reasonable and appropriate technical measures, such as automated tools, to stop users from creating or sharing AI-generated content that breaks the law.
- It will have to block AI-generated content that:
✓ Includes harmful or sexual content, such as: child sexual abuse material; non-consensual intimate images; obscene, pornographic, pedophilic, vulgar, indecent, or sexually explicit content; or anything invading a person’s privacy, including bodily privacy.
✓ Creates or alters false documents or false electronic records;
✓ Involves the making or obtaining of explosives, arms, or ammunition;
✓ Falsely depicts a real person or real event in a way that can deceive others, including a person’s identity, voice, actions, or statements, or making it seem that an event happened when it did not. (A minimal illustrative gate over these categories is sketched below.)
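As a purely illustrative aside, the blocking obligation above amounts to an automated gate over a fixed set of content categories. The sketch below shows one shape such a gate might take; the category labels paraphrase the Rules rather than quote statutory text, and the classifier is a hypothetical stub, not a real detection model.

    from enum import Enum, auto

    class ProhibitedCategory(Enum):
        # Labels paraphrase the Rules' categories; they are not statutory text.
        HARMFUL_OR_SEXUAL = auto()      # CSAM, intimate images, obscene content
        FALSE_DOCUMENT = auto()         # false documents or electronic records
        WEAPONS = auto()                # explosives, arms, or ammunition
        DECEPTIVE_DEPICTION = auto()    # falsely depicting a real person/event

    def classify(content: bytes) -> set[ProhibitedCategory]:
        """Hypothetical stand-in for an automated detection model."""
        return set()  # placeholder: no prohibited category detected

    def must_block(content: bytes) -> bool:
        """Block creation or sharing if any prohibited category is detected."""
        return bool(classify(content))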
- If the AI-generated content does not fall under the illegal or harmful categories above, then the platform will have to:
✓ Put a clear and noticeable label on visual content stating that it is AI-generated.
✓ Add a clear audio disclosure before audio content.
✓ Embed permanent metadata or technical provenance data, such as a unique identifier or information identifying the platform or tool used to create the AI-generated content.
✓ Ensure this metadata is included where technically feasible.
- The platform should not allow users to remove, edit, or hide metadata, or to tamper with the AI-generated content label, embedded metadata, or unique identifier. This will help users identify that the content is synthetically generated. (One way such embedding and tamper-detection might work is sketched below.)
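To make the metadata requirement concrete, here is a minimal sketch of how a platform might embed a unique identifier and tool information in a PNG file and detect later tampering. This is only an illustration, not what the Rules prescribe: the key names, the signing key, and the HMAC scheme are all hypothetical, and real deployments would more likely adopt an established provenance standard such as C2PA.

    # Illustrative only: embed provenance metadata in a PNG and make it
    # tamper-evident with an HMAC. All names here are hypothetical.
    import hashlib
    import hmac
    import json
    import uuid

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    SIGNING_KEY = b"platform-held-secret"  # hypothetical platform key

    def embed_provenance(src_path: str, dst_path: str, tool_name: str) -> str:
        """Write a unique identifier and tool info into the PNG's text
        chunks, along with an HMAC tag computed over that record."""
        record = {
            "synthetic": True,                # marks content as AI-generated
            "identifier": str(uuid.uuid4()),  # the unique identifier
            "tool": tool_name,                # platform/tool that created it
        }
        payload = json.dumps(record, sort_keys=True)
        tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

        meta = PngInfo()
        meta.add_text("ai_provenance", payload)
        meta.add_text("ai_provenance_tag", tag)
        Image.open(src_path).save(dst_path, pnginfo=meta)
        return record["identifier"]

    def verify_provenance(path: str) -> bool:
        """Return True only if the provenance record is present and untampered."""
        text = Image.open(path).text  # PNG text chunks read back as a dict
        payload = text.get("ai_provenance")
        tag = text.get("ai_provenance_tag", "")
        if payload is None:
            return False  # metadata stripped: treat as tampering
        expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)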
Amendment in Rule 4 relating to “Additional due diligence to be observed by significant social media intermediary and online gaming intermediary”:
- A significant social media intermediary that allows users to display, upload, or publish content will have to follow these steps before the content goes live:
✓ Ask users to declare whether the content is synthetically generated;
✓ Verify the user’s declaration; and
✓ Label AI-generated content.
- If the intermediary fails to act on, or knowingly allows or promotes, AI-generated content that breaks the Rules, the platform will be treated as having failed to follow the due diligence requirements.
- The platform will be responsible for:
✓ taking reasonable and proportionate technical steps to check the correctness of user declarations; and
✓ ensuring that no AI-generated content is published without a proper declaration or label. (A pre-publication check of this kind is sketched below.)
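Read together, Rule 4 describes a three-step pre-publication pipeline: collect a declaration, verify it with proportionate technical measures, and label verified synthetic content. The sketch below is one hypothetical shape for that pipeline; the detector stub and field names are assumptions, not anything the Rules specify.

    from dataclasses import dataclass

    @dataclass
    class Upload:
        content: bytes
        declared_synthetic: bool  # step 1: the user's declaration

    def looks_synthetic(content: bytes) -> bool:
        """Step 2 stub: stands in for the 'reasonable and proportionate
        technical measures' used to verify the declaration."""
        return False  # placeholder verdict from a hypothetical detector

    def pre_publication_gate(upload: Upload) -> dict:
        """Decide labelling before the content goes live (step 3)."""
        detected = looks_synthetic(upload.content)
        return {
            "label_required": upload.declared_synthetic or detected,
            "declaration_mismatch": detected and not upload.declared_synthetic,
        }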
Note: “Synthetically generated information” means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as, indistinguishable from a natural person or real-world event.
- Audio, visual or audio-visual information will not be deemed to be “synthetically generated information” where it arises from:
✓ routine or good-faith editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription, or compression that does not materially alter, distort, or misrepresent the substance, context, or meaning;
✓ the routine or good-faith creation, preparation, formatting, presentation or design of documents, presentations, portable document format (PDF) files, educational or training materials, or research outputs, including the use of illustrative, hypothetical, draft, template-based or conceptual content, where such creation or presentation does not result in the creation or generation of any false document or false electronic record; or
✓ the use of computer resources solely for improving accessibility, clarity, quality, translation, description, searchability, or discoverability, without generating, altering, or manipulating any material part of the underlying audio, visual or audio-visual information.

