SCC Times

IT Amendment Rules 2026: AI & Intermediary Compliance


On 10-2-2026, the Ministry of Electronics and Information Technology notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 to amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The provisions will come into effect on 20-2-2026.

Amendments in Rule 3 relating to “Due diligence by an intermediary”:

Rule 3 relates to “Due diligence by an intermediary” under which an intermediary, including a social media intermediary, a significant social media intermediary and an online gaming intermediary, will have to observe the following due diligence while discharging its duties:

  1. Now, an intermediary will have to inform its users, at least once every three months, in clear and simple language, about the following:

  2. If an intermediary becomes aware, either on its own or through a complaint, grievance, or any information received, of a violation related to the creation or sharing of synthetic information, it must take quick and appropriate action. This includes actions such as removing the content, disabling access, suspending or terminating the user’s account, or taking other required measures.

  3. If an intermediary’s computer resource hosts information used to commit an unlawful act relating to contempt of court, defamation, or incitement to an offence relating to the above, or information that is prohibited under any law, the intermediary will have to remove or disable access to such information within 3 hours of receiving actual knowledge of it, down from the earlier 36 hours.

  4. Due diligence in relation to synthetically generated information: If an intermediary provides tools that allow users to create or modify AI-generated content, then the intermediary will have to ensure the following:

    • The intermediary must use reasonable and appropriate technical measures—such as automated tools—to stop users from creating or sharing AI-generated content that breaks the law.

    • It will have to block AI-generated content that:

      ✓ Includes harmful or sexual content, such as child sexual abuse material; non-consensual intimate images; obscene, pornographic, pedophilic, vulgar, indecent, or sexually explicit content; or anything invading someone’s privacy, including bodily privacy;

      ✓ Creates or alters false documents or false electronic records;

      ✓ Involves the making or obtaining of explosives, arms, or ammunition;

      ✓ Falsely depicts a real person or real event in a way that can deceive others, including a person’s identity, voice, actions, or statements, or making it seem like an event happened when it did not.

    • If the AI-generated content does not fall under the illegal/harmful categories above, then the platform will have to:

      ✓ Put a clear and noticeable label on visual content stating that it is AI-generated.

      ✓ Add a clear audio disclosure before audio content.

      ✓ Embed permanent metadata or technical provenance data, such as a unique identifier or information identifying the platform or tool used to create the AI-generated content;

      ✓ Ensure this metadata is included where technically feasible.

    • This will help users identify that the content is synthetically generated.

    • The platform should not allow users to remove, edit, or hide the AI-generated content label, the embedded metadata, or the unique identifier, or otherwise tamper with them.
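As one illustration of the provenance duty described above, a platform could attach a tamper-evident provenance record to each piece of generated content. The sketch below is a minimal, hypothetical Python example; the field names and structure are assumptions for illustration, not anything prescribed by the Rules. It generates a unique identifier, records the creating platform/tool, and binds the record to the content via a SHA-256 digest so later tampering is detectable.

```python
import hashlib
import json
import uuid

def make_provenance_record(content: bytes, tool_name: str) -> dict:
    """Build a provenance record for a piece of AI-generated content.

    Carries a unique identifier and the platform/tool that created the
    content, and binds itself to the content via a SHA-256 digest so
    tampering can be detected. (Field names are illustrative only.)
    """
    return {
        "id": str(uuid.uuid4()),                # unique identifier
        "tool": tool_name,                      # platform/tool used
        "label": "synthetically generated",     # visible disclosure text
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Return True only if the content still matches its provenance record."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

if __name__ == "__main__":
    data = b"example AI-generated media bytes"
    record = make_provenance_record(data, "ExampleGen v1")
    print(json.dumps(record, indent=2))
    print(verify_provenance(data, record))          # True
    print(verify_provenance(data + b"x", record))   # False: content was altered
```

In practice such records are embedded in the media file itself (the Rules speak of "permanent metadata"); a detached record as above only shows the binding idea.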

Amendment in Rule 4 relating to “Additional due diligence to be observed by significant social media intermediary and online gaming intermediary”:

  1. A significant social media intermediary that allows users to display, upload, or publish content will have to follow these steps before the content goes live:

    • Ask users to declare whether the content is synthetically generated;

    • Verify the user’s declaration;

    • Label AI-generated content.

  2. If the intermediary fails to act on, or knowingly allows or promotes, AI-generated content that breaks the Rules, the platform will be treated as having failed to follow the due diligence requirements.

  3. The platform will be responsible for:

    • taking reasonable and proportionate technical steps to check the correctness of user declarations;

    • ensuring that no AI-generated content is published without a proper declaration or label.
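The pre-publication workflow above (declare, verify, label) can be sketched as a simple gating function. The following is a hypothetical Python sketch, not a prescribed implementation: `looks_synthetic` is a stand-in for whatever "reasonable and proportionate technical step" a platform actually uses to check a user's declaration (here it just looks for an assumed `ai:` marker in the content).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    content: bytes
    declared_synthetic: bool      # the user's own declaration
    label: Optional[str] = None   # set before the content goes live

def looks_synthetic(content: bytes) -> bool:
    """Placeholder for the platform's technical verification step
    (e.g. a classifier or a provenance-metadata scan). Hypothetical
    stub: treats content carrying an 'ai:' marker as synthetic."""
    return content.startswith(b"ai:")

def review_before_publish(upload: Upload) -> Upload:
    """Apply the declare -> verify -> label steps before publication."""
    detected = looks_synthetic(upload.content)
    # Verify the declaration: label if the user declared it synthetic
    # OR the platform's own check detects it, so an incorrect
    # declaration does not let unlabelled content through.
    if upload.declared_synthetic or detected:
        upload.label = "synthetically generated"
    return upload
```

The key design point, under this reading of Rule 4, is that the label depends on the platform's verification as well as the user's declaration, so no AI-generated content is published without a label merely because the user declined to declare it.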

Note: “Synthetically generated information”:

  1. It means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event;

  2. Audio, visual or audio-visual information will not be deemed to be “synthetically generated information” where it arises from:

    • routine or good-faith editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription, or compression that does not materially alter, distort, or misrepresent the substance, context, or meaning;

    • the routine or good-faith creation, preparation, formatting, presentation or design of documents, presentations, portable document format (PDF) files, educational or training materials, or research outputs, including the use of illustrative, hypothetical, draft, template-based or conceptual content, where such creation or presentation does not result in the creation or generation of any false document or false electronic record;

    • the use of computer resources solely for improving accessibility, clarity, quality, translation, description, searchability, or discoverability, without generating, altering, or manipulating any material part of the underlying audio, visual or audio-visual information.
