Last updated: February 10, 2026, 17:55 IST
IT Rules 2026 expand regulation to cover AI-generated content such as deepfakes. Social media platforms must label, remove or block misleading content within set time limits or face penalties.
The rules remove ambiguity on AI-generated content. (AI-generated image for representation)
The government on Tuesday notified the new Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026. Issued under the IT Act, 2000, the amendments revise the 2021 rules and apply to all online intermediaries, including social media platforms, messaging services, content-hosting websites and AI-based platforms.
The amendments, effective from 20 February 2026, significantly expand the regulatory scope by bringing artificial intelligence-generated, synthetic and manipulated content, such as deepfakes, voice cloning and algorithmically altered text, images, audio or video, within the definition of regulated online content.
By doing so, the rules remove any ambiguity about whether AI-generated content is covered or not and treat such content on par with other forms of user-generated information under Indian law.
To whom will the new rules apply?
These rules, issued under Section 87 of the IT Act, 2000, amend the 2021 IT Rules. They apply to all intermediaries, including social media platforms, messaging apps, video-sharing platforms, AI-powered content platforms, and any service that hosts, publishes, broadcasts or enables access to user-generated content.
When will they come into effect?
They will come into effect on February 20, 2026.
What are the new definitions?
The rules now clearly define regulated synthetic content as content that:
- Is generated in whole or in part by AI, algorithms or automated systems
- Contains text, images, audio, video or mixed formats
- Includes deepfakes, altered visuals, voice cloning and similar fabricated material
This eliminates ambiguity about whether AI-generated content is regulated – it clearly is.
What content is regulated?
Content is considered regulated if it distorts reality in a misleading manner; is capable of misleading users about facts, identities or events; or is presented as authentic without disclosure.
What does this mean for social media companies and platforms?
Platforms must now take proactive responsibility, not just reactive steps.
a) Reasonable efforts to prevent violations: Intermediaries must stop hosting or disseminating illegal content; use automated tools, human review or other appropriate measures; and regularly review their systems to reduce abuse.
Failure to act despite knowledge is considered non-compliance.
b) Prohibited content categories: Platforms must take action against content that violates Indian laws (criminal, civil, regulatory); threatens national security and public order; contravenes court orders or government directions; or harms user safety and dignity (harassment, impersonation, fraud).
What are the rules on removals and access restrictions?
A response to orders is compulsory when directed by courts or by government officials acting under valid legal powers. Platforms must remove content, disable access or restrict visibility as directed; failure or delay will be treated as a violation of the rules.
What if there is a delay?
The amendments shorten and clarify response timelines, indicating that penalties may be imposed for delays. Partial compliance is not enough.
What is the ‘3-hour window’? When does it apply?
The 3-hour window is an emergency compliance timeline built into the revised IT Rules framework. It applies to exceptional, high-risk situations, not routine complaints.
It is for cases in which an intermediary receives a legal direction relating to material that poses an immediate and serious risk, such as a threat to national security or public order; a risk of violence, riots or mass injury; content involving terrorism, child sexual abuse or serious impersonation; or time-sensitive misinformation likely to cause real-world harm.
In such cases, the platform is expected to remove, block or disable access to the content within 3 hours of receiving the direction.
This window exists because waiting 24 hours may be too late for rapidly spreading digital harm. The three-hour period is neither optional nor advisory. Failure to act within it is considered prima facie non-compliance, even if action is taken later.
What is the labeling clause?
The labeling section primarily deals with AI-generated, synthetic or manipulated content. Intermediaries must ensure that users are not misled into considering synthetic or AI-generated content as genuine, particularly when:
- The identity, voice, image or likeness of a real person is used
- Content may influence public opinion, beliefs or behavior
- Content is presented as factual or authentic
Platforms can comply by labeling content as “AI-generated”, “synthetic” or “manipulated”; adding relevant warnings; reducing visibility or distribution if labeling is not possible; and removing content if it is misleading or harmful.
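As a rough illustration of that labeling-first fallback, and not anything prescribed by the rules themselves, a platform's policy layer might look something like the sketch below; every name, field and threshold here is hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "label, else restrict, else remove" fallback
# described above. Nothing here is mandated by the rules.
@dataclass
class ContentItem:
    content_id: str
    is_synthetic: bool              # flagged as AI-generated or manipulated
    can_be_labeled: bool            # the surface (feed, player) supports an on-screen label
    is_misleading_or_harmful: bool
    labels: list = field(default_factory=list)
    visibility: str = "normal"      # "normal" | "reduced" | "removed"

def apply_synthetic_content_policy(item: ContentItem) -> ContentItem:
    """Label if possible, otherwise restrict distribution; remove if harmful."""
    if not item.is_synthetic:
        return item                              # organic content: nothing to do
    if item.is_misleading_or_harmful:
        item.visibility = "removed"              # misleading or harmful: take it down
    elif item.can_be_labeled:
        item.labels.append("AI-generated")       # disclose clearly to users
    else:
        item.visibility = "reduced"              # cannot label: limit distribution
    return item

# Example: a voice-cloned clip that the UI can label
clip = ContentItem("vid_001", is_synthetic=True, can_be_labeled=True,
                   is_misleading_or_harmful=False)
print(apply_synthetic_content_policy(clip).labels)   # ['AI-generated']
```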
Who is responsible?
Responsibility is shared. Users must not knowingly misrepresent synthetic content as genuine, and platforms must take appropriate steps to detect, flag or label such content upon becoming aware of it.
What happens if platforms do not follow these terms?
For missing the 3-hour window: immediate loss of safe harbor protection; potential criminal or civil liability; and strong grounds for court or government enforcement action.
For violating the labeling requirement: the content may be treated as misleading or unlawful; removal or blocking becomes mandatory; and repeated failure may count as systemic non-compliance.
What are the time limits?
24 hours – Quick response obligation: This is the most frequently triggered time frame.
It applies to material affecting public order, security or sovereignty; complaints regarding illegal, misleading or harmful content; and the initial response to serious user complaints.
What will need to be done?
- Acknowledge the issue
- Take interim action (remove, block, downgrade or restrict access)
- Initiate a formal review
Platforms cannot wait for a full internal assessment before taking action.
36 hours – Certain government directions: This applies where a valid government order specifies this window (carried over and reinforced from the 2021 framework).
Intermediaries should:
- Remove or disable access to the content within 36 hours
- Report compliance where required
Failure counts as non-compliance under due diligence obligations.
72 hours – Information assistance to authorities: When legally required, intermediaries must provide information, data support and user or content details (as permitted by law). This primarily applies to investigations and law-enforcement cooperation.
24 hours – Complaint acknowledgement: For user complaints lodged through the platform’s complaint system, the complaint must be acknowledged within 24 hours. Silence or an automated non-response is considered a failure of the complaint mechanism.
15 days – Final complaint resolution: Platforms must decide and communicate the final outcome within 15 days, explain the reason for action or inaction, and take corrective action if violations are found. Unresolved or ignored complaints weaken the platform’s compliance record.
“Without delay” – Self-detected violations: This applies when a platform detects illegal or prohibited content through AI tools or internal review, or becomes aware of a violation through any other source.
Immediate/ongoing – Repeat offenders: For accounts that repeatedly violate the rules, the platform must take timely action. Continued tolerance may be considered a systemic failure. No fixed hour count is given, but enforcement must be prompt and proportionate.
As specified in the order – Court directions: Courts may set custom deadlines depending on urgency. These override the normal deadlines, and even shorter compliance windows can be imposed.
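Purely as an illustration of how a compliance team might track these windows (the rules do not prescribe any such system), the sketch below maps the timelines summarised above to hard deadlines; the trigger names are assumptions made for the example.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical mapping of the timelines summarised above to hard deadlines.
# Trigger names are illustrative; the rules define these triggers in legal
# language, not as a lookup table.
DEADLINES = {
    "emergency_direction": timedelta(hours=3),    # imminent, serious risk
    "quick_response": timedelta(hours=24),        # public order / serious complaints
    "government_direction": timedelta(hours=36),  # where the order specifies 36 hours
    "information_request": timedelta(hours=72),   # assistance to authorities
    "complaint_acknowledgement": timedelta(hours=24),
    "complaint_resolution": timedelta(days=15),
}

def due_by(trigger: str, received_at: datetime) -> datetime:
    """Return the latest time by which the platform must act."""
    return received_at + DEADLINES[trigger]

# Example: an emergency direction received at 10:00 IST on 20 February 2026
ist = timezone(timedelta(hours=5, minutes=30))
received = datetime(2026, 2, 20, 10, 0, tzinfo=ist)
print(due_by("emergency_direction", received))    # 2026-02-20 13:00:00+05:30
```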
What is expected from companies?
Intermediaries are allowed, and expected, to deploy automated moderation tools, use AI detection systems for harmful or synthetic content, and combine automation with human oversight. However, tools must be proportionate; excessive removals made without review can still be challenged.
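One hedged sketch of that automation-plus-human-oversight pattern (the detector, thresholds and review queue are all assumptions, not anything specified in the rules):

```python
# Illustrative only: route items by detector confidence, sending uncertain
# cases to a human reviewer rather than removing or labeling automatically.
HUMAN_REVIEW_QUEUE: list[dict] = []

def synthetic_score(item: dict) -> float:
    """Placeholder for an AI-detection model; returns a confidence in [0, 1]."""
    return item.get("detector_score", 0.0)

def triage(item: dict, auto_threshold: float = 0.95, review_threshold: float = 0.6) -> str:
    """Auto-label when confident, queue for human review when uncertain."""
    score = synthetic_score(item)
    if score >= auto_threshold:
        return "label"                       # high confidence: label automatically
    if score >= review_threshold:
        HUMAN_REVIEW_QUEUE.append(item)      # uncertain: queue for a human reviewer
        return "pending_review"
    return "no_action"                       # low confidence: leave untouched

print(triage({"detector_score": 0.8}))       # pending_review
```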
What information must platforms provide to users?
Platforms must clearly inform users about prohibited content and explain the consequences, such as content removal, account suspension, and reduced reach or visibility. Users should be able to understand why an action was taken.
What action is taken against repeat or serious offenders?
Platforms may suspend or terminate accounts, restrict posting or sharing features, and limit the visibility of content, especially in cases of repeated violations, serious harm, fraud or coordinated abuse.
Is there any grievance redressal framework?
Platforms must maintain an effective complaint mechanism, act on complaints within the prescribed timelines, and escalate unresolved issues appropriately. Failure to respond to complaints counts against compliance.
What is safe harbor under Section 79?
Section 79 protects intermediaries from liability for user content, but only if they comply with the rules.
Safe harbor is lost if due diligence is not followed; platforms knowingly allow illegal or harmful content; or orders are ignored or delayed.
Once safe harbor is lost, the platform may be sued or prosecuted directly.
Are they consistent with other laws?
The rules are aligned with India's criminal laws (the Bharatiya Nyaya Sanhita), cybersecurity laws, consumer protection laws and intellectual property laws.