
India’s new IT Rules 2026: Deepfake crackdown, mandatory AI labels, 3-hour takedown deadline explained

Amended rules were notified on February 10 and will come into force from February 20, leaving intermediaries with just 10 days to update their policies and technological systems

Reported by: Agencies and social media | Edited by: Jasleen Kaur | February 13, 2026, 02:01 PM

PTC Web Desk: In a significant move aimed at tackling the growing menace of deepfakes and misleading AI-generated content, the Ministry of Electronics and Information Technology (MeitY) has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The amendments were notified on February 10 and will come into force on February 20, leaving intermediaries with just 10 days to update their policies and technological systems.

The move follows months of public debate over celebrity deepfakes, AI-generated child sexual abuse material and the misuse of generative AI tools to create realistic-looking obscene or deceptive content.



Clear legal definition of ‘Synthetic Content’ introduced

For the first time, the rules formally define “synthetically generated information.” The definition covers any audio, visual or audio-visual content that is artificially created, generated, modified or altered using computer resources in a way that makes it appear real, authentic or indistinguishable from a real person or event.

However, the government has drawn a distinction between harmful synthetic content and routine edits. Basic formatting, colour correction, enhancement, compression, noise reduction or other good-faith technical adjustments that do not materially alter the original meaning will not fall within the definition of synthetic content.

This clarification attempts to separate everyday editing practices from deliberately misleading or harmful AI manipulation.
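In pseudocode terms, the distinction reads as a two-part test: what operations were applied, and whether they materially alter the original meaning. The sketch below is a hypothetical illustration of that test; the operation names are drawn from the categories the rules describe, not from any enumerated list in the notified text.

```python
# Hypothetical sketch of the rules' distinction between synthetic
# content and exempt good-faith edits. Operation names are
# illustrative, not quoted from the notified text.

EXEMPT_OPERATIONS = {
    "formatting", "colour_correction", "enhancement",
    "compression", "noise_reduction",
}

def is_synthetic_content(operations: set[str], materially_alters_meaning: bool) -> bool:
    """Decide whether a set of edits falls within the definition."""
    # Good-faith technical adjustments that do not materially alter
    # the original meaning are carved out of the definition.
    if operations <= EXEMPT_OPERATIONS and not materially_alters_meaning:
        return False
    # Everything else that makes content appear real, authentic or
    # indistinguishable from a real person or event is covered.
    return True
```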

Mandatory labelling of AI-generated content

Under the new framework, users who create AI-generated images, videos or audio that appear realistic will be required to clearly label such material as “AI-generated” or synthetic. Failure to do so could result in content takedown, suspension of accounts or other platform-level penalties.

The responsibility will not rest on users alone. Social media intermediaries will be required to deploy tools, automated or otherwise, to detect and label such synthetic uploads.
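For a sense of what machine-readable labelling could look like in practice, here is a minimal sketch using the open-source Pillow imaging library, assuming PNG output. The metadata key names are hypothetical; the rules do not prescribe any particular schema.

```python
# Minimal sketch: attach an "AI-generated" disclosure to an image's
# metadata on save. Assumes the destination path ends in .png so
# Pillow writes PNG text chunks. Key names are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(src_path: str, dst_path: str) -> None:
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # machine-readable flag
    meta.add_text("label", "AI-generated content")  # human-readable label
    image.save(dst_path, pnginfo=meta)
```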

Interestingly, earlier draft rules published in October 2025 had proposed that AI labels should occupy at least 10% of the surface area of visual content and 10% of the duration for audio. The final notified rules do not specify any minimum size requirement, leaving platforms to determine how prominently such labels should be displayed.

Legal experts say this could result in varying standards across platforms.
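As an illustration of what the withdrawn 10% proposal would have meant in practice, the sketch below overlays a full-width banner whose height is 10% of the image, giving a label area of exactly 10% of the surface. The placement and styling are assumptions; the final rules impose no such geometry.

```python
# Sketch of the withdrawn draft proposal: a visible label covering
# 10% of an image's surface area. A full-width banner at 10% of the
# image height satisfies the area requirement. Colours, opacity and
# bottom-edge placement are assumptions.
from PIL import Image, ImageDraw

def add_ai_banner(src_path: str, dst_path: str) -> None:
    image = Image.open(src_path).convert("RGBA")
    width, height = image.size
    banner_height = int(height * 0.10)  # full width x 10% height = 10% of area
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Semi-transparent black banner along the bottom edge.
    draw.rectangle(
        [0, height - banner_height, width, height],
        fill=(0, 0, 0, 160),
    )
    draw.text((10, height - banner_height + 5), "AI-generated",
              fill=(255, 255, 255, 255))
    Image.alpha_composite(image, overlay).save(dst_path, "PNG")
```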

Criminal liability for illegal AI content

The amended rules also warn that users may face criminal consequences if synthetic content violates existing laws. This includes offences under the Protection of Children from Sexual Offences Act (POCSO), the Bharatiya Nyaya Samhita and other applicable statutes.

The rules make it clear that the creation, hosting, sharing, or dissemination of illegal AI-generated material, including child sexual abuse content, obscenity, fraud, or fabricated documents, will attract strict legal consequences.

Social media platforms such as X, WhatsApp and Instagram will now be required to implement enhanced due diligence measures. They must deploy technological tools capable of identifying and blocking synthetic content that violates criminal law, promotes child sexual abuse material, falsely depicts individuals in a deceptive manner or spreads fabricated information such as forged documents or instructions for criminal activities.
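Perceptual hashing is one widely used screening technique for this kind of obligation, matching uploads against fingerprints of previously confirmed illegal material even after resizing or recompression. The rules do not mandate any specific tool; the sketch below uses the open-source imagehash library, with a hypothetical blocklist and distance threshold.

```python
# Sketch: screen an upload against a blocklist of known violating
# images using perceptual hashes. The blocklist value and the
# distance threshold of 5 are assumptions for illustration.
import imagehash
from PIL import Image

# Hashes of previously confirmed violating images (hypothetical value).
BLOCKLIST = {imagehash.hex_to_hash("d1d1b1b1e1e1c1c1")}

def should_block(path: str, max_distance: int = 5) -> bool:
    upload_hash = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(upload_hash - known <= max_distance for known in BLOCKLIST)
```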

Platforms are also required to periodically inform users about these rules, with reminders at least once every three months. The information must be made available in English or any official Indian language.
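In engineering terms, the reminder obligation is a simple scheduling requirement. A minimal sketch using only Python's standard library, reading "at least once every three months" as a 90-day interval (an assumption; the rules do not fix a day count):

```python
# Sketch of the quarterly user-reminder cycle. The 90-day interval
# is one reading of "at least once every three months".
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(days=90)

def next_reminder_due(last_sent: datetime) -> datetime:
    return last_sent + REMINDER_INTERVAL

def reminder_overdue(last_sent: datetime) -> bool:
    return datetime.now() >= next_reminder_due(last_sent)
```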

Tighter takedown deadlines

One of the most notable changes is the drastic reduction in content removal timelines.

Earlier, intermediaries had up to 36 hours to act on government or court-ordered takedown notices. Under the new rules, this window has been reduced to just three hours. For cases involving AI-generated obscene content, particularly where a person’s identity has been misused, platforms must act within two hours of receiving a complaint.

Additionally, grievance officers must now resolve complaints within seven days, compared to the earlier 15-day timeline.
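Taken together, the tiers could be expressed as a simple deadline lookup inside a compliance system, as in the sketch below. The category names are illustrative; the durations are the ones the rules set out above.

```python
# Sketch: compute the response deadline for each complaint tier.
# Category names are hypothetical; durations follow the amended rules
# (3 hours for ordered takedowns, 2 hours for identity-misuse obscene
# content, 7 days for general grievances).
from datetime import datetime, timedelta

DEADLINES = {
    "government_takedown": timedelta(hours=3),
    "ai_obscene_identity_misuse": timedelta(hours=2),
    "general_grievance": timedelta(days=7),
}

def resolve_by(category: str, received_at: datetime) -> datetime:
    """Return the time by which the complaint must be acted upon."""
    return received_at + DEADLINES[category]
```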

Legal experts warn of over-regulation and censorship

While many agree that deepfakes pose serious societal risks, the new framework has sparked concerns about potential overreach.

Some legal experts warn that mandatory automated flagging and extremely short compliance timelines could result in excessive content moderation. Satire, parody, political criticism, and artistic expression, which often rely on context, may be mistakenly flagged or removed by automated systems.

There are also fears that smaller platforms may struggle with the financial and technical burden of compliance. The requirement to deploy advanced detection systems, implement user declarations, and maintain rapid-response compliance teams could increase operational costs significantly.

Some experts suggest that only large technology companies may be able to absorb these costs, potentially leading to market consolidation and reduced competition in India’s digital ecosystem.

Others argue that many of the obligations outlined in the rules already exist in practice. Platforms currently use automated tools, metadata logging, identity verification systems and behavioural analytics as part of trust and safety operations.

From this perspective, detecting synthetic content is seen as an extension of existing moderation frameworks rather than an entirely new compliance burden. Given the societal harm that deepfakes can cause, including reputational damage, financial fraud, and psychological trauma, some legal observers believe courts may view the measures as proportionate.

Impact on startups and innovation

However, compliance costs remain a serious concern, particularly for startups and emerging Indian platforms. Experts note that companies may need to redesign their technical infrastructure specifically for the Indian market. This includes building rapid-response systems capable of meeting the two-hour and three-hour takedown deadlines.

Some platforms may even consider temporarily suspending operations to ensure compliance, rather than risk government penalties.

The requirement for stronger user declarations and verification processes may also alter the open nature of social media platforms, potentially affecting user experience and freedom of expression.

Balancing safety and free speech

The rise of generative AI and deepfake technology has intensified calls from courts and civil society for stronger safeguards. The government’s latest amendments represent one of the most detailed attempts yet to regulate synthetic content in India.

Whether these rules successfully strike a balance between curbing harmful deepfakes and protecting free speech will become clear only after implementation begins.

For now, both platforms and users are on notice: the era of lightly regulated generative AI content in India appears to be over.

- With inputs from agencies
