India Social Media Laws 2026: Platforms Must Remove Unlawful Content in Three Hours, Auto-Detect Illegal AI

Sebastian Hills
Image Credit Rakesh Mondal on Unsplash

India’s government has rolled out stringent amendments to its Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandating that social media platforms and intermediaries remove unlawful content within three hours of notification and deploy automated tools to detect and block the spread of illegal AI-generated material, including deepfakes.

The updated rules, effective February 20, 2026, slash the previous 36-hour takedown window for flagged unlawful content to just three hours, applying to major platforms like Meta’s Facebook and Instagram, Google’s YouTube, and Elon Musk’s X. Intermediaries must now prominently label all AI-generated or synthetically altered audio, visual, or audiovisual content to alert users of potential deception. Platforms are required to implement automated detection systems to proactively identify and block illegal AI content, such as child abuse material, impersonation, false documents, and non-consensual imagery.

Non-compliance risks the loss of safe harbor protections under Section 79 of the IT Act, exposing platforms to liability for user-generated content. User grievance redressal timelines have also been shortened, adding to the operational burden on intermediaries.


The amendments, notified on February 10, 2026, represent India’s most aggressive stance yet on content moderation, reinforcing its position as one of the world’s strictest regulators of online platforms. Critics argue the three-hour window is impractical and could lead to over-censorship, while the government maintains it’s necessary to combat misinformation, deepfakes, and other harms in a digital landscape serving over 900 million internet users.

Similar measures have emerged globally, with the EU’s Digital Services Act imposing rapid takedowns and transparency for AI content, but India’s accelerated timeline could set a new benchmark, or spark legal challenges from platforms balancing compliance with free speech. As AI proliferation accelerates deepfake threats, these rules may influence how tech giants operate in emerging markets, potentially reshaping global content moderation standards.
