Why blanket AI labelling may backfire — and what India should do instead.
India’s Ministry of Electronics & IT (MeitY) has proposed draft amendments to the IT Rules, 2021 to tackle deepfakes. While the intent is valid, the draft creates a blanket obligation on platforms to pre-label and verify all “synthetically generated information”, a definition so broad that almost every digitally edited image, video, or reel could qualify.
In this video, I break down:
🔹 What the amendment actually mandates (visible labels covering 10% of the content, provenance metadata, loss of safe harbour)
🔹 Why the definition of “synthetic content” is too vague and over-inclusive
🔹 How forced pre-tagging will lead to over-removal, false positives, and a chilling effect on creativity
🔹 Why no other country in the world forces all AI content to be labelled
🔹 A better middle-path model: pre-emptive containment, not pre-publication censorship
🔹 A proposed framework that protects both citizens and innovation
📌 The problem is not AI — the problem is harmful deception.
📌 India needs a harm-based, accountability-focused law, not a blanket tagging regime.
✅ Download Note
📥 Download the slides used in this video (PDF) — here


