India has proposed sweeping amendments to its IT Rules that would, for the first time, legally define “synthetically generated information” and require visible and persistent labeling across images, video, and audio. The mandate applies to every significant social platform operating in India; losing safe-harbor protection is the stick. Public comments close Nov 6, 2025.
What’s happening
- Mandatory labels: AI-generated or AI-altered visuals must display markers covering at least 10% of the frame; audio must carry a disclosure within the first 10% of its duration. Labels must also be embedded in tamper-resistant metadata (a sizing sketch follows this list).
- Scope: Anything “artificially or algorithmically created, generated, modified or altered” in a way that can appear “authentic or true.”
- Who’s on the hook: “Significant Social Media Intermediaries” (SSMIs) — think WhatsApp, Instagram, YouTube, Facebook, X — must collect user declarations, deploy “reasonable and proportionate technical measures” to verify, and preserve labels end-to-end.
- Enforcement: Non-compliance risks loss of Section 79 safe harbor, exposing platforms to liability for user posts.
- Clock: The draft is open for feedback until Nov 6, 2025; final notification is expected soon after.
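To make the ≥10% visual-coverage requirement concrete, here is a rough sketch of how a platform might stamp a banner onto an image using Pillow. Only the coverage threshold comes from the draft; the banner placement, wording, and styling below are illustrative assumptions.

```python
# Illustrative only: stamp a visible "AI-GENERATED" banner covering >=10% of
# the image area. The draft mandates the coverage, not this particular design.
from PIL import Image, ImageDraw

def stamp_synthetic_label(path_in: str, path_out: str, text: str = "AI-GENERATED") -> None:
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    # A full-width banner whose height is 10% of the image height covers
    # exactly 10% of the frame area (w * 0.1h == 0.1 * w * h).
    banner_h = max(1, round(h * 0.10))
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, h - banner_h, w, h], fill=(0, 0, 0))
    draw.text((10, h - banner_h + banner_h // 3), text, fill=(255, 255, 255))
    img.save(path_out)
```

Audio would need the analogous treatment: a spoken or tonal disclosure placed inside the first 10% of the clip’s duration.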
Why now
- Election-grade deepfakes: 2024 polls saw AI videos and voices of public figures circulate at scale.
- High-profile harms: Cases like the Sadhguru deepfake moved the debate from nuisance to rights and safety.
- Global trendline: EU’s AI Act transparency duties and China’s synthetic-media rules have normalized labeling regimes.
- India’s leverage: With India a top-two market for most AI products, New Delhi can impose demanding obligations and expect compliance.
How the rule actually works
- Definition: “Synthetically generated information” includes any computer-aided creation or modification likely to appear real.
- User flow: Uploaders must declare whether content is synthetic.
- Platform duty: Use technical controls to verify declarations, label content (on-screen + metadata), and block label removal on re-shares (a minimal pipeline sketch follows this list).
- Redressal: Flag, take down, or re-label when disputes arise; keep auditable trails to retain safe harbor.
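Here is a minimal sketch of that declare–verify–label–audit loop. The detector, the 0.8 threshold, and the storage format are placeholders, not anything prescribed by the draft, which asks only for “reasonable and proportionate technical measures” and a defensible record.

```python
# Sketch of the declare -> verify -> label -> audit flow. Detector and
# threshold are placeholders; a real system would plug in actual classifiers.
import hashlib
import json
import time
from dataclasses import dataclass

@dataclass
class Upload:
    content: bytes
    user_declared_synthetic: bool

def looks_synthetic(content: bytes) -> float:
    """Placeholder score in [0, 1]; a real system would call one or more detectors."""
    return 0.0

def process_upload(upload: Upload, audit_log: list) -> dict:
    score = looks_synthetic(upload.content)
    labeled = upload.user_declared_synthetic or score >= 0.8
    record = {
        "sha256": hashlib.sha256(upload.content).hexdigest(),
        "declared": upload.user_declared_synthetic,
        "detector_score": score,
        "labeled_synthetic": labeled,
        "timestamp": time.time(),
    }
    audit_log.append(json.dumps(record))  # the auditable trail backing safe harbor
    return record
```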
What changes for whom
For users
- Clearer provenance cues on feeds; expect more “AI-assisted” tags on everyday edits and filters.
- More friction on uploads (declaration prompts), possible false positives, and slower publishing in edge cases.
For creators & newsrooms
- If you used Lightroom, Auto-Tune, face retouching, voice clean-ups, or AI B-roll, the labeling requirement may be triggered. This is the chilling-effects zone unless the definitions are narrowed.
- Visible 10% watermarks can mar the aesthetics of photography, design work, and film promos.
For platforms
- Build (or buy) detection pipelines; store, propagate, and integrity-check labels; redesign players to show visible markers (see the label-integrity sketch after this list).
- Operate under legal risk when detectors miss adversarial fakes — or over-label authentic posts.
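One way to make labels “integrity-checked” across re-shares is to bind them to the content bytes with a keyed MAC, so a stripped or tampered label is detectable server-side. A toy sketch, assuming the platform controls the signing key; real deployments would use a managed key service and a standard credential format such as C2PA.

```python
# Toy sketch: bind a synthetic-content label to the file bytes with an HMAC so
# tampering or stripping is detectable when the item is re-shared.
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-managed-key"  # illustrative; use a KMS in practice

def make_label(content: bytes, synthetic: bool) -> dict:
    payload = {"sha256": hashlib.sha256(content).hexdigest(), "synthetic": synthetic}
    mac = hmac.new(SIGNING_KEY, json.dumps(payload, sort_keys=True).encode(), hashlib.sha256)
    return {"payload": payload, "mac": mac.hexdigest()}

def label_intact(content: bytes, label: dict) -> bool:
    expected = make_label(content, label["payload"]["synthetic"])
    return hmac.compare_digest(expected["mac"], label["mac"])
```

Note that any re-encode changes the bytes and breaks the match, which is exactly the metadata-fragility problem flagged below.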
The hard problems
- Overbreadth: “Modified or altered” can sweep in benign edits (exposure, grammar, filters). Without an intent-to-deceive or material-alteration test, over-labeling is likely.
- Detection limits: At Internet scale, current classifiers swing between false positives (angering users) and false negatives (legal risk); adversarial content evades quickly.
- Metadata fragility: Screenshots, re-encodes, and screen-captures often strip or corrupt metadata, undermining “permanent” provenance.
- Label fatigue: If everything wears an AI tag, users may tune out — and unlabeled fakes may paradoxically gain credibility.
- Compliance asymmetry: Big Tech can absorb costs; startups, creators, and regional apps may struggle, creating a moat for incumbents.
What experts and industry are signaling
- Intent praised: Guardrails against impersonation and election manipulation are overdue.
- Execution worries: Definition breadth, feasibility of verification, and UX damage from a hard 10% rule top the list.
- Pragmatic path: Co-drafting technical standards, phasing rollouts, and recognizing provenance-standards compliance as a “safe harbor enhancer.”
What to watch next
- Now–Nov 6: NASSCOM/IAMAI filings; platforms propose C2PA-style compliance and narrower definitions.
- Immediately after notification: Platform UX changes (upload declarations, visible markers), surge in vendor RFPs for deepfake detection and provenance.
- Quarter ahead: The first enforcement actions will show how the rules work in practice: a) warnings for under-labeling, b) model notices, or c) a landmark safe-harbor challenge.
FAQs
1) Does every edited photo need a giant watermark?
Not necessarily — as drafted, the rule could be read that way. Expect the final text (or a standard) to limit big, on-screen labels to synthetic or materially deceptive edits, with metadata labels for routine enhancements.
2) Can platforms actually detect AI reliably?
Not with high confidence at national scale, at least not today. Expect “best-efforts” approaches: provenance standards, model-ensemble detectors, anomaly scoring, and responsive takedown SLAs (a toy scoring sketch follows).
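For a sense of what “best efforts” might look like in code, here is a toy ensemble that averages detector scores and routes borderline items to human review. The detector functions and both thresholds are made up for illustration.

```python
# Toy ensemble: average several detector scores and route borderline items to
# human review. Detector functions and thresholds are illustrative only.
from statistics import mean
from typing import Callable, List

Detector = Callable[[bytes], float]  # each returns P(synthetic) in [0, 1]

def triage(content: bytes, detectors: List[Detector],
           review_at: float = 0.5, auto_label_at: float = 0.9) -> str:
    score = mean(d(content) for d in detectors)
    if score >= auto_label_at:
        return "label-as-synthetic"
    if score >= review_at:
        return "human-review"  # borderline: avoid false positives on authentic footage
    return "no-label"
```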
3) Won’t bad actors just strip metadata?
Some will. That’s why open, cryptographically bound content credentials, combined with server-side integrity checks and penalties for removal, are important (a server-side recovery sketch follows). Labels alone aren’t a silver bullet.
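One complementary, server-side idea, offered here as an assumption rather than anything the draft spells out: keep a registry keyed by a fingerprint of the content recorded at first upload, and re-attach the provenance record when a stripped copy reappears. Sketched below with a plain hash; production systems would need perceptual hashes to survive screenshots and re-encodes.

```python
# Sketch: recover provenance for a re-upload whose metadata was stripped, via a
# registry keyed by content hash. A plain SHA-256 only matches byte-identical
# copies; perceptual hashing would be needed to survive screenshots/re-encodes.
import hashlib

provenance_registry: dict[str, dict] = {}  # content fingerprint -> provenance record

def register(content: bytes, record: dict) -> None:
    provenance_registry[hashlib.sha256(content).hexdigest()] = record

def recover_provenance(content: bytes) -> dict | None:
    return provenance_registry.get(hashlib.sha256(content).hexdigest())
```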
4) What happens if a platform mislabels my work?
You’ll need an appeals process. The final rules should require time-bound review, restoration, and a record so creators aren’t penalized by algorithmic mistakes.
5) How does this interact with free speech?
Labeling regulates provenance, not speech content. But over-broad labeling or error-prone detectors can chill expression. Narrow definitions and due-process safeguards matter.
6) Will WhatsApp forwards be labeled?
If the media object contains a valid credential/metadata, yes; otherwise the client may show a “source unknown / verification unavailable” affordance. The implementation details will decide usability.
