Deepfakes & Digital Privacy: How the IT Rules 2026 and DPDP Act Together Protect Indian Users
The emergence of generative AI has driven remarkable innovation, but it has also unleashed an unprecedented wave of digital deception. Hyper-realistic deepfakes now threaten privacy, financial security, and democracy itself. Recognizing this growing crisis, India has overhauled its deepfake laws. The legal framework now provides robust protection to digital citizens by combining the rapid-response requirements of the recently amended IT Rules 2026 with the privacy safeguards of the Digital Personal Data Protection Act 2023.
The True Cost of Synthetic Deception
Before turning to legal solutions, it is crucial to acknowledge the real harm that AI-generated media can cause. Deepfakes are no longer a niche technological gimmick; they are a mass menace capable of inflicting emotional, reputational, and financial damage. Statistics suggest that anyone with an online presence should be concerned. According to recent industry reports, deepfake-related cybercrime in India has increased by over 550% since 2019, and experts estimate that deepfake fraud will cost India thousands of crores by 2025. Approximately 47 percent of adult Indians report having encountered an AI voice clone or deepfake scam, almost twice the global average. Cases range from deepfaked company executives authorizing fraudulent wire transfers to victims of non-consensual intimate imagery (NCII).
IT Rules 2026: The Rapid Response Mechanism
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, notified in February 2026, shift the burden of verification onto digital platforms. The amendment serves as a rapid-response mechanism to counteract the spread of malicious deepfakes online.
The 3‑Hour Takedown Mandate
Indian law now defines synthetically generated information (SGI) and enforces very strict removal timelines. Previously, platforms had up to 36 hours to remove illegal content, an eternity in the viral age. Under the new rules, this window shrinks to 3 hours for court-ordered or government-notified deepfakes.
Mandatory AI Labeling & Exemptions
Transparency is now mandatory. The IT Rules 2026 require that visual and audio deepfakes carry conspicuous disclaimers and permanent metadata (digital fingerprints) that allow the source to be traced. The rules also carve out sensible exemptions for good-faith editing, such as routine smartphone touch-ups or text-to-speech accessibility tools, so that everyday innovation is not wrongfully demonised.
The Loss of Safe Harbour Protection
If a social media platform fails to apply these labels or misses the takedown windows for malicious SGI, it loses its safe-harbour immunity under Section 79 of the IT Act. The platform itself can then be sued or even criminally charged as though it had created the illegal material, forcing intermediaries to comply actively rather than host passively.
The DPDP Act: The Proactive Safeguard
While the IT Rules serve as an emergency measure, the Digital Personal Data Protection Act 2023 works proactively. To produce a convincing deepfake, a malicious actor needs raw material: high-quality photographs, videos, and voice samples. The DPDP Act strictly governs how companies (known as Data Fiduciaries) collect, store, and process such sensitive information.
Securing Biometric “Raw Material”
The Act's definition of personal data is broad enough to cover biometric identifiers such as facial features and voice patterns.
Compulsory Protections: Under Section 8(5), any platform that collects your biometric information, whether a social media platform, a banking application, or an employer, must implement strong, state-of-the-art security measures to protect against unauthorized access and information leakage.
Crippling Fines for Data Breaches
If a company’s database is hacked due to weak encryption or negligence, and your stolen photos are later used to train a malicious AI model, the company is fully liable for the consequences.
Hefty Fines: The Data Protection Board has the power to impose fines of up to 250 crore rupees on those who fail to protect their users’ data.
Proactive Defense: The threat of these penalties compels corporations to actively safeguard the very data elements that deepfake operators exploit.
The Right to Erasure (Section 12)
The DPDP Act gives people the ability to reclaim their digital footprint, effectively depriving AI scraping tools of historical data.
Consent Withdrawal: Section 12 establishes the Right to Correction and Erasure. If you revoke your consent, platforms must, by law, permanently delete your personal data from their servers.
Reducing Exposure: This right prevents your old or unwanted digital footprint from being harvested indefinitely to create synthetic media in the future.
Conclusion
As technology changes at a breakneck pace, these integrated frameworks help accountability keep up. The DPDP Act deprives malicious actors of the personal data needed to create synthetic media, while the amended IT Rules provide a fast takedown procedure once a breach occurs. Together they form the core of India's new deepfake laws and restore security to our online lives. To strategize the best approach to these strict compliance requirements, the professionals at CLS are available to help protect your business.

