
A recent report has sparked concern across India, highlighting the potential misuse of advanced AI tools like ChatGPT for generating fake Aadhaar cards. While the claim itself may sound alarming, OpenAI has categorically denied that its models can, or are permitted to, generate authentic government-issued documents, including India’s Aadhaar identification card. However, the incident shines a light on a broader, urgent issue: the rising misuse of AI tools for illegal or unethical purposes.
Misleading Claims on Social Media
The controversy started when several videos and posts surfaced online, primarily on YouTube and Instagram, claiming that ChatGPT, particularly through tools like “ChatGPT Studio,” could be prompted to generate Aadhaar cards with realistic-looking names, numbers, and QR codes. These videos gained significant traction, drawing thousands of views and comments, many from users asking for instructions or claiming that the fake documents “worked” when casually presented as ID.
Cybersecurity experts were quick to respond. “Even if these aren’t fully functional Aadhaar cards, their ability to mimic real ones can aid in social engineering, scams, or phishing attacks,” said a senior analyst from the Indian Computer Emergency Response Team (CERT-In).
OpenAI’s Clarification
OpenAI has responded to these concerns, reiterating that its models do not access or retrieve personal data unless a user explicitly shares it. The company says it has built-in safeguards to prevent its models from generating or replicating official documents, and any attempt to generate personally identifiable information (PII) or imitate government documentation violates OpenAI’s usage policy.
An OpenAI spokesperson said, “We take these reports seriously. If users are found attempting to misuse our technology for creating fraudulent content, we investigate and may suspend access to our tools.”
A Larger Pattern of AI Misuse
The Aadhaar incident is not isolated. Similar misuses of AI have been documented globally. Research by Check Point Research (CPR) has shown how large language models (LLMs) like ChatGPT can be manipulated to generate phishing emails, malicious code, and even entire infection flows for cyberattacks.
Earlier this year, the UK’s National Cyber Security Centre (NCSC) warned that generative AI could be weaponized by cybercriminals to create deepfakes and forge identities for fraudulent purposes. India, with its massive digital identity ecosystem, is particularly vulnerable.
Government Response
India’s Ministry of Electronics and Information Technology (MeitY) has taken note of the situation. According to sources, the ministry is in consultation with cybersecurity experts and legal advisors to determine the appropriate response. There are growing calls for stronger AI regulation and digital literacy campaigns to prevent citizens from falling prey to AI-generated fraud.
Rajeev Chandrasekhar, Union Minister of State for Electronics and IT, previously commented on the importance of creating a robust legal framework to manage AI tools, especially those with the potential to manipulate or deceive.
Public Caution and Digital Literacy
This situation serves as a reminder that while AI offers numerous benefits, from virtual assistants to medical research, it can also be exploited when left unchecked. Users must approach claims made online with skepticism, especially when they involve sensitive information such as government documents or banking credentials.
Security experts recommend:
- Never sharing personal information with unverified platforms or tools.
- Reporting content that promotes the use of AI for illegal activities.
- Staying updated on government advisories regarding digital security.
While ChatGPT and similar AI tools do not inherently pose a threat, the way they are used, and in some cases misused, depends on the intent of the user. As AI continues to evolve, so must our ethical understanding, legal boundaries, and public awareness.
For now, one thing is clear: ChatGPT cannot and should not be used to generate Aadhaar cards. Any claims suggesting otherwise are not just false—they are dangerous.