The Indian Ministry of Finance has banned the use of AI tools like ChatGPT and DeepSeek on government devices to protect sensitive data. Issued on 29th January 2025, the directive aims to prevent potential security risks.
Signed by Joint Secretary Pradeep Kumar Singh, the notice cautions that AI-powered applications on office computers could compromise confidential information. The ministry has advised all employees to refrain from using such tools on official devices.
The circular, approved by the Finance Secretary, has been distributed to key government departments, including Revenue, Economic Affairs, Expenditure, Public Enterprises, DIPAM, and Financial Services.
This ban reflects global concerns over AI platforms handling sensitive data. Many AI models, including ChatGPT, process user inputs on external servers, raising risks of data leakage or unauthorized access.
Governments and corporations worldwide have implemented similar AI restrictions, with several private companies and global organizations limiting AI tool usage to prevent data exposure.
While the order bans AI tools on official devices, it does not clarify whether employees can use them on personal devices for work. This move reflects the government’s cautious approach to AI adoption, prioritizing data security over convenience.
As AI tools gain prominence in workplaces, it remains unclear whether the Indian government will introduce regulated AI usage policies in the future. For now, finance ministry officials must rely on traditional methods, at least on their office computers.
Why the ban?
The Indian Finance Ministry’s decision to ban AI tools on official devices stems from security and confidentiality concerns. Here are some of the reasons the government may have taken this step:
1. Risk of data leaks
AI models like ChatGPT and DeepSeek process user inputs on external servers, which means any sensitive government data entered into these tools could be stored, accessed, or even misused. Since government offices handle classified financial data, policy drafts, and internal communications, even unintentional sharing could pose risks.
2. Lack of control over AI models
Unlike traditional software used in government offices, AI tools are cloud-based and owned by private companies (such as OpenAI for ChatGPT). The government has no direct control over how these tools store or process information, increasing concerns about foreign access or cyber threats.
3. Compliance with data protection policies
India is working on stronger data privacy laws, including the Digital Personal Data Protection (DPDP) Act, 2023. Allowing AI tools on office devices without clear regulations could lead to violations of data protection policies, making government systems vulnerable.