The Unique Identification Authority of India (UIDAI) has successfully conducted a face authentication test during the National Eligibility cum Entrance Test (NEET UG) 2025 in Delhi, according to the Ministry of Electronics and Information Technology (MeitY). This initiative, in collaboration with the National Informatics Centre (NIC) and the National Testing Agency (NTA), aims to enhance exam security and candidate verification processes using advanced biometric technology.
The UIDAI said the tests evaluated the feasibility and effectiveness of face authentication in verifying candidates' identities, addressing criticisms stemming from the NEET UG 2024 paper leak controversy. MeitY reported that face authentication was conducted in real time against Aadhaar's biometric database, which streamlined the process and made it contactless. The test results reportedly demonstrated high accuracy and efficiency in verifying candidates, though the metrics used to determine this accuracy were not disclosed.
While the government has touted Aadhaar face authentication as secure, scalable, and student-friendly for identifying candidates in large-scale examinations, it has not addressed the potential privacy risks for students. Notably, the initiative applies facial recognition technology to students under the age of 18, and there is no mention of obtaining consent for identity verification, raising concerns about the right to privacy of Indian citizens.
In a related development, the National Medical Commission (NMC) transitioned from fingerprint-based biometric systems to face-based systems in all medical colleges and institutions, effective May 1, 2025. This new attendance marking system utilizes facial recognition and is integrated through the NMC Aadhaar-Enabled Biometric Attendance System (AEBAS) platform. Medical colleges are required to provide GPS coordinates for designated attendance zones within a 100-metre radius, facilitating geofenced attendance marking.
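The geofencing rule described above amounts to a great-circle distance check: an attendance ping is accepted only if the device lies within the designated radius of the registered GPS coordinates. The following is a minimal illustrative sketch of that check, not the actual AEBAS implementation; all function names and coordinates here are hypothetical.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two (latitude, longitude) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(device_lat: float, device_lon: float,
                    zone_lat: float, zone_lon: float,
                    radius_m: float = 100.0) -> bool:
    """Accept an attendance ping only if the device is within the zone radius."""
    return haversine_m(device_lat, device_lon, zone_lat, zone_lon) <= radius_m
```

A device roughly 40 metres from the registered zone centre would pass the check, while one a kilometre away would not; in practice a real system would also have to account for GPS accuracy and spoofing, issues the regulation does not address.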
These biometric systems have sparked significant debate, as many argue that they fundamentally violate individuals' right to privacy. Critics point out that there are no regulations to supervise the use of these technologies or to address potential data breaches.
As India continues to embrace digital transformation, the introduction of the Digital Personal Data Protection (DPDP) Act in 2023 has aimed to address the growing concerns around personal data protection in the age of artificial intelligence (AI). The DPDP Act seeks to safeguard individual privacy by promoting transparency, ensuring accountability in data handling, and giving citizens greater control over their personal information.
While the DPDP Act lays down regulatory criteria for AI model developers and deployers, it also acknowledges the need to balance innovation with strong privacy protections. This marks a significant step in regulating emerging technologies like AI within a responsible and secure digital ecosystem.
India's digital transformation era, driven by initiatives like Digital India, has embedded technology into everyday life, revolutionizing sectors such as payments, transport, and identity verification. Platforms like BHIM, Paytm, FASTag, and biometric-based KYC have improved service delivery and public convenience.
In parallel, the Indian government has launched AI-focused initiatives such as the National Strategy for Artificial Intelligence by NITI Aayog, the IndiaAI Mission under MeitY, and Centers of Excellence (CoEs) to advance AI research and deployment in key sectors. Tools like SUPACE highlight the government's commitment to leveraging AI for public good, particularly in the judiciary.
However, these advancements come with significant challenges regarding data privacy, security, and ethical AI use. The enormous amounts of personal data generated, including names, biometric details, and financial information, are vulnerable to breaches and misuse. This underscores the urgent need for a robust data protection framework to safeguard individual rights in the age of AI and big data.
While the European Union's AI Act has garnered global attention, India is taking its own approach: addressing emerging challenges such as the proliferation of deepfakes while avoiding rules that could hinder the growth of promising startups. The DPDP Act does not explicitly mention AI, but its interconnection with AI is evident, as key definitions in the Act are open to interpretation.
For instance, Section 2(b) of the Act defines ‘Automated’ processes as those that operate without human input once initiated, implicitly including AI systems that engage in decision-making or predictions. Section 2(s)(vii) introduces the concept of an ‘artificial juristic person,’ legally recognizing non-human entities such as AI-driven companies, thus enabling accountability under the Act.
However, several ambiguities remain unaddressed, particularly regarding the intersection of AI and the DPDP Act. Concerns include the public interest loopholes in Section 7, which allows data processing without consent, and the accountability gaps surrounding automated decision-making. For example, Aadhaar-linked biometric failures in welfare schemes have disproportionately impacted marginalized groups, raising issues under the right to equality.
Furthermore, restrictions on cross-border data transfers and the push for data localization hinder global AI collaboration and raise compliance costs for companies. The complexities surrounding AI systems often make it challenging for users to provide informed consent, undermining the core principle of the DPDP Act.
To address these gaps, the upcoming Digital India Act (DIA) is poised to introduce robust mechanisms to govern AI, ensuring platform accountability and safeguarding user rights. The DIA aims to complement the DPDP Act by embedding transparency and accountability into the digital ecosystem.
This act is expected to include risk-based classification of AI systems, enhanced duties for digital intermediaries, regulation of synthetic and AI-generated media, and regulatory sandboxes to foster innovation. By balancing technological progress with ethical and legal safeguards, the DIA seeks to ensure that India can lead globally in AI while maintaining strong protections for individual rights.
In summary, as India navigates the complexities of digital transformation and AI integration, it is essential to establish a comprehensive regulatory framework that addresses accountability, bias, and surveillance. This framework must align with constitutional values and promote sustainable innovation, ensuring that the benefits of technology are realized without compromising individual rights.