On April 12, 2025, POLITICO reported that Ireland's Data Protection Commission (DPC), which acts as the lead supervisory authority for the European Union (EU) because X's European headquarters are in Dublin, had launched an investigation into the social media platform X's use of personal data to train its Grok chatbot. The inquiry examines whether X, owned by billionaire Elon Musk, is lawfully processing personal data to train the chatbot's underlying large language model (LLM). Last year, after X began harvesting personal data from users' public posts, the DPC brought court proceedings alleging privacy violations, after which X agreed to stop using EU citizens' data to train its AI models.
The case highlights the ongoing challenge policymakers in many countries face in protecting citizens' data on online platforms. The EU is widely regarded as having some of the world's strictest rules governing AI: in August 2024, the EU AI Act, the world's first comprehensive AI law, came into force, imposing rigorous obligations on companies developing AI models. Earlier, in March 2024, the EU had opened investigations into major tech companies, including Apple, Meta, and Google, for suspected non-compliance with the Digital Markets Act (DMA), which regulates the market power of large "gatekeeper" platforms.
Data protection has become a pressing issue as companies rely on ever-larger datasets to train AI systems, a process that can involve collecting and processing enormous amounts of data from a wide range of sources. Google, for instance, faced a class-action lawsuit in 2024 over allegations that it collected user data through Chrome without consent. Similarly, OpenAI has faced copyright infringement claims over its use of journalists' and authors' articles to train ChatGPT. In a contrasting case, Apple refused in February 2025 to give the UK government access to users' encrypted data, withdrawing its Advanced Data Protection feature from the UK rather than comply with Home Office demands made under national security laws.
On April 9, 2025, the EU announced that it is considering adjustments to its AI rules to help European businesses compete with rivals in the US and China, including a possible simplification of the AI Act. Data published by the European Commission in January 2024 indicated that the EU lags behind other regions, notably the US, in AI innovation and investment, and observers argue that over-regulation and administrative burdens are prompting tech companies to relocate elsewhere.
US Vice President JD Vance warned in February 2025 that Europe's stringent regulations could stifle the AI industry by hindering innovation and competitiveness.
In a related discourse, the Holy See has also weighed in on the governance of AI. According to Vatican News, the Holy See recognizes advances in information technology as a means of fostering global development across economic, social, and governance spheres, while also pointing to stark disparities in access to emerging technologies, particularly in poorer nations that lack the necessary infrastructure and resources.
Archbishop Ettore Balestrero, the Holy See's Permanent Observer to the United Nations in Geneva, stressed the importance of not propagating the misconception that technology can solve every problem, cautioning against a 'technocratic model' that could undermine human dignity, fraternity, and social justice. The Holy See identifies risks in the commercialization of education, adverse effects on the labor market, the virtualization of human relationships, the spread of fake news, and serious privacy violations.
While reaffirming the need to govern AI, the Holy See acknowledges both its vast opportunities and its accompanying risks, particularly the ethical implications of AI applications remaining concentrated in the hands of a few corporations. It expressed support for drafting an assessment document on AI, as proposed in the UN's Global Digital Compact, viewing this as a positive step toward a balanced response to the challenges AI poses.
Archbishop Balestrero emphasized that ethical standards must ensure AI serves progress, that those responsible for its use must be held legally accountable, and that appropriate measures must safeguard transparency, privacy, and accountability.
The convergence of these discussions—ranging from regulatory investigations in the EU to ethical considerations from the Holy See—illustrates a growing global awareness of the complexities surrounding AI. As nations and organizations grapple with the implications of AI technologies, the balance between innovation and ethical responsibility remains a critical focus.
Ultimately, the ongoing dialogue about AI regulation and ethical standards reflects a broader societal concern about the future of technology and its impact on human rights and dignity. The EU's regulatory framework and the Holy See's moral guidance may serve as reference points for other regions as they navigate the challenges and opportunities presented by artificial intelligence.