Technology
16 April 2025

Meta Faces Scrutiny Over Data Practices Seven Years After Zuckerberg's Testimony

The tech giant resumes using public data in the EU amid ongoing privacy concerns and legal challenges.

In April 2018, Mark Zuckerberg, founder and chief executive officer of Facebook (now Meta), appeared before a joint hearing of the U.S. Senate Judiciary and Commerce committees, in one of the most closely watched and influential hearings in Silicon Valley's history. The hearing was prompted by the Cambridge Analytica scandal, in which the data of approximately 87 million users was improperly obtained and, according to reports, used to influence the 2016 U.S. presidential election.

At the time, the U.S. Congress was not yet prepared for a world in which public opinion could be manipulated through fake news, and the general public had little sense of how deeply technology companies shape politics, society, and individual rights. Zuckerberg's testimony was a pivotal moment, as he acknowledged the responsibility that comes with running such a powerful platform. "We need some regulation... but we have to be careful about doing this," he said, highlighting the need for a balanced approach to oversight.

Fast forward seven years, and the questions surrounding Zuckerberg's accountability remain largely unchanged. Have the company's policies evolved enough to protect users and their data? Or are the promises made during that historic testimony merely ink on paper?

During his extensive testimony, which lasted for hours and produced more than 600 pages of questions and answers, Zuckerberg made several clear commitments: to conduct a comprehensive review of all applications that had access to large amounts of data before 2014, to work with U.S. and U.K. authorities to hold violators accountable, to inform affected users of any leaks or misuse of their data, and to limit developers' access to user data. He went so far as to declare that "advertisers and developers will not be prioritized over users as long as I run Facebook."

The reality seven years later, however, paints a different picture. While Facebook has made some technical adjustments to its data-sharing practices, its fundamental business model remains largely unchanged and has only grown more complex. Meta has never published the results of the app audits it promised, and it continues to expand its data-collection tools across WhatsApp, Instagram, and its other applications.

When asked directly in 2018 whether he would change the default privacy settings to reduce data collection, Zuckerberg avoided a straightforward answer, saying only that the issue was "complex." The response was revealing: it suggested the company may not have fully embraced the changes it promised.

Moreover, a significant recent development has raised concerns about the integrity of information shared on Meta's platforms. The company announced it would discontinue its fact-checking program, opting instead for a "community notes" system similar to the one Elon Musk adopted on X (formerly Twitter). The change opens the door to misinformation and marks a step backward in the fight against fake news.

The 2018 hearing was not just a media moment; it was a critical juncture in the discussion surrounding digital privacy. After seven years, it is evident that the issue transcends data protection to encompass the integrity of information, the balance between free expression and truth, and the accountability of tech companies wielding unprecedented power.

As Zuckerberg stated in 2018, "We need some regulation... but we have to be careful about doing this." However, Meta has used this warning as a shield to evade any substantial regulation, continuing to lobby against legislation that would require greater transparency.

The question remains: Who watches the watchmen? Between the promises of reform in 2018 and the discontinuation of fact-checking in 2025, it is clear that Meta has not learned its lesson; rather, it has intensified practices that threaten the global information ecosystem. Privacy has become a luxury, truth a relative concept, and users merely variables in an advertising equation.

In a related development, on April 15, 2025, Meta announced it would resume using public posts and comments from users in the European Union to train its artificial intelligence models, ending a year-long pause prompted by privacy concerns. On its official blog, the company said it will increasingly draw on user interactions with the Meta AI chatbot to improve the assistant, while emphasizing that it does not use private messages or non-public content in its training operations.

Meta justifies the move by pointing to major American companies such as Google and OpenAI, which also rely on public data to train language models. But the company faces mounting legal pressure: strict EU data protection laws, which require companies to respect individuals' rights to control how their personal data is used, have already derailed its earlier plans.

Several European privacy organizations have filed formal complaints with national supervisory authorities, leading to investigations into Meta's data collection and usage practices. This scrutiny highlights the ongoing tension between technological advancement and the need for robust privacy protections.

As the debate unfolds, the overarching questions remain: Who truly controls the data? Who decides what appears on our screens? And can democracy thrive in an environment where information is tailored by hidden algorithms? In 2018, Zuckerberg dominated the headlines, but the issue at hand is far greater than any individual or company; it represents a pivotal moment in our relationship with technology. Unless the rules of the game change, we may find ourselves facing new iterations of the Cambridge Analytica scandal, under names and in forms that are more sophisticated and harder to detect.