Meta, the parent company of Facebook, Instagram, and Threads, is set to discontinue its fact-checking program in the United States on Monday, April 7, 2025. The policy shift was announced by Joel Kaplan, Meta's Chief Global Affairs Officer, on Friday, April 4, 2025. Kaplan stated that no new fact checks would be issued and that the company would sever ties with its existing fact-checking partners in the U.S.
In a post on X, Kaplan confirmed, "By Monday afternoon, our fact-checking programme in the US will be officially over. That means no new fact checks and no fact checkers. We announced in January we’d be winding down the programme & removing penalties." This change marks a major pivot for Meta, which has been under scrutiny for its content moderation practices, especially in the wake of political shifts in the U.S.
The decision to end the fact-checking program is part of a broader loosening of content moderation rules first announced in January 2025. At the time, Meta founder and CEO Mark Zuckerberg said the recent elections felt like a cultural tipping point toward prioritizing free speech. In a video titled "More speech and fewer mistakes," Zuckerberg laid out the changes and emphasized a new approach to content moderation.
In place of traditional fact-checks, Meta plans to implement a system called Community Notes, modeled after a similar initiative on Elon Musk's social media platform, X. This community-based approach shifts the responsibility of moderation to users rather than relying on paid professionals. Kaplan noted that the first Community Notes would begin appearing gradually across Facebook, Threads, and Instagram, with no penalties attached.
While the concept of Community Notes aims to provide context for potentially misleading or controversial posts, critics have raised concerns about the effectiveness of this model. Kaplan's statement that "we’re getting rid of a number of restrictions on topics like immigration, gender identity and gender" indicates a deliberate move to allow more controversial discussions on these subjects, which have been hotly debated in recent years.
As Meta rolls back its fact-checking efforts, there are already signs of misinformation spreading across its platforms. A Facebook page manager recently shared a viral, false claim that U.S. Immigration and Customs Enforcement (ICE) would pay people $750 to report undocumented immigrants, highlighting the potential consequences of reduced oversight.
Since 2016, Meta has collaborated with more than 90 fact-checking organizations globally, which reviewed flagged content and rated its accuracy. These organizations, including PolitiFact and FactCheck.org, played a crucial role in identifying false information and limiting its reach on Meta's platforms. When a fact-checker rated content as false, Meta would significantly reduce its visibility so that it reached a smaller audience. Only Meta itself, however, had the authority to delete content that violated its Community Standards.
The implications of this policy change extend beyond just Meta's platforms. Following the announcement, the Canada-based tech company Telus laid off 2,000 workers from its content moderation center in Barcelona after Meta severed its contract with them. Local unions confirmed that employees were placed on gardening leave, indicating the ripple effects of Meta's decision.
The move to Community Notes has drawn comparisons to X's approach, which launched as a pilot in 2021 and gained traction in 2023. On X, Community Notes allow users to add context to potentially misleading posts. Eligibility to contribute is restricted, however: an account must be at least six months old, have a verified phone number, and have no rule violations since January 2023.
Despite the potential for community engagement, the effectiveness of such a system remains to be seen. Critics argue that relying on users to moderate content may lead to inconsistencies and may not adequately address the spread of misinformation. Meta's decision to prioritize user-generated content moderation over professional fact-checking raises questions about the future of information accuracy on its platforms.
As Meta embarks on this new chapter without a dedicated fact-checking program, the consequences for users, advertisers, and society at large remain uncertain. The company's focus on increasing user engagement may come at the expense of truthfulness and accountability. With the changing landscape of social media and the increasing prevalence of misinformation, Meta's new approach will undoubtedly be closely monitored by users and critics alike.
The discontinuation of Meta's fact-checking program marks a major shift in how the company handles content moderation. As its platforms prepare to roll out Community Notes, the effectiveness and implications of this user-driven model will be scrutinized in the coming weeks and months.