Concerns are mounting over Apple Inc.'s newly launched AI feature, Apple Intelligence, which aims to make device notifications more efficient. Released just last week, the feature has been overshadowed by a serious flaw: its notification summaries have spread misinformation. The journalism advocacy group Reporters Without Borders has demanded that Apple scrap the feature, citing the dangers of AI propagating false information.
The issues began when users received incorrect alerts about incidents involving high-profile individuals. One notification falsely claimed that the BBC had reported murder suspect Luigi Mangione had shot himself. The BBC complained to Apple, seeking clarification and resolution, though it did not disclose whether the company had responded.
Vincent Berthier, the head of technology at Reporters Without Borders, stressed the importance of reliability, stating, “Facts can't be decided by a roll of the dice.” He continued: “RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet’s credibility and a danger to the public’s right to reliable information on current affairs.”
But it wasn’t just the BBC. Other prominent outlets, including The New York Times, raised similar grievances after Apple Intelligence mangled summaries of their articles, in one case producing a headline claiming Israeli Prime Minister Benjamin Netanyahu had been arrested. These incidents highlight the uneasy fit between cutting-edge AI and the accuracy standards of traditional journalism.
Indeed, as Apple navigates this fraught terrain, the relationship between journalism and AI is becoming increasingly complex. Major newspapers are partnering with AI companies, hoping to use these technologies to streamline content production. For example, News Corp recently struck a deal with OpenAI, giving the company access to publications including The Wall Street Journal and The Times of London to improve AI models such as ChatGPT. The financial terms remain undisclosed, but sources suggest the deal could be worth around $250 million.
Sam Altman, CEO of OpenAI, highlighted the ambition behind this collaboration: “Together, we are setting the foundation for a future where AI deeply respects, enhances, and upholds the standards of world-class journalism.” Similar sentiments were echoed by Mathias Döpfner, CEO of Axel Springer, whose company specializes in digital content and news, emphasizing the exploration of AI’s potential to improve journalistic practices.
While many of these partnerships hold potential for enriching journalism, they also raise ethical questions. Concerns linger over job displacement, data privacy, and the potential for AI-generated content to undermine journalistic integrity. These challenges are underscored by the recent rise of AI-generated websites masquerading as reputable news sources: reports have detailed how entrepreneurs acquired defunct news sites and filled them with AI-generated content, often of dubious quality, to harvest ad revenue.
McKenzie Sadeghi of NewsGuard pointed out, “Over a thousand AI-generated websites were operating with little to no human oversight.” Many of these sites churn out clickbait articles on gossip, celebrity news, and politics, leaving readers to fall prey to misleading information. The growing prevalence of such misinformation has serious ramifications for public discourse and journalistic credibility.
The overarching question remains: will AI prove a boon or a threat to journalism? For now, the answer is uncertain. AI is a double-edged sword: it could bolster the industry by enhancing content delivery, or become yet another instrument of misinformation, eroding the principles of responsible journalism. The future of journalism may well hinge on how carefully and ethically AI technologies are developed and integrated.