Apple has quietly updated its privacy policy to allow the company to use data from crash reports, including logs and diagnostic files, to train its AI models, without offering users an opt-out. The change directly affects developers and beta testers who use the Feedback app: when submitting a crash report in the iOS 18.5 beta, users must consent to Apple using the content, including sysdiagnose attachments, to train Apple Intelligence models. The only way to avoid this data use is to not file a report at all.
The Feedback app's updated privacy notice states: "Apple may use the content you submit to improve Apple products and services, such as training Apple Intelligence models and other machine learning models." The change has raised eyebrows among developers; one of them, Joachim, appears to have been the first to spot it and shared his findings on social media, criticizing Apple for altering the terms without clear notification or an opt-out option.
Many in the developer community have echoed Joachim's concerns, viewing the change as a privacy violation despite Apple's stated safeguards. Apple says its AI training employs differential privacy, a technique that injects statistical noise into data so that individual users' information cannot be recovered, and already applies it to features such as Genmoji and Image Playground. Privacy advocates counter that conditioning bug reporting on this consent, with no alternative, is unacceptable.
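Apple has not published the exact mechanics of its pipeline here, but the core idea behind differential privacy can be illustrated in a few lines. The Python sketch below (purely illustrative, not Apple's implementation) shows the classic Laplace mechanism: noise scaled to a query's sensitivity and a privacy budget, epsilon, is added to an aggregate statistic, so the published number stays useful while any single user's contribution is masked.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private version of an aggregate statistic.

    The Laplace noise scale grows with the query's sensitivity (how much
    one user can change the result) and shrinks as epsilon grows, so a
    smaller epsilon means stronger privacy but a noisier answer.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: publish how many devices hit a given crash
# without revealing whether any particular device is in the count.
true_crash_count = 1204
noisy_count = laplace_mechanism(true_crash_count, sensitivity=1.0, epsilon=0.5)
print(f"Reported crash count: {noisy_count:.0f}")
```

In practice Apple has described using local differential privacy, where the noise is added on the device before anything is sent, but the trade-off is the same: the more noise, the less any individual report can reveal.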
Developers, who are often the first to identify and report system bugs, now face a difficult choice: help Apple improve iOS or protect their own data. Users can opt out of Apple Intelligence training more broadly under Settings > Privacy & Security > Analytics & Improvements by disabling "Share iPhone & Watch Analytics," but that toggle does not stop the contents of a crash report from being used once it is submitted through the Feedback app.
The new policy marks a significant expansion of how Apple handles diagnostic data. For now, anyone who participates in Apple's beta program and files a bug report must accept that the data will be used to train AI; consent is a condition of submission. Apple has yet to respond publicly to the wave of criticism or to confirm whether future updates will add an opt-out option.
In another development, OpenAI has released its latest models, o3 and o4-mini, which can turn ChatGPT into a powerful geolocation tool. According to TechCrunch, this "reverse location lookup" capability has become a viral trend on social media, and it is also raising privacy concerns.
Geolocation experts are uneasy about the models' new abilities: they can reason over image content and manipulate pictures, cropping, rotating, and zooming, as part of that reasoning. More importantly, they can infer where a photo was taken, which has spawned a trend of users asking ChatGPT to play GeoGuessr with images they supply.
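To see why this worries experts, consider how little effort the trick takes. The snippet below is a minimal sketch using OpenAI's Python SDK; the model choice, prompt, and image URL are illustrative assumptions, not a reproduction of any viral post.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask a vision-capable model to estimate where a photo was taken.
response = client.chat.completions.create(
    model="o3",  # illustrative; any vision-capable model accepts image input
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Play GeoGuessr: where was this photo most likely taken, and why?",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/street-photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

A single ordinary photo, with street signage, vegetation, or architectural cues, can be enough for a model to reason its way to a city or even a specific block.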
Brendan Jowett, an AI enthusiast on YouTube and Twitter, shared an example of ChatGPT's impressive location-guessing skills, but cautioned that this de facto "reverse image search" poses privacy risks, particularly doxing: the malicious publication of someone's private information, such as a home address, which can expose victims to serious harm.
TechCrunch noted that while the GeoGuessr-style capability is not entirely new to ChatGPT, awareness of it has grown sharply. And although o3 is touted as especially effective at reverse location searches, GPT-4o, which lacks image reasoning, can still occasionally land on the right answer. That said, ChatGPT's geolocation guesses are far from infallible.
OpenAI has confirmed that it is working to strengthen its tools and is training its models to refuse requests involving private or sensitive information. The company said it will take action when it finds evidence that its usage policies are being violated.
Meanwhile, a U.S. appeals court has revived a proposed data privacy class action lawsuit against Shopify, a Canadian e-commerce company. The decision by the U.S. 9th Circuit Court of Appeals in San Francisco could make it easier for U.S. courts to assert jurisdiction over internet-based platforms. The ruling, issued on April 21, 2025, was a 10-1 decision.
The lawsuit was filed by Brandon Briskin, a California resident, who alleges that Shopify installed tracking code known as cookies on his iPhone without his consent when he bought sports gear from the retailer I Am Becoming. Briskin claims Shopify used his data to build a profile it could sell to other merchants.
Shopify contended that it should not be sued in California because it operates nationwide and did not aim its conduct specifically at that state, arguing that Briskin could instead sue in Delaware, New York, or Canada. The full appeals court, sitting en banc, disagreed.
The court stated that Shopify had "expressly aimed" its conduct at California. Judge Kim McLane Wardlaw, writing for the majority, noted that Shopify intentionally reached out by installing tracking software on the phones of unsuspecting California residents to sell the collected data later. This ruling could have significant implications for how data privacy issues are handled in the digital age.
As data privacy continues to be a hot topic, these developments from Apple, OpenAI, and the recent court ruling against Shopify highlight the ongoing struggle between technological advancement and the protection of personal information. The landscape of data privacy is evolving, and stakeholders from all sides must navigate these changes carefully.