BOSTON and TEL AVIV, April 16, 2025 – MineOS, a data privacy operations and AI-based risk management company, has released an addition to its MineOS platform: the MineOS AI Agent. MineOS claims it is the first AI-powered agent that builds Records of Processing Activities (RoPAs), detects data risks, and delivers privacy insights. The AI Agent is purpose-built to help organizations meet privacy compliance requirements and align with regulatory standards.
Privacy, compliance, and legal teams are grappling with labor-intensive regulatory mandates, such as the EU General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the emerging EU AI Act. Building a RoPA and keeping pace with regulatory developments is essential yet time-consuming work that has traditionally consumed the majority of teams’ resources.
MineOS’s AI Agent is the first solution to fully automate RoPA creation and maintenance – turning hours of manual documentation into instant, audit-ready records at the click of a button. The MineOS AI Agent provides built-in risk detection by analyzing real data systems to deliver actionable, risk-prioritized insights, including surfacing misclassified data, untagged sensitive records, systems without proper governance, and more.
The AI Agent features an on-demand privacy advisor that provides fast, precise answers to regulatory questions, operational issues, and internal policy decisions based on external frameworks and internal privacy programs. “RoPA is one of the most critical components of privacy compliance – and one of the most time-consuming,” said Gal Ringel, Co-founder and CEO of MineOS. “With the MineOS AI Agent, privacy teams can finally automate this process end-to-end. Our AI Agent is not just a chatbot or a search bar; it’s a true AI assistant that builds your RoPA, flags risks, and answers complex questions on the spot. This is what real privacy automation looks like – it’s the new standard.”
MineOS’s agent combines automation, intelligence, and real-time context to help privacy teams work smarter, act faster, and stay ahead of risk. MineOS delivers the only fully embedded, task-driven AI agent purpose-built for privacy. Gal Ringel will join industry leaders in Washington, D.C. at the IAPP Global Privacy Summit on Wednesday, April 23 at 2:30 pm ET, to share insights on aligning privacy and security strategies, resolving friction, and building stronger cross-functional collaboration to protect data more effectively.
In the digital age, privacy preservation is of paramount importance when processing sensitive health-related information. A recent study explores the integration of Federated Learning (FL) and Differential Privacy (DP) for breast cancer detection, leveraging FL’s decentralized architecture to enable collaborative model training across healthcare organizations without exposing raw patient data.
To enhance privacy, DP injects statistical noise into the model’s updates, which helps mitigate adversarial attacks and reduce data leakage. The proposed work uses the Breast Cancer Wisconsin Diagnostic dataset to address critical challenges such as data heterogeneity, privacy-accuracy trade-offs, and computational overhead.
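As a rough illustration of the mechanism described above (a minimal sketch, not the study’s actual implementation), a client could clip each model update to bound its sensitivity and then add Gaussian noise calibrated to the privacy budget before sharing it. The function names, clipping norm, and delta value below are assumptions chosen purely for illustration.

```python
import numpy as np

def clip_update(update, clip_norm=1.0):
    """Clip an update vector to a maximum L2 norm so its sensitivity is bounded."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def add_gaussian_noise(update, clip_norm=1.0, epsilon=1.9, delta=1e-5):
    """Add Gaussian noise calibrated to the clipping norm and an (epsilon, delta) budget."""
    # Textbook Gaussian-mechanism calibration; real deployments typically use
    # tighter privacy accounting (e.g., a moments accountant) over many rounds.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return update + np.random.normal(0.0, sigma, size=update.shape)

# Example: privatize a single client's weight update before it leaves the hospital.
raw_update = np.random.randn(10)  # stand-in for a locally computed model update
private_update = add_gaussian_noise(clip_update(raw_update), epsilon=1.9)
```

A smaller epsilon means more noise and stronger privacy at the cost of accuracy, which is exactly the privacy-accuracy trade-off the study quantifies.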
In the experiments, FL combined with DP achieved 96.1% accuracy with a privacy budget of ε = 1.9, providing strong privacy preservation with minimal performance trade-off. In comparison, the traditional non-FL model achieved 96.0% accuracy, but at the cost of centralized data storage, which poses significant privacy risks.
These findings validate the feasibility of privacy-preserving artificial intelligence models in real-world clinical applications, effectively balancing data protection with reliable medical predictions. Artificial Intelligence (AI) has revolutionized numerous industries, with healthcare among the most promising sectors for AI implementation. AI models have proven highly effective at improving diagnostic accuracy, optimizing treatment plans, and advancing medical research.
However, traditional AI models require centralized data storage, where patient information is aggregated in a single repository. This centralized approach raises concerns about patient privacy, security risks, and regulatory compliance, especially in healthcare, where sensitive medical data is involved. With the growing emphasis on data security, there is a clear need for decentralized learning approaches that support collaborative AI training while ensuring data confidentiality.
Federated Learning (FL) and Differential Privacy (DP) are two such approaches that address these challenges by enabling the development of AI that preserves privacy in healthcare. FL allows multiple medical institutions to collaboratively train a shared AI model without transferring raw patient data to a central server, thereby mitigating the risk of data breaches.
Instead, model updates are exchanged and aggregated, so patient data stays within each hospital’s local environment. Think of Differential Privacy as adding a slight blur to a picture: just enough to obscure distinguishing details while preserving the overall pattern. DP achieves this by adding statistical noise to model updates, making it difficult to identify any individual patient’s data.
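A minimal sketch of one federated round might look as follows. The toy linear model, learning rate, and helper names are stand-ins chosen for illustration rather than the architecture used in the study; the point is that only computed updates, never raw patient records, leave each institution.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One local gradient step on an institution's private data (toy linear model)."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)  # mean-squared-error gradient
    return -lr * grad                               # only this update is shared

def federated_round(global_weights, client_datasets):
    """Server-side aggregation: average the clients' updates and apply them."""
    updates = [local_update(global_weights, data) for data in client_datasets]
    return global_weights + np.mean(updates, axis=0)

# Toy simulation with three institutions, each keeping its data local.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(3)]
weights = np.zeros(5)
for _ in range(20):
    weights = federated_round(weights, clients)
```

In a privacy-preserving variant, each update would additionally be clipped and noised, as in the earlier DP sketch, before the server averages it.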
By combining FL and DP, it is possible to develop robust AI models that respect patient privacy while maintaining predictive accuracy, making them well-suited for regulatory-compliant healthcare applications. The increasing digitization of healthcare has led to a massive accumulation of patient-related data, including electronic health records (EHRs), diagnostic imaging, genomic data, and real-time monitoring from wearable devices.
This data-driven transformation has paved the way for predictive models, personalized treatment plans, and data-driven decision-making. However, with these advancements come significant challenges related to data privacy and security. Stringent regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in the European Union impose strict constraints on the sharing of medical data.
Traditional centralized AI models often run afoul of these regulations because they require data transfer across institutions, increasing privacy risks. In contrast, the FL-DP combination provides a legally compliant alternative that allows institutions to develop AI models collaboratively without violating data protection laws.
FL has emerged as a preferred approach to collaborative AI training that upholds stringent privacy requirements. In contrast to standard centralized machine learning, which relies on aggregating raw data, FL ensures that no institution shares its local dataset; only encrypted model updates are sent to a central aggregator.
FL not only increases security but also reduces the risk of large-scale data breaches. It also addresses core challenges in healthcare AI deployment, such as data heterogeneity, security loopholes, and high communication overhead. Nonetheless, promising as it is, FL alone does not guarantee complete data protection.
Model updates remain vulnerable to model inversion and other adversarial attacks, so additional privacy-preserving measures such as DP must be combined with FL. The primary contribution of this study is a privacy-preserving FL framework for breast cancer detection that incorporates DP to ensure secure, decentralized AI training.
In summary, the integration of Federated Learning and Differential Privacy presents a promising pathway for enhancing privacy in healthcare AI applications while maintaining high diagnostic accuracy, particularly in sensitive areas such as breast cancer detection.