As the global healthcare sector faces mounting challenges—from budget cuts to the explosive growth of digital data—an unlikely solution is emerging at the intersection of technology and policy: artificial intelligence (AI) and sophisticated data governance frameworks. In Missouri, for example, small-town and inner-city hospitals are bracing for the fallout from Medicaid cuts that could force many to close their doors. Yet, as Donn Rubin, president of the nonprofit BioSTL, told KMOX Radio in St. Louis on September 1, 2025, "AI could make healthcare delivery a lot leaner." Rubin believes that while major health systems may have the resources to implement AI, smaller providers in urban and rural areas are at risk of being left behind.
"They’re not going to have the resources and the expertise to be able to navigate this new world of AI on their own," Rubin said. In response, BioSTL is partnering with Washington University and others to establish a new nonprofit entity designed to help these under-resourced healthcare providers. This organization would work with small hospitals to identify their unique challenges, test AI-powered solutions, and then make those tools accessible to providers in both urban and rural settings. Rubin’s vision extends beyond Missouri: "St. Louis and Missouri have an opportunity to be national leaders and to be a magnet for innovation and capital and talent and businesses in this space," he said.
This local initiative is unfolding against a backdrop of sweeping changes in global healthcare data governance. According to a 2025 study published in PLoS ONE, the digital economy’s integration with healthcare is transforming how care is delivered, how data is managed, and how stakeholders interact. The research constructs a tripartite evolutionary game model involving healthcare data management authorities (DMAs), healthcare data operating departments (DODs), and data-related entities (DEs)—typically patients or data owners—within a triple principal-agent framework. The study analyzes the dynamic push-and-pull among these stakeholders, focusing on privacy, security, moral hazard, and the alignment of interests.
The results are telling: strategic instability often arises when data property rights are ambiguous and risk responsibilities are unevenly distributed. However, when certain policy thresholds are crossed—such as the introduction of strong incentive strategies by regulators—the system can stabilize into compliance-oriented equilibria. This means that with the right incentives and penalties, hospitals, data operators, and patients can align their interests to promote secure and efficient data sharing.
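In formal terms, each of the three groups gradually drifts toward whichever of its two strategies is paying better at the moment. The study's exact payoff functions are not reproduced here, but tripartite models of this kind are typically written as three coupled replicator equations of roughly the following form, where x, y, and z denote the shares of DMAs choosing strong incentives, DODs complying, and DEs authorizing data use:

```latex
% Generic form of the three coupled replicator equations; the study's
% specific payoff functions U are not reproduced here.
\[
\begin{aligned}
\dot{x} &= x(1-x)\,\bigl[U_{\mathrm{DMA}}^{\mathrm{strong}}(y,z)-U_{\mathrm{DMA}}^{\mathrm{weak}}(y,z)\bigr],\\
\dot{y} &= y(1-y)\,\bigl[U_{\mathrm{DOD}}^{\mathrm{comply}}(x,z)-U_{\mathrm{DOD}}^{\mathrm{violate}}(x,z)\bigr],\\
\dot{z} &= z(1-z)\,\bigl[U_{\mathrm{DE}}^{\mathrm{authorize}}(x,y)-U_{\mathrm{DE}}^{\mathrm{withhold}}(x,y)\bigr].
\end{aligned}
\]
```

The "policy threshold" is then the point at which every bracketed payoff gap stays positive once everyone cooperates: for instance, when the expected penalty plus subsidy facing a data operator outweighs its short-run gain from cutting corners, the all-compliant state becomes a stable resting point of the dynamic.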
This is not just theoretical. The World Health Organization’s Global Strategy on Digital Health 2020–2025, the European Union’s European Health Data Space (launched in 2022), Australia’s Digital Health Blueprint 2023–2033, and the U.S. Department of Health and Human Services’ Data Strategy 2023–2028 all underscore the importance of balancing innovation with ethical and legal safeguards. According to the PLoS ONE study, China’s digital economy reached a staggering 53.9 trillion yuan in 2023—nearly 43% of its GDP—thanks in part to major investments in healthcare data infrastructure since 2015.
Yet, even as digital health becomes a cornerstone of economic growth and patient care, significant hurdles remain. The study identifies persistent challenges, including mismatches in technical capability between medical institutions and data operators, unequal demand structures, and a lack of trust among stakeholders. These issues often result in "data silos," where valuable health data is trapped within individual organizations, limiting the potential for large-scale insights and innovation.
To address these barriers, the research advocates for a set of policy instruments: tiered dynamic regulation, revenue-risk linked distribution mechanisms, and collaborative governance ecosystems. For example, a dual "penalty-compensation" system can help offset the economic losses from data breaches, while financial incentives tied to compliance can reduce regulatory costs and encourage responsible behavior. The study’s model shows that when DMAs (regulators) adopt strong incentive strategies—such as substantial rewards for compliance and significant penalties for violations—compliance and data sharing increase dramatically.
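To make the arithmetic of such a dual mechanism concrete, here is a minimal sketch; the rates and the 60/40 split are invented for illustration and are not figures from the study.

```python
# Rough sketch of a dual "penalty-compensation" arrangement. The rates and
# the split below are hypothetical illustrations, not values from the study.

def settle_breach(breach_loss: float,
                  penalty_rate: float = 1.5,
                  compensation_share: float = 0.6) -> dict:
    """Levy a penalty proportional to the loss a breach caused, route most of
    it to the injured data entities as compensation, and keep the remainder
    to defray monitoring and enforcement costs."""
    penalty = penalty_rate * breach_loss
    to_data_entities = compensation_share * penalty
    retained_by_regulator = penalty - to_data_entities
    return {"penalty": round(penalty, 2),
            "compensation_to_data_entities": round(to_data_entities, 2),
            "retained_for_regulation": round(retained_by_regulator, 2)}

def compliance_rebate(security_spend: float, rebate_rate: float = 0.3) -> float:
    """Positive incentive: refund part of a compliant operator's security spend."""
    return round(rebate_rate * security_spend, 2)

print(settle_breach(breach_loss=1_000_000))       # 1.5M penalty, 900k compensates the injured parties
print(compliance_rebate(security_spend=200_000))  # 60k rebated to a compliant operator
```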
But what does this look like in practice? The study’s simulation results reveal that under low penalties, data operators are more likely to engage in risky or non-compliant behavior, leading to instability and poor outcomes. As penalties and financial rewards increase, however, the system quickly stabilizes, with hospitals and data operators choosing compliance and patients becoming more willing to authorize the use of their health data. This not only improves data security but also unlocks the potential for AI-driven tools—like those Rubin hopes to deploy in Missouri—to improve care delivery and efficiency.
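The study's simulations are far richer, but the basic effect can be reproduced with a toy version of the three-party dynamic. In the sketch below, the payoff numbers are placeholders chosen for illustration rather than the paper's parameters; the only point is qualitative, namely that a larger penalty flips the long-run outcome from violation and withheld consent to compliance and authorization.

```python
# Toy three-population replicator dynamic for the regulator (DMA),
# data operator (DOD), and data entity (DE) game. All payoff numbers are
# invented for illustration; they are NOT the parameters of the PLoS ONE study.

def simulate(penalty: float, steps: int = 40_000, dt: float = 0.01):
    # Shares of each population playing its cooperative strategy:
    # x = DMAs using strong incentives, y = DODs complying, z = DEs authorizing.
    x, y, z = 0.5, 0.5, 0.5

    # Hypothetical payoff parameters
    C_r, A, B_r = 1.0, 2.0, 1.5   # regulation cost, accountability loss avoided, credibility gain
    C_c, G, S   = 2.5, 3.0, 1.0   # compliance cost, gain from violating, compliance subsidy
    B, L, K     = 2.0, 4.0, 3.0   # patient benefit, breach loss, compensation under strong regulation

    for _ in range(steps):
        # Payoff advantage of each group's cooperative strategy over its alternative
        adv_dma = -C_r + (penalty + A) * (1 - y) + B_r * y * z
        adv_dod = x * (S + penalty) - C_c - z * G
        adv_de  = y * B + (1 - y) * (x * K - L)

        # Replicator updates (explicit Euler step)
        x += dt * x * (1 - x) * adv_dma
        y += dt * y * (1 - y) * adv_dod
        z += dt * z * (1 - z) * adv_de

    return round(x, 2), round(y, 2), round(z, 2)

for f in (1.0, 8.0):
    print(f"penalty={f}: (strong regulation, compliance, authorization) = {simulate(f)}")
```

In this toy run, the weak penalty leaves operators defecting and patients withholding authorization even though the regulator keeps pushing, while the strong penalty drives all three shares toward one—the same qualitative pattern the study reports.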
One of the most striking takeaways from the research is the importance of equitable revenue distribution. The study recommends that patients—the original data sources—should have a right to benefit from the use of their data, while hospitals and data operators should receive compensation that reflects their investments in data collection and management. Policymakers are urged to construct systems that combine positive incentives (like tax breaks or direct subsidies for compliance technology) with negative constraints (such as risk reserves and revenue-linked penalties for data breaches).
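As a back-of-the-envelope illustration of such a revenue-risk linked split (the percentages and reserve rate here are invented, not drawn from the study), a distribution rule might set aside a reserve to fund breach compensation before dividing the remainder among the patient, the hospital, and the data operator:

```python
# Hypothetical revenue-risk linked split of income from an authorized data use.
# The shares and reserve rate are invented; the study argues for the principle,
# not these specific numbers.

def distribute_revenue(revenue: float,
                       shares: dict | None = None,
                       risk_reserve_rate: float = 0.10) -> dict:
    """Set aside a risk reserve (to fund breach compensation), then split the
    remainder among the data entity, hospital, and data operator."""
    if shares is None:
        # Illustrative weights: patient as data source, hospital for
        # collection and curation costs, operator for platform investment.
        shares = {"patient": 0.20, "hospital": 0.45, "operator": 0.35}
    reserve = risk_reserve_rate * revenue
    distributable = revenue - reserve
    payout = {party: round(w * distributable, 2) for party, w in shares.items()}
    payout["risk_reserve"] = round(reserve, 2)
    return payout

print(distribute_revenue(100_000))
# {'patient': 18000.0, 'hospital': 40500.0, 'operator': 31500.0, 'risk_reserve': 10000.0}
```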
Trust, it turns out, is a critical ingredient. The study notes that patients are most likely to trust clinicians and healthcare organizations with their personal information, but this trust is fragile. Strengthening it requires not just technical safeguards, but also robust legal frameworks and transparent governance. The research points to the EU’s General Data Protection Regulation as a model, suggesting that penalties for mishandling sensitive health data should be tied to the severity of the breach and the organization’s revenue—a "floating benchmark" that increases with repeated violations.
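A floating benchmark of that kind could be as simple as the following sketch, in which the rates, cap, and escalation factor are invented for illustration; this is not GDPR's actual fine schedule, which instead caps administrative fines at a percentage of global turnover.

```python
# Illustration of a revenue-linked "floating benchmark" penalty. The rates,
# cap, and escalation factor are invented for this sketch.

def breach_penalty(annual_revenue: float,
                   severity: float,           # 0.0 (minor) .. 1.0 (severe)
                   prior_violations: int = 0,
                   base_rate: float = 0.01,   # 1% of revenue at moderate severity
                   escalation: float = 0.5,   # +50% per prior violation
                   cap_rate: float = 0.04) -> float:
    """Penalty grows with breach severity and repeat offenses, anchored to revenue."""
    raw = annual_revenue * base_rate * (0.5 + severity) * (1 + escalation * prior_violations)
    return round(min(raw, cap_rate * annual_revenue), 2)

# A first severe breach versus a third one at the same organization
print(breach_penalty(50_000_000, severity=0.9, prior_violations=0))  # 700000.0
print(breach_penalty(50_000_000, severity=0.9, prior_violations=2))  # 1400000.0
```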
As Missouri’s small hospitals grapple with existential threats from Medicaid cuts, the state’s push to become a hub for healthcare technology innovation may offer a lifeline. By leveraging AI and adopting cutting-edge data governance models, these providers could not only survive but thrive in a rapidly changing landscape. The lessons from China and other global leaders suggest that success depends on a careful balance of incentives, penalties, and trust-building measures that align the interests of regulators, providers, and patients alike.
For now, the future of healthcare in Missouri—and beyond—hinges on whether these ambitious frameworks can be implemented effectively. As Rubin and his partners at BioSTL and Washington University move forward, their efforts may well serve as a blueprint for other regions seeking to harness technology and policy to safeguard the health of their most vulnerable communities.