The AI Push in Health Care is Deepening Medicine’s Trust Crisis
As artificial intelligence (AI) becomes increasingly integrated into health care, concerns about trust in the medical system are rising. The rapid adoption of AI technologies, often without thorough testing, is exacerbating existing mistrust among patients and communities.

The Decline of Trust in Health Care

Trust in the U.S. health care system has been on a downward trajectory, a trend that worsened during the Covid-19 pandemic. A national survey conducted between 2020 and 2024 revealed a staggering drop in trust in physicians and hospitals, with confidence levels plummeting from 72% to 40%. This decline affected various demographic groups, but it was particularly pronounced among Black, Latine, and Indigenous communities, who face a legacy of medical racism.

The Impact of Distrust on Patient Care

Research indicates that patients who lack trust in their health care providers are more likely to delay necessary care, including preventive screenings, and to discontinue medications. These behaviors can lead to higher rates of hospitalization and premature death. The introduction of AI into health care is compounding these issues.

AI’s Documented Harms

AI systems have shown significant flaws that directly affect patient care. For instance, a widely used algorithm underestimated the severity of illness in Black patients because it relied on medical expenses as a proxy for health. This algorithm affected approximately 200 million Americans, often without their knowledge. Additionally, AI tools used by Medicare Advantage insurers contributed to a doubling of denial rates for elderly patients. Many of those denials were overturned on appeal, yet fewer than 1% of patients ever appealed.

The Financial Push for AI in Health Care

The health care sector, which accounted for $5.3 trillion, or 18% of U.S. GDP, in 2024, is a major target for AI companies. In 2025, U.S. health organizations invested $1.4 billion in AI tools, nearly three times the previous year's spending. These tools are employed for functions ranging from analyzing medical images to automating billing. The data generated from electronic health records, insurance claims, and diagnostic images is invaluable to AI companies and is often collected without sufficient transparency or patient consent.

Public Perception of AI in Health Care

A February 2025 study found that 66% of Americans reported low trust in their health care systems to use AI responsibly. Furthermore, 58% expressed doubt that their health care providers would ensure AI tools would not cause harm. Notably, neither knowledge about AI nor health literacy influenced these perceptions; the primary factor was existing trust in the health care system.

The Need for Transparency

Patients overwhelmingly want to be informed when AI is used in their diagnosis and treatment. However, there is currently no federal law mandating such disclosure, and only a few states have enacted relevant legislation. This lack of transparency is particularly harmful to communities that already experience distrust due to historical discrimination in health care.

Building a Trustworthy AI Framework

To rebuild trust, health care systems must change how decisions regarding AI tools are made. It is essential that patients and community members have formal roles in the decision-making process, rather than merely serving in advisory capacities. Additionally, health care providers should publicly report the performance of AI tools, particularly concerning different racial and ethnic groups, before these systems are implemented.

Key Recommendations for Trustworthy AI Implementation

  • Involve patients and community members in decision-making about AI tools.
  • Ensure transparency regarding the use of AI in patient care.
  • Publicly report the performance of AI systems across diverse demographic groups.
  • Educate patients about AI technologies and their implications for care.

Conclusion

Health care systems and companies have the opportunity to earn the trust of their patients and communities through thoughtful and transparent decision-making processes. While the industry can move quickly to adopt new technologies, it is crucial to prioritize trust-building measures. Moving at the speed of trust means involving patients and communities in the decision-making process before AI systems are implemented, ensuring that their voices are heard and considered.