Finding Humanity in Healthcare with AI and Human Oversight
Chris Bowen is the CISO and founder of ClearDATA, where he leads the company’s privacy, security and compliance strategies. This article was recently published as a Forbes Council Post.
The recent enactment of California Senate Bill 1120, the “Physicians Make Decisions Act,” has ignited a critical conversation about the role of AI in healthcare decision-making. As a healthcare technology professional with years of experience balancing innovation and patient rights, I view this legislation as a timely opportunity to explore both the potential and the challenges of AI in healthcare.
At its core, the law prohibits health insurers from using AI as the sole basis for denying health insurance claims, ensuring that human judgment remains integral to coverage decisions. This isn’t just a California issue — it’s a national debate waiting to unfold. While the act highlights the importance of preserving humanity in healthcare, it brings operational and technological challenges that require careful consideration.
Restoring Humanity to Healthcare
Healthcare decisions are profoundly personal. For a patient awaiting approval for a life-changing procedure, the process isn’t merely transactional; it is deeply emotional. By mandating human oversight, California’s law ensures that decisions affecting patients’ lives incorporate context, nuance, and individual circumstances that no algorithm, regardless of its sophistication, can fully replicate.
Consider a patient with a rare or complex condition who doesn’t fit neatly into an AI model’s training data. AI might flag the claim for denial based on statistical anomalies or incomplete patterns. A human reviewer, however, can take a holistic view, considering unique circumstances and consulting with clinicians when needed.
The relationship between patients and their healthcare providers should be rooted in compassion and accountability—qualities that too often seem at odds with the financial pressures that define modern medicine. Could an algorithmic system, programmed to prioritize fairness and patient outcomes, display more empathy than a human practitioner shackled by corporate directives? Empathy, after all, is not just an emotional response—it’s the ability to act with the patient’s best interests in mind. This is an area where AI, free from perverse financial incentives, might actually excel.
We need to consider the systems we’ve built, ensuring that empathy—whether human or machine-driven—is integrated into every stage of diagnosis, treatment, and care. It’s time to put the patient, not the ledger, at the center of every decision.
Public trust in healthcare systems has been tested in recent years, with rising denial rates contributing to frustration and skepticism. By promoting human accountability, this legislation sends a strong message—patients are more than data points. This is a step toward rebuilding trust in a system that often feels impersonal and opaque.
The Operational and Technological Trade-Offs
That said, no law comes without trade-offs, and the “Physicians Make Decisions Act” is no exception. Requiring human oversight of claim denials introduces significant operational challenges. Human review is inherently more time-consuming and costly than automated processing, which may translate to higher administrative expenses for insurers. These costs could eventually be passed on to patients in the form of higher premiums.
Additionally, slower claim processing times could delay care or reimbursements—an outcome at odds with the industry’s goal of improving efficiency and patient satisfaction. AI, when deployed responsibly, excels at processing large volumes of claims quickly and accurately, identifying patterns, and flagging discrepancies. Restricting its use may hinder insurers’ ability to scale operations effectively, especially as claim volumes continue to grow.
The underutilization of advanced AI technologies risks stalling innovation. Modern AI systems are increasingly sophisticated, capable of learning from historical data and improving their accuracy over time. Instead of dismissing these systems outright, the focus should be on developing frameworks that combine AI’s efficiency with human judgment’s empathy and nuance.
A Path Forward: Balancing Technology and Humanity
The tension between innovation and accountability isn’t new, but the stakes are higher in healthcare, where decisions can literally be life or death. California’s legislation is a significant moment, but it shouldn’t be seen as the final word on the issue.
Instead of an outright prohibition on AI-only decisions, a more balanced approach may better serve both patients and the healthcare system. This could include:
- Hybrid Decision Models: Insurers could implement systems where AI performs the initial claim evaluation, flagging straightforward approvals or denials, while routing complex cases to human reviewers. This approach leverages AI’s efficiency while ensuring human oversight for edge cases.
- Transparency and Audits: Requiring insurers to document and audit AI-driven decisions can enhance accountability without stifling innovation. Patients should have access to clear explanations of how their claims are evaluated, including the role of AI.
- National Standards: Healthcare is inherently interconnected, crossing state lines through telemedicine, multi-state health plans, and national insurers. Relying on state-by-state legislation creates a patchwork of rules that can be confusing and inefficient. Establishing national standards for AI use in healthcare claim decisions would provide clarity and consistency, while setting guardrails to protect patients.
- Investment in AI Training: Not all AI systems and large language models (LLMs) are created equal. Training AI models on diverse, high-quality data and continuously validating their outputs against real-world outcomes can reduce errors and improve trust in the technology. Insurers should collaborate with clinicians, patients, and regulators to refine these systems.
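To make the hybrid decision model concrete, here is a minimal sketch, in Python, of how such triage routing might work. All names and thresholds here are hypothetical illustrations, not an actual insurer’s system; the key property is that the model alone can only approve, never deny, which keeps the design consistent with the law’s intent.

```python
# Hypothetical hybrid claim triage: the AI model scores each claim, clear-cut
# cases are auto-approved, and everything else -- including every potential
# denial -- is routed to a human reviewer.

from dataclasses import dataclass

APPROVE_THRESHOLD = 0.95  # assumed confidence cutoff for automatic approval


@dataclass
class Claim:
    claim_id: str
    model_score: float  # AI-estimated probability the claim meets coverage criteria


def triage(claim: Claim) -> str:
    """Return the routing decision for a single claim."""
    if claim.model_score >= APPROVE_THRESHOLD:
        return "auto-approve"   # straightforward approval: AI handles it
    return "human-review"       # complex or borderline: a person decides


# A high-confidence claim is automated; a borderline one goes to a reviewer.
print(triage(Claim("C-001", 0.99)))  # auto-approve
print(triage(Claim("C-002", 0.40)))  # human-review
```

The essential design choice is the asymmetry: automation is permitted only on the approval path, so no patient is ever denied coverage by an algorithm acting alone.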
The Road Ahead
The “Physicians Make Decisions Act” is a wake-up call, reminding us that healthcare isn’t just about efficiency—it’s about people. As we embrace the transformative potential of AI, we must remain vigilant about its limitations and unintended consequences. For those operating in healthcare technology, this moment requires us to call for solutions that strike a balance between innovation and humanity, and to leverage AI as a tool to enhance—not replace—human judgment.
California’s approach is one way to address the challenge, but it’s not the only way. What’s clear is that this conversation is just beginning. As more states consider similar legislation, and as AI continues to evolve, the healthcare industry must work together to ensure that technology serves its ultimate purpose — to improve patient outcomes.
Remember, the goal isn’t to pit humans against machines, or to claim humans are the only ones with empathy, but to find a way for both to work in harmony. When we get that balance right, everyone wins, especially the patient.