LLMs in Healthcare: Transforming Patient Care and Efficiency 

Large Language Models (LLMs) in healthcare aren't just the future; they're already transforming the way we work. From automating sales research to eliminating tedious admin tasks, LLMs have evolved from niche innovations to essential business tools practically overnight. But as companies, including healthcare organizations, race to adopt them, they are also reviving a familiar and dangerous challenge: Shadow IT.

Are organizations prepared for the risks that come with this rapid adoption? Or are they charging full speed ahead without realizing the security gaps that they are creating?

The emergence of “Bring Your Own LLM” (BYO-LLM) introduces risks and challenges reminiscent of the unregulated adoption of cloud services and personal devices in the workplace. Nowhere is this more pressing than in healthcare, where data privacy and regulatory compliance are paramount.

Learning from the Past

For years, IT departments battled the proliferation of unauthorized software and services—what we called Shadow IT. Employees frustrated with slow, bureaucratic approval processes turned to consumer-grade apps and cloud solutions to get their jobs done. This led to significant security and compliance risks, as corporate data was unknowingly stored, shared, and exposed in unsecured environments.

The adoption of LLMs follows the same pattern. Today, anyone can access a powerful AI model with a few clicks, whether through OpenAI's ChatGPT, Google's Gemini, or one of the many open-source alternatives. The problem is that many organizations have yet to define guardrails around their usage. Employees, ranging from hospital administrators to EMTs, are already leveraging LLMs to speed up workflows, often inputting sensitive information without fully considering the risks.

The Risks and Pitfalls of Leveraging LLMs in Healthcare

The risk of data leakage is real. Unlike traditional software applications, public LLM services don't just process the data users submit; many providers retain prompts and may use them to train future models. If an employee unknowingly submits protected health information (PHI) or proprietary corporate data to a public LLM, that data is potentially exposed.

In 2023, we saw high-profile incidents where sensitive company data was inadvertently ingested by LLMs, raising the risk that it could resurface in response to future queries. Samsung employees, for example, leaked confidential information by entering sensitive source code into ChatGPT, leading the company to ban the use of such AI tools.

In healthcare, this isn’t just a cybersecurity issue—it’s a regulatory one. HIPAA mandates strict protections around PHI, and a single lapse in governance can lead to massive fines and reputational damage. Yet, despite these high stakes, many organizations have not yet developed a comprehensive approach to LLM governance in healthcare.

Some organizations are responding by banning access to public LLMs. Companies like Apple and JPMorgan Chase have restricted employee use of ChatGPT due to concerns over potential data leaks and security vulnerabilities.

Recent developments surrounding the Chinese AI application DeepSeek have also highlighted significant data privacy and security concerns. In February 2025, New York Governor Kathy Hochul announced a statewide ban on DeepSeek for all government networks and devices, citing fears of foreign government surveillance. These incidents underscore the critical importance of implementing robust data governance policies and security measures when integrating LLMs into organizational workflows. Unregulated use can lead to data breaches, unauthorized data transmission, and potential exploitation by malicious actors.

But in a multi-device world, where employees use personal phones and unsecured networks, bans are ineffective at best. If an EMT is using ChatGPT on their personal device to generate patient transport summaries, that data is still at risk, regardless of whether the hospital's IT team has implemented blocks on corporate devices.

Organizations must proactively address these challenges by establishing clear guidelines for LLM usage, ensuring compliance with data protection regulations, and conducting regular security assessments to safeguard sensitive information.

Instead of outright bans, organizations must shift their focus to enablement. Just as enterprise IT departments eventually embraced the cloud by offering secure, sanctioned solutions, healthcare organizations must do the same with LLMs. The goal should not be to prevent employees from using AI but to provide them with safe, compliant alternatives.

Best Practices for Safe LLM Adoption in Healthcare

Organizations must take a structured approach to LLM governance, one that includes four key pillars:

  • Data Governance & Protection: Implement strict data access controls and ensure PHI is never fed into public LLMs. Utilize data loss prevention (DLP) tools to monitor and restrict unauthorized data transfers; a minimal sketch of this kind of prompt screening appears after this list. Encourage the use of private, company-managed LLMs that do not share data with external parties.
  • Training & Awareness: Employees need clear, practical guidance on safe LLM usage. Most people exploring AI tools today are doing so without understanding the security and compliance implications. Implement ongoing security awareness training, modeled on phishing simulations, that educates staff on the risks of AI and best practices for using it responsibly.
  • Controlled Enablement: Instead of banning LLMs, organizations should create approved AI workflows that align with security policies. Consider offering a company-sanctioned LLM trained on internal, proprietary data—ensuring employees have a safe alternative to consumer-grade models. Work with vendors who specialize in secure AI implementations rather than trying to build an in-house solution from scratch.
  • Regular Audits & Compliance Checks: Conduct routine audits to ensure LLM implementations adhere to healthcare regulations like HIPAA. Regularly review and update AI policies to address emerging threats or compliance requirements, ensuring that patient data remains secure and protected.
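
As a concrete illustration of the Data Governance & Protection pillar, below is a minimal sketch of outbound prompt screening: the kind of check a company-managed AI gateway might run before any text leaves the organization for a public LLM. The helper names (redact_phi, safe_prompt) and the regex patterns are hypothetical, chosen for illustration; a handful of regular expressions is no substitute for a vetted DLP engine, which real deployments should use.

```python
import re

# Illustrative patterns only. Production PHI detection should use a
# dedicated DLP engine; these regexes exist to show where the check sits.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def redact_phi(text: str) -> tuple[str, list[str]]:
    """Replace likely PHI with typed placeholders and report what was found."""
    findings = []
    for label, pattern in PHI_PATTERNS.items():
        text, hits = pattern.subn(f"[{label} REDACTED]", text)
        if hits:
            findings.append(label)
    return text, findings

def safe_prompt(text: str, block_on_hit: bool = True) -> str:
    """Gate an outbound prompt: refuse it in strict mode, or pass it on redacted."""
    redacted, findings = redact_phi(text)
    if findings and block_on_hit:
        raise ValueError(f"Prompt blocked: possible PHI detected ({', '.join(findings)})")
    return redacted

if __name__ == "__main__":
    note = "Transport summary for John, DOB 04/12/1957, MRN: 00482913, call 555-867-5309."
    print(safe_prompt(note, block_on_hit=False))
    # Transport summary for John, DOB [DOB REDACTED], [MRN REDACTED], call [PHONE REDACTED].
```

The block_on_hit flag captures a real policy decision: strict environments can refuse to send anything that trips a detector, while lower-risk workflows can redact and proceed. Either way, the value is in placing an enforceable checkpoint between employees and the model rather than relying on training alone.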

The Future of LLMs in Healthcare

It’s easy to see why LLM adoption has exploded—these tools provide real business value. From reducing administrative burdens to assisting with documentation, they have the potential to make healthcare operations more efficient. But without a proactive strategy, they also pose a significant risk to patient privacy and data security.

The healthcare industry already lags in digital transformation due to competing priorities and limited IT budgets. Waiting for a major data breach before addressing AI security is not an option. Just as hospitals have learned to secure electronic health records (EHRs) and cloud-based systems, they must now develop governance frameworks for LLMs.

LLMs in healthcare are here to stay. The question is not whether employees will use them—but whether organizations will take the steps necessary to use them securely.


FAQ

What are Large Language Models (LLMs) in healthcare?

LLMs in healthcare are advanced AI models capable of processing and generating human-like text. They assist with tasks such as documentation, patient communication, administrative automation, and research. However, their use requires careful governance to ensure data privacy and compliance.

How are LLMs transforming the healthcare industry?

LLMs are streamlining healthcare workflows by automating administrative tasks, enhancing medical research, supporting clinical decision-making, and improving patient engagement. They help reduce workload pressures on healthcare professionals while increasing efficiency and accuracy.

What are the risks of using LLMs in healthcare?

  • Data Privacy & Security – LLMs can inadvertently store or expose protected health information (PHI), leading to potential HIPAA violations.
  • Regulatory Compliance Issues – Improper handling of data can result in non-compliance with laws like HIPAA.
  • Shadow IT & Unauthorized Use – Employees may use unregulated AI tools, increasing security vulnerabilities.
  • Data Leakage & Cybersecurity Threats – If PHI is input into public AI models, there’s a risk of data breaches.

Can banning LLMs in healthcare prevent security risks?

Not entirely. Employees may still use LLMs on personal devices, leading to unmonitored data exposure. Instead of bans, organizations should implement structured AI governance policies and provide secure, approved LLM alternatives.

Leverage LLMs in Healthcare With Confidence

Speak with a healthcare cybersecurity and compliance expert today.