Collaborative Efforts Aim to Safely Deploy AI in Healthcare

ICARO Media Group
30/11/2023 20h34

In a joint effort to harness the potential of artificial intelligence (AI) in healthcare while addressing concerns around transparency and patient privacy, major organizations and institutions are banding together to develop locally refined, transparently evaluated large language models (LLMs). These models have the potential to transform the healthcare industry, aiding in tasks such as clinical documentation, diagnosis, and treatment planning.

Tech giant Microsoft and software provider Epic have begun discussions on integrating LLMs into healthcare, spurring initiatives at the University of California San Diego Health system and Stanford University Medical Center. Similarly, Google has announced partnerships with the Mayo Clinic and other healthcare organizations. The launch of Amazon Web Services' HealthScribe, a generative AI clinical documentation service, and investments in Hippocratic AI, a startup developing an LLM for healthcare, further highlight the increasing interest in AI in the medical field.

However, concerns loom over the potential risks of utilizing off-the-shelf proprietary LLMs. The reliance on these models, which are difficult to evaluate and can be modified or discontinued without notice, may compromise the quality of patient care, privacy, and safety. To mitigate this risk, collaboration between healthcare institutions, organizations, and corporations is crucial.

Healthcare systems possess a valuable asset: vast repositories of clinical data. With thousands of hospitals and institutions investing millions of dollars in AI integration, efforts are underway to safely implement AI in healthcare. The US Department of Health and Human Services, the US Department of Veterans Affairs, and other organizations are exploring ways to leverage AI in healthcare following an executive order signed by US President Joe Biden. In the United Kingdom, the National Health Service has allocated significant funds for the development, evaluation, and deployment of AI in healthcare. The European Union has also dedicated a substantial amount to research AI in healthcare.

By pooling their resources and expertise, these organizations can work towards developing transparent, locally refined models that align with privacy regulations and incorporate privately held data. Open collaboration between stakeholders, including corporations, can steer the development and adoption of LLMs to enhance medicine rather than hinder it.

Training LLMs involves feeding massive text-based datasets into a model to build a general-purpose base model, which is then refined with curated data and specialist feedback. LLMs have already demonstrated impressive capabilities in the medical domain, such as passing medical exams and generating clinical notes that clinicians prefer over human-written ones. LLMs have also shown promise in diagnosing challenging cases and simplifying complex surgical consent forms.
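The two-stage pipeline described above can be illustrated with a deliberately simplified sketch. Real LLM training optimizes neural network weights over billions of tokens; the toy word-count "model" below (names like `train_base` and `fine_tune` are hypothetical, not from any real library) only mirrors the data flow: broad general text builds the base, and a small curated clinical dataset is weighted more heavily during refinement.

```python
from collections import Counter

def train_base(corpus):
    """Stage 1: build a toy 'base model' from broad general text
    (stand-in for large-scale pretraining)."""
    model = Counter()
    for doc in corpus:
        model.update(doc.lower().split())
    return model

def fine_tune(model, curated_docs, weight=5):
    """Stage 2: refine the model on curated, specialist-reviewed data.
    Curated examples count more than base data, shifting the model
    toward the specialist domain."""
    tuned = Counter(model)
    for doc in curated_docs:
        for token in doc.lower().split():
            tuned[token] += weight
    return tuned

def most_likely(model):
    """Return the token the toy model considers most probable."""
    return model.most_common(1)[0][0]

# Broad general text vs. a small curated clinical set (illustrative only)
general = ["the cat sat on the mat", "the dog ran in the park"]
clinical = ["patient presents with chest pain", "patient denies chest pain"]

base = train_base(general)
tuned = fine_tune(base, clinical)
```

After refinement, clinical vocabulary dominates the toy model's predictions even though the base corpus contained none of it, which is the qualitative effect domain fine-tuning aims for.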

Despite their potential benefits, challenges exist in deploying LLMs in healthcare settings. LLMs may generate false outputs, and their knowledge cannot easily be updated without expensive retraining. Privacy issues can arise when sensitive medical records are used for training, including the potential for information leakage. Biases related to gender, race, disability, and socioeconomic status may be amplified when LLMs are trained on data from the internet. Additionally, evaluating the safety and accuracy of LLMs remains an ongoing challenge, as performance on specific tasks may not reflect real-world usefulness.

The future integration of AI in healthcare depends on collaborative efforts that ensure transparency, privacy, and patient well-being. With careful development and evaluation, LLMs have the potential to revolutionize clinical practices, improve patient experiences, and predict medical outcomes.

The views expressed in this article do not reflect the opinion of ICARO, or any of its affiliates.
