The Ethical Frontier: Implementing Artificial Intelligence in Healthcare Systems

The integration of AI in healthcare holds transformative potential to enhance patient care, but it also raises essential ethical considerations that must be addressed to ensure fair and equitable deployment1,2. Dr. Julia Mokhova, Head of Medical at Vivanti, and Kenza Benkirane, AI Lead at Vivanti, explore this significant topic from their respective areas of expertise.
Before we dive into the ethical concerns, let’s review key AI frameworks:
- Machine Learning (ML): Algorithms that learn to perform a task without being explicitly programmed to do so, using data to make predictions or decisions. In healthcare, ML can analyse patient records to predict disease risk or recommend treatments based on large datasets3 (see the sketch below).
- Computer Vision: Systems that process and analyse visual information from the world. In healthcare, this enables automated medical imaging analysis, from X-rays to pathology slides4.
- Natural Language Processing (NLP): Technology that processes and analyses human language. This paradigm also includes Large Language Models (LLMs), such as GPT (the model behind ChatGPT), which are trained on vast text datasets and can understand and generate human-like text. In healthcare, these can assist in medical research, documentation, and patient communication5.
While radiology remains the top AI application in healthcare due to the high availability of medical images6,7, natural language processing (NLP) is increasingly being utilised, driven both by the ongoing digitalisation of healthcare settings and by the need to manage the heavy workload healthcare professionals increasingly face8,9,10.
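To make the ML paradigm above concrete, the following minimal sketch trains a disease-risk classifier on structured patient records. Everything here is synthetic and illustrative; the features (age, systolic blood pressure, HbA1c) are assumptions for demonstration, not a clinical model:

```python
# A toy disease-risk predictor on synthetic patient records (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.normal(50, 15, n),    # age (years)
    rng.normal(120, 20, n),   # systolic blood pressure (mmHg)
    rng.normal(5.5, 1.0, n),  # HbA1c (%)
])
# Synthetic ground truth: older age and higher HbA1c raise disease probability.
risk = 1 / (1 + np.exp(-(0.04 * (X[:, 0] - 50) + 0.8 * (X[:, 2] - 5.5))))
y = (rng.random(n) < risk).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```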
Core Ethical Considerations
AI implementation in healthcare must be guided by fundamental ethical principles that protect patient interests while promoting innovation.
1. Patient Privacy and Data Protection
The development and operation of healthcare AI systems require vast amounts of sensitive patient data. In acute care settings, where immediate access to patient information can be crucial, there's a delicate balance between data accessibility and privacy protection11,12.
Key considerations include:
- Implementing robust data encryption and access controls (a minimal sketch follows this list)
- Ensuring compliance with healthcare data protection regulations
- Developing clear protocols for data sharing and storage
- Maintaining transparent communication with patients about data usage
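As a minimal sketch of the first point, encryption at rest, the snippet below uses the Python `cryptography` library's Fernet (symmetric, AES-based) scheme. The record content is a placeholder; real deployments would add key management (secrets storage, rotation, audited access) on top:

```python
# Encrypting a patient record at rest with symmetric encryption (sketch).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: held in a secrets manager, never in code
cipher = Fernet(key)

record = b'{"patient_id": "A-1029", "diagnosis": "..."}'  # placeholder record
token = cipher.encrypt(record)          # this ciphertext is what gets stored
assert cipher.decrypt(token) == record  # round-trip check
```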
2. Algorithmic Bias and Fairness
One of the most pressing ethical concerns in healthcare AI is algorithmic bias. AI systems learn from historical data, which may contain inherent biases reflecting societal inequalities in healthcare access and treatment. In acute care settings, where rapid decisions are crucial, these biases could lead to discriminatory outcomes affecting vulnerable populations13,14,15.
To address this, healthcare organisations must:
- Regularly audit AI systems for bias across different demographic groups (see the sketch after this list)
- Ensure training data represents diverse patient populations
- Implement continuous monitoring systems to detect and correct bias in real-time
- Maintain transparency about known limitations and potential biases in AI systems
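A bias audit of the kind listed above can start very simply: compute error rates separately for each demographic group and compare them. The sketch below assumes a pandas DataFrame with hypothetical `y_true`, `y_pred`, and `ethnicity` columns:

```python
# Per-group error-rate audit (sketch; column names are assumptions).
import pandas as pd

def audit_by_group(df: pd.DataFrame, group_col: str = "ethnicity") -> pd.DataFrame:
    """Report accuracy, false-negative rate and false-positive rate per group."""
    rows = []
    for group, sub in df.groupby(group_col):
        tp = ((sub.y_pred == 1) & (sub.y_true == 1)).sum()
        tn = ((sub.y_pred == 0) & (sub.y_true == 0)).sum()
        fp = ((sub.y_pred == 1) & (sub.y_true == 0)).sum()
        fn = ((sub.y_pred == 0) & (sub.y_true == 1)).sum()
        rows.append({group_col: group,
                     "n": len(sub),
                     "accuracy": (tp + tn) / len(sub),
                     "fnr": fn / max(fn + tp, 1),   # missed diagnoses
                     "fpr": fp / max(fp + tn, 1)})  # false alarms
    return pd.DataFrame(rows)
```

Large gaps in false-negative rate between groups are exactly the kind of discriminatory outcome such an audit is meant to surface.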
Dr. JM: "The issue of bias is extremely important. On one hand, we have an obligation to address and correct injustices rooted in the past. On the other, while it’s still debated, we can’t ignore that there are gender and ethnic differences, such as physiological variations or differing reactions to certain drugs. It’s therefore essential to adopt a balanced approach when designing the databases that AI relies on."
KB: "To effectively combat algorithmic bias in healthcare, we must address it at every stage of AI development and implementation:
- Data collection: As Dr. Mokhova noted, data collection may unintentionally exclude certain populations. For example, using only electronic health records might overlook individuals with limited healthcare access, skewing data towards wealthier or urban groups.
- Data preprocessing: Decisions made during data cleaning can introduce bias. Removing outliers, for instance, could disproportionately impact minority groups whose health patterns differ from the majority.
- Feature selection: Choosing variables can reinforce bias. For instance, using postcodes as a proxy for race can lead to unfair outcomes.
- Model development: Algorithms can embed biases based on design or optimisation criteria. A focus on overall accuracy, for example, might compromise fairness across demographic groups.
- Model evaluation: Standard metrics may miss biases; assessing performance across subgroups helps ensure fairness.
- Deployment and monitoring: Ongoing monitoring post-deployment is crucial, as real-world usage may reveal new biases.
By maintaining a critical eye on bias throughout these stages, we can work towards creating AI systems that are not only accurate but also equitable in their application across diverse patient populations. This comprehensive approach is essential for building trust in AI-driven healthcare solutions and ensuring they benefit all patients equally16."
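KB's first stage, data collection, lends itself to an equally simple check: compare the demographic make-up of the training set against a reference population. The column name and reference shares below are illustrative assumptions; in practice they would come from census or registry statistics:

```python
# Dataset-representativeness check (sketch; reference shares are placeholders).
import pandas as pd

REFERENCE = {"urban": 0.56, "rural": 0.44}  # e.g. from census statistics

def representativeness(df: pd.DataFrame, col: str = "residence") -> pd.DataFrame:
    """Compare each group's share of the dataset with its population share."""
    observed = df[col].value_counts(normalize=True)
    report = pd.DataFrame({
        "dataset_share": observed,
        "reference_share": pd.Series(REFERENCE),
    })
    report["gap"] = report["dataset_share"] - report["reference_share"]
    return report  # a large negative gap flags an under-represented group
```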
3. Transparency in AI Decision-Making
The "black box" nature of deep-learning-based AI algorithms poses significant ethical challenges, particularly in critical healthcare settings where clinicians need to understand and explain treatment decisions17.
To address transparency concerns:
- Implement explainable AI systems where possible (see the sketch after this list)
- Maintain clear documentation of AI decision-making processes
- Develop protocols for situations where AI recommendations conflict with clinical judgment
- Ensure clinicians understand both the capabilities and limitations of AI systems
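One model-agnostic starting point for the first item is permutation importance: shuffle each input feature and measure how much performance degrades. The sketch below uses scikit-learn on a synthetic dataset; it is a first transparency step, not a full explainable-AI solution:

```python
# Permutation importance as a simple, model-agnostic explanation (sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=400, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")  # higher = model relies on it more
```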
Dr. JM: "Some experts I know criticise artificial intelligence for its lack of flexible thinking. For instance, AI may identify a prominent feature and build its reasoning around it i.e., acting in a ‘logical and direct’ way. However, there are times when it’s necessary to consider a less obvious factor, even if it seems less significant at that moment. In these situations, doctors often rely on intuition. For AI to develop a similar kind of ‘intuition,’ it would likely require an extensive database of diverse cases."
KB: “Current AI systems excel at recognising prominent features but often lack the causal understanding and contextual awareness that human intuition has, particularly in complex fields like medicine. This limitation comes from how heavily they rely on training data and the challenges of generalising to novel situations. Research in causal AI, meta-learning, and neuro-symbolic approaches aims to enhance AI's adaptability and mimic human-like reasoning.
However, achieving true cognitive flexibility comparable to human intuition - a key requirement for artificial general intelligence (AGI) - remains an unmet challenge. In the meantime, a practical approach is to fine-tune models for a specific downstream task18,19. That said, this strategy requires substantial domain expertise and extensive high-quality training data tailored to the target task. Such task-specific models, while effective within their defined scope, demand significant resources for both development and validation.”
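As an illustration of the fine-tuning strategy KB describes, here is a minimal sketch using the Hugging Face `transformers` and `datasets` libraries. The base model, the two toy notes, and the urgent/routine labels are all assumptions for demonstration; a real clinical system would need a domain-appropriate base model, de-identified data at scale, and rigorous validation:

```python
# Fine-tuning a small language model for a downstream clinical task (sketch).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical, toy training examples (1 = urgent, 0 = routine).
data = Dataset.from_dict({
    "text": ["Patient reports acute chest pain radiating to left arm.",
             "Routine follow-up, no new symptoms reported."],
    "label": [1, 0],
})

model_name = "distilbert-base-uncased"  # stand-in; a clinical base model is preferable
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=128)

data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=data).train()
```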
The quest for fairness and equity in healthcare delivery becomes more complex with AI implementation. Algorithmic bias can perpetuate or exacerbate existing healthcare disparities if not carefully monitored and corrected. Transparency and accountability in AI systems must be balanced with their technical complexity, ensuring patients and healthcare providers can trust AI-driven decisions while maintaining system sophistication19b.
Dr. JM: "The burning question is: how will the boundaries of a doctor’s responsibility be defined when relying on SaMD (Software as a Medical Device) for decision-making? Doctors sometimes assume full responsibility when making risky choices. But what if they follow AI guidance from SaMD officially used in their clinic? And what happens if SaMD suggests one course, while the doctor’s experience points to another? Clear guidelines are essential to protect doctors when working with AI-driven SaMD."
KB: “While legal questions about responsibility and liability are critical concerns, AI model development necessitates collaboration with medical professionals for relevant integration. This approach enables clinicians to evaluate model outputs critically, incorporates robust trustworthiness metrics, establishes clear usage protocols, and defines liability boundaries. By fostering this collaboration, we create systems that augment clinical judgment while preserving physician autonomy and patient safety.”
Use Cases and Implementation Challenges
1. Clinical Diagnosis & Medical Imaging
Specific Example: In radiology and thoracic imaging, AI models have demonstrated the capability to detect cancerous lesions and analyse medical images with accuracy comparable to, or exceeding, that of human specialists, as shown in rigorous studies published in scientific journals20.
Key considerations:
- The technology has shown promise in analysing X-rays and MRIs and in detecting cancerous abnormalities
- However, a major bias issue emerged with skin cancer detection algorithms that were primarily trained on fair-skinned populations, making them less effective for people of colour.
- This highlights the critical need for diverse, representative training data.
Dr. JM: "A recent publication revealed that AI classified all skin change images as pathological solely due to the presence of a ruler. This highlights a dual challenge: while AI can identify subtle nuances that human eyes might miss, it's crucial to flag cases needing specialist input for second opinions. Ongoing issues include image accuracy; surgeons often find discrepancies between diagnostic imaging and actual disease extent. Additionally, images should correlate with clinical findings. We must also consider the risk of data distortion from computer viruses. Can AI recognise anomalies, or will it provide a definitive yet misleading diagnosis?"
KB: “The challenge of AI image misclassification, exemplified by the ruler case, demonstrates the issue of spurious correlations in deep learning systems. This highlights why we need medical-specific foundation models and fine-tuned architectures that better understand clinical imaging nuances. However, with increasing AI-enabled cyberattacks in healthcare, we must prioritise both model accuracy and robust cybersecurity measures to protect system integrity. Success requires advancing model development, implementing rigorous validation protocols, and fostering meaningful collaboration between healthcare professionals and AI systems.”
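The ruler failure can be reproduced in miniature. In the toy experiment below (entirely synthetic, no real imaging data), an artefact feature co-occurs with pathology in training, so the model learns the shortcut and collapses when the artefact appears at random in deployment:

```python
# A synthetic demonstration of a spurious correlation ("the ruler effect").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
signal_train = rng.normal(size=n)                               # weak true lesion signal
y_train = (signal_train + rng.normal(scale=2, size=n) > 0).astype(int)
ruler_train = y_train.copy()                                    # artefact co-occurs with label
X_train = np.column_stack([signal_train, ruler_train])
model = LogisticRegression().fit(X_train, y_train)

# Deployment: rulers appear at random, independent of pathology.
signal_test = rng.normal(size=n)
y_test = (signal_test + rng.normal(scale=2, size=n) > 0).astype(int)
X_test = np.column_stack([signal_test, rng.integers(0, 2, size=n)])

print("training accuracy:", model.score(X_train, y_train))  # near-perfect via shortcut
print("deployment accuracy:", model.score(X_test, y_test))  # falls toward chance
```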
2. Patient Triage & Resource Allocation
Specific Example: DeepSOFA, a deep-learning counterpart of the Sequential Organ Failure Assessment (SOFA) score, was developed to predict mortality in critically ill patients and to help guide resource allocation decisions, a use case that became especially prominent during the COVID-19 pandemic21.
Key considerations:
- It can help optimise scarce resources like hospital beds and ventilators
- Raises ethical concerns about algorithmic bias affecting access to care
- Must ensure human oversight of critical care decisions
- Requires transparency about how algorithms make prioritisation decisions (one concrete check is sketched below)
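One concrete check for the transparency requirement in the last bullet is subgroup calibration: do predicted risks match observed outcomes within every demographic group, not just on average? A minimal sketch, assuming arrays of true outcomes, predicted probabilities, and group labels:

```python
# Subgroup calibration check for a risk score (sketch; inputs are assumptions).
import numpy as np
from sklearn.calibration import calibration_curve

def subgroup_calibration(y_true, y_prob, groups, n_bins=5):
    """For each group, pair mean predicted risk with the observed event rate."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        frac_observed, mean_predicted = calibration_curve(
            y_true[mask], y_prob[mask], n_bins=n_bins)
        report[g] = list(zip(mean_predicted, frac_observed))
    return report  # systematic over-prediction for one group warrants review
```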
Dr. JM: "Refusal to maintain vital functions, further treatment, and other interventions is the hardest step for a doctor and often a tragedy for the patient and their loved ones. How empathetic will the AI be in these situations, and will it end up focusing solely on saving money and bed space? Or perhaps AI will be a temptation for physicians to abandon clinical reasoning and responsibility. We'll see."
KB: “It's crucial to emphasise that AI, including large language models (LLMs), does not possess consciousness or genuine understanding of human suffering. These systems are essentially processing tokens - strings of text or numerical data - based on patterns learned from training data.
They lack the emotional intelligence, empathy, and nuanced understanding that human healthcare professionals bring to these sensitive situations. This limitation of AI systems underscores the importance of managing expectations around AI capabilities in healthcare settings.
Healthcare professionals and patients must be clearly informed about what AI can and cannot do. AI should enhance human decision-making, not replace it, especially in ethically complex situations like end-of-life care.
Moreover, AI developers bear the responsibility of embedding ethical considerations in their designs. While AI lacks empathy, developers can ensure models flag cases needing human intervention and prioritise patient well-being over economic factors.”
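The "flag cases needing human intervention" pattern KB mentions can be as simple as a confidence threshold: the system only acts on high-confidence predictions and routes everything else to a clinician. The threshold and labels below are illustrative assumptions:

```python
# Selective prediction: defer low-confidence cases to a human (sketch).
import numpy as np

def triage_with_deferral(class_probabilities, threshold=0.9):
    """Return a decision per case, deferring uncertain ones to a clinician."""
    decisions = []
    for p in class_probabilities:      # p: model's probability over classes
        if np.max(p) >= threshold:
            decisions.append(int(np.argmax(p)))
        else:
            decisions.append("refer_to_clinician")
    return decisions

print(triage_with_deferral([[0.97, 0.03], [0.55, 0.45]]))
# -> [0, 'refer_to_clinician']
```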
Future Directions and Recommendations
To ensure the ethical implementation of AI in acute healthcare settings, we recommend the following24,25,26:
1. Regulatory Framework Development:
- Create specific guidelines for AI use in acute care settings
- Establish clear accountability mechanisms
- Develop standards for AI system validation and testing
2. Educational Initiatives:
- Implement comprehensive training programs for healthcare professionals
- Develop resources for patient education about AI in healthcare
- Foster interdisciplinary collaboration between technical and medical experts
3. Continuous Evaluation and Improvement:
- Regular assessment of AI system performance and impact
- Ongoing monitoring for bias and ethical concerns (see the sketch below)
- Continuous updating of protocols based on emerging evidence and experience
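The monitoring in the list above can be partially automated. The sketch below flags input-data drift between the data a model was validated on and what it sees in production, using a two-sample Kolmogorov-Smirnov test from SciPy; the feature names and significance level are assumptions:

```python
# Input-drift monitor: compare live feature distributions with the reference.
from scipy.stats import ks_2samp

def drift_report(reference, live, features, alpha=0.01):
    """Flag features whose live distribution has shifted from the reference."""
    flagged = []
    for f in features:
        statistic, p_value = ks_2samp(reference[f], live[f])
        if p_value < alpha:
            flagged.append((f, round(statistic, 3), p_value))
    return flagged  # any flagged feature should trigger review, possibly retraining
```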
Conclusion
The integration of AI in acute healthcare settings represents both an extraordinary opportunity and a significant ethical challenge. Success lies in maintaining a careful balance between technological advancement and ethical healthcare delivery. As Dr. Mokhova aptly puts it, AI should be "an assistant, but not a doctor."
The path forward requires continuous dialogue between healthcare professionals, technologists, ethicists, and patients. By addressing ethical considerations proactively and maintaining focus on patient welfare, we can harness AI's potential while preserving the human element that lies at the heart of healthcare delivery.
Only through this balanced approach can we ensure that AI serves its intended purpose: enhancing, rather than replacing, the critical human elements of healthcare delivery in acute settings.
References:
1. Ong, J. J., De Luca, M., Hsia, C. C., & Khoo, X. Y. (2023). Artificial intelligence in dentistry: current applications and future perspectives. British Dental Journal, 234(9), 681–687. https://www.nature.com/articles/s41415-023-5845-2
2. Panch, T., Mattie, H., & Atun, R. (2021). Artificial intelligence and algorithmic bias: implications for health systems. Journal of Global Health, 11, 07003. https://pmc.ncbi.nlm.nih.gov/articles/PMC8285156/
3. Chen, A., & Chen, D. O. (2022). Simulating machine learning-enabled learning health systems using synthetic patient data. Scientific Reports, 12, 17917. https://www.nature.com/articles/s41598-022-23011-4
4. Litjens, G., Kooi, T., Bejnordi, B. E., Setio, A. A. A., Ciompi, F., Ghafoorian, M., ... & Sánchez, C. I. (2017). A survey on deep learning in medical image analysis. Medical Image Analysis, 42, 60–88. https://www.sciencedirect.com/science/article/abs/pii/S1361841517301135
5. Demner-Fushman, D., Chapman, W. W., & McDonald, C. J. (2009). What can natural language processing do for clinical decision support? Journal of Biomedical Informatics, 42(5), 760–772.
6. Benjamens, S., Dhunnoo, P., & Meskó, B. (2020). The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. npj Digital Medicine, 3(1), 1–8. https://doi.org/10.1038/s41746-020-00324-0
7. Yordanova, M. Z. (2024). The Applications of Artificial Intelligence in Radiology: Opportunities and Challenges. European Journal of Medical and Health Sciences, 6(2), 11–14. https://doi.org/10.24018/EJMED.2024.6.2.2085
8. Jiang, M., Sanger, T., & Liu, X. (2021). Natural Language Processing for Smart Healthcare. IEEE Journal of Biomedical and Health Informatics, 25(11), 4083–4099.
9. Spasic, I., & Nenadic, G. (2020). Clinical Text Data in Machine Learning: Systematic Review. JMIR Medical Informatics, 8(3), e17984.
10. World Economic Forum. (2022, July 27). Natural language processing could help alleviate healthcare worker shortage. https://www.weforum.org/agenda/2022/07/natural-language-processing-healthcare-worker-shortage/
11. Kaissis, G. A., Makowski, M. R., Rückert, D., et al. (2020). Secure, privacy-preserving and federated machine learning in medical imaging. Nature Machine Intelligence, 2, 305–311. https://doi.org/10.1038/s42256-020-0186-1
12. Price, W. N., & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25(1), 37–43. https://doi.org/10.1038/s41591-018-0272-7
13. Aquino, Y. S. J., Carter, S. M., Houssami, N., Braunack-Mayer, A., Win, K. T., Degeling, C., Wang, L., & Rogers, W. A. (2023). Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives. Journal of Medical Ethics, 1–9. https://doi.org/10.1136/JME-2022-108850
14. Lin, S. (2022). A Clinician's Guide to Artificial Intelligence (AI): Why and How Primary Care Should Lead the Health Care AI Revolution. Journal of the American Board of Family Medicine, 35(1), 175. https://doi.org/10.3122/JABFM.2022.01.210226
15. Vyas, D. A., Eisenstein, L. G., & Jones, D. S. (2020). Hidden in Plain Sight — Reconsidering the Use of Race Correction in Clinical Algorithms. New England Journal of Medicine, 383(9), 874–882. https://doi.org/10.1056/NEJMMS2004740
16. Chin, M. H., Afsar-Manesh, N., Bierman, A. S., et al. (2023). Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Network Open, 6(12). https://doi.org/10.1001/jamanetworkopen.2023.45050
17. Ghassemi, M., Oakden-Rayner, L., & Beam, A. L. (2021). The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health, 3(11), e745–e750.
18. Vaid, A., Landi, I., Nadkarni, G., & Nabeel, I. (2023). Using fine-tuned large language models to parse clinical notes in musculoskeletal pain disorders. The Lancet Digital Health, 5(12), e855–e858. https://doi.org/10.1016/S2589-7500(23)00202-9
19. Wang, G., Yang, G., Du, Z., Fan, L., & Li, X. (2023). ClinicalGPT: Large Language Models Finetuned with Diverse Medical Data and Comprehensive Evaluation. arXiv. https://doi.org/10.48550/ARXIV.2306.09968
19b. Shevtsova, D., Ahmed, A., Boot, I. W. A., Sanges, C., Hudecek, M., Jacobs, J. J. L., Hort, S., & Vrijhoef, H. J. M. (2024). Trust in and Acceptance of Artificial Intelligence Applications in Medicine: Mixed Methods Study. JMIR Human Factors, 11, e47031. https://doi.org/10.2196/47031
20. World Health Organization. (2021). Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization.
21. Shickel, B., Loftus, T. J., Adhikari, L., Ozrazgat-Baslanti, T., Bihorac, A., & Rashidi, P. (2019). DeepSOFA: A Continuous Acuity Score for Critically Ill Patients using Clinically Interpretable Deep Learning. Scientific Reports, 9(1), 1–12. https://doi.org/10.1038/s41598-019-38491-0
22. Callaway, E. (2024). Chemistry Nobel goes to developers of AlphaFold AI that predicts protein structures. Nature, 634(8034), 525–526. https://doi.org/10.1038/D41586-024-03214-7
23. Hswen, Y., & Brownstein, J. S. (2019). Real-Time Digital Surveillance of Vaping-Induced Pulmonary Disease. New England Journal of Medicine, 381(18), 1778–1780. https://doi.org/10.1056/NEJMC1912818
24. Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial Intelligence in Healthcare (pp. 295–336).
25. Sit, C., Srinivasan, R., Amlani, A., Muthuswamy, K., Azam, A., Monzon, L., & Poon, D. S. (2020). Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights into Imaging, 11(1), 1–6.
26. Geis, J. R., Brady, A. P., Wu, C. C., Spencer, J., Ranschaert, E., Jaremko, J. L., ... & Kohli, M. (2019). Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Radiology, 293(2), 436–440.