Nabeel Abu-Mahfouz1*
1University of Guelph, Ontario. 50 Stone Rd E, Guelph, ON N1G 2W1
*nabeelabumahfouz@gmail.com
Abstract
Artificial intelligence (AI) is revolutionizing medical diagnostics, treatment planning and patient care, but bias in AI algorithms poses significant ethical and technical challenges. Skewed training datasets, algorithmic flaws and systemic healthcare inequity all contribute to bias in AI algorithms, which may lead to disparities in AI-driven medical decisions that disproportionately harm marginalized populations. To tackle such biases, a multi-faceted approach integrating data-centric strategies, algorithmic innovation and regulatory interventions is warranted. Maintaining diversity in datasets through enriched curation and reweighing techniques can help mitigate bias within the training data used to develop AI models. Simultaneously, bias-aware training methodologies, such as adversarial debiasing and federated learning, can enhance model fairness. Transparency and accountability are also critical: AI models designed to show how they reach their decisions foster trust, and continuous fairness assessments ensure equitable outcomes. The establishment of regulatory policies and multidisciplinary collaboration ensures the preservation of patient privacy and informed consent. Unless checked through proactive interventions, biased AI algorithms will widen current inequity in healthcare, not narrow it. By combining engineering interventions with ethical governance, medical professionals can develop AI-facilitated healthcare tools that enhance equity and patient care across diverse groups. This commentary explores obstacles and approaches to enhancing algorithmic fairness in medical AI, ensuring that technological advancements benefit all patient groups equitably.
Author Summary
AI is transforming healthcare by improving diagnosis, therapy and patient care. However, AI is susceptible to bias through unbalanced training datasets, algorithmic flaws and healthcare system inequities. This bias can translate into unequal healthcare decisions that disproportionately affect marginalized groups. To address this, diverse, well-curated datasets and bias-aware training processes such as adversarial debiasing are needed. Explainable AI models and ongoing fairness audits keep these processes accountable. Ethical guidelines must protect patient confidentiality and informed consent. If left unchecked, AI bias can perpetuate healthcare inequities instead of resolving them. By combining engineering solutions with ethical oversight, medical professionals can create fairer AI-driven tools that benefit all patients. This article explores the challenges of bias in medical AI and strategies to make these technologies more equitable for diverse populations.
Introduction
Artificial intelligence (AI) is increasingly becoming an important tool in medical diagnostics and treatment planning, with the potential to enhance efficiency, accuracy and access in medical care delivery (Bajwa et al., 2021). However, despite its promise, the deployment of AI in medicine is not without significant challenges. One key challenge is bias in AI algorithms. Bias can arise through unbalanced training datasets, algorithmic flaws and systemic inequities in healthcare (Grzybowski et al., 2024). AI biases can amplify medical care inequity, significantly impacting disadvantaged groups in society. It is therefore critical to explore the various biases in medical AI, their long-term impact on patient care, and the technical and ethical frameworks for counteracting them. The sections that follow examine the causes and consequences of bias in medical AI and outline strategies, spanning engineering solutions and ethical oversight, for making these technologies more equitable for diverse populations.
Causes of Bias in Medical AI
Bias in healthcare AI stems mainly from biased training data. AI systems learn from historical patient data, and if this data is unrepresentative or imbalanced, the resulting models can produce skewed predictions (Norori et al., 2021). For example, if an AI system has been trained mostly on data from a single demographic group, it may underperform on others, leading to disparities in treatment and diagnosis. Historical healthcare data can also reflect present-day clinical decision-making biases, in which populations are treated differently because of socioeconomic status and systemic disparities (Gopal et al., 2021). AI can then compound these patterns, reinforcing disparities rather than mitigating them.
Another significant source of bias in medical AI is algorithmic and model design error (Cross et al., 2024). Developers can unintentionally introduce bias during feature selection, data preprocessing, and model testing and validation. If an AI system is validated on data that does not reflect real-world populations, its performance can be inconsistent when deployed on other patient populations. Labeling bias, in which human annotators' subjective judgments skew the ground-truth labels, can likewise distort AI predictions. The lack of explainability and transparency in many AI systems makes it difficult to uncover and correct such biases. Overcoming these issues requires diverse, high-quality training data, rigorous testing across populations and the use of fairness-aware AI methods to achieve equitable healthcare outcomes.
Consequences of Bias in AI-Driven Healthcare
Bias in AI-driven healthcare can lead to misdiagnosis and inappropriate treatment, disproportionately affecting marginalized populations (Hanna et al., 2025). If an AI model underperforms for certain demographics, patients may receive delayed or incorrect diagnoses, worsening health outcomes (Chen et al., 2023). For example, AI-driven dermatology tools that are primarily trained on lighter skin tones might struggle to detect skin cancer in individuals with darker skin, potentially resulting in missed diagnoses or late-stage detection (Rezk et al., 2022). Inaccurate risk assessments can also cause over- or under-prescription of medications, increasing the likelihood of adverse effects or ineffective treatment (Ferrara et al., 2024).
Beyond clinical error, biased AI can also reinforce healthcare disparities and erode trust in healthcare technology. Underrepresented patient populations may have limited access to quality treatment if they are not prioritized by AI-assisted triage systems (Celi et al., 2022). This can entrench preexisting systemic disparities, discouraging underrepresented patients from seeking treatment. Ethical questions of fairness and liability can also create regulatory challenges, slowing the adoption of AI in healthcare settings.
Proposed Solutions for AI Bias
Addressing bias in medical AI necessitates a multi-level strategy that combines data-centric measures, algorithmic advances, and a commitment to transparency and accountability.
Data-Centric Strategies
Ensuring dataset diversity is fundamental to developing balanced and fair AI models for medicine. AI models inherit biases from historically skewed medical datasets, producing inaccurate predictions for underserved groups. To address this, datasets must be carefully assembled to reflect representative demographics, socioeconomic levels and disease presentations. This includes deliberate data collection processes in which underserved groups are targeted and accounted for. With such data, AI models can generalize better, narrowing disparities in medical decision-making.
Beyond diverse data collection, algorithmic preprocessing techniques such as data augmentation and reweighing can also balance datasets. Reweighing increases the weight assigned to underrepresented patients so that each group contributes comparably during model training, as illustrated in the sketch below. Synthetic data generation and transfer learning can also make datasets more inclusive when data are limited.
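As a concrete illustration, below is a minimal sketch of the reweighing idea in Python, assuming binary outcomes and a single categorical group attribute; the function name and toy data are illustrative rather than drawn from any particular library.

```python
import numpy as np

def reweighing_weights(groups, labels):
    """Compute per-sample weights so that each (group, label) pair
    contributes as if group membership and outcome were statistically
    independent (Kamiran & Calders-style reweighing)."""
    groups = np.asarray(groups)
    labels = np.asarray(labels)
    n = len(labels)
    weights = np.empty(n, dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            observed = mask.sum() / n                               # P(g, y)
            expected = (groups == g).mean() * (labels == y).mean()  # P(g) * P(y)
            if observed > 0:
                weights[mask] = expected / observed
    return weights

# Toy example: group B is underrepresented among positive cases,
# so its positive samples receive weights above 1.
groups = np.array(["A"] * 80 + ["B"] * 20)
labels = np.array([1] * 40 + [0] * 40 + [1] * 5 + [0] * 15)
w = reweighing_weights(groups, labels)
```

The resulting weights can then be supplied to most training APIs that accept per-sample weights, for example the sample_weight argument of many scikit-learn estimators' fit methods.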
Inclusive, well-curated datasets ultimately yield AI systems that are more robust and fairer, leading to greater health benefits for all populations.
Algorithmic Innovations
Beyond improving dataset diversity, algorithmic approaches are important for bias reduction. Bias-aware training approaches, such as adversarial debiasing, adapt the model training process to lessen discrepancies in predictions between groups (Yang et al., 2024); a sketch of this technique follows below. Federated learning, a method that trains models across decentralized devices or servers holding local data samples without exchanging them, ensures that models incorporate insights from diverse sources while preserving patient privacy. These approaches complement preprocessing techniques, such as reweighing and resampling, that address imbalances in the training data.
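To make adversarial debiasing concrete, below is a minimal PyTorch sketch in which an adversary tries to recover a protected attribute from the predictor's output, and a gradient-reversal layer penalizes the predictor for leaking it; the network sizes, hyperparameters and synthetic batch are illustrative assumptions, not a validated clinical configuration.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses gradients on the backward
    pass, so the predictor is pushed AWAY from encoding the protected
    attribute while the adversary still learns to detect it."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

n_features = 16  # illustrative feature count
predictor = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(list(predictor.parameters()) + list(adversary.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x, y, a, lam=1.0):
    """x: features, y: clinical label, a: protected attribute (both 0/1)."""
    logits = predictor(x)
    task_loss = bce(logits, y)
    # The adversary predicts the protected attribute from the model's
    # output (a demographic-parity-style setup); reversed gradients
    # penalize the predictor for leaking that attribute.
    adv_logits = adversary(GradReverse.apply(logits, lam))
    adv_loss = bce(adv_logits, a)
    opt.zero_grad()
    (task_loss + adv_loss).backward()
    opt.step()
    return task_loss.item(), adv_loss.item()

# Synthetic toy batch, for illustration only.
x = torch.randn(64, n_features)
y = torch.randint(0, 2, (64, 1)).float()
a = torch.randint(0, 2, (64, 1)).float()
train_step(x, y, a)
```

In practice, the reversal strength lam trades predictive accuracy against attribute leakage and would be tuned alongside fairness audits on held-out clinical data.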
Transparency and Accountability
Implementing explainable AI algorithms strengthens decision-making by providing insight into how conclusions are reached, which in turn helps build trust between providers and patients. Continuous fairness assessment using standardized metrics is necessary to monitor and ensure equitable performance across all patient populations; a sketch of such an assessment follows below. The STANdards for data Diversity, INclusivity and Generalisability (STANDING Together) recommendations aim to enable transparency regarding the limitations of health datasets and proactive evaluation of their effect across population groups (Alderman et al., 2025).
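As one example of what such a fairness assessment might look like, the sketch below computes two commonly used group-fairness metrics: the gap in selection rates (demographic parity) and the gap in true-positive rates (equal opportunity, one component of equalized odds). The function name and toy data are illustrative, not from any specific auditing library.

```python
import numpy as np

def fairness_report(y_true, y_pred, groups):
    """Compare group-fairness metrics across subgroups: selection rate
    (demographic parity) and true-positive rate (equal opportunity)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    per_group = {}
    for g in np.unique(groups):
        m = groups == g
        positives = y_true[m] == 1
        per_group[g] = {
            "selection_rate": y_pred[m].mean(),
            "tpr": y_pred[m][positives].mean() if positives.any() else float("nan"),
        }
    rates = [v["selection_rate"] for v in per_group.values()]
    tprs = [v["tpr"] for v in per_group.values()]
    gaps = {
        "demographic_parity_gap": max(rates) - min(rates),
        "equal_opportunity_gap": max(tprs) - min(tprs),
    }
    return per_group, gaps

# Toy audit over two illustrative subgroups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(fairness_report(y_true, y_pred, groups))
```

Audits like this would typically be repeated at every model update and across every subgroup with sufficient data, with the gaps tracked over time.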
By integrating these strategies, the medical community can work towards developing AI systems that are both effective and equitable, ultimately improving patient care across diverse populations.
Ethical and Regulatory Considerations
Beyond technological solutions, ethical and regulatory considerations play a foundational role in reducing bias in medical AI (Hanna et al., 2025). Protections for patient privacy and informed consent in data collection underpin trust and ethical adherence in medical AI. Regulatory oversight by governments and medical institutions is necessary to enforce fairness standards and ensure that AI systems adhere to ethical guidelines (Alderman et al., 2025). Cross-professional collaboration among engineers, ethicists and healthcare professionals is crucial for developing AI-driven medical care solutions that prioritize inclusiveness and equity (Hanna et al., 2025).
Conclusion
Though AI has the potential to transform healthcare, it is simultaneously capable of reinforcing existing disparities if biases are not corrected. It is essential to incorporate ethics and engineering approaches in developing equitable AI systems that ensure fair treatment of all patients. Regular assessment and refinement of AI algorithms, grounded in ethics and fairness metrics, is necessary to ensure AI becomes a means for reducing disparities and not increasing them.
Declaration of Author’s Competing Interests
The author declares no conflicts of interest.
Acknowledgements
The author would like to thank the editorial board and reviewers of JYI for their time and guidance throughout the publishing process.
References
Alderman, J. E., Palmer, J., Laws, E., McCradden, M. D., Ordish, J., Ghassemi, M., Pfohl, S. R., Rostamzadeh, N., Cole-Lewis, H., Glocker, B., Calvert, M., Pollard, T. J., Gill, J., Gath, J., Adebajo, A., Beng, J., Leung, C. H., Kuku, S., Farmer, L.-A., … Liu, X. (2025) ‘Tackling algorithmic bias and promoting transparency in health datasets: The STANDING Together consensus recommendations’, The Lancet Digital Health, 7(1), e64–e88, available: https://doi.org/10.1016/S2589-7500(24)00224-3
Bajwa, J., Munir, U., Nori, A., & Williams, B. (2021) ‘Artificial intelligence in healthcare: Transforming the practice of medicine’, Future Healthcare Journal, 8(2), e188–e194, available: https://doi.org/10.7861/fhj.2021-0095
Celi, L. A., Cellini, J., Charpignon, M.-L., Dee, E. C., Dernoncourt, F., Eber, R., Mitchell, W. G., Moukheiber, L., Schirmer, J., Situ, J., Paguio, J., Park, J., Wawira, J. G., & Yao, S. (2022) ‘Sources of bias in artificial intelligence that perpetuate healthcare disparities—A global review’, PLOS Digital Health, 1(3), e0000022, available: https://doi.org/10.1371/journal.pdig.0000022
Chen, R. J., Wang, J. J., Williamson, D. F. K., Chen, T. Y., Lipkova, J., Lu, M. Y., Sahai, S., & Mahmood, F. (2023) ‘Algorithm fairness in artificial intelligence for medicine and healthcare’, Nature Biomedical Engineering, 7(6), 719–742, available: https://doi.org/10.1038/s41551-023-01056-8
Cross, J. L., Choma, M. A., & Onofrey, J. A. (2024) ‘Bias in medical AI: Implications for clinical decision-making’, PLOS Digital Health, 3(11), e0000651, available: https://doi.org/10.1371/journal.pdig.0000651
Ferrara, M., Bertozzi, G., Di Fazio, N., Aquila, I., Di Fazio, A., Maiese, A., Volonnino, G., Frati, P., & La Russa, R. (2024) ‘Risk Management and Patient Safety in the Artificial Intelligence Era: A Systematic Review’, Healthcare, 12(5), 549, available: https://doi.org/10.3390/healthcare12050549
Gopal, D. P., Chetty, U., O’Donnell, P., Gajria, C., & Blackadder-Weinstein, J. (2021) ‘Implicit bias in healthcare: Clinical practice, research and decision making’, Future Healthcare Journal, 8(1), 40–48, available: https://doi.org/10.7861/fhj.2020-0233
Grzybowski, A., Jin, K., & Wu, H. (2024) ‘Challenges of artificial intelligence in medicine and dermatology’, Clinics in Dermatology, 42(3), 210–215, available: https://doi.org/10.1016/j.clindermatol.2023.12.013
Hanna, M. G., Pantanowitz, L., Jackson, B., Palmer, O., Visweswaran, S., Pantanowitz, J., Deebajah, M., & Rashidi, H. H. (2025) ‘Ethical and Bias Considerations in Artificial Intelligence/Machine Learning’, Modern Pathology, 38(3), 100686, available: https://doi.org/10.1016/j.modpat.2024.100686
Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021) ‘Addressing bias in big data and AI for healthcare: A call for open science’, Patterns, 2(10), 100347, available: https://doi.org/10.1016/j.patter.2021.100347
Rezk, E., Eltorki, M., & El-Dakhakhni, W. (2022) ‘Improving Skin Color Diversity in Cancer Detection: Deep Learning Approach’, JMIR Dermatology, 5(3), e39143, available: https://doi.org/10.2196/39143
Yang, Y., Lin, M., Zhao, H., Peng, Y., Huang, F., & Lu, Z. (2024) ‘A survey of recent methods for addressing AI fairness and bias in biomedicine’, Journal of Biomedical Informatics, 154, 104646, available: https://doi.org/10.1016/j.jbi.2024.104646

