Preprint / Version 1

Medical AI Chatbots: Benefits and Drawbacks for Patients and Medical Workers

Authors

  • Lexi Dai, Polygence

DOI:

https://doi.org/10.58445/rars.2312

Keywords:

Medical AI Chatbots, Healthcare, AI

Abstract

As medical Artificial Intelligence chatbots have emerged, consumers, doctors, and policymakers are increasingly questioning their reliability. For patients who cannot readily access hospitals, such as people who live in rural areas or lack health insurance, these online-accessible AI tools can prove invaluable. Because they are so accessible, these chatbots may also help detect health issues earlier, improving the chances of recovery. Medical workers may use them for diagnostic reassurance and as an educational tool. Additionally, many patients could turn to these chatbots in low-risk situations, which may reduce patient congestion and ease the strain on hospital staff. Although these benefits are substantial, the chatbots also carry significant drawbacks, including errors, biases, and risks to patient data security. Errors and biases may lead to misdiagnosis, jeopardizing patients' health, and data breaches could expose private and sensitive medical records. Medical AI chatbots are currently flawed but hold considerable potential. This paper reviews the benefits and drawbacks of medical AI chatbots and covers how these tools can serve both patients and medical workers.

References

Altamimi, I., Altamimi, A., Alhumimidi, A. S., Altamimi, A., & Temsah, M. H. (2023). Artificial intelligence (AI) chatbots in medicine: a supplement, not a substitute. Cureus, 15(6).

Avery, K., Finegold, K., & Xiao, X. (2016). Impact of the Affordable Care Act coverage expansion on rural and urban populations.

Brenan, M. (2023, January 17). Record high in U.S. put off medical care due to cost in 2022. Gallup.com. https://news.gallup.com/poll/468053/record-high-put-off-medical-care-due-cost-2022.aspx

Donahue, K., Chouldechova, A., & Kenthapadi, K. (2022, June). Human-algorithm collaboration: Achieving complementarity and avoiding unfairness. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1639-1656).

Elder, N. C., Jacobson, C. J., Zink, T., & Hasse, L. (2005). How experiencing preventable medical problems changed patients’ interactions with primary health care. The Annals of Family Medicine, 3(6), 537. https://www.annfammed.org/content/3/6/537/tab-e-letters

Gillespie, N., Lockey, S., Curtis, C., & Pool, J. (2023). Trust in artificial intelligence. KPMG. https://kpmg.com/xx/en/home/insights/2023/09/trust-in-artificial-intelligence.html#:~:text=Three%20in%20five%20

Hann, I. H., Hui, K. L., Lee, T., & Png, I. (2002). Online information privacy: Measuring the cost-benefit trade-off. ICIS 2002 proceedings, 1.

Hasal, M., Nowaková, J., Saghair, K. A., Abdulla, H., Snášel, V., & Ogiela, L. (2021). Chatbots: Security, privacy, data protection, and social aspects. Concurrency and Computation: Practice and Experience, 33(19), e6426. https://doi.org/10.1002/cpe.6426

Kazi, H., Chowdhry, B. S., & Memon, Z. (2012). MedChatBot: an UMLS based chatbot for medical students.

Kim, J., Cai, Z. R., Chen, M. L., Simard, J. F., & Linos, E. (2023). Assessing biases in medical decisions via clinician and AI chatbot responses to patient vignettes. JAMA Network Open, 6(10), e2338050-e2338050.

Li, F., Ruijs, N., & Lu, Y. (2023). Ethics & AI: A systematic review on ethical concerns and related strategies for designing with AI in healthcare. AI, 4(1), 28–53. https://doi.org/10.3390/ai4010003

Office of Disease Prevention and Health Promotion. (n.d.). Preventive care. Preventive Care - Healthy People 2030. https://health.gov/healthypeople/objectives-and-data/browse-objectives/preventive-care

Ortega, M. V., Hidrue, M. K., Lehrhoff, S. R., Ellis, D. B., Sisodia, R. C., Curry, W. T., ... & Wasfy, J. H. (2023). Patterns in physician burnout in a stable-linked cohort. JAMA Network Open, 6(10), e2336745-e2336745.

Schmidgall, S., Harris, C., Essien, I., Olshvang, D., Rahman, T., Kim, J. W., ... & Chellappa, R. (2024). Addressing cognitive bias in medical language models. arXiv preprint arXiv:2402.08113.

Shrider, E. (2023, September 12). Poverty rate for the black population fell below pre-pandemic levels. Census.gov. https://www.census.gov/library/stories/2023/09/black-poverty-rate.html

Swick, R. K. (2021). The accuracy of artificial intelligence (AI) chatbots in telemedicine. Journal of the South Carolina Academy of Science, 19(2), 17.

United Nations. (n.d.). SDG indicators. United Nations. https://unstats.un.org/sdgs/report/2019/goal-01/#:~:text=About%2079%20per%20cent%20of,under%2014%20years%20of%20age

Wang, W., & Siau, K. (2018). Trust in health chatbots. Thirty-Ninth International Conference on Information Systems, San Francisco.

World Health Organization. (2023, May 16). WHO calls for safe and ethical AI for health. World Health Organization. https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health

Posted

2025-02-27