Generative AI: Benefits, Limitations, Potential Risks, and Challenges in the Healthcare Industry

Authors

  • SARIKA MULUKUNTLA

DOI:

https://doi.org/10.53555/eijmhs.v8i4.211

Keywords:

Generative AI, healthcare, predictive models, skill gap, training needs

Abstract

Generative AI, a frontier of innovation within the healthcare industry, stands as both a beacon of hope and a subject of cautious scrutiny. At its core, generative AI offers remarkable benefits by revolutionizing drug discovery, personalizing patient care, and advancing predictive models for disease prevention. Its ability to generate new data and simulations based on complex patterns in existing datasets can drastically reduce the time and cost associated with developing new treatments and understanding complex health conditions. However, alongside these promising benefits, generative AI brings forth limitations and potential risks that warrant attention. One significant challenge lies in ensuring the accuracy and reliability of the generated data, as inaccuracies can lead to misdiagnosis or ineffective treatments. Furthermore, ethical concerns emerge regarding patient privacy and the potential for generating biased or discriminatory medical insights. The healthcare industry also faces hurdles in integrating these advanced AI systems into existing infrastructure, requiring substantial investment in technology and training. As we navigate this exciting yet uncertain terrain, the balance between harnessing generative AI's transformative potential and mitigating its risks becomes crucial. Ensuring rigorous validation, ethical oversight, and equitable access will be paramount in leveraging generative AI to its fullest, promising a future where healthcare is more efficient, effective, and personalized.

Author Biography

SARIKA MULUKUNTLA

Health IT Specialist

References

Shaw, J., Rudzicz, F., Jamieson, T., & Goldfarb, A. (2019). Artificial intelligence and the implementation challenge. Journal of Medical Internet Research, 21(7), e13659.

Aggarwal, R., Farag, S., Martin, G., Ashrafian, H., & Darzi, A. (2021). Patient perceptions on data sharing and applying artificial intelligence to health care data: Cross-sectional survey. Journal of Medical Internet Research, 23(8), e26162.

Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.

Murdoch, B. (2021). Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Medical Ethics, 22, 1-5.

Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., ... & Williams, M. D. (2021). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994.

Zhavoronkov, A., Mamoshina, P., Vanhaelen, Q., Scheibye-Knudsen, M., Moskalev, A., & Aliper, A. (2019). Artificial intelligence for aging and longevity research: Recent advances and perspectives. Ageing Research Reviews, 49, 49-66.

Dilibal, C., Davis, B. L., & Chakraborty, C. (2021, June). Generative design methodology for internet of medical things (IoMT)-based wearable biomedical devices. In 2021 3rd International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA) (pp. 1-4). IEEE.

Chang, C. H., Caruana, R., & Goldenberg, A. (2021). NODE-GAM: Neural generalized additive model for interpretable deep learning. arXiv preprint arXiv:2106.01613.

Gontijo-Lopes, R., Dauphin, Y., & Cubuk, E. D. (2021). No one representation to rule them all: Overlapping features of training methods. arXiv preprint arXiv:2110.12899.

Weihs, L., Salvador, J., Kotar, K., Jain, U., Zeng, K. H., Mottaghi, R., & Kembhavi, A. (2020). Allenact: A framework for embodied AI research. arXiv preprint arXiv:2008.12760.

Marcus, G. (2020). The next decade in AI: four steps towards robust artificial intelligence. arXiv preprint arXiv:2002.06177.

Nguyen, T., Raghu, M., & Kornblith, S. (2020). Do wide and deep networks learn the same things? Uncovering how neural network representations vary with width and depth. arXiv preprint arXiv:2010.15327.

Battaglia, P. W., Hamrick, J. B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., ... & Pascanu, R. (2018). Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261.

Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2021). Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3), 107-115.

Abnar, S., Dehghani, M., & Zuidema, W. (2020). Transferring inductive biases through knowledge distillation. arXiv preprint arXiv:2006.00555.

Published

2022-10-09