
WHO Offers Guidance on Use of Artificial Intelligence in Medicine, from JAMA Health Forum

Published on: Aug 11, 2021


In a July 13, 2021, JAMA Health Forum article, author Joan Stephenson, PhD, writes that the use of artificial intelligence (AI) in a variety of applications “holds great promise for the practice of public health and medicine” but also poses ethical challenges that must be addressed, according to a new report released by the World Health Organization (WHO). 

The report, Ethics and Governance of Artificial Intelligence for Health, urges the adoption of a half-dozen principles aimed at ensuring that unethical practices related to the use of AI are avoided and that equitable use and access are encouraged. 

“Companies and governments should introduce AI technologies only to improve the human condition and not for objectives such as unwarranted surveillance or to increase the sale of unrelated commercial goods and services,” the report says. “Providers should demand appropriate technologies and use them to maximize both the promise of AI and clinicians’ expertise.” 

The report is the product of 2 years of consultations held by a panel of international experts appointed by the WHO. It is intended to provide a guide for countries on how to maximize AI’s benefits while minimizing its risks, said Dr Tedros Adhanom Ghebreyesus, WHO Director-General, in a statement announcing the report’s release. 

Various applications of AI include improving the accuracy and speed of diagnosis and screening for disease; assisting with clinical care; supporting a range of public health interventions, such as disease surveillance, response to disease outbreaks, and health systems management; and bolstering health research and drug development, the report says. 

The technology also could empower patients to help manage their own medical conditions, particularly chronic diseases such as diabetes and cardiovascular diseases, the authors note. 

“AI could assist in self-care, including through conversation agents (eg, ‘chat bots’), health monitoring and risk prediction tools and technologies designed specifically for individuals with disabilities,” they write. They caution, however, that although some patients might consider a shift to patient-based care empowering and beneficial, for other individuals, the additional responsibility might be stressful and limit their access to formal health care services. 

Use of AI also has the potential to bridge gaps in resource-poor countries and rural communities, making health care services more accessible. “In settings with limited resources, AI could be used to conduct screening and evaluation if insufficient medical expertise is available, a common challenge in many resource-poor settings,” the authors note. 

However, the report cautions that whether AI can be used effectively to help patients and communities is “inextricably linked” to such challenges and risks as biases encoded in AI algorithms; unethical collection and use of health data; and risks of AI to patient security, cybersecurity, and the environment. 

For example, AI systems trained primarily using data from individuals in high-income countries may not function well for people in low- and middle-income settings. “To reduce bias, people with diverse ethnic and social backgrounds should be included, and a diverse team is necessary to recognize flaws in the design or functionality of the AI in validating algorithms to ensure lack of bias,” the report says. 

Patients and communities also need assurance that their rights and interests will not be outranked by the commercial interests of technology companies or the interests of governments in surveillance and social control. For example, the authors noted that uses of AI in the context of COVID-19, such as applications for proximity tracking for COVID-19 contact tracing, sparked concerns about surveillance, privacy and autonomy, and other issues. 

Comprehensive international guidance on use of AI for health in accordance with ethical norms and human rights standards has been lacking. To ensure that AI serves the public interest in all countries, the report proposes 6 ethical principles to guide AI regulation and governance. These include: 

  • protecting human autonomy by ensuring that people remain in control of health care systems and medical decisions (rather than transferring decision-making power to AI systems), as well as by protecting patient privacy and confidentiality and obtaining valid informed consent; 

  • promoting human well-being and safety and the public interest through AI technologies that are designed to meet regulatory requirements for safety, accuracy, and efficacy for “well-defined use cases or indications”; 

  • ensuring that information about the AI technology is transparent and easily understood to facilitate public debate on its design and use; 

  • fostering responsibility and accountability by stakeholders to ensure that AI technologies are used under appropriate conditions and by people who have received appropriate training; 

  • ensuring inclusiveness and equity by requiring that AI in health-related applications is designed to encourage “the widest possible appropriate, equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes”; and 

  • promoting AI that is responsive and sustainable, meaning that an AI application responds adequately to expectations during actual use and is consistent with efforts to reduce effects of human activity on the environment. 

The authors note that because AI for health is an evolving and rapidly moving field in which many new applications may emerge, the WHO may issue specific guidance for additional tools and applications and may periodically update the current guidance. 

Read the full article in JAMA Health Forum here.  