How to Achieve Trustworthy Artificial Intelligence for Health
Journal article, peer reviewed (published version)
Original version
Bulletin of the World Health Organization. 2020;98(4):257–262. doi:10.2471/blt.19.237289

Abstract
Artificial intelligence holds great promise for beneficial, accurate and effective preventive and curative interventions. At the same time, there is growing awareness of the potential risks and harms that unregulated development of artificial intelligence may cause. Guiding principles are being developed around the world to foster the trustworthy development and application of artificial intelligence systems. These guidelines can support developers and governing authorities in making decisions about the use of artificial intelligence. The High-Level Expert Group on Artificial Intelligence, set up by the European Commission, launched the report Ethical guidelines for trustworthy artificial intelligence in 2019. The report aims to inform reflection and debate on the ethics of artificial intelligence technologies beyond the countries of the European Union (EU). In this paper, we use the global health sector as a case study and argue that the EU's guidance leaves too much room for local, contextualized discretion for it to foster trustworthy artificial intelligence globally. We point to the urgency of shared, global efforts to safeguard against the potential harms of artificial intelligence technologies in health care.