A global guide posing questions about the use of artificial intelligence in healthcare has been launched in South Korea.
Three organisations collaborated in developing the guide titled “Using AI to Support Healthcare Decisions: A Guide for Society”. They are the Korea Policy Center for the Fourth Industrial Revolution at the Korea Advanced Institute of Science and Technology; the Lloyd’s Register Foundation Institute for the Public Understanding of Risk at the National University of Singapore; and the UK-based science communication non-profit Sense about Science.
It was presented during the 2021 Special Interest Group on Knowledge Discovery and Data Mining Conference on 15 August.
WHAT IT’S ABOUT
Intended for patients, policymakers, journalists, clinicians and decision-makers, the guide offers three guiding principles and questions for assessing a technology's quality. These cover understanding the data underlying the AI; the AI's assumptions about patients and diseases; and how much weight a decision can place on an AI recommendation.
The guide was designed to serve as a “benchmark” for the responsible use of AI technologies and to promote clarity and high standards for technological applications in the healthcare sector.
In crafting the guide, the collaborators consulted with various experts from the KAIST Graduate School of AI, the Science and Technology Policy Institute, Asan Medical Centre, Seoul National University Bundang Hospital and various AI companies.
WHY IT MATTERS
While AI has shown promising value over the years, confusion and fear about its application, including concerns over data privacy and misdiagnoses, have spread. Instead of shunning such innovative tools, the guide’s authors said, “We’ll be better off if we discuss the right questions now about the standards AIs should meet.”
The guide, the authors said, aims “to transform the conversation” around AI, so that people have confidence in technologies that do improve medical treatment and avoid those that can cause harm.
The guide emphasises the importance of asking questions about AI applications in healthcare, given the technology's rapid development and its rising use among providers. Asking the questions it sets out helps ensure the responsible use of AI.
Moreover, applying these questions helps society ensure that AI solutions in healthcare are “making good use of data and knowledge available with minimal error, across different countries and populations, without deepening inequalities that are already high”.
THE LARGER TREND
During the recent HIMSS21 Digital, Dr John Halamka, president of Mayo Clinic Platform, noted the benefits of AI in supporting human decision-making. But there are significant challenges to its adoption in healthcare, especially around equity and bias. One solution he offered was “greater transparency”: freely sharing the data that go into an algorithm, such as ethnicity, race, gender, education and income.
ON THE RECORD
“We focused on the ‘reliability’ of AI applications in the healthcare sector to make all data well represented, in good quality. The technology will eventually innovate to better serve the people’s new demand[s], especially critical demands for safety and precision in healthcare services. This global guide will help both developers and people’s understanding of the appropriate technology applications,” KPC4IR Director So Young Kim said in a press statement.
“As more people ask the questions in this guide, more people in authority will expect to be asked. In this way, we create a virtuous circle of responsible discussion, and ultimately, higher standards in using AI to guide healthcare decisions,” the guide concluded.