Computer That 'Predicts' Patient Deaths, Why Hospitals Can't Use It [Lim Juhyung's Tech Talk]
Multiple AI Systems Developed to Predict Patient Deaths
Accuracy 80-90%... High Potential for Medical Advancement
However, Challenges Remain for On-Site Medical Adoption
Because How the Algorithm Works Cannot Be Explained
Experts Say "AI Is a 'Black Box'... a Device Humans Cannot Comprehend"
"To Use in Practice, Trust from Patients and Doctors Is Needed"
A hospital in France treating COVID-19 confirmed patients. / Photo by Yonhap News
[Asia Economy Reporter Lim Juhyung] With the development of a computer system that uses artificial intelligence (AI) to predict patient mortality, there is growing anticipation that the healthcare system could undergo significant changes. A device that could accurately determine why a patient is at risk of dying would also make death far easier to prevent.
However, many challenges remain before such systems can be deployed in actual medical settings. In particular, even though an AI that predicts death can be built, 'how' it foresees a patient's death remains a mystery.
A research team led by Professor Mads Nielsen of the Department of Computer Science at the University of Copenhagen, Denmark, announced on the 5th of last month (local time) in the American scientific journal Scientific Reports that they have developed an AI capable of assessing whether COVID-19 infected patients will die.
According to the paper published by Professor Nielsen's team, this AI can predict with approximately 90% accuracy whether a confirmed COVID-19 patient will die. It also predicted with 80% accuracy whether patients admitted to the hospital would require ventilator treatment.
The research team built the system based on data from 3,944 Danish and New Zealand patients who tested positive for COVID-19. They obtained critical information from health authorities, including patients' disease details, body mass index, age, nutritional status, and medical records, and trained the AI to predict mortality risk. COVID-19, which damages the nervous and respiratory systems, is known to have fatality rates that are particularly influenced by body mass and age.
A system that predicts the mortality of patients with diseases through artificial intelligence (AI) algorithms is being developed. / Photo by Yonhap News
The research team expects that AI with high predictive accuracy can protect more lives of COVID-19 patients. Professor Nielsen explained in the paper, "During the first wave of COVID-19, I saw medical staff worried about the shortage of ventilators in intensive care units," adding, "The goal is to predict the number of COVID-19 patients five days in advance using AI."
Attempts to develop AI algorithms that can predict patient death in advance have been ongoing for some time. Previously, in 2018, the U.S. Food and Drug Administration (FDA) approved the use of an algorithm capable of predicting sudden cardiovascular and respiratory deaths for the first time in history.
Introducing AI that predicts early mortality into medical settings can protect patients at risk of sudden death from unforeseen problems. By detecting, early on, diseases that doctors might otherwise miss, it also has the potential to significantly improve healthcare systems by enabling safer and more cost-effective treatment.
However, despite these advantages, many obstacles remain before AI can be practically used in frontline medical environments.
The biggest current issue with AI-based mortality prediction systems is that no one can explain 'how' the AI predicts a patient's death.
AI-based mortality prediction systems work by feeding the AI data that affect mortality probability, such as patient age, underlying diseases, physical condition, and nutritional information, and then training it repeatedly until it learns specific 'patterns.' Through machine learning, the AI identifies the patterns in this data that most strongly influence death and uses them to make its predictions.
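The training loop described above can be sketched in miniature. The code below is an illustrative toy, not the Copenhagen team's actual model: the patient records are fabricated, the feature set (age, body mass index, comorbidity count) and the risk formula generating the labels are assumptions, and a plain logistic regression stands in for whatever algorithm the researchers used.

```python
import math
import random

random.seed(0)

# Hypothetical synthetic patient records: (age, BMI, comorbidity count).
# The features mirror those named in the article; the data and the
# "true" risk formula below are fabricated for illustration only.
def make_patient():
    age = random.uniform(20, 90)
    bmi = random.uniform(18, 40)
    comorbidities = random.randint(0, 4)
    # Assumed ground truth: older, heavier patients with more
    # comorbidities face higher mortality risk.
    risk = 0.04 * (age - 55) + 0.08 * (bmi - 27) + 0.6 * comorbidities - 1.0
    died = 1 if risk + random.gauss(0, 1) > 0 else 0
    return [age / 100, bmi / 50, comorbidities / 4], died

data = [make_patient() for _ in range(2000)]

# Plain logistic regression fitted by gradient descent: the simplest
# version of "repeated training to learn patterns".
weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.5

def predict(x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))  # probability of death

for _ in range(300):
    grad_w = [0.0, 0.0, 0.0]
    grad_b = 0.0
    for x, y in data:
        err = predict(x) - y
        for i in range(3):
            grad_w[i] += err * x[i]
        grad_b += err
    for i in range(3):
        weights[i] -= lr * grad_w[i] / len(data)
    bias -= lr * grad_b / len(data)

accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Even in this transparent toy, the learned weights are just numbers; in a deep model with millions of such parameters, reading the "reason" for any single prediction out of them is what becomes intractable.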
However, these patterns are often impossible for humans to interpret. Computer scientists call such a system a 'black box': a term for a device whose internal workings are unknown. Just as the inside of a black box cannot be visually inspected, the AI makes its decisions through processes we cannot comprehend.
The learning and inference results of AI algorithms that humans cannot understand are called a 'black box.' / Photo by Yonhap News
Introducing AI that humans cannot understand into healthcare settings, where safety is paramount, can be very risky.
AI's most important function is to find patterns through repeated training on given data. But if we cannot understand how AI finds these patterns, we cannot determine what data to train the AI with or whether the AI is learning incorrectly.
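One common way researchers probe a black box without opening it is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. A large drop reveals that the model relies on that feature. The sketch below is a generic illustration on fabricated data, not a technique from the article; the `black_box` function and its features are hypothetical stand-ins for an opaque model.

```python
import random

random.seed(1)

# A stand-in "black box": any function mapping patient features to a
# mortality verdict. It secretly leans on feature 0 and ignores
# feature 2 -- the point is that we pretend not to know this.
def black_box(x):
    return 1 if 3.0 * x[0] + 0.1 * x[1] > 1.8 else 0

# Fabricated evaluation set, labeled by the black box itself.
dataset = [[random.random() for _ in range(3)] for _ in range(1000)]
labels = [black_box(x) for x in dataset]

def accuracy(rows):
    return sum(black_box(x) == y for x, y in zip(rows, labels)) / len(rows)

baseline = accuracy(dataset)  # 1.0 by construction

# Permutation importance: shuffle one feature column at a time and
# record how far accuracy falls from the baseline.
importances = []
for col in range(3):
    shuffled = [x[col] for x in dataset]
    random.shuffle(shuffled)
    rows = [x[:col] + [v] + x[col + 1:] for x, v in zip(dataset, shuffled)]
    importances.append(baseline - accuracy(rows))

print(importances)  # feature 0 should dominate; feature 2 should be ~0
```

Probes like this show *which* inputs a model depends on, but not *why* it combines them the way it does, which is exactly the gap the experts quoted below describe.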
If AI learns in the wrong way, it might make harmful decisions for patients.
Experts explain that the key is to enhance AI's reliability to a level that doctors and patients can accept.
Maxine McIntosh, a researcher at the Alan Turing Institute, the UK's national AI research institute, told the British media outlet The Telegraph, "The AI black box is like a sausage-making machine. No one can see how the meat is turned into sausage, but everyone knows the machine produces sausages."
She added, "To use such an unknown device in medical settings, it is important to gain the trust of doctors and patients. This is especially crucial for patients, as AI decisions can involve surgeries causing significant pain or even life-or-death outcomes."
However, McIntosh also noted, "There are still technologies we use today that human science does not fully understand. If we cannot understand AI models, proving their reliability in a way everyone can accept could be an alternative approach to AI adoption."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.