Google AI's medical predictions prove more accurate than doctors'
19 June 2018
Google’s Medical Brain team is now training its artificial intelligence capabilities to predict the death risk among hospital patients — and its early results show it has slightly higher accuracy than a hospital’s own warning system.
Bloomberg describes the healthcare potential of the Medical Brain’s findings, including its ability to draw on previously unusable information to reach its predictions. The AI, once fed this data, made predictions about the likelihood of death, discharge and readmission.
A harrowing account of an unidentified woman’s death was published by Google in May in research highlighting the healthcare potential of neural networks, a form of artificial intelligence software that is particularly good at using data to automatically learn and improve.
Google had created a tool that could forecast a host of patient outcomes, including how long people may stay in hospitals, their odds of re-admission and chances of early death.
What impressed medical experts most was Google’s ability to sift through data previously out of reach — notes buried in PDFs or scribbled on old charts. The neural net gobbled up all this haphazard information then churned out predictions, doing it far faster and more accurately than existing techniques. Google’s system even showed which records led it to conclusions.
In a paper published in Nature in May, Google’s team said of its predictive algorithm: “These models outperformed traditional, clinically-used predictive models in all cases. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios.”
In one major case study in the findings, Google applied its algorithm to a patient with metastatic breast cancer. Twenty-four hours after she was admitted, Google gave her a 19.9 per cent chance of dying in the hospital, in contrast with the 9.3 per cent estimate from the hospital’s augmented Early Warning Score. Less than two weeks later, the patient died from her condition.
In that case, the AI tallied 175,639 data points from the patient’s electronic medical records, including handwritten notes. According to the paper, this is the difference between Google’s work and previous deep learning approaches: “In general, prior work has focused on a subset of features available in the EHR, rather than on all data available in an EHR, which includes clinical free-text notes, as well as large amounts of structured and semi-structured data.”
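To make the approach concrete, the sketch below is a deliberately toy illustration of the idea the paper describes: combining structured EHR fields with features mined from free-text notes in a single risk model. Every feature name and weight here is hypothetical, and a hand-written logistic score stands in for Google’s deep neural network trained on hundreds of thousands of data points per patient.

```python
import math
import re

# Hypothetical weights such a model might learn; the real system
# learns its parameters from large volumes of de-identified records.
WEIGHTS = {
    "age": 2.0,
    "prior_admissions": 0.4,
    "note:metastatic": 1.5,
    "note:stable": -1.0,
}
BIAS = -3.0

def extract_features(record):
    """Flatten one toy EHR record into a feature dict.

    Structured fields are used directly; the free-text note is reduced
    to binary "note contains word" features, a crude stand-in for the
    far richer text handling the paper describes.
    """
    feats = {
        "age": record["age"] / 100.0,  # crude normalisation
        "prior_admissions": float(record["prior_admissions"]),
    }
    for token in re.findall(r"[a-z]+", record["note"].lower()):
        feats[f"note:{token}"] = 1.0
    return feats

def mortality_risk(record):
    """Logistic model: sigmoid of the weighted feature sum."""
    feats = extract_features(record)
    z = BIAS + sum(WEIGHTS.get(name, 0.0) * value
                   for name, value in feats.items())
    return 1.0 / (1.0 + math.exp(-z))

# Toy patient record mixing structured data with a free-text note.
patient = {
    "age": 74,
    "prior_admissions": 2,
    "note": "Metastatic disease noted; fluid in lungs.",
}
risk = mortality_risk(patient)
```

The point of the sketch is the feature step: earlier approaches discarded the note entirely, whereas folding its tokens into the same feature space lets otherwise-ignored information move the prediction.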
This isn’t the first time Google’s AI has been applied to predictive healthcare. Earlier this year, DeepMind partnered with the Department of Veterans Affairs to feed its AI 700,000 medical records from veterans in order to predict deadly changes in patient condition.
The company is also working to develop a voice recognition system for clinical notes that would eliminate the need for doctors to type them in. In that particular case, the challenge is accuracy: even the smallest mistake in a patient’s record can result in them receiving the wrong care.
Dr Steven Lin, who spearheaded the research with Google, told CNBC, “This is even more of a complicated, hard problem than we originally thought. But if solved, it can potentially unshackle physicians from EHRs and bring providers back to the joys of medicine: actually interacting with patients.”
If Google can both smooth the process of entering data and improve the means by which that data is used, it could cut down on human error in medical care.
But there is a flip side. In 2016, DeepMind faced backlash from patients when it was revealed it had gained access to the records of 1.6 million patients at three London hospitals without their consent, in order to develop an app that alerted doctors when a patient was at risk of kidney disease.
It could also stoke fears of an AI having too much say over who receives what care. If one patient is assigned a significantly higher risk score than another, will the hospital allocate more resources to the former on the strength of the AI’s prediction?