Computers learning to find Australian cancers and broken bones that people miss

October 28, 2015, Science in Public

A deal signed today means a 'deep learning system' will soon help Australian radiologists to find cancers and breaks that are often missed, and to ignore lumps that don't matter. Then it will bring modern medical diagnostics to developing countries where radiologists are in short supply.

In a global first, Melbourne-headquartered radiology business Capitol Health announced this morning that it will implement the machine-learning system developed by Silicon Valley start-up Enlitic.

Founded by Melbourne serial entrepreneur Jeremy Howard, Enlitic has created computer learning systems that can take millions of scans, tests and medical records and learn from them to help doctors rapidly diagnose problems.

"This is the beginning of a transformation of global health services," says Jeremy.

Radiologists view hundreds of X-rays and other scans every week, looking for the unusual. Sometimes they're looking for something they've never actually seen before. Sometimes they're looking at something that's just four pixels in a two-million-pixel image.

"The new system will learn from a million scans held by Capitol.

"And it will keep learning from every ultrasound, CT, MRI, PET, and X-ray we perform," says Capitol Managing Director John Conidi.

"Within a year this system will be implemented across our clinics. Our will be able to work faster, provide more accurate results and save more lives. Many unnecessary, expensive and dangerous procedures will be avoided," he says.

"This system will transform Western healthcare," says Jeremy Howard. "The more data and computing time it gets, the more it learns and the more accurate it becomes. Eventually it will handle lab tests, patient histories, and genomic information. It will take much of the guess work out of medicine.

"In developing countries our impact will be even more profound. Most medical images are never seen by a doctor. Our system will enable a remote health worker to do an ultrascan and get a result in minutes."

How Enlitic works

You need a chest X-ray; it's doctor's orders. Is it pneumonia? Or something more serious?

What if your radiologist could draw on the collective wisdom of hundreds of other health professionals, thousands of patient case studies, and millions of medical images? And do so in a matter of minutes?

Data scientist, entrepreneur and Melbourne-boy-made-good Jeremy Howard developed Enlitic's 'deep learning' algorithm, connecting complex layers of medical and anatomical information, inspired by the function and interconnection of the human brain.

Enlitic is an example of machine learning, which brings together huge amounts of data and the ability of modern computing to crunch the numbers and make the connections. If you give the algorithm a stack of images—such as X-rays, CTs, MRIs and ultrasound scans—and the accompanying diagnoses, it learns the patterns. With enough base data, it can rapidly process and recognise a new medical image and predict the diagnosis. The more data you give it, the better it becomes: it literally learns.
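The idea described above can be sketched in a few lines of code. The toy "scans", the two classes, and the nearest-class-mean learner below are all invented for illustration; Enlitic's actual system is a far more sophisticated deep-learning model, but the workflow is the same: pair images with known diagnoses, learn from them, then predict the diagnosis for a new image.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(has_lesion):
    """A synthetic 8x8 'scan': background noise, plus a bright spot if abnormal."""
    img = rng.normal(0.0, 1.0, (8, 8))
    if has_lesion:
        img[3:5, 3:5] += 5.0  # a small bright region stands in for a lesion
    return img.ravel()

# Training set: images with their accompanying diagnoses (labels).
X = np.array([make_image(i % 2 == 0) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)])  # True = abnormal

# A minimal 'learner': remember the average image of each class.
mean_abnormal = X[y].mean(axis=0)
mean_normal = X[~y].mean(axis=0)

def predict(img):
    """Classify a new image by which class average it more closely resembles."""
    d_ab = np.linalg.norm(img - mean_abnormal)
    d_no = np.linalg.norm(img - mean_normal)
    return d_ab < d_no  # True = predicted abnormal

new_scan = make_image(has_lesion=True)
print(predict(new_scan))  # → True
```

The "more data, the better it becomes" point holds here too: with more labelled examples, the class averages become more reliable and misclassifications rarer.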

Enlitic's technology is already used to help radiologists detect and diagnose the early signs of lung cancer and detect bone fractures, including in complex joints with multiple bones, such as the wrist. Enlitic's algorithm currently draws on an archive of about one million patients.

The evidence that Enlitic works

Lung cancer kills 80-90 per cent of patients diagnosed at a late stage, and it is one of the hardest cancers to detect in medical images. If caught early, survival is nearly 10 times more likely.

Enlitic adapted deep learning to automatically detect lung cancer nodules in chest CT images 50 per cent more accurately than an expert panel of thoracic radiologists, as found in a US trial looking at 1,000 people with cancer and 5,000 people without. The reduction of false negatives and the ability to detect early-stage nodules saves lives. The simultaneous reduction of false positives leads to fewer unnecessary and often costly biopsies, and less patient anxiety.

Enlitic benchmarked its performance against the publicly available, NIH-funded Lung Image Database Consortium data set, demonstrating its commitment to transparency.

Bone fractures – Enlitic has shown it is three times better at detecting extremity fractures (e.g. of the wrist), which are very common yet extremely difficult for radiologists to reliably detect. Errors can lead to improper bone healing, resulting in a lifetime of alignment issues.

These fractures are often represented only by 4×4 pixels in a 4,000×4,000-pixel X-ray image, pushing the limits of computer vision technology.

In detection of fractures, Enlitic achieved 0.97 AUC (the most common measure of predictive modelling accuracy), more than three times better than the 0.85 AUC achieved by leading radiologists and many times better than the 0.71 AUC achieved by traditional computer vision approaches.
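For readers unfamiliar with the metric: AUC is the probability that a randomly chosen positive case (a real fracture) is scored higher than a randomly chosen negative case, so 1.0 is perfect and 0.5 is random guessing. A minimal sketch, using invented scores (not Enlitic's or the radiologists' actual outputs):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Mann-Whitney form of AUC: the fraction of (positive, negative) pairs
    ranked correctly, counting ties as half-correct."""
    scores_pos = np.asarray(scores_pos, dtype=float)
    scores_neg = np.asarray(scores_neg, dtype=float)
    wins = (scores_pos[:, None] > scores_neg[None, :]).sum()
    ties = (scores_pos[:, None] == scores_neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# A perfect detector scores every fracture above every healthy image.
print(auc([0.9, 0.8, 0.7], [0.2, 0.1]))  # → 1.0
# A detector whose scores are uninformative sits at chance level.
print(auc([0.6, 0.4], [0.6, 0.4]))       # → 0.5
```

On this scale, moving from 0.85 to 0.97 means a large cut in the pairs a detector ranks wrongly, which is why the gap matters more than the raw numbers suggest.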

Enlitic was able to support analysis of thousands of image studies in a fraction of the time needed for a human to analyse a single study.

Why Enlitic is needed

Interpreting medical images can be incredibly challenging. Radiologists require years of training and there simply aren't enough of them with enough time to view the many medical images ordered by doctors.

Doctors can also make mistakes, more so later in the day due to tiredness. And less-experienced radiologists tend to make more mistakes. This can lead to unnecessary and invasive medical interventions that are expensive and distressing for patients.

Enlitic's capability won't replace radiologists; it will make their work much faster.

Another benefit of the machine learning approach is that Enlitic is learning about what is normal and healthy alongside what is pathological. This information is transferable, providing the foundation for the application of the technology to other afflictions.

Medical imaging technologies are getting cheaper and more portable. But having a medical imaging machine is not all that is needed for an accurate diagnosis. Places like India, for example, have relatively few trained radiologists, so X-rays may never be looked at by a radiologist, but just reviewed by technicians or nurses. Even in developed countries, it may be nursing or other health staff who first view a diagnostic image, with hours or even days before the expert eye of a radiologist gets to view it. In time-critical health conditions, this can cost lives.

This is particularly crucial for the developing world. The World Economic Forum has estimated it will take hundreds of years to train enough experts to meet the professional healthcare needs of the developing world, including radiology.

All the data is there, it just isn't connected. This technology will change this, so that a scan ordered to detect one condition may in practice find something else. For example, a lung X-ray may be looking for pneumonia, but analysis through Enlitic may find a rib fracture.

What does the partnership mean?

This partnership will bring together Enlitic's capability and Capitol's financial investment, its network of nearly 100 radiology centres (collectively conducting ~150,000 X-rays, CTs and MRIs per year) and its experience in the form of its archive of about one million patients' diagnosis and imaging data.

"Every image improves the system," says Jeremy.

Radiology is the first step in Jeremy's 25-year plan. Ultimately, he wants to bring further data into Enlitic to work with other medical images, such as slides from pathology, images of the eye to support ophthalmology and optometry, patient notes and genomic information.

Consequently, this may become a tool that gives both a diagnosis and a prognosis, due to its ability to simultaneously incorporate and cross-reference a host of medical indicators.

Provided by Science in Public