Artificial Intelligence-enabled systems that interpret radiologic studies are advancing to the point where they often match the best radiologists in the world.
And they are getting better every day. From CT scans to MRIs, angiograms to Doppler studies, the data captured is digital—and digital data can be read by machines.
Human doctors traditionally sit in dark rooms and interpret medical diagnostic images. A radiologist’s report is then given to the treating physician, who merges the objective digital image interpretations with the more subjective clinical presentation and exam. Often the radiologist reports more information than is relevant to the physician and patient; at other times, important information is missed. In a knee MRI, for example, degeneration of the meniscus tissue may be reported, but this finding has little importance for a patient with bone-on-bone arthritis who needs a joint replacement.
As with genome interpretations, where there is data overload and the possibility of over-diagnosing problems, patients have generally not had direct access to their radiology images or reports. And coalescing this data can take days, if not weeks.
This may soon change due to several parallel advances. AI bots will no longer be interpreting only isolated radiologic scans. Due to the wide availability of electronic medical records, a patient’s entire medical history will be available to these bots. In combination with data mining programs like LynxCare, all of the diagnostic information in a person’s medical record will be correlated with any symptom, exam finding, or lab result. And with the advance of do-it-yourself image uploading systems, any patient may be able to send their images not only to their doctor but to a cloud-based radiologic interpretation station as well.
Here’s an example of how we are working with the first stages of this system.
Let’s say a woman injures her knee while skiing in a remote part of Alaska. She gets an exam, X-ray, and MRI done by a local physician in a mountain clinic. The doctor enters this information into an electronic medical record and provides the patient with a web link. The patient uploads the links or the images directly, using a cloud-based radiology storage system like Purview.
The patient’s own physician, who subscribes to the future “Global Medical AI” network, is notified of the injury and provided with the AI bot-produced report, over-read by a supervising networked radiologist. This report combines all of the medical information with human and computer software interpretations. It not only diagnoses a torn anterior cruciate ligament (ACL), but highlights some important genomic information previously submitted to the doctor and patient: due to a specific genomic pattern, a blood clot in the leg is 10% more likely to form during the post-injury period.
Since the AI bot has also reviewed the patient’s entire available pharmacy record (in addition to their credit card purchases of over-the-counter drugs), it is clear that the patient has been consuming high doses of NSAIDs. These, the bot notes, might increase the bleeding risk if the patient were placed on anticoagulants before being taken off the other drugs. In addition, an evaluation of the patient’s recent tweets indicates a new level of depression that may require extra psychological support during the postoperative healing phase.
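The cross-source correlation described above can be pictured, in miniature, as a rule engine scanning a merged patient record. The sketch below is purely illustrative: the field names, thresholds, drug list, and sentiment score are hypothetical stand-ins, not any actual product’s data model or API.

```python
# Illustrative sketch of cross-source risk correlation.
# All data, field names, and rules are hypothetical examples.

def correlate_risks(record):
    """Scan a merged patient record and return human-readable risk flags."""
    flags = []

    # Genomic data: an elevated clot risk changes post-injury management.
    if record.get("genomic_clot_risk", 0) > 0.05:
        flags.append("Elevated post-injury clot risk: review anticoagulation plan")

    # Pharmacy and purchase data: NSAIDs raise bleeding risk on anticoagulants.
    nsaids = {"ibuprofen", "naproxen", "aspirin"}
    if nsaids & set(record.get("medications", [])):
        flags.append("NSAID use detected: assess bleeding risk before anticoagulants")

    # Social media sentiment: low scores suggest extra psychological support.
    if record.get("sentiment_score", 1.0) < 0.3:
        flags.append("Low mood indicators: offer postoperative psychological support")

    return flags


patient = {
    "diagnosis": "ACL tear",
    "genomic_clot_risk": 0.10,     # 10% elevated likelihood from genomic pattern
    "medications": ["ibuprofen"],  # mined from pharmacy and purchase records
    "sentiment_score": 0.2,        # derived from recent social media posts
}

for flag in correlate_risks(patient):
    print(flag)
```

A real system would replace these hand-written rules with learned models over far richer data, but the shape of the task is the same: disparate sources are merged into one record, and each finding is checked against every other for interactions a single specialist might miss.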
So, before the skiing patient has ever left the remote mountain town, the level of medical insight about their underlying conditions, lifestyle choices, and potential risks exceeds most of what is known today. Yet, very few extra resources were consumed.
Several recent events are making this all possible:
1. The general acceptance and recognition of the lack of privacy in medical and consumer information. Electronic medical records are improving by sharing information about patients, no matter where they are being treated. On the positive side, this permits the doctor to know a great deal about the patient. All of their credit card or Apple Pay-like purchases are recorded, and that information is mined to paint a general picture of the cardholder. When combined with the patient’s Facebook and Twitter feeds, Instagram pictures, and other social media revelations, a highly detailed portrait of the patient emerges.
2. Because they are self-learning, AI bots and data mining technologies are improving more rapidly than humans ever could. They are not intimidated by vast amounts of data and can quickly correlate disparate information sources.
3. Genetic information able to predict disease or highlight risks is rapidly improving. Today we are researching a sliver of this information as it relates to injury and arthritis. While we still do not have a handle on exactly how certain conditions predicted by the genetic code are expressed or suppressed, such patterns will be more predictive after billions of people have been sequenced.
4. Patient access to direct medical resources — through medical data uploading services, along with data mining their own healthcare data — will democratize medical diagnoses. This will not reduce the physician’s role; it will empower doctors with potent interpretation tools, making them far better diagnosticians. And it will free patients from the tyranny of socialized medical care systems and insurance companies that, in their effort to drive down costs and improve profits for their executives and shareholders, have robbed millions of patients of the best that medicine can offer.
So it is time to embrace the belief that, at least in medicine, “all things digital are public.” And if they are going to be, we might as well all benefit from this transparency by mobilizing the best minds to keep us healthy—even if those minds are not human.
Dr. Kevin R. Stone is an orthopedic surgeon at The Stone Clinic and chairman of the Stone Research Foundation in San Francisco.