“Big Data” refers to the mining of enormous data banks for trends, patterns and strategies that inform decision making. One example is the hospital electronic medical record (EMR). In this case, vast amounts of data are collected, pooled and analyzed by insurance companies, drug companies, researchers and everyone interested in understanding the health of patients and the consequences of various treatments.
But does it work? Not really. The joke in hospitals these days is that if a patient comes in for an amputation of his leg, you can counsel him not to worry: When he returns to the hospital for something else — skin cancer, for example — his leg will be back on. Why? Because while the doctor admitting the patient for the skin cancer might comment on that particular problem, he is also highly likely to cut and paste into the record the results of a long past physical exam, in which the leg appeared as normal. And this is not just the fault of doctors. Everyone in the hospital has to hunt and peck to fill in these computerized records, and no one has the time to do it.
It is not just the medical records that are contaminated by obsolete information. The entire fields of “consensus medicine” and “standards of care” are suspect. In orthopedics there are very few Level 1 studies proving the efficacy of one device over another. So large pooled studies (such as meta-analyses or Cochrane reviews) draw their data from very few, or very poor, studies. The input data are weak, at best.
So big data suffers from the old “garbage in, garbage out” problem of trying to digitize human characteristics. While it’s well meaning, and possibly an improvement over the old paper records, one has to ask: “Is the quality of our data better today than it was 10 years ago, or is it just larger?” Are the conclusions about which treatments work and which should be approved — drawn from the hospital EMR and used by insurers and the government — based on facts or fictional convenience?
There are solutions to this problem. The EMR, for one, needs an artificial intelligence agent monitoring the inputs. The inputs should be oral, with logical prompts. Doctor: “I would like to admit this patient for a knee replacement.”
“OK,” the record responds, “I note she had a blood clot in the past. Shall we order anticoagulation specific to her genotype?”
Only when this real-time feedback is in place will the medical record evolve logically and accurately, and become truly useful in fulfilling the promise of data mining.
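To make the idea concrete, such an input-monitoring agent could start as a simple set of rules that scans a patient’s recorded history whenever an admission order is entered. The sketch below is purely illustrative: the patient fields, the single rule, and the function name are all hypothetical, not taken from any real EMR system.

```python
# Illustrative sketch of an EMR "agent" that checks a new admission order
# against the patient's recorded history and prompts the clinician.
# All field names and rules here are hypothetical.

def check_admission(patient, procedure):
    """Return a list of prompts the record should raise before admitting."""
    prompts = []
    # Example rule: a history of blood clots plus joint-replacement surgery
    # should trigger an anticoagulation question before the order goes through.
    if procedure == "knee replacement" and "blood clot" in patient["history"]:
        prompts.append(
            "Noted prior blood clot. Order anticoagulation "
            "specific to this patient's genotype?"
        )
    return prompts

patient = {"name": "Jane Doe", "history": ["blood clot", "appendectomy"]}
for prompt in check_admission(patient, "knee replacement"):
    print(prompt)
```

A real agent would of course draw on far richer data and learned models, but even this rule-based form captures the point: the record talks back at the moment of entry, rather than silently accumulating stale text.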
Further, all doctors — and not just those at the major medical centers — need to use outcomes tools as part of their patient care plans. Any patient given a drug or therapy must have an easy way to add their information to the mix. This could be done with a follow-up questionnaire or an automated call to discover whether the therapy works. Since everyone has a cell phone, and almost no one ever changes their cell number, no patient should ever be lost to follow-up.
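The bookkeeping behind such automated follow-up is modest: schedule outcome questionnaires at fixed intervals after a treatment, keyed to the patient’s phone number. The sketch below assumes hypothetical interval choices and records what would be sent rather than calling any real messaging service.

```python
from datetime import date, timedelta

# Hypothetical sketch: schedule outcome questionnaires after a therapy
# so no patient is lost to follow-up. No real messaging API is used;
# send_questionnaire just records what would go out.

def schedule_followups(treatment_date, intervals_weeks=(2, 6, 12)):
    """Return the dates on which follow-up questionnaires go out."""
    return [treatment_date + timedelta(weeks=w) for w in intervals_weeks]

outbox = []

def send_questionnaire(phone, question, when):
    outbox.append({"phone": phone, "question": question, "date": when})

for due in schedule_followups(date(2024, 3, 1)):
    send_questionnaire(
        "+1-555-0100",
        "Is your knee pain better, worse, or unchanged?",
        due,
    )

print(len(outbox))  # prints 3
```

The follow-up intervals and the phone number here are invented for illustration; the point is only that a few lines of scheduling logic, tied to a number patients rarely change, are enough to keep outcome data flowing back into the record.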
So there need be no more confusion about whether a patient will be bringing one leg or two to their next appointment. The tools are now available for the electronic medical record to be accurate, comprehensive and updated in real time. That’s when Big Data will turn into Big Results.
Dr. Kevin R. Stone is an orthopedic surgeon at The Stone Clinic and chairman of the Stone Research Foundation in San Francisco.