More light needed on medical ‘shadow’ records, ‘black box’ tools
ANN ARBOR—Every American has official medical records, locked away in the computers and file cabinets of their doctors’ offices and hospitals, and protected by strict privacy laws.
But most people don’t think about their other medical record—the informal “shadow” one they generate during everyday life.
In a new article in the journal Science, a team of experts led by two University of Michigan researchers calls for attention to this shadow record.
They describe it as the data generated by everyone who wears a fitness tracker, uses a smartphone health app, shops for health-related items—or really, almost anything—online or with a customer loyalty card, orders DNA tests to learn about their genetic disease risk or ancestry, searches the internet for health information, or posts on social media or other sites about their health.
When shadow-record elements from many people are pooled together and used by academic researchers or industry, they can fuel progress in health care research and innovation, says the international team led by Nicholson Price, U-M assistant professor of law.
In fact, he and his colleagues say companies have already started gathering and selling access to massive amounts of such data. But, they say, few rules apply to how shadow data is stored and used—and how the privacy of the people behind the data is protected.
Meanwhile, academic researchers already study bulk data from official medical records after it has been stripped of individual identification. That kind of study, called health services research, has fueled many improvements in care and policy.
Price and Kayte Spector-Bagdady, an assistant professor at the U-M Medical School and member of the U-M Center for Bioethics & Social Sciences in Medicine, reviewed the current laws and regulations surrounding shadow medical records.
“Not all industry involvement in health data is a bad thing,” Spector-Bagdady said. “Industry can help propel innovation. But relying on loopholes to collect personal health data without knowledge is predatory.”
She and Price worked with Margot Kaminski, an associate professor of law from the University of Colorado, and Timo Minssen, director of the Center for Advanced Studies in Biomedical Innovation Law at the University of Copenhagen.
They call for greater clarity in current regulations to ensure that research in the public interest can go forward. And they recommend that any future data privacy-related hearings, legislation and regulations pay special attention to health-related topics.
Price and Spector-Bagdady are both members of the U-M Institute for Healthcare Policy and Innovation.
More transparent black boxes
Price has also written in recent months about another area of data-driven health innovation that he says needs more transparency.
These tools, known as "black box" algorithms, are a burgeoning type of artificial intelligence software that harnesses large-scale medical data to give doctors, other health providers or consumers advice on health topics.
They’re based on a technique called machine learning, which feeds massive amounts of data into computerized systems and teaches them to recognize and predict patterns.
For instance, using data about lung cancer risk from thousands of patients, an algorithm could help doctors decide which patients should go for chest CT scans to see if they have early signs of lung cancer, and which aren’t as likely to benefit from such screening.
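To make the idea concrete, here is a minimal sketch of that kind of risk-prediction model. All of the risk factors, the screening threshold, and the data are synthetic and hypothetical; they are illustrative assumptions, not the actual algorithms or clinical criteria described in the article, which are trained on large real-world datasets and validated before use.

```python
# Hypothetical sketch: a simple machine-learning model that learns a pattern
# linking (synthetic) lung cancer risk factors to outcomes, then flags
# patients whose predicted risk exceeds an illustrative screening threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical risk factors: age, pack-years of smoking, family history (0/1)
age = rng.normal(60, 10, n)
pack_years = rng.gamma(shape=2.0, scale=15.0, size=n)
family_history = rng.integers(0, 2, n)

# Synthetic "ground truth": probability of early disease rises with risk factors
logit = -8.0 + 0.05 * age + 0.04 * pack_years + 0.8 * family_history
prob = 1 / (1 + np.exp(-logit))
cancer = rng.binomial(1, prob)

X = np.column_stack([age, pack_years, family_history])
X_train, X_test, y_train, y_test = train_test_split(X, cancer, random_state=0)

# Feed the data into the model so it learns to recognize the pattern
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Flag patients whose predicted risk exceeds a purely illustrative threshold
risk = model.predict_proba(X_test)[:, 1]
recommend_ct = risk > 0.02
print(f"Patients flagged for CT screening: {recommend_ct.sum()} of {len(risk)}")
```

A deployed tool would differ in almost every detail, but the structure is the same: historical data teaches the system a pattern, and the pattern is then used to sort new patients into those likely and unlikely to benefit from screening.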
Writing in Science Translational Medicine in December, Price described the potential value of these algorithms in predicting the course of disease, or augmenting the skills of radiologists and pathologists in reading scans and tissue samples from patients.
But he also noted that guidelines for the clinical use and regulation of such tools need to be developed now, including standards for validating the tools’ actual usefulness.
More transparency by industry about the data and assumptions behind their black boxes could increase the chance that providers will opt to use the tools, and that regulators won't crack down on them in ways that limit their use.
Rather than heavy-handed rules, he argued, the medical algorithm sector needs regulatory flexibility, transparency and broad involvement from across the health system, including researchers, providers and regulators.
“Big data and AI are racing ahead in medicine,” Price said. “And right now, law and policy are playing catch-up.”
Warning about too much privacy
Price also co-authored a review article in Nature Medicine in January with I. Glenn Cohen of Harvard University’s Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics.
In it, they look at many legal and ethical aspects of the rise of big data, artificial intelligence, machine learning and other data-driven technologies in medicine. They call for a balance between maximizing the potential development and use of such tools, and protecting the privacy of those whose medical and shadow data would be used anonymously to build and test the tools.
“It is important that we not assume privacy maximalism across the board is the way to go,” Price and Cohen wrote. “Privacy underprotection and overprotection each create cognizable harms to patients both today and tomorrow.”
More information:
- Science article: Shadow health records meet new data privacy laws
- Science Translational Medicine article: Big data and black-box medical algorithms
- Nature Medicine article: Privacy in the age of medical big data