Have you ever bought something online and then had ads for that same item annoy you across the internet for weeks? Creepy, right?
What if your doctor discovered you had high blood pressure, and then promos for anti-hypertension medication started popping up everywhere? That would be really creepy (and possibly illegal), but it’s technologically feasible.
Thanks to digital developments like electronic health records and smartphone apps, there’s a growing pile of computerized personal health data out there. It’s being used not only by doctors to treat patients, but by dozens of organizations and companies. Some of that data has been “de-identified” for research — but some could be traced back to you.
The conflict between technology and medical privacy has been thrust even further into the spotlight with the COVID-19 pandemic, as public health leaders and technologists explore ways to fight the spread of disease by using cellphone data and other tools to trace the movements of infected people.
You may not care if the world knows your cholesterol readings or where you stopped off for coffee. However, some people have medical histories they want to keep ultraprivate, and others are simply uncomfortable with their health information being used without their knowledge or specific permission.
Where’s Your Data?
Maybe you’ve been feeling glum and you downloaded an app that promised to help boost your mood. As a result, information about your depressed period may wind up in places you didn’t expect. Psychiatrist and clinical informaticist John Torous directs the digital psychiatry division at Beth Israel Deaconess Medical Center and leads the American Psychiatric Association’s work group on the evaluation of smartphone apps for mental health. He and his colleagues analyzed the “terms and conditions” that most people click through without reading on three dozen of the most popular smartphone apps for treating depression and quitting smoking. What they found shocked Torous.
“Health information” can take many forms. The most obvious are patient records maintained by providers. Those are protected by law, including the Health Insurance Portability and Accountability Act. But HIPAA was passed in 1996, when the internet was less evolved and smartphones were in their infancy. It was meant to make health information more shareable, not less, by defining legitimate uses of the data and the privacy obligations that those handling it must meet.
Health providers can share your information for treatment, payment, operations and research — which covers a lot of ground. “People should be aware that a lot of their data is shared, and there’s not much they can do to prevent it under the current system,” says Peter Szolovits, head of the Clinical Decision-Making Group at Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory. Szolovits cites a National Research Council study showing that a typical medical record is shared with dozens of firms and organizations with some type of “business associate” relationship with a health care provider. “Most people are surprised when they learn the extent to which their data spreads,” he says.
Your ability to protect your medical information is already limited by your need for care, says Mildred Cho, associate director of the Stanford Center for Biomedical Ethics, who specializes in genomic data usage. “When you sign that little piece of paper that gives them access to your health information to take care of you — which they need — it also gives access to the health care system to use your data for other purposes such as research,” she says. “Not that that’s a bad thing in and of itself, but HIPAA doesn’t actually protect you from uses of your information or access by third parties. We are all giving up our data involuntarily all the time and hoping it doesn’t come back to bite us later.”
Deven McGraw enforced HIPAA while working at the Department of Health and Human Services and is now chief regulatory officer for Ciitizen, a startup that helps people collect and control their medical data. She says that while some protection exists from discrimination based on health information — for example, the Genetic Information Nondiscrimination Act, which applies to employers and health insurers — the protections are spotty. “We don’t have a set of laws that say, ‘We think you’re going to be unhealthy, and it will not affect your ability to be employed,’ ” or to qualify for a home mortgage, she says.
HIPAA should be updated to reflect the increasingly complex web of health information and the opportunities it contains, says David Vawdrey, chief data informatics officer for Geisinger, one of the first health systems to adopt commercially developed electronic records and a leader in using data (including genomic information gathered from patients voluntarily via its MyCode initiative) to improve care. What happens to your data “needs to be more transparent,” he says.
Deals for Data
This tsunami of data holds huge potential, because researchers can use it to analyze diagnoses, treatments and outcomes for millions of patients at a time, uncovering patterns that are easily overlooked with smaller groups. In a recent study, for example, Google trained an algorithm to detect breast cancer more accurately than a team of radiologists, using mammography images from more than 90,000 women.
Many health systems are exploring how to analyze patients’ data to optimize care, but it’s proving tricky. Some have agreed to share patients’ electronic health records with Google to take advantage of the company’s advanced analytic and artificial intelligence capabilities to make accurate diagnoses more quickly and to select the most effective treatments.
One such initiative, introduced in 2019 as “Project Nightingale,” has sparked public concern about what Google might do with all of that data, given how much it might already know about everyone’s location and travel routines (Google Maps), buying habits (Google Shopping), work habits (Gmail, Google Docs, Google Voice and other business apps) and interests (Google Search and Google Alerts). The project, a partnership between Google and Ascension, a large Catholic health system, involves millions of patient records and has gotten a lot of press, though both organizations have stated that they will operate within the requirements of the law, and Google will not combine any of Ascension’s information with data from other sources.
Anonymizing the Data?
The Mayo Clinic launched the Mayo Clinic Platform in 2019 to improve care through insights and knowledge derived from data. It selected Google to provide the project with data science services. Like Ascension’s agreement, the deal specifically prohibits Google from combining Mayo’s medical data about a given patient with data it may have about the same person from other sources, says John Halamka, the platform’s president.
“The Mayo Clinic has a moral imperative to cure disease, which means discovering new therapies and medications, but also a solemn obligation to keep data private,” he says. “Doing both is really hard.” For many types of studies, data are “de-identified” by stripping out obvious elements like name and address, but Halamka says it’s often simple for a computer algorithm to “re-identify” data, especially for people with unusual characteristics or rare conditions. One such algorithm successfully re-identified 80% of children and 95% of adults in a 2018 study of 14,000 records.
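The mechanics of re-identification are simpler than they sound. Here is a minimal sketch, with entirely made-up records, of the basic linkage technique: even after names are stripped, quasi-identifiers such as ZIP code, birth year and sex can be matched against an outside dataset that does include names, like a voter roll or marketing database. (This illustrates the general idea, not the specific algorithm used in the 2018 study.)

```python
# "De-identified" medical records: names removed, but quasi-identifiers remain.
deidentified_records = [
    {"zip": "60614", "birth_year": 1987, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "60657", "birth_year": 1954, "sex": "M", "diagnosis": "diabetes"},
]

# An outside dataset with names attached (e.g., a public voter roll).
outside_dataset = [
    {"name": "Jane Doe", "zip": "60614", "birth_year": 1987, "sex": "F"},
    {"name": "John Roe", "zip": "60657", "birth_year": 1954, "sex": "M"},
]

def reidentify(medical, public, keys=("zip", "birth_year", "sex")):
    """Link each 'anonymous' medical record to any named public record
    that matches on every quasi-identifier."""
    matches = []
    for m in medical:
        candidates = [p for p in public if all(p[k] == m[k] for k in keys)]
        if len(candidates) == 1:  # a unique match pins down the person
            matches.append((candidates[0]["name"], m["diagnosis"]))
    return matches

print(reidentify(deidentified_records, outside_dataset))
# Each record matches exactly one named person, so both are re-identified.
```

People with unusual combinations of attributes are exactly the ones most likely to produce a unique match, which is why rare conditions and uncommon characteristics make re-identification easier.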
Mayo is working on a “next-generation” method of de-identifying data that Halamka says will make it more difficult to re-identify an individual by removing subtle clues. For example, if five Brazilians in a study live in a given Chicago ZIP code, their national origin might be changed to “South America.”
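The generalization idea Halamka describes can be sketched in a few lines: when a value is shared by too few people in a group to hide among, it gets coarsened into a broader category. The threshold and the country-to-continent mapping below are illustrative assumptions, not Mayo’s actual method.

```python
# Coarsen rare quasi-identifier values (here, national origin -> continent)
# so that no small subgroup stands out. Threshold is an assumed value.
CONTINENT = {"Brazil": "South America", "Chile": "South America",
             "Japan": "Asia", "Canada": "North America"}
MIN_GROUP_SIZE = 10  # assumed privacy threshold

def generalize_origin(records):
    # Count how many people share each national origin.
    counts = {}
    for r in records:
        counts[r["origin"]] = counts.get(r["origin"], 0) + 1
    # Replace values held by fewer than MIN_GROUP_SIZE people
    # with a broader category.
    out = []
    for r in records:
        r = dict(r)
        if counts[r["origin"]] < MIN_GROUP_SIZE:
            r["origin"] = CONTINENT.get(r["origin"], "Unknown")
        out.append(r)
    return out

patients = ([{"zip": "60601", "origin": "Brazil"}] * 5
            + [{"zip": "60601", "origin": "Canada"}] * 20)

print(generalize_origin(patients)[0])
# The five Brazilians become "South America"; the 20 Canadians are unchanged.
```

The tradeoff is between privacy and research value: the coarser the categories, the harder re-identification becomes, but the less precise the data is for analysis.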
Beyond the medical record at your doctor’s office or hospital, there is information you gather yourself, from mental health apps and exercise apps, and your step-counter and Wi-Fi-connected scale, for example. (By the way, Google would potentially gain access to data from the latter two by acquiring Fitbit, though that deal wasn’t a sure thing as of press time.)
None of that self-collected data is covered by HIPAA or any other regulations. If you want to protect it, you’re on your own, McGraw says. She echoes Torous’ advice to read the fine print, because you may be allowing app developers access to information they can use to advertise to you or share with your employer or insurer. “Read the terms and conditions and do your best to understand what they’re doing with data,” she advises. “If you can’t figure it out, don’t use that tool.” For example, an ovulation-tracking app might reveal that a woman is thinking about having a baby — a detail that employers can’t legally use but that she might rather keep private.
McGraw adds that while you are legally entitled to get a copy of your medical record — and everyone should — it’s no longer protected by HIPAA if you share it with an app or online service.
Deductive Data Trail
Then there’s information that can be used to deduce things about health status. “People can take demographics, education level, what websites you visit, and put together profiles to predict certain things about your health,” McGraw says. One company developed a “medication adherence” analyzer that used nonhealth data to predict whether people would take their medications, and their likelihood of hospitalization if they didn’t. The company sold its tool to employers and health plans. While it’s hard to pinpoint scenarios where that type of information might hurt you, McGraw cautions that laws protecting people from discrimination may not extend to this territory depending on the specific circumstances. “These are not issues that individuals can handle on their own,” she says. “Policymakers really need to pay attention.”
Even an everyday tool like Google Maps can give away your health status, says health information technology consultant Lisa Bari, who worked at the Centers for Medicare & Medicaid Services on innovations related to artificial intelligence and patient access to data. “You can look at my Google Maps information and all the health-related places I’ve been, and it’s easy to figure out that I’m dealing with some orthopedic issue,” she says. “That information is not protected under HIPAA and that gets at the need for change. Health information should be an inherently protected class of data.”
The European Union has a strict data privacy law, the General Data Protection Regulation, or GDPR, that gives people “the right to be forgotten,” Vawdrey says. “You can ask to have your data deleted, and they have to do it. The right to be informed and have control over the process is something we’re beginning to contemplate as a society.”
Thanks to GDPR, traveling in Europe has shown Bari just how closely her wellness apps are keeping track of her. “When I connect to the internet in Europe, my U.S. apps have to start treating me differently,” she says. “They give me the opportunity to opt out of nearly all tracking. It’s been instructive and scary to see all the different types of tracking they use. It’s caused me to step back and delete some of them.”
California has passed a similar law, the California Consumer Privacy Act, that went into effect in 2020 (though it doesn’t currently apply to any health information covered by HIPAA). But experts say a federal law is unlikely in the near future, though changes are needed. Says Vawdrey: “We don’t quite know what the future is going to hold, but especially in health care we shouldn’t be doing creepy stuff with people’s data.”