Facebook’s proposal to match anonymized patient data with user profiles has some experts worried about the possible privacy implications of such a project.
A proposed medical data sharing project that would have given Facebook access to anonymized patient information has raised questions about the potential privacy issues related to such a plan, as well as the role of patient consent in deciding who should be able to access that data.
The social network had asked several major U.S. hospitals and healthcare organizations to sign a medical data sharing agreement that would give it access to anonymized patient information. Facebook would then attempt to match that data with user profiles using a cryptographic technique called hashing. The company said the data would only be used for research conducted by the medical community.
Facebook has put the brakes on the proposed project, but experts said the central question raised by proposals such as Facebook’s is whether HIPAA applies — not just to the data it receives from the healthcare organizations, but to the information it collects from users, as well. Generally, a HIPAA violation related to social media would be the posting or sharing of protected health information without the patient’s consent. But if that information has been shared willingly, privacy becomes a little trickier.
The HIPAA in the room
On the surface, Facebook’s medical data sharing proposal brings up an interesting question: Would such a project comply with HIPAA?
The short answer is yes, said David Harlow, healthcare lawyer for The Harlow Group in Newton, Mass.
“What they’re really describing is a research project to see if this is something that might work, that might be useful [or] helpful in the future,” said Harlow, who also runs the healthcare law and policy blog HealthBlawg. Essentially, Facebook would enter into a medical data sharing agreement with a healthcare organization and receive a limited data set that excludes 16 categories of identifiers, as stipulated by HIPAA.
“So, somebody — let’s say a third party — receives both a Facebook profile and a health record from a healthcare organization. That third party … does the anonymization and creates the hash and creates the linkage, so that we know patient X is also Facebook user X. Facebook doesn’t need to know who is X in order to do the research part of this project,” Harlow said.
“In the framework of a data use agreement for a limited data set for a research project, that’s a perfectly legitimate approach to working on a project in compliance with HIPAA.”
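The linkage Harlow describes can be sketched in code. In this hypothetical example, a trusted third party applies the same salted hash to an identifier present in both data sets (an email address is assumed here purely for illustration) and joins records on the hashes, so that neither Facebook nor the hospital needs to see the other’s raw identifiers. The salt, field names, and sample data are all illustrative assumptions, not details from the proposal.

```python
import hashlib

# Illustrative salt agreed between the parties; in practice this secret
# would be held by the trusted third party, not by Facebook.
SALT = b"shared-secret-salt"

def hash_identifier(identifier: str) -> str:
    """Return a salted SHA-256 digest of a normalized identifier."""
    normalized = identifier.strip().lower().encode("utf-8")
    return hashlib.sha256(SALT + normalized).hexdigest()

# Hypothetical inputs: the hospital's limited data set and the platform's
# user list, each keyed by the hashed identifier rather than the raw one.
hospital_records = {hash_identifier("jane@example.com"): {"record_id": "H-001"}}
platform_users = {hash_identifier("Jane@Example.com "): {"profile_id": "FB-42"}}

# The linkage step: matching hashes indicate the same person, so the
# researcher learns "patient X is also user X" without learning who X is.
linked = {
    h: (hospital_records[h], platform_users[h])
    for h in hospital_records.keys() & platform_users.keys()
}
```

Because hashing is deterministic, the two differently formatted copies of the same address produce the same digest and the records link; anyone without the salt cannot reverse the digests back to identities.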
There is a caveat, however.
“The fine legal line Facebook would walk here is between HIPAA and the individual’s expectation of privacy, or lack thereof, when using Facebook,” said Tirena Dingeldein, senior content analyst at Capterra, a website for business software buyers in Arlington, Va.
“If, for instance, I were to go on Facebook and complain about my chest pain and then follow up a month later talking about my heart attack, one could argue that [I] have no expectation of privacy when [I] post personal information on a public forum,” Dingeldein said. “Your choice to put your health information on a public forum means you have no expectation of privacy, and [that information] is, therefore, not covered by HIPAA.”
Kurt Long, CEO of data protection company FairWarning in Clearwater, Fla., however, argued it’s human nature to want to control what information we share and with whom.
“I think privacy has been underestimated … What we’re really finding out is that, as human beings, all of us cherish some part of privacy, and we all want to be able to hold back certain kinds of things,” he said.
“And there’s trust with these businesses, and we’re seeing that when you pull the threads on privacy out of that fabric, it’s pretty easy to tear.”
Long said regulations such as HIPAA and other standards are already in place to protect data, but there is not a framework for the ethical use of data and machine learning algorithms. That dilemma could potentially lead to unauthorized uses of data, he said.
“I think the intent is always good to get better care to these [cardiac] patients and understand and predict and get in front of the problem,” Long said. “But what happens is there’s so much pressure on publicly held companies to grow revenue. Once you have that data, you forget the initial authorized use.”
Long said there are also loopholes around consent and authorized use, which could lead to data being used for other purposes, such as marketing for prescription drugs. One potential fix is to implement something like the General Data Protection Regulation in the European Union, which addresses issues around consent and authorized use.
Social media can predict health conditions …
While Facebook’s intention for its proposed medical data sharing project was for research purposes, there are potential uses for social media as a predictive tool when combined with AI, Dingeldein said.
… But should it?
Kaitlin Costello, assistant professor of library and information science at the School of Communication at Rutgers University, said trying to match patient data with Facebook profiles without patients’ consent could have unintended consequences.
“Anytime a new technology is developed, especially anytime we try to solve a problem like healthcare, [which] has so many layers … anytime we try to fix something, we usually break something else and cause other problems,” Costello said.
“We’re really messing around with people’s autonomy and lives,” she said.
Costello added that Facebook users may stop sharing health information if they can’t be sure who else has access to that data. She also said even if social media could be used as a predictive tool, not everyone will want to use it for that purpose.
“People’s information practices around health are actually really complicated,” she said. “Some people don’t want to know that stuff, and they have a right not to know. Some people want to avoid health information, and that needs to be OK.”