By Jackie Kimmell, senior analyst
At this time last year, news leaked that Facebook had presented an ambitious plan to health care organizations aimed at connecting patients' health care data to their social media presences.
Today, that's been put on pause. But is the company done in health care?
Remember Facebook's patient data-matching plan?
In 2017, Facebook presented an aggressive plan to health care providers: Providers would share anonymized patient data (on illnesses, prescriptions, etc.) with Facebook, which would then match that data with the patients' social media activity. The goal was to build a fuller picture of patients' lives outside of the hospital—informing decisions like, for example, who would benefit from extra outreach after surgery.
And, despite later backlash, the idea seemed to be gaining traction. According to CNBC, Facebook had spoken to many leading health care organizations, including "major U.S. hospitals," Stanford Medical School, and the American College of Cardiology, about the project.
But momentum on the project fizzled shortly after the Cambridge Analytica scandal, when the world learned that 87 million users' data had been shared without their knowledge and used to target political ads.
Since the scandal, Facebook hasn't made any major health care announcements—but that doesn't mean the company is done with the industry altogether. Rather, it has been quietly pursuing several health care projects (though not all have been free of controversy).
How Facebook is slowly moving into health care
Outside of online advertising, which the company has heavily pursued for pharmaceutical companies, Facebook's main entryway into the health care industry is its AI capabilities. Facebook in 2018 announced that it planned to double the size of its Facebook Artificial Intelligence Research (FAIR) group, which, according to the company, focuses "on open and foundational research that advances the state of AI." Here are six ways Facebook is leveraging AI to improve health care.
1. Improving the MRI. In August, the company launched a pilot project with the NYU School of Medicine aimed at using AI to make MRI scans up to 10 times faster. The project, called fastMRI, will test ways to generate an MRI image using less raw MRI data. Scans currently take from 15 minutes to over an hour because the scanner must collect data in a set sequence to produce an image. But Facebook believes it can collect less data and train AI to fill in the blanks, shortening the time it takes to generate a scan.
2. Facial recognition. Another area where Facebook has an edge in AI is facial recognition. The company has built tools that, for example, can reconstruct partially hidden human faces or generate fake eyes to edit into a photo if the subject blinked. Most of these efforts have been aimed at improving photo posting on the site, but the company may also be eyeing health care applications for the technology. Facial recognition software has already been used to predict physiological health, track patients through the hospital, measure their pain, and provide hospital security—and Facebook's AI could offer even more. Still, the company will have to overcome privacy hurdles, including a class-action lawsuit over its use of the technology on the site.
3. Brain research. In more futuristic plans, Facebook CEO Mark Zuckerberg, during a recent discussion at Harvard, spoke about a brain-computer interface his company has been researching. According to Wired, the technology Zuckerberg described is a "shower-cap-looking device" that aims to identify connections between certain thoughts and brain activity. The end product would allow a person to "type" by thinking. In 2017, Facebook said it was working with scientists at UC San Francisco, UC Berkeley, Johns Hopkins Medicine, and Washington University School of Medicine in St. Louis on the telepathic project, which could prove useful in health-related applications, such as connecting brain waves to move prosthetics.
4. Facebook is building its own Alexa. Facebook just last week confirmed it is working on an AI-based digital voice assistant to compete with Amazon's Alexa and Google Assistant. Facebook in 2014 acquired the virtual reality company Oculus VR and plans to incorporate voice and AI assistants into Oculus' products. While it's not yet clear whether Facebook will seek to introduce its voice assistant into the health care industry, Amazon earlier this month announced its Alexa Skills Kit now allows certain HIPAA-covered entities to build Alexa skills that can share and receive protected health information. Six health care companies—including Atrium Health, Boston Children's Hospital, and Cigna—already are testing HIPAA-compliant Alexa skills, suggesting the industry is ripe for competition.
5. Suicide prevention. Facebook also has deployed AI for suicide prevention efforts, aiming to build models that predict when someone may attempt suicide (and especially to prevent suicides from being streamed on its live video platform). The company's AI mines users' posts and comments for language that may indicate possible suicidality and alerts law enforcement if it detects an imminent threat of self-harm. Within the first year of launching the tool, Facebook said it had contacted about 100 local law enforcement officials about users. In November, the company said that number had risen to 3,500 over the past year—about 10 contacts to emergency responders per day—and that at least 7,500 community operations staffers review cases of potential suicide every day.
However, these efforts have been controversial. While some have praised the efforts, calling them "ahead of the pack," others have criticized the opacity of Facebook's AI system. John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center, said, "It's hard to know what Facebook is actually picking up on, what they are actually acting on, and are they giving the appropriate response to the appropriate risk." Mason Marks, a research fellow at Yale Law School and NYU Law School, wondered if the model may cause people to "fear a visit from the police, so they may pull back and not engage in an open and honest dialogue."
6. Global population health. Facebook has also waded into global humanitarian projects, using AI to map population density in Africa. The company says the initiative has already supported governments and nonprofit organizations in their health efforts, including coordinating a measles vaccination campaign in Malawi. Facebook also launched a blood donation tool on the site in India, Pakistan, and Bangladesh—though critics argued the tool could fuel a black market for blood.
Facebook must overcome troubled past to gain trust in health care
While these health care efforts suggest Facebook may have bigger plans for the industry, the company first will have to build user trust in its data privacy practices and clearly define how any patient data it collects could be shared with advertisers.
One major concern is that, while the company gets users' permission to collect a broad range of data, users don't necessarily expect, or consent to, that data being used in medical research or projects. As Aneesh Chopra, president of health software company CareJourney and the former White House chief technology officer, said after news of Facebook's patient data-matching plan leaked, even if patient data was anonymized, "Consumers wouldn't have assumed their data would be used in this way."
Plus, in addition to the Cambridge Analytica scandal, Facebook has faced a number of other privacy scandals related to health. For instance, it's faced growing backlash over its handling of patient support groups. Last April, Facebook was grilled by lawmakers over its handling of the private medical data generated from these groups. In February, congressional leaders sent a letter to Zuckerberg summoning him to speak about reports that Facebook sold data about users' health-related group memberships to advertisers. The letter charged that Facebook's group policies left users open to bullying and predatory advertisements, and could even have affected their health insurance options. A complaint filed with FTC raised similar concerns, stating, "Sharing of privately posted personal health information violates the law, but this serious problem with Facebook's privacy implementation also presents an ongoing risk of death or serious injury to Facebook users." Facebook last week took some initial steps to address those concerns, unveiling a new community of health support groups that will allow users to post questions anonymously.
But other controversies have arisen over health and wellness apps selling personal information to Facebook. The Wall Street Journal investigated 70 leading apps and found that 11 sent potentially sensitive information to Facebook (including users' heart rates and ovulation tracking)—even when the users didn't have a Facebook account. Facebook responded that "sharing information across apps on your iPhone or Android device is how mobile advertising works and is industry standard practice," but the issue is far from settled. The company is now the subject of a federal criminal investigation into its practice of letting tech companies see users' friends, contacts, and other data.
Zuckerberg has publicly admitted in congressional testimony that Facebook does collect some medical data from users. In a recent editorial, he called for greater privacy regulation and suggested that there "should be a way to hold companies such as Facebook accountable by imposing sanctions when we make mistakes."
But, despite these moves to rebuild its reputation for data privacy, the company has almost certainly damaged its standing, likely impairing its ability to gain the trust of health care consumers any time soon. According to a Fortune poll from November 2018, only 22% of Americans trust Facebook with their personal data—less than half the share who said the same of Amazon (49%), Google (41%), Microsoft (40%), and Apple (39%).
As Advisory Board's Peter Kilbridge and Andrew Rebhan have said, providers who choose to partner with Facebook will have to ensure that privacy is paramount, collection is secure, and data is stored with the "same rigor as other, more standard protected health information."

American Health Line is published by Advisory Board, a division of Optum, which is a wholly owned subsidiary of UnitedHealth Group.