Facebook Is Using AI to Prevent Suicides, But Privacy Advocates Are Concerned

Tuesday, 15 January 2019 - 12:12PM
Artificial Intelligence
Image: Pixabay
We talk a lot about the potential dangers of artificial intelligence and the evils of social media, but there are some genuinely positive things happening at the intersection of the two, namely over at Facebook. The social media giant launched a project back in 2017 that uses an algorithm to assess whether a user is a suicide risk based on his or her posting habits. In theory, the program is an unambiguous good, but privacy experts are now questioning whether it should exist at all.

According to Business Insider, Facebook's AI sees almost every piece of content posted to the site and assigns it a rating on a scale from zero to one, with one being the highest probability of "imminent harm." If Facebook believes a user is at risk, it forwards that information to local authorities, who then decide whether to locate the user and make sure they are OK. Privacy advocates are arguing for regulations that would hold the program, and others like it, to the same standards as companies that handle health information. Your health information (at least in the United States) is protected by the Health Insurance Portability and Accountability Act (HIPAA), which comes with rules about how it must be stored, shared, and so on, but those rules do not currently apply to Facebook because it does not provide healthcare services as defined by the act.
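To picture the reported flow, here is a minimal sketch of a score-and-escalate pipeline of the kind described above. Everything in it is an assumption made for illustration: the function names, the 0.8 cutoff, and the keyword-matching "classifier" all stand in for Facebook's unpublished model and thresholds.

    # A minimal sketch, assuming a score-and-escalate flow like the one reported.
    # Names, the 0.8 cutoff, and the keyword "classifier" are placeholders;
    # Facebook has not published its model or its thresholds.

    from dataclasses import dataclass

    @dataclass
    class Post:
        user_id: str
        text: str

    # Hypothetical escalation threshold -- the real cutoff is not public.
    ESCALATION_THRESHOLD = 0.8

    def risk_score(post: Post) -> float:
        """Stand-in for the classifier: returns a score from 0.0 to 1.0,
        where 1.0 means the highest probability of 'imminent harm'."""
        # A real system would run a trained text model; this placeholder
        # simply flags a couple of example phrases.
        phrases = ("i want to end it all", "no reason to go on")
        return 1.0 if any(p in post.text.lower() for p in phrases) else 0.1

    def escalate_for_review(post: Post, score: float) -> None:
        # High scores go to human reviewers, who may contact local authorities.
        print(f"Escalating user {post.user_id} (score={score:.2f}) for review")

    def handle_post(post: Post) -> None:
        score = risk_score(post)
        if score >= ESCALATION_THRESHOLD:
            escalate_for_review(post, score)

    handle_post(Post(user_id="u123", text="There is no reason to go on."))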

"I think this should be considered sensitive health information," Center for Democracy and Technology policy analyst Natasha Duarte told Business Insider. "Anyone who is collecting this type of information or who is making these types of inferences about people should be considering it as sensitive health information and treating it really sensitively as such." Facebook has had its share of problems in the recent past in terms of what it is or isn't doing with the private information of its massive user base, and with the mental health data, the same issue of transparency has come up. A rep told Business Insider that scores that don't warrant concern/review are stored and deleted after 30 days, but did not comment about the storage of data that generates higher scores or warrants further action. One of the concerns of analysts and privacy advocates is that if Facebook is hacked, someone somewhere will have that information and can use it in a number of nefarious ways. Another concern is that of "false positive" risk assessment leading to unnecessary police involvement. 

The suicide watch algorithm has already been banned in the European Union because of protections granted under the General Data Protection Regulation (GDPR), and people like Duarte are fighting to see changes made in the United States. "It's one of the big gaps that we have in privacy protections in the US," she said, "that sector by sector there's a lot of health information or pseudo-health information that falls under the auspices of companies that aren't covered by HIPAA, and there's also the issue of information that isn't facially health information but is used to make inferences or health determinations, which is currently not being treated with the sensitivity that we'd want for health information."