This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can reduce how long it takes to send help.

Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.

Facebook will also use AI to prioritize particularly risky or urgent user reports so they’re more quickly addressed by moderators, and tools to instantly surface local-language resources and first-responder contact info. It’s also dedicating more moderators to suicide prevention, training them to deal with the cases 24/7, and now has 80 local partners like Save.org, National Suicide Prevention Lifeline and Forefront from which to provide resources to at-risk users and their networks.

“This is about shaving off minutes at every single step of the process, especially in Facebook Live,” says VP of product management Guy Rosen. Over the past month of testing, Facebook has initiated more than 100 “wellness checks” with first-responders visiting affected users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”

The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn’t have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying “we have an opportunity to help here so we’re going to invest in that.” There are certainly massive beneficial aspects of the technology, but it’s another space where we have little choice but to hope Facebook doesn’t go too far.

[Update: Facebook’s chief security officer Alex Stamos responded to these concerns with a heartening tweet signaling that Facebook does take seriously the responsible use of AI.

Facebook CEO Mark Zuckerberg praised the product update in a post today, writing that “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”

Unfortunately, after TechCrunch asked if there was a way for users to opt out of having their posts scanned, a Facebook spokesperson responded that users cannot opt out. They noted that the feature is designed to enhance user safety, and that support resources offered by Facebook can be quickly dismissed if a user doesn’t want to see them.]

Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like “Are you OK?” and “Do you need help?”
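Facebook hasn’t published details of its model, but as a rough illustration of the kind of signals described here, the toy Python scorer below combines simple pattern matches over a post’s own text with concerned comments from friends. Every pattern, weight, threshold and function name in it is an assumption for illustration, not Facebook’s actual system.

```python
import re

# Hypothetical illustration only: these patterns, weights and the threshold
# are invented for this sketch; Facebook's real model is not public.
POST_PATTERNS = [
    re.compile(r"\bi (want|wish) to die\b", re.IGNORECASE),
    re.compile(r"\bno reason to (live|go on)\b", re.IGNORECASE),
]
COMMENT_PATTERNS = [
    re.compile(r"\bare you ok(ay)?\b", re.IGNORECASE),   # "Are you OK?"
    re.compile(r"\bdo you need help\b", re.IGNORECASE),  # "Do you need help?"
]

def risk_score(post_text: str, comments: list[str]) -> float:
    """Score a post by pattern hits in its own text and in friends' comments."""
    score = sum(2.0 for p in POST_PATTERNS if p.search(post_text))
    score += sum(1.0 for c in comments for p in COMMENT_PATTERNS if p.search(c))
    return score

def should_flag_for_moderator(post_text: str, comments: list[str]) -> bool:
    # Posts above an arbitrary threshold would be routed to a
    # prevention-trained human moderator rather than handled automatically.
    return risk_score(post_text, comments) >= 2.0
```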

“We’ve talked to mental health experts, and one of the best ways to help prevent suicide is for people in need to hear from friends or family that care about them,” Rosen says. “This puts Facebook in a really unique position. We can help connect people who are in distress to friends and to organizations that can help them.”

How suicide reporting works on Facebook now

Through the combination of AI, human moderators and crowdsourced reports, Facebook could try to prevent tragedies like when a father killed himself on Facebook Live last month. Live broadcasts in particular have the potential to wrongly glorify suicide, hence the new precautions are necessary, and also to affect a large audience, since everyone sees the content simultaneously, unlike recorded Facebook videos that can be flagged and taken down before they’re viewed by many people.

Now, if someone is expressing thoughts of suicide in any type of Facebook post, Facebook’s AI will both proactively detect it and flag it to prevention-trained human moderators, and make reporting options for onlookers more accessible.

When a report comes in, Facebook’s tech can highlight the part of the post or video that matches suicide-risk patterns or that’s receiving concerned comments. That avoids moderators having to skim through a whole video themselves. AI prioritizes user reports as more urgent than other types of content-policy violations, like those depicting violence or nudity. Facebook says that these accelerated reports get escalated to local authorities twice as quickly as unaccelerated reports.
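The article doesn’t describe how this triage is implemented. As a minimal sketch of the idea, assuming a simple priority queue where suicide-risk reports are always dequeued ahead of other content-policy reports (with first-in-first-out order inside each class), it might look like this; all names and priority values are invented for illustration.

```python
import heapq
import itertools

# Assumed priority classes: lower number = handled first. Ranking suicide-risk
# reports above other violations is from the article; the specific classes
# and values are assumptions for this sketch.
PRIORITY = {"suicide_risk": 0, "violence": 1, "nudity": 2}

class ModerationQueue:
    def __init__(self) -> None:
        self._heap: list[tuple[int, int, str]] = []
        self._order = itertools.count()  # FIFO tie-break within a class

    def submit(self, report_type: str, report_id: str) -> None:
        heapq.heappush(
            self._heap, (PRIORITY[report_type], next(self._order), report_id)
        )

    def next_report(self) -> str:
        """Return the most urgent pending report for a moderator."""
        return heapq.heappop(self._heap)[2]

queue = ModerationQueue()
queue.submit("nudity", "report-1")
queue.submit("suicide_risk", "report-2")
assert queue.next_report() == "report-2"  # the suicide-risk report jumps the line
```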

Mark Zuckerberg gets teary-eyed discussing inequality during his Harvard commencement speech in May

Facebook’s tools then bring up local-language resources from its partners, including telephone hotlines for suicide prevention and nearby authorities. The moderator can then contact the responders and try to send them to the at-risk user’s location, surface the mental health resources to the at-risk user themselves or send them to friends who can talk to the user. “One of our goals is to ensure that our team can respond worldwide in any language we support,” says Rosen.
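As a rough sketch of what locale-aware resource routing could look like: the partner names below come from the article, but the mapping, the fallback chain and the function name are all assumptions.

```python
# Partner names are those cited in the article; the locale mapping and
# fallback logic are assumptions for illustration only.
RESOURCES_BY_LOCALE = {
    "en-US": ["National Suicide Prevention Lifeline", "Save.org", "Forefront"],
    "en": ["Save.org"],
}
GLOBAL_DEFAULT = ["Save.org"]  # assumed fallback so moderators always get something

def resources_for(locale: str) -> list[str]:
    """Look up hotline/partner resources for a locale, falling back to the
    base language and then to a global default."""
    if locale in RESOURCES_BY_LOCALE:
        return RESOURCES_BY_LOCALE[locale]
    base = locale.split("-")[0]
    return RESOURCES_BY_LOCALE.get(base, GLOBAL_DEFAULT)

print(resources_for("en-GB"))  # no exact match, falls back to the "en" entry
```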

Back in February, Facebook CEO Mark Zuckerberg wrote that “There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner . . . Artificial intelligence can help provide a better approach.”

With more than 2 billion users, it’s good to see Facebook stepping up here. Not only has Facebook created a way for users to get in touch with and care for each other. It’s also unfortunately created an unmediated real-time distribution channel in Facebook Live that can appeal to people who want an audience for violence they inflict on themselves or others.

Creating a ubiquitous global communication utility comes with responsibilities beyond those of most tech companies, which Facebook seems to be coming to terms with.

Read more: https://techcrunch.com/2017/11/27/facebook-ai-suicide-prevention/