
How Artificial Intelligence is used to identify tendencies and prevent suicide

Social media company Meta helped save the life of a young woman in Lucknow last week after its systems detected signs of suicidal intent in her online behaviour. The woman had posted a distressing video on Instagram in which she had a noose around her neck. Alerted by Meta AI (Artificial Intelligence), Lucknow Police took action to prevent her from committing suicide. According to the police, the 21-year-old was upset over allegedly being abandoned by her husband.
This is not the first time Meta AI has saved someone’s life. In May this year, Kota police in Rajasthan tied up with Meta to identify students exhibiting suicidal tendencies on the company’s social media platforms, Facebook and Instagram, allowing for timely intervention. Soon after the collaboration began, the police said they had already prevented a student from Jhunjhunu from committing suicide in India’s “coaching hub”, where young people from across the country come to prepare for competitive exams.
Meta, the parent company of Facebook and Instagram, uses its technology to flag posts and browsing behaviour indicating that someone may be contemplating suicide. It keeps an eye on suicide-related searches, for example “How to end my life?” or “How to commit suicide?”. In such cases, the platform sends the user a message with information on how to reach support services such as the Suicide and Crisis Lifeline. The police are informed if Meta’s team considers it necessary.
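As a rough illustration of that search-intercept flow, the sketch below matches a query against a small list of crisis phrases and returns a support message on a hit. The phrase list, message text, and function name are assumptions made for illustration, not Meta’s actual implementation.

```python
# Illustrative sketch of intercepting crisis-related searches and
# responding with support resources. CRISIS_PHRASES, SUPPORT_MESSAGE,
# and handle_search are hypothetical names, not Meta's real API.
CRISIS_PHRASES = ("how to end my life", "how to commit suicide")

SUPPORT_MESSAGE = (
    "Help is available. You can reach the Suicide and Crisis Lifeline "
    "by calling or texting 988 (in the US)."
)

def handle_search(query: str):
    """Return a support message if the query suggests the user is in crisis."""
    q = query.lower()
    if any(phrase in q for phrase in CRISIS_PHRASES):
        return SUPPORT_MESSAGE
    return None  # ordinary query; no intervention

print(handle_search("How to end my life?"))  # prints the support message
```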

Use of rapid technology

The technology Meta uses to identify possible suicide and self-harm content is integrated into both Facebook and Instagram, covering regular posts as well as Live broadcasts. If someone in a live video appears to be considering self-harm, viewers can reach out to the person directly and report the video to Meta.
Whether a post is reported by a concerned friend or family member or identified by machine learning, the next step is the same: review. A member of the Community Operations team reviews the report for imminent risk of self-harm or policy violations. In serious cases, the team works with emergency services to conduct a wellness check. With this technology, the company has been able to help first responders reach people in distress quickly.
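The review step can be pictured as answering two questions, as in the hypothetical sketch below; the outcome names and decision rule are assumptions for illustration, not Meta’s documented process.

```python
# Hypothetical sketch of the human review step: every report, whether
# filed by a friend or flagged by machine learning, flows through the
# same triage. Outcome names and logic are illustrative assumptions.
from enum import Enum, auto

class Outcome(Enum):
    NO_ACTION = auto()
    REMOVE_CONTENT = auto()   # policy violation, but no imminent risk
    WELLNESS_CHECK = auto()   # imminent risk: work with emergency services

def review_report(violates_policy: bool, imminent_risk: bool) -> Outcome:
    """A reviewer's decision, reduced to its two key questions."""
    if imminent_risk:
        return Outcome.WELLNESS_CHECK
    if violates_policy:
        return Outcome.REMOVE_CONTENT
    return Outcome.NO_ACTION

print(review_report(violates_policy=False, imminent_risk=True))
# Outcome.WELLNESS_CHECK -> the team coordinates with first responders
```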

Machine learning

A few years ago, Meta introduced a machine-learning model, built with expert input, to detect suicide-related keywords such as “kill,” “goodbye,” and “depressed.” Because these words are also used in non-harmful contexts, Meta’s Community Operations team filters the flagged content manually. The AI tool has since been trained to detect suicidal patterns with better accuracy: it scrutinizes comments and checks patterns in previous posts, including their timing, to assess whether the user is in immediate danger.
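A minimal sketch of such keyword-plus-context scoring appears below. The keywords, weights, and threshold are invented for demonstration; Meta’s production models are far more sophisticated.

```python
# Sketch of keyword-based risk flagging combined with simple context
# signals (posting time, concerned comments). All weights and the
# threshold are illustrative assumptions, not Meta's real parameters.
from dataclasses import dataclass

RISK_KEYWORDS = {"kill", "goodbye", "depressed", "end my life"}

@dataclass
class Post:
    text: str
    hour_posted: int             # 0-23, local time
    concerned_comments: int = 0  # e.g. replies like "are you ok?"

def risk_score(post: Post) -> float:
    """Combine keyword hits with contextual signals into a rough score."""
    text = post.text.lower()
    score = sum(1.0 for kw in RISK_KEYWORDS if kw in text)
    if 0 <= post.hour_posted <= 4:  # late-night posting as a weak signal
        score += 0.5
    score += 0.25 * post.concerned_comments
    return score

def needs_human_review(post: Post, threshold: float = 1.5) -> bool:
    # Keywords alone are ambiguous ("this movie killed me"), so the
    # score only routes posts to a human reviewer, never to auto-action.
    return risk_score(post) >= threshold

print(needs_human_review(Post("Goodbye everyone, I can't go on", 2, 3)))  # True
print(needs_human_review(Post("this movie killed me, so funny", 14)))     # False
```

Note that a bare keyword hit (“killed” in the second example) stays below the threshold; it is the combination of signals that routes a post to review.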
Using machine-learning technology, Meta has expanded its ability to identify possible suicide or self-injury content, and in many countries this technology has been used to get timely help to people in need. It relies on pattern-recognition signals, such as phrases and comments of concern, to identify possible distress.
Artificial Intelligence (AI) is also used to prioritize reported posts, videos, and livestreams awaiting review by the Meta team. By ranking and streamlining reports, the system escalates the most urgent content to the Community Operations team, which can quickly decide whether there is a policy violation and whether to recommend contacting local emergency responders.
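That kind of triage can be modelled as a priority queue keyed on a risk score, as in the sketch below; the field names and scores are hypothetical.

```python
# Sketch of report triage with a priority queue. Field names and scores
# are hypothetical; the point is surfacing the most urgent reports first.
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so the heap never compares dicts
_queue = []

def enqueue_report(report: dict) -> None:
    # Higher risk_score means more urgent; heapq is a min-heap, so negate.
    heapq.heappush(_queue, (-report["risk_score"], next(_counter), report))

def next_report() -> dict:
    """Pop the most urgent report for the Community Operations team."""
    _, _, report = heapq.heappop(_queue)
    return report

enqueue_report({"id": 1, "risk_score": 0.4, "type": "post"})
enqueue_report({"id": 2, "risk_score": 0.9, "type": "livestream"})
enqueue_report({"id": 3, "risk_score": 0.7, "type": "video"})
print(next_report()["id"])  # 2 -- the livestream with the highest risk score
```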
“It also lets our reviewers prioritise and evaluate urgent posts, contacting emergency services when members of our community might be at risk of harm. Speed is critical,” the company says in a blog post.