Around the world, a suicide occurs every 40 seconds, and suicide is the second-leading cause of death among 15- to 29-year-olds, according to WHO data. Facebook, the social media giant under investigation by regulators the world over, has since the latter half of 2017 been using algorithms, alongside user reports, to flag possible suicide threats. Its algorithms scan users’ posts, comments and videos for indications of imminent suicide risk. When a post is flagged, whether by the technology or by a concerned user, human reviewers at the company are empowered to alert local law enforcement.

The New York Times reports that possible suicide threat alerts have been sent to police officers from Massachusetts to Mumbai, and, in a Facebook post in November, Mark Zuckerberg said the new tool had already prompted intervention about 3,500 times. Yet Facebook, citing “privacy reasons”, does not track the outcomes of its calls to the police. Nor has it disclosed exactly how its reviewers decide whether to call emergency responders. The public, therefore, does not know what information Facebook collects, how it assesses threats, or whether its actions are proportionate to its estimates of risk. Nor is there an option for users, suicidal or otherwise, to opt out of this data collection. Indeed, of four police reports from Facebook-flagged suicide threats obtained by The Times, only one case was deemed actionable.

While Facebook’s capacity to effect positive change, in terms of curbing suicides, is significant, its efforts need to be transparent and open to scrutiny. After all, there are health researchers studying suicide risk transparently, such as the US Department of Veterans Affairs, which is using AI to analyse the medical records of retired army personnel. In an environment where trust in the social media giant is faltering, it is likely that Facebook’s recent efforts will do little to bolster its image.