Medical experts recruited to advise Crisis Text Line say they weren’t consulted about its data sharing and object to the arrangement. “This would never have passed a sniff test,” one says.
Five years after a top suicide-prevention hotline got off the ground, it spun out a for-profit arm, in part to help keep the lights on.
Crisis Text Line was an important service with an admirable mission, and it was worthy of the support it received from the tech sector. But it is no surprise that ethical questions have been raised about the personal data it shared with its commercial spin-off, Loris. The organization’s defenses are threefold: (1) the data does not contain personal identifiers; (2) it announced its plans in 2018; and (3) its privacy statement disclosed that sharing would occur. But these defenses raise two deeper questions: (i) does consent have a spirit that extends beyond a legal contract? And (ii) can an activity be ethical (as defined by ethics standards) and legal (as defined by privacy legislation) yet still be in bad faith toward participants?
Crisis Text Line started as the passion project of a tech entrepreneur with close ties to Silicon Valley and backing from some of its best-known founders and billionaires. The nonprofit would also strike partnerships with Meta and Google, two of the most powerful companies on the planet, and companies that helped pioneer the collection and sale of individuals’ personal information as a standard way to generate revenue.
It’s perhaps no surprise, then, that Crisis Text Line’s Loris spinoff, formed in 2018, was supported by the tech industry and widely applauded in tech publications from the Bay Area to New York, where the nonprofit is based. And against the backdrop of Silicon Valley, where data sharing had become the norm, it also may not have occurred to Crisis Text Line that some might view what it was doing as wrong.