There is plenty that scientists don’t understand about the long-term effects of COVID-19 on society. But a year in, at least one thing seems clear: the pandemic has been terrible for our collective mental health — and a surprising number of tech platforms seem to have given the issue very little thought.
First, the numbers. Nature reported that the share of adults in the United Kingdom showing symptoms of depression had nearly doubled from March to June of last year, to 19 percent. In the United States, 11 percent of adults reported feeling depressed between January and June 2019; by December 2020, that figure had nearly quadrupled, to 42 percent.
Prolonged isolation created by lockdowns has been linked to disruptions in sleep, increased drug and alcohol use, and weight gain, among other symptoms. Preliminary data about suicides in 2020 is mixed, but the number of drug overdoses soared, and experts believe many were likely intentional. Even before the pandemic, Glenn Kessler reports at The Washington Post, “suicide rates had increased in the United States every year since 1999, for a gain of 35 percent over two decades.”
Issues related to suicide and self-harm touch nearly every digital platform in some way. The internet is increasingly where people search, discuss, and seek support for mental health issues. But according to new research from the Stanford Internet Observatory, in many cases, platforms have no policies related to discussion of self-harm or suicide at all.
In “Self-Harm Policies and Internet Platforms,” the authors surveyed 39 online platforms to understand their approach to these issues. They analyzed search engines, social networks, performance-oriented platforms like TikTok, gaming platforms, dating apps, and messaging apps. Some platforms have developed robust policies to cover the nuances of these issues. Many, though, have ignored them altogether.
“There is vast unevenness in the comprehensiveness of public-facing policies,” write Shelby Perkins, Elena Cryst, and Shelby Grossman. “For example, Facebook policies address not only suicide but also euthanasia, suicide notes, and livestreaming suicide attempts. In contrast, Instagram and Reddit have no policies related to suicide in their primary policy documents.”
Among the platforms surveyed, Facebook was found to have the most comprehensive policies. But researchers faulted the company for unclear policies at its Instagram subsidiary: technically, the parent company's policies apply to both platforms, but Instagram maintains a separate set of policies that do not explicitly mention posting about suicide, creating some confusion.
Still, Facebook is miles ahead of some of its peers. Reddit, Parler, and Gab were found to have no public policies related to posts about self-harm, eating disorders, or suicide. That doesn’t necessarily mean that the companies have no policies whatsoever. But if they aren’t posted publicly, we may never know for sure.
In contrast, researchers said that what they call “creator platforms” — YouTube, TikTok, and Twitch — have developed smart policies that go beyond simple promises to remove disturbing content. The platforms offer meaningful support in their policies both for people who are recovering from mental health issues and those who may be considering self-harm, the authors said.
“Both YouTube and TikTok are explicit in allowing creators to share their stories about self-harm to raise awareness and find community support,” they wrote. “We were impressed that YouTube’s community guidelines on suicide and self-injury provide resources, including hotlines and websites, for those having thoughts of suicide or self-harm, for 27 countries.”
Outside the biggest platforms, though, it’s all a toss-up. Researchers could not find public policies for suicide or self-harm for Nextdoor or Clubhouse. Dating apps? Grindr and Tinder have policies about self-harm; Scruff and Hinge don’t. Messaging apps tend not to have any such public policies, either — iMessage, Signal, and WhatsApp don’t. (The fact that all of them use some form of encryption likely has a lot to do with that.)
Why does all of this matter? In an interview, the researchers told me there are at least three big reasons. One is essentially a question of justice: if people are going to be punished for the ways in which they discuss self-harm online, they ought to know that in advance. Two is that policies offer platforms a chance to intervene when their users are considering hurting themselves. (Many do offer users links to resources that can help them in a time of crisis.) And three is that we can’t develop more effective policies for addressing mental health issues online if we don’t know what the policies are.
And moderating these kinds of posts can be quite tricky, researchers said. There’s often a fine line between posts that discuss self-harm and those that appear to encourage it.
“The same content that could show someone recovering from an eating disorder is something that can also be triggering for other people,” Grossman told me. “That same content could just affect users in two different ways.”
But you can’t moderate if you don’t even have a policy, and I was surprised, reading this research, at just how many companies don’t.
This has turned out to be a kind of policy week here at Platformer. We talked about how Clarence Thomas wants to blow up platform policy as it exists today; how YouTube is shifting the way it measures harm on the platform (and discloses it); and how Twitch developed a policy for policing creators’ behavior on other platforms.
What strikes me about all of this is just how fresh it feels. We’re more than a decade into the platform era, but there are still so many big questions to figure out. And even on the most serious of subjects — how to address content related to self-harm — some platforms haven’t even entered the discussion.
The Stanford researchers told me they believe they are the first people to even attempt to catalog self-harm policies among the major platforms and make them public. There are doubtless many other areas where a similar inventory would serve the public good. Private companies still hide too much, even and especially when they are directly implicated in questions of public interest.
In the future, I hope these companies collaborate more — learning from one another and adopting policies that make sense for their own platforms. And thanks to the Stanford researchers, at least on one subject, they can now find all of the existing policies in a single place.
This column was co-published with Platformer, a daily newsletter about Big Tech and democracy.