It’s now a truism that it’s psychologically unhealthy to spend too much time on social media. The American writer Caitlin Flanagan recently likened her relationship with Twitter to drug addiction: “The simplest definition of an addiction is a habit that you can’t quit, even though it poses obvious danger. … It’s madness.” Is it, clinically? Very intense social-media use can be associated with mental illness, although cause and effect here aren’t straightforward. Simply spending time online doesn’t indicate anything troubling, and can even be beneficial. Yet often, the more outrageous or extremist a post is on social media, the more engagement it gets—and the more attention social-media algorithms bring to it. So, what’s the relationship between social-media use and mental illness?

Tom Chivers is the London-based science editor for UnHerd and the co-author of How to Read Numbers, on the use of statistics in news coverage. According to Chivers, there’s little evidence that social-media use itself is harming mental health. Rather, online dynamics tend to reward extreme, angry, and hyper-emotional behavior, in a way that offline life discourages. This pattern, associated especially with political or activist Twitter, shows up even in online forums on topics like knitting or popular literature. Not infrequently, Chivers finds, the people on social media driving “purity spirals”—a dynamic in which they level extreme accusations of bigotry at anyone they disagree with—openly suffer from conditions as acute as Borderline Personality Disorder. The point, he says, is not that everyone behaving badly online is mentally ill, or that everyone suffering from mental illness uses the internet in a harmful way. It’s that social efforts to raise awareness of mental-health issues have ignored “less-sympathetic” kinds of mental illness. And this has real-life consequences, for both the troubled and those they post about.

Phoebe Maltz Bovy: How does the use of social media affect mental health?

Tom Chivers: There’s a lot of worry about whether social media causes mental-health issues, and the evidence on that is weak. The question is whether there are personality types that, in public life and real human interactions, would be quite difficult to be around but on Twitter are often incentivized and rewarded. Behaving hyper-aggressively in daily life is not okay. Shouting at people and calling them racist or transphobic is not okay. On Twitter, that behavior is often encouraged.

[Chivers raises the example of a recent high-profile online conflict between two writers, one of whom has publicly referred to a diagnosis of Dissociative Identity Disorder, the condition previously categorized as Multiple Personality Disorder.]

Dissociative Identity Disorder is strongly correlated with Borderline Personality Disorder. And both present often as people having extreme swings of emotion. If someone has a setback, they might become extremely depressed about it, or extremely angry, and if something goes well, they become ecstatic—and their opinions about other people swing from extreme to extreme. This is relevant to Twitter.

Bovy: Is the relationship between severe mental illness and social media that you describe mainly confined to certain networks on Twitter, or is something broader happening online?


Chivers: Twitter is where I am most aware of it. I have made brief forays into Tumblr, where you can also see tendencies in communities there to form purity spirals. I’m wary to an extent, because you can risk pathologizing normal bad behavior or declaring the people who disagree with you mentally ill. On the other hand, in certain online communities, there’s a tendency to police people heavily if they have the slightest wrong opinions.

You see it with communities that should be quite kind, and low-drama, like knitting communities or young-adult literature communities and Harry Potter fandom. There are strict ideological lines about who can write which particular form of young-adult novel. Or if people don’t declare loudly enough that they’re anti-racist in knitting forums, they can get thrown out. It’s easy to say this sounds funny, but it’s traumatic for people who have lived their lives in these communities.

People who become the loudest voices are the ones who feel emotions most strongly and who feel most attacked. When online communities become toxic, it’s often driven by people in the storm of a purity spiral, who sometimes have mental-health conditions. If you see some of the worst behavior in these situations, and you go to the people’s social-media feeds, and search for terms related to, for example, Borderline Personality Disorder, you will find a number of them do suffer from mental-health issues.


That doesn’t mean that they’re therefore bad people. But these personality disorders and mental-health issues drive a lot of the most extreme behavior in various online forums. And then other people around them, aware that the group could turn on them if they don’t toe the line, find themselves toeing it. It’s worth considering the hypothesis that these personality types and personality disorders drive a lot of the worst behavior at the center of some of these purity spirals.

Bovy: So is the problem more about the intensity of social-media use than about the purpose of online communities themselves?

Chivers: The problem springs up in almost any online community, but it will tend to be about the most sensitive topics in society as a whole, such as racism or homophobia. Even though the communities will be very different, the fault lines are often the same. They run along extremely charged social-justice topics, because that’s where people are most vulnerable.

As for fault lines in different places, I think the tendencies of U.S. politics are everywhere now. Globally, we’ve imported American attitudes, politics, and divides as though they’re our own.

Bovy: Are there demographic differences in the relationship between social-media use and mental health?


Chivers: There are demographic differences in diagnoses themselves. Borderline Personality Disorder, for instance, is three times as common among women as among men. But men are much more likely to be diagnosed with antisocial personality disorder, which manifests in different kinds of bad behavior online, such as sending horrible messages.

Generally speaking, people who are younger are more likely to do this sort of thing. Young people have greater extremes of emotion and are therefore more likely to say something like, This person is now awful and we must destroy them.

Bovy: How can we separate out cause and effect? Does being very active online harm mental health, or is it that people whose mental health is already compromised are more likely to use social media either a lot or in unhealthy ways?

Chivers: It’s very hard to find a stable relationship between screen time and mental-health issues. If you do proper science, you end up with these caveat-filled headlines, like, Screen use is correlated with slight increases in mental well-being up to a certain level, and then a slight decrease after five hours a day. If there’s a real link between social-media use and negative mental-health outcomes, it’s small and fuzzy, and only at the extremes.


That said, we have a situation in which a subset of people who already have mental-health issues will be driven to behave in destructive, including self-destructive, ways online by the incentive-and-reward structures of social media. That may or may not worsen their mental health, but will probably worsen other people’s lives—and so end up making their own lives harder, by just creating a massive contentious argument around them, which is hard to experience.

Bovy: To what extent do algorithms incentivize social-media posts conveying lots of emotion, or extreme attitudes, to get more engagement?

Chivers: I don’t think algorithms literally read emotional state. Artificial intelligence and machine learning are perfectly capable of doing emotional analysis on tweets these days, though I don’t know whether they do—or whether their analysis would be any good. But what the algorithm undoubtedly does is say, If this social-media post has lots of likes and lots of shares, then we’ll put it in more people’s feeds.

The things that get loads of likes and shares generally are not, I’m not sure about this, or, I don’t feel very strongly about it. If you’re saying, This is a disaster, or, This person is a monster, worse than Hitler, we must utterly drag him out of polite society, that is more likely to get attention. It’s not that the algorithm is looking at your emotions directly; it’s that the emotions of the users engaging with you are triggering the algorithm.
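The feedback loop Chivers describes can be reduced to a toy sketch: the ranking doesn’t read emotion at all, it only counts engagement, yet high-emotion posts win anyway because they attract more likes and shares. Everything below is illustrative (the function names, the weighting of shares over likes, and the example posts are all assumptions); real platform ranking systems are vastly more complex and undisclosed.

```python
# A minimal, hypothetical sketch of engagement-based feed ranking.
# The algorithm never inspects a post's emotional content; it only
# counts likes and shares -- but emotionally extreme posts tend to
# accumulate more of both, so they rise to the top anyway.

def engagement_score(likes: int, shares: int) -> float:
    """Score a post purely by engagement. Weighting shares more
    heavily than likes is an illustrative assumption, not a
    documented platform formula."""
    return likes + 2.0 * shares

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order posts so the highest-engagement ones surface first."""
    return sorted(
        posts,
        key=lambda p: engagement_score(p["likes"], p["shares"]),
        reverse=True,
    )

# Hypothetical example: a hedged post versus an outraged one.
posts = [
    {"text": "I'm not sure about this", "likes": 3, "shares": 0},
    {"text": "This is a disaster", "likes": 120, "shares": 45},
]
ranked = rank_feed(posts)
```

The point of the sketch is that no sentiment analysis is involved anywhere: the outraged post ends up first only because other users engaged with it, which is exactly the indirect mechanism described above.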


Bovy: Can we distinguish how much aggressive online behavior is performance—intentional and aimed at gaining an audience—and how much comes from genuine emotional instability?

Chivers: The blogger John Nerst writes about how misunderstandings and disagreements play out online—and he has a word for it: semi-intentional. You’re asking, How much is it an intentional simulation of real feelings, and how much is real emotion? And the answer is, sometimes it will be people thinking, I will fake emotion now. And sometimes people will be writing through their rage tears. But a lot of the time, it’s semi-intentional. Your brain has many parts and puts forward various ideas.

I would not be surprised if there’s a sizable share of cynical actors, who are posting hyperbolically for cold-blooded gain. But I think they’ll be a much smaller share than those who take audience reaction into account but are genuine in feeling strongly.

Bovy: Are there potential upsides, where posting about mental illness contributes to diminishing stigma around it or allows people to get the help they need—maybe in ways that wouldn’t be possible in their offline lives?


Chivers: There’s a real movement now to build awareness of mental health. But it’s often people saying, Yes, I’ve been really down, I’m really anxious.

And it’s good that people are able to talk about depression and anxiety like this. But in cases where they have less-sympathetic but still very real disorders—like Borderline Personality Disorder, bipolar disorder, or schizophrenia—it’s much harder to talk about these conditions in the way of, Let’s be aware of mental health. And when people do behave badly, it’s hard to say, Well, okay, mental health probably played a role here.

But these are real problems, not just for the people with mental-health issues but for those around them. And only focusing on the more sympathetic versions of mental illness doesn’t do the job here. We should acknowledge that—especially online, where there’s this mechanism that rewards people for damaging behaviors.

Bovy: Can the broader valorization of de-stigmatizing mental illness, however well intended, lead people to share more about their struggles, under their real names, than might be best for them, giving them problems in their offline lives?

Chivers: People are rewarded online for emotional honesty, which is sometimes great. But it also means you get lots of likes for saying, I’m in a really bad place now—which incentivizes people to say things are worse than they are. And sometimes you’ll be putting things out there that, on sober analysis, you might rather weren’t out there. Everything’s a trade-off. I don’t know what the correct balance is. But certainly, editors of publications who encourage people to write about mental-health issues have a real duty of care to make sure writers are not putting out things they will regret putting out. There is no person with a duty of care on social media, except the people around them.