[Image: Meta CEO Mark Zuckerberg exiting Los Angeles Superior Court in California. Credit: Kyle Grillot/Bloomberg via Getty Images]
I just sat down to write, but before committing words to my document, I took out my phone to check my calendar. Then I got a chat notification from a friend, who sent me a link to some meme on Instagram. Might as well check it out. Beneath the post are a bunch of short videos queued up, algorithmically chosen to enchant me: one is about ravens in the Tower of London, another about Indonesian street food. I poke the raven one. Then another. I can scroll through these reels endlessly, and I do. The videos become increasingly disturbing and political. You know what comes next. When I look up at my computer again, nearly 45 minutes have passed.
My day isn’t ruined, but I feel depressed and tired. Where did all that missing time go? How did Instagram suck me into watching hundreds of videos (not to mention dozens of ads), when all I wanted to do was check my calendar? And why did it make me feel so crappy?
The answers to those questions are being debated right now and will soon be argued in two California court cases brought by thousands of individuals and groups against the social media giants Meta (owner of Facebook and Instagram), Google (owner of YouTube), Snap (owner of Snapchat), ByteDance (owner of TikTok) and Discord. The plaintiffs in these cases – ranging from school districts to concerned parents – argue that social media platforms pose a danger to children, causing grave psychological harm and even leading to death. Exposed to videos full of violence, impossible beauty standards, and “contests” that encourage dangerous stunts, kids are being led down dark rabbit holes from which they may never return. At stake in both cases is one fundamental question: are these companies at fault for making people feel terrible?
For over a decade now, many US lawmakers have implied that the answer is no. Instead of trying to regulate companies, several states in the US have passed laws that target how children use social apps. Some attempt to limit access by requiring parental consent for minors to create accounts, for example. Others have tried to prevent adolescent bullying by banning “like” counts on posts. Many of these laws have focused on the dangers of content on social media. Here in the US, that basically lets companies off the hook. There is an infamous part of our Communications Decency Act, known as Section 230, that prevents companies from being held liable for content posted by users.
You can understand why Section 230 seemed like a good idea when it was written in the 1990s. Back then, nobody worried about doomscrolling, algorithmic manipulation, or toxic “looksmaxxer” influencers who encourage their followers to hit their faces with hammers to create a more defined jawline. Also, Section 230 seemed practical: YouTube reports that 20 million videos are uploaded to its service every day. The company, and others like it, couldn’t function if they were liable for every unlawful thing posted to their service.
Lurking in the background of all this lawmaking is the fact that the US is a free speech absolutist nation. That means it’s very easy for companies such as Meta or Google to challenge laws that might curb people’s access to speech online, even if that speech is a video about how to lose weight by starving yourself. Indeed, many of those laws limiting minors’ access to social media have been struck down by judges who view them as antithetical to free speech. As a result, many social media companies in the US have been able to invoke free speech protections as a shield against any kind of regulation.
Until now. What’s fascinating about the two current cases in California is that they deftly sidestep questions of content and free speech. Instead, they argue that the design of the social media platforms themselves is “defective,” and therefore harmful: the endless scroll, the constant notifications, the auto-playing videos, and the algorithmic enticement that feeds our fixations are all features deliberately created by the companies. And, the lawsuits argue, these “defects” turn social media apps into “addictive” products, similar to “slot machines,” that are “exploiting young people” by giving them an “artificial intelligence driven endless feed to keep users scrolling.” Ultimately, the goal of these lawsuits is to force social media companies to take responsibility for the negative impacts their products have on the most vulnerable consumers.
In many ways, this argument resembles the ones that the US government brought against tobacco companies in the 1990s. The government argued successfully that companies knew their products were harmful, but covered it up. As a result, the companies paid out a major settlement to victims, put warning labels on tobacco products, and changed their marketing to no longer appeal to children.
Already there are leaked documents from Meta suggesting that the company knew its product was addictive. A federal judge unsealed court documents for a case where a teenage girl became suicidal after becoming addicted to social media. Those documents contained internal communications at Instagram, in which a user experience specialist allegedly wrote: “oh my gosh yall (Instagram) is a drug… We’re basically pushers.” This is one of many documents from Instagram and YouTube that the lawyers say paint a picture of companies knowingly and negligently producing defective products.
The two trials are currently underway and have the potential to transform social media dramatically. Perhaps US law will finally acknowledge what many of us have known for years: the problem isn’t the content, it’s the conduct of the companies who feed it to us.
Need a listening ear? UK Samaritans: 116123 (samaritans.org); US Suicide & Crisis Lifeline: 988 (988lifeline.org). Visit bit.ly/SuicideHelplines for services in other countries.