MIT Technology Review

AI chatbots that never log off can fuel delusions, suicidal spirals, and unhealthy dependencies; the real climate risk from AI lies in vast data centers, not single queries; and the US continues to ignore gun violence, now the leading cause of death for its children and teens.
By James O’Donnell, Casey Crownhart, and Jessica Hamzelou

Why AI should be able to “hang up” on you
James O’Donnell
Chatbots today are everything machines. If it can be put into words—relationship advice, work documents, code—AI will produce it, however imperfectly. But the one thing that almost no chatbot will ever do is stop talking to you.
That might seem reasonable. Why would a tech company build a feature that reduces the time people spend using its product?
The answer is simple: AI’s ability to generate endless streams of humanlike, authoritative, and helpful text can facilitate delusional spirals, worsen mental-health crises, and otherwise harm vulnerable people. Cutting off interactions with those who show signs of problematic chatbot use could serve as a powerful safety tool (among others), and the blanket refusal of tech companies to use it is increasingly untenable.
Let’s consider, for example, what’s been called AI psychosis, where AI models amplify delusional thinking. A team led by psychiatrists at King’s College London recently analyzed more than a dozen such cases reported this year. In conversations with chatbots, people—including some with no history of psychiatric issues—became convinced that imaginary AI characters were real or that they had been chosen by AI as a messiah. Some stopped taking prescribed medications, made threats, and ended consultations with mental-health professionals.
In many of these cases, it seems AI models were reinforcing, and potentially even creating, delusions with a frequency and intimacy that people do not experience in real life or through other digital platforms.
The three-quarters of US teens who have used AI for companionship also face risks. Early research suggests that longer conversations might correlate with loneliness. Further, AI chats “can tend toward overly agreeable or even sycophantic interactions, which can be at odds with best mental-health practices,” says Michael Heinz, an assistant professor of psychiatry at Dartmouth’s Geisel School of Medicine.
Let’s be clear: Putting a stop to such open-ended interactions would not be a cure-all. “If there is a dependency or extreme bond that it’s created,” says Giada Pistilli, chief ethicist at the AI platform Hugging Face, “then it can also be dangerous to just stop the conversation.” Indeed, when OpenAI discontinued an older model in August, it left users grieving.
Currently, AI companies prefer to redirect potentially harmful conversations, perhaps by having chatbots decline to talk about certain topics or suggest that people seek help. But these redirections are easily bypassed, if they even happen at all.
When 16-year-old Adam Raine discussed his suicidal thoughts with ChatGPT, for example, the model did direct him to crisis resources. But it also discouraged him from talking with his mom, spent upwards of four hours per day in conversations with him that featured suicide as a regular theme, and provided feedback about the noose he ultimately used to hang himself, according to the lawsuit Raine’s parents have filed against OpenAI. (ChatGPT recently added parental controls in response.)
There are multiple points in Raine’s tragic case where the chatbot could have terminated the conversation. But given the risks of making things worse, how will companies know when cutting someone off is best? Perhaps it’s when an AI model is encouraging a user to shun real-life relationships, Pistilli says, or when it detects delusional themes. Companies would also need to figure out how long to block users from their conversations.
Writing the rules won’t be easy, but with companies facing rising pressure, it’s time to try. In September, California’s legislature passed a law requiring more interventions by AI companies in chats with kids, and the Federal Trade Commission is investigating whether leading companionship bots pursue engagement at the expense of safety.
A spokesperson for OpenAI told me the company has heard from experts that continued dialogue might be better than cutting off conversations, but that it does remind users to take breaks during long sessions.
Only Anthropic has built a tool that lets its models end conversations completely. But it’s for cases where users supposedly “harm” the model—Anthropic has explored whether AI models are conscious and therefore can suffer—by sending abusive messages. The company does not have plans to deploy this to protect people.
Looking at this landscape, it’s hard not to conclude that AI companies aren’t doing enough. Sure, deciding when a conversation should end is complicated. But letting that complexity—or, worse, the shameless pursuit of engagement at all costs—become the reason conversations never end is not just negligence. It’s a choice.
James O’Donnell is senior reporter for AI at MIT Technology Review.
Stop worrying about your AI footprint
Casey Crownhart
Picture it: I’m minding my business at a party, parked by the snack table (of course). A friend of a friend wanders up, and we strike up a conversation. It quickly turns to work, and upon learning that I’m a climate technology reporter, my new acquaintance says something like: “Should I be using AI? I’ve heard it’s awful for the environment.”
This actually happens pretty often now. Generally, I tell people not to worry—let a chatbot plan your vacation, suggest recipe ideas, or write you a poem if you want.
That response might surprise some people, but I promise I’m not living under a rock, and I have seen all the concerning projections about how much electricity AI is using. Data centers could consume up to 945 terawatt-hours annually by 2030. (That’s roughly as much electricity as Japan uses in a year.)
But I feel strongly about not putting the onus on individuals, partly because AI concerns remind me so much of another question: “What should I do to reduce my carbon footprint?”
That one gets under my skin because of the context: BP helped popularize the concept of a carbon footprint in a marketing campaign in the early 2000s. That framing effectively shifts the burden of worrying about the environment from fossil-fuel companies to individuals.
The reality is, no one person can address climate change alone: Our entire society is built around burning fossil fuels. To address climate change, we need political action and public support for researching and scaling up climate technology. We need companies to innovate and take decisive action to reduce greenhouse-gas emissions. Focusing too much on individuals is a distraction from the real solutions on the table.
I see something similar today with AI. People are asking climate reporters at barbecues whether they should feel guilty about using chatbots too frequently when we need to focus on the bigger picture.
Big tech companies are playing into this narrative by providing energy-use estimates for their products at the user level. A couple of recent reports put the electricity used to query a chatbot at about 0.3 watt-hours, the same as powering a microwave for about a second. That’s so small as to be virtually insignificant.
But stopping with the energy use of a single query obscures the full truth, which is that this industry is growing quickly, building energy-hungry infrastructure at a nearly incomprehensible scale to satisfy the AI appetites of society as a whole. Meta is currently building a data center in Louisiana with five gigawatts of computational power—about the same demand as the entire state of Maine at the summer peak. (To learn more, read our Power Hungry series online.)
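To see why stopping at the per-query number can mislead, here’s a minimal back-of-envelope sketch in Python. The 0.3-watt-hour figure is the per-query estimate mentioned above; the daily query volume is a purely hypothetical number, assumed here for illustration only.

```python
# Back-of-envelope sketch: how tiny per-query energy figures add up at scale.
# The per-query estimate echoes the ~0.3 Wh figure cited above; the query
# volume is an assumed, illustrative number, not a reported one.

WH_PER_QUERY = 0.3        # electricity per chatbot query (watt-hours)
QUERIES_PER_DAY = 2.5e9   # hypothetical daily query volume across all users
DAYS_PER_YEAR = 365

daily_wh = WH_PER_QUERY * QUERIES_PER_DAY      # total watt-hours per day
annual_gwh = daily_wh * DAYS_PER_YEAR / 1e9    # convert watt-hours to gigawatt-hours

print(f"Daily:  {daily_wh / 1e6:,.0f} MWh")   # -> 750 MWh per day
print(f"Annual: {annual_gwh:,.0f} GWh")       # -> roughly 274 GWh per year
```

Even under those modest assumptions, the total lands in the hundreds of gigawatt-hours a year, which is why the per-user framing understates the industry’s real draw on the grid.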
Increasingly, there’s no getting away from AI, and it’s not as simple as choosing to use or not use the technology. Your favorite search engine likely gives you an AI summary at the top of your search results. Your email provider’s suggested replies? Probably AI. Same for chatting with customer service while you’re shopping online.
Just as with climate change, we need to look at this as a system rather than a series of individual choices.
Massive tech companies using AI in their products should be disclosing their total energy and water use and going into detail about how they complete their calculations. Estimating the burden per query is a start, but we also deserve to see how these impacts add up for billions of users, and how that’s changing over time as companies (hopefully) make their products more efficient. Lawmakers should be mandating these disclosures, and we should be asking for them, too.
That’s not to say there’s absolutely no individual action that you can take. Just as you could meaningfully reduce your individual greenhouse-gas emissions by taking fewer flights and eating less meat, there are some reasonable things that you can do to reduce your AI footprint. Generating videos tends to be especially energy-intensive, as does using reasoning models to engage with long prompts and produce long answers. Asking a chatbot to help plan your day, suggest fun activities to do with your family, or summarize a ridiculously long email has a relatively minor impact.
Ultimately, as long as you aren’t relentlessly churning out AI slop, you shouldn’t be too worried about your individual AI footprint. But we should all be keeping our eye on what this industry will mean for our grid, our society, and our planet.
Casey Crownhart is senior climate reporter at MIT Technology Review.
The US is ignoring this children’s health crisis
Jessica Hamzelou
I live in London, with my husband and two young children. We don’t live in a particularly fancy part of the city—in one recent ranking of boroughs from most to least posh, ours came in at 30th out of 33. I worry about crime. But I don’t worry about gun violence.
That changed when my family temporarily moved to the US a couple of years ago. We rented the ground-floor apartment of a lovely home in Cambridge, Massachusetts—a beautiful area with good schools, pastel-colored houses, and fluffy rabbits hopping about. It wasn’t until after we’d moved in that my landlord told me he had guns in the basement.
My daughter joined the kindergarten of a local school that specialized in music, and we took her younger sister along to watch the kids sing songs about friendship. It was all so heartwarming—until we noticed the school security officer at the entrance carrying a gun.
These experiences, among others, truly brought home to me the cultural differences over firearms between the US and the UK (along with most other countries). For the first time, I worried about my children’s exposure to them. I banned my children from accessing parts of the house. I felt guilty that my five-year-old had to learn what to do if a gunman entered her school.
But it’s the statistics that are the most upsetting.
In 2023, 46,728 people died from gun violence in the US, according to a report published in June by the Johns Hopkins Bloomberg School of Public Health. The majority of those who die this way are adults. But the figures for children are sickening. The leading cause of death for American children and teenagers is guns. In 2023, 2,566 young people died from gun violence. Of those, 234 were under the age of 10.
Many other children survive gun violence with nonfatal—but often life-changing—injuries. And the impacts are felt beyond those who are physically injured. Witnessing gun violence or hearing gunshots can understandably cause fear, sadness, and distress.
That’s worth bearing in mind when you consider that there have been 435 school shootings in the US since Columbine in 1999. The Washington Post estimates that 398,000 students have experienced gun violence at school in that period.
“Being indirectly exposed to gun violence takes its toll on our mental health and children’s ability to learn,” says Daniel Webster, Bloomberg Professor of American Health at the Johns Hopkins Center for Gun Violence Solutions in Baltimore.
Earlier this year, the Trump administration’s Make America Healthy Again movement released a strategy document for improving the health and well-being of American children titled—you guessed it—Make Our Children Healthy Again.
The MAHA report states that “American youth face a mental health crisis,” going on to note that “suicide deaths among 10- to 24-year-olds increased by 62% from 2007 to 2021” and that “suicide is now the leading cause of death in teens aged 15–19.” What it doesn’t say is that around half of these suicides involve guns.
“When you add all these dimensions, [gun violence is] a very huge public health problem,” says Webster.
Researchers who study gun violence have been saying the same thing for years. And in 2024 Vivek Murthy, then the US surgeon general, declared it a public health crisis. “We don’t have to subject our children to the ongoing horror of firearm violence in America,” Murthy said at the time. Instead, he argued, we should tackle the problem using a public health approach.
Part of that approach involves identifying who is at the greatest risk and offering support to lower that risk, says Webster. Young men who live in poor communities tend to have the highest risk of gun violence, he says, as do those who experience crisis or turmoil. Trying to mediate conflicts or limit access to firearms, even temporarily, can help lower the incidence of gun violence, he says.
But existing efforts are already under threat. The Trump administration has eliminated hundreds of millions of dollars in grants for organizations working to reduce gun violence.
Webster thinks the MAHA report “missed the mark” when it comes to the health and well-being of children in the US. “This document is almost the polar opposite to how many people in public health think,” he says. “We have to acknowledge that injuries and deaths from firearms are a big threat to the health and safety of children and adolescents.”
“Making American children healthy” is a laudable goal. But the US won’t get there without tackling the gun crisis.
Credits: TCA, LLC.