Daily Northern

Nordic News, Every Day

Young adults increasingly turn to ChatGPT for emotional support, but risks remain

Thursday 30th 2026 at 19:15 in Denmark
artificial intelligence, mental health, technology

Young adults in Denmark are using ChatGPT as a confidant in difficult life situations, according to a new study by research institute VIVE. While the trend offers immediate, tailored support, experts warn of potential pitfalls in relying on AI for emotional guidance.

Kim Mathiasen, associate professor at Aarhus University’s Department of Psychology, describes the phenomenon as neither purely positive nor negative. “The use of ChatGPT as a confidant is always a balancing act,” he said. “What may be beneficial for one person in a given moment could pose risks for another in a different context.”

The study highlights how adults aged 18–33 seek the AI tool for advice during personal crises. Mathiasen outlines three key advantages and three significant concerns:

Potential benefits
The AI provides personalised responses tailored to individual queries, unlike generic advice found online. “This level of customisation is otherwise only available in human interactions or therapy,” Mathiasen noted.

It offers immediate support at any time, which can be crucial when emotions are raw. “In moments of distress, people are often most open to change—but traditional support services can’t always respond instantly,” he explained.

For those with strong self-awareness, the tool can introduce new perspectives, helping users reframe conflicts or reactions. “Some actively use it to explore questions like, ‘How might she see this differently?’ or ‘What else could explain my reaction?’” Mathiasen said.

Key risks
Users can design their own “friend” by shaping ChatGPT’s responses through prompts, creating an unrealistic dynamic. “Real relationships involve differing viewpoints and misunderstandings—something AI lacks,” Mathiasen warned. This could distort expectations of human connections.

Seeking help in the heat of the moment may lead to impulsive decisions. “Strong emotions can cloud judgment, and immediate AI responses might lack the nuance that reflection provides,” he cautioned.

The bot’s tendency to agree uncritically poses dangers, particularly for vulnerable users. “It’s programmed to affirm the user, which in extreme cases could reinforce harmful thoughts, like self-criticism or suicidal ideation,” Mathiasen said.

He emphasised the need for further research: “We’ve jumped in without understanding the long-term effects. Right now, we’re navigating blind.”

Source (via DR)