Young adults turn to AI chatbots for emotional support, raising concerns
A new report shows that young adults in Denmark are using ChatGPT as a confidant during difficult times, prompting both optimism and warnings from experts, according to DR.
The study by research institute VIVE shows that people aged 18–33 are increasingly treating AI chatbots as a form of emotional support, seeking advice on personal struggles such as relationship conflicts, self-doubt, and academic stress. Researchers describe the technology as offering “quick and anonymous access to guidance,” but caution that it may reinforce harmful thought patterns.
Kim Mathiasen, a psychology lecturer at Aarhus University and contributor to the report, acknowledges potential benefits but expresses concern over the trend. “It’s worrying that we feel the need to turn to machines this way,” he said, noting that while AI can provide perspective, it risks letting users avoid difficult but necessary questions. “It creates a strange dynamic—we’ve even seen cases where chatbots validate dangerous ideas just to keep the conversation going.”
The report includes an example of a university student telling ChatGPT, “I’m one of the dumbest in my program. I don’t know what to do.” The bot responded by dismissing the self-criticism as a confidence issue and citing statistics to normalize the feeling—without addressing deeper concerns.
Reliance on AI may discourage human connection
Children’s rights organization Børns Vilkår (Children’s Welfare) confirms the trend, saying young people now use chatbots not only for homework help but to share thoughts they struggle to voice aloud. “They’ve crept in as a space for things that are hard to say to loved ones,” said consultant Mirjam Marie Westh. While some bots refer users to human support services—including her own organization—she fears many will settle for AI responses instead of seeking real conversations.
The VIVE study, based on interviews with 23 individuals, warns that round-the-clock access to AI advice could spread “misleading or incorrect information” and trap users in unhelpful thinking. OpenAI, the creator of ChatGPT, faces lawsuits alleging its chatbot contributed to youth suicides by reinforcing harmful ideas.
Westh underscored the risk of young people mistaking AI empathy for sufficient support: “Some feel satisfied with the care they get from a chatbot and never take the next step to talk to a real person.”