Customers prefer human service over AI for complaints, new research finds

Customers overwhelmingly prefer human customer service representatives over AI chatbots when filing complaints, according to a comprehensive study published in a Sage journal and reported by Danish broadcaster DR. The findings challenge assumptions about automation in service industries, revealing that while chatbots perform adequately in some scenarios, they fall short in handling emotionally charged interactions.

The research, led by Holger Roschk, a professor at Aalborg University Business School, analyzed 327 experimental studies involving nearly 282,000 participants. Results showed that people generally view automated agents more negatively than human ones, particularly when seeking empathy or resolution for grievances.

“Customers often expect more than just a refund when they complain—they want emotional understanding, which a chatbot cannot provide,” Roschk explained. He noted that while AI struggles with nuanced complaints, it excels in discrete transactions, such as purchases of sensitive products like condoms, where anonymity is preferred. “In these cases, the mechanical nature of chatbots can actually be disarming,” he added.

The study also highlighted a disconnect between corporate cost-cutting priorities and customer preferences. Kasper Lynge Jacobsen, chief AI analyst at Dansk Erhverv, observed that many companies rush to implement AI solutions to reduce payroll expenses, despite limited demand from customers. “There’s a fundamental misalignment here,” he said. “Management pushes for AI adoption, but the technology often isn’t tailored to real user needs.”

Jacobsen emphasized that the most effective systems integrate AI and human agents seamlessly. For example, chatbots can handle initial inquiries and gather details before transferring customers to human representatives for resolution. “The best outcomes occur when AI and humans work together in a hybrid setup,” he said.

The research follows high-profile failures, such as a 2024 incident where a DPD chatbot used offensive language with a customer, and Klarna’s decision to rehire human agents after its AI service underperformed. These cases underscore the risks of over-reliance on untested automation, the experts warned.

Source: DR