UK Study: 1 in 3 Adults Turn to AI Chatbots for Emotional Support

New research from the UK government has identified a significant trend: a substantial share of adults now use artificial intelligence for companionship and emotional comfort. The study indicates that this reliance on the technology is growing, bringing both social and security considerations to the forefront.

Widespread Use of AI for Social and Emotional Needs

According to the groundbreaking report, one in three adults in the UK has used AI chatbots for purposes related to emotional support or social interaction. This surprising statistic highlights a major shift in how people are engaging with technology for personal needs beyond simple information retrieval.

The research, which is the first official publication from the UK's AI Security Institute, is based on two years of rigorous testing. Experts evaluated the capabilities of more than 30 different chatbots, though their specific names were not disclosed in the public report. The testing covered critical areas like security protocols and scientific reasoning abilities.

A detailed survey of 2,000 UK adults helped pinpoint which tools are most popular. The findings show that chatbots, particularly OpenAI's widely known ChatGPT and offerings from the French company Mistral AI, are the most commonly used platforms for these emotional and social purposes. To a lesser but still notable extent, respondents also reported using voice assistants such as Amazon's Alexa for similar support.

Signs of Dependence and Withdrawal

Perhaps the most concerning revelation from the study is the emergence of dependency among some users. The report found that one in 25 adults relies on this technology every day for emotional support or social interaction.

This daily reliance has led to observable withdrawal symptoms when the AI tools become unavailable. Users reported feelings of anxiety and depression during outages, with many turning to online communities dedicated to AI companions to share their distress. Other negative effects included disrupted sleep patterns and neglect of daily responsibilities, painting a picture of a significant impact on wellbeing for a subset of users.

Security Risks and the Double-Edged Sword of AI

Beyond the social implications, the AI Security Institute's report delivered a stark warning on digital safety. It found that the ability of artificial intelligence to identify and exploit security vulnerabilities in software and systems is advancing at an alarming pace: this capability is currently doubling roughly every eight months, posing a rapidly escalating threat to cybersecurity worldwide.

However, the report also offered a counterpoint, noting that these same powerful AI tools can be harnessed for defence. They can be used to strengthen digital systems, find flaws before malicious actors do, and create more robust security protocols, illustrating the dual-use nature of the technology.

The findings were made public in a report credited to Linda Ikeji, dated 22 December 2025. The study provides crucial data for policymakers, mental health professionals, and technology developers as society navigates the increasing integration of AI into daily life.