How many Britons know AI lies?

Matthew Smith, Head of Data Journalism
July 21, 2025, 10:19 AM GMT+0

One in nine Britons who use AI for fact-finding on a daily basis say they have "never" encountered it hallucinating

While AI chatbots like ChatGPT and DeepSeek are becoming increasingly common, a major concern is their tendency to ‘hallucinate’ – that is, to provide incorrect information or simply make it up entirely. For instance, Google’s Gemini AI has been the subject of ridicule for inventing idioms and folk sayings based on users’ searches.

Now a new YouGov study examines how many people are using AI for fact-finding purposes, how many have ever noticed AI bots producing such hallucinations, and how trusted AI is compared with other sources of information.

How often do Britons use AI?

Our survey shows that 10% of Britons say they use AI for personal or leisure reasons every day, rising to 26% who use it at least weekly. At the same time, 7% say they use AI on a daily basis for work or study, increasing to 18% who do so at least weekly.

Around four in ten Britons (38%) say they never use AI, whether for leisure, work or study.

Among those who do use AI, 80% say they use it to get factual information about simple topics, while 75% say the same of complicated topics. Around 11-16% are using AI for one or both of these fact-finding purposes on a daily basis, rising to 38-47% doing so at least weekly.

How often do AI users encounter ‘hallucinations’?

Among those who ever use AI for fact-finding, 23% say they encounter hallucinations very or fairly often. A further third (33%) say they encounter them infrequently or rarely, while one in six (17%) say they have never noticed AI being incorrect or making things up.

Frequent AI users are more likely to have come across hallucinations: 33% of those who use AI to get factual information on a daily basis encounter them very or fairly often, while only 11% say they never do.

How much do Britons trust AI as a source of factual information?

It is clear that AI is less trusted as a source of information than more traditional sources. Asked to rate their level of trust in AI chatbots as sources of factual information on a scale from 0-10, only one in seven Britons (14%) give them a score of 6 or more – on a par with tabloid newspapers (13%) and convincingly ahead of only social media (9%).

This compares to 60% for academic journals, 46% for TV news broadcasters, and 41% for Wikipedia, the top three sources from our list.

Among Britons who ever use AI, trust in the technology is unchanged (14%). Among those who use AI for fact-finding purposes on a daily basis, however, the figure rises to 43% rating ChatGPT and its competitors as a six or higher on the trust scale – the same score these Britons give to broadsheet newspapers and news podcasts.

By contrast, a mere 3% of Britons who have never used AI see it as a trustworthy source of information.

Unsurprisingly, trust in AI correlates strongly with experience of hallucinations, even among those who use the tools frequently. Among those who use AI to gather factual information, only 21% of those who encounter hallucinations very or fairly often rate chatbots as a 6 or higher out of 10 on the trust scale, rising to 35% among those who encounter them infrequently or never.


