Two thirds of all undergraduate home students say they ever use AI, and one in seven have engaged in potentially cheating behaviour
With the rise of AI language models like ChatGPT causing disruption across society, one area of concern has been how they will affect education, particularly higher education, where there is significant potential for students to use the tools to cheat on coursework.
Now a new YouGov survey of more than 1,000 home students attending UK universities and studying at an undergraduate level examines how many students are using AI for their studies, what they are using it for, whether this includes cheating, and how they think it has affected their studies.
How many UK students use AI?
Two thirds of students (66%) say that they ever use AI for work and study towards their degree, including one third (33%) who say they do so on at least a weekly basis.
More broadly, almost three quarters of students (74%) say they ever use AI for any purpose, with again a third (34%) doing so at least weekly.
What AI programs do UK students use?
Asked what AI programs and features they use for their university work, almost all of the answers students give are LLMs, with ChatGPT by far the most common at 74%. Gemini comes in a distant second on 11%, followed by Copilot on 8%.
What are UK universities’ policies on AI?
One in six students (18%) say their university takes a hardline stance against AI, describing their institution’s policy as “actively discourages the use of AI and gives detailed guidance on the ways in which it is bad, without demonstrating or allowing ways in which it could be used ethically”.
A further 11% say their university takes a vaguer anti-AI stance, in that it “gives little guidance other than a general warning against using AI”.
While most students attend universities which take a more permissive stance regarding AI, many still feel there is a lack of proactivity in teaching how to use it appropriately. Almost half of students (45%) describe their university’s AI policy as “has set out the boundaries for acceptable AI usage, but has not actively taught us the skills to use AI well and how to avoid its pitfalls”. Just 11% say that their university “actively encourages the use of AI in ethical ways, and has taught us skills to use AI well and how to avoid its pitfalls”.
Only 2% of students say their university has issued no guidance on AI usage at all.
Most students (55%) say they believe their university gets the balance on AI about right. A quarter (24%) say the rules aren’t strict enough, while only 7% feel they are too restrictive.
Those who say their university’s policy on AI is to actively encourage its use are the most likely to feel the rules are appropriate (75%). Around six in ten of those whose institution has set out boundaries but not taught appropriate skills (63%), and of those at a university that actively discourages AI use (62%), also feel their institution is striking the right balance.
What are students using AI for, and how many admit to using it to cheat?
From our list of AI uses, the most common way in which the technology is utilised is “to explain concepts … that I wanted to understand better”, with 81% of students who use AI for work and study saying they use it in this way.
Other common uses among these AI-using students include summarising sources (69%), identifying relevant sources (55%) and suggesting improvements to graded work they had already created (52%).
One in twenty students who use AI for their degree work (5%) confess to having “entirely create[d] a piece of work that counted towards your pass/fail grade, like an essay or coursework, and submitted it without editing”. This group accounts for 3% of the wider student population.
One in eight (12%) students who use AI say they have entirely created a piece of graded work using AI, which they then edited before submitting. This group accounts for 8% of the wider student population.
In total, 14% of students who use AI gave one or both of these answers (accounting for 9% of the wider student population).
One in five AI-using students (20%) also admitted to using AI to create sections of a piece of graded work (equivalent to 13% of the whole student population). Combined with the prior two groups, this brings the total share of students using AI-generated material in part or in whole to 23% of AI-using students, or 15% of the student populace.
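For readers checking the arithmetic, the whole-population figures above follow, to within rounding, from scaling each percentage of AI-using students by the 66% of all students who say they use AI for their degree work. The short sketch below is purely illustrative and uses only the survey figures already quoted.

```python
# Illustrative check: converting shares of AI-using students into shares of
# all students, assuming 66% of students use AI for their degree work.
AI_USER_SHARE = 0.66  # students who use AI for work/study towards their degree

ai_user_percentages = {
    "created graded work wholesale, unedited": 5,
    "created graded work wholesale, then edited": 12,
    "either of the above": 14,
    "created sections of graded work": 20,
    "any of the above": 23,
}

for behaviour, pct in ai_user_percentages.items():
    whole_population = pct * AI_USER_SHARE
    print(f"{behaviour}: {pct}% of AI users ≈ {whole_population:.0f}% of all students")
```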
What do students see as acceptable and unacceptable AI practice?
Comparing how commonly students use AI for certain tasks with how many believe those practices are acceptable shows closely matching figures for the uses that could potentially be considered cheating.
For instance, the 5% of AI-using students who use it to create graded work wholesale and submit it without editing are matched by the 5% who see this as acceptable. The trend is the same for generating graded work with AI but then editing it (12-13%) and using AI to create sections of graded work (19-20%).
Despite the similarity of these figures, this is not to say that the only people who see these things as acceptable are those who are doing them. Between 41% and 58% of those who have used AI to create graded work in one of these ways do so despite saying it is an unacceptable way to use the technology, with the difference made up by the small number of those not engaging in these behaviours who nevertheless think they are acceptable (3-10% across the three scenarios).
Other AI uses are seen more positively by students. More than three quarters of AI-using students (78%) say it is acceptable to use the technology to suggest improvements to work they have already created, while 58% say the same of using it to create content for non-graded work like presentations.
How likely do students think AI cheaters are to be caught?
Two thirds of students (66%) currently think it is likely that someone submitting a piece of work created entirely using AI would be detected by their university, although only 24% consider this possibility “very likely”.
Slightly less than a quarter (23%) think such behaviour would probably go undetected, with just 4% thinking a person’s chances of being caught are “very unlikely”.
How do students think AI has affected their education?
Aside from outright cheating, one of the major concerns about students using AI is that they will use it to create and submit superior work, without having actually learned or developed the skills their lecturers are trying to impart.
Certainly, three in ten students who use AI (30%) think that their marks are better as a result. However, about half (48%) think they are getting approximately the same marks they would have done anyway, while 11% think their marks are actually worse as a result of using AI.
Students are more likely, however, to think that AI has improved rather than detracted from their learning experience. More than four in ten (44%) believe they have learned and developed more at university than they would have done without being able to use AI.
This compares to one in three (32%) who believe their development at university is consistent with what it would have been without AI, with only 12% thinking they are worse off for having used the technology.
A counter-argument to the concerns about student usage of AI is that knowing how to use the technology effectively will be vital to workers as today’s young people embark upon their careers.
Students themselves tend to agree with this reasoning. When it comes to the importance of AI towards their future careers, almost half (47%) believe being able to use the technology will be important, although only 12% think it will be “very” important.
Four in ten disagree (39%), believing this knowledge will not be important to their future employment.
There is a clear split in this regard between those students who use AI for their studies and those who do not. While 59% of the former group believe it will be important to their working life to know how to use the technology, only 23% of those not currently doing so agree.
Those studying STEM subjects are also more likely to see AI skills as important than those taking other courses (53% vs 40%).
How many UK students encounter AI hallucinations?
One of the most pressing problems with AI, certainly from an academic point of view, is its tendency to make up facts. The risk of users uncritically accepting AI as a source of truth is obvious, but fortunately our survey shows that students are more aware of AI’s potential pitfalls in this regard than the wider public.
Almost half of those who use AI for work or study towards their degree (47%) say they often notice AI hallucinating – this stands in stark contrast to a separate recent YouGov poll, which found this figure to be just 23% among the AI-using public (in this case, Britons who ever use AI for factfinding).
Only 4% of AI-using students say they have “never” encountered AI hallucinations – again, significantly different to the 17% among the wider AI-using public.