The first thing to say about my column this week is that it was written by me. You can take my word for it. But why should you? It’s true that if you are a regular reader you may notice that the style of writing is pretty much the same as all the other columns I have been contributing to YouGov over the past couple of decades. Similar sentence structure. Similar choice of phrasing and vocabulary. Similar architecture. I introduce the topic, offer alternative views on whatever it happens to be and invite you to respond with your own views. And this column is following that pattern. So that proves I’m the author, eh?
Actually no, it doesn’t.
It is entirely possible that I invited a chatbot using artificial intelligence to write it for me. It’s called ChatGPT. The only difference might be that, where I would have spent several hours researching the topic and ordering my thoughts, ‘it’ will have taken only a few minutes. I put ‘it’ into quotation marks because, if I’m to be entirely frank with you, I don’t really know what ‘it’ is. What I do know is that it is the latest chatbot from OpenAI and it is causing an awful lot of very clever people sleepless nights.
If this piece had indeed been written by the bot, my input would have been minimal. My only contribution would have been to download the ChatGPT app and order it to write an essay in the style of John Humphrys considering whether immensely sophisticated AI language processing models are to be welcomed or feared. This latest version has been on the market for only a matter of months and the fear factor seems to be growing – not least in the world of education. That’s because students are now able to use it to write essays for them instead of using their own knowledge, research and even judgement to write them for themselves.
The Guardian reports that in the short time since it was released, academics have used it to write essays or answer difficult questions that would have resulted in full marks if they had been handed in by students. So how does the lecturer know whether a particular student has worked hard, mastered the subject and written an intelligent analysis of it – or simply spent a few minutes telling the bot to do it?
Dan Gillmor, a journalism professor at Arizona State University, asked the AI to handle one of the assignments he gives his students: writing a letter to a relative giving advice regarding online security and privacy. This was part of the bot’s response: ‘If you’re unsure about the legitimacy of a website or email, you can do a quick search to see if others have reported it as being a scam.’ There was much else in the ‘essay’ that reflected the student’s knowledge and understanding of the issues involved. Except, of course, that it was not a ‘student’: it was a bot. Professor Gillmor admits he would have given it a good grade. And his conclusion: ‘Academia has some very serious issues to confront.’
It’s not only universities that are worried that AI is being used to fool their academics into believing that their students have mastered their subjects and can write about them in an intelligent and even thoughtful way. Only a few days ago Alleyn’s, one of London’s top private schools, announced that it would no longer be setting essays for homework after an AI chatbot produced an English assignment for one pupil that the school rated as A*.
All this raises an obvious question. Why should a student go through all the effort and anguish of diligently researching a subject and sweating over an essay which demonstrates not just their knowledge but their analytical ability when they can simply tell their chatbot to do it for them? And, of course, this is not restricted to students or sixth-form pupils. What about university entrance exams or, indeed, job applications? And what about the world outside academia? What about our artistic and cultural life?
I present a weekly programme for Classic FM in which I include a ‘poet’s corner’. A different poem every week by a different poet. Or not. This week I asked my geeky friend if the chatbot could write a poem about the appeal of classical music. It could indeed. It took a minute or two. Admittedly the resulting verses would not have challenged WH Auden or WB Yeats, but it was a poem and some would have found it rather appealing.
So should we be scared that this latest development from the clever people behind artificial intelligence will take over the world as we know it?
OpenAI predictably says no. Quite the opposite. ChatGPT, it says, was created with a focus on ease of use: ‘The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.’ Unlike most highly sophisticated AI tools, this is free to use. But not for long. This is effectively a ‘feedback’ period. The company says it hopes to use this feedback to improve the final version of the tool.
So far that feedback has been mixed, to put it mildly. Professor Gillmor of Arizona may have serious worries. Maybe it’s true that professors, programmers and journalists could all be out of a job in just a few years. Or maybe it’s not. Another professor, John Naughton of the Centre for Research at Cambridge University and a longstanding writer on all matters digital for the Guardian, is pretty relaxed.
He concedes that we generally overestimate the short-term impact of new communication technologies, while grossly underestimating their long-term implications. In this case, though, he says it’s worth examining the nature of the beast before we press the panic button.
‘At best’, he writes, ‘ChatGPT is an assistant, a tool that augments human capabilities. And it’s here to stay.’
‘So what’s going on is “next-token prediction”, which happens to be what many of the tasks that we associate with human intelligence also involve. This may explain why so many people are so impressed by the performance of ChatGPT. It’s turning out to be useful in lots of applications: summarising long articles, for example, or producing a first draft of a presentation that can then be tweaked.’
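For any reader who dabbles in code, here is one very loose way to picture what ‘next-token prediction’ means. This toy script (the corpus and names are invented purely for illustration; the real thing is a neural network trained on vast swathes of text, predicting sub-word tokens rather than whole words) simply counts which word follows which and predicts the most common continuation:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus, split into word "tokens".
corpus = "the cat sat on the mat and the cat slept".split()

# Count which token follows which.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token):
    """Return the token most often seen after `token`, or None if unseen."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" - it follows "the" twice, "mat" only once
```

ChatGPT does something conceptually similar, choosing a plausible next token again and again, but with learned probabilities rather than a lookup table, which is why its output can read like fluent prose rather than parroted text.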
One of its more unexpected capabilities, writes Naughton, is as a tool for helping to write computer code. ‘Dan Shipper, an experienced software guy, reports that he spent Christmas experimenting with it as a programming assistant, concluding that: “It’s incredibly good at helping you get started in a new project. It takes all of the research and thinking and looking things up and eliminates it… In five minutes you can have the stub of something working that previously would’ve taken a few hours to get up and running.” His caveat, though, was that you had to know about programming first.’
And that, says Naughton, seems to be the beginning of wisdom about ChatGPT. At best it is an assistant: a tool that augments human capabilities. And at worst?
Well… what do you think? Are you likely to use ChatGPT and, if so, for what sort of tasks? Or are you already using it? Is it making your life easier? Or are you worried that AI is taking over the world and will ultimately reduce us human beings to mere cyphers?
Do let us know.
PS: This piece really was written by me and not by a bot. But then it would say that, wouldn’t it?