If you’re a regular reader of this website you may remember I asked for your views recently on ChatGPT. Many people were getting very exercised about it, partly because students could cheat on their exams by getting the bot to write their essays for them. That was only a few weeks ago, but already the debate about Artificial Intelligence has moved into a different stratosphere. At the risk of sounding alarmist, the floodgates of fear have been opened. Many of the most respected and experienced figures in the world of high tech are warning that if we’re not very cautious we may unleash a monster capable of devouring its creators. Many others say the opposite. They believe AI may prove the saviour rather than the curse of mankind.
Elon Musk and the co-founder of Apple, Steve Wozniak, have signed a letter calling for a six-month moratorium on the development of AI systems rather than see an “out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”. The goal is to give society time to adapt to what the signatories describe as an “AI summer”, which would ultimately benefit humanity, as long as the right guardrails are put in place. These guardrails include rigorously audited safety protocols.
A laudable goal obviously. Or maybe it’s just wishful thinking.
Let’s assume that there are a thousand scientists out there whose lives are devoted entirely to the creation of artificial intelligence. Each and every one of them wants to be the first to come up with the magic formula – whatever that formula happens to be. They each want history to record them as the father – or mother – of a technological development that will simultaneously encompass everything humanity needs to know to create a better world but with “us” in charge rather than “them”, whoever – or, rather, whatever – “they” ultimately turn out to be.
Is it even remotely realistic to expect these people to respect a moratorium? And do what? Twiddle their thumbs for six months, knowing that their most ruthless competitors are out there desperately seeking answers that will guarantee their place in history? Many believe it is not.
Evgeny Morozov, an acknowledged expert in the field, has gone much further. He says we should retire the hackneyed label of “artificial intelligence” from public debate altogether. The term, he says, belongs to the same scrapheap of history that includes “iron curtain”, “domino theory” and “Sputnik moment”. It survived the end of the cold war because of its allure for science fiction enthusiasts and investors.
He wrote in The Guardian: “In reality, what we call ‘artificial intelligence’ today is neither artificial nor intelligent. The early AI systems were heavily dominated by rules and programs, so some talk of ‘artificiality’ was at least justified. But those of today, including everyone’s favourite, ChatGPT, draw their strength from the work of real humans: artists, musicians, programmers and writers whose creative and professional output is now appropriated in the name of saving civilisation. At best, this is ‘non-artificial intelligence’.
“As for the ‘intelligence’ part, the cold war imperatives that funded much of the early work in AI left a heavy imprint on how we understand it. We are talking about the kind of intelligence that would come in handy in a battle. For example, modern AI’s strength lies in pattern-matching. It’s hardly surprising given that one of the first military uses of neural networks – the technology behind ChatGPT – was to spot ships in aerial photographs.
“Human intelligence is not one-dimensional. It rests on what the 20th-century Chilean psychoanalyst Ignacio Matte Blanco called bi-logic: a fusion of the static and timeless logic of formal reasoning and the contextual and highly dynamic logic of emotion. The former searches for differences; the latter is quick to erase them… AI will never get there because machines cannot have a sense (rather than mere knowledge) of the past, the present and the future; of history, injury or nostalgia… Thus, machines remain trapped in the singular formal logic. So there goes the ‘intelligence’ part.
“The danger of continuing to use the term ‘artificial intelligence’ is that it risks convincing us that the world runs on a singular logic: that of highly cognitive, cold-blooded rationalism. Many in Silicon Valley already believe that – and they are busy rebuilding the world informed by that belief.”
So what do the Silicon Valley types who signed the letter have to say to that? One of them is Alan Lewis, director of SigmaTech. He believes that chatbots like ChatGPT do indeed represent a significant, and potentially worrying, development. He agrees that AI is a long way behind the general sentient intelligence of human beings but says that rather misses the point: “As with other significant technologies that have had an impact on human civilisation, their development and deployment often proceed at a rate far faster than our ability to understand all their effects – leading to sometimes undesirable and unintended consequences. We need to explore these consequences before diving into them with our eyes shut. The problem with AI is not that it is neither artificial nor intelligent, but that we may in any case blindly trust it.”
Britain’s former foreign secretary William Hague drew a parallel in The Times with 1939 when Albert Einstein wrote to President Roosevelt to warn him that “the element uranium may be turned into a new and important source of energy” and that “extremely powerful bombs may thus be constructed”. Given that the first breakthrough towards doing this had been made in Nazi Germany, the United States set out with great urgency to develop atomic bombs, ultimately used against Japan in 1945.
“Such was the unsurpassable power of nuclear weapons that, once the science behind them had been discovered, their development could not conceivably be stopped. A race had begun in which it was imperative to be ahead.”
It is the “race” towards AI that scares Lord Hague and he, too, is deeply sceptical that the Silicon Valley letter is going to have the desired effect. He asks this question: “Is the US, having gone to great trouble to deny China the most advanced semiconductors necessary for cutting-edge AI, going to voluntarily slow itself down? Is China going to pause in its own urgent effort to compete? Putin observed six years ago that ‘whoever becomes leader in this sphere will rule the world’. We are now in a race that cannot be stopped.”
What troubles Hague and so many others is not only the speed of the race towards AI but its implications. He points out that we are used to the idea that technology progresses incrementally. Smartphones, for example, have changed our lives in countless ways. We have more computing power in our smartphones than the Apollo spacecraft that went to the moon, and it increases with every new model. But that’s a process that has taken years. Everything changed with the advent of deep learning about ten years ago. Within only weeks of ChatGPT’s launch we were seeing new and more powerful variations. Many in the field predict that within five years AI will become a thousand times more powerful.
Hague believes the rise of AI is almost certainly one of the two main events of our lifetimes, alongside the acceleration of climate change: “It will transform war and geopolitics, change hundreds of millions of jobs beyond recognition, and open up a new age in which the most successful humans will merge their thinking intimately with that of machines. Adapting to this will be an immense challenge for societies and political systems, although it is also an opportunity and — since this is not going to be stopped — an urgent responsibility.
“Like the nuclear age heralded by Einstein, the age of AI combines the promise of extraordinary scientific advances with the risk of being an existential threat. It opens the way to medical advances beyond our dreams and might well provide the decisive breakthroughs in new forms of energy. It could be AI that works out how we can save ourselves and the planet from our destructive tendencies, something we are clearly struggling to work out for ourselves.”
On the other hand, no one has yet determined how to solve the problem of “alignment” between AI and human values, or which human values those would be. Without that, says the leading US researcher Eliezer Yudkowsky, “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else”.
And what about your attitude to AI? Do you love it or hate it? Do you eagerly anticipate the great benefits it might bestow on mankind or fear some malign power exploiting it to seize control? Or, even more chilling, do you tremble at the thought that AI itself will inevitably become that “malign power”?
Let me know what you think.