John Humphrys – AI: What is the biggest threat?

November 03, 2023, 1:12 PM GMT+0

The last time we discussed AI on this site it was in the context of teachers and lecturers getting worried because students might use ChatGPT to write their essays. How things have changed in those few months. The big story as I write is the unprecedented meeting at Bletchley Park of world leaders: leaders and experts who are rather more worried about the existential threat posed by AI to the future of humanity than a student’s copied essay.

In the relatively cautious words of our own prime minister Rishi Sunak: “There is a case to believe that it may pose a risk on a scale like pandemics and nuclear war.” Elon Musk, arguably the most powerful individual in the field, went further. He called it “one of the biggest threats to humanity”.

Do you share those fears and do you trust our leaders to guard against them?

The good news on the AI front was not just that the conference happened, but that it was attended by senior figures from 28 of the most powerful countries in the world including the United States and China. It’s also encouraging that they signed what they themselves described as an “historic” agreement vowing to protect the world against the potential of AI to cause “catastrophic” harm. They called it the Bletchley Declaration and promised they would work together on shared safety standards in a process officials likened to the COP summits on the climate crisis.

It was partly overshadowed the next day by an announcement from the American commerce secretary of a new AI Safety Institute in Washington, but British officials say they expect to work closely with it and others to create a “network” of similar organisations that can do testing around the world. Officials say they are likely to reveal further details about how such a network will operate on Thursday.

At the heart of the new safety measures is the commitment that, eventually, governments big and small will be able to work together to test the safety of so-called AI “tools” before they are released onto the market. The declaration said: “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”

The unresolved question remains what it has been ever since AI became a reality rather than a fanciful notion in the minds of sci-fi writers. That question is: can we ever properly control a machine (for want of a better word) whose brain is so much more powerful than anything the world has ever seen and threatens to become ever more powerful with every passing hour? A machine that can out-perform the human brain. A machine that can, quite simply, take over the world and everything in it. An unimaginably scary concept.

And yet, within hours of the conference ending, we were being invited to consider a dramatically different AI scenario. And this was presented to us not in the austere setting of a conference hall with sombre speakers delivering their apocalyptic warnings from lecterns and meticulously prepared autocue scripts but by two old mates who could have been enjoying a cosy chat after a football game. Except that the setting was 10 Downing Street and the men each wielded immense power. One was Rishi Sunak and the other was Elon Musk. Their cosy chat was being watched on television by millions. And it was what Mr Musk said that made the headlines the next morning.

Artificial Intelligence, he told the Prime Minister, is “the most disruptive force in human history… a magic genie capable of granting wishes that could have dangerous consequences”. It will, he said, eliminate the need for all jobs: “We will have for the first time something that is smarter than the smartest human being. There will come a point where no job is needed. You can have a job if you want a job for personal satisfaction, but the AI will be able to do it.”

It seemed almost a throwaway comment in the manner of its delivery, but it changed the tenor of the whole AI debate. The conference itself was clearly preoccupied with the threats posed to every person on the planet by a potentially malign force that the vast majority of us cannot even begin to understand. AI could, for instance, take control of the deadliest weapons in existence and turn them against us. But at least we could take comfort from the earnest declarations that our leaders recognise those dangers and are talking seriously about setting up controls to mitigate them.

Musk himself acknowledged that there was a “safety concern” about robots equipped with AI software. He put it like this: “If you have humanoid robots they can basically chase you everywhere.” The solution, he suggested, might be a “local off switch, where you say a key word or something and that puts the robot into a safe state”.

He also suggested that AI robots might be a force for good because they could offer companionship: “If you have an AI that has memory and has permission to read everything you’ve ever done and will know you better than you know yourself, you’ll actually have a great friend”. Mr Musk himself has a disabled son. He said: “An AI friend will be good for him”. Indeed, even as he was speaking, Meta was in the process of releasing 28 “expert companions” who can guide you through life’s challenges so effectively, they claim, you might come to rely on them as your best friend.

The philosopher Yuval Noah Harari has warned that AI “could use the power of intimacy to change our opinions and world views”. William Hague agrees: “We can begin to see here one of the biggest dangers posed by artificial intelligence: not that it will become a global super brain that turns against us, but rather a friend that encourages our delusions. The threat will not be an open rebellion with the authorities struggling for the off switch, but an insidious sycophancy that makes millions of individuals receptive to what they are told.”

Hague says we need an urgent review of laws needed to prevent the use of chatbots for radicalisation: “We could restrict the efficacy of chatbot intimacy by requiring platforms to wipe personal data regularly, and ensuring that all bots are identified as such, making it harder for them to masquerade as people. Rigorous testing of AI models is needed before they are made open source. And AI chatbots for children should be restricted, so that the damage already done to mental health by social media doesn’t become even greater. Soon there will be teddy bears powered by AI. This is a truly terrible idea.”

But none of this addresses that truly existential threat that Musk raised in his chat with Sunak: AI will “eliminate the need for all jobs”.

On one level, of course, it’s a wonderfully attractive notion. How many of us actively enjoy our jobs, and dream instead of spending our time pursuing whatever hobbies we may have or simply doing whatever takes our fancy at any given hour of the day? A spot of gardening or DIY or taking the dog for a longer walk or spending more time with the kids? Or reading a book. Or just watching the telly.

Let’s assume such people do indeed exist. But what about all those who not only enjoy their jobs but for whom work is the driving force in their lives? Those whose self-esteem is measured by their progress at work? Those who take huge pride in what they contribute to society as teachers or nurses or bus drivers or rubbish collectors. And what would motivate our children who dream of what they want to be when they grow up? What, indeed, would be the point of education if there’s no need to learn things in order to get a job? The tech venture capitalist Garry Tan has predicted that anyone lucky enough to have a job in the future will either be telling a computer what to do or having a computer tell them what to do.

And how would a country’s economy function? In today’s world we earn money and pay taxes to governments and they spend that money. How would wealth be generated in the new world? Perhaps the richest who own the most sophisticated AI tools would sell their services to the poorest. But how could the poorest raise the money they would need without AI?

The questions are endless and the solutions range from the bizarre to the terrifying. There is, of course, the dream world in which we rely on AI to improve our lives in ways which we cannot even begin to imagine. But who makes those decisions? Who controls AI?

What are your thoughts? Are you fundamentally optimistic or are you scared for the future? And why?

Do let us know.
