John Humphrys - AI: A Blessing or a Threat?

June 02, 2023, 11:57 AM GMT+0

If it seems only a couple of months since I first addressed the question of Artificial Intelligence in this space… that’s because it was. My specific concern was the emergence of ChatGPT and worries that it might undermine the way universities test the knowledge of their students. How could examiners ever again be sure that the essays submitted for a degree had been written by the students based on their own knowledge and not by a faceless bot? An important question for sure, but monumentally trivial compared with the fears that have been raised since. Here’s a taste of some front page headlines this past week: “AI could wipe out humanity” (Daily Mail) or “AI pioneers face extinction” (The Times) or “Risk of extinction should be global priority” (The Guardian).

It is tempting, of course, to dismiss this sort of stuff as scaremongering. It would hardly be the first time the papers had been accused of that, would it? Well maybe, but it’s not just the papers: it’s broadcasters like the BBC too who have been leading their news programmes on it. And the stories are not based on a few self-styled “experts” desperate to make a name for themselves. They are based on a statement that has been signed by renowned scientists and that statement is as brief as it is frightening. It says: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The signatories include dozens of distinguished academics and senior bosses at some of the most powerful companies in the digital world. They include the chief executive of Google DeepMind, the co-founder of Skype, and Sam Altman, the chief executive of OpenAI, which is of course the very company that gave us ChatGPT. Another signatory is Dr Geoffrey Hinton, one of three scientists dubbed the “godfathers of AI”. He has spent his career researching the uses of AI technology and won the Turing Award, the biggest prize the digital world has to offer. A month ago he resigned from Google and told the New York Times that the progress made in AI technology over the last five years had been “scary” and posed “existential risks”. He said “bad actors” could use it to harm others and that could spell the end of humanity.

Another of the “godfathers” is Yoshua Bengio. He told the BBC this week that he felt “lost” over his life’s work. He admitted that if he had realised the speed at which AI would evolve, he would have prioritised safety over usefulness. He is one of the many who fear that advanced computational ability could be used for harmful purposes, such as the development of deadly new chemical weapons. He is worried that “bad actors” will get hold of AI, especially as it becomes more sophisticated and powerful.

"It might be military,” he says, “ it might be terrorists, it might be somebody very angry, psychotic. And so if it's easy to program these AI systems to ask them to do something very bad. This could be very dangerous… If they're smarter than us, then it's hard for us to stop these systems or to prevent damage".

Michael Osborne, a professor in machine learning at Oxford University and the co-founder of Mind Foundry, points out that this week’s statement is hardly the first warning about the risks of AI, but there are significant differences between this and much of what has gone before. One is the stature of the people sounding the alarm. The other is that they clearly see AI as an existential threat to humanity.

Until now the risks have been pretty obvious and entirely predictable. The more capable AI becomes of carrying out necessary but mundane tasks at mind-blowing speed, the less need there will be for humans to carry out those tasks. Great news, perhaps, for the business needing the job to be done as fast as possible and as cheaply as possible – but disastrous for those who find themselves out of a job. It seems inevitable that unemployment will rocket.

It was the military dangers of AI that preoccupied the London Defence Conference last week. Ian Martin, who chaired the conference, described the nightmare of trying to balance sophisticated AI with more traditional military capabilities. Last year, Australia bought three “autonomous” submarines. The computerised systems that power these weapons are becoming more autonomous. Defence experts foresee “wolf packs” of such submarines, able to change course of their own volition, patrolling the Pacific. They will steadily require fewer human inputs or prompts to operate.

Martin asks this: Who does the prompting “when the machines learn at a dizzying speed and will communicate with each other in ever more sophisticated ways? Once AI fuses with quantum computing (computing made ever faster with vastly more calculations) it will be so powerful that it will be difficult to direct, regulate and control.”

The Guardian has just reported a chilling example of what can happen when AI “takes over”. In a virtual test staged by the US military, an air force drone controlled by AI was ordered to destroy a specific threat. It was also specifically ordered not to “kill” the human operator in charge of the system. But that was precisely what it did, to prevent the operator from interfering with its efforts to achieve its mission. It used its own “intelligence” to calculate that the operator was preventing it from “accomplishing its objective”.

It does not require a great leap of imagination to create another fictional scenario in which a school full of children stands in the way of an AI-controlled missile ordered to attack a military target. If “accomplishing its objective” results in children dying … so be it. A computer has no “human” feelings. That’s the whole point. There are endless ethical dilemmas confronting our leaders as to how they use AI on the battlefield.

But many perceive an even bigger threat. It’s not AI we should be most scared of: it is AGI. Artificial General Intelligence.

This, says Jonathan Freedland in the Guardian, is a “category leap” in the technology. In this scenario the computer would no longer need specific prompts from humans: it would develop its own goals, its own agency. That was once the stuff of science fiction but now, Freedland says, many experts believe it’s only a matter of time. Given the galloping rate at which these systems are learning, it could be sooner rather than later.

Many – including Yuval Noah Harari in The Economist – point out that the existing technology is already capable of destroying what we think of as truths or facts. He points to the US stock market plunging last week when a photograph appeared around the world of an explosion at the Pentagon. It was indeed scary. But it wasn’t real. It was a hoax generated by AI. Harari warns: “People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion”.

So where, if anywhere, can we look for some relief in this doom-laden scenario? One of the most powerful figures in the digital world, Elon Musk, offers a scintilla of optimism. “I don’t think,” he says, “that AI will try to destroy humanity, but it might put us under strict controls.”

I quoted earlier the apocalyptic warnings from two of the “godfathers” of AI. What of the third? He is Professor Yann LeCun, chief AI scientist at Meta and a professor at New York University, and also a winner of the Turing Award. He says there is “a small likelihood of it annihilating humanity”. He adds that it is “close to zero but not impossible”.

Again, hardly something to put a smile on the face of humanity. So let’s turn to yet another distinguished academic and acknowledged expert in AI. She is Dr Sasha Luccioni, a research scientist at the AI firm Hugging Face. She says society should focus on issues like the spread of misinformation, which can be weaponised by those who would do us “very concrete harms”, rather than on “the hypothetical risk that AI will destroy humanity.”

But she also draws attention to the “many examples of AI bringing benefits to society.” Last week alone, she points out, “an AI tool discovered a new antibiotic, and a paralysed man was able to walk again just by thinking about it, thanks to a microchip developed using AI.”

In his Budget in March the chancellor, Jeremy Hunt, talked about the UK winning the global AI race and warned against the UK “erecting protectionist barriers for all our critical industries”. Rishi Sunak himself said if AI is “used securely, obviously there are benefits from artificial intelligence for growing our economy, for transforming our society, improving public services”. But he also warned that it “has to be done safely and securely and with guardrails in place.”

Lord Rees, the UK's Astronomer Royal, who signed the statement, is concerned with exactly that: “I worry less about some super-intelligent 'takeover' than about the risk of over-reliance on large-scale interconnected systems. These can malfunction through hidden 'bugs' and breakdowns could be hard to repair. Large-scale failures of power-grids, the internet and so forth can cascade into catastrophic societal breakdown.”

So where do you stand? Are you scared and, if you are, do you find it encouraging that so many eminent academics and scientists are finally recognising the dangers? Or do you believe there might be too much scaremongering out there and that we should welcome AI for the benefits it will bring?

Let us know.

Image: Pexels (cottonbro studio)

Explore more data & articles