Artificial intelligence: how much is too much?
08 Aug
Professor Stephen Hawking was one of the most eminent theoretical physicists of the 20th century, so when he spoke to the BBC on the dangers posed by artificial intelligence (AI) back in 2014, people took notice: “The development of full artificial intelligence could spell the end of the human race.”
Fast forward eight years and Google has fired Blake Lemoine – one of its engineers working on its Responsible AI Project – after he publicly claimed that one of the company’s AI programmes (known as LaMDA) had become sentient.
Tempest takes off
All of this was fresh in my mind when I read the news that the new Tempest Jet (being developed by a consortium of companies including BAE Systems, Rolls-Royce, MBDA, and Leonardo) will feature an AI tool to assist the human pilot when they are overwhelmed or under extreme stress.
It works like this: during numerous test flights, sensors in the pilot’s helmet monitor brain signals and other medical data, building up a database of biometric and psychometric information. The AI system will then use that database to step in and take full control of the aircraft if, during a live mission, the sensors indicate the pilot may need help. (Not something Pete “Maverick” Mitchell would stand for, I am sure).
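In spirit, that kind of intervention logic boils down to comparing live readings against a learned baseline. Here is a minimal sketch, with entirely hypothetical names, fields, and thresholds (the real Tempest system is far more sophisticated and its details are not public):

```python
# Hypothetical sketch: a monitor compares a live biometric reading against a
# baseline profile learned from test flights, and signals when the AI should
# take control. All names and numbers here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class PilotProfile:
    """Baseline built up from test-flight data (hypothetical fields)."""
    resting_heart_rate: float
    stress_threshold: float  # score above which the AI should intervene


def stress_score(heart_rate: float, profile: PilotProfile) -> float:
    """Toy stress metric: elevation relative to the pilot's resting baseline."""
    return max(0.0, (heart_rate - profile.resting_heart_rate)
               / profile.resting_heart_rate)


def should_take_control(heart_rate: float, profile: PilotProfile) -> bool:
    """True when the live reading exceeds the pilot's learned threshold."""
    return stress_score(heart_rate, profile) > profile.stress_threshold


profile = PilotProfile(resting_heart_rate=60.0, stress_threshold=0.5)
print(should_take_control(72.0, profile))   # mild elevation: pilot keeps control
print(should_take_control(110.0, profile))  # severe elevation: AI steps in
```

The design point the sketch captures is that the threshold is per-pilot: the system compares each pilot against their own baseline rather than a fixed universal limit.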
The UK government has already committed £2 billion to the Tempest project, with more expected. The reason for the big investment in AI? To counter new threats and more sophisticated weapons being developed by other nations. It would seem, then, that we are in a race to the top of AI mountain.
It is no surprise to see projects like Tempest being undertaken, given the advantages AI has been shown to provide. It can multi-task, reduce the time it takes to complete certain functions, and operate 24/7 with no breaks; within specific parameters, it has also been shown to be capable of making smarter and faster decisions than human beings.
As AI evolves and its use widens across society, in areas such as defence, manufacturing, and customer service, our day-to-day interactions with artificial ‘life’ will continue to increase; so much so that the (already blurring) line between real and artificial will become harder to discern.
Does that mean we will end up with personal AI assistants like J.A.R.V.I.S. from Iron Man? Maybe. I doubt I will be around to see it, though.
The likelihood is that, as resources become limited and AI is proven to be more effective and efficient at manual jobs, human beings will come to use and rely on AI even more. And like most things built by humans (the flawed beings that we are), we will only know we have gone too far when we go too far. And by then it may be too late.