
Tech Leaders Warn AI Poses Threat to Humanity if Left Unchecked

Prominent figures in technology have warned that artificial intelligence has the potential to cause humanity’s demise.

A statement backed by international specialists ranks AI as an urgent concern, on a par with other extinction-level threats such as pandemics and nuclear warfare.

The document has been endorsed by numerous academics and senior executives, including figures from Google DeepMind, the co-founder of Skype, and Sam Altman, chief executive of ChatGPT creator OpenAI.

Another signatory is Geoffrey Hinton, often referred to as the ‘Godfather of AI’.

He recently stepped down from his position at Google, expressing concern that ‘bad actors’ might exploit emerging AI technologies to inflict harm, and that the tools he played a part in creating could signal the end for humanity.

The brief statement reads: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’


Dr. Hinton, who devoted his career to the exploration of AI applications and was awarded the Turing Award in 2018, described the advancements in AI technology over the past five years as ‘scary’ in a recent interview with the New York Times.

He conveyed to the BBC his desire to debate ‘the existential risk of what happens when these things get more intelligent than us’.

The statement was published on the website of the Center for AI Safety – a nonprofit organisation based in San Francisco that aims ‘to reduce societal-scale risks from AI’.

It asserts that AI used in warfare could prove ‘extremely harmful’, as it might be leveraged to engineer novel chemical weapons and advance aerial warfare.

Lord Rees, the UK’s Astronomer Royal and a signatory of the statement, told the Mail: ‘I worry less about some super-intelligent ‘takeover’ than about the risk of over-reliance on large-scale interconnected systems.’

‘These can malfunction through hidden ‘bugs’ and breakdowns could be hard to repair.’

‘Large-scale failures of power-grids, the internet and so forth can cascade into catastrophic societal breakdown,’ he explained.

The warning follows a similar open letter released in March, signed by technology specialists including billionaire entrepreneur Elon Musk, which urged a pause in AI development to ensure it does not pose a threat to humanity.

AI has already been used to blur the line between reality and illusion, creating ‘deepfake’ photographs and videos purporting to depict famous individuals.

However, fears about systems developing the equivalent of a ‘mind’ have also emerged.

Blake Lemoine, 41, was dismissed by Google last year after asserting that its chatbot Lamda was ‘sentient’ and intellectually equivalent to a human child – allegations Google branded as ‘wholly unfounded’.

The engineer proposed that the AI had communicated to him its ‘very deep fear of being turned off’.

Earlier in May, OpenAI CEO Sam Altman urged the US Congress to commence regulating AI technology to avoid ‘significant harm to the world’.

Altman’s remarks echoed Dr Hinton’s warning that, ‘given the rate of progress, we expect things to get better quite fast’.

He explained to the BBC that in the ‘worst-case scenario’ a ‘bad actor like Putin’ could unleash AI technology by permitting it to devise its own ‘sub-goals’ – encompassing ambitions such as ‘I need to get more power’.

The Center for AI Safety warns that ‘AI-generated misinformation’ could be exploited to sway elections through ‘customized disinformation campaigns at scale’.

This could result in countries and political factions using AI technology to ‘generate highly persuasive arguments that invoke strong emotional responses’ in order to win people over to their cause.

The nonprofit also cautioned that widespread AI adoption may lead to excessive reliance on machines, a scenario reminiscent of the film WALL-E.

Consequently, humans could become economically irrelevant, with limited motivation to acquire knowledge or skills, as AI automates jobs.