ChatGPT: what is the Terminator scenario feared by the founder of OpenAI?


Sam Altman, founder of OpenAI, the startup behind ChatGPT, expressed in an interview with ABC his concern that artificial general intelligence (AGI) systems “could be used for large-scale disinformation. Furthermore, as these systems get better and better at writing computer code, they could also be used to carry out cyber-attacks”, explaining the reason for his company’s growing caution in creating and releasing these models. Such misuse would lead to the so-called Terminator scenario, reminiscent of the film saga with the cyborg played by Arnold Schwarzenegger, which has fascinated millions of people since its debut in 1984.

The Terminator scenario

According to Wired, Altman is concerned about the “existential risk” associated with the spread of increasingly advanced artificial intelligence. Along with philosophers such as Nick Bostrom and entrepreneurs such as Bill Gates and his former partner at OpenAI, Elon Musk, Altman is among the theorists of the so-called “Terminator scenario”. According to this theory, developing artificial intelligence could endanger the existence of human beings, who would be made obsolete by artificial intelligences that develop consciousness and cognitive abilities superior to those of humans. This could lead those artificial intelligences to pursue goals contrary to human well-being.

Elon Musk: “It would be like summoning the devil”

It is precisely this eventuality that Elon Musk was referring to when he said that developing AGI is equivalent to “summoning the devil”. Musk’s fear, however, has been dismissed as science fiction detached from reality, so much so that Andrew Ng, former head of artificial intelligence at Google, compared it to worrying about “overpopulation on Mars”. The reasoning behind such criticism is that the functioning of these systems remains under human control.

The longtermists

However, this does not seem to reassure Altman, who is also an exponent of the current of thought called “longtermism”, which focuses on the “existential risk” that artificial intelligence could represent. “Some people in the AI field consider the risks associated with AGI (and subsequent systems) imaginary. If they are right, we will be very happy to learn it, but we will act as if these risks were existential”, Altman said, as reported by Wired, in a post published on the OpenAI blog, explaining that the company has “set a limit on the economic returns that our investors can receive, so that they have no incentive to seek profits even at the cost of deploying something that could potentially be catastrophically dangerous. […] A superintelligent AGI that is not aligned with human values could cause atrocious damage to the world”.

Concern or marketing?

According to Wired, there are market strategies behind Altman’s statements. “It is equally difficult not to suspect that the decision to renounce OpenAI’s open nature is actually linked to purely commercial reasons, masked though they may be by a sense of responsibility”, the magazine explains. “In fact, the entire narrative constructed by Sam Altman can also be interpreted as an elaborate marketing operation, in which OpenAI is portrayed as a sort of scientific bulwark: the only organization capable of safely developing an otherwise potentially catastrophic technology. An interpretation that lends an almost salvific, messianic aura to what is, from every point of view, a normal company developing (powerful) technological software”.



Source: tg24.sky.it