Experts: “Artificial intelligence without control? Humanity risks extinction”

New open letter from industry insiders: the future of AI in 22 words

A group of researchers, engineers, programmers and tech-company CEOs has published a new joint statement warning of the threats that artificial intelligence poses to humanity. It is not the first time authoritative voices have been raised about the possible repercussions of a reckless use of AI, but this statement stands out for two reasons: it is very short (22 words in English), and it speaks expressly of the risk of extinction. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it reads. Its signatories include several prominent figures, among them Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman.

In March, another open letter had been published, explicitly calling on the giants of the sector to halt the unchecked development of the technology. That letter argued that the Wild West of artificial intelligence, regarded as the new El Dorado of hi-tech, involves “profound risks to society and humanity”. It also asked that research on digital intelligence systems more powerful than GPT-4, the latest version of the assistant created by OpenAI, be paused for at least six months.