OpenAI co-founder wanted to build doomsday bunker to protect company scientists from rapture: book

The co-founder of ChatGPT maker OpenAI proposed building a doomsday bunker that would house the company’s top researchers in case of a “rapture” triggered by the release of a new form of artificial intelligence that could surpass the cognitive abilities of humans, according to a new book.

Ilya Sutskever, the man credited with being the brains behind ChatGPT, convened a meeting with key scientists at OpenAI in the summer of 2023 during which he said: “Once we all get into the bunker…”

A confused researcher interrupted him. “I’m sorry,” the researcher asked, “the bunker?”

“We’re definitely going to build a bunker before we release AGI,” Sutskever replied, according to an attendee.

The plan, he explained, would be to protect OpenAI’s core scientists from what he anticipated could be geopolitical chaos or violent competition between world powers once AGI — an artificial intelligence that exceeds human capabilities — is released.

“Of course,” he added, “it’s going to be optional whether you want to get into the bunker.”

The exchange was first reported by Karen Hao, author of the upcoming book “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.” An essay adapted from the book was published by The Atlantic.

The bunker comment by Sutskever wasn’t a one-off.
Two other sources told Hao that Sutskever had regularly referenced the bunker in internal discussions. One OpenAI researcher went so far as to say that “there is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture. Literally, a rapture.”

Though Sutskever declined to comment on the matter, the idea of a secure refuge for scientists developing AGI underscores the extraordinary anxieties gripping some of the minds behind the most powerful technology in the world.

Sutskever has long been seen as a kind of mystic within OpenAI, known for discussing AI in moral and even metaphysical terms, according to the author.

At the ...