OpenAI Insider Says 70% Chance AI Could Harm or Destroy Humanity


Daniel Kokotajlo, a former researcher on OpenAI’s governance, has accused the organization of being so enamored with the potential benefits of artificial general intelligence (AGI) that it is disregarding the enormous dangers associated with it.

Kokotajlo, one of the signatories of an open letter written by current and former OpenAI employees, stated that those raising safety concerns are being silenced. He also made an even more alarming prediction: that the odds of AI destroying or catastrophically harming humanity are worse than a coin toss.

Kokotajlo said OpenAI is “really excited about building AGI” and is “recklessly racing to be the first there.”

Kokotajlo’s most controversial claim is that there is a 70% probability that AI will wipe humans off the earth, yet OpenAI and similar organizations are pressing ahead nonetheless.

An ongoing source of contention within the machine learning community is the phrase “p(doom),” which is AI jargon for the likelihood that AI will bring about the end of humanity.

The 31-year-old Kokotajlo said that after joining OpenAI in 2022 and being asked to forecast the technology’s progress, he became convinced of two things: first, that the industry would achieve artificial general intelligence (AGI) by 2027, and second, that the technology would very likely severely damage or destroy humanity.

After concluding that AI constituted grave dangers to mankind, Kokotajlo went so far as to directly push OpenAI CEO Sam Altman to “pivot to safety” and focus less on making AI better and more on establishing controls to limit its use.

Altman appeared to agree with him at the time, but his words later came to feel like empty rhetoric. Frustrated, Kokotajlo left the company in April, writing in an email to his colleagues that he had lost confidence that OpenAI would behave responsibly.

The current news from OpenAI doesn’t appear to have any bright side, given all the high-profile departures and these horrifying projections.

OpenAI, however, said in a recent statement that it is proud of its track record of delivering AI systems that are both highly capable and safe, and that it firmly believes in its scientific approach to mitigating risk.