Ex-OpenAI Chief Scientist Launches SSI To Focus On AI ‘Safety’

Authored by Derek Andersen via CoinTelegraph.com,

Co-founder and former chief scientist of OpenAI, Ilya Sutskever, and former OpenAI engineer Daniel Levy have joined forces with Daniel Gross, an investor and former partner in startup accelerator Y Combinator, to create Safe Superintelligence, Inc. (SSI). The new company’s goal and product are evident from its name.

SSI is a United States company with offices in Palo Alto and Tel Aviv. It will advance artificial intelligence (AI) by developing safety and capabilities in tandem, the trio of founders said in an online announcement on June 19. They added:

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Sutskever and Gross were already worried about AI safety

Sutskever left OpenAI on May 14. He was involved in the firing of CEO Sam Altman and, after stepping down from the board once Altman returned, played an ambiguous role at the company. Daniel Levy was among the researchers who left OpenAI a few days after Sutskever.

Sutskever and Jan Leike were the leaders of OpenAI’s Superalignment team created in July 2023 to consider how to “steer and control AI systems much smarter than us.” Such systems are referred to as artificial general intelligence (AGI). OpenAI allotted 20% of its computing power to the Superalignment team at the time of its creation.

Leike also left OpenAI in May and is now the head of a team at Amazon-backed AI startup Anthropic. OpenAI defended its safety-related precautions in a long X post by company president Greg Brockman but dissolved the Superalignment team after the May departure of its researchers.

Other top tech figures worry too

The former OpenAI researchers are among many scientists concerned about the future direction of AI.

Ethereum co-founder Vitalik Buterin called AGI “risky” in the midst of the staff turnover at OpenAI.

He added, however, that “such models are also much lower in terms of doom risk than both corporate megalomania and militaries.”

“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever says in an exclusive interview with Bloomberg about his plans.

“It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

Tesla CEO Elon Musk, once an OpenAI supporter, and Apple co-founder Steve Wozniak were among more than 2,600 tech leaders and researchers who urged that the training of AI systems be paused for six months while humanity pondered the “profound risk” they represented.

Musk replied with a laughing emoji to a post on X that highlighted the potential for this all to go very wrong: “…based on the naming conventions established by OpenAI and StabilityAI, this may be the most dangerous AI company yet…”

😂

— Elon Musk (@elonmusk) June 19, 2024

The SSI announcement noted that the company is hiring engineers and researchers.

[ZH: Finally, we have a simple question – most commentary we have seen has focused on the “this is not going to become SkyNet” aspect of ‘safety’, but what if Sutskever’s new firm (whose name bears an ironic similarity to SSRI) is about the “save the public and democracy from themselves” aspect of ‘safety’ and all the DEI dregs that entails?]

“By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever says.

Sutskever declines to name Safe Superintelligence’s financial backers or disclose how much he’s raised. CIA?

Tyler Durden
Thu, 06/20/2024 – 10:45
