Ilya Sutskever, OpenAI’s former chief scientist, is embarking on a new venture after a brief but dramatic falling-out with CEO Sam Altman last November, when Sutskever joined a board move that briefly forced Altman out of the company.
Now he has joined forces with former OpenAI colleague Daniel Levy and Daniel Gross, who previously led AI efforts at Apple, to launch Safe Superintelligence Inc. (SSI). The name reflects the company’s sole objective: developing superintelligent AI that is safe and beneficial.
The new company isn’t mincing words about its mission. “SSI is our mission, our name, and our entire product roadmap,” the founders declared in a statement on their website, calling safe superintelligence “the most important technical problem of our time.”
Here’s why it’s a big deal: many AI researchers expect that machines won’t stop getting smarter once they reach human-level intelligence, known as Artificial General Intelligence (AGI). The hypothetical stage beyond that, Artificial Superintelligence (ASI), is what worries Sutskever and his team. Their company aims to ensure that such a superintelligence is developed safely and ethically.
Sutskever’s focus on safe superintelligence isn’t new. Leading computer scientists such as Geoffrey Hinton have warned that ASI could pose an existential threat to humanity, and building safeguards to keep advanced AI beneficial was a core part of Sutskever’s work during his time at OpenAI, where he co-led the company’s Superalignment team.
His departure from OpenAI in May capped months of turmoil. Six months earlier, he and independent board members Helen Toner, Tasha McCauley, and Adam D’Angelo had voted to remove Altman as CEO. The ouster quickly collapsed: chairman Greg Brockman resigned in protest, employees threatened a mass exodus, and Altman was reinstated within days, with Sutskever publicly expressing regret for his part in the affair.