Safe Superintelligence Secures $1B to Invent Safer AI Systems

September 5, 2024: Safe Superintelligence Inc (SSI), the latest AI initiative by OpenAI co-founder and former Chief Scientist Ilya Sutskever, has secured $1 billion in funding. The round underscores growing investor interest in the development of safe artificial intelligence.

SSI, which was founded in June this year, aims to prioritize the creation of trustworthy and secure AI technologies.

The company shared the news through a post on X (formerly Twitter), stating, “SSI is building a straight shot to safe superintelligence. We’ve raised $1B from NFDG, a16z, Sequoia, DST Global, and SV Angel.” The announcement first broke through Reuters, which highlighted the company’s strategic focus on acquiring advanced computing resources and attracting top AI talent.

Safe Superintelligence to Balance AI Safety with Capability Development; Starts World’s First Straight-Shot SSI Lab


SSI, a compact team of 10 members, is headquartered in both Palo Alto, California, and Tel Aviv, Israel. The startup’s mission revolves around balancing AI safety with capability development, treating both as technical challenges to be addressed through innovative approaches.

In a prior statement, Sutskever, along with co-founders Daniel Gross and Daniel Levy, emphasized their commitment to advancing AI capabilities quickly while ensuring safety measures stay ahead of progress. They noted that their business model is designed to shield their efforts from the typical pressures of commercialization, allowing for uninterrupted focus on security and progress.

Gross, previously the AI lead at Apple, and Levy, a former OpenAI team member, bring a wealth of expertise to the new venture. Sutskever’s move comes after he announced his departure from OpenAI earlier this year, following a key role in the controversial removal of OpenAI CEO Sam Altman.

Interestingly, AI researcher Jan Leike also exited OpenAI recently to join rival AI startup Anthropic, citing safety concerns.

The Safe Superintelligence investment includes backing from notable firms such as NFDG, Andreessen Horowitz (a16z), Sequoia Capital, DST Global, and SV Angel.
