OpenAI Co-Founder Ilya Sutskever's Safe Superintelligence Startup Aims for Safe AI Systems
Innovating AI: The Birth of Safe Superintelligence
OpenAI co-founder Ilya Sutskever's new AI start-up, Safe Superintelligence (SSI), has raised $1 billion, according to a Reuters report. The company plans to use the funds to develop safe artificial intelligence systems that surpass human capabilities. While SSI has not disclosed its valuation, sources cited by Reuters put it at $5 billion. The investment is earmarked for computing power and for hiring engineers and researchers in the company's two hubs: Palo Alto, California, and Tel Aviv, Israel.
Founders of Safe Superintelligence
- Safe Superintelligence was founded in June by Ilya Sutskever, Daniel Gross, and Daniel Levy.
- Daniel Levy, a former OpenAI researcher, now leads research efforts at SSI.
- Daniel Gross, also a co-founder, has a background in technology strategy and previously co-founded the AI start-up Cue, which was acquired by Apple.
After nearly a decade at OpenAI, Sutskever departed in May, expressing confidence in the company's goal of building AGI that is both safe and beneficial. OpenAI CEO Sam Altman lamented the split, acknowledging Sutskever's significant contributions to the field and their enduring friendship.