Ilya Sutskever, the co-founder and former chief scientist of OpenAI, has made headlines once again with the announcement of his new venture, Safe Superintelligence. The company, founded alongside A.I. experts Daniel Gross and Daniel Levy, aims to develop superintelligence in a safe and responsible manner.
After his controversial departure from OpenAI last month, Dr. Sutskever remained tight-lipped about his next move. It has now been revealed that he is taking on the role of chief scientist at Safe Superintelligence, where he will lead efforts to achieve groundbreaking advances in artificial intelligence.
The news of Safe Superintelligence comes on the heels of Dr. Sutskever’s involvement in the ousting of former OpenAI chief executive Sam Altman. The decision to remove Mr. Altman was met with backlash from both inside and outside the company, and Dr. Sutskever later expressed regret over the situation.
With the rise of generative artificial intelligence technologies like ChatGPT, the tech industry is on the brink of a major transformation. Safe Superintelligence’s focus on developing superintelligence in a safe and ethical manner is certain to draw interest and debate within the A.I. community.
As Dr. Sutskever and his team at Safe Superintelligence embark on this ambitious journey, the world will be watching closely to see how they navigate the complex and rapidly evolving landscape of artificial intelligence.