OpenAI, the organization behind the popular AI chatbot ChatGPT, has announced a new team dedicated to managing the risks posed by superintelligent AI systems. In a blog post on July 5, OpenAI emphasized the need to steer and control AI systems that surpass human intelligence. While acknowledging that superintelligence could help solve many of the world’s most important problems, OpenAI also recognizes the dangers it poses, including the potential disempowerment or even extinction of humanity.
Anticipating the arrival of superintelligence within the next decade, OpenAI plans to dedicate 20% of the compute it has secured to date to this effort. The organization aims to build a “human-level” automated alignment researcher to help ensure that superintelligent systems remain safe and aligned with human values. Ilya Sutskever, OpenAI’s chief scientist, and Jan Leike, the research lab’s head of alignment, will co-lead the initiative, and OpenAI has extended an open invitation to machine learning researchers and engineers to join the effort.
The announcement comes at a time when governments worldwide are weighing regulations for the development, deployment, and use of AI systems. The European Union has made notable progress: on June 14, the European Parliament approved its draft of the EU AI Act, legislation that would require systems like ChatGPT to disclose AI-generated content, among other measures. Some AI developers, however, have raised concerns that such rules could constrain innovation.
In the United States, lawmakers have introduced the “National AI Commission Act,” which would establish a commission to shape the nation’s approach to AI. U.S. regulators have also signaled their intention to oversee the technology, and Senator Michael Bennet recently urged major tech companies, including OpenAI, to label AI-generated content.
By forming this new team, OpenAI demonstrates its commitment to addressing the risks of superintelligent AI proactively. Its efforts align with ongoing discussions on AI governance and the technology’s future implications for humanity. Developing human-aligned AI, and fostering collaboration among experts in the field, will play a vital role in shaping a responsible and beneficial AI-powered future.