Concern about the potential dangers posed by highly intelligent AI systems has become a focal point for experts in the field of artificial intelligence.
Recently, Geoffrey Hinton, often called the "Godfather of AI," warned that superintelligent AI could surpass human capabilities and lead to catastrophic consequences for humanity. Similarly, Sam Altman, CEO of OpenAI, the company behind the widely used ChatGPT chatbot, has acknowledged his own apprehension about the impact of advanced AI on society.
In response to these concerns, OpenAI has established a new unit named Superalignment.
The primary goal of this initiative is to ensure that superintelligent AI does not cause chaos or, worse, lead to human extinction. OpenAI acknowledges the immense power that superintelligence could wield and the potential dangers it poses to humanity.
While the development of superintelligent AI may still be years away, OpenAI believes it could become a reality by 2030. That creates an urgent need for proactive measures, because there is currently no established method for controlling and guiding such advanced AI systems.
The Superalignment team will consist of top machine learning researchers and engineers who will collaborate on creating a "roughly human-level automated alignment researcher." This automated researcher will be responsible for conducting safety checks on superintelligent AI systems.
OpenAI acknowledges the ambitious nature of this mission and recognizes that success is not guaranteed. Nevertheless, the company remains optimistic that a focused and concerted effort can solve the superintelligence alignment problem.
AI tools like OpenAI's ChatGPT and Google's Bard have already brought significant changes to the workplace and society at large. Experts predict that these transformations will only intensify in the near future, even before the emergence of superintelligent AI.
Governments worldwide are keenly aware of the transformative potential of AI and are racing to establish regulations that ensure its safe and responsible deployment. However, the lack of a unified international approach poses significant challenges, and divergent regulations across countries could make Superalignment's goal harder to achieve.
OpenAI is committed to proactively aligning AI systems with human values and developing the necessary governance structures. Its aim is to mitigate the potential dangers stemming from the extraordinary power of superintelligence.
The task at hand is undoubtedly complex, but OpenAI's commitment to addressing these challenges and enlisting top researchers in the field marks a significant step toward responsible and beneficial AI development.