
OpenAI Establishes Safety Committee as It Trains Latest AI Model

OpenAI has established a safety and security committee and has begun training its next frontier model, which it claims will be the industry's most capable and safest. The move comes amid recent controversy over the company's approach to AI safety, and it aims to ensure that critical decisions regarding safety and security are given due consideration.


The group, which includes industry experts and corporate insiders, will advise and guide the OpenAI board on safety and security issues associated with the development and management of its programmes. OpenAI's decision to form the committee follows recent debate about the company's approach to AI safety.


The company came under fire when researcher Jan Leike departed and accused OpenAI of putting "shiny products" ahead of safety. His exit followed the resignation of OpenAI co-founder and chief scientist Ilya Sutskever and coincided with the disbandment of the "superalignment" team, which focused on long-term AI risks. Leike has since joined Anthropic, a rival AI company, to continue his work on aligning AI systems with human values.


However, OpenAI remains committed to expanding AI capabilities while maintaining safety. The company said it has already begun training its next frontier model, which it believes will be the industry's most capable and safest. OpenAI says it welcomes a healthy conversation about AI safety and is committed to addressing concerns at this critical juncture.


AI models, such as the one behind OpenAI's ChatGPT chatbot, learn from large datasets to generate text, images, video, and human-like dialogue. Frontier models are the most advanced AI systems, pushing the limits of what is achievable.


The safety committee is made up of top OpenAI figures, including CEO Sam Altman and board chair Bret Taylor, alongside four technical and policy experts. It also includes board members Adam D'Angelo, CEO of Quora, and Nicole Seligman, former general counsel of Sony. The committee's primary task will be to evaluate and strengthen OpenAI's existing processes and safeguards, and to present recommendations to the board within 90 days.

  • OpenAI establishes a safety and security committee to address AI safety concerns

  • Recent controversy prompts OpenAI to prioritise safety alongside AI development

  • OpenAI begins training its next frontier model, claiming industry-leading capability and safety


Source: AP NEWS

As Asia becomes the fastest-growing region for tech adoption, biz360tv is committed to keeping readers up to date on the latest developments in business technology news in Asia and beyond.

While we use new technologies such as AI to improve our storytelling capabilities, our team carefully selects the stories and topics to cover, and every article goes through fact-checking, editing, and oversight before publication. Please contact us at editorial@tech360.tv if you notice any errors or inaccuracies. Your feedback is vital in ensuring that our articles remain accurate for all of our readers.
