February 26, 2024



OpenAI is expanding its internal safety processes to prevent harmful AI from being developed and released. This includes the creation of a safety advisory group that will make recommendations to leadership, with the board holding the power to veto decisions. OpenAI has updated its “Preparedness Framework” to identify and address catastrophic risks associated with the AI models it is developing. The framework uses a rubric to evaluate risk, rating models on categories such as cybersecurity, persuasion, model autonomy, and CBRN (chemical, biological, radiological, and nuclear) threats. Only medium and high risks will be tolerated; a model rated as a critical risk will not be developed further. A cross-functional Safety Advisory Group will review recommendations from a higher vantage point, in order to uncover any “unknown unknowns.”

However, it is unclear whether the board will feel empowered to contradict the CEO and hit the brakes on a decision. Transparency is not addressed beyond the promise that OpenAI will solicit audits from independent third parties, and it is also unclear whether OpenAI will actually decline to release models that fall into the “critical” risk category. Overall, OpenAI is taking steps to ensure that harmful AI is not developed and released, but questions remain about how these safety measures will actually be implemented.



