February 29, 2024



OpenAI has implemented new safety protocols to protect against potential threats from artificial intelligence. These include the establishment of a safety advisory group that reviews risk reports and makes recommendations to leadership, with the board retaining the power to veto deployment decisions. The company has updated its “Preparedness Framework” to systematically identify and address catastrophic risks in new AI models.

The Preparedness Framework organizes model governance into three phases: in-production models overseen by the safety systems team, frontier models by the preparedness team, and future superintelligent models by the superalignment team. It emphasizes risk evaluation in areas such as cybersecurity and model autonomy, and the cross-functional Safety Advisory Group independently reviews the resulting reports to keep the risk-evaluation process objective.

Under the framework, models rated high-risk after mitigations cannot be deployed, and models rated critical-risk cannot be developed further; the CEO and CTO retain final decision-making authority over these calls. OpenAI has also committed to having its systems audited by independent third parties to verify that the safeguards are effective.

The company’s proactive approach to identifying and addressing potential risks in AI models reflects its stated commitment to transparency and risk management. How effective these measures prove will depend on the quality of the advisory group’s recommendations and the scrutiny its reports receive. As OpenAI moves to fortify its safety framework, the tech community is watching closely.


