OpenAI, the creator of ChatGPT, has formed a team led by MIT AI professor Aleksander Madry to address serious dangers posed by the technology, including the potential for bad actors to learn how to build chemical and biological weapons. The preparedness team will continuously test and monitor the company's AI systems for dangerous capabilities, sitting between OpenAI's "Safety Systems" team and its "Superalignment" team. Among other things, it will monitor whether the AI can instruct people to hack computers or build dangerous weapons.

Debate continues over the dangers of the technology: some AI leaders warn of existential risks, while others say those risks are overblown and that the technology could benefit society and make money. CEO Sam Altman has argued for focusing on fixing current problems and for ensuring that regulation does not hinder smaller companies.

Madry resigned and later returned to OpenAI. The company is in the process of selecting new board members, and it has begun discussions with organizations such as the National Nuclear Security Administration to ensure it can appropriately study the risks of AI. Madry's goal is to ensure that the upsides of AI are realized and the downsides are mitigated.