February 26, 2024

Research documentation has long accompanied new artificial intelligence programs, but in 2023 companies such as OpenAI and Google began releasing new systems with almost no technical detail. Google's generative AI program Gemini was launched without clear technical specifications; only general descriptions and benchmark scores were provided. OpenAI released GPT-4 in a similar way earlier in the year.

This opacity raises ethical concerns and makes it difficult to assess the capabilities and potential risks of these systems. Google's omission of model cards, a standard disclosure format for neural networks, has added to the confusion about oversight and safety. With so few technical details available, researchers and the public are left in the dark about what is actually running in these companies' computing clouds. High-profile AI researchers have voiced concern about the thin disclosures and the absence of safety precautions for neural networks, arguing that such secrecy is a significant problem for AI and for society, with far-reaching consequences for potential risks and ethical considerations.

