February 29, 2024



The World Health Organization (WHO) has highlighted the potential risks associated with Artificial Intelligence (AI) in the health sector. While AI has the potential to transform healthcare through faster drug development and more rapid diagnoses, the WHO has warned about the potential dangers of large multi-modal models (LMMs), a type of generative AI.

The WHO outlined five broad areas where LMMs could be applied: diagnosis, scientific research and drug development, medical and nursing education, clerical tasks, and patient-guided use. However, there are concerns that LMMs could produce false, inaccurate, biased, or incomplete outputs, and that they could be trained on poor-quality data or data containing biases relating to race, ethnicity, ancestry, sex, gender identity, or age.

The health agency issued recommendations on the ethics and governance of LMMs, emphasizing the need for transparent information and policies to manage the design, development, and use of these technologies. It also highlighted the need for liability rules to ensure that users harmed by an LMM are adequately compensated or have other forms of redress.

The WHO warned that LMMs present risks that societies, health systems, and end-users may not yet be prepared to fully address. It recommended that LMMs be developed with input from medical professionals and patients, and stressed the importance of cybersecurity measures to protect patient information and the trustworthiness of healthcare provision. It also emphasized the need for governments to assign a regulator to approve the use of LMMs in healthcare and to implement auditing and impact assessments.
