WHO chief Tedros says AI shows ‘great promise for health’ but regulation remains the key
The World Health Organization (WHO) on Thursday issued a call for better regulation of the use and potential misuse of artificial intelligence (AI) in the healthcare industry.
Its new publication emphasises the importance of establishing safe and effective AI systems and of fostering dialogue among developers, regulators, manufacturers, health workers, and patients about using AI as a positive tool.
With the increasing availability of healthcare data and rapid progress in analytic techniques, WHO recognises AI's potential to enhance health outcomes by strengthening clinical trials, improving medical diagnosis, and supplementing healthcare professionals' knowledge and competencies.
‘Serious challenges’
When using health data, however, AI systems may access sensitive personal information, necessitating robust legal and regulatory frameworks to safeguard privacy, security, and integrity.
“Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation,” said Tedros Adhanom Ghebreyesus, WHO Director-General.
In response to the growing need to responsibly manage the rapid rise of AI health technologies, WHO is stressing the importance of transparency and documentation, risk management, and external validation of data.
“This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks,” said Tedros.
Complex regulations
The publication also addresses the challenges posed by important, complex regulations – such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States – with an emphasis on understanding the scope of jurisdiction and consent requirements, in the service of privacy and data protection.
AI systems are complex and depend not only on the code they are built with but also on the data they are trained on, said WHO. Better regulation can help manage the risks of AI amplifying biases in training data.
It can be difficult for AI models to accurately represent the diversity of populations, leading to biases, inaccuracies, or even failure.
To help mitigate these risks, regulations can be used to ensure that attributes such as gender, race, and ethnicity are reported and that datasets are intentionally made representative.
A commitment to quality data is vital to ensuring systems do not amplify biases and errors, the report stressed.