Rights expert warns AI boom is facilitating online child abuse
In another indicator of the importance of AI oversight, a top UN-appointed human rights expert warned on Monday that the volume of reported child sexual abuse material online has increased by 87 per cent since 2019.
Calling for more action to eradicate child exploitation online, the UN Special Rapporteur on the sale and sexual exploitation of children, Mama Fatima Singhateh, said that generative AI and eXtended Reality software had made the problem worse.
It is now possible to create computer-generated “deepfakes” and so-called “deepnudes” and to distribute them in encrypted form and without built-in safety mechanisms, the Special Rapporteur said.
Reliability issue
And ahead of Safer Internet Day on 6 February, Singhateh alleged that the private sector and big tech were “less reliable” than they had promised to be, with “serious ingrained biases, flaws in programming and surveillance software to detect child abuse”, and a failure to crack down on child sexual abuse and exploitation networks.
The Special Rapporteur welcomed the UN Secretary-General’s AI Advisory Body which is tasked with making recommendations for the establishment of an international agency to govern and coordinate Artificial Intelligence.
She also insisted that governments and companies should work together to solve the issue of child abuse by including victims’ voices “in the design and development of ethical digital products to foster a safer online environment”.
Ethical AI agreement
Some of the world’s leading tech companies have signed a ground-breaking agreement with the UN to build more ethical Artificial Intelligence systems.
Lenovo Group, LG AI Research, Mastercard, Microsoft, Salesforce, Telefonica, the GSMA network of mobile operators, and INNIT have pledged to “integrate the values and principles” of UNESCO’s Recommendation on the Ethics of AI when designing and deploying AI systems.
The announcement came at the UN education, science and culture agency’s Global Forum on AI, taking place in Slovenia on Monday and Tuesday.
UNESCO forged a consensus among all its Member States in November 2021 to adopt the first global ethical framework for the use of AI.
‘Concrete commitment’
“Today, we are taking another major step by obtaining the same concrete commitment from global tech companies”, said UNESCO’s Director-General, Audrey Azoulay.
“I call on all tech stakeholders to follow the example of these first eight companies. This alliance of the public and private sectors is critical to building AI for the common good.”
In the first commitment of its kind to the UN, the agreement compels companies to fully guarantee human rights in the design, development, purchase, sale, and use of AI.
It states that due diligence must be carried out to meet safety standards, identify the adverse effects of AI, and take timely measures to prevent, mitigate or remedy them, in line with domestic legislation, UNESCO said in a press release.
The agreement also notes that testing new AI systems before they are released onto the market is essential; but given how quickly systems already on the market evolve, it calls for post-deployment risk assessments and mitigation practices as well.