AI will match human intelligence by 2062, claims UNSW expert

India Blooms News Service | @indiablooms | 05 Nov 2018, 09:54 am

Sydney, Nov 5 (IBNS): Scientia Professor Toby Walsh tells the Festival of Dangerous Ideas that Artificial Intelligence is less than 50 years away from matching humans.

The idea that Artificial Intelligence will learn unique human traits like adaptability, creativity and emotional intelligence is something that many in society consider to be an unlikely or distant possibility.

But Toby Walsh, Scientia Professor of Artificial Intelligence at UNSW Sydney, has put a date on this looming reality. He considers 2062 the year that artificial intelligence will match human intelligence, although a fundamental shift has already occurred in the world as we know it.

Speaking at the Festival of Dangerous Ideas, Walsh argued that we are already experiencing the risks of artificial intelligence that seem to be so far in the future.

“Even without machines that are very smart, I’m starting to get a little bit nervous about where it’s going and the important choices we should be making”, Walsh said.

The key challenge will be to avoid apocalyptic rhetoric around AI and to determine how to move forward in the new age of information.

Weapons of mass persuasion

Privacy concerns about the collection of personal data are nothing new. Citing the Cambridge Analytica scandal, Walsh argues that we should be more sceptical about how data is misused by tech companies.

“A lot of the debate has focused on how personal information was stolen from people, and we should be rightly outraged by that,” Walsh said.

“But there is another side to the story that I’m surprised hasn’t gotten as much attention from the media, which is that the information was used very actively to manipulate how people were going to vote.”

Information is the currency of today’s tech giants, and there is a growing fear that many people are in denial about, or even complicit in, just how much data is collected about them on a daily basis. According to Walsh, breaches of data privacy will occur more often and are becoming increasingly normalised.

“Many of us have smartwatches that are monitoring our vital signs; our blood pressure, our heartbeat, and if you look at the terms of service, you don’t own that data,” Walsh explained.

“We’re giving up our analogue privacy, the most personal things about us. Just think what you could do as an advertiser if you could tell how people really respond to your adverts.

“You can lie about your digital preferences, but you can’t lie about your heartbeat.”

The ethics of killer robots

Untangling the ethics of machine accountability will be the second fundamental shift in the world as we know it, according to Walsh.

“Fully autonomous machines will radically change the nature of warfare and will be the third revolution in warfare,” Walsh said.

But using autonomous machines as weapons of war poses an ethical dilemma – can you hold a machine accountable for death?

“Machines have no moral compass, they are not sentient, they don’t suffer pain and they can’t be punished,” Walsh added.

“This takes us into interesting new legal territory of who should be held responsible, and there is no simple answer.”

Artificial Intelligence is developed by learning from examples, so the key driver of its behaviour is the environment it is exposed to, more so than the programmer.

Walsh believes the challenge is creating machines that are aligned with human values, a problem already visible on platforms driven by Artificial Intelligence.

“Facebook is an example of the alignment problem: it is optimised for your attention, not for creating political debate or for making society a better place,” Walsh said.

But it’s not all doom and gloom, according to Walsh. Artificial Intelligence isn’t necessarily heading towards an apocalyptic scenario.

“The future is not fixed. There is this idea that technology is going to shape our future and that we are going to have to deal with it, but this is the wrong picture to think of because society gets to push back and change the technology,” he said.  

Instead of being proponents of technological determinism, Walsh argued that we need to push for societal determinism, ensuring that we build trustworthy systems with distinct lines of accountability.
