
Will robots inherit the earth?

25 June 2018
The dimensions of AI, with New Scientist's Alice Klein
Ahead of her appearance at the next Outside the Square event, the New Scientist reporter and University of Sydney Science graduate sits down to discuss the implications of Artificial Intelligence.
Alice Klein, Outside the Square panellist

Not just robots

Of all the areas of enquiry her role leads her to, artificial intelligence (AI) is of particular interest to Alice. From the outset, she's quick to explain that AI goes beyond traditional ideas about robots.

"I don't think anyone's agreed on a single definition, but I think of AI as any kind of computer system that does things we consider to be 'smart', whether that's predicting what you want to watch on Netflix, recognising faces in a crowd, or operating self-driving cars."

As a result, there are currently developments in AI which many of us might not recognise. Automation is a major benefit, Alice explains, as she gives the "pretty mundane" example of universities now using smart software to organise the timetables of thousands of students instead of someone doing it manually. Other advances utilise machine learning, a computer system used to find patterns in data.

"There are examples in medicine where they showed a machine learning system multiple examples of skin moles, labelled according to which ones later turned cancerous. The computer started to see patterns and tiny details in the moles that were related to later melanoma.

"There are AI systems that are actually better than dermatologists at working out the likelihood of a mole turning into skin cancer."

She goes on to describe other cases of AI being used for the benefit of society. A robot called MARIO has been built to increase social interaction for people with dementia by giving them access to the things they enjoy, such as reading materials, music, event reminders, family photos and more.

Artificial stupidity

Of course, Alice admits, there are downsides to AI.

"Sometimes," she says, "AI just doesn't work that well. And that can be dangerous."

For example, in the 1990s, researchers from the University of Pittsburgh wrote a computer program designed to predict the outcomes of patients suffering from pneumonia. The program went through data from approximately 750,000 patients in 78 hospitals in 23 states and found, oddly, that patients with asthma had better outcomes.

And this, according to Alice, is the problem with machine learning. "It doesn't tell you 'why'. So the system might see a relationship, but it won't tell you why that relationship exists."

Critics of machine learning describe this phenomenon as a 'black box'. It's a situation where the data goes into a box and a prediction comes out, but no one knows why or how it happens. This makes it hard for researchers to then evaluate and improve the machine when new data is available.

I ask Alice if they ever figured out the case from Pittsburgh with the pneumonia patients and she laughs.

"They actually found out that the algorithm was predicting that people with pneumonia and asthma did better because, if you arrive at the hospital with pneumonia and you have a history of asthma, the hospital will give you better care, as you're seen as a higher-risk patient.

"People have this idea that AI is an objective system that can make all of these great predictions, but they forget that the data it's using can be flawed."
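The confound Alice describes can be sketched in a few lines of Python. This is a hypothetical simulation, not the Pittsburgh data: the numbers and the `survives` function are invented purely to illustrate how extra care for high-risk patients can make a harmful condition look protective in the raw outcomes.

```python
import random

random.seed(0)

# Hypothetical simulation of the pneumonia/asthma confound: asthma itself
# worsens outcomes, but asthmatic patients are triaged as high risk and
# receive intensive care, which more than compensates.
def survives(has_asthma):
    p = 0.80                 # assumed baseline survival for pneumonia patients
    if has_asthma:
        p -= 0.10            # the direct effect of asthma is harmful...
        p += 0.15            # ...but the extra care outweighs it
    return random.random() < p

N = 5000
with_asthma = sum(survives(True) for _ in range(N)) / N
without_asthma = sum(survives(False) for _ in range(N)) / N

# The raw outcome data makes asthma look protective -- exactly the
# spurious pattern a model trained on outcomes alone would learn.
print(f"survival with asthma:    {with_asthma:.2f}")
print(f"survival without asthma: {without_asthma:.2f}")
```

A model fitted to this data would report that asthma predicts survival, and without the 'why', no one would spot that the real driver is the level of care, not the asthma.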

Can we keep up?

According to Swedish philosopher Nick Bostrom, AI will reach the technological singularity (the point at which AI becomes as smart as us) within the next 50 to 100 years. After that, it will only get smarter than us.

"It's not that AI is going to become human," says Alice. "It's that they're going to become smarter than humans. That, I think, is more of a worry. The fact is that we're not making human-like systems, we're making intelligent systems. And they have no inherent care for humans."

We could tell an AI to stop people from hurting each other and it might decide the best way to achieve that is to put all humans in cages.

"Some people think if we have enough regulation, we can make sure AI is contained in a box. But even if you put an AI in a big steel box, if you make it smart enough, it's going to work out how to get out of that box."

The fact is that as long as AI is based on data that has been influenced by humans, there will always be an inherent bias. The question now is whether we can keep up with the technology that we have created.

"Throughout human history, we've had these big technological shake-ups. We used to exist in mainly agricultural societies, but then came the industrial revolution followed by manufacturing. Now we've moved towards a service-based industry. Over time, we've adjusted to each of those shifts. However, I think the AI revolution is happening so fast that it's unclear whether we can adjust quickly enough."


Alice will debate the merits and ethics of AI at the Outside the Square event, Ethical AI: Are robots our friends?, 2 August 2018 at the Old Rum Store, Chippendale. Book tickets here.

This article was authored by Theodora Chan (BA, MECO 2010; BA, HONS 2012), Co-Founder and Content Director at Pen and Pixel.
