Is artificial intelligence a good or bad thing?
November 23, 2017

With movie theaters full of films about rogue robots taking over the world, many fear the impact artificial intelligence (AI) might have on human life.
On the occasion of this year's Queen's Lecture at the Technical University (TU) Berlin, DW caught up with AI and engineering expert Zoubin Ghahramani. We asked him about human and artificial intelligence, machine learning and what he thinks our future with AI might look like.
DW: Professor Ghahramani, before we speak about artificial intelligence and machine learning, could you define human intelligence for us?
Zoubin Ghahramani: When they hear the word 'intelligence,' people often think about the differences between individual humans, but actually the more interesting question is 'how are we different from other animals, plants and computers?'
Evolution made humans good at certain things and not so good at others, and each person is different. We tend to call the things we're good at 'intelligence,' which I think is unfair to other animals and even computers.
We humans like to think that we're special. Once we start understanding something, it takes away the mystique.
Chess, for example, was considered the pinnacle of intelligence. Then, in 1997, chess became one of the first major breakthroughs of AI, when world chess champion Garry Kasparov lost against IBM's supercomputer "Deep Blue." From the moment we understood how a computer could play chess, we said, 'Well, that's not really intelligence.'
What is the difference between artificial intelligence and machine learning? And how would you define an intelligent machine?
Ever since computers were first developed, people have been thinking about how to make them intelligent. Originally, people thought that the way to do it was by coming up with lots of rules, which the computer would know and act upon through logical reasoning.
Machine learning was a splinter group of AI, where people decided they didn't want to stuff more rules into a computer; they wanted computers to learn from patterns and data. Imagine you want to get a computer to recognize the differences between cats and dogs. I think no human could sit down and write a bunch of rules to do that, but we can all almost instantly tell the difference between a cat and a dog.
So how do we get machines to do this? We give the machine millions of images with the labels 'cat' and 'dog.' Then we give it a method that it can improve over time. That's the machine learning part. At first, the machine makes mistakes when identifying an image, but over time it modifies its computations to improve its performance. Eventually, you'll give it an image and it will recognize whether it shows a cat or a dog. And one day, it will translate between English and French, recognize speech, or maybe even drive a car.
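The learning loop Ghahramani describes can be sketched in a few lines of code. The example below is only a minimal illustration: the synthetic 'cat'/'dog' features, the simple logistic-regression model and the learning rate are assumptions made for this sketch, not the large-scale image systems he is referring to.

```python
# Minimal sketch of "a method the machine can improve over time".
# All data here is synthetic; real systems learn from millions of labeled images.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "images": feature vectors labeled 0 for 'cat' and 1 for 'dog'.
n, d = 1000, 20
true_w = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)          # the machine starts out knowing nothing
lr = 0.1                 # how strongly it corrects itself after each pass
for step in range(201):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # current guesses (probability of 'dog')
    grad = X.T @ (p - y) / n             # how wrong those guesses are, on average
    w -= lr * grad                       # adjust the computation to make fewer mistakes
    if step % 50 == 0:
        print(f"step {step:3d}: accuracy {((p > 0.5) == y).mean():.2f}")

# After training, a new example can be labeled 'cat' or 'dog'.
x_new = rng.normal(size=d)
print("dog" if 1.0 / (1.0 + np.exp(-(x_new @ w))) > 0.5 else "cat")
```

Run as-is, the printed accuracy rises over the training steps, which is the point of the passage: no rules are written by hand; the model's mistakes on labeled examples drive the improvement.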
Many people fear artificial intelligence. What impact will AI have on humankind?
AI is going to affect most aspects of our lives. It's similar in impact to other revolutions that have occurred in human history, like the agricultural revolution, the industrial revolution and the computer revolution. During the industrial revolution a lot of formerly manual labor processes became automated through things like the steam engine. And the computer revolution led to the automation of some very mechanical computations, such as accountancy.
What we see with AI is that some aspects of pattern recognition and decision making are becoming automated, which can have a tremendous number of positive impacts. Think about the effect it could have in medicine: algorithms analyzing medical images might be much faster, cheaper and more accurate at diagnosing certain diseases; computers can look at genetic data for specific patterns; and treatments could become more personalized.
The impact of AI on our cities will also be transformative: self-driving cars could make car ownership unnecessary, transportation would essentially become a cheap, efficient and environmentally friendly door-to-door service, and parking lots would disappear. The whole nature of cities could change, and people would have more time, because they could work while commuting.
We have to keep in mind, however, that AI is also going to cause disruption, especially social disruption. Often when a process is improved and made more efficient, the nature of employment around the process will change, which can displace people. We need to prepare for this, because we can't stop technological progress. In the long run, it can increase everybody's living standard, productivity and health. On the other hand, we need to make sure that AI doesn't increase inequality and that its benefits are widely spread.
On that note, one of society's biggest worries seems to be the fact that increased use of AI could lead to a loss of jobs and make workers redundant. Is this fear justified?
The many people who have studied this question have found that it's not entire jobs that will become automated, but specific tasks. Eventually some jobs will go. However, I like to think of AI not as replacing people, but as giving people some sort of superpower.
If we compare our modern life to life a hundred years ago, we really do have superpowers: we can fly around the world in a few hours, we can communicate with people around the world, we can find our way around cities we've never been to, and we can summon up knowledge in different languages just by pulling something out of our pockets. I like to think that with AI we're building tools that will give us superpowers.
The key question is how we are going to use those superpowers. Are we going to use them to make everybody's standard of living better, make transportation more efficient, make people's lives healthier, increase global happiness, or prevent war? Or are we going to use them to damage humanity and our world?
I actually worry more about the human side than the machines, because any technology in the wrong hands can be misused. We have to make sure that we have safeguards against that.
Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, UK, and Chief Scientist at Uber.