Dr. Quoc Viet Le is a research scientist at Google Brain known for his path-breaking work on deep neural networks (DNNs). He is especially famous for his Ph.D. work in image processing under Andrew Ng, one of the pioneers of the DNN revolution. Le and Ng's work demonstrated how computers could learn complicated features and patterns in a way similar to how the mammalian brain learns.
This revitalized interest in DNNs and set the current giants of the computer industry, such as Google, Facebook and Microsoft, racing to incorporate AI techniques into their software. DNNs perform effectively in tasks such as image processing, handwriting recognition and game-playing, and are being explored as solutions to other problems such as self-driving cars, robotics, medical diagnosis and environmental and social problems.
Quoc Le was listed as one of the top tech innovators under 35 by the MIT Technology Review. At EmTech Asia, we asked Quoc Le a few questions about his take on neural networks: their development, philosophy, challenges and future role in enabling or threatening humanity.
In part two of our interview with Quoc Le, we discuss the bottlenecks in the development of neural networks, his take on adopting an open philosophy for artificial intelligence (AI) development, its future and whether it could be a threat to humanity. Read on for insights from one of the brains behind making computers brainier. (Read part one of this interview here.)
Q: You told us about the rapid strides that deep neural networks have made so far. What is the current bottleneck in the development of this technology?
Le: Two things I can think of.
1. Scaling up the networks that we are training. Currently, the DNNs we are working with are about 100 times bigger than what people have tried before, and now we will try for 1,000 times. But we are still a few orders of magnitude away from the size of a rat or cat brain, let alone the human brain. So one thing we want to do is scale up to the size of an animal brain. We will face some challenges in this.
2. Mastering unsupervised learning
The training we have succeeded with so far is supervised learning – using data where the labels, or 'answers', are known. Let me try to explain this. Imagine learning while walking around with a teacher who tells you every day what to learn, and the answers to certain questions. What you learn comes from the answers the teacher gives you. That is supervised learning. If you were observing a collection of images, for example, the teacher points to each one and tells you what it is – whether it is an image of a cat, dog, car, house, etc.
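Le's teacher analogy can be made concrete with a minimal sketch: every training example carries a label (the teacher's answer), and the learner predicts by consulting those answers. The 1-nearest-neighbour rule and the toy data below are illustrative assumptions, not anything from the interview.

```python
# Supervised learning in miniature: each training example is a (features,
# label) pair, where the label is the "teacher's answer".

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`."""
    best_label, best_dist = None, float("inf")
    for features, label in train:
        dist = sum((f - q) ** 2 for f, q in zip(features, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Labelled observations: the teacher has already named each one.
train = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
         ((5.0, 5.0), "dog"), ((5.1, 4.8), "dog")]

print(nearest_neighbor(train, (1.1, 1.0)))  # → cat
print(nearest_neighbor(train, (4.9, 5.2)))  # → dog
```

The entire signal here comes from the labels; remove them and this learner can do nothing, which is exactly the gap Le turns to next.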
What we don't have enough of is unsupervised learning. In this case, you walk around with no teacher. You have observations, but nobody tells you what they really are, i.e., what the answers are. If you were observing images, for example, nobody tells you what categories they fall under. But given this collection of images, you can learn some simpler representation of them, identify some patterns in the data, and use those later for some purpose. This is something humans do well, but machines do not, yet. This improvement must come on the software side, and it is complicated.
Q: How has unsupervised learning been used so far?
Le: It has been used with speech and handwriting recognition.
As a possible future idea in medical diagnosis and healthcare – suppose we want to learn only from the good doctors, which limits the amount of labelled data we can use. But we still have a lot of medical records, right? How do we learn from this large set of records if we don't have the labels? As a first step, we can group the patients into different possible categories based on their symptoms, even if we have no idea what those categories mean. So we can do things like that, using unsupervised learning to make our job easier.
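The patient-grouping idea Le describes is essentially clustering: grouping unlabelled records by similarity without knowing what the groups mean. Below is a minimal sketch using a tiny k-means on made-up binary symptom vectors; the records, the encoding and the choice of k-means are assumptions for illustration, not his method.

```python
def kmeans(points, k, iters=10):
    """Tiny k-means over tuples; returns a cluster index for each point."""
    # Farthest-point seeding keeps the initial centres well separated.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda pt: min(
            sum((a - b) ** 2 for a, b in zip(pt, c)) for c in centers)))
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each record joins its nearest centre.
        labels = [min(range(k), key=lambda c: sum(
            (a - b) ** 2 for a, b in zip(pt, centers[c]))) for pt in points]
        # Update step: each centre moves to the mean of its members.
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return labels

# Six unlabelled records as binary symptom vectors: (fever, cough, rash).
records = [(1, 1, 0), (1, 1, 0), (1, 0, 0),
           (0, 0, 1), (0, 1, 1), (0, 0, 1)]
print(kmeans(records, k=2))  # → [0, 0, 0, 1, 1, 1]
```

No label ever enters the algorithm, yet the records separate into a "fever" group and a "rash" group – categories the method discovered without knowing what they mean, which is the "first step" Le describes.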
Q: Other than scaling up networks and doing unsupervised learning, are there any other steps you suggest need to be taken in this field?
Le: Yes, we need to improve our understanding of neural networks. Currently our understanding of DNNs, and of why they work so well, is still limited. Back in the 1990s, one obstacle for people working on neural nets was understanding how they work. It was a big problem then, and it actually convinced scientists not to work on neural networks for a period of time. They didn't want to work on something they didn't really understand.
Fast forward a decade or two. Today, we see that even though we can use DNNs well, our understanding of deep learning is still limited!
A better understanding would be great. It would be good for issues like safety and security too.
Q: (With respect to that suggestion) The public is mostly interested in the applications of neural nets – say, automated cars. Do you think they would care how the black box of the neural net system works, as long as it guarantees safety?
Le: I do think so. To take your example of self-driving cars: an algorithm that identifies a car from just a black pixel in an image is far different from one that identifies it from more concrete and reliable features like a tyre or windows. So I'm sure it is better if we know how the AI is identifying something, or what it is using to identify it.
Q: You mentioned following an open philosophy for the development of AI. Why do you think this is important?
Le: Yes. With new technology, the hardest part is getting people interested in working on it. If a company's approach to a certain technology is open, a lot of people are inspired to work at such companies – it has happened to many of my friends. In my case, Google happened to be a very open company, and that is a factor that persuaded many of my friends to join.
I think it will happen in the future as well. Companies like Google, Facebook, Microsoft and Baidu have opened up. That's a good sign. Now people have more open companies to choose from. Researchers care deeply about making a big impact in the world. As soon as a technology is forced to be developed in a secretive way, we will fail to attract talented people and fail in our mission to build good AI. So I think we will stay open as long as we want to do this.
Q: But when you keep a technology open, doesn't that mean you no longer have control over who deploys it?
Le: Right now it's hard to say what is best, but that is a question for the far future. Maybe deep learning isn't really the key technology that will lead to a breakthrough at some point; maybe it could be something else, right?
Q: There are many famous people raising concerns about AI and where deep learning will go. There are concerns that we have no clue what could happen if AI advances rapidly and leads to a technological singularity – whether, for instance, it could eventually lead to mankind's destruction.
Le: I won't dismiss that as one possible future. But the time frame for something like what Elon Musk describes to happen is large, maybe a thousand years.