Companies like Facebook and Google are always looking for new applications for AI and machine learning. I caught up with Nick Heath to talk about his interview with the AI Engineering Chief for Facebook. The following is an edited transcript of our interview.
Nick Heath: So there are a couple of major obstacles. A big one is that it requires so much data to train these machine learning systems. They’re basically learning by example, so you might think of a system that has millions and millions of photos of cats, each labeled to say this photo has a cat in it. But the problem is that to drive the accuracy rates up, you need more and more data. And as time goes on, the manpower required to label millions of images, and it’s now up to the billion scale, is just too much. So that’s really one of the big problems: having enough manpower to label the data that these systems need.
The other side of that is that there’s also a need for a lot of computing power. Speaking to me, the Facebook AI Platform Chief was saying that training just one of these image recognition models requires so much computing power that if you gave every person in the city of London one operation to do, it would take them 4,000 years to complete it. So you get to that scale of data, and you get to a scale of computing power that requires an entire data center.
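The transcript doesn’t spell out the rate behind that comparison, but assuming roughly 9 million Londoners each performing one operation per second, the back-of-the-envelope arithmetic can be sketched like this (population and rate are assumptions, not figures from the interview):

```python
# Rough scale of training one large image recognition model,
# per the London comparison in the interview.
SECONDS_PER_YEAR = 365 * 24 * 3600   # ~3.15e7 seconds
LONDON_POPULATION = 9_000_000        # assumption: ~9 million people
YEARS = 4_000                        # figure quoted in the interview

# Total operations if everyone did one operation per second for 4,000 years.
total_ops = LONDON_POPULATION * SECONDS_PER_YEAR * YEARS
print(f"{total_ops:.1e} operations")  # ~1.1e18 operations
```

Around 10^18 operations in total, which is the kind of workload that only a data center full of accelerators can finish in a practical amount of time.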
Karen Roby: So that is a major task to even begin to process, Nick. But what about solutions? Did he talk about that, about how they’re going to face these challenges?
Nick Heath: Yeah, so basically they’re trying to cut the humans out of the loop, because it’s the manual overhead that’s the real problem. So they’re looking at automated solutions. What Facebook has done is use the hashtags associated with images on Instagram to label them. And by doing that, they were able to create a labeled data set of 3.5 billion images. So this is what’s driving them up to that huge scale of data they really need to train these systems.
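The hashtag approach is a form of weak supervision: tags that users already attach to images serve as noisy labels, so no human annotator is needed. A minimal sketch of the idea, with a hypothetical hashtag-to-label mapping and made-up image records (Facebook’s actual pipeline is far larger and handles much noisier data):

```python
# Hypothetical mapping from noisy user hashtags to canonical labels.
HASHTAG_TO_LABEL = {
    "#cat": "cat", "#kitten": "cat", "#catsofinstagram": "cat",
    "#dog": "dog", "#puppy": "dog",
}

def weak_labels(images):
    """Yield (image_id, labels) pairs, using hashtags as noisy labels."""
    for image in images:
        labels = {HASHTAG_TO_LABEL[t]
                  for t in image["hashtags"] if t in HASHTAG_TO_LABEL}
        if labels:  # skip images with no recognized hashtags
            yield image["id"], sorted(labels)

# Made-up example records standing in for Instagram posts.
images = [
    {"id": 1, "hashtags": ["#kitten", "#cute"]},
    {"id": 2, "hashtags": ["#sunset"]},
    {"id": 3, "hashtags": ["#dog", "#cat"]},
]
print(list(weak_labels(images)))  # [(1, ['cat']), (3, ['cat', 'dog'])]
```

The labels are noisy, since people tag images for all sorts of reasons, but at billions of examples the sheer volume compensates for individual mislabels.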
And then from the hardware point of view, what you’re finding is that companies like Google are creating their own custom chips, designed to excel at the type of calculations that machine learning requires. All of the customization is done at the silicon level, in the hardware. An example of these so-called application-specific integrated circuits, or ASICs, is Google’s Tensor Processing Units, which are just starting to roll out across its cloud platform.
SEE: Artificial intelligence: Trends, obstacles, and potential wins (Tech Pro Research)
Karen Roby: Nick, what did he mean in your interview when he talked about machine learning’s Moore’s Law?
Nick Heath: Well, Moore’s Law was the observation made by Intel co-founder Gordon Moore that the number of transistors on a chip would double roughly every two years. And that’s really what’s driven a lot of the advances in computing over recent decades. It’s starting to slow down now, but it’s been the engine of change in the computing industry. And what he was saying was that progress in the field of artificial intelligence is going to require research breakthroughs. This is why he referred to Moore’s Law: he pointed to the number of research papers being written now that cite a seminal paper on machine learning by a researcher named Yann LeCun.

The number of citations of that paper is growing at an exponential rate, and that reflects an explosion in the rate of machine learning research, which can only increase the chance of breakthroughs. Because without the research being there, you’ve got no chance of breakthroughs, but with so much research going on at the moment, he’s saying this is going to be the engine that really drives progress in AI forward.
Karen Roby: Wow, fascinating changes there, Nick. Thanks so much for talking with us. For more on Nick’s interview, check out TechRepublic.