Monday, March 31, 2014

drag2share: Why Nvidia thinks it can power the AI revolution

Source: http://gigaom.com/2014/03/31/why-nvidia-thinks-it-can-power-the-ai-revolution/

Smarter robots and devices are coming to a home near you, and chipmaker Nvidia wants to help make it happen. It won’t develop the algorithms that dictate their behavior or build the sensors that let them take in our world, but its graphics-processing units, or GPUs, might be a great way to handle the heavy computing necessary to make many forms of artificial intelligence a reality.

Most applications don’t use GPUs exclusively, but rather offload the most computationally intensive tasks onto them from standard microprocessors. Called GPU acceleration, the practice is very common in supercomputing workloads and is becoming ubiquitous in computer vision and object recognition, too. In 2013, more than 80 percent of the teams participating in the ImageNet image-recognition competition used GPUs, said Sumit Gupta, general manager of the Advanced Computing Group at Nvidia.
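For readers unfamiliar with the offload pattern Gupta describes, here is a minimal, hypothetical CUDA sketch (not from Nvidia or the article): the CPU copies data to the GPU, launches a kernel that does the heavy arithmetic in parallel, and copies the result back. The array sizes, values and names are illustrative only.

```cuda
// Hypothetical sketch of GPU acceleration: the host (CPU) offloads a
// data-parallel SAXPY computation (y = a*x + y) to the GPU, then copies
// the result back.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                 // ~1M elements (illustrative)
    size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);                                // allocate GPU memory
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);     // offload inputs
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);           // heavy work on the GPU

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);     // bring results back
    printf("y[0] = %f\n", hy[0]);                          // expect 4.0

    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```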

In March 2013, Google acquired DNNresearch, a deep learning startup co-founded by University of Toronto professor Geoff Hinton. Part of the rationale behind that acquisition was the performance of Hinton’s team in the 2012 ImageNet competition, where the group’s GPU-powered deep learning models easily bested previous approaches.

[Image source: Nvidia]

“It turns out that the deep neural network … problem is just a slam dunk for the GPU,” Gupta said. That’s because deep learning algorithms often require a lot of computing power to process their data (e.g., images or text) and extract the defining features of the things included in that data. Especially during the training phase, when the models and algorithms are being tuned for accuracy, they need to process a lot of data.
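To illustrate why Gupta calls deep neural networks a “slam dunk” for the GPU, here is a hypothetical sketch of a single fully connected layer written as a CUDA kernel: each output neuron is an independent chain of multiply-adds, so each can be computed by its own thread. The kernel, its names and the ReLU activation are illustrative assumptions, and it would be launched with the same host-side offload pattern as the SAXPY sketch above.

```cuda
// Hypothetical sketch of why deep nets suit GPUs: a fully connected layer is
// many independent multiply-accumulate chains, one per output neuron, so each
// output can be assigned to its own GPU thread. Dimensions are illustrative.
__global__ void dense_forward(int in_dim, int out_dim,
                              const float *input,    // [in_dim]
                              const float *weights,  // [out_dim * in_dim], row-major
                              const float *bias,     // [out_dim]
                              float *output)         // [out_dim]
{
    int o = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per output neuron
    if (o >= out_dim) return;

    float acc = bias[o];
    for (int i = 0; i < in_dim; ++i)                 // dot product with one weight row
        acc += weights[o * in_dim + i] * input[i];

    output[o] = fmaxf(acc, 0.0f);                    // ReLU activation (assumed)
}
```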

Numerous customers are using Nvidia’s Tesla GPUs for image and speech recognition, including Adobe and Chinese search giant Baidu. GPUs are being put to work on other aspects of machine learning as well, Gupta noted. Netflix uses them (in the Amazon Web Services cloud) to power its recommendation engine, Russian search company Yandex uses them to power its search engine, and IBM uses them to run clustering algorithms in Hadoop.

Nvidia might be so excited about machine learning because it has spent years pushing GPUs as a general-purpose computing platform, not just a graphics and gaming chip, with mixed results. The company has tried to simplify programming its processors via the CUDA language it developed, but Gupta acknowledged there’s still an overall lack of knowledge about how to use GPUs effectively. That’s why so much real innovation still rests with the large users that have the parallel-programming skills necessary to take advantage of 2,500 or more cores at a time (and even more in multi-GPU systems).
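As a rough illustration of the parallel-programming skill Gupta is referring to, the sketch below uses the common CUDA grid-stride-loop idiom to spread work across however many cores a device exposes. The kernel and its names are hypothetical, not taken from Nvidia’s materials.

```cuda
// Hypothetical sketch of a common CUDA idiom: a grid-stride loop lets one
// kernel launch keep thousands of GPU cores busy regardless of problem size.
__global__ void scale(int n, float factor, float *data) {
    // Each thread starts at its global index and advances by the total number
    // of threads in the grid, so the work is spread over every available core.
    int stride = blockDim.x * gridDim.x;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        data[i] *= factor;
}

// Typical launch (illustrative): enough blocks to occupy the whole GPU, e.g.
//   scale<<<numBlocks, 256>>>(n, 0.5f, d_data);
// where numBlocks might be chosen as a multiple of the device's
// multiprocessor count.
```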

[Image source: Nvidia]

However, Nvidia is looking beyond servers and into robotics to fuel some of its machine learning ambitions over the next decade. Last week, the company announced its Jetson TK1 development kit, which Gupta called “a supercomputing version of Raspberry Pi.” At $192, the kit is programmable using CUDA and includes all the ports one might expect to see, as well as a Tegra K1 system-on-a-chip (the latest version of Nvidia’s mobile processor) that combines a 192-core Kepler GPU with an ARM Cortex-A15 CPU and delivers roughly 300 gigaflops of performance.
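As a hedged example of what “programmable using CUDA” means in practice on a board like this, the following sketch uses standard CUDA runtime calls to report what the on-chip GPU exposes. It is a generic device query, not Jetson-specific code, and the figures it prints would depend on the actual hardware.

```cuda
// Hypothetical sketch: querying the GPU with the CUDA runtime API, roughly
// what a developer might run first on a Jetson-class board to confirm what
// the on-chip Kepler GPU reports.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);                      // number of CUDA devices
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);           // standard device properties
        printf("Device %d: %s\n", d, prop.name);
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
        printf("  Global memory:      %.1f MB\n",
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}
```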

Well into the 1990s, that type of performance would have put Jetson at or near the top of any list of the world’s fastest supercomputers.

The company is touting the kit for computer vision, security and other computations that will be critical to mainstream robotics, and Gupta raised the question of how fast the internet of things might advance if smart devices came equipped with this kind of power. While Google and Facebook might train massive artificial intelligence models across hundreds or thousands of servers (or, in Google’s case, on a quantum computer) in their data centers, one big goal is to get the resulting algorithms running on smartphones to reduce the amount of data that needs to be sent immediately to the cloud for processing. Three hundred gigaflops embedded into a Nest thermostat or a drone, for example, is nothing to sneeze at.

Nvidia expects the rise in machine learning workloads to drive “pretty good” revenue growth in the years to come, Gupta said, but beyond the obvious examples he’s not ready to predict the types of computations its GPUs will end up running. “We’ve only just figured out how to use machine learning for a few things, but in fact it’s applicable to a lot of things,” he said. With respect to the Jetson kit, he added, “We’re still trying to imagine what you can do with it.”

