Deep learning, artificial intelligence, machine learning, neural networks: these terms are already common in the research and enterprise sectors, but you should expect to see them in more mainstream devices going forward. While autonomous vehicles draw much of the attention around AI and deep learning, don’t be surprised to start seeing the technology in mobile and IoT devices over the next few years.
At its GTC event in San Jose, NVIDIA announced a partnership with world-leading SoC designer Arm on an upcoming product portfolio centered on bringing machine learning to IoT devices.
You’ll be forgiven if you haven’t heard of Arm’s Project Trillium, as it’s a relatively new initiative to make smarter Internet-enabled devices. Part of the groundwork for the project has already been field-tested, so to speak, in the Hive security cameras currently on the market. Trillium itself, however, is about building a standardized framework for IoT devices with “AI at the edge” – meaning locally accelerated inferencing that doesn’t rely on cloud services.
The partnership with NVIDIA extends Trillium by adding another architecture to these IoT devices: NVIDIA’s Deep Learning Accelerator (NVDLA). This open-source architecture has its roots in NVIDIA’s Xavier, the autonomous-machine SoC at the heart of the DRIVE platform being used by hundreds of companies working on various forms of autonomous vehicles and robotaxis.
The inclusion of NVDLA is important because many companies have already invested significant resources into NVIDIA’s other deep learning platforms and architectures: Volta, Tensor Cores, TensorRT, and the aforementioned Xavier. While those systems do most of the heavy lifting – the time-consuming training side of deep learning – putting that work into action still requires devices in the field to make the decisions, which is the inferencing part of the equation.
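The training/inference split described above can be sketched with a toy model. Note this is purely illustrative, using a plain linear model in NumPy; it is not NVDLA or TensorRT code, just a sketch of the idea that expensive learning happens offline while the device in the field only runs a cheap forward pass.

```python
import numpy as np

# "Data center" side: the time-consuming training step, which in practice
# would run on Volta-class hardware rather than a least-squares solve.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # training inputs
true_w = np.array([2.0, -1.0, 0.5])    # hidden relationship to learn
y = X @ true_w                         # training targets

# Fit the model (stand-in for gradient-based deep learning training).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Edge device" side: only the learned weights ship to the device.
# Inference is a single cheap forward pass with no training machinery.
def infer(weights, sample):
    return float(weights @ sample)

prediction = infer(w, np.array([1.0, 1.0, 1.0]))
```

The point of the separation is that `infer` needs only the frozen weights and a multiply-accumulate, which is exactly the kind of workload a small accelerator like NVDLA is built for.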
Arm designed Trillium to support additional frameworks, and this is the payoff: it lets researchers and developers ship their inferencing engines in products without having to learn yet another language or rely on the platform’s transcoding capabilities. With NVIDIA’s extensive experience in deep learning and Arm’s ubiquity in IoT, the partnership makes a lot of sense, so it will be interesting to see what sort of products get released in the near future.