
Earlier this year, Facebook introduced Glow, a new open source machine learning compiler intended for heterogeneous systems. The goal is to provide superior performance and improved energy efficiency by generating more efficient code. Here's how the team behind Glow described the project in its initial whitepaper:

In the Glow project, we focus on the lower parts of the software stack. We work to provide PyTorch and other frameworks with a low-level graph and a code generator for neural networks. The name Glow is an abbreviation for Graph-Lowering, which is the main technique that the compiler uses for generating efficient code. The Glow low-level graph will not replace the machine learning high-level graph, in the same way that the low-level intermediate representation in compilers does not replace the abstract syntax tree. We aim to provide a useful compiler toolkit that will allow hardware developers to focus on implementing efficient acceleration hardware, each of which likely differ in capabilities, and use Glow for automating compilation tasks such as instruction selection, memory allocation and graph scheduling. The full compiler toolkit is open-source and publicly available.
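The graph-lowering idea the whitepaper describes can be illustrated with a toy example: a high-level node such as a fully connected layer is rewritten into simpler low-level primitives that a hardware backend can implement directly. This is a minimal sketch of the concept, not Glow's actual API; the node names and structure here are invented for illustration.

```python
# Toy sketch of graph lowering (hypothetical IR, not Glow's real classes).
from dataclasses import dataclass, field


@dataclass
class Node:
    op: str
    inputs: list = field(default_factory=list)


def lower(graph):
    """Rewrite high-level nodes into simpler low-level primitives."""
    lowered = []
    for node in graph:
        if node.op == "FullyConnected":
            # FullyConnected(x, w, b) becomes MatMul(x, w) followed by
            # a broadcasted bias add -- two ops a backend can map to hardware.
            mm = Node("MatMul", node.inputs[:2])
            lowered.append(mm)
            lowered.append(Node("BroadcastAdd", [mm, node.inputs[2]]))
        else:
            lowered.append(node)
    return lowered


graph = [Node("FullyConnected", ["x", "w", "b"])]
print([n.op for n in lower(graph)])  # ['MatMul', 'BroadcastAdd']
```

Because every backend only has to handle the small set of low-level primitives, a new accelerator can reuse the same lowering passes rather than implementing every high-level operator itself.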

What Facebook is announcing now is a new suite of hardware partners that have pledged to support Glow in their own products. Cadence, Esperanto, Intel, Marvell, and Qualcomm have all committed to supporting Glow in silicon in future projects. The software isn't designed to generate code for only a single given architecture — Facebook intends to support a range of specialized machine learning accelerators from multiple companies, with corresponding performance improvements for multiple vendors. This support for hardware accelerators isn't limited to a single type of operation, either. FB's press release notes that the hardware-independent aspects of the compiler focus on math optimizations that aren't tied to any specific model. Glow also ships with a linear algebra optimizer, a CPU-based reference implementation (for testing hardware accuracy), and various test suites. The goal is to reduce the amount of time it takes hardware manufacturers to bring new devices to market.
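The CPU-based reference implementation mentioned above serves as a ground truth: an accelerator backend's output is compared against the CPU result within a numerical tolerance. The sketch below shows the general pattern with an invented ReLU example; the function names and tolerance are assumptions for illustration, not Glow's actual test harness.

```python
# Hypothetical sketch of validating accelerator output against a CPU reference.
def reference_relu(xs):
    """CPU reference implementation: the trusted, if slow, ground truth."""
    return [max(0.0, x) for x in xs]


def allclose(a, b, tol=1e-5):
    """Accept small floating-point drift between backends."""
    return len(a) == len(b) and all(abs(x - y) <= tol for x, y in zip(a, b))


# Pretend result from an accelerator backend, with tiny rounding error.
accel_out = [0.0, 0.0, 1.0000001, 2.5]
cpu_out = reference_relu([-1.0, 0.0, 1.0, 2.5])
print(allclose(accel_out, cpu_out))  # True
```

Element-wise comparison with a tolerance, rather than exact equality, is the standard choice here because accelerators often use lower-precision arithmetic than the CPU reference.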

Glow versus conventional CPU performance according to FB. Click to enlarge.

FB is putting serious effort behind Glow. The company launched version 1.0 of its PyTorch deep learning framework earlier this year, along with new object detection models, libraries for language translation, and Tensor Comprehensions for automatically synthesizing machine learning kernels. There's been a tremendous effort in recent years to build common frameworks for AI and ML that will run on a wide range of hardware, and Glow wants to be a part of it.

It's interesting to see the two companies not on this list: AMD and Nvidia. Both have a keen interest in AI/ML — AMD as a newcomer to the industry that wants to make its mark with a 7nm Vega data center product later this year, and Nvidia as a leader in the AI/ML space. AMD has participated in Facebook's Open Compute Project before, so it's possible we'll see some activity on this front at a later date.

Now Read: General AI is Here and Impala Is Its Name, Stanford Researchers Build AI Into Camera Optics, and MIT Creates AI to Optimize Brain Cancer Treatments