Microsoft has been using field-programmable gate arrays (FPGAs) to improve the performance and efficiency of Bing and Azure for the past few years.
But next year, Microsoft plans to make this kind of FPGA processing power available to developers, who will be able to use it to run their own tasks, including intensive artificial-intelligence workloads such as deep neural networks (DNNs).
At its Build developers conference this spring, Azure CTO Mark Russinovich outlined Microsoft’s big-picture plans for delivering “Hardware Microservices” via the Azure cloud. Russinovich told attendees that once Microsoft solves some lingering security and other issues, “we will have what we consider to be a fully configurable cloud.”
“This is the core of an AI cloud,” Russinovich said, and “a major step toward democratizing AI with the power of FPGA.” (A good recap of Russinovich’s remarks can be found in this article on The New Stack.)
FPGAs are chips that can be custom-configured after they’re manufactured. Microsoft researchers have been working in the FPGA space for more than a decade.
More recently, Microsoft has added FPGAs to all of its Azure servers in its own datacenters and has put FPGAs into some of the machines that power Bing’s indexing servers as part of its Project Catapult efforts. Microsoft’s Azure Accelerated Networking service, which is generally available for Windows and in preview for Linux, also uses FPGAs under the covers.
In May, Russinovich said Microsoft didn’t have a firm timeline for bringing hardware microservices and FPGA cloud-processing power to customers outside the company. But this week, Microsoft officials said the goal is to do so sometime in calendar 2018.
Microsoft’s Hardware Microservices are built on Intel FPGAs. (Intel bought FPGA maker Altera in 2015.) These chips, coupled with Microsoft’s framework, will provide gains in speed, efficiency, and latency that are particularly suited to big-data workloads.
Microsoft also is working specifically on the DNN piece via a project codenamed “Brainwave.” Microsoft demonstrated Brainwave publicly at the company’s Ignite 2016 conference, using it to run a massive language-translation demonstration on FPGAs.
Microsoft officials were planning to discuss Brainwave at the company’s recent Faculty Research Summit in Redmond, which was entirely dedicated to AI, but references to Brainwave appear to have been removed from the updated agenda.
Brainwave is a deep-learning platform that runs on FPGA-based Hardware Microservices, according to a Microsoft presentation on its configurable-cloud plans from 2016. That presentation mentions “Hardware Acceleration as a Service” across datacenters or the Internet. Brainwave distributes neural-network models across as many FPGAs as needed.
Microsoft is not the only cloud vendor turning to specialized silicon to accelerate AI tasks. Amazon already offers an FPGA-backed EC2 F1 instance type for programming Xilinx FPGAs, along with a hardware development kit for the FPGAs. Google has been doing work around training deep-learning models in TensorFlow, its machine-learning software library, and has built its own underlying Tensor Processing Unit (TPU) silicon.