Wednesday, 22 November 2017

Nvidia’s Tesla V100 GPU gets broad backing as server vendors eye AI workloads

The major server vendors are lining up behind Nvidia’s Tesla V100 GPU accelerator in a move that is expected to make artificial intelligence and machine learning workloads more mainstream.

Dell EMC, HPE, IBM and Supermicro outlined servers built on Nvidia’s latest GPU accelerators, which are based on the Volta architecture from the graphics chip maker. Nvidia’s V100 GPUs deliver more than 120 teraflops of deep learning performance per GPU. That throughput effectively takes the speed limit off AI workloads.
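To put that 120-teraflops figure in perspective, a quick back-of-envelope calculation shows how long a fixed amount of training work would take at that throughput. The workload size and sustained-utilization fraction below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope: time to complete a training workload on a V100
# at Nvidia's quoted 120 teraflops of deep learning throughput.

PEAK_TFLOPS = 120.0      # Nvidia's quoted deep learning peak for the V100
flops_of_work = 1e15     # hypothetical: 1 petaflop of training work
utilization = 0.5        # hypothetical sustained fraction of peak throughput

# Effective throughput in flops/sec, then elapsed time in seconds.
effective_flops = PEAK_TFLOPS * 1e12 * utilization
seconds = flops_of_work / effective_flops
print(f"{seconds:.2f} s for the workload")  # ~16.67 s under these assumptions
```

Even at half of the quoted peak, a petaflop of work clears in well under a minute, which is the kind of headroom the vendors are pitching for deep learning training.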

In a blog post, IBM’s Brad McCredie, vice president of Big Blue’s cognitive systems development, noted that Nvidia’s V100, along with its NVLINK, PCI-Express 4 and Memory Coherence technology, brings “unprecedented internal bandwidth” to AI-optimized systems.

The V100-based systems include:

  • Dell EMC’s PowerEdge R740, which supports up to three V100 GPUs for PCIe, and two higher-end systems, the R740XD and C4130.
  • HPE’s Apollo 6500, which will support up to eight V100 GPUs for PCIe, and the ProLiant DL380 system supporting up to three V100 GPUs.
  • IBM’s Power Systems with the Power9 processor will support multiple V100 GPUs. IBM will roll out its Power9-based systems later this year.
  • Supermicro also has a series of workstations and servers built around the V100.
  • Inspur, Lenovo and Huawei also launched systems based on the V100.

More: NVIDIA morphs from graphics and gaming to AI and deep learning
