IT Brief Australia - Technology news for CIOs & IT decision-makers

GTC18 - NVIDIA cranks up GPU power for deep learning

Wed, 28th Mar 2018

NVIDIA has unveiled a series of advances to its deep learning computing platform that together deliver an 8x performance boost on deep learning workloads compared with the previous generation.

"The extraordinary advances of deep learning only hint at what is still to come," says NVIDIA founder and CEO Jensen Huang.

"Many of these advances stand on NVIDIA's deep learning platform, which has quickly become the world's standard. We are dramatically enhancing our platform's performance at a pace far exceeding Moore's law, enabling breakthroughs that will help revolutionise healthcare, transportation, science exploration and countless other areas."

Tesla V100 gets double the memory

The Tesla V100 GPU has received a 2x memory boost and is now equipped with 32GB of memory.

V100 GPUs will help data scientists train deeper and larger deep learning models that are more accurate than ever.

They can also improve the performance of memory-constrained HPC applications by up to 50% compared with the previous V100 16GB version.

The new V100 32GB GPU is immediately available across the complete NVIDIA DGX system portfolio.

NVSwitch: hyper-connected GPUs

NVSwitch offers 5x higher bandwidth than the best PCIe switch, allowing developers to build systems with more GPUs hyper-connected to each other.

NVSwitch allows system designers to build even more advanced systems that can flexibly connect any topology of NVLink-based GPUs.

GPU-accelerated deep learning and HPC software stack

The updates to NVIDIA's deep learning and HPC software stack are available at no charge to its developer community.

Among its updates are new versions of NVIDIA CUDA, TensorRT, NCCL and cuDNN, and a new Isaac software developer kit for robotics.

Additionally, through close collaboration with leading cloud service providers, every major deep learning framework is continually optimised for NVIDIA's GPU computing platform.

DGX-2: the two petaflop system

NVIDIA's new DGX-2 system, 'the biggest GPU ever made', reached the two petaflop milestone by drawing from a wide range of technology advances developed by NVIDIA at all levels of the computing stack.

It is the first system to debut NVSwitch, which enables all 16 GPUs in the system to share a unified memory space.

Combined with a fully optimised, updated suite of NVIDIA deep learning software, DGX-2 is purpose-built for data scientists pushing the outer limits of deep learning research and computing.

DGX-2 can train FAIRSeq, a state-of-the-art neural machine translation model, in two days - an 8x performance improvement over the DGX-1 with Volta, introduced in September.

DGX-2 is the latest addition to the NVIDIA DGX product portfolio, which consists of three systems designed to help data scientists quickly develop, test, deploy and scale new deep learning models and innovations.

DGX-2, with 16 GPUs, joins the NVIDIA DGX-1 system, which features eight V100 GPUs, and DGX Station, the personal deep learning supercomputer, with four V100 GPUs in a compact, deskside design.
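As a back-of-the-envelope illustration, the per-system GPU memory totals implied by these configurations can be tallied. This sketch assumes every system in the portfolio is fitted with the new 32GB V100, which NVIDIA says is immediately available across the DGX line:

```python
# Aggregate V100 memory per DGX system, assuming the 32GB V100 variant
# (per the availability note above) - an illustrative tally, not official specs.
V100_MEMORY_GB = 32

dgx_systems = {
    "DGX-2": 16,        # 16 V100 GPUs, linked via NVSwitch
    "DGX-1": 8,         # 8 V100 GPUs
    "DGX Station": 4,   # 4 V100 GPUs in a deskside form factor
}

for name, gpu_count in dgx_systems.items():
    total_gb = gpu_count * V100_MEMORY_GB
    print(f"{name}: {gpu_count} GPUs x {V100_MEMORY_GB} GB = {total_gb} GB")
```

Under that assumption, the 16-GPU DGX-2 tops out at 512GB of GPU memory - the pool NVSwitch exposes as a unified memory space.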

These systems enable data scientists to scale their work from the complex experiments they run at their desk to the largest deep learning problems.
