NEWSROOM

Centralizing GPU Compute

By Shara Atkinson

October 31, 2017

In this post, we’ll explore centralizing graphics processing unit (GPU) compute within a virtual desktop infrastructure (VDI) environment hosted on blade servers. We’ll look at typical problems in both decentralized and centralized environments, then examine whether available technology can resolve them.

Problem #1 – Inefficient Dedicated Workstations

Organizations often purchase expensive physical workstations to support power users who have demanding graphics and compute workloads. The dedicated workstation model forgoes the efficiencies of virtualization, where hardware is shared among a group of users to maximize utilization.

Problem #2 – Basic VDI Environments Lack GPUs

Without GPU virtualization, VDI environments lack the resources to deliver the performance power users depend on for computationally and graphically intense workloads. Power users are often left out of VDI migrations because their workloads are not supported.

Problem #3 – Blade Servers Have Physical Limitations

Blade servers tend to be smaller than traditional rack-mount servers. Due to their compressed design, expansion slots are often limited to the MXM form factor. A viable GPU must not only have the correct interface but must also fit within the blade chassis.

My Proposed Solution?  The NVIDIA® Tesla® P6

NVIDIA® is a leading provider of GPUs in the VDI space. Several companies make blade GPUs, but NVIDIA is widely considered the gold standard in the server market today, and I consider the Tesla® P6 the best blade server GPU currently available.

Released August 17, 2017, the NVIDIA® Tesla® P6 is NVIDIA’s latest blade-optimized GPU. Built on the NVIDIA Pascal™ architecture, the Tesla® P6 delivers higher graphics performance, improved energy efficiency, and up to twice the frame buffer of its predecessor, the NVIDIA® Tesla® M6. With 2,048 CUDA cores and 16 GB of GDDR5 memory, each GPU can support up to 16 concurrent users, and some blade servers support the installation of multiple GPUs. NVIDIA’s long-term strategy is to push for centralized GPU compute in data centers.
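The 16-user figure follows from carving the 16 GB frame buffer into equal-sized vGPU profiles, one per user. As a minimal sketch of that sizing arithmetic (a hypothetical helper for illustration, not an NVIDIA tool; the profile sizes are assumptions):

```python
# Hypothetical sizing helper: simple arithmetic based on the Tesla P6's
# published 16 GB frame buffer, not an official NVIDIA utility.
TESLA_P6_FRAME_BUFFER_GB = 16

def max_concurrent_users(profile_gb: int,
                         frame_buffer_gb: int = TESLA_P6_FRAME_BUFFER_GB) -> int:
    """Return how many vGPU profiles of profile_gb fit on one GPU.

    vGPU profiles slice the frame buffer into equal shares, so the user
    count is the floor of (total frame buffer / per-user profile size).
    """
    if profile_gb <= 0:
        raise ValueError("profile size must be positive")
    return frame_buffer_gb // profile_gb

# With 1 GB profiles one Tesla P6 supports 16 users; 2 GB profiles halve that.
print(max_concurrent_users(1))  # 16
print(max_concurrent_users(2))  # 8
```

In practice, the profile size you choose trades user density against per-user performance: smaller profiles fit more users, while power users with heavy workloads need larger shares.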

The NVIDIA® Tesla® P6 GPU addresses all three problems. By centralizing GPU compute within the data center, enterprises gain the efficiencies of shared hardware resources. Adding GPUs to a VDI environment also gives VDI users support for computationally and graphically intense workloads. And because the NVIDIA® Tesla® P6 uses the MXM form factor, it is compatible with most blade servers.

Conclusion

When basic hardware virtualization was first introduced, many data center managers hesitated to move to a shared hardware platform. It took time for the technology to mature enough that organizations were willing to accept the risk of an unfamiliar environment. As centralized GPU technology matures, more enterprises will transition from a decentralized to a centralized GPU compute model. As with the early move to virtualizing basic hardware, the transition to centralized GPU compute will be a slow and steady progression.

Written by Charles Sather, Systems Engineer

2017-11-09T14:03:26+00:00

Are you interested in learning more?

Contact Us Now
