NVIDIA GRID showcase: The deepest dive into desktop virtualisation

Last week I spent a couple of days out in Santa Clara, California with visual computing firm NVIDIA, getting a deep dive into its GRID desktop virtualisation technology.

And when I say deep dive, I really mean deep dive. I'll be the first to admit that I'm not an overly technical person and we were taken several layers deep into the inner workings of the GRID product, and desktop virtualisation in general.

It all started with a visit to NVIDIA's demo room, showing off some of the other innovations that the company is working on. One of the most interesting was its autonomous driving solution. We were shown a demonstration of a car autonomously driving through a car park using multiple external sensors, locating an empty spot and pulling off an impressive manoeuvre that my parking-illiterate sister would have been proud of.

There was also an example of a car driving along a stretch of motorway - again using several external cameras - to track the locations of other cars around it. The cameras are able to record a range of different data metrics, including the locations of other cars, the types of vehicle, how far away they are and even the availability (or lack thereof) of adjacent lanes for switching.

Another impressive feature of the demo room really highlighted the power of today's graphics technology, specifically through image rendering. One example used a drill and two images on a display: one a live feed from a 4K camera, the other a 3D rendering. Side by side, the two images were pretty much identical. This technology is extremely powerful for manufacturers, as it enables them to test out the look of different shapes or materials without physically having to build anything, saving both time and money.

An even more impressive example showed a 3D image of a BMW with an extraordinary level of detail. Reflections and light refraction were calculated and updated in seconds, again enabling manufacturers to experiment with different colours and materials without any physical work taking place.

Next up were several case studies from NVIDIA customers, featuring the likes of Citrix, Textron and Cannon Design. One point that kept cropping up throughout all of the customer talks was that of user experience, namely the importance of making sure that the experience is acceptable to the end user. Latency is of course one of the biggest pain points for business - especially when employees are based all around the world - so this is an area that NVIDIA has paid close attention to with its GRID offering.


Other benefits that were mentioned on multiple occasions were security, cost saving and flexibility. Improving global collaboration and communication were also cited, an important factor for Textron and Cannon Design to consider as staff from different departments based in different countries are often required to collaborate on major projects.

To complete the first day, NVIDIA's CTO of GPU virtualisation Andy Currid arrived to take us under the hood of GRID, giving us a crash course in how the whole thing fits together. In a nutshell, a vGPU consists of components such as a Tesla GPU, a Base Address Register (BAR) which provides a "window" into the GPU, a Memory Management Unit and a Scheduler which places work in the various GPU engines to be processed. There are two options for scheduling: one big engine that processes work sequentially, or multiple slower engines that process it in parallel, each with its own pros and cons. For more information on the technical makeup of a vGPU, scroll back through the blog we were updating throughout the event.
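The sequential option above boils down to time slicing a single engine between virtual machines. The sketch below illustrates that idea only - the VM names, fixed-length slices and round-robin policy are illustrative assumptions, not NVIDIA's actual scheduler.

```python
from collections import deque

def round_robin_schedule(workloads, slice_ms, total_ms):
    """Round-robin time slicing: each VM's pending work gets the single
    GPU engine for slice_ms before the next VM takes over.

    workloads: list of (vm_name, remaining_work_ms) pairs.
    Returns the execution timeline as (vm_name, start_ms) pairs."""
    timeline = []
    ready = deque(workloads)  # VMs still waiting for engine time
    t = 0
    while t < total_ms and ready:
        vm, work = ready.popleft()
        timeline.append((vm, t))
        work -= slice_ms            # this VM consumed one slice
        if work > 0:
            ready.append((vm, work))  # unfinished: back of the queue
        t += slice_ms
    return timeline

# Three hypothetical VMs sharing one engine in 2ms slices
vms = [("vm-a", 4), ("vm-b", 2), ("vm-c", 6)]
print(round_robin_schedule(vms, slice_ms=2, total_ms=20))
# [('vm-a', 0), ('vm-b', 2), ('vm-c', 4), ('vm-a', 6), ('vm-c', 8), ('vm-c', 10)]
```

The trade-off against multiple parallel engines is the usual one: a single fast engine keeps per-slice latency low for whoever holds it, while parallel engines give every VM continuous but slower progress.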


Day two started with a talk on Time Slicing - a technique to implement multitasking in operating systems - from Luke Wignall, GRID performance engineering manager at NVIDIA. He spoke about sharing resources between multiple virtual machines and the importance of replicating real human behaviour for benchmarking purposes, factoring in toilet breaks, thinking time and so on. We were then shown a demo of a 'Click to Photon' test, which measures "the time it takes from mouse click to screen update." This is where the issue of latency is especially relevant, as users are willing to accept less latency than ever before. Some user acceptance figures that Luke mentioned were:

  • Normal web browsing - 400ms
  • E-mail - 500ms
  • Audio conferencing - 150ms
  • Video conferencing - 150ms
  • Voice over IP (Skype) - 100ms

The setup for the test itself was actually fairly simple (see the image below) and - in technical terms - measures both the mouse click and release points and the time taken for the photodiode signal to be received. We're talking somewhere between 65 and 200ms.
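The measurement itself reduces to subtracting two timestamps and comparing the result against the acceptance figures above. A minimal sketch, assuming hypothetical timestamp sources (in a real rig the photodiode timestamp would come from the capture hardware):

```python
# User-acceptance latency budgets (ms), as quoted in the talk.
ACCEPTABLE_MS = {
    "web_browsing": 400,
    "email": 500,
    "audio_conferencing": 150,
    "video_conferencing": 150,
    "voip": 100,
}

def click_to_photon_ms(click_ts_ms, photodiode_ts_ms):
    """Click-to-photon latency: time from the mouse click to the
    photodiode detecting the resulting screen update."""
    return photodiode_ts_ms - click_ts_ms

def within_budget(latency_ms, workload):
    """Is the measured latency acceptable for this workload type?"""
    return latency_ms <= ACCEPTABLE_MS[workload]

latency = click_to_photon_ms(click_ts_ms=1000.0, photodiode_ts_ms=1120.0)
print(latency, within_budget(latency, "video_conferencing"))
# 120.0 True  (inside the 150ms video-conferencing budget)
```

The same 120ms reading would fail the stricter 100ms VoIP budget, which is why the acceptable range depends on the workload being tested.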


And it all finished with a hands-on training exercise where we set up our own virtual machines using GRID. It's a bit of a complicated process, but all the journalists in the room were able to get them up and running, demonstrating that it's not actually as difficult as you might think.

Desktop virtualisation in general isn't a trend that will be disappearing any time soon - thanks to the scalability and cost-saving benefits it provides - and NVIDIA will be doing its best to stay at the forefront of the wave.

Image source: Shutterstock/Katherine Welles