
Data comparison of NVIDIA GRID Tesla P4 and P40

For my book about NVIDIA GRID I created a data comparison table of the two graphics cards, the P4 and the P40. From my point of view the P4 is a really underestimated card. After my presentation at NVIDIA GTC Europe this year, NVIDIA added the P4 to their comparison slides – thanks for listening. During DCUG TecCon I showed the comparison table and got some really positive feedback. So I thought it's worth publishing the comparison in a blog post:

                                 P40 (3X)       P4 (6X)
GPU                              3X Pascal      6X Pascal
CUDA Cores (per Card)            3840           2560
CUDA Cores (Total)               11520          15360
Frame Buffer (Total)             72 GB          48 GB
H.264 1080p30 Streams (Total)    75             150
Max vGPU (Total)                 72             48
Max Power (per Card)             250 W          75 W
Max Power (Total)                750 W          450 W
Price per Card (in $)*           11,149.99      3,649.99
Price for all Cards (in $)*      33,449.97      21,899.94

*NVIDIA doesn’t publish list prices – so I picked list prices from one server vendor.

Current server generations allow the use of either six P4 or three P40 cards. As you can see, the price of the P4 is much lower. On the other hand, a single P40 offers more CUDA cores, so one user could reach a higher peak performance on a P40 card. Looking at the totals, however, six P4 cards provide more CUDA cores than three P40 cards. With the P4 cards you are limited to 48 vGPU instances – the P40 allows up to 72.

There is another fact most people don't think about: Citrix uses H.264 in its HDX 3D Pro protocol. When a user connects to a VM, one stream is created for every monitor he is using. In many offices nearly every user already has two monitors – resulting in two streams. If you now connect 72 users to the three P40 cards and every user has two monitors, 144 streams are required. However, only 75 are available, which can lead to reduced performance for the users. In contrast, the six P4 cards offer 150 streams for only 48 vGPU instances.
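Broken down per card, the numbers from the table above read as follows:

P40: 75 / 3 = 25 H.264 1080p30 streams for up to 72 / 3 = 24 vGPU instances per card
P4: 150 / 6 = 25 H.264 1080p30 streams for up to 48 / 6 = 8 vGPU instances per card

So per card both GPUs encode the same number of streams, but on the P40 they have to be shared among three times as many vGPU instances.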

In addition, the maximum power of all six P4 cards is 300 W lower than that of three P40 cards, resulting in lower cooling and power requirements in the data center.

I hope you like the comparison – if you think something is wrong or missing please contact me.

Published Applications blacked out / Published Desktop black borders with Citrix XenDesktop and NVIDIA GRID

During the last few days I was facing an interesting error: starting a published application from a Windows Server 2016 with an NVIDIA GRID vGPU only showed a black screen:

published_applications_black_window_01

Interestingly, it was possible to move the black window around and even maximize it, but it stayed black. The taskbar icon, in contrast, was shown correctly:

published_applications_black_window_02

Other users had the problem that when they started a published desktop, they got black borders on the sides:

published_applications_black_window_03

To fix this they needed to change the window size of the desktop.

There was no specific graphics setting configured for the server – except for this one:

Use the hardware default graphics adapter for all Remote Desktop Services sessions

published_applications_black_window_04

The same problem is described in this Citrix Discussions thread, where it is also mentioned that LC7875 fixes the problem. So I created a case at Citrix and requested this hotfix. The hotfix contains a changed icardd.dll (C:\Windows\System32). After installing the fix, the problem was gone.
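If you want to check which version of icardd.dll is currently installed – for example to compare the file before and after applying the hotfix – a quick look from PowerShell is enough. The version numbers themselves depend on the hotfix build, so treat this only as a before/after comparison:

# Show the file version of the DLL that LC7875 replaces
Get-Item 'C:\Windows\System32\icardd.dll' | Select-Object -ExpandProperty VersionInfo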

GPU Powered VDI – Virtual Desktops with NVIDIA GRID

 

As you might have noticed, I only wrote a few blog posts during the last month. One of the reasons was that I was (secretly) working on a book about virtual desktops with a graphics card. Those who joined the session Thomas Remmlinger and I gave at NVIDIA GTC Europe today already know that this book is now finished and available at Amazon. Only one thing – sorry for that – currently it's only available in German. I hope you like it – if (not), please tell me.


gpu_powered_vdi-3D_Model

Remove pinned Server Manager Icon from Start Menu for new Users on Server 2016

If you plan to deploy Server 2016, e.g. as an RDSH or XenApp VDA, you might want to remove the pinned Server Manager from the Start Menu for new users. I don't know why, but Microsoft does not offer a GPO setting for this. This blog post from Clint Boessen describes how to remove the pinned Server Manager icon on Server 2012. For Server 2016 the shortcut that needs to be removed is in a different path:

C:\ProgramData\Microsoft\Windows\Start Menu\Programs
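You can delete the shortcut manually or from an elevated PowerShell. A minimal sketch, assuming the shortcut still has its default name "Server Manager.lnk":

# Delete the Server Manager shortcut so new users don't get the pinned tile
Remove-Item 'C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Server Manager.lnk'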

After deleting the shortcut new users don’t get a pinned Server Manager in their Start Menu. It is not necessary to remove the shortcut from the following path:

C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Windows Administrative Tools

ServerManager

Scheduled Delivery Group reboots are not working after upgrading to XenDesktop / XenApp 7.12 / 7.13

After upgrading Citrix XenApp / XenDesktop to 7.12, our scheduled reboots for the Server VDA delivery groups no longer worked. Citrix provided us with a hotfix (which is also included in 7.13) – but even that did not fix the problem. To fix it we had to recreate the reboot schedule. If you now think you can just remove the reboot settings using Studio, you are wrong:
scheduled_reboots_01

To see that the reboot schedule still exists you need to open a PowerShell and enter the following commands:

Add-PSSnapin Citrix*
Get-BrokerRebootSchedule

You will now get a list of all configured reboot schedules. The only thing that changed is that the reboot schedule for your delivery group was disabled:
scheduled_reboots_02

To completely remove the reboot schedule, enter the following command:

Remove-BrokerRebootSchedule -DesktopGroupName "DESKTOP GROUP NAME"

scheduled_reboots_03

After that you can recreate your schedule using Studio (or PowerShell, as sketched below). That's it.
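For reference, recreating a simple daily schedule from PowerShell could look like the following sketch. The values (start time, reboot duration in minutes) are only examples, and depending on your Broker SDK version the newer *-BrokerRebootScheduleV2 cmdlets may be the ones to use – check with Get-Command *BrokerRebootSchedule* first:

# Example only – adjust the delivery group name, start time and duration to your environment
New-BrokerRebootSchedule -DesktopGroupName "DESKTOP GROUP NAME" -Frequency Daily -StartTime "03:00" -RebootDuration 120 -Enabled $true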

3DConnexion SpaceMouse / SpacePilot keys (Ctrl, Alt, ESC, …) are not working in a Citrix HDX 3D Pro Session

If you pass through a 3DConnexion SpaceMouse to a Citrix HDX 3D Pro session, you expect everything to work like on a local computer. Unfortunately that's not always the case. It might happen that the keys on the left side (Ctrl, Alt, ESC, …) are not working, while everything else works fine. When you look at the release notes of XenDesktop 7.11 you find the following fix:

Customized functions for a 3DConnexion SpaceMouse might not work in a VDA session.
[#LC4797]
3dconnexion_01

If you now think "OK, let's upgrade to 7.11 (or a later version) and the problem is gone" – that might help you, but it did not help us. So we created a ticket at Citrix. They told me that the following registry key also needs to be added:

Path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\picakbf
Key: Enable3DConnexionMouse
Type: REG_DWORD
Value: 1
3dconnexion_02
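If you prefer to set the key from PowerShell, for example during image preparation, a minimal sketch using exactly the path, value name, type and data listed above looks like this:

# Create (or overwrite) the DWORD value that enables the 3DConnexion mouse keys
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\picakbf' -Name 'Enable3DConnexionMouse' -PropertyType DWord -Value 1 -Force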

After adding the key you need to reboot your VDA. Now the 3DConnexion mouse should work completely. As far as I know, this is not documented anywhere else…

Workaround for crashing XenServer 7 with Xeon v4 CPUs, NVIDIA M60 / M10 (and Dell R730)

You might have read my last article about XenServer 7 crashing when it's used with an Intel Xeon v4 and an NVIDIA M60 in a Dell R730. The same problem happens when you use an NVIDIA M10. Furthermore, I have heard that the same problem occurs with Fujitsu servers.

Our workaround until now was to replace the Xeon v4 CPUs with v3 ones (with less performance). Luckily, Citrix Engineering (and Support) found the cause of the problem. It's related to the Intel PML (Page Modification Logging) feature. You can find some detailed information on this page from Intel.

So we disabled this feature in XenServer and put the v4 CPUs back in place of the v3 ones. After that we started stress testing the system, and no error occurred. Then we placed users on the system, and it kept running. So far we have had no more crashes while the feature is disabled.

To disable the feature you need to modify grub.cfg – those of you who have more than 512 GB of RAM in their hosts should already know that process.

Open a console session and switch either to /boot/grub (BIOS) or /boot/efi/EFI/xenserver (UEFI), depending on the boot setting of your server.

BIOS:
/boot/grub

01_no-pml

UEFI:
/boot/efi/EFI/xenserver

02_no-pml

Open grub.cfg with vi:
vi grub.cfg
03_no-pml

Now you need to add the following option to the multiboot2 /boot/xen.gz line:

ept=no-pml

After that the file should look like this (don't remove the iommu… option if you have more than 512 GB of RAM):
04_no-pml
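In case the screenshot is hard to read: the multiboot2 line keeps all of its existing options, and ept=no-pml is simply appended at the end. Schematically (the existing options will differ on your host):

multiboot2 /boot/xen.gz [existing options unchanged] ept=no-pml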

Save and close the file with :wq and reboot the host. That's it. Now the feature is disabled and the problem is gone. A big thanks to Antony Peter (Citrix Support) and Anshul Makkar (Citrix Engineering) for taking so much time to look at our environment and debug the problem with me.
