
Data Comparison of NVIDIA Tesla P4 and T4

During NVIDIA GTC Europe 2018, NVIDIA announced the new Turing-based T4 graphics card. On Twitter this card got a lot of love, as it looks like a good evolution of the (now) mainly suggested Tesla P4 (you remember, last year it didn't even appear on NVIDIA's slides – now it's their primary suggestion – thanks for listening, NVIDIA). I saw the first details about the card in John Fanelli's presentation on the first day. They looked really promising. Here is the picture with the shown card details (sorry for the head in front of it – I didn't expect I would need it for a blog post…):


You can find the P4 data in this PDF.


Let us compare the data of both cards that we have so far.






                          Tesla P4                 Tesla T4
CUDA Cores                -                        -
Frame Buffer (Memory)     8 GB                     16 GB
vGPU Profiles             1 GB, 2 GB, 4 GB, 8 GB   1 GB, 2 GB, 4 GB, 8 GB, 16 GB
Form Factor               PCIe 3.0 single slot     PCIe 3.0 single slot
Max Power                 75 W                     70 W




As you can see, both cards are really similar. They just need a single slot. Thus, you can put up to six (or sometimes eight – yes, there are a few servers that support eight(!) cards – just check the HCL) cards in one server while keeping power consumption limited. The main difference is that the T4 uses the new Turing chip and has double the frame buffer (16 GB). That means you can run 16 VMs, each with a 1 GB frame buffer, on this card. Although I don't know if the Turing chip offers enough performance for that (as I have no test card so far, @NVIDIA), it might be a good option to have 8 VMs with a 2 GB frame buffer on one card. That would help in many situations where 1 GB of frame buffer is not enough. On a P4 you could only put 4 VMs with that profile on one card.
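The VM counts above are simple integer division of frame buffer by profile size – a quick sketch (the numbers come from the table above, the loop is just for illustration):

```shell
# VMs per card = frame buffer (GB) / vGPU profile size (GB)
fb_t4=16
for profile in 1 2 4 8 16; do
  echo "${profile} GB profile: $((fb_t4 / profile)) VMs per T4"
done
```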

So, at this point I also thought that the T4 is a good evolution of the P4. The only thing I was missing was a price, as this could be an argument against the card. But then I attended another session where they also showed some details about the T4. There was one detail I noticed quite late, so I was too late to take a picture. Luckily, my friend Tobias Zurstegen took this picture showing the technical information:


Two details are shown here that I didn't find anywhere else. First, there are the Max Users – not too hard to calculate when you know that the smallest profile is a 1B profile (= 1 GB frame buffer). But next to that, there is the number of H.264 1080p 30 streams. So, let's add these points to our list.






                          Tesla P4                 Tesla T4
CUDA Cores                -                        -
Frame Buffer (Memory)     8 GB                     16 GB
vGPU Profiles             1 GB, 2 GB, 4 GB, 8 GB   1 GB, 2 GB, 4 GB, 8 GB, 16 GB
Form Factor               PCIe 3.0 single slot     PCIe 3.0 single slot
Max Power                 75 W                     70 W
Max Users (VMs)           8                        16
H.264 1080p 30 Streams    24                       16

What's that? The number of H.264 1080p 30 streams is lower on a T4 (16) compared to a P4, which has 24 streams. The T4 has 8 (!) streams fewer than a P4?!? Keep in mind that, e.g. in an HDX 3D Pro environment, each monitor of an active user requires one stream. That means that with 8 users on dual monitors, all available streams can already be in use. If you put more users on the same card, this may lead to a performance issue for the users, as there are not enough H.264 streams available. Unfortunately, I haven't found anything on NVIDIA's website that proves this number of H.264 streams is correct and not just a typo on the slide.
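The stream budget from this example is back-of-the-envelope math – a sketch assuming one H.264 stream per active monitor, as described above:

```shell
# Streams needed = active users x monitors per user
users=8
monitors=2
t4_streams=16

needed=$((users * monitors))
if [ "$needed" -le "$t4_streams" ]; then
  echo "OK: $needed of $t4_streams streams in use"
else
  echo "Over budget: $needed streams needed, only $t4_streams available"
fi
```

With 8 dual-monitor users the T4 is already at its limit; a ninth active user would exceed it, while a P4 (24 streams) would still have headroom.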

But if it's true, I am wondering why that happened. What did NVIDIA change so that the number of streams went down and not up? I would have expected at least 32 streams (compared to the P4). If it was a card design change, that would be counterproductive. Let us hope it was just a typo on the slide.

If someone finds an official document that proves (or disproves) this number, please let me know.

Should the number be correct, I hope NVIDIA listens again and changes this before the card is released. I see this as a big bottleneck for many environments, especially as many don't know about the number of streams and then wonder why performance is bad although the graphics chip itself is not under heavy load – they don't know that too many required H.264 streams can also lead to poor performance. Next to that, keep in mind that it's now also possible to use H.265 in some environments – but using that possibly leads to fewer streams, as its encoding is more resource-intensive.

IT-Administrator Article – Skype for Business in Citrix Environments

It's been quite a long time since I have written an article for the German magazine IT-Administrator. Thus, I thought it's time for another one. You can find it in the current IT-Administrator. The article describes how you can use Skype for Business in Citrix environments and benefit from the Citrix RealTime Optimization Pack.


Citrix wants to enable the smooth use of Skype for Business in VDI environments and has released the Citrix RealTime Optimization Pack for this purpose. We will look at how this plug-in works and how you can implement it in your environment.

NVIDIA GRID license not applied before the user connects – License Restriction will not be removed until the user reconnects

When you are using NVIDIA GRID, you might know that NVIDIA started to enforce license checking in version 5.0. This means that if no license is available, the following restrictions are applied to the user:

  1. Screen resolution is limited to no higher than 1280×1024.
  2. Frame rate is capped at 3 frames per second.
  3. GPU resource allocations are limited, which will prevent some applications from running correctly.

For whatever reason, NVIDIA has not enabled a grace period after a VM starts – thus the restrictions are active until the VM has successfully checked out a license. The effect is that when a user connects to a just-booted VM, his session experience is limited. Furthermore, he has to disconnect and reconnect his session after the NVIDIA GRID license was applied to e.g. work with a higher resolution. Currently, I know about three workarounds to fix this issue:

1. Change the Citrix Desktopservice startup to Automatic (Delayed Start)
2. Configure a Settlement Time before a booted VM is used for the affected Citrix Delivery Groups
3. NVIDIA Registry Entry – Available since GRID 5.3 / 6.1

All three workarounds have their limitations – thus I would like to show you each of them.

Changing the Citrix Desktopservice to Automatic (Delayed Start)

When you change the service to Automatic (Delayed Start), it will not run directly after the VM has booted. As a result, the VM registers later at the Delivery Controller – it has time to check out the required NVIDIA license before the Delivery Controller brokers a session to the VM.

To change the service to Automatic (Delayed Start), open the Services console on your master image (Run => services.msc) and open the properties of the Citrix Desktop Service. Change the Startup type to Automatic (Delayed Start) and confirm the change with OK.

Now update your VMs from the master image and that's it. The VM should now have enough time to grab an NVIDIA license before it registers with the Delivery Controller.
The downside of this approach is that with every VDA update / upgrade you need to configure it again. Instead of doing this manually, you can run a script with the following command on your maintenance VM before every shutdown. This command changes the Startup type – so you cannot forget to change it (important: there must be a space after "start="):

sc config BrokerAgent start= delayed-auto

In PowerShell, the command needs to be modified a little bit:
& cmd.exe /c sc config BrokerAgent start= 'delayed-auto'
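To verify that the change took effect, you can query the service configuration afterwards – a quick check, assuming the default BrokerAgent service name:

```batch
rem Query the service and show only its startup type.
rem A delayed-start service should report AUTO_START (DELAYED).
sc qc BrokerAgent | findstr /i "START_TYPE"
```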

Configure a Settlement Time before a booted VM is used

Alternatively, you have the possibility to configure a Settlement Time for a Delivery Group. This means that after the VM has registered with the Delivery Controller, no sessions are brokered to the VM during the configured time. Again, the VM has enough time to request the necessary NVIDIA license. However, this approach also has a downside – if no other VMs are available, users will still be brokered to just-booted VMs although the Settlement Time has not ended. This means that if you didn't configure enough standby VMs to be up and running when many users connect, they might still be brokered to a just-booted VM (without a license).

To check the currently active Settlement Time for a Delivery Group open a PowerShell and enter the following commands:

Add-PSSnapin Citrix*
Get-BrokerDesktopGroup -Name "DELIVERY GROUP NAME"

Replace DELIVERY GROUP NAME with the name of the Delivery Group you would like to check.

You now get some information about the Delivery Group. The interesting value is SettlementPeriodBeforeUse in the lower area. By default, it should be 00:00:00.

To change this Time enter the following command:

Set-BrokerDesktopGroup -Name "DELIVERY GROUP NAME" -SettlementPeriodBeforeUse 00:05:00

With the above setting, the Settlement Time is changed to 5 minutes – do not forget to replace the Delivery Group name with the actual one.

If you now again enter the command to see the settings for your Delivery Group you will notice that the SettlementPeriodBeforeUse was changed.

NVIDIA Registry Entry – Available since GRID 5.3 / 6.1

With GRID release 5.3 / 6.1, NVIDIA has published a registry entry to fix this as well. The description is a little bit limited, but if I understood it correctly, it changes the driver behavior so that all restrictions are removed even when the license is applied after the session was started. Before you can add the registry setting, you need to install the NVIDIA drivers in your (master) VM and apply an NVIDIA license. When the license was successfully applied, create the following registry entry:

Path: HKLM\SOFTWARE\NVIDIA Corporation\Global\GridLicensing
Name: IgnoreSP
Value: 1

After creating it, you must restart the VM (and update your Machine Catalogs if it is a master VM). Honestly, I wasn't able to test this last solution myself – so I can't tell if it really fixes the issue every time or only mostly.
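Creating the entry can also be scripted for the master image – a minimal sketch, assuming the registry path and value shown above:

```batch
rem Create the GridLicensing key (if missing) and the IgnoreSP DWORD value
reg add "HKLM\SOFTWARE\NVIDIA Corporation\Global\GridLicensing" /v IgnoreSP /t REG_DWORD /d 1 /f
```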

That’s it – hope it helps (and NVIDIA will completely fix this issue in one of their next releases).

Creating a High Available IGEL Universal Management Server Infrastructure with Citrix NetScaler

When you reach a point where you manage many IGEL Thin Clients, you might find it quite helpful if the IGEL UMS is still available even when the server running the UMS fails. Furthermore, you can use this to reduce the load on a single system. For example, when multiple clients update their firmware at the same time, they all download it from the IGEL UMS server. To reach this goal, you have two options:

1. Buy the IGEL High-Availability Option for UMS (including a Load Balancer)
2. Load Balance two UMS that use the same Database with a Citrix NetScaler

In this blog post, I would like to show you how to realize the second option. Before we can start with the actual configuration, we need to have a look at a few requirements for this configuration.

1. Database Availability
When you create a highly available UMS server infrastructure, you should also make the required database highly available. Otherwise, all configured UMS servers stop working when the database server fails.

2. Client IP Forwarding
The UMS servers need to know the actual IP of the Thin Clients. If you did not forward the client IP, all Thin Clients would appear with the same IP address. You then wouldn't be able to send commands to a Thin Client or see its online status. Unfortunately, this leads to another problem. The client connects to the load-balanced IP of the UMS servers. This is forwarded (with the original client IP) to one UMS server. This server replies directly to the Thin Client IP. The Thin Client now receives a reply not from the IP address it initially connected to and ignores the reply. One (easy) way to fix this issue is to put the UMS servers in a separate subnet. Add a Subnet IP (SNIP) to the NetScaler in this subnet and configure this NetScaler SNIP as the default gateway for the UMS servers. When doing this, the UMS servers receive the original client IP, but the reply to the client still passes the NetScaler, which can then replace the server IP with the load-balanced IP address.

Now let’s start with the actual configuration. The first step is to install the Standard UMS (with UMS Console) on two (or more) Servers.

After the installation has finished successfully, it's time to connect the external database (if you are unsure about some installation steps, have a look at the really good Getting Started Guide). To do so, open the IGEL Universal Management Suite Administrator and select Datasource.

In this example, I will use a Microsoft SQL Always On cluster – but you can select any available database type that offers a high-availability option. Of course, this is not mandatory – but what does it help you when the UMS servers are highly available and the database is not? If the database server fails, the UMS would also be down – you would still have a single point of failure.
Enter the Host name, Port, User, Schema and Database Name.

Keep in mind that the database is not created automatically – you have to do this manually beforehand. For a Microsoft SQL Server, you can use the following script to create the database. After creating the database, don't forget to make it highly available – e.g. using the Always On feature.
If you prefer a different name, change rmdb to the required name. Besides that, replace setyourpasswordhere with a password. The user (CREATE USER) and schema (CREATE SCHEMA) names can also be changed.

CREATE DATABASE rmdb
GO
USE rmdb
GO
CREATE LOGIN igelums WITH PASSWORD = 'setyourpasswordhere'
GO
CREATE USER igelums WITH DEFAULT_SCHEMA = igelums
GO
CREATE SCHEMA igelums AUTHORIZATION igelums
GO

After confirming the connection details, you now see the connection. To enable the connection select Activate and enter the Password of the SQL User.

On the first server, you will get the information that there is no schema in the Database that needs to be created. Confirm this with Yes.

You should now see an activated Datasource configuration. Repeat the same steps on the second UMS server. Of course, you don't need to create another database – just connect to the same database as with the first server.

Time to start with the actual Load Balancing configuration on the Citrix NetScaler. Open the Management Website and switch to Configuration => Traffic Management => Servers

Select Add to create the IGEL UM-Servers. Enter the Name and either the IP Address or Domain Name

Repeat this for all UM-Servers (in my example I added two servers).

Now we need Services or a Service Group for all UMS servers. I personally prefer Service Groups, but if you normally use Services, that is also possible.

After switching to Service Groups, again select Add to create the first UMS Service Group. In total, we need three Service Groups:
Port 30001: Thin Client connection port
Port 8443: Console connection port
Port 9080: Firmware updates

The first one we create is the Service Group for Port 30001. Enter a Name and select TCP as the Protocol. The other settings don’t need to be changed.


Now we need to add the Service Group Members. To do so, select No Service Group Member.

Mark the UMS Servers created in the Servers area and confirm the selection with Select.

Again, enter the Port number 30001 and finish with Create.

The Service Group now contains two Service Group Members.

As mentioned at the beginning we need to forward the Client IP to the UMS Servers. Otherwise, every client would have the same IP – the NetScaler Subnet IP. Therefore, edit the Settings (not Basic Settings!) and enable Use Client IP. Confirm the configuration with OK.

That’s it – the Service Group for Port 30001 is now configured.

Repeat the same steps for Port 8443 – but do not enable Use Client IP. Otherwise, you will not be able to connect to the UMS Servers with the load balanced IP / Name inside the IP Range of the UMS Servers itself.

Finally, you need to create a Service Group for Port 9080 – this time you can again forward the Client IP.

At the end, you should have three Service Groups.

Time to create the actual client connection points – the Virtual Servers (Traffic Management => Load Balancing => Virtual Servers).


Like before select Add to create a new Virtual Server. Again, we need three virtual servers for the Ports 30001, 8443 and 9080.

The first Virtual Server we create is for Port 30001. Enter a Name and choose TCP as the Protocol. Furthermore, enter a free IP Address in the separate Subnet of the UM-Servers. The Port is of course 30001.

After this, we need to bind the Services or Service Group to this Virtual Server. If you created Services and not a Service Group make sure, you add the Services of all UMS Servers. To add a created Service Group click on No Load Balancing Virtual Server Service Group Binding.

Select the Service Group for Port 30001 and confirm the selection with Bind.

The Service Group is now bound to the Virtual Server. Press Continue to get to the next step.

When a client connects, we need to make sure it always connects to the same UMS server after the initial connection and does not flip between them. When a client has stopped the connection or a UMS server has failed, it's of course OK if the client connects to the other UMS server. For this, we need to configure a Persistence. As Persistence Type, select Source IP and change the Time-Out to 5. Leave the IPv4 Netmask at its default. Confirm the Persistence with OK.

Finish the Virtual Server configuration with Done.

Repeat the same steps for the other two ports, so that you have three Virtual Servers at the end. Of course, all of them need to use the same IP address.
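For reference, the configuration built in the GUI above can also be created on the NetScaler CLI. This is only a sketch under assumptions – the server names and the 192.0.2.x addresses are placeholders, and the same pattern would be repeated for ports 8443 (without USIP) and 9080:

```
# Backend UMS servers (placeholder names and IPs)
add server UMS1 192.0.2.11
add server UMS2 192.0.2.12

# Service group for the Thin Client port, with Use Client IP enabled
add serviceGroup svg_ums_30001 TCP
bind serviceGroup svg_ums_30001 UMS1 30001
bind serviceGroup svg_ums_30001 UMS2 30001
set serviceGroup svg_ums_30001 -usip YES

# Virtual server with Source IP persistence (5 minute timeout)
add lb vserver vs_ums_30001 TCP 192.0.2.10 30001 -persistenceType SOURCEIP -timeout 5
bind lb vserver vs_ums_30001 svg_ums_30001
```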

To make it easier to connect to the load-balanced UMS servers, it is a good idea to create a DNS host entry, e.g. with the name igel, pointing to the IP address of the Virtual Servers. If you added a DHCP option or DNS name for the Thin Client auto-registration / connection, change them to the IP address of the Virtual Servers as well.

You can now start the IGEL Universal Management Suite and connect to the created Host-Name.

After a successful connection, you can see the used server name in the bottom left area and under the Toolbar.

We now need to point the Thin Clients to the new Load Balanced UMS Servers. You need either to modify an existing policy or create a new one. The necessary configuration can be found in the following area:
System => Remote Management => Universal Management Suite (right area).
Modify the existing entry and change it to the created Host name. Save the profile and assign the configuration to your Thin Clients

The last step is necessary to allow the Thin Clients to update / download firmware even when one UMS server is not available. By default, a firmware always points to one UMS server and not to the load-balanced host name. Therefore, switch to the Firmware area and select one firmware. Here you can find the Host. Change this to the created host name and save the settings. Repeat this for all required firmwares. If you download a new firmware, make sure you always modify the Host – otherwise the new firmware will only be available from one UMS server.

Of course, when you download or import a firmware using the UMS, it is only stored on one of the UMS servers. To make the firmware available on both UMS servers, you need to replicate the following folder (if you modified the UMS installation path, this will be different):
C:\Program Files (x86)\IGEL\RemoteManager\rmguiserver\webapps\ums_filetransfer
A good way to do this is using DFS. Nevertheless, every replication technology is fine – just make sure (when changing the Host entry for a firmware) that the firmwares are available on both UMS servers.
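If you don't want to set up DFS, a scheduled one-way mirror is a simple alternative – a sketch, assuming the default installation path and a hypothetical second server named UMS2:

```batch
rem Mirror the firmware folder to the second UMS server (UMS2 is a placeholder)
robocopy "C:\Program Files (x86)\IGEL\RemoteManager\rmguiserver\webapps\ums_filetransfer" ^
         "\\UMS2\c$\Program Files (x86)\IGEL\RemoteManager\rmguiserver\webapps\ums_filetransfer" /MIR /R:2 /W:5
```

Note that /MIR deletes files on the target that no longer exist on the source, so mirror only in one direction and always import new firmware on the same server.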

That’s it – hope this was helpful for some of you.

Enable XenServer Live Migration for VMs with a NVIDIA vGPU

When you install NVIDIA GRID 6.0 on a XenServer, you need to manually activate the possibility to live-migrate a VM which is using a (v)GPU. Otherwise, you see the following message when you try to migrate a VM:

The VGPU is not compatible with any PGPU in the destination


The steps to enable the migration are described in the GRID vGPU User Guide. To enable live migration, you need to connect to the console of the XenServer (e.g. using PuTTY). Log in as the root user. The next step is to edit (or create) the file nvidia.conf in the folder /etc/modprobe.d. To do so, enter the following command:

vi /etc/modprobe.d/nvidia.conf


Here you need to add the following line:

options nvidia NVreg_RegistryDwords="RMEnableVgpuMigration=1"


Save the change with :wq and restart the host. Repeat this for all hosts in the pool. After restarting, you can migrate a running VM with a (v)GPU between all hosts that have the changed setting. If you haven't configured the setting or didn't reboot a host, only the other hosts are available as a target server.


New book: Citrix XenApp and XenDesktop 7.15 LTSR (German)


My book about Citrix XenApp and XenDesktop 7.15 LTSR (German) is finally available in stores. It was a lot of work (and learning), but also fun to test out some not-so-often-used features. You can find more details here.

Update a single Citrix XenServer in a Pool

Citrix XenCenter offers an easy way to update your XenServer (pools). Furthermore, most updates can nowadays be installed without a reboot. Only a few require a reboot or an XE toolstack restart. This process is fully automated: the first host is updated and (if necessary) rebooted, then the second one follows, and so on. To reboot a host, its VMs are migrated to another host. This is fine unless you use local storage for your VMs. (I don't want to discuss here whether this makes sense or not!) When the VMs are using local storage, they cannot be migrated automatically. When they are deployed using MCS, they cannot even be migrated manually (the base disk would still be on the initial host). In the past, this was not a problem. You could just put the VMs into maintenance mode and update one host.

So you started the Install Update wizard, selected the updates you would like to install and had the possibility to select the host you would like to update:


When the update was finished, you disabled maintenance mode on the corresponding VMs and enabled it on the VMs on the next host. After some time, the VMs on the second host were no longer in use and you could update the second host. Since XenCenter 7.2, this is no longer possible. After selecting an update, you can only choose to update the whole pool:


Luckily, there is a small "workaround" that uses XenCenter to download the updates and copy them to the XenServer. (I really like the update overview in XenCenter – no hassle to check online which updates are available.) To use this workaround, start the Install Update wizard and select the update you would like to install.


In the next step, select the pool containing the server(s) on which you would like to install the update. Now the update is downloaded and transferred to the pool. It is important that you now do not press Next! When you close this dialog (Cancel), the update will be deleted from the Pool.


Instead, you need to connect to the console (e.g. using PuTTY) of a XenServer that is a member of the pool. Now we need to figure out the UUID of the update. To do so, enter the following command:

xe update-list name-label=HOTFIXNAME

You can find the name of the update in the XenCenter download window. For example, Hotfix 11 for XenServer 7.1 CU1 has the name XS71ECU1011. Copy the UUID and note whether something is required after the hotfix is installed (after-apply-guidance): either a toolstack restart (restartXAPI) or a host reboot (restartHost).


The next step is to install the Update on the required hosts. This can be achieved with the following command:

xe update-apply host=HOSTNAME uuid=PATCH-UUID

Replace HOSTNAME with the name of the XenServer you would like to update and PATCH-UUID with the copied UUID of the patch. Repeat the same for all hosts you would like to update. When the patch has been applied, no further message is displayed.
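The two commands above can be combined into a small loop in the XenServer console – a sketch assuming hypothetical host names and the hotfix name from the example above:

```shell
# Look up the patch UUID by its name label (--minimal prints just the UUID)
uuid=$(xe update-list name-label=XS71ECU1011 --minimal)

# Apply the update to each host that should be patched (placeholder host names)
for host in xenserver-01 xenserver-02; do
  xe update-apply host="$host" uuid="$uuid"
done
```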


That means you have to remember the after-apply-guidance which was shown next to the update UUID. The good thing is – if you have forgotten it, you can check in XenCenter whether another step is necessary. Just open the General area of the server. Under Updates you can see if an installed update requires a toolstack or host restart. If one is necessary, perform it to finish installing the update.


That is it – now you know how to install an update on a single member of a XenServer pool. There are just two other things I would like to add.

The first is that you can install multiple updates at the same time. You start with the same steps: select an update in XenCenter and continue until it has been transferred to the pool. Now go back with Previous to the Select Update area. Select the next update and continue as before until it has been transferred to the XenServer pool. Repeat this for all updates you would like to install. Remember that you must not close the update dialog – otherwise the update files will be removed from the pool and you cannot install them any longer. Now note down the UUIDs of the updates and install all of them. It is not necessary to reboot after each update that requires a host reboot / toolstack restart. Just install all of them and, if one (or more) requires a reboot, reboot once at the end.

The next thing I would like to add is that you can also keep the files on the XenServer. For this, you need to kill XenCenter through the Task Manager once the updates have been transferred to the pool.
