
NVIDIA GRID license not applied before the user connects – license restrictions will not be removed until the user reconnects

If you are using NVIDIA GRID, you might know that NVIDIA started to enforce license checking with version 5.0. This means that if no license is available, the following restrictions are applied to the user's session:

  1. Screen resolution is limited to a maximum of 1280×1024.
  2. Frame rate is capped at 3 frames per second.
  3. GPU resource allocations are limited, which prevents some applications from running correctly.

For whatever reason, NVIDIA did not implement a grace period after a VM starts – the restrictions stay active until the VM has successfully checked out a license. The effect is that a user who connects to a freshly booted VM gets a limited session experience. Furthermore, the user has to disconnect and reconnect the session once the NVIDIA GRID license has been applied, e.g. to work with a higher resolution. Currently I know about three workarounds for this issue:

1. Change the Citrix Desktop Service startup type to Automatic (Delayed Start)
2. Configure a Settlement Time before a booted VM is used for the affected Citrix Delivery Groups
3. NVIDIA Registry Entry – Available since GRID 5.3 / 6.1

All three workarounds have their limitations – so I would like to show you each of them.

Changing the Citrix Desktop Service to Automatic (Delayed Start)

When you change the service to Automatic (Delayed Start), it does not start immediately after the VM has booted. As a result, the VM registers later with the Delivery Controller and has time to check out the required NVIDIA license before the Delivery Controller brokers a session to it.

To change the service to Automatic (Delayed Start), open the Services console on your master image (Run => services.msc) and open the properties of the Citrix Desktop Service. Change the Startup type to Automatic (Delayed Start) and confirm the change with OK.
nvidia_license_not_applied_01

Now update your VMs from the master image and that's it. The VM should now have enough time to check out an NVIDIA license before it registers with the Delivery Controller.
The downside of this approach is that you need to configure it again after every VDA update / upgrade. Instead of doing this manually, you can run a script with the following command on your maintenance VM before every shutdown. The command changes the startup type, so you cannot forget to do it (important: there must be a space after "start=").

sc config BrokerAgent start= delayed-auto

In PowerShell the command needs to be modified slightly:
& cmd.exe /c sc config BrokerAgent start= 'delayed-auto'
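
If you want to verify on the master image that the change was picked up, a quick check is to read the BrokerAgent service registry key – a small sketch, assuming the standard service registry layout:

$svc = 'HKLM:\SYSTEM\CurrentControlSet\Services\BrokerAgent'
(Get-ItemProperty -Path $svc).Start              # 2 = Automatic
(Get-ItemProperty -Path $svc).DelayedAutostart   # 1 = Automatic (Delayed Start)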

Configure a Settlement Time before a booted VM is used

Alternatively, you have the possibility to configure a settlement time for a Delivery Group. This means that, for the configured time after the VM has registered with the Delivery Controller, no sessions are brokered to it. Again, the VM gets enough time to request the necessary NVIDIA license. However, this approach also has a downside – if no other VMs are available, users are still brokered to freshly booted VMs even though the settlement time has not ended yet. This means that if you did not configure enough standby VMs to be up and running when many users connect, they might still be brokered to a just-booted VM (without a license).

To check the currently active Settlement Time for a Delivery Group open a PowerShell and enter the following commands:

Add-PSSnapin Citrix*
Get-BrokerDesktopGroup -Name "DELIVERY GROUP NAME"

Replace the Delivery Group name with the name of the Delivery Group you would like to check.
nvidia_license_not_applied_02

You now get some information about the Delivery Group. The interesting value is SettlementPeriodBeforeUse in the lower area. By default, it should be 00:00:00.
nvidia_license_not_applied_03
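
If you only want to see the relevant value, you can also filter the output of the cmdlet (the group name is a placeholder):

Get-BrokerDesktopGroup -Name "DELIVERY GROUP NAME" | Select-Object Name, SettlementPeriodBeforeUse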

To change this time, enter the following command:

Set-BrokerDesktopGroup -Name "DELIVERY GROUP NAME" -SettlementPeriodBeforeUse 00:05:00

With the above setting, the settlement time is changed to 5 minutes – do not forget to replace the Delivery Group name with the actual one.
nvidia_license_not_applied_04

If you now run the command to display the Delivery Group settings again, you will notice that SettlementPeriodBeforeUse has changed.
nvidia_license_not_applied_05

NVIDIA Registry Entry – Available since GRID 5.3 / 6.1

With GRID release 5.3 / 6.1, NVIDIA published a registry entry that also addresses this issue. The description is a little limited, but if I understood it correctly, it changes the driver behavior so that all restrictions are also removed when the license is applied after the session has started. Before you can add the registry setting, you need to install the NVIDIA drivers in your (master) VM and apply an NVIDIA license. When the license has been applied successfully, create the following registry entry:

Path: HKLM\SOFTWARE\NVIDIA Corporation\Global\GridLicensing
Type: DWORD
Name: IgnoreSP
Value: 1
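
If you prefer to script this, e.g. as part of your master image preparation, a minimal PowerShell sketch to create the value could look like this (run it elevated):

$path = 'HKLM:\SOFTWARE\NVIDIA Corporation\Global\GridLicensing'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }   # create the key if it does not exist yet
New-ItemProperty -Path $path -Name 'IgnoreSP' -PropertyType DWord -Value 1 -Force | Out-Null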

After creating the entry, you must restart the VM (and update your Machine Catalogs if it is a master VM). Honestly, I was not able to test this last solution myself – so I cannot tell whether it really fixes the issue every time or only most of the time.

That’s it – hope it helps (and NVIDIA will completely fix this issue in one of their next releases).

Creating a High Available IGEL Universal Management Server Infrastructure with Citrix NetScaler

When you reach a point where you manage many IGEL Thin Clients, it becomes quite helpful if the IGEL UMS is still available even when the server hosting the UMS fails. Furthermore, you can use this to reduce the load on a single system – for example, when multiple clients update their firmware at the same time, they all download it from the IGEL UMS server. To reach this goal you have two options:

1. Buy the IGEL High-Availability Option for UMS (including a Load Balancer)
2. Load Balance two UMS that use the same Database with a Citrix NetScaler

In this blog post, I would like to show you how to realize the second option. Before we can start with the actual configuration, we need to have a look at a few requirements for this configuration.

1. Database Availability
When you create a highly available UMS server infrastructure, you should also make the required database highly available. Otherwise, all configured UMS servers stop working if the database server fails.

2. Client IP Forwarding
The UMS servers need to know the actual IP of the Thin Clients. If you do not forward the client IP, all Thin Clients appear with the same IP address, and you will not be able to send commands to a Thin Client or see its online status. Unfortunately, forwarding the client IP leads to another problem: the client connects to the load-balanced IP of the UMS servers, the connection is forwarded (with the original client IP) to one UMS server, and that server replies directly to the Thin Client IP. The Thin Client then receives a reply from an IP address it did not initially connect to and ignores it. One (easy) way to fix this is to put the UMS servers in a separate subnet, add a subnet IP (SNIP) to the NetScaler in this subnet, and configure this NetScaler SNIP as the default gateway for the UMS servers. The UMS servers still receive the original client IP, but the reply to the client passes through the NetScaler, which then replaces the server IP with the load-balanced IP address.
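
To check or set the default gateway on the UMS servers, a short PowerShell sketch like the following can be used (the SNIP 10.0.10.1 and the interface alias Ethernet are assumptions – adjust them to your environment, and remove an existing default route first if there is one):

Get-NetRoute -DestinationPrefix 0.0.0.0/0                                                   # show the current default gateway
New-NetRoute -DestinationPrefix 0.0.0.0/0 -InterfaceAlias 'Ethernet' -NextHop 10.0.10.1     # point it to the NetScaler SNIP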

Now let’s start with the actual configuration. The first step is to install the Standard UMS (with UMS Console) on two (or more) Servers.
igel_load_balancing_netscaler_01

After the installation has finished successfully, it is time to connect the external database (if you are unsure about any installation steps, have a look at the really good Getting Started Guide). To do this, open the IGEL Universal Management Suite Administrator and select Datasource.
igel_load_balancing_netscaler_02

In this example, I will use a Microsoft SQL Always On cluster – but you can select any available database type that offers a high availability option. Of course, this is not mandatory – but what does it help you if the UMS servers are highly available and the database is not? If the database server fails, the UMS is also down – you would still have a single point of failure.
Enter the host name, port, user, schema and database name.
igel_load_balancing_netscaler_03

Keep in mind that the database is not created automatically – you have to do this manually beforehand. For a Microsoft SQL Server you can use the following script to create the database. After creating the database, do not forget to make it highly available – e.g. using the Always On feature.
If you prefer a different name, change rmdb to the required name. Besides that, replace setyourpasswordhere with a password. The user (CREATE USER) and schema (CREATE SCHEMA) names can also be changed.

CREATE DATABASE rmdb
GO
USE rmdb
GO
CREATE LOGIN igelums with PASSWORD = 'setyourpasswordhere',
DEFAULT_DATABASE=rmdb
GO
CREATE USER igelums with DEFAULT_SCHEMA = igelums
GO
CREATE SCHEMA igelums AUTHORIZATION igelums GRANT CONTROL to igelums
GO

After confirming the connection details, you will see the new connection. To enable it, select Activate and enter the password of the SQL user.
igel_load_balancing_netscaler_04

On the first server, you will get a message that there is no schema in the database and that it needs to be created. Confirm this with Yes.
igel_load_balancing_netscaler_05

You should now see an activated datasource configuration. Repeat the same steps on the second UMS server. Of course, you do not need to create another database – just connect to the same database as with the first server.
igel_load_balancing_netscaler_06

Time to start with the actual load balancing configuration on the Citrix NetScaler. Open the management website and switch to Configuration => Traffic Management => Servers.
igel_load_balancing_netscaler_07

Select Add to create the IGEL UMS servers. Enter the name and either the IP address or the domain name.
igel_load_balancing_netscaler_08

Repeat this for all UM-Servers (in my example I added two servers).
igel_load_balancing_netscaler_09

Now we need Services or a Service Group for all UMS servers. I personally prefer Service Groups, but if you normally use Services, that is also possible.
igel_load_balancing_netscaler_10

After switching to Service Groups, select Add again to create the first UMS Service Group. In total, we need three Service Groups:
Port 30001: Thin Client Connection Port
Port 8443: Console Connection Port
Port 9080: Firmware Updates

The first one we create is the Service Group for Port 30001. Enter a Name and select TCP as the Protocol. The other settings don’t need to be changed.
igel_load_balancing_netscaler_11


Now we need to add the Service Group members. To do this, select No Service Group Member.
igel_load_balancing_netscaler_12

Mark the UMS Servers created in the Servers area and confirm the selection with Select.
igel_load_balancing_netscaler_13

Again, enter the Port number 30001 and finish with Create.
igel_load_balancing_netscaler_14

The Service Group now contains two Service Group Members.
igel_load_balancing_netscaler_15

As mentioned at the beginning, we need to forward the client IP to the UMS servers. Otherwise, every client would have the same IP – the NetScaler subnet IP. To do this, edit the Settings (not Basic Settings!) and enable Use Client IP. Confirm the configuration with OK.
igel_load_balancing_netscaler_16

That’s it – the Service Group for Port 30001 is now configured.
igel_load_balancing_netscaler_17

Repeat the same steps for port 8443 – but do not enable Use Client IP. Otherwise, you will not be able to connect to the UMS servers via the load-balanced IP / name from inside the IP range of the UMS servers themselves.
igel_load_balancing_netscaler_18

Finally, you need to create a Service Group for Port 9080 – this time you can again forward the Client IP.
igel_load_balancing_netscaler_19

At the end, you should have three Service Groups.
igel_load_balancing_netscaler_20

Time to create the actual client connection points – the Virtual Servers (Traffic Management => Load Balancing => Virtual Servers).

igel_load_balancing_netscaler_21

Like before select Add to create a new Virtual Server. Again, we need three virtual servers for the Ports 30001, 8443 and 9080.

The first Virtual Server we create is for Port 30001. Enter a Name and choose TCP as the Protocol. Furthermore, enter a free IP Address in the separate Subnet of the UM-Servers. The Port is of course 30001.
igel_load_balancing_netscaler_22

After this, we need to bind the Services or the Service Group to this Virtual Server. If you created Services instead of a Service Group, make sure you add the Services of all UMS servers. To add the created Service Group, click on No Load Balancing Virtual Server Service Group Binding.
igel_load_balancing_netscaler_23

Select the Service Group for Port 30001 and confirm the selection with Bind.
igel_load_balancing_netscaler_24

The Service Group is now bound to the Virtual Server. Press Continue to get to the next step.
igel_load_balancing_netscaler_25

When a client connects, we need to make sure that it keeps connecting to the same UMS server after the initial connection and does not flip between them. If a client ends the connection or a UMS server fails, it is of course fine for the client to connect to the other UMS server. For this, we need to configure persistence. As Persistence Type we select Source IP, the Time-Out is changed to 5 (minutes), and the IPv4 Netmask is 255.255.255.255. Confirm the persistence with OK.
igel_load_balancing_netscaler_26

Finish the Virtual Server configuration with Done.
igel_load_balancing_netscaler_27

Repeat the same steps for the other two ports, so that you end up with three Virtual Servers. Of course, all of them need to use the same IP address.
igel_load_balancing_netscaler_28
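
If you prefer to script or document this configuration, the same objects can also be created through the NetScaler Nitro REST API. The following PowerShell sketch only covers the chain for port 30001 – the NSIP, server IPs, load-balanced IP, credentials and object names are placeholders, and it assumes PowerShell 7 (for -SkipCertificateCheck):

# Minimal sketch – adjust all addresses and names to your environment.
$nitro = 'https://192.168.1.5/nitro/v1/config'
$auth  = @{ 'X-NITRO-USER' = 'nsroot'; 'X-NITRO-PASS' = 'password' }

function Add-NsObject ($type, $props) {
    # POST one configuration object of the given Nitro resource type
    Invoke-RestMethod -Method Post -Uri "$nitro/$type" -Headers $auth -SkipCertificateCheck `
        -ContentType 'application/json' -Body (@{ $type = $props } | ConvertTo-Json -Depth 5)
}

# Server objects for both UMS servers
Add-NsObject 'server' @{ name = 'UMS01'; ipaddress = '10.0.10.11' }
Add-NsObject 'server' @{ name = 'UMS02'; ipaddress = '10.0.10.12' }

# Service Group for the Thin Client port 30001 with Use Client IP (USIP) enabled
Add-NsObject 'servicegroup' @{ servicegroupname = 'SG_UMS_30001'; servicetype = 'TCP'; usip = 'YES' }
Add-NsObject 'servicegroup_servicegroupmember_binding' @{ servicegroupname = 'SG_UMS_30001'; servername = 'UMS01'; port = 30001 }
Add-NsObject 'servicegroup_servicegroupmember_binding' @{ servicegroupname = 'SG_UMS_30001'; servername = 'UMS02'; port = 30001 }

# Virtual Server with Source IP persistence (5 minutes) and the Service Group binding
Add-NsObject 'lbvserver' @{ name = 'VS_UMS_30001'; servicetype = 'TCP'; ipv46 = '10.0.10.10'; port = 30001; persistencetype = 'SOURCEIP'; timeout = 5 }
Add-NsObject 'lbvserver_servicegroup_binding' @{ name = 'VS_UMS_30001'; servicegroupname = 'SG_UMS_30001' }

Repeat the same pattern for ports 8443 (without usip) and 9080.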

To make it easier to connect to the load-balanced UMS servers, it is a good idea to create a DNS host entry, e.g. with the name igel, pointing to the IP address of the Virtual Servers. If you configured a DHCP option or DNS name for the Thin Client auto-registration / connection, change it to the IP address of the Virtual Servers as well.
igel_load_balancing_netscaler_29
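
On a Windows DNS server, such a host entry can also be created with PowerShell – a small sketch, assuming the zone name example.local and the virtual server IP 10.0.10.10:

Add-DnsServerResourceRecordA -ZoneName 'example.local' -Name 'igel' -IPv4Address '10.0.10.10'   # run on the DNS server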

You can now start the IGEL Universal Management Suite and connect to the created Host-Name.
igel_load_balancing_netscaler_30

After a successful connection, you can see the used server name in the bottom left area and under the Toolbar.
igel_load_balancing_netscaler_31

We now need to point the Thin Clients to the new load-balanced UMS servers. You either need to modify an existing profile or create a new one. The necessary configuration can be found in the following area:
System => Remote Management => Universal Management Suite (right area).
Modify the existing entry and change it to the created host name. Save the profile and assign the configuration to your Thin Clients.
igel_load_balancing_netscaler_32

The last step is necessary to allow the Thin Clients to update / download a firmware even when one UMS server is not available. By default, a firmware always points to one UMS server and not to the load-balanced host name. Switch to the Firmware area and select a firmware. Here you can find the Host entry. Change it to the created host name and save the settings. Repeat this for all required firmwares. If you download a new firmware, make sure you always modify the Host entry – otherwise the new firmware will only be available from one UMS server.
igel_load_balancing_netscaler_33

Of course, when you download or import a firmware using the UMS, it is only stored on one of the UMS servers. To make the firmware available on both UMS servers, you need to replicate the following folder (if you modified the UMS installation path, it will be different):
C:\Program Files (x86)\IGEL\RemoteManager\rmguiserver\webapps\ums_filetransfer
A good way to do this is DFS replication. Nevertheless, any replication technology is fine – just make sure (when changing the Host entry for a firmware) that the firmware files are available on both UMS servers.
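
If you do not want to set up DFS, even a simple scheduled copy job can do the trick. A small sketch, run from PowerShell or a scheduled task on the first UMS server (the server name UMS02 and the default installation path are assumptions, and note that this only copies in one direction):

robocopy "C:\Program Files (x86)\IGEL\RemoteManager\rmguiserver\webapps\ums_filetransfer" `
         "\\UMS02\c$\Program Files (x86)\IGEL\RemoteManager\rmguiserver\webapps\ums_filetransfer" `
         /E /R:2 /W:5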

That’s it – hope this was helpful for some of you.

Enable XenServer Live Migration for VMs with a NVIDIA vGPU

When you install NVIDIA GRID 6.0 on a XenServer, you need to manually enable the possibility to live-migrate a VM that is using a (v)GPU. Otherwise, you see the following message when you try to migrate a VM:

The VGPU is not compatible with any PGPU in the destination

nvidia_enable_live_migration_01

The steps to enable the migration are described in the GRID vGPU User Guide. To enable live migration, connect to the console of the XenServer (e.g. using PuTTY) and log in as root. The next step is to edit (or create) the file nvidia.conf in the folder /etc/modprobe.d. To do this, enter the following command:

vi /etc/modprobe.d/nvidia.conf

nvidia_enable_live_migration_02

Here you need to add the following line:

options nvidia NVreg_RegistryDwords="RMEnableVgpuMigration=1"

nvidia_enable_live_migration_03

Save the change with :wq and restart the host. Repeat this for all hosts in the pool. After restarting, you can migrate a running VM with a (v)GPU to all hosts that have the changed setting. If you have not configured the setting on a host or have not rebooted it yet, only the other hosts are available as a target server.

nvidia_enable_live_migration_04

New book: Citrix XenApp and XenDesktop 7.15 LTSR (German)

xa_xd_7_15_ltsr

My book about Citrix XenApp and XenDesktop 7.15 LTSR (German) is finally available in the store. It was a lot of work (and learning), but also fun to test some rarely used features. You can find more details here.

Update a single Citrix XenServer in a Pool

Citrix XenCenter offers an easy way to update your XenServer (pools). Furthermore, most updates can nowadays be installed without a reboot; only a few require a reboot or a XE Toolstack restart. The process is fully automated: the first host is updated and (if necessary) rebooted, then the second one follows, and so on. To reboot a host, the VMs are migrated to another host. This is fine unless you use local storage for your VMs. (I don't want to discuss here whether this makes sense or not!) If the VMs use local storage, they cannot be migrated automatically. If they were deployed using MCS, they cannot even be migrated manually (the base disk would still be on the initial host). In the past, this was not a problem: you could just put the VMs into maintenance mode and update one host at a time.

To do this, you started the Install Update wizard, selected the updates you wanted to install and were able to select the host you wanted to update:

xenserver_update_single_host_from_pool_01

When the update was finished, you disabled maintenance mode on the corresponding VMs and enabled it on the VMs of the next host. After some time, the VMs on the second host were no longer in use and you could update that host. Since XenCenter 7.2 this is no longer possible – after selecting an update, you can only choose to update the whole pool:

xenserver_update_single_host_from_pool_02

Luckily, there is a small "workaround": use XenCenter to download the updates and copy them to the XenServer. (I really like the update overview in XenCenter – no need to check online which updates are available.) To use this workaround, start the Install Update wizard and select the update you would like to install.

xenserver_update_single_host_from_pool_03

In the next step, select the pool containing the server(s) on which you would like to install the update. The update is now downloaded and transferred to the pool. It is important that you do not press Next at this point! Also be aware that if you close this dialog (Cancel), the update will be deleted from the pool.

xenserver_update_single_host_from_pool_04

Instead, connect to the console (e.g. using PuTTY) of a XenServer that is a member of the pool. Now we need to figure out the UUID of the update. To do this, enter the following command:

xe update-list name-label=HOTFIXNAME

You can find the name of the update in the XenCenter download window. For example, Hotfix 11 for XenServer 7.1 CU1 has the name XS71ECU1011. Copy the UUID and note whether something is required after the hotfix is installed (after-apply-guidance). This can be either a Toolstack restart (restartXAPI) or a host reboot (restartHost).

xenserver_update_single_host_from_pool_05

The next step is to install the Update on the required hosts. This can be achieved with the following command:

xe update-apply host=HOSTNAME uuid=PATCH-UUID

Replace HOSTNAME with the name of the XenServer you would like to update and PATCH-UUID with the UUID copied from the patch. Repeat the same for all hosts you would like to update. When the patch has been applied successfully, no further message is displayed.

xenserver_update_single_host_from_pool_06

That means you have to remember the after-apply-guidance that was shown together with the update UUID. The good thing is – if you have forgotten it, you can check in XenCenter whether another step is necessary. Just open the General area of the server. Under Updates you can see whether an installed update requires a Toolstack or host restart. If so, perform it to finish installing the update.

xenserver_update_single_host_from_pool_07

That is it – now you know how to install an update on a single member of a XenServer pool. There are just two other things I would like to add.

The first is that you can install multiple updates at the same time. You start with the same steps: select an update in XenCenter and continue until it has been transferred to the pool. Now go back with Previous to the Select Update area, select the next update and continue as before until it has also been transferred to the pool. Repeat this for all updates you would like to install. Remember not to close the update dialog – otherwise the update files will be removed from the pool and you can no longer install them. Now note down the UUIDs of the updates and install all of them. It is not necessary to reboot after each update that requires a host reboot / Toolstack restart – just install them all, and if one (or more) of them requires a reboot, reboot once at the end.

The next thing I would like to add is that you can also keep the update files on the XenServer. To do this, you need to kill XenCenter through the Task Manager once the updates have been transferred to the pool.

Reduced Performance with Citrix Linux Receiver 13.7 / 13.8 (compared to 13.5)

Today I would like to show you a current issue when upgrading the Linux Receiver from 13.5 to 13.7 or 13.8. This is especially important if you are using Thin Clients: with a new firmware, a Thin Client vendor often also updates the included Receiver version. For example, with one of the last firmwares, IGEL switched the default Receiver to version 13.7. Unfortunately, it seems that there was a bigger change in the Linux Receiver that leads to reduced performance – especially when playing a video. This is a problem because it means that all HDX 3D Pro users are affected (HDX 3D Pro = H.264 stream). To show you the difference between the two Receiver versions, I created the following video. Both screens use the same hardware / firmware; only the Linux Receiver version was changed.

We currently have a ticket open with Citrix to fix the problem – but no solution is available yet.

NVIDIA Grid Cards – OEM “List” Price Comparison

In a discussion with a smart guy from NVIDIA, he showed me Thinkmate.com as a nice source for Supermicro servers (and GRID cards). The visible prices for the GRID cards were kind of "interesting" – so I thought a comparison of the publicly visible list prices for GRID cards from the different OEMs might be interesting. Prices may change in the future. In addition, some cards require a separate cable kit. All prices are from 13.02.2018. (Don't forget – these are list prices!)

OEM          M10         M60          P4          P40
Dell         $5,050.56   $9,821.84    $4,020.79   $12,305.72
Supermicro   $2,299.00   $4,399.00    $1,899.00   $5,699.00
HP           $3,999.00   $8,999.00    $3,699.00   $11,599.00
Cisco        $8,750.00   $16,873.00   $6,999.00   $21,000.00
Lenovo       ??          $9,649.00    ??          ??

If you know a source for the missing Lenovo List Prices – please let me know so I can add them.

Sources:
Dell:
M10: http://www.dell.com/en-us/shop/accessories/apd/490-bdig
M60: http://www.dell.com/en-us/work/shop/accessories/apd/490-bcwc
P4: http://www.dell.com/en-us/work/shop/accessories/apd/489-bbcp
P40: http://www.dell.com/en-us/work/shop/accessories/apd/489-bbco

Supermicro:
https://www.thinkmate.com/system/superserver-2029gp-tr

HP:
https://h22174.www2.hpe.com/SimplifiedConfig/Welcome
=> ProLiant DL Servers => ProLiant DL300 Servers => HPE ProLiant DL380 Gen10 Server => Configure (just choose one) => Graphics Options

Cisco:
http://itprice.com/cisco-gpl/GPU

Lenovo:
http://itprice.com/lenovo-price-list/nvidia.html
