
Windows 10 Enterprise for Virtual Desktops (or WVD On-Prem?)

As many of you know, Microsoft introduced a special Enterprise edition in Insider Preview build 17713:
Windows 10 Enterprise for Remote Sessions

If you search for this, you only find a few blog posts about this edition – most focus on Microsoft Windows Virtual Desktops (WVD). The actual content mostly just shows multiple simultaneous RDP sessions on a Windows 10 VM – like you know it from the Microsoft Remote Desktop Session Host (RDSH) (or, to use the old name, Terminal Server). Only Cláudio published two posts about this topic during the last days, which led to some interesting discussions. You can find them here and here.

When running Office 365 and other programs on an RDSH there are some limitations – often it's not even supported. Now that Windows 10 allows multiple users to connect to one VM, they get the full Windows 10 experience while still sharing a single VM (like with RDSH).

When booting an official Windows 10 1809 ISO I saw that Windows 10 Enterprise for Virtual Desktops is still available. As I didn't find any blog posts taking a closer look at this edition, I decided to have a look myself.

[Update]
 posted the following on Twitter – so you can also use other ISOs of Windows 10:

If you installed Enterprise and want to get #WVD, you can also simply upgrade it with this key: CPWHC-NT2C7-VYW78-DHDB2-PG3GK – this is the KMS Client key for ServerRdsh (SKU for WVD). Works with jan_2019 ISO.

[/Update]
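
Applied in an elevated prompt, that could look like the following sketch. This is only an assumption based on the tweet above – I have not verified that changing the key alone triggers the edition switch on every build – and a reboot is required afterwards:

# Hedged sketch: install the ServerRdsh KMS client key from the tweet above
# and reboot so the edition change can take effect (untested assumption).
cscript.exe //NoLogo C:\Windows\System32\slmgr.vbs /ipk CPWHC-NT2C7-VYW78-DHDB2-PG3GK
Restart-Computer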

Disclaimer:
I am not a licensing expert and I'm quite sure some of the following conflicts with license agreements. During the tests in my lab I only wanted to figure out what's possible and where the limitations are. Besides that, some of the things described are not supported (and I bet never will be) – but I like to know what's technically possible. :-)

Installing Windows 10 Enterprise for Virtual Desktops and logging in

Ok let’s get to the technical part. I connected an 1809 Iso, booted a VM and got the following Windows-Version selection. After selecting Windows 10 Enterprise for Virtual Desktops the normal Windows Installer questions followed.win_10_ent_wvd_01

After a reboot I was presented with this:
win_10_ent_wvd_02
No question about a domain join or a user – just a login prompt. So I started to search and only found one hint on Twitter (about the Insider build) that the only options are to reset the password with a tool or to add a new user. Before you wonder: yes, it's possible to create another administrator account for Windows without having login credentials. As I had already tested that in the past (and didn't want to figure out which password reset tool fits), I decided to take that route.

[Update]

 gave me the following hint on Twitter – this way you can skip the steps below for creating a local administrator user and continue at Multiple RDP-Connections – Not Domain Joined.

Easier, boot to safe mode and it logs right in… then add your user.

[/Update]

Ok, time to boot from the ISO once more and open the Command Prompt (press Shift + F10 when the setup shows up).
win_10_ent_wvd_03

Now we need to replace the Utility Manager (from the normal login screen) with cmd.exe. The Utility Manager on the login prompt is always started with admin rights…
To replace the Utility Manager with a cmd, enter the following commands:

move d:\windows\system32\utilman.exe d:\windows\system32\utilman.exe.bak

copy d:\windows\system32\cmd.exe d:\windows\system32\utilman.exe

2019-02-13 12_59_15-win10_ent_64bit_multi_user_master

That’s it. Time to boot Windows again. Now press the Utility Manager Icon on the bottom right side. Voila: A command prompt (with elevated rights):
win_10_ent_wvd_05

The next step is to create an account and add it to the local Administrators group. Therefore, enter the following commands (replace username with the account name you want):

net user username /add

net localgroup administrators username /add

win_10_ent_wvd_07

And voilà – you can now log in with the user you just created (without a password):
win_10_ent_wvd_08

Multiple RDP-Connections – Not Domain Joined

For my first tests I just wanted to connect multiple users to the machine without joining the VM to the AD, to prevent any impact from policies or anything else. I created three local users:
win_10_ent_wvd_09

When I logged off my created administrator, all three were available on the Welcome screen to log in:
win_10_ent_wvd_10

An RDP connection was not possible at this point. So let's check the System Properties => Remote Settings. As you can see, Remote Desktop was not allowed:
win_10_ent_wvd_11

I enabled "Allow remote connections to this computer" and added the three created users to the allowed RDP users:
win_10_ent_wvd_12
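
If you prefer to script these two steps instead of clicking through the GUI, a small PowerShell sketch like the following should do the same – the user names are placeholders from my test, and the firewall display group is named differently on non-English installations:

# Allow incoming Remote Desktop connections (same registry value the GUI checkbox sets)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name 'fDenyTSConnections' -Value 0
# Open the predefined Remote Desktop firewall rules
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'
# Add the three local test users to the Remote Desktop Users group (names are placeholders)
net localgroup "Remote Desktop Users" user1 /add
net localgroup "Remote Desktop Users" user2 /add
net localgroup "Remote Desktop Users" user3 /add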

Time for another RDP test. The first user logged on successfully – but the second one received the message that two users are already logged on and one must disconnect:
win_10_ent_wvd_13

That was not the behavior I expected. After searching around a little and finding no solution, I tried the old "Did you try a reboot yet?" method. And now something interesting happened. When shutting down, this was shown:
win_10_ent_wvd_14

And after booting, Windows showed this:
win_10_ent_wvd_15
All Windows Updates had been installed before and I had added no other Windows features, functions or programs. So what was being installed now?
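
I didn't dig into what exactly gets added, but if you want to check it yourself, a quick sketch like this snapshots the enabled optional features before and after the reboot so you can diff them (paths and file names are just placeholders):

# Run before the reboot, then again afterwards writing features-after.txt, and compare.
Get-WindowsOptionalFeature -Online |
    Where-Object { $_.State -eq 'Enabled' } |
    Select-Object -ExpandProperty FeatureName |
    Sort-Object | Set-Content C:\Temp\features-before.txt

Compare-Object (Get-Content C:\Temp\features-before.txt) (Get-Content C:\Temp\features-after.txt)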

The Welcome-Screen after the reboot also looked different:
win_10_ent_wvd_16
Before, you could see the created users and select one to log in – now you need to enter username and password. Looks quite familiar from an RDSH host after booting, right? :-)

So I logged in with the local admin again and this message was displayed right away:
win_10_ent_wvd_17

Time for another RDP test. And guess what? This time I was able to connect multiple RDP sessions at the same time:
win_10_ent_wvd_18
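
If you want to verify the concurrent sessions from inside the VM instead of via Task Manager, the classic session tools still work – nothing Windows 10 specific here:

# List all sessions and logged-on users on the multi-session Windows 10 VM
qwinsta          # or: query session
quser            # shows logon and idle times per user
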
So it looks like the "RDSH" role is installed when RDP is enabled and Windows is rebooted. Time for the next tests.

Multiple RDP-Connections – Domain Joined

After everything had worked fine I thought it was time to test how everything behaves in a domain. I reverted to a snapshot where I had just created the local admin user. I logged in and joined the machine to the domain:
win_10_ent_wvd_19

After a reboot I logged in with a Domain Admin to allow remote connections to this computer.
win_10_ent_wvd_20
Interestingly, this was already enabled. Furthermore, it was no longer possible to disable it. No policies had been applied to enable this setting – only the default Domain Policy was active.

So let’s go on and allow the Domain Users to connect.
win_10_ent_wvd_21

Like before the logon of a third user was denied:
win_10_ent_wvd_22

But as we already know, a reboot is necessary to install the missing features. Like before, some features are magically installed:
win_10_ent_wvd_23

Now several domain users are able to connect to the Windows 10 VM at the same time using RDP.
win_10_ent_wvd_24

Citrix Components

The next logical step for me was to try to install a Citrix Virtual Delivery Agent on the VM. So connect the Citrix Virtual Apps and Desktops 7 1811 ISO and start the component selection. But what's that?
win_10_ent_wvd_25
Next to the VDA it's also possible to select all the other roles – which are normally only available on a Server OS! (Just a quick reminder: none of the things I test here are designed or supported to work under such circumstances.)

Delivery Controller

I couldn’t resist and selected Delivery Controller. After accepting the license agreement the Component selection appeared. I selected all available Components.
win_10_ent_wvd_26

The first prerequisites were installed without issues – but the installation of Microsoft Internet Information Services (IIS) failed. The error just showed that something had failed with DISM.
win_10_ent_wvd_27

So I just installed IIS with all available components manually.
win_10_ent_wvd_28

But even after installing all available IIS components, the Delivery Controller installation still failed at the step Microsoft Internet Information Services.
win_10_ent_wvd_29

I decided not to dig deeper into this issue for now and removed all components that require IIS: Director and StoreFront.
win_10_ent_wvd_30

The installation of the other components worked without any issues – as you can see, all selected components are installed on Windows 10 Enterprise for Virtual Desktops.
win_10_ent_wvd_31

Citrix Studio opens and asks for a Site configuration – just like on every supported Server OS.
win_10_ent_wvd_32

Time to create a Site on our Windows 10 Delivery Controller.
win_10_ent_wvd_33
The Summary shows that the Site was successfully created.
win_10_ent_wvd_34

And Citrix Studio now shows the option to create a Machine Catalog.
win_10_ent_wvd_35

And a last check of the services: Everything is running fine.
win_10_ent_wvd_36
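
If you prefer the command line for that service check, a one-liner does the same:

# List all Citrix services and their state on the Windows 10 "Delivery Controller"
Get-Service Citrix* | Sort-Object Status, DisplayName | Format-Table -AutoSize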

Virtual Delivery Agent

Before we can create a Machine Catalog, we need a master VDA. So let's create another VM with Windows 10 Enterprise for Virtual Desktops. Repeat the steps from above (domain join with two reboots) and attach the Citrix Virtual Apps and Desktops ISO. This time we select Virtual Delivery Agent for Windows 10 Enterprise for Virtual Desktops. To be able to create multiple VMs from this master VM using MCS, select Create a master MCS image.
win_10_ent_wvd_37

The next step is to select the Additional Components – like in every other VDA Installation.
win_10_ent_wvd_38

Now I entered the name of the Windows 10 VM on which I previously installed the Delivery Controller.
win_10_ent_wvd_39
The summary shows the selected components and the Requirements.
win_10_ent_wvd_40

The installation started….
win_10_ent_wvd_41

….and finished without any errors.
win_10_ent_wvd_42

The VDA was successfully installed on Windows 10 Enterprise for Virtual Desktops.
win_10_ent_wvd_43

Machine Catalog

Time to deploy some VMs – so switch back to the Studio and start the Machine Catalog Configuration.
win_10_ent_wvd_44

The first step is to select the operating system type. Normally this is easy: Server OS for Windows Server 2016 / 2019 – Desktop OS for Windows 10. But this time it's tricky – we have a Desktop OS with Server OS functions. I first decided to take Desktop OS, although I thought Server OS might fit better.
win_10_ent_wvd_45

Select the just created master VM and configure additional settings like CPU, RAM, etc.
win_10_ent_wvd_46

Finally enter a name for the Machine Catalog and let MCS deploy the configured VMs.
win_10_ent_wvd_47

Delivery Group

As we now have a Machine Catalog with VMs, it was time to create a Delivery Group – so we can allow users to access the VM(s).
win_10_ent_wvd_48

I selected the created Machine Catalog…
win_10_ent_wvd_49

… and added a Desktop to the Delivery Group.
win_10_ent_wvd_50

The Summary shows the configured settings.
win_10_ent_wvd_51
Now we have everything ready on the Delivery Controller – just the user access is missing.

As the installation of StoreFront had failed, I added the Windows 10 Delivery Controller to my existing StoreFront deployment.
win_10_ent_wvd_52

After logging in, the user can see the just published Desktop. Unfortunately, the user is not able to start it.
win_10_ent_wvd_53

So back to the Delivery Controller. Oh – the VDA is still unregistered, with a "!" in front of the status.
win_10_ent_wvd_54

Let’s check the VDA-Event-Log:

The Citrix Desktop Service was refused a connection to the delivery controller 'win10-multi-01.lab.jhmeier.com'.

The registration was refused due to ‘SingleMultiSessionMismatch’.

win_10_ent_wvd_55
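
By the way, you can also see this mismatch with the Broker PowerShell snap-ins on the Delivery Controller – the SessionSupport reported by the VDA has to match the one of its Machine Catalog. A quick check could look like this (assuming the Citrix snap-ins are present, as they are after the Studio installation):

Add-PSSnapin Citrix*
# SessionSupport should be MultiSession for a Windows 10 Enterprise for Virtual Desktops VDA
Get-BrokerCatalog | Select-Object Name, SessionSupport, ProvisioningType
Get-BrokerMachine | Select-Object MachineName, CatalogName, SessionSupport, RegistrationState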

Looks like my feeling was correct that the Machine Catalog should have the operating system type Server OS. I created another Machine Catalog and Delivery Group – this time with the type Server OS.
win_10_ent_wvd_56

Let’s boot a VDA and check the Status: Registered.
win_10_ent_wvd_57

Time to check the user connections. After logging on to StoreFront the user now sees the second published Desktop – Type Server OS.
win_10_ent_wvd_58

And this time the connection is possible. As you can see, we now have multiple users connected via ICA to the same Windows 10 VDA.
win_10_ent_wvd_59

At this point I would like to sum up what worked and what didn’t work until now:

What worked:

  • Multiple RDP connections to one Windows 10 VM – Domain-Joined and not Domain-Joined
  • Multiple ICA connections to Domain-Joined Windows 10 VDA
  • Delivery Controller on Windows 10 (including Studio and License Server)

What did not work:

  • Installation of Citrix Components that require an IIS (StoreFront and Director)

If the last point can be fixed, it would be possible to create an "all-in-one" Windows 10 Citrix VM. Thus you could run all Citrix infrastructure components on a single VM – not supported of course, but nice for some testing. Besides that, it's really interesting to see that, next to the VDA, a lot of the Delivery Controller components also just work out of the box on Windows 10 Enterprise for Virtual Desktops.

Looking at the behaviour of the Citrix components, it seems that all server features (including RDSH) that the Citrix installer uses to detect whether it is running on a Desktop or Server OS are integrated into Windows 10 Enterprise for Virtual Desktops.

That’s it for now – I have some other things I will test with this Windows 10 Edition – if I find something interesting another blog post will follow Smile

Data Comparison of NVIDIA Tesla P4 and T4

During NVIDIA GTC Europe 2018 NVIDIA announced the new Turing-based Tesla T4 graphics card. On Twitter this card got a lot of love, as it looked like a good evolution of the (now) mainly suggested Tesla P4 (you remember, last year it didn't even appear on NVIDIA's slides – now it's their primary suggestion – thanks for listening, NVIDIA). I saw the first details about the card in John Fanelli's presentation on the first day. They looked really promising. Here is the picture I took with the card details shown (sorry for the head in front of it – I didn't expect to need it for a blog post…):

tesla_t4_data_jf

You can find the P4 data in this PDF.

image

Let us compare the data of both cards that we have until now.

                         P4                         T4
GPU                      Pascal                     Turing
CUDA Cores               2560                       2560
Frame Buffer (Memory)    8 GB                       16 GB
vGPU Profiles            1 GB, 2 GB, 4 GB, 8 GB     1 GB, 2 GB, 4 GB, 8 GB, 16 GB
Form Factor              PCIe 3.0 single slot       PCIe 3.0 single slot
Max Power                75 W                       70 W
Thermal                  Passive                    Passive

As you can see, both cards are really similar. Both need just a single slot. Thus, you can put up to six (or sometimes eight – yes, there are a few servers that support eight(!) cards – just check the HCL) cards in one server while keeping the power consumption limited. The main difference is that the T4 uses the new Turing chip and has double the frame buffer (16 GB). That means you can run 16 VMs, each with a 1 GB frame buffer, on this card. Although I don't know whether the Turing chip offers enough performance for that (as I have no test card yet, @NVIDIA), it might be a good option to run 8 VMs with a 2 GB frame buffer each on one card. That would help in many situations where 1 GB of frame buffer is not enough. On the P4 you could only put 4 such VMs on one card.

So, at this point I also thought that the T4 is a good evolution of the P4. The only main point I was missing was a price, as that could be an argument against it. But then I attended another session. There they also showed some details about the T4. There was one detail I noticed quite late, so I was too late to take a picture. Luckily my friend Tobias Zurstegen took this picture showing the technical information:

image

Two details are shown here that I didn't find anywhere else. First there are the Max Users – not too hard to calculate when you know that the smallest profile is a 1B profile (= 1 GB frame buffer). But next to that there is the number of H.264 1080p 30 streams. So, let's add these points to our list.

                         P4                         T4
GPU                      Pascal                     Turing
CUDA Cores               2560                       2560
Frame Buffer (Memory)    8 GB                       16 GB
vGPU Profiles            1 GB, 2 GB, 4 GB, 8 GB     1 GB, 2 GB, 4 GB, 8 GB, 16 GB
Form Factor              PCIe 3.0 single slot       PCIe 3.0 single slot
Max Power                75 W                       70 W
Thermal                  Passive                    Passive
Max Users (VMs)          8                          16
H.264 1080p 30 Streams   24                         16

What’s that? The number of H.264 1080p 30 Streams is lower on a T4 (16) compared to a P4 which has 24 Streams. The T4 has 8 (!) Streams less than a P4?!? If you keep in mind that e.g. in a HDX 3D Pro environment each monitor of a user with activity requires one stream that means that with 8 users having a dual monitor all available Streams can already be used. If you put more users on the same card it might happen that this leads to a performance issue for the users as there not enough H.264 Streams available. Unfortunately, I haven’t found anything on NVIDIAs website that proofs that this number of H.264 Streams is correct and was not just a typo in the slide.

But if it’s trues I am wondering why that happened? What did NVIDIA change thus the number of streams went down and not up. I would have expected at least 32 Streams (compared to the P4). If it was a card design change that would be contra productive. Let us hope it was just a Typo on the slide.

If someone finds another official document which proves (or disproves) this number, please let me know.

Should the number be correct, I hope NVIDIA listens again and changes this before the card is released. I see this as a big bottleneck for many environments – especially as many people don't know about the number of streams and then wonder why they have bad performance: the graphics chip itself is not under heavy load, and they don't know that too many required H.264 streams can also lead to poor performance. Next to that, keep in mind that it's now also possible to use H.265 in some environments – but using it possibly leads to even fewer streams, as its encoding is more resource intensive.

IT-Administrator Article – Skype for Business in Citrix Environments

It’s been quite a long time since I have written an article for the German magazine IT Administrator. Thus I thought it’s time for another article. You can find it it in the current IT-Administrator. The article describes how you can use Skype for Business in Citrix Environments and benefit from the Citrix RealTime Optimization Pack.

ita201405-titel

Citrix wants to enable the smooth use of Skype for Business in VDI environments and has released the Citrix RealTime Optimization Pack for this purpose. We will look at how this plug-in works and how you can implement it in your environment.

NVIDIA GRID license not applied before the user connects – License Restriction will not be removed until the user reconnects

When you are using NVIDIA GRID, you might know that NVIDIA started to enforce license checking in version 5.0. This means that if no license is available, the following restrictions are applied to the user:

  1. Screen resolution is limited to no higher than 1280×1024.
  2. Frame rate is capped at 3 frames per second.
  3. GPU resource allocations are limited, which will prevent some applications from running correctly.

For whatever reason NVIDIA has not implemented a grace period after a VM starts – so the restrictions are active until the VM has successfully checked out a license. The effect is that when a user connects to a just-booted VM, his session experience is limited. Furthermore, he has to disconnect and reconnect his session once the NVIDIA GRID license has been applied to, e.g., work with a higher resolution. Currently I know about three workarounds for this issue:

1. Change the Citrix Desktopservice startup to Automatic (Delayed Start)
2. Configure a Settlement Time before a booted VM is used for the affected Citrix Delivery Groups
3. NVIDIA Registry Entry – Available since GRID 5.3 / 6.1

All of these workarounds have their limitations – thus I would like to show you each of them.

Changing the Citrix Desktopservice to Automatic (Delayed Start)

When you change the service to Automatic (Delayed Start) it will not run directly after the VM has booted. The result is that the VM registers later with the Delivery Controller – so it has time to check out the required NVIDIA license before the Delivery Controller brokers a session to it.

To change the service to Automatic (Delayed Start), open the Services console on your master image (Run => services.msc) and open the properties of the Citrix Desktop Service. Change the Startup type to Automatic (Delayed Start) and confirm the change with OK.
nvidia_license_not_applied_01

Now update your VMs from the master image and that's it. The VM should now have enough time to grab an NVIDIA license before it registers with the Delivery Controller.
The downside of this approach is that with every VDA update / upgrade you need to configure it again. Instead of doing this manually, you can run a script with the following command on your maintenance VM before every shutdown. This command changes the Startup type – so you cannot forget to change it (important: there must be a space after "start=").

sc config BrokerAgent start= delayed-auto

In PowerShell the command needs to be modified a little bit:
& cmd.exe /c sc config BrokerAgent start= 'delayed-auto'

Configure a Settlement Time before a booted VM is used

Alternatively, you have the possibility to configure a Settlement Time for a Delivery Group. This means that after the VM has registered with the Delivery Controller, no sessions are brokered to it during the configured time. Again, the VM has enough time to request the necessary NVIDIA license. However, this approach also has a downside – if no other VMs are available, users will still be brokered to just-booted VMs even though the settlement time has not ended. This means that if you didn't configure enough standby VMs to be up and running when many users connect, they might still be brokered to a just-booted VM (without a license).

To check the currently active Settlement Time for a Delivery Group open a PowerShell and enter the following commands:

Add-PSSnapin Citrix*
Get-BrokerDesktopGroup -Name "DELIVERY GROUP NAME"

Replace DELIVERY GROUP NAME with the name of the Delivery Group you would like to check.
nvidia_license_not_applied_02

You now get some information’s about the Delivery Group. The interesting point is the SettlementPeriodBeforeUse in the lower area. By default, it should be 00:00:00.
nvidia_license_not_applied_03

To change this Time enter the following command:

Set-BrokerDesktopGroup -Name "DELIVERY GROUP NAME" -SettlementPeriodBeforeUse 00:05:00

With the above setting, the settlement time is changed to 5 minutes – do not forget to replace the Delivery Group name with the actual one.
nvidia_license_not_applied_04

If you now run the command to show the settings of your Delivery Group again, you will notice that the SettlementPeriodBeforeUse has changed.
nvidia_license_not_applied_05

NVIDIA Registry Entry – Available since GRID 5.3 / 6.1

With GRID release 5.3 / 6.1 NVIDIA published a registry entry that also addresses this. The description is a little bit limited, but if I understood it correctly, it changes the driver behavior so that all restrictions are also removed when the license is applied after the session has started. Before you can add the registry setting, you need to install the NVIDIA drivers in your (master) VM and apply an NVIDIA license. When the license has been applied successfully, create the following registry entry:

Path: HKLM\SOFTWARE\NVIDIA Corporation\Global\GridLicensing
Type: DWORD
Name: IgnoreSP
Value: 1
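
If you want to script this for your master image instead of using regedit, a minimal PowerShell sketch could look like this (path, name and value are taken from above, everything else is just plumbing):

# Create the GridLicensing key if it does not exist yet and set IgnoreSP = 1 (DWORD)
$path = 'HKLM:\SOFTWARE\NVIDIA Corporation\Global\GridLicensing'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name 'IgnoreSP' -PropertyType DWord -Value 1 -Force | Out-Null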

After creating it, you must restart the VM (and update your Machine Catalogs if it is a master VM). Honestly, I wasn't able to test this last solution myself – so I can't tell whether it really fixes the issue every time or only most of the time…

That’s it – hope it helps (and NVIDIA will completely fix this issue in one of their next releases).

Creating a Highly Available IGEL Universal Management Server Infrastructure with Citrix NetScaler

When you reach a point where you manage many IGEL Thin Clients, you might find it quite helpful if the IGEL UMS is still available even when the server running the UMS fails. Furthermore, you can use this to reduce the load on a single system: for example, when multiple clients update their firmware at the same time, they all download it from the IGEL UMS server. To reach this goal you have two options:

1. Buy the IGEL High Availability option for UMS (including a load balancer)
2. Load balance two UMS servers that use the same database with a Citrix NetScaler

In this blog post I would like to show you how to realize the second option. Before we can start with the actual configuration, we need to have a look at a few requirements.

1. Database Availability
When you create a highly available UMS server infrastructure, you should also make the required database highly available. Otherwise, all configured UMS servers stop working when the database server fails.

2. Client IP Forwarding
The UMS servers need to know the actual IP of the Thin Clients. If you did not forward the client IP, all Thin Clients would appear with the same IP address. You then wouldn't be able to send commands to a Thin Client or see its online status. Unfortunately, this leads to another problem: the client connects to the load-balanced IP of the UMS servers, which is forwarded (with the original client IP) to one UMS server. This server replies directly to the Thin Client IP. The Thin Client now receives a reply from an IP address it did not initially connect to and ignores it. One (easy) way to fix this issue is to put the UMS servers in a separate subnet. Add a Subnet IP (SNIP) on the NetScaler in this subnet and configure this SNIP as the default gateway for the UMS servers. This way the UMS servers receive the original client IP, but the reply to the client still passes through the NetScaler, which can then replace the server IP with the load-balanced IP address.
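
On the UMS servers themselves that simply means pointing the default route at the SNIP. A minimal PowerShell sketch – the interface alias and the SNIP address 10.0.20.1 are placeholders for your environment:

# Replace the existing default route with one that uses the NetScaler SNIP as gateway
Get-NetRoute -DestinationPrefix '0.0.0.0/0' -ErrorAction SilentlyContinue | Remove-NetRoute -Confirm:$false
New-NetRoute -DestinationPrefix '0.0.0.0/0' -InterfaceAlias 'Ethernet' -NextHop '10.0.20.1'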

Now let’s start with the actual configuration. The first step is to install the Standard UMS (with UMS Console) on two (or more) Servers.
igel_load_balancing_netscaler_01

After the installation has finished successfully, it's time to connect the external database (if you are unsure about some installation steps, have a look at the really good Getting Started Guide). Therefore, open the IGEL Universal Management Suite Administrator and select Datasource.
igel_load_balancing_netscaler_02

In this example I will use a Microsoft SQL Always On cluster – but you can select any available database type that offers a high availability option. Of course, this is not a must – but what does it help you when the UMS servers are highly available and the database is not? If the database server fails, the UMS would also be down – you would still have a single point of failure.
Enter the Host name, Port, User, Schema and Database Name.
igel_load_balancing_netscaler_03

Keep in mind that the database is not created automatically – you have to do this manually beforehand. For a Microsoft SQL Server you can use the following script to create the database. After creating the database, don't forget to make it highly available – e.g. using the Always On function.
If you prefer a different name, change rmdb to the required name. Besides that, replace setyourpasswordhere with a password. The user (CREATE USER) and schema (CREATE SCHEMA) names can also be changed.

CREATE DATABASE rmdb
GO
USE rmdb
GO
CREATE LOGIN igelums with PASSWORD = 'setyourpasswordhere',
DEFAULT_DATABASE=rmdb
GO
CREATE USER igelums with DEFAULT_SCHEMA = igelums
GO
CREATE SCHEMA igelums AUTHORIZATION igelums GRANT CONTROL to igelums
GO

After confirming the connection details, you can now see the connection. To enable it, select Activate and enter the password of the SQL user.
igel_load_balancing_netscaler_04

On the first server you will get the information that there is no schema in the database and that it needs to be created. Confirm this with Yes.
igel_load_balancing_netscaler_05

You should now see an activated datasource configuration. Repeat the same steps on the second UMS server. Of course, you don't need to create another database – just connect to the same database as with the first server.
igel_load_balancing_netscaler_06

Time to start with the actual load balancing configuration on the Citrix NetScaler. Open the management website and switch to Configuration => Traffic Management => Servers.
igel_load_balancing_netscaler_07

Select Add to create the IGEL UM servers. Enter the Name and either the IP Address or the Domain Name.
igel_load_balancing_netscaler_08

Repeat this for all UM-Servers (in my example I added two servers).
igel_load_balancing_netscaler_09

Now we need Services or a Service Group for all UMS servers. I personally prefer Service Groups, but if you normally use Services this is also possible.
igel_load_balancing_netscaler_10

After switching to Service Groups, select Add again to create the first UMS Service Group. In total, we need three Service Groups:
Port 30001: Thin Client connection port
Port 8443: Console connection port
Port 9080: Firmware updates

The first one we create is the Service Group for Port 30001. Enter a Name and select TCP as the Protocol. The other settings don’t need to be changed.
igel_load_balancing_netscaler_11

    

Now we need to add the Service Group Members. To do this, select No Service Group Member.
igel_load_balancing_netscaler_12

Mark the UMS Servers created in the Servers area and confirm the selection with Select.
igel_load_balancing_netscaler_13

Again, enter the Port number 30001 and finish with Create.
igel_load_balancing_netscaler_14

The Service Group now contains two Service Group Members.
igel_load_balancing_netscaler_15

As mentioned at the beginning, we need to forward the client IP to the UMS servers. Otherwise, every client would have the same IP – the NetScaler Subnet IP. Therefore, edit the Settings (not the Basic Settings!) and enable Use Client IP. Confirm the configuration with OK.
igel_load_balancing_netscaler_16

That’s it – the Service Group for Port 30001 is now configured.
igel_load_balancing_netscaler_17

Repeat the same steps for port 8443 – but do not enable Use Client IP. Otherwise, you will not be able to connect to the UMS servers via the load-balanced IP / name from inside the IP range of the UMS servers themselves.
igel_load_balancing_netscaler_18

Finally, you need to create a Service Group for Port 9080 – this time you can again forward the Client IP.
igel_load_balancing_netscaler_19

At the end, you should have three Service Groups.
igel_load_balancing_netscaler_20

Time to create the actual client connection points – the Virtual Servers (Traffic Management => Load Balancing => Virtual Servers).

igel_load_balancing_netscaler_21

Like before select Add to create a new Virtual Server. Again, we need three virtual servers for the Ports 30001, 8443 and 9080.

The first Virtual Server we create is for Port 30001. Enter a Name and choose TCP as the Protocol. Furthermore, enter a free IP Address in the separate Subnet of the UM-Servers. The Port is of course 30001.
igel_load_balancing_netscaler_22

After this, we need to bind the Services or the Service Group to this Virtual Server. If you created Services and not a Service Group, make sure you add the Services of all UMS servers. To add a created Service Group, click on No Load Balancing Virtual Server Service Group Binding.
igel_load_balancing_netscaler_23

Select the Service Group for Port 30001 and confirm the selection with Bind.
igel_load_balancing_netscaler_24

The Service Group is now bound to the Virtual Server. Press Continue to get to the next step.
igel_load_balancing_netscaler_25

When a client connects, we need to make sure that it keeps connecting to the same UMS server after the initial connection and does not flip between them. If a client has stopped the connection or a UMS server has failed, it is of course fine for the client to connect to the other UMS server. For this, we need to configure Persistence. As Persistence type we select Source IP, and the Time-Out should be changed to 5. The IPv4 Netmask is 255.255.255.255. Confirm the Persistence with OK.
igel_load_balancing_netscaler_26

Finish the Virtual Server configuration with Done.
igel_load_balancing_netscaler_27

Repeat the same steps for the other two ports – so that you have three Virtual Servers in the end. Of course, all of them need to use the same IP address.
igel_load_balancing_netscaler_28

To make it easier to connect to the load-balanced UMS servers, it is a good idea to create a DNS host entry, e.g. with the name igel and the IP address of the Virtual Servers. If you added a DHCP option or a DNS name for Thin Client auto-registration / connection, change it to the IP address of the Virtual Servers as well.
igel_load_balancing_netscaler_29
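
If your zone is hosted on a Windows DNS server, the host entry can also be created with PowerShell (run on the DNS server) – zone name and IP address are placeholders here:

# Create an A record "igel" pointing to the load-balanced Virtual Server IP
Add-DnsServerResourceRecordA -ZoneName 'lab.jhmeier.com' -Name 'igel' -IPv4Address '10.0.20.10'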

You can now start the IGEL Universal Management Suite and connect to the created Host-Name.
igel_load_balancing_netscaler_30

After a successful connection, you can see the used server name in the bottom left area and under the Toolbar.
igel_load_balancing_netscaler_31

We now need to point the Thin Clients to the new load-balanced UMS servers. You either need to modify an existing profile or create a new one. The necessary configuration can be found in the following area:
System => Remote Management => Universal Management Suite (right area).
Modify the existing entry and change it to the created host name. Save the profile and assign the configuration to your Thin Clients.
igel_load_balancing_netscaler_32

The last step is necessary to allow the Thin Clients to update / download a firmware even when one UMS server is not available. By default, a firmware always points to one UMS server and not to the load-balanced host name. Therefore, switch to the Firmware area and select a firmware. Here you can find the Host entry. Change it to the created host name and save the settings. Repeat this for all required firmware versions. If you download a new firmware, make sure you always modify the Host – otherwise the new firmware will only be available from one UMS server.
igel_load_balancing_netscaler_33

Of course, when you download or import a firmware using the UMS, it is only stored on one of the UMS servers. To make the firmware available on both UMS servers, you need to replicate the following folder (if you modified the UMS installation path this will differ):
C:\Program Files (x86)\IGEL\RemoteManager\rmguiserver\webapps\ums_filetransfer
A good way to do this is using DFS. Nevertheless, any replication technology is fine – just make sure (when changing the Host entry for a firmware) that the firmware files are available on both UMS servers.
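
If you don't have DFS replication in place, a simple scheduled one-way mirror from the UMS server you upload firmware to can be enough for a lab – a sketch with robocopy, where ums02 is a placeholder for the second server:

# Mirror the firmware/file transfer folder to the second UMS server (one-way!)
robocopy "C:\Program Files (x86)\IGEL\RemoteManager\rmguiserver\webapps\ums_filetransfer" `
         "\\ums02\c$\Program Files (x86)\IGEL\RemoteManager\rmguiserver\webapps\ums_filetransfer" `
         /MIR /R:2 /W:5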

That’s it – hope this was helpful for some of you.

Enable XenServer Live Migration for VMs with a NVIDIA vGPU

When you install NVIDIA GRID 6.0 on a XenServer, you need to manually enable the possibility to live-migrate a VM that is using a (v)GPU. Otherwise, you see the following message when you try to migrate such a VM:

The VGPU is not compatible with any PGPU in the destination

nvidia_enable_live_migration_01

The steps to enable the migration are described in the GRID vGPU User Guide. To enable live migration, connect to the console of the XenServer (e.g. using PuTTY) and log in as root. The next step is to edit (or create) the file nvidia.conf in the folder /etc/modprobe.d. Therefore, enter the following command:

vi /etc/modprobe.d/nvidia.conf

nvidia_enable_live_migration_02

Here you need to add the following line:

options nvidia NVreg_RegistryDwords="RMEnableVgpuMigration=1"

nvidia_enable_live_migration_03

Save the change with :wq and restart the host. Repeat this for all hosts in the pool. After the restart you can migrate a running VM with a (v)GPU between all hosts that have the changed setting. If you haven't configured the setting on a host or didn't reboot it, only the other hosts are available as a target server.

nvidia_enable_live_migration_04

New book: Citrix XenApp and XenDesktop 7.15 LTSR (German)

xa_xd_7_15_ltsr

My book about Citrix XenApp and XenDesktop 7.15 LTSR (German) is finally available in stores. It was a lot of work (and learning) but also fun to test some not-so-often-used features. You can find more details here.
