Unidesk: Layered VDI Management

VDI is one of the most intensive workloads in the datacenter today and by nature touches every major component of the enterprise technology stack: networking, servers, virtualization, storage, and load balancing. No stone is left unturned when it comes to enterprise VDI. Physical desktop management can also be an arduous task with large infrastructure requirements of its own. The sheer complexity of VDI drives a lot of interesting and feverish innovation in this space, but it also drives a general reluctance to adopt among those who fear the shift would be too burdensome for their existing teams and datacenters. The value proposition Unidesk 2.0 brings to the table is simplification of the virtual desktops themselves, simplified management of the brokers that support them, and comprehensive application management.

The Unidesk solution plugs seamlessly into a new or existing VDI environment and is comprised of the following key components:

  • Management virtual appliance
  • Master CachePoint
  • Secondary CachePoints
  • Installation Machine

 

Solution Architecture

At its core, Unidesk is a VDI management solution that does some very interesting things under the covers. Unidesk requires vSphere at the moment but can manage VMware View, Citrix XenDesktop, Dell Quest vWorkspace, or Microsoft RDS. You could even manage each type of environment from a single Unidesk management console if you had the need or proclivity. Unidesk is not a VDI broker in and of itself, so that piece of the puzzle is very much required in the overall architecture. The Unidesk solution works from the concept of layering, an increasingly hot topic as both Citrix and VMware add native layering technologies to their software stacks. I’ll touch on those later. Unidesk works by creating, maintaining, and compositing numerous layers to create VMs that can share common items like the base OS and IT applications, while providing the ability to persist user data, including user-installed applications, if desired. Each layer is stored and maintained as a discrete VMDK and can be assigned to any VM created within the environment. Application or OS layers can be patched independently and refreshed to a user VM. Because of Unidesk’s layering technology, customers needing persistent desktops can take advantage of capacity savings over traditional methods of persistence. A persistent desktop in Unidesk consumes, on average, a similar disk footprint to what a non-persistent desktop would typically consume.
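
To put the capacity claim in perspective, here is a rough back-of-envelope comparison, with all numbers hypothetical: 100 traditional persistent full clones versus 100 layered desktops sharing one OS layer and ten common application layers.

' Hypothetical sizing only: 25 GB per traditional full clone versus one
' shared 20 GB OS layer, ten shared 2 GB app layers, and 2 GB of unique
' personalization per layered desktop.
fullClones = 100 * 25
layered = 20 + (10 * 2) + (100 * 2)
WScript.Echo "Traditional persistent clones: " & fullClones & " GB"   ' 2500 GB
WScript.Echo "Layered persistent desktops:   " & layered & " GB"      ' 240 GB

The point is that the per-desktop cost of persistence in a layered model is only the unique personalization layer, not a full copy of the OS and applications.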

CachePoints (CP) are virtual appliances that are responsible for the heavy lifting in the layering process. Currently there are two distinct types of CachePoints: Master and Secondary. The Master CP is the first to be provisioned during the setup process and maintains the primary copy of all layers in the environment. Master CPs replicate the pertinent layers to Secondary CPs, which have the task of actually combining layers to build the individual VMs, a process called Compositing. Due to the role played by each CP type, the Secondary CPs will need to live on the Compute hosts with the VMs they create. Local or Shared Tier 1 solution models can be supported here, but the Secondary CPs will need to be able to access the “CachePoint and Layers” volume at a minimum.

The Management Appliance is another virtual machine that comes with the solution to manage the environment and its individual components. This appliance provides a web interface used to manage the CPs, layers, and images, as well as connections to the various VDI brokers you need to interface with. Using the Unidesk management console you can easily manage an entire VDI environment while almost completely ignoring vCenter and the individual broker management GUIs. There are no additional infrastructure requirements specific to Unidesk beyond what is required for the VDI broker solution itself.

Installation Machines are provided by Unidesk to capture application layers and make them available for assignment to any VM in the solution. This process is very simple and intuitive requiring only that a given application is installed within a regular VM. The management framework is then able to isolate the application and create it as an assignable layer (VMDK). Many of the problems traditionally experienced using other application virtualization methods are overcome here. OS and application layers can be updated independently and distributed to existing desktop VMs.

Here is an exploded and descriptive view of the overall solution architecture summarizing the points above:

Storage Architecture

The Unidesk solution is able to leverage three distinct storage tiers to house the key volumes: Boot Images, CachePoint and Layers, and Archive.

  • Boot Images – Contains very small images consisting of a kernel and pagefile used to boot a VM. These images are stored as VMDKs, like all other layers, and can easily be recreated if need be. This tier does not require high performance disk.
  • CachePoint and Layers – This tier stores all OS, application, and personalization layers. Of the three tiers this one sees the most IO, so if you have high performance disk available, use it here.
  • Archive – This tier is used for layer backup, including personalization. Repaired and restored layers can be pulled from the Archive and placed back into the CachePoint and Layers volume for redeployment if need be. This tier does not require high performance disk.


The Master CP stores layers in the following folder structure, each layer organized and stored as a VMDK.

Installation and Configuration

New in Unidesk 2.x is the ability to execute a completely scripted installation. You’ll need to decide ahead of time what IPs and names you want to use for the Unidesk management components, as these are defined during setup. This portion of the install is rather lengthy, so it’s best to have things squared away before you begin. Once the environment variables are defined, the setup script takes over and builds the environment according to your design.

Once setup has finished, the Management Appliance and Master CP will be ready, so you can log into the mgmt console to take the configuration further. Among the first key activities to complete are setting up an Active Directory junction point and connecting Unidesk to your VDI broker. Unidesk should already be talking to your vCenter server at this point.

Your broker mgmt server will need the Unidesk Integration Agent installed, which you should find in the bundle downloaded with the install. This agent listens on TCP 390 and connects the Unidesk management server to the broker. Once the agent is installed on the VMware View Connection Server or Citrix Desktop Delivery Controller, you can point the Unidesk management configuration at it. Once synchronized, all pool information will be visible from the Unidesk console.

A very neat feature of Unidesk is that you can build many AD junction points from different forests if necessary. These junction points will allow Unidesk to interact with AD and provide the ability to create machine accounts within the domains.

Desktop VM and Application Management

Once Unidesk can talk to your vSphere and VDI environments, you can get started building OS layers, which will serve as the gold images for the desktops you create. A killer feature of the Unidesk solution is that you only need a single gold image per OS type, even across numerous VDI brokers. Because the broker agents can be layered and deployed as needed, you can reuse a single image across disparate View and XenDesktop environments, for example. Setting up an OS layer simply points Unidesk at an existing gold image VM in vCenter and makes it consumable for subsequent provisioning.

Once successfully created, you will see your OS layers available and marked as deployable.

 

Before you can install and deploy applications, you will need to deploy a Unidesk Installation Machine which is done quite simply from the System page. You should create an Installation Machine for each type of desktop OS in your environment.

Once the Installation Machine is ready, creating layers is easy. From the Layers page, simply select “Create Layer,” fill in the details, choose the OS layer you’ll be using along with the Installation machine and any prerequisite layers.

 

To finish the process, you’ll need to log into the Installation Machine, perform the install, then tell the Unidesk management console when you’re finished and the layer will be deployable to any VM.

Desktops can now be created as either persistent or non-persistent. You can deploy to already existing pools or, if you need a new persistent pool created, Unidesk will take care of it. Choose the type of OS template to deploy (XP or Win7), select the connected broker to which you want to deploy the desktops, choose an existing pool or create a new one, and select the number of desktops to create.

Next select the CachePoint that will deploy the new desktops along with the network they need to connect to and the desktop type.

Select the OS layer that should be assigned to the new desktops.

Select the application layers you wish to assign to this desktop group. All your layers will be visible here.

Choose the virtual hardware, performance characteristics and backup frequency (Unidesk Archive) of the desktop group you are deploying.

Select an existing or create a new maintenance schedule that defines when layers can be updated within this desktop group.

Deploy the desktops.

Once the creation process is underway, the activity will be reflected under the Desktops page as well as in vCenter tasks. When completed all desktops will be visible and can be managed entirely from the Unidesk console.

Sample Architecture

Below are some possible designs that can be used to deploy Unidesk into a Local or Shared Tier 1 VDI solution model. For Local Tier 1, both the Compute and Management hosts will need access to shared storage, even though VDI sessions will be hosted locally on the Compute hosts. 1Gb PowerConnect or Force10 switches can be used in the Network layer for LAN and iSCSI. The Unidesk boot images should be stored locally on the Compute hosts along with the Secondary CachePoints that will host the sessions on that host. All of the typical VDI management components will still be hosted on the Mgmt layer hosts along with the additional Unidesk management components. Since the Mgmt hosts connect to and run their VMs from shared storage, all of the additional Unidesk volumes should be created on shared storage. Recoverability is achieved primarily in this model through use of the Unidesk Archive function. Any failed Compute host VDI session information can be recreated from the Archive on a surviving host.

Here is a view of the server network and storage architecture with some of the solution components broken out:

For Shared Tier 1 the layout is slightly different. The VDI sessions and “CachePoint and Layers” volumes must live together on Tier 1 storage while all other volumes can live on Tier 2. You could combine the two tiers for smaller deployments, perhaps, but your mileage will vary. Blades are also an option here, of course. All normal vSphere HA options apply here with the Unidesk Archive function bolstering the protection of the environment.

Unidesk vs. the Competition

Both Citrix and VMware have native solutions available for management, application virtualization, and persistence, so you will have to decide if Unidesk is worth the price of admission. On the View side, if you buy a Premier license, you get ThinApp for applications, Composer for non-persistent linked clones, and soon the technology from VMware’s recent Wanova acquisition. The native View persistence story isn’t great at the moment, but Wanova Mirage will change that when it is made available. Mirage will add a few layers to the mix, including OS, apps, and persistent data, but will not be as granular as the multi-layer Unidesk solution. The Wanova tech notwithstanding, you should be able to buy a cheaper, lower-level View license, as with Unidesk you will need neither ThinApp nor Composer. Unidesk’s application layering is superior to ThinApp, with little in the way of applications that cannot be layered, and can provide persistent or non-persistent desktops with almost the same footprint on disk. Add to that the Unidesk single management pane for both applications and desktops, and there is a compelling value to be considered.

On the Citrix side, if you buy an Enterprise license, you get XenApp for application virtualization, Provisioning Services (PVS), and, for persistence, Personal vDisk (PVD) from the recent RingCube acquisition. With XenDesktop you can leverage Machine Creation Services (MCS) or PVS for either persistent or non-persistent desktops. MCS is dead simple, while PVS is incredibly powerful but an extraordinary pain to set up and configure. XenApp builds on top of Microsoft’s RDS infrastructure and requires additional components of its own, such as SQL Server. PVD can be deployed with either catalog type, PVS or MCS, and adds a layer of persistence for user data and user-installed applications. While PVD provides only a single layer, that may be more than suitable for any number of customers. The overall Citrix solution is time tested and works well, although the underlying infrastructure requirements are numerous and expensive. XenApp offloads application execution from the XenDesktop sessions, which in turn drives greater overall host densities. Adding Unidesk to a Citrix stack again allows a customer to buy in at a lower licensing level, although Citrix is seemingly eroding the value of augmenting its stack by including more at lower license levels. For instance, PVD and PVS are now available at all licensing levels; the big upsell now is the inclusion of XenApp. Unidesk removes the need for MCS, PVS, PVD, and XenApp, so you will have to ask yourself if the Unidesk approach is preferable to the Citrix method. The net result will certainly be less overall infrastructure, but net licensing costs may very well be a wash.

Resource Sharing in Windows Remote Desktop Services

Resource sharing is at the crux of every virtual environment and is ultimately what makes a shared environment feasible at all. A very common pain point in Remote Desktop Services/Terminal Services (RDS) environments is the potential for a single user to negatively impact every other user on that RDS host. Server 2008 R2 includes a feature called Dynamic Fair Share Scheduling (DFSS) to balance CPU usage between users. It is a proactive feature, enabled by default, that levels the CPU playing field at all times based on how many users are logged in and the CPU available. In Server 2012 the fair share features have been expanded to include network and disk as well.
From the Server 2012 RC Whitepaper, the 2012 fair share experience:

  • Network Fair Share. Dynamically distributes available bandwidth across sessions based on the number of active sessions to enable equal bandwidth usage.
  • Disk Fair Share. Prevents sessions from excessive disk usage by equal distribution of disk I/O among sessions.
  • CPU Fair Share. Dynamically distributes processor time across sessions based on the number of active sessions and load on these sessions. This was introduced in Windows Server 2008 R2 and has been improved.

Cool! New features are great. Fair sharing has traditionally been an on-or-off feature, and that is the extent of the configuration. From what I can see in the 2012 RC, that doesn’t appear to have changed. Of course, if you are running Citrix XenApp, you would want to disable all Windows fair share features and let XA take care of those functions. Fair sharing can be controlled via Group Policy or the registry, but only the CPU piece is visible in the RC GPOs.

 

It is also stored all by itself in the registry under \Quota System.

I do see the additional fair share elements in the registry, however, so the missing GPO elements should appear in the RTM version of the product. Obviously, 1 = on, 0 = off. It looks like some registry organization work still needs to happen.
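
For the CPU piece, which is documented for 2008 R2, the toggle can be read or flipped from a quick script. EnableCpuQuota is the known value name under Quota System; treat anything beyond it as an assumption until RTM:

' Read and flip the DFSS CPU fair-share flag (1 = on, 0 = off).
Set objShell = CreateObject("WScript.Shell")
strValue = "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Quota System\EnableCpuQuota"
WScript.Echo "EnableCpuQuota is currently: " & objShell.RegRead(strValue)
objShell.RegWrite strValue, 0, "REG_DWORD" ' disable, e.g. when XenApp manages CPU shares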

In an environment where 100% of the users run 100% of the same applications with 100% predictable usage patterns, this model works fine. The trouble begins when you need to support an environment that requires some users or applications to be prioritized over others. There is also nothing here to deal with application memory usage. You could make the argument that this is the point at which these special users should be carved off of the shared session host and given VDI sessions. Luckily, WSRM is available to provide tighter controls in a shared session environment.
Windows System Resource Manager (WSRM) is a tool that has been around since Server 2003. Its purpose is to granularly define resource limits as they pertain to specific users, sessions, applications, or IIS app pools. WSRM’s use isn’t limited to RD Session Hosts; it just happens to be very useful in a shared environment. It should be noted that WSRM is a reactive tool, so a certain threshold has to be crossed before the limits it imposes kick in. In the case of CPU, the host has to reach 70% utilization first; only then do any defined WSRM CPU limitation policies begin to ratchet things down. Best practices call for the use of targeted CPU limits to restrict resources, not memory limits. Use memory limits only if an application is exhibiting a memory consumption problem.
Here is a quick example of an allocation policy limiting IE to 25% CPU. This policy would need to be set as the managing policy for it to take effect after the host’s total CPU reached or exceeded 70%.


Another simpler option could be to use weighted remote sessions, categorizing users into basic, standard, and premium workloads to appropriately prioritize resources.

In the Server 2012 RC Add Roles and Features wizard, it is clearly called out that WSRM is now deprecated. Server 2012 will have the tool, but the next server release in four years will not. Hopefully Microsoft has something up its sleeve to replace this tool or to bolster the configurability of the fair share features.

References:
2012 list of deprecated features: http://technet.microsoft.com/en-us/library/hh831568.aspx
WSRM for Server 2012: http://technet.microsoft.com/library/hh997019
Server 2012 RC whitepaper: http://download.microsoft.com/download/5/D/B/5DB1C7BF-6286-4431-A244-438D4605DB1D/WS%202012%20White%20Paper_Hyper-V.pdf

Delegating permissions to BitLocker recovery keys

BitLocker is a useful hard drive encryption tool supported by the Enterprise and Ultimate versions of Windows 7. Recovery is handled through the use of 48-digit keys that are generated for each host running BitLocker. Best practice, and common sense, is to configure your environment so that the recovery keys are stored in Active Directory. There are a number of scenarios in which these keys are required to regain access to the OS. By default, only members of the Domain Admins group have access to these keys, which is very inconvenient if you have a delegated support staff that are not domain admins.

You can grant your support group full control of the AD container housing computers with BitLocker enabled and they will still not be able to see the recovery keys. Delegation of this access is done via a script. Just copy the text below, save it to a file with a .vbs extension, and run cscript whatever.vbs from a DC or workstation with a Domain Admin logged in. The only thing you need to change in this script is the second line: enter whatever your support AD group is called there. All of this, of course, applies only to Server 2008/R2 and Windows 7.

'To refer to other groups, change the group name (ex: change to "DOMAIN\Help Desk Staff")
strGroupName = "DOMAIN\Help Desk Staff"

' --------------------------------------------------------------------------------
' Access Control Entry (ACE) constants
' --------------------------------------------------------------------------------

'- From the ADS_ACETYPE_ENUM enumeration
Const ADS_ACETYPE_ACCESS_ALLOWED_OBJECT      = &H5  'Allows an object to do something

'- From the ADS_ACEFLAG_ENUM enumeration
Const ADS_ACEFLAG_INHERIT_ACE                = &H2  'ACE applies to target and inherited child objects
Const ADS_ACEFLAG_INHERIT_ONLY_ACE           = &H8  'ACE does NOT apply to target (parent) object

'- From the ADS_RIGHTS_ENUM enumeration
Const ADS_RIGHT_DS_CONTROL_ACCESS            = &H100 'The right to view confidential attributes
Const ADS_RIGHT_DS_READ_PROP                 = &H10  'The right to read attribute values

'- From the ADS_FLAGTYPE_ENUM enumeration
Const ADS_FLAG_OBJECT_TYPE_PRESENT           = &H1  'Target object type is present in the ACE
Const ADS_FLAG_INHERITED_OBJECT_TYPE_PRESENT = &H2  'Target inherited object type is present in the ACE

' --------------------------------------------------------------------------------
' BitLocker schema object GUIDs
' --------------------------------------------------------------------------------

'- ms-FVE-RecoveryInformation object:
'  includes the BitLocker recovery password and key package attributes
SCHEMA_GUID_MS_FVE_RECOVERYINFORMATION = "{EA715D30-8F53-40D0-BD1E-6109186D782C}"

'- ms-FVE-RecoveryPassword attribute: 48-digit numerical password
SCHEMA_GUID_MS_FVE_RECOVERYPASSWORD = "{43061AC1-C8AD-4CCC-B785-2BFAC20FC60A}"

'- ms-FVE-KeyPackage attribute: binary package for repairing damages
SCHEMA_GUID_MS_FVE_KEYPACKAGE = "{1FD55EA8-88A7-47DC-8129-0DAA97186A54}"

'- Computer object
SCHEMA_GUID_COMPUTER = "{BF967A86-0DE6-11D0-A285-00AA003049E2}"

'Reference: "Platform SDK: Active Directory Schema"

' --------------------------------------------------------------------------------
' Set up the ACE to allow reading of all BitLocker recovery information properties
' --------------------------------------------------------------------------------

Set objAce1 = CreateObject("AccessControlEntry")

objAce1.AceFlags = ADS_ACEFLAG_INHERIT_ACE + ADS_ACEFLAG_INHERIT_ONLY_ACE
objAce1.AceType = ADS_ACETYPE_ACCESS_ALLOWED_OBJECT
objAce1.Flags = ADS_FLAG_INHERITED_OBJECT_TYPE_PRESENT

objAce1.Trustee = strGroupName
objAce1.AccessMask = ADS_RIGHT_DS_CONTROL_ACCESS + ADS_RIGHT_DS_READ_PROP
objAce1.InheritedObjectType = SCHEMA_GUID_MS_FVE_RECOVERYINFORMATION

' Note: ObjectType is left blank above to allow reading of all properties

' --------------------------------------------------------------------------------
' Connect to the Discretionary ACL (DACL) for the domain object
' --------------------------------------------------------------------------------

Set objRootLDAP = GetObject("LDAP://rootDSE")
strPathToDomain = "LDAP://" & objRootLDAP.Get("defaultNamingContext") ' e.g. dc=fabrikam,dc=com

Set objDomain = GetObject(strPathToDomain)

WScript.Echo "Accessing object: " + objDomain.Get("distinguishedName")

Set objDescriptor = objDomain.Get("ntSecurityDescriptor")
Set objDacl = objDescriptor.DiscretionaryAcl

' --------------------------------------------------------------------------------
' Add the ACE to the Discretionary ACL (DACL) and set the DACL
' --------------------------------------------------------------------------------

objDacl.AddAce objAce1

objDescriptor.DiscretionaryAcl = objDacl
objDomain.Put "ntSecurityDescriptor", Array(objDescriptor)
objDomain.SetInfo

WScript.Echo "SUCCESS!"

Once the script has run successfully, the BitLocker Recovery tab will now be accessible in ADUC and ADAC.
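
If you want to verify the delegation from a script while logged in as one of the delegated users, you can bind to a BitLocker-enabled computer object and enumerate its child recovery information objects. A minimal sketch; the computer DN below is a placeholder:

' Read the recovery password(s) stored under one computer object.
strComputerDN = "CN=PC01,OU=Workstations,DC=domain,DC=com" ' placeholder DN
Set objComputer = GetObject("LDAP://" & strComputerDN)
objComputer.Filter = Array("msFVE-RecoveryInformation")
For Each objRecovery In objComputer
    WScript.Echo objRecovery.Get("msFVE-RecoveryPassword")
Next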

 


Split-Scope DHCP in Server 2008 R2

Splitting DHCP scopes is a best practice that has been around for a very long time. The purpose is to spread a given IP scope across multiple DHCP servers in multiple locations for redundancy and load balancing. Let’s say your company has two locations, A and B; location A has two DHCP servers, location B has one. The IP schemes are different at each location, but you want to ensure that DHCP services for all scopes remain available at both should any of the DHCP servers ever have a problem. The best way to achieve this is to spread the scopes across all three servers. This is done by defining the scope on each server, then configuring exclusions for the IP ranges each server should not serve, to avoid overlap. While this was all possible in previous versions of Windows Server, it had to be configured manually.

Server 2008 R2 makes this easier by adding a split-scope wizard to the DHCP MMC. First configure a DHCP scope as you normally would, with the first and last IP addresses that you want to be dynamically assigned. Leave out the range of the IP space you intend to use for static assignments, as we are only concerned with dynamic assignments. Once the scope has been defined and created, it can be split by invoking the split-scope wizard in the Advanced submenu of the scope.


Once you have selected the authorized server that you would like to add to the scope, the wizard will allow you to define the percentage of the address space that you want to give to the new server. Move the slider left and right to adjust the pool given to the new server you are configuring. If you plan to split the scope across three or more servers, do some quick math to determine how many addresses each should serve if you want the split to be even (see the sketch below). In the example below I am keeping 33% of the address space on the original server and giving 67% to the new server because I plan to split the scope again. The bottom portion of the window displays the exclusions that will be created on both the original host and the new server.
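
If you want to sanity-check the math before touching the wizard, a few lines of script will do it. The pool below, 10.0.1.10 through 10.0.1.250, is hypothetical:

' Even three-way split of a 10.0.1.10 - 10.0.1.250 dynamic pool.
' Each server serves its chunk and excludes the other two.
startIP = 10 : endIP = 250
total = endIP - startIP + 1   ' 241 assignable addresses
chunk = Int(total / 3)        ' 80 per server; the last absorbs the remainder
For i = 0 To 2
    first = startIP + (i * chunk)
    last = first + chunk - 1
    If i = 2 Then last = endIP
    WScript.Echo "Server " & (i + 1) & " serves 10.0.1." & first & " - 10.0.1." & last
Next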


Next you can configure a server offer delay to shape the order in which the servers respond. I intend to make my third server primary for this scope, so I will configure delays for both the host and the added DHCP servers.


Before executing the split you are presented with a summary screen to ensure that your configuration is correct. Once you execute, the wizard will create an exclusion on the host server as well as create the new scope with exclusion on the added server. A time saver for sure!


The wizard runs through each required step and displays the result of each.


On the newly added server the scope is not automatically activated, so if you made a mistake you can easily start over by deleting the new scope on the added server and the exclusion on the host server. Repeat this process on the second server to split the scope again, ensuring an even split across all three servers. Once your configuration is solid, all you need to do is activate the scope on each server you added it to.


Here is how the scope and exclusions look on each server after the split (original host, 2nd added server, 3rd added server).

As you can see best practice is automatically applied by defining the entire DHCP range on each server, then limiting the assignable range on each through an exclusion. Future adjustments can be made easily by changing the exclusion.

As a final step, make sure you have IP helpers configured on the proper VLAN interfaces of your switch so it knows where to send your clients’ DHCPDISCOVER messages.


Hey where did my drive space go??!?

This one is easy to overlook and can have you scratching your head weeks or months later. Recently, disk space has been mysteriously disappearing on a few select servers of mine. The free/used ratio was way out of whack, with >75% of the disk being reported as used by Windows on certain drives. Auditing the drive reveals no culprit: all visible files, including hidden files, consume nothing near what is being reported. System Volume Information reports 0 bytes, and access to this folder is denied. If you use Windows Server Backup (WSB) in Server 2008 R2, particularly with VSS backups (or VSS at all, for that matter), this can happen to you too.

WSB uses VSS for backups and you have two choices in your backup sets: VSS full or copy backups.


In either case the backup job will create a shadow copy task on the volume pertaining to your backup. By default the shadow copy task is set to No Limit, which will eventually consume your drive completely. The tricky part is that the space consumed by VSS is not visible in Windows Explorer, so if you don’t set a hard limit on the shadow copy it can grow out of control.


Make sure to limit these and you should be OK. Changing No Limit to Use Limit will immediately delete older shadow copies and free space on your server!


VSS settings are accessed by opening the properties of any drive in your server and selecting the Shadow Copies tab.
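
If you would rather audit this from a script than click through every drive on every server, the Win32_ShadowStorage WMI class exposes the same associations. A minimal sketch (note that the Volume property comes back as a WMI object path rather than a drive letter):

' List shadow copy storage associations with used space and limits.
Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
For Each objSS In objWMI.ExecQuery("SELECT * FROM Win32_ShadowStorage")
    WScript.Echo "Volume:  " & objSS.Volume
    WScript.Echo "  Used:  " & Round(objSS.UsedSpace / (1024^3), 1) & " GB"
    WScript.Echo "  Limit: " & Round(objSS.MaxSpace / (1024^3), 1) & " GB"
Next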

Application elevation for non-administrative users

Despite whatever you may feel about UAC on your home PC, it is a great tool for desktops in the enterprise. “Run As” served its purpose well in the XP days almost a decade ago (has it really been that long?). As much as I like UAC in my enterprise for easy elevation of administrative tasks and the accompanying profile virtualization, it’s not without its shortcomings. Namely, it lacks the ability to selectively elevate specific applications that, for whatever reason, require administrative privileges, without the user knowing an administrator’s password. The solution used to be to grant users administrative privileges outright, or to try to figure out the required permissions for every file write, registry call, or function the application would need to perform (subinACL). There has to be a better way. The principle of least privilege is a best practice that should be enforced in all environments, as it serves to protect the organization as a whole as well as the individual user.

The problem:

I have several legacy applications with custom device drivers, built-in updaters, or program dependencies requiring change or write access to files that live in protected system directories. I don’t want the user to have to call for support every time they need to run the program and I don’t want to grant them administrative access.

While researching solutions we came across a product called PowerBroker:Desktops by BeyondTrust. The product was recently rebranded; it used to exist as Privilege Manager. As advertised, this solution was designed to address my problem exactly. Not only that, it supports the ability to elevate MSIs and ActiveX controls. Windows 7 and Server 2008 R2 are fully supported. The solution is comprised of two parts:

  1. The Manager, which contains the GPO editor extensions and RSOP snap-ins.
  2. The Client, a kernel-mode driver that lives on the user’s PC, including a Group Policy client-side extension and WMI namespace (deployable via MSI).

The driver on the client PC watches processes launched locally and checks them against any configured GPO rules. If a rule exists for a launched process, the driver elevates the security token for that process based on the settings in the rule. The added value of this method is that when applications are elevated, the file system is not virtualized as it is with a default “run as admin” operation. Additional privileges are added to the user’s security token for that specific process when it’s launched. This is like temporarily adding the user to the Administrators group just for that process, for the duration it runs (think sudo).

The Manager piece is installed on your domain controllers and is very straightforward.

Once installed the new features are immediately available in GPMC. Create a new or edit an existing GPO. You will see a new settings container under both the computer and user sections of the policy called Computer Security and User Security, respectively.


Before you can create new policy items you will have to configure licensing. Along with the new settings containers, you should see a new button between View and Help called Privilege Manager. Open the Licensing tool and you will see three tabs: Local, GPO, and Request. You can generate a new request from the tool or simply import and activate a license you have received. To enable the product you first have to import the license you have purchased, then deploy it to the GPO in which you have enabled Privilege Manager features.

Deployment is done via the GPO License tab.

Once the license is active you can begin setting up policies. There are a number of options at your disposal which at first glance look a lot like a software restriction policy.

Once the program information is supplied you can assign permissions, privileges, and integrity settings to it. You can get very granular at each level if you need to.


Under the last tab, Common, you can apply filters to your policy much like a GPO Preference item.

Testing

To test, I set up a path rule for cmd.exe that elevates a non-administrative user to the builtin\administrators group for this process alone. The PowerBroker client can be deployed via GPO or another systems management solution to the specific PCs that need it. A single service, called BeyondTrust Reporting Service, will run on the client PCs. To prove that this solution works I will use a tool deployed with the client called Policy Monitor (polmon), as well as execute a privileged command from the prompt.

Logged into a PC with my test user John Doe, with PolMon running, I launch cmd.exe. Polmon displays that cmd.exe has been invoked and that there is a rule match for this application. The rule is parsed and John’s security token is elevated for the cmd process only, as configured.

With the elevated command prompt running, John executes GPResult /R, which in normal user mode only displays information about the user account; computer-related information is normally denied without command prompt elevation. As you can see, John was able to display RSOP results for both the user and computer accounts. Better yet, the file system is not virtualized, as you can see from the working directory. The user sees no UAC or other prompts; the application just works.

A quick note on licensing. PowerBroker:Desktops is licensed per enabled user that has the client installed. The license is installed at an OU level of your choosing and enabled per GPO. According to the documentation, each active user in an OU will be counted as a PowerBroker user, so as far as the software is concerned any user in an OU that it can see is considered an active user. BeyondTrust has a few ways to deal with this, both extremely fair. You can use an enterprise license, enabling access for all users, which requires an annual true-up where you prove the actual number of users using the product. The other option is buying x number of licenses, and they will provide some additional license padding for growth. User licenses will run in the $30-40 range each, which, depending on the size of your environment, is very reasonable.

But here’s the rub: BeyondTrust will only issue you a temporary 30-day license until their invoice is paid. Yep, even if you buy through the channel! So if your reseller (their partner) doesn’t pay their invoice, guess what? No permanent license for you. If your reseller (their partner) has net-60 terms with them, guess what: no permanent license for you. This is an extremely paranoid way to license a product, something I have honestly never seen in over a decade in this business! I am over 60 days in now, on my second temp license, and am still waiting for my permanent license. The product definitely works, but their licensing policy is asinine. Buyer beware!

PowerBroker:Desktops is an extremely clean solution that works exactly as advertised. For those of us that have to deal with badly written or legacy applications requiring administrative access, this solution is a winner. I have ~20 users that will need this functionality so at the quoted price point this solution is a great deal with the amount of time and frustration it will save.

Connecting to administrative shares remotely in a workgroup

Here’s a useful quick hit. By default in Windows Vista and 7 you cannot connect to administrative shares (C$, admin$) on other PCs in a workgroup. This is the case even after enabling file and print sharing, opening the firewall, etc. Yes, you could just create a differently named share on C:\, but I want to connect to the hidden root without creating additional shares.

To do this open regedit and navigate to:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System

Create a new 32-bit DWORD value called LocalAccountTokenFilterPolicy and set its value to 1.

Enabling this registry entry builds an elevated security token that allows for remote administration and access to the administrative shares.
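
If you have more than a couple of workgroup machines to touch, the same change is easily scripted. A minimal sketch to run elevated on each target PC:

' Create LocalAccountTokenFilterPolicy = 1 to allow full-token remote admin.
Set objShell = CreateObject("WScript.Shell")
strValue = "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LocalAccountTokenFilterPolicy"
objShell.RegWrite strValue, 1, "REG_DWORD"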

Reference:

http://support.microsoft.com/kb/947232

How to control DNS resolution for an external domain

I recently had a situation come up where I needed to change the traffic flow on my LAN for Outlook Anywhere clients that were going out to the internet to connect to our email provider (outlookanywhere.domain.com). Our provider is also accessible internally, via a disparate and complicated network, so the internet method was preferred for its simplicity. The provider’s public-facing Outlook Anywhere servers were having problems and denying my client connections, so I needed to force the connections internally.

Caveats:

  • I still want my clients to be able to access Outlook Anywhere outside the office so I can’t disable it or change the address it connects to
  • HOSTS files are unmanageable
  • I do not own the namespace of the servers that Outlook Anywhere connects to
  • I will have to statically route to each destination in the provider’s namespace (from my core network) as our networks are connected but not well routed

To pull this off there are a few available options:

  • Leverage conditional forwarders to the affected namespace
  • Create a new primary DNS zone for this namespace, add the host record I need to redirect
  • Create a new primary DNS zone for each FQDN that I want to redirect

The first option is the simplest: I could just route all of the requests for this domain directly to its internal DNS servers. The problem is that I would then have to statically route to every possible server/host in that network that users might access. There are too many.

The second option would also work, but then I’d need to create A records for every other host in that namespace or clients would be unable to resolve them. The routing problem exists here too, so this is a bad option for me.

The third option is the money ticket. Using this method I can simply create a new forward lookup zone for outlookanywhere.domain.com and, in that zone, create a nameless A record with the internal IP of that server. Easy. I still have to statically route to this one server, but all other public-facing resolutions will continue to work without issue. This approach will work for any external namespace whenever you need to redirect your internal clients somewhere else.
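
The zone and record can also be created from the command line with dnscmd rather than the DNS console. A sketch; the zone name and internal IP below are placeholders:

' Create the pinpoint zone and its nameless (same-as-zone) A record.
Set objShell = CreateObject("WScript.Shell")
objShell.Run "dnscmd /zoneadd outlookanywhere.domain.com /dsprimary", 1, True
objShell.Run "dnscmd /recordadd outlookanywhere.domain.com @ A 10.1.2.3", 1, True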

Review: Symantec Endpoint Protection 11

According to most industry comparative sources, two of the top 10 enterprise antivirus products are Endpoint Protection by Symantec (SEP) and NOD32 by Eset. These products often occupy the top two positions, again depending on your source. Av-comparatives.org gave its product of the year award to Symantec for 2009, and NOD32 has earned back-to-back VB100 awards since 2002. Personally, I have been well protected using SEP, formerly SAVCE (Symantec Antivirus Corporate Edition), for over a decade, but I am always open to better solutions if they exist. NOD32 is well known on the internet among PC enthusiasts as a purportedly bullet-proof AV scanner, so the enterprise solution must be equally impressive, right? I decided to find out. Having recently gone through a new enterprise antivirus rollout, I thoroughly vetted NOD32 v4 and SEP 11. I’ll share my review of NOD32 for the enterprise in another post.

First up, the incumbent: SEP, which has seen many changes since the days I first started using SAVCE, back when client management was NetBIOS dependent. AD integration is now seamless, Server 2008 R2 and Windows 7 are fully supported, and the legacy requirement of WINS is long gone. Installation of the management server piece is still as easy as it was years ago, all controlled from a single media source where the client bits also live. Setup allows you to launch the server or client installs from the installation media. Simple.


The SEP manager component is web-based now with a Java front end so you will need to install the JRE on your server. The manager runs happily in IIS which will be configured for you during setup, assuming you have the role and requisite services installed.


By default the server is setup to use tcp/8443 for the server port and tcp/9090 for the web console port. 8443 is what is used when logging into the manager console itself while 9090 hosts links and the download for the Manager console.


Once logged in, you are immediately presented with a well laid out dashboard that includes risk detections, client definition status, as well as the current Symantec ThreatCon level and links to pertinent info.


Let’s start at the bottom with the Admin tab and work up. Here you choose which components to work with: assigning admin rights, managing domains and servers, and manipulating installation packages. In the domains section there is, by default, only a single domain defined, called Default. Any deployment packages you created during the server install will assign clients here. You can manage multiple domains if you have a disparate environment, but if not I recommend you disable the Default domain and add your own right away. To switch between domains you need to select the domain at the top and click the Administer Domain button below. Now only the clients and policies assigned to the active domain will be displayed under those tabs. If you need to move clients between management domains at some point, you can either uninstall/reinstall the client pointed at the correct domain, or run the sylink drop tool, which can move an existing client from one domain to another. If you use AD integration your clients will show up only in their real OUs on the Clients tab.


Also within the Admin tab you can set the console administrators, control SEP server options like mail and directory servers, and create client install packages. Install packages consist of three main parts:

  • The base client package itself (x86/x64)
  • Install settings: silent or interactive installation, upgrade settings
  • Install feature set: which components of the client to actually install


Once the install settings and feature sets are defined, custom clients can be pushed from the console or migration wizard, or exported from the install packages screen.

On the Clients tab you will see any non-AD SEP folders you created as well as the currently active domain under My Company. The functionality here is logical but not as useful as it could be. There is no consolidated client view; you have to drill down into each OU to see your clients, or run a “search clients” operation to pull the entire list, broken out into several pages. Green bubbles mean that a client is online and communicating with the server. Important to note: if a client is a member of the actively managed domain, it cannot live outside this structure in a non-AD SEP group.


The policies view on the Clients tab shows everything active for a given container. Here you can see that all policies are inherited, and which are currently applied to this OU.


Important to note here is the “Communications Settings” button, whose contents are inherited from the parent My Company container. At the parent level you can change the method in which clients communicate with the server. By default, Push mode is used, which keeps an open channel between clients and the management server so that policy changes can be quickly propagated. Running netstat -an on the management server will reveal an open TCP session to every connected client.


The Policies tab provides access to control policies for all installed components of your client. Multiple policies can be created and assigned at different levels, either to local SEP groups or to OUs in your AD. LiveUpdate is set to run on the management server every 4 hours by default, but make sure to assign an LU policy to your clients.


All fields in each component of the client can be clearly and easily set in a policy, along with line-item lockout so the item can’t be changed by the end user. This is extremely straightforward, as it should be.


Centralized exceptions can be set either through administrative policy or as user-defined. We obviously don’t want users excluding potentially unsafe items, so you can lock them out from this. In previous builds of SEP you could open the AV client on a user’s PC and see the admin-defined exceptions, but this has since been removed, by customer request according to Symantec. You have to look at the policy serial numbers, as reported by your clients, and trust that these exceptions are making it in, as all that is visible from the client is the user-defined exceptions.


Reporting is robust, beautiful, and can be run ad hoc or on a scheduled basis. Detailed break-outs of risks, compliance, and status (among others) are all available.



Environment monitoring is further detailed in the Monitors tab which provides summary pie charts, access to logs, the status of any commands issued in the console,  and notification options. Every condition you would expect to want to be notified about is available and configurable here.


Add-on security components like firewalls and heuristic scanning usually result in problems so I don’t install anything but anti-virus and malware protection in my environments. There are posts all over the internet of people looking for help on how to remove these products and clean up from their aftermath.

All things considered, I give SEP 11 a solid 9 for the enterprise. It works exactly how I expect my corporate AV solution to work. Despite a few configuration items being somewhat hard to find and buried in the GUI, the functionality is fantastic in a true “set it and forget it” type of system. The architecture features classic web and data tiers for management and a full-featured client capable of much more than I will use it for. Client management, reporting, and alerting are robust, which are absolute requirements. It would be nice if there were a consolidated client view that displayed all clients and their status regardless of which OU they live in; currently you have to either drill into each OU or run a search. Either way is functional but undesirable. It would also be nice to be able to optionally view administratively defined centralized exceptions on the clients themselves.

Design and Review: Citrix XenApp 5 on Server 2008

Citrix currently has four flavors of XenApp (XA) being actively sold on the open market: XenApp Fundamentals for Server 2008, XenApp 5 for Server 2003, XenApp 5 for Server 2008, and XenApp 6 for Server 2008 R2. For the most part these all look and feel the same from an end-user experience perspective. The value of XenApp in my environment is twofold: managing legacy “problem” applications centrally and providing a secure remote access alternative to traditional VPN for my users. While it’s tempting to jump straight to XA6, my first objective prevents this, as many of my problem business applications won’t run on Server 2008 R2 (the Windows 7 code base) or on a 64-bit OS.

Another tool in the Citrix arsenal is application streaming, which is the new, and only, version of application isolation. Streaming is available in the Advanced, Enterprise, and Platinum versions of XA, and there is a special installation method to enable it on Advanced. Streaming is similar to VMware’s ThinApp product in that apps are built (profiled) separately on a workstation but published through Citrix. Another advantage is that you can stream directly to the client, which will use the client’s hardware resources to run the app, taking the load off of the server. Some published limitations exist for streamed apps, however, including anything requiring special drivers and certain .NET apps.

Planning and design

Unlike XA Fundamentals, the full version of the product contains several roles, which can all be distributed depending on your environment. The basic server functions in an XA farm are infrastructure servers and application publishing servers. Infrastructure servers handle the functions not related to directly presenting applications to end users. These roles include licensing, data store, zone data collector, web interface, and XML service broker; they are well documented and all relate to farm communication and user application presentation. Best practice is to have at least one infrastructure server that will not publish apps to your users, and then at least one publishing server. This design model scales easily depending on the size of your organization and performance requirements.


My deployment will consist of five servers, all Server 2008 (R1) x86, all VMs, running on a highly available ESX4 cluster: 1 x infrastructure server, 3 x publishing servers, and 1 x Citrix Secure Gateway (CSG) server. My infrastructure server will host all core roles, including the Web Interface (WI) that publishes the PNAgent config.xml. Given the size of my organization I have no need to house the data store in a full-blown SQL Server, so I will use the local SQL Express option. Even smaller organizations can use an Access database.

My 3 publishing servers will be configured in dual-mode to host both published and streamed apps. 2 of the servers will host identical applications so connections can be load-balanced while the third will be used to host a business application that can only be used with Office 2003. I plan to stream these 2 apps together but I also want to be able to publish these apps externally via a web interface.

My CSG server will run the gateway and the WI that external users connect to, configured in a single-hop DMZ design. Dual-hop would separate the CSG from the WI on different servers in different DMZs, but I don’t have a need for that now. The firewall will need to be configured to allow the CSG server to talk to each of the publishers via TCP/1494 (ICA), TCP/2598 (session reliability), and TCP/80 for the XML service (optionally configured to use HTTPS). The CSG will also need to be able to talk to the infrastructure server on TCP/80 for XML/STA (Secure Ticket Authority) services. There is no need to open anything further to the infrastructure server.


Here is the logical architecture for my deployment:


Infrastructure Server Installation

There are plenty of walk-throughs that take you step by step through the install process; check out http://www.dabcc.com for some good ones. I want to discuss some of the less-documented gotchas that I ran across during my build-out. One of my big complaints with XA5 is the handling of prerequisites. While this is much improved in XA6, XA5 is pretty painful in this regard depending on which components you install. There are three main roles that will be needed on the infrastructure server: Terminal Services, Application Server, and Web Server. My TS licensing is hosted on another server, so I only need to install the actual TS component. Please note that EVERY XA server you turn up, including the infrastructure server, requires Terminal Services to be installed. Even though your infrastructure server will not be publishing apps, it will be fully capable of doing so. XA cannot be installed without the TS role installed first!

Each component of the XA setup wizard will prompt you for the role services it needs to complete, but you will have to restart the wizard each time if you are missing anything. To save you the trouble, ensure that the following IIS role services are installed before you launch setup (make sure to also enable Windows Authentication under security, not shown):


Optionally add the COM+ Network Access role service to the Application Server role if you want to be able to remotely manage your server from the Access Management Console (AMC):


You will also need the latest x86 JRE installed for the licensing server component. Important to note is that you will need the x86 JRE even on an x64 server!


If you plan to use SQL Express for the data store, go ahead and get it ready now as well, because the setup wizard will not do it for you. Run SetupSqlExpressForCPS.cmd in DVDRoot\Support\SqlExpress_2005_SP2. It will install SQL Express 2005 SP2 and create a database called CITRIX_METAFRAME, which the XA setup will look for if you choose the SQL Express installation option. Citrix strongly recommends that you do not install full SQL Server on your XA server, only Access or SQL Express.

With those pieces installed you should be good to kick off the XA installer. XA5 doesn’t really break out individual roles like “XML broker” or “zone data collector”; much of it is implied. For example, the first server in a farm automatically becomes the XML broker, STA, and zone data collector. All servers will run the XML Service, but only the XML broker will be running IIS, which can share the same port. Any of the servers are capable of becoming the zone data collector, governed by election preferences set in the Advanced Configuration tool. If you are installing Advanced Edition and plan to use Application Streaming, you must run the Platinum Edition installer! The server edition will be changed later in the AMC.


Select the Application Virtualization option next, which will launch the installer for all common components. On your infrastructure server enable all options; for your publishers all you need is the Access Management Console and XenApp:


Here is why you run the Platinum installer with the intention of ultimately running Advanced Edition: the Advanced installer does not have this option available:


Each component installs separately and can be upgraded individually. This provides you with the base install. There are two additional categories of installables after the base install: hotfixes and feature packs (FP). Hotfixes are provided per OS platform and are installed sequentially, each requiring a reboot. The feature pack, currently FP3 for XA5, includes upgrades to individual server components, such as the AMC, license server, plugins, Secure Gateway, and WI. Install your required hotfixes, then upgrade the individual components included in the FP; you may not need everything in the pack. For Server 2008 (R1) x86 I have the following hotfixes installed on each server in my farm. There will eventually be a hotfix rollup that bundles these, but not as of this writing. I installed these in order:

XAE500W2K8005.msp
XAE500W2K8017.msp
XAE500W2K8018.msp
XAE500W2K8030.msp
XAE500W2K8042.msp
XAE500W2K8049.msp

Don’t forget to also upgrade the online and offline plugins on all XA servers! 


[Advanced only] Once installation is complete and all desired components are updated, launch the AMC, run the discovery process, navigate to your server, and set the server edition to Advanced. This will allow licensing to work properly and make your publishers streaming-capable:


Repeat this process on all XA servers and you’ll be ready to start installing and publishing apps. The first time you run the AMC it will run discovery on the environment; uncheck the Password Manager piece or you will get errors (unless you installed single sign-on with Platinum). The very next thing you should do is create the XenApp Services Site that clients using the online plugin will connect to. Under the Citrix Web Interface console, right-click the XA Services Site and click Create Site. Once the PNAgent site is set up, right-click it in the WI console for a number of configuration options. I chose to enable pass-through authentication and make it the default, as well as enable dual mode for resource types, for starters. Publishing apps is fairly self-explanatory. Install the online/offline plugins on your clients’ PCs, and when they are granted access to applications they will see them either in the plugin, the Start menu, the desktop, or all three.

Citrix Secure Gateway

Both the Citrix Access Gateway (CAG) and Secure Gateway solutions accomplish the same thing, the difference being that the CAG is an appliance (not free) and the CSG (free) runs on a Windows server. The installation process is similar, although you can skip right to the latest versions of both the WI and CSG. You will need a valid SSL cert on this server, as it will be public facing and accepting only secure connections. When you generate your cert request (CSR), make sure to choose at least a 2048-bit key, as most CAs now require this as a minimum.

My goal was to provide an HTTP redirect to the gateway, so clients can connect over plain HTTP, as well as to host the WI on a TCP port not opened to the world. By default the WI installs into the default IIS site bound to TCP/80. I created a new site, bound to TCP/8080, where the WI would live, and left a single page in the default site that performs a simple redirect to TCP/443. The CSG is the only service that should be listening on that port. Run the WI installer first and create your XenApp Web site in the new IIS site on port 8080. Once your new certificate is installed on the server you can run the CSG Configuration Wizard; it should find your new cert. Most of the defaults in the configuration wizard can be accepted. You will need to specify your STA server (the infrastructure server), as well as where the WI lives. If you follow my example you will need to tell the CSG that the WI is installed on the same server on TCP/8080.
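
The redirect page itself can be a one-liner. Assuming classic ASP is enabled in the default site, something like the following saved as default.asp would do it (a static meta-refresh page works just as well):

<%
' default.asp in the port-80 site: bounce all requests to the CSG on 443.
Response.Redirect "https://" & Request.ServerVariables("HTTP_HOST") & "/"
%>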


Make sure to set up your WI site properly with your farm info and the appropriate secure access method. Direct is the simplest unless you’re using NAT between your DMZ and inside networks in which case you’d use gateway alternate or gateway translated. Customize your WI appearance how you like and assuming all firewall changes are in place you should now be able to access the WI externally.


At a high-level that’s it for my architecture. Citrix is a deep well and there are obviously a ton of other options and features that one could deploy. Another consideration worth pondering is user profiles. Citrix recommends using roaming TS profiles centrally stored on NAS or SAN to ensure a consistent user experience. This is easily enforced via GPO which I have employed and it works well. In combination with folder redirection this provides a very solid solution for managing your Citrix resources.

While it is OK to mix x86 and x64 servers in your XA farm, consider the implications this has for program installations and user profiles. Published apps are defined via paths to their executables, so if these paths are not the same on all of your servers you will have to make special concessions (C:\Program Files vs. C:\Program Files (x86)). There are also differences in the appdata folders used by some applications on x64 versus x86 platforms. Many have had luck using the Flex Kit to manage profiles in these environments.

Although not cheap, XenApp 5 provides a very slick application delivery solution. XA5 Advanced costs in the neighborhood of $250/CCU, and you also need to buy TS CALs. Enterprise and Platinum editions increase costs significantly. The basic RemoteApp functionality baked into Server 2008 works well, but at the sacrifice of granular presentation management, detailed performance tuning, and easy scalability. Citrix doesn’t care how many servers you put in your farm or which roles you run where; it’s all about the CCU licenses. Citrix and Microsoft have had a very cohesive relationship going way back, so XenApp will integrate seamlessly into your environment. One thing to watch out for is XenApp component changes to your servers after installation; I’ve noticed some finicky behavior, specifically with the license server component, requiring a reinstall to get it working correctly again.

Resources:

Application Streaming with XenApp Advanced

XenApp 5 Installation Guide

Citrix Secure Gateway Admin Guide

Web Interface 5.0.1 Admin Guide