Dell XC Series 2.0: Product Architectures

Following our launch of the new 13G-based XC series platform, I present our product architectures for VDI-specific use cases. Of the platforms available, this use case focuses on the extremely powerful 1U XC630 with Haswell CPUs and 2133MHz RAM. We offer these appliances on both Server 2012 R2 Hyper-V and vSphere 5.5 U2 with Citrix XenDesktop, VMware Horizon View, or Dell vWorkspace. All platform architectures have been optimized, configured for best performance, and documented.

Platforms

We have three platforms to choose from, optimized around cost and performance, all ultimately flexible should specific parameters need to change. The A5 model is the most cost effective, leveraging 8-core CPUs, 256GB RAM, 2 x 200GB SSDs for performance and 4 x 1TB HDDs for capacity. For POCs, small deployments or light application virtualization, this platform is well suited. The B5 model steps up the performance by adding four cores per socket, increasing the RAM density to 384GB and doubling the performance tier to 2 x 400GB SSDs. This platform will provide the best bang for the buck on medium-density deployments of light or medium level workloads. The B7 is the top of the line, offering 16-core CPUs and a higher capacity tier of 6 x 1TB HDDs. For deployments requiring maximum density of knowledge or power user workloads, this is the platform for you.
image
At 1U with dual CPUs, 24 DIMM slots and 10 drive bays…loads of potential and flexibility!
image

Solution Architecture

Utilizing three platform hardware configurations, we are offering three VDI solutions on two hypervisors, giving you lots of flexibility and many options. A three-node cluster minimum is required, with every node containing a Nutanix Controller VM (CVM) to handle all IO. The SATADOM handles boot responsibilities, hosting the hypervisor as well as the initial setup of the Nutanix Home area. The SSDs and NL-SAS disks are passed through directly to each CVM, which straddles the hypervisor and hardware. Every CVM contributes its directly attached disks to the storage pool, which is stretched across all nodes in the Nutanix Distributed File System (NDFS) cluster. NDFS is not dependent on the hypervisor for HA or clustering. Hyper-V cluster storage pools are presented to the hosts via SMB version 3, and vSphere clusters are presented with NFS. Containers can be created within the storage pool to logically separate VMs based on function. These containers also provide isolation of configurable storage characteristics in the form of dedupe and compression. In other words, you can enable compression on your management VMs within their dedicated container, but not on your VDI desktops, also within their own container. The namespace is presented to the cluster in the form of \\NDFS_Cluster_name\container_name.
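
As a rough sketch of how containers in a single storage pool relate to the namespace and per-container features, here's the idea in Python (the `Container` class, names and cluster name are hypothetical illustrations, not a Nutanix API):

```python
# Illustrative sketch only: containers in one storage pool, each with its
# own storage feature toggles, surfaced to Hyper-V hosts as SMB3 shares.

class Container:
    def __init__(self, name, compression=False, dedupe=False):
        self.name = name
        self.compression = compression  # configurable per container
        self.dedupe = dedupe

def smb_path(cluster, container):
    """Hyper-V hosts see each container as \\<cluster>\<container>."""
    return rf"\\{cluster}\{container.name}"

# Compression on the mgmt container but not on the desktop container,
# as described above. "NDFS01" is a made-up cluster name.
containers = [
    Container("ds_mgmt", compression=True),
    Container("ds_compute", compression=False),
]
paths = [smb_path("NDFS01", c) for c in containers]
```

The point is that feature isolation happens per container while all containers share the same distributed pool and namespace.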
The first solution I’ll cover is Dell’s Wyse vWorkspace, which supports either 2012 R2 Hyper-V or vSphere 5.5. For small deployments or POCs we offer this solution in a “floating mgmt” architecture which combines the vWorkspace infrastructure management roles with the VDI desktops or shared session VMs. vWorkspace on Hyper-V enables a special technology for non-persistent/ shared image desktops called Hyper-V Catalyst, which includes two components: HyperCache and HyperDeploy. Hyper-V Catalyst provides some incredible performance boosts and requires that the vWorkspace infrastructure components communicate directly with the Hyper-V hypervisor. This also means that vWorkspace does not require SCVMM to perform provisioning tasks for non-persistent desktops!

  • HyperCache – Provides virtual desktop performance enhancement by caching parent VHDs in host RAM. Read requests are satisfied from cache including requests from all subsequent child VMs.
  • HyperDeploy – Provides instant cloning of parent VHDs massively diminishing virtual desktop pool deployment times.
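
The caching idea behind HyperCache can be illustrated generically. This is a minimal sketch of a parent-disk read cache in host RAM, not the vWorkspace implementation; the class and data are invented for illustration:

```python
# Sketch: cache parent-disk blocks in host RAM so reads of shared blocks
# from every subsequent child VM are served from memory, not disk.

class ParentDiskCache:
    def __init__(self, backing):
        self.backing = backing      # simulated parent VHD: block -> bytes
        self.cache = {}             # host-RAM cache of parent blocks
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1          # served from RAM
        else:
            self.misses += 1        # first read goes to the backing disk
            self.cache[block] = self.backing[block]
        return self.cache[block]

vhd = {0: b"boot", 1: b"os"}        # toy parent image
cache = ParentDiskCache(vhd)
# Two child VMs booting from the same parent: only the first VM's reads
# touch the disk; the second VM's reads all hit the cache.
for _ in range(2):
    cache.read(0); cache.read(1)
```

With many child VMs per host, the hit rate on shared parent blocks approaches 100%, which is where the boot/login storm relief comes from.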

You’ll notice the HyperCache components included in the Hyper-V architectures below: 3 to 6 hosts in the floating management model, depicted below with management, desktop and RDSH VMs logically separated by function only from a storage container perspective. The recommendation of 3-7 RDSH VMs is based on our work optimizing around NUMA boundaries. I’ll dive deeper into that in an upcoming post. The B7 platform is used in the architectures below.
image
Above ~1000 users we recommend the traditional distributed management architecture to enable more predictable scaling and performance of both the compute and management hosts. The basic architecture remains the same and scales to the full extent supported by the hypervisor, in this case Hyper-V, which supports up to 64 hosts. NDFS does not have a scaling limitation, so several hypervisor clusters can be built within a single contiguous NDFS namespace. Our recommendation is to build independent Failover Clusters for compute and management discretely so they can scale up or out independently.
The architecture below depicts a B7 build on Hyper-V applicable to Citrix XenDesktop or Wyse vWorkspace.
image

This architecture is relatively similar for Wyse vWorkspace or VMware Horizon View on vSphere 5.5 U2, but with fewer total compute hosts per HA cluster (32 total). For vWorkspace, Hyper-V Catalyst is not present in this scenario, so vCenter is required to perform desktop provisioning tasks.
image
For the storage containers, the best practice of less is more still stands. If you don’t need a particular feature, don’t enable it, as it will consume additional resources. Deduplication is always recommended on the performance tier since the primary OpLog lives on SSD and will always benefit. Dedupe or compression on the capacity tier is not recommended unless you absolutely need it. And if you do, prepare to increase each CVM RAM allocation to 32GB.

Container  | Purpose         | Replication Factor | Perf Tier Deduplication | Capacity Tier Deduplication | Compression
Ds_compute | Desktop VMs     | 2                  | Enabled                 | Disabled                    | Disabled
Ds_mgmt    | Mgmt Infra VMs  | 2                  | Enabled                 | Disabled                    | Disabled
Ds_rdsh    | RDSH Server VMs | 2                  | Enabled                 | Disabled                    | Disabled

Network Architecture

As a hyperconverged appliance, the network architecture leverages the converged model. A minimum of a pair of 10Gb NICs in each node handles all traffic for the hypervisor, guests and storage operations between CVMs. Remember that the storage of every VM is kept local to the host on which the VM resides, so the only traffic that traverses the network is LAN and replication. There is no need to isolate storage protocol traffic when using Nutanix.
Hyper-V and vSphere are functionally similar. For Hyper-V there are two vSwitches per host: one external vSwitch that aggregates all of the services of the host management OS as well as the vNICs for the connected VMs. The 10Gb NICs are connected to an LBFO team configured in Dynamic mode. The CVM alone connects to a private internal vSwitch so it can communicate with the hypervisor.
image
In vSphere it’s the same story but with the concept of Port Groups and vMotion.
image
We have tested the various configurations per our standard processes and documented the performance results which can be found in the link below. These docs will be updated as we validate additional configurations.

Product Architectures for 13G XC launch:

Resources:

About Wyse vWorkspace HyperCache
About Wyse vWorkspace HyperDeploy

Dell XC Series Web-scale Converged Appliance 2.0

I am pleased to present the Dell XC 2.0 series of web-scale appliances based on the award-winning 13G PowerEdge server line from Dell. There’s lots more in store for the XC; focusing on just this part of the launch, we are introducing the XC630 and XC730xd appliances.

Flexibility and performance are key tenets of this launch, providing not only a choice of 1U or 2U form factors but an assortment of CPU and disk options. From a solution perspective, specifically around VDI, we are releasing three optimized and validated platforms with accompanying Product Architectures to help you plan and size your Dell XC deployments.

The basic architecture of the Dell XC powered by Nutanix remains the same. Every node is outfitted with a Nutanix Controller VM (CVM) that connects to a mix of SSDs and HDDs to contribute to a distributed storage pool that has no inherent scaling limitation. A minimum of three nodes is required, and both VMware vSphere and Microsoft Windows Server 2012 R2 Hyper-V are supported hypervisors. Let’s take a look at the new models.

image

XC630

The XC630 is a 1U dual-socket platform that supports 6-core to 16-core CPUs and up to 24 x 2133MHz 16GB RDIMMs or 32GB LRDIMMs. The XC630 can be configured using all flash or using two tiers of storage which can consist of 2 to 4 x SSDs (200GB, 400GB or 800GB) and 4 to 8 x 1TB HDDs (2TB HDDs coming soon). Flexible! All flash nodes must have a minimum of 6 x SSDs while nodes with two storage tiers must have a minimum of two SSDs and four HDDs. All nodes have a minimum of 2 x 1Gb plus 2 x 10Gb SFP+ or BaseT Ethernet that can be augmented via an additional card.
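
The XC630 drive-population rules above can be summarized in a small sketch (illustrative only, not a Dell configuration tool; it encodes just the minimums and ranges stated in this paragraph):

```python
# Sketch of the XC630 drive rules described above:
#  - all-flash nodes: at least 6 SSDs
#  - two-tier (hybrid) nodes: 2-4 SSDs and 4-8 x 1TB HDDs

def valid_xc630(ssds, hdds):
    if hdds == 0:                       # all-flash configuration
        return ssds >= 6
    return 2 <= ssds <= 4 and 4 <= hdds <= 8  # hybrid configuration

assert valid_xc630(6, 0)       # all-flash minimum
assert not valid_xc630(4, 0)   # too few SSDs for all-flash
assert valid_xc630(2, 4)       # hybrid minimum
assert not valid_xc630(1, 4)   # hybrid needs at least two SSDs
```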

New to the XC 2.0 series is a 64GB SATADOM that is used to boot each node. Each node is also outfitted with a 16GB SD card used for the purposes of initial deployment and recovery. The SSDs and HDDs that comprise the Nutanix Distributed File System (NDFS) storage pool are presented to each CVM via an on-board 1GB PERC H730 set in pass-through mode. Simple, powerful, flexible.

image

 

XC730xd

For deployments requiring a greater amount of cold tier data capacity, the XC730xd can provide up to 32TB raw per node. The XC730xd is a 2U dual-socket platform that supports 6-core to 16-core CPUs and up to 24 x 2133MHz 16GB RDIMMs or 32GB LRDIMMs. The XC730xd is provided with two chassis options: 24 x 2.5” disks or 12 x 3.5” disks. The 24-drive model requires the use of two tiers of storage which can consist of 2 to 4 x SSDs (200GB, 400GB or 800GB) and 4 to 22 x 1TB HDDs. The 12-drive model also requires two tiers of storage consisting of 2 to 4 x SSDs (200GB, 400GB or 800GB) and up to 10 x 4TB HDDs. All nodes have a minimum of 2 x 1Gb plus 2 x 10Gb SFP+ or BaseT Ethernet that can be augmented via an additional card.

The XC730xd platforms are also outfitted with a 64GB SATADOM that is used to boot the nodes. The 16GB SD card used for the purposes of initial deployment and recovery is present on these models as well. The SSDs and HDDs that comprise the Nutanix Distributed File System (NDFS) storage pool are presented to each CVM via an on-board 1GB PERC H730 set in pass-through mode. Simple, powerful, flexible.

12 drive option, hopefully the overlaps in the image below make sense:

image

 

24 drive option:

image

 

Nutanix Software Editions

All editions of the Nutanix software platform are available with variable lengths for support and maintenance.

image

This is just the beginning. Keep an eye out for additional platforms and offerings from the Dell + Nutanix partnership! Next up is the VDI product architectures based on the XC630. Stay tuned!!

http://www.dell.com/us/business/p/dell-xc-series/pd

Dell XC Series – Product Architectures

Hyperconverged Web-scale Software Defined Storage (SDS) solutions are white hot right now, and Nutanix is leading the pack with its ability to support all major hypervisors (vSphere, Hyper-V and KVM) while providing nearly unlimited scale. Partnering with Nutanix was an obvious, mutually beneficial choice for Dell for the reasons above, with Dell supplying a much more robust server platform. Dell also provides global reach for services and support, as well as solving other challenges such as hypervisors installed in the factory.

Nutanix operates below the hypervisor layer and as a result requires tightly coupled interaction with the hardware directly. Many competing platforms in this space sit above the hypervisor and so require vSphere, for example, to provide access to storage and HA, but they are also bound by the hypervisor’s limitations (scale). Nutanix uses its own algorithm for clustering and doesn’t rely on a common transactional database, which can cause additional challenges when building solutions that span multiple sites. Because of this, the Nutanix Distributed Filesystem (NDFS) has no known limits of scale. There are current Nutanix installations in the thousands of nodes across a contiguous namespace, and now you can build them on Dell hardware.

Along with the Dell XC720xd appliances, we have released a number of complementary workload Product Architectures to help customers and partners build solutions using these new platforms. I’ll discuss the primary architectural elements below.

Wyse Datacenter Appliance Architecture for Citrix

Wyse Datacenter Appliance Architecture for VMware

Wyse Datacenter Appliance Architecture for vWorkspace

 

Nutanix Architecture

Three nodes minimum are required for NDFS to achieve quorum, so that is the minimum solution buy-in; storage and compute capacity can then be increased incrementally by adding one or more nodes to an existing cluster. The Nutanix architecture uses a Controller VM (CVM) on each host, which participates in the NDFS cluster and manages the hard disks local to its own host. Each host requires two tiers of storage: high performance/ SSD and capacity/ HDD. The CVM manages the reads/writes on each host and automatically tiers the IO across these disks. A key value proposition of the Nutanix model is data locality, which means that the data for a given VM running on a given host is stored locally on that host, as opposed to having reads and writes cross the network. This model scales indefinitely in a linear block manner where you simply buy and add capacity as you need it. Nutanix creates a storage pool that is distributed across all hosts in the cluster and presents this pool back to the hypervisor as NFS or SMB.

You can see from the figure below that the CVM engages directly with the SCSI controller, through which it accesses the disks local to the host on which it resides. Since Nutanix sits below the hypervisor and handles its own clustering and data HA, it is not dependent upon the hypervisor to provide any features, nor is it bound by the hypervisor’s related limitations.

image

From a storage management and feature perspective, Nutanix provides two tiers of optional deduplication performed locally on each host (SSD and HDD individually), compression, tunable replication (number of copies of each write spread across disparate nodes in the cluster) and data locality (keeps data local to the node the VM lives on). Within a storage pool, containers are created to logically group VMs stored within the namespace and enable specific storage features such as replication factor and dedupe. Best practice says that a single storage pool spread across all disks is sufficient but multiple containers can be used. The image below shows an example large scale XC-based cluster with a single storage pool and multiple containers.

image

While the Nutanix architecture can theoretically scale indefinitely, practicality might dictate that you design your clusters around the boundaries of the hypervisors: 32 nodes for vSphere, 64 nodes for Hyper-V. The decision to do this will be more financially impactful if you separate your resources along the lines of compute and management in distinct SDS clusters. You could also, optionally, install many maximum-node hypervisor clusters within a single very large, contiguous Nutanix namespace, which is fully supported. I’ll discuss the former option below as part of our recommended pod architecture.

Dell XC720xd platforms

For our phase 1 launch we have five platforms to offer that vary in CPU, RAM and size/ quantity of disks. Each appliance is 2U, based on the 3.5” 12th-generation PowerEdge R720xd, and supports from 5 to 12 total disks, each node carrying a mix of SSDs and HDDs. The A5 platform is the smallest, with a pair of 6-core CPUs, 200GB SSDs and a recommended 256GB RAM. The B5 and B7 models are almost identical except for the 8-core CPUs in the B5 and the 10-core CPUs in the B7. The C5 and C7 boast slightly higher-clocked 10-core CPUs with doubled SSD densities and 4-5x more in the capacity tier. The suggested workloads are specific, with the first three platforms targeted at VDI customers. If greater capacity is required, the C5 and C7 models work very well for this purpose too.

image

For workload to platform sizing guidance, we make the following recommendations:

Platform | Workload                           | Special Considerations
A5       | Basic/ light task users, app virt  | Be mindful of limited CPU cores and RAM densities
B5       | Medium knowledge workers           | Additional 4 cores and greater RAM to host more VMs or sessions
B7       | Heavy power users                  | 20 cores per node + a recommended 512GB RAM to minimize oversubscription
C5       | Heavy power users                  | Higher density SSDs + 20TB in the capacity tier for large VMs or large amounts of user data
C7       | Heavy power users                  | Increased number of SSDs with larger capacity for a greater amount of T1 performance

Here is a view of the 12G-based platform representing the A5-B7 models. The C5 and C7 would add additional disks in the second disk bay. The two disks in the rear flexbay are 160GB SSDs configured in RAID1 via PERC, used to host the hypervisor and CVM; these disks do not participate in the storage pool. The six disks in front are controlled by the CVM directly via the LSI controller and contribute to the distributed storage pool stretched across all nodes.

image

Dell XC Pod Architecture

This being a 10Gb hyperconverged architecture, the leaf/ spine network model is recommended. We recommend a 1Gb switch stack for iDRAC/ IPMI traffic and building the leaf layer from 10Gb Force10 parts. The S4810, recommended for SFP+ based platforms, is shown in the graphic below; the S4820T can be used for 10GBase-T.

image

In our XC series product architecture, the compute, management and storage layers, typically all separated, are combined here into a single appliance. For solutions based on vWorkspace under 10 nodes, we recommend a “floating management” architecture which allows the server infrastructure VMs to move between hosts also being used for desktop VMs or RDSH sessions. You’ll notice in the graphics below that compute and management are combined into a single hypervisor cluster which hosts both of these functions.

Hyper-V is shown below which means the CVMs present the SMBv3 protocol to the storage pool. We recommend three basic containers to separate infrastructure mgmt, desktop VMs and RDSH VMs. We recommend the following feature attributes based on these three containers (It is not supported to enable compression and deduplication on the same container):

Container  | Purpose         | Replication Factor | Perf Tier Deduplication | Capacity Tier Deduplication | Compression
Ds_compute | Desktop VMs     | 2                  | Enabled                 | Enabled                     | Disabled
Ds_mgmt    | Mgmt Infra VMs  | 2                  | Enabled                 | Disabled                    | Disabled
Ds_rdsh    | RDSH Server VMs | 2                  | Enabled                 | Enabled                     | Disabled
You’ll notice that I’ve included the resource requirements for the Nutanix CVMs (8 x vCPUs, 32GB vRAM). The vRAM allocation can vary depending on the features you enable within your SDS cluster. 32GB is required, for example, if you intend to enable both SSD and HDD deduplication. If you only require SSD deduplication and leave the HDD tier turned off, you can reduce your CVM vRAM allocation to 16GB. We highly recommend that you disable any features that you do not need or do not intend to use!
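
The vRAM sizing rule above can be sketched as a tiny helper (illustrative only; the 16GB/32GB thresholds come from the text above and should be confirmed against current Nutanix sizing guidance for your release):

```python
# Sketch of the CVM vRAM rule described above:
#  - enabling HDD (capacity tier) dedupe requires the 32GB allocation
#  - SSD-only (perf tier) dedupe allows reducing the CVM to 16GB

def cvm_vram_gb(ssd_dedupe, hdd_dedupe):
    if hdd_dedupe:
        return 32   # both-tier dedupe needs the larger CVM
    return 16       # perf-tier dedupe only (or no dedupe at all)

assert cvm_vram_gb(ssd_dedupe=True, hdd_dedupe=True) == 32
assert cvm_vram_gb(ssd_dedupe=True, hdd_dedupe=False) == 16
```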

image

For vWorkspace solutions over 1000 users or solutions based on VMware Horizon or Citrix XenDesktop, we recommend separating the infrastructure management in all cases. This allows management infrastructure to run in its own dedicated hypervisor cluster while providing very clear and predictable compute capacity for the compute cluster. The graphic below depicts a 1000-6000 user architecture based on vWorkspace on Hyper-V. Notice that the SMB namespace is stretched across both of the discrete compute and management infrastructure clusters, each scaling independently. You could optionally build dedicated SDS clusters for compute and management if you desire, but remember the three node minimum, which would raise your minimum build to 6 nodes in this scenario.

image

XenDesktop on vSphere, up to 32 nodes max per cluster, supporting around 2500 users in this architecture:

image

Horizon View on vSphere, up to 32 nodes max per cluster, supporting around 1700 users in this architecture:

image

Network Architecture

Following the leaf/ spine model, each node should connect 2 x 10Gb ports to a leaf switch which are then fully mesh connected to an upstream set of spine switches.

image

On each host there are two virtual switches: one for external access to the LAN and internode communication and one private internal vSwitch used for the CVM alone. On Hyper-V the two NICs are configured in a LBFO team on each host with all management OS vNICs connected to it.

image

vSphere follows the same basic model except for port groups configured for the VM type and VMKernel ports configured for host functions:

image

Performance results

The tables below summarize the user densities observed for each platform during our testing. Please refer to the product architectures linked at the beginning of this post for the detailed performance results for each solution.

image

image

Resources:

http://en.community.dell.com/dell-blogs/direct2dell/b/direct2dell/archive/2014/11/05/dell-world-two-questions-cloud-client-computing

http://blogs.citrix.com/2014/11/07/dell-launches-new-appliance-solution-for-desktop-virtualization/

http://blogs.citrix.com/2014/11/10/xendesktop-technologies-introduced-as-a-new-dell-wyse-datacenter-appliance-architecture/

http://blogs.vmware.com/euc/2014/11/vmware-horizon-6-dell-xc-delivering-new-economics-simplicity-desktop-application-virtualization.html

http://stevenpoitras.com/the-nutanix-bible/

http://www.dell.com/us/business/p/dell-xc-series/pd

Dell XC Web Scale Converged Appliance

That’s a mouthful! Here’s a quick taste of the new Dell + Nutanix appliance we’ve been working on.  Our full solution offering will be released soon, stay tuned. In the meantime, the Dell marketing folks put together a very sexy video:

VOIP on Android, What’s Next?

Rumors on the future of Google Voice (gVoice) have abounded for well over a year now, with most indications pointing to an eventual consolidation with the G+ Hangouts app that could be announced at Google I/O soon. As of yesterday, 5/15/14, the long-awaited closing of the gVoice API (XMPP) to 3rd parties has happened or will in the next few days. snrb Labs, creators of the Android VOIP stalwart app GrooVe IP, has been warning about this shift for a long time and as of yesterday no longer supports gVoice users. GrooVe IP formerly integrated with the gVoice service via XMPP, allowing users to make outbound voice calls over cellular data plans or wifi using gVoice phone numbers, thus conserving cellular minutes. This is appealing for obvious reasons and delivers on the core value proposition of VOIP. There is also the slightly more complicated SIP client method, which requires a bit more setup and has more moving parts on the backend. Any XMPP integration is now dead there too, however.

gVoice has had a good run

Despite the ability to make free calls over VOIP via 3rd-party app integration, gVoice in its current state has a host of other useful features, albeit mostly for inbound calls.

  • Cellular voicemail integration/ replacement – on my phone, instead of using Verizon for vmail service, gVoice captures the messages, which are then (somewhat poorly) transcribed to text and emailed to me along with a flash audio file. I can play any message from my phone via the voice app, via email or via the /voice site on any PC. No dialing into a legacy central voicemail service to get messages.

image

  • Multi-ring – gVoice allows users to specify multiple end points to route incoming calls. I could have my cell, my wife’s cell and the house phone all ring when anyone calls my gVoice number.

image

  • Call screening/ blocking – gVoice provides a great buffer and allows you to obfuscate your “real” phone number. Any caller that comes in via my gVoice number must first announce themselves, then I have the option to accept the call, send them straight to vmail, or listen to them live while they leave a message and pick up during if I choose. Just like old-school answering machine screening. You can also permanently block any caller that comes in via the service through a few creative means (not in service message).

image

  • SMS consolidation – gVoice supports in/outbound text messaging too which comes with the same benefits as voice calls in that they show up on your phone and in email. With gmail integration, you can simply reply to a gVoice text message from email and the recipient will get the text message on their device.
  • Caller Rules – Another great feature of gVoice allows users to specify calling rules based on the caller profile. If they’re in your contacts list you could: ring all phones, use a personal greeting and don’t screen. If anonymous: ring only mobile, use a generic greeting and screen. Lots of options and customizable possibilities here.

image

This is the end

Is the end near for gVoice? I certainly hope not, as outbound VOIP calling is far from the most useful feature I enjoy. Google is an advertising company first and foremost, and people forget that. Yes, they are extremely profitable and lead some of the best innovation of the day, but you have to consider why Google invests in the technology it does. gVoice came out before Google’s speech-to-text initiative, which clearly benefited from the work done and data collected in gVoice. That was possibly why gVoice was created in the first place! Google invests in technologies to bolster its core search business or related offerings. Once that pipeline runs dry, it might kill the initiative or integrate it into something else. gVoice fed the core search business by helping to bring voice to that space, which is now a core feature in Android-fed Google searches (“Ok, Google”). gVoice has served its purpose for Google. The lack of development effort or improvement to gVoice is pretty glaring at this point. The last update to the Android app was almost 8 months ago and google.com/voice hasn’t changed in a very long time.

Google is still betting big on Google+ and the more it can do to attract and retain users in that ecosphere the better. I use Hangouts on my Android devices but I’m still one of the early Google adopter hold-outs who still does not have a G+ profile. I’ll hold out as long as I can but Google is trying very hard to close the loop and slowly force everyone into G+. At some point, if you want to be social at all in any capacity in the Google universe, you will have to have a G+ profile. Why? Laser focused and relevant advertising based on a wide spectrum view of all of your data. Period. The data Google amasses on its users is extremely valuable to advertisers.

So, what is next?

image

The future of gVoice and its current features is uncertain. A voice-enabled Hangouts app seems inevitable and a logical step for Google to make. Will most of the features I currently use in gVoice disappear at some point? Probably. With XMPP access gone, apps like GrooVe IP have had to find other means to provide their services. GrooVe IP has partnered with ring.to (backed by bandwidth.com) to carry this torch forward. This will require porting your gVoice number to ring.to or generating a new number via GrooVe IP. SMS is not currently supported with the new solution which is a bummer, but GrooVe says it’s coming. The good news is that you can still make free calls using VOIP on your Android device using your gVoice number if you wish. The bad news is that all of the gVoice integration goodness is gone and it will take the 3rd parties a while to get their solutions up to par.

Remediating the OpenSSL Heartbleed

Heartbleed Bug

You’ve no doubt by now heard about the massive-scale OpenSSL vulnerability that was discovered to have left a large majority of the web servers on the internet open to compromise. The net of the attack: once an SSL session has been established between the client and web server, a heartbeat mechanism functions as a keep-alive between the two. By sending a heartbeat request that claims a payload of up to 64KB while actually containing little or no data, vulnerable servers are tricked into returning the contents of their memory buffers back to the client. These buffers can contain passwords, among other data. This is the “heartbleed” of the heartbeat mechanism. Worse still, apparently there is little trace left behind once this attack has completed successfully, although attempts of the attack can be logged.
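
To make the flaw concrete, here is a toy Python simulation of the missing bounds check (illustrative only; this is not OpenSSL’s actual code, and the buffer contents are invented):

```python
# Toy Heartbleed simulation: the server echoes back the number of bytes
# the *client claims* it sent, without checking the claim against the
# actual payload length, reading past the payload into adjacent memory.

MEMORY = bytearray(b"PAYLOADsecret-password-in-adjacent-buffer")

def heartbeat(payload: bytes, claimed_len: int, patched: bool) -> bytes:
    if patched and claimed_len > len(payload):
        return b""                      # fixed server: discard bad lengths
    # vulnerable path: trust claimed_len and read that many bytes
    return bytes(MEMORY[:claimed_len])

leak = heartbeat(b"PAYLOAD", 40, patched=False)  # leaks adjacent memory
safe = heartbeat(b"PAYLOAD", 40, patched=True)   # patched server refuses
```

The patched branch mirrors the actual fix: a length claim that exceeds the received payload is silently discarded instead of trusted.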

It remains to be seen how widespread this is or what exactly was compromised, although there is proof of Yahoo passwords revealed in the wild. There is now a server-side fix, which requires generating new, stronger certificates, and many sites have begun implementing it. What we do know is which sites use OpenSSL and when new certificates are issued. If a given website is still vulnerable to Heartbleed, there is no point in changing your password until the fix is implemented server-side. If you’re busy changing passwords on sites that have not yet patched this bug and generated new certs, those new passwords will still be vulnerable until the fix is in place; i.e., a waste of time.

If you don’t use LastPass yet (you should), this is a very good time to start. One password tied to private keys you hold, not LastPass, enabling the use of 100% complex and random passwords for every site you log into. It’s a fantastic solution that I use on every device I own. If you’re one of those who still use a single common password on every site, there is a very good chance that it has been compromised somewhere, possibly a long time ago. One of the many things to love about LastPass is that it provides a security challenge tool that tests the strength and reuse of your passwords. That tool now also includes a Heartbleed site checker that watches the websites in your vault, tracking when those sites patch against the bug and when you should change your password after the fix. Important to note: you will need to run this tool again every day for a while until you are completely in the clear. This brings a great deal of sanity to an otherwise chaotic and widespread problem.

image

References:

Ars Technica

LastPass Blog

LastPass Security Challenge Tool

Dell Wyse Datacenter for Citrix XenDesktop

Dell DVS Enterprise is now Dell Wyse Datacenter (DWD), and with the brand change we are proud to announce the 6th release of our flagship set of enterprise VDI solutions for Citrix XenDesktop. Dell Wyse Datacenter is the only true end to end (E2E) VDI solution suite in the industry, providing best of breed hardware and software for any size enterprise. What makes Dell different is that we carefully vet, craft and test our solutions using real-world requirements and workloads, making it easier for customers to adopt our solutions. Dell Wyse Datacenter includes a number of platform and scaling options to meet almost any need: rack, blade, local disk, iSCSI, fiber channel, low cost, high performance, high scale…we have a validated and proven answer to solve your VDI business problem.

The full DWD reference architectures for Citrix can be found here:

Core RA

VRTX RA

Graphics RA

New in DWD 6th release:

  • Citrix XenDesktop 7.1 support
  • Support for Dell EqualLogic PS6210 series
  • Dell Wyse Cloud Client updates
  • High performance SAN-less blade offering
  • Support for LoginVSI 4
  • High IOPS persistent desktops using Atlantis ILIO: Link
  • Citrix vGPU support for graphical acceleration

Citrix XenDesktop 7.1

Architecturally, Citrix XenDesktop has never been simpler. Yes, this is still a potentially very large scale distributed architecture that solves a complicated problem, but the moving parts have been greatly simplified. One of the most important architectural shifts involves the way XenApp is integrated within a XenDesktop infrastructure. Now the management infrastructures are combined, and all that separates shared hosted sessions vs dedicated desktops is where the Virtual Delivery Agent (VDA) is installed. All “XenApp” really means now is that the VDA is installed on an RDSH host, physical or virtual. That’s it! Otherwise, Machine Creation Services (MCS) performance is getting closer to Provisioning Services (PVS), and catalogs/ delivery groups have never been easier to manage.

image

 

Support for Dell EqualLogic PS6210 series

Dell EqualLogic has begun shipping the new and improved PS6210 series, which most notably includes the DWD Tier 1 iSCSI hybrid array: the PS6210XS. Not only are the Base-T and SFP controller ports doubled, but so is the performance! We have seen a 100% performance improvement on the PS6210XS hosting VDI desktops, which will stretch your shared storage dollar and users per rack unit even further.

image

 

Dell Wyse Cloud Client updates

DWD E2E solutions provide everything required to design, implement and manage industry-leading VDI infrastructures. This of course includes one of the most important aspects: how the end user actually interfaces with the environment, the client. DWD for Citrix XenDesktop includes Citrix Ready clients to meet any organizational need: ThinOS, Windows Embedded, Linux, dual/quad core, BYOD, HDMI, tablet and Chromebook.

image

One of the most exciting new clients from Dell Wyse is the Cloud Connect. The Cloud Connect turns any HDMI-enabled TV or monitor into a thin client through which to access your VDI desktop. Portable and high performing!

image

 

High performance SAN-less blade offering

New in DWD is a high-performance SAN-less blade offering that leverages Citrix MCS. This very simple but high-performance design makes use of a pair of 800GB SSDs in each blade to host the mgmt or compute functions. Gold images are stored on each compute blade, with a corresponding non-persistent pool spun up on each blade by the Delivery Controller. This configuration is capable of supporting the maximum possible compute host density based on our standard CPU and RAM recommendations. HA is available as an option for the mgmt layer through the introduction of shared Tier 2 storage and clustering with a second mgmt blade.

image

 

Support for LoginVSI 4

Login Consultants’ Login VSI is the industry-standard VDI load generation tool for simulating real-world workloads. Moving from version 3.7 to 4 introduces some changes in the load generated and, consequently, the densities we see on our VDI solution infrastructures. Most notably, version 4 reduces the amount of read/write IO generated but increases CPU utilization. All of our test results have been updated using the new tool.

 

image

 

High IOPS persistent desktops using Atlantis ILIO

Partnering with Atlantis Computing, we have collaborated on a solution to deliver high IOPS for persistent desktops where disk is typically at a premium. ILIO is a software solution that fits seamlessly into the existing Local Tier 1 model for DWD that makes use of compute host RAM to store and execute desktop VMs. No server medium is faster than RAM and with ILIO’s deduplication ability, only a fraction of the storage that would otherwise be required is necessary. This solution fits easily with the highly reliable and economical 1Gb EqualLogic PS6100E that provides hosting to the mgmt VMs, user data, and persistent desktop data.

image

 

Citrix vGPU support for graphical acceleration

Fleshing out the DWD virtualized graphics offerings, we are proud to include the new Citrix vGPU technology, which utilizes the NVIDIA GRID boards. For XenDesktop, we offer vDGA (pass-through) and vSGA (shared) graphics on vSphere, and now vGPU on Citrix XenServer. vGPU enables a hybrid variety of graphics acceleration by sharing the on-board GPUs while also enabling OpenGL capabilities by running graphics drivers in the VMs themselves. This is done through vGPU profiles that are targeted to specific use cases and maximum GRID board capabilities. The more resources a profile consumes, and the higher the resolution, the fewer users each GRID board supports.

image

High level vGPU architecture:

image

 

Stay tuned for more exciting VDI products from Dell coming late this spring. Citrix Synergy is right around the corner. In the meantime please check out our RAs and thanks for stopping by!

Sync.com: Secure Cloud Storage Contender

image

*Updated 5/16/14 – Sync just announced a new Vault feature that allows you to keep some files in the cloud that you don’t want to sync down to devices.

Previously, I took a look at another promising secure cloud storage provider called Tresorit, currently my default, which you can read about here. Toronto-based Sync.com (currently in beta) looks to be going after the same market during a poignant time of concern over privacy and data security. It is ever more relevant now that Dropbox has updated its TOS, so many users will be looking for a more secure offering. Due to the NSA’s ability to snoop data “on the wire” as well as within US datacenters, sadly, this is also a time when considering cloud storage providers outside of US soil is a good thing.

 

“Zero Knowledge” Architecture

Sync.com uses what they call a zero knowledge storage environment, meaning that they have no knowledge of, nor ability to access, the files you upload. Just like Tresorit, Sync.com provides a cloud encryption solution, meaning neither solution encrypts your files on local disk; only files sent up to and stored in the cloud are encrypted as they leave your PC.

There is a degree of trust one has to be willing to accept with these solutions, as we really have no way to definitively prove that what is stored in the cloud is actually encrypted. Nor are we privy to the security standards or mechanisms in place in these datacenters to keep our data secure. That said, Sync.com expresses interest on their website in making their client software open source, so greater transparency will come when that happens.

Sync.com does not provide a detailed white paper like Tresorit does, but they do describe their methods, and it is pretty straightforward stuff. Your Sync.com account password protects your private key (AES-256), and this key is used to encrypt a “file unlock key” for every file you upload to the cloud. File transfer and authentication happen via SSL. Authentication strings are sent as SHA-256 hashes (never in the clear) and stored in their database as salted, BCRYPT-hashed strings immune to rainbow table lookups. This is their claim; I can’t speak to its validity.
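The authentication scheme they describe can be sketched in a few lines. To be clear, this is an illustrative sketch only, not Sync.com's actual code: the function names are mine, and PBKDF2 stands in for bcrypt since bcrypt isn't in the Python standard library.

```python
import hashlib
import hmac
import os

def client_auth_string(password: str) -> bytes:
    # The client only ever sends a SHA-256 hash of the password,
    # never the cleartext password itself.
    return hashlib.sha256(password.encode()).digest()

def server_store(auth_string: bytes) -> tuple[bytes, bytes]:
    # Server side: salt and slow-hash the received auth string before
    # storing it, defeating precomputed rainbow-table lookups.
    # (Sync.com says it uses bcrypt; PBKDF2 is a stand-in here.)
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", auth_string, salt, 100_000)
    return salt, stored

def server_verify(auth_string: bytes, salt: bytes, stored: bytes) -> bool:
    # Re-derive and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", auth_string, salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

The point of the layering is that even a full dump of the server database yields neither passwords nor reusable auth strings, since each stored value is salted and expensive to brute-force.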

It looks like the packaging of private key-encrypted file unlock keys with each encrypted file is what enables Sync.com to provide access to your files via a web portal (unlike Tresorit). This is closer to the LastPass method, which also provides a web portal for that offering. I created a quick diagram that illustrates their method below.

Just like Dropbox, Sync.com ultimately rides on the Amazon infrastructure, but it uses the Elastic Compute Cloud (EC2) instead of S3 and keeps many local processes running at all times. The three most notable are the taskbar process, several dispatch instances and a watch master process. Interestingly, one of the dispatch processes keeps a persistent connection to a Network Solutions domain, which also receives the majority of all bytes sent into the Sync.com cloud. All files added to any Sync.com folder locally get pushed up to this netfirms.com address first. It’s unclear to me what happens from there. Dropbox file uploads establish connections directly to AWS as well as Dropbox servers. Sync.com appears to be proxying all file uploads to an unknown location.

Keeping processes alive with persistent connections to the cloud definitely enables speedy syncs, up and down. Tresorit still uses the timed sync approach where at specified times its processes are spun up, connections to the cloud made, files synced, then connections and processes torn down. Not as speedy to sync, but there are no persistent connections.

 

Sync.com Client

User accounts are created via the download and install of the Sync.com client. There is no way to log into the web portal until the client has been installed and the account created first. Just like Tresorit, Sync.com provides a 5GB free account, which is much larger than Dropbox’s 2GB. They don’t specifically mention a file size limit, so to test I uploaded a 600MB file and then a 3.7GB file; both went through. This could be a byproduct of the beta, so it's uncertain whether this will stick around, but if it does, it's very promising.

 

The basic functionality of Sync.com is exactly like that of Dropbox: one folder whose contents get synced up and down. Drop in files or folders and they get uploaded to the cloud and to other Sync.com clients. The software client itself is very reminiscent of earlier versions of the Dropbox client.

They do provide a mobile-friendly up/down limiter for those who would presumably run up against mobile data limits, although there is currently no mobile client. It's curious that they would include this in the desktop client.

The one key point of differentiation here is that in the client preferences they provide a progress tab showing any activity and estimated time remaining. Neither Dropbox nor Tresorit do this.

The right-click system tray menu is almost identical to Tresorit’s.

Web Portal

From a local PC capabilities perspective, that’s about it. All other functions, including sharing, happen from the web portal. Considering that Tresorit does not currently have a web portal, this is a pretty big deal.

The aesthetic and design of the web portal clearly took many cues from Box.net, although the Sync.com team managed to make it much cleaner and more intuitive.

 

Navigation and file operations are fairly predictable, along with the expected ability to upload files to the web portal directly. You don’t have to use the desktop client to add new files. At this time folders cannot be uploaded to the portal, but more than one file can be selected and added to the upload queue. Cool!

 

Box.net and Dropbox offer a nice online photo viewer, with Box’s offering head and shoulders above the other two. Sync.com’s implementation follows pretty closely what Dropbox has in play.

 

Sharing

Something else that occurs via the web portal is sharing, which can be done in a few different ways. The simplest method is Secure Links, which enable the secure public sharing of single files or folders. This sharing method is probably best suited to blind read-only sharing amongst larger groups. First, select the file or folder to create a public link for by clicking the chain icon next to the item.

You’ll then receive a pop-up with the link URL, and you can choose whether to include the link password in the URL itself or leave it separate.

Other, more precise ways to share content involve explicit sharing with people you specify. Clicking the “Create a share” button will create a new folder and allow you to invite specific people via email addresses. The invitation part is mandatory (no invited people = no share), but you can invite only yourself to test.

The third way to share is by clicking the plus sign next to any existing file or folder and enabling sharing directly.

The same sharing dialog will appear, which, again, requires entering someone else’s email address to create the share.

Once shares are created, they can be managed either from the top level sharing link or via the “Manage Share” dialog next to any shared item.

Sharing management is simple and provides almost everything you need: member status, permissions, the ability to re-invite or invite new members, and the ability to stop sharing. Permissions need to be a bit more robust here, ideally including read-only and editing levels.

 

Mobile Client

As of this writing there is no Sync.com mobile client, which will be a deal breaker for some, but one is obviously coming. Sync.com states in their blog that they will simultaneously release clients for both Android and iOS, but that was printed in July of last year. The web portal does work on mobile devices, however, and even has nice photo viewer functionality.

 

What Sync.com does well

  • Files synced to the cloud are encrypted in transit (SSL) and at rest in the Sync.com datacenter (AES-256).
  • Very large file limits or none at all (>550MB and 3.7GB verified on my free account, but this could and likely will change)
  • The ability to store, access and share encrypted files via a web portal is a big deal, no one else in the business can do this today!
  • Split-brain: Ability to host some files in the cloud only that don’t have to be sync’d down to clients.
  • Multi-client subfolder sync (Have to check since Box has struggled with this)
  • 5GB free accounts with 500MB referrals, no indication of limits
  • Robust and granular sharing options
  • Robust and well-known compute backend (Amazon EC2)
  • Persistent cloud connections and watcher processes = near instant file syncs
  • 30-day file history to restore deleted or previous file versions

Where Sync.com falls short

  • Amazon EC2 is a flexible compute service for web services, not a storage service like Amazon S3. Where is Sync.com actually storing our data? 
  • No mobile Android or iOS client yet, but they say this is coming soon in their blog.
  • No mobile client = no camera uploads
  • Web portal provides upload ability for multi-select individual files only, not entire folders like Box does.
  • Single bucket root folder like Dropbox. I prefer Tresorit’s approach of multiple root folders residing anywhere you like. Sync the folders you like on the clients you like, and save local space if you need to.
  • No LAN sync like Dropbox but seems they have the framework to make this a reality at some point.
  • No MS Office mobile previews like Box.net but this too could be accomplished.

Watch this space!

The Sync.com team has clearly tried to preserve the best of what makes Dropbox great while incorporating other useful features, like those offered by Box.net, and also providing data security in the cloud. The current climate will soon make this a requirement, no longer merely a luxury. This will leave the cloud storage pioneers (Dropbox) to retool and scramble as they figure out how to make their now very mature offerings more secure, if they hope to survive. There’s too much good competition appearing in this space, and nothing really compelling will keep people using a less secure service. Building this type of offering with security as the first focus is absolutely the right approach and one that will pay dividends for companies like Tresorit and Sync.com. After a few days using Sync.com there’s really not much to dislike, though there are a few key features I’d like to see before I consider (another) switch. Tresorit had better hurry up with their web portal or they will be outdone here. Hopefully Sync.com will offer some of us beta testers a 50GB+ opportunity soon. 🙂

Feel free to use my referral link if you want to check out Sync.com with some free bonus space: here.

References:

Sync.com Privacy

Sync.com Blog

Goodbye Dropbox, Hello Tresorit?

Check out my other cloud storage posts as well:

*Updated 3/24/14

Part of living on the bleeding edge of technology is always looking for that better mousetrap. Needs and circumstances change along with the climate we live in; these things drive evolutionary product innovations and robust competition. No service or offering is perfect, so there are always concessions to be made, but as long as you aren’t stepping backwards, technological change can be a good thing. I previously wrote about a potential shift from Dropbox to Box, but due to a fatal flaw in the Box offering I moved away from that ledge. (With the Box.net Sync v4 client, this is purportedly fixed, finally, although I haven’t tested it. Even still, Box = insecure cloud storage, since they encrypt contents with private keys they own.) Tresorit started with a very strong launch but is now imposing limits that make the offer much less compelling. Read on for more info.

So what’s wrong with Dropbox? Honestly, not much; it is the feature/functionality yardstick that all other cloud storage providers measure themselves against. The two big pain points I see are free storage space and security. It has become relatively easy to get 50GB on competing platforms, so Dropbox providing a meager 2GB, with opportunities to increase it to ~20GB with referrals on the high side, is paltry. That said, Tresorit looks to be matching the 16GB of possible extra referral space, which will net them 3GB more in total. Secondly, beyond SSL + two-factor authentication to access your Dropbox, you are on your own for securing your contents, which ultimately reside in the Amazon S3 cloud. Although Dropbox claims to encrypt your files in the S3 cloud, this has proven to be exploitable and remains a problem for them today. Tresorit looks hopeful on solving both of these problems while providing the majority of the interesting features native to Dropbox.

What is Tresorit?

Tresorit is another contender in the cloud storage category with a heavy emphasis on security. Security is the first and foremost consideration of Tresorit, not an afterthought (see the whitepaper link at the bottom). A quick overview of the related nomenclature before we go deeper:

Tresorit (“Treasure It”) is the holistic cloud storage solution centered around protecting personal data in transit and at rest in the cloud.

Tresors are individual folders that have been client-side encrypted and synced up to the cloud.

The technique employed here is very similar to what LastPass does: all data is encrypted client-side using a strong secret password, and only the encrypted data is sent up to the cloud over secure channels. Tresorit never has your password, so only you, or those you explicitly share with, can decrypt your data via the Tresorit client. The communication channels use SSL/TLS, but the security of this solution does not depend on that, as your data package itself is encrypted. Shared Tresors are decrypted client-side for the people you have expressly allowed access. It's important to note that your files are not encrypted locally at rest on disk; they are only encrypted when synced up to the cloud.
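The encrypt-before-upload flow described above can be sketched as follows. This is a toy illustration, not Tresorit's implementation: the names are mine, and a SHA-256 counter-mode keystream stands in for the real cipher (Tresorit uses AES) purely to show that only ciphertext ever leaves the client.

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    # Stretch the user's password into a 256-bit key. Tresorit never
    # sees the password, so only the client can ever derive this key.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy counter-mode keystream built from SHA-256 (stand-in for AES):
    # XORing with the keystream both encrypts and decrypts.
    out = bytearray()
    for offset in range(0, len(data), 32):
        block = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, block))
    return bytes(out)

def sync_to_cloud(password: str, salt: bytes, plaintext: bytes) -> bytes:
    # Only this ciphertext ever leaves the machine; the SSL/TLS channel
    # is a second layer, not the one the security model depends on.
    return keystream_xor(derive_key(password, salt), plaintext)
```

Because decryption requires re-deriving the key from the password, the provider can store and relay the blob without ever being able to read it, which is exactly the "zero knowledge" property both Tresorit and Sync.com advertise.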

Everything happens via the Tresorit client, which must be installed on your PC or mobile device; there is no web portal to log into. Account registration and referral processing also happen via the client. Once installed, start creating new Tresors by connecting to any folder anywhere in your local file system. This is one of the cool things about Tresorit: many buckets, not just one like Dropbox (although I have admittedly grown accustomed to keeping all things cloud in a single folder). Each Tresor is then encrypted and sent up to the cloud.

Tresorit also handles selective syncing very intuitively: all Tresors are disconnected by default. You choose what to sync on each client simply by connecting to the Tresor you would like to pull down on a given device. I have a 70GB account, nearly 4x the size of my Dropbox account, so it is nice that I can selectively connect/disconnect those buckets as I see fit. Dropbox provides selective sync for top-level folders one level below the root Dropbox folder. Functionally no different, but Tresorit’s method is cleaner.

 

Any Tresor can be securely shared with anyone you like, with three simple permission levels to attach to each invitee.

Any Tresor can be individually disconnected, deleted from the cloud, or have its sync stopped. This is an improvement over the Dropbox offering. Sync can also be disabled globally, of course. Disconnecting a Tresor from the PC or deleting a Tresor from the cloud does not delete the files locally.

 

On the mobile side, Tresorit also supports seamless mobile camera uploads, just like Dropbox, which is an awesome feature. Not having a web portal to access your files, however, not only requires you to install Tresorit but also to connect and download a full Tresor before you can manipulate any files within it. If you have a space-limited device this can prove challenging for obvious reasons. If you have an Android or iOS device, the Tresorit client on those platforms allows more granular manipulation without having to first download a full Tresor: you can delete individual files or send a file, which will trigger a download, but for that file only. It seems like the same functionality should eventually be brought to the desktop client as well.

New Features (*updated 3/24/14)

Encrypted links allow you to securely share any individual file in any Tresor. Free users are limited both in the number of links that can be created and in how many times each can be downloaded. Not tremendously different from Dropbox’s sharing mechanism, but secure and far more limited.

image

File activity can now be viewed from within the Tresorit desktop or mobile client. Encrypted links can be generated for the latest version of any active file, however deleted or prior versions cannot be viewed or retrieved.

image

image

Plans

There are paid plan options, just like with all of these types of services, but keep in mind the limitations of the free version (which have sadly increased in Q1 of 2014). New limitations have been introduced around the number of devices you can use, monthly traffic, bandwidth speeds and the number of people you can share with (down to 5 from 10). The device limit alone is now a deal breaker for me; maybe not for you.

image

 

Disconnected Tresor sync

To prove that Tresorit could handle a net-new file added to a disconnected, incompletely synced Tresor, I performed the following test:

  • Disconnected a Tresor folder with incomplete content (only 50% downloaded from the cloud)
  • Uninstalled the Tresorit client on the same PC, then reinstalled it
  • Added a new file to the existing directory structure, 3 folder levels deep (still not synced)
  • Connected the Tresor and let it sync

This performed flawlessly. The PC that had the 50%-synced Tresor picked up where it left off (despite the Tresorit client being uninstalled/reinstalled) and successfully pushed the new file up to the other PCs connected to the same Tresor. This is logical and exactly how it should work. Box couldn’t do this (and maybe still can’t), which was its fatal flaw and why I felt compelled to test this here.

The one issue that popped up during this test was that when reinstalling the client, it automatically logged into the same account with no prompt for credentials. This is obviously a serious problem if you ever wanted to install Tresorit at a friend or family member’s house to access a file, then uninstall it thinking your account is safe. I contacted Tresorit about this and they said the issue will be resolved in the next release. As a precaution, manually logging out of the client before uninstalling would be a prudent step to protect your account.

 

What Tresorit does well

  • Auto-encrypt and sync any folder anywhere on your Mac/PC to the cloud. (not just a single bucket like Dropbox)
  • Multi-client sub-folder sync
  • Granular selective folder sync enabled by default
  • 5GB space for free users (>2x more than Dropbox), bonus referrals of 16GB, opportunities to get 50GB+ (Gone for now)
  • Robust mobile client for Android/iOS that includes camera uploads, selective local caching, and passcode lockout
  • Actual support! – That’s right, you can open a support request with Tresorit and actually expect an email back in a reasonable time. The staff seems genuinely interested in helping.

Where Tresorit falls short

  • Limitations – number of devices that can access an account, download volume, bandwidth and sharing.
  • No web portal: Tresorit provides no web portal through which you can access your files. If this is possible with LastPass, also client-side encrypted, it should be possible here.
  • Full folder download required: A full Tresor download is required to manipulate files in the PC client. The mobile client provides more granularity, being able to access, delete or send individual files without downloading the full Tresor. This applies to sharing as well, which makes it a hassle to have to download 100% of the files in a given Tresor.
  • No deleted files protection: There is no mechanism to retrieve deleted files today but the activity mechanism, currently in beta, may be an indication this is coming.
  • No device management: There is no way to manage the multiple devices you have permitted to access your Tresorit account. If you lost your phone, for example, the only recourse you have right now is changing your password.
  • No MS Office file previews: There is no mobile MS Office file preview which is something only Box does right now. I really like this feature and hope it makes it in at some point.
  • 500GB file limits + 20 top-level folder max on free accounts
  • There is no way (that I see) to change the master password once it’s been set (As of v0.8.100.133, there is!)
  • No LAN sync – a very nice Dropbox feature if you use multiple PCs, as it copies bits via the LAN faster than pulling everything down from the cloud.
  • Timed syncs – Tresorit syncs appear to operate on a timed schedule, so they are not change-aware, nor are they instantaneous. Opening the client manually will trigger the sync operation.
  • No password recovery – This could go in either bucket really depending on where you stand. Bottom line, if you forget your password, kiss your Tresorit account and all data within goodbye.
  • Camera uploads are broken on Android as of 1/5/14. This worked for a short time on a previous build but the Tresorit team appears to be struggling with this one. I’ve re-enabled my Dropbox account for this purpose alone until this gets sorted out. As of 2/21/14, this is working for me!

Off to a good start at first, now wait and see

After several months of exclusive Tresorit use across multiple devices, I didn’t have too many complaints. The majority of the core features I need are there. PC client syncs are not as snappy as they should be: Dropbox clients keep a persistent connection open to dropbox.com and all other LAN sync clients, so file changes sync almost immediately, while Tresorit uses timed syncs that establish and tear down the sync session every time. To force a local sync early, you have to actually open the client, which triggers an update operation. I don’t like this. On the other hand, I do feel like my data is secure, and I have a huge storage footprint that I can grow further with referrals. This I like. New limitations introduced in 2014 have me rethinking how far I can go with Tresorit now. Of course, if a fatal flaw rears its head, I’ll tuck my tail and crawl back to Dropbox, but so far I have no reason to see that happening. Working across several devices daily, I live in my cloud storage, so this is important to me, and having it secure without taking additional steps is a huge bonus. I’m now debating my options, which include Box, Dropbox, Tresorit and Sync.

Feel free to use my referral link if you want to check out Tresorit with some free bonus space: Link

Resources:

Tresorit White Paper: Link

Citrix XenDesktop on Dell VRTX

Dell Desktop Virtualization Solutions/Cloud Client Computing is proud to announce support for the Dell VRTX on DVS Enterprise for Citrix XenDesktop. Dell VRTX is the new consolidated platform that combines blade servers, storage and networking into a single 5U chassis, which we are enabling for the Remote Office/Branch Office (ROBO) use case. There was tremendous interest in this platform at both the Citrix Synergy and VMworld shows this year!

Initially this solution offering will be available running on vSphere, with Hyper-V following later, ultimately following a similar architecture to what I did in the Server 2012 RDS solution. The solution can be configured with either 2 or 4 M620 blades, Shared Tier 1 (cluster in a box), with 15 or 25 local 2.5” SAS disks: 10 x 15K SAS disks are assigned to 2 blades for Tier 1, or 20 disks for 4 blades. Tier 2 is handled via 5 x 900GB 10K SAS disks. A high-performance shared PERC is behind the scenes accelerating IO and enabling all disk volumes to be shared amongst all hosts.
Targeted sizing for this solution is 250-500 XenDesktop (MCS) users, with HA options available for networking. Since all blades will be clustered, all VDI sessions and mgmt components will float across all blades to provide redundancy. The internal 8-port 1Gb switch can connect to existing infrastructure, or a Force10 switch can be added ToR. Otherwise nothing external is required for the VRTX VDI solution, other than the end user connection device. We of course recommend Dell Wyse Cloud Clients to suit those needs.

For the price point this one is tough to beat. Everything you need for up to 500 users in a single box! Pretty cool.
Check out the Reference Architecture for Dell VRTX and Citrix XenDesktop: Link