Citrix XenDesktop and PVS: A Write Cache Performance Study

If you’re unfamiliar, PVS (Citrix Provisioning Server) is a vDisk deployment mechanism available for use within a XenDesktop or XenApp environment that uses streaming for image delivery. Shared read-only vDisks are streamed to virtual or physical targets on which users access random pooled or static desktop sessions. Random desktops are reset to a pristine state between logoffs, while users requiring static desktops have their changes persisted within a Personal vDisk pinned to their own desktop VM. Any changes that occur during a user session are captured in a write cache. This is where the performance-demanding write IOs occur and where PVS offers a great deal of flexibility as to where those writes land. Write cache destination options are defined via PVS vDisk access modes, which can dramatically change the performance characteristics of your VDI deployment. While PVS does add a degree of complexity to the overall architecture, since its own infrastructure is required, it is worth considering because it can reduce the amount of physical computing horsepower required for your VDI desktop hosts. The following diagram illustrates the relationship of PVS to Machine Creation Services (MCS) in the larger architectural context of XenDesktop. Keep in mind that PVS is frequently used to deploy XenApp servers as well.
PVS 7.1 supports the following write cache destination options (from Link):

  • Cache on device hard drive – Write cache can exist as a file in NTFS format, located on the target-device’s hard drive. This write cache option frees up the Provisioning Server since it does not have to process write requests and does not have the finite limitation of RAM.
  • Cache on device hard drive persisted (experimental phase only) – The same as Cache on device hard drive, except cache persists. At this time, this write cache method is an experimental feature only, and is only supported for NT6.1 or later (Windows 7 and Windows 2008 R2 and later).
  • Cache in device RAM – Write cache can exist as a temporary file in the target device’s RAM. This provides the fastest method of disk access since memory access is always faster than disk access.
  • Cache in device RAM with overflow on hard disk – When RAM is zero, the target device write cache is only written to the local disk. When RAM is not zero, the target device write cache is written to RAM first.
  • Cache on a server – Write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk IO and network traffic.
  • Cache on server persistent – This cache option allows for the saving of changes between reboots. Using this option, after rebooting, a target device is able to retrieve changes made from previous sessions that differ from the read only vDisk image.

Many of these were available in previous versions of PVS, including cache to RAM, but what makes v7.1 more interesting is the ability to cache to RAM with overflow to HDD. This provides the best of both worlds: extreme RAM-based IO performance without the risk, since you can now overflow to HDD if the RAM cache fills. Previously you had to be very careful to ensure your RAM cache didn’t fill completely, as that could result in catastrophe. Granted, if the need to overflow does occur, affected user VMs will be at the mercy of your available HDD performance capabilities, but this is still better than the alternative (BSOD).
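To make the mechanics concrete, here is a minimal Python sketch of the general “fill RAM first, spill to disk” idea. It is purely illustrative; PVS implements its write cache as a block-level driver, not like this, and the class, names and sizes below are invented for the example.

```python
import os

class RamOverflowCache:
    """Toy model of a write cache that fills RAM first, then spills to disk.

    Illustrative only -- not how the PVS write cache is actually implemented.
    """

    def __init__(self, ram_limit_bytes, overflow_path):
        self.ram_limit = ram_limit_bytes
        self.ram_cache = {}            # block_offset -> bytes held in RAM
        self.ram_used = 0
        self.overflow = open(overflow_path, "wb+")  # backing file on local disk
        self.on_disk = {}              # block_offset -> offset in overflow file

    def write(self, block_offset, data):
        """Absorb a write in RAM if it fits, otherwise spill it to the overflow file."""
        if self.ram_used + len(data) <= self.ram_limit:
            self.ram_cache[block_offset] = data
            self.ram_used += len(data)
        else:
            self.overflow.seek(0, os.SEEK_END)
            self.on_disk[block_offset] = self.overflow.tell()
            self.overflow.write(data)   # the "overflow to HDD" path

    def read(self, block_offset, size):
        """Serve reads from RAM when possible, falling back to the overflow file."""
        if block_offset in self.ram_cache:
            return self.ram_cache[block_offset]
        if block_offset in self.on_disk:
            self.overflow.seek(self.on_disk[block_offset])
            return self.overflow.read(size)
        return None  # not in the write cache -> would come from the read-only vDisk


# Example: a 256MB RAM cache absorbing 4KB writes before anything touches disk.
cache = RamOverflowCache(256 * 1024 * 1024, "wcache_overflow.bin")
cache.write(0, b"\x00" * 4096)
```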

Results

Even when caching directly to HDD, PVS shows lower IOPS per user than MCS does on the same hardware. We decided to take things a step further by testing a number of different caching options. We ran tests on both Hyper-V and ESXi using our 3 standard user VM profiles against LoginVSI’s low, medium, and high workloads. For reference, below are the standard user VM profiles we use in all Dell Wyse Datacenter enterprise solutions:

Profile Name | Number of vCPUs per Virtual Desktop | Nominal RAM (GB) per Virtual Desktop | Use Case
Standard | 1 | 2 | Task Worker
Enhanced | 2 | 3 | Knowledge Worker
Professional | 2 | 4 | Power User

We tested three write caching options across all user and workload types: cache on device HDD, RAM + Overflow (256MB), and RAM + Overflow (512MB). Doubling the amount of RAM cache on the more intensive workloads paid off big, netting a reduction in host IOPS to nearly 0. That’s almost 100% of user-generated IO absorbed completely by RAM. We didn’t capture the IOPS generated in RAM here using PVS, but it is the fastest medium available in the server, and from previous work with other in-RAM technologies I can tell you that 1600MHz RAM is capable of tens of thousands of IOPS per host. We also tested thin vs thick provisioning using our high-end profile when caching to HDD, just for grins. Interestingly, thin provisioning outperformed thick for ESXi, while the opposite proved true for Hyper-V. To achieve these impressive IOPS numbers on ESXi it is important to enable intermediate buffering (see links at the bottom). The RAM + overflow results below are the ones to note. Note: IOPS per user below indicates IOPS generation as observed at the disk layer of the compute host. This does not mean these sessions generated close to no IOPS.

Hypervisor | PVS Cache Type | Workload | Density | Avg CPU % | Avg Mem Usage GB | Avg IOPS/User | Avg Net KBps/User
ESXi | Device HDD only | Standard | 170 | 95% | 1.2 | 5 | 109
ESXi | 256MB RAM + Overflow | Standard | 170 | 76% | 1.5 | 0.4 | 113
ESXi | 512MB RAM + Overflow | Standard | 170 | 77% | 1.5 | 0.3 | 124
ESXi | Device HDD only | Enhanced | 110 | 86% | 2.1 | 8 | 275
ESXi | 256MB RAM + Overflow | Enhanced | 110 | 72% | 2.2 | 1.2 | 284
ESXi | 512MB RAM + Overflow | Enhanced | 110 | 73% | 2.2 | 0.2 | 286
ESXi | Device HDD only, thin provisioned | Professional | 90 | 75% | 2.5 | 9.1 | 250
ESXi | Device HDD only, thick provisioned | Professional | 90 | 79% | 2.6 | 11.7 | 272
ESXi | 256MB RAM + Overflow | Professional | 90 | 61% | 2.6 | 1.9 | 255
ESXi | 512MB RAM + Overflow | Professional | 90 | 64% | 2.7 | 0.3 | 272

For Hyper-V we observed a similar story, and we did not enable intermediate buffering, at the recommendation of Citrix. This is important! Citrix strongly recommends against using intermediate buffering on Hyper-V as it degrades performance. Most other numbers are well in line with the ESXi results, save for the cache-to-HDD numbers being slightly higher.

Hypervisor | PVS Cache Type | Workload | Density | Avg CPU % | Avg Mem Usage GB | Avg IOPS/User | Avg Net KBps/User
Hyper-V | Device HDD only | Standard | 170 | 92% | 1.3 | 5.2 | 121
Hyper-V | 256MB RAM + Overflow | Standard | 170 | 78% | 1.5 | 0.3 | 104
Hyper-V | 512MB RAM + Overflow | Standard | 170 | 78% | 1.5 | 0.2 | 110
Hyper-V | Device HDD only | Enhanced | 110 | 85% | 1.7 | 9.3 | 323
Hyper-V | 256MB RAM + Overflow | Enhanced | 110 | 80% | 2 | 0.8 | 275
Hyper-V | 512MB RAM + Overflow | Enhanced | 110 | 81% | 2.1 | 0.4 | 273
Hyper-V | Device HDD only, thin provisioned | Professional | 90 | 80% | 2.2 | 12.3 | 306
Hyper-V | Device HDD only, thick provisioned | Professional | 90 | 80% | 2.2 | 10.5 | 308
Hyper-V | 256MB RAM + Overflow | Professional | 90 | 80% | 2.5 | 2.0 | 294
Hyper-V | 512MB RAM + Overflow | Professional | 90 | 79% | 2.7 | 1.4 | 294

Implications

So what does it all mean? If you’re already a PVS customer this is a no-brainer: upgrade to v7.1 and turn on “cache in device RAM with overflow to hard disk” now. Your storage subsystems will thank you. The benefits are clear in ESXi and Hyper-V alike. If you’re deploying XenDesktop soon and debating MCS vs PVS, this is a very strong mark in the “pro” column for PVS. The fact of life in VDI is that we always run out of CPU first, but that doesn’t mean we get to ignore or undersize IO performance, as that’s important too. Enabling RAM to absorb the vast majority of user write cache IO allows us to stretch our HDD subsystems even further, since their burden is diminished. Cut your local disk costs by 2/3 or stretch those shared arrays 2 or 3x. PVS cache in RAM + overflow allows you to design your storage around capacity requirements, with less need to overprovision spindles just to meet IO demands (which results in wasted capacity).
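As a back-of-the-napkin illustration of that point, here is the kind of host-level math the ESXi Professional numbers above imply. The per-spindle IOPS rating is my own assumption for the example, not a number from our testing.

```python
# Back-of-the-napkin host sizing using the ESXi Professional results above.
# The per-spindle rating is an assumption for illustration only (125 IOPS is a
# rough planning figure for a 10K SAS drive, not a number from this testing).
users_per_host = 90
iops_per_user = {
    "Cache on device HDD": 9.1,
    "512MB RAM + Overflow": 0.3,
}
assumed_iops_per_spindle = 125

for cache_type, per_user in iops_per_user.items():
    host_iops = users_per_host * per_user
    spindles_needed = max(1, -(-int(host_iops) // assumed_iops_per_spindle))  # ceiling
    print(f"{cache_type}: ~{host_iops:.0f} host IOPS -> ~{spindles_needed} spindle(s)")

# Cache on device HDD: ~819 host IOPS -> ~7 spindle(s)
# 512MB RAM + Overflow: ~27 host IOPS -> ~1 spindle(s)
```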
References:
DWD Enterprise Reference Architecture
http://support.citrix.com/proddocs/topic/provisioning-7/pvs-technology-overview-write-cache-intro.html
When to Enable Intermediate Buffering for Local Hard Drive Cache

Sync.com: Secure Cloud Storage Contender


*Updated 5/16/14 – Sync just announced a new Vault feature that allows you to keep some files in the cloud that you don’t want to sync down to devices.

Previously, I took a look at another promising secure cloud storage provider called Tresorit, currently my default, which you can read about here. Toronto-based Sync.com (currently in beta) looks to be going after the same market during a poignant time of concern over privacy and data security. This is ever more relevant now that Dropbox has updated their TOS, so more users will be looking for a more secure offering. Given the NSA’s ability to snoop data “on the wire” as well as within US datacenters, this is also, sadly, a time when considering cloud storage providers outside US soil is a good thing.

 

“Zero Knowledge” Architecture

Sync.com uses what they call a zero knowledge storage environment, meaning that they have no knowledge of, nor ability to access, the files you upload. Just like Tresorit, Sync.com provides a cloud encryption solution, meaning neither solution encrypts your files on local disk; only files sent up to and stored in the cloud are encrypted as they leave your PC.

There is a degree of trust one has to be willing to accept with these solutions, as we really have no way to definitively prove that what is stored in the cloud is actually encrypted. Nor are we privy to the security standards or mechanisms in place in these datacenters to keep our data secure. That said, Sync.com expresses interest on their website in making their client software open source, so greater transparency will come when that happens.

Sync.com does not provide a detailed white paper like Tresorit does, but they do describe their methods. Pretty straightforward stuff: your Sync.com account password protects your private key (AES-256), and this key is used to encrypt a “file unlock key” for every file you upload to the cloud. File transfer and authentication happen via SSL. Authentication strings are SHA-256 hashed (never sent in the clear) and stored in their database as salted, bcrypt-hashed strings immune to rainbow table lookups. This is their claim; I can’t speak to its validity.
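Based on that description, the scheme sounds like standard key wrapping: a key derived from your password protects a private key, which in turn wraps a random per-file key stored alongside each encrypted file. The sketch below is only my interpretation using generic primitives (PBKDF2 and AES-GCM from the Python cryptography package); it is not Sync.com’s actual code, and their algorithms and parameters may differ.

```python
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(password: str, salt: bytes) -> bytes:
    # Derive a 256-bit key from the account password (PBKDF2-SHA256 here;
    # the real KDF and iteration count are unknown to me).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=32)

salt = os.urandom(16)
password_key = derive_key("correct horse battery staple", salt)

# The account's private key is wrapped (encrypted) by the password-derived key.
private_key = AESGCM.generate_key(bit_length=256)
pk_nonce = os.urandom(12)
wrapped_private_key = AESGCM(password_key).encrypt(pk_nonce, private_key, None)

# Each uploaded file gets its own random "file unlock key". The file is encrypted
# with that key, and the key itself is wrapped by the private key and stored next
# to the ciphertext -- which is what would let a web portal decrypt on demand
# once you supply your password.
file_key = AESGCM.generate_key(bit_length=256)
file_nonce = os.urandom(12)
ciphertext = AESGCM(file_key).encrypt(file_nonce, b"contents of the uploaded file", None)
fk_nonce = os.urandom(12)
wrapped_file_key = AESGCM(private_key).encrypt(fk_nonce, file_key, None)

# Reading the file back: unwrap the private key, unwrap the file key, decrypt.
recovered_private = AESGCM(password_key).decrypt(pk_nonce, wrapped_private_key, None)
recovered_file_key = AESGCM(recovered_private).decrypt(fk_nonce, wrapped_file_key, None)
assert AESGCM(recovered_file_key).decrypt(file_nonce, ciphertext, None) == b"contents of the uploaded file"
```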

It looks like packaging each encrypted file with its private-key-encrypted file unlock key is what enables Sync.com to provide access to your files via a web portal (unlike Tresorit). This is closer to the LastPass method, which also provides a web portal for that offering. I created a quick diagram that illustrates their method below.

Just like Dropbox, Sync.com ultimately rides on the Amazon infrastructure, but it uses the Elastic Compute Cloud (EC2) rather than S3 and keeps many local processes running at all times. The three most notable are the taskbar process, several dispatch instances, and a watch master process. Interestingly, one of the dispatch processes keeps a persistent connection to a Network Solutions domain which also receives the majority of all bytes sent into the Sync.com cloud. All files added to any Sync.com folder locally get pushed up to this netfirms.com address first. It’s unclear to me what happens from there. Dropbox file uploads establish connections directly to AWS as well as Dropbox servers. Sync.com appears to be proxying all file uploads to an unknown location.

Keeping processes alive with persistent connections to the cloud definitely enables speedy syncs, up and down. Tresorit still uses the timed sync approach where at specified times its processes are spun up, connections to the cloud made, files synced, then connections and processes torn down. Not as speedy to sync, but there are no persistent connections.
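To illustrate the architectural difference, here is a rough Python sketch of the two models: a timed sync loop versus an event-driven watcher with a persistent process (using the third-party watchdog package). It shows the contrast in approach only, not either vendor’s actual client logic.

```python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def sync_folder(path):
    # Placeholder for "connect, upload changes, disconnect".
    print(f"syncing {path}")

# Model 1: timed sync (Tresorit-style) -- wake up on an interval, sync, tear down.
def timed_sync(path, interval_seconds=300):
    while True:
        sync_folder(path)
        time.sleep(interval_seconds)   # changes wait until the next cycle

# Model 2: persistent watcher (Sync.com/Dropbox-style) -- react to changes immediately.
class SyncOnChange(FileSystemEventHandler):
    def on_any_event(self, event):
        sync_folder(event.src_path)    # near-instant sync when a file changes

observer = Observer()
observer.schedule(SyncOnChange(), path="/path/to/sync/folder", recursive=True)
observer.start()                        # keeps a process and connection alive
```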

 

Sync.com Client

User accounts are created via the download and install of the Sync.com client. There is no way to log into the web portal until the client has been installed and an account created first. Just like Tresorit, Sync.com provides a 5GB free account, which is much larger than Dropbox’s 2GB. They don’t specifically mention a file size limit, so to test I uploaded a 600MB file and then a 3.7GB file; both went through. This could be a byproduct of the beta, so it’s uncertain whether this will stick around, but if so it could be very promising.

 

The basic functionality of Sync.com is exactly like that of Dropbox: one folder whose contents get synced up and down. Drop in files or folders and they get uploaded to the cloud and to other Sync.com clients. The software client itself is very reminiscent of earlier versions of the Dropbox client.

They do provide an up/down bandwidth limiter aimed at those who would presumably run up against mobile data limits, although there is currently no mobile client. It’s curious that they would include this in the desktop client.

The one key point of differentiation here is that in the client preferences they provide a progress tab showing any activity and estimated time remaining. Neither Dropbox nor Tresorit do this.

The right-click system tray menu is almost identical to Tresorit’s.

Web Portal

From a local PC capabilities perspective, that’s about it. All other functions, including sharing, happen from the web portal. Considering that Tresorit does not currently have a web portal, this is a pretty big deal.

The aesthetic and design of the web portal clearly took many cues from Box.net, although the Sync.com team managed to make it much cleaner and more intuitive.

 

Navigation and file operations are fairly predictable, along with the expected ability to upload files to the web portal directly; you don’t have to use the desktop client to create new files. At this time folders cannot be uploaded to the portal, but more than one file can be selected and added to the upload queue. Cool!

 

Box.net and Dropbox both offer a nice online photo viewer, with Box’s offering head and shoulders ahead of the other two. Sync.com’s implementation follows pretty close to what Dropbox has in play.

 

Sharing

Sharing also happens via the web portal and can be done in a few different ways. The simplest method is Secure Links, which enable the secure public sharing of single files or folders. This sharing method is probably best suited to blind, read-only sharing amongst larger groups. First, select the file or folder to create a public link for by clicking the chain icon next to a given item.

You’ll then receive a pop-up with the link URL, and you can opt whether or not to include the link password in the URL itself or keep it separate.

Other, more precise ways to share content involve explicit sharing with people you specify. Clicking the “Create a share” button will create a new folder and allow you to invite specific people via email addresses. The invitation part is mandatory (no invited people = no share), but you can invite only yourself to test.

The third way to share is by clicking the plus sign next to any existing file or folder and enabling sharing directly.

The same sharing dialog will appear, which, again, requires entering someone else’s email address to create the share.

Once shares are created, they can be managed either from the top level sharing link or via the “Manage Share” dialog next to any shared item.

Sharing management is simple and provides almost everything you need: member status, permissions, and the ability to re-invite, invite new members, or stop sharing. Permissions need to be a bit more robust here, ideally including read-only and editing levels.

 

Mobile Client

As of this writing there is no Sync.com mobile client, which will be a deal breaker for some, but this is obviously coming. Sync.com states in their blog that they will simul-release clients for both Android and iOS, but that was printed in July of last year. The web portal does work on a mobile device, however, and even has nice photo viewer functionality.

 

What Sync.com does well

  • Files synced to the cloud are encrypted in transit (SSL) and at rest in the Sync.com datacenter (AES-256).
  • Very large file limits or none at all (>550MB and 3.7GB verified on my free account, but this could and likely will change)
  • The ability to store, access and share encrypted files via a web portal is a big deal; no one else in the business can do this today!
  • Split-brain: Ability to host some files in the cloud only that don’t have to be sync’d down to clients.
  • Multi-client subfolder sync (Have to check since Box has struggled with this)
  • 5GB free accounts with 500MB referrals, no indication of limits
  • Robust and granular sharing options
  • Robust and well-known compute backend (Amazon EC2)
  • Persistent cloud connections and watcher processes = near instant file syncs
  • 30-day file history to restore deleted or previous file versions

Where Sync.com falls short

  • Amazon EC2 is a flexible compute service for web services, not a storage service like Amazon S3. Where is Sync.com actually storing our data? 
  • No mobile Android or iOS client yet, but they say this is coming soon in their blog.
  • No mobile client = no camera uploads
  • Web portal provides upload ability for multi-select individual files only, not entire folders like Box does.
  • Single bucket root folder like Dropbox. I prefer Tresorit’s approach of multiple root folders residing anywhere you like. Sync the folders you like on the clients you like, and save local space if you need to.
  • No LAN sync like Dropbox but seems they have the framework to make this a reality at some point.
  • No MS Office mobile previews like Box.net but this too could be accomplished.

Watch this space!

The Sync.com team has clearly tried to preserve the best of what makes Dropbox great while incorporating other useful features, the likes of those used by Box.net, and providing data security in the cloud. The current climate of public technology will soon make this a requirement, no longer merely a luxury. This will leave the cloud storage pioneers (Dropbox) to retool and scramble as they figure out how to make their now very mature offerings more secure, if they hope to survive. There’s too much good competition appearing in this space, with nothing really compelling to keep people using a less secure service. Building this type of offering with security as the first focus is absolutely the right approach and one that will pay dividends for companies like Tresorit and Sync.com. After a few days using Sync.com there’s really not much to dislike, though there are a few key features I’d like to see before I consider (another) switch. Tresorit better hurry up with their web portal or they will be outdone here. Hopefully Sync.com will offer some of us beta testers a 50GB+ opportunity soon. 🙂

Feel free to use my referral link if you want to check out Sync.com with some free bonus space: here.

References:

Sync.com Privacy

Sync.com Blog

Goodbye Dropbox, Hello Tresorit?

Check out my other cloud storage posts as well:

*Updated 3/24/14

Part of living on the bleeding edge of technology is always looking for that better mouse trap. Needs and circumstances change along with the climate we live in, and these things drive evolutionary product innovations and robust competition. No service or offering is perfect, so there are always concessions to be made, but as long as you aren’t stepping backwards, technological change can be a good thing. I previously wrote about a potential shift from Dropbox to Box, but due to a fatal flaw in the Box offering I moved away from that ledge. (With the Box.net Sync v4 client, this is purportedly fixed, finally, although I haven’t tested it.) Even still, Box = insecure cloud storage, since they encrypt contents with private keys they own. Tresorit started with a very strong launch but is now imposing limits that make the offer much less compelling. Read on for more info.

So what’s wrong with Dropbox? Honestly, not much; it is the feature/functionality yardstick that all other cloud storage providers measure themselves against. The 2 big pain points I see are free storage space and security. It has become relatively easy to get 50GB on competing platforms, so Dropbox providing a meager 2GB, with opportunities to increase it to ~20GB with referrals on the high side, is paltry. That said, Tresorit looks to be matching the 16GB of possible extra referral space, which will net them 3GB more in total in the end. Secondly, beyond SSL + 2-factor authentication to access your Dropbox, you are on your own for securing your contents, which ultimately reside in the Amazon S3 cloud. Although Dropbox claims to encrypt your files in the S3 cloud, this has proven to be exploitable and remains a problem for them today. Tresorit looks hopeful on solving both of these problems while providing the majority of the interesting features native to Dropbox.

What is Tresorit?

Tresorit is another contender in the cloud storage category with a heavy emphasis on security. Security is the first and foremost consideration of Tresorit, not an afterthought (see the whitepaper link at the bottom). A quick overview of the related nomenclature before we go deeper:

Tresorit (“Treasure It”) is the holistic cloud storage solution centered around protecting personal data in transit and at rest in the cloud.

Tresors are individual folders that have been client-side encrypted and synced up to the cloud.

The technique employed here is very similar to what LastPass does: all data is encrypted client-side using a strong secret password and only the encrypted data is sent up to the cloud over secure channels. Tresorit never has your password, so only you or those you explicitly share with can decrypt your data via the Tresorit client. The communication channels use SSL/TLS, but the security of this solution does not depend upon it since your data package itself is encrypted. Shared Tresors are decrypted client-side for the people you have expressly allowed access. It’s important to note that your files are not encrypted locally at rest on disk; they are only encrypted when synced up to the cloud.

Everything happens via the Tresorit client which must be installed on your PC or mobile device. There is no web portal to log into. Account registration and referral processing also happen via the client. Once installed, start creating new Tresors by connecting to any folder anywhere in your local file system. This is one of the cool things about Tresorit: many buckets, not just one like Dropbox. Although I have admittedly grown accustomed to keeping all things cloud in a single folder. Each Tresor will then be encrypted and sent up to the cloud.

Tresorit also handles selective syncing very intuitively: all Tresors are disconnected by default. You choose what to sync on which clients simply by choosing to connect to the Tresor you would like to pull down on a given device. I have a 70GB account, which is nearly 4x what my Dropbox account is, so it is nice that I can selectively connect and disconnect those buckets as I see fit. Dropbox provides selective sync for top-level folders 1 level below the root Dropbox folder. Functionally no different, but Tresorit’s method is cleaner.

 

Any Tresor can be securely shared with anyone you like with 3 simple permissions to attach to each invitee.  

Any Tresor can be individually disconnected, deleted from the cloud, or have its sync stopped. This is an improvement over the Dropbox offering. Sync can also be disabled globally too, of course. Disconnecting a Tresor from the PC or deleting a Tresor from the cloud does not delete the files locally.

 

On the mobile side, Tresorit also supports seamless mobile camera uploads, just like Dropbox, which is an awesome feature. Not having a web portal to access your files, however, not only requires you to install Tresorit but also to connect and download a full Tresor before you can manipulate any files within it. If you have a space-limited device this can prove challenging for obvious reasons. If you have an Android or iOS device, the Tresorit client on those platforms will allow more granular manipulation without having to first download a full Tresor. You can delete individual files or send a file, which will trigger a download, but for that file only. It seems like the same functionality should eventually be brought to the desktop client as well.

New Features (*updated 3/24/14)

Encrypted links allow you to securely share any individual file in any Tresor. Free users are limited both in the number of links that can be created and in how many times each can be downloaded. Not tremendously different from Dropbox’s sharing mechanism, but secure and much more limited.


File activity can now be viewed from within the Tresorit desktop or mobile client. Encrypted links can be generated for the latest version of any active file, however deleted or prior versions cannot be viewed or retrieved.


Plans

There are paid plan options just like all of these types of services but keep in mind the limitations of the free version (which have sadly increased in Q1 of 2014). New limitations have been introduced around the number of devices you can use, monthly traffic, bandwidth speeds and the number of people you can share with (down to 5 from 10). The device limit alone is now a deal breaker for me, maybe not for you. 


 

Disconnected Tresor sync

To prove that Tresorit could handle a net-new file added to an incomplete, disconnected Tresor, I performed the following test:

  • Disconnected a Tresor folder with incomplete content (only 50% downloaded from the cloud)
  • Uninstalled the Tresorit client on the same PC, then reinstalled it
  • Added a new file to the existing directory structure 3 folder levels deep (still not synced)
  • Connected the Tresor and let it sync

This performed flawlessly. The PC that had the Tresor with the 50% sync picked up where it left off (despite the Tresorit client being uninstalled/ reinstalled) and successfully pushed up the new file to the other PCs connected to the same Tresor. This is logical and exactly how it should work. Box couldn’t do this (maybe still can’t) which was their fatal flaw and why I felt compelled to test this here.

The one issue that popped up during this test was the fact that when reinstalling the client, it automatically logged into the same account with no prompt for credentials. This is obviously a serious problem if you ever wanted to install Tresorit at a friend or family member’s house to access a file, then uninstall thinking your account is safe. I contacted Tresorit about this and they said the issue will be resolved in the next release. As a precaution, manually logging out of the client before uninstalling would be a prudent step to protect your account.

 

What Tresorit does well

  • Auto-encrypt and sync any folder anywhere on your Mac/PC to the cloud. (not just a single bucket like Dropbox)
  • Multi-client sub-folder sync
  • Granular selective folder sync enabled by default
  • 5GB space for free users (>2x more than Dropbox), bonus referrals of 16GB, opportunities to get 50GB+ (Gone for now)
  • Robust mobile client for Android/iOS that includes camera uploads, selective local caching, and passcode lockout
  • Actual support! – That’s right, you can open a support request with Tresorit and actually expect an email back in a reasonable time. The staff seems genuinely interested in helping.

Where Tresorit falls short

  • Limitations – number of devices that can access an account, plus download, bandwidth, and sharing caps.
  • No web portal: Tresorit provides no web portal through which you can access your files. If this is possible with LastPass, also client-side encrypted, it should be possible here.
  • Full folder download required: A full Tresor download is required to manipulate files in the PC client. The mobile client provides more granularity, being able to access, delete or send individual files without downloading the full Tresor. This applies to sharing as well; it’s a hassle to have to download 100% of the files in a given Tresor.
  • No deleted files protection: There is no mechanism to retrieve deleted files today but the activity mechanism, currently in beta, may be an indication this is coming.
  • No device management: There is no way to manage the multiple devices you have permitted to access your Tresorit account. If you lost your phone, for example, the only recourse you have right now is changing your password.
  • No MS Office file previews: There is no mobile MS Office file preview which is something only Box does right now. I really like this feature and hope it makes it in at some point.
  • 500GB file limits + 20 top-level folder max on free accounts
  • There is no way (that I see) to change the master password once it’s been set (As of v0.8.100.133, there is!)
  • No LAN sync – a very nice Dropbox feature if you use multiple PCs, as it copies bits via the LAN faster than pulling everything down from the cloud.
  • Timed syncs – Tresorit syncs appear to operate on a timed schedule so are not change aware nor are they instantaneous. Opening the client manually will trigger the sync operation.
  • No password recovery – This could go in either bucket really depending on where you stand. Bottom line, if you forget your password, kiss your Tresorit account and all data within goodbye.
  • Camera uploads are broken on Android as of 1/5/14. This worked for a short time on a previous build but the Tresorit team appears to be struggling with this one. I’ve re-enabled my Dropbox account for this purpose alone until this gets sorted out. As of 2/21/14, this is working for me!

Off to a good start at first, now wait and see

After several months of exclusive Tresorit use across multiple devices, I really didn’t have too many complaints. The majority of the core features I need are there. PC client syncs are not as snappy as they should be. Dropbox clients keep a persistent connection open to dropbox.com and all other LAN sync clients, so file changes sync almost immediately. Tresorit uses timed syncs which establish and tear down the sync session every time. To force a local sync early, you have to actually open the client, which triggers an update operation. I don’t like this. On the other hand, I do feel like my data is secure and I have a huge storage footprint that I can grow further with referrals. This I like. New limitations introduced in 2014 have me rethinking how far I can go with Tresorit now. Of course, if a fatal flaw rears its head, I’ll tuck my tail and crawl back to Dropbox, but so far I have no reason to see that happening. Working across several devices daily, I live in my cloud storage, so this is important to me, and having it secure without taking additional steps is a huge bonus. I’m now debating my options, which include Box, Dropbox, Tresorit and Sync.

Feel free to use my referral link if you want to check out Tresorit with some free bonus space: Link

Resources:

Tresorit White Paper: Link

New PC Build for 2013

Contrary to popular attempts to convince you otherwise, the PC is not dead. Fewer novice-level users may actually require a PC these days thanks to smart phones and tablets, but I’m not quite ready to concede to “app store” style games and pecking out emails with my index fingers pressed on smudgy glass as my only means. I have smart phones, a tablet, several laptops, and yet I still find myself wanting and using a PC more than the others combined. Multi-displays serving rich graphics rendered from a discrete GPU, Chrome with 50 tabs opened at once, a few RDP sessions, streaming audio, RARs unpacked and copied in seconds, Visio, PowerPoint, Premiere Pro, the latest DX11 game titles…you get the picture. Some things just work better with a mouse and keyboard and honestly, unless Minority Report ever becomes a reality (via Kinect perhaps?) this will probably always be the case. My current PC is still humming along just fine with a 4-year old Windows 7 build on what was then top of the line hardware. Ivy Bridge is here now; parts are more efficient, use less power, and are faster than ever. Time to upgrade.

Parts Selection

 
My motivation for this build was small and quiet. Being insanely powerful as well should go without saying. With these goals in mind, I set out to build a power house in a Small Form Factor (SFF). I don’t use PCIe slots for anything but my GPU anymore and, unless you plan to install a massive RAID array (I don’t), ATX just takes up a lot of desk space. The two predominant choices for SFF are Micro ATX (mATX) and Mini ITX (mITX). The size of the motherboard and its available peripherals defines the difference, with mITX only supporting 2 x DIMM slots (16GB max) plus a single PCIe slot (x16) and mATX supporting 4 x DIMM slots (32GB) plus 2 PCIe slots. Being limited to a single GPU for the life of this build was a key concern: if down the road I wanted to add a card and run SLI, I could not. My previous PC had 24GB RAM in 6 x DIMMs (triple channel) and came very close to consuming 16GB fairly regularly. Topping out at 16GB for my next platform is cutting it too close. If mITX supported 32GB RAM, it would have been a tougher decision. It’s really a shame because the mITX cases on the market at the moment are among the most beautiful available. Fractal Design’s Node 304 case, for example, almost had me committed to mITX just to build using this case! Simple, clean, highly functional and superbly constructed. There is nothing quite like this in the mATX space, sadly.

Now that I decided mATX was the way to go, it was time to assemble the Bill of Materials. Case selection was not an easy feat and I usually find myself just ahead of the curve, wanting what’s just about to come out. My last few ATX builds have been using Lian-Li cases which are high quality, aluminum, artful designs that do tend to cost more. Silverstone is another manufacturer of like ilk creating some of the highest reviewed products in this space. Taiwan seems to have the high-end PC case market cornered between these two companies.

I looked at a lot of cases over the course of several days seeking the smallest, most functional mATX case I could find. I narrowed my decision to 2 cases: the PC-V355B by Lian-Li and the Sugo SG09 by SilverStone. Lian-Li’s case is the closest thing out there to the Node 304 in looks; trouble is, there are fitment limitations and design problems. The biggest restriction is the CPU cooler height at 100mm, which rules out any high performance tower-style cooler. Some people have reported that the motherboard, once installed, does not sit flush to the IO shield in the back. Another problem is that the PSU perches over the motherboard without even a shelf to support it! TweakTown’s review of this case highlighted that using a larger ATX PSU actually caused the rear of the case to flex. Not good. Shame, because she’s pretty.

SilverStone, on the other hand, created a case that is not only smaller but has no restrictions on CPU coolers or GPU lengths, plus it supports ATX-sized PSUs on a proper shelf. The SG09 has received high praise from just about every review I read. Space efficient, quality construction, free flowing and small. Exactly what I’m looking for. The largest point of contention with the SG09 is its looks. At first glance I didn’t like it either. That front panel is actually plastic made to look like brushed aluminum. I can honestly say that I’ve stared at this thing so long now that it has really grown on me. SilverStone announced at CES a premature successor to the SG09 called the SG10. It’s essentially the same case with a flatly-styled aluminum front panel. Initially I was holding out for the SG10, which is rumored to ship some time in April 2013. The more I looked at the 2 cases the more I liked the SG09 and less so the SG10. I’m also not afraid to modify this thing if it ever bothers me. Functionally this case is very close to mATX perfection, which at the end of the day is what matters most.

I didn’t find building in the SG09 terribly difficult, but I’ve been doing this for a while. Careful part selection is of utmost importance and having patience working in tight spaces is key. There is a big warning right on the box to dissuade novice builders.

For my power supply, considering this SFF build in a tight space, I chose a 140mm modular PSU from SilverStone: the ST55F. Gold efficiency, clean power, fully modular, with a max of 600 watts, this PSU should provide everything I need and more. I also purchased the SilverStone PP05 short cable kit which, in hindsight, is still a good idea, but I should have gone with a few higher quality PP07 cables instead. I used maybe 2 or 3 cables from the PP05 kit, leaving the others to waste. Space is at a premium, so the less you have to tuck inside of tight spaces the better! The review site JonnyGuru.com does an incredibly detailed job reviewing power supplies, so definitely stop there first to check out any PSU contenders.

CPU, RAM and Mobo

I’ve long since abandoned AMD as the economy play in the CPU market for Intel, who continues to have very little competition in the high-end CPU space. Ivy Bridge is no exception and is the smallest (22nm), most powerful hyperthreaded quad-core yet, with power consumption under 80 watts. More and more functionality is going on-chip with this “Tick” in Intel’s Tick-Tock cycle, which gets a die shrink and enhanced graphics capabilities. I debated my CPU choice, considering even the 3570, as clock speeds really don’t matter today as much as they once did. Overclockability, cores, cache, power consumption and instructions are all notable considerations, but my decision was ultimately made for me by price. 4 years ago I got an amazing deal on my 1st gen Core i7-920 CPU from MicroCenter locally; this time around MicroCenter again did not disappoint. They were selling the high-end 3770K CPU (K = unlocked) for $100 less than NewEgg, who are really the last bastion of online PC parts. The lesser chips that I was considering were more expensive than this high performer. This was really a no-brainer.

The 3770K provides a ton of performance right out of the box. Pushing this chip further will be an easy albeit potentially unnecessary task.

CPU cooling is an area that has a lot of very good data to help make your selection, if opting to buy into the after-market. The stock Intel coolers are marginally ok, barely. If any overclocking is to be attempted at all, an after-market cooler should be a serious consideration. Noctua is one of my favorite brands of cooler with somewhat expensive but highly effective products including some of the best fans around. The NH-D14 is a MASSIVE cooler that is pretty much top dog right now. It occupies a tremendous amount of real estate so low profile RAM is a must in SFF. It’s also $80. For a fraction of that cost ($35), the Cooler Master Hyper 212 Evo delivers impressive results stacking well against its pricier counterparts and was my choice for this build. It is a somewhat rawer product with a direct contact style sleeve where the CPU rests directly against the flattened copper pipes of the cooler. Some minor prep work was required on the 212 Evo before installation, namely filling the cracks between the pipes with Arctic Silver 5.

I also replaced the cheapy Cooler Master fan with a high quality Noctua for silent operation and high performance. Here it is installed in the SG09. There is plenty of clearance for DIMMs with taller heat spreaders.

For motherboards, I’ve bought into the Asus high end for a very long time and have never been disappointed. Performance, value, stability, and longevity are all traits I’ve come to expect from Asus. ASRock has ramped up as a worthy competitor along with Gigabyte with many good reviews and satisfied customers, but at the $200 price point I felt no need to stray from Asus who has only left me satisfied in the past. The Maximus V Gene is the Asus answer for the high-end mATX z77 chipset market. Overclocking, customization, features, performance… everything I could want is in this board and for less than many of its high-end competitors.

Memory is another area of agony with so many choices on the market. One thing was clear at the outset: I wanted a 32GB 1600MHz DDR3 kit (4 x 8GB). When building SFF, the size of the heat sink on the RAM is a key consideration, especially if you plan to run an after-market CPU cooler. I wanted a kit with tight timings (CAS 8 or CAS 9), low voltage (1.35v) and a low profile to sit unobstructed in the DIMM channels. The Crucial Ballistix Tactical and Sport VLP products fit this bill perfectly. The Tactical LP DIMMs (CAS 8) in yellow would have been my first choice, but there was a lack of third-party supply for this build, and buying directly from Crucial for much more really negated the value of these modules. I ended up going with the 32GB VLP kit (CAS 9) in black and have absolutely no regrets. All head-to-head reviews peg these kits very close in performance with a slight edge going to the Tactical LP for overclocking. The Tactical LPs also have a slightly taller heat spreader and carry a higher price premium. The VLP (Very Low Profile) modules sit almost flush to the DIMM channels, providing zero obstruction for any CPU cooler. These modules are seriously low profile and with Micron chips under the hood, you can absolutely trust their performance and longevity.

 

Both run very tight timings at the stock 1.35v and offer a couple of XMP profiles to boost performance. There are other modules on the market that will clock higher but for me, these are perfect. The VLP modules clock in at 800MHz out of the box with a very attractive 9-9-9-24 at a 2T command rate.

Data

MLC SSD prices continue to become more attractive with loads of available performance. At a minimum, running an SSD for your OS disk is a worthy investment. For this build I’m running 3 disks: 1 x 128GB SSD for OS, 1 x 64GB SSD for apps/scratch, 1 x 1TB 7.2K SATA for data. I have an external NAS array for backup and greater data capacity. My OS disk is the highly acclaimed Samsung Pro 840 SATA 6Gb/s that boasts random reads up to 97K IOPS and sequential reads/writes up to 530/390 MB/s, respectively. In other words, pretty damn fast.

For scratch and application files I’m reusing my Kingston SSDnow+ drive, and for data I got a new Seagate Constellation ES drive. Any drive worth buying will have a warranty of at least 5 years; those with 2-year warranties tend to wear out faster in my experience.

All hard drives are mounted cleverly behind the motherboard in the SG09. Up to 2 x 3.5” drives or 4 x 2.5” drives. Having done away with optical media a very long time ago, this new PC will not be getting a DVD/BR drive which must be slot loader style in this case anyway.

Graphics

Although the Ivy Bridge CPU now includes a more powerful on-chip graphics processor, it’s better suited to mobile devices or applications with less graphical demand. I opted to continue to run a discrete GPU for high performance graphics. The GTX 670 is a very good high/middle-end card that many benchmarks have pegged near the 680. I chose the 670 over the closely-spec’d 660Ti due to its 256-bit memory architecture which is stunted on the 660 at 192-bit and as a result offers less memory bandwidth. Most of the other specs are almost identical. Near term you would probably not notice much in the way of a severe performance difference but long game, the 670 has longer legs and more headroom. If the price is close, the 670 is the way to go.
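The memory bandwidth gap is easy to quantify: bandwidth is roughly bus width divided by 8, times the effective memory transfer rate. The 6008 MT/s figure below is the commonly published GDDR5 speed for these cards, quoted from memory, so treat the exact numbers as approximate.

```python
# Rough memory bandwidth comparison: bandwidth = (bus width / 8) * effective rate.
# The 6008 MT/s effective GDDR5 rate is quoted from memory -- treat as approximate.
effective_rate_mtps = 6008          # mega-transfers per second (effective)

for card, bus_bits in [("GTX 670", 256), ("GTX 660 Ti", 192)]:
    bandwidth_gbps = (bus_bits / 8) * effective_rate_mtps / 1000
    print(f"{card}: {bandwidth_gbps:.1f} GB/s")

# GTX 670:    ~192.3 GB/s
# GTX 660 Ti: ~144.2 GB/s (about 25% less headroom)
```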

 

 

Build Summary

As far as sourcing parts goes, I found it interesting that Amazon and NewEgg are really the only reasonable places online to source parts anymore. Sure, TigerDirect is out there (CompUSA), but I found their prices to be generally not competitive. I suppose this does reflect the current state of the custom PC today, with fewer builders perhaps, but there is still a plethora of available parts. As a long time customer of NewEgg, they appear to be doing quite well and have earned the top spot as the premier parts destination online. Their return and RMA process still leaves something to be desired, with restocking fees and customer-paid return shipping. If Amazon started stocking more PC parts I‘m pretty sure it could push NewEgg out of the game entirely. Consequently, the only thing I bought from Amazon for this build was the PSU. MicroCenter provided the CPU and Pro 840; everything else was cheaper with coupons, shipped free, and tax free from NewEgg.

Motherboard | Asus Maximus V Gene
CPU | Intel Core i7-3770K
CPU Cooler | Cooler Master Hyper 212 Evo
RAM | Crucial Ballistix VLP 32GB
Case | SilverStone SG09
PSU | SilverStone ST55F (+PP05)
OS SSD | Samsung Pro 840 128GB
Scratch/apps SSD | Kingston SSDNow 64GB
Data disk | Seagate Constellation ES 1TB
GPU | eVGA GTX 670
OS | Windows 7 Ultimate SP1 x64

 

Windows 8, or…bust?

There’s a lot to like about Windows 8: settings sync between devices, the new Windows Explorer, Task Manager, file copy/pause operations, power savings, memory savings, the “oops” reset, and some increased security. There’s a lot not to like about Win8: the Start screen, devalued search, hot corners, separate UIs to update and manage, and the fact that a device with no touch negates many of the new improvements. I’ve run every version of Win8 since the first preview release. I’ve had a lot of time to digest it, watch the changes, read the white papers and blogs, and get accustomed to the UI(s). I’m sad to say that Win8 will be the first new Windows OS ever that I will not be installing on my new PC. It just doesn’t make sense to me. The majority of the under-the-hood changes in Win8 deal with optimizing the OS for portability, which is fine, but I’m not overly concerned with lower RAM consumption or new sleep modes on my PC. I am sure that Win8 is an amazing product on a proper touch-enabled device a la the Surface or Dell XPS 12.

Win8 is a dual-personality OS with 2 distinct UIs, each with its own set of applications. Metro apps run sandboxed much like on Android or iOS and silently suspend or terminate in the background, much the same way. After a long time forcing myself to adapt, I’ve come to the conclusion that Metro just isn’t for me on a desktop PC with a keyboard and mouse. I’m not interested in live tiles or apps from an app store, nor do I want to hover over hotspots or drag windows via mechanisms designed for touch. The new Start screen is essentially an exploded version of the Start menu, but now the search functionality has been handicapped. You can still push the Windows key, type cmd and hit enter, but if you’re searching for something there are more clicks involved now between the apps, settings, and files sub-categories. Windows 7 Start search is far superior in this regard. Metro (the name Microsoft has since dropped) is an interface designed for touch, period. So I resigned myself to “live on the desktop side”: I installed Classic Shell and enabled booting to the desktop, which brought back the familiar feel of Win7. Start search is still severely lacking with Classic Shell in place, and I’ve had a few application compatibility problems as well. Win8 just doesn’t feel like an upgrade, especially having to make so many concessions to live with it. The user experience overall has gotten worse, not better, in my opinion, for non-touch-enabled devices. Win8 will likely be the new Vista (which I never had a problem with personally) and Microsoft has some time to watch the market before making their next move. They hedged their bets between tablets and PCs with Win8 and have managed to give each group enough of what they need. If this tablet thing fizzles into a fad, no problem, out comes the full-blown desktop; if the PC dies completely, Metro will be primary. I’ll continue to run Win8 on my laptop, but my new powerhouse desktop will be getting a fresh build of Win7 SP1.

Final Thoughts

Overall I am extremely pleased with my new Ivy Bridge build in the SG09. One of the few aesthetically bothersome items is that the ST55F PSU in the SG09 sits off-center to the left, so you can see the PSU logo behind the case logo. Also, the bar of stickers on the right side of the face of the PSU is somewhat distracting. The use of 90-degree SATA connectors should be avoided in this case as they can interfere with the closing of the top lid. SATA “6Gb” cables came with the V Gene board, but tests show there is no performance difference whatsoever between these and SATA cables from years ago. The 6Gb cables do have locking mechanisms on the ends, which are a nice touch.

The OS installation was a marvel all its own, taking 5 minutes flat from start to finish. Reboots are a literal 10 seconds from click to logon screen. In complete stock form this rig is a beast. I plan to overclock a bit once the TIM has broken in on the CPU, but the need to do so is really minimal. It will be for fun more than anything.

From a power consumption perspective, this PC has yet to break 200w of usage, even during peak graphical or CPU intensive operations. For anyone wondering if 550w in a SFX PSU will be enough, yes it will! I have light fixtures that consume more power than this PC! It is also very quiet with all fans operating normally. There is a fan switch in the rear of the case to boost all RPMs to high which does kick up the noise a bit. Between this build and my old rig it’s almost like my office is too quiet now. 🙂

Temperatures are so far better than average from what I’ve read, with the CPU idling at ~33-34 Celsius. This will get better as the Arctic Silver 5 sets in.

I don’t have any other performance benchmark data at this point (other than my 7.9 Windows user experience score), so I’ll just leave it at this: this thing is FAST and was well worth the upgrade. Faster, quieter, smaller, less power consumed = mission accomplished.


Your next thermal compound: Mayonnaise?

I’m building a new PC right now, which I’ll detail in another post, and am spending hours in research to catch up on the latest and greatest to make my selections. Unless you’re an avid follower of the ever-changing PC parts space, you might not keep up with the day-to-day there until it pertains to a relevant activity: a new PC build.
Thermal compound, aka thermal grease, aka thermal interface material, aka TIM, is the important heat-conducting stuff that goes most notably between the heat spreader of your CPU and the heatsink + fan mounted to it. A year ago, the hardware review site extraordinaire hardwaresecrets.com did a comprehensive comparison of almost every relevant TIM on the market, including some common household materials, the likes of mayo, butter, and toothpaste. While I’d never seriously entertain the notion of applying mayo to protect my multi-hundred dollar chips, it sure is interesting to see how these materials fared.
Arctic Silver 5, my TIM of choice, is one of the highest reviewed materials available and has been for the better part of a decade, so I’m happy to see it maintain its rightful spot in the top 3 as part of this test. What’s shocking is how well mayo actually performed, beating out many products on the market and tying with the famous CPU cooler manufacturer Noctua’s NT-H1 product. There are many other famous names on this list beaten by mayo, such as Zalman and Rosewill. Just goes to show, all TIMs are not created equal and most appear to be nothing more than expensive tubes of snake oil, er, grease rather.

Check out the full article on HardwareSecrets.com.

Review: ServerSafe Remote Backup by NetMass

ServerSafe is the enterprise online e-vaulting service from NetMass, Inc. that employs an agentless architecture to back up files over the internet to a secure datacenter. Backup methodologies have been evolving for many years and include practices the likes of disk-to-disk (D2D) and disk-to-disk-to-tape (D2D2T). It’s the tape part that most IT workers loathe and that costs companies the most. Of course, at some large scale it doesn’t make sense to pay a 3rd party to offsite your data, leaving you to implement an offsite vaulting and retention solution of your own. It’s all about “the cloud” these days and ServerSafe floats solidly in the stratosphere.

Based on the Televaulting DS-Client platform from Asigra, ServerSafe is able to provide a robust and feature-rich backup solution for Windows or Unix. To get started you need a dedicated machine to serve as the DS-Client, which will connect to the NetMass datacenter and stream all the bits across the internet. This is a 3-tier architecture that comprises a DS-System (storage array), DS-Client (backup server), and DS-User (admin console). The DS-User portion can happily live on the DS-Client system or can be installed on an administrator’s workstation for remote management. Once the DS-Client is installed and authorized to connect to the NetMass datacenter, backup sets, policies, and schedules can be created. Your data is then compressed, encrypted, and sent to the NetMass datacenter to live on their storage arrays, where they do not cap disk usage but do charge based on how much disk you consume.

All data is encrypted before it leaves your premises on its travel across the internet. AES is the algorithm and the bit level depends on the key you supply during install; a 32-character key will yield 256-bit encryption, which is, today, unbreakable. Your data is encrypted before it gets put on the wire and will sit on the storage array encrypted as well, in flight and at rest. Before a DS-Client can connect to the NetMass datacenter it has to be expressly permitted, which is a cooperative effort between you and the NetMass personnel. The DS-Client is secured by local security or Active Directory accounts combined with application-level role-based permissions.

The install consists of running an x86 or x64 installer, pointing to a NetMass-provided customer information file (CRI), installing MSDE with a new SQL instance (or pointing to another SQL server), and generating an account key for AES encryption. Once this key is created it cannot be changed without destroying the DS-Client registration and starting over from scratch. For the version 8 build of the DS-Client, only Server 2003 or XP are supported as hosts. Server 2008 can be backed up by the DS-Client but it is not yet supported as a host OS. I’m told this is coming in the v9 build.

Setup


Setup is collaborative and done via a shared desktop session with a NetMass engineer. Asigra has published a 500+ page PDF manual for this product, so the options run vast and deep. Once logged into the DS-Client, the first step is to set defaults that will control the behavior of the backup sets you create later. Generations dictate the default number of versions of a file you want to store online, and this number will be pre-populated in a backup set. The manual suggests setting the total number of generations allowed to something very high, like 300, then controlling them more granularly via retention rules. You could set up a base retention rule here also, but it might make better sense to set these up per server or file type you’re backing up. CDP is Continuous Data Protection, which means that data can be constantly backed up on the fly as it changes. Depending on your generation limits and retention rules this could be very costly in both online storage costs and bandwidth usage. Defaults for scheduling and email notification are set here as well. There are two options for compression: ZLIB or LZOP. Some quick unofficial research in the Asigra forums tells me that ZLIB has slightly higher ratios. The best I’ve seen so far is a 30GB SQL database being reduced to 2GB before transmission to NetMass. The other tabs are accurately descriptive, providing support for features you would expect in an enterprise product. The advanced tab allows for the tweaking of a number of low-level parameters, which include compression levels, retry values, and various other settings. The parameters tab controls the Admin Process, which is essentially database maintenance on the DS-Client.
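The compress-then-encrypt pipeline described above is conceptually simple. Here is a small Python illustration of the order of operations using zlib and AES-GCM; it is not Asigra’s implementation, and their actual algorithms, block handling, and key management will differ.

```python
import os
import zlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def prepare_for_upload(data: bytes, key: bytes) -> bytes:
    """Compress first, then encrypt -- compressing ciphertext would gain nothing."""
    compressed = zlib.compress(data, 6)             # ZLIB-style compression
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, compressed, None)
    return nonce + ciphertext                       # what would travel over the wire

key = AESGCM.generate_key(bit_length=256)           # AES-256, as with a 32-char key
payload = prepare_for_upload(b"backup data block " * 1024, key)
print(f"original {18 * 1024} bytes -> {len(payload)} bytes on the wire")
```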

Backups

Core functionality is completely wizard-driven and includes scheduled backups, on-demand backups, and restores. Backup options include the common file types you would be most likely to protect. You can browse the network to select target hosts or enter them by name in UNC format to connect. Once connected, the client will show you all available drives and shares that you can drill down into. The DS-Client has no trouble discovering hidden file shares ($).


For SQL Servers you need to choose the path where the database files live and the DS-Client will display each database. Choose the databases you want to include in the backup set and then decide if you want to run DBCC and truncate the logs. If your SQL Server is running in simple recovery mode then you do not need to truncate the logs here, and you will see deprecation warnings on your servers. The DS-Client uses standard Microsoft API calls and triggers all operations against the database via T-SQL. Remember this is all done agentless, so the backup client connects to SQL like a regular consumer with administrative privileges. A dump of the database has to be performed, which can occur in the same directory as the DB files, a remote location, or in the DS-Client buffer (the DS-Client machine). If you have the disk space, dumps in the same place as the DB are easiest. It is important to note that this can affect production performance as the DS-Client will generate additional server load in the form of IO and network traffic.
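The agentless approach boils down to connecting like any administrative SQL client and driving the dump with standard T-SQL. The snippet below is a generic illustration of that idea using pyodbc; the connection string, database name, and dump path are hypothetical, and this is not the DS-Client’s actual code.

```python
# Generic illustration of an "agentless" SQL Server backup: connect like any
# admin client and drive the dump with plain T-SQL. Connection string, database
# name, and dump path are hypothetical; this is not the DS-Client's actual code.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;"
    "DATABASE=master;UID=backupadmin;PWD=example", autocommit=True
)
cursor = conn.cursor()

# Optional integrity check, analogous to the DS-Client's DBCC option.
cursor.execute("DBCC CHECKDB ('SalesDB') WITH NO_INFOMSGS")

# Dump the database to a local path; the backup tool then picks up this file,
# compresses and encrypts it, and ships it offsite.
cursor.execute("BACKUP DATABASE SalesDB TO DISK = 'D:\\dumps\\SalesDB.bak' WITH INIT")
while cursor.nextset():       # drain informational result sets from the backup
    pass
conn.close()
```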

image image

Next you can choose to use an existing retention rule, create a new one, or use none at all. While retention rules can be created separately, this is an easy way to pair online retention with backup sets as you create them. Three online retention options are available at this time in the DS-Client via NetMass:

  • Deletion of files removed from source: how long to keep files online whose source file has been deleted
  • Time-based online retention: how many generations to keep online and for how long
  • Enable local storage retention: keeping generations locally in addition to online

Deletion of files removed from source defaults to 30 days with no kept generations. This may work fine, or you could keep the last changed generation of a file online even after it has been deleted from the source, just in case. Keep in mind, however, that this will keep all such files online until an applicable retention rule deletes them from the DS-System.

image image

Time-based retention dictates how long to store generations online, near and long term. This is useful for compliance, especially if you work in an industry with specific data retention requirements. You can get extremely granular with these rules, which can be mind-bending to your non-technical senior management staff. For instance, a good general policy might be to protect data for up to 7 years: this essentially ensures that the most recent copy of every file, whether it was deleted or not, will be stored online for up to 7 years. As time goes on and changes to a file slow down, the rate at which new generations are created slows as well. For example, following this policy for a document that eventually stops getting updated will result in a single generation of the file, the last update, kept for 7 years. A rough tally of what a tiered schedule like the one below keeps online is sketched just after the list.

  • 1 generation every day for a week
  • 1 generation every week for a month
  • 1 generation every month for a year
  • 1 generation every year for 7 years
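
For a back-of-the-envelope sense of scale, the sketch below just adds up those tiers at face value (7 daily, roughly 4 weekly, 12 monthly, and 7 yearly, or about 30 generations per file). Because the time windows overlap, the number actually held online at any moment would be somewhat lower; this is my own arithmetic, not output from the DS-Client.

# Back-of-the-envelope tally of the tiered retention schedule above.
# The windows overlap in practice, so treat this as an upper bound.
tiers = [
    ("1/day for a week",     7),   # 7 days
    ("1/week for a month",   4),   # ~4 weeks
    ("1/month for a year",  12),   # 12 months
    ("1/year for 7 years",   7),   # 7 years
]

for name, count in tiers:
    print(f"{name:<22} {count:>2} generations")

total = sum(count for _, count in tiers)
print(f"{'upper bound':<22} {total:>2} generations per file")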

image image

Local retention policy is tied to online policy, so unfortunately you can't set local retention higher. But this option is still useful for quick restores that can be retrieved from local media without having to be pulled down from NetMass across the wire. The same compression and encryption apply to backup sets you store on locally attached media; server storage or a high-capacity USB drive would work fine for this.

image

The next step sets up options in the backup set, including taking an initial backup to local media if seeding your online backup over the wire would otherwise take days. Once your initial backups are complete, NetMass will either come get the drive from you or you can send it to them so the data can be loaded directly into the array. After the initial backup the mode switches to differential, so far less data travels over the Internet.

image

Notification options are plentiful, but your inbox can easily become overwhelmed with daily status messages. Scheduling options are equally flexible, allowing as many schedules as you care to create.

image image

As a final step, you can make the new backup active by designating it as a regular set type, or you can make it statistical. A statistical set lets you do a test run of the backup and estimate how much online storage the job will consume.

Restores

The restore process is very clean and can be executed against any backup set that has data online. Once you launch the wizard you can see the directories and files associated with a particular backup set, which is also one way to look at what, and how much, you have stored online. Clicking "file info" on any selected file shows you how many generations are stored online and the size of each, and you can also view only the files that were modified in any given folder. The last step is deciding whether to restore files to their original location or to an alternate one.

image image

The other way to look at what you're storing online is to open the Online File Summary under Reports. This tells you almost everything you need to know about your online data and can chart certain filters as well. This is one area I think could be made a little better: too much digging is required to find out exactly what you have online and with how many generations. If Asigra could tie this more closely to the backup set information, in the same context, I think it would be much more user friendly overall.

image

Backup sets can be suspended or deleted outright, but you can get much more granular with selective backups to remove any online generations of individual files. Overall I am very impressed with the ServerSafe offering from NetMass and think it is a great next-gen offsite backup solution. I'm currently running my trial DS-Client on an old P4 PC running XP Pro and the jobs are completing with no problem; I have roughly 100GB worth of data being protected now. While a large company may not find much value in paying NetMass to store their backup data, they could implement the Asigra DS solution and create an e-vaulting architecture of their own, in-house. We all hate tapes, and this is a solid alternative. The trial of ServerSafe is a snap and can seamlessly transition into a production state as soon as you say you're ready. NetMass is all about low pressure and lets you become acclimated at your own pace. I look forward to seeing what's new in the next major release of the DS-Client.

Review: KBox by Kace

If you're familiar with Microsoft SMS (SCCM), Altiris, or LANDesk, then you're familiar with the space the KBox plays in and the intrinsic value these products bring to an enterprise. The problem with the traditional aforementioned products, besides enormous cost, is the complexity of their architecture, deployment, and management. Don't get me wrong, these products do what they're intended to do and do it well, but let's face it, people can and do make careers out of supporting SMS infrastructures. They are large, distributed architectures that are not easily managed. Enter the KBox by Kace, an appliance-based solution that can be scaled horizontally with add-on modules and additional appliances. The hardened, FreeBSD-based platform sets up in a snap and contains everything you need in a single unit.

There is a lot to like about the KBox, but simplicity is what really stands out. The platform comes in three primary flavors: the 1100, the 1200, and the v-KBox. The v-KBox is the appliance in VMware VM form, intended to integrate into an existing ESX/VI3 environment. The 1100 and 1200 appliances are 1U server boxes that differ in CPU speed, RAM, power, and disk configuration. The 1100 can manage 3,000 nodes, while the 1200 can scale to 30,000 and supports system segregation for distributed IT departments. All platforms are built on FreeBSD and include the other packages required to complete the solution: MySQL, Samba, etc. The KBox can manage all Windows versions, Mac, and Red Hat Linux.

Initial setup consists of naming the KBox and assigning IP information; after a quick reboot it is ready to go. You browse to the IP or name (if you created an A record) and start managing. Because the KBox is module-based, what you see next will vary depending on what you've purchased. Included in my demo I have the core Management Appliance as well as the Asset Management and Help Desk add-on modules, version 4.3.20109. Additional capabilities include OS deployment, Security Audit and Enforcement, and iPhone Management. Once logged in, the home page shows news, FAQs, alerts, and the general status of your managed nodes.

The KBox can be configured to integrate with LDAP (Active Directory) or you can create and use local accounts contained within the appliance database. The local admin account will always work in either mode, but if LDAP mode is selected no other local account will be granted access. In either mode, role-based access can be configured to delegate very specific permissions to any number of users.

Client management is agent-based and the agent can be easily pushed from the KBox. This can be done ad hoc, targeted to a group of computers, scheduled, or staggered. Once the client is pushed you can check the Provisioned Configurations job progress to watch which step in the process is under way and see the results of each step.

Once your clients have successfully checked in with the KBox you will see them as managed nodes in the Inventory pane. You can drill into each node to view detailed information about it, and the amount of available information is vast: all software, processes, services, and hardware are available for easy scrutiny.




Labels are at the core of the KBox management system and determine how granularly you can manage. You can create any number of labels for literally anything: users, business units, software, hardware, locations. Labels can be used to group assets, to filter software updates and reports, and to serve as the basis for easily applying actions. For instance, you could run a software removal script against all nodes running Limewire (label=limewire), push patches to all Vista desktops (label=vista), or run a report to show all tickets opened by the sales team (label=sales). Labels are created within the Inventory context.

One of my favorite sections of the appliance is software distribution. Kace has acquired appdeploy.com and provides easy integration with the KBox. Appdeploy.com is a fantastic website that houses extensive information for almost any application you can think of: KBs, switches, install strings, gotchas. No other vendor in this space can provide you with such detailed information for creating and deploying software packages. Once the KBox has been set to use AppDeploy Live (configured in system settings), you can see this detail for each software item in the inventory. This free feature saves a ton of time when configuring deployment/removal packages.

Distributable software packages include .exe, .msi, or .zip files and can be stored on the KBox or referenced on another server. The easy way to deploy software is to first install it on a managed client and let the KBox discover it. Then find the application in the software inventory and associate the installation bits, either by uploading the install package or by specifying an alternate location via a UNC path, DFS source, or HTTP location.
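
To give a sense of what the install command behind such a package typically looks like, here is a minimal sketch of a silent MSI install from a UNC share, the kind of install string appdeploy.com documents. The share path, MSI name, and log path are hypothetical, and this is an illustration rather than anything the KBox generates.

# Minimal sketch of a silent MSI install from a UNC share, the kind of
# install string you would attach to a distribution package.
# The share path, package name, and log path are hypothetical.
import subprocess

msi_path = r"\\fileserver01\packages\ExampleApp\ExampleApp.msi"  # hypothetical

result = subprocess.run(
    [
        "msiexec",
        "/i", msi_path,   # install this package
        "/qn",            # fully silent, no UI
        "/norestart",     # suppress any automatic reboot
        "/l*v", r"C:\Windows\Temp\ExampleApp_install.log",  # verbose install log
    ],
    check=False,
)
print(f"msiexec exit code: {result.returncode} (0 = success, 3010 = reboot required)")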

The Asset Management module is one of the more expensive add-on options, as module price points are based on the number of managed nodes. Compared to the Help Desk module (also licensed per managed node), the Asset module costs over 50% more. It builds on the native Inventory function, allowing the creation and storage of additional item types: vendors, licenses, locations, etc. Software keys, serial numbers, and vendor contacts can all be stored within it. Software metering is a useful feature of this module, allowing usage monitoring of specific software on managed nodes. All said and done, from where I'm sitting, it looks to be a big, very expensive data repository that amounts to another set of tables in the common MySQL database. I'm not so sure it's worth it, but it does provide some nice features.

The scripting features are particularly powerful, providing easily customized execution on the managed nodes. The sky is the limit here, as files can be pushed on the fly and set to follow strict policy and job rules that include verification and remediation options. Three different scripting options are available.


The patch management offering is extensive, providing support for both OS and application patches. Once an OS is selected, all available updates, both OS and application, are cataloged, but application updates have to be expressly allowed to download. The KBox will download ALL cataloged application updates, including those for applications you do not own. Each update can be marked active (ready to deploy) or inactive (ignored). Every update can be assigned to a label, and the KBox keeps detailed track of which updates are installed on which computers. Patches can be deployed forcing reboots or allowing an active user to snooze. 250GB is not really a very large disk anymore, so I have concerns about filling it by downloading every patch and update under the sun; Kace tells me they haven't had any issues with disk space shortages yet.

Reporting options are ample and can be generated in a variety of formats for 75 items out of the box. Custom reporting, including direct SQL queries, is available as well.
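
Because custom reports boil down to SQL against the appliance's MySQL database, a report is essentially just a query. Purely as an illustration of the idea, the sketch below runs such a query from an external script using PyMySQL; the host, credentials, and every table and column name are hypothetical stand-ins, not the KBox's actual schema, which you would pull from the reporting documentation.

# Illustrative only: run a custom report-style query against the appliance's
# MySQL database. Host, credentials, and the table/column names are
# hypothetical stand-ins; check the KBox reporting docs for the real schema.
import pymysql

conn = pymysql.connect(
    host="kbox.example.com",     # hypothetical appliance name
    user="report_reader",        # hypothetical read-only reporting account
    password="********",
    database="reporting_db",     # hypothetical
)
try:
    with conn.cursor() as cur:
        # Example report: managed nodes that have not checked in for 7+ days
        cur.execute(
            """
            SELECT machine_name, last_checkin
            FROM   machines
            WHERE  last_checkin < NOW() - INTERVAL 7 DAY
            ORDER  BY last_checkin
            """
        )
        for machine_name, last_checkin in cur.fetchall():
            print(f"{machine_name:<30} last check-in: {last_checkin}")
finally:
    conn.close()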

Finally, if you’re looking for another bird to kill, the Help Desk module can provide you with an enterprise ticketing system. Pricing is definitely industry competitive with the added benefit of no additional hardware or OS licensing to procure. The interface is completely customizable, integrates with AD, can be delegated at very granular levels, and includes a knowledge base system to boot.

My setup as tested, including 100 managed nodes and a year of support, comes in after discounts at roughly 50% less than a similarly configured SCCM solution with an add-on ticket system. You can do the math on this: enterprise management is not a cheap buy-in, and for the SMB the KBox makes a great deal of sense. I'd be interested to see how well the solution scales in a large enterprise, because the platform is stable, user friendly, cost effective, and powerful. If you have deployed or managed SMS you can appreciate how easy this setup is. If you're in the market for an enterprise management solution, I encourage you to give Kace a shot and do a Pepsi challenge against any other vendor you're considering.

http://www.kace.com