vSphere 5.5: Navigating the Web Client

As I called out in my vSphere 5.5 upgrade post, the vSphere Client is now deprecated in 5.5, so in preparation for the inevitable future I’m forcing myself to use the Web Client to gain familiarity. It turns out there was way more moving around than I initially thought, so I’m selfishly documenting a few pertinent items that seemed less than intuitive my first time through. Some things are just easier to do in the legacy vSphere Client, or maybe I’m just too accustomed to it after 3 generations of ESX/i. In any case, I encourage you to use the web client as well, and hopefully these tips will help.

Topics covered in this post:

  • How to configure iSCSI software adapters
  • How to add datastores
  • How to manage multipathing
  • How to rename an ESXi host
  • Cool changes in Recent Tasks pane
  • Traffic Shaping
  • Deploying vCenter Operations Manager (vCOps)

How to configure iSCSI software adapters:

This assumes that the preliminary steps of setting up your storage array and requisite physical networking have already been properly completed. The best and easiest way to do this is via dedicated switches and server NICs for iSCSI in a Layer 2 switch segment. Use whatever IP scheme you like; this should be a closed fabric and there is no reason to route this traffic.

First things first: if you don’t have a software iSCSI adapter created on your hosts, create one in the Storage Adapters section of Storage Management for a particular ESXi host. Once created, it will appear in the list below. A quick note on software vs hardware iSCSI initiators: physical initiators can generally do iSCSI offload OR jumbo frames, not both. We have seen the use of jumbo frames be more impactful to performance than iSCSI offload, so software initiators with jumbo frames enabled are the preferred way to go here.
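
If you prefer the command line, the software adapter can also be enabled with esxcli from an SSH session. A minimal sketch (the resulting vmhba name will vary per host):

esxcli iscsi software set --enabled=true
esxcli iscsi software get
esxcli iscsi adapter list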

Click over to the Networking tab and create a new vSwitch with a VMkernel Network Adapter for iSCSI.

Choose the physical adapters to be used in this vSwitch, create a useful network Port Group label such as iSCSI-1, and assign an IP address that can reach the storage targets. Repeat this process and add a second VMkernel adapter to the same vSwitch. Configure your VMK ports to use opposing physical NICs. This is done by editing the port group settings and changing the Failover order. This allows you to cleanly share 2 physical NICs for 2 iSCSI connections within a single vSwitch.
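
If you’d rather script the failover order, esxcli can set it per port group. A hedged sketch using my lab’s port group and vmnic names (uplinks left off the active/standby lists fall to unused, which is what iSCSI port binding wants):

esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic3
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic1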

In my case VMK2 is active on vmnic3 and VMK3 is active on vmnic1 providing physical path redundancy to the storage array.

When all is said and done, your vSwitch configuration should look something like this:

Next, under the iSCSI software adapter, add the target IP of your storage (the group IP for EqualLogic). Authentication needs and requirements will vary between organizations; choose and configure this scheme appropriately for your environment. For my lab, I scope connections based on subnet alone, which defines the physical Layer 2 boundary of my iSCSI fabrics.
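
The CLI equivalent for dynamic discovery looks like this (the adapter name and group IP below are placeholders):

esxcli iscsi adapter discovery sendtarget add --adapter=vmhba37 --address=10.10.10.10:3260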

Next configure the network port binding to ensure that the port groups you defined earlier get bound to the iSCSI software adapter using the proper physical interfaces.
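
Port binding can be scripted as well; a hedged sketch assuming my VMK names and a placeholder adapter:

esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk3
esxcli iscsi networkportal list --adapter=vmhba37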

At this point, if you have any volumes created on your array and presented to your host, a quick rescan should reveal the devices presented to your host as LUNs.
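
Or from the CLI, assuming SSH access to the host:

esxcli storage core adapter rescan --all
esxcli storage core device list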

You should also see 2 paths per LUN (device) per host based on 2 physical adapters connecting to your array. EqualLogic is an active/passive array so only connections to the active controller will be seen here.

If you run into trouble making this work after these steps, jump over to the vSphere Client which does make this process a bit easier. Also keep in mind that all pathing will be set to Fixed by default. See my How to manage multipathing topic below for guidance on changing this.

iSCSI works very well with jumbo frames, which are an end-to-end Layer 2 technology, so make sure an MTU of 9000 is set on all ESXi iSCSI vSwitches and VMK ports, as well as on the NICs on the storage array. Your switches must be capable of supporting jumbo frames as well. This will increase the performance of your iSCSI network and front-end storage operation speeds.
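
From the CLI, the same MTU change looks like this (the vSwitch and VMK names match my lab and are assumptions):

esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000
esxcli network ip interface set --interface-name=vmk3 --mtu=9000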

 

How to add datastores:

Once your new datastore has been provisioned from your storage platform and presented to your ESXi hosts, from the Hosts and Clusters view navigate to Related Objects, then Datastores. From here, click the Create a New Datastore button.

Choose the host or cluster to add the datastore to, choose whether it is NFS or VMFS, name the datastore and choose a host that can see it. You should see the raw LUN in the dialog below.

Choose the VMFS version and any partition options you want to implement. Confirm and deploy.

If presenting to multiple hosts, once the VMFS datastore is created and initialized on one host, they all should see it, assuming the raw device is present via a previous adapter rescan.
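
A quick per-host sanity check from the CLI, if you’re so inclined:

esxcli storage core adapter rescan --all
esxcli storage filesystem list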

 

How to manage multipathing:

From the Hosts and Clusters view, click the Storage tab, choose the datastore you want to manage, click Manage in the middle pane, then click Connectivity and Multipathing under Settings.

Alternatively, from the Hosts and Clusters view (from any level item), navigate to Related Objects, then Datastores. Either click the volume you want to edit or choose Settings from the dropdown. Either method will get you to the same place.

From the datastore Settings page, click Manage and under Settings (once again) click Connectivity and Multipathing. In the middle of the screen you should see all hosts attached to whatever datastore you selected. Clicking on each host will reveal the current Path Selection Policy below, “Fixed” by VMware default along with the number of paths present per host.

To change this to Round Robin, click Edit Multipathing, change the Path Selection Policy, repeat for each host connected to the datastore.
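
This can also be scripted per device with esxcli, and you can change the default PSP for the claiming SATP so future EqualLogic volumes come up as Round Robin automatically. The device ID below is a placeholder, and verify the SATP name in play on your host with esxcli storage nmp satp list first:

esxcli storage nmp device list
esxcli storage nmp device set --device=naa.6090a0xxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
esxcli storage nmp satp set --satp=VMW_SATP_EQL --default-psp=VMW_PSP_RR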

 

How to rename an ESXi host:

Renaming hosts is one area that the Web Client has made significantly easier (once you figure out where to go)! Select a host from the Hosts and Clusters view, click Manage, click Networking, then TCP/IP Configuration below.

From the DNS Configuration menu, select “Enter settings manually”, put whatever hostname you would like here.

VMware recommends putting a host in maintenance mode and disconnecting it from vCenter before doing this. I did this hot, with my host active in an HA cluster, with zero ill effects. I did it a few times just to make sure. The other way to do this is via the CLI. Connect to your ESXi host via SSH, vMA or vCLI and run:

esxcli system hostname set --host=hostname

Cool changes in Recent Tasks pane:

Not only is the Recent Tasks pane off to the right now, which I really like, it breaks out tasks by All, Running and Failed individually for easier viewing, including the ability to view your own tasks for environments with many admins. Previously these tasks were all lumped together and longer running tasks would get buried in the task stream.


 

The Recent Tasks pane also provides a new and better method to deal with pop-up configuration dialogs. Ever start configuring something using the old vSphere Client, get 4-5 clicks deep in the pop-up configuration, then realize you need some other piece of information requiring you to cancel out so you can go back to some other area in vCenter? This problem is now resolved in the web client with a cool new section of the Tasks pane called Work in Progress. It doesn’t matter what you’re doing or how far along you are in the dialog. If you need to break away for any reason, you can simply minimize the pop up and come back to it later. These minimized pop-ups will show in the Work in Progress pane below recent tasks.

The example here shows 3 concurrent activities in various states: a vMotion operation, a VM edit-settings dialog, and even a clone operation of the same VM. Any activity that generates a pop-up dialog can be set aside and picked up again later. This is a huge improvement over the legacy vSphere Client. Very cool!!

 

Traffic Shaping:

It appears that in the web client you can only apply traffic shaping at the vSwitch level, not at an individual port group or VMK level. Here you can see shaping available for the standard vSwitch:

These settings, while viewable in the VMK policies summary, are not changeable (that I can see).

To override the vSwitch shaping policy and apply one to an individual port group or VMK, you have to use the legacy vSphere Client. Not sure if this is an oversight on VMware’s part or yet another sign of things to come requiring dvSwitching to assign shaping policies below the vSwitch level.

 

Deploying vCenter Operations Manager (vCOps):

Deploying the incredible vCOps vApp for advanced monitoring of your environment is made extremely easy in vSphere 5.5 via the web client. VMware has made detailed performance monitoring of your vSphere world incredibly simple and intuitive through this easy-to-set-up vApp. Really impressive. From the home page, click vCenter Operations Management.

On the Getting Started screen, click Deploy vCOps. If you have a valid vmware.com login, enter it here to download the OVF and related files for deployment. You can alternatively point to the file locally if you have it already.

Accept the EULAs and choose all the placement and sizing options for the VM.

A word of caution, do not expect DRS to make a host placement decision for you here during the deployment. The wizard will allow you to select your cluster as a resource destination but the deployment will ultimately fail. Choose a specific host to deploy the VM to instead.

The requisite files will be downloaded from VMware directly and deployed to your environment. Off to the races!

Once deployed, you’ll see 2 new VMs running under the vCOps vApp object in your datacenter.

Once the VMs are powered on and the vApp has been started, you should see new options under vCenter Operations Manager.

First, click the Configure link to open the admin site in a web page. The default login for the admin account is admin/admin; for root, the password is vmware. Configure the initial setup to point to vCenter and the analytics VM, which it should detect. Install the certificates as prompted and continue through the registration process.

Once complete, return to the vCOps page in vCenter and click Open; a new web page will launch for you to consume the vCOps goodness. After a short while, performance stats should start pouring in for everything in your vSphere environment. Usage patterns and workload profiles can be identified so appropriate adjustments can be made. What you do from here with the data collected is entirely up to you. 🙂

A couple more screens just to show you the capability of vCOps, since I like it so much. Storage at the datastore view:

VM performance view:

Citrix XenDesktop on Dell VRTX

Dell Desktop Virtualization Solutions/ Cloud Client Computing is proud to announce support for the Dell VRTX on DVS Enterprise for Citrix XenDesktop.  Dell VRTX is the new consolidated platform that combines blade servers, storage and networking into a single 5U chassis that we are enabling for the Remote Office/ Branch Office (ROBO) use case. There was tremendous interest in this platform at both the Citrix Synergy and VMworld shows this year!

Initially this solution offering will be available running on vSphere with Hyper-V following later, ultimately following a similar architecture to what I did in the Server 2012 RDS solution. The solution can be configured in either 2 or 4 M620 blades, Shared Tier 1 (cluster in a box), with 15 or 25 local 2.5” SAS disks. 10 x 15K SAS disks are assigned to 2 blades for Tier 1 or 20 disks for 4 blades. Tier 2 is handled via 5 x 900GB 10K SAS. A high-performance shared PERC is behind the scenes accelerating IO and enabling all disk volumes to be shared amongst all hosts. 
Targeted sizing for this solution is 250-500 XenDesktop (MCS) users with HA options available for networking. Since all blades will be clustered, all VDI sessions and mgmt components will float across all blades to provide redundancy. The internal 8-port 1Gb switch can connect up to existing infrastructure or a Force10 switch can be added ToR. Otherwise there is nothing externally required for the VRTX VDI solution, other than the end user connection device. We of course recommend Dell Wyse Cloud Clients to suit those needs. 

For the price point this one is tough to beat. Everything you need for up to 500 users in a single box! Pretty cool.
Check out the Reference Architecture for Dell VRTX and Citrix XenDesktop: Link

Upgrading to VMware vSphere 5.5

Like all good stewards of all things virtual, I need to stay current on the very foundations of everything we do: hypervisors. So this post contains my notes on upgrading to the new vSphere 5.5 build from 5.1. This isn’t meant to be an exhaustive step-by-step 5.5 upgrade guide, as that’s already been done, and done very well (see my resources section at the bottom for links). This is purely my experience upgrading, with a few interesting call outs along the way that I felt worth writing down should anyone else encounter them.

The basic upgrade sequence goes like this, depending on which of these components you have in play:

  1. vCloud components
  2. View server components
  3. vCenter (and vSphere Clients)
  4. SRM/ VR/ vCOPS/ VDP/ VSA
  5. ESXi hosts
  6. VM Tools (and virtual hardware –see my warning in the clients section)
  7. vShield components
  8. View agent

The environment I’m upgrading consists of the following:

  • 2 x Dell PE R610 hosts running ESXi 5.1 on 1GB SD
  • 400GB of local SAS
  • 3TB Equallogic iSCSI storage
  • vCenter 5.1 VM (Windows)

 

vCenter 5.5

As was the case in 5.1, there are still 2 ways to go with regard to vCenter: Windows-based or the vCenter Server Appliance (vCSA). The Windows option is fully featured and capable of supporting the scaling maximums as well as all published configurations. The vCSA is more limited, but in 5.5 a bit less so. Most get excited about the vCSA because it’s a Linux appliance (no Windows license), it uses vPostgres (no SQL license) and is dead simple to set up via OVF deployment. For an external database the vCSA can now use only Oracle; MS SQL Server support appears to have been removed. The scaling maximums of the vCSA have increased to 100 hosts/3000 VMs with the embedded database, which is a great improvement over the previous version. There are a few things that still require the Windows version however, namely vSphere Update Manager and vCenter Linked Mode.

While I did additionally deploy the vCSA, my first step was upgrading my existing Windows vCenter instance. The Simple Install method should perform a scripted install of the 4 main components. This method didn’t work completely for me, as I had to install the inventory and vCenter services manually. This is also the dialog from which you would install the VUM service on your Windows vCenter server.

I did receive an error about VPXD failing to install after the vCenter Server installation ran for the first time. A quick reboot cleared this up. With vCenter upgraded, the vSphere Client also needs to be upgraded anywhere you plan to access the environment using that tool. VMware is making it loud and clear that the preferred method to manage vSphere moving forward is the web client.

vCenter can support up to 10,000 active VMs now on a single instance, which is a massive improvement. If you plan to possibly power on more than 2000 VMs simultaneously, make sure to increase the number of ephemeral ports during the port configuration dialog of the vCenter setup.

Alternatively, the vCSA is very easy to deploy and configure with some minor tweaks necessary to connect to AD authentication. Keep in mind that the maximum number of VMs supported on the vCSA with the embedded DB is only 3000. To get the full 10,000 out of the vCSA you will have to use an Oracle DB instance externally. The vCSA is configured via the web browser and is connected to by the web and vSphere Clients the same way as its Windows counterpart.

If you fail to connect to the vCSA through the web client and receive a “Failed to connect to VMware Lookup Service…” error like this:

…from the admin tab in the vCSA, select yes to enable the certificate regeneration option and restart the appliance.

 

Upgrading ESXi hosts:

The easiest way to do this is starting with hosts in an HA cluster attached to shared storage, as many will have in their production environments. With vCenter upgraded, move all VMs to one host, upgrade the other, rinse, repeat, until all hosts are upgraded. Zero downtime. For your lab environments, if you don’t have the luxury of shared storage, 2 x vCenter servers can be used to make this easier, as I’ll explain later. If you don’t have shared storage in your production environment and want to try that method, do so at your own risk.

There are two ways to go to upgrade your ESXi hosts: local ISO (scripted or not) or vSphere Update Manager. I used both methods, one on each, to update my hosts.

ISO Method

The ISO method simply requires that you attach the 5.5 ISO to a device capable of booting your server: either USB or Virtual Media via IPMI (iDRAC). Boot, walk through the steps, upgrade. If you’ve attached your ISO to Virtual Media, you can specify within vCenter to boot your host directly to the virtual CD to make this a little easier. Boot Options for your ESXi hosts are buried on the Processors page for each host in the web or vSphere Client.

 

 

VUM method:

Using VUM is an excellent option, especially if you already have this installed on your vCenter server.

  • In VUM console page, enter “admin view”
  • Download patches and upgrades list
  • Go to ESXi images and import 5.5 ISO
  • Create baseline during import operation

  • Scan hosts for non-compliance, attach baseline group to the host needing upgrade
  • Click Remediate, walk through the screens and choose when to perform the operation

The host will go into maintenance mode, disconnect from vCenter and sit at 22% completion for 30 minutes, or longer depending on your hardware, while everything happens behind the scenes. Feel free to watch the action in KVM or IPMI.

When everything is finished and all is well, the host will exit maintenance mode and vCenter will report a successful remediation.

Pros/ Cons:

  • The ISO method is very straight-forward and likely how you built your ESXi host to begin with. Boot to media, upgrade datastores, upgrade host. Speed will vary by media used and server hardware, whether connected to USB directly or ISO via Virtual Media.
    • This method requires a bit more hand-holding. IPMI, virtual media, making choices, ejecting media at the right time…nothing earth-shattering, but not light-touch like VUM.
  • If you already have VUM installed on your vCenter server and are familiar with its operation, then upgrading it to 5.5 should be fairly painless. The process is also entirely hands-off: you press go and the host gets upgraded magically in the background.
    • The downside to this is that the vSphere ISO is stored on, and the update procedure is processed from, your vCenter server. This could add time delays to load everything from vCenter to your physical ESXi hosts, depending on your infrastructure.
    • This method is also only possible using the Windows version of vCenter and is one of the few remaining required uses of the vSphere Client.

No Shared Storage?

Upgrading hosts with no shared storage is a bit more trouble but still totally doable without having to manually SCP VM files between hosts. The key is using 2 vCenter instances, and the vCSA works great as a second instance for this purpose. Simply transfer your host ownership between vCenter instances. As long as both hosts are “owned” by the same vCenter instance, any changes to inventory will be recognized and committed. Any VMs reported as orphaned should also be resolved this way. You can transfer vCenter ownership back and forth any number of times without negatively impacting your ESXi hosts. Just keep all hosts on one or the other! The screenshot below shows 2 hosts owned by the Windows vCenter (left) and the vCSA (right) showing those same hosts disconnected. No VMs were negatively impacted doing this.

The reason this is necessary is because VM migrations are controlled and handled by vCenter, cold migrations included which is all you’ll be able to do here. Power off all VMs, migrate all VMs off one host, fire up vCenter on the other host, transfer host ownership, move the final vCenter VM off then upgrade that host. Not elegant but it will work and hopefully save some pain.

 

vSphere Clients

The beloved vSphere Client is now deprecated in this release, which comes with a heavy push towards exclusive use of the Flash-based web client. My advice: start getting very familiar with the web client, as this is where the future is heading, like it or not. Any new features enabled in vCenter 5.5 will only be accessible via the web client.

Here you can see this clearly called out on the, now, deprecated vSphere Client login page:

 

WARNING – A special mention needs to be made about virtual hardware version 10. If you upgrade your VMs to the latest hardware version, you will LOSE the ability to edit their settings in the vSphere Client. Yep, one step closer to complete obsolescence. If you’re not yet ready to give up using the vSphere Client, you may want to hold off upgrading the virtual hardware for a bit.

vSphere Web Client

The web client is not terribly different from the fat client but its layout and operational methods will take some getting used to. Everything you need is in there, it may just take a minute to find it. The recent tasks pane is also off to the right now, which I really like.


The familiar Hosts and Clusters view:

Some things just plain look different, most notably the vSwitch configuration. Also the configuration items you’ve grown used to being in certain property menus are now spread out and stored in different places. Again, not bad just…different.

I also see no Solutions and Applications section in the web client, only vApps. So things like the Dell Equallogic Virtual Storage Manager would have to be accessed via the old vSphere Client.

Client Integration Plugin

To access features like VM consoles, file transfers to datastores and OVF deployments via the web client, the 153MB Client Integration plugin must be installed. Attempting to use any features that require this should prompt you for the client install. One of the places it can also be found is by right-clicking while browsing within a datastore.

The installer will launch and require you to close IE before continuing.

 

VM consoles now appear in additional browser tabs which will take some getting used to. Full screen mode looks very much like a Windows RDP session which I like and appreciate.

Product Feature Request

This is a very simple request (plea) to VMware to please combine the Hosts and Clusters view with the VMs and Templates view. I have never seen any value in having these separated. Datacenters, hosts, VMs, folders and templates should all be visible and manageable from a single pane. There should be only 3 management sections separating the primary vSphere inventories: Compute (combined view), storage and network. Clean and simple.

Resources:

vSphere 5.5 upgrade order (KB): Link

vSphere 5.5 Configuration Maximiums: Link

Awesome and extensive vSphere 5.5 guide: Link

Resolution for VCA FQDN error: Link

Cleaning up SMVI snapshots in vSphere

For whatever reason, sometimes VM snapshots get stuck and ultimately forgotten. As you know this can be disastrous as once that line in the sand has been drawn, the resulting redo log will continue to grow until its underlying disk is completely exhausted. One-off manual snaps can be easily forgotten but worse is when programmatic snaps, the likes of NetApp SMVI or VCB, don’t get removed cleanly and start to stack up. I just had this problem when my SMVI, for reasons unknown, stopped removing snaps from one of my volumes and started incrementing them. At the point I caught the problem, some of my VMs on this volume had as many as 5 SMVI snapshots! Not good. SMVI is a great solution overall that works really well, but its handling and reporting of VI snapshots could be a lot better.
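
Before any cleanup, it helps to get a quick inventory of which VMs are actually carrying snapshots. A rough sketch from the ESXi shell (vim-cmd on ESXi; the same calls exist as vmware-vim-cmd in the classic ESX service console):

# Flag every registered VM that has at least one snapshot
for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
  vim-cmd vmsvc/snapshot.get $vmid | grep -q 'Snapshot Name' && echo "VM id $vmid has snapshots"
done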

So now I have exposed a problem in my environment. The performance of my VMs is suffering and the integrity of my backups could be questionable. The first step is to create a new alarm in vCenter that watches VM snapshot size; I will warn on anything over 1GB and alert on 2GB. While researching solutions to this problem (and manually deleting snapshots) I came across a tool written by one of NetApp’s Architects last year. Even though it was written by NetApp specifically for SMVI, it is solid enough to be used in a number of other scenarios. CVMS (Cleanup VMware Snapshots) is an executable that can be run manually via CLI or called via a script. In my case I will call this tool via a script in SMVI.
Feed it the vCenter address, credentials, and snap name prefix, then you can scope by datastore, VM, or VM set. It will go in order and remove all snaps that match the defined prefix, one at a time. Manually-created user snaps will not be affected, unless of course the snap name matches your defined prefix. The tool is incredibly well written and provides just about every customization you would care to do in this scenario.

Sample command string and output:

C:\>cvms -vcuser administrator -vcpasswd passwd -vcip 10.2.1.2 -snapname bar -ds test_ds -verbose
LOG REPORT FOR CVMS
—————————————————–
CVMS Version: 1.0
Log Filename: \NetApp\CVMS\Report\CVMS_20100131_143350.log
Start Time: Sun Jan 31 14:33:50 2010
Datastore(s) selected: test_ds
Command line arguments successful.
Initializing connectivity to Virtual Center and storage appliances.
Converting Virtual Center hostname to IP address …
Attempting to ping Virtual Center 10.2.1.2 …
Ping of Virtual Center 10.2.1.2 successful.
Creating new Virtual Center instance for 10.2.1.2 …
Logging into Virtual Center server 10.2.1.2 …
Virtual Center login successful.
Collecting VMware and storage appliance configuration data.
Collecting datacenter information …
Found 2 Datacenter(s).
Collecting host system information …
Host system information collected.
Looking on host system esx2.internal.net for datastore test_ds …
Requested Datastore (test_ds) is available.
Saving virtual machine information for vm2.
Saving virtual machine information for vm1.
Cleaning up snapshots for all VMs listed …
Checking snapshot capability of VM vm1 …
Removing all snapshots with string ‘bar’ from VM vm1 …
No VM snapshots found.
Checking snapshot capability of VM vm2 …
Removing all snapshots with string ‘bar’ from VM vm2 …
Removing VM snapshot ‘bar2’ …
Removal of VM snapshot for vm2 successful.
Command completed successfully.
Backup End Time: Sun Jan 31 14:34:02 2010
Exiting with return code: 0

In my particular scenario, I backup per volume in SMVI so will add a custom script to each backup job to ensure that all snaps get properly cleaned up afterwards. To present a script to SMVI, it needs to exist in %PROGRAMFILES%\NetApp\SMVI\server\scripts (or <drive>:\Program Files (x86)). SMVI can use .bat, .cmd, .pl, etc. Here is the syntax of one of my volume clean scripts:

if not %BACKUP_PHASE% == POST_BACKUP goto end
set PATH="D:\Program Files (x86)\NetApp\CVMS"; %PATH%
cvms.exe -vcip vcenter -vcuser domain\account -vcpasswd password -ds Volume1 -snapname smvi -reportdir "D:\Program Files (x86)\NetApp\CVMS"
:end

Depending on how your backups are configured, you could run a script like this at the end of the day or, like me, after every backup. I backup every 8 hours so have plenty of time for cleanup in between. The report directory will house text file outputs of each instance run with the same output you would see in the CLI using the -verbose switch. Refer to the SMVI 2.0 best practices guide for available variables that can be referenced in a script.
The NA community homepage for the tool is in the references below. CVMS is not in the NA Tool Chest however, I checked, so unless someone tells me otherwise I will host a mirror of the utility.

References:
CVMS (homepage)
CVMS (mirror)
Scripting SMVI cleanup
SMVI 2.0 Best Practices Guide

vSphere Disaster Recovery using NetApp’s SMVI

Disaster recovery (DR) is always a hot topic; many companies either do not do it at all for one reason or another, or do it badly. Providing DR for a virtual environment can be a particularly challenging and expensive endeavor. In my enterprise I am running ESX4 U1 on top of Brocade fiber-channel connected NetApp FAS arrays (1 production, 1 DR) with a 1Gb Metro-Ethernet link between sites. I do not currently have the luxury of using VMware’s Site Recovery Manager (SRM) in my environment so my process will be completely manual. SRM removes many of the monotonous tasks of turning up a virtual DR environment, including testing your plan, but this comes with a heavy price tag. I am fortunate enough, however, to have the full suite of NetApp tools at my disposal.

Snap Manager for Virtual Infrastructure (SMVI) is NetApp’s answer to vSphere VM backups for use with their storage arrays. SMVI requires its own license as well as some specific array-level licenses, such as SnapRestore, the applicable protocol license (NFS, FCP, iSCSI), SnapMirror (optional), and FlexClone. There are some specific instances in which FlexClone is not required, such as for NFS VM in-place VMDK restores. All by itself SMVI can be used instead of VCB or Ranger-type products to backup/restore VMs, volumes, or individual files within the guest VM OS. SnapMirror can be used in conjunction with SMVI to provide DR by sending backed up VM volumes offsite to another Filer.

Here is the backup process:

  1. Once a backup is initiated, a VMware snapshot is created via vCenter for each powered-on VM in a selected volume, or for each VM that is selected for backup. You can choose either method but volume backups are recommended.
  2. The VM snapshots preserve state and are used during restores. Windows application consistency can be achieved by using VMware’s VSS support for snapshots.
  3. Once all VMs are snapped, SMVI initiates a snapshot of the volume on the Filer.
  4. Once the volume snaps are complete, the VM snapshots are deleted in vCenter.
  5. If you have a SnapMirror in place for your backed up volumes, it is then updated by SMVI.

NetApp fully supports, and recommends, running SMVI on the vCenter server for simplicity. Setup is very straight forward and only requires the vCenter server/ credentials and the storage array names or IPs/ credentials. Best practice is to set up a dedicated user in vCenter as well as on the arrays for SMVI. The required vCenter permissions for this service account are detailed in the best practices guide in the references section at the bottom of this post.

SMVI Setup

Once SMVI can communicate with vCenter, you will see the VI datacenters, datastores and VMs on the inventory tab.

 

Backup configuration is simple. Name the job, choose the volume, specify the desired backup schedule, how long to keep the backups, and where to send alerts. If you’ll be using SnapMirror, make sure to check the “Initiate SnapMirror update” option.

By default the SMVI job alerts include a link to the job log but the listed address may be unreachable. In my case the links sent were to an IPv6 address even though IPv6 was disabled on the server. This can be changed by editing the smvi.override file in \Program Files (x86)\NetApp\SMVI\server\etc and adding the following lines:

smvi.hostname=vcenter.domain.com
smvi.address=10.10.10.10

Once you successfully run a backup job in SMVI you will be able to see the available snapshots for the volume on the source Filer. Note the SMVI snaps vs snaps used by the SnapMirror.

In my scenario, I am backing up the 2 LUNs called VMProd1_FC and VMProd2_FC, which exist in volumes called ESXLUN1 and 2, respectively. Both of these volumes have corresponding SnapMirrors configured between the primary and DR Filers.

 

A couple of things to keep in mind about timing:

  • It is a good idea to sync your SnapMirrors outside of SMVI which will reduce the time it takes when SMVI updates the mirrors. Just make sure to do this at a time other than when your SMVI jobs are running!
  • If you are using deduplication on these volumes (you should be), schedule it to run at a time when you are not running SMVI backups or syncing SnapMirrors.

Once SMVI successfully updates the SnapMirror, you will see the replicated snapshots on the destination side of the mirror as well. DR for your ESX environment is now in effect!

Testing the DR Plan

Here comes the fun part and where having SRM would be extremely helpful to automate most of this. Thanks to NetApp’s FlexClone technology we can test our DR plan without breaking the mirrors, so you could test whenever and as often as necessary without affecting production.

First step: create a FlexClone Volume of the replicated snapshot you want to be able to mount and test with. Choose the appropriate volume, then select Snapshot -> Clone from the Volumes menu. Important to note is that you must use a File space guarantee or the FlexClone creation will fail! This can be done via System Manager, CLI, or FilerView:


Your new volume will now be visible in the Filer’s volumes list, which also creates a corresponding LUN that you’ll notice is offline by default:


Bring the new LUN online and then you will be able to present it to your DR ESX hosts:


The LUN should instantly appear on the ESX hosts in your igroup, but if it does not, run a rescan on the appropriate storage adapter:


ESX sees the LUN; now it needs to be added to the cluster. Switch to the Storage page in vCenter and select “add storage.” Select the new disk and be sure to select “Assign a new signature” to the disk, or ESX will not be able to write to it! Only a new UUID is written; all existing data will be preserved.
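
If you prefer the CLI for this step, esxcfg-volume in ESX 4 can list and resignature snapshot/replica volumes; a hedged sketch (the label below is a placeholder):

# List volumes detected as snapshots or replicas, then resignature by label or UUID
esxcfg-volume -l
esxcfg-volume -r VMProd1_FC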


The new LUN will now be accessible to all ESX hosts in the cluster. Note the default naming convention used after adding the LUN:


This is where things get REALLY manual and you wish you had ponied up that $25k for SRM. Browse the newly mounted datastore and register each VM you need to test, one by one.
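
To take a little sting out of the manual registration, a loop from the ESX console can register every VMX it finds on the clone. The datastore path below assumes the default snap-xxxxxxxx naming shown above and is an assumption (use vmware-vim-cmd on classic ESX):

for vmx in /vmfs/volumes/snap-*-VMProd1_FC/*/*.vmx; do
  vim-cmd solo/registervm "$vmx"
done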


Now your organization’s policies take over as to how far you need to go to satisfy a successful DR test. Once the VM is registered it can be powered on, but if its counterpart is still running in production, disable its vNIC first. If you need to go further, then shut down the production VM, re-IP the DR clone and then bring it online. If you need to have users connect to the DR clone then there are other implications to consider such as DNS, DFS, ODBC pointers, etc. When your test is complete, power off all DR VM clones, dismount the datastore from within ESX, delete the FlexClone Volume on the DR Filer, bring the production VMs back up, check DNS, done. A beautiful thing, and best of all the prod->DR mirror is still in sync!

References:

SMVI 2.0 Best Practices

SMVI 2.0 Admin Guide

NetApp KB52524

Scripted install of vSphere (ESX4)

First and foremost download a good text editor with a UNIX mode like Notepad++. You MUST be in UNIX mode or your script will FAIL. You cannot use notepad or wordpad for editing these scripts!

A lot of the commands used previously in ESX3.5 have been deprecated or changed in ESX4 so this script took a bit of reworking. The only part that doesn’t work fully is the vswitch port group settings, but this script will definitely get you up and running. Please note that this only applies to ESX and not ESXi.

One of the biggest differences in ESX4 is that the service console now lives in a VMFS volume as a VM. Only the /boot and vmkcore partitions are actually carved out of a physical disk now. The rest of the partitioning is done from within the COS (Console OS) VM’s file system. The smallest disk drives available for HP servers these days are 146GB so I was fairly liberal with my file system layout. 50GB for the local storage VMFS volume where the COS VM will live, set to grow, 30GB for the COS VMDK itself and the rest you can see below in the partitioning section. You want to make sure that however you partition the COS file system that you allocate enough space in the VMDK to house it or your install will fail (virtualdisk esxconsole…).

VMware finally woke up and got rid of that awful FlexLM licensing scheme and has gone back to serial numbers, except now they can be purchased and used in a volume license capacity similar to how Microsoft does it. I now also like to execute all of the ESX configuration parts in the %post section instead of creating a separate SH script to compile and execute. I’m no longer using Altiris so this method is much simpler.

ESX uses a formula based on how much system RAM you have to decide how high to set the service console’s memory. 800MB is still the max amount and the host RAM sizing chart is accurate. Sean D figured out how to make this work in ESX4, thanks Sean! The esxcfg commands are essentially the same. Modify any of the sections that contain <domain.com> entries with your own servers. NTP, firewall, and AD-integration sections work perfectly. I do enable root SSH access, against best practice recommendations, so if that isn’t cool in your environment comment it out. I also left in the HP mgmt agent install section but haven’t tried it yet on ESX4.

Happy scripting.

########### ESX 4.0 KICKSTART SCRIPT ###############
# +——————————-+ #
# | ESX 4.0 install | #
# +——————————-+ #
# | Author: Peter Fine | #
# +——————————-+ #
#################################################

# root password (Replace with your own password hash)
rootpw --iscrypted $1$4…/

# Authentication
authconfig --enableshadow --enablemd5

# Bootloader options
bootloader --location=mbr

#Installation Method
install cdrom
#install url ftp://10.x.x.x/

# Network install type (change static IP, mask, gateway, and hostname)
network --device=vmnic0 --bootproto=static --ip=10.0.0.0 --netmask=255.255.255.0 --gateway=10.0.0.0 --hostname=ESX1.domain.com --addvmportgroup=0

# Regional Settings
keyboard us
timezone America/Chicago --utc

#reboot after script
reboot

# Partitioning

clearpart --alldrives --overwritevmfs
part /boot --fstype=ext3 --size=2048 --onfirstdisk
part None --fstype=vmkcore --size=110 --onfirstdisk
part Local_CUESX1 --fstype=vmfs3 --size=51200 --grow --onfirstdisk
virtualdisk esxconsole --size=30720 --onvmfs=Local_CUESX1
part swap --fstype=swap --size=2048 --onvirtualdisk=esxconsole
part /var/log --fstype=ext3 --size=4096 --onvirtualdisk=esxconsole
part /opt --fstype=ext3 --size=2048 --onvirtualdisk=esxconsole
part /tmp --fstype=ext3 --size=2048 --onvirtualdisk=esxconsole
part /home --fstype=ext3 --size=2048 --onvirtualdisk=esxconsole
part / --fstype=ext3 --size=10240 --grow --onvirtualdisk=esxconsole

# Licensing
accepteula
serialnum --esx=x-x-x-x-x

#+———————————–+
#| Begin %POST Section |
#+———————————–+
%post --interpreter=bash

# This script will configure the following items:
#
# 1. Set Service Console memory to 800MB
# 2. Configure all networking except VMotion
# 3. Add NTP servers, configure and start the NTP service
# 4. Set the proper frewall settings
# 5. Enable root SSH access
#
##********Command Switch Legend*********
##esxcfg-vswitch -a vSwitchX:[ports] -- adds new vSwitch
##esxcfg-vswitch -A [pg name] vSwitchX -- adds portgroup [name] to vSwitchX
##esxcfg-vswitch -L vmnicX vSwitchX -- links vmnicX to vSwitchX
##esxcfg-vswitch -v [vlan ID] -p [pg name] vSwitchX -- assigns VLAN ID X to [pg name] on vSwitchX
##
##

#+——————————————————-+
#| Set Service Console Memory to 800MB |
#+——————————————————-+

#backup esx.conf and grub.conf
/bin/cp /etc/vmware/esx.conf /etc/vmware/esx.conf.bak
/bin/cp /boot/grub/grub.conf /boot/grub/grub.conf.bak
#Perform copy/replace operations to increase default values
#ESX Host - 8GB RAM -> Default allocated Service Console RAM = 300MB
#ESX Host - 16GB RAM -> Default allocated Service Console RAM = 400MB
#ESX Host - 32GB RAM -> Default allocated Service Console RAM = 500MB
#ESX Host - 64GB RAM -> Default allocated Service Console RAM = 602MB
#ESX Host - 96GB RAM -> Default allocated Service Console RAM = 661MB
#ESX Host - 128GB RAM -> Default allocated Service Console RAM = 703MB
curMEM=`grep '^/boot/memSize' /etc/vmware/esx.conf | cut -f2 -d\"`
sed -i -e "s/boot\/memSize = \"${curMEM}\"/boot\/memSize = \"800\"/g" /etc/vmware/esx.conf
sed -i -e "s/uppermem $(( curMEM * 1024 ))/uppermem 819200/g" -e "s/mem=${curMEM}M/mem=800M/g" /boot/grub/grub.conf

# +—————————————————————————+
# | Create the Service Console |
# | vSwitch0 creation to assign Service Console |
# +—————————————————————————+

#Create and name VSwitch0 (change IP for your server)
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -A “Service Console” vSwitch0
#Link vSwitch0 to vmnic0 (pNIC0)
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
#Assign vswif interface and assign IP
esxcfg-vswif --add vswif0 --portgroup "Service Console" --ip=10.0.0.0 --netmask=255.255.255.0

# +—————————————————————————+
# | Create the Production0 vSwitch |
# | vSwitch1 creation and NIC assignments |
# +—————————————————————————+
#Create and name vSwitch1
esxcfg-vswitch -a vSwitch1:1016
esxcfg-vswitch -A VM_Servers vSwitch1
esxcfg-vswitch -v 110 -p VM_Servers vSwitch1
#Add pNICs 2 & 3 to vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A VM_Dev vSwitch1
esxcfg-vswitch -v 130 -p VM_Dev vSwitch1

# Restart vmware mgmt service for Virtual Center
service mgmt-vmware restart

# +——————————————————————+
# | Firewall Configuration |
# +——————————————————————+
echo "Now configuring firewall..."
# Open for SSH client
esxcfg-firewall -e sshClient

# Open for SSH Server
esxcfg-firewall -e sshServer

# Open for VCB
esxcfg-firewall -e VCB

# Open for iSCSI
esxcfg-firewall -e swISCSIClient

# Open for Update Manager
esxcfg-firewall -e updateManager

# Open for ntp out
esxcfg-firewall -e ntpClient

# Open for SNMP
esxcfg-firewall -e snmpd

# Open for CIM server services
esxcfg-firewall -e CIMSLP
esxcfg-firewall -e CIMHttpServer
esxcfg-firewall -e CIMHttpsServer

# Open for Virtual Center heartbeats
esxcfg-firewall -e vpxHeartbeats

# Open for FTP out
esxcfg-firewall -e ftpClient

# Open for Kerberos services outbound (should be handled by esxcfg-auth)
#esxcfg-firewall -o 464,tcp,out,KerberosPasswordChange
#esxcfg-firewall -o 88,tcp,out,KerberosClient
#esxcfg-firewall -o 749,tcp,out,KerberosAdm

# Open for HPSIM
esxcfg-firewall -o 2381,tcp,in,HPSIM

# Restart firewall to enable changes
service firewall restart

# +—————————————————————————–+
# | Active Directory authentication for SSH – /etc/krb5.conf |
# +—————————————————————————–+

# Configure Active Directory authentication (change both domains to yours)
esxcfg-auth --enablead --addomain=domain.com --addc=domain.com

# Add active directory users to the local database
useradd AD_User1
useradd AD_User2

# DNS configuration (add all ESX hosts and VC servers to HOSTS file, just for safety)
echo nameserver 10.0.0.0 >> /etc/resolv.conf
echo nameserver 10.0.0.0 >> /etc/resolv.conf
echo "Configuring hosts file"
echo "127.0.0.1 esx1.domain.com localhost" > /etc/hosts
echo "10.0.0.0 esx1.domain.com esx1" >> /etc/hosts
echo "10.0.0.0 esx2.domain.com esx2" >> /etc/hosts
echo "10.0.0.0 esx3.domain.com esx3" >> /etc/hosts
echo "10.0.0.0 esx4.domain.com esx4" >> /etc/hosts
echo "10.0.0.0 vc1.domain.com vc1" >> /etc/hosts
echo "10.0.0.0 vc2.domain.com vc2" >> /etc/hosts

# +——————————————————————+
# | NTP configuration |
# +——————————————————————+

# Backup ntp.conf and step-tickers file
mv /etc/ntp.conf /etc/ntp.conf.bak
mv /etc/ntp/step-tickers /etc/ntp/step-tickers.bak

# Add Servers to step-tickers
echo "dc2.domain.com" > /etc/ntp/step-tickers
echo "dc1.domain.com" >> /etc/ntp/step-tickers

# create ntp.conf
echo "restrict 127.0.0.1" > /etc/ntp.conf
echo "restrict dc1.domain.com mask 255.255.255.255 nomodify notrap noquery" >> /etc/ntp.conf
echo "restrict dc2.domain.com mask 255.255.255.255 nomodify notrap noquery" >> /etc/ntp.conf
echo "server dc1.domain.com" >> /etc/ntp.conf
echo "server dc2.domain.com" >> /etc/ntp.conf
echo "driftfile /var/lib/ntp/drift" >> /etc/ntp.conf

# Service restart
service ntpd restart

# Make ntp start at boot time
chkconfig --level 345 ntpd on

# Sync hardware clock
hwclock --systohc

#+———————————–+
#| Enable Root SSH Access |
#+———————————–+

/bin/cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sed -e 's/PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config > /etc/ssh/sshd_config.new
mv -f /etc/ssh/sshd_config.new /etc/ssh/sshd_config
service sshd restart

# SSH Banner Message…
# The echo >> /etc/banner are blank spaces for readability…

echo >> /etc/banner
echo This is a private and secure system. >> /etc/banner
echo Authorized users ONLY! >> /etc/banner
echo All logon attempts and activities are monitored. >> /etc/banner
echo Banner /etc/banner >> /etc/ssh/sshd_config

# +—————————————————————–+
# | Scripted HP Insight Manager Agent install |
# | Download agent tar to local tmp dir |
# | To download, first open the firewall |
# +—————————————————————–+
#
#cd /tmp
#/usr/sbin/esxcfg-firewall --allowOutgoing
#lwp-download http://10.0.0.0/hpmgmt-8.3.1-vmware4x.tgz /tmp/hpmgmt-8.3.1-vmware4x.tgz
#lwp-download http://0.0.0.0/hpagent/esx3/hpmgmt.conf /tmp/hpmgmt.conf
#
# extract tar file
#tar -zxvf hpmgmt-8.3.1-vmware4x.tgz
#
# execute auto install
#cd /tmp/hpmgmt/831
#./installvm831.sh --silent --inputfile /tmp/hpmgmt.conf
#/usr/sbin/esxcfg-firewall --blockOutgoing
#
# End of first script
#EOF1
#
# All of the above has been sent to /tmp/esxcfg.sh (not been executed yet)
# next step is to make /tmp/esxcfg.sh executable
#chmod +x /tmp/esxcfg.sh
#
# Backup of original rc.local file
#cp /etc/rc.d/rc.local /etc/rc.d/rc.local.bak
#
#
# edit rc.local to call esxcfg.sh
# and to make rc.local reset itself after calling
#
#cat >> /etc/rc.d/rc.local <<EOF
#cd /tmp
#/tmp/esxcfg.sh > /tmp/post_install.log
#mv -f /etc/rc.d/rc.local.bak /etc/rc.d/rc.local
#EOF
#
####################################### END OF KICKSTART SCRIPT #################################

Scripted install of ESX 3.5

I am currently working with ESX 4 (vSphere) but wanted to document how I performed scripted installations with ESX 3.5. Unfortunately a lot has changed in ESX4 Update 1 requiring a lot of this script to be modified to accomplish the same goal. I’ll go over those details in the next post. 

There are many ways to install ESX but one of the fastest and easiest ways to keep your environment uniform is through a scripted install. This can be done via free UDA solutions, costly management solutions such as Altiris, or simply with a script and the installation media. I’ll cover option C. This script can be used to accept input variables that dynamically set the server’s hostname and IP address, setup all networking configurations as well as install Altiris and HP management agents. Simply comment out or delete the sections you don’t want.

This script is tailored for HP servers but should work with other brands as well, just make sure to change the partition and mgmt agents sections. The script is well commented so should be very self explanatory from section to section. It has served me well allowing me to build and rebuild ESX servers uniformly and quickly.

########### ESX 3.5 KICKSTART SCRIPT ##############

#Installation Method
url --url ftp://anonymous:rdp@x.x.x.x/esx35

# Regional Settings
keyboard us
lang en_US
langsupport --default en_US
timezone America/Chicago

# Installation settings: no X windowing, no mouse, no firewall
skipx
mouse none
firewall --disabled

# root password (generate your own and paste below)
rootpw --iscrypted <$1$6…>

# Authconfig
auth --enableshadow --enablemd5

# BootLoader ( The user has to use grub by default )
bootloader --location=mbr

#Install or Upgrade then reboot
install
reboot

# Text Mode
text

# Network install type
# Create a default network for Virtual Machines
network --bootproto static --ip 0.0.0.0 --netmask 255.255.255.0 --gateway 0.0.0.0 --nameserver 0.0.0.0 --hostname ESX1 --addvmportgroup=0

#(experimental) auto-populated system fields from Altiris database
#network --bootproto static --ip=%NWSERVER% --gateway=%NWTREE% --netmask=%#NWCONTEXT% --nameserver= --hostname %#*"select replace([name],' ','') from computer #where computer_id={ID}"% --addvmportgroup=1 --vlanid=0

# Driver disks

# Load drivers

# ignoredisk to prevent installation on SAN
# This might NOT work for brands other than HP
# HP has its first disk on cciss/c0d0, LUNs are seen as sda, sdb, etc
ignoredisk --drives=sda,sdb,sdc,sdd,sde,sdf,sdg,sdh,sdi,sdj,sdk

# Bootloader options
bootloader --location=mbr --driveorder=cciss/c0d0

# Authentication
auth --enableshadow --enablemd5

# Partitioning
#/, boot, & swap on c0d0
#/var, /tmp, /home, & vmkcore on c0d1
# %hddevice% is replaced with the detected storage device name by the
# vmesx.sh script executed on the target server.
# To specify specific or custom device names simply replace %hddevice%
# with specific device names ( Ex cciss/c0d0 ).
clearpart --all --drives=cciss/c0d0,cciss/c0d1 --initlabel
part /boot --fstype ext3 --size 250 --ondisk cciss/c0d0
part / --fstype ext3 --size 10240 --ondisk cciss/c0d0
# Swap partition = 2x service console RAM, max=800MB but disk is cheap. 😉
part swap --size 2048 --ondisk cciss/c0d0
part /var --fstype vmfs3 --size 4096 --ondisk cciss/c0d1
part /tmp --fstype ext3 --size 4096 --ondisk cciss/c0d1
part /home --fstype ext3 --size 2048 --grow --ondisk cciss/c0d1
#vmkCore partition = 100MB per VMware recommendation
part None --fstype vmkcore --size 100 --ondisk cciss/c0d1

# Loading network configuration from /tmp/networkconfig. This file
# is created in the %PRE section

# Network Configurations
%include /tmp/networkconfig

# VMWare License options
vmaccepteula
vmlicense --mode=server --server=27000@vc1.domain.com --edition=esxfull --features=vsmp

%vmlicense_text

%packages
@base

# +————————————–+
# | Start ESX 3.5 install                |
# +————————————–+

#+—————–+
#| Begin %PRE      |
#+—————–+

%pre
# In the pre section we read a parameter from the commandline
# Example commandline in the UDA
# append ip=dhcp ksdevice=eth0  load_ramdisk=1 initrd=initrd.esx301 network ks=http://x.x.x.249/kickstart/TST31.cfg ESXIP=49
# Important is the ESXIP=49 part. An ESX host that should get 192.168.1.49 as IP, will get ESXIP=49
# Because the names of my esx hosts are always:  vmesx001, vmesx002, vmesx003, … vmesx049, I re-use the IP part for
# the dns name. Also, my vmotion interfaces will always have the same ending digit of the IP address, but just a
# different subnet

# I now read the commandline used to start the script. Each part is put in a variable. But I only need the ESXIP variable
# The = is used as a delimiter for the var

set -- `cat /proc/cmdline`
for I in $*; do case “$I” in *=*) eval $I;; esac; done

echo IP address found $ESXIP

# Create /tmp/networkconfig file which will be read in a later section
cat << EOF >> /tmp/networkconfig
network --device eth0 --bootproto static --ip x.x.x.${ESXIP} --netmask 255.255.255.0 --gateway x.x.x.254 --nameserver y.y.y.203 --nameserver y.y.y.204 --hostname vmesx0${ESXIP}.domain.com --addvmportgroup=0
EOF

# Because the ESXIP variable is lost when switching from %PRE to the next section
# I write them to a file. The call-script.sh file is purely used to store the variables between sections.
# Later on I will call /tmp/esx-post-script.sh using the host IP address, followed by the vmotion IP

cat << EOF1 >> /call-script.sh
echo Now start /tmp/esx-post-script.sh
/tmp/esx-post-script.sh x.x.26.${ESXIP} x.x.28.${ESXIP}
EOF1
chmod a+x /call-script.sh

#+———————-+
#| End PRE section      |
#+———————-+

#+———————————–+
#| Begin %POST --nochroot section    |
#+———————————–+
# In the %PRE section I’ve written the vars to /call-script.sh. But when the %POST section starts,
# a new filesystem is mounted. And the file is lost. There is this special -nochroot section that
# enables both filesystems and enables me to copy the file from one filesystem to the next filesystem.

%post --nochroot
cp /call-script.sh /mnt/sysimage/call-script.sh

#+—————–+
#| Begin %POST     |
#+—————–+
# Post install
%post
# Transfer the Altiris agent along with its config files
mkdir /tmp/altiris
cd /tmp/altiris
ftp -n <<EOF2
open %#*"select tcp_addr from aclient_prop where computer_id=0"%
user anonymous rdp
cd /dslib/osoem/altiris
binary
prompt
mget altiris*.i386.bin
mget adlagent.conf.custom
mget adlagent.conf.default
exit
EOF2

AltirisConfDir=/opt/altiris/deployment/adlagent/conf
# Create script to configure ESX and install adlagent (called by rc.local)
# Using echos to thwart post section script variable and command substitution
echo '#!/bin/bash' >> ./hpinstall.sh
echo '# Script to configure ESX and install adlagent.  Called from rc.local.' >> ./hpinstall.sh

echo '# RDP install log file' >> ./hpinstall.sh
echo 'logfile=/root/install.rdp.log' >> ./hpinstall.sh
echo '# Create vmfs filesystem' >> ./hpinstall.sh
echo 'vmfsqueuedir="/etc/vmware/vmfs3queue"' >> ./hpinstall.sh
echo 'filecount=$(ls -1A /vmfs/volumes | wc -l)' >> ./hpinstall.sh
echo '# Check for existing vmfs volumes' >> ./hpinstall.sh
echo 'if [ $filecount -eq 0 ]; then' >> ./hpinstall.sh
echo '   # No current vmfs volumes' >> ./hpinstall.sh
echo '   # Check vmfs fs creation queue in case ESX is waiting to build on next boot' >> ./hpinstall.sh
echo '   if [[ -s $vmfsqueuedir ]]; then' >> ./hpinstall.sh
echo '      # Items in queue' >> ./hpinstall.sh
echo '      echo vmfs fs queue contains data - no vmfs created >>$logfile' >> ./hpinstall.sh
echo '   else' >> ./hpinstall.sh
echo '      # Nothing in queue' >> ./hpinstall.sh
echo '      # All clear to go ahead and create vmfs fs' >> ./hpinstall.sh
echo '      # Create vmfs fs' >> ./hpinstall.sh
echo '      vmfsdevice=`fdisk -l | grep %hddevice% | grep fb | cut -d" " -f1`' >> ./hpinstall.sh
echo '      partnum=${vmfsdevice:(-1)}' >> ./hpinstall.sh
echo '      vmfspart=`esxcfg-vmhbadevs | grep %hddevice% | cut -d" " -f1`' >> ./hpinstall.sh
echo '      echo "Creating vmfs fs on $vmfspart:$partnum" >>$logfile' >> ./hpinstall.sh
echo '      vmkfstools -C vmfs3 -S localvmfs $vmfspart:$partnum' >> ./hpinstall.sh
echo '   fi' >> ./hpinstall.sh
echo 'else' >> ./hpinstall.sh
echo '   # vmfs volumes exist' >> ./hpinstall.sh
echo '   echo vmfs fs volumes exist - no vmfs created >>$logfile' >> ./hpinstall.sh
echo 'fi' >> ./hpinstall.sh

cat >> ./hpinstall.sh << EOF1
# Firewall Configuration
# Enable adlagent  and file transfer ports
# You need to set a static port (“4300” in this example) for file transfer in
# the deployment console under Tools->Options->Global
esxcfg-firewall --openPort 402,tcp,out,adlagent
esxcfg-firewall --openPort 4300,tcp,out,adlagentFileTransfer

# Install Altiris Adlagent
cd /tmp/altiris
chmod +x altiris-adlagent*.bin
./altiris-adlagent*.i386.bin 1>>/root/install.rdp.log 2>>/root/install.rdp.log
# Install adlagent custom configuration
if [ -e adlagent.conf.custom ]; then
   mv $AltirisConfDir/adlagent.conf $AltirisConfDir/adlagent.conf.bak
   cp -f adlagent.conf.custom $AltirisConfDir/adlagent.conf
elif [ -e adlagent.conf.default ]; then
   mv $AltirisConfDir/adlagent.conf $AltirisConfDir/adlagent.conf.bak
   sed -e "s/0.0.0.0/%#*"select tcp_addr from aclient_prop where computer_id=0"%/g" adlagent.conf.default > $AltirisConfDir/adlagent.conf
fi
# Reset adlagent to pick up config if necessary
/etc/init.d/adlagent stop
/etc/init.d/adlagent start

# Reset rc.local to original
mv -f /etc/rc.d/rc.local.sav /etc/rc.d/rc.local
EOF1

# make hpinstall.sh executable
chmod +x /tmp/altiris/hpinstall.sh

# save a copy of rc.local
cp /etc/rc.d/rc.local /etc/rc.d/rc.local.sav

# add hpinstall.sh to rc.local
cat >> /etc/rc.d/rc.local << EOF
cd /tmp/altiris
/tmp/altiris/hpinstall.sh
EOF

# +————————————–+
# + Download script that is the same for +
# + each host                            +
# +————————————–+

lwp-download http://x.x.x.249/scripts/esx-post-script.sh /tmp/esx-post-script.sh
chmod a+x /tmp/esx-post-script.sh

# +——————————————+
# + Call script containing vars              +
# +——————————————+
echo Now running /call-script.sh
/call-script.sh

########### ESX POST KICKSTART SCRIPT ##############
#!/bin/sh

#+--------------------------------------------------------------------------+
#| Just a test to show upon starting script that parameters have been received |
#| from the %PRE section through /call-script.sh                               |
#+--------------------------------------------------------------------------+

#ESXHOSTIP=$1
#ESXVMOTIONIP=$2

#echo Found ESX Host IP = $1
#echo Found VMotion IP  = $2

# +--------------------+
# | Deploy ESX patches |
# +--------------------+

# download esx-autopatch.pl script
#lwp-download http://x.x.x.249/patches/3.0.1/esx-autopatch.pl /root/esx-autopatch.pl

# call esx-autopatch.pl script
#perl /root/esx-autopatch.pl

# +---------------------------------------------------------------------------+
# | Creation of /tmp/esxcfg.sh file. This file contains commands that         |
# | can only be executed when the VMkernel is loaded                          |
# +---------------------------------------------------------------------------+

cat > /tmp/esxcfg.sh <<EOF1
#!/bin/sh

#-----Set Service Console Memory to 512M-----------------------------------------
#backup esx.conf and grub.conf
/bin/cp /etc/vmware/esx.conf /etc/vmware/esx.conf.bak
/bin/cp /boot/grub/grub.conf /boot/grub/grub.conf.bak
#editing esx.conf and grub.conf
/bin/sed -i -e 's/272/512/' /etc/vmware/esx.conf
/bin/sed -i -e 's/272M/512M/' /boot/grub/grub.conf
/bin/sed -i -e 's/277504/523264/' /boot/grub/grub.conf
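# The three sed edits above assume a stock ESX 3.x build where the Service
# Console defaults to 272MB (boot.memSize in esx.conf, plus the matching
# 272M/277504 entries in grub.conf); adjust the search patterns if your
# defaults differ.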

# +---------------------------------------------------------------------------+
# | Create the Service Console                                                |
# | vSwitch0 creation to assign Service Console                               |
# +---------------------------------------------------------------------------+

#Create and name vSwitch0
esxcfg-vswitch -a vSwitch0:32
esxcfg-vswitch -A "Service Console" vSwitch0
#Link vSwitch0 to vmnic0 (pNIC0)
esxcfg-vswitch -L vmnic0 vSwitch0
#Create vswif interface and assign IP
esxcfg-vswif -a vswif0 -p "Service Console" -i 0.0.0.0 -n 255.255.255.0
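# Quick check (manual): esxcfg-vswif -l should now list vswif0 on the
# "Service Console" port group. The 0.0.0.0 above is a placeholder; substitute
# the host's real management IP before using this in production.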

# +---------------------------------------------------------------------------+
# | Create the Production0 vSwitch                                            |
# | vSwitch1 creation and NIC assignments                                     |
# +---------------------------------------------------------------------------+
#Create and name vSwitch1
esxcfg-vswitch -a vSwitch1:1016
esxcfg-vswitch -A Production0 vSwitch1
#Add pNICs 1, 3, 4, and 5 to vSwitch1 (syntax: esxcfg-vswitch -L <vmnic> <vSwitch>)
esxcfg-vswitch -L vmnic1 vSwitch1
#esxcfg-vswitch -L vmnic3 vSwitch1
#esxcfg-vswitch -L vmnic4 vSwitch1
#esxcfg-vswitch -L vmnic5 vSwitch1
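# To confirm the layout, esxcfg-vswitch -l lists each vSwitch with its port
# groups and uplinks (vSwitch0 with vmnic0, vSwitch1 with vmnic1 here).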

# Restart vmware mgmt service for Virtual Center
service mgmt-vmware restart

# +---------------------------------------------------------------------------+
# | **Begin Altiris Sample Portion**                                          |
# +---------------------------------------------------------------------------+
# During auto-install a vSwitch0 is created, but I wanted a different naming scheme
#esxcfg-vswif -d vswif0
#esxcfg-vswitch --del-pg='Service Console' vSwitch0
#esxcfg-vswitch -d vSwitch0
#
# Creation of the Service Console
#esxcfg-vswitch -a vsw-cos
# Connect physical nics vmnic0 & vmnic6 to vsw-cos
#esxcfg-vswitch -L vmnic0 vsw-cos
#esxcfg-vswitch -L vmnic6 vsw-cos
# Connect portgroups to vsw-cos
#esxcfg-vswitch --add-pg='Service Console' vsw-cos
# Next assign the IP address
# ===========================================================================
#esxcfg-vswif -a vswif0 -p 'Service Console' -i $ESXHOSTIP -n 255.255.255.0
# ===========================================================================
# +---------------------------------------------------------------------+
# | Creating VMotion kernel vSwitch                                     |
# +---------------------------------------------------------------------+
#
#esxcfg-vswitch -a vsw-vmotion
#esxcfg-vswitch -L vmnic2 vsw-vmotion
#esxcfg-vswitch -L vmnic8 vsw-vmotion
#esxcfg-vswitch --add-pg=vmotion vsw-vmotion
# Assign an IP address to the vmotion interface.
# This doesn't enable vmotion yet!!
# ===========================================================================
#esxcfg-vmknic -a "vmotion" -i $ESXVMOTIONIP -n 255.255.255.0
# ===========================================================================
# Setting vmkernel default gateway
#esxcfg-route x.x.x.254
#
# +-------------------------------------------------------------------+
# | Creating portgroups / VLANs                                       |
# | First create vsw-vms01 which will be used by all VMs              |
# +-------------------------------------------------------------------+
#
#esxcfg-vswitch -a vsw-vms01
#
#esxcfg-vswitch -L vmnic1 vsw-vms01
#esxcfg-vswitch -L vmnic3 vsw-vms01
#esxcfg-vswitch -L vmnic7 vsw-vms01
#esxcfg-vswitch -L vmnic9 vsw-vms01
#
# Create portgroups and add them to vSwitch vsw-vms01
#esxcfg-vswitch --add-pg=VLAN0019 vsw-vms01
#
#
# Now we connect the VLAN ID to a portgroup
#esxcfg-vswitch -v 19 -p VLAN0019 vsw-vms01
#
# +------------------------------------------------------------------+
# | **END Altiris Sample portion**                                   |
# +------------------------------------------------------------------+

# +------------------------------------------------------------------+
# | Firewall Configuration                                           |
# +------------------------------------------------------------------+

# Open for SSH client
esxcfg-firewall -e sshClient

# Open for SSH Server
esxcfg-firewall -e sshServer

# Open for ntp out
esxcfg-firewall -e ntpClient

# Open for SNMP
esxcfg-firewall -e snmpd

# Open for FlexLM out
esxcfg-firewall -e LicenseClient

# Open for CIM server services
esxcfg-firewall -e CIMSLP
esxcfg-firewall -e CIMHttpServer
esxcfg-firewall -e CIMHttpsServer

# Open for Virtual Center heartbeats
esxcfg-firewall -e vpxHeartbeats

# Open for FTP out
esxcfg-firewall -e ftpClient

# Open for Kerberos services outbound (should be handled by esxcfg-auth)
#esxcfg-firewall -o 464,tcp,out,KerberosPasswordChange
#esxcfg-firewall -o 88,tcp,out,KerberosClient
#esxcfg-firewall -o 749,tcp,out,KerberosAdm

# Open for AAM Client (enabled by vpxa client)
#esxcfg-firewall -e AAMClient

# Open for HPSIM
esxcfg-firewall -o 2381,tcp,in,HPSIM

# Restart firewall to enable changes
service firewall restart
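# Note: esxcfg-firewall -e enables a predefined named service, while -o (used
# for HPSIM above) opens an arbitrary port,protocol,direction,name tuple for
# services the firewall does not know by name.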

# +------------------------------------------------------------------+
# | Active Directory authentication for SSH                          |
# +------------------------------------------------------------------+

# Configure Active Directory authentication
esxcfg-auth --enablead --addomain=domain.com --addc=domain.com

# Add users to the local database that also exist in Active Directory
useradd domainuser1
useradd domainuser2
useradd domainuser3
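# These local accounts deliberately get no local password; once esxcfg-auth
# has enabled AD above, PAM authenticates them against the domain controllers.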

# DNS configuration
echo nameserver 0.0.0.0 >> /etc/resolv.conf
echo nameserver 0.0.0.0 >> /etc/resolv.conf

# +------------------------------------------------------------------+
# | NTP configuration                                                |
# +------------------------------------------------------------------+

# Backup ntp.conf and step-tickers file
mv /etc/ntp.conf /etc/ntp.conf.bak
mv /etc/ntp/step-tickers /etc/ntp/step-tickers.bak

# Add Servers to step-tickers
echo "dc2.domain.com" > /etc/ntp/step-tickers
echo "dc1.domain.com" >> /etc/ntp/step-tickers

# create ntp.conf
echo "restrict 127.0.0.1" > /etc/ntp.conf
echo "restrict dc1.domain.com mask 255.255.255.255 nomodify notrap noquery" >> /etc/ntp.conf
echo "restrict dc2.domain.com mask 255.255.255.255 nomodify notrap noquery" >> /etc/ntp.conf
echo "server dc1.domain.com" >> /etc/ntp.conf
echo "server dc2.domain.com" >> /etc/ntp.conf
echo "driftfile /var/lib/ntp/drift" >> /etc/ntp.conf

# Service restart
service ntpd restart

# Make ntpd start at boot time
chkconfig --level 345 ntpd on

# Sync hardware clock
hwclock --systohc
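# hwclock --systohc writes the NTP-corrected system time back to the hardware
# clock, so the host keeps reasonable time across reboots before ntpd resyncs.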

# +-----------------------------------------------------------------+
# | Scripted HP Insight Manager Agent install                       |
# | Download agent tar to local tmp dir                             |
# | To download, first open the firewall                            |
# +-----------------------------------------------------------------+
#
#cd /tmp
#/usr/sbin/esxcfg-firewall --allowOutgoing
#lwp-download http://0.0.0.0/repository/hpmgmt-7.9.1-vmware3x.tgz /tmp/hpmgmt-7.9.1-vmware3x.tgz
#lwp-download http://0.0.0.0/hpagent/esx3/hpmgmt.conf /tmp/hpmgmt.conf
#
# extract tar file
#tar -zxvf hpmgmt-7.9.1-vmware3x.tgz
#
# execute auto install
#cd /tmp/hpmgmt/791
#./installvm791.sh --silent --inputfile /tmp/hpmgmt.conf
#/usr/sbin/esxcfg-firewall --blockOutgoing
#
# Unload VMFS2 drivers from kernel to increase LUN speed
#mv /etc/init.d/vmware /etc/init.d/vmware.old
#sed -e "s/echo \"vmfs2 vmfs2\"/\#echo \"vmfs2 vmfs2\"/g" /etc/init.d/vmware.old > /etc/init.d/vmware
#chmod 744 /etc/init.d/vmware
#
# End of first script
EOF1
#
# All of the above has been sent to /tmp/esxcfg.sh (not executed yet);
# next step is to make /tmp/esxcfg.sh executable
chmod +x /tmp/esxcfg.sh
#
# Backup of original rc.local file
cp /etc/rc.d/rc.local /etc/rc.d/rc.local.bak
#
# edit rc.local to call esxcfg.sh
# and to make rc.local reset itself after calling
cat >> /etc/rc.d/rc.local <<EOF
cd /tmp
/tmp/esxcfg.sh
mv -f /etc/rc.d/rc.local.bak /etc/rc.d/rc.local
EOF
#
##############################END OF KICKSTART SCRIPT#########################