High availability should be a goal for any environment, but when you are the sole sysadmin and have to balance a second job role at the same time, it is key to protecting your sanity!
Here is what is involved in creating a highly available Exchange 2010 environment. The goal is to allow one server to fail completely without disrupting clients or losing any email.
- Make sure you are using Server 2008 R2 Enterprise or Datacenter, or Server 2012 with Exchange 2010 SP3 (a DAG relies on failover clustering, which 2008 R2 Standard does not include). Create at least two servers holding the mailbox role. If you desire, these servers can also host the CAS and Hub Transport roles.
- Create a Database Availability Group.
- Create a CAS array. Choose a DNS name for the CAS array, such as casarray.domain.local.
- If you didn't start out using the CAS array (as I didn't), you will need a method of updating the profile on each client so it points at the CAS array. I created a profile using the Office setup wizard and then pushed it out to clients using Group Policy Preferences (GPP).
- Assign the CAS array object to each database so that newly created profiles will point to the right place.
- Make sure that the appropriate certificates are installed on each CAS server. A SAN (subject alternative name) certificate is very helpful here, since a single certificate can cover multiple hostnames: each server's own name plus the shared CAS array name.
- Set up load balancing for the CAS array.
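For reference, the DAG and CAS array steps above can be sketched in the Exchange Management Shell roughly as follows. This is only a sketch: the server names (EX1, EX2, witness FS1), DAG name, database name, and AD site name are placeholders for your own values.

```powershell
# Create the DAG and add both mailbox servers (names are placeholders)
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -WitnessDirectory C:\DAG1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX2

# Add a replicated copy of each database to the second server
Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer EX2

# Create the CAS array and point each database at it, so new
# Outlook profiles are created against the array name
New-ClientAccessArray -Name casarray -Fqdn casarray.domain.local -Site "Default-First-Site-Name"
Set-MailboxDatabase DB1 -RpcClientAccessServer casarray.domain.local
```

Repeat the database copy and RpcClientAccessServer steps for each mailbox database you host.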
Load Balancing your CAS array
The standard approach is to use Windows Network Load Balancing. However, Microsoft no longer recommends this.
The key limitation that resonated with me is that adding a new NLB node disrupts the existing cluster, which makes it very hard to build out an NLB cluster without downtime. There is a bigger problem for small environments: failover clustering (which a DAG relies on) cannot coexist with Network Load Balancing on the same server. If your mailbox and client access roles share the same two servers, Windows NLB is off the table entirely; you would need four servers instead of two, and many smaller environments don't have the server resources or licenses for that. A separate load balancer lets you get by with just two servers running combined mailbox/CAS/hub transport roles.
After doing some research on the available load-balancing options, I settled on ZenLoadBalancer. There are several (resolvable) roadblocks you can run into during setup, none of which I found easy to research online, at least not all in one place.
- The Hyper-V support is not included in the default install.
- There is little up-to-date documentation.
- Clustering doesn't work in a default install on Hyper-V.
- You will need a minimum of 6 IP addresses (each ZenLB VM needs a temporary install IP and a permanent IP, and the cluster and the farm each need a virtual IP).
Here is the setup process:
- Download ZenLoadBalancer open source 3.0.3.
- Create two Hyper-V VMs. Add two NICs to each: a legacy network adapter and the standard Hyper-V network adapter.
- On each NIC, enable MAC address spoofing. (For a highly available VM managed by SCVMM 2012, I had to use Failover Cluster Manager to reach the MAC address spoofing option; on my second Hyper-V server, which runs 2008 R2, I could access the setting right from SCVMM.)
- Install the ZenLB distro on each VM. During install, choose a temporary IP; assign it to the legacy network adapter, not the full-speed Hyper-V NIC.
- Connect to each VM console and log in as root, then run aptitude update followed by aptitude upgrade. Answer yes to each prompt, but choose to keep the existing configuration file when asked.
- Connect to the web console for each server at its temporary IP and log in with the default credentials (admin/admin).
- Go to Settings | Interfaces and enable the newly visible NIC; this second NIC is the Hyper-V adapter that supports full gigabit+ networking. Add a static IP to each: this is the permanent IP dedicated to each individual ZenLB instance.
- On one server, set up a virtual IP and then set up clustering. Add the second server to the cluster. It should copy over the shared virtual IP.
- Add a second virtual IP and create a new "farm" named ExchangeCAS with the profile L4xNAT.
- Edit the farm and change the NAT type to NAT (from the default DNAT).
- Add the IP of each of your real servers. You need to fill in the priority and weight fields.
Test your new configuration by updating the hosts file (C:\Windows\System32\drivers\etc\hosts) on a few Windows clients so the CAS array name resolves to the farm's virtual IP.
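Assuming the casarray.domain.local name from earlier and a hypothetical farm virtual IP of 192.168.1.50, the hosts-file entry on a test client would look like:

```
192.168.1.50    casarray.domain.local
```

Once Outlook on the test clients connects normally through the load balancer, you can update the real DNS record for the CAS array to match.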