Configuration Requirements
These requirements must be satisfied to support this configuration:
- The round-trip latency on both the IP network and the inter-cluster network between the two VPLEX clusters must not exceed 5 milliseconds for a non-uniform host access configuration, and must not exceed 1 millisecond for a uniform host access configuration. The IP network supports the VMware ESXi hosts and the VPLEX Management Console; the link between the two VPLEX clusters can be Fibre Channel or IP. With VPLEX GeoSynchrony 5.2 and later and ESXi 5.5 and later (using NMP or PowerPath), non-uniform host access is supported up to 10 milliseconds round-trip time. For details on supported configurations, see the latest VPLEX EMC Simple Support Matrix (ESSM) on support.emc.com.
- The ESXi hosts in both data centers must have a private network on the same IP subnet and broadcast domain.
- Any IP subnet used by a virtual machine must be accessible from ESXi hosts in both data centers. This matters because clients accessing virtual machines on either side must continue to function after any VMware HA-triggered virtual machine restart.
- The data storage locations, including the boot devices used by the virtual machines, must be active and accessible from ESXi hosts in both data centers.
- vCenter Server must be able to connect to ESXi hosts in both data centers.
- The VMware datastores for the virtual machines running in the ESX cluster must be provisioned on Distributed Virtual Volumes.
- The HA cluster must not exceed 32 hosts.
- The auto-resume configuration option for VPLEX Cross-Connect consistency groups must be set to true.
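The latency rule in the first requirement can be expressed as a simple check. This is a minimal sketch in Python; the function name and arguments are illustrative, and the limits are the ones stated above (5 ms non-uniform, 1 ms uniform, 10 ms non-uniform with GeoSynchrony 5.2+/ESXi 5.5+):

```python
def rtt_supported(rtt_ms, uniform_access, geosynchrony_52_plus=False):
    """Return True if the measured round-trip time (ms) is within the
    supported limit for the given host-access configuration.
    Illustrative helper, not part of any VPLEX or VMware API."""
    if uniform_access:
        # Uniform (cross-connect) host access: 1 ms limit
        return rtt_ms <= 1.0
    # Non-uniform host access: 5 ms, or 10 ms with GeoSynchrony 5.2+
    # and ESXi 5.5+ (NMP or PowerPath)
    limit = 10.0 if geosynchrony_52_plus else 5.0
    return rtt_ms <= limit

print(rtt_supported(4.2, uniform_access=False))  # True
print(rtt_supported(4.2, uniform_access=True))   # False
```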
Notes:
- The ESXi hosts forming the VMware HA cluster can be distributed across the two sites. The HA cluster can restart a virtual machine on a surviving ESXi host, and that host accesses the Distributed Virtual Volume through the storage paths at its own site.
- VPLEX 5.0 and later versions and ESXi 5.x/6.0 have been tested in this configuration with the VPLEX Witness.
Migration of an ESXi host into the Stretched Cluster:
| S/N | Steps |
| --- | --- |
| 1 | vMotion the VMs from the identified host to other hosts |
| 2 | Put the identified host in maintenance mode |
| 3 | Create a standard vSwitch (VSS) on the host |
| 4 | Migrate the management and vMotion virtual adapters and all physical adapters to the VSS |
| 5 | Remove the host from the VDS (Nexus 1000v) |
| 6 | Remove the host from the cluster |
| 7 | Change the management and vMotion IP addresses for the Site 2 host |
| 8 | Modify the management and vMotion VLANs (e.g., 736 and 735) for the Site 2 host in UCSM |
| 9 | Modify the packet and control VLANs (e.g., 741 and 742) for the Site 1 host in UCSM |
| 10 | For the Site 2 host, add the host record to the hosts file on the Stretched Cluster vCenter Server |
| 11 | Add the host into the new vCenter |
| 12 | Migrate the management and vMotion virtual adapters and all physical adapters to the VDS |
| 13 | Add the string value to the ESX host |
| 14 | Remove the host from maintenance mode |
| 15 | Check cross-site vMotion |
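Step 10 above, adding a static name-resolution record for the Site 2 host on the Stretched Cluster vCenter server, amounts to appending an `IP  FQDN  short-name` line to the hosts file. A minimal sketch; the function, IP address, and host names are illustrative assumptions:

```python
def hosts_file_entry(ip, fqdn, short_name=None):
    """Build a hosts-file line such as
    '10.20.30.41<TAB>esx01.site2.example.com esx01'.
    Illustrative helper; write the result to the vCenter server's
    hosts file (e.g. /etc/hosts on an appliance) as an admin."""
    names = [fqdn] + ([short_name] if short_name else [])
    return f"{ip}\t{' '.join(names)}"

print(hosts_file_entry("10.20.30.41", "esx01.site2.example.com", "esx01"))
```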
Migration of VMs into Stretched Cluster:
| S/N | Prerequisites |
| --- | --- |
| 1 | Check that all the VMDKs for the VM reside on the same datastore |
| 2 | Note down the VLAN details for the VMs |
| 3 | Note down the datastore location for the VMDKs of the VMs |
| 4 | Note down the naa.id for the datastores |
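Prerequisite 1 can be checked mechanically from the usual `[datastore] folder/disk.vmdk` path form that vSphere reports for virtual disks. A minimal sketch, with illustrative function and datastore names:

```python
def datastore_of(vmdk_path):
    """Extract the datastore name from a '[datastore] folder/disk.vmdk'
    style path. Illustrative helper for a pre-migration check."""
    return vmdk_path.split("]", 1)[0].lstrip("[").strip()

def all_on_same_datastore(vmdk_paths):
    """Return True if every VMDK path names the same datastore."""
    return len({datastore_of(p) for p in vmdk_paths}) == 1

print(all_on_same_datastore([
    "[DS_VPLEX_01] vm1/vm1.vmdk",
    "[DS_VPLEX_01] vm1/vm1_1.vmdk",
]))  # True
```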
| S/N | Steps |
| --- | --- |
| 1 | Shut down all the VMs running on the datastore to be migrated |
| 2 | Remove all the VMs from inventory |
| 3 | Disable SIOC if enabled |
| 4 | Check whether the datastore is used as an HA heartbeat datastore |
| 5 | Remove the datastore from the datastore cluster |
| 6 | Unmount the datastore from all the hosts |
| 7 | Ask the storage team to assign the same LUN to the stretched cluster |
| 8 | Rescan the HBAs and add the datastore to the stretched cluster |
| 9 | Register the VMs to the cluster |
| 10 | Change the VLAN for the network adapter on all the VMs and power on the VMs |
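Step 10 above, re-mapping each VM network adapter to the stretched-cluster port group carrying its noted VLAN (recorded in prerequisite 2), can be sketched as a simple lookup. The mapping table and port-group names here are entirely illustrative assumptions for one environment:

```python
# Illustrative mapping from VLAN ID to the stretched-cluster port group;
# fill this in from your own environment before migrating.
VLAN_TO_PORTGROUP = {
    100: "SC-PG-VLAN100",
    200: "SC-PG-VLAN200",
}

def new_portgroup(vlan_id):
    """Return the stretched-cluster port group for a VM's noted VLAN,
    failing loudly if no mapping was prepared for that VLAN."""
    if vlan_id not in VLAN_TO_PORTGROUP:
        raise ValueError(f"no stretched-cluster port group defined for VLAN {vlan_id}")
    return VLAN_TO_PORTGROUP[vlan_id]

print(new_portgroup(100))  # SC-PG-VLAN100
```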