RedNectar’s Hyperflex Pre-Install Checklist (Updated)


Completing Cisco’s Pre-installation Checklist for Cisco HX Data Platform will capture all the information you need, but not necessarily in the order you need it, and for a single site only. So, I decided to write a version that gets the information in the order you need it and in a format that’s easier to expand into a multi-site or stretched-cluster deployment. If you are planning to install a Stretched Cluster, make sure Site#1 and Site#2 have identical configurations where indicated.

Logical Planning for the installation

Task 1:               Planning

Before embarking on the checklist, it is important that you know how your Hyperflex installation is going to integrate into your existing network, and what new resources (VLANs, IP subnets) you will need to find.

VMware vCenter

Hyperflex cannot operate without VMware vCenter. You may use an existing vCenter installation or vCenter can be included as part of the Hyperflex installation. If it is to be included in the Hyperflex installation process, best practice is that vCenter be installed on an ESXi host that is NOT part of the Hyperflex cluster (to avoid having the management application hosted on the managed system). If absolutely required, vCenter can be installed on the Hyperflex cluster. Recommended options are bolded.

Fabric Interconnects

Typically, Hyperflex clusters are installed using an independent pair of Fabric Interconnect switches. If the Hyperflex cluster is to be installed and integrated with an existing UCS deployment, the Fabric Interconnect switches will be rebooted during the Hyperflex installation.

VLAN/vSwitch and Subnet planning

Your Hyperflex Installation requires that you plan for additional subnets and VLANs. Depending on the final design, this could be as many as four new subnets and four new VLANs. The following diagrams offer two slightly different approaches to VLAN planning:

HX Install Design #1: Separate OOB and HX Mgmt VLAN/Subnets

HX Install Design #2: Combined OOB/HX Mgmt VLAN/Subnets (recommended)

To determine the number of IP addresses and new VLANs required, use the following guidelines. In all the following calculations, n is the number of Nodes in the Hyperflex Cluster.

vCenter Subnet is assumed to be an existing subnet. If a new vCenter is being installed as part of this installation, IP addresses will need to be allocated for the vCenter appliance and its default gateway.

Total IPs required = 2

·       1 x Default GW IP

·       1 x vCenter Appliance VM

Installer VM Subnet is also assumed to be an existing subnet. If practical, make provision for the HX Installer VM to be allocated an IP address on the hx-inband-mgmt subnet. This ensures that the installer has full access to the ESXi hosts, and (in design #2) to UCS Manager. The Installer MUST be able to access vCenter, UCS Manager and the ESXi hosts.

Total IPs required = 2

·       1 x Default GW IP

·       1 x HX installer VM

UCS OOB Subnet is the external UCS OOB management subnet – you probably have a VLAN set aside for this already, and it is where the UCS Manager Primary, Secondary and Cluster IPs live. These are configured on the Fabric Interconnects during the initial setup. This subnet is also where the ext-mgmt IP Pool for OOB CIMC is sourced, which will require at least n addresses. This subnet is a mandatory requirement.

Total IP addresses required = 4 + n

·       1 x UCS Manager Primary (FI-A)

·       1 x UCS Manager Secondary (FI-B)

·       1 x UCSM Cluster IP

·       1 x Default GW IP

·       n x CIMC Pool addresses (one per HX Server)

Hyperflex inband management VLAN and Subnet is shown in the diagram as hx-inband-mgmt subnet/VLAN/vSwitch. You may already have a management VLAN that you wish to use in this situation. Indeed, it may even be the same subnet as the UCS OOB Subnet (as shown in design #2 above), in which case the number of subnets required is reduced by one. This VLAN MUST be configured before the install on the UPSTREAM SWITCHES and on the Port Channel between the upstream switches.

Total IP addresses required = 2 + 2 x n

·       1 x Management Cluster IP

·       1 x Default GW IP

·       2 x n (vmnic0 and Controller VM per ESXi host)

·       [Optional] Consider reserving an IP address for the Installer VM on this subnet.

Hyperflex Storage VLAN and Subnet is shown in the diagram as hx-storage-data subnet/VLAN/vSwitch. This is an isolated Subnet/VLAN/vSwitch. Supplying a default gateway is optional unless your backup system accesses the storage directly. This VLAN MUST be configured before the install on the UPSTREAM SWITCHES and on the Port Channel between the upstream switches, and requires an MTU of 9000.

Total IPs required = 2 + 2 x n (or 1 + 2 x n if no default gateway IP required)

·       1 x Data Cluster IP

·       1 x Default GW IP

·       2 x n (vmnic1 and Controller VM per ESXi host)

vMotion VLAN (and subnet) is shown in the diagram as hx-vmotion VLAN/vmk. No IPs are required at installation, but they can be added post-install, and doing so is recommended. This VLAN MUST be configured before the install on the UPSTREAM SWITCHES and on the Port Channel between the upstream switches.

Total IPs required = n (or 1 + n if default gateway IP required)

·       1 x Default GW IP (Optional)

·       n (vmnic2 per ESXi host)

VM Network VLAN is a VLAN that will be added to a standard VMware vSwitch which will be called vswitch-hx-vm-network.  This item exists more or less as a place-marker to remind you that you will need to plumb the VLANs used by the guest VMs to the HX Cluster. One VLAN is configured during the install process for this purpose, and no doubt you will be allocating IPs to VMs as they are added later. Additional VLANs can be easily added post-install and a section is provided later in this document to define any additional VLANs you might want to make available to your VMs.
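To sanity-check your subnet sizing, here is a minimal shell sketch (the node count n=4 is just an example value, and the per-subnet formulas simply restate the guidelines above) that prints the number of IPs each subnet needs:

n=4                                            # number of nodes in the HX cluster (example value)
echo "UCS OOB (ext-mgmt):   $((4 + n)) IPs"    # FI-A + FI-B + UCSM Cluster IP + GW + n x CIMC
echo "hx-inband-mgmt:       $((2 + 2*n)) IPs"  # Mgmt Cluster IP + GW + (ESXi + Controller VM) per node
echo "hx-storage-data:      $((1 + 2*n)) IPs"  # Data Cluster IP + (ESXi + Controller VM) per node (GW optional)
echo "hx-vmotion:           $((n)) IPs"        # one vMotion vmk per node (GW optional)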

Task 2:               Fabric Interconnects (FIs) – initial console config

The first thing to configure is the UCS Fabric Interconnects, via a console cable. The following information is required to complete the initial configuration.

·       You will need to enter the items marked * again later, so remember them.

The UCS System Name is used to name the Fabric Interconnects, and the suffixes A and B will be added by the system. If you use OurDC.UCS.FI as the System Name, the first Fabric Interconnect will automatically be named OurDC.UCS.FIA and the second OurDC.UCS.FIB

Configuration Item

Site#1

Site#2

Site#n

UCS Admin Password*

UCS System Name
(Recommend ending with letters FI)

FI-A Mgmt0 IP address
(Recommended a.b.c.7)

Mgmt0 IPv4 netmask (Common for all IPs)

IPv4 address of default gateway

UCS Cluster IPv4 Address*
(Recommended a.b.c.9)

DNS IP address*
(Comma separated, maximum two)

Domain name (optional)

UCS FI-B IP address
(Recommended a.b.c.8)

Task 3:               Fabric Interconnects – firmware upgrade

You may wish to assign the NTP server address to the Fabric Interconnects during the Firmware Upgrade.

Configuration Item

Site#1

Site#2

Site#n

NTP Server IP*

Task 4:               Server Ports, Uplink Ports, FC Uplink ports on FIs

If using 6200 Fabric Interconnects, AND you plan on connecting Fibre Channel storage to the FIs now or in the future, remember that the FC ports must be the highest numbered ports and are allocated in steps of 2. So, for a UCS 6248, ports 31 and 32 will be the first FC ports. For a UCS 6296, ports 47 and 48 will be the first FC ports.

For UCS 6332-16UP FIs, the order is reversed. Only the first 16 (unified) ports are capable of 4/8/16Gbps FC – but they progress from left to right in pairs, so the first two FC ports will be ports 1 & 2.

The UCS 6332 doesn’t support FC, but both the UCS 6332 and the UCS 6332-16UP FIs have 6 dedicated 40Gbps QSFP+ ports – these are the highest numbered ports on the respective FI (27-32 on the UCS 6332, 35-40 on the UCS 6332-16UP)

RedNectar’s Recommendation:

Allocate server ports in ascending order from the lowest port number available.

Allocate uplink ports to the highest numbered ports.

Exception:

If attaching 10Gbps servers to a UCS 6332-16UP, reserve the lowest numbered ports for current and future FC connections.

The configuration of the Ethernet uplink ports is also a consideration. The original design should indicate whether you are using:

·       Port-channelled uplinks

·       Unbundled uplinks

Configuration Item

Site#1

Site#2

Site#n

Server1 port number: (E.g. 1/1)
(Other servers should use incremental ports)

Number of server ports to configure:
(At least as many as there are nodes)

Port range for Ethernet uplinks: (E.g. 1/31-32)

Uplink Type

PortChannel
Unbundled

PortChannel
Unbundled

PortChannel
Unbundled

Port range for FC uplinks: (E.g. 1/1-4)

Task 5:               The Installer VM and Hyperflex configuration items

The Installer VM is required to be running to complete the installation from this point on. Ideally there will already be a vCenter deployed and running on which the Installer VM can be (or has already been) deployed. This VM needs access to the UCS Manager IPs defined in Task 2: and to the vCenter Appliance that will be used to manage the Hyperflex Nodes, as well as the IP addresses assigned to the ESXi Hosts and the Controller VMs defined later in this task. The Installer VM also needs access to the DNS server used to resolve the IP addresses and names used for the Hyperflex ESXi hosts (both forward and reverse)*. For a full list of TCP port numbers required, see the Cisco HyperFlex Systems Installation Guide for VMware ESXi. For multi-site installs, the same Installer VM can be used for all installs if connectivity permits.

*Clarification:

Technically it is vCenter that needs to be able to resolve ESXi host names in each direction, but it is generally easier to test from the installer VM than vCenter, and if both are using the same DNS server, the test should be valid.
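If you want to test this ahead of time, a quick loop like the one below (run on the Installer VM, or on any machine that uses the same DNS server) confirms that both directions resolve; the hostnames and IP addresses shown are placeholders for your own values:

# Forward lookups: each planned ESXi hostname should resolve to its planned IP
for h in hx-esxi-01 hx-esxi-02 hx-esxi-03 hx-esxi-04; do nslookup "$h"; done
# Reverse lookups: each planned IP should resolve back to its hostname
for ip in 10.1.1.11 10.1.1.12 10.1.1.13 10.1.1.14; do nslookup "$ip"; done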

Configuration Item

Site#1

Site#2

Site#n

Will the Installer VM be already deployed in a vCenter before the installation begins?

Yes No

Yes No

Yes No

Installer VM IP Address
(Recommended a.b.c.100)

Installer VM IP Default Gateway

 

 

 

UCSM Credentials/config

For convenience during the install, re-enter items you’ve already defined in Task 2:

Configuration Item

Site#1

Site#2

Site#n

UCS Manager hostname: (=Cluster IPv4 Address)

User Name

admin

admin

admin

Password:

(UCS Admin Password)

 

 

 

Site Name:
(Only required if Stretched Cluster install)

vCenter Credentials

·       The vCenter password requires a special character, but should NOT contain special characters like <>[]{}/\’`”* – to be honest I don’t know the complete list, and I’m guessing at the list above based on my knowledge of how hard it is to pass some of these characters to a process via an API, which is what the Installer does to log into vCenter.

Configuration Item

Site#1

Site#2

Site#n

If configuring a Stretched Cluster, configure only ONE vCenter

 

vCenter Server (IP or FQDN)

vCenter username
(Typically username@domain)

vCenter Admin password (Please change if it contains special characters <>[]{}/\’`”*)

Hypervisor Credentials

The ESXi hosts in the cluster all come with a default user called root with a password of Cisco123 – you should NOT change the username, but you MUST change the password.

·       Like the vCenter password, this password requires a special character, but should NOT contain special characters like <>[]{}/\’`”* – furthermore, this password requires one Uppercase letter (two if the first character is Uppercase), one digit (two if the last character is a digit) and of course a lowercase character and has to be between 6 and 80 characters long.

Configuration Item

Site#1

Site#2

Site#n

Recommend keeping consistency for Stretched Cluster

 

Installer Hypervisor Admin user name

root

root

root

Installer Hypervisor Admin user password

Cisco123

Cisco123

Cisco123

New Hypervisor Admin (root) user password (Min 6 Chars Upper/lower/special/digit mix)

 

 

 

Combined UCSM and vCenter VLAN/IP Config items

You will need at least four VLAN IDs, but not all have to be unique. Here are my recommendations:

Tip:

Refer to the design diagrams above while completing these items

If you have a VLAN/subnet already planned for the UCS OOB Management, consider using that VLAN for the hx-inband-mgmt. (See design diagrams)

The VLAN ID for HX Storage traffic (hx-storage-data) does not need to be seen any further Northbound than the upstream switches that connect to the FIs. So, it is safe to use the same VLAN ID at every site. This will typically be a new VLAN not used anywhere else.

·       You may well already be using a VLAN for vMotion. It’s OK to use the same one here. In fact, if you are moving VMs from an existing environment, it’s a good idea.

·       One VM Network VLAN ID is required so you can deploy the first VM to the HX Cluster. More can be added later in the Post-Install task. This VLAN will be added as a port-group to a vSwitch called vswitch-hx-vm-network that is created on every ESXi host.

·       Each cluster should have its own MAC address range. MAC addresses always begin with 00:25:B5: so that much is filled in for you. Add just one more two-digit hex byte to the prefix given. All HX ESXi hosts will be allocated MAC addresses using this prefix.

The hx-ext-mgmt Pool for OOB CIMC should have enough IPs to allocate one to every HX ESXi server now and in the future, and should be from the same subnet from which the UCS OOB Management IPs were allocated. Refer to the design diagrams.

 

Configuration Item
[Defaults in brackets]

Site#1

Site#2

Site#n

If configuring a Stretched Cluster, all values must be identical for both sites except MAC and IP Pools

 

VLAN Name for Hypervisor and HX Mgmt
[ hx‑inband‑mgmt]

VLAN ID for Hypervisor and HX Mgmt

VLAN Name for HX Storage traffic
[ hx‑storage‑data]

VLAN ID for HX Storage traffic

VLAN Name for VM vMotion
[ hx‑vmotion]

VLAN ID for VM vMotion*

VLAN Name for first VM Network
(More can be added later)

VLAN ID(s) for first VM Network
(More can be added later)

MAC Pool Prefix
(Add 1 more hex byte, e.g. AA)

00:25:B5:

00:25:B5:

00:25:B5:

ext-mgmt IP Pool for OOB CIMC
(E.g. a.b.c.101-165)
For Stretched Cluster, use same subnet but don’t overlap

ext-mgmt IP subnet Mask

ext-mgmt IP Gateway

iSCSI Storage and/or FC Storage

If you plan to give the HX cluster access to remote iSCSI or FC storage, it will be simpler to configure it during the install.

Configuration Item
[Defaults in brackets]

Site#1

Site#2

Site#n

iSCSI Storage

VLAN A Name

VLAN A ID

VLAN B Name

 

VLAN B ID

FC Storage

WWxN Pool

20:00:00:25:B5:

20:00:00:25:B5:

20:00:00:25:B5:

VSAN A Name
[hx-ext-storage-fc-a]

VSAN A ID


VSAN B Name
[hx-ext-storage-fc-b]

VSAN B ID

Advanced Items

UCS Manager stores configuration information for each server in a construct called a Service Profile, then organises these along with other relevant policies in a construct called an Organisation. This is a required item.

·       If a Hyperflex Cluster Name is given here it will be added as a User Label to Service Profiles in UCS Manager for easier identification. Don’t confuse it with the Cluster Name required later on for the Storage Cluster. Except in the case of a Stretched Cluster, which may have “stretched” over two sites, I’d recommend a different Hyperflex Cluster name per Site.

·       The Org Name (Organisation Name) can be the same for each site, but it is probably better to have a naming plan. Organisation Names are used to separate Hyperflex-specific configurations from other UCS servers in UCS Manager. I recommend using a consistent name for identical clusters – e.g. Stretched Cluster.

Configuration Item
[Defaults in brackets]

Site#1

Site#2

Site#n

Hyperflex Cluster Name
[HyperFlex cluster]

Org Name
(E.g. HX-VDI)

Hypervisor Configuration

This is the bit where the DNS configuration is important. If your installer cannot resolve the names given here to the IP Addresses (and vice-versa) then the Servers will be added to the configuration using IP addresses only, rather than the names.

Tip:

Refer to the design diagrams above while completing these items

Other advice:

The Server IP addresses defined here will be in the HX Mgmt VLAN (hx-inband-mgmt).

The subnet assigned to the HX Mgmt VLAN (hx-inband-mgmt) is used for management traffic among ESXi, HX, and VMware vCenter, and must be routable.

·       If using more than one DNS Server (maximum two), separate with commas

·       Always use sequential numbers for IP addresses and names – allow the system to automatically generate these from the first IP Address/Name – so make sure your first hostname ends in 01 (see the sketch below)
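As a quick preview of what that auto-generation will produce, the following sketch (the first IP, hostname stem and node count are placeholders for your own values) prints the hostname/IP pairs the installer will derive from the first entry:

FIRST_IP=10.1.1.11     # = ESXi Server1 IP address in the table below (placeholder)
PREFIX=hx-esxi-        # hostname stem; the first host will be ${PREFIX}01 (placeholder)
NODES=4                # number of nodes in the cluster (example value)
BASE=${FIRST_IP%.*}; LAST=${FIRST_IP##*.}
for i in $(seq 0 $((NODES-1))); do
  printf '%s%02d  ->  %s.%d\n' "$PREFIX" $((i+1)) "$BASE" $((LAST+i))
done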

Configuration Item

Site#1

Site#2

Site#n

Subnet Mask

 

Gateway

ESXi Server1 IP address
(Recommended a.b.c.11)

ESXi Server1 Hostname**
(xxx -01)

** Verify DNS forward and reverse records are created for each host defined here. If no DNS records exist, hosts are added to vCenter by IP address instead of FQDN.

Cluster IP Configuration

The Hypervisor IP addresses and Storage Controller IP addresses defined for the HX Mgmt VLAN (hx_inband_mgmt) must be in the same subnet as the Host IP addresses given in the previous step.

The Hypervisor IP addresses and Storage Controller IP addresses defined here for the Data VLAN (hx‑storage‑data) must be in a different subnet to the addresses assigned to the HX Mgmt VLAN (hx_inband_mgmt). This subnet does not really need to be routable (so gateway IP is optional), although that may change when synchronous replication is supported.

·       Always use sequential numbers for IP addresses – allow the system to automatically generate these from the first IP Address.

Configuration Item

Site#1

Site#2

Site#n

Management VLAN – make both IPs on same subnet

Hypervisor 1 IP Address (Already = ESXi Server1 IP address in last step)

Can’t change

Can’t change

Can’t change

Storage Controller 1 IP address
(Recommended a.b.c.71)

Management Cluster IP address
(Recommended a.b.c.10)

Management Cluster Gateway

Data VLAN – make both IPs on same subnet

Hypervisor 1 IP Address
(Recommended d.e.f.11)

Storage Controller 1 IP address
(Recommended d.e.f.71)

Data Cluster IP address
(Recommended d.e.f.10)

Data Cluster Gateway
(Optional)

Storage Cluster and vCenter Configuration

·       The Cluster Name is the name given to the Storage Cluster. Use a naming convention like HXCLUS01. It will also be used as the vCenter Cluster Name by default. I recommend keeping it this way unless you already have a vCenter Cluster you wish to use.

·       The Controller VM is the VM that manages the storage cluster – its password must be at least 10 characters long. It also requires the Upper/Lower/Digit/Special character combination, but should NOT contain special characters like <>[]{}/\’`”* (a rough pre-check is sketched after this list).

·       Cisco recommends using Replication Factor 3 for Hybrid Clusters and either 2 or 3 for All Flash Clusters. RedNectar recommends RF2 for All Flash Clusters unless a heightened level of redundancy is desired.

·       If the vCenter Datacenter and/or Cluster (case sensitive) exists already, it will be used. Otherwise it/they will be created during the install.

·       If multiple DNS and/or NTP servers are used, use commas to separate the list. Typically, these will be consistent throughout the install, but I’ve seen some sites that use different addresses in different places.
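If you want a rough pre-check of a candidate Controller VM (or ESXi root) password against the rules described above, something like the snippet below will do. Note that it checks only the basic length and character-class requirements (it does not test for the characters to avoid), the rules themselves are my own observations rather than an official Cisco validator, and PW is a placeholder for your candidate password:

PW='Hx-Cluster01'                                   # candidate password (placeholder)
if [ "${#PW}" -ge 10 ] \
   && printf '%s' "$PW" | grep -q '[A-Z]' \
   && printf '%s' "$PW" | grep -q '[a-z]' \
   && printf '%s' "$PW" | grep -q '[0-9]' \
   && printf '%s' "$PW" | grep -q '[^A-Za-z0-9]'; then
  echo "meets the basic length and complexity rules"
else
  echo "revise the password"
fi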

Configuration Item

Site#1

Site#2

Site#n

Cluster Name (E.g. HXCLUS01)

 

Replication Factor (2 or 3)

 

Controller VM password (required) (10+ chars Upper/lower/digit/special mix)

vCenter Datacenter Name
(E.g. HX-DC)

vCenter Cluster Name
[Default=cluster name above]

DNS Server(s) (Maximum two, comma separated)

NTP Server(s) (Maximum two, comma separated)

Time Zone:
(E.g. AEST +10)

Connected Services

·       Cisco recommends that Connected Services be enabled; enabling Connected Services allows management via Cisco Intersight. If you wish Connected Services to be configured, enter the details below.

Configuration Item

Global value

Enable Connected Services?
(Recommended)

Yes No

Email for service request notifications
name@company.com

vCenter Single Sign-On (SSO)

If using SSO, the SSO URI may be required (assumed to be the same for all sites). This information is required only if the SSO URI is not reachable. The SSO Server URL can be found in vCenter at vCenter Server > Manage > Advanced Settings, key config.vpxd.sso.sts.uri

Configuration Item

Global value

SSO Server URI

Stretched Cluster Witness

·       If you are installing a Stretched Cluster, a Witness VM is required, preferably at a third location (RTT to either site no more than 200ms).

Configuration Item

Cluster#1

Cluster#n

IP address of Witness VM

Task 6:               Post installation task

Once the Hyperflex cluster has been installed, you will need to run a post-install script where additional features can be enabled, including adding additional VM Networks. Your vSphere licence must support any features asked for.

·       Enabling HA/DRS on cluster enables the vSphere High Availability (HA) feature per best practice.

·       Disabling SSH warning suppresses the SSH and shell warnings in vCenter. SSH must remain enabled for proper functioning of the HyperFlex system.

·       Configure ESXi logging onto HX datastore allows the creation of an HX Datastore to hold the ESXi log files.

·       Adding vMotion interfaces configures vMotion interfaces per best practice. Requires IP addresses and VLAN ID input. Refer to the design diagrams.

·       You can add additional guest VLANs to Cisco UCS Manager and within ESXi on all cluster hosts during this process.

·       Enabling NTP on ESXi hosts configures and enables NTP on the ESXi hosts.

·       If SMTP mail server and Auto Support parameters are configured, a test email is sent to ensure that the SMTP relay is working.

Configuration Item

Site#1

Site#2

Site#n

If configuring a Stretched Cluster, Site#1 must = Site#2 except vmk interface IP (but use same subnet)

Enable HA/DRS on cluster?
(Recommended)

Yes No

Yes No

Yes No

Disable SSH warning?
(Recommended)

Yes No

Yes No

Yes No

Configure ESXi logging onto HX datastore?
(Recommended)

Yes No

Yes No

Yes No

Datastore Name for HX ESXi logging?
(E.g. HX-ESXiLogs)

 

 

 

Add vMotion interfaces?
(Recommended)

Yes No

Yes No

Yes No

ESXi Server 1 vMotion vmk interface IP
(Recommended x.y.z.11)

 

 

 

VLAN ID for vMotion
(As recorded above)

 

 

 

Add VM network VLANs? (Record VLAN names in table below)

Yes No

Yes No

Yes No

Enable NTP on ESXi hosts?

Yes No

Yes No

Yes No

Send test email?

Yes No

Yes No

Yes No

VM Networks List

By adding additional VLANs at this point, you ensure that the VLAN Names & IDs you define here will be created by UCS Manager on both Fabric Interconnects. Additionally, the VLAN Names & IDs specified here will be added to the vSwitch called vswitch-hx-vm-network that is created on every ESXi host. (Actually, the VLAN Name on the vSwitch will be as specified here with the VLAN ID appended to the name)

Configuration Item

Site#1

Site#2

Site#n

If configuring a Stretched Cluster, recommend Site#1 = Site#2

VM Network (=VLAN name)

 

 

 

VM Network ID (=VLAN ID)

 

 

 

VM Network (=VLAN name)

 

 

 

VM Network ID (=VLAN ID)

 

 

 

VM Network (=VLAN name)

 

 

 

VM Network ID (=VLAN ID)

 

 

 

VM Network (=VLAN name)

 

 

 

VM Network ID (=VLAN ID)

 

 

 

That completes my Installation Checklist. But it is not enough to have just a checklist of items without validating them. So, …

Before beginning Hyperflex installation…

After the logical Planning for the installation has been completed, you need to validate it.

Here is a checklist of a few things that you should make sure are completed before arriving on each site for the install. Having these items complete will greatly help make the Hyperflex installation go smoothly. If doing the install for a customer, have them complete this as well as the pre-installation above.

Task

Completed?

Task 1:               The Physical environment

a.     Do you have the correct power cords for all devices that have to be racked?

b.     Do you have the 10G/40G uplinks cabled to the rack where the Hyperflex Cluster is to be installed?

c.     Are the 10G/40G uplinks physically connected to the upstream switches?

d.     If bringing FC to the FIs, do you have the FC fibre uplinks physically connected from the FIs to the upstream FC switches?

e.     Do you have 2 x RJ45 connections to the OOB Management switch that the Fabric Interconnects will connect to?

The two FIs have ports labelled L1 and L2. Two regular RJ45 Ethernet cables are needed to connect L1 to L1 and L2 to L2. Ideally, these will be ~20cm in length to keep patching obvious and neat.

f.      Do you have the L1 & L2 ports for both FIs in all locations connected via 2 x regular RJ45 Ethernet cables?

HX Clusters use single-wire management, so all management functions are carried out from UCS Manager.

 

g.      Have you ensured that no cables are attached to the on-board 1Gbps and CIMC interfaces of the nodes?

Task 2:               The Inter-Fabric Network

h.     Are the four VLANs defined in the Pre-Install Checklist configured on the upstream switches that the FIs will be connecting to?
(Refer to the design diagrams above)

i.     Have jumbo frames been enabled on the upstream switches that the FIs will be connecting to? (Refer to the design diagrams above)

Task 3:               The Management Switch/VLAN

The FIs need 1G Ethernet connectivity to a management switch/VLAN.

j.      Have the IP addresses defined as default gateway addresses in the Pre-Install Checklist been configured on a router/Layer3 switch?

Plug a laptop into the ports assigned to the FI Management ports in the racks where the FIs are to be installed (i.e. as in e above). Assign your laptop an IP address in the appropriate range.

k.      Can the laptop ping the default gateway IP?

l.      Can the laptop ping the NTP server defined in the Pre-Install Checklist?

Task 4:               The Installer

The installer is a .ova file (Cisco-HX-Data-Platform-Installer-vxxxxx.ova) – a vCenter (v6.5) needs to be set up with an ESXi Host and the .ova installed on the ESXi Host.

Note:            If all else fails, you can run the installer from a laptop using VMware Fusion or VMware Workstation.

When the installer VM boots, it needs to obtain an IP address and DNS information via DHCP or be allocated the IP details defined above.

The following tests are to verify that the Installer VM has been given a suitable IP address, has access to DNS, and that the DNS has been configured fully.

The installer VM username/password is root/Cisco123.

m.     Has the Installer VM been given an IP address via DHCP?
(Use the command ip addr show eth0 or ifconfig eth0 to check)

n.      Has the Installer VM been configured with the correct DNS address?
(Use the command cat /etc/resolv.conf to check)

o.   Can the Installer VM resolve forward and reverse names using the following commands?
nslookup <insert IP of first HX-ESXi host>
<repeat for all HX-ESXi hosts in the cluster>
nslookup WhateverDNSNameYouUseForESXiHost-01
<repeat for all HX-ESXi hostnames in the cluster>

p.     Can the Installer VM ping the NTP server?
ping <insert IP NTP Server>

q.     (Stretched Cluster) Can the Installer VM ping the Witness VM?
ping <insert Witness IP>

