Seven things to know to make Hyperflex go – Cisco HyperFlex Best Practices

You have Cisco Hyperflex installed, but you're not quite sure whether there is anything you need to do differently now that you want to deploy VMs on the Hyperflex Data Platform (HXDP).

Well, yes there are some things that you need to do differently, and some you should do differently, but most activities you’ve ever done with VMs and ESXi hosts will remain the same.

Here are the seven things you need to know to make Hyperflex perform optimally.

Create new users for the HX Connect GUI from vCenter (Must Do)

Create Datastores using the HXDP utilities, not vCenter standard Datastore creation (Must Do)

Create Snapshots using the HXDP utilities, not vCenter standard Snapshot function (Must Do)

Create multiple Clones using the HXDP utilities (Should Do)

Use HXDP Maintenance Mode, not vCenter standard Maintenance Mode (Must Do)

Upgrade ESXi software using HX Connect or Intersight, not vCenter (Must Do)

Create new VLANs using the HX Installer VM (Should Do)

Create new users for the HX Connect GUI from vCenter (Must Do)

There is a default admin user that can be used to log into the HX Connect GUI, but best practice is to use your vCenter username and password.  If you want to add a read-only user or another administrator for Hyperflex, use the regular method for creating a user in vCenter.

Create Datastores using the HXDP utilities, not vCenter standard Datastore creation (Must Do)

Before you can even begin to use Hyperflex, you must create a Datastore on your HXDP.  Since you will do this only rarely, it is an easy point to forget, which can lead to frustration if you try to use the normal Datastore creation method from vCenter: vCenter will want to assign the Datastore to a single ESXi host, whereas an HX Datastore is distributed across all HX ESXi storage nodes. In other words, you can’t create a Datastore on your HXDP using the normal vCenter method.  Creating an HX Datastore can only be done in one of two ways: using Hyperflex Connect (easy) or the vCenter plugin (messy).

Method #1: Using HX Connect:

Click Datastores and then Create Datastore (I said it was easy)

Figure 1 Creating a Datastore in Hyperflex Connect is easy


Method #2: Using vCenter:

Navigate to Global Inventory Lists > Cisco HX Data Platform.  Next, select your cluster in the Navigator then click on the Manage tab. From here, click on the Datastores sub-tab where you will find an icon that will let you create a Datastore.

Figure 2 Creating a Datastore in vCenter is messy


Create Snapshots using the HXDP utilities, not vCenter standard Snapshot function (Must Do)

This one is a bit tricky.  When you create the first Snapshot using one of the two HXDP methods, a special SENTINEL snapshot is created.  This ensures that any future snapshots can trace their pointer-based log-structured file system origins back to the original format.  IF YOU CREATE THE FIRST SNAPSHOT using the VMware standard Snapshot functions, then you are stuck with the VMware Re-do snapshot system, and stuck with the non-HX-aware consolidation process should you wish to consolidate in the future.

The VMware Re-do snapshot system works like this:  The first snapshot is made, and the .vmdk file for that VM is locked. A new .vmdk file is created to record any changes made after the original snapshot. Similarly, when the next snapshot is made, the second .vmdk file for that VM is locked and a new one created, and so on.  The problem with this method is not so much that no data is ever deleted and the snapshot Re-do files may grow far larger than the original, but that if you need to reclaim the space, VMware has to revert to the original snapshot and process the Re-do files.  This process can take minutes, hours or even days depending on the size and complexity of the Re-do files – and there is a possibility that the process will exhaust your existing disk space before completion (after all, you are probably doing the consolidation to reclaim some space).

Hyperflex consolidation works like this:  All the pointer-based snapshots are deleted in a matter of seconds, and the redundant chunks of data marked for deletion. Job done. In seconds. And no chance of running out of disk space.
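The difference between the two behaviours can be sketched in a few lines of toy Python. The sizes and mechanics here are simplified, worst-case assumptions purely for illustration – this is not a model of either file system's internals:

```python
# Toy cost model: VMware Re-do consolidation vs HXDP pointer-based
# snapshot deletion. Sizes in GB; worst case assumed (no overwritten
# blocks), purely to illustrate why one can run out of disk space.

def redo_consolidate(base_gb, delta_gbs):
    """Consolidation merges every Re-do file back into the base.
    While it runs, the old chain and the merged result coexist."""
    merged = base_gb + sum(delta_gbs)          # worst-case result size
    peak = base_gb + sum(delta_gbs) + merged   # chain + merged copy on disk
    return merged, peak

def pointer_delete(base_gb, snapshot_count):
    """Pointer-based snapshots share data blocks with the base, so
    deleting them only removes metadata - no extra space needed."""
    return base_gb, base_gb                    # final size, peak size

print(redo_consolidate(100, [40, 60, 80]))   # (280, 560)
print(pointer_delete(100, 3))                # (100, 100)
```

The point of the sketch: Re-do consolidation can transiently need far more space than the VM itself, while pointer-based deletion never does.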

Moral of this story: Use Hyperflex pointer-based snapshots for the first snapshot for all VMs.  And to ensure this happens, why not take the approach of using Hyperflex pointer-based snapshots for all snapshots?

Here are the two ways of creating HXDP pointer-based snapshots:

Method #1: Using HX Connect:

Click Virtual Machines and then from the Actions menu, select Snapshot Now

Figure 3 Creating Snapshots in Hyperflex Connect


Method #2: Using vCenter (must be Flash version of vCenter, not HTML):

Navigate to Global Inventory Lists > Cisco HX Data Platform.  Next, right-click the VM in the Navigator, then choose Cisco HX Data Platform > Snapshot Now – taking care not to select the standard VMware Snapshot option by mistake.

Figure 4 When creating HXDP Snapshots in vCenter you need to be careful


Create multiple Clones using the HXDP utilities (Should Do)

If you want to create a single clone of a VM, it does not matter whether you use the standard VMware cloning option or the HXDP Ready Clones option.  The main thing to remember is that to take advantage of the inherent de-duplication of the Hyperflex pointer-based log-structured file system, the VM you are cloning must not have any VMware Re-do snapshots – only HXDP pointer-based snapshots.  However, when you use the HXDP Ready Clones option, you can create as many clones as you want without consuming any extra disk space, thanks to that same inherent de-duplication.  One thing though: if you want to use Customisation Specifications or Resource Pools, you’ll have to have already created them in vCenter.

Figure 5 Creating HXDP Ready Clones in HX Connect


Figure 6 Creating HXDP Ready Clones in vCenter


Use HXDP Maintenance Mode, not vCenter standard Maintenance Mode (Must Do)

Should you ever need to shut down an ESXi storage host (and you will need to sometime), make sure you use HXDP Maintenance Mode, not vCenter standard Maintenance Mode. The difference is this:  When you shut down an ESXi storage host using HXDP Maintenance Mode, the HXDP Controller VM is shut down cleanly, and the other ESXi hosts are informed that the HXDP Controller VM for that host is no longer available. To understand the repercussions of this, you need to be aware of how the HX IOVisor works (which I hope will be the subject of a future blog post).  When you use the standard VMware Maintenance Mode, VMware doesn’t know that the HXDP Controller VM needs to be shut down gracefully, nor that the other ESXi hosts need to be informed that the Controller VM for that host is no longer available, so it shuts down the ESXi host regardless.  Now, the HXDP will recover from this mishap in time (in most cases), but it is certainly NOT “best practice”.

Figure 7 Entering HXDP Maintenance Mode using HX Connect


Figure 8 Entering HXDP Maintenance Mode using HXDP extensions in vCenter


Upgrade ESXi software using HX Connect or Intersight, not vCenter (Must Do)

When the ESXi software needs to be upgraded, remember that the ESXi software on your HXDP hosts has been modified by the addition of the IOVisor and VAAI .vibs (vSphere Installation Bundles), so upgrading the ESXi hosts is NOT a simple matter of applying VMware’s released version – you need the ESXi versions released by Cisco with the appropriate .vibs installed.  The simplest way to do this is to make sure you do the upgrades using Hyperflex Connect, or better still (if you have more than one site) Cisco’s Intersight SaaS platform, which can upgrade multiple HXDP sites in a single click! (Pause while I sip some more Kool-Aid.) BTW, even if you have only one site, you should still connect your cluster to Intersight, but I’ll talk about that in a future post.

Figure 9 Upgrading using HX Connect


Create new VLANs using the HX Installer VM (Should Do)

If you ever need a new VLAN on the HX data platform, you need to make sure that VLAN is available to each ESXi host, each Fabric Interconnect, and your upstream switches.  Since Hyperflex was designed to run on ROBO versions of VMware and above, standard vSwitches are maintained in each ESXi host, so Cisco has provided a utility that allows you to quickly create a new VLAN on all hosts plus the Fabric Interconnects in one step – a task that is quite tedious if you are not using vSphere Distributed Switches.  Of course, if you have a version of VMware that supports VMware VDS, you probably won’t want to use this feature (because it configures standard vSwitches, not vSphere Distributed Switches).  You’ll be prompted to enter usernames and passwords for UCS Manager, vCenter and the ESXi hosts, so it is a bit tedious, but simpler and safer than adding the VLANs manually to each ESXi host and the FIs.  Here’s a sample session adding one VLAN – in this case it was a stretched cluster, so the VLAN was added to six hosts and four Fabric Interconnects in one hit!

root@HyperFlex-Installer:~# post_install --vlan
Logging in to controller
HX CVM admin password: **************
Getting ESX hosts from HX cluster...
vCenter URL:
Enter vCenter username (user@domain): administrator@vsphere.local
vCenter Password: **************
Found datacenter HX-DC
Found cluster HXCLUS01

post_install to be run for the following hosts:

 Enter ESX root password: **************
 Attempting to find UCSM IP
Site A - UCSM IP:
Site A - UCSM Username: admin
Site A - UCSM Password: **************
Site A - HX UCS Sub Organization: HX
Site B - UCSM IP:
Site B - UCSM Username: admin
Site B - UCSM Password: **************
Site B - HX UCS Sub Organization: HX
 Port Group Name to add (VLAN ID will be appended to the name): TestVLAN
 VLAN ID: (0-4096) 104
 Adding VLAN 104 to FI
 Adding VLAN 104 to vm-network-a VNIC template
 Adding VLAN 104 to FI
 Adding VLAN 104 to vm-network-a VNIC template
Adding TestVLAN-104 to sitea-esxi01.mynet.local
Adding TestVLAN-104 to sitea-esxi02.mynet.local
Adding TestVLAN-104 to sitea-esxi03.mynet.local
Adding TestVLAN-104 to siteb-esxi01.mynet.local
Adding TestVLAN-104 to siteb-esxi02.mynet.local
Adding TestVLAN-104 to siteb-esxi03.mynet.local
Add additional VM network VLANs? (y/n) n
Posted in Best Practices, Cisco, Data Center, Data Centre, ESXi, Hyperflex, VMware

MS Community Robot Finds Word Tables Offensive

I posted this question on the Microsoft Community board and, in the course of the following discussion, I posted this picture of an MS Table with the Language incorrectly set.

But suddenly, it was removed from my discussion, replaced by the dreaded missing-image icon, and the only indication of what had happened was in the History of the post:

Now I know many things about MS Tables are offensive – the fact that I can’t change the Language setting for a Table Style, for one – but a picture of a table? Come on MS, what’s the deal? Did you think my arrow was too pointy? Or was the angle-of-the-dangle at the wrong angle? Or was it that I posted a picture of a …. “box” (Snigger, snigger)

Whatever the reason, I’m pissed off that I can’t add reasonable pictures to a post.

Posted in Microsoft, opinion, rant

RedNectar’s Hyperflex Pre-Install Checklist (Updated)

Completing Cisco’s Pre-installation Checklist for Cisco HX Data Platform will capture all the information you need, but not necessarily in the order you need it, and for a single site only. So, I decided to write a version that gets the information in the order you need it and in a format that’s easier to expand into a multi-site or stretched-cluster deployment. If you are planning to install a Stretched Cluster, make sure Site#1 and Site#2 have identical configurations where indicated.

Logical Planning for the installation

Task 1:               Planning

Before embarking on the checklist, it is important that you know how your Hyperflex installation is going to integrate into your existing network, and what new resources (VLANs, IP subnets) you will need to find.

VMware vCenter

Hyperflex cannot operate without VMware vCenter. You may use an existing vCenter installation or vCenter can be included as part of the Hyperflex installation. If it is to be included in the Hyperflex installation process, best practice is that vCenter be installed on an ESXi host that is NOT part of the Hyperflex cluster (to avoid having the management application hosted on the managed system). If absolutely required, vCenter can be installed on the Hyperflex cluster. Recommended options are bolded.

Fabric Interconnects

Typically, Hyperflex clusters are installed using an independent pair of Fabric Interconnect switches. If the Hyperflex cluster is to be installed and integrated with an existing UCS deployment, the Fabric Interconnect switches will be rebooted during the Hyperflex installation.

VLAN/vSwitch and Subnet planning

Your Hyperflex Installation requires that you plan for additional subnets and VLANs. Depending on the final design, this could be as many as four new subnets and four new VLANs. The following diagrams offer two slightly different approaches to VLAN planning:

HX Install Design #1: Separate OOB and HX Mgmt VLAN/Subnets

HX Install Design #2: Combined OOB/HX Mgmt VLAN/Subnets (recommended)

To determine the number of IP addresses and new VLANs required, use the following guidelines. In all the following calculations, n is the number of Nodes in the Hyperflex Cluster.

vCenter Subnet is assumed to be an existing subnet. If a new vCenter is being installed as part of this installation, IP addresses will need to be allocated for the vCenter appliance and its default gateway.

Total IPs required= 2

·       1 x Default GW IP

·       1 x vCenter Appliance VM

Installer VM Subnet is also assumed to be an existing subnet. If practical, make provision for the HX Installer VM to be allocated an IP address on the hx-inband-mgmt subnet. This ensures that the installer has full access to the ESXi hosts, and (in design #2) to UCS Manager. The Installer MUST be able to access vCenter, UCS Manager and the ESXi hosts.

Total IPs required= 2

·       1 x Default GW IP

·       1 x HX installer VM

UCS OOB Subnet is the external UCS OOB management subnet – you probably have a VLAN set aside for this already – and is where the UCS Manager Primary, Secondary and Cluster IPs live. These are configured on the Fabric Interconnects during the initial setup. This subnet is also where the ext-mgmt IP Pool for OOB CIMC is sourced, which will require at least n addresses. This subnet is a mandatory requirement.

Total IP addresses required=4 + n

·       1 x UCS Manager Primary (FI-A)

·       1 x UCS Manager Secondary (FI-B)

·       1 x UCSM Cluster IP

·       1 x Default GW IP

·       n x CIMC Pool addresses (one per HX Server)

Hyperflex inband management VLAN and Subnet is shown in the diagram as hx-inband-mgmt subnet/VLAN/vSwitch. You may already have a management VLAN that you wish to use in this situation. Indeed, it may even be the same subnet as the UCS OOB Subnet (as shown in design #2 above), in which case the number of subnets required is reduced by one. This VLAN MUST be configured before the install on the UPSTREAM SWITCHES and on the Port Channel between the upstream switches.

Total IP addresses required=2 + 2 x n

·       1 x Management Cluster IP

·       1 x Default GW IP

·       2 x n (vmnic0 and Controller VM per ESXi host)

·       [Optional] Consider reserving an IP address for the Installer VM on this subnet.

Hyperflex Storage VLAN and Subnet is shown in the diagram as hx-storage-data subnet/VLAN/vSwitch. This is an isolated Subnet/VLAN/vSwitch. Supplying a default gateway is optional unless your backup system accesses the storage directly. This VLAN MUST be configured before the install on the UPSTREAM SWITCHES and on the Port Channel between the upstream switches, and requires an MTU of 9000.

Total IPs required=2 + 2 x n (or 1 + 2 x n if no default gateway IP required)

·       1 x Data Cluster IP

·       1 x Default GW IP

·       2 x n (vmnic1 and Controller VM per ESXi host)

VMotion VLAN (and subnet) is shown in the diagram as hx-vmotion VLAN/vmk. No IPs are required at installation, but they can be added post-install, and in fact this is recommended. This VLAN MUST be configured before the install on the UPSTREAM SWITCHES and on the Port Channel between the upstream switches.

Total IPs required= n (or 1 + n if default gateway IP required)

·       1 x Default GW IP (Optional)

·       n (vmnic2 per ESXi host)

VM Network VLAN is a VLAN that will be added to a standard VMware vSwitch called vswitch-hx-vm-network.  This item exists more or less as a placeholder to remind you that you will need to plumb the VLANs used by the guest VMs through to the HX Cluster. One VLAN is configured during the install process for this purpose, and no doubt you will be allocating IPs to VMs as they are added later. Additional VLANs can easily be added post-install, and a section is provided later in this document to define any additional VLANs you might want to make available to your VMs.
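As a sanity check, the subnet-by-subnet tallies above can be rolled into a quick helper. This is only a sketch using exactly the formulas from this section (n = number of HX nodes); treat the output as a planning aid:

```python
# Tally the IP addresses required per subnet for an n-node HX cluster,
# using the per-subnet formulas given in the planning section above.

def hx_ip_plan(n, storage_gw=True, vmotion_gw=False):
    plan = {
        "vCenter subnet":      2,            # default GW + vCenter appliance
        "Installer VM subnet": 2,            # default GW + HX installer VM
        "UCS OOB subnet":      4 + n,        # FI-A, FI-B, UCSM cluster IP, GW + n CIMC
        "hx-inband-mgmt":      2 + 2 * n,    # mgmt cluster IP, GW + (vmnic0 + CVM) per host
        "hx-storage-data":     (2 if storage_gw else 1) + 2 * n,
        "hx-vmotion":          (1 if vmotion_gw else 0) + n,
    }
    plan["TOTAL"] = sum(plan.values())
    return plan

for subnet, ips in hx_ip_plan(4).items():   # e.g. a 4-node cluster
    print(f"{subnet:20} {ips}")
```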

Task 2:               Fabric Interconnects (FIs) – initial console config

The first thing to configure is the UCS Fabric Interconnects via a console cable. The following information will be required to be able to complete the initial configuration.

·       You will need to enter the items marked * again later, so remember them.

The UCS System Name is used to name the Fabric Interconnects, and the suffixes A and B will be added by the system. If you use OurDC.UCS.FI as the System Name, the first Fabric Interconnect will automatically be named OurDC.UCS.FIA and the second OurDC.UCS.FIB

Configuration Item




UCS Admin Password*

UCS System Name
(Recommend ending with letters FI)

FI-A Mgmt0 IP address
(Recommended a.b.c.7)

Mgmt0 IPv4 netmask (Common for all IPs)

IPv4 address of default gateway

UCS Cluster IPv4 Address*
(Recommended a.b.c.9)

DNS IP address*
(Comma separated, maximum two)

Domain name (optional)

UCS FI-B IP address
(Recommended a.b.c.8)

Task 3:               Fabric Interconnects – firmware upgrade

You may wish to assign the NTP server address to the Fabric Interconnects during the Firmware Upgrade.

Configuration Item




NTP Server IP*

Task 4:               Server Ports, Uplink Ports, FC Uplink ports on FIs

If using 6200 Fabric Interconnects, AND you plan on connecting Fibre Channel storage to the FIs now or in the future, remember these will have to be the high numbered ports and increment in steps of 2. So, for a UCS 6248, ports 31 and 32 will be the first FC ports. For a UCS 6296, ports 47 and 48 will be the first FC ports.

For UCS 6332-16UP FIs, the order is reversed. Only the first 16 (unified) ports are capable of 4/8/16Gbps FC – but they progress from left to right in pairs, so the first two FC ports will be ports 1 & 2.

The UCS 6332 doesn’t support FC, but both the UCS 6332 and the UCS 6332-16UP FIs have 6 dedicated 40Gbps QSFP+ ports – these are the highest numbered ports on the respective FI (27-32 on the UCS 6332, 35-40 on the UCS 6332-16UP)
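A small helper makes the three numbering schemes explicit. This is just a sketch based on the port counts quoted above (32 fixed ports on a 6248, 48 on a 6296, 16 unified ports on a 6332-16UP):

```python
# Which ports the first FC pairs land on, per FI model, following the
# port-ordering rules described above.

def first_fc_ports(model, fc_pairs=1):
    if model in ("6248", "6296"):
        top = {"6248": 32, "6296": 48}[model]
        # 6200 series: FC takes the highest-numbered ports, in pairs
        return list(range(top - 2 * fc_pairs + 1, top + 1))
    if model == "6332-16UP":
        # unified ports run left to right, so FC starts at port 1
        return list(range(1, 2 * fc_pairs + 1))
    raise ValueError(f"{model} has no FC-capable ports (e.g. the UCS 6332)")

print(first_fc_ports("6248"))        # [31, 32]
print(first_fc_ports("6296", 2))     # [45, 46, 47, 48]
print(first_fc_ports("6332-16UP"))   # [1, 2]
```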

RedNectar’s Recommendation:

Allocate server ports in ascending order from the lowest port number available.

Allocate uplink ports to the highest numbered ports.


If attaching 10Gbps servers to a UCS 6332-16UP, reserve the lowest numbered ports for current and future FC connections.

The configuration of the Ethernet uplink ports is also a consideration. The original design should indicate whether you are using:

·       Port-channelled uplinks

·       Unbundled uplinks

Configuration Item




Server1 port number: (E.g. 1/1)
(Other servers should use incremental ports)

Number of server ports to configure:
(At least as many as there are nodes)

Port range for Ethernet uplinks: (E.g. 1/31-32)

Uplink Type




Port range for FC uplinks: (E.g. 1/1-4)

Task 5:               The Installer VM and Hyperflex configuration items

The Installer VM must be running to complete the installation from this point on. Ideally there will already be a vCenter deployed and running on which the Installer VM can be (or has already been) deployed. This VM needs access to the UCS Manager IPs defined in Task 2: and to the vCenter Appliance that will be used to manage the Hyperflex Nodes, as well as to the IP addresses assigned to the ESXi Hosts and the Controller VMs defined later in this task. The Installer VM also needs access to the DNS server used to resolve the IP addresses and names used for the Hyperflex ESXi hosts (both forward and reverse)*. For a full list of TCP port numbers required, see the Cisco HyperFlex Systems Installation Guide for VMware ESXi. For multi-site installs, the same Installer VM can be used for all installs if connectivity permits.


Technically it is vCenter that needs to be able to resolve ESXi host names in each direction, but it is generally easier to test from the installer VM than vCenter, and if both are using the same DNS server, the test should be valid.

Configuration Item




Will the Installer VM be already deployed in a vCenter before the installation begins?

Yes No

Yes No

Yes No

Installer VM IP Address
(Recommended a.b.c.100)

Installer VM IP Default Gateway




UCSM Credentials/config

For convenience during the install, re-enter items you’ve already defined in Task 2:

Configuration Item




UCS Manager hostname: (=Cluster IPv4 Address)

User Name





(UCS Admin Password)




Site Name:
(Only required if Stretched Cluster install)

vCenter Credentials

·       The vCenter password requires a special character, but should NOT contain special characters like <>[]{}/\’`”* – to be honest I don’t know the complete list, and I’m guessing at the list above based on my knowledge of how hard it is to pass some of these characters to a process via an API, which is what the Installer does to log into vCenter.

Configuration Item




If configuring a Stretched Cluster configure only ONE vCenter


vCenter Server (IP or FQDN)

vCenter username
(Typically username@domain)

vCenter Admin password (Please change if it contains special characters <>[]{}/\’`”*)

Hypervisor Credentials

The ESXi hosts in the cluster all come with a default user called root with a password of Cisco123 – you should NOT change the username, but you MUST change the password.

·       Like the vCenter password, this password requires a special character, but should NOT contain special characters like <>[]{}/\’`”* – furthermore, this password requires one Uppercase letter (two if the first character is Uppercase), one digit (two if the last character is a digit) and of course a lowercase character and has to be between 6 and 80 characters long.
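Since the rules above are fiddly, here is a sketch of a pre-flight password check. As noted above, the banned-character list is my guess, so treat this as an assumption to verify against Cisco's documentation, not a definitive validator:

```python
# Approximate check of the ESXi root password rules described above.
# ASSUMPTION: the banned special-character list is the author's guess.

BANNED = set("<>[]{}/\\'`\"*")

def hx_password_ok(pw):
    if not 6 <= len(pw) <= 80:
        return False
    if any(c in BANNED for c in pw):
        return False
    upper_needed = 2 if pw[0].isupper() else 1     # two if first char is uppercase
    digit_needed = 2 if pw[-1].isdigit() else 1    # two if last char is a digit
    return (sum(c.isupper() for c in pw) >= upper_needed
            and sum(c.isdigit() for c in pw) >= digit_needed
            and any(c.islower() for c in pw)
            and any(not c.isalnum() for c in pw))  # needs a (permitted) special char

print(hx_password_ok("HXroot!2024"))   # True
print(hx_password_ok("Cisco123"))      # False - the default must be changed anyway
```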

Configuration Item




Recommend keeping consistency for Stretched Cluster


Installer Hypervisor Admin user name




Installer Hypervisor Admin user password




New Hypervisor Admin (root) user password (Min 6 Chars Upper/lower/special/digit mix)




Combined UCSM and vCenter VLAN/IP Config items

You will need at least four VLAN IDs, but not all have to be unique. Here are my recommendations:


Refer to the design diagrams above while completing these items

If you have a VLAN/subnet already planned for the UCS OOB Management, consider using that VLAN for the hx-inband-mgmt. (See design diagrams)

The VLAN ID for HX Storage traffic (hx-storage-data) does not need to be seen any further northbound than the upstream switches that connect to the FIs, so it is safe to use the same VLAN ID at every site. This will typically be a new VLAN not used anywhere else.

·       You may well already be using a VLAN for vMotion. It’s OK to use the same one here.  In fact, if you are moving VMs from an existing environment, it’s a good idea.

·       One VM Network VLAN ID is required so you can deploy the first VM to the HX Cluster. More can be added later in the Post-Install task.  This VLAN will be added as a port-group to a vSwitch called vswitch-hx-vm-network that is created on every ESXi host.

·       Each cluster should have its own MAC address range.  MAC addresses always begin with 00:25:B5:, so that much is filled in for you. Add just one more two-hex-digit pair to the prefix given. All HX ESXi hosts will be allocated MAC addresses using this prefix.

The hx-ext-mgmt Pool for OOB CIMC should have enough IPs to allocate one to every HX ESXi server now and in the future, and should be from the same subnet that the UCS OOB Management IPs were allocated from. Refer to the design diagrams.
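To make the MAC pool item concrete, here's a sketch of the shape of the addresses. The fixed 00:25:B5 prefix comes from the text; the assumption that the remaining two bytes are allocated sequentially is mine, and the "AA" site pair is an example:

```python
# Build the per-cluster MAC prefix: Cisco OUI 00:25:B5 plus the one
# extra hex pair you choose, e.g. "AA". The last two bytes shown here
# just illustrate the shape of the pool UCS Manager allocates from.

CISCO_OUI = "00:25:B5"

def mac_pool(site_pair, count):
    prefix = f"{CISCO_OUI}:{site_pair}"
    return [f"{prefix}:00:{i:02X}" for i in range(1, count + 1)]

print(mac_pool("AA", 2))   # ['00:25:B5:AA:00:01', '00:25:B5:AA:00:02']
```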


Configuration Item
[Defaults in brackets]




If configuring a Stretched Cluster all values must be identical for both sites except MAC and IP Pools


VLAN Name for Hypervisor and HX Mgmt
[ hx‑inband‑mgmt]

VLAN ID for Hypervisor and HX Mgmt

VLAN Name for HX Storage traffic
[ hx‑storage‑data]

VLAN ID for HX Storage traffic

VLAN Name for VM vMotion
[ hx‑vmotion]

VLAN ID for VM vMotion*

VLAN Name for first VM Network
(More can be added later)

VLAN ID(s) for first VM Network
(More can be added later)

MAC Pool Prefix
(Add 1 more hex pair e.g. AA)




ext-mgmt IP Pool for OOB CIMC
(E.g. a.b.c.101-165)
For Stretched Cluster, use same subnet but don’t overlap

ext-mgmt IP subnet Mask

ext-mgmt IP Gateway

iSCSI Storage and/or FC Storage

If you plan to give the HX cluster access to remote iSCSI or FC storage, it will be simpler to configure it during the install.

Configuration Item
[Defaults in brackets]




iSCSI Storage






FC Storage

WWxN Pool










Advanced Items

UCS Manager stores configuration information for each server in a construct called a Service Profile then organises these along with other relevant policies in a construct called an Organisation. This is a required item.

·       If a Hyperflex Cluster Name is given here it will be added as a User Label to Service Profiles in UCS Manager for easier identification. Don’t confuse it with the Cluster Name required later on for the Storage Cluster. Except in the case of a Stretched Cluster that may have “stretched” over two sites, I’d recommend a different Hyperflex Cluster name per site.

·       Org Name (Organisation Name) can be the same for each site, but it is probably better to have a naming plan. Organisation Names are used to separate Hyperflex-specific configurations from other UCS servers in UCS Manager. I recommend using a consistent name for identical clusters – e.g. a Stretched Cluster.

Configuration Item
[Defaults in brackets]




Hyperflex Cluster Name
[HyperFlex cluster]

Org Name
(E.g. HX-VDI)

Hypervisor Configuration

This is the bit where the DNS configuration is important. If your installer cannot resolve the names given here to the IP Addresses (and vice-versa) then the Servers will be added to the configuration using IP addresses only, rather than the names.


Refer to the design diagrams above while completing these items

Other advice:

The Server IP addresses defined here will be in the HX Mgmt VLAN (hx_inband_mgmt).

The subnet assigned to the HX Mgmt VLAN (hx_inband_mgmt) is used for management traffic among ESXi, HX, and VMware vCenter, and must be routable.

·       If using more than one DNS Server (maximum two), separate them with commas

·       Always use sequential numbers for IP addresses and names – allow the system to automatically generate these from the first IP Address/Name – so make sure your first hostname ends in 01
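The "sequential from the first entry" behaviour is easy to mimic, which also shows why ending the first hostname in 01 matters. The hostname and IP below are made-up examples:

```python
import ipaddress

# Expand a first hostname/IP pair into the sequential list the
# installer will generate. The trailing digits of the first name set
# both the starting number and the zero-padding width.

def expand_hosts(first_name, first_ip, count):
    stem = first_name.rstrip("0123456789")
    width = len(first_name) - len(stem)
    start = int(first_name[len(stem):])
    base = ipaddress.ip_address(first_ip)
    return [(f"{stem}{start + i:0{width}d}", str(base + i)) for i in range(count)]

for name, ip in expand_hosts("sitea-esxi01", "10.1.1.11", 3):
    print(name, ip)
# sitea-esxi01 10.1.1.11
# sitea-esxi02 10.1.1.12
# sitea-esxi03 10.1.1.13
```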

Configuration Item




Subnet Mask



ESXi Server1 IP address
(Recommended a.b.c.11)

ESXi Server1 Hostname**
(xxx -01)

** Verify DNS forward and reverse records are created for each host defined here. If no DNS records exist, hosts are added to vCenter by IP address instead of FQDN.
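A quick way to act on that footnote: from the installer VM (or anywhere using the same DNS server), check that each planned hostname resolves forward and that the address resolves back. A minimal sketch; the hostname shown is an example:

```python
import socket

# Forward (A) then reverse (PTR) lookup for one planned ESXi hostname.
# Either call raises socket.gaierror/herror if the record is missing.

def dns_round_trip(hostname):
    ip = socket.gethostbyname(hostname)              # forward lookup
    reverse_name, _, _ = socket.gethostbyaddr(ip)    # reverse lookup
    return ip, reverse_name

# Example (succeeds only where the records actually exist):
# print(dns_round_trip("sitea-esxi01.mynet.local"))
```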

Cluster IP Configuration

The Hypervisor IP addresses and Storage Controller IP addresses defined for the HX Mgmt VLAN (hx_inband_mgmt) must be in the same subnet as the Host IP addresses given in the previous step.

The Hypervisor IP addresses and Storage Controller IP addresses defined here for the Data VLAN (hx‑storage‑data) must be in a different subnet to the addresses assigned to the HX Mgmt VLAN (hx_inband_mgmt). This subnet does not really need to be routable (so gateway IP is optional), although that may change when synchronous replication is supported.

·       Always use sequential numbers for IP addresses – allow system to automatically generate these from the first IP Address.

Configuration Item




Management VLAN – make both IPs on same subnet

Hypervisor 1 IP Address (Already = ESXi Server1 IP address in last step)

Can’t change

Can’t change

Can’t change

Storage Controller 1 IP address
(Recommended a.b.c.71)

Management Cluster IP address
(Recommended a.b.c.10)

Management Cluster Gateway

Data VLAN – make both IPs on same subnet

Hypervisor 1 IP Address
(Recommended d.e.f.11)

Storage Controller 1 IP address
(Recommended d.e.f.71)

Data Cluster IP address
(Recommended d.e.f.10)

Data Cluster Gateway

Storage Cluster and vCenter Configuration

·       The Cluster Name is the name given to the Storage Cluster. Use a naming convention like HXCLUS01. It will also be used as the vCenter Cluster Name by default. I recommend keeping it this way unless you already have a vCenter Cluster you wish to use.

·       The controller VM is the VM that manages the storage cluster – its password must be at least 10 characters long. It also requires the Upper/Lower/Digit/Special Character combo, but should NOT contain special characters like <>[]{}/\’`”*

·       Cisco recommends using Replication Factor 3 for Hybrid Clusters and either 2 or 3 for All Flash Clusters. RedNectar recommends RF2 for All Flash Clusters unless a heightened level of redundancy is desired.

·       If the vCenter Datacenter and/or Cluster (case sensitive) exists already, it will be used. Otherwise it/they will be created during the install.

·       If multiple DNS and/or NTP servers are used, use commas to separate the list. Typically, these will be consistent throughout the install, but I’ve seen some sites that use different addresses in different places.

Configuration Item




Cluster Name (E.g. HXCLUS01)


Replication Factor (2 or 3)


Controller VM password (required) (10+ chars Upper/lower/digit/special mix)

vCenter Datacenter Name
(E.g. HX-DC)

vCenter Cluster Name
[Default=cluster name above]

DNS Server(s) (Maximum two, comma separated)

NTP Server(s) (Maximum two, comma separated)

Time Zone:
(E.g. AEST +10)

Connected Services

·       Cisco recommends that Connected Services be enabled. If you wish Connected Services to be configured, enter the details below. Enable Connected Services in order to enable management via Cisco Intersight.

Configuration Item

Global value

Enable Connected Services?

Yes No

Email for service request notifications

vCenter Single Sign-On (SSO)

If using SSO, the SSO URI may be required (assumed to be the same for all sites). This information is required only if the SSO URI is not reachable. The SSO Server URL can be found in vCenter at vCenter Server > Manage > Advanced Settings, key config.vpxd.sso.sts.uri

Configuration Item

Global value

SSO Server URI

Stretched Cluster Witness

·       If you are installing a Stretched Cluster, a Witness VM is required, preferably at a third location (RTT to either site no more than 200 ms).

Configuration Item



IP address of Witness VM

Task 6:               Post installation task

Once the Hyperflex cluster has been installed, you will need to run a post-install script where additional features can be enabled, including adding additional VM Networks. Your vSphere licence must support any features asked for.

·       Enabling HA/DRS on cluster enables the vSphere High Availability (HA) feature per best practice.

·       Disabling SSH warning suppresses the SSH and shell warnings in vCenter. SSH must remain enabled for proper functioning of the HyperFlex system.

·       Configure ESXi logging onto HX datastore allows the creation of an HX Datastore to hold the ESXi log files.

Adding vMotion interfaces configures vMotion interfaces per best practice. Requires IP addresses and VLAN ID input. Refer to the design diagrams.

    • You can add additional guest VLANs to Cisco UCS Manager and within ESXi on all cluster hosts during this process.
    • Enabling NTP on ESXi hosts configures and enables NTP on ESXi hosts.
  • If SMTP mail server and Auto Support parameters are configured, a test email is sent to ensure that the SMTP relay is working.

Configuration Item
(Record a value for each cluster. If configuring a Stretched Cluster, Site#1 must = Site#2 except the vMotion vmk interface IP, which should use the same subnet.)

    • Enable HA/DRS on cluster? (Yes / No)
    • Disable SSH warning? (Yes / No)
    • Configure ESXi logging onto HX datastore? (Yes / No)
    • Datastore Name for HX ESXi logging (E.g. HX-ESXiLogs)
    • Add vMotion interfaces? (Yes / No)
    • ESXi Server 1 vMotion vmk interface IP (Recommended x.y.z.11)
    • VLAN ID for vMotion (As recorded above)
    • Add VM network VLANs? (Yes / No) (Record VLAN names in the table below)
    • Enable NTP on ESXi hosts? (Yes / No)
    • Send test email? (Yes / No)

VM Networks List

By adding additional VLANs at this point, the VLAN Names & IDs you define here will be created by UCS Manager on both Fabric Interconnects.  Additionally, the VLAN Names & IDs specified here will be added to the vSwitch called vswitch-hx-vm-network that is created on every ESXi host. (The VLAN Name on the vSwitch will be as specified here, with the VLAN ID appended to the name.)

Configuration Item
(If configuring a Stretched Cluster, recommend Site#1 = Site#2.)

    • VM Network (=VLAN name):               VM Network ID (=VLAN ID):
    • VM Network (=VLAN name):               VM Network ID (=VLAN ID):
    • VM Network (=VLAN name):               VM Network ID (=VLAN ID):
    • VM Network (=VLAN name):               VM Network ID (=VLAN ID):
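As noted above, the port group name created on vswitch-hx-vm-network is the VLAN Name with the VLAN ID appended. If you want to predict those names from the VM Networks List before the install, a one-liner does it – the hyphen separator below is my assumption; check a created port group on your own cluster for the exact format.

```python
def hx_port_group_name(vlan_name: str, vlan_id: int) -> str:
    # Assumed format: the VLAN Name with the VLAN ID appended
    # (separator assumed to be '-'; verify against a real HX cluster)
    return f"{vlan_name}-{vlan_id}"

print(hx_port_group_name("VM-Net", 101))  # VM-Net-101
```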
That completes my Installation Checklist. But it is not enough to have just a checklist of items without validating them. So, …

Before beginning Hyperflex installation…

After the logical planning for the installation has been completed, you need to validate it.

Here is a checklist of a few things that you should make sure are completed before arriving on each site for the install. Having these items complete will greatly help make the Hyperflex installation go smoothly. If doing the install for a customer, have them complete this as well as the pre-installation above.



Task 1:               The Physical environment

a.     Do you have the correct power cords for all devices that have to be racked?

b.     Do you have the 10G/40G uplinks cabled to the rack where the Hyperflex Cluster is to be installed?

c.     Are the 10G/40G uplinks physically connected to the upstream switches?

d.     If bringing FC to the FIs, do you have the FC fibre uplinks physically connected from the FIs to the upstream FC switches?

e.     Do you have 2 x RJ45 connections to the OOB Management switch that the Fabric Interconnects will connect to?

The two FIs have ports labelled L1 and L2. Two regular RJ45 Ethernet cables are needed to connect L1 to L1 and L2 to L2. Ideally, these will be ~20cm in length to keep patching obvious and neat.

f.      Do you have the L1 & L2 ports for both FIs in all locations connected via 2 x regular RJ45 Ethernet cables?

HX Clusters use single-cable management so that all management functions are carried out from UCS Manager.


g.      Have you ensured that no cables are attached to the on-board 1Gbps and CIMC interfaces of the nodes?

Task 2:               The Inter-Fabric Network

h.     Are the four VLANs defined in the Pre-Install Checklist configured on the upstream switches that the FIs will be connecting to?
(Refer to the design diagrams above)

i.     Have jumbo frames been enabled on the upstream switches that the FIs will be connecting to? (Refer to the design diagrams above)

Task 3:               The Management Switch/VLAN

The FIs need 1G Ethernet connectivity to a management switch/VLAN.

j.      Have the IP addresses defined as default gateway addresses in the Pre-Install Checklist been configured on a router/Layer3 switch?

Plug a laptop into the ports assigned to the FI Management ports in the racks where the FIs are to be installed (i.e. as in e above). Assign your laptop an IP address in the appropriate range.

k.      Can the laptop ping the default gateway IP?

l.      Can the laptop ping the NTP server defined in the Pre-Install Checklist?

Task 4:               The Installer

The installer is a .ova file (Cisco-HX-Data-Platform-Installer-vxxxxx.ova) – a vCenter (v6.5) needs to be set up with an ESXi Host and the .ova installed on the ESXi Host.

Note:            If all else fails, you can run the installer from a laptop using VMware Fusion or VMware Workstation.

When the installer VM boots, it needs to obtain an IP address and DNS information via DHCP or be allocated the IP details defined above.

The following tests are to verify that the Installer VM has been given a suitable IP address, has access to DNS, and that the DNS has been configured fully.

The installer VM username/password is root/Cisco123.

m.     Has the Installer VM been given an IP address via DHCP?
(Use the command ip addr show eth0 or ifconfig eth0 to check)

n.      Has the Installer VM been configured with the correct DNS address?
(Use the command cat /etc/resolv.conf to check)

o.   Can the Installer VM resolve forward and reverse names using the following commands?
nslookup <insert IP of first HX-ESXi host>
<repeat for all HX-ESXi hosts in the cluster>
nslookup WhateverDNSNameYouUseForESXiHost-01
<repeat for all HX-ESXi hostnames in the cluster>

p.     Can the Installer VM ping the NTP server?
ping <insert IP NTP Server>

q.     (Stretched Cluster) Can the Installer VM ping the Witness VM?
ping <insert Witness IP>
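Checks m through q lend themselves to scripting. As a sketch, here is the forward/reverse consistency part of check o as a pure function: feed it the A records and PTR records you collected with nslookup, and it reports any host whose forward and reverse entries disagree. The hostnames and IPs below are illustrative only – substitute your own cluster's values.

```python
def dns_mismatches(forward: dict, reverse: dict) -> list:
    """forward: hostname -> IP (A records); reverse: IP -> hostname (PTR records).
    Returns a list of problems found; empty means forward and reverse agree."""
    problems = []
    for host, ip in forward.items():
        if ip not in reverse:
            problems.append(f"{host}: no PTR record for {ip}")
        elif reverse[ip] != host:
            problems.append(f"{host}: PTR for {ip} points to {reverse[ip]}")
    return problems

# Illustrative records for a 3-node cluster (hypothetical names/addresses)
fwd = {"hx-esxi-01": "10.1.1.11", "hx-esxi-02": "10.1.1.12", "hx-esxi-03": "10.1.1.13"}
rev = {"10.1.1.11": "hx-esxi-01", "10.1.1.12": "hx-esxi-02", "10.1.1.13": "hx-esxi-99"}
print(dns_mismatches(fwd, rev))  # flags the hx-esxi-03 PTR mismatch
```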

Posted in Cisco, Hyperflex, UCS, VLANs | Tagged | 1 Comment

Resolution Immediacy and Deployment Immediacy – ACI Master Class

When configuring ACI, have you ever wondered what those Resolution Immediacy options [Immediate | On Demand | Pre-provision] and the Deployment Immediacy options [Immediate | On Demand] do? Read on to find out.

I always like to start with a picture.  The one below shows two ESXi hosts, one attached to Leaf 101, the other to Leaf 102. A vCenter Appliance has access to both hosts via a management NIC (vmnic0) in each host.  Although vmnic1 in each host is physically connected to an ACI leaf switch, neither host has been configured to use vmnic1, so the ACI leaf switches do not see any MAC addresses or LLDP packets from the ESXi hosts yet.

I have configured an Access Policy Chain as below that includes a VMM Domain called RN:vC-VMM.Dom, but the VMM Domain has not yet been associated with vCenter, so no vDS exists on vCenter or any ESXi hosts.

On the tenant side, my configuration is shown below. Note the Web-EPG has not yet been linked to the RN:vC-VMM.Dom:

It is important to reiterate that the VMM Domain (RN:vC-VMM.Dom) has not yet been configured with the vCenter details. Therefore, the vDS has not been created in vCenter or on the ESXi hosts, so the ACI leaf switches do not see any MAC addresses or LLDP packets yet.  And of course, as yet no policy has been sent to either Leaf 101 or Leaf 102.

We can see this by looking at the VRF situation on each leaf. Note that neither leaf even knows about any VRF except the default VRFs:

Resolution Immediacy: Pre-Provision

Now I’m going to associate my Web-EPG in my Tenant with the RN:vC-VMM.Dom and check the Pre-Provision Resolution Immediacy option:

This has now linked my EPG with the VMM Domain, as the picture shows:

Remember, no packets have left the ESXi servers to reach the ACI fabric at this stage, but by specifying Pre-provision for Resolution Immediacy, ACI looks at the Access Policy Chain for the RN:vC-VMM.Dom and sends policy to every Leaf it finds in that chain – in my case Leaf 101 and Leaf 102.  This can be seen by noticing that both leaves now have at least some policy pushed –  they both now see my Prod-VRF:

Note: Setting the Resolution Immediacy option to Pre-provision causes policy to be pushed to all switches that are defined in the Access Policy Chain in which the VMM Domain exists.

Resolution Immediacy: Immediate

So now that I have established that Pre-Provisioned Resolution Immediacy causes policies to be pushed to the leaf switches irrespective of whether hosts are attached or not, I’ll explore Immediate Resolution Immediacy by changing the Domain configuration under the EPG.

Now that the Resolution Immediacy has been changed to Immediate, the VRF information is removed from the leaf switches – in other words the “Pre-provisioned” policies have been removed.

To show when Immediate Resolution Immediacy is applied, I will now configure the VMM Domain with the vCenter credentials.  That will cause the APIC to handshake with vCenter and create a vDS with a name matching the VMM Domain (RN:vC-VMM.Dom). I’ll then configure vCenter so that one of the two ESXi hosts is given an uplink (vmnic1) on the RN:vC-VMM.Dom. This will allow LLDP packets to flow between the vDS and the Nexus 9000 Leaf Switch. Pictorially it will be:

Well, that’s done, so I’ll take another look at the VRF situation on the leaf switches:

And sure enough, policies have been immediately pushed, but ONLY to the leaf switch where the vDS has been given a connection to ACI.  Note the ESXi hosts don’t yet host a single VM – Immediate means “immediately the vDS is seen”.

Note: Setting the Resolution Immediacy option to Immediate causes policy to be pushed to leaf switches as soon as an LLDP connection is made between the vDS and the Nexus 9000 Leaf Switch.

Resolution Immediacy: On Demand

Like last time, I’ll back off the Resolution Immediacy and change it from Immediate to On Demand, then see what happens to the VRF situation on Leaf 101.

No prizes for guessing the result. My RedNectar:Prod-VRF has disappeared:

To show you when On Demand Resolution Immediacy takes place, I’ll continue with the vCenter configuration by adding the second ESXi host to the vDS, and adding a VM to each ESXi host. But I’ll only configure the VM on ESXi2 with the vDS port group assigned to its NIC. Here’s the picture:

And I’m betting you already know that the output of the show vrf commands is going to be just like this:

And as you no doubt predicted, my RedNectar:Prod-VRF has been created only on Leaf102, where the vDS RN:vC-VMM.Dom was assigned an uplink via vmnic1 to the Nexus 9000 Leaf Switch. At this stage the VMs are still powered off, so no packets had to flow for the policy to be pushed to the Leaf Switch.

Note: Setting the Resolution Immediacy option to On Demand means that policy is not pushed to the switches until a VM’s vNIC is assigned to a Port Group on the vDS created by the APIC.

Deployment Immediacy: Immediate and On Demand

Having dealt with Resolution Immediacy, it’s time to look at Deployment Immediacy.  This one is a little more straightforward, and has nothing to do with when the policies are pushed to the switches, and everything to do with when the policies are committed to Content Addressable Memory (CAM or TCAM) after being pushed.  As you would expect, Immediate Deployment means that policies will be committed to TCAM as soon as they are pushed to the switches, whether that be Pre-provisioned, when the vDS sees LLDP packets (Immediate) or when a VM is assigned to the vDS (On Demand).

On Demand Deployment Immediacy simply means that the TCAM resources are not consumed until a packet is seen.
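To summarise the three Resolution Immediacy behaviours in one place, here’s a toy model in Python of which leaf switches receive policy in each mode. The leaf IDs mirror the scenario above; this is of course a simplification of what the APIC actually does, not a description of its internals.

```python
def policy_push_targets(mode: str, aep_leaves: set, lldp_leaves: set, vm_leaves: set) -> set:
    """Return the leaf switches that receive policy for a given Resolution Immediacy.
    aep_leaves:  every leaf reachable via the Access Policy Chain of the VMM Domain
    lldp_leaves: leaves with an LLDP/CDP adjacency to the vDS
    vm_leaves:   leaves behind a host where a VM's vNIC is assigned to the port group
    """
    if mode == "pre-provision":
        return set(aep_leaves)    # pushed everywhere the Access Policy Chain reaches
    if mode == "immediate":
        return set(lldp_leaves)   # pushed as soon as the vDS is seen via LLDP/CDP
    if mode == "on-demand":
        return set(vm_leaves)     # pushed only once a VM's vNIC uses the port group
    raise ValueError(f"unknown Resolution Immediacy: {mode}")

# The scenario above: the AAEP covers Leaf 101 and 102, only ESXi1 (Leaf 101)
# has an LLDP adjacency, and only the VM on ESXi2 (Leaf 102) uses the port group
print(policy_push_targets("pre-provision", {101, 102}, {101}, {102}))  # {101, 102}
print(policy_push_targets("on-demand", {101, 102}, {101}, {102}))      # {102}
```

Deployment Immediacy then only decides when those pushed policies are programmed into TCAM: Immediate means on push, On Demand means on first packet.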

Conclusion and Best Practice

In terms of conservation of resources, using a Resolution Immediacy of On Demand is recommended, although in practice it is probably functionally equivalent to Resolution Immediacy of Immediate, because it would be rare for a vDS to be deployed on an ESXi host without any VMs using it. However, I can see that it would be possible (perhaps all the VMs have been migrated and no-one has decommissioned the vDS), so my recommendation (with one exception, see below) is to use Resolution Immediacy of On Demand.

Exception: There are times when it is necessary to use a Resolution Immediacy of Pre-Provision. If there is more than one switch hop between the ESXi host and the Nexus 9000 Leaf Switch, or there is a switch in the path that does not support LLDP (or CDP at a pinch), then LLDP packets can’t pass between the vDS and the Leaf.  In these situations, as will often be the case during migration, use a Resolution Immediacy of Pre-Provision.

Tip: Use a separate AAEP (Attachable Access Entity Profile) for all ESXi attached devices if using the Resolution Immediacy Pre-Provision option. That way you will ensure that only the switches through which the ESXi hosts enter the ACI fabric have the policies pushed to them.

Of course you can easily modify the Resolution Immediacy of an EPG as shown in the illustrations above, so if you use Pre-Provision during migration, you can change it after if you wish.

Deployment Immediacy also has some limitations as to when you can use On Demand. For instance, the microsegmentation feature requires Deployment Immediacy to be Immediate.

Use On Demand for both Resolution Immediacy and Deployment Immediacy unless:

  1. You don’t have LLDP connectivity between your leaf switch and the ESXi hosts:
    • In which case you should use Pre-Provision for Resolution Immediacy.
  2. You are using the microsegmentation feature:
    • In which case you should use Immediate for Deployment Immediacy.


Further Reading:

Click to access white-paper-c11-737909.pdf

Posted in Access Policy Chain, ACI, ACI Tutorial, Cisco, Master Class, Nexus 9000 | Tagged , , ,

ARP Gleaning – ACI Master Class

Many people are confused about the way ACI handles ARPs and whether they should enable the ARP Flooding option.  This article explains the following fact:

ARP flooding is only required if the following two conditions are met:

  1. There is a silent host in a Bridge Domain
  2. There is no IP address configured for the bridge domain in the same subnet as the silent host

The reason is that ACI does ARP Gleaning.

This is how ARP Gleaning works

Let’s start with a picture of two leaf switches with three hosts attached in the same subnet.

As you can see, two of the hosts are VMs (one on each leaf), and the other is a single-attached bare-metal (BM) host on Leaf102.

ACI is configured fairly simply –

  • An IP address is configured on the Bridge Domain.
  • Forwarding is optimised: i.e.
    • L2 Unknown Unicasts are sent to the Hardware Proxy
    • L3 Unknown Multicasts are flooded
    • Multi Destination frames are flooded within the BD
    • ARP flooding is disabled
  • WebServer VM2 and WebServer BM are on the same EPG, WebServer VM1 is on a different EPG.
    • This is just to illustrate that ARP Gleaning works at the Bridge Domain level, not VLAN encapsulation level, and is not restricted to a single switch.

The hosts are all silent Linux boxes running Lubuntu – in other words none of the hosts have sent any packet at the beginning of the scenario.

I’ll begin the test by sending a single ping from the BM host attached to Leaf102 (via eth1/26) to the VM also attached to Leaf102 (via eth1/23) while running Wireshark captures on all three hosts.  Remember, the VM has not yet sent a single packet and its MAC address is as yet unknown on Leaf102.  This can be seen by looking at the Operational tab of the EPG.

Now if you know a little about ACI, you will know that if a workstation has NEVER sent a packet, it will be unknown to its closest leaf, and therefore unknown to the entire fabric.  The question that needs to be addressed is “If ARP flooding is disabled, how can ACI find a workstation if it has never sent a packet?”.  To find the answer, read on as I describe what happens when a ping command is issued at the source station.

The ping generates three ARP requests. The following capture, taken on the sending PC, shows the first two ARPs go unanswered, then suddenly an ARP request from the Default Gateway IP turns up. This is the Gleaning ARP – the ARP request sent by the default gateway.  Shortly I’ll explain why this Gleaning ARP made it possible for the third ARP request in the capture below to get a reply from the target workstation, and for the subsequent ping packet to get a reply.

To understand why the third ARP request in the capture above got a reply, you’ll have to look at the capture on the target workstation as shown below. Note that before it received the single ARP request from the first workstation, it received three ARP requests from the default gateway IP. These are the Gleaning ARPs sent by the ACI fabric.  The purpose of these Gleaning ARPs is simply to “tickle” the target station into sending a packet – not because the gateway needs the MAC address of the target!

So as you can see in the capture above, it is not until the target has responded to the Gleaning ARP that it gets the ARP request from the source station.

I’ll wrap up with a few other points about ARP Gleaning.

  • ARP Gleaning ONLY works if the Bridge Domain (or EPG associated with the Bridge Domain) has been assigned an IP address on the same subnet with which it can source a Gleaning ARP.
  • The IP address assigned to the Bridge Domain does not have to be the default gateway IP – if you have a router or firewall attached that serves as a default gateway for an EPG and you DON’T want to turn on ARP flooding, assigning any IP address on that subnet to the Bridge Domain will ensure your hosts will find their default gateway.
  • ARP Gleaning requests are flooded throughout the Bridge Domain – this is demonstrated by looking at the packet capture of the VM on Leaf101 – it is on the same Bridge Domain but different EPG – yet it still saw the ARP Gleaning broadcast, as shown below:


It is not always necessary to enable ARP flooding on a Bridge Domain in ACI if you have silent hosts – assigning an IP address on the same subnet to the Bridge Domain will enable ARP Gleaning which may reduce the total broadcast count for the Bridge Domain.

Only if you have silent hosts on a subnet and you don’t have an IP address set on the Bridge Domain will you need to enable ARP flooding.
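That two-condition rule can be captured in a few lines. Here’s a simplified decision model in Python of how a Bridge Domain handles an ARP request for a known vs unknown target – it ignores details like the spine-proxy lookup, so take it as a memory aid rather than a description of the forwarding pipeline.

```python
def arp_handling(target_known: bool, arp_flooding: bool, bd_has_subnet_ip: bool) -> str:
    """How an ACI Bridge Domain deals with an ARP request (simplified model)."""
    if target_known:
        return "forward the ARP request directly to the known endpoint"
    if arp_flooding:
        return "flood the ARP request throughout the Bridge Domain"
    if bd_has_subnet_ip:
        return "send gleaning ARPs, sourced from the BD subnet IP, to tickle the silent host"
    return "silent host stays unreachable: enable ARP flooding or add a BD subnet IP"

# The scenario above: ARP flooding disabled, BD has an IP, target is a silent host
print(arp_handling(False, False, True))
```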

Dedication: Vineet – this one is for you!

Posted in ACI, ACI Tutorial, Cisco, Master Class | Tagged , , , | 7 Comments

An Epyc new addition to the UCS Family!

Another great post from UCS Guru. Make sure you read the full story on his blog.

Back in February of this year, I read an article in The Register announcing that Raghu Nambiar, the then chief technology officer for UCS servers, had joined AMD. I didn’t think too much of it, but when I also saw that AMD were, for the first time (in my memory), exhibiting at Cisco Live, my right eyebrow rose in a particular “Roger Moore esque” manner, and I sensed something may well be afoot.

Some of you may well have noticed that ever since 2009 there has always been an AMD CPU server qualification policy in Cisco UCS Manager, and several years ago I did bring this up with Cisco – why would an exclusively Intel-based product need such a policy? – to which, if memory serves, the answer at the time was “never say never”

Well today that “prophecy”  was fulfilled with the announcement of the…

View original post 765 more words

Posted in GNS3 WorkBench

Backup Plan

Found this in a hotel tonight. Always good to have a backup plan

Posted in GNS3 WorkBench | Tagged

Why doesn’t Easter fall at different times in different time zones?

If Easter Sunday falls “on the first Sunday AFTER the first full moon after the vernal equinox”, why doesn’t it fall at different times in different time zones?  This year for example, tonight’s (Easter Saturday March 31 2018) full moon occurs after midnight in places between the International date line and the UTC+11 time zone. So according to the formula, Kiwi kids should have to wait another week before breaking open those chocolate eggs.

Well, it turns out that the formula is not set by the astronomical path of the moon, but by a bunch of men (I’ve no doubt women weren’t invited) who formulated the Ecclesiastical Lunar Calendar so long ago that it was before the split of the Gregorian and Julian calendars. (In 325 AD/CE in fact).

Which means today we actually have two Easters, one for each of the divergent calendars, even though both follow the same formula.

Anyway, in the said Ecclesiastical Lunar Calendar, the vernal equinox is always March 21, irrespective of the position of the earth in regard to its transit around the sun. And Easter is always the Sunday following the Paschal Full Moon. And for the calculation of Easter, the Paschal Full Moon is defined as being the 14th day after the Ecclesiastical Lunar new moon – so we are back to the Ecclesiastical Lunar Calendar and its ancient origins.

Now it’s probably a good thing that there is a universal standard or two; it means we only have two variations of Easter throughout the world – the Gregorian and the Julian – and children in New Zealand, Fiji etc. don’t have to hang out for another week to get their Easter Eggs – oh, that’s unless they are following the Julian calendar (as Orthodox Christians do), in which case they will have to wait until April 8 2018!
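For the curious, the Gregorian version of that calculation (equinox pinned to March 21, Paschal full moon taken from the ecclesiastical lunar tables) reduces to pure integer arithmetic – the well-known anonymous Gregorian computus. A Python rendering:

```python
def gregorian_easter(year: int) -> tuple:
    """Anonymous Gregorian computus: returns (month, day) of Easter Sunday."""
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)                  # century leap-year corrections
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # ecclesiastical full-moon offset
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7 # days to the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2018))  # (4, 1): April 1, as the post notes
```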


Posted in blog, opinion

CSMA/CD and full duplex for wireless? It could be coming

A group of researchers at National Instruments have found a way to listen to radio signals while receiving on the same frequency.

The team found a solution that relies on in-band full duplex, so it can sense while transmitting, which potentially eliminates all collision overheads in wireless networks.

This could have huge implications – and even give your home wifi a boost if you have a lot of users – certainly will give the office and cafe wifi hotspots a boost.

The problem with existing wireless communications is that once a device starts transmitting, it doesn’t know if another device has transmitted at the same time (causing a collision) until it has finished transmitting and waited for an acknowledgement from the Access Point.  If no acknowledgement comes, it tries again. This is called Carrier Sense, Multiple Access with Collision Avoidance (CSMA/CA).

Your ancient (1980-c2000) shared Ethernet on the other hand operated in much the same way: a device would start transmitting, but was able to detect if any other device transmitted at the same time, and so stop transmitting immediately. This was called Carrier Sense, Multiple Access with Collision Detection (CSMA/CD) and is of course much more efficient than Collision Avoidance.

But that is not the whole story. Modern wired Ethernet networks use one pair of wires to transmit and another to receive, meaning they can transmit AND receive at the same time. Full Duplex.  If we could do that for wireless (and this article indicates that they have achieved full-duplex operation, albeit with just 6 devices at this stage), then the benefits could be much greater.


Posted in blog, opinion, wifi, wireless network | Tagged ,

Why have WordPress made it soooo hard to follow someone?

WordPress, you have hosted my blog since 2010.  I won’t start a tirade of things you STILL can’t do on WP, but I am going to have a whinge about one feature you have obscured.

Why have WordPress made it soooo hard to follow someone?  I should never have to respond to a reader’s comment such as the one I got today.

I would like to thank you sooooo much for such a awesome ACI blogs, I found things here which are not well documented even in Cisco Docs. You are surely doing a great job. I wish to find a subscriber button on your website and keep up with your great work.

For those who would like to follow my blog, or any other blog, you have to move your cursor to the bottom right-hand corner of the page, and/or scroll up a bit (scrolling is clearly the only option on a mouseless device). You will then get an option pop up giving you the chance to follow or subscribe to my blog.


Posted in opinion, rant, wordpress | Tagged | 2 Comments