Why doesn’t Easter fall at different times in different time zones?

If Easter Sunday falls “on the first Sunday AFTER the first full moon after the vernal equinox”, why doesn’t it fall at different times in different time zones?  This year, for example, tonight’s (Easter Saturday March 31 2018) full moon occurs after midnight in places between the International Date Line and the UTC+11 time zone. So according to the formula, Kiwi kids should have to wait another week before breaking open those chocolate eggs.

Well, it turns out that the formula is not set by the astronomical path of the moon, but by a bunch of men (I’ve no doubt women weren’t invited) who formulated the Ecclesiastical Lunar Calendar so long ago that it was before the split of the Gregorian and Julian calendars. (In 325 AD/CE in fact).

Which means today we actually have two Easters, one for each of the divergent calendars, even though both follow the same formula.

Anyway, in the said Ecclesiastical Lunar Calendar, the vernal equinox is always March 21, irrespective of the position of the earth with regard to its transit around the sun. And Easter is always the Sunday following the Paschal Full Moon. And for the calculation of Easter, the Paschal Full Moon is defined as being the 14th day after the Ecclesiastical Lunar new moon – so we are back to the Ecclesiastical Lunar Calendar and its ancient origins.

Now it’s probably a good thing that there is a universal standard or two: it means we only have two variations – the Gregorian and the Julian – of Easter throughout the world, and children in New Zealand, Fiji etc. don’t have to hang out for another week to get their Easter Eggs – oh, that’s unless they are following the Julian calendar (as Orthodox Christians do), in which case they will have to wait until April 8 2018!
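For the curious, the Gregorian version of that formula can be computed with the well-known “anonymous Gregorian” computus. This is just a sketch, not church-issued code; it encodes the tabular ecclesiastical rules described above (fixed March 21 equinox, tabular Paschal full moon), not real astronomy:

```python
def gregorian_easter(year):
    """Anonymous Gregorian computus: returns (month, day) of Easter Sunday."""
    a = year % 19                         # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30    # "epact": age of the ecclesiastical moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days until the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2018))  # (4, 1) -> Sunday April 1 2018, as above
```

Note that the inputs are whole calendar dates, with no time zones anywhere in sight, which is the whole point.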


Posted in blog, opinion | Leave a comment

CSMA/CD and full duplex for wireless? It could be coming

A group of researchers at National Instruments have found a way to listen to radio signals while transmitting on the same frequency.

The team found a solution that relies on in-band full duplex, so it can sense while transmitting, which potentially eliminates all collision overheads in wireless networks.

This could have huge implications – and even give your home wifi a boost if you have a lot of users – certainly will give the office and cafe wifi hotspots a boost.

The problem with existing wireless communications is that once a device starts transmitting, it doesn’t know if another device has transmitted at the same time (causing a collision) until it has finished transmitting and waited for an acknowledgement from the Access Point.  If no acknowledgement comes, it tries again. This is called Carrier Sense, Multiple Access with Collision Avoidance (CSMA/CA).

Your ancient (1980-c2000) shared Ethernet, on the other hand, operated in much the same way: a device would start transmitting, but was able to detect if any other device transmitted at the same time, and so stop transmitting immediately. This was called Carrier Sense, Multiple Access with Collision Detection (CSMA/CD) and is of course much more efficient than Collision Avoidance.

But that is not the whole story. Modern wired Ethernet networks use one pair of wires to transmit, and another to receive, meaning they can transmit AND receive at the same time. Full Duplex.  If we could do that for wireless, (and this article indicates that they have achieved full-duplex operation albeit with just 6 devices at this stage), then the benefits could be much greater.
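To see why detection beats avoidance, here’s a toy back-of-envelope model of the time wasted by a single collision under each scheme. The numbers are my own illustrative guesses, not figures from the article:

```python
def collision_cost_us(frame_us, ack_timeout_us, detect_us, scheme):
    """Toy model: microseconds wasted when two stations collide."""
    if scheme == "CSMA/CA":
        # A wireless sender is deaf while transmitting: the whole frame goes
        # out anyway, then it waits in vain for an ACK before backing off.
        return frame_us + ack_timeout_us
    if scheme == "CSMA/CD":
        # A wired sender hears the collision and aborts almost immediately.
        return detect_us
    raise ValueError(scheme)

# A 1500-byte frame at 54 Mbps is roughly 222 us on air.
print(collision_cost_us(222, 50, 5, "CSMA/CA"))  # 272
print(collision_cost_us(222, 50, 5, "CSMA/CD"))  # 5
```

Even with generous assumptions, the avoidance scheme pays for the whole frame plus a timeout on every collision, which is why eliminating that overhead matters so much on busy hotspots.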


Posted in blog, opinion, wifi, wireless network | Tagged , | Leave a comment

Why have WordPress made it soooo hard to follow someone?

WordPress, you have hosted my blog since 2010.  I won’t start a tirade of things you STILL can’t do on WP, but I am going to have a whinge about one feature you have obscured.

Why have WordPress made it soooo hard to follow someone?  I should never have to respond to a reader’s comment such as the one I got today.

I would like to thank you sooooo much for such a awesome ACI blogs, I found things here which are not well documented even in Cisco Docs. You are surely doing a great job. I wish to find a subscriber button on your website and keep up with your great work.

For those who would like to follow my blog, or any other wordpress.com blog, you have to move your cursor to the bottom right-hand corner of the page, and/or scroll up a bit (scrolling is clearly the only option on a mouseless device). You will then get an option pop up giving you the chance to follow or subscribe to my blog.


Posted in opinion, rant, wordpress | Tagged | 2 Comments

RedNectar’s HX Pre-Install Checklist


Completing Cisco’s Pre-installation Checklist for Cisco HX Data Platform will capture all the information you need, but not necessarily in the order you need it, and for a single site only.  So, I decided to write a version that gets the information in the order you need it and in a format that’s easier to expand into a multi-site deployment.

Logical Planning for the installation


Don’t use special characters in any passwords, for a hassle-free install.

Task 1: Fabric Interconnects – initial console config

Initial configuration of the Fabric Interconnects will require the following information.  You will need to enter the items marked * again later, so remember them.

Configuration Item




UCS Admin Password*

UCS System Name

FI-A Mgmt0 IP address

Mgmt0 IPv4 netmask

IPv4 address of default gateway

Cluster IPv4 Address*

DNS IP address*

Domain name (optional)

UCS FI-B IP address

Task 2: Fabric Interconnects – firmware upgrade

You may wish to assign the NTP server address to the Fabric Interconnects during the Firmware Upgrade.

Configuration Item




NTP Server IP

Task 3: Server Ports, Uplink Ports, FC Uplink ports on FIs

If using 6200 Fabric Interconnects, AND you plan on connecting Fibre Channel storage to the FIs now or in the future, remember these will have to be the high numbered ports and increment in steps of 2. So, for a UCS 6248, ports 31 and 32 will be the
first FC ports. For a UCS 6296, ports 47 and 48 will be the first FC ports.

For UCS 6332-16UP FIs, the order is reversed. Only the first 16 (unified) ports are capable of 4/8/16Gbps FC – but they progress from left to right in pairs, so the first two FC ports will be ports 1 & 2.

The UCS 6332 doesn’t support FC, but both the UCS 6332 and the UCS 6332-16UP FIs have 6 dedicated 40Gbps QSFP+ ports – these are the highest numbered ports on the respective FI (27-32 on the UCS 6332, 35-40 on the UCS 6332-16UP).
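As a sanity check, the port-numbering rules above can be captured in a small sketch. The helper function and its lookup table are my own; only the model names and port counts come from the text:

```python
# FC-capable port rules per FI model, as described above.
FC_RULES = {
    "UCS 6248":      {"ports": 32, "fc_from": "top"},     # highest ports first
    "UCS 6296":      {"ports": 48, "fc_from": "top"},
    "UCS 6332-16UP": {"ports": 16, "fc_from": "bottom"},  # first 16 unified ports only
}

def first_fc_ports(model, pairs=1):
    """Return the port numbers used if `pairs` FC pairs are converted."""
    rule = FC_RULES[model]
    n = pairs * 2                           # FC ports increment in steps of 2
    if rule["fc_from"] == "top":            # 6200 series: allocate from the top down
        return list(range(rule["ports"] - n + 1, rule["ports"] + 1))
    return list(range(1, n + 1))            # 6332-16UP: left to right in pairs

print(first_fc_ports("UCS 6248"))        # [31, 32]
print(first_fc_ports("UCS 6296"))        # [47, 48]
print(first_fc_ports("UCS 6332-16UP"))   # [1, 2]
```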

RedNectar’s Recommendation:

  • Allocate server ports in ascending order from the lowest port number available.
  • Allocate uplink ports to the highest numbered ports.
  • If attaching 10Gbps servers to a UCS 6332-16UP, reserve the lowest numbered ports for current and future FC connections.

The configuration of the Ethernet uplink ports is also a consideration. The original design should indicate whether you are using:

  • Unbundled uplinks
  • Two port-channelled uplinks, one to each upstream switch
  • A single port-channel from each FI to an upstream switch

Configuration Item




Server1 port number:

Port range to configure as server ports:

Port range for Ethernet uplinks:

Port ranges for Ethernet uplink port-channels:

Port range for FC uplinks:

Task 4: The Installer VM and Hyperflex configuration items

UCSM Config items – Part#1

You will need to re-enter items you’ve already configured above.





UCS Manager hostname: (=Cluster IPv4 Address)

User Name




(UCS Admin Password)




vCenter config items

Some of the following I’ve filled in with recommended values. Don’t change them unless you really know what you are doing. In other places, I’ve included the recommended values in brackets. Here are some more recommendations:

  • The vCenter password should NOT contain special characters like <>[]{}/\’`” – to be honest I don’t know the complete list, and I’m guessing at the list above based on my knowledge of how hard it is to pass some of these characters to a process via an API, which is what the Installer does to log into vCenter.

Configuration Item




vCenter Server (IP or FQDN)

vCenter username

vCenter Admin password

Installer Hypervisor Admin user name




Installer Hypervisor Admin user password




VLAN config items

You will need at least four VLAN IDs, but not all have to be unique. Here are my recommendations:
  • The VLAN ID for HX Storage traffic does not need to be seen any further Northbound than the upstream switches that connect to the FIs. So, it is safe to use the same VLAN ID at every site.
  • You may well already be using a VLAN for vMotion. It’s OK to use the same one here.  In fact, if you are moving VMs from an existing environment, it’s a good idea.
  • The list of VLANs needs to include all VLANs that the VMs will need. You can add more later, but remember there is a system wide maximum of 3000 VLANs (UCS 6300 FIs) or 2000 VLANs (UCS 6200 FIs)
  • Each cluster should have its own MAC address range.  MAC addresses always begin with 00:25:B5: so I’ve filled that much in for you. Add just one more two-digit octet to the prefix given.
  • The OOB CIMC Pool should have enough IPs to allocate one to every server now and in the future.
  • My advice is to make the IP address Pool for OOB CIMC be part of the same subnet that will be used for the HX Mgmt VLAN.
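A trivial sketch of that MAC prefix rule follows. The idea of deriving the extra octet from a cluster number is my own convention, not a checklist requirement:

```python
def mac_pool_prefix(cluster_number):
    """Build a per-cluster MAC pool prefix from the fixed 00:25:B5: OUI."""
    if not 0 <= cluster_number <= 0xFF:
        raise ValueError("the prefix only has room for one more two-digit octet")
    return "00:25:B5:{:02X}".format(cluster_number)

print(mac_pool_prefix(1))    # 00:25:B5:01
print(mac_pool_prefix(161))  # 00:25:B5:A1
```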

Configuration Item




VLAN Name for HX Mgmt [hxinbandmgmt]

VLAN ID for HX Mgmt

VLAN Name for HX Storage traffic [hxstoragedata]

VLAN ID for HX Storage traffic

VLAN Name for VM vMotion [hxvmotion]

VLAN ID for VM vMotion

VLAN Name(s) for VM Network (comma separated)

VLAN ID(s) for VM Network (comma separated)

MAC Pool Prefix




ext-mgmt IP Pool for OOB CIMC

ext-mgmt IP subnet Mask

ext-mgmt IP Gateway

iSCSI Storage and/or FC Storage

If you plan to give the HX cluster access to remote iSCSI or FC storage, it will be simpler to configure it during the install.

Configuration Item




iSCSI Storage






FC Storage

WWxN Pool




VSAN A Name [hx-ext-storage-fc-a]




VSAN B Name [hx-ext-storage-fc-b]


UCS Firmware Version

My recommendation is to upgrade the firmware before beginning the installation, and allow the installer to enter the UCS Firmware Version at Installation time.

  • If a Hyperflex Cluster Name is given here it will be added as a label to Service Profiles in UCS Manager for easier identification. Don’t confuse it with the Cluster Name required later on for the Storage Cluster.
  • Org Name can be the same for each site, but it is probably better to have a naming plan. Organisation Names are used to separate Hyperflex-specific configurations from other UCS servers in UCS Manager.

Configuration Item




Hyperflex Cluster Name (Optional)

Org Name

Hypervisor Configuration

This is the bit where the DNS configuration is important.  If your installer cannot resolve the names given here to the IP Addresses (and vice-versa) then the Servers will be added to the configuration using IP addresses only, rather than the names.

Other advice:

  • The Server IP addresses defined here will be in the HX Mgmt VLAN.
  • If using more than one DNS Server, separate with commas
  • Always use sequential numbers for IP addresses and names – allow the system to automatically generate these from the first IP Address/Name – so make sure your first hostname ends in 01
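That last convention is easy to script. Here is a sketch; the hx-esxi naming pattern and example IPs are placeholders of mine, not values from the checklist:

```python
import ipaddress

def server_plan(first_ip, first_name, count):
    """Generate sequential (IP, hostname) pairs from the first of each."""
    base = ipaddress.ip_address(first_ip)
    stem = first_name.rstrip("0123456789")   # "hx-esxi-01" -> "hx-esxi-"
    return [(str(base + i), "{}{:02d}".format(stem, i + 1)) for i in range(count)]

for ip, name in server_plan("10.1.1.11", "hx-esxi-01", 4):
    print(ip, name)   # 10.1.1.11 hx-esxi-01 ... 10.1.1.14 hx-esxi-04
```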

Configuration Item




Subnet Mask



DNS Server(s)

Server1 IP address

Server1 Hostname (xxx-01)

Cluster IP Configuration

  • The Hypervisor IP addresses and Storage Controller IP addresses defined for the HX Mgmt VLAN must be in the same subnet as the Host IP addresses given in the previous step.
  • The Hypervisor IP addresses and Storage Controller IP addresses defined here for the Data VLAN must be in a different subnet.  This subnet does not really need to be routable (so gateway IP is optional), although that may change when synchronous replication is supported.
  • Always use sequential numbers for IP addresses – allow the system to automatically generate these from the first IP Address.

Configuration Item




Management VLAN – make both IPs on same subnet

Hypervisor 1 IP Address (Same as last step)

Storage Controller 1 IP address

Management Cluster IP address

Management Cluster Gateway

Data VLAN – make both IPs on same subnet

Hypervisor 1 IP Address

Storage Controller 1 IP address

Data Cluster IP address

Data Cluster Gateway

Storage Cluster and vCenter Configuration

  • The Cluster Name is the name given to the Storage Cluster. Use a naming convention.
  • The controller VM is the VM that manages the storage cluster. Use a secure password without special characters.
  • Cisco recommends using Replication factor 3 for Hybrid Clusters, 2 for All Flash unless a heightened level of redundancy is desired.
  • If the vCenter Datacenter and/or Cluster (case sensitive) exists already, it will be used. Otherwise it/they will get created during install.
  • If multiple DNS and/or NTP servers are used, use commas to separate the list.

Configuration Item




Cluster Name


Replication Factor (2 or 3)


Controller VM password (required)

vCenter Datacenter Name

vCenter Cluster Name

DNS Server(s) (Use same values as last time)

NTP Server(s)

That completes my Installation Checklist.  But it is not enough to have just a checklist of items without validating them. So, …

Before beginning Hyperflex installation…

After the logical Planning for the installation has been completed, you need to validate it.

Here is a checklist of a few things that you should make sure are completed before arriving on each site for the install.  Having these items complete will greatly help make the Hyperflex installation go smoothly.  If doing the install for a customer, have them complete this as well as the pre-installation checklist for EACH site.



Task 1: The Physical environment

a. Do you have the correct power cords for all devices that have to be racked?

b. Do you have the 10G/40G uplinks cabled to the rack where the Hyperflex Cluster is to be installed?

c. Are the 10G/40G uplinks physically connected to the upstream switches?

d. If bringing FC to the FIs, do you have the FC fibre uplinks cabled to the rack where the Hyperflex Cluster is to be installed?

e. If bringing FC to the FIs, do you have the FC fibre uplinks physically connected to the upstream FC switches?

f. Do you have 2 x RJ45 connections to the OOB Management switch that the Fabric Interconnects will connect to?

The two FIs have ports labelled L1 and L2. Two regular RJ45 Ethernet cables are needed to connect L1 to L1 and L2 to L2. Ideally, these will be ~20cm in length to keep patching obvious and neat.

g. Do you have 2 x regular RJ45 Ethernet cables to patch the L1 & L2 ports for both FIs in all locations?

Task 2: The Inter-Fabric Network

h. Are the four VLANs defined in the Pre-Install Checklist configured on the upstream switches that the FIs will be connecting to?

i. Have jumbo frames been enabled on the upstream switches that the FIs will be connecting to?

Task 3: The Management Switch/VLAN

The FIs need 1G Ethernet connectivity to a management switch/VLAN.

j. Have the IP addresses defined as default gateway addresses in the Pre-Install Checklist been configured on a router/Layer 3 switch?

Plug a laptop into the ports assigned to the FI Management ports in the racks where the FIs are to be installed (i.e. as in f above).  Assign your laptop an IP address in the appropriate range.

k. Can the laptop ping the default gateway IP?

l. Can the laptop ping the NTP server defined in the Pre-Install Checklist?

Task 4: The Installer

The installer is a .ova file (Cisco-HX-Data-Platform-Installer-vxxxxx.ova) – a vCenter (v6.5) needs to be set up with an ESXi Host and the .ova installed on the ESXi Host.

Note: If all else fails, you can run the installer from a laptop using VMware Fusion or VMware Workstation.

When the installer VM boots, it needs to be given an IP address and DNS information via DHCP.

The following tests are to verify that the Installer VM has been given a suitable IP address, has access to DNS, and that the DNS has been configured fully.

Note: The installer VM username/password is root/Cisco123

m. Has the Installer VM been given an IP address via DHCP? (Use the command ip addr show eth0 or ifconfig eth0 to check)

n. Has the Installer VM been configured with the correct DNS address? (Use the command cat /etc/resolv.conf to check)

o. Can the Installer VM resolve forward and reverse names using the following commands?
nslookup <insert IP of first HX-ESXi host>
nslookup <WhateverDNSNameYouUseForESXiHost-01>

p. Can the Installer VM ping the NTP server?
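If you’d rather script the forward/reverse checks in item o than type nslookup commands, something like this standard-library sketch works. The hostnames here are placeholders for your real ESXi host names:

```python
import socket

def resolve_both_ways(hostname):
    """Forward-resolve the name, then reverse-resolve the answer."""
    ip = socket.gethostbyname(hostname)    # forward (A record)
    name = socket.gethostbyaddr(ip)[0]     # reverse (PTR record)
    return ip, name

def names_match(a, b):
    """Compare short hostnames, ignoring DNS domain suffix and case."""
    return a.split(".")[0].lower() == b.split(".")[0].lower()

try:
    ip, name = resolve_both_ways("localhost")   # substitute your first HX-ESXi host
    print(ip, name, names_match("localhost", name))
except OSError as err:
    print("lookup failed - DNS is not configured fully:", err)
```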

Posted in Cisco, Data Center, Data Centre, Hyperflex, UCS | Tagged , , , | Leave a comment


Note: This post started as an answer I gave on the Cisco Support Forum. This version is slightly expanded with pictures and examples.

In this post I will examine the roles of three very important protocols that exist in the ACI environment.

I will explain

  • that IS-IS is the underlying routing protocol that is used by the leaves and spines to learn where they sit in the topology in relation to each other
  • how Leaf switches use COOP to report local station information to the Spine (Oracle) switches
  • how BGP and MP-BGP are used to redistribute routes from external sources to leaf switches.

Let me start with a picture.  Imagine a simple 2-leaf/2-spine topology with HostA attached to Leaf1 and with HostB attached to Leaf2.

  • Leaf1 has a VTEP address of
  • Leaf2 has a VTEP address of
  • Spine1 has a VTEP address of
  • Spine2 has a VTEP address of
  • HostA has a MAC address of A and an IP address of and is attached to port 1/5 on Leaf1
  • HostB has a MAC address of B and an IP address of and is attached to port 1/6 on Leaf2

Enter IS-IS

The leaves and spines will exchange IS-IS routing updates with each other so that Leaf1 sees that it has two equally good paths to reach Leaf2, and Leaf2 sees that it has two equally good paths to reach Leaf1.

Leaf1# show ip route vrf overlay-1
IP Route Table for VRF "overlay-1", ubest/mbest: 2/0
*via, eth1/51.2, [115/3], 6d20h, isis-isis_infra, L1
*via, eth1/52.2, [115/3], 6d20h, isis-isis_infra, L1

For now, that’s all we need to know about IS-IS – it is the routing protocol used by the VTEPs to learn how to reach the other VTEPs.

Now think about the hosts.

This is where COOP comes in.

When Leaf1 learns about HostA because, say, HostA sent an ARP request seeking the MAC address of (which you know is HostB, but that’s not relevant at the moment), Leaf1 looks at that ARP request, and just like a normal switch, learns that MAC A is present on port 1/5.  But the leaf is a bit more clever than that: it looks INSIDE the payload of the ARP packet, learns that HostA also has an IP address of and records all this information in its Local Station Table.

Leaf1#show endpoint interface ethernet 1/5

VLAN/Domain  Encap VLAN  MAC/IP Address  Interface
65           vlan-2051  a036.9f86.e94e L eth1/5
Tenant1:VRF1 vlan-2051    L eth1/5

AND THEN reports this information to one of the spine switches (chosen at random) using the Council Of Oracles Protocol (COOP).  The spine switch (oracle) that was chosen then relays this information to all the other spines (oracles) so that every spine (oracle) has a complete record of every end point in the system.

The spines (oracles) record the information learned via the COOP in the Global Proxy Table, and this information is used to resolve unknown destination MAC/IP addresses when traffic is sent to the Proxy address.

Note that all of this happens without any involvement from BGP.

But to round off the COOP story, we would assume that at some stage Leaf2 (a citizen) will also learn HostB‘s MAC and IP and also inform one of the spines (oracles) at random of this information using the COOP.

Spine1#show coop internal info repo ep | egrep -i "mac|real|-"
EP mac : A0:36:9F:86:E9:4E
MAC Tunnel :
Real IPv4 EP :
EP mac : A0:36:9F:61:88:FD
MAC Tunnel :
Real IPv4 EP :

So COOP is used solely for the purpose of distributing endpoint information to spine switches (oracles). As far as I know, spine switches never use COOP to distribute end host information to leaf switches.
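The “look INSIDE the ARP payload” step that feeds all of this is just reading the sender fields of the ARP packet. Here’s an illustrative sketch (not ACI code) using the RFC 826 layout, with the MAC address from the endpoint output above:

```python
import struct

def glean(arp_payload):
    """Extract the sender MAC and sender IP from an ARP payload (RFC 826)."""
    htype, ptype, hlen, plen, oper = struct.unpack_from("!HHBBH", arp_payload, 0)
    sha = arp_payload[8:8 + hlen]                 # sender hardware address
    spa = arp_payload[8 + hlen:8 + hlen + plen]   # sender protocol address
    mac = ":".join("{:02x}".format(b) for b in sha)
    ip = ".".join(str(b) for b in spa)
    return mac, ip

# ARP request from a0:36:9f:86:e9:4e / 10.0.0.1 asking about 10.0.0.2
# (the IP addresses are made up for the example)
pkt = struct.pack("!HHBBH6s4s6s4s",
                  1, 0x0800, 6, 4, 1,
                  bytes.fromhex("a0369f86e94e"), bytes([10, 0, 0, 1]),
                  b"\x00" * 6, bytes([10, 0, 0, 2]))
print(glean(pkt))  # ('a0:36:9f:86:e9:4e', '10.0.0.1')
```

The leaf records exactly this MAC-plus-IP pair in its Local Station Table before reporting it via COOP.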

So where does BGP fit in?

BGP is not needed until an external router is connected.  So now imagine that Leaf2 has had a router connected and has learned some routes from that external router for a particular VRF for a particular Tenant.

How can Leaf2 pass this information on to Leaf1 where HostA is trying to send packets to one of these external networks?  For Leaf2 to be able to pass routing information on to Leaf1 and keep that information exclusive to the same VRF, we need a routing protocol that is capable of exchanging routing information for multiple VRFs across an underlay network.

Which is exactly what MP-BGP was invented for – to carry routing information across MPLS underlay networks.  In the case of ACI, BGP is configured by choosing an Autonomous System number and nominating one of the spine switches to be a route reflector.  MP-BGP is self-configuring: you don’t need to do anything to make it work!

(Although you will have to configure your Tenant to exchange routes with the external router.)

Leaf1# show ip route vrf Tenant1:VRF1, ubest/mbest: 1/0, attached, direct, pervasive
*via, [1/0], 04:43:32, static, tag 4294967295, ubest/mbest: 1/0, attached, pervasive
*via, vlan25, [1/0], 03:52:23, local, local, ubest/mbest: 1/0
*via, [200/5], 00:11:41, bgp-1, internal, tag 1
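Conceptually, MP-BGP’s trick is just keying routes by (VRF, prefix) rather than by prefix alone, so identical prefixes in different tenants never collide. A minimal illustration follows; all the names and prefixes are invented, and this is of course nothing like a real BGP implementation:

```python
routes = {}

def mp_bgp_advertise(vrf, prefix, next_hop):
    """Install a route keyed by (VRF, prefix), MP-BGP style."""
    routes[(vrf, prefix)] = next_hop

# The same default route, learned independently in two tenants' VRFs:
mp_bgp_advertise("Tenant1:VRF1", "0.0.0.0/0", "Leaf2")
mp_bgp_advertise("Tenant2:VRF1", "0.0.0.0/0", "Leaf3")

print(routes[("Tenant1:VRF1", "0.0.0.0/0")])  # Leaf2
print(routes[("Tenant2:VRF1", "0.0.0.0/0")])  # Leaf3
```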

aka Chris Welsh

Posted in ACI, ACI configuration, APIC, Cisco, Data Center, Data Centre, EPG, L3 Out, L3out | Tagged , , | Leave a comment

Guest Post! WTF Are all those Checkboxes? (ACI L3 Outs) – Part 2 of ???

Found this great post explaining a lot of fine detail on ACI L3 outs – make sure you check out the original!

Come Route With Me!

My friend and colleague Mr. Jason Banker recently ran into some good times with the mysteries of the ACI L3 Out Checkbox Madness! He Slack’d me and told me he’d found some clown’s blog post about it (yours truly) and that some updates and additional information were needed, so he kindly volunteered some time to help out! Without further ado, here is Jason’s Checkbox Madness:

As we continue to deploy fabrics we always joke about these damn routing checkboxes shooting us in the foot.  We play with different scenarios in the lab to ensure we understand how these pesky boxes work and what other options we have for future deployments.   The scenario here was to get different OSPF areas connected to the same border leaf using ACI as the transit.  This scenario brings up some certain challenges and hopefully my testing will help others understand it a little better…


Posted in GNS3 WorkBench

Non overlapping VTEP IP addresses in Cisco ACI

In a Cisco ACI deployment, Cisco recommends that “The TEP IP address pool should not overlap with existing IP address pools that may be in use by the servers (in particular, by virtualized servers).”

Let me tell you a reason much closer to reality why you might want to avoid overlapping your Cisco ACI TEP addresses with your locally configured addressing scheme.

When you first configure a Cisco ACI fabric, you need to configure a range of IP addresses that the ACI Fabric uses internally for VTEP addressing of the APICs, leaf and spine switches and other internally used addresses like anycast addresses for the spine proxy functions.

As I mentioned, Cisco recommends that “The TEP IP address pool should not overlap with existing IP address pools that may be in use by the servers (in particular, by virtualized servers).” I can only guess by the wording of this advice that Cisco sees that there may be some issue with the APICs being able to reach remote VTEPs on Cisco AVS virtual switches, but I see this as an outlier scenario.

The problem with VTEP IP address pools is the APICs.  You see, the APICs can’t handle:

  1. having a management IP address that overlaps with the VTEP address space, (it can’t figure out which interface to send management responses on) or
  2. being accessed from a workstation that is using an IP address that overlaps with the VTEP address space.

Since it is conceivable that any internal IP address may need to access the APIC for some reason sometime, I would recommend that you don’t overlap VTEP addresses with any currently used internal addresses.
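Before committing to a TEP pool, a quick pre-flight check with Python’s ipaddress module can flag clashes. The example ranges below are mine, purely for illustration:

```python
import ipaddress

def overlaps(vtep_pool, in_use):
    """Return the in-use subnets that would clash with the proposed VTEP pool."""
    pool = ipaddress.ip_network(vtep_pool)
    return [s for s in in_use if ipaddress.ip_network(s).overlaps(pool)]

# Proposed VTEP pool vs. subnets already used internally (OOB mgmt, users, ...)
print(overlaps("10.0.0.0/16", ["10.0.3.0/24", "192.168.10.0/24"]))
# ['10.0.3.0/24']  -> that OOB subnet would clash with the VTEP pool
```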

Below is an example of the routing table from an APIC:

apic1# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
UG        0 0          0 oobmgmt
UG        0 0          0 bond0.3967
UH        0 0          0 bond0.3967
UGH       0 0          0 bond0.3967
UGH       0 0          0 bond0.3967
U         0 0          0 teplo-1
U         0 0          0 lxcbr0
U         0 0          0 oobmgmt
U         0 0          0 docker0

In this case, the VTEP address range is, and the APIC sees all 10.0.x.x IP addresses as being reachable via the bond0.3967 interface, as shown by the UG 0 0 0 bond0.3967 routing table entry on the APIC.

Recall I said that the APICs can’t handle:

  1. having a management IP address that overlaps with the VTEP address space, (it can’t figure out which interface to send management responses on) or
  2. being accessed from a workstation that is using an IP address that overlaps with the VTEP address space.

I’ll deal with case #2 first.

Now imagine for a minute I have a workstation with an IP address of say that wishes to communicate with the OOB (Out of Band) management IP address of the APIC, which happens to be  Now that remote workstation of may well have a perfectly good route to, and may indeed be able to send packets to the APIC.

The problem of course arises when the APIC tries to send the reply packets to As per the APIC’s routing table, the APIC would expect to reach via its bond0.3967 interface, as shown by the UG 0 0 0 bond0.3967 routing table entry on the APIC.

Similarly, with case #1. This time, imagine I had used as my OOB Management subnet.  Since that overlaps with my VTEP range ( there is potential that IP addresses from my OOB subnet ( could be allocated to VTEPs somewhere – and if that happened my APIC would be unable to communicate with any other address on the OOB subnet that clashes with a VTEP address.  In theory, the APIC would still be able to communicate with the VTEP addresses because it adds a /32 address to its routing table for every VTEP, but in my experience when I saw a customer with this configuration there was a problem communicating with the OOB subnet.


Just been reading this discussion on the Cisco forum – it seems that the docker0 interface that was introduced in version 2.2 may also screw up the APIC’s view of the rest of the world in the same way.


This is an expansion of a reply I gave on the Cisco Support forum

More information on VTEP addressing in the Cisco Application Centric Infrastructure Best Practices Guide

Posted in ACI, ACI configuration, APIC, Cisco, Data Center, Data Centre | Tagged , , , ,