RedNectar’s HX Pre-Install Checklist

 

Completing Cisco’s Pre-installation Checklist for Cisco HX Data Platform will capture all the information you need, but not necessarily in the order you need it, and for a single site only.  So, I decided to write a version that gets the information in the order you need it and in a format that’s easier to expand into a multi-site deployment.

Logical Planning for the installation

Tip:

Don’t use special characters in any passwords for a hassle-free install.

Task 1:
Fabric Interconnects – initial console config

Initial configuration of the Fabric Interconnects will require the following information.  You will need to enter the items marked * again later, so remember them.

Configuration Item

Site#1

Site#2

Site#n

UCS Admin Password*

UCS System Name

FI-A Mgmt0 IP address

Mgmt0 IPv4 netmask

IPv4 address of default gateway

Cluster IPv4 Address*

DNS IP address*

Domain name (optional)

UCS FI-B IP address

Task 2:
Fabric Interconnects – firmware upgrade

You may wish to assign the NTP server address to the Fabric Interconnects during the Firmware Upgrade.

Configuration Item

Site#1

Site#2

Site#n

NTP Server IP

Task 3:
Server Ports, Uplink Ports, FC Uplink ports on FIs

If using 6200 Fabric Interconnects, AND you plan on connecting Fibre Channel storage to the FIs now or in the future, remember these will have to be the high numbered ports and increment in steps of 2. So, for a UCS 6248, ports 31 and 32 will be the first FC ports. For a UCS 6296, ports 47 and 48 will be the first FC ports.

For UCS 6332-16UP FIs, the order is reversed. Only the first 16 (unified) ports are capable of 4/8/16Gbps FC – but they progress from left to right in pairs, so the first two FC ports will be ports 1 & 2.

The UCS 6332 doesn’t support FC, but both the UCS 6332 and the UCS 6332-16UP FIs have 6 dedicated 40Gbps QSFP+ ports – these are the highest numbered ports on the respective FI (27-32 on the UCS 6332, 35-40 on the UCS 6332-16UP).

RedNectar’s Recommendation:

Allocate server ports in ascending order from the lowest port number available.

Allocate uplink ports to the highest numbered ports.

Exception:

If attaching 10Gbps servers to a UCS 6332-16UP, reserve the lowest numbered ports for current and future FC connections.

The configuration of the Ethernet uplink ports is also a consideration. The original design should indicate whether you are using:

  • Unbundled uplinks
  • Two port-channelled uplinks, one to each upstream switch
  • A single port-channel from each FI to an upstream switch

Configuration Item

Site#1

Site#2

Site#n

Server1 port number:

Port range to configure as server ports:

Port range for Ethernet uplinks:

Port ranges for Ethernet uplink port-channels:

Port range for FC uplinks:

Task 4:
The Installer VM and Hyperflex configuration items

UCSM Config items – Part#1

You will need to re-enter items you’ve already configured above.

Configuration Item

Site#1

Site#2

Site#n

UCS Manager hostname: (= Cluster IPv4 Address)

User Name

admin

admin

admin

Password:
(UCS Admin Password)

 

 

 

vCenter config items

Some of the following I’ve filled in with recommended values. Don’t change them unless you really know what you are doing. In other places, I’ve included the recommended values in brackets. Here are some more recommendations:

  • The vCenter password should NOT contain special characters like []{}/\'`" – to be honest I don’t know the complete list, and I’m guessing at the list above based on my knowledge of how hard it is to pass some of these characters to a process via an API, which is what the Installer does to log into vCenter.
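
If you want a quick pre-flight test of a candidate password against that (guessed) character list, something like this works in any POSIX shell – the variable name is hypothetical, and you should adjust the character class if you ever learn the definitive list:

printf '%s' "$VCENTER_PASS" | grep -q '[][{}/\\`"'\'']' && echo "contains risky characters"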

Configuration Item

Site#1

Site#2

Site#n

vCenter Server (IP or FQDN)

vCenter username

vCenter Admin password

Installer Hypervisor Admin user name

root

root

root

Installer Hypervisor Admin user password

Cisco123

Cisco123

Cisco123

VLAN config items

You will need at least four VLAN IDs, but not all have to be unique. Here are my recommendations:

  • The VLAN ID for HX Storage traffic does not need to be seen any further northbound than the upstream switches that connect to the FIs, so it is safe to use the same VLAN ID at every site.
  • You may well already be using a VLAN for vMotion. It’s OK to use the same one here.  In fact, if you are moving VMs from an existing environment, it’s a good idea.
  • The list of VLANs needs to include all VLANs that the VMs will need. You can add more later, but remember there is a system-wide maximum of 3000 VLANs (UCS 6300 FIs) or 2000 VLANs (UCS 6200 FIs).
  • Each cluster should have its own MAC address range.  MAC addresses always begin with 00:25:B5: so I’ve filled that much in for you. Add just one more two-digit hex byte to the prefix given – e.g. 00:25:B5:A1 for Site#1, 00:25:B5:A2 for Site#2 (hypothetical values).
  • The OOB CIMC Pool should have enough IPs to allocate one to every server now and in the future.
  • My advice is to make the IP address pool for OOB CIMC part of the same subnet that will be used for the HX Mgmt VLAN.

Configuration Item

Site#1

Site#2

Site#n

VLAN Name for HX Mgmt [hxinbandmgmt]

VLAN ID for HX Mgmt

VLAN Name for HX Storage traffic [hxstoragedata]

VLAN ID for HX Storage traffic

VLAN Name for VM vMotion [hxvmotion]

VLAN ID for VM vMotion

VLAN Name(s) for VM Network (comma separated)

VLAN ID(s) for VM Network (comma separated)

MAC Pool Prefix

00:25:B5:  

00:25:B5:  

00:25:B5:  

ext-mgmt IP Pool for OOB CIMC

ext-mgmt IP subnet Mask

ext-mgmt IP Gateway

iSCSI Storage and/or FC Storage

If you plan to give the HX cluster access to remote iSCSI or FC storage, it will be simpler to configure it during the install.

Configuration Item

Site#1

Site#2

Site#n

iSCSI Storage

VLAN A Name

VLAN A ID

VLAN B Name

 

VLAN B ID

FC Storage

WWxN Pool

20:00:00:25:B5:

20:00:00:25:B5:

20:00:00:25:B5:

VSAN A Name [hx-ext-storage-fc-a]

VSAN A ID


VSAN B Name [hx-ext-storage-fc-b]

VSAN B ID

UCS Firmware Version

My recommendation is to upgrade the firmware before beginning the installation, and allow the installer to enter the UCS Firmware Version at Installation time.

  • If a Hyperflex Cluster Name is given here it will be added as a label to Service Profiles in UCS Manager for easier identification. Don’t confuse it with the Cluster Name required later on for the Storage Cluster.
  • Org Name can be the same for each site, but it is probably better to have a naming plan. Organisation Names are used to separate Hyperflex-specific configurations from other UCS servers in UCS Manager.

Configuration Item

Site#1

Site#2

Site#n

Hyperflex Cluster Name (Optional)

Org Name

Hypervisor Configuration

This is the bit where the DNS configuration is important.  If your installer cannot resolve the names given here to the IP addresses (and vice versa), then the servers will be added to the configuration using IP addresses only, rather than the names. (A quick validation sketch follows below.)

Other advice:

  • The Server IP addresses defined here will be in the HX Mgmt VLAN.
  • If using more than one DNS Server, separate with commas
  • Always use sequential numbers for IP addresses and names – allow the system to automatically generate these from the first IP address/name – so make sure your first hostname ends in 01.
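
To validate the DNS entries before the install, run a quick check from any machine on the management network. The hostname and IP here are hypothetical – substitute your own values:

nslookup hx-esxi-01.example.com    # forward lookup - expect the Server1 IP address
nslookup 10.1.1.11                 # reverse lookup - expect the Server1 hostname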

Configuration Item

Site#1

Site#2

Site#n

Subnet Mask

 

Gateway

DNS Server(s)

Server1 IP address

Server1 Hostname (xxx-01)

Cluster IP Configuration

  • The Hypervisor IP addresses and Storage Controller IP addresses defined for the HX Mgmt VLAN must be in the same subnet as the Host IP addresses given in the previous step.
  • The Hypervisor IP addresses and Storage Controller IP addresses defined here for the Data VLAN must be in a different subnet.  This subnet does not really need to be routable (so gateway IP is optional), although that may change when synchronous replication is supported.
  • Always use sequential numbers for IP addresses – allow the system to automatically generate these from the first IP address. (A hypothetical addressing example follows.)
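
To make these rules concrete, here is a hypothetical addressing sketch for a single four-node cluster (all values are illustrative, not recommendations):

Management VLAN (10.1.1.0/24, routable):  Hypervisors 10.1.1.11-14, Storage Controllers 10.1.1.21-24, Cluster IP 10.1.1.30, Gateway 10.1.1.1
Data VLAN (192.168.100.0/24, non-routed): Hypervisors 192.168.100.11-14, Storage Controllers 192.168.100.21-24, Cluster IP 192.168.100.30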

Configuration Item

Site#1

Site#2

Site#n

Management VLAN – make both IPs part of the same subnet

Hypervisor 1 IP Address (Same as last step)

Storage Controller 1 IP address

Management Cluster IP address

Management Cluster Gateway

Data VLAN – make both IPs part of the same subnet

Hypervisor 1 IP Address

Storage Controller 1 IP address

Data Cluster IP address

Data Cluster Gateway

Storage Cluster and vCenter Configuration

  • The Cluster Name is the name given to the Storage Cluster. Use a naming convention.
  • The controller VM is the VM that manages the storage cluster. Use a secure password without special characters.
  • Cisco recommends using Replication factor 3 for Hybrid Clusters, 2 for All Flash unless a heightened level of redundancy is desired.
  • If the vCenter Datacenter and/or Cluster (case sensitive) exists already, it will be used. Otherwise it/they will be created during the install.
  • If multiple DNS and/or NTP servers are used, use commas to separate the list.

Configuration Item

Site#1

Site#2

Site#n

Cluster Name

 

Replication Factor (2 or 3)

 

Controller VM password (required)

vCenter Datacenter Name

vCenter Cluster Name

DNS Server(s) (Use same values as last time)

NTP Server(s)

That completes my Installation Checklist.  But it is not enough to have just a checklist of items without validating them. So, …

Before beginning Hyperflex installation…

After the logical planning for the installation has been completed, you need to validate it.

Here is a checklist of a few things that you should make sure are completed before arriving on each site for the install.  Having these items complete will greatly help the Hyperflex installation go smoothly.  If doing the install for a customer, have them complete this as well as the pre-installation checklist for EACH site.

Task

Completed?

Task 1: The Physical environment

a. Do you have the correct power cords for all devices that have to be racked?

b. Do you have the 10G/40G uplinks cabled to the rack where the Hyperflex Cluster is to be installed?

c. Are the 10G/40G uplinks physically connected to the upstream switches?

d. If bringing FC to the FIs, do you have the FC fibre uplinks cabled to the rack where the Hyperflex Cluster is to be installed?

e. If bringing FC to the FIs, do you have the FC fibre uplinks physically connected to the upstream FC switches?

f. Do you have 2 x RJ45 connections to the OOB Management switch that the Fabric Interconnects will connect to?

The two FIs have ports labelled L1 and L2. Two regular RJ45 Ethernet cables are needed to connect L1 to L1 and L2 to L2. Ideally, these will be ~20cm in length to keep patching obvious and neat.

g. Do you have 2 x regular RJ45 Ethernet cables to patch the L1 & L2 ports for both FIs in all locations?

Task 2: The Inter-Fabric Network

h. Are the four VLANs defined in the Pre-Install Checklist configured on the upstream switches that the FIs will be connecting to?

i. Have jumbo frames been enabled on the upstream switches that the FIs will be connecting to?
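
For reference, here is a minimal sketch of what items h and i might look like on a pair of upstream Nexus switches. The VLAN IDs are hypothetical, and MTU syntax varies by platform (some models set jumbo frames via a QoS policy rather than per interface), so verify against your switch documentation:

vlan 3091
  name hxinbandmgmt
vlan 3092
  name hxstoragedata
vlan 3093
  name hxvmotion
vlan 100
  name VMNetwork
!
interface Ethernet1/1
  description To-FI-A
  switchport mode trunk
  switchport trunk allowed vlan 100,3091-3093
  mtu 9216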

Task 3: The Management Switch/VLAN

The FIs need 1G Ethernet connectivity to a management switch/VLAN.

j. Have the IP addresses defined as default gateway addresses in the Pre-Install Checklist been configured on a router/Layer 3 switch?

Plug a laptop into the ports assigned to the FI Management ports in the racks where the FIs are to be installed (i.e. as in f above).  Assign your laptop an IP address in the appropriate range.

k. Can the laptop ping the default gateway IP?

l. Can the laptop ping the NTP server defined in the Pre-Install Checklist?

Task 4: The Installer

The installer is a .ova file (Cisco-HX-Data-Platform-Installer-vxxxxx.ova) – a vCenter (v6.5) needs to be set up with an ESXi Host and the .ova deployed on the ESXi Host.

Note: If all else fails, you can run the installer from a laptop using VMware Fusion or VMware Workstation.

When the installer VM boots, it needs to be given an IP address and DNS information via DHCP.

The following tests are to verify that the Installer VM has been given a suitable IP address, has access to DNS, and that the DNS has been configured fully.

Note: The installer VM username/password is root/Cisco123

m. Has the Installer VM been given an IP address via DHCP? (Use the command ip addr show eth0 or ifconfig eth0 to check)

n. Has the Installer VM been configured with the correct DNS address? (Use the command cat /etc/resolv.conf to check)

o. Can the Installer VM resolve forward and reverse names using the following commands?
nslookup <hostname>
nslookup <IP address>

p. Can the Installer VM ping the NTP server?
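
Checks m through p can be run in one session from the Installer VM’s shell. A sketch, with hypothetical hostnames and addresses:

ip addr show eth0        # m: expect a DHCP-assigned IP address
cat /etc/resolv.conf     # n: expect your DNS server address(es)
nslookup hx-esxi-01      # o: forward lookup of a (hypothetical) ESXi hostname
nslookup 10.1.1.11       # o: reverse lookup of the corresponding (hypothetical) IP
ping -c 3 10.1.1.250     # p: NTP server reachability (hypothetical address)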


ISIS, COOP, BGP and MP-BGP in Cisco ACI

Note: This post started as an answer I gave on the Cisco Support Forum. This version is slightly expanded with pictures and examples.

In this post I will examine the roles of three very important protocols that exist in the ACI environment.

I will explain

  • that IS-IS is the underlying routing protocol that is used by the leaves and spines to learn where they sit in the topology in relation to each other
  • how Leaf switches use COOP to report local station information to the Spine (Oracle) switches
  • how BGP and MP-BGP are used to redistribute routes from external sources to leaf switches.

Let me start with a picture.  Imagine a simple 2-leaf/2-spine topology with HostA attached to Leaf1 and HostB attached to Leaf2.

  • Leaf1 has a VTEP address of 10.0.1.101
  • Leaf2 has a VTEP address of 10.0.1.102
  • Spine1 has a VTEP address of 10.0.1.201
  • Spine2 has a VTEP address of 10.0.1.202
  • HostA has a MAC address of A and an IP address of 192.168.1.1 and is attached to port 1/5 on Leaf1
  • HostB has a MAC address of B and an IP address of 192.168.1.2 and is attached to port 1/6 on Leaf2

Enter IS-IS

The leaves and spines will exchange IS-IS routing updates with each other so that Leaf1 sees that it has two equally good paths to reach Leaf2, and Leaf2 sees that it has two equally good paths to reach Leaf1.


Leaf1# show ip route vrf overlay-1 10.0.1.102
IP Route Table for VRF "overlay-1"

10.0.1.102/32, ubest/mbest: 2/0
*via 10.0.1.201, eth1/51.2, [115/3], 6d20h, isis-isis_infra, L1
*via 10.0.1.202, eth1/52.2, [115/3], 6d20h, isis-isis_infra, L1

For now, that’s all we need to know about IS-IS – it is the routing protocol used by the VTEPs to learn how to reach the other VTEPs.
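
Should you want to verify the adjacencies behind those routes, the fabric IS-IS process can be queried in the same overlay-1 VRF (the command form may vary slightly between ACI versions, so treat this as a hedged example):

Leaf1# show isis adjacency vrf overlay-1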

Now think about the hosts.

This is where COOP comes in.

When Leaf1 learns about HostA because, say, HostA sent an ARP request seeking the MAC address of 192.168.1.2 (which you know is HostB, but that’s not relevant at the moment), Leaf1 looks at that ARP request and, just like a normal switch, learns that MAC A is present on port 1/5.  But the leaf is a bit more clever than that: it looks INSIDE the payload of the ARP packet, learns that HostA also has an IP address of 192.168.1.1, and records all this information in its Local Station Table.

Leaf1#show endpoint interface ethernet 1/5

VLAN/Domain  Encap VLAN  MAC/IP Address  Interface
+-----------+----------+----------------+---------
65           vlan-2051  a036.9f86.e94e L eth1/5
Tenant1:VRF1 vlan-2051  192.168.1.1    L eth1/5

AND THEN reports this information to one of the spine switches (chosen at random) using the Council Of Oracles Protocol (COOP).  The spine switch (oracle) that was chosen then relays this information to all the other spines (oracles) so that every spine (oracle) has a complete record of every end point in the system.

The spines (oracles) record the information learned via the COOP in the Global Proxy Table, and this information is used to resolve unknown destination MAC/IP addresses when traffic is sent to the Proxy address.

Note that all of this happens without any involvement from BGP.

But to round off the COOP story, we would assume that at some stage Leaf2 (a citizen) will also learn HostB’s MAC and IP and inform one of the spines (oracles), chosen at random, of this information using COOP.

Spine1#show coop internal info repo ep | egrep -i "mac|real|-"
------------------------------------------
EP mac : A0:36:9F:86:E9:4E
MAC Tunnel : 10.0.1.101
Real IPv4 EP : 192.168.1.1
------------------------------------------
EP mac : A0:36:9F:61:88:FD
MAC Tunnel : 10.0.1.102
Real IPv4 EP : 192.168.1.2

So COOP is used solely for the purpose of distributing endpoint information to spine switches (oracles). As far as I know, spine switches never use COOP to distribute end host information to leaf switches.

So where does BGP fit in?

BGP is not needed until an external router is connected.  So now imagine that Leaf2 has had a router connected and has learned some routes from that external router for a particular VRF for a particular Tenant.

How can Leaf2 pass this information on to Leaf1, where HostA is trying to send packets to one of these external networks?  For Leaf2 to be able to pass routing information on to Leaf1 and keep that information exclusive to the same VRF, we need a routing protocol that is capable of exchanging routing information for multiple VRFs across an underlay network.

Which is exactly what MP-BGP was invented for – to carry routing information across MPLS underlay networks.  In the case of ACI, BGP is configured by choosing an Autonomous System number and nominating one of the spine switches to be a route reflector.  MP-BGP is self-configuring – you don’t need to do anything more to make it work!

(Although you will have to configure your Tenant to exchange routes with the external router.)
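
One hedged way to confirm the MP-BGP machinery from a leaf is to check its VPNv4 sessions – you should see an iBGP session to each spine nominated as a route reflector (the exact command form may vary between ACI versions):

Leaf1# show bgp vpnv4 unicast summary vrf overlay-1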

Leaf1# show ip route vrf Tenant1:VRF1

192.168.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive
*via 10.0.1.102%overlay-1, [1/0], 04:43:32, static, tag 4294967295
192.168.1.10/32, ubest/mbest: 1/0, attached, pervasive
*via 192.168.1.10, vlan25, [1/0], 03:52:23, local, local
1.0.0.0/8, ubest/mbest: 1/0
*via 10.0.1.102%overlay-1, [200/5], 00:11:41, bgp-1, internal, tag 1

RedNectar
aka Chris Welsh


Guest Post! WTF Are all those Checkboxes? (ACI L3 Outs) – Part 2 of ???

Found this great post explaining a lot of fine detail on ACI L3 outs – make sure you check out the original!

Come Route With Me!

My friend and colleague Mr. Jason Banker recently ran into some good times with the mysteries of the ACI L3 Out Checkbox Madness! He Slack’d me and told me he’d found some clown’s blog post about it (yours truly) and that some updates and additional information were needed, so he kindly volunteered some time to help out! Without further ado, here is Jason’s Checkbox Madness:


As we continue to deploy fabrics we always joke about these damn routing checkboxes shooting us in the foot.  We play with different scenarios in the lab to ensure we understand how these pesky boxes work and what other options we have for future deployments.  The scenario here was to get different OSPF areas connected to the same border leaf using ACI as the transit. This scenario brings up certain challenges and hopefully my testing will help others understand it a little better…



Non overlapping VTEP IP addresses in Cisco ACI

In a Cisco ACI deployment, Cisco recommends that “The TEP IP address pool should not overlap with existing IP address pools that may be in use by the servers (in particular, by virtualized servers).”

Let me tell you a reason much closer to reality why you might want to avoid overlapping your Cisco ACI TEP addresses with your locally configured addressing scheme.

When you first configure a Cisco ACI fabric, you need to configure a range of IP addresses that the ACI Fabric uses internally for VTEP addressing of the APICs, leaf and spine switches and other internally used addresses like anycast addresses for the spine proxy functions.

As I mentioned, Cisco recommends that “The TEP IP address pool should not overlap with existing IP address pools that may be in use by the servers (in particular, by virtualized servers).” I can only guess by the wording of this advice that Cisco sees that there may be some issue with the APICs being able to reach remote VTEPs on Cisco AVS virtual switches, but I see this as an outlier scenario.

The problem with VTEP IP address pools is the APICs.  You see, the APICs can’t handle:

  1. having a management IP address that overlaps with the VTEP address space, (it can’t figure out which interface to send management responses on) or
  2. being accessed from a workstation that is using an IP address that overlaps with the VTEP address space.

Since it is conceivable that any internal IP address may need to access the APIC for some reason sometime, I would recommend that you don’t overlap VTEP addresses with any currently used internal addresses.

Below is an example of the routing table from an APIC:


apic1# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.16.11.1     0.0.0.0         UG        0 0          0 oobmgmt
10.0.0.0        10.0.0.30       255.255.0.0     UG        0 0          0 bond0.3967
10.0.0.30       0.0.0.0         255.255.255.255 UH        0 0          0 bond0.3967
10.0.32.64      10.0.0.30       255.255.255.255 UGH       0 0          0 bond0.3967
10.0.32.65      10.0.0.30       255.255.255.255 UGH       0 0          0 bond0.3967
169.254.1.0     0.0.0.0         255.255.255.0   U         0 0          0 teplo-1
169.254.254.0   0.0.0.0         255.255.255.0   U         0 0          0 lxcbr0
172.16.11.0     0.0.0.0         255.255.255.0   U         0 0          0 oobmgmt
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
apic1#

In this case, the VTEP address range is 10.0.0.0/16, and the APIC sees all 10.0.x.x IP addresses as being reachable via the bond0.3967 interface, as shown by the
10.0.0.0 10.0.0.30 255.255.0.0 UG 0 0 0 bond0.3967
routing table entry on the APIC.

Recall I said that the APICs can’t handle:

  1. having a management IP address that overlaps with the VTEP address space, (it can’t figure out which interface to send management responses on) or
  2. being accessed from a workstation that is using an IP address that overlaps with the VTEP address space.

I’ll deal with case #2 first.

Now imagine for a minute I have a workstation with an IP address of, say, 10.0.11.11 that wishes to communicate with the OOB (Out of Band) management IP address of the APIC, which happens to be 172.16.11.111.  Now that remote workstation of 10.0.11.11 may well have a perfectly good route to 172.16.11.111, and may indeed be able to send packets to the APIC.

The problem of course arises when the APIC tries to send the reply packets to 10.0.11.11. As per the APIC’s routing table, the APIC would expect to reach 10.0.11.11 via its bond0.3967 interface, as shown by the
10.0.0.0 10.0.0.30 255.255.0.0 UG 0 0 0 bond0.3967
routing table entry on the APIC.
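
Since the APIC shell is Linux-based, you can confirm which interface would carry that reply with a standard Linux command (the address is the hypothetical workstation from above; verify the command on your APIC version):

apic1# ip route get 10.0.11.11

Expect the route to resolve via bond0.3967 – the infra/VTEP interface – rather than oobmgmt, which is exactly why the reply never makes it back to the workstation.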

Similarly, with case #1. This time, imagine I had used 10.0.11.0/24 as my OOB Management subnet.  Since that overlaps with my VTEP range (10.0.0.0/16), there is potential that IP addresses from my OOB subnet (10.0.11.0/24) could be allocated to VTEPs somewhere – and if that happened, my APIC would be unable to communicate with any other 10.0.11.0/24 address on the OOB subnet that clashes with a VTEP address.  In theory, the APIC would still be able to communicate with the VTEP addresses because it adds a /32 address to its routing table for every VTEP, but in my experience, when I saw a customer with this configuration, there was a problem communicating with the OOB subnet.

RedNectar

STOP PRESS
I’ve just been reading this discussion on the Cisco forum – it seems that the docker0 interface that was introduced in version 2.2 may also screw up the APIC’s view of the rest of the world in the same way.

References:

This is an expansion of a reply I gave on the Cisco Support forum (https://supportforums.cisco.com/discussion/13311571/overlapping-or-non-overlapping-vtep-pool)

More information on VTEP addressing in the Cisco Application Centric Infrastructure Best Practices Guide


Cisco ACI Naming Standards


The Naming of Cats is a difficult matter,
It isn’t just one of your holiday games;

When you notice a cat in profound meditation,
The reason, I tell you, is always the same:
His mind is engaged in a rapt contemplation
Of the thought, of the thought, of the thought of his name:
His ineffable effable
Effanineffable
Deep and inscrutable singular Name.

T.S. Eliot, The Naming of Cats

Have you become frustrated at the multiple names Cisco uses for the same object within the ACI GUI? Have you clicked on a link that promised to show a list of Interface Selector Profiles only to be shown a list of Leaf Interface Profiles instead? Have you ever wondered what a L3 Out object is, when there is no facility to create an object called L3 Out?
I managed to muddle my way around the GUI, discover that L3Outs were actually External Layer 3 Networks, and solve many other ambiguities by developing and adopting a consistent naming standard.

In a nutshell…

Consistent and structured naming of objects in Cisco’s ACI environment can help you greatly when learning how the different objects relate to each other.  This article explains the logic I use to name objects in Cisco ACI. In summary, the rules are:

Rule#1: Suffixes

If the object will ever be referred to by another object, make sure you name the object with a hyphen followed by a suffix that describes the item. For example:
Leaf101-IntProf describes the Interface Profile for Leaf switch 101,
WebServers-EPG describes an End Point Group.

Of course the problem when you first start out is that you don’t know what objects are going to be referred to in another drop-down list somewhere. That’s why you will want to use this guide.

Rule#2: Prefixes

If the object is an infrastructure object intended for use by a single tenant, prefix the object with a reference to that Tenant followed by a colon. For example, TenantX:StaticVLANs-VLAN.Pool describes a VLAN Pool intended for use by Tenant TenantX and Common:Telstra-ExtL3Dom describes an External Layer 3 Domain used by the common tenant. In a similar vein, infrastructure objects shared by multiple tenants should be prefixed with Shared:, such as Shared:WAN.Links-AEP which describes an Attachable Access Entity Profile (AEP) that multiple Tenants may share.

Rule#2 corollary:  Global infrastructure objects

If the object can be used by all tenants, omit the prefix.  Disable-CDP is the only CDP Interface Policy you’ll ever need to disable CDP – no need to create multiple duplicates.  Similarly, you’ll only ever need one Leaf Switch Profile for leaf 101, so call it Leaf101-LeafProf, but if you think it helps, Global:L101-LeafProf or Shared:L101-LeafProf would be acceptable.

Rule#3: Punctuation

I use TitleText style to concatenate words in names, but if an acronym is involved, I use a period as a separator to make VLAN.Pool more readable than VLANPool. I reserve the use of the hyphen character for use only as part of the descriptor suffix, but will use the colon character both as a separator for the prefix and as a replacement for a slash character when naming port numbers, such as TenantX:L101..102:1:35-VPCIPG which also shows my preference for using double periods to indicate a range.  Hopefully the above example clearly describes a VPC Interface Policy Group for TenantX on port 1/35 of both Leaf101 and Leaf102.

Legal names, characters and separators

There are some characters that you can’t use in names. There are sixty-six legal characters. They are all alphanumeric characters (upper and lowercase) and the four separator characters .:-_ (period, colon, hyphen and underscore).  In fact, you could indeed call an object ...:-_-:... if you wished. Numeric names are OK too, so a Leaf Switch Selector could indeed be called 101 or even 101..102. But keep in mind you can’t use the space character, and using my conventions, the hyphen character is used as the separator for objects requiring a suffix and the colon character is used as the separator for objects requiring a prefix.
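
If you want to sanity-check a proposed name against that character set, a one-line shell test will do (the bracket expression simply encodes the sixty-six legal characters listed above):

echo "Leaf101-IntProf" | grep -qE '^[A-Za-z0-9.:_-]+$' && echo legal || echo "contains illegal characters"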

With the ground rules laid, let me continue with some more specific detail.  I will approach this in three sections.

  • Firstly, I’ll discuss objects defined in the Tenant space, where you will discover exactly what a L3Out really is.
  • Next, I’ll look at the Access-Policy Chain which the infrastructure administrator will define under the Fabric > Access Policies and VM Networking menus in the Advanced configuration mode, and
  • Finally, I’ll fill you in on a bit of the background to this article and tidy up any loose ends.

Names for objects defined in Tenants

I guess there is no better start than the name of the Tenant itself.

Tenants > Add Tenant

The name of your tenant needs to be as short as possible. If the Tenant is a listed company, consider using the stock symbol – CSCO rather than Cisco Systems.  This is because (as explained above) you will often want to use the Tenant name in naming Access Policies. Another consideration (if you are hosting multiple Tenants) is the real estate on the Submenu for Tenants – which lists more names if the names are short! And similarly, in many drop-down menus, you will see the name of the Tenant included in the list. The shorter the better!
Here are my examples:

Recommended Tenant Name Purpose
common Pre-defined. You can’t change this.
CSCO If your Tenant has a stock symbol, use it
NNK Abbreviated form of Nectar Network Knowledge Pty Ltd
UQ.Admin University of Queensland Administration Tenant
UQ.Dev University of Queensland Development group Tenant

Tenants > Tenant TenantName > Networking > VRFs

Give VRFs a -VRF suffix, although you may prefer -Ctx for Context (VRFs are sometimes referred to as contexts, and before v1.2, VRFs were known as Private Networks).

Here are my examples:

Recommended Private Network Name Purpose
Dev-VRF VRF to separate the Development team
Production-VRF Main routing VRF
DMZ-VRF You can use VRFs to implement a DMZ type approach

Tenants > Tenant TenantName > Networking > Bridge Domains

Bridge Domains get a name describing the Bridge Domain and a -BD suffix. If the BD is being mapped to a VLAN, the existing VLAN name may be appropriate.

Here are my examples:

Recommended Bridge Domain Name Purpose
WebServer-BD Bridge Domain for the Web Servers server farm
NAS-BD Bridge Domain for the Network Attached Storage VLAN
DevTest-BD Bridge Domain for testing
VLAN100-BD Bridge Domain used to migrate VLAN 100. Use with care, because you may find that other VLANs also end up using this BD


Tenants > Tenant TenantName > Application Profiles

Application Profiles get a name describing the Application and a -AP suffix.

Here are my examples:

Recommended Application Profile Name Purpose
SAP-AP Application Profile for SAP
Webshop-AP Application Profile for your Webshop Application
OurAppDev-AP Application Profile for an application in development

Tenants > Tenant TenantName > Application Profiles > Application EPGs

End Point Groups get a name describing the type of servers that are represented in the group and a -EPG suffix.

Here are my examples:

Recommended EPG Name Purpose
SAP.Servers-EPG Application Servers for SAP
WebServers-EPG EPG for the Web servers server farm
SQL-EPG EPG for SQL DataBase servers

Tenants > Tenant TenantName > Security Policies > Filters

Filters can be used multiple times within a Tenant, and indeed filters in the common Tenant can be used by any Tenant, so there is an argument for having all filters defined in the common Tenant. But the most confusing aspect about filters is that a filter can define a single TCP port number, or could consist of many entries with multiple protocols and even ranges of port numbers. My suggestion is to keep filters to specific protocol/port numbers, or at the very most a collection of closely related port numbers.
Inside the filter, you will also need to name the filter entries.  My convention is to name the filter entries based on the protocol/port number, and to give the filter a -Fltr suffix.
Here are my examples:

Recommended Filter Name Purpose Recommended Filter Entry Name(s)
HTTP-Fltr Filter for HTTP traffic TCP80
HTTPS-Fltr Filter for HTTPS traffic TCP443
AD-Fltr Filter for Active Directory Protocols TCP1025..5000
TCP49152..65535
TCP389
UDP389
TCP636
… etc (See MS website)
ICMP-Fltr Filter for ICMP traffic ICMP

Tenants > Tenant TenantName > Security Policies > Contracts

Contracts define the collection of protocols that are required for an EPG to provide a service to another EPG.  Therefore, as well as having a -Ct suffix, I always include the word Services (or Svcs) in the name of the contract to indicate which EPG is the provider of the service.  Contracts also contain Subjects, and unless there is a reason to have more than one Subject in a Contract, I duplicate the contract name for the Subject name, except with a -Subj extension.

Here are my examples:

Recommended Contract Name Purpose Recommended Subject Name(s)
WebServices-Ct Contract to be provided by the WebServers-EPG WebServices-Subj
WebServices-Ct Contract to be provided by the WebServers-EPG, but with TCP443
traffic to be treated differently to TCP80 traffic
HTTP-Subj
HTTPS-Subj
AD.Svcs-Ct Contract for Active Directory Services AD.Svcs-Subj

Tenants > Tenant TenantName > Networking > External Bridged Networks

An External Bridged Network has colloquially become known as a L2 Out – a “Layer 2 Outside” network. Consequently, a suffix of -L2Out is a great abbreviation.  But there is a more important association that also has a significant bearing on the name. Each L2 Out is associated with a single VLAN ID.  So my advice is to name the L2 Out after the VLAN – either by ID or VLAN Name if appropriate.

Here are my examples:

Recommended External Bridged Network (L2 Out) Name Purpose
VLAN2000-L2Out L2 Out for VLAN 2000
NAS.VLAN-L2Out L2 Out for Network Attached Storage VLAN

Tenants > Tenant TenantName > Networking > External Bridged Networks > VLANx-L2Out > Networks

A L2 Out also needs a child object that can be used to link to Contracts.  This object is referred to in the GUI as a Network but I prefer the concept of referring to it as a L2 EPG, because the whole ACI policy philosophy is centred around the EPG-Contract association.  And since this L2 EPG is going to allow traffic to and from a particular external VLAN, it is appropriate to name the entity with a name mimicking its parent and a -L2EPG suffix.

Here are my examples:

Recommended (L2 Out)  Network Name Purpose
VLAN2000-L2EPG L2 EPG for VLAN2000-L2Out
NAS.VLAN-L2EPG L2 EPG for NAS.VLAN-L2Out
2020-L2EPG L2 EPG for 2020-L2Out


 Tenants > Tenant TenantName > Networking > External Routed Networks

Similar to the L2 Out idea, an External Routed Network is known as a L3 Out – and indeed even referred to as such under a Bridge Domain’s configuration. The essential use of the “Layer 3 Outside” network is to give a VRF the ability to:

  1. advertise public subnets on behalf of linked Bridge Domains using a particular protocol (OSPF/BGP/EIGRP), and
  2. process incoming routes for that protocol to be added to the VRF routing table.  In other words, it provides a routing service for a VRF for a particular protocol(s).

So it makes sense to name a L3 Out based on VRF and/or routing protocol and give it a -L3Out suffix.

Here are my examples:

Recommended External Routed (L3 Out) Network Name Alternative Form Purpose
DevVRF-L3Out Dev-L3.Out OSPF & BGP L3 Out for the Development VRF
ProductionVRF-EIGRP.L3Out Production-EIGRP.L3.Out EIGRP L3 Out for the Production VRF
ProductionVRF-BGP.L3Out Production-BGP.L3.Out BGP L3 Out for the Production VRF
DMZ.VRF-OSPF.L3Out DMZ-L3.Out L3 Out for DMZ VRF

 Tenants > Tenant TenantName > Networking > External Routed Networks > L3OutName-L3Out > Logical Node Profiles

When you create a Logical Node Profile for a L3Out you are defining which Leaf Switches are going to become external routers – PE routers in terms of how MP-BGP works in ACI.  The Node Profile name will not be seen outside the L3Out, so adding the suffix is not necessary, but you may feel more comfortable using it. One thing to remember when creating Logical Node Profiles for multiple Nodes within the same L3 Out is that it makes no difference whether you create one Node Profile per Leaf, or include all Nodes (Leaves) in a single Node Profile.  For me, I like to see a single Node Profile per Leaf. Since the Node Profile is going to define a Leaf switch, name the profile based on the Leaf name. Node Profiles aren’t referenced by other objects, so using a -NodeProf suffix is not so necessary here.

Here are my examples:

Recommended Node Profile Name Alternative Form Purpose
L101 L101-NodeProf Node Profile for Leaf101
103..104 103..104-NodeProf Node Profile for Leaves 103 and 104

Tenants > Tenant TenantName > Networking > External Routed Networks > L3OutName-L3Out > Logical Node Profiles > NodeProfileName > Logical Interface Profiles

When you create a Logical Interface Profile for a L3Out‘s Logical Node Profile, you are defining the actual interface that will be used to send and receive routing exchanges.  These profiles can define physical routed interfaces, logical sub-interfaces or logical switched virtual interfaces (SVIs).  My recommendation is to only ever include one such interface in each profile (the Node Profile can have multiple Interface Profiles if required), and follow slightly different naming rules depending on whether the Interface Profile is a routed interface, sub-interface or SVI. Similar to the Node Profiles within a L3 Out, the Interface Profile’s -IntProf suffix is not essential here.

Here are my examples:

Recommended Logical Interface Profile Name Alternative Form Purpose
eth101:1:1 101:1:1-IntProf Routed interface on eth1/1 on leaf 101
eth102:1:2.333 102:1:2.333-IntProf Routed sub-interface for VLAN 333 on eth1/2 on leaf 102
VLAN400 VLAN400-IntProf SVI on VLAN 400

Names for Access Policy model objects

Understanding the Access Policy model, or Access Policy Chain as I like to call it, is one of the hardest concepts to master in ACI. Access policies are configured under:

Fabric > Access Policies

Object Concept Examples
Interface Policies You will need a collection of well defined interface policies to define non-default policies for per-interface configuration options such as CDP, LLDP, BPDU Guard etc.   Once you have defined a particular Interface Policy once, it can be used universally for all tenants. Enable-CDP
Disable-CDP
Enable-BPDU.Filter
Enable-BPDU.Guard
Enable-BPDU.GuardFilter
Enable-BPDU.Flood

ActiveLACP-PC
PassiveLACP-PC
MAC.Pinning-PC

PerPort-VLAN.Scope
PerLeaf-VLAN.Scope

Leaf Profile Describes a Leaf switch (or collection of leaf switches). Name the profile based on the Switch ID(s) Leaf101-LeafProf
101-LeafProf
L101..102VPC-LeafProf
RedNectar’s Rule: Have one and only one Leaf Profile per leaf switch for all leaf switches

Permitted Exception: You may consider having a special VPC Leaf Profile per pair of VPC linked leaf switches to link to the upcoming VPC Interface Profile

Leaf Selector Child object of Leaf Profiles, defines a leaf switch Leaf101
101-LeafSel
Global:Leaf101
Interface Profiles Describes a set of switch ports linked to a Leaf Profile.
Match the name of the Interface Profile to its related Leaf Profile
Leaf101-IntProf
L101-IntProf
L101..102VPC-IntProf
RedNectar’s Rule1: Have one and only one Interface Profile per Leaf Profile, except for …
RedNectar’s Rule2: If you don’t have a corresponding Leaf Profile for each pair of VPC Leaves, create a special VPC Interface Profile per pair of VPC linked leaf switches, and have both leaves link to this VPC Interface Profile
Access Port Selectors Child object of Interface Profiles. Give the selector a name that reflects the port number it represents. 1:01 (defines port 1/1)
1:01-IntSel (defines port 1/1)
1:13..14-PC (defines port 1/13-14 used in a port channel)
RedNectar’s Rule: Have one Access Port Selector per port (very tedious), except when two ports on a leaf must have congruent configurations, such as when defining a Port Channel, so…
RedNectar’s Rule: Have one Access Port Selector per configured Port Channel
RedNectar’s Tip: When naming Access Port Selectors, use leading zeros in the port-numbering scheme as shown above.  That will keep your list of Access Port Selectors in order when sorted alphabetically.
Note: Interface Policy Groups have subtle but important differences depending on whether they are Access Port Policy Groups or [Virtual] Port Channel Interface Policy Groups; so I have treated each case separately.
Access Port Policy Groups  Describe a generalised collection of Interface Policies for single-attached devices. The more “generalised” the Group, the more re-usable it becomes. Name the APPG to describe the type of attached hosts and the Tenant using the attached host.  If the attached host is to be shared, indicate it in the name. TenantX:SingleAttachedHosts-APPG

Shared:AccessPorts-APPG

[V]PC Interface Policy Groups Describe a specific Port Channel or Virtual Port Channel Interface. There is no way of “generalising” a group of policies as per Access Port Policy Groups, but each [V]PC will need its own collection of Interface Policies defined. Since VPCs and PCs must be unique for a given pair/group of ports, name the [V]PC to describe the Leaf Ports to be assigned. [See Footnote] Leaf101..102:1:35-VPCIPG (defines a VPC on interface 1/35 on Leafs 101 and 102)
L103:1:4-5-PCIPG (defines a Port Channel on 1/4-5 of Leaf 103)
TenantX:FIA-VPCIPG (defines a VPC to Fabric Interconnect A for TenantX)
Attachable Access Entity Profiles (AEPs) Provides a joiner between the physical configuration of the Leaf ports and the encapsulation configuration. Think of it as a VLAN Profile. Or a VXLAN Profile.  Name the AEP to symbolise the collection of V[X]LANs along with the ports that will permit these V[X]LANs. TenantX:AllVLANs-AEP

Shared:ExternalAccess-AEP

Physical Domains Provide a place to define a single collection of VLANs (or VXLANs) to be used to map directly connected hosts to EPGs. Name the Physical Domain based on the name of the Tenant and the associated VLAN Pool. TenantX:StaticVLANs-PhysDom

Common:StaticVLANs-PhysDom

External Layer 2 Domains Provide a place to define a single collection of VLANs (or VXLANs) to be used to map VLANs or hosts to L2EPGs. Name the External Layer 2 Domain based on the name of the Tenant and the associated VLAN Pool. TenantX:StaticVLANs-ExtL2Dom

Common:StaticVLANs-ExtL2Dom

External Layer 3 Domains Provide a place to define a single collection of VLANs (or VXLANs) to be used to map external connections to L3 External Networks (L3 Outs). Name the External Layer 3 Domain based on the name of the Tenant and the associated VLAN Pool. TenantX:StaticVLANs-ExtL3Dom

Common:StaticVLANs-ExtL3Dom

Virtual Machine Management (VMM) Domains VMM Domains are multi-purpose. A VMM:
a) provides a place to define the identity and login credentials to a vCenter/SCVMM/KVM
b) provides a place to define a single collection of VLANs (or VXLANs) to be used to map PortGroups/VM Networks to EPGs.
c) will bestow its name on a Distributed Virtual Switch in the target vCenter/SCVMM/KVM. Name the VMM Domain based on the name of the Tenant, the type of VMM and the associated VLAN Pool.
TenantX:Apps.vCenter-VMM.Dom

Shared:SCVM-VMM.Dom

VLAN Pools Every Domain (Physical, L2/L3 External or VMM) needs an associated VLAN Pool. Giving each Tenant a collection of Static VLANs and another collection of Dynamic VLANs should be sufficient. Name the VLAN Pool based on the name of the Tenant and the associated Domain. TenantX:StaticVLANs-VLAN.Pool

TenantX:Apps.vCenter-VLAN.Pool

Footnote: A PC Interface Policy Group (PCIPG) must be unique per leaf – so it is possible to re-use PCIPGs, but… if you do, you’ll now have to have some way of remembering if a particular PCIPG has been used on a particular leaf or not, in which case you might still use names like 1:4-5-PCIPG omitting the leaf name and only using that PCIPG when deploying a PC on ports 4-5. Your choice.  Similarly, a VPC Interface Policy Group (VPCIPG) need only be unique per VPC pair of switches and if you choose this option I would again suggest using names like 1:35-VPCIPG and only use that VPCIPG when deploying a VPC on port 35 of the two switches.

The logic…

Throughout my Cisco ACI Tutorial, I followed a naming standard which I suggest you follow for your first install. I wanted to follow the convention that was cited in the Troubleshooting Cisco Application Centric Infrastructure book, but decided that the examples they gave were sometimes inconsistent, too detailed, and in some cases too verbose. But I stuck with the spirit of using a structure of [Purpose]-[ObjectType] that seemed to be the backbone of the convention, adding some extra punctuation rules, such as concatenating words into TitleTextStyle to make them readable, and adding a [TenantName]: to the convention when appropriate – so my convention is: [TenantName]:[Purpose]-[ObjectType]. Having the [ObjectType] as part of a name can help tremendously when learning the structure and when distinguishing between similar objects. Clearly Leaf101-IntProf is less likely to be confused with Leaf101-LeafProf thanks to the -[ObjectType] suffix.

RedNectar

Note: The Interface Profile object is referred to as an associated Interface Selector Profile or Interface Select Profile on the Switch Profile page. On the other hand, the Access Port Selector object is also referred to in various places as an Interface Selector or the Host Port Selector.

Don’t be confused. I was.


Script to create Linked Clones on ESXi

I had a problem.  The ESXi server I was supplied with had limited disk space, and I had to create 10 clone VMs from a master VM of around 40GB to run a class.  Creating multiple copies of the master would have more than exhausted the disk space I had.

So instead, I created a single snapshot of the master, then took 10 copies of the original .vmx file and the much smaller snapshot delta file, changed each of the 10 copied .vmx files so that the scsi0:0.fileName attribute pointed to the original master file (changed a couple of other attributes too), and edited the delta snapshot file so its path to its parent file also pointed to the original master file.

After creating my set of linked clones, the total additional space required for 10 linked clones was less than 11GB, yet each clone was a fully functioning copy of the original 40GB parent. Total space saved, approximately 390GB!

Now to be honest, I didn’t do all that by hand.  I used a script to do it for me, and here is where I stood on the shoulders of giants.  The process would have been impossible without the help of:

If you’d like to see how I did this, read on. I’ll cover the following:

But first, a disclaimer.

Disclaimer: I am human. I may have made mistakes and omitted safeguards in the scripts described in this article.  Make sure you operate ONLY on material that is securely backed up, and be warned that these scripts could inadvertently create or delete hundreds of VMs in one fell swoop.  Use with care.
You have been warned.

The Background Theory

You need to understand a bit about how VMware stores its VMs.  If you browse a datastore or navigate to where a VM is stored on an ESXi host in the command line interface (typically cd /vmfs/volumes/data or similar) you should see that a VM consists of several files:

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # ls -lh
total 42210320
-rw-------    1 root     root       40.0G Mar 18 08:37 GNS3WB88-master-flat.vmdk
-rw-------    1 root     root        8.5K Mar 18 08:37 GNS3WB88-master.nvram
-rw-------    1 root     root         502 Mar 18 08:37 GNS3WB88-master.vmdk
-rw-------    1 root     root           0 Mar 18 08:06 GNS3WB88-master.vmsd
-rw-------    1 root     root        3.3K Mar 18 08:06 GNS3WB88-master.vmx
-rw-------    1 root     root        3.3K Mar 18 08:06 GNS3WB88-master.vmxf
-rw-------    1 root     root        8.5K Mar 18 08:37 nvram

Note particularly the .vmdk and .vmx files.  The  *flat.vmdk file is your disk image and the .vmx file is the descriptor file that tells the hypervisor exactly what is what in relation to your VM, including the location of the virtual disk files that make up your VM, and the snapshot status of your VM.  Take a look at the .vmx file, especially the line that shows you where your disk file lives. The command cat *.vmx | grep vmdk should show you:

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # cat *.vmx | grep vmdk
scsi0:0.fileName = "GNS3WB88-master.vmdk"

And if you check the .vmdk file described in the scsi0:0.fileName = section, you will see the reference to the actual disk file image (the “flat” file):

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # cat *master.vmdk | grep vmdk
RW 83886080 VMFS "GNS3WB88-master-flat.vmdk"
Note: If you browse the files using the vSphere file browser, you will not see the separation of the two .vmdk files – the file browser hides the “flat” .vmdk file, and shows the descriptor file as the large file.

After you create a snapshot of your VM, the structure changes a little:

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # ls -lh
total 42210320
-rw-------    1 root     root      256.1M Mar 19 09:25 GNS3WB88-master-000001-delta.vmdk
-rw-------    1 root     root         333 Mar 19 08:55 GNS3WB88-master-000001.vmdk
-rw-------    1 root     root       31.2K Mar 18 09:32 GNS3WB88-master-Snapshot1.vmsn
-rw-------    1 root     root       40.0G Mar 18 18:38 GNS3WB88-master-flat.vmdk
-rw-------    1 root     root        8.5K Mar 18 06:49 GNS3WB88-master.nvram
-rw-------    1 root     root         525 Mar 18 19:15 GNS3WB88-master.vmdk
-rw-------    1 root     root         476 Mar 18 09:32 GNS3WB88-master.vmsd
-rw-------    1 root     root        3.3K Mar 19 09:25 GNS3WB88-master.vmx
-rw-------    1 root     root        3.3K Mar 18 06:52 GNS3WB88-master.vmxf
-rw-------    1 root     root        8.5K Mar 19 09:25 nvram
-rw-------    1 root     root      164.3K Mar 19 09:25 vmware.log

Note that there is now a *-000001.vmdk file and a *-000001-delta.vmdk file as well as a *-Snapshot1.vmsn file.  If you check the *.vmx file again, you will see:

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # cat *master.vmx | grep vmdk
scsi0:0.fileName = "GNS3WB88-master-000001.vmdk"

And if you take a look at that file, you will see the snapshot information:

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # cat GNS3WB88-master-000001.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=c9801963
parentCID=e0de4476
isNativeSnapshot="no"
createType="vmfsSparse"
parentFileNameHint="GNS3WB88-master.vmdk"
# Extent description
RW 83886080 VMFSSPARSE "GNS3WB88-master-000001-delta.vmdk"

# The Disk Data Base
#DDB

ddb.longContentID = "c7ddda7740d46041620b9dc5c9801963"

Armed with all this detail, you have enough information to create a linked clone.  All you need to do is copy the snapshot files and the descriptor file (leaving the main base .vmdk files) from the Golden Master image to another directory and edit the .vmx file to point to the parent’s base files!  The new clone will store any changes to the original disk image in its own copy of the *-000001-delta.vmdk and keep access to the original *-flat.vmdk image for any static information.
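
To make the mechanics concrete, here is a stripped-down sketch of the steps for a single clone. Paths and names are illustrative and there is no error checking – the real clone.sh in Appendix #1 is what I actually use (recent ESXi BusyBox builds support sed -i; verify on yours):

SRC="/vmfs/volumes/data/Golden Masters/GNS3WB88-master"
DST="/vmfs/volumes/data/GNS3WB88/GNS3WB88-#01"
mkdir -p "$DST"
# Copy only the small files (vmx, snapshot descriptor and delta) - NOT the 40GB flat file
cp "$SRC/GNS3WB88-master.vmx" "$SRC"/GNS3WB88-master-000001*.vmdk "$DST/"
# Re-point the snapshot descriptor's parent at the master's base disk
sed -i 's|parentFileNameHint="GNS3WB88-master.vmdk"|parentFileNameHint="'"$SRC"'/GNS3WB88-master.vmdk"|' "$DST/GNS3WB88-master-000001.vmdk"
# Register the clone so it appears in the vSphere inventory
vim-cmd solo/registervm "$DST/GNS3WB88-master.vmx"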

And creating those copies and manipulating the .vmx files is exactly what my script does.  Here’s how you use it.

The Process in Detail

In the first task you will ensure your directory structure is compatible with my script, then you will prepare your “Golden Master” image from which you will make the clones. In the third task you will create a script on your ESXi host (by copying and pasting mine). Naturally, the next task is to run the script, and finally you will check your results.

Task #1: Prepare your Directory Structure

Firstly, to run my script in its published format, you need to have the directory structure right.  My script expects that your ESXi host will have a directory where you keep the Golden Masters, and each Golden Master will live in a folder that ends with the characters -master.  After the script has run, it will create another directory where the clones will reside.  In other words, your structure should be something like this:

- /
| - Golden Masters
|   | + FirstVM-master
|   | + SecondVM-master
|   | + ThirdVM-master
|- AnotherVM
|- AndAnotherVM

After you have run the script for say the FirstVM, and SecondVM, the structure will change to:

- /
| - Golden Masters
|   | + FirstVM-master
|   | + SecondVM-master
|   | + ThirdVM-master
|- FirstVM
|   | + FirstVM-#01
|   | + FirstVM-#02
|   | + FirstVM-#03
|   | + FirstVM-#04
|- SecondVM
|   | + SecondVM-#01
|   | + SecondVM-#02
|   | + SecondVM-#03
|   | + SecondVM-#04
|- AnotherVM
|- AndAnotherVM

If necessary, use the Datastore Browser to organise your directory structure, or if you have a different structure, you could of course modify the script to match your structure.  To get to the Datastore Browser in ESXi, start with the vSphere Client.  In the vSphere Client, select the ESXi host, click the Configuration tab, click Storage in the Hardware section, then right-click on your storage device and select Browse Datastore

In the Datastore Browser, you will find all the tools you need to create folders and move VMs – just be aware that after you have moved a VM, it will have to be added to the Inventory again (which is why you get a warning when you move a VM).

Assuming you now have the VM from which you wish to create your “Golden Master” in a sub-directory off the main data storage, and have registered that VM in the vSphere Client, you are ready to prepare your “Golden Master”.

Task #2: Prepare your “Golden Master”

In the vSphere client, locate the VM that you need to create linked clones for.  This will be your “Golden Master” VM.

Remember, to run my script in its published format, your “Golden Master” MUST live in a folder that ends with the characters -master – and if you have recently moved the VM, it will need to be re-registered in vSphere.

So if not already in the correct format, rename (Right Click on the VM: Rename) your VM so that it ends with -master

Next, make sure the VM is powered down, then make sure that this VM has no snapshots already – (Right Click on the VM: Snapshot | Snapshot Manager…)

If there are snapshots, delete all of them to consolidate to the version you want to be the “Golden Master”.  You want this VM to be as clean as you can get it.

If you browse the datastore where the file is located (Select VM; click Summary Tab; Resources section; select storage volume; right-click: Browse Datastore… then navigate to your VM’s folder) you should see something similar to this:

Note that the big file is the .vmdk file, and there are no snapshot files.

Next, take a snapshot of the VM. (Right-click: Snapshot | Take Snapshot…). I named mine CloneBase, then clicked OK.

And if you browse the datastore again, you should see something like this:

Note the snapshot file has now been created and a small additional -000001.vmdk file has been created.  This .vmdk file will be the log journal that records the changes in the snapshot, leaving the original .vmdk file intact and read-only.

The next challenge is to create a script file to turn your snapshot into a set of linked clones.

Task #3: Create the clone.sh script

Note: This task requires you to ssh to your ESXi host.  If ssh is NOT enabled on your ESXi host, this step will fail.  This kb article explains how to enable ssh if necessary.

Firstly, select all the code in the Appendix #1 below, and copy it to your PC’s copy buffer.

Next, start a ssh session to your ESXi host from a PC that supports copy and paste, then navigate to the parent directory of your “Golden Master” folder – the folder that ends in -master.  This is where the script expects to run from, and will create clones that are linked back to the .vmdk file in your -master folder.

~ # cd /vmfs/volumes/data/Golden\ Masters/
/vmfs/volumes/57<snip>ed/Golden Masters # ls -lh
drwxr-xr-x    1 root     root        1.6K Mar 18 19:15 GNS3WB88-master
drwxr-xr-x    1 root     root        1.1K May 25  2016 TCPIP Linux Server-master

Open vi using the command vi clone.sh

Tip:


In vi, start by entering the command set noautoindent so that your paste in the next step looks much nicer.  Do this by pressing the following key sequence, including the colon:

:set noautoindent

Press i to enter insert mode in vi, then paste the code from Appendix #1 below.

In vi, press <Esc>:wq to write your file and quit.

Make your script executable with the command chmod +x clone.sh

/vmfs/volumes/57<snip>ed/Golden Masters # chmod +x clone.sh

Check that the file is executable by issuing an ls -lh command and looking for the x attribute:

/vmfs/volumes/57<snip>ed/Golden Masters # ls -lh clone.sh
-rwxr-xr-x 1 root root 4.0K Mar 17 04:19 clone.sh

Note that the clone.sh file is listed as executable. You are now ready to run the script.

Task #4: Run the clone.sh script

At last you are ready to run your script.  If you run the script with no parameters, it will print a list of the expected parameters:

/vmfs/volumes/57<snip>ed/Golden Masters # ./clone.sh

clone.sh version 1.2
USAGE: ./clone.sh base_image_folder-master no_of_copies [starting_number] [vnc_base_port]
base_image_folder-master MUST end with the string '-master'
Clones will have names of base_image_folder-#01, base_image_folder-#02 etc
If starting_number is specified, clones will start numbering from that number
Maximum cloneID=9999; maximum number of clones=99
If vnc_base_port is given, first clone will use vnc port vnc_base_port+1, default 5900

So to make twelve linked clones of, say, the GNS3WB88-master image, enter the command:

./clone.sh GNS3WB88-master 12 

You can check through the output for errors.  At this stage my error checking is minimal, but I’ve put plenty of progress statements in the script to help you work out where a problem lies should one arise.  There is sample output from running the command above in Appendix #3, but the vSphere client is where you will want to check your results first.

Task #5: Check results

Once the script has run, you should be able to see your results in vSphere.  Note that a resource pool has been created to hold your set of linked clones, and the clones are numbered sequentially.  You are ready to start powering on the clones; by the way, there is a line in the script that you can “uncomment” to automatically power on each clone as it is built.
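Rather than clicking through the GUI, you could also power the whole set on from the ESXi shell; a sketch (the grep pattern matches the example clone names, so adjust it for yours):

~ # for id in $(vim-cmd vmsvc/getallvms | grep 'GNS3WB88-#' | awk '{print $1}'); do vim-cmd vmsvc/power.on $id; done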

That’s it: your clones are ready.  But there is a little more you need to be careful of, especially if these clones have a limited life and you want to replace them later, so make sure you read the following section on Maintenance.

Maintenance and a Warning

Firstly, the warning.  You must understand that you have created linked clones, and each clone depends on the disk image that lives in the Golden Master’s folder.  So:

WARNING: Don’t ever delete (as in Delete from Disk) a clone from the vSphere client.  If you do, it will delete your master .vmdk, and none of the clones, past or future, nor even your Golden Master will ever work again; you’ll be restoring from backup.  You have been warned.  (Don’t ask how I found out, but I’m glad I waited for the backup to complete.)

The corollary of the warning is that should you ever wish to remove a clone, use the Remove from Inventory option rather than Delete from Disk.
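The shell equivalent of Remove from Inventory is vim-cmd vmsvc/unregister.  A sketch, with a made-up inventory ID of 57:

~ # vim-cmd vmsvc/getallvms | grep 'GNS3WB88-#11'   # find the leftover clone's ID (57 is assumed here)
~ # vim-cmd vmsvc/unregister 57                     # remove it from inventory (its files stay on disk)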

And now the boring maintenance tips…

When you run the script for the first time, it creates the structure needed to hold the clones.  When you run it a second or subsequent time, it will unceremoniously delete any previous linked clone with the same number.  This is ideal if, say, you are running classes and need a fresh set of clones each time, but there are a couple of things to note.

Firstly, if you create, say, 12 clones on the first run and only 10 on the second run, clones #11 and #12 from the first run will still exist.  If you don’t want them hanging around, use the Remove from Inventory option rather than Delete from Disk in vSphere, as explained above.

Similarly, if on the first run you created clones numbered 20-29 (using the command ./clone.sh my-master 10 20) and next time you create clones 01-10 from the same master, you will end up with a resource pool containing clones 01-10 and 20-29. So be careful.

Deleting clones you have created can be a pain, especially if you created many more than you needed, so I have included a copy of another script I wrote to remove clones.  Use it with caution, but if you need it, you’ll find it in Appendix #2.
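For example, to clean up the leftover clones #11 and #12 from the scenario above, you would pass a count of 2 and a starting number of 11 (a sketch using the script’s documented parameters and the example folder name):

./removeClone.sh GNS3WB88-master 2 11

Just be aware that the script also tries to destroy the resource pool and the parent clone folder when it finishes, so it is really designed to remove a complete set of clones rather than a couple of stragglers.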

Enjoy your cloning!

RedNectar

Appendix #1: The clone.sh script

#!/bin/sh
# Adapted From: https://github.com/oliverbock/esxi-linked-clone/blob/master/clone.sh
# v1.2 2017-03-25 Chris Welsh

version=1.2

readonly noOfArgs=$#
#Remove trailing / of path if it has one
readonly inFolder=${1%/}
if [ "$3" = "" ] ; then
  startingNo="01"
  noOfCopies=$2
  lastCopyNo=$2
else
  startingNo=$3
  noOfCopies=$2
  lastCopyNo=$(( $2 + $3 - 1 ))
fi

if [ "$4" = "" ] ; then
  VNCstartPort=5900
else
  VNCstartPort=$4
fi

usage() {
  echo ""
  echo "clone.sh version $version"
  echo "USAGE: ./clone.sh base_image_folder-master no_of_copies [starting_number] [vnc_base_port]"
  echo "base_image_folder-master MUST end with the string '-master'"
  echo "Clones will have names of base_image_folder-#01, base_image_folder-#02 etc"
  echo "If starting_number is specified, clones will start numbering from that number"
  echo "Maximum cloneID=9999; maximum number of clones=99"
  echo "If vnc_base_port is given, first clone will use vnc port vnc_base_port+1, default 5900"
  echo ""
  }

makeandcopy() {
  if [ ! -d "${outFolder}" ] ; then
    echo "Creating ${outFolder}"
    mkdir "${outFolder}"
  else
    echo "Removing contents of old ${outFolder}"
    ls -lh "${outFolder}"/*
    rm "${outFolder}"/*
  fi
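  # Copy only the snapshot delta files (*-000001*) plus the .vmx into the clone's
  # folder; the delta keeps pointing at the untouched base .vmdk in the -master folder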
  cp "${inFolder}"/*-000001* "${outFolder}/"
  cp "${inFolder}"/*.vmx "${outFolder}/${thisClone}.vmx"

}

main() {

  if [  ${noOfArgs} -eq 0 ] ; then
    usage
    exit 1
  fi

  if [  ${noOfArgs} -eq 1 ] ; then
    echo ""
    echo "ERROR--Insufficient arguments"
    usage
    exit 1
  fi

  if [ ${noOfCopies} -ge 100 ] ; then
    # Clone copy count arbitrarily set to 99 - I don't want anyone to accidentally create hundreds of clones
    echo ""
    echo "ERROR--Clone copy count exceeds 99"
    usage
    exit 1
  fi

  if [ ${lastCopyNo} -ge 10000 ] ; then
    # Maximum clone copy number arbitrarily set to 9999 - could actually be set as high as 59635 before VNC TCP port numbers exceed 65535
    echo ""
    echo "ERROR--Clone sequence exceeds 9999 (last copy would be ${lastCopyNo})"
    usage
    exit 1
  fi

  echo "${inFolder}" | grep -q "\-master$"
  if [[ $? -ne 0 ]] ; then
    # Input filename in wrong format
    echo ""
    echo "ERROR--input folder MUST end with -master. You entered"
    echo "${inFolder}"
    usage
    exit 1
  fi

  echo "============== Beginning Job =============="
  local fullBasePath=$(readlink -f "${inFolder}")/
  local escapedPath=$(echo "${fullBasePath}" | sed -e 's/[\/&]/\\&/g' | sed 's/ /\\ /g')

  outFolderBase=../${inFolder/-master/\/}
  echo "Output folder BasePath is ${outFolderBase}"
  if [ ! -d "${outFolderBase}" ] ; then
    echo "Creating ${outFolderBase}"
    mkdir "${outFolderBase}"
  fi

  resourcePool=${inFolder/-master/}
  # Thanks to Alessandro Pilotti for putting this on github
  # https://github.com/cloudbase/unattended-setup-scripts/blob/master/esxi/create-esxi-resource-pool.sh
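  # Extract the objID of any existing resource pool with this name by matching the
  # adjacent <name>/<objID> element pair in hostd's pools.xml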

  thisPoolID=`sed -rn 'N; s/\ +<name>'"${resourcePool}"'<\/name>\n\ +<objID>(.+)<\/objID>/\1/p' /etc/vmware/hostd/pools.xml`
  if [ -z "${thisPoolID}" ]; then
    echo "Creating resource pool :${resourcePool}:"
    thisPoolID=`vim-cmd hostsvc/rsrc/create --cpu-min-expandable=true --cpu-shares=normal --mem-min-expandable=true --mem-shares=normal ha-root-pool "${resourcePool}" | sed -rn "s/'vim.ResourcePool:(.+)'/\1/p"`
  fi

#-------------------- Main Loop begins here ---------------#

  for i in $(seq -w ${startingNo} ${lastCopyNo}) ; do
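    # seq -w zero-pads the counter to a fixed width, so clones are numbered 01, 02, ...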

    thisClone=${inFolder/master/#${i}}
    outFolder=${outFolderBase}${inFolder/master/#${i}}
    VNCport=`expr $VNCstartPort + $i`

    echo "=============================================================================="
    echo "Cloning Clone#${i} named ${thisClone} using VNCport=${VNCport} to ${outFolder}"

    makeandcopy

    cd "${outFolder}"/
    echo "================ Processing .vmx file ================"
    echo "Delete Swap File line, will be auto recreated"
    sed -i '/sched.swap.derivedName/d' ./*.vmx
    echo "Change Display Name to ${thisClone}"
    sed -i -e '/^displayName =/ s/= .*"/= "'"${thisClone}"'\"/' ./*.vmx
    echo "Change VNC Port Value to ${VNCport}"
    sed -i -e '/RemoteDisplay.vnc.port =/ s/= .*"/= "'"${VNCport}"'\"/' ./*.vmx
    echo "Change Parent Disk Path"
    sed -i -e '/parentFileNameHint=/ s/="/="'"${escapedPath}"'/' ./*-000001.vmdk

    # Forces generation of new MAC + DHCP
    echo "Forcing change of MAC addresses for up to two NICs"
    sed -i '/ethernet0.generatedAddress/d' ./*.vmx
    sed -i '/ethernet0.addressType/d' ./*.vmx
    sed -i '/ethernet1.generatedAddress/d' ./*.vmx
    sed -i '/ethernet1.addressType/d' ./*.vmx

    # Forces creation of a fresh UUID for the VM.
    echo "Forcing creation of a fresh UUID for the VM."
    sed -i '/uuid.location/d' ./*.vmx
    sed -i '/uuid.bios/d' ./*.vmx
    echo "============== Done processing .vmx file =============="

    # Register the machine so that it appears in vSphere.
    fullPath=`pwd`/${thisClone}.vmx
    #echo "fullPath:"$fullPath"==="
    #echo "fullBasePath:"$fullBasePath"==="
    #echo "{escapedPath}:"${escapedPath}"==="
    local escapedfullpath=$(echo "${fullPath}" | sed -e 's/[\/&]/\\&/g' | sed 's/ /\\ /g')
    #echo "escapedfullpath:"$escapedfullpath"==="
    vmID=`/bin/vim-cmd vmsvc/getallvms | egrep "${thisClone}" | awk '{print $1}'`
    if [ ! -z "${vmID}" ] ; then  #We found the VM was registered, so unregister it first
      echo "VM ${thisClone} already registered, checking which pool"
      echo "Too damned hard to determine which pool, assume if registered, it it the correct pool."
      #if it is not the correct pool; then
          # vim-cmd vmsvc/unregister "${vmID}"
          #echo "thisPoolID:${thisPoolID}==="
      #fi
    else
      echo "Registering ${fullPath} as ${thisClone} in resource pool ${resourcePool}" with ID ${thisPoolID}
      vmID=`vim-cmd solo/registervm "${fullPath}" "${thisClone}" "${thisPoolID}"`
    fi

    # Power on the machine if required - uncomment the following
    #vim-cmd vmsvc/power.on ${vmID}

    # Return to base directory to do next clone
    cd - &> /dev/null
 done
 echo "============== Job Completed =============="
}

main

Appendix #2: A removeClone.sh script, just in case…

Be VERY careful using this! Like the clone.sh script, it needs to be run from the parent directory, with the -master directory name as the parameter. I could have tidied it up, but I needed it quickly, so I simply modified what I already had. Useful if you accidentally create a hundred clones and want to remove them just as quickly.

#!/bin/sh
# V1.0 2017-03-18 Chris Welsh

readonly noOfArgs=$#
#Remove trailing / of path if it has one
readonly inFolder=${1%/}
if [ "$3" = "" ] ; then
  startingNo="01"
  noOfCopies=$2
  lastCopyNo=$2
else
  startingNo=$3
  noOfCopies=$2
  lastCopyNo=$(( $2 + $3 - 1 ))
fi

usage() {
  echo ""
  echo "USAGE: ./removeClone.sh base_image_folder-master no_of_copies [starting_number]"
  echo "base_image_folder-master MUST end with the string '-master'"
  echo "Clones are assumed to have names of base_image_folder-#01, base_image_folder-#02 etc"
  echo "If starting_number is specified, clones will start numbering from that number"
  echo "Maximum cloneID=9999; maximum number of clones=99"
  }

deleteAndDestroy() {
  # Note: test the real (unescaped) path, and skip only when the folder is missing
  if [ ! -d "${outFolder}" ] ; then
    echo "${outFolder} doesn't exist - skipping"
  else
    echo "Removing contents of old ${outFolder}"
    ls -lh "${outFolder}"/*
    rm "${outFolder}"/*
    echo "Removing directory '${outFolder}'"
    rmdir "${outFolder}"
  fi
}

main() {
  if [  ${noOfArgs} -le 1 ] ; then
    echo "ERROR--Insufficient arguments"
    usage
    exit 1
  fi

  if [ ${noOfCopies} -ge 100 ] ; then
    # Clone copy count arbitrarily set to 99 - I don't want anyone to accidentally create hundreds of clones
    echo "ERROR--Clone copy count exceeds 99"
    usage
    exit 1
  fi

  if [ ${lastCopyNo} -ge 10000 ] ; then
    # Maximum clone copy number arbitrarily set to 9999 - could actually be set as high as 59635 before VNC TCP port numbers exceed 65535
    echo "ERROR--Clone sequence exceeds 9999 (last copy would be ${lastCopyNo})"
    usage
    exit 1
  fi

  echo "${inFolder}" | grep -q "\-master$"
  if [[ $? -ne 0 ]] ; then
    # Input filename in wrong format
    echo "ERROR--input folder MUST end with -master. You entered"
    echo "${inFolder}"
    usage
    exit 1
  fi

  echo "============== Beginning Job =============="
  local fullBasePath=$(readlink -f "${inFolder}")/
  local escapedPath=$(echo "${fullBasePath}" | sed -e 's/[\/&]/\\&/g' | sed 's/ /\\ /g')

  outFolderBase=../${inFolder/-master/\/}
  echo "Clone folder BasePath is ${outFolderBase}"
  resourcePool=${inFolder/-master/}
  thisPoolID=`sed -rn 'N; s/\ +<name>'"${resourcePool}"'<\/name>\n\ +<objID>(.+)<\/objID>/\1/p' /etc/vmware/hostd/pools.xml`

#------------------- Main Loop begins here ---------------#

  for i in $(seq -w ${startingNo} ${lastCopyNo}) ; do

    thisClone=${inFolder/master/#${i}}
    outFolder=${outFolderBase}${inFolder/master/#${i}}

    echo "=============================================================================="
    echo "Removing Clone#${i} named ${thisClone} from ${outFolder}"
    escapedClone=$(echo "${thisClone}" | sed -e 's/[\/&]/\\&/g' | sed 's/ /\\ /g')
    # A stricter match on the exact VM name would be:
    # vmID=`/bin/vim-cmd vmsvc/getallvms | awk -vvmname="${thisClone}" '{if ($2 == vmname) print $1}'`
    vmID=`/bin/vim-cmd vmsvc/getallvms | egrep "${thisClone}" | awk '{print $1}'`

    if [ ! -z "${vmID}" ] ; then  #We found the VM was registered, so unregister it
      echo "Powering down and unregistering vm with ID $vmID"
       # Power off the machine if required
       vim-cmd vmsvc/power.off ${vmID}
       vim-cmd vmsvc/unregister "${vmID}"
    else
      echo "No vmID found for $thisClone"
    fi

    deleteAndDestroy #Remove files and directory

  done

#------------------- Main Loop ends here ---------------#

  echo "Resource pool is ${resourcePool} with ID :${thisPoolID}:"
  if [ ! -z "${thisPoolID}" ]; then
    echo "Removing resource pool ${resourcePool}"
    vim-cmd hostsvc/rsrc/destroy "${thisPoolID}"
  else
    echo "There is no resource pool called ${resourcePool}"
  fi

  echo  "Clones removed, attempting to remove parent folder (will fail if you didn't delete all clones)"
  rmdir "${outFolderBase}"

  echo "============== Job Completed =============="
}

main

Appendix #3: Sample output from running the clone.sh script

/vmfs/volumes/573201c2-529afdbe-5824-6805ca1ca2ed/Master - Copy # ./clone.sh GNS3WB88-master 12
============== Beginning Job ==============
Output folder BasePath is ../GNS3WB88/
Creating ../GNS3WB88/
Creating resource pool :GNS3WB88:
==============================================================================
Cloning Clone#01 named GNS3WB88-#01 using VNCport=5901 to ../GNS3WB88/GNS3WB88-#01
Creating ../GNS3WB88/GNS3WB88-#01
================ Processing .vmx file ================
Delete Swap File line, will be auto recreated
Change Display Name to GNS3WB88-#01
Change VNC Port Value to 5901
Change Parent Disk Path
Forcing change of MAC addresses for up to two NICs
Forcing creation of a fresh UUID for the VM.
============== Done processing .vmx file ==============
Registering /vmfs/volumes/data/GNS3WB88/GNS3WB88-#01/GNS3WB88-#01.vmx as GNS3WB88-#01 in resource pool GNS3WB88 with ID pool0
==============================================================================

<...output omitted for next 10 clones ...>

==============================================================================
Cloning Clone#12 named GNS3WB88-#12 using VNCport=5912 to ../GNS3WB88/GNS3WB88-#12
Creating ../GNS3WB88/GNS3WB88-#12
================ Processing .vmx file ================
Delete Swap File line, will be auto recreated
Change Display Name to GNS3WB88-#12
Change VNC Port Value to 5912
Change Parent Disk Path
Forcing change of MAC addresses for up to two NICs
Forcing creation of a fresh UUID for the VM.
============== Done processing .vmx file ==============
Registering /vmfs/volumes/data/GNS3WB88/GNS3WB88-#12/GNS3WB88-#12.vmx as GNS3WB88-#12 in resource pool GNS3WB88 with ID pool0
============== Job Completed ==============

A funny thing happened in the ACI lab today…

I had a Tenant with statically configured bare-metal hosts attached to interface 1/16 on both leaf 101 and leaf 102, but the EPG being configured for leaf 102 came up with an “invalid-path;invalid-vlan” error on the faults page. The host attached to leaf 101 was working, had no errors, and was configured in exactly the same way!

I checked:

• In the tenant, the EPG had been linked to the correct Physical Domain
• In the tenant, the EPG had been linked to the correct leaf/port/VLAN

In the Fabric Policies:

• The Leaf Profile defined the correct leaf, and was linked to the correct Interface Profile
• The Interface Profile had an Access Port Selector for the correct port (1/16), and that Access Port Selector was linked to an Access Port Policy Group
• The Access Port Policy Group was linked to the correct Attachable Access Entity Profile
• The Attachable Access Entity Profile was linked to the same Physical Domain as the EPG showing the error
• The Physical Domain was linked to a VLAN Pool that included the VLAN ID being used in the EPG for the static mapping.

So I was stumped.  I had never seen an “invalid-path;invalid-vlan” error before that couldn’t be solved by checking the above, so in desperation I checked things from the CLI:

apic1# show run leaf 102 interface ethernet 1/16
# Command: show running-config leaf 102 interface ethernet 1/16
# Time: Wed Mar  1 04:17:41 2017
  leaf 102
    interface ethernet 1/16
      # Policy-group configured from leaf-profile ['T5:L102-LeafProf'], leaf-interface-profile T5:L102-IntProf
      # policy-group T5:1G.CDP.LLDP-APPG
      lldp receive
      lldp transmit
      cdp enable
      vlan-domain member T5:MappedVLANs-PhysDom type phys
      switchport access vlan 2050 tenant Tenant5 application 2Tier-AP epg WebServers-EPG
      speed 1G
      negotiate auto
      link debounce time 100
      exit
    exit

“That looks a bit strange”, I thought.  “I don’t normally see the lldp and cdp policies etc.”  But there was nothing in the config that was actually wrong; nonetheless, I thought I’d compare it with the same port on the other leaf.

apic1# show run leaf 101 interface ethernet 1/16
# Command: show running-config leaf 101 interface ethernet 1/16
# Time: Wed Mar  1 04:18:02 2017
  leaf 101
    interface ethernet 1/16
      # Policy-group configured from leaf-profile ['T5:L101-LeafProf'], leaf-interface-profile T5:L101-IntProf
      # policy-group T5:1G.CDP.LLDP-APPG
      switchport access vlan 2050 tenant Tenant5 application 2Tier-AP epg AppServer-EPG
      exit
    exit

Now this looks much more like what I expect. And at this stage, this was the only indication that the configuration on 102/1/16 was not quite “normal”. So what I tried next was to see if I could remove the “extra” lines of config on leaf 102. Since there is no default interface command in ACI NX-OS, I tried manually removing the cdp, lldp etc. config:

apic1(config)# leaf 102
apic1(config-leaf)# default ?
apic1(config-leaf)# default inter
Command entered is not APIC NX-OS style CLI.Trying shell command…

apic1(config-leaf)# interface ethernet 1/16
apic1(config-leaf-if)# shut
apic1(config-leaf-if)# no lldp receive
apic1(config-leaf-if)# no lldp transmit
apic1(config-leaf-if)# no cdp  enable
apic1(config-leaf-if)# no vlan-domain member T5:MappedVLANs-PhysDom type phys
apic1(config-leaf-if)# no speed 1G
apic1(config-leaf-if)# no negotiate auto
apic1(config-leaf-if)# no link debounce time 100
apic1(config-leaf-if)# no shutdown

Better see if that worked!

apic1(config-leaf-if)# show run leaf 102 interface ethernet 1/16
# Command: show running-config leaf 102 interface ethernet 1/16
# Time: Wed Mar  1 04:28:16 2017
  leaf 102
    interface ethernet 1/16
      no lldp receive
      no lldp transmit
      no cdp enable
      speed auto
      no negotiate auto
      link debounce time 100
      exit
    exit

Clearly that didn’t work as intended. And by now I’d also removed the interface selector for interface 1/16 from the interface profile for Leaf 102, so there should have been no association with any lldp, cdp etc. policies at all. Except for one thing: I’d forgotten that when you do anything in the CLI, it automatically starts creating pesky objects with names beginning with __ui, and I could see these in the GUI. But I knew how to get rid of those, thanks to this post.

Note: Unless Daniel has updated his blog, you will see that one command I used differs from the one in the link above: Daniel’s blog says to use a moconfig delete command, when in fact it should be moconfig commit.

And that’s what I did!

apic1# for i in `find *__ui*`
for> do
for> echo "removing $i"
for> modelete $i
for> done
removing attentp-__ui_l102_eth1--16
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing attentp-__ui_l102_eth1--16/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
...<snip>....

apic1# moconfig commit
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Committing mo 'uni/infra/lldpIfP-__ui_l102_eth1--16'
Committing mo 'uni/infra/hintfpol-__ui_l102_eth1--16'
Committing mo 'uni/infra/cdpIfP-__ui_l102_eth1--16'
Committing mo 'uni/infra/attentp-__ui_l102_eth1--16'

All mos committed successfully.

Re-checking the CLI config showed:

apic1(config-leaf-if)# show run leaf 102 interface ethernet 1/16
# Command: show running-config leaf 102 interface ethernet 1/16
# Time: Wed Mar 1 04:42:46 2017
  leaf 102
    interface ethernet 1/16
      exit
    exit

Brilliant! It certainly looks “correct”, although I have no idea why this config should be any more “correct” than what I saw earlier, except…
… when I re-created the Interface Selector for access port 1/16 on leaf 102 and reassigned the same interface to the EPG in the tenant config (in other words, restored the previous config), all errors disappeared and the config worked!

Now it could be that running the script to remove the pesky __ui items actually removed some other junk that was causing the problem, but whatever triggered the error in the first place remains a mystery!

One of the strangest mysteries I have encountered in the ACI lab so far.

RedNectar
