Guest Post! WTF Are all those Checkboxes? (ACI L3 Outs) – Part 2 of ???

Found this great post explaining a lot of fine detail on ACI L3 outs – make sure you check out the original!

Come Route With Me!

My friend and colleague Mr. Jason Banker recently ran into some good times with the mysteries of the ACI L3 Out Checkbox Madness! He Slack’d me and told me he’d found some clown’s blog post about it (yours truly) and that some updates and additional information were needed, so he kindly volunteered some time to help out! Without further ado here is Jason’s Checkbox Madness:


As we continue to deploy fabrics we always joke about these damn routing checkboxes shooting us in the foot. We play with different scenarios in the lab to ensure we understand how these pesky boxes work and what other options we have for future deployments. The scenario here was to get different OSPF areas connected to the same border leaf using ACI as the transit. This scenario brings up certain challenges, and hopefully my testing will help others understand it a little better…


Non overlapping VTEP IP addresses in Cisco ACI

In a Cisco ACI deployment, Cisco recommends that “The TEP IP address pool should not overlap with existing IP address pools that may be in use by the servers (in particular, by virtualized servers).”

Let me tell you a reason much closer to reality why you might want to avoid overlapping your Cisco ACI TEP addresses with your locally configured addressing scheme.

When you first configure a Cisco ACI fabric, you need to configure a range of IP addresses that the ACI Fabric uses internally for VTEP addressing of the APICs, leaf and spine switches and other internally used addresses like anycast addresses for the spine proxy functions.

As I mentioned, Cisco recommends that “The TEP IP address pool should not overlap with existing IP address pools that may be in use by the servers (in particular, by virtualized servers).” I can only guess by the wording of this advice that Cisco sees that there may be some issue with the APICs being able to reach remote VTEPs on Cisco AVS virtual switches, but I see this as an outlier scenario.

The problem with VTEP IP address pools is the APICs.  You see, the APICs can’t handle:

  1. having a management IP address that overlaps with the VTEP address space, (it can’t figure out which interface to send management responses on) or
  2. being accessed from a workstation that is using an IP address that overlaps with the VTEP address space.

Since it is conceivable that any internal IP address may need to access the APIC for some reason sometime, I would recommend that you don’t overlap VTEP addresses with any currently used internal addresses.

Below is an example of the routing table from an APIC:


apic1# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.16.11.1     0.0.0.0         UG        0 0          0 oobmgmt
10.0.0.0        10.0.0.30       255.255.0.0     UG        0 0          0 bond0.3967
10.0.0.30       0.0.0.0         255.255.255.255 UH        0 0          0 bond0.3967
10.0.32.64      10.0.0.30       255.255.255.255 UGH       0 0          0 bond0.3967
10.0.32.65      10.0.0.30       255.255.255.255 UGH       0 0          0 bond0.3967
169.254.1.0     0.0.0.0         255.255.255.0   U         0 0          0 teplo-1
169.254.254.0   0.0.0.0         255.255.255.0   U         0 0          0 lxcbr0
172.16.11.0     0.0.0.0         255.255.255.0   U         0 0          0 oobmgmt
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
apic1#

In this case, the VTEP address range is 10.0.0.0/16, and the APIC sees all 10.0.x.x IP addresses as being reachable via the bond0.3967 interface, as shown by the
10.0.0.0 10.0.0.30 255.255.0.0 UG 0 0 0 bond0.3967
routing table entry on the APIC.

Recall I said that the APICs can’t handle:

  1. having a management IP address that overlaps with the VTEP address space, (it can’t figure out which interface to send management responses on) or
  2. being accessed from a workstation that is using an IP address that overlaps with the VTEP address space.

I’ll deal with case #2 first.

Now imagine for a minute I have a workstation with an IP address of say 10.0.11.11 that wishes to communicate with the OOB (Out of Band) management IP address of the APIC, which happens to be 172.16.11.111.  That remote workstation of 10.0.11.11 may well have a perfectly good route to 172.16.11.111, and may indeed be able to send packets to the APIC.

The problem of course arises when the APIC tries to send the reply packets to 10.0.11.11. As per the APIC’s routing table, the APIC would expect to reach 10.0.11.11 via its bond0.3967 interface, as shown by the
10.0.0.0 10.0.0.30 255.255.0.0 UG 0 0 0 bond0.3967
routing table entry on the APIC.
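
You can watch the APIC make this (wrong) decision for yourself. The APIC is a Linux machine underneath, so – assuming your software version lets you drop from the APIC CLI into the underlying bash shell with the usual iproute2 tools – ip route get shows exactly which interface a reply would leave by. The output below is illustrative rather than captured from a real system:

apic1# bash
admin@apic1:~> ip route get 10.0.11.11
10.0.11.11 via 10.0.0.30 dev bond0.3967 src 10.0.0.1
admin@apic1:~> ip route get 172.16.11.99
172.16.11.99 dev oobmgmt src 172.16.11.111

The first reply heads off down the infra bond interface and never reaches the workstation; the second goes out the oobmgmt interface as you’d want.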

Similarly, with case #1. This time, imagine I had used 10.0.11.0/24 as my OOB Management subnet.  Since that overlaps with my VTEP range (10.0.0.0/16), there is potential that IP addresses from my OOB subnet (10.0.11.0/24) could be allocated to VTEPs somewhere – and if that happened, my APIC would be unable to communicate with any other 10.0.11.0/24 address on the OOB subnet that clashes with a VTEP address.  In theory, the APIC would still be able to communicate with the VTEP addresses themselves, because it adds a /32 route to its routing table for every VTEP, but in my experience when I saw a customer with this configuration there was a problem communicating with the OOB subnet.

RedNectar

STOP PRESS
I’ve just been reading this discussion on the Cisco forum – it seems that the docker0 interface that was introduced in version 2.2 may also screw up the APIC’s view of the rest of the world in the same way.

References:

This is an expansion of a reply I gave on the Cisco Support forum: https://supportforums.cisco.com/discussion/13311571/overlapping-or-non-overlapping-vtep-pool

More information on VTEP addressing in the Cisco Application Centric Infrastructure Best Practices Guide


Cisco ACI Naming Standards

The Naming of Cats is a difficult matter,
It isn’t just one of your holiday games;

When you notice a cat in profound meditation,
The reason, I tell you, is always the same:
His mind is engaged in a rapt contemplation
Of the thought, of the thought, of the thought of his name:
His ineffable effable
Effanineffable
Deep and inscrutable singular Name.

T.S. Eliot, The Naming of Cats

Have you become frustrated at the multiple names Cisco uses for the same object within the ACI GUI? Have you clicked on a link that promised to show a list of Interface Selector Profiles only to be shown a list of Leaf Interface Profiles instead? Have you ever wondered what a L3 Out object is, when there is no facility to create an object called L3 Out?
I managed to muddle my way around the GUI and discover that L3Outs were actually External Layer 3 Networks and solve many other ambiguities by developing and adopting a consistent naming standard.

In a nutshell…

Consistent and structured naming of objects in Cisco’s ACI environment can help you greatly when learning how the different objects relate to each other.  This article explains the logic I use to name objects in Cisco ACI. In summary, these are:

Rule#1: Suffixes

If the object will ever be referred to by another object, make sure you name the object with a hyphen followed by a suffix that describes the item. For example:
Leaf101-IntProf describes the Interface Profile for Leaf switch 101,
WebServers-EPG describes an End Point Group.

Of course the problem when you first start out is that you don’t know what objects are going to be referred to in another drop-down list somewhere. That’s why you will want to use this guide.

Rule#2: Prefixes

If the object is an infrastructure object intended for use by a single tenant, prefix the object with a reference to that Tenant followed by a colon. For example, TenantX:StaticVLANs-VLAN.Pool describes a VLAN Pool intended for use by Tenant TenantX, and Common:Telstra-ExtL3Dom describes an External Layer 3 Domain used by the common tenant. In a similar vein, infrastructure objects shared by multiple tenants should be prefixed with Shared:, such as Shared:WAN.Links-AEP, which describes an Attachable Access Entity Profile (AEP) that multiple Tenants may share.

Rule#2 corollary:  Global infrastructure objects

If the object can be used by all tenants, omit the prefix.  Disable-CDP is the only CDP Interface Policy you’ll ever need to disable CDP – no need to create multiple duplicates.  Similarly, you’ll only ever need one Leaf Switch Profile for leaf 101, so call it Leaf101-LeafProf, but if you think it helps, Global:L101-LeafProf or Shared:L101-LeafProf would be acceptable.
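
If you drive the APIC through its REST API rather than the GUI, the convention carries straight across. Here is a minimal sketch – the APIC address and credentials are placeholders, and I’m relying on the published object model where a CDP Interface Policy is class cdpIfPol living under uni/infra:

# Log in and keep the session cookie (APIC address/credentials are placeholders)
curl -sk -c cookie.txt -X POST https://apic/api/aaaLogin.json \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}'

# Create the one global CDP Interface Policy named Disable-CDP
curl -sk -b cookie.txt -X POST https://apic/api/node/mo/uni/infra/cdpIfP-Disable-CDP.json \
  -d '{"cdpIfPol":{"attributes":{"name":"Disable-CDP","adminSt":"disabled"}}}'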

Rule#3: Punctuation

I use TitleText style to concatenate words in names, but if an acronym is involved, I use a period as a separator to make VLAN.Pool more readable than VLANPool. I reserve the hyphen character for use only as part of the descriptor suffix, but will use the colon character both as a separator for the prefix and as a replacement for a slash character when naming port numbers, such as TenantX:L101..102:1:35-VPCIPG, which also shows my preference for using double periods to indicate a range.  Hopefully that example makes it obvious that this names a VPC Interface Policy Group for TenantX on port 1/35 of both Leaf101 and Leaf102.

Legal names, characters and separators

There are some characters that you can’t use in names. There are sixty-six legal characters. They are all alphanumeric characters (upper and lowercase) and the four separator characters .:-_ (period, colon, hyphen and underscore).  In fact, you could indeed call an object ...:-_-:... if you wished. Numeric names are OK too, so a Leaf Switch Selector could indeed be called 101 or even 101..102. But keep in mind you can’t use the space character, and using my conventions, the hyphen character is used as the separator for objects requiring a suffix and the colon character is used as the separator for objects requiring a prefix.
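
If you generate names in a script, it’s worth a quick legality check before the GUI or API rejects them. A one-liner sketch in shell (note that name fields also carry a maximum length, which this doesn’t check):

# Accept only the 66 legal characters: A-Z, a-z, 0-9 and the separators . : - _
NAME='TenantX:L101..102:1:35-VPCIPG'
if echo "${NAME}" | grep -Eq '^[A-Za-z0-9.:_-]+$' ; then
  echo "OK: ${NAME}"
else
  echo "ILLEGAL character in: ${NAME}"
fi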

With the ground rules laid, let me continue with some more specific detail.  I will approach this in three sections.

  • Firstly, I’ll discuss objects defined in the Tenant space, where you will discover exactly what a L3Out really is.
  • Next, I’ll look at the Access-Policy Chain which the infrastructure administrator will define under the Fabric > Access Policies and VM Networking menus in the Advanced configuration mode, and
  • Finally, I’ll fill you in on a bit of the background to this article and tidy up any loose ends.

Names for objects defined in Tenants

I guess there is no better start than the name of the Tenant itself.

Tenants > Add Tenant

The name of your tenant needs to be as short as possible. If the Tenant is a listed company, consider using the stock symbol – CSCO rather than Cisco Systems.  This is because (as explained above) you will often want to use the Tenant name in naming Access Policies. Another consideration (if you are hosting multiple Tenants) is the real estate on the Submenu for Tenants – which lists more names if the names are short! And similarly, in many drop-down menus, you will see the name of the Tenant included in the list. The shorter the better!
Here are my examples:

Recommended Tenant Name – Purpose
common – Pre-defined. You can’t change this.
CSCO – If your Tenant has a stock symbol, use it
NNK – Abbreviated form of Nectar Network Knowledge Pty Ltd
UQ.Admin – University of Queensland Administration Tenant
UQ.Dev – University of Queensland Development group Tenant
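
And if you create your tenants via the REST API, the same sketch style applies – re-using the login cookie from the CDP example earlier (fvTenant is the tenant class, created under uni):

curl -sk -b cookie.txt -X POST https://apic/api/node/mo/uni.json \
  -d '{"fvTenant":{"attributes":{"name":"CSCO"}}}'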

Tenants > Tenant TenantName > Networking > VRFs

Give VRFs a -VRF suffix, although you may prefer -Ctx for Context (VRFs are sometimes referred to as contexts, and before v1.2, VRFs were known as Private Networks).

Here are my examples:

Recommended Private Network Name – Purpose
Dev-VRF – VRF to separate the Development team
Production-VRF – Main routing VRF
DMZ-VRF – You can use VRFs to implement a DMZ type approach

Tenants > Tenant TenantName > Networking > Bridge Domains

Bridge Domains get a name describing the Bridge Domain and a -BD suffix. If the BD is being mapped to a VLAN, the existing VLAN name may be appropriate.

Here are my examples:

Recommended Bridge Domain Name – Purpose
WebServer-BD – Bridge Domain for the Web Servers server farm
NAS-BD – Bridge Domain for the Network Attached Storage VLAN
DevTest-BD – Bridge Domain for testing
VLAN100-BD – Bridge Domain used to migrate VLAN 100. Use with care, because you may find that other VLANs also end up using this BD


Tenants > Tenant TenantName > Application Profiles

Application Profiles get a name describing the Application and a -AP suffix.

Here are my examples:

Recommended Application Profile Name – Purpose
SAP-AP – Application Profile for SAP
Webshop-AP – Application Profile for your Webshop Application
OurAppDev-AP – Application Profile for an application in development

Tenants > Tenant TenantName > Application Profiles > Application EPGs

End Point Groups get a name describing the type of servers that are represented in the group and a -EPG suffix.

Here are my examples:

Recommended EPG Name – Purpose
SAP.Servers-EPG – Application Servers for SAP
WebServers-EPG – EPG for the Web servers server farm
SQL-EPG – EPG for SQL DataBase servers

Tenants > Tenant TenantName > Security Policies > Filters

Filters can be used multiple times within a Tenant, and indeed filters in the common Tenant can be used by any Tenant, so there is an argument for having all filters defined in the common Tenant. But the most confusing aspect about filters is that a filter can define a single TCP port number, or could consist of many entries with multiple protocols and even ranges of port numbers. My suggestion is to keep filters to specific protocol/port numbers, or at the very most a collection of closely related port numbers.
Inside the filter, you will also need to name the filter entries.  My convention is to name the filter entries based on the protocol/port number, and to give the filter a -Fltr suffix.
Here are my examples:

Recommended Filter Name – Purpose – Recommended Filter Entry Name(s)
HTTP-Fltr – Filter for HTTP traffic – TCP80
HTTPS-Fltr – Filter for HTTPS traffic – TCP443
AD-Fltr – Filter for Active Directory Protocols – TCP1025..5000, TCP49152..65535, TCP389, UDP389, TCP636, … etc (See MS website)
ICMP-Fltr – Filter for ICMP traffic – ICMP
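
For the API-inclined, a filter plus its entry can be pushed in one go – a hedged sketch, again re-using the earlier login cookie, with vzFilter/vzEntry being the relevant classes and TenantX a placeholder:

curl -sk -b cookie.txt -X POST https://apic/api/node/mo/uni/tn-TenantX.json \
  -d '{"vzFilter":{"attributes":{"name":"HTTP-Fltr"},"children":[
        {"vzEntry":{"attributes":{"name":"TCP80","etherT":"ip","prot":"tcp",
                                  "dFromPort":"80","dToPort":"80"}}}]}}'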

Tenants > Tenant TenantName > Security Policies > Contracts

Contracts define the collection of protocols that are required for an EPG to provide a service to another EPG.  Therefore, as well as having a -Ct suffix, I always include the word Services (or Svcs) in the name of the contract to indicate which EPG is the provider of the service.  Contracts also contain Subjects, and unless there is a reason to have more than one Subject in a Contract, I duplicate the contract name for the Subject name, except with a -Subj extension.

Here are my examples:

Recommended Contract Name – Purpose – Recommended Subject Name(s)
WebServices-Ct – Contract to be provided by the WebServers-EPG – WebServices-Subj
WebServices-Ct – Contract to be provided by the WebServers-EPG, but with TCP443 traffic to be treated differently to TCP80 traffic – HTTP-Subj, HTTPS-Subj
AD.Svcs-Ct – Contract for Active Directory Services – AD.Svcs-Subj

Tenants > Tenant TenantName > Networking > External Bridged Networks

An External Bridged Network has colloquially become known as a L2 Out – a “Layer 2 Outside” network. Consequently, a suffix of -L2Out is a great abbreviation.  But there is a more important association that also has a significant bearing on the name: each L2 Out is associated with a single VLAN ID.  So my advice is to name the L2 Out after the VLAN – either by ID or VLAN Name if appropriate.

Here are my suggestions:

Recommended External Bridged Network (L2 Out) Name – Purpose
VLAN2000-L2Out – L2 Out for VLAN 2000
NAS.VLAN-L2Out – L2 Out for Network Attached Storage VLAN

Tenants > Tenant TenantName > Networking > External Bridged Networks > VLANx-L2Out > Networks

A L2 Out also needs a child object that can be used to link to Contracts.  This object is referred to in the GUI as a Network, but I prefer the concept of referring to it as a L2 EPG, because the whole ACI policy philosophy is centred around the EPG-Contract association.  And since this L2 EPG is going to allow traffic to and from a particular external VLAN, it is appropriate to name the entity with a name mimicking its parent and a -L2EPG suffix.

Here are my examples:

Recommended (L2 Out) Network Name – Purpose
VLAN2000-L2EPG – L2 EPG for VLAN2000-L2Out
NAS.VLAN-L2EPG – L2 EPG for NAS.VLAN-L2Out
2020-L2EPG – L2 EPG for 2020-L2Out


 Tenants > Tenant TenantName > Networking > External Routed Networks

Similar to the L2 Out idea, an External Routed Network is known as a L3 Out – and indeed even referred to as such under a Bridge Domain’s configuration. The essential use of the “Layer 3 Outside” network is to give a VRF the ability to:

  1. advertise public subnets on behalf of linked Bridge Domains using a particular protocol (OSPF/BGP/EIGRP), and
  2. process incoming routes for that protocol to be added to the VRF routing table.  In other words, it provides a routing service for a VRF for a particular protocol(s).

So it makes sense to name a L3 Out based on VRF and/or routing protocol and give it a -L3Out suffix.

Here are my examples:

Recommended External Routed (L3 Out) Network Name – Alternative Form – Purpose
DevVRF-L3Out – Dev-L3.Out – OSPF & BGP L3 Out for the Development VRF
ProductionVRF-EIGRP.L3Out – Production-EIGRP.L3.Out – EIGRP L3 Out for the Production VRF
ProductionVRF-BGP.L3Out – Production-BGP.L3.Out – BGP L3 Out for the Production VRF
DMZ.VRF-OSPF.L3Out – DMZ-L3.Out – L3 Out for the DMZ VRF

 Tenants > Tenant TenantName > Networking > External Routed Networks > L3OutName-L3Out > Logical Node Profiles

When you create a Logical Node Profile for a L3Out you are defining which Leaf Switches are going to become external routers – PE routers in terms of how MP-BGP works in ACI.  One thing to remember when creating Logical Node Profiles for multiple Nodes within the same L3 Out is that it makes no difference whether you create one Node Profile per Leaf, or include all Nodes (Leaves) in a single Node Profile.  For me, I like to see a single Node Profile per Leaf. Since the Node Profile is going to define a Leaf switch, name the profile based on the Leaf name. The Node Profile name will not be seen outside the L3Out and Node Profiles aren’t referenced by other objects, so a -NodeProf suffix is not so necessary here, but you may feel more comfortable using it.

Here are my examples:

Recommended Node Profile Name – Alternative Form – Purpose
L101 – L101-NodeProf – Node Profile for Leaf101
103..104 – 103..104-NodeProf – Node Profile for Leaves 103 and 104

Tenants > Tenant TenantName > Networking > External Routed Networks > L3OutName-L3Out > Logical Node Profiles > NodeProfileName > Logical Interface Profiles

When you create a Logical Interface Profile for a L3Out‘s Logical Node Profile, you are defining the actual interface that will be used to send and receive routing exchanges.  These profiles can define physical routed interfaces, logical sub-interfaces or logical switched virtual interfaces (SVIs).  My recommendation is to only ever include one such interface in each profile (the Node Profile can have multiple Interface Profiles if required), and follow slightly different naming rules depending on whether the Interface Profile is a routed interface, sub-interface or SVI. Similar to the Node Profiles within a L3 Out, the Interface Profile’s -IntProf suffix is not essential here.

Here are my examples:

Recommended Logical Interface Profile Name – Alternative Form – Purpose
eth101:1:1 – 101:1:1-IntProf – Routed interface on eth1/1 on leaf 101
eth102:1:2.333 – 102:1:2.333-IntProf – Routed sub-interface for VLAN 333 on eth1/2 on leaf 102
VLAN400 – VLAN400-IntProf – SVI on VLAN 400

Names for Access Policy model objects

Understanding the Access Policy model, or Access Policy Chain as I like to call it, is one of the hardest concepts to master in ACI. Access policies are configured under:

Fabric > Access Policies

Interface Policies
Concept: You will need a collection of well defined Interface Policies to define non-default policies for per-interface configuration options such as CDP, LLDP, BPDU Guard etc.  Once you have defined a particular Interface Policy once, it can be used universally for all tenants.
Examples: Enable-CDP, Disable-CDP, Enable-BPDU.Filter, Enable-BPDU.Guard, Enable-BPDU.GuardFilter, Enable-BPDU.Flood, ActiveLACP-PC, PassiveLACP-PC, MAC.Pinning-PC, PerPort-VLAN.Scope, PerLeaf-VLAN.Scope

Leaf Profile
Concept: Describes a Leaf switch (or collection of leaf switches). Name the profile based on the Switch ID(s).
Examples: Leaf101-LeafProf, 101-LeafProf, L101..102VPC-LeafProf
RedNectar’s Rule: Have one and only one Leaf Profile per leaf switch for all leaf switches.
Permitted Exception: You may consider having a special VPC Leaf Profile per pair of VPC linked leaf switches to link to the upcoming VPC Interface Profile.

Leaf Selector
Concept: Child object of Leaf Profiles; defines a leaf switch.
Examples: Leaf101, 101-LeafSel, Global:Leaf101

Interface Profiles
Concept: Describes a set of switch ports linked to a Leaf Profile. Match the name of the Interface Profile to its related Leaf Profile.
Examples: Leaf101-IntProf, L101-IntProf, L101..102VPC-IntProf
RedNectar’s Rule 1: Have one and only one Interface Profile per Leaf Profile, except for…
RedNectar’s Rule 2: If you don’t have a corresponding Leaf Profile for each pair of VPC Leaves, create a special VPC Interface Profile per pair of VPC linked leaf switches, and have both leaves link to this VPC Interface Profile.

Access Port Selectors
Concept: Child object of Interface Profiles. Give the selector a name that reflects the port number it represents.
Examples: 1:01 (defines port 1/1), 1:01-IntSel (defines port 1/1), 1:13..14-PC (defines ports 1/13-14 used in a port channel)
RedNectar’s Rule: Have one Access Port Selector per port (very tedious), except when two ports on a leaf must have congruent configurations, such as when defining a Port Channel, so…
RedNectar’s Rule: Have one Access Port Selector per configured Port Channel.
RedNectar’s Tip: When naming Access Port Selectors, use leading zeros in the port-numbering scheme as shown above.  That will keep your list of Access Port Selectors in order when sorted alphabetically.

Note: Interface Policy Groups have subtle but important differences depending on whether they are Access Port Policy Groups or [Virtual] Port Channel Interface Policy Groups, so I have treated each case separately.

Access Port Policy Groups
Concept: Describe a generalised collection of Interface Policies for single-attached devices. The more “generalised” the Group, the more re-usable it becomes. Name the APPG to describe the type of attached hosts and the Tenant using the attached host.  If the attached host is to be shared, indicate it in the name.
Examples: TenantX:SingleAttachedHosts-APPG, Shared:AccessPorts-APPG

[V]PC Interface Policy Groups
Concept: Describe a specific Port Channel or Virtual Port Channel interface. There is no way of “generalising” a group of policies as per Access Port Policy Groups; each [V]PC will need its own collection of Interface Policies defined. Since VPCs and PCs must be unique for a given pair/group of ports, name the [V]PC to describe the Leaf Ports to be assigned. [See Footnote]
Examples: Leaf101..102:1:35-VPCIPG (defines a VPC on interface 1/35 on Leafs 101 and 102), L103:1:4-5-PCIPG (defines a Port Channel on ports 1/4-5 of Leaf 103), TenantX:FIA-VPCIPG (defines a VPC to Fabric Interconnect A for TenantX)

Attachable Access Entity Profiles (AEPs)
Concept: Provides a joiner between the physical configuration of the Leaf ports and the encapsulation configuration. Think of it as a VLAN Profile. Or a VXLAN Profile.  Name the AEP to symbolise the collection of V[X]LANs along with the ports that will permit these V[X]LANs.
Examples: TenantX:AllVLANs-AEP, Shared:ExternalAccess-AEP

Physical Domains
Concept: Provide a place to define a single collection of VLANs (or VXLANs) to be used to map directly connected hosts to EPGs. Name the Physical Domain based on the name of the Tenant and the associated VLAN Pool.
Examples: TenantX:StaticVLANs-PhysDom, Common:StaticVLANs-PhysDom

External Layer 2 Domains
Concept: Provide a place to define a single collection of VLANs (or VXLANs) to be used to map VLANs or hosts to L2EPGs. Name the External Layer 2 Domain based on the name of the Tenant and the associated VLAN Pool.
Examples: TenantX:StaticVLANs-ExtL2Dom, Common:StaticVLANs-ExtL2Dom

External Layer 3 Domains
Concept: Provide a place to define a single collection of VLANs (or VXLANs) to be used to map external connections to L3 External Networks (L3 Outs). Name the External Layer 3 Domain based on the name of the Tenant and the associated VLAN Pool.
Examples: TenantX:StaticVLANs-ExtL3Dom, Common:StaticVLANs-ExtL3Dom

Virtual Machine Management (VMM) Domains
Concept: VMM Domains are multi-purpose. A VMM Domain:
a) provides a place to define the identity and login credentials for a vCenter/SCVMM/KVM,
b) provides a place to define a single collection of VLANs (or VXLANs) to be used to map PortGroups/VM Networks to EPGs, and
c) will bestow its name on a Distributed Virtual Switch in the target vCenter/SCVMM/KVM.
Name the VMM Domain based on the name of the Tenant, the type of VMM and the associated VLAN Pool.
Examples: TenantX:Apps.vCenter-VMM.Dom, Shared:SCVMM-VMM.Dom

VLAN Pools
Concept: Every Domain (Physical, L2/L3 External or VMM) needs an associated VLAN Pool. Giving each Tenant one collection of Static VLANs and another collection of Dynamic VLANs should be sufficient. Name the VLAN Pool based on the name of the Tenant and the associated Domain.
Examples: TenantX:StaticVLANs-VLAN.Pool, TenantX:Apps.vCenter-VLAN.Pool

Footnote: A PC Interface Policy Group (PCIPG) must be unique per leaf – so it is possible to re-use PCIPGs, but… if you do, you’ll now have to have some way of remembering whether a particular PCIPG has been used on a particular leaf or not, in which case you might still use names like 1:4-5-PCIPG, omitting the leaf name and only using that PCIPG when deploying a PC on ports 4-5. Your choice.  Similarly, a VPC Interface Policy Group (VPCIPG) need only be unique per VPC pair of switches, and if you choose this option I would again suggest using names like 1:35-VPCIPG and only using that VPCIPG when deploying a VPC on port 35 of the two switches.

The logic…

Throughout my Cisco ACI Tutorial, I followed a naming standard which I suggest you follow for your first install. I wanted to follow the convention that was cited in the Troubleshooting Cisco Application Centric Infrastructure book, but decided that the examples they gave were sometimes inconsistent, too detailed, and in some cases too verbose. But I stuck with the spirit of using a structure of [Purpose]-[ObjectType] that seemed to be the backbone of the convention, adding some extra punctuation rules, such as concatenating words into TitleTextStyle to make them readable, and adding a [TenantName]: prefix when appropriate – so my convention is: [TenantName]:[Purpose]-[ObjectType]. Having the [ObjectType] as part of a name can help tremendously when learning the structure and when distinguishing between similar objects. Clearly Leaf101-IntProf is less likely to be confused with Leaf101-LeafProf, thanks to the -[ObjectType] suffix.
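
If you script your builds, it’s worth encoding the convention once and reusing it everywhere – a trivial sketch:

# Compose a name as [TenantName]:[Purpose]-[ObjectType]
# Pass "" as the tenant for global objects (Rule#2 corollary)
aci_name() {
  tenant=$1 ; purpose=$2 ; objtype=$3
  if [ -n "${tenant}" ] ; then
    echo "${tenant}:${purpose}-${objtype}"
  else
    echo "${purpose}-${objtype}"
  fi
}

aci_name TenantX StaticVLANs VLAN.Pool   # TenantX:StaticVLANs-VLAN.Pool
aci_name ""      Leaf101     IntProf     # Leaf101-IntProf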

RedNectar

Note: The Interface Profile object is referred to as an associated Interface Selector Profile or Interface Select Profile on the Switch Profile page. On the other hand, the Access Port Selector object is also referred to in various places as an Interface Selector or the Host Port Selector.

Don’t be confused. I was.


Script to create Linked Clones on ESXi

I had a problem.  The ESXi server I was supplied with had limited disk space, and I had to create 10 clone VMs from a master VM of around 40GB to run a class.  Creating multiple copies of the master would have more than exhausted the disk space I had.

So instead, I created a single snapshot of the master, then took 10 copies of the original .vmx file and the much smaller snapshot delta file, changed each of the 10 copied .vmx files so that the scsi0:0.fileName attribute pointed to the original master file (and changed a couple of other attributes too), and edited each copied delta snapshot descriptor so that its path to its parent file also pointed to the original master file.

After creating my set of linked clones, the total additional space required for 10 linked clones was less than 11GB, yet each clone was a fully functioning copy of the original 40GB parent. Total space saved, approximately 390GB!

Now to be honest, I didn’t do all that by hand.  I used a script to do it for me, and here is where I stood on the shoulders of giants.  The process would have been impossible without the help of:

If you’d like to see how I did this, read on. I’ll cover the following:

But first, a disclaimer.

Disclaimer: I am human. I may have made mistakes and omitted safeguards in the scripts described in this article.  Make sure you operate ONLY on material that is securely backed up, and be warned that these scripts could inadvertently create or delete hundreds of VMs in one fell swoop.  Use with care.
You have been warned.

The Background Theory

You need to understand a bit about how VMware stores its VMs.  If you browse a datastore or navigate to where a VM is stored on an ESXi host in the command line interface (typically cd /vmfs/volumes/data or similar) you should see that a VM consists of several files:

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # ls -lh
total 42210320
-rw-------    1 root     root       40.0G Mar 18 08:37 GNS3WB88-master-flat.vmdk
-rw-------    1 root     root        8.5K Mar 18 08:37 GNS3WB88-master.nvram
-rw-------    1 root     root         502 Mar 18 08:37 GNS3WB88-master.vmdk
-rw-------    1 root     root           0 Mar 18 08:06 GNS3WB88-master.vmsd
-rw-------    1 root     root        3.3K Mar 18 08:06 GNS3WB88-master.vmx
-rw-------    1 root     root        3.3K Mar 18 08:06 GNS3WB88-master.vmxf
-rw-------    1 root     root        8.5K Mar 18 08:37 nvram

Note particularly the .vmdk and .vmx files.  The  *flat.vmdk file is your disk image and the .vmx file is the descriptor file that tells the hypervisor exactly what is what in relation to your VM, including the location of the virtual disk files that make up your VM, and the snapshot status of your VM.  Take a look at the .vmx file, especially the line that shows you where your disk file lives. The command cat *.vmx | grep vmdk should show you:

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # cat *.vmx | grep vmdk
scsi0:0.fileName = "GNS3WB88-master.vmdk"

And if you check the .vmdk file described in the scsi0:0.fileName = section, you will see the reference to the actual disk file image (the “flat” file):

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # cat *master.vmdk | grep vmdk
RW 83886080 VMFS "GNS3WB88-master-flat.vmdk"
Note: If you browse the files using the vSphere file browser, you will not see the separation of the two .vmdk files – the file browser hides the “flat” .vmdk file, and shows the descriptor file as the large file.

After you create a snapshot of your VM, the structure changes a little:

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # ls -lh
total 42210320
-rw-------    1 root     root      256.1M Mar 19 09:25 GNS3WB88-master-000001-delta.vmdk
-rw-------    1 root     root         333 Mar 19 08:55 GNS3WB88-master-000001.vmdk
-rw-------    1 root     root       31.2K Mar 18 09:32 GNS3WB88-master-Snapshot1.vmsn
-rw-------    1 root     root       40.0G Mar 18 18:38 GNS3WB88-master-flat.vmdk
-rw-------    1 root     root        8.5K Mar 18 06:49 GNS3WB88-master.nvram
-rw-------    1 root     root         525 Mar 18 19:15 GNS3WB88-master.vmdk
-rw-------    1 root     root         476 Mar 18 09:32 GNS3WB88-master.vmsd
-rw-------    1 root     root        3.3K Mar 19 09:25 GNS3WB88-master.vmx
-rw-------    1 root     root        3.3K Mar 18 06:52 GNS3WB88-master.vmxf
-rw-------    1 root     root        8.5K Mar 19 09:25 nvram
-rw-------    1 root     root      164.3K Mar 19 09:25 vmware.log

Note that there is now a *-000001.vmdk file and a *-000001-delta.vmdk file as well as a *-Snapshot1.vmsn file.  If you check the *.vmx file again, you will see:

/vmfs/volumes/57<snip>ed/Golden Masters/GNS3WB88-master # cat *master.vmx | grep vmdk
scsi0:0.fileName = "GNS3WB88-master-000001.vmdk"

And if you take a look at that file, you will see the snapshot information:

vmfs/volumes/57<snip>ed/Golden Master/GNS3WB88-master # cat GNS3WB88-master-000001.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=c9801963
parentCID=e0de4476
isNativeSnapshot="no"
createType="vmfsSparse"
parentFileNameHint="GNS3WB88-master.vmdk"
# Extent description
RW 83886080 VMFSSPARSE "GNS3WB88-master-000001-delta.vmdk"

# The Disk Data Base
#DDB

ddb.longContentID = "c7ddda7740d46041620b9dc5c9801963"

Armed with all this detail, you have enough information to create a linked clone.  All you need to do is copy the snapshot files and the descriptor file (leaving behind the main base .vmdk files) from the Golden Master image to another directory, and edit the .vmx file to point to the parent’s base files!  The new clone will store any changes to the original disk image in its own copy of the *-000001-delta.vmdk and keep accessing the original *-flat.vmdk image for any static information.
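
Stripped of all error checking, the by-hand version of a single clone looks something like the sketch below – the paths are illustrative, and my script additionally fixes up the display name, MAC addresses and UUID in the copied .vmx:

# From the datastore root - copy ONLY the snapshot delta, its descriptor and the .vmx
MASTER="Golden Masters/GNS3WB88-master"
CLONE="GNS3WB88/GNS3WB88-#01"
mkdir -p "${CLONE}"
cp "${MASTER}"/*-000001* "${CLONE}"/
cp "${MASTER}"/*.vmx "${CLONE}"/GNS3WB88-#01.vmx

# Point the copied snapshot descriptor back at the master's base disk
sed -i 's|parentFileNameHint="|parentFileNameHint="/vmfs/volumes/data/Golden Masters/GNS3WB88-master/|' \
  "${CLONE}"/*-000001.vmdk

# Register the clone so it appears in vSphere
vim-cmd solo/registervm "/vmfs/volumes/data/${CLONE}/GNS3WB88-#01.vmx" "GNS3WB88-#01"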

And creating those copies and manipulating the .vmx files is exactly what my script does.  Here’s how you use it.

The Process in Detail

In the first task you will ensure your directory structure is compatible with my script, then you will prepare your “Golden Master” image from which you will make the clones. In the third task you will create a script on your ESXi host (by copying and pasting mine). Naturally, the next task is to run the script, and finally you will check your results.

Task #1: Prepare your Directory Structure

Firstly, to run my script in its published format, you need to have the directory structure right.  My script expects that your ESXi host will have a directory where you keep the Golden Masters, and each Golden Master will live in a folder that ends with the characters -master.  After the script has run, it will create another directory where the clones will reside.  In other words, your structure should be something like this:

- /
| - Golden Masters
|   | + FirstVM-master
|   | + SecondVM-master
|   | + ThirdVM-master
|- AnotherVM
|- AndAnotherVM

After you have run the script for say the FirstVM, and SecondVM, the structure will change to:

- /
| - Golden Masters
|   | + FirstVM-master
|   | + SecondVM-master
|   | + ThirdVM-master
|- FirstVM
|   | + FirstVM-#01
|   | + FirstVM-#02
|   | + FirstVM-#03
|   | + FirstVM-#04
|- SecondVM
|   | + SecondVM-#01
|   | + SecondVM-#02
|   | + SecondVM-#03
|   | + SecondVM-#04
|- AnotherVM
|- AndAnotherVM

If necessary, use the Datastore Browser to organise your directory structure – or if you have a different structure, you could of course modify the script to match.  To get to the Datastore Browser in ESXi, start with the vSphere Client: select the ESXi host, click the Configuration tab, click Storage in the Hardware section, then right-click on your storage device and select Browse Datastore…

In the Datastore Browser, you will find all the tools you need to create folders and move VMs – just be aware that after you have moved a VM, it will have to be added to the Inventory again.  Which is why you get this warning when you move a VM:

Assuming you now have the VM from which you wish to create your “Golden Master” in a sub-directory off the main data storage, and have registered that VM in the vSphere Client, you are ready to prepare your “Golden Master”.

Task #2: Prepare your “Golden Master”

In the vSphere client, locate the VM that you need to create linked clones for.  This will be your “Golden Master” VM.

Remember, to run my script in its published format, your “Golden Master” MUST live in a folder that ends with the characters -master – and if you have recently moved the VM, it will need to be re-registered in vSphere.

So if not already in the correct format, rename (Right Click on the VM: Rename) your VM so that it ends with -master

Next, make sure the VM is powered down, then make sure that this VM has no snapshots already – (Right Click on the VM: Snapshot | Snapshot Manager…)

If there are snapshots, delete all of them to consolidate to the version you want to be the “Golden Master”.  You want this VM to be as clean as you can get it.

If you browse the datastore where the file is located (Select VM; click Summary Tab; Resources section; select storage volume; right-click: Browse Datastore… then navigate to your VM’s folder) you should see something similar to this:

Note that the big file is the .vmdk file, and there are no snapshot files.

Next, take a snapshot of the VM. (Right-click: Snapshot | Take Snapshot…). I named mine CloneBase, then clicked OK.

And if you browse the datastore again, you should see something like this:

Note the snapshot file has now been created and a small additional -000001.vmdk file has been created.  This .vmdk file will be the log journal that records the changes made after the snapshot, leaving the original .vmdk file intact and read-only.
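
(If you’d rather stay in your ssh session for this step, vim-cmd can take the snapshot too – a sketch, with 42 standing in for whatever VM ID getallvms reports for your master:)

# Find the VM ID (first column), then snapshot it: name, description, no memory, no quiesce
vim-cmd vmsvc/getallvms | grep GNS3WB88-master
vim-cmd vmsvc/snapshot.create 42 CloneBase "base for linked clones" 0 0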

The next challenge is to create a script file to turn your snapshot into a set of linked clones.

Task #3: Create the clone.sh script

Note: This task requires you to ssh to your ESXi host.  If ssh is NOT enabled on your ESXi host, this step will fail.  This kb article explains how to enable ssh if necessary.

Firstly, select all the code in the Appendix #1 below, and copy it to your PC’s copy buffer.

Next, start a ssh session to your ESXi host from a PC that supports copy and paste, then navigate to the parent directory of your “Golden Master” folder – the folder that ends in -master.  This is where the script expects to run from, and will create clones that are linked back to the .vmdk file in your -master folder.

~ # cd /vmfs/volumes/data/Golden\ Masters/
/vmfs/volumes/57<snip>ed/Golden Masters # ls -lh
drwxr-xr-x    1 root     root        1.6K Mar 18 19:15 GNS3WB88-master
drwxr-xr-x    1 root     root        1.1K May 25  2016 TCPIP Linux Server-master

Open vi using the command vi clone.sh

Tip: In vi, start by entering the command set noautoindent – your paste in the next step will look much nicer.  Do this by pressing the following sequence, including the colon:

:set noautoindent

Press i to enter insert mode in vi, then paste the contents of the code in the Appendix #1 below.

In vi, press <Esc>:wq to write your file and quit.

Make your script executable with the command chmod +x clone.sh

/vmfs/volumes/57<snip>ed/Golden Masters # chmod +x clone.sh

Check that the file is executable by issuing a ls -lh command and looking for the x attribute

/vmfs/volumes/57<snip>ed/Golden Masters # ls -lh clone.sh
-rwxr-xr-x 1 root root 4.0K Mar 17 04:19 clone.sh

Note that the clone.sh file is listed as executable. You are now ready to run the script.

Task #4: Run the clone.sh script

At last you are ready to run your script.  If you run the script with no parameters, it will give you a list of expected parameters

/vmfs/volumes/57<snip>ed/Golden Masters # ./clone.sh

clone.sh version 1.2
USAGE: ./clone.sh base_image_folder-master no_of_copies [starting_number] [vnc_base_port]
base_image_folder-master MUST end with the string '-master'
Clones will have names of base_image_folder-#01, base_image_folder-#02 etc
If starting_number is specified, clones will start numbering from that number
Maximum cloneID=9999; maximum number of clones=99
If vnc_base_port is given, first clone will use vnc port vnc_base_port+1, default 5900

So to make twelve linked clones of say the GNS3WB88-master image, enter the command:

./clone.sh GNS3WB88-master 12 

You can check through the output to look for any errors – at this stage my error checking is minimal, but I’ve put plenty of progress statements in the script to help you work out where the problem is, should one arise.  There is a sample output from running the command above in Appendix #3, but it is in the vSphere client where you will want to check your results first.

Task #5: Check results

Once the script has run, you should be able to see your results in vSphere.  Note that there is a resource group created to hold your set of linked clones, and the clones are numbered sequentially – you are ready to start powering on the clones.  Oh, and by the way, there is a line in the script that you can “uncomment” to automatically power on each clone as it is built.
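
If you left that line commented out, you can power the whole set on afterwards with a loop that reuses the script’s own getallvms/awk pattern – adjust the grep filter to match your clone names:

for vmID in $(vim-cmd vmsvc/getallvms | grep 'GNS3WB88-#' | awk '{print $1}') ; do
  vim-cmd vmsvc/power.on ${vmID}
done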

That’s it – your clones are ready, but there is a little more you need to be careful of, especially if these clones have a limited life and you want to replace them later, so make sure you read the following section on Maintenance.

Maintenance and a Warning

Firstly the warning.  You must understand that you have created linked clones – each clone depends on the disk image that belongs to the Golden Master, so:

WARNING: Don’t ever delete (as in Delete from Disk) a clone from the vSphere client – if you do, it will delete your master .vmdk, and none of the clones past or future, nor even your Golden Master, will ever work again.  Restore your backup if you do.  You have been warned. (Don’t ask how I found out, but I’m glad I waited for the backup to complete.)

The corollary from the warning is that should you ever wish to remove a clone, use the Remove from Inventory option rather than Delete from Disk.

And now the boring maintenance tips…

When you run the script for the first time, it creates the structure needed to hold the clones.  When you run it a second or subsequent time, it will unceremoniously delete any previous linked clone with the same number.  This is ideal if you are, say, running classes and need a fresh set of clones each time you run the class, but there are a couple of things to note.

Firstly, if you create say 12 clones on the first run, then create only 10 clones on the second run, clones #11 and #12 from the first run will still exist – if you don’t want them to hang around, use the Remove from Inventory option rather than Delete from Disk in vSphere as explained above.

Similarly, if on the first run you created clones numbered 20-29 (using the command ./clone.sh my-master 10 20) and next time you create clones 01-10 from the same master, you will have a resource group with clones 01-10 and 20-29 in it. So be careful.

Deleting clones that you created can be a pain, especially if you created many more than you needed, so I have included a copy of another script I wrote to remove clones – use with caution, but if you need it, you’ll find the script in Appendix #2.

Enjoy your cloning!

RedNectar

Appendix #1: The clone.sh script

#!/bin/sh
# Adapted From: https://github.com/oliverbock/esxi-linked-clone/blob/master/clone.sh
# v1.2 2017-03-25 Chris Welsh

version=1.2

readonly noOfArgs=$#
#Remove trailing / of path if it has one
readonly inFolder=${1%/}
if [ "$3" = "" ] ; then
  startingNo="01"
  noOfCopies=$2
  lastCopyNo=$2
else
  startingNo=$3
  noOfCopies=$2
  lastCopyNo=$(( $2 + $3 - 1 ))
fi

if [ "$4" = "" ] ; then
  VNCstartPort=5900
else
  VNCstartPort=$4
fi

usage() {
  echo ""
  echo "clone.sh version $version"
  echo "USAGE: ./clone.sh base_image_folder-master no_of_copies [starting_number] [vnc_base_port]"
  echo "base_image_folder-master MUST end with the string '-master'"
  echo "Clones will have names of base_image_folder-#01, base_image_folder-#02 etc"
  echo "If starting_number is specified, clones will start numbering from that number"
  echo "Maximum cloneID=9999; maximum number of clones=99"
  echo "If vnc_base_port is given, first clone will use vnc port vnc_base_port+1, default 5900"
  echo ""
  }

makeandcopy() {
  if [ ! -d "${outFolder}" ] ; then
    echo "Creating ${outFolder}"
    mkdir "${outFolder}"
  else
    echo "Removing contents of old "${outFolder}
    ls -lh "${outFolder}"/*
    rm "${outFolder}"/*
  fi
  cp "${inFolder}"/*-000001* "${outFolder}/"
  cp "${inFolder}"/*.vmx "${outFolder}/${thisClone}.vmx"

}

main() {

  if [  ${noOfArgs} -eq 0 ] ; then
    usage
    exit 1
  fi

  if [  ${noOfArgs} -eq 1 ] ; then
    echo ""
    echo "ERROR--Insufficient arguments"
    usage
    exit 1
  fi

  if [ ${noOfCopies} -ge 100 ] ; then
    # Clone copy count arbitrarily set to 99 - I don't want anyone to accidentally create hundreds of clones
    echo ""
    echo "ERROR--Clone copy count exceeds 99"
    usage
    exit 1
  fi

  if [ ${lastCopyNo} -ge 10000 ] ; then
    # Maximum clone copy number arbitrarily set to 9999 - could actually be set as high as 59635 before VNC TCP port numbers exceed 65535
    echo ""
    echo "ERROR--Clone sequence exceeds 9999 (last copy would be ${lastCopyNo})"
    usage
    exit 1
  fi

  echo "${inFolder}" | grep -q "\-master$"
  if [[ $? -ne 0 ]] ; then
    # Input filename in wrong format
    echo ""
    echo "ERROR--input folder MUST end with -master. You entered"
    echo "${inFolder}"
    usage
    exit 1
  fi

  echo "============== Beginning Job =============="
  local fullBasePath=$(readlink -f "${inFolder}")/
  local escapedPath=$(echo "${fullBasePath}" | sed -e 's/[\/&]/\\&/g' | sed 's/ /\\ /g')

  outFolderBase=../${inFolder/-master/\/}
  echo "Output folder BasePath is ${outFolderBase}"
  if [ ! -d "${outFolderBase}" ] ; then
    echo "Creating ${outFolderBase}"
    mkdir "${outFolderBase}"
  fi

  resourcePool=${inFolder/-master/}
  # Thanks to Alessandro Pilotti for putting this on github
  # https://github.com/cloudbase/unattended-setup-scripts/blob/master/esxi/create-esxi-resource-pool.sh

  thisPoolID=`sed -rn 'N; s/\ +<name>'"${resourcePool}"'<\/name>\n\ +<objID>(.+)<\/objID>/\1/p' /etc/vmware/hostd/pools.xml`
  if [ -z "${thisPoolID}" ]; then
    echo "Creating resource pool :${resourcePool}:"
    thisPoolID=`vim-cmd hostsvc/rsrc/create --cpu-min-expandable=true --cpu-shares=normal --mem-min-expandable=true --mem-shares=normal ha-root-pool "${resourcePool}" | sed -rn "s/'vim.ResourcePool:(.+)'/\1/p"`
  fi

#-------------------- Main Loop begins here ---------------#

  for i in $(seq -w ${startingNo} ${lastCopyNo}) ; do

    thisClone=${inFolder/master/#${i}}
    outFolder=${outFolderBase}${inFolder/master/#${i}}
    VNCport=`expr $VNCstartPort + $i`

    echo "=============================================================================="
    echo "Cloning Clone#${i} named ${thisClone} using VNCport=${VNCport} to ${outFolder}"

    makeandcopy

    cd "${outFolder}"/
    echo "================ Processing .vmx file ================"
    echo "Delete Swap File line, will be auto recreated"
    sed -i '/sched.swap.derivedName/d' ./*.vmx
    echo "Change Display Name to ${thisClone}"
    sed -i -e '/^displayName =/ s/= .*"/= "'"${thisClone}"'\"/' ./*.vmx
    echo "Change VNC Port Value to ${VNCport}"
    sed -i -e '/RemoteDisplay.vnc.port =/ s/= .*"/= "'"${VNCport}"'\"/' ./*.vmx
    echo "Change Parent Disk Path"
    sed -i -e '/parentFileNameHint=/ s/="/="'"${escapedPath}"'/' ./*-000001.vmdk

    # Forces generation of new MAC + DHCP
    echo "Forcing change of MAC addresses for up to two NICs"
    sed -i '/ethernet0.generatedAddress/d' ./*.vmx
    sed -i '/ethernet0.addressType/d' ./*.vmx
    sed -i '/ethernet1.generatedAddress/d' ./*.vmx
    sed -i '/ethernet1.addressType/d' ./*.vmx

    # Forces creation of a fresh UUID for the VM.
    echo "Forcing creation of a fresh UUID for the VM."
    sed -i '/uuid.location/d' ./*.vmx
    sed -i '/uuid.bios/d' ./*.vmx
    echo "============== Done processing .vmx file =============="

    # Register the machine so that it appears in vSphere.
    fullPath=`pwd`/${thisClone}.vmx
    #echo "fullPath:"$fullPath"==="
    #echo "fullBasePath:"$fullBasePath"==="
    #echo "{escapedPath}:"${escapedPath}"==="
    local escapedfullpath=$(echo "${fullPath}" | sed -e 's/[\/&]/\\&/g' | sed 's/ /\\ /g')
    #echo "escapedfullpath:"$escapedfullpath"==="
    vmID=`/bin/vim-cmd vmsvc/getallvms | egrep "${thisClone}" | awk '{print $1}'`
    if [ ! -z "${vmID}" ] ; then  #We found the VM was registered, so unregister it first
      echo "VM ${thisClone} already registered, checking which pool"
      echo "Too damned hard to determine which pool, assume if registered, it it the correct pool."
      #if it is not the correct pool; then
          # vim-cmd vmsvc/unregister "${vmID}"
          #echo "thisPoolID:${thisPoolID}==="
      #fi
    else
      echo "Registering ${fullPath} as ${thisClone} in resource pool ${resourcePool}" with ID ${thisPoolID}
      vmID=`vim-cmd solo/registervm "${fullPath}" "${thisClone}" "${thisPoolID}"`
    fi

    # Power on the machine if required - uncomment the following
    #vim-cmd vmsvc/power.on ${vmID}

    # Return to base directory to do next clone
    cd - &> /dev/null
 done
 echo "============== Job Completed =============="
}

main

Appendix #2: A removeClone.sh script, just in case…

Be VERY careful using this! Like the clone.sh script, it needs to be run with the -master directory name as the parameter. I could have tidied this, but I needed it quickly, so simply modified what I had. Useful if you accidentally create a hundred clones you want to remove just as quickly.

#!/bin/sh
# V1.0 2017-03-18 Chris Welsh

readonly noOfArgs=$#
#Remove trailing / of path if it has one
readonly inFolder=${1%/}
if [ "$3" = "" ] ; then
  startingNo="01"
  noOfCopies=$2
  lastCopyNo=$2
else
  startingNo=$3
  noOfCopies=$2
  lastCopyNo=$(( $2 + $3 - 1 ))
fi

usage() {
  echo ""
  echo "USAGE: ./removeClone.sh base_image_folder-master no_of_copies [starting_number]"
  echo "base_image_folder-master MUST end with the string '-master'"
  echo "Clones are assumed to have names of base_image_folder-#01, base_image_folder-#02 etc"
  echo "If starting_number is specified, clones will start numbering from that number"
  echo "Maximum cloneID=9999; maximum number of clones=99"
  }

deleteAndDestroy() {
  # Test the real (unescaped) path; skip only when the clone folder is missing
  if [ ! -d "${outFolder}" ] ; then
    echo "${outFolder} doesn't exist - skipping"
  else
    echo "Removing contents of old "${outFolder}
    ls -lh "${outFolder}"/*
    rm "${outFolder}"/*
    echo "Removing directory '${outFolder}'"
    rmdir "${outFolder}"
  fi

}

main() {
  if [  ${noOfArgs} -le 1 ] ; then
    echo "ERROR--Insufficient arguments"
    usage
    exit 1
  fi

  if [ ${noOfCopies} -ge 100 ] ; then
    # Clone copy count arbitrarily set to 99 - I don't want anyone to accidentally create hundreds of clones
    echo "ERROR--Clone copy count exceeds 99"
    usage
    exit 1
  fi

  if [ ${lastCopyNo} -ge 10000 ] ; then
    # Maximum clone copy number arbitrarily set to 9999 - could actually be set as high as 59635 before VNC TCP port numbers exceed 65535
    echo "ERROR--Clone sequence exceeds 9999 (last copy would be ${lastCopyNo})"
    usage
    exit 1
  fi

  echo "${inFolder}" | grep -q "\-master$"
  if [[ $? -ne 0 ]] ; then
    # Input filename in wrong format
    echo "ERROR--input folder MUST end with -master. You entered"
    echo "${inFolder}"
    usage
    exit 1
  fi

  echo "============== Beginning Job =============="
  local fullBasePath=$(readlink -f "${inFolder}")/
  local escapedPath=$(echo "${fullBasePath}" | sed -e 's/[\/&]/\\&/g' | sed 's/ /\\ /g')

  outFolderBase=../${inFolder/-master/\/}
  echo "Clone folder BasePath is ${outFolderBase}"
  resourcePool=${inFolder/-master/}
  thisPoolID=`sed -rn 'N; s/\ +<name>'"${resourcePool}"'<\/name>\n\ +<objID>(.+)<\/objID>/\1/p' /etc/vmware/hostd/pools.xml`

#------------------- Main Loop begins here ---------------#

  for i in $(seq -w ${startingNo} ${lastCopyNo}) ; do

    thisClone=${inFolder/master/#${i}}
    outFolder=${outFolderBase}${inFolder/master/#${i}}

    echo "=============================================================================="
    echo "Removing Clone#${i} named ${thisClone} from ${outFolder}"
    escapedClone=$(echo "${thisClone}" | sed -e 's/[\/&]/\\&/g' | sed 's/ /\\ /g')
#    vmID=`/bin/vim-cmd vmsvc/getallvms | awk -vvmname="${thisClone}" '{if ($2 == vmname) print $1}'`
     vmID=`/bin/vim-cmd vmsvc/getallvms | egrep "${thisClone}" | awk '{print $1}'`

    if [ ! -z "${vmID}" ] ; then  #We found the VM was registered, so unregister it
      echo "Powering down and unregistering vm with ID $vmID"
       # Power off the machine if required
       vim-cmd vmsvc/power.off ${vmID}
       vim-cmd vmsvc/unregister "${vmID}"
    else
      echo "No vmID found for $thisClone"
    fi

    deleteAndDestroy #Remove files and directory

  done

#------------------- Main Loop ends here ---------------#

  echo "Resource pool is ${resourcePool} with ID :${thisPoolID}:"
  if [ ! -z "${thisPoolID}" ]; then
    echo "Removing resource pool ${resourcePool}"
    vim-cmd hostsvc/rsrc/destroy "${thisPoolID}"
  else
    echo "There is no resource pool called ${resourcePool}"
  fi

  echo  "Clones removed, attempting to remove parent folder (will fail if you didn't delete all clones)"
  rmdir "${outFolderBase}"

  echo "============== Job Completed =============="
}

main

Appendix #3: Sample output from running the clone.sh script

/vmfs/volumes/573201c2-529afdbe-5824-6805ca1ca2ed/Master - Copy # ./clone.sh GNS3WB88-master 12
============== Beginning Job ==============
Output folder BasePath is ../GNS3WB88/
Creating ../GNS3WB88/
Creating resource pool :GNS3WB88:
==============================================================================
Cloning Clone#01 named GNS3WB88-#01 using VNCport=5901 to ../GNS3WB88/GNS3WB88-#01
Creating ../GNS3WB88/GNS3WB88-#01
================ Processing .vmx file ================
Delete Swap File line, will be auto recreated
Change Display Name to GNS3WB88-#01
Change VNC Port Value to 5901
Change Parent Disk Path
Forcing change of MAC addresses for up to two NICs
Forcing creation of a fresh UUID for the VM.
============== Done processing .vmx file ==============
Registering /vmfs/volumes/data/GNS3WB88/GNS3WB88-#01/GNS3WB88-#01.vmx as GNS3WB88-#01 in resource pool GNS3WB88 with ID pool0
==============================================================================

<...output omitted for next 10 clones ...>

==============================================================================
Cloning Clone#12 named GNS3WB88-#12 using VNCport=5912 to ../GNS3WB88/GNS3WB88-#12
Creating ../GNS3WB88/GNS3WB88-#12
================ Processing .vmx file ================
Delete Swap File line, will be auto recreated
Change Display Name to GNS3WB88-#12
Change VNC Port Value to 5912
Change Parent Disk Path
Forcing change of MAC addresses for up to two NICs
Forcing creation of a fresh UUID for the VM.
============== Done processing .vmx file ==============
Registering /vmfs/volumes/data/GNS3WB88/GNS3WB88-#12/GNS3WB88-#12.vmx as GNS3WB88-#12 in resource pool GNS3WB88 with ID pool0
============== Job Completed ==============

A funny thing happened in the ACI lab today…

I had a Tenant with statically configured bare metal hosts attached to interface 1/16 on both leaf 101 and leaf 102, but I came up with an “invalid-path;invalid-vlan” error on the faults page for the EPG that was being configured for leaf 102. The host attached to Leaf 101 was working, had no errors and was configured in exactly the same way!

I checked:
In the tenant, the EPG had been linked to the correct Physical Domain
In the tenant, the EPG had been linked to the correct leaf/port/vlan

In the Access Policies:
The Leaf Profile defined the correct leaf, and was linked to the correct Interface Profile
The Interface Profile had an Access Port Selector for the correct port (1/16), and the Access Port Selector was linked to an Access Port Policy Group
The Access Port Policy Group was linked to the correct Attachable Access Entity Profile
The Attachable Access Entity Profile was linked to the same Physical Domain as the EPG showing the error
The Physical Domain was linked to a VLAN Pool that included the VLAN ID being used in the EPG for the static mapping.

So I was stumped.  Every “invalid-path;invalid-vlan” error I had seen before could be solved by checking the above, so in desperation I checked things from the CLI:

apic1# show run leaf 102 interface ethernet 1/16
# Command: show running-config leaf 102 interface ethernet 1/16
# Time: Wed Mar  1 04:17:41 2017
  leaf 102
    interface ethernet 1/16
      # Policy-group configured from leaf-profile ['T5:L102-LeafProf'], leaf-interface-profile T5:L102-IntProf
      # policy-group T5:1G.CDP.LLDP-APPG
      lldp receive
      lldp transmit
      cdp enable
      vlan-domain member T5:MappedVLANs-PhysDom type phys
      switchport access vlan 2050 tenant Tenant5 application 2Tier-AP epg WebServers-EPG
      speed 1G
      negotiate auto
      link debounce time 100
      exit
    exit

“That looks a bit strange”, I thought.  “I don’t normally see the lldp and cdp policies etc.”  But there was nothing in the config that was wrong; nonetheless, I thought I’d compare with the same port on the other leaf.

apic1# show run leaf 101 interface ethernet 1/16
# Command: show running-config leaf 101 interface ethernet 1/16
# Time: Wed Mar  1 04:18:02 2017
  leaf 101
    interface ethernet 1/16
      # Policy-group configured from leaf-profile ['T5:L101-LeafProf'], leaf-interface-profile T5:L101-IntProf
      # policy-group T5:1G.CDP.LLDP-APPG
      switchport access vlan 2050 tenant Tenant5 application 2Tier-AP epg AppServer-EPG
      exit
    exit

Now this looks much like I expect. And at this stage, this is the only indication that the configuration on 102/1/16 is not quite “normal”. So what I tried next was to see if I could remove the “extra” lines of config on leaf 102. Since there is no default interface command in ACI-NX-OS, I tried manually removing the cdp, lldp etc. config:

apic1(config)# leaf 102
apic1(config-leaf)# default ?
apic1(config-leaf)# default inter
Command entered is not APIC NX-OS style CLI.Trying shell command…

apic1(config-leaf)# interface ethernet 1/16
apic1(config-leaf-if)# shut
apic1(config-leaf-if)# no lldp receive
apic1(config-leaf-if)# no lldp transmit
apic1(config-leaf-if)# no cdp  enable
apic1(config-leaf-if)# no vlan-domain member T5:MappedVLANs-PhysDom type phys
apic1(config-leaf-if)# no speed 1G
apic1(config-leaf-if)# no negotiate auto
apic1(config-leaf-if)# no link debounce time 100
apic1(config-leaf-if)# no shutdown

Better see if that worked!

apic1(config-leaf-if)# show run leaf 102 interface ethernet 1/16
# Command: show running-config leaf 102 interface ethernet 1/16
# Time: Wed Mar  1 04:28:16 2017
  leaf 102
    interface ethernet 1/16
      no lldp receive
      no lldp transmit
      no cdp enable
      speed auto
      no negotiate auto
      link debounce time 100
      exit
    exit

Clearly that didn’t work as intended. And by now I’d removed the interface selector for interface 1/16 from the interface profile for Leaf 102 as well, so there should have been no association with any lldp, cdp etc. policies – except for one thing: I’d forgotten that when you do anything in the CLI, it automatically starts creating pesky objects with names beginning with __ui, and I could see these in the GUI – but I knew how to get rid of those thanks to this post.

Note:RedPoint Unless Daniel has updated his blog, you will see that one command I used was different from the one in the link above: Daniel’s blog says to use a moconfig delete command, when in fact it should be moconfig commit.

And that’s what I did!

apic1# for i in `find *__ui*`
for> do
for> echo "removing $i"
for> modelete $i
for> done
removing attentp-__ui_l102_eth1--16
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing attentp-__ui_l102_eth1--16/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
...<snip>....

apic1# moconfig commit
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Committing mo 'uni/infra/lldpIfP-__ui_l102_eth1--16'
Committing mo 'uni/infra/hintfpol-__ui_l102_eth1--16'
Committing mo 'uni/infra/cdpIfP-__ui_l102_eth1--16'
Committing mo 'uni/infra/attentp-__ui_l102_eth1--16'

All mos committed successfully.
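
Before re-checking the CLI config, you can also confirm from the APIC bash shell that no __ui objects remain. A minimal sketch – the class names (infraAttEntityP, lldpIfPol, cdpIfPol) are inferred from the MOs committed above:

apic1# moquery -c infraAttEntityP | grep __ui
apic1# moquery -c lldpIfPol | grep __ui
apic1# moquery -c cdpIfPol | grep __ui

No output from any of these means the pesky objects really are gone.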

Re-checking the CLI config showed:

apic1(config-leaf-if)# show run leaf 102 interface ethernet 1/16
# Command: show running-config leaf 102 interface ethernet 1/16
# Time: Wed Mar 1 04:42:46 2017
 leaf 102
 interface ethernet 1/16
 exit
 exit

Brilliant! It certainly looks “correct” although I have no idea why this config should be any more “correct” than what I saw earlier, except…
… when I re-created the Interface Selector for access port 1/16 on leaf 102, and reassigned the same interface to the EPG in the tenant config (in other words restored the previous config) – all errors disappeared and the config worked!

Now it could have been that running the script to remove the pesky __ui items actually removed some other junk that was causing a problem, but whatever caused the error is a mystery!

One of the strangest mysteries I have encountered in the ACI lab so far.

RedNectar

Posted in Access Policies, ACI, ACI configuration, APIC, Data Center, Data Centre

Configuring In-Band Management for the APIC on Cisco ACI (Part #3-via a L3Out)

Note:RedPoint This is the third and last in a series of articles – the following is a variation of the first and second in the series. Much of the story is identical – but with a few added extras to configure the L3 out rather than an L2 out or Application Profile as with the EPG approach.

Anyone unlucky enough to try and configure In-Band management on the Cisco APIC will have probably realised that it is not a simple task. Which is probably why many Cisco support forum experts advise using out of band (oob) management instead [link].

And anyone unlucky enough to try and decipher Cisco’s official documentation for configuring In-Band management on the Cisco APIC or watch their pathetic video (which simply does not work – it does not complete the job) is probably feeling frustrated to the point of giving up.

Let me ease your frustration and take you through a journey showing you how to configure In-Band management for ACI in a variety of ways:

  1. Via an EPG (in the mgmt Tenant) (Part#1 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API
  2. Via an external bridged network (L2 Out) (Part#2 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API
  3. Via an external routed network (L3 Out) (This article)
    1. using the GUI
    2. using the CLI
    3. using the API
    4. Appendix: Configuring L3 Out Interface Profiles with VLANs (Coming Soon)

In-Band Management Via an external routed network (L3 Out) in the mgmt Tenant

Let’s begin with a diagram showing my test setup for the L3Out approach.  It is somewhat different to the previous designs because an external router is involved, so there are no direct connections between the Nexus 9K Leaf switches and either the VMM Server or the Mgmt Host.

IP addressing for the Leaf and Spine switches will use the switch ID in the fourth octet of the 192.168.99.0/24 network. E.g., Spine 201 will be 192.168.99.201. The default gateway address to be configured on the inb Bridge Domain in the mgmt tenant will be 192.168.99.1.

So let me plan exactly what will need to be done:

The Access Policy Chain

I’ll need to allocate VLAN IDs for the internal inband management EPG (VLAN 100) and, in case I decide to use an SVI or a Routed Sub-Interface for the L3EPG, I’ll include another VLAN too (VLAN 99). I’ll put them in a VLAN Pool, which will connect to an External Layer 3 Domain, which will need to link to an AEP that has appropriate Access Port Policy Group assignments linking the AEP to the relevant attachment ports of the APICs, the vCenter host and the ACI Management host. Like the picture shows.


Curiously, in the previous method of directly attaching an EPG to the leaves, I created a Physical Domain to contain the VLANs, and it linked the physical ports where the APICs attach (via the AEP > APPG > [Interface Profile + Leaf Profile]). Last time I used an External L2 Domain – and it still worked! This time, I used an External L3 Domain rather than the Physical Domain – but again this still worked. So it seems that as far as the APIC attached ports are concerned, so long as they have a link to the relevant VLANs, it doesn’t matter whether it is via a Physical Domain, an External L2 Domain or an External L3 Domain.

The mgmt Tenant

In the mgmt Tenant there are a number of tasks I’ll have to do.

I’ll need to create a special EPG called an In-band EPG.  This will have to be done before assigning the static addresses I want to the APICs, Leaves and Spines.

I’ll assign the default gateway IP address to the pre-defined inb Bridge Domain in the mgmt Tenant, and then create a L3 External Routed Network (L3 Out) for my external router’s connection and assign port Ethernet 1/1 on Leaf101 to that L3 Out. Initially I’ll use a Routed interface, rather than an SVI or Routed Sub Interface so I won’t need any VLAN associations, but I will configure those in an Appendix.

To be able to consume a contract, I’ll also of course have to create a L3EPG which I will name 0.0.0.0:0-L3EPG to reflect the function and range of IP addresses accessible via this L3 Out.

Finally, I’ll need to create a Contract (inband.MgmtServices-Ct) which will use the common/default filter to allow all traffic, and of course I’ll have to link the contract to the special In-Band EPG (provider) and the 0.0.0.0:0-L3EPG (consumer) mentioned above.

Again, a picture tells the story:

If all goes well, when both the Access Policies and the Tenant configuration are complete, the APIC will be able to manage the vCenter VMM, and the Management Station bare metal server will be able to manage the ACI fabric via the APIC IP addresses.

Enough of design, time to start configuring!

Step-by-Step: Configuring In-Band management via a L3 Out using the GUI

Conventions

Cisco APIC Advanced GUI Menu Selection sequences are displayed in Bolded Blue text, with >+ meaning Right-click and select so that the following line:
Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool
should be interpreted as:
From the Cisco APIC Advanced GUI Main Menu, select Fabric
From the sub-menu, select Access Policies
In the Navigation Pane, expand Pools, then on the VLAN sub-item, right-click and select Create VLAN Pool.
If a particular tab in the Work Pane needs to be selected, it will be inserted into the sequence in square brackets, such as:
… > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab 
Within the Work Pane and within some dialogues, it will be necessary to click on a + icon to add an item. This is indicated by a (+) followed by the name of the item that needs to be added, so that:
(+) Interface Selectors:
should be interpreted as
Click the + icon adjacent the Interface Selectors: prompt.
Text that needs to be typed at prompts is presented in  orange italicised bold text, while items to be selected from a drop down menu or by clicking options on the screen are shown in bolded underlined text.
Options like clicking OK, UPDATE or SUBMIT are assumed, so not specifically stated unless required between sub-steps. Use your intelligence.

Part 1: Set the Connectivity Preference for the pod to ooband

Firstly, since the default interface to use for external connections is the inband interface, I’m going to set the Connectivity Preference for the pod to ooband – just in case I lose access to the management GUI while configuring this.

Fabric > Fabric Policies > Global Policies > Connectivity Preferences

Interface to use for external connections: ooband
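
To verify the change took, you can query the very object this GUI page edits from the APIC bash shell – a minimal sketch using moquery (the DN matches the connectivityPrefs.xml snippet used later in the API section):

apic1# moquery -d uni/fabric/connectivityPrefs | grep interfacePref
# Expect: interfacePref : ooband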

Part 2: Configure the Access Policy Chain

This is a long slog – if you are not familiar with Cisco ACI Access Policies, you might want to read my earlier ACI Tutorials, especially Tutorial #4.

Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool

Name: inband-VLAN.Pool
Allocation Mode: Static Allocation
(+) Encap Blocks:
Range: VLAN 99 – VLAN 100

Note:RedPoint In this tutorial I am using a Routed Interface in my L3Out, which will not require a VLAN allocation. But later I am planning on exploring SVI and Routed Sub-Interfaces so I’ve included VLAN 99 in the range as well for that exploration.

Fabric > Access Policies > Physical and External Domains > External Routed Domains >+ Create Layer 3 Domain

Name: inband-ExtL3Dom
VLAN Pool: inband-VLAN.Pool

Fabric > Access Policies > Global Policies > Attachable Access Entity Profiles >+ Create Attachable Access Entity Profile

Name: inband-AEP
(+) Domains (VMM, Physical or External) To Be Associated To Interfaces:
Domain Profile: inband-ExtL3Dom

Fabric > Access Policies > Interface Policies > Policies > LLDP Interface >+ Create LLDP Interface Policy

Name: Enable-LLDP
[Leave default values – I just want to have a policy that spells out that LLDP is enabled]

Fabric > Access Policies > Interface Policies > Policy Groups >Leaf Policy Groups >+ Create Leaf Access Port Policy Group

Name: inband.LLDP-APPG
LLDP Policy: Enable-LLDP
Attached Entity Profile: inband-AEP

Fabric > Access Policies > Interface Policies > Profiles >Leaf Profiles >+ Create Leaf Interface Profile

Name: L101-IntProf
(+) Interface Selectors:
Name: 1:1
Description: Router
Interface IDs: 1/1
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Now repeat for Leaf102 – this time just add the APIC ports

Fabric > Access Policies > Interface Policies > Profiles >Leaf Profiles >+ Create Leaf Interface Profile

Name: L102-IntProf
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Fabric > Access Policies > Switch Policies > Profiles >Leaf Profiles >+ Create Leaf Profile

Name: L101-LeafProf
(+) Leaf Selectors:
Name: Leaf101
Blocks: 101
UPDATE > NEXT
[x] L101-IntProf

And again for leaf 102

Fabric > Access Policies > Switch Policies > Profiles >Leaf Profiles >+ Create Leaf Profile

Name: L102-LeafProf
(+) Leaf Selectors:
Name: Leaf102
Blocks: 102
UPDATE > NEXT
[x] L102-IntProf

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

Before I can assign static IP addresses to an APIC or switch, the GUI forces me to create a Node Management EPG, so begin by creating one – I’ll use the name Default because I don’t expect I’ll ever need another, but I’ll use an upper-case D to distinguish it from system created defaults, which always use a lowercase d.

Tenants > Tenant mgmt > Node Management EPGs >+ Create In-Band Management EPG

Name: Default
Encap: vlan-100
Bridge Domain: inb

Now I can create the Static Node Management Addresses.

Tenants > Tenant mgmt > Node Management Addresses > Static Node Management Addresses >+ Create Static Node Management Addresses

Node Range: 1 – 3
Config: In-Band Addresses
In-Band Management EPG: Default
In-Band IPV4 Address: 192.168.99.111/24
In-Band IPV4 Gateway: 192.168.99.1/24

[Tip: If you are following my steps, ignore the warning (as shown below).  I already set the Interface to use for external connections to ooband, and in spite of what the warning implies, your preference for management will NOT switch to In-Band]

inbabd-warning

Tedious as it was, I resisted the temptation to resort to the CLI, and repeated the above step for Nodes 101-102 and 201-202.

That default gateway IP address I defined on the nodes will reside in the inb Bridge Domain.

Tenants > Tenant mgmt > Networking > Bridge Domains > inb > Subnets  >+ Create subnet

Gateway IP: 192.168.99.1/24
Scope: [x] Advertised Externally

That took care of the internal network, except that I will have to come back to the inb Bridge Domain to link it to the L3Out after I’ve created it.

At this stage the APICs were able to ping the default gateway and the Leaf switches, verifying that the configuration was valid, although I was not able to ping the Spine switches.  However, I took heart from this video and assumed that all was OK.

	apic1# ping -c 3 192.168.99.1
	PING 192.168.99.1 (192.168.99.1) 56(84) bytes of data.
	64 bytes from 192.168.99.1: icmp_seq=1 ttl=63 time=2.86 ms
	64 bytes from 192.168.99.1: icmp_seq=2 ttl=63 time=0.827 ms
	64 bytes from 192.168.99.1: icmp_seq=3 ttl=63 time=0.139 ms

	--- 192.168.99.1 ping statistics ---
	3 packets transmitted, 3 received, 0% packet loss, time 2002ms
	rtt min/avg/max/mdev = 0.139/1.276/2.862/1.156 ms
	apic1# ping -c 3 192.168.99.101
	PING 192.168.99.101 (192.168.99.101) 56(84) bytes of data.
	64 bytes from 192.168.99.101: icmp_seq=1 ttl=63 time=0.969 ms
	64 bytes from 192.168.99.101: icmp_seq=2 ttl=63 time=0.176 ms
	64 bytes from 192.168.99.101: icmp_seq=3 ttl=63 time=0.209 ms

	--- 192.168.99.101 ping statistics ---
	3 packets transmitted, 3 received, 0% packet loss, time 2000ms
	rtt min/avg/max/mdev = 0.176/0.451/0.969/0.366 ms
	apic1# ping -c 3 192.168.99.201
	PING 192.168.99.201 (192.168.99.201) 56(84) bytes of data.
	From 192.168.99.111 icmp_seq=1 Destination Host Unreachable
	From 192.168.99.111 icmp_seq=2 Destination Host Unreachable
	From 192.168.99.111 icmp_seq=3 Destination Host Unreachable

	--- 192.168.99.201 ping statistics ---
	3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3005ms
	

I’ll need a contract to put between the L3EPG and the special management In-Band EPG – life will be easier if I create that first.

Tenants > Tenant mgmt > Security Policies > Contracts  >+ Create Contract

Name: inband.MgmtServices-Ct
Scope: VRF [Default]
(+) Subjects:
Name: inband.MgmtServices-Subj
Filter Chain
(+) Filters
Name: common/default

Now to create the L3Out, the Node Profile and the L3EPG

Tenants > Tenant mgmt > Networking > External Routed Networks  >+ Create Routed Outside

Name: inband.OSPF-L3Out
VRF: mgmt/inb
External Routed Domain: inband-ExtL3Dom
[x] OSPF
OSPF Area ID: 1
OSPF Area Type: Regular Area
(+) Nodes And Interfaces Protocol Profiles
Name: Leaf101-OSPF.NodeProf
(+) Nodes
Node ID: 101
Router ID: 1.1.1.1
OK > OK > NEXT
(+) External EPG Networks
Name: 0.0.0.0:0-L3EPG
(+) Subnet
IP Address: 0.0.0.0/0

You will have noticed that during the process above I did not include a step to add the Interface Profile – I did this because I wanted to explore the three different options for Interface Profiles – Routed Interface, SVI Interface and Routed Sub-Interface.

Firstly, I’ll explore the Routed Interface option, and look at the other options in an Appendix to this article.

Tenants > Tenant mgmt > Networking > External Routed Networks  > inband.OSPF-L3Out > Logical Node Profiles > Leaf101-OSPF.NodeProf  > Logical Interface Profiles >+ Create Interface Profile

Name: OSPF-IntProf
Interfaces
(+) Routed Interfaces:
Path: topology/pod-1/paths-101/pathep-[eth1/1]
IPv4 Primary / IPv6 Preferred Address: 172.16.2.2/30
MTU (bytes): 1500

Note:RedPoint At this point, since my external router is configured with a routed interface configured with OSPF and an IP of 172.16.2.1/30 I will also check that the OSPF adjacency has come up by navigating to Tenants > Tenant mgmt > Networking > External Routed Networks  > inband.OSPF-L3Out > Logical Node Profiles > Leaf101-OSPF.NodeProf  > Configured Nodes > topology/pod-1/node-101 > OSPF for VRF mgmt:inb and check that I have a neighbour in the list of neighbors in the Work Pane.
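
If you prefer the CLI for this check, the same information is available via the fabric command – a sketch, assuming the NX-OS style CLI of recent APIC releases:

apic1# fabric 101 show ip ospf neighbors vrf mgmt:inb
# Expect the external router (172.16.2.1) listed with a state of FULL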

Have the L3EPG consume the contract I created earlier:

Tenants > Tenant mgmt > Networking > External Routed Networks  > inband.OSPF-L3Out > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab 

(+) Consumed Contracts:
Name: inband.MgmtServices-Ct

And the In-Band EPG Provide it:

Tenants > Tenant mgmt >Node Management EPGs > In-Band EPG Default 

(+) Provided Contracts:
Name: inband.MgmtServices-Ct

And finally, I’ll have to link the L3Out to the inb Bridge Domain so that the APIC knows which L3Out to use when advertising the 192.168.99.0/24 network externally.

Tenants > Tenant mgmt > Networking > Bridge Domains > inb > [Policy] tab > [L3 Configurations] tab

(+) Associated L3 Outs:
L3 Out:  mgmt/inband.OSPF-L3Out

Time to test!

To be confident that I will now be able to deploy a VMM Domain with connectivity to the Virtual Machine Manager (vCenter in my case), I’ll ping the VMM server from the APIC, only this time I’ll tell the APIC to use the inband management interface using the -I ping option (or reconfigure the Connectivity Preferences to use the inband interface for external connections rather than the ooband interface which I configured in Part #1).

		apic1# ping -c3 -I 192.168.99.111 172.16.99.99
		PING 172.16.99.99 (172.16.99.99) from 192.168.99.111 : 56(84) bytes of data.
		64 bytes from 172.16.99.99: icmp_seq=1 ttl=61 time=0.374 ms
		64 bytes from 172.16.99.99: icmp_seq=2 ttl=61 time=0.403 ms
		64 bytes from 172.16.99.99: icmp_seq=3 ttl=61 time=0.391 ms

		--- 172.16.99.99 ping statistics ---
		3 packets transmitted, 3 received, 0% packet loss, time 2000ms
		rtt min/avg/max/mdev = 0.374/0.389/0.403/0.020 ms
		

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address:

apic-access

Step-by-Step: Configuring In-Band management via a L3 Out using the CLI

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail.  The following commands are entered in configuration mode.


Part 1: Set the Connectivity Preference for the pod to ooband

mgmt_connectivity pref ooband

Part 2: Configure the Access Policy Chain

# First, create the VLAN Pool and External L3 Domain
# If you type the command below, you may notice a curious thing -
# at the point where the word "type" appears, if you press "?"
# you will see options for <CR> and "dynamic", but not "type".
# In other words, "type" is a hidden option - I discovered it
# by creating a domain in the GUI and looking at the running
# config later.
  vlan-domain inband-ExtL3Dom type l3ext
    vlan-pool inband-VLAN.Pool
    vlan 99-100
    exit

# And an Access Port Policy Group linked to the inband-ExtL3Dom
  template policy-group inband.LLDP-APPG

# Another curious thing with the CLI is that there is no way
# to create an AEP - one gets created for you whether you
# want it or not when you link the APPG to the Domain in the
# following command.
    vlan-domain member inband-ExtL3Dom type l3ext
    exit

# Not necessary to create an Interface Policy to Enable-LLDP in the
# CLI, Interface Policies are applied directly to the interfaces

# Now the Leaf Profiles, Interface Profiles and Port Selectors
  leaf-profile L101-LeafProf
    leaf-group Leaf101
      leaf 101
      exit
    leaf-interface-profile L101-IntProf
    exit
  leaf-profile L102-LeafProf
    leaf-group Leaf102
      leaf 102
      exit
    leaf-interface-profile L102-IntProf
    exit

  leaf-interface-profile L101-IntProf
    leaf-interface-group 1:1
      description 'Router'
      interface ethernet 1/1
      policy-group inband.LLDP-APPG
      exit
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

  leaf-interface-profile L102-IntProf
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

# Node IP addressing is configured OUTSIDE the mgmt
# Tenant in the CLI, so I'll do the mgmt Tenant bits
# first, in the order that best fits - defining the
# contract first means I can configure the AP in one hit

  tenant mgmt
    contract inband.MgmtServices-Ct
      subject inband.MgmtServices-Subj
        access-group default both
        exit
      exit

    l3out inband.OSPF-L3Out
      vrf member inb
      exit

    external-l3 epg 0.0.0.0:0-L3EPG l3out inband.OSPF-L3Out
      vrf member inb
      match ip 0.0.0.0/0
      contract consumer inband.MgmtServices-Ct
      exit

    inband-mgmt epg Default
      contract provider inband.MgmtServices-Ct
      bridge-domain inb
      vlan 100
      exit

    interface bridge-domain inb
      ip address 192.168.99.1/24 secondary scope public
      exit
    exit

# Now the Node IP addressing

  controller 1
    interface inband-mgmt0
      ip address 192.168.99.111/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 2
    interface inband-mgmt0
      ip address 192.168.99.112/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 3
    interface inband-mgmt0
      ip address 192.168.99.113/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit

  switch 101
    interface inband-mgmt0
      ip address 192.168.99.101/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 102
    interface inband-mgmt0
      ip address 192.168.99.102/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 201
    interface inband-mgmt0
      ip address 192.168.99.201/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 202
    interface inband-mgmt0
      ip address 192.168.99.202/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit

# Finally, apply routing configuration to 
# leaf 101 eth1/1

  leaf 101
 
    vrf context tenant mgmt vrf inb l3out inband.OSPF-L3Out
      router-id 1.1.1.1
      route-map inband.OSPF-L3Out_out
        match bridge-domain inb
          exit
        exit
      exit
 
# The CLI gets itself into a bit of a Catch-22 here
# When complete, you will see a line:
# ip router ospf default area 0.0.0.1 
# under the configuration of interface ethernet 1/1, but if I
# try to enter it before configuring the "router ospf default"
# section below, I get an error.
#
# Similarly, if I try to configure the "router ospf default"
# section before configuring the vrf under the ethernet 1/1
# interface, I also get an error.

    interface ethernet 1/1
      no switchport
      vrf member tenant mgmt vrf inb l3out inband.OSPF-L3Out
      mtu 1500
      ip address 172.16.2.2/30
      exit

   router ospf default
      vrf member tenant mgmt vrf inb
        area 0.0.0.1 l3out inband.OSPF-L3Out
# I have no idea why a line saying "area 0.0.0.1 nssa"
# turns up in the config, but it does, so I had to also
# enter the following line.
        no area 0.0.0.1 nssa
        exit
      exit

# Note how I had to then return to interface configuration mode to 
# complete the config AFTER having done the "router ospf default"
# section
    interface ethernet 1/1
      ip router ospf default area 0.0.0.1
      exit
    exit
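
Before testing, it is worth confirming the interface configuration landed as expected, using the same show run style seen earlier in this post – a sketch; your output may include a few extra defaults:

apic1# show run leaf 101 interface ethernet 1/1
# Expect: no switchport, the vrf membership, ip address 172.16.2.2/30
# and "ip router ospf default area 0.0.0.1"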

Time to test!

To be confident that I will now be able to manage the APIC from my management host, I’ll ping the Mgmt Host from the APIC.

		apic1# ping -c 3 192.168.99.10
		PING 192.168.99.10 (192.168.99.10) 56(84) bytes of data.
		64 bytes from 192.168.99.10: icmp_seq=1 ttl=64 time=0.458 ms
		64 bytes from 192.168.99.10: icmp_seq=2 ttl=64 time=0.239 ms
		64 bytes from 192.168.99.10: icmp_seq=3 ttl=64 time=0.238 ms

		--- 192.168.99.10 ping statistics ---
		3 packets transmitted, 3 received, 0% packet loss, time 1999ms
		rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms
		

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address – only this time for a change I’ll use ssh to access, and access APIC#2

sshaccess

One interesting thing to note in the CLI configuration is that nowhere do you create an Attachable Access Entity Profile (AEP).  But, when you enter the above commands, one miraculously appears (called __ui_pg_inband.LLDP-APPG) when you view the GUI.

miracluousaep-l2ext

Another myriad of mysteries happens in the mgmt Tenant, even if you go through the CLI config from a clean configuration. While entering the commands above in the CLI, the APIC will automatically add an Application Profile (called default) with an EPG (also called default).  But it doesn’t stop there! There is also another Node Management EPG (called default) magically created, and a mystical contract (called inband-default-contract) with a link to a mysterious filter (called inband-default). I have no idea why, but here are some commands to clean up the crap left behind.

		# Remove crap left behind by previous CLI commands
		tenant mgmt
		no application default
		no contract inband-default-contract
		no inband-mgmt epg default
		no access-list inband-default
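
To confirm the leftovers are really gone, a quick moquery sketch works – fvAp is the Application Profile class, and vzBrCP the contract class used elsewhere in this post. No matching output (or “No Mos found”) means the cleanup worked:

apic1# moquery -c fvAp -f 'fv.Ap.name=="default"'
apic1# moquery -c vzBrCP | grep inband-default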
		

Step-by-Step: Configuring In-Band management via a L3 Out using the API

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail.  The following sections can be saved to a text file (with a .xml extension) and posted to your config using the GUI (using right-click > Post …), or you can copy and paste the sections below into Postman.


Right-click > Post … Tutorial

Assume one of the sections below is stored in a text file with a .xml extension such as connectivityPrefs.xml

In the APIC GUI, any configuration item that has Post … as one of the right-click options can be used to post the file.

post

The contents of the .xml file must be posted to the uni Parent Distinguished Name (DN) as shown below:

posttouni

The configuration defined in the .xml file will have been pushed into your config:

unpdatedconnpref

End of tutorial


Part 1: Set the Connectivity Preference for the pod to ooband

		<?xml version="1.0" encoding="UTF-8"?>
		<!-- connectivityPrefs.xml -->
		<mgmtConnectivityPrefs dn="uni/fabric/connectivityPrefs" interfacePref="ooband"/>
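
If you’d rather script the post than right-click in the GUI, the same file can be pushed with curl. A minimal sketch – the APIC address (https://apic1) and the admin credentials are placeholders, so substitute your own:

# Authenticate and save the session cookie (placeholder credentials)
curl -sk https://apic1/api/aaaLogin.json -c cookie.txt \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}'
# Post the snippet to the uni parent DN
curl -sk https://apic1/api/mo/uni.xml -b cookie.txt \
  --data-binary @connectivityPrefs.xml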
		

Part 2: Configure the Access Policy Chain

Save each of these snippets in a separate .xml file and post one at a time.  Or use Postman and copy and paste.

		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create the VLAN Pool -->
		<fvnsVlanInstP allocMode="static" dn="uni/infra/vlanns-[inband-VLAN.Pool]-static" name="inband-VLAN.Pool">
			<fvnsEncapBlk from="vlan-99" to="vlan-100"/>
		</fvnsVlanInstP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create the External L3 Domain, assign it the VLAN Pool -->
		<l3extDomP dn="uni/l3dom-inband-ExtL3Dom" name="inband-ExtL3Dom">
			<infraRsVlanNs tDn="uni/infra/vlanns-[inband-VLAN.Pool]-static"/>
		</l3extDomP>
		
		<!-- Create an Attachable Access Entity Profile (AEP) -->
		<infraAttEntityP descr="" dn="uni/infra/attentp-inband-AEP" name="inband-AEP">
			<infraRsDomP tDn="uni/l3dom-inband-ExtL3Dom"/>
		</infraAttEntityP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create an Enable-LLDP Interface Policy -->
		<lldpIfPol adminRxSt="enabled" adminTxSt="enabled" dn="uni/infra/lldpIfP-Enable-LLDP" />
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create an Access Port Policy Group -->
		<infraAccPortGrp dn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG" name="inband.LLDP-APPG">
			<infraRsAttEntP tDn="uni/infra/attentp-inband-AEP"/>
			<infraRsLldpIfPol tnLldpIfPolName="Enable-LLDP"/>
		</infraAccPortGrp>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Two Interface Profiles will be needed - first one for Leaf101 -->
		<infraAccPortP dn="uni/infra/accportprof-L101-IntProf" name="L101-IntProf">
			<!-- Add an interface selector for the External Router -->
			<infraHPortS descr="Router" name="1:1" type="range">
				<infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
				<infraPortBlk fromCard="1" fromPort="1" name="block1" toCard="1" toPort="1"/>
			</infraHPortS>
			<!-- Add the ports where the APICs are connected -->
			<infraHPortS descr="APICs" name="1:46-48" type="range">
				<infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
				<infraPortBlk fromCard="1" fromPort="46" name="block1" toCard="1" toPort="48"/>
			</infraHPortS>
		</infraAccPortP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Another Interface Profile for Leaf102 -->
		<infraAccPortP dn="uni/infra/accportprof-L102-IntProf" name="L102-IntProf">
			<!-- Add the ports where the APICs are connected -->
			<infraHPortS descr="APICs" name="1:46-48" type="range">
				<infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
				<infraPortBlk fromCard="1" fromPort="46" name="block2" toCard="1" toPort="48"/>
			</infraHPortS>
		</infraAccPortP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
		<infraNodeP dn="uni/infra/nprof-L101-LeafProf" name="L101-LeafProf">
			<infraLeafS name="Leaf101" type="range">
				<infraNodeBlk name ="Default" from_="101" to_="101"/>
			</infraLeafS>
			<infraRsAccPortP tDn="uni/infra/accportprof-L101-IntProf"/>
		</infraNodeP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
		<infraNodeP dn="uni/infra/nprof-L102-LeafProf" name="L102-LeafProf">
			<infraLeafS name="Leaf102" type="range">
				<infraNodeBlk name ="Default" from_="102" to_="102"/>
			</infraLeafS>
			<infraRsAccPortP tDn="uni/infra/accportprof-L102-IntProf"/>
		</infraNodeP>
		

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

<?xml version="1.0" encoding="UTF-8"?>
<!-- api/policymgr/mo/.xml -->
<polUni>
	<fvTenant name="mgmt">
		<mgmtMgmtP name="default">
			<!-- Create a Node Management EPG -->
			<mgmtInB encap="vlan-100" name="Default">
				<!-- Assign Addresses for APICs In-Band management network -->
				<mgmtRsInBStNode addr="192.168.99.111/24" gw="192.168.99.1" tDn="topology/pod-1/node-1"/>
				<mgmtRsInBStNode addr="192.168.99.112/24" gw="192.168.99.1" tDn="topology/pod-1/node-2"/>
				<mgmtRsInBStNode addr="192.168.99.113/24" gw="192.168.99.1" tDn="topology/pod-1/node-3"/>
				<!-- Assign Addresses for switches In-Band management network -->
				<mgmtRsInBStNode addr="192.168.99.101/24" gw="192.168.99.1" tDn="topology/pod-1/node-101"/>
				<mgmtRsInBStNode addr="192.168.99.102/24" gw="192.168.99.1" tDn="topology/pod-1/node-102"/>
				<mgmtRsInBStNode addr="192.168.99.201/24" gw="192.168.99.1" tDn="topology/pod-1/node-201"/>
				<mgmtRsInBStNode addr="192.168.99.202/24" gw="192.168.99.1" tDn="topology/pod-1/node-202"/>
				<mgmtRsMgmtBD tnFvBDName="inb"/>
				<!-- The Node Management EPG will be the provider for the Contract -->
				<fvRsProv tnVzBrCPName="inband.MgmtServices-Ct"/>
			</mgmtInB>
		</mgmtMgmtP>
		<!-- Create the Contract Assigned to the Default Node Management EPG -->
		<vzBrCP name="inband.MgmtServices-Ct" scope="context">
			<vzSubj name="inband.MgmtServices-Subj">
				<!-- Use the common/default filter -->
				<vzRsSubjFiltAtt directives="" tnVzFilterName="default"/>
			</vzSubj>
		</vzBrCP>
		<!-- Assign IP address to inb BD -->
		<fvBD name="inb">
			<fvRsBDToOut tnL3extOutName="inband.OSPF-L3Out"/>
			<fvSubnet ip="192.168.99.1/24" scope="public"/>
		</fvBD>
		<!-- Create the External L3 Network (L3 Out) and L3EPG -->
		<l3extOut name="inband.OSPF-L3Out">
			<l3extLNodeP name="Leaf101-OSPF.NodeProf">
				<l3extRsNodeL3OutAtt rtrId="1.1.1.1" rtrIdLoopBack="yes" tDn="topology/pod-1/node-101"/>
				<l3extLIfP name="OSPF-IntProf">
					<ospfIfP>
						<ospfRsIfPol tnOspfIfPolName=""/>
					</ospfIfP>
					<l3extRsPathL3OutAtt addr="172.16.2.2/30" ifInstT="l3-port" mode="regular" mtu="1500" tDn="topology/pod-1/paths-101/pathep-[eth1/1]"/>
				</l3extLIfP>
			</l3extLNodeP>
			<l3extRsEctx tnFvCtxName="inb"/>
			<l3extRsL3DomAtt tDn="uni/l3dom-inband-ExtL3Dom"/>
			<l3extInstP name="0.0.0.0:0-L3EPG">
				<fvRsCons tnVzBrCPName="inband.MgmtServices-Ct"/>
				<l3extSubnet ip="0.0.0.0/0"/>
			</l3extInstP>
			<ospfExtP areaId="0.0.0.1" areaType="regular"/>
		</l3extOut>
	</fvTenant>
</polUni>
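
Once posted, the config can be read straight back through the same API. A minimal sketch using icurl from the APIC itself (icurl queries the local API without authentication; mgmtRsInBStNode is the class used for the static node addresses above):

apic1# icurl 'http://localhost:7777/api/class/mgmtRsInBStNode.xml'
# Expect one object per node, echoing the addr and gw values posted above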

The fact that the login screen comes up is proof that the Mgmt Host has connectivity to the APICs.

visorelogin

Appendix: Configuring L3 Out Interface Profiles with VLANs

Coming Soon


That completes this series of tutorials for configuring In-Band Management on the APIC for Cisco ACI.  Don’t forget to share and like and rate each article to make it easier for others to find when searching for help!

RedNectar

Note:RedPoint If you would like the author or one of my colleagues to assist with the setup of your ACI installation, contact acimentor@housley.com.au and refer to this article. Housley works mainly around APJC, but is not restricted to this area.

References:

Cisco’s official ACI management documentation – I have informed Cisco of the fact that this documentation is not up to scratch – hopefully it will be fixed soon.

The Cisco APIC NX-OS Style Command-Line Interface Configuration Guide – especially the chapter on Configuring Management Interfaces was particularly helpful – much better than the reference above.

Also Cisco’s ACI Troubleshooting Book had a couple of hints about how things hang together.

Carl Niger’s youtube video series was helpful – I recommend it to you.

Cisco’s pathetic video on configuring In-Band management is simply not worth wasting your time on.  But it’s included here since I referred to it.

Posted in ACI, ACI API, ACI CLI, ACI configuration, ACI inband management tutorials, ACI Tutorial, APIC, Cisco, Data Center, Data Centre, EPG, In-Band management, inband management, L2 Out, L2out, L3 Out, L3out, Postman, tutorial

Configuring In-Band Management for the APIC on Cisco ACI (Part #2-via a L2Out)

Note:RedPoint This is the second in a series of articles – the following is a variation of the first in the series.  In fact, the whole story is almost identical – it is just that this one uses a L2 out approach rather than an EPG approach.

Anyone unlucky enough to try and configure In-Band management on the Cisco APIC will have probably realised that it is not a simple task. Which is probably why many Cisco support forum experts advise using out of band (oob) management instead [link].

And anyone unlucky enough to try and decipher Cisco’s official documentation for configuring In-Band management on the Cisco APIC or watch their pathetic video (which simply does not work – it does not complete the job) is probably feeling frustrated to the point of giving up.

Let me ease your frustration and take you through a journey showing you how to configure In-Band management for ACI in a variety of ways:

  1. Via an EPG (in the mgmt Tenant) (Part#1 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API
  2. Via an external bridged network (L2 Out) (This article)
    1. using the GUI
    2. using the CLI
    3. using the API
  3.  Via an external routed network (L3 Out) (Part#3 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API

In-Band Management Via an external bridged network (L2 Out) in the mgmt Tenant

Let’s begin with a diagram showing my test setup for the L2Out approach.  It is identical to the previous design, except that there is no way I can use an untagged host connection directly to an interface configured for a L2 Out – so I’ve had to introduce a switch between the Nexus 9K Leaf102 and the Mgmt Host.

IP addressing for the Leaf and Spine switches will use the switch ID in the fourth octet of the 192.168.99.0/24 network. E.g., Spine 201 will be 192.168.99.201. The default gateway address to be configured on the inb Bridge Domain in the mgmt tenant will be 192.168.99.1.

So let me plan exactly what will need to be done:

The Access Policy Chain

I’ll need to allocate VLAN IDs for the internal inband management EPG (VLAN 100) and another for the user facing L2EPG (VLAN 99). I’ll put them in a VLAN Pool, which will connect to an External Layer 2 Domain, which will need to link to an AEP that has appropriate Access Port Policy Group assignments linking the AEP to the relevant attachment ports of the APICs, the vCenter host and the ACI Management host. Like the picture shows.


Curiously, in the previous method of directly attaching an EPG to the leaves, I created a Physical Domain to contain the VLANs, and it linked the physical ports where the APICs attach (via the AEP > APPG > [Interface Profile + Leaf Profile]). This time, I used an External L2 Domain rather than the Physical Domain – but this still worked. So it seems that as far as the APIC attached ports are concerned, so long as they have a link to the relevant VLANs, it doesn’t matter whether it is via a Physical Domain or an External L2 Domain.

The mgmt Tenant

In the mgmt Tenant there are a number of tasks I’ll have to do.

I’ll need to create a special EPG called an In-band EPG.  This will have to be done before assigning the static addresses I want to the APICs, Leaves and Spines.

I’ll assign the default gateway IP address to the pre-defined inb Bridge Domain in the mgmt Tenant, and then create a L2 External Bridged Network (L2 Out) for my external VLAN (VLAN 99) and assign ports Ethernet 1/10 on each Leaf to that L2 Out. To be able to consume a contract, I’ll also of course have to create a L2EPG which I will name inband.VLAN99-L2EPG to reflect the function and VLAN assigned.

Finally, I’ll need to create a Contract (inband.MgmtServices-Ct) which will use the common/default filter to allow all traffic, and of course I’ll have to link the contract to the special In-Band EPG (provider) and the inband.VLAN99-L2EPG (consumer) mentioned above.

Again, a picture tells the story:

If all goes well, when both the Access Policies and the Tenant configuration are complete, the APIC will be able to manage the vCenter VMM, and the Management Station bare metal server will be able to manage the ACI fabric via the APIC IP addresses.

Enough of design, time to start configuring!

Step-by-Step: Configuring In-Band management via a L2 Out using the GUI

Conventions

Cisco APIC Advanced GUI Menu Selection sequences are displayed in Bolded Blue text, with >+ meaning Right-click and select so that the following line:
Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool
should be interpreted as:
From the Cisco APIC Advanced GUI Main Menu, select Fabric
From the sub-menu, select Access Policies
In the Navigation Pane, expand Pools, then on the VLAN sub-item, right-click and select Create VLAN Pool.
If a particular tab in the Work Pane needs to be selected, it will be inserted into the sequence in square brackets, such as:
… > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab 
Within the Work Pane and within some dialogues, it will be necessary to click on a + icon to add an item. This is indicated by a (+) followed by the name of the item that needs to be added, so that:
(+) Interface Selectors:
should be interpreted as
Click the + icon adjacent the Interface Selectors: prompt.
Text that needs to be typed at prompts is presented in  orange italicised bold text, while items to be selected from a drop down menu or by clicking options on the screen are shown in bolded underlined text.
Options like clicking OK, UPDATE or SUBMIT are assumed, so not specifically stated unless required between sub-steps. Use your intelligence.

Part 1: Set the Connectivity Preference for the pod to ooband

Firstly, since the default interface to use for external connections is the inband interface, I’m going to set the Connectivity Preference for the pod to ooband – just in case I lose access to the management GUI while configuring this.

Fabric > Fabric Policies > Global Policies > Connectivity Preferences

Interface to use for external connections: ooband

Part 2: Configure the Access Policy Chain

This is a long slog – if you are not familiar with Cisco ACI Access Policies, you might want to read my earlier ACI Tutorials, especially Tutorial #4.

Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool

Name: inband-VLAN.Pool
Allocation Mode: Static Allocation
(+) Encap Blocks:
Range: VLAN 99 – VLAN 100

Fabric > Access Policies > Physical and External Domains > External Bridged Domains >+ Create Layer 2 Domain

Name: inband-ExtL2Dom
VLAN Pool: inband-VLAN.Pool

Fabric > Access Policies > Global Policies > Attachable Access Entity Profiles >+ Create Attachable Access Entity Profile

Name: inband-AEP
(+) Domains (VMM, Physical or External) To Be Associated To Interfaces:
Domain Profile: inband-ExtL2Dom

Fabric > Access Policies > Interface Policies > Policies > LLDP Interface >+ Create LLDP Interface Policy

Name: Enable-LLDP
[Leave default values – I just want to have a policy that spells out that LLDP is enabled]

Fabric > Access Policies > Interface Policies > Policy Groups >Leaf Policy Groups >+ Create Leaf Access Port Policy Group

Name: inband.LLDP-APPG
LLDP Policy: Enable-LLDP
Attached Entity Profile: inband-AEP

Fabric > Access Policies > Interface Policies > Profiles >Leaf Profiles >+ Create Leaf Interface Profile

Name: L101-IntProf
(+) Interface Selectors:
Name: 1:10
Description: vCenter
Interface IDs: 1/10
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Now repeat for Leaf102

Fabric > Access Policies > Interface Policies > Profiles >Leaf Profiles >+ Create Leaf Interface Profile

Name: L102-IntProf
(+) Interface Selectors:
Name: 1:10
Description: Mgmt Host
Interface IDs: 1/10
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Fabric > Access Policies > Switch Policies > Profiles >Leaf Profiles >+ Create Leaf Profile

Name: L101-LeafProf
(+) Leaf Selectors:
Name: Leaf101
Blocks: 101
UPDATE > NEXT
[x] L101-IntProf

And again for leaf 102

Fabric > Access Policies > Switch Policies > Profiles >Leaf Profiles >+ Create Leaf Profile

Name: L102-LeafProf
(+) Leaf Selectors:
Name: Leaf102
Blocks: 102
UPDATE > NEXT
[x] L102-IntProf

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

Before I can assign static IP addresses to an APIC or switch, the GUI forces me to create a Node Management EPG, so begin by creating one – I’ll use the name Default because I don’t expect I’ll ever need another, but I’ll use an upper-case D to distinguish it from system created defaults, which always use a lowercase d.

Tenants > Tenant mgmt > Node Management EPGs >+ Create In-Band Management EPG

Name: Default
Encap: vlan-100
Bridge Domain: inb

Now I can create the Static Node Management Addresses.

Tenants > Tenant mgmt > Node Management Addresses > Static Node Management Addresses >+ Create Static Node Management Addresses

Node Range: 1 – 3
Config: In-Band Addresses
In-Band Management EPG: Default
In-Band IPV4 Address: 192.168.99.111/24
In-Band IPV4 Gateway: 192.168.99.1/24

[Tip: If you are following my steps, ignore the warning (as shown below).  I already set the Interface to use for external connections to ooband, and in spite of what the warning implies, your preference for management will NOT switch to In-Band]

inbabd-warning

Tedious as it was, I resisted the temptation to resort to the CLI, and repeated the above step for Nodes 101-102 and 201-202.

That default gateway IP address I defined on the nodes will reside in the inb Bridge Domain.

Tenants > Tenant mgmt > Networking > Bridge Domains > inb > Subnets  >+ Create subnet

Gateway IP: 192.168.99.1/24

That took care of the internal network – the APICs were able to ping the default gateway and the Leaf switches, verifying that the configuration was valid, although at this stage I was not able to ping the Spine switches.  However, I took heart from this video and assumed that all was OK.

apic1# ping -c 3 192.168.99.1
PING 192.168.99.1 (192.168.99.1) 56(84) bytes of data.
64 bytes from 192.168.99.1: icmp_seq=1 ttl=63 time=2.86 ms
64 bytes from 192.168.99.1: icmp_seq=2 ttl=63 time=0.827 ms
64 bytes from 192.168.99.1: icmp_seq=3 ttl=63 time=0.139 ms

--- 192.168.99.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.139/1.276/2.862/1.156 ms
apic1# ping -c 3 192.168.99.101
PING 192.168.99.101 (192.168.99.101) 56(84) bytes of data.
64 bytes from 192.168.99.101: icmp_seq=1 ttl=63 time=0.969 ms
64 bytes from 192.168.99.101: icmp_seq=2 ttl=63 time=0.176 ms
64 bytes from 192.168.99.101: icmp_seq=3 ttl=63 time=0.209 ms

--- 192.168.99.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.176/0.451/0.969/0.366 ms
apic1# ping -c 3 192.168.99.201
PING 192.168.99.201 (192.168.99.201) 56(84) bytes of data.
From 192.168.99.111 icmp_seq=1 Destination Host Unreachable
From 192.168.99.111 icmp_seq=2 Destination Host Unreachable
From 192.168.99.111 icmp_seq=3 Destination Host Unreachable

--- 192.168.99.201 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3005ms

I’ll need a contract to put between the L2EPG and the special management In-Band EPG – life will be easier if I create that first.

Tenants > Tenant mgmt > Security Policies > Contracts  >+ Create Contract

Name: inband.MgmtServices-Ct
Scope: VRF [Default]
(+) Subjects:
Name: inband.MgmtServices-Subj
Filter Chain
(+) Filters
Name: common/default

Now to create the L2Out and the L2EPG

Tenants > Tenant mgmt > Networking > External Bridged Networks  >+ Create Bridged Outside

Name: inband.VLAN99-L2Out
External Bridged Domain: inband-ExtL2Dom
Bridge Domain: mgmt/inb
Encap: VLAN 99
Nodes And Interfaces Protocol Profiles
Path Type: port
Path: Pod1/Node-101/eth1/10
ADD
Path: Pod1/Node-102/eth1/10
ADD>NEXT
(+) External EPG Networks
Name: inband.VLAN99-L2EPG

Have the L2EPG consume the contract I created earlier:

Tenants > Tenant mgmt > Networking > External Bridged Networks  > inband.VLAN99-L2Out > Networks > inband.VLAN99-L2EPG 

(+) Consumed Contracts:
Name: mgmt/inband.MgmtServices-Ct

And the In-Band EPG Provide it:

Tenants > Tenant mgmt >Node Management EPGs > In-Band EPG Default 

(+) Provided Contracts:
Name: mgmt/inband.MgmtServices-Ct

Time to test!

To be confident that I will now be able to deploy a VMM Domain with connectivity to the Virtual Machine Manager (vCenter in my case), I’ll ping the VMM server from the APIC.

apic1# ping -c 3 192.168.99.99
PING 192.168.99.99 (192.168.99.99) 56(84) bytes of data.
64 bytes from 192.168.99.99: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.99: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.99: icmp_seq=3 ttl=64 time=0.238 ms

--- 192.168.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address:

apic-access

Step-by-Step: Configuring In-Band management via a L2 Out using the CLI

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail.  The following commands are entered in configuration mode.


Part 1: Set the Connectivity Preference for the pod to ooband

mgmt_connectivity pref ooband

Part 2: Configure the Access Policy Chain

# First, create the VLAN Pool and External L2 Domain
# If you type the command below, you may notice a curious thing -
# at the point where the word "type" appears, if you press "?"
# you will see options for <CR> and "dynamic", but not "type".
# In other words, "type" is a hidden option - I discovered it
# by creating a domain in the GUI and looking at the running
# config later.
  vlan-domain inband-ExtL2Dom type l2ext
    vlan-pool inband-VLAN.Pool
    vlan 99-100
    exit

# And an Access Port Policy Group linked to the inband-ExtL2Dom
  template policy-group inband.LLDP-APPG
# Another curious thing with the CLI is that there is no way
# to create an AEP - one gets created for you whether you
# want it or not when you link the APPG to the Domain in the
# following command.
    vlan-domain member inband-ExtL2Dom type l2ext
    exit

# Not necessary to create an Interface Policy to Enable-LLDP in the
# CLI, Interface Policies are applied directly to the interfaces

# Now the Leaf Profiles, Interface Profiles and Port Selectors
  leaf-profile L101-LeafProf
    leaf-group Leaf101
      leaf 101
      exit
    leaf-interface-profile L101-IntProf
    exit
  leaf-profile L102-LeafProf
    leaf-group Leaf102
      leaf 102
      exit
    leaf-interface-profile L102-IntProf
    exit

  leaf-interface-profile L101-IntProf
    leaf-interface-group 1:10
      description 'vCenter'
      interface ethernet 1/10
      policy-group inband.LLDP-APPG
      exit
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

  leaf-interface-profile L102-IntProf
    leaf-interface-group 1:10
      description 'Mgmt Host'
      interface ethernet 1/10
      policy-group inband.LLDP-APPG
      exit
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

# Node IP addressing is configured OUTSIDE the mgmt
# Tenant in the CLI, so I'll do the mgmt Tenant bits
# first, in the order that best fits - defining the
# contract first means I can configure the AP in one hit

  tenant mgmt
    contract inband.MgmtServices-Ct
      subject inband.MgmtServices-Subj
        access-group default both
        exit
      exit

    external-l2 epg inband.VLAN99-L2Out:inband.VLAN99-L2EPG
      bridge-domain member inb
      contract consumer inband.MgmtServices-Ct
      exit

    inband-mgmt epg Default
      contract provider inband.MgmtServices-Ct
      bridge-domain inb
      vlan 100
      exit

    interface bridge-domain inb
      ip address 192.168.99.1/24 secondary
      exit
    exit

# Now the Node IP addressing

  controller 1
    interface inband-mgmt0
      ip address 192.168.99.111/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 2
    interface inband-mgmt0
      ip address 192.168.99.112/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 3
    interface inband-mgmt0
      ip address 192.168.99.113/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit

  switch 101
    interface inband-mgmt0
      ip address 192.168.99.101/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 102
    interface inband-mgmt0
      ip address 192.168.99.102/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 201
    interface inband-mgmt0
      ip address 192.168.99.201/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 202
    interface inband-mgmt0
      ip address 192.168.99.202/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit

# Finally, apply vlan configuration to the
# physical interfaces where necessary

  leaf 101
    interface ethernet 1/10
      switchport trunk allowed vlan 99 tenant mgmt external-l2 epg inband.VLAN99-L2Out:inband.VLAN99-L2EPG
      exit
    exit

  leaf 102
    interface ethernet 1/10
      switchport trunk allowed vlan 99 tenant mgmt external-l2 epg inband.VLAN99-L2Out:inband.VLAN99-L2EPG
      exit
    exit
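
Before testing, you can check that VLAN 99 was actually programmed on the leaf ports – a sketch using the fabric command wrapper (note the leaf displays an internal VLAN ID alongside the vlan-99 encap):

apic1# fabric 101 show vlan extended
# Repeat for leaf 102; look for an entry with encap vlan-99 that includes Eth1/10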

Time to test!

To be confident that I will now be able to manage the APIC from my management host, I’ll ping the Mgmt Host from the APIC.

apic1# ping -c 3 192.168.99.10
PING 192.168.99.10 (192.168.99.10) 56(84) bytes of data.
64 bytes from 192.168.99.10: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.10: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.10: icmp_seq=3 ttl=64 time=0.238 ms

--- 192.168.99.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address – only this time for a change I’ll use ssh to access, and access APIC#2

sshaccess

One interesting thing to note in the CLI configuration is that nowhere do you create an Attachable Access Entity Profile (AEP).  But, when you enter the above commands, one miraculously appears (called __ui_pg_inband.LLDP-APPG) when you view the GUI.

[Screenshot: the auto-created __ui_pg_inband.LLDP-APPG AEP in the GUI]
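If you’d rather not go clicking through the GUI to find it, a class query against the API will list every AEP on the fabric. Here’s a minimal sketch using Python’s requests library – the APIC address (https://apic) and the admin credentials are placeholders, so substitute your own:

import requests

APIC = "https://apic"  # placeholder - substitute your APIC's address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False  # typical for a lab APIC with a self-signed certificate

# Authenticate - the APIC returns a token cookie that the session re-uses
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# List every Attachable Access Entity Profile on the fabric
reply = session.get(f"{APIC}/api/class/infraAttEntityP.json")
reply.raise_for_status()
for mo in reply.json()["imdata"]:
    print(mo["infraAttEntityP"]["attributes"]["name"])  # look for the __ui_ entry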

A further myriad of mysteries unfolds in the mgmt Tenant, even if you enter the CLI config from a clean configuration. While you enter the commands above, the APIC automatically adds an Application Profile (called default) with an EPG (also called default). But it doesn’t stop there! Another Node Management EPG (called default) is magically created, along with a mystical contract (called inband-default-contract) linked to a mysterious filter (called inband-default). I have no idea why, but here are some commands to clean up the crap left behind.

# Remove crap left behind by previous CLI commands
tenant mgmt
  no application default
  no contract inband-default-contract
  no inband-mgmt epg default
  no access-list inband-default
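To double-check the cleanup worked, the same API approach can list the mgmt Tenant’s children – after the commands above, the default Application Profile and the inband-default-contract should no longer appear. Again a sketch, with the same placeholder address and credentials as before:

import requests

APIC = "https://apic"  # placeholder
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# List the mgmt Tenant's child objects, one class name per line
reply = session.get(f"{APIC}/api/mo/uni/tn-mgmt.json?query-target=children")
reply.raise_for_status()
for mo in reply.json()["imdata"]:
    cls, body = next(iter(mo.items()))
    print(cls, body["attributes"].get("name", ""))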

Step-by-Step: Configuring In-Band management via an L2 Out using the API

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so read that for more detail. Each of the following sections can be saved to a text file (with a .xml extension) and posted to your config using the GUI (right-click > Post …), or you can copy and paste the sections below into Postman.


Right-click > Post … Tutorial

Assume one of the sections below is stored in a text file with a .xml extension, such as connectivityPrefs.xml.

In the APIC GUI, any configuration item that has Post … as one of the right-click options can be used to post the file.

[Screenshot: the right-click menu showing the Post … option]

The contents of the .xml file must be posted to the uni Parent Distinguished Name (DN) as shown below:

[Screenshot: the Post dialogue with the .xml file posted to the uni Parent DN]

The configuration defined in the .xml file will have been pushed into your config:

[Screenshot: the updated Connectivity Preferences showing the posted configuration]

End of tutorial
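And if you’d rather script the post than right-click in the GUI or paste into Postman, the same result can be had with a few lines of Python. Treat this as a sketch only – the APIC address and credentials are placeholders:

import requests

APIC = "https://apic"  # placeholder
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False  # lab APIC with a self-signed certificate

# Authenticate - the APIC returns a token cookie that the session re-uses
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# Post the saved snippet to the uni parent DN, just like right-click > Post ...
with open("connectivityPrefs.xml") as f:
    reply = session.post(f"{APIC}/api/mo/uni.xml", data=f.read())
reply.raise_for_status()
print(reply.text)  # the APIC echoes an <imdata> element on success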


Part 1: Set the Connectivity Preference for the pod to ooband

<?xml version="1.0" encoding="UTF-8"?>
<!-- connectivityPrefs.xml -->
<mgmtConnectivityPrefs dn="uni/fabric/connectivityPrefs" interfacePref="ooband"/>
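To confirm the change took, you can read the same DN back via the API (again a sketch, with the usual placeholder details):

import requests

APIC = "https://apic"  # placeholder
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# Read back the object that was just posted
reply = session.get(f"{APIC}/api/mo/uni/fabric/connectivityPrefs.json")
reply.raise_for_status()
attrs = reply.json()["imdata"][0]["mgmtConnectivityPrefs"]["attributes"]
print(attrs["interfacePref"])  # should print 'ooband'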

Part 2: Configure the Access Policy Chain

Save each of these snippets in a separate .xml file and post them one at a time, or use Postman and copy and paste. (Or script the whole series – see the sketch after the last snippet.)

<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the VLAN Pool -->
<fvnsVlanInstP allocMode="static" dn="uni/infra/vlanns-[inband-VLAN.Pool]-static" name="inband-VLAN.Pool">
    <fvnsEncapBlk from="vlan-99" to="vlan-100"/>
</fvnsVlanInstP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the External L2 Domain, assign it the VLAN Pool -->
<l2extDomP dn="uni/l2dom-inband-ExtL2Dom" name="inband-ExtL2Dom">
	<infraRsVlanNs tDn="uni/infra/vlanns-[inband-VLAN.Pool]-static"/>
</l2extDomP>
<!-- Create an Attachable Access Entity Profile (AEP) -->
<infraAttEntityP descr="" dn="uni/infra/attentp-inband-AEP" name="inband-AEP">
  <infraRsDomP tDn="uni/l2dom-inband-ExtL2Dom"/>
</infraAttEntityP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Enable-LLDP Interface Policy -->
<lldpIfPol adminRxSt="enabled" adminTxSt="enabled" dn="uni/infra/lldpIfP-Enable-LLDP" />
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Access Port Policy Group -->
<infraAccPortGrp dn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG" name="inband.LLDP-APPG">
    <infraRsAttEntP tDn="uni/infra/attentp-inband-AEP"/>
    <infraRsLldpIfPol tnLldpIfPolName="Enable-LLDP"/>
</infraAccPortGrp>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Two Interface Profiles will be needed - first one for Leaf101 -->
<infraAccPortP dn="uni/infra/accportprof-L101-IntProf" name="L101-IntProf">
    <!-- Add an interface selector for the vCenter Server -->
    <infraHPortS descr="vCenter" name="1:10" type="range">
        <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="10" name="block1" toCard="1" toPort="10"/>
    </infraHPortS>
    <!-- Add the ports where the APICs are connected -->
    <infraHPortS descr="APICs" name="1:46-48" type="range">
        <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="46" name="block1" toCard="1" toPort="48"/>
    </infraHPortS>
</infraAccPortP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Another Interface Profile for Leaf102 -->
<infraAccPortP dn="uni/infra/accportprof-L102-IntProf" name="L102-IntProf">
    <!-- Add an interface selector for the Mgmt Host -->
    <infraHPortS descr="Mgmt Host" name="1:10" type="range">
        <infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="10" name="block2" toCard="1" toPort="10"/>
    </infraHPortS>
    <!-- Add the ports where the APICs are connected -->
    <infraHPortS descr="APICs" name="1:46-48" type="range">
        <infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="46" name="block2" toCard="1" toPort="48"/>
    </infraHPortS>
</infraAccPortP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L101-LeafProf" name="L101-LeafProf">
    <infraLeafS name="Leaf101" type="range">
        <infraNodeBlk name="Default" from_="101" to_="101"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-L101-IntProf"/>
</infraNodeP>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L102-LeafProf" name="L102-LeafProf">
    <infraLeafS name="Leaf102" type="range">
        <infraNodeBlk name="Default" from_="102" to_="102"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-L102-IntProf"/>
</infraNodeP>
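If you saved the snippets above to separate files, you don’t have to post them one at a time by hand – a short loop will push them in order. The file names here are made up for illustration, as are the APIC address and credentials:

import requests

APIC = "https://apic"  # placeholder
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

# Hypothetical file names - one per snippet above, in dependency order
SNIPPETS = [
    "vlanPool.xml", "extL2DomAndAEP.xml", "lldpIfPol.xml",
    "accPortGrp.xml", "L101-IntProf.xml", "L102-IntProf.xml",
    "leafProfiles.xml",
]

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

for name in SNIPPETS:
    with open(name) as f:
        reply = session.post(f"{APIC}/api/mo/uni.xml", data=f.read())
    reply.raise_for_status()  # stop at the first failed post
    print(f"{name}: posted OK")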

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

<?xml version="1.0" encoding="UTF-8"?>
<!-- api/policymgr/mo/.xml -->
<polUni>
  <fvTenant name="mgmt">
    <mgmtMgmtP name="default">

      <!-- Create a Node Management EPG -->
      <mgmtInB encap="vlan-100" name="Default">
        <!-- Assign Addresses for APICs In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.111/24" gw="192.168.99.1" tDn="topology/pod-1/node-1"/>
        <mgmtRsInBStNode addr="192.168.99.112/24" gw="192.168.99.1" tDn="topology/pod-1/node-2"/>
        <mgmtRsInBStNode addr="192.168.99.113/24" gw="192.168.99.1" tDn="topology/pod-1/node-3"/>
        <!-- Assign Addresses for switches In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.101/24" gw="192.168.99.1" tDn="topology/pod-1/node-101"/>
        <mgmtRsInBStNode addr="192.168.99.102/24" gw="192.168.99.1" tDn="topology/pod-1/node-102"/>
        <mgmtRsInBStNode addr="192.168.99.201/24" gw="192.168.99.1" tDn="topology/pod-1/node-201"/>
        <mgmtRsInBStNode addr="192.168.99.202/24" gw="192.168.99.1" tDn="topology/pod-1/node-202"/>
        <!-- The Node Management EPG will be the provider for the Contract -->
        <mgmtRsMgmtBD tnFvBDName="inb"/>
        <fvRsProv tnVzBrCPName="inband.MgmtServices-Ct"/>
      </mgmtInB>
    </mgmtMgmtP>

    <!-- Create the Contract Assigned to the Default Node Management EPG -->
    <vzBrCP name="inband.MgmtServices-Ct" scope="context">
      <vzSubj name="inband.MgmtServices-Subj">
        <!-- Use the common/default filter -->
        <vzRsSubjFiltAtt directives="" tnVzFilterName="default"/>
      </vzSubj>
    </vzBrCP>

    <!-- Assign IP address to inb BD -->
    <fvBD name="inb">
      <fvSubnet ip="192.168.99.1/24" />
    </fvBD>

    <!-- Create the L2Out and its associated L2EPG -->
    <l2extOut name="inband.VLAN99-L2Out">
      <l2extLNodeP name="default">
        <l2extLIfP name="default">
          <l2extRsPathL2OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/10]"/>
          <l2extRsPathL2OutAtt tDn="topology/pod-1/paths-102/pathep-[eth1/10]"/>
        </l2extLIfP>
      </l2extLNodeP>
      <l2extRsL2DomAtt tDn="uni/l2dom-inband-ExtL2Dom"/>
      <l2extRsEBd encap="vlan-99" tnFvBDName="inb"/>
      <l2extInstP name="inband.VLAN99-L2EPG">
        <!-- The L2EPG will consume the Contract -->
        <fvRsCons tnVzBrCPName="inband.MgmtServices-Ct"/>
      </l2extInstP>
    </l2extOut>
  </fvTenant>
</polUni>
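Before jumping on the CLI to test, you can also confirm via the API that every node picked up its static In-Band address (a sketch, with the usual placeholder address and credentials):

import requests

APIC = "https://apic"  # placeholder
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# List every static In-Band node address assignment
reply = session.get(f"{APIC}/api/class/mgmtRsInBStNode.json")
reply.raise_for_status()
for mo in reply.json()["imdata"]:
    attrs = mo["mgmtRsInBStNode"]["attributes"]
    print(attrs["tDn"], attrs["addr"])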

Again, I’ll test – this time by pinging the vCenter server from apic#3 for a change, and by browsing to the Visore interface of the APIC from the Mgmt Host.

apic3# ping -c 3 192.168.99.99
PING 192.168.99.99 (192.168.99.99) 56(84) bytes of data.
64 bytes from 192.168.99.99: icmp_seq=1 ttl=64 time=0.302 ms
64 bytes from 192.168.99.99: icmp_seq=2 ttl=64 time=0.221 ms
64 bytes from 192.168.99.99: icmp_seq=3 ttl=64 time=0.204 ms

--- 192.168.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.204/0.242/0.302/0.044 ms

The fact that the login screen comes up is proof that the Mgmt Host has connectivity to the APICs.

[Screenshot: the Visore login screen reached from the Mgmt Host]

In the next installment, I will configure In-Band management so the fabric can be managed from an external network via an L3 Out.

RedNectar

Note: If you would like the author or one of my colleagues to assist with the setup of your ACI installation, contact acimentor@housley.com.au and refer to this article. Housley works mainly around APJC, but is not restricted to this area.

References:

Cisco’s official ACI management documentation – I have informed Cisco that this documentation is not up to scratch – hopefully it will be fixed soon.

The Cisco APIC NX-OS Style Command-Line Interface Configuration Guide – the chapter on Configuring Management Interfaces was particularly helpful – much better than the reference above.

Also Cisco’s ACI Troubleshooting Book had a couple of hints about how things hang together.

Carl Niger’s YouTube video series was helpful – I recommend it to you.

Cisco’s pathetic video on configuring In-Band management is simply not worth wasting your time on. But it’s included here since I referred to it.
