A funny thing happened in the ACI lab today…

I had a Tenant with statically configured bare metal hosts attached to interface 1/16 on both leaf 101 and leaf 102, but an “invalid-path;invalid-vlan” error came up on the faults page for the EPG being configured for leaf 102. The host attached to leaf 101 was working, had no errors, and was configured in exactly the same way!

I checked:
In the tenant, the EPG had been linked to the correct Physical Domain
In the tenant, the EPG had been linked to the correct leaf/port/vlan

In the Access Policies:
The Leaf Profile defined the correct leaf, and was linked to the correct Interface Profile
The Interface Profile had an Access Port Selector for the correct port (1/16), and the Access Port Selector was linked to an Access Port Policy Group
The Access Port Policy Group was linked to the correct Attachable Access Entity Profile
The Attachable Access Entity Profile was linked to the same Physical Domain as the EPG showing the error
The Physical Domain was linked to a VLAN Pool that included the VLAN ID being used in the EPG for the static mapping.

So I was stumped.  Every “invalid-path;invalid-vlan” error I had seen before could be solved by checking the above, so in desperation I checked things from the CLI:

apic1# show run leaf 102 interface ethernet 1/16
# Command: show running-config leaf 102 interface ethernet 1/16
# Time: Wed Mar  1 04:17:41 2017
  leaf 102
    interface ethernet 1/16
      # Policy-group configured from leaf-profile ['T5:L102-LeafProf'], leaf-interface-profile T5:L102-IntProf
      # policy-group T5:1G.CDP.LLDP-APPG
      lldp receive
      lldp transmit
      cdp enable
      vlan-domain member T5:MappedVLANs-PhysDom type phys
      switchport access vlan 2050 tenant Tenant5 application 2Tier-AP epg WebServers-EPG
      speed 1G
      negotiate auto
      link debounce time 100
      exit
    exit

“That looks a bit strange”, I thought.  “I don’t normally see the lldp and cdp policies etc.”  But there was nothing in the config that was wrong; nonetheless, I thought I’d compare it with the same port on the other leaf.

apic1# show run leaf 101 interface ethernet 1/16
# Command: show running-config leaf 101 interface ethernet 1/16
# Time: Wed Mar  1 04:18:02 2017
  leaf 101
    interface ethernet 1/16
      # Policy-group configured from leaf-profile ['T5:L101-LeafProf'], leaf-interface-profile T5:L101-IntProf
      # policy-group T5:1G.CDP.LLDP-APPG
      switchport access vlan 2050 tenant Tenant5 application 2Tier-AP epg AppServer-EPG
      exit
    exit

Now this looks much more like what I expect. And at this stage, this is the only indication that the configuration on 102/1/16 is not quite “normal”. So what I tried next was to see if I could remove the “extra” lines of config on leaf 102. Since there is no default interface command in ACI NX-OS, I tried manually removing the cdp, lldp etc. config:

apic1(config)# leaf 102
apic1(config-leaf)# default ?
apic1(config-leaf)# default inter
Command entered is not APIC NX-OS style CLI.Trying shell command…

apic1(config-leaf)# interface ethernet 1/16
apic1(config-leaf-if)# shut
apic1(config-leaf-if)# no lldp receive
apic1(config-leaf-if)# no lldp transmit
apic1(config-leaf-if)# no cdp enable
apic1(config-leaf-if)# no vlan-domain member T5:MappedVLANs-PhysDom type phys
apic1(config-leaf-if)# no speed 1G
apic1(config-leaf-if)# no negotiate auto
apic1(config-leaf-if)# no link debounce time 100
apic1(config-leaf-if)# no shutdown

Better see if that worked!

apic1(config-leaf-if)# show run leaf 102 interface ethernet 1/16
# Command: show running-config leaf 102 interface ethernet 1/16
# Time: Wed Mar  1 04:28:16 2017
  leaf 102
    interface ethernet 1/16
      no lldp receive
      no lldp transmit
      no cdp enable
      speed auto
      no negotiate auto
      link debounce time 100
      exit
    exit

Clearly that didn’t work as intended. And by now I’d removed the interface selector for interface 1/16 from the interface profile for Leaf 102 as well, so there should have been no association with any lldp, cdp etc. config – except for one thing: I’d forgotten that when you do anything in the CLI, it automatically starts creating pesky objects with names beginning with __ui, and I could see these in the GUI – but I knew how to get rid of those thanks to this post.

Note:RedPoint Unless Daniel has updated his blog, you will see that one command I used is different to the one in the link above: Daniel’s blog says to use a moconfig delete command, when in fact it should be moconfig commit.

And that’s what I did!

apic1# for i in `find *__ui*`
for> do
for> echo "removing $i"
for> modelete $i
for> done
removing attentp-__ui_l102_eth1--16
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing attentp-__ui_l102_eth1--16/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
...<snip>....

apic1# moconfig commit
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Committing mo 'uni/infra/lldpIfP-__ui_l102_eth1--16'
Committing mo 'uni/infra/hintfpol-__ui_l102_eth1--16'
Committing mo 'uni/infra/cdpIfP-__ui_l102_eth1--16'
Committing mo 'uni/infra/attentp-__ui_l102_eth1--16'

All mos committed successfully.
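
If you want to be really sure the pesky objects are gone, you can also query the object store from the APIC bash shell. Here’s a minimal sketch – the class names are my assumption, read off the DN prefixes in the moconfig output above (attentp- is infraAttEntityP, lldpIfP- is lldpIfPol, cdpIfP- is cdpIfPol, hintfpol- is fabricHIfPol); if anything still comes back with a __ui name, the cleanup isn’t complete:

apic1# for c in infraAttEntityP lldpIfPol cdpIfPol fabricHIfPol
for> do
for> # print only the dn property and keep any auto-created __ui objects
for> moquery -c $c | grep '^dn' | grep __ui
for> done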

Re-checking the CLI config showed:

apic1(config-leaf-if)# show run leaf 102 interface ethernet 1/16
# Command: show running-config leaf 102 interface ethernet 1/16
# Time: Wed Mar  1 04:42:46 2017
  leaf 102
    interface ethernet 1/16
      exit
    exit

Brilliant! It certainly looks “correct” although I have no idea why this config should be any more “correct” than what I saw earlier, except…
… when I re-created the Interface Selector for access port 1/16 on leaf 102, and reassigned the same interface to the EPG in the tenant config (in other words restored the previous config) – all errors disappeared and the config worked!

Now it could have been that running the script to remove the pesky __ui items actually removed some other junk that was causing a problem, but whatever caused the error is a mystery!

One of the strangest mysteries I have encountered in the ACI lab so far.

RedNectar


Configuring In-Band Management for the APIC on Cisco ACI (Part #3-via a L3Out)

Note:RedPoint This is the third and last in a series of articles – the following is a variation of the first and second in the series. Much of the story is identical – but with a few added extras to configure the L3 Out rather than an L2 Out or an Application Profile as with the EPG approach.

Anyone unlucky enough to try and configure In-Band management on the Cisco APIC will have probably realised that it is not a simple task. Which is probably why many Cisco support forum experts advise using out of band (oob) management instead [link].

And anyone unlucky enough to try and decipher Cisco’s official documentation for configuring In-Band management on the Cisco APIC, or to watch their pathetic video (which simply does not work – it does not complete the job), is probably feeling frustrated to the point of giving up.

Let me ease your frustration and take you through a journey showing you how to configure In-Band management for ACI in a variety of ways:

  1. Via an EPG (in the mgmt Tenant) (Part#1 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API
  2. Via an external bridged network (L2 Out) (Part#2 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API
  3. Via an external routed network (L3 Out) (This article)
    1. using the GUI
    2. using the CLI
    3. using the API
    4. Appendix: Configuring L3 Out Interface Profiles with VLANs (Coming Soon)

In-Band Management Via an external routed network (L3 Out) in the mgmt Tenant

Let’s begin with a diagram showing my test setup for the L3Out approach.  It is somewhat different to the previous designs because an external router is involved, so there are no direct connections between the Nexus 9K Leaf switches and either the VMM Server or the Mgmt Host.

IP addressing for the Leaf and Spine switches will use the switch ID in the fourth octet of the 192.168.99.0/24 network. E.g., Spine 201 will be 192.168.99.201. The default gateway address to be configured on the inb Bridge Domain in the mgmt tenant will be 192.168.99.1.

So let me plan exactly what will need to be done:

The Access Policy Chain

I’ll need to allocate VLAN IDs for the internal inband management EPG (VLAN 100) and, in case I decide to use an SVI or a Routed Sub-Interface for the L3EPG, I’ll include another VLAN too (VLAN 99). I’ll put them in a VLAN Pool, which will connect to an External Layer 3 Domain, which will in turn need to link to an AEP with appropriate Access Port Policy Group assignments linking the AEP to the relevant attachment ports of the APICs, the vCenter host and the ACI Management host. Like the picture shows.


Curiously, in the previous method of directly attaching an EPG to the leaves, I created a Physical Domain to contain the VLANs, and it linked to the physical ports where the APICs attach (via the AEP > APPG > [Interface Profile + Leaf Profile]). Last time I used an External L2 Domain – and it still worked! This time, I used an External L3 Domain rather than the Physical Domain – and again it still worked. So it seems that as far as the APIC attached ports are concerned, so long as they have a link to the relevant VLANs, it doesn’t matter whether it is via a Physical Domain, an External L2 Domain or an External L3 Domain.

The mgmt Tenant

In the mgmt Tenant there are a number of tasks I’ll have to do.

I’ll need to create a special EPG called an In-band EPG.  This will have to be done before assigning the static addresses I want to the APICs, Leaves and Spines.

I’ll assign the default gateway IP address to the pre-defined inb Bridge Domain in the mgmt Tenant, and then create a L3 External Routed Network (L3 Out) for my external router’s connection and assign port Ethernet 1/1 on Leaf101 to that L3 Out. Initially I’ll use a Routed Interface rather than an SVI or Routed Sub-Interface, so I won’t need any VLAN associations, but I will configure those in an Appendix.

To be able to consume a contract, I’ll also of course have to create an L3EPG, which I will name 0.0.0.0:0-L3EPG to reflect the function and range of IP addresses accessible via this L3 Out.

Finally, I’ll need to create a Contract (inband.MgmtServices-Ct) which will use the common/default filter to allow all traffic, and of course I’ll have to link the contract to the special In-Band EPG (provider) and the 0.0.0.0:0-L3EPG (consumer) mentioned above.

Again, a picture tells the story:

If all goes well, when both the Access Policies and the Tenant configuration are complete, the APIC will be able to manage the vCenter VMM, and the Management Station bare metal server will be able to manage the ACI fabric via the APIC IP addresses.

Enough of design, time to start configuring!

Step-by-Step: Configuring In-Band management via a L3 Out using the GUI

Conventions

Cisco APIC Advanced GUI Menu Selection sequences are displayed in Bolded Blue text, with >+ meaning Right-click and select so that the following line:
Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool
should be interpreted as:
From the Cisco APIC Advanced GUI Main Menu, select Fabric
From the sub-menu, select Access Policies
In the Navigation Pane, expand Pools, then on the VLAN sub-item, right-click and select Create VLAN Pool.
If a particular tab in the Work Pane needs to be selected, it will be inserted into the sequence in square brackets, such as:
… > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab 
Within the Work Pane and within some dialogues, it will be necessary to click on a + icon to add an item. This is indicated by a (+) followed by the name of the item that needs to be added, so that:
(+) Interface Selectors:
should be interpreted as
Click the + icon adjacent to the Interface Selectors: prompt.
Text that needs to be typed at prompts is presented in orange italicised bold text, while items to be selected from a drop-down menu or by clicking options on the screen are shown in bolded underlined text.
Options like clicking OK, UPDATE or SUBMIT are assumed, so not specifically stated unless required between sub-steps. Use your intelligence.

Part 1: Set the Connectivity Preference for the pod to ooband

Firstly, since the default interface to use for external connections is the inband interface, I’m going to set the Connectivity Preference for the pod to ooband – just in case I lose access to the management GUI while configuring this.

Fabric > Fabric Policies > Global Policies > Connectivity Preferences

Interface to use for external connections: ooband

Part 2: Configure the Access Policy Chain

This is a long slog – if you are not familiar with Cisco ACI Access Policies, you might want to read my earlier ACI Tutorials, especially Tutorial #4.

Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool

Name: inband-VLAN.Pool
Allocation Mode: Static Allocation
(+) Encap Blocks:
Range: VLAN 99 – VLAN 100

Note:RedPoint In this tutorial I am using a Routed Interface in my L3Out, which will not require a VLAN allocation. But later I am planning on exploring SVI and Routed Sub-Interfaces so I’ve included VLAN 99 in the range as well for that exploration.

Fabric > Access Policies > Physical and External Domains > External Routed Domains >+ Create Layer 3 Domain

Name: inband-ExtL3Dom
VLAN Pool: inband-VLAN.Pool

Fabric > Access Policies > Global Policies > Attachable Access Entity Profiles >+ Create Attachable Access Entity Profile

Name: inband-AEP
(+) Domains (VMM, Physical or External) To Be Associated To Interfaces:
Domain Profile: inband-ExtL3Dom

Fabric > Access Policies > Interface Policies > Policies > LLDP Interface >+ Create LLDP Interface Policy

Name: Enable-LLDP
[Leave default values – I just want to have a policy that spells out that LLDP is enabled]

Fabric > Access Policies > Interface Policies > Policy Groups > Leaf Policy Groups >+ Create Leaf Access Port Policy Group

Name: inband.LLDP-APPG
LLDP Policy: Enable-LLDP
Attached Entity Profile: inband-AEP

Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile

Name: L101-IntProf
(+) Interface Selectors:
Name: 1:1
Description: Router
Interface IDs: 1/1
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Now repeat for Leaf102 – this time just add the APIC ports

Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile

Name: L102-IntProf
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile

Name: L101-LeafProf
(+) Leaf Selectors:
Name: Leaf101
Blocks: 101
UPDATE > NEXT
[x] L101-IntProf

And again for leaf 102

Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile

Name: L102-LeafProf
(+) Leaf Selectors:
Name: Leaf102
Blocks: 102
UPDATE > NEXT
[x] L102-IntProf

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

Before I can assign a static IP address to an APIC or switch, the GUI forces me to create a Node Management EPG, so begin by creating one – I’ll use the name Default because I don’t expect I’ll ever need another, but I’ll use an upper-case D to distinguish it from system created defaults, which always use a lowercase d.

Tenants > Tenant mgmt > Node Management EPGs >+ Create In-Band Management EPG

Name: Default
Encap: vlan-100
Bridge Domain: inb

Now I can create the Static Node Management Addresses.

Tenants > Tenant mgmt > Node Management Addresses > Static Node Management Addresses >+ Create Static Node Management Addresses

Node Range: 1 – 3
Config: In-Band Addresses
In-Band Management EPG: Default
In-Band IPV4 Address: 192.168.99.111/24
In-Band IPV4 Gateway: 192.168.99.1/24

[Tip: If you are following my steps, ignore the warning (as shown below). I already set the Interface to use for external connections to ooband, and in spite of the implication in the warning, your preference for management will NOT switch to In-Band]

inbabd-warning

Tedious as it was, I resisted the temptation to resort to the CLI, and repeated the above step for Nodes  101-102, and 201-202.

That default gateway IP address I defined on the nodes will reside in the inb Bridge Domain.

Tenants > Tenant mgmt > Networking > Bridge Domains > inb > Subnets  >+ Create subnet

Gateway IP: 192.168.99.1/24
Scope: [x] Advertised Externally

That took care of the internal network, except that I will have to come back to the inb Bridge Domain to link it to the L3Out after I’ve created it.

At this stage the APICs were able to ping the default gateway and the Leaf switches, verifying that the configurations were valid, although I was not able to ping the Spine switches.  However, I took heart from this video and assumed that all was OK.

	apic1# ping -c 3 192.168.99.1
	PING 192.168.99.1 (192.168.99.1) 56(84) bytes of data.
	64 bytes from 192.168.99.1: icmp_seq=1 ttl=63 time=2.86 ms
	64 bytes from 192.168.99.1: icmp_seq=2 ttl=63 time=0.827 ms
	64 bytes from 192.168.99.1: icmp_seq=3 ttl=63 time=0.139 ms

	--- 192.168.99.1 ping statistics ---
	3 packets transmitted, 3 received, 0% packet loss, time 2002ms
	rtt min/avg/max/mdev = 0.139/1.276/2.862/1.156 ms
	apic1# ping -c 3 192.168.99.101
	PING 192.168.99.101 (192.168.99.101) 56(84) bytes of data.
	64 bytes from 192.168.99.101: icmp_seq=1 ttl=63 time=0.969 ms
	64 bytes from 192.168.99.101: icmp_seq=2 ttl=63 time=0.176 ms
	64 bytes from 192.168.99.101: icmp_seq=3 ttl=63 time=0.209 ms

	--- 192.168.99.101 ping statistics ---
	3 packets transmitted, 3 received, 0% packet loss, time 2000ms
	rtt min/avg/max/mdev = 0.176/0.451/0.969/0.366 ms
	apic1# ping -c 3 192.168.99.201
	PING 192.168.99.201 (192.168.99.201) 56(84) bytes of data.
	From 192.168.99.111 icmp_seq=1 Destination Host Unreachable
	From 192.168.99.111 icmp_seq=2 Destination Host Unreachable
	From 192.168.99.111 icmp_seq=3 Destination Host Unreachable

	--- 192.168.99.201 ping statistics ---
	3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3005ms
	

I’ll need a contract to put between the L3EPG and the special management In-Band EPG – life will be easier if I create that first.

Tenants > Tenant mgmt > Security Policies > Contracts  >+ Create Contract

Name: inband.MgmtServices-Ct
Scope: VRF [Default]
(+) Subjects:
Name: inband.MgmtServices-Subj
Filter Chain
(+) Filters
Name: common/default

Now to create the L3Out, the Node Profile and the L3EPG

Tenants > Tenant mgmt > Networking > External Routed Networks  >+ Create Routed Outside

Name: inband.OSPF-L3Out
VRF: mgmt/inb
External Routed Domain: inband-ExtL3Dom
[x] OSPF
OSPF Area ID: 1
OSPF Area Type: Regular Area
(+) Nodes And Interfaces Protocol Profiles
Name: Leaf101-OSPF.NodeProf
(+) Nodes
Node ID: 101
Router ID: 1.1.1.1
OK > OK > NEXT
(+) External EPG Networks
Name: 0.0.0.0:0-L3EPG
(+) Subnet
IP Address: 0.0.0.0/0

You will have noticed that during the process above I did not include a step to add the Interface Profile – I did this because I wanted to explore the three different options for Interface Profiles – Routed Interface, SVI Interface and Routed Sub-Interface.

Firstly, I’ll explore the Routed Interface option, and look at the other options in an Appendix to this article.

Tenants > Tenant mgmt > Networking > External Routed Networks  > inband.OSPF-L3Out > Logical Node Profiles > Leaf101-OSPF.NodeProf  > Logical Interface Profiles >+ Create Interface Profile

Name: OSPF-IntProf
Interfaces
(+) Routed Interfaces:
Path: topology/pod-1/paths-101/pathep-[eth1/1]
IPv4 Primary / IPv6 Preferred Address: 172.16.2.2/30
MTU (bytes): 1500

Note:RedPoint At this point, since my external router is configured with a routed interface running OSPF and an IP of 172.16.2.1/30, I will also check that the OSPF adjacency has come up by navigating to Tenants > Tenant mgmt > Networking > External Routed Networks > inband.OSPF-L3Out > Logical Node Profiles > Leaf101-OSPF.NodeProf > Configured Nodes > topology/pod-1/node-101 > OSPF for VRF mgmt:inb and checking that I have a neighbour in the list of neighbours in the Work Pane.
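
If you prefer to check from the command line, the adjacency should also be visible in the APIC’s object store. A quick sketch from the APIC bash shell – I’m assuming here that ospfAdjEp is the class that models OSPF adjacencies and that it carries peerIp and operSt properties; if the query returns nothing, verify the class name in Visore:

	# List OSPF adjacencies known to the fabric and look for the
	# external router's peer IP on node-101 (class/properties assumed)
	apic1# moquery -c ospfAdjEp | egrep 'dn|peerIp|operSt'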

Have the L3EPG consume the contract I created earlier:

Tenants > Tenant mgmt > Networking > External Routed Networks  > inband.OSPF-L3Out > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab 

(+) Consumed Contracts:
Name: inband.MgmtServices-Ct

And the In-Band EPG Provide it:

Tenants > Tenant mgmt > Node Management EPGs > In-Band EPG Default

(+) Provided Contracts:
Name: inband.MgmtServices-Ct

And finally, I’ll have to link the L3Out to the inb Bridge Domain so that the APIC knows which L3Out to use when advertising the 192.168.99.0/24 network externally.

Tenants > Tenant mgmt > Networking > Bridge Domains > inb > [Policy] tab > [L3 Configurations] tab

(+) Associated L3 Outs:
L3 Out:  mgmt/inband.OSPF-L3Out

Time to test!

To be confident that I will now be able to deploy a VMM Domain with connectivity to the Virtual Machine Manager (vCenter in my case), I’ll ping the VMM server from the APIC, only this time I’ll tell the APIC to use the inband management interface using the ‑I ping option (or reconfigure the Connectivity Preferences to use the inband interface for external connections rather than the ooband interface which I configured in Part #1).

		apic1# ping -c3 -I 192.168.99.111 172.16.99.99
		PING 172.16.99.99 (172.16.99.99) from 192.168.99.111 : 56(84) bytes of data.
		64 bytes from 172.16.99.99: icmp_seq=1 ttl=61 time=0.374 ms
		64 bytes from 172.16.99.99: icmp_seq=2 ttl=61 time=0.403 ms
		64 bytes from 172.16.99.99: icmp_seq=3 ttl=61 time=0.391 ms

		--- 172.16.99.99 ping statistics ---
		3 packets transmitted, 3 received, 0% packet loss, time 2000ms
		rtt min/avg/max/mdev = 0.374/0.389/0.403/0.020 ms
		

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address:

apic-access

Step-by-Step: Configuring In-Band management via a L3 Out using the CLI

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail.  The following commands are entered in configuration mode.


Part 1: Set the Connectivity Preference for the pod to ooband

mgmt_connectivity pref ooband

Part 2: Configure the Access Policy Chain

# First, create the VLAN Pool and External L3 Domain
# If you type the command below, you may notice a curious thing -
# at the point where the word "type" appears, if you press "?"
# you will see options for <CR> and "dynamic", but not "type".
# In other words, "type" is a hidden option - I discovered it
# by creating a domain in the GUI and looking at the running
# config later.
  vlan-domain inband-ExtL3Dom type l3ext
    vlan-pool inband-VLAN.Pool
    vlan 99-100
    exit

# And an Access Port Policy Group linked to the inband-ExtL3Dom
  template policy-group inband.LLDP-APPG

# Another curious thing with the CLI is that there is no way
# to create an AEP - one gets created for you whether you
# want it or not when you link the APPG to the Domain in the
# following command.
    vlan-domain member inband-ExtL3Dom type l3ext
    exit

# Not necessary to create an Interface Policy to Enable-LLDP in the
# CLI, Interface Policies are applied directly to the interfaces

# Now the Leaf Profiles, Interface Profiles and Port Selectors
  leaf-profile L101-LeafProf
    leaf-group Leaf101
      leaf 101
      exit
    leaf-interface-profile L101-IntProf
    exit
  leaf-profile L102-LeafProf
    leaf-group Leaf102
      leaf 102
      exit
    leaf-interface-profile L102-IntProf
    exit

  leaf-interface-profile L101-IntProf
    leaf-interface-group 1:1
      description 'Router'
      interface ethernet 1/1
      policy-group inband.LLDP-APPG
      exit
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

  leaf-interface-profile L102-IntProf
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

# Node IP addressing is configured OUTSIDE the mgmt
# Tenant in the CLI, so I'll do the mgmt Tenant bits
# first, in the order that best fits - defining the
# contract first means I can configure the AP in one hit

  tenant mgmt
    contract inband.MgmtServices-Ct
      subject inband.MgmtServices-Subj
        access-group default both
        exit
      exit

    l3out inband.OSPF-L3Out
      vrf member inb
      exit

    external-l3 epg 0.0.0.0:0-L3EPG l3out inband.OSPF-L3Out
      vrf member inb
      match ip 0.0.0.0/0
      contract consumer inband.MgmtServices-Ct
      exit

    inband-mgmt epg Default
      contract provider inband.MgmtServices-Ct
      bridge-domain inb
      vlan 100
      exit

    interface bridge-domain inb
      ip address 192.168.99.1/24 secondary scope public
      exit
    exit

# Now the Node IP addressing

  controller 1
    interface inband-mgmt0
      ip address 192.168.99.111/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 2
    interface inband-mgmt0
      ip address 192.168.99.112/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 3
    interface inband-mgmt0
      ip address 192.168.99.113/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit

  switch 101
    interface inband-mgmt0
      ip address 192.168.99.101/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 102
    interface inband-mgmt0
      ip address 192.168.99.102/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 201
    interface inband-mgmt0
      ip address 192.168.99.201/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 202
    interface inband-mgmt0
      ip address 192.168.99.202/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit

# Finally, apply routing configuration to 
# leaf 101 eth1/1

  leaf 101
 
    vrf context tenant mgmt vrf inb l3out inband.OSPF-L3Out
      router-id 1.1.1.1
      route-map inband.OSPF-L3Out_out
        match bridge-domain inb
          exit
        exit
      exit
 
# The CLI gets itself into a bit of a Catch-22 here
# When complete, you will see a line:
# ip router ospf default area 0.0.0.1 
# under the configuration of interface ethernet 1/1, but if I
# try to enter it before configuring the "router ospf default"
# section below, I get an error.
#
# Similarly, if I try to configure the "router ospf default"
# section before configuring the vrf under the ethernet 1/1
# interface, I also get an error.

    interface ethernet 1/1
      no switchport
      vrf member tenant mgmt vrf inb l3out inband.OSPF-L3Out
      mtu 1500
      ip address 172.16.2.2/30
      exit

    router ospf default
      vrf member tenant mgmt vrf inb
        area 0.0.0.1 l3out inband.OSPF-L3Out
# I have no idea why a line saying "area 0.0.0.1 nssa"
# turns up in the config, but it does, so I had to also
# enter the following line.
        no area 0.0.0.1 nssa
        exit
      exit

# Note how I had to then return to interface configuration mode to 
# complete the config AFTER having done the "router ospf default"
# section
    interface ethernet 1/1
      ip router ospf default area 0.0.0.1
      exit
    exit

Time to test!

To be confident that I will now be able to manage the APIC from my management host, I’ll ping the Mgmt Host from the APIC.

		apic1# ping -c 3 192.168.99.10
		PING 192.168.99.10 (192.168.99.10) 56(84) bytes of data.
		64 bytes from 192.168.99.10: icmp_seq=1 ttl=64 time=0.458 ms
		64 bytes from 192.168.99.10: icmp_seq=2 ttl=64 time=0.239 ms
		64 bytes from 192.168.99.10: icmp_seq=3 ttl=64 time=0.238 ms

		--- 192.168.99.10 ping statistics ---
		3 packets transmitted, 3 received, 0% packet loss, time 1999ms
		rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms
		

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address – only this time, for a change, I’ll use ssh, and access APIC#2

sshaccess

One interesting thing to note in the CLI configuration is that nowhere do you create an Attachable Access Entity Profile (AEP).  But, when you enter the above commands, one miraculously appears (called __ui_pg_inband.LLDP-APPG) when you view the GUI.

miracluousaep-l2ext

Another myriad of mysteries appears in the mgmt Tenant, even if you go through the CLI config from a clean configuration. While entering the commands above in the CLI, the APIC will automatically add an Application Profile (called default) with an EPG (also called default).  But it doesn’t stop there! There is also another Node Management EPG (called default) magically created, and a mystical contract (called inband-default-contract) with a link to a mysterious filter (called inband-default). I have no idea why, but here are some commands to clean up the crap left behind.

		# Remove crap left behind by previous CLI commands
		tenant mgmt
		no application default
		no contract inband-default-contract
		no inband-mgmt epg default
		no access-list inband-default
		

Step-by-Step: Configuring In-Band management via a L3 Out using the API

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail.  The following sections can be saved to a text file (with a .xml extension) and posted to your config using the GUI (using right-click > Post …), or you can copy and paste the sections below into Postman.


Right-click > Post … Tutorial

Assume one of the sections below is stored in a text file with a .xml extension, such as connectivityPrefs.xml

In the APIC GUI, any configuration item that has Post … as one of the right-click options can be used to post the file.

post

The contents of the .xml file must be posted to the uni Parent Distinguished Name (DN) as shown below:

posttouni

The configuration defined in the .xml file will have been pushed into your config:

unpdatedconnpref

End of tutorial


Part 1: Set the Connectivity Preference for the pod to ooband

		<?xml version="1.0" encoding="UTF-8"?>
		<!-- connectivityPrefs.xml -->
		<mgmtConnectivityPrefs dn="uni/fabric/connectivityPrefs" interfacePref="ooband"/>
		

Part 2: Configure the Access Policy Chain

Save each of these snippets in a separate .xml file and post one at a time.  Or use Postman and copy and paste.
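
If you’d rather script the posts than click through the GUI or Postman, a couple of curl commands will do the same job from any host that can reach the APIC. This is just a sketch – apic, admin and MyPassword are placeholders for your own APIC address and credentials:

	# Authenticate to the APIC REST API and save the session cookie
	curl -sk -X POST https://apic/api/aaaLogin.xml \
	     -d '<aaaUser name="admin" pwd="MyPassword"/>' -c cookie.txt
	# Post a saved snippet to the uni parent DN, just as right-click > Post ... does
	curl -sk -X POST https://apic/api/mo/uni.xml -b cookie.txt -d @connectivityPrefs.xml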

		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create the VLAN Pool -->
		<fvnsVlanInstP allocMode="static" dn="uni/infra/vlanns-[inband-VLAN.Pool]-static" name="inband-VLAN.Pool">
			<fvnsEncapBlk from="vlan-99" to="vlan-100"/>
		</fvnsVlanInstP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create the External L3 Domain, assign it the VLAN Pool -->
		<l3extDomP dn="uni/l3dom-inband-ExtL3Dom" name="inband-ExtL3Dom">
			<infraRsVlanNs tDn="uni/infra/vlanns-[inband-VLAN.Pool]-static"/>
		</l3extDomP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create an Attachable Access Entity Profile (AEP) -->
		<infraAttEntityP descr="" dn="uni/infra/attentp-inband-AEP" name="inband-AEP">
			<infraRsDomP tDn="uni/l3dom-inband-ExtL3Dom"/>
		</infraAttEntityP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create an Enable-LLDP Interface Policy -->
		<lldpIfPol adminRxSt="enabled" adminTxSt="enabled" dn="uni/infra/lldpIfP-Enable-LLDP" />
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create an Access Port Policy Group -->
		<infraAccPortGrp dn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG" name="inband.LLDP-APPG">
			<infraRsAttEntP tDn="uni/infra/attentp-inband-AEP"/>
			<infraRsLldpIfPol tnLldpIfPolName="Enable-LLDP"/>
		</infraAccPortGrp>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Two Interface Profiles will be needed - first one for Leaf101 -->
		<infraAccPortP dn="uni/infra/accportprof-L101-IntProf" name="L101-IntProf">
			<!-- Add an interface selector for the External Router -->
			<infraHPortS descr="Router" name="1:1" type="range">
				<infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
				<infraPortBlk fromCard="1" fromPort="1" name="block1" toCard="1" toPort="1"/>
			</infraHPortS>
			<!-- Add the ports where the APICs are connected -->
			<infraHPortS descr="APICs" name="1:46-48" type="range">
				<infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
				<infraPortBlk fromCard="1" fromPort="46" name="block1" toCard="1" toPort="48"/>
			</infraHPortS>
		</infraAccPortP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Another Interface Profile for Leaf102 -->
		<infraAccPortP dn="uni/infra/accportprof-L102-IntProf" name="L102-IntProf">
			<!-- Add the ports where the APICs are connected -->
			<infraHPortS descr="APICs" name="1:46-48" type="range">
				<infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
				<infraPortBlk fromCard="1" fromPort="46" name="block2" toCard="1" toPort="48"/>
			</infraHPortS>
		</infraAccPortP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
		<infraNodeP dn="uni/infra/nprof-L101-LeafProf" name="L101-LeafProf">
			<infraLeafS name="Leaf101" type="range">
				<infraNodeBlk name ="Default" from_="101" to_="101"/>
			</infraLeafS>
			<infraRsAccPortP tDn="uni/infra/accportprof-L101-IntProf"/>
		</infraNodeP>
		
		<?xml version="1.0" encoding="UTF-8"?>
		<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
		<infraNodeP dn="uni/infra/nprof-L102-LeafProf" name="L102-LeafProf">
			<infraLeafS name="Leaf102" type="range">
				<infraNodeBlk name ="Default" from_="102" to_="102"/>
			</infraLeafS>
			<infraRsAccPortP tDn="uni/infra/accportprof-L102-IntProf"/>
		</infraNodeP>
		

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

<?xml version="1.0" encoding="UTF-8"?>
<!-- api/policymgr/mo/.xml -->
<polUni>
	<fvTenant name="mgmt">
		<mgmtMgmtP name="default">
			<!-- Create a Node Management EPG -->
			<mgmtInB encap="vlan-100" name="Default">
				<!-- Assign Addresses for APICs In-Band management network -->
				<mgmtRsInBStNode addr="192.168.99.111/24" gw="192.168.99.1" tDn="topology/pod-1/node-1"/>
				<mgmtRsInBStNode addr="192.168.99.112/24" gw="192.168.99.1" tDn="topology/pod-1/node-2"/>
				<mgmtRsInBStNode addr="192.168.99.113/24" gw="192.168.99.1" tDn="topology/pod-1/node-3"/>
				<!-- Assign Addresses for switches In-Band management network -->
				<mgmtRsInBStNode addr="192.168.99.101/24" gw="192.168.99.1" tDn="topology/pod-1/node-101"/>
				<mgmtRsInBStNode addr="192.168.99.102/24" gw="192.168.99.1" tDn="topology/pod-1/node-102"/>
				<mgmtRsInBStNode addr="192.168.99.201/24" gw="192.168.99.1" tDn="topology/pod-1/node-201"/>
				<mgmtRsInBStNode addr="192.168.99.202/24" gw="192.168.99.1" tDn="topology/pod-1/node-202"/>
				<mgmtRsMgmtBD tnFvBDName="inb"/>
				<!-- The Node Management EPG will be the provider for the Contract -->
				<fvRsProv tnVzBrCPName="inband.MgmtServices-Ct"/>
			</mgmtInB>
		</mgmtMgmtP>
		<!-- Create the Contract Assigned to the Default Node Management EPG -->
		<vzBrCP name="inband.MgmtServices-Ct" scope="context">
			<vzSubj name="inband.MgmtServices-Subj">
				<!-- Use the common/default filter -->
				<vzRsSubjFiltAtt directives="" tnVzFilterName="default"/>
			</vzSubj>
		</vzBrCP>
		<!-- Assign IP address to inb BD -->
		<fvBD name="inb">
			<fvRsBDToOut tnL3extOutName="inband.OSPF-L3Out"/>
			<fvSubnet ip="192.168.99.1/24" scope="public"/>
		</fvBD>
		<!-- Create the External L3 Network (L3 Out) and L3EPG -->
		<l3extOut name="inband.OSPF-L3Out">
			<l3extLNodeP name="Leaf101-OSPF.NodeProf">
				<l3extRsNodeL3OutAtt rtrId="1.1.1.1" rtrIdLoopBack="yes" tDn="topology/pod-1/node-101"/>
				<l3extLIfP name="OSPF-IntProf">
					<ospfIfP>
						<ospfRsIfPol tnOspfIfPolName=""/>
					</ospfIfP>
					<l3extRsPathL3OutAtt addr="172.16.2.2/30" ifInstT="l3-port" mode="regular" mtu="1500" tDn="topology/pod-1/paths-101/pathep-[eth1/1]"/>
				</l3extLIfP>
			</l3extLNodeP>
			<l3extRsEctx tnFvCtxName="inb"/>
			<l3extRsL3DomAtt tDn="uni/l3dom-inband-ExtL3Dom"/>
			<l3extInstP name="0.0.0.0:0-L3EPG">
				<fvRsCons tnVzBrCPName="inband.MgmtServices-Ct"/>
				<l3extSubnet ip="0.0.0.0/0"/>
			</l3extInstP>
			<ospfExtP areaId="0.0.0.1" areaType="regular"/>
		</l3extOut>
	</fvTenant>
</polUni>

The fact that the login screen comes up is proof that the Mgmt Host has connectivity to the APICs.

visorelogin

Appendix: Configuring L3 Out Interface Profiles with VLANs

Coming Soon


That completes this series of tutorials for configuring In-Band Management on the APIC for Cisco ACI.  Don’t forget to share and like and rate each article to make it easier for others to find when searching for help!

RedNectar

Note:RedPoint If you would like the author or one of my colleagues to assist with the setup of your ACI installation, contact acimentor@housley.com.au and refer to this article. Housley works mainly around APJC, but is not restricted to this area.

References:

Cisco’s official ACI management documentation – I have informed Cisco of the fact that this documentation is not up to scratch – hopefully it will be fixed soon.

The Cisco APIC NX-OS Style Command-Line Interface Configuration Guide – the chapter on Configuring Management Interfaces was particularly helpful – much better than the reference above.

Also Cisco’s ACI Troubleshooting Book had a couple of hints about how things hang together.

Carl Niger’s youtube video series was helpful – I recommend it to you.

Cisco’s pathetic video on configuring In-Band management is simply not worth wasting your time on.  But it’s included here since I referred to it.


Configuring In-Band Management for the APIC on Cisco ACI (Part #2-via a L2Out)

Note:RedPoint This is the second in a series of articles – the following is a variation of the first in the series.  In fact, the whole story is almost identical – it is just that this one uses an L2 Out approach rather than an EPG approach.

Anyone unlucky enough to try and configure In-Band management on the Cisco APIC will have probably realised that it is not a simple task. Which is probably why many Cisco support forum experts advise using out of band (oob) management instead [link].

And anyone unlucky enough to try and decipher Cisco’s official documentation for configuring In-Band management on the Cisco APIC, or to watch their pathetic video (which simply does not work – it does not complete the job), is probably feeling frustrated to the point of giving up.

Let me ease your frustration and take you through a journey showing you how to configure In-Band management for ACI in a variety of ways:

  1. Via an EPG (in the mgmt Tenant) (Part#1 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API
  2. Via an external bridged network (L2 Out) (This article)
    1. using the GUI
    2. using the CLI
    3. using the API
  3.  Via an external routed network (L3 Out) (Part#3 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API

In-Band Management Via an external bridged network (L2 Out) in the mgmt Tenant

Let’s begin with a diagram showing my test setup for the L2Out approach.  It is identical to the previous design, except that there is no way I can use an untagged host connection directly to an interface configured for an L2 Out – so I’ve had to introduce a switch between the Nexus 9K Leaf102 and the Mgmt Host.

IP addressing for the Leaf and Spine switches will use the switch ID in the fourth octet of the 192.168.99.0/24 network. E.g., Spine 201 will be 192.168.99.201. The default gateway address to be configured on the inb Bridge Domain in the mgmt tenant will be 192.168.99.1.

So let me plan exactly what will need to be done:

The Access Policy Chain

I’ll need to allocate VLAN IDs for the internal inband management EPG (VLAN 100) and another for the user facing L2EPG (VLAN 99). I’ll put them in a VLAN Pool, which will connect to an External Layer 2 Domain, which will in turn need to link to an AEP with appropriate Access Port Policy Group assignments linking the AEP to the relevant attachment ports of the APICs, the vCenter host and the ACI Management host. Like the picture shows.


Curiously, in the previous method of directly attaching an EPG to the leaves, I created a Physical Domain to contain the VLANs, and it linked to the physical ports where the APICs attach (via the AEP > APPG > [Interface Profile + Leaf Profile]). This time, I used an External L2 Domain rather than the Physical Domain – but it still worked. So it seems that as far as the APIC attached ports are concerned, so long as they have a link to the relevant VLANs, it doesn’t matter whether it is via a Physical Domain or an External L2 Domain.

The mgmt Tenant

In the mgmt Tenant there are a number of tasks I’ll have to do.

I’ll need to create a special EPG called an In-band EPG.  This will have to be done before assigning the static addresses I want to the APICs, Leaves and Spines.

I’ll assign the default gateway IP address to the pre-defined inb Bridge Domain in the mgmt Tenant, and then create a L2 External Bridged Network (L2 Out) for my external VLAN (VLAN 99) and assign ports Ethernet 1/10 on each Leaf to that L2 Out. To be able to consume a contract, I’ll also of course have to create an L2EPG, which I will name inband.VLAN99-L2EPG to reflect the function and VLAN assigned.

Finally, I’ll need to create a Contract (inband.MgmtServices-Ct) which will use the common/default filter to allow all traffic, and of course I’ll have to link the contract to the special In-Band EPG (provider) and the inband.VLAN99-L2EPG (consumer) mentioned above.

Again, a picture tells the story:

If all goes well, when both the Access Policies and the Tenant configuration are complete, the APIC will be able to manage the vCenter VMM, and the Management Station bare metal server will be able to manage the ACI fabric via the APIC IP addresses.

Enough of design, time to start configuring!

Step-by-Step: Configuring In-Band management via a L2 Out using the GUI

Conventions

Cisco APIC Advanced GUI Menu Selection sequences are displayed in Bolded Blue text, with >+ meaning Right-click and select so that the following line:
Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool
should be interpreted as:
From the Cisco APIC Advanced GUI Main Menu, select Fabric
From the sub-menu, select Access Policies
In the Navigation Pane, expand Pools, then on the VLAN sub-item, right-click and select Create VLAN Pool.
If a particular tab in the Work Pane needs to be selected, it will be inserted into the sequence in square brackets, such as:
… > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab 
Within the Work Pane and within some dialogues, it will be necessary to click on a + icon to add an item. This is indicated by a (+) followed by the name of the item that needs to be added, so that:
(+) Interface Selectors:
should be interpreted as
Click the + icon adjacent to the Interface Selectors: prompt.
Text that needs to be typed at prompts is presented in orange italicised bold text, while items to be selected from a drop-down menu or by clicking options on the screen are shown in bolded underlined text.
Options like clicking OK, UPDATE or SUBMIT are assumed, so not specifically stated unless required between sub-steps. Use your intelligence.

Part 1: Set the Connectivity Preference for the pod to ooband

Firstly, since the default interface to use for external connections is the inband interface, I’m going to set the Connectivity Preference for the pod to ooband – just in case I lose access to the management GUI while configuring this.

Fabric > Fabric Policies > Global Policies > Connectivity Preferences

Interface to use for external connections: ooband

Part 2: Configure the Access Policy Chain

This is a long slog – if you are not familiar with Cisco ACI Access Policies, you might want to read my earlier ACI Tutorials, especially Tutorial #4.

Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool

Name: inband-VLAN.Pool
Allocation Mode: Static Allocation
(+) Encap Blocks:
Range: VLAN 99 – VLAN 100

Fabric > Access Policies > Physical and External Domains > External Bridged Domains >+ Create Layer 2 Domain

Name: inband-ExtL2Dom
VLAN Pool: inband-VLAN.Pool

Fabric > Access Policies > Global Policies > Attachable Access Entity Profiles >+ Create Attachable Access Entity Profile

Name: inband-AEP
(+) Domains (VMM, Physical or External) To Be Associated To Interfaces:
Domain Profile: inband-ExtL2Dom

Fabric > Access Policies > Interface Policies > Policies > LLDP Interface >+ Create LLDP Interface Policy

Name: Enable-LLDP
[Leave default values – I just want to have a policy that spells out that LLDP is enabled]

Fabric > Access Policies > Interface Policies > Policy Groups > Leaf Policy Groups >+ Create Leaf Access Port Policy Group

Name: inband.LLDP-APPG
LLDP Policy: Enable-LLDP
Attached Entity Profile: inband-AEP

Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile

Name: L101-IntProf
(+) Interface Selectors:
Name: 1:10
Description: vCenter
Interface IDs: 1/10
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Now repeat for Leaf102

Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile

Name: L102-IntProf
(+) Interface Selectors:
Name: 1:10
Description: Mgmt Host
Interface IDs: 1/10
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile

Name: L101-LeafProf
(+) Leaf Selectors:
Name: Leaf101
Blocks: 101
UPDATE > NEXT
[x] L101-IntProf

And again for leaf 102

Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile

Name: L102-LeafProf
(+) Leaf Selectors:
Name: Leaf102
Blocks: 102
UPDATE > NEXT
[x] L102-IntProf

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

Before I can assign a static IP address to an APIC or switch, the GUI forces me to create a Node Management EPG, so begin by creating one – I’ll use the name Default because I don’t expect I’ll ever need another, but I’ll use an upper-case D to distinguish it from system created defaults, which always use a lowercase d.

Tenants > Tenant mgmt > Node Management EPGs >+ Create In-Band Management EPG

Name: Default
Encap: vlan-100
Bridge Domain: inb

Now I can create the Static Node Management Addresses.

Tenants > Tenant mgmt > Node Management Addresses > Static Node Management Addresses >+ Create Static Node Management Addresses

Node Range: 1 – 3
Config: In-Band Addresses
In-Band Management EPG: Default
In-Band IPV4 Address: 192.168.99.111/24
In-Band IPV4 Gateway: 192.168.99.1/24

[Tip: If you are following my steps, ignore the warning (as shown below). I already set the Interface to use for external connections to ooband, and in spite of the implication in the warning, your preference for management will NOT switch to In-Band]

inbabd-warning

Tedious as it was, I resisted the temptation to resort to the CLI, and repeated the above step for Nodes  101-102, and 201-202.

That default gateway IP address I defined on the nodes will reside in the inb Bridge Domain.

Tenants > Tenant mgmt > Networking > Bridge Domains > inb > Subnets  >+ Create subnet

Gateway IP: 192.168.99.1/24

That took care of the internal network – the APICs were able to ping the default gateway and the Leaf switches, verifying that the configurations were valid, although at this stage I was not able to ping the Spine switches.  However, I took heart from this video and assumed that all was OK.

apic1# ping -c 3 192.168.99.1
PING 192.168.99.1 (192.168.99.1) 56(84) bytes of data.
64 bytes from 192.168.99.1: icmp_seq=1 ttl=63 time=2.86 ms
64 bytes from 192.168.99.1: icmp_seq=2 ttl=63 time=0.827 ms
64 bytes from 192.168.99.1: icmp_seq=3 ttl=63 time=0.139 ms

--- 192.168.99.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.139/1.276/2.862/1.156 ms
apic1# ping -c 3 192.168.99.101
PING 192.168.99.101 (192.168.99.101) 56(84) bytes of data.
64 bytes from 192.168.99.101: icmp_seq=1 ttl=63 time=0.969 ms
64 bytes from 192.168.99.101: icmp_seq=2 ttl=63 time=0.176 ms
64 bytes from 192.168.99.101: icmp_seq=3 ttl=63 time=0.209 ms

--- 192.168.99.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.176/0.451/0.969/0.366 ms
apic1# ping -c 3 192.168.99.201
PING 192.168.99.201 (192.168.99.201) 56(84) bytes of data.
From 192.168.99.111 icmp_seq=1 Destination Host Unreachable
From 192.168.99.111 icmp_seq=2 Destination Host Unreachable
From 192.168.99.111 icmp_seq=3 Destination Host Unreachable

--- 192.168.99.201 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3005ms

I’ll need a contract to put between the L2EPG and the special management In-Band EPG – life will be easier if I create that first.

Tenants > Tenant mgmt > Security Policies > Contracts  >+ Create Contract

Name: inband.MgmtServices-Ct
Scope: VRF [Default]
(+) Subjects:
Name: inband.MgmtServices-Subj
Filter Chain
(+) Filters
Name: common/default

Now to create the L2Out and the L2EPG

Tenants > Tenant mgmt > Networking > External Bridged Networks  >+ Create Bridged Outside

Name: inband.VLAN99-L2Out
External Bridged Domain: inband-ExtL2Dom
Bridge Domain: mgmt/inb
Encap: VLAN 99
Nodes And Interfaces Protocol Profiles
Path Type: port
Path: Pod1/Node-101/eth1/10
ADD
Path: Pod1/Node-102/eth1/10
ADD>NEXT
(+) External EPG Networks
Name: inband.VLAN99-L2EPG

Have the L2EPG consume the contract I created earlier:

Tenants > Tenant mgmt > Networking > External Bridged Networks  > inband.VLAN99-L2Out > Networks > inband.VLAN99-L2EPG 

(+) Consumed Contracts:
Name: mgmt/inband.MgmtServices-Ct

And the In-Band EPG Provide it:

Tenants > Tenant mgmt > Node Management EPGs > In-Band EPG Default

(+) Provided Contracts:
Name: mgmt/inband.MgmtServices-Ct

Time to test!

To be confident that I will now be able to deploy a VMM Domain with connectivity to the Virtual Machine Manager (vCenter in my case), I’ll ping the VMM server from the APIC.

apic1# ping -c 3 192.168.99.99
PING 192.168.99.99 (192.168.99.99) 56(84) bytes of data.
64 bytes from 192.168.99.99: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.99: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.99: icmp_seq=3 ttl=64 time=0.238 ms

--- 192.168.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address:

apic-access

Step-by-Step: Configuring In-Band management via a L2 Out using the CLI

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail.  The following commands are entered in configuration mode.


Part 1: Set the Connectivity Preference for the pod to ooband

mgmt_connectivity pref ooband

Part 2: Configure the Access Policy Chain

# First, create the VLAN Pool and External L2 Domain
# If you type the command below, you may notice a curious thing -
# at the point where the word "type" appears, if you press "?"
# you will see options for <CR> and "dynamic", but not "type".
# In other words, "type" is a hidden option - I discovered it
# by creating a domain in the GUI and looking at the running
# config later.
  vlan-domain inband-ExtL2Dom type l2ext
    vlan-pool inband-VLAN.Pool
    vlan 99-100
    exit

# And an Access Port Policy Group linked to the inband-ExtL2Dom
  template policy-group inband.LLDP-APPG
# Another curious thing with the CLI is that there is no way
# to create an AEP - one gets created for you whether you
# want it or not when you link the APPG to the Domain in the
# following command.
    vlan-domain member inband-ExtL2Dom type l2ext
    exit

# Not necessary to create an Interface Policy to Enable-LLDP in the
# CLI, Interface Policies are applied directly to the interfaces

# Now the Leaf Profiles, Interface Profiles and Port Selectors
  leaf-profile L101-LeafProf
    leaf-group Leaf101
      leaf 101
      exit
    leaf-interface-profile L101-IntProf
    exit
  leaf-profile L102-LeafProf
    leaf-group Leaf102
      leaf 102
      exit
    leaf-interface-profile L102-IntProf
    exit

  leaf-interface-profile L101-IntProf
    leaf-interface-group 1:10
      description 'vCenter'
      interface ethernet 1/10
      policy-group inband.LLDP-APPG
      exit
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

  leaf-interface-profile L102-IntProf
    leaf-interface-group 1:10
      description 'Mgmt Host'
      interface ethernet 1/10
      policy-group inband.LLDP-APPG
      exit
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

# Node IP addressing is configured OUTSIDE the mgmt
# Tenant in the CLI, so I'll do the mgmt Tenant bits
# first, in the order that best fits - defining the
# contract first means I can configure the AP in one hit

  tenant mgmt
    contract inband.MgmtServices-Ct
      subject inband.MgmtServices-Subj
        access-group default both
        exit
      exit

    external-l2 epg inband.VLAN99-L2Out:inband.VLAN99-L2EPG
      bridge-domain member inb
      contract consumer inband.MgmtServices-Ct
      exit

    inband-mgmt epg Default
      contract provider inband.MgmtServices-Ct
      bridge-domain inb
      vlan 100
      exit

    interface bridge-domain inb
      ip address 192.168.99.1/24 secondary
      exit
    exit

# Now the Node IP addressing

  controller 1
    interface inband-mgmt0
      ip address 192.168.99.111/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 2
    interface inband-mgmt0
      ip address 192.168.99.112/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 3
    interface inband-mgmt0
      ip address 192.168.99.113/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit

  switch 101
    interface inband-mgmt0
      ip address 192.168.99.101/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 102
    interface inband-mgmt0
      ip address 192.168.99.102/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 201
    interface inband-mgmt0
      ip address 192.168.99.201/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 202
    interface inband-mgmt0
      ip address 192.168.99.202/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit

# Finally, apply vlan configuration to the
# physical interfaces where necessary

  leaf 101
    interface ethernet 1/10
      switchport trunk allowed vlan 99 tenant mgmt external-l2 epg inband.VLAN99-L2Out:inband.VLAN99-L2EPG
      exit
    exit

  leaf 102
    interface ethernet 1/10
      switchport trunk allowed vlan 99 tenant mgmt external-l2 epg inband.VLAN99-L2Out:inband.VLAN99-L2EPG
      exit
    exit

Time to test!

To be confident that I will now be able to manage the APIC from my management host, I’ll ping the Mgmt Host from the APIC.

apic1# ping -c 3 192.168.99.10
PING 192.168.99.10 (192.168.99.10) 56(84) bytes of data.
64 bytes from 192.168.99.10: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.10: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.10: icmp_seq=3 ttl=64 time=0.238 ms

--- 192.168.99.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address – only this time, for a change, I’ll use SSH, and connect to APIC#2.

[Screenshot: SSH session to APIC#2 via its in-band management address]

One interesting thing to note in the CLI configuration is that nowhere do you create an Attachable Access Entity Profile (AEP). But when you enter the above commands, one miraculously appears in the GUI (called __ui_pg_inband.LLDP-APPG).

[Screenshot: the miraculously created __ui_pg_inband.LLDP-APPG AEP in the GUI]

A further series of mysteries occurs in the mgmt Tenant, even if you go through the CLI config from a clean configuration. While entering the commands above in the CLI, the APIC will automatically add an Application Profile (called default) with an EPG (also called default). But it doesn’t stop there! There is also another Node Management EPG (called default) magically created, and a mystical contract (called inband-default-contract) with a link to a mysterious filter (called inband-default). I have no idea why, but here are some commands to clean up the crap left behind.

# Remove crap left behind by previous CLI commands
tenant mgmt
  no application default
  no contract inband-default-contract
  no inband-mgmt epg default
  no access-list inband-default

Step-by-Step: Configuring In-Band management via a L2 Out using the API

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail.  The following sections can be saved to a text file (with a .xml extension) and posted to your config using the GUI (using right-click > Post …), or you can copy and paste the sections below into Postman.


Right-click > Post … Tutorial

Assume one of the sections below is stored in a text file with a .xml extension, such as connectivityPrefs.xml.

In the APIC GUI, any configuration item that has Post … as one of the right-click options can be used to post the file.

[Screenshot: the right-click > Post … menu option]

The contents of the .xml file must be posted to the uni Parent Distinguished Name (DN) as shown below:

[Screenshot: posting the file to the uni Parent DN]

The configuration defined in the .xml file will have been pushed into your config:

[Screenshot: the updated Connectivity Preferences]

End of tutorial
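If you’d rather script the posting than click through the GUI or use Postman, the same thing can be done in a few lines of Python. This is just a minimal sketch using the requests library – the APIC address, username and password are hypothetical placeholders, and a lab APIC will typically need certificate verification disabled:

#!/usr/bin/env python
# post_xml.py - sketch only: authenticate to the APIC and post an XML file
# APIC_HOST, USER and PASSWORD below are hypothetical placeholders
import sys
import requests

APIC_HOST = "192.168.99.111"
USER, PASSWORD = "admin", "password"

def main(xml_file):
    session = requests.Session()
    session.verify = False  # lab APICs usually present self-signed certificates
    # Log in; the APIC returns a token which requests keeps as the APIC-cookie
    login = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
    session.post("https://%s/api/aaaLogin.json" % APIC_HOST, json=login).raise_for_status()
    # Post the XML payload to the uni parent DN, just like right-click > Post ...
    with open(xml_file) as f:
        reply = session.post("https://%s/api/mo/uni.xml" % APIC_HOST, data=f.read())
    reply.raise_for_status()
    print(reply.text)  # an <error> element in the reply means the APIC rejected the config

if __name__ == "__main__":
    main(sys.argv[1])  # e.g. python post_xml.py connectivityPrefs.xml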


Part 1: Set the Connectivity Preference for the pod to ooband

<?xml version="1.0" encoding="UTF-8"?>
<!-- connectivityPrefs.xml -->
<mgmtConnectivityPrefs dn="uni/fabric/connectivityPrefs" interfacePref="ooband"/>

Part 2: Configure the Access Policy Chain

Save each of these snippets in a separate .xml file and post one at a time.  Or use Postman and copy and paste.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the VLAN Pool -->
<fvnsVlanInstP allocMode="static" dn="uni/infra/vlanns-[inband-VLAN.Pool]-static" name="inband-VLAN.Pool">
    <fvnsEncapBlk from="vlan-99" to="vlan-100"/>
</fvnsVlanInstP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the External L2 Domain, assign it the VLAN Pool -->
<l2extDomP dn="uni/l2dom-inband-ExtL2Dom" name="inband-ExtL2Dom">
	<infraRsVlanNs tDn="uni/infra/vlanns-[inband-VLAN.Pool]-static"/>
</l2extDomP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Attachable Access Entity Profile (AEP) -->
<infraAttEntityP descr="" dn="uni/infra/attentp-inband-AEP" name="inband-AEP">
  <infraRsDomP tDn="uni/l2dom-inband-ExtL2Dom"/>
</infraAttEntityP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Enable-LLDP Interface Policy -->
<lldpIfPol adminRxSt="enabled" adminTxSt="enabled" dn="uni/infra/lldpIfP-Enable-LLDP" />
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Access Port Policy Group -->
<infraAccPortGrp dn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG" name="inband.LLDP-APPG">
    <infraRsAttEntP tDn="uni/infra/attentp-inband-AEP"/>
    <infraRsLldpIfPol tnLldpIfPolName="Enable-LLDP"/>
</infraAccPortGrp>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Two Interface Profiles will be needed - first one for Leaf101 -->
<infraAccPortP dn="uni/infra/accportprof-L101-IntProf" name="L101-IntProf">
    <!-- Add an interface selector for the vCenter Server -->
    <infraHPortS descr="vCenter" name="1:10" type="range">
        <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="10" name="block1" toCard="1" toPort="10"/>
    </infraHPortS>
    <!-- Add the ports where the APICs are connected -->
    <infraHPortS descr="APICs" name="1:46-48" type="range">
        <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="46" name="block1" toCard="1" toPort="48"/>
    </infraHPortS>
</infraAccPortP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Another Interface Profile for Leaf102 -->
<infraAccPortP dn="uni/infra/accportprof-L102-IntProf" name="L102-IntProf">
    <!-- Add an interface selector for the Mgmt Host -->
    <infraHPortS descr="Mgmt Host" name="1:10" type="range">
        <infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="10" name="block2" toCard="1" toPort="10"/>
    </infraHPortS>
    <!-- Add the ports where the APICs are connected -->
    <infraHPortS descr="APICs" name="1:46-48" type="range">
        <infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="46" name="block2" toCard="1" toPort="48"/>
    </infraHPortS>
</infraAccPortP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L101-LeafProf" name="L101-LeafProf">
    <infraLeafS name="Leaf101" type="range">
        <infraNodeBlk name ="Default" from_="101" to_="101"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-L101-IntProf"/>
</infraNodeP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L102-LeafProf" name="L102-LeafProf">
    <infraLeafS name="Leaf102" type="range">
        <infraNodeBlk name ="Default" from_="102" to_="102"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-L102-IntProf"/>
</infraNodeP>

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

<?xml version="1.0" encoding="UTF-8"?>
<!-- api/policymgr/mo/.xml -->
<polUni>
  <fvTenant name="mgmt">
    <mgmtMgmtP name="default">

      <!-- Create a Node Management EPG -->
      <mgmtInB encap="vlan-100" name="Default">
        <!-- Assign Addresses for APICs In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.111/24" gw="192.168.99.1" tDn="topology/pod-1/node-1"/>
        <mgmtRsInBStNode addr="192.168.99.112/24" gw="192.168.99.1" tDn="topology/pod-1/node-2"/>
        <mgmtRsInBStNode addr="192.168.99.113/24" gw="192.168.99.1" tDn="topology/pod-1/node-3"/>
        <!-- Assign Addresses for switches In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.101/24" gw="192.168.99.1" tDn="topology/pod-1/node-101"/>
        <mgmtRsInBStNode addr="192.168.99.102/24" gw="192.168.99.1" tDn="topology/pod-1/node-102"/>
        <mgmtRsInBStNode addr="192.168.99.201/24" gw="192.168.99.1" tDn="topology/pod-1/node-201"/>
        <mgmtRsInBStNode addr="192.168.99.202/24" gw="192.168.99.1" tDn="topology/pod-1/node-202"/>
        <!-- The Node Management EPG will be the provider for the Contract -->
        <mgmtRsMgmtBD tnFvBDName="inb"/>
        <fvRsProv tnVzBrCPName="inband.MgmtServices-Ct"/>
      </mgmtInB>
    </mgmtMgmtP>

    <!-- Create the Contract Assigned to the Default Node Management EPG -->
    <vzBrCP name="inband.MgmtServices-Ct" scope="context">
      <vzSubj name="inband.MgmtServices-Subj">
        <!-- Use the common/default filter -->
        <vzRsSubjFiltAtt directives="" tnVzFilterName="default"/>
      </vzSubj>
    </vzBrCP>

    <!-- Assign IP address to inb BD -->
    <fvBD name="inb">
      <fvSubnet ip="192.168.99.1/24" />
    </fvBD>

	<!-- Create the L2Out and its associated L2EPG -->
	<l2extOut name="inband.VLAN99-L2Out">
		<l2extLNodeP name="default">
			<l2extLIfP name="default">
				<l2extRsPathL2OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/10]"/>
				<l2extRsPathL2OutAtt tDn="topology/pod-1/paths-102/pathep-[eth1/10]"/>
			</l2extLIfP>
		</l2extLNodeP>
		<l2extRsL2DomAtt tDn="uni/l2dom-inband-ExtL2Dom"/>
		<l2extRsEBd encap="vlan-99" tnFvBDName="inb"/>
		<l2extInstP name="inband.VLAN99-L2EPG">
			<!-- The L2EPG will consume the Contract -->
			<fvRsCons tnVzBrCPName="inband.MgmtServices-Ct"/>
		</l2extInstP>
	</l2extOut>
</fvTenant>
</polUni>
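Before jumping to pings, it doesn’t hurt to ask the APIC itself whether the new objects raised any faults. A minimal sketch along the same lines as the posting script above (it assumes the same authenticated requests session and the hypothetical APIC_HOST placeholder), querying the fault table for anything under the mgmt tenant:

# fault_check.py - sketch only: list faults raised under the mgmt tenant
# Assumes "session" and APIC_HOST from the posting sketch earlier in this article
def mgmt_tenant_faults(session, apic_host):
    # Class query for faultInst objects whose DN contains uni/tn-mgmt
    url = ("https://%s/api/class/faultInst.json"
           "?query-target-filter=wcard(faultInst.dn,\"uni/tn-mgmt\")" % apic_host)
    reply = session.get(url)
    reply.raise_for_status()
    for obj in reply.json()["imdata"]:
        attrs = obj["faultInst"]["attributes"]
        print(attrs["severity"], attrs["code"], attrs["descr"])

An empty result is what you want to see.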

Again, I’ll test by pinging the vCenter server from apic#3 for a change, and browse to the Visore interface of the APIC from the Mgmt Host.

apic3# ping -c 3 192.168.99.99
PING 192.168.99.99 (192.168.99.99) 56(84) bytes of data.
64 bytes from 192.168.99.99: icmp_seq=1 ttl=64 time=0.302 ms
64 bytes from 192.168.99.99: icmp_seq=2 ttl=64 time=0.221 ms
64 bytes from 192.168.99.99: icmp_seq=3 ttl=64 time=0.204 ms

--- 192.168.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.204/0.242/0.302/0.044 ms

The fact that the login screen comes up is proof that the Mgmt Host has connectivity to the APICs.

[Screenshot: the Visore login screen]

In the next installment, I will configure In-Band management so the fabric can be managed from an external network via an L3 Out.

RedNectar

Note: If you would like the author or one of my colleagues to assist with the setup of your ACI installation, contact acimentor@housley.com.au and refer to this article. Housley works mainly around APJC, but is not restricted to this area.

References:

Cisco’s official ACI management documentation – I have informed Cisco of the fact that this documentation is not up to scratch – hopefully it will be fixed soon.

The Cisco APIC NX-OS Style Command-Line Interface Configuration Guide – especially the chapter on Configuring Management Interfaces was particularly helpful – much better than the reference above.

Also Cisco’s ACI Troubleshooting Book had a couple of hints about how things hang together.

Carl Niger’s YouTube video series was helpful – I recommend it to you.

Cisco’s pathetic video on configuring In-Band management is simply not worth wasting your time on.  But it’s included here since I referred to it.


Configuring In-Band Management for the APIC on Cisco ACI (Part #1 – via an EPG)

Anyone unlucky enough to try and configure In-Band management on the Cisco APIC will have probably realised that it is not a simple task – which is probably why many Cisco support forum experts advise using out of band (oob) management instead [link].

And anyone unlucky enough to try and decipher Cisco’s official documentation for configuring In-Band management on the Cisco APIC, or to watch their pathetic video (which simply does not work – it does not complete the job), is probably feeling frustrated to the point of giving up.

Let me ease your frustration and take you through a journey showing you how to configure In-Band management for ACI in a variety of ways:

  1. Via an EPG (in the mgmt Tenant): this post
    1. using the GUI
    2. using the CLI
    3. using the API
  2. Via an external bridged network (L2 Out) (Part #2 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API
  3. Via an external routed network (L3 Out) (Part #3 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API

In-Band Management Via an EPG in the mgmt Tenant

Let’s begin with a diagram showing my test setup for the EPG approach.

IP addressing for the Leaf and Spine switches will use the switch ID in the fourth octet of the 192.168.99.0/24 network. E.g., Spine 201 will be 192.168.99.201. The default gateway address to be configured on the inb Bridge Domain in the mgmt tenant will be 192.168.99.1.

So let me plan exactly what will need to be done:

The Access Policy Chain

I’ll need to allocate VLAN IDs for the internal inband management EPG (VLAN 100) and, because there have to be two EPGs involved, another for the user-facing EPG (VLAN 99). I’ll put them in a VLAN Pool, which will connect to a Physical Domain, which in turn will need to link to an AEP that has appropriate Access Port Policy Group assignments linking the AEP to the relevant attachment ports of the APICs, the vCenter host and the ACI Management host. Like the picture shows.

The mgmt Tenant

In the mgmt Tenant there are a number of tasks I’ll have to do.

I’ll need to create a special EPG called an In-band EPG.  This will have to be done before assigning the static addresses I want to the APICs, Leaves and Spines.

I’ll assign the default gateway IP address to the pre-defined inb Bridge Domain in the mgmt Tenant, and then create a second EPG (inband.Default-EPG) linked to the same inb Bridge Domain and linked to the inband-PhysDom that was created in the Access Policy Chain.

I’ll statically assign the ports (Ethernet 1/10 on Leaf101 and Leaf102) to the inband.Default-EPG using VLAN 99 encapsulation, making sure the link to the management host (on Leaf102) is untagged.

Finally, I’ll need to create a Contract (inband.MgmtServices-Ct) which will use the common/default filter to allow all traffic, and of course I’ll have to link the contract to the special In-Band EPG (provider) and the inband.Default-EPG (consumer) mentioned above.

Again, a picture tells the story:

If all goes well, when both the Access Policies and the Tenant configuration are complete, the APIC will be able to manage the vCenter VMM, and the Management Station bare metal server will be able to manage the ACI fabric via the APIC IP addresses.

Enough of design, time to start configuring!

Step-by-Step: Configuring inband management for an EPG via the GUI

Conventions

Cisco APIC Advanced GUI Menu Selection sequences are displayed in Bolded Blue text, with >+ meaning Right-click and select so that the following line:
Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool
should be interpreted as:
From the Cisco APIC Advanced GUI Main Menu, select Fabric
From the sub-menu, select Access Policies
In the Navigation Pane, expand Pools, then on the VLAN sub-item, right-click and select Create VLAN Pool.
If a particular tab in the Work Pane needs to be selected, it will be inserted into the sequence in square brackets, such as:
… > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab 
Within the Work Pane and within some dialogues, it will be necessary to click on a + icon to add an item. This is indicated by a (+) followed by the name of the item that needs to be added, so that:
(+) Interface Selectors:
should be interpreted as
Click the + icon adjacent the Interface Selectors: prompt.
Text that needs to be typed at prompts is presented in  orange italicised bold text, while items to be selected from a drop down menu or by clicking options on the screen are shown in bolded underlined text.
Options like clicking OK, UPDATE or SUBMIT are assumed, so not specifically stated unless required between sub-steps. Use your intelligence.

Part 1: Set the Connectivity Preference for the pod to ooband

Firstly, since the default interface to use for external connections is the inband interface, I’m going to set the Connectivity Preference for the pod to ooband – just in case I lose access to the management GUI while configuring this.

Fabric > Fabric Policies > Global Policies > Connectivity Preferences

Interface to use for external connections: ooband

Part 2: Configure the Access Policy Chain

This is a long slog – if you are not familiar with Cisco ACI Access Policies, you might want to read my earlier ACI Tutorials, especially Tutorial #4.

Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool

Name: inband-VLAN.Pool
Allocation Mode: Static Allocation
(+) Encap Blocks:
Range: VLAN 99 – VLAN 100

Fabric > Access Policies > Physical and External Domains > Physical Domains >+ Create Physical Domain

Name: inband-PhysDom
VLAN Pool: inband-VLAN.Pool

Fabric > Access Policies > Global Policies > Attachable Access Entity Profiles >+ Create Attachable Access Entity Profile

Name: inband-AEP
(+) Domains (VMM, Physical or External) To Be Associated To Interfaces:
Domain Profile: inband-PhysDom

Fabric > Access Policies > Interface Policies > Policies > LLDP Interface >+ Create LLDP Interface Policy

Name: Enable-LLDP
[Leave default values – I just want to have a policy that spells out that LLDP is enabled]

Fabric > Access Policies > Interface Policies > Policy Groups > Leaf Policy Groups >+ Create Leaf Access Port Policy Group

Name: inband.LLDP-APPG
LLDP Policy: Enable-LLDP
Attached Entity Profile: inband-AEP

Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile

Name: L101-IntProf
(+) Interface Selectors:
Name: 1:10
Description: vCenter
Interface IDs: 1/10
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Now repeat for Leaf102

Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile

Name: L102-IntProf
(+) Interface Selectors:
Name: 1:10
Description: Mgmt Host
Interface IDs: 1/10
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile

Name: L101-LeafProf
(+) Leaf Selectors:
Name: Leaf101
Blocks: 101
UPDATE > NEXT
[x] L101-IntProf

And again for leaf 102

Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile

Name: L102-LeafProf
(+) Leaf Selectors:
Name: Leaf102
Blocks: 102
UPDATE > NEXT
[x] L102-IntProf

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

Before I can assign a static IP address to an APIC or switch, the GUI forces me to create a Node Management EPG, so I'll begin by creating one. I’ll use the name Default because I don’t expect I’ll ever need another, but I’ll use an upper-case D to distinguish it from system-created defaults, which always use a lowercase d.

Tenants > Tenant mgmt > Node Management EPGs >+ Create In-Band Management EPG

Name: Default
Encap: vlan-100
Bridge Domain: inb

Now I can create the Static Node Management Addresses.

Tenants > Tenant mgmt > Node Management Addresses > Static Node Management Addresses >+ Create Static Node Management Addresses

Node Range: 1 – 3
Config: In-Band Addresses
In-Band Management EPG: Default
In-Band IPV4 Address: 192.168.99.111/24
In-Band IPV4 Gateway: 192.168.99.1/24

[Tip: If you are following my steps, ignore the warning (as shown below).  I already set the Interface to use for external connections to ooband, and in spite of what the warning implies, your preference for management will NOT switch to In-Band]

[Screenshot: the in-band address warning dialog]

Tedious as it was, I resisted the temptation to resort to the CLI, and repeated the above step for Nodes 101-102, and 201-202.

That default gateway IP address I defined on the nodes will reside in the inb Bridge Domain.

Tenants > Tenant mgmt > Networking > Bridge Domains > inb > Subnets  >+ Create subnet

Gateway IP: 192.168.99.1/24

That took care of the internal network – the APICs were able to ping the default gateway and the Leaf switches, verifying that the configuration was valid, although at this stage I was not able to ping the Spine switches.  However, I took heart from this video and assumed that all was OK.

apic1# ping -c 3 192.168.99.1
PING 192.168.99.1 (192.168.99.1) 56(84) bytes of data.
64 bytes from 192.168.99.1: icmp_seq=1 ttl=63 time=2.86 ms
64 bytes from 192.168.99.1: icmp_seq=2 ttl=63 time=0.827 ms
64 bytes from 192.168.99.1: icmp_seq=3 ttl=63 time=0.139 ms

--- 192.168.99.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.139/1.276/2.862/1.156 ms
apic1# ping -c 3 192.168.99.101
PING 192.168.99.101 (192.168.99.101) 56(84) bytes of data.
64 bytes from 192.168.99.101: icmp_seq=1 ttl=63 time=0.969 ms
64 bytes from 192.168.99.101: icmp_seq=2 ttl=63 time=0.176 ms
64 bytes from 192.168.99.101: icmp_seq=3 ttl=63 time=0.209 ms

--- 192.168.99.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.176/0.451/0.969/0.366 ms
apic1# ping -c 3 192.168.99.201
PING 192.168.99.201 (192.168.99.201) 56(84) bytes of data.
From 192.168.99.111 icmp_seq=1 Destination Host Unreachable
From 192.168.99.111 icmp_seq=2 Destination Host Unreachable
From 192.168.99.111 icmp_seq=3 Destination Host Unreachable

--- 192.168.99.201 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3005ms

To allow access from my vCenter VMM and external management host, I’ll need to create an EPG.  Since this EPG will need to have a contract with the mgmt Tenant’s special In-Band EPG, it will be easiest to create the EPG in the mgmt Tenant, but there is no reason why I couldn’t have created this EPG in a different Tenant so long as I had configured all the fiddly pieces that go along with consuming a contract from another Tenant.

But before I create the EPG, I’ll need an Application Profile:

Tenants > Tenant mgmt > Application Profiles >+ Create Application Profile

Name: inband.Default-AP

Now I can create the EPG:

Tenants > Tenant mgmt > Application Profiles > inband.Default-AP > Application EPGs >+ Create Application EPG

Name: inband.Default-EPG
Bridge Domain: mgmt/inb

I’ll need to provide the link between the EPG and the Access Policy chain via the Physical Domain

Tenants > Tenant mgmt > Application Profiles > inband.Default-AP > Application EPGs > inband.Default-EPG > Domains >+ Add Physical Domain Association

Physical Domain Profile: inband-PhysDom

And finally, I’ll link the EPG to the ports connected to vCenter and the Mgmt Host using VLAN 99 – Tagged on Leaf101 (vCenter) and untagged on Leaf102 (Mgmt Host)

Tenants > Tenant mgmt > Application Profiles > inband.Default-AP > Application EPGs > inband.Default-EPG > Static Ports >+ Deploy Static EPG on PC, VPC, or Interface

Path Type: Port
Path: Pod-1/Node-101/eth1/10
Port Encap (…): VLAN 99
Mode: Trunk [Default]

I had to remember to make interface 1/10 untagged on Leaf102 – that is where the Mgmt Host is attached.

Tenants > Tenant mgmt > Application Profiles > inband.Default-AP > Application EPGs > inband.Default-EPG > Static Ports >+ Deploy Static EPG on PC, VPC, or Interface

Path Type: Port
Path: Pod-1/Node-102/eth1/10
Port Encap (…): VLAN 99
Mode: Access (Untagged)

That’s created both the Application Profile and the EPG. Now I’ll need a Contract with a Subject that links to the common/default filter to allow all traffic.  If I had wanted to be more restrictive, I could of course have created and linked my own filter.

Tenants > Tenant mgmt > Security Policies > Contracts  >+ Create Contract

Name: inband.MgmtServices-Ct
Scope: VRF [Default]
(+) Subjects:
Name: inband.MgmtServices-Subj
Filter Chain
(+) Filters
Name: common/default

And finally, I’ll apply the contract so that it is provided by the special In-Band EPG and consumed by the EPG I created (inband.Default-EPG):

Tenants > Tenant mgmt > Node Management EPGs > In-Band EPG Default

(+) Provided Contracts:
Name: mgmt/inband.MgmtServices-Ct

Tenants > Tenant mgmt > Application Profiles > inband.Default-AP > Application EPGs > EPG inband.Default-EPG > Contracts >+ Add Consumed Contract

Contract: mgmt/inband.MgmtServices-Ct

Time to test!

To be confident that I will now be able to deploy a VMM Domain with connectivity to the Virtual Machine Manager (vCenter in my case), I’ll ping the VMM server from the APIC.

apic1# ping -c 3 192.168.99.99
PING 192.168.99.99 (192.168.99.99) 56(84) bytes of data.
64 bytes from 192.168.99.99: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.99: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.99: icmp_seq=3 ttl=64 time=0.238 ms

--- 192.168.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address:

[Screenshot: the APIC GUI reached via the in-band management address]

Step-by-Step: Configuring inband management for an EPG via the CLI

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail.  The following commands are entered in configuration mode.


Part 1: Set the Connectivity Preference for the pod to ooband

mgmt_connectivity pref ooband

Part 2: Configure the Access Policy Chain

# First, create the VLAN Pool and Physical Domain
# If you type the command below, you may notice a curious thing -
# at the point where the word "type" appears, if you press "?"
# you will see options for <CR> and "dynamic", but not "type".
# In other words, "type" is a hidden option - I discovered it
# by creating a domain in the GUI and looking at the running
# config later.
  vlan-domain inband-PhysDom type phys
    vlan-pool inband-VLAN.Pool
    vlan 99-100
    exit

# And an Access Port Policy Group linked to the inband-PhysDom
  template policy-group inband.LLDP-APPG
# Another curious thing with the CLI is that there is no way
# to create an AEP - one gets created for you whether you
# want it or not when you link the APPG to the Domain in the
# following command.
    vlan-domain member inband-PhysDom type phys
    exit

# Not necessary to create an Interface Policy to Enable-LLDP in the
# CLI; Interface Policies are applied directly to the interfaces

# Now the Leaf Profiles, Interface Profiles and Port Selectors
  leaf-profile L101-LeafProf
    leaf-group Leaf101
      leaf 101
      exit
    leaf-interface-profile L101-IntProf
    exit
  leaf-profile L102-LeafProf
    leaf-group Leaf102
      leaf 102
      exit
    leaf-interface-profile L102-IntProf
    exit

  leaf-interface-profile L101-IntProf
    leaf-interface-group 1:10
      description 'vCenter'
      interface ethernet 1/10
      policy-group inband.LLDP-APPG
      exit
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

  leaf-interface-profile L102-IntProf
    leaf-interface-group 1:10
      description 'Mgmt Host'
      interface ethernet 1/10
      policy-group inband.LLDP-APPG
      exit
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

# Node IP addressing is configured OUTSIDE the mgmt
# Tenant in the CLI, so I'll do the mgmt Tenant bits
# first, in the order that best fits - defining the
# contract first means I can configure the AP in one hit

  tenant mgmt
    contract inband.MgmtServices-Ct
      subject inband.MgmtServices-Subj
        access-group default both
        exit
      exit

    application inband.Default-AP
      epg inband.Default-EPG
        bridge-domain member inb
        contract consumer inband.MgmtServices-Ct
        exit
      exit

    inband-mgmt epg Default
      contract provider inband.MgmtServices-Ct
      bridge-domain inb
      vlan 100
      exit

    interface bridge-domain inb
      ip address 192.168.99.1/24 secondary
      exit
    exit

# Now the Node IP addressing

  controller 1
    interface inband-mgmt0
      ip address 192.168.99.111/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 2
    interface inband-mgmt0
      ip address 192.168.99.112/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 3
    interface inband-mgmt0
      ip address 192.168.99.113/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit

  switch 101
    interface inband-mgmt0
      ip address 192.168.99.101/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 102
    interface inband-mgmt0
      ip address 192.168.99.102/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 201
    interface inband-mgmt0
      ip address 192.168.99.201/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 202
    interface inband-mgmt0
      ip address 192.168.99.202/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit

# Finally, apply vlan configuration to the
# physical interfaces where necessary

  leaf 101
    interface ethernet 1/10
      switchport trunk allowed vlan 99 tenant mgmt application inband.Default-AP epg inband.Default-EPG
      exit
    exit

  leaf 102
    interface ethernet 1/10
      switchport access vlan 99 tenant mgmt application inband.Default-AP epg inband.Default-EPG
      exit
    exit

Time to test!

To be confident that I will now be able to manage the APIC from my management host, I’ll ping the Mgmt Host from the APIC.

apic1# ping -c 3 192.168.99.10
PING 192.168.99.10 (192.168.99.10) 56(84) bytes of data.
64 bytes from 192.168.99.10: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.10: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.10: icmp_seq=3 ttl=64 time=0.238 ms

--- 192.168.99.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address – only this time, for a change, I’ll use SSH, and connect to APIC#2.

[Screenshot: SSH session to APIC#2 via its in-band management address]

One interesting thing to note in the CLI configuration is that nowhere do you create an Attachable Access Entity Profile (AEP). But when you enter the above commands, one miraculously appears in the GUI (called __ui_pg_inband.LLDP-APPG).

[Screenshot: the miraculously created __ui_pg_inband.LLDP-APPG AEP in the GUI]

A further series of mysteries occurs in the mgmt Tenant, even if you go through the CLI config from a clean configuration. While entering the commands above in the CLI, the APIC will automatically add an Application Profile (called default) with an EPG (also called default). But it doesn’t stop there! There is also another Node Management EPG (called default) magically created, and a mystical contract (called inband-default-contract) with a link to a mysterious filter (called inband-default). I have no idea why, but here are some commands to clean up the crap left behind.

# Remove crap left behind by previous CLI commands
tenant mgmt
  no application default
  no contract inband-default-contract
  no inband-mgmt epg default
  no access-list inband-default

Step-by-Step: Configuring inband management for an EPG via the API

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail.  The following sections can be saved to a text file (with a .xml extension) and posted to your config using the GUI (using right-click > Post …), or you can copy and paste the sections below into Postman.


Right-click > Post … Tutorial

Assume one of the sections below is stored in a text file with a .xml extension, such as connectivityPrefs.xml.

In the APIC GUI, any configuration item that has Post … as one of the right-click options can be used to post the file.

[Screenshot: the right-click > Post … menu option]

The contents of the .xml file must be posted to the uni Parent Distinguished Name (DN) as shown below:

[Screenshot: posting the file to the uni Parent DN]

The configuration defined in the .xml file will have been pushed into your config:

[Screenshot: the updated Connectivity Preferences]

End of tutorial


Part 1: Set the Connectivity Preference for the pod to ooband

<?xml version="1.0" encoding="UTF-8"?>
<!-- connectivityPrefs.xml -->
<mgmtConnectivityPrefs dn="uni/fabric/connectivityPrefs" interfacePref="ooband"/>
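
If you want to confirm the post took effect without opening the GUI, you can read the object straight back from the API. A minimal sketch, assuming an authenticated requests session against the APIC as in the posting sketch from the Part #2 article (session and APIC_HOST are hypothetical placeholders):

# Sketch only: read back the Connectivity Preference after posting
reply = session.get("https://%s/api/mo/uni/fabric/connectivityPrefs.json" % APIC_HOST)
reply.raise_for_status()
attrs = reply.json()["imdata"][0]["mgmtConnectivityPrefs"]["attributes"]
print(attrs["interfacePref"])  # should print: ooband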

Part 2: Configure the Access Policy Chain

Save each of these snippets in a separate .xml file and post one at a time.  Or use Postman and copy and paste.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the VLAN Pool -->
<fvnsVlanInstP allocMode="static" dn="uni/infra/vlanns-[inband-VLAN.Pool]-static" name="inband-VLAN.Pool">
    <fvnsEncapBlk from="vlan-99" to="vlan-100"/>
</fvnsVlanInstP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the Physical Domain, assign it the VLAN Pool -->
<physDomP dn="uni/phys-inband-PhysDom" name="inband-PhysDom">
    <infraRsVlanNs tDn="uni/infra/vlanns-[inband-VLAN.Pool]-static"/>
</physDomP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Attachable Access Entity Profile (AEP) -->
<infraAttEntityP descr="" dn="uni/infra/attentp-inband-AEP" name="inband-AEP">
  <infraRsDomP tDn="uni/phys-inband-PhysDom"/>
</infraAttEntityP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Enable-LLDP Interface Policy -->
<lldpIfPol adminRxSt="enabled" adminTxSt="enabled" dn="uni/infra/lldpIfP-Enable-LLDP" />
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Access Port Policy Group -->
<infraAccPortGrp dn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG" name="inband.LLDP-APPG">
    <infraRsAttEntP tDn="uni/infra/attentp-inband-AEP"/>
    <infraRsLldpIfPol tnLldpIfPolName="Enable-LLDP"/>
</infraAccPortGrp>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Two Interface Profiles will be needed - first one for Leaf101 -->
<infraAccPortP dn="uni/infra/accportprof-L101-IntProf" name="L101-IntProf">
    <!-- Add an interface selector for the vCenter Server -->
    <infraHPortS descr="vCenter" name="1:10" type="range">
        <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="10" name="block1" toCard="1" toPort="10"/>
    </infraHPortS>
    <!-- Add the ports where the APICs are connected -->
    <infraHPortS descr="APICs" name="1:46-48" type="range">
        <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="46" name="block1" toCard="1" toPort="48"/>
    </infraHPortS>
</infraAccPortP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Another Interface Profile for Leaf102 -->
<infraAccPortP dn="uni/infra/accportprof-L102-IntProf" name="L102-IntProf">
    <!-- Add an interface selector for the Mgmt Host -->
    <infraHPortS descr="Mgmt Host" name="1:10" type="range">
        <infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="10" name="block2" toCard="1" toPort="10"/>
    </infraHPortS>
    <!-- Add the ports where the APICs are connected -->
    <infraHPortS descr="APICs" name="1:46-48" type="range">
        <infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="46" name="block2" toCard="1" toPort="48"/>
    </infraHPortS>
</infraAccPortP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L101-LeafProf" name="L101-LeafProf">
    <infraLeafS name="Leaf101" type="range">
        <infraNodeBlk name ="Default" from_="101" to_="101"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-L101-IntProf"/>
</infraNodeP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L102-LeafProf" name="L102-LeafProf">
    <infraLeafS name="Leaf102" type="range">
        <infraNodeBlk name ="Default" from_="102" to_="102"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-L102-IntProf"/>
</infraNodeP>

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

<?xml version="1.0" encoding="UTF-8"?>
<!-- api/policymgr/mo/.xml -->
<polUni>
  <fvTenant name="mgmt">
    <mgmtMgmtP name="default">
      <!-- Create a Node Management EPG -->
      <mgmtInB encap="vlan-100" name="Default">
        <!-- Assign Addresses for APICs In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.111/24" gw="192.168.99.1" tDn="topology/pod-1/node-1"/>
        <mgmtRsInBStNode addr="192.168.99.112/24" gw="192.168.99.1" tDn="topology/pod-1/node-2"/>
        <mgmtRsInBStNode addr="192.168.99.113/24" gw="192.168.99.1" tDn="topology/pod-1/node-3"/>
        <!-- Assign Addresses for switches In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.101/24" gw="192.168.99.1" tDn="topology/pod-1/node-101"/>
        <mgmtRsInBStNode addr="192.168.99.102/24" gw="192.168.99.1" tDn="topology/pod-1/node-102"/>
        <mgmtRsInBStNode addr="192.168.99.201/24" gw="192.168.99.1" tDn="topology/pod-1/node-201"/>
        <mgmtRsInBStNode addr="192.168.99.202/24" gw="192.168.99.1" tDn="topology/pod-1/node-202"/>
        <mgmtRsMgmtBD tnFvBDName="inb"/>
        <fvRsProv tnVzBrCPName="inband.MgmtServices-Ct"/>
      </mgmtInB>
    </mgmtMgmtP>
    <!-- Create the Contract Assigned to the Default Node Management EPG -->
    <vzBrCP name="inband.MgmtServices-Ct" scope="context">
      <vzSubj name="inband.MgmtServices-Subj">
        <!-- Use the common/default filter -->
        <vzRsSubjFiltAtt directives="" tnVzFilterName="default"/>
      </vzSubj>
    </vzBrCP>
    <!-- Assign IP address to inb BD -->
    <fvBD name="inb">
      <fvSubnet ip="192.168.99.1/24" />
    </fvBD>
    <!-- Create the Application Profile and EPG -->
    <fvAp name="inband.Default-AP">
      <fvAEPg name="inband.Default-EPG">
        <fvRsCons tnVzBrCPName="inband.MgmtServices-Ct"/>
        <fvRsPathAtt encap="vlan-99" tDn="topology/pod-1/paths-101/pathep-[eth1/10]"/>
        <!-- Make sure Leaf 102, port 1/10 is configured for untagged traffic -->
        <fvRsPathAtt encap="vlan-99" mode="untagged" tDn="topology/pod-1/paths-102/pathep-[eth1/10]"/>
        <fvRsDomAtt tDn="uni/phys-inband-PhysDom"/>
        <fvRsBd tnFvBDName="inb"/>
      </fvAEPg>
    </fvAp>
  </fvTenant>
</polUni>
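
Since the addressing plan is mechanical (the fourth octet matches the node ID for the switches, and .111–.113 for the three APICs), the mgmtRsInBStNode lines lend themselves to being generated rather than typed. A throwaway Python sketch – the node-to-octet map below simply restates the plan used in this article:

# Sketch only: generate the mgmtRsInBStNode elements from the addressing plan
nodes = {1: 111, 2: 112, 3: 113, 101: 101, 102: 102, 201: 201, 202: 202}

for node_id, octet in sorted(nodes.items()):
    print('<mgmtRsInBStNode addr="192.168.99.%d/24" gw="192.168.99.1" '
          'tDn="topology/pod-1/node-%d"/>' % (octet, node_id))

Paste the output into the mgmtInB element above and you have the whole fabric covered.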

Again, I’ll test by pinging the vCenter server from apic#3 for a change, and browse to the Visore interface of the APIC from the Mgmt Host.

apic3# ping -c 3 192.168.99.99
PING 192.168.99.99 (192.168.99.99) 56(84) bytes of data.
64 bytes from 192.168.99.99: icmp_seq=1 ttl=64 time=0.302 ms
64 bytes from 192.168.99.99: icmp_seq=2 ttl=64 time=0.221 ms
64 bytes from 192.168.99.99: icmp_seq=3 ttl=64 time=0.204 ms

--- 192.168.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.204/0.242/0.302/0.044 ms

The fact that the login screen comes up is proof that the Mgmt Host has connectivity to the APICs.

[Screenshot: the Visore login screen]

In the next installment, I will configure In-Band management via an L2 Out.

RedNectar

Note: If you would like the author or one of my colleagues to assist with the setup of your ACI installation, contact acimentor@housley.com.au and refer to this article. Housley works mainly around APJC, but is not restricted to this area.

References:

Cisco’s official ACI management documentation – I have informed Cisco of the fact that this documentation is not up to scratch – hopefully it will be fixed soon.

The Cisco APIC NX-OS Style Command-Line Interface Configuration Guide – especially the chapter on Configuring Management Interfaces was particularly helpful – much better than the reference above.

Also Cisco’s ACI Troubleshooting Book had a couple of hints about how things hang together.

Carl Niger’s YouTube video series was helpful – I recommend it to you.

Cisco’s pathetic video on configuring In-Band management is simply not worth wasting your time on.  But it’s included here since I referred to it.


Cisco ACI Per Port VLAN feature

By default, Cisco ACI Leaf switches treat a given VLAN tag as identifying the same EPG on every port of a particular switch.

Recall from my earlier tutorials that Cisco ACI does not use VLAN tags to identify VLANs in the traditional sense; rather, it looks at the VLAN tag on an incoming frame to determine which source End Point Group (EPG) is to be used in determining the policy for that frame.

This means that if you needed to use say VLAN tag 1000 to identify EPG1 when traffic arrives at interface Ethernet 1/21, but also use VLAN tag 1000 to identify EPG2 if traffic arrives on interface Ethernet 1/22, the default settings will need to be changed.

I recently had a situation where traffic had to be tunnelled through a transparent device (an IPS), so each interface of the device was allocated to a different EPG and a different Bridge Domain.  The problem was, the same VLAN had to be used on the ingress side as on the egress side, so both EPGs had to be allocated the same VLAN mapping.  The customer had already tried configuring the ports, but kept getting a “Configuration failed for … due to Encap Already Used in Another EPG” error, so I looked to the Per Port VLAN feature to rescue them.

[Diagram: Physical layout of the IPS and Leaf switch]

It turned out that the configuration was not quite as straightforward as I expected.  Here is what I did:

First I created a VLAN Scope Policy – or as Cisco has poorly named it, an L2 Interface Policy.

Note: The following menu sequences are for an admin user operating in Advanced mode.  >+ means right-click and choose.

FABRIC > ACCESS POLICIES > Policies > Interface Policies > Policies > L2 Interface >+ Create L2 Interface Policy.

Name: PerPort-VLAN.Scope
Scope: Port Local Scope

Then I created two VLAN Pools.  I had initially tried to use the same VLAN Pool, the same Physical Domain and the same Access Port Policy Groups (APPGs) for each of the two interfaces, but it seems the L2 Interface Policy requires that, when the same encap is applied to two different EPGs on the same switch, those two EPGs be associated with two different Physical Domains, each domain linked to a different VLAN Pool.  If anyone can show me any official Cisco documentation that states this fact, I’d be really grateful – as I am to dpita, who posted this on his blog and a more readable version on the Cisco Support forum.  The ACI help page does tell us that each EPG must be in a different Bridge Domain, but mentions nothing about requiring different VLAN Pools or Physical Domains.  Good one Cisco!

So I will get on with it and create the VLAN Pools:

FABRIC > ACCESS POLICIES > Pools > VLAN  >+ Create VLAN Pool

Name: AllVLANs-VLAN.Pool
Allocation Mode: Static
Encap Blocks: (+) VLAN 1 – VLAN 4094

And another to fulfill the separate VLAN Pool requirement

FABRIC > ACCESS POLICIES > Pools > VLAN  >+ Create VLAN Pool

Name: PerPortVLANs-VLAN.Pool
Allocation Mode: Static
Encap Blocks: (+) VLAN 1 – VLAN 4094

Since Domains can only be linked to a single VLAN Pool, clearly two Physical Domains will be required too, each Domain linked to its respective VLAN Pool.

FABRIC > ACCESS POLICIES > Physical and External Domains > Physical Domains  >+ Create Physical Domain

Name: AllVLANs-PhysDom
VLAN Pool: (+) AllVLANs-VLAN.Pool

FABRIC > ACCESS POLICIES > Physical and External Domains > Physical Domains  >+ Create Physical Domain

Name: PerPortVLANs-PhysDom
VLAN Pool: (+) PerPortVLANs-VLAN.Pool

To keep the separation complete, I also suggest creating two AEPs, although this is not strictly necessary – I could have just used one AEP and added both Physical Domains.

FABRIC > ACCESS POLICIES > Global Policies > Attachable Access Entity Profiles >+ Create Attachable Access Entity Profile

Name: AllVLANs-AEP
Domain: (+) AllVLANs-PhysDom

FABRIC > ACCESS POLICIES > Global Policies > Attachable Access Entity Profiles >+ Create Attachable Access Entity Profile

Name: PerPortVLANs-AEP
Domain: (+) PerPortVLANs-PhysDom

To link these VLAN Pools to interfaces I had to create two Interface Policy Groups – in my case the devices were single-attached, so I created two Access Port Policy Groups.

FABRIC > ACCESS POLICIES > Interface Policies > Policy Groups >+ Create Access Port Policy Group

Name: AllVLANs-APPG
Attached Entity Profile: AllVLANs-AEP

FABRIC > ACCESS POLICIES > Interface Policies > Policy Groups >+ Create Access Port Policy Group

Name: PPVLAN.PerPortVLANs-APPG
L2 Interface Policy: PerPort-VLAN.Scope
Attached Entity Profile: PerPortVLANs-AEP

Of course, if I was a CLI jockey I would have avoided all of the GUI clicking by issuing the commands:

configure
  vlan-domain AllVLANs-VLAN.Dom
    vlan 1-4094
    exit
  vlan-domain PerPortVLANs-VLAN.Dom
    vlan 1-4094
    exit
  vlan-domain phys type phys
    exit

  template policy-group AllVLANs-APPG
    vlan-domain member AllVLANs-VLAN.Dom
    exit
  template policy-group PPVLAN.PerPortVLANs-APPG
    vlan-domain member PerPortVLANs-VLAN.Dom
    switchport vlan scope local
    exit

and the VLAN Pools, Physical (and L2 and L3) Domains and AEPs would have all been created for me, albeit with each VLAN Pool and Domain being given a name that ends with VLAN.Dom, and an AEP with a name beginning with __ui_ which can never be deleted from the GUI should I need to do so later. Oh, and two identical L2 Interface policies, also beginning with the accursed __ui_.

But I digress.

Of course these Access Port Policy Groups had to be assigned to the relevant ports – in my case, interfaces Ethernet 1/21 and 1/22 on Leaf 101.  I had already created a Leaf Switch Profile named Leaf101-LeafProf and linked it to its matching Interface Profile called (of course) Leaf101-IntProf.

All I had to do now was add two more Interface Selectors to the Leaf101-IntProf Interface Profile.

FABRIC > ACCESS POLICIES > Interface Policies > Interface Profiles > Leaf101-IntProf >+ Create Access Port Selector

Name: 1:21
Interface IDs: 1/21
Interface Policy Group: AllVLANs-APPG

FABRIC > ACCESS POLICIES > Interface Policies > Interface Profiles > Leaf101-IntProf >+ Create Access Port Selector

Name: 1:22
Interface IDs: 1/22
Interface Policy Group: PPVLAN.PerPortVLANs-APPG

And of course the alternative version for the click-challenged:

  #This section is already configured 
  leaf-profile Leaf101-LeafProf
    leaf-group Leaf101
      leaf 101
      exit
    leaf-interface-profile Leaf101-IntProf
    exit
  #End of already configured section
  
  leaf-interface-profile Leaf101-IntProf
    leaf-interface-group 1:21
      interface ethernet 1/21
      policy-group AllVLANs-APPG
      exit
    leaf-interface-group 1:22
      interface ethernet 1/22
      policy-group PPVLAN.PerPortVLANs-APPG
      exit
    exit

With the Access Policies completed, I could now configure the two EPGs with the same VLAN ID (I was using VLAN 1000) back in the Tenant area.  The EPGs had been created earlier with the creative names of EPG1 and EPG2.  In this case each EPG had its own Bridge Domain and both BDs were linked to the same VRF.  First, EPG1’s configuration:

TENANT > Tenant TenantName > Application Profiles > Tenant-AP > Application EPGs > EPG1 > Domains (VMs and Bare-Metals) >+ Add Physical Domain Association

Physical Domain Profile: AllVLANs-PhysDom

TENANT > Tenant TenantName > Application Profiles > Tenant-AP > Application EPGs > EPG1 > Static Ports >+ Deploy Static EPG on PC, VPC, or interface

Path Type: Port
Path: Pod-1/Node-101/eth1/21
Port Encap (…): VLAN 1000

And then EPG2

TENANT > Tenant TenantName > Application Profiles > Tenant-AP > Application EPGs > EPG2 > Domains (VMs and Bare-Metals) >+ Add Physical Domain Association

Physical Domain Profile: PerPortVLANs-PhysDom

TENANT > Tenant TenantName > Application Profiles > Tenant-AP > Application EPGs > EPG2 > Static Ports >+ Deploy Static EPG on PC, VPC, or interface

Path Type: Port
Path: Pod-1/Node-101/eth1/22
Port Encap (…): VLAN 1000

Or…

  leaf 101
    interface ethernet 1/21
      switchport trunk allowed vlan 1000 tenant TenantName application Tenant-AP epg EPG1
      exit
    interface ethernet 1/22
      switchport trunk allowed vlan 1000 tenant TenantName application Tenant-AP epg EPG2
      exit
    exit
  exit

At this point both EPG1 and EPG2 were happily sending and receiving frames tagged with VLAN 1000, and no traffic was leaking between the two EPGs.  And to complete the picture, here’s the CLI version of the Tenant config:

  tenant TenantName
    vrf context VRF1
      exit
    bridge-domain BD1
      no unicast routing
      vrf member VRF1
      exit
    bridge-domain BD2
      no unicast routing
      vrf member VRF1
      exit
    application Tenant-AP
      epg EPG1
        bridge-domain member BD1
        exit
      epg EPG2
        bridge-domain member BD2
        exit
      exit
    interface bridge-domain BD1
      exit
    interface bridge-domain BD2
      exit
    exit
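
If you want proof from the APIC itself that the same encap really is deployed to two different EPGs, you can query the static path attachments via the API. A sketch only – it assumes an authenticated requests session and an APIC_HOST placeholder, as in the posting sketch from my in-band management posts, with VLAN 1000 as used above:

# Sketch only: list every static path attachment that uses encap vlan-1000
url = ("https://%s/api/class/fvRsPathAtt.json"
       "?query-target-filter=eq(fvRsPathAtt.encap,\"vlan-1000\")" % APIC_HOST)
reply = session.get(url)
reply.raise_for_status()
for obj in reply.json()["imdata"]:
    attrs = obj["fvRsPathAtt"]["attributes"]
    # The dn encodes tenant/AP/EPG; the tDn encodes the leaf and port
    print(attrs["dn"], "->", attrs["tDn"])

Two lines should come back: eth1/21 under EPG1 and eth1/22 under EPG2.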

RedNectar


Introducing Cisco UCS S-Series

Does this mean Cisco is officially getting into the storage market?

UCSguru.com

Today Cisco announced the Cisco UCS S Series line of storage servers.

Now the more eagle-eyed among you may think that the new Cisco UCS S3260 Storage Server looks very much like the Cisco UCS C3260 Rack server (Colusa). Well, you wouldn’t be too far off; however, the S3260 has been well and truly “Pimped” to address the changing needs of a modern storage solution, particularly as an extremely cost effective building block in a Hybrid Cloud environment.

The C3160/C3260 was particularly suited to large, cost effective, cooler storage solutions – that is to say, the retention of less-active or inactive data on a long-term or indefinite basis at low cost, use cases being archive or video surveillance etc. The fact is data is getting bigger and warmer all the time, and it shows no signs of slowing down anytime soon. And even on these traditional colder storage…



Automated Document Revision Information Reference

On the cover of the documents I produce I like to put the print date and a version number.

[Screenshot: document cover showing the print date and version number]

But sometimes… I forget to update the version number, even though I’ve updated the Document Revision Information on the inside pages:

[Screenshot: the Document Revision Information table]

So, I went looking for a solution that would automatically update the front cover every time I added a new version number in the Document Revision Information table.

After several attempts using the { =MAX(Above) } field in the Document Revision Information table (which didn’t work well, because it had to be placed in the last row of the table using hidden text if you didn’t want it printed), I figured out that I could apply the MAX function to a Bookmark reference, and that if the table was bookmarked, I could reference the Rows and Columns in the table.

Here’s the steps I took.

Step 1: Bookmark the table.

Select the table, and choose Insert | Bookmark, name the bookmark and click Add.  I called my Bookmark DocRevisionInfoTable (MS doesn’t allow spaces in Bookmark names).

[Screenshot: the Insert | Bookmark dialog]

Step 2: Formula reference to DocRevisionInfoTable

Now go to the front cover where you need the latest version number calculated, and:

  1. Press <Ctrl+F9> (Windows) or <Cmd+F9> (Mac OS X) to insert a field code.  This will make a stylised pair of braces appear – {} with the cursor between the braces.
  2. Enter the following text between the braces

{ =MAX(DocRevisionInfoTable A:A) \#"#.0#" }

[Screenshot: the field formula entered between the braces]

  3. Press <F9> to update the field

[Screenshot: the updated field showing the latest version number]

A little explanation:

The field { =MAX(DocRevisionInfoTable A:A) \#"#.0#" } works like this:

DocRevisionInfoTable is of course the name of the table.

A:A defines the first column of the table.  According to this document, I should also have been able to use C1 to define the first column, but that didn’t work for me.

\#"#.0#" is a format descriptor.
The \# says “this is a number format”
The "#.0#" says “print all digits before the decimal point, and at least one place after the decimal point”.  This means that version 2.0 will print as 2.0 rather than just plain old 2, and if there is a version 2.01 the extra digit after the 0 gets printed too.

RedNectar

