Note:
This is the third and last in a series of articles – the following is a variation of the first and second in the series. Much of the story is identical, but with a few added extras to configure an L3 Out rather than an L2 Out or Application Profile as with the EPG approach.
Anyone unlucky enough to try to configure In-Band management on the Cisco APIC will probably have realised that it is not a simple task – which is probably why many Cisco support forum experts advise using out-of-band (oob) management instead [link].
And anyone unlucky enough to try to decipher Cisco’s official documentation for configuring In-Band management on the Cisco APIC, or to watch their pathetic video (which simply does not work – it does not complete the job), is probably frustrated to the point of giving up.
Let me ease your frustration and take you through a journey showing you how to configure In-Band management for ACI in a variety of ways:
- Via an EPG (in the mgmt Tenant) (Part#1 of this series)
- using the GUI
- using the CLI
- using the API
- Via an external bridged network (L2 Out) (Part#2 of this series)
- using the GUI
- using the CLI
- using the API
- Via an external routed network (L3 Out) (This article)
In-Band Management Via an external routed network (L3 Out) in the mgmt Tenant
Let’s begin with a diagram showing my test setup for the L3Out approach. It is somewhat different to the previous designs because an external router is involved, so there are no direct connections between the Nexus 9K Leaf switches and either the VMM Server or the Mgmt Host.
IP addressing for the Leaf and Spine switches will use the switch ID in the fourth octet of the 192.168.99.0/24 network. E.g., Spine 201 will be 192.168.99.201. The default gateway address, to be configured on the inb Bridge Domain in the mgmt Tenant, will be 192.168.99.1.
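The scheme above is simple enough to capture in a few lines of Python. This is only a sketch of my lab's convention – the .111–.113 offset for the APICs is my choice, not anything ACI enforces:

```python
# Sketch of the addressing convention above: the node ID becomes the
# fourth octet for switches, while the APICs (nodes 1-3) take .111-.113.
# Node IDs are from my lab - adjust for your fabric.

INB_NET = "192.168.99"      # the in-band management /24
GATEWAY = f"{INB_NET}.1"    # configured on the inb Bridge Domain

def inband_ip(node_id: int) -> str:
    """Return the in-band management address (with mask) for a node."""
    octet = node_id if node_id > 100 else 110 + node_id  # APICs offset to .11x
    return f"{INB_NET}.{octet}/24"

# leaves, spines and the three APICs
addresses = {n: inband_ip(n) for n in (1, 2, 3, 101, 102, 201, 202)}
```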
So let me plan exactly what will need to be done:
The Access Policy Chain
I’ll need to allocate VLAN IDs for the internal inband management EPG (VLAN 100) and, in case I decide to use an SVI or a Routed Sub-Interface for the L3EPG, I’ll include another VLAN too (VLAN 99). I’ll put them in a VLAN Pool, which will connect to an External Layer 3 Domain, which in turn will need to link to an AEP with appropriate Access Port Policy Group assignments linking the AEP to the relevant attachment ports of the APICs, the vCenter host and the ACI Management host – as the picture shows.
Curiously, in the previous method of directly attaching an EPG to the leaves, I created a Physical Domain to contain the VLANs, and it linked to the physical ports where the APICs attach (via the AEP > APPG > [Interface Profile + Leaf Profile]). Last time I used an External L2 Domain – and it still worked! This time, I used an External L3 Domain rather than the Physical Domain – and again this still worked. So it seems that as far as the APIC-attached ports are concerned, so long as they have a link to the relevant VLANs, it doesn’t matter whether it is via a Physical Domain, an External L2 Domain or an External L3 Domain.
The mgmt Tenant
In the mgmt Tenant there are a number of tasks I’ll have to do.
I’ll need to create a special EPG called an In-band EPG. This will have to be done before assigning the static addresses I want to the APICs, Leaves and Spines.
I’ll assign the default gateway IP address to the pre-defined inb Bridge Domain in the mgmt Tenant, then create an External Routed Network (L3 Out) for my external router’s connection and assign port Ethernet 1/1 on Leaf101 to that L3 Out. Initially I’ll use a Routed interface, rather than an SVI or Routed Sub-Interface, so I won’t need any VLAN associations, but I will configure those in an Appendix.
To be able to consume a contract, I’ll also of course have to create an L3EPG, which I will name 0.0.0.0:0-L3EPG to reflect the function and range of IP addresses accessible via this L3 Out.
Finally, I’ll need to create a Contract (inband.MgmtServices-Ct) which will use the common/default filter to allow all traffic, and of course I’ll have to link the contract to the special In-Band EPG (provider) and the 0.0.0.0:0-L3EPG (consumer) mentioned above.
Again, a picture tells the story:
If all goes well, when both the Access Policies and the Tenant configuration are complete, the APIC will be able to manage the vCenter VMM, and the Management Station bare-metal server will be able to manage the ACI fabric via the APIC IP addresses.
Enough of design, time to start configuring!
Step-by-Step: Configuring In-Band management via an L3 Out using the GUI
Conventions
Cisco APIC Advanced GUI Menu Selection sequences are displayed in Bolded Blue text, with >+ meaning Right-click and select, so that the following line:
Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool
should be interpreted as:
From the Cisco APIC Advanced GUI Main Menu, select Fabric. From the sub-menu, select Access Policies. In the Navigation Pane, expand Pools, then on the VLAN sub-item, right-click and select Create VLAN Pool.
If a particular tab in the Work Pane needs to be selected, it will be inserted into the sequence in square brackets, such as: … > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab
Within the Work Pane and within some dialogues, it will be necessary to click on a + icon to add an item. This is indicated by a (+) followed by the name of the item that needs to be added, so that: (+) Interface Selectors: should be interpreted as Click the + icon adjacent to the Interface Selectors: prompt.
Text that needs to be typed at prompts is presented in orange italicised bold text, while items to be selected from a drop-down menu or by clicking options on the screen are shown in bolded underlined text.
Options like clicking OK, UPDATE or SUBMIT are assumed, so not specifically stated unless required between sub-steps. Use your intelligence.
Part 1: Set the Connectivity Preference for the pod to ooband
Firstly, since the default interface to use for external connections is the inband interface, I’m going to set the Connectivity Preference for the pod to ooband – just in case I lose access to the management GUI while configuring this.
Fabric > Fabric Policies > Global Policies > Connectivity Preferences
Interface to use for external connections: ooband
Part 2: Configure the Access Policy Chain
This is a long slog – if you are not familiar with Cisco ACI Access Policies, you might want to read my earlier ACI Tutorials, especially Tutorial #4.
Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool
Name: inband-VLAN.Pool
Allocation Mode: Static Allocation
(+) Encap Blocks:
Range: VLAN 99 – VLAN 100
Fabric > Access Policies > Physical and External Domains > External Routed Domains >+ Create Layer 3 Domain
Name: inband-ExtL3Dom
VLAN Pool: inband-VLAN.Pool
Fabric > Access Policies > Global Policies > Attachable Access Entity Profiles >+ Create Attachable Access Entity Profile
Name: inband-AEP
(+) Domains (VMM, Physical or External) To Be Associated To Interfaces:
Domain Profile: inband-ExtL3Dom
Fabric > Access Policies > Interface Policies > Policies > LLDP Interface >+ Create LLDP Interface Policy
Name: Enable-LLDP
[Leave default values – I just want to have a policy that spells out that LLDP is enabled]
Fabric > Access Policies > Interface Policies > Policy Groups > Leaf Policy Groups >+ Create Leaf Access Port Policy Group
Name: inband.LLDP-APPG
LLDP Policy: Enable-LLDP
Attached Entity Profile: inband-AEP
Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile
Name: L101-IntProf
(+) Interface Selectors:
Name: 1:1
Description: Router
Interface IDs: 1/1
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG
Now repeat for Leaf102 – this time just add the APIC ports
Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile
Name: L102-IntProf
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG
Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile
Name: L101-LeafProf
(+) Leaf Selectors:
Name: Leaf101
Blocks: 101
UPDATE > NEXT
[x] L101-IntProf
And again for leaf 102
Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile
Name: L102-LeafProf
(+) Leaf Selectors:
Name: Leaf102
Blocks: 102
UPDATE > NEXT
[x] L102-IntProf
That’s the Access Policies done, now for the mgmt Tenant configuration.
Part 3: mgmt Tenant Configuration
Before I can assign a static IP address to an APIC or switch, the GUI forces me to create a Node Management EPG, so I'll begin by creating one. I’ll use the name Default because I don’t expect I’ll ever need another, but I’ll use an upper-case D to distinguish it from system-created defaults, which always use a lower-case d.
Tenants > Tenant mgmt > Node Management EPGs >+ Create In-Band Management EPG
Name: Default
Encap: vlan-100
Bridge Domain: inb
Now I can create the Static Node Management Addresses.
Tenants > Tenant mgmt > Node Management Addresses > Static Node Management Addresses >+ Create Static Node Management Addresses
Node Range: 1 – 3
Config: In-Band Addresses
In-Band Management EPG: Default
In-Band IPV4 Address: 192.168.99.111/24
In-Band IPV4 Gateway: 192.168.99.1/24
[Tip: If you are following my steps, ignore the warning (as shown below). I already set the Interface to use for external connections to ooband, and despite what the warning implies, your preference for management will NOT switch to In-Band]
Tedious as it was, I resisted the temptation to resort to the CLI, and repeated the above step for Nodes 101-102, and 201-202.
That default gateway IP address I defined on the nodes will reside in the inb Bridge Domain.
Tenants > Tenant mgmt > Networking > Bridge Domains > inb > Subnets >+ Create subnet
Gateway IP: 192.168.99.1/24
Scope: [x] Advertised Externally
That took care of the internal network, except that I will have to come back to the inb Bridge Domain to link it to the L3 Out after I’ve created it.
At this stage the APICs were able to ping the default gateway and the Leaf switches verifying that the configurations were valid, although I was not able to ping the Spine switches. However, I took heart from this video and assumed that all was OK.
apic1# ping -c 3 192.168.99.1
PING 192.168.99.1 (192.168.99.1) 56(84) bytes of data.
64 bytes from 192.168.99.1: icmp_seq=1 ttl=63 time=2.86 ms
64 bytes from 192.168.99.1: icmp_seq=2 ttl=63 time=0.827 ms
64 bytes from 192.168.99.1: icmp_seq=3 ttl=63 time=0.139 ms
--- 192.168.99.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.139/1.276/2.862/1.156 ms
apic1# ping -c 3 192.168.99.101
PING 192.168.99.101 (192.168.99.101) 56(84) bytes of data.
64 bytes from 192.168.99.101: icmp_seq=1 ttl=63 time=0.969 ms
64 bytes from 192.168.99.101: icmp_seq=2 ttl=63 time=0.176 ms
64 bytes from 192.168.99.101: icmp_seq=3 ttl=63 time=0.209 ms
--- 192.168.99.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.176/0.451/0.969/0.366 ms
apic1# ping -c 3 192.168.99.201
PING 192.168.99.201 (192.168.99.201) 56(84) bytes of data.
From 192.168.99.111 icmp_seq=1 Destination Host Unreachable
From 192.168.99.111 icmp_seq=2 Destination Host Unreachable
From 192.168.99.111 icmp_seq=3 Destination Host Unreachable
--- 192.168.99.201 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3005ms
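If you want to repeat that reachability check across every node without typing each ping by hand, here is a small sketch. The address list mirrors my lab, and the actual pings are only sent when you call sweep():

```python
# Sketch: build the same 'ping -c 3 <addr>' invocations used above for
# every in-band address, so the reachability check runs in one loop.
# Addresses are from my lab - substitute your own.
import subprocess

ADDRESSES = ["192.168.99.1", "192.168.99.101", "192.168.99.102",
             "192.168.99.201", "192.168.99.202"]

def ping_cmd(addr: str, count: int = 3) -> list:
    """The same ping invocation as run on the APIC above."""
    return ["ping", "-c", str(count), addr]

def sweep(addresses):
    """Return {address: reachable?} - note this actually sends pings."""
    return {a: subprocess.run(ping_cmd(a), capture_output=True).returncode == 0
            for a in addresses}
```

In my lab, sweep(ADDRESSES) reported the spines unreachable at this stage, matching the output above.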
I’ll need a contract to put between the L3EPG and the special management In-Band EPG – life will be easier if I create that first.
Tenants > Tenant mgmt > Security Policies > Contracts >+ Create Contract
Name: inband.MgmtServices-Ct
Scope: VRF [Default]
(+) Subjects:
Name: inband.MgmtServices-Subj
Filter Chain
(+) Filters
Name: common/default
Now to create the L3 Out, the Node Profile and the L3EPG.
Tenants > Tenant mgmt > Networking > External Routed Networks >+ Create Routed Outside
Name: inband.OSPF-L3Out
VRF: mgmt/inb
External Routed Domain: inband-ExtL3Dom
[x] OSPF
OSPF Area ID: 1
OSPF Area Type: Regular Area
(+) Nodes And Interfaces Protocol Profiles
Name: Leaf101-OSPF.NodeProf
(+) Nodes
Node ID: 101
Router ID: 1.1.1.1
OK > OK > NEXT
(+) External EPG Networks
Name: 0.0.0.0:0-L3EPG
(+) Subnet
IP Address: 0.0.0.0/0
You will have noticed that during the process above I did not include a step to add the Interface Profile – I left it out because I wanted to explore the three different options for Interface Profiles: Routed Interface, SVI and Routed Sub-Interface.
First, I’ll explore the Routed Interface option, and look at the other options in an Appendix to this article.
Tenants > Tenant mgmt > Networking > External Routed Networks > inband.OSPF-L3Out > Logical Node Profiles > Leaf101-OSPF.NodeProf > Logical Interface Profiles >+ Create Interface Profile
Name: OSPF-IntProf
Interfaces
(+) Routed Interfaces:
Path: topology/pod-1/paths-101/pathep-[eth1/1]
IPv4 Primary / IPv6 Preferred Address: 172.16.2.2/30
MTU (bytes): 1500
Have the L3EPG consume the contract I created earlier:
Tenants > Tenant mgmt > Networking > External Routed Networks > inband.OSPF-L3Out > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab
(+) Consumed Contracts:
Name: inband.MgmtServices-Ct
And the In-Band EPG Provide it:
Tenants > Tenant mgmt >Node Management EPGs > In-Band EPG Default
(+) Provided Contracts:
Name: inband.MgmtServices-Ct
And finally, I’ll have to link the L3Out to the inb Bridge Domain so that the APIC knows which L3Out to use when advertising the 192.168.99.0/24 network externally.
Tenants > Tenant mgmt > Networking > Bridge Domains > inb > [Policy] tab > [L3 Configurations] tab
(+) Associated L3 Outs:
L3 Out: mgmt/inband.OSPF-L3Out
Time to test!
To be confident that I will now be able to deploy a VMM Domain with connectivity to the Virtual Machine Manager (vCenter in my case), I’ll ping the VMM server from the APIC, only this time I’ll tell the APIC to use the inband management interface using the -I ping option (or reconfigure the Connectivity Preferences to use the inband interface for external connections rather than the ooband interface which I configured in Part #1).
apic1# ping -c3 -I 192.168.99.111 172.16.99.99
PING 172.16.99.99 (172.16.99.99) from 192.168.99.111 : 56(84) bytes of data.
64 bytes from 172.16.99.99: icmp_seq=1 ttl=61 time=0.374 ms
64 bytes from 172.16.99.99: icmp_seq=2 ttl=61 time=0.403 ms
64 bytes from 172.16.99.99: icmp_seq=3 ttl=61 time=0.391 ms
--- 172.16.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.374/0.389/0.403/0.020 ms
And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address:
Step-by-Step: Configuring In-Band management via an L3 Out using the CLI
The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so read that for more detail. The following commands are entered in configuration mode.
Part 1: Set the Connectivity Preference for the pod to ooband
mgmt_connectivity pref ooband
Part 2: Configure the Access Policy Chain
# First, create the VLAN Pool and External L3 Domain
# If you type the command below, you may notice a curious thing -
# at the point where the word "type" appears, if you press "?"
# you will see options for <CR> and "dynamic", but not "type".
# In other words, "type" is a hidden option - I discovered it
# by creating a domain in the GUI and looking at the running
# config later.
vlan-domain inband-ExtL3Dom type l3ext
  vlan-pool inband-VLAN.Pool
  vlan 99-100
  exit
# And an Access Port Policy Group linked to the inband-ExtL3Dom
template policy-group inband.LLDP-APPG
# Another curious thing with the CLI is that there is no way
# to create an AEP - one gets created for you whether you
# want it or not when you link the APPG to the Domain in the
# following command.
  vlan-domain member inband-ExtL3Dom type l3ext
  exit
# Not necessary to create an Interface Policy to Enable-LLDP in the
# CLI, Interface Policies are applied directly to the interfaces
# Now the Leaf Profiles, Interface Profiles and Port Selectors
leaf-profile L101-LeafProf
  leaf-group Leaf101
    leaf 101
    exit
  leaf-interface-profile L101-IntProf
  exit
leaf-profile L102-LeafProf
  leaf-group Leaf102
    leaf 102
    exit
  leaf-interface-profile L102-IntProf
  exit
leaf-interface-profile L101-IntProf
  leaf-interface-group 1:1
    description 'Router'
    interface ethernet 1/1
    policy-group inband.LLDP-APPG
    exit
  leaf-interface-group 1:46-48
    description 'APICs'
    interface ethernet 1/46-48
    policy-group inband.LLDP-APPG
    exit
  exit
leaf-interface-profile L102-IntProf
  leaf-interface-group 1:46-48
    description 'APICs'
    interface ethernet 1/46-48
    policy-group inband.LLDP-APPG
    exit
  exit
That’s the Access Policies done, now for the mgmt Tenant configuration.
Part 3: mgmt Tenant Configuration
# Node IP addressing is configured OUTSIDE the mgmt
# Tenant in the CLI, so I'll do the mgmt Tenant bits
# first, in the order that best fits - defining the
# contract first means I can configure the AP in one hit
tenant mgmt
  contract inband.MgmtServices-Ct
    subject inband.MgmtServices-Subj
      access-group default both
      exit
    exit
  l3out inband.OSPF-L3Out
    vrf member inb
    exit
  external-l3 epg 0.0.0.0:0-L3EPG l3out inband.OSPF-L3Out
    vrf member inb
    match ip 0.0.0.0/0
    contract consumer inband.MgmtServices-Ct
    exit
  inband-mgmt epg Default
    contract provider inband.MgmtServices-Ct
    bridge-domain inb
    vlan 100
    exit
  interface bridge-domain inb
    ip address 192.168.99.1/24 secondary
    scope public
    exit
  exit
# Now the Node IP addressing
controller 1
  interface inband-mgmt0
    ip address 192.168.99.111/24 gateway 192.168.99.1
    inband-mgmt epg Default vlan 100
    exit
  exit
controller 2
  interface inband-mgmt0
    ip address 192.168.99.112/24 gateway 192.168.99.1
    inband-mgmt epg Default vlan 100
    exit
  exit
controller 3
  interface inband-mgmt0
    ip address 192.168.99.113/24 gateway 192.168.99.1
    inband-mgmt epg Default vlan 100
    exit
  exit
switch 101
  interface inband-mgmt0
    ip address 192.168.99.101/24 gateway 192.168.99.1
    inband-mgmt epg Default
    exit
  exit
switch 102
  interface inband-mgmt0
    ip address 192.168.99.102/24 gateway 192.168.99.1
    inband-mgmt epg Default
    exit
  exit
switch 201
  interface inband-mgmt0
    ip address 192.168.99.201/24 gateway 192.168.99.1
    inband-mgmt epg Default
    exit
  exit
switch 202
  interface inband-mgmt0
    ip address 192.168.99.202/24 gateway 192.168.99.1
    inband-mgmt epg Default
    exit
  exit
# Finally, apply routing configuration to
# leaf 101 eth1/1
leaf 101
  vrf context tenant mgmt vrf inb l3out inband.OSPF-L3Out
    router-id 1.1.1.1
    route-map inband.OSPF-L3Out_out
      match bridge-domain inb
        exit
      exit
    exit
# The CLI gets itself into a bit of a Catch-22 here
# When complete, you will see a line:
#   ip router ospf default area 0.0.0.1
# under the configuration of interface ethernet 1/1, but if I
# try to enter it before configuring the "router ospf default"
# section below, I get an error.
#
# Similarly, if I try to configure the "router ospf default"
# section before configuring the vrf under the ethernet 1/1
# interface, I also get an error.
  interface ethernet 1/1
    no switchport
    vrf member tenant mgmt vrf inb l3out inband.OSPF-L3Out
    mtu 1500
    ip address 172.16.2.2/30
    exit
  router ospf default
    vrf member tenant mgmt vrf inb
      area 0.0.0.1 l3out inband.OSPF-L3Out
# I have no idea why a line saying "area 0.0.0.1 nssa"
# turns up in the config, but it does, so I had to also
# enter the following line.
      no area 0.0.0.1 nssa
      exit
    exit
# Note how I had to then return to interface configuration mode to
# complete the config AFTER having done the "router ospf default"
# section
  interface ethernet 1/1
    ip router ospf default area 0.0.0.1
    exit
  exit
Time to test!
To be confident that I will now be able to manage the APIC from my management host, I’ll ping the Mgmt Host from the APIC.
apic1# ping -c 3 192.168.99.10
PING 192.168.99.10 (192.168.99.10) 56(84) bytes of data.
64 bytes from 192.168.99.10: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.10: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.10: icmp_seq=3 ttl=64 time=0.238 ms
--- 192.168.99.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms
And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address – only this time, for a change, I’ll use SSH to access APIC#2.
One interesting thing to note in the CLI configuration is that nowhere do you create an Attachable Access Entity Profile (AEP). But, when you enter the above commands, one miraculously appears (called __ui_pg_inband.LLDP-APPG) when you view the GUI.
Another myriad of mysteries appears in the mgmt Tenant, even if you go through the CLI config from a clean configuration. While entering the commands above in the CLI, the APIC will automatically add an Application Profile (called default) with an EPG (also called default). But it doesn’t stop there! There is also another Node Management EPG (called default) magically created, and a mystical contract (called inband-default-contract) with a link to a mysterious filter (called inband-default). I have no idea why, but here are some commands to clean up the crap left behind.
# Remove crap left behind by previous CLI commands
tenant mgmt
  no application default
  no contract inband-default-contract
  no inband-mgmt epg default
  no access-list inband-default
Step-by-Step: Configuring In-Band management via an L3 Out using the API
The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so read that for more detail. The following sections can be saved to a text file (with a .xml extension) and posted to your config using the GUI (using right-click > Post …), or you can copy and paste the sections below into Postman.
Right-click > Post … Tutorial
Assume one of the sections below is stored in a text file with a .xml extension, such as connectivityPrefs.xml.
In the APIC GUI, any configuration item that has Post … as one of the right-click options can be used to post the file.
The contents of the .xml file must be posted to the uni Parent Distinguished Name (DN) as shown below:
The configuration defined in the .xml file will have been pushed into your config:
End of tutorial
Part 1: Set the Connectivity Preference for the pod to ooband
<?xml version="1.0" encoding="UTF-8"?>
<!-- connectivityPrefs.xml -->
<mgmtConnectivityPrefs dn="uni/fabric/connectivityPrefs" interfacePref="ooband"/>
Part 2: Configure the Access Policy Chain
Save each of these snippets in a separate .xml file and post one at a time. Or use Postman and copy and paste.
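If you would rather script the posts than use the GUI or Postman, the sketch below shows the shape of it using only the Python standard library. The APIC address and credentials are placeholders, and I am assuming the standard APIC REST endpoints (aaaLogin for the session, /api/mo/uni.xml as the POST target, which matches posting to the uni DN as described above):

```python
# Sketch: post the .xml snippets to the APIC via its REST API instead of
# using right-click > Post... or Postman.
# APIC address and credentials below are placeholders - substitute your own.
import json
import urllib.request

APIC = "https://10.1.1.1"   # hypothetical APIC address

def login_payload(user: str, password: str) -> bytes:
    """Body for the aaaLogin POST that obtains a session cookie."""
    return json.dumps(
        {"aaaUser": {"attributes": {"name": user, "pwd": password}}}
    ).encode()

def post_url(apic: str) -> str:
    """XML posted via the GUI's Post... option lands on the uni DN;
    the equivalent REST endpoint is /api/mo/uni.xml."""
    return f"{apic}/api/mo/uni.xml"

def post_xml(opener, apic: str, xml_file: str):
    """POST one snippet file (e.g. connectivityPrefs.xml) to the APIC."""
    with open(xml_file, "rb") as f:
        req = urllib.request.Request(
            post_url(apic), data=f.read(),
            headers={"Content-Type": "application/xml"})
        return opener.open(req)

# Usage (not run here - needs a live APIC and a cookie-handling opener
# that has first POSTed login_payload() to <APIC>/api/aaaLogin.json):
#   post_xml(opener, APIC, "connectivityPrefs.xml")
```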
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the VLAN Pool -->
<fvnsVlanInstP allocMode="static" dn="uni/infra/vlanns-[inband-VLAN.Pool]-static" name="inband-VLAN.Pool">
  <fvnsEncapBlk from="vlan-99" to="vlan-100"/>
</fvnsVlanInstP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the External L3 Domain, assign it the VLAN Pool -->
<l3extDomP dn="uni/l3dom-inband-ExtL3Dom" name="inband-ExtL3Dom">
  <infraRsVlanNs tDn="uni/infra/vlanns-[inband-VLAN.Pool]-static"/>
</l3extDomP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Attachable Access Entity Profile (AEP) -->
<infraAttEntityP descr="" dn="uni/infra/attentp-inband-AEP" name="inband-AEP">
  <infraRsDomP tDn="uni/l3dom-inband-ExtL3Dom"/>
</infraAttEntityP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Enable-LLDP Interface Policy -->
<lldpIfPol adminRxSt="enabled" adminTxSt="enabled" dn="uni/infra/lldpIfP-Enable-LLDP"/>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Access Port Policy Group -->
<infraAccPortGrp dn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG" name="inband.LLDP-APPG">
  <infraRsAttEntP tDn="uni/infra/attentp-inband-AEP"/>
  <infraRsLldpIfPol tnLldpIfPolName="Enable-LLDP"/>
</infraAccPortGrp>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Two Interface Profiles will be needed - first one for Leaf101 -->
<infraAccPortP dn="uni/infra/accportprof-L101-IntProf" name="L101-IntProf">
  <!-- Add an interface selector for the External Router -->
  <infraHPortS descr="Router" name="1:1" type="range">
    <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
    <infraPortBlk fromCard="1" fromPort="1" name="block1" toCard="1" toPort="1"/>
  </infraHPortS>
  <!-- Add the ports where the APICs are connected -->
  <infraHPortS descr="APICs" name="1:46-48" type="range">
    <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
    <infraPortBlk fromCard="1" fromPort="46" name="block1" toCard="1" toPort="48"/>
  </infraHPortS>
</infraAccPortP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Another Interface Profile for Leaf102 -->
<infraAccPortP dn="uni/infra/accportprof-L102-IntProf" name="L102-IntProf">
  <!-- Add the ports where the APICs are connected -->
  <infraHPortS descr="APICs" name="1:46-48" type="range">
    <infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
    <infraPortBlk fromCard="1" fromPort="46" name="block2" toCard="1" toPort="48"/>
  </infraHPortS>
</infraAccPortP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L101-LeafProf" name="L101-LeafProf">
  <infraLeafS name="Leaf101" type="range">
    <infraNodeBlk name="Default" from_="101" to_="101"/>
  </infraLeafS>
  <infraRsAccPortP tDn="uni/infra/accportprof-L101-IntProf"/>
</infraNodeP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L102-LeafProf" name="L102-LeafProf">
  <infraLeafS name="Leaf102" type="range">
    <infraNodeBlk name="Default" from_="102" to_="102"/>
  </infraLeafS>
  <infraRsAccPortP tDn="uni/infra/accportprof-L102-IntProf"/>
</infraNodeP>
That’s the Access Policies done, now for the mgmt Tenant configuration.
Part 3: mgmt Tenant Configuration
<?xml version="1.0" encoding="UTF-8"?>
<!-- api/policymgr/mo/.xml -->
<polUni>
  <fvTenant name="mgmt">
    <mgmtMgmtP name="default">
      <!-- Create a Node Management EPG -->
      <mgmtInB encap="vlan-100" name="Default">
        <!-- Assign Addresses for APICs In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.111/24" gw="192.168.99.1" tDn="topology/pod-1/node-1"/>
        <mgmtRsInBStNode addr="192.168.99.112/24" gw="192.168.99.1" tDn="topology/pod-1/node-2"/>
        <mgmtRsInBStNode addr="192.168.99.113/24" gw="192.168.99.1" tDn="topology/pod-1/node-3"/>
        <!-- Assign Addresses for switches In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.101/24" gw="192.168.99.1" tDn="topology/pod-1/node-101"/>
        <mgmtRsInBStNode addr="192.168.99.102/24" gw="192.168.99.1" tDn="topology/pod-1/node-102"/>
        <mgmtRsInBStNode addr="192.168.99.201/24" gw="192.168.99.1" tDn="topology/pod-1/node-201"/>
        <mgmtRsInBStNode addr="192.168.99.202/24" gw="192.168.99.1" tDn="topology/pod-1/node-202"/>
        <mgmtRsMgmtBD tnFvBDName="inb"/>
        <!-- The Node Management EPG will be the provider for the Contract -->
        <fvRsProv tnVzBrCPName="inband.MgmtServices-Ct"/>
      </mgmtInB>
    </mgmtMgmtP>
    <!-- Create the Contract Assigned to the Default Node Management EPG -->
    <vzBrCP name="inband.MgmtServices-Ct" scope="context">
      <vzSubj name="inband.MgmtServices-Subj">
        <!-- Use the common/default filter -->
        <vzRsSubjFiltAtt directives="" tnVzFilterName="default"/>
      </vzSubj>
    </vzBrCP>
    <!-- Assign IP address to inb BD -->
    <fvBD name="inb">
      <fvRsBDToOut tnL3extOutName="inband.OSPF-L3Out"/>
      <fvSubnet ip="192.168.99.1/24" scope="public"/>
    </fvBD>
    <!-- Create the External L3 Network (L3 Out) and L3EPG -->
    <l3extOut name="inband.OSPF-L3Out">
      <l3extLNodeP name="Leaf101-OSPF.NodeProf">
        <l3extRsNodeL3OutAtt rtrId="1.1.1.1" rtrIdLoopBack="yes" tDn="topology/pod-1/node-101"/>
        <l3extLIfP name="OSPF-IntProf">
          <ospfIfP>
            <ospfRsIfPol tnOspfIfPolName=""/>
          </ospfIfP>
          <l3extRsPathL3OutAtt addr="172.16.2.2/30" ifInstT="l3-port" mode="regular" mtu="1500" tDn="topology/pod-1/paths-101/pathep-[eth1/1]"/>
        </l3extLIfP>
      </l3extLNodeP>
      <l3extRsEctx tnFvCtxName="inb"/>
      <l3extRsL3DomAtt tDn="uni/l3dom-inband-ExtL3Dom"/>
      <l3extInstP name="0.0.0.0:0-L3EPG">
        <fvRsCons tnVzBrCPName="inband.MgmtServices-Ct"/>
        <l3extSubnet ip="0.0.0.0/0"/>
      </l3extInstP>
      <ospfExtP areaId="0.0.0.1" areaType="regular"/>
    </l3extOut>
  </fvTenant>
</polUni>
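Since the seven mgmtRsInBStNode lines in the snippet above differ only in address and node ID, they can be generated rather than typed. A sketch (the node list and addresses are from my lab):

```python
# Sketch: generate the repetitive <mgmtRsInBStNode> elements of the
# snippet above from a node list, instead of typing each one.
import xml.etree.ElementTree as ET

# node ID -> in-band address (the APICs are nodes 1-3)
NODES = {1: "192.168.99.111", 2: "192.168.99.112", 3: "192.168.99.113",
         101: "192.168.99.101", 102: "192.168.99.102",
         201: "192.168.99.201", 202: "192.168.99.202"}
GATEWAY = "192.168.99.1"

def static_node_elements(nodes: dict) -> list:
    """Build one mgmtRsInBStNode element per node."""
    elems = []
    for node_id, addr in sorted(nodes.items()):
        elems.append(ET.Element("mgmtRsInBStNode", {
            "addr": f"{addr}/24",
            "gw": GATEWAY,
            "tDn": f"topology/pod-1/node-{node_id}",
        }))
    return elems

# Each serialised line can be pasted inside the <mgmtInB> element above
xml_lines = [ET.tostring(e, encoding="unicode") for e in static_node_elements(NODES)]
```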
The fact that the login screen comes up is proof that the Mgmt Host has connectivity to the APICs.
Appendix: Configuring L3 Out Interface Profiles with VLANs
Coming Soon
That completes this series of tutorials for configuring In-Band Management on the APIC for Cisco ACI. Don’t forget to share, like and rate each article to make it easier for others to find when searching for help!
RedNectar
Note:
If you would like the author or one of my colleagues to assist with the setup of your ACI installation, contact acimentor@housley.com.au and refer to this article. Housley works mainly around APJC, but is not restricted to this area.
References:
Cisco’s official ACI management documentation – I have informed Cisco of the fact that this documentation is not up to scratch – hopefully it will be fixed soon.
The Cisco APIC NX-OS Style Command-Line Interface Configuration Guide – especially the chapter on Configuring Management Interfaces was particularly helpful – much better than the reference above.
Also Cisco’s ACI Troubleshooting Book had a couple of hints about how things hang together.
Carl Niger’s youtube video series was helpful – I recommend it to you.
Cisco’s pathetic video on configuring In-Band management is simply not worth wasting your time on. But it’s included here since I referred to it.