Note: This is the second in a series of articles – the following is a variation of the first in the series. In fact, the whole story is almost identical – it is just that this one uses a L2 Out approach rather than an EPG approach.
Anyone unlucky enough to try and configure In-Band management on the Cisco APIC will have probably realised that it is not a simple task – which is probably why many Cisco support forum experts advise using out of band (oob) management instead [link].
And anyone unlucky enough to try and decipher Cisco’s official documentation for configuring In-Band management on the Cisco APIC, or to watch their pathetic video (which simply does not work – it does not complete the job), is probably feeling frustrated to the point of giving up.
Let me ease your frustration and take you through a journey showing you how to configure In-Band management for ACI in a variety of ways:
- Via an EPG (in the mgmt Tenant) (Part#1 of this series)
- using the GUI
- using the CLI
- using the API
- Via an external bridged network (L2 Out) (This article)
- Via an external routed network (L3 Out) (Part#3 of this series)
- using the GUI
- using the CLI
- using the API
In-Band Management Via an external bridged network (L2 Out) in the mgmt Tenant
Let’s begin with a diagram showing my test setup for the L2Out approach. It is identical to the previous design, except that there is no way I can use an untagged host connection directly to an interface configured for a L2 Out – so I’ve had to introduce a switch between the Nexus 9K Leaf102 and the Mgmt Host.
IP addressing for the Leaf and Spine switches will use the switch ID in the fourth octet of the 192.168.99.0/24 network. E.g., Spine 201 will be 192.168.99.201. The default gateway address to be configured on the inb Bridge Domain in the mgmt tenant will be 192.168.99.1.
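For the record, here is the full addressing plan that falls out of that scheme (all in 192.168.99.0/24, and of course specific to my lab – you will see these same addresses in the configuration steps below):
- APIC1, APIC2, APIC3: .111, .112, .113
- Leaf101, Leaf102: .101, .102
- Spine201, Spine202: .201, .202
- Default gateway (on the inb Bridge Domain): .1
- vCenter (VMM) host: .99
- Mgmt Host: .10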
So let me plan exactly what will need to be done:
The Access Policy Chain
I’ll need to allocate VLAN IDs for the internal inband management EPG (VLAN 100) and another for the user-facing L2EPG (VLAN 99). I’ll put them in a VLAN Pool, which will connect to an External Layer 2 Domain, which in turn will link to an AEP with appropriate Access Port Policy Group assignments tying the AEP to the relevant attachment ports of the APICs, the vCenter host and the ACI Management host. Like the picture shows.
Curiously, in the previous method of directly attaching an EPG to the leaves, I created a Physical Domain to contain the VLANs, and it linked to the physical ports where the APICs attach (via the AEP > APPG > [Interface Profile + Leaf Profile]). This time I used an External L2 Domain rather than a Physical Domain – and this still worked. So it seems that as far as the APIC-attached ports are concerned, so long as they have a link to the relevant VLANs, it doesn’t matter whether it is via a Physical Domain or an External L2 Domain.
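In case the picture doesn’t render for you, here is the Access Policy chain I’ll be building, in text form, using the object names from the steps below:

inband-VLAN.Pool (VLANs 99-100)
  -> inband-ExtL2Dom (External Layer 2 Domain)
    -> inband-AEP (Attachable Access Entity Profile)
      -> inband.LLDP-APPG (Access Port Policy Group)
        -> L101-IntProf / L102-IntProf (Interface Selectors for ports 1/10 and 1/46-48)
          -> L101-LeafProf / L102-LeafProf (Leaf Profiles for Leaf101 and Leaf102)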
The mgmt Tenant
In the mgmt Tenant there are a number of tasks I’ll have to do.
I’ll need to create a special EPG called an In-band EPG. This will have to be done before assigning the static addresses I want to the APICs, Leaves and Spines.
I’ll assign the default gateway IP address to the pre-defined inb Bridge Domain in the mgmt Tenant, and then create a L2 External Bridged Network (L2 Out) for my external VLAN (VLAN 99) and assign ports Ethernet 1/10 on each Leaf to that L2 Out. To be able to consume a contract, I’ll also of course have to create a L2EPG which I will name inband.VLAN99-L2EPG to reflect the function and VLAN assigned.
Finally, I’ll need to create a Contract (inband.MgmtServices-Ct) which will use the common/default filter to allow all traffic, and of course I’ll have to link the contract to the special In-Band EPG (provider) and the inband.VLAN99-L2EPG (consumer) mentioned above.
Again, a picture tells the story:
If all goes well, when both the Access Policies and the Tenant configuration are complete, the APIC will be able to manage the vCenter VMM, and the Management Station bare metal server will be able to manage the ACI fabric via the APIC IP addresses.
Enough of design, time to start configuring!
Step-by-Step: Configuring In-Band management via a L2 Out using the GUI
Conventions
Cisco APIC Advanced GUI Menu Selection sequences are displayed in Bolded Blue text, with >+ meaning Right-click and select, so that the following line:
Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool
should be interpreted as:
From the Cisco APIC Advanced GUI Main Menu, select Fabric
From the sub-menu, select Access Policies
In the Navigation Pane, expand Pools, then on the VLAN sub-item, right-click and select Create VLAN Pool.
If a particular tab in the Work Pane needs to be selected, it will be inserted into the sequence in square brackets, such as: … > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab
Within the Work Pane and within some dialogues, it will be necessary to click on a + icon to add an item. This is indicated by a (+) followed by the name of the item that needs to be added, so that: (+) Interface Selectors: should be interpreted as Click the + icon adjacent to the Interface Selectors: prompt.
Text that needs to be typed at prompts is presented in orange italicised bold text, while items to be selected from a drop down menu or by clicking options on the screen are shown in bolded underlined text. |
Options like clicking OK, UPDATE or SUBMIT are assumed, so not specifically stated unless required between sub-steps. Use your intelligence. |
Part 1: Set the Connectivity Preference for the pod to ooband
Firstly, since the default interface to use for external connections is the inband interface, I’m going to set the Connectivity Preference for the pod to ooband – just in case I lose access to the management GUI while configuring this.
Fabric > Fabric Policies > Global Policies > Connectivity Preferences
Interface to use for external connections: ooband
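If you’d like to confirm the change took effect before moving on, a quick object query from the APIC’s bash shell should do it – this is just a sketch, assuming the standard moquery tool on your APIC, and you should see interfacePref reported as ooband:

# From the APIC bash shell: query the connectivity preference object
moquery -c mgmtConnectivityPrefs | grep -E 'dn|interfacePref'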
Part 2: Configure the Access Policy Chain
This is a long slog – if you are not familiar with Cisco ACI Access Policies, you might want to read my earlier ACI Tutorials, especially Tutorial #4.
Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool
Name: inband-VLAN.Pool
Allocation Mode: Static Allocation
(+) Encap Blocks:
Range: VLAN 99 – VLAN 100
Fabric > Access Policies > Physical and External Domains > External Bridged Domains >+ Create Layer 2 Domain
Name: inband-ExtL2Dom
VLAN Pool: inband-VLAN.Pool
Fabric > Access Policies > Global Policies > Attachable Access Entity Profiles >+ Create Attachable Access Entity Profile
Name: inband-AEP
(+) Domains (VMM, Physical or External) To Be Associated To Interfaces:
Domain Profile: inband-ExtL2Dom
Fabric > Access Policies > Interface Policies > Policies > LLDP Interface >+ Create LLDP Interface Policy
Name: Enable-LLDP
[Leave default values – I just want to have a policy that spells out that LLDP is enabled]
Fabric > Access Policies > Interface Policies > Policy Groups > Leaf Policy Groups >+ Create Leaf Access Port Policy Group
Name: inband.LLDP-APPG
LLDP Policy: Enable-LLDP
Attached Entity Profile: inband-AEP
Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile
Name: L101-IntProf
(+) Interface Selectors:
Name: 1:10
Description: vCenter
Interface IDs: 1/10
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG
Now repeat for Leaf102
Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile
Name: L102-IntProf
(+) Interface Selectors:
Name: 1:10
Description: Mgmt Host
Interface IDs: 1/10
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG
Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile
Name: L101-LeafProf
(+) Leaf Selectors:
Name: Leaf101
Blocks: 101
UPDATE > NEXT
[x] L101-IntProf
And again for leaf 102
Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile
Name: L102-LeafProf
(+) Leaf Selectors:
Name: Leaf102
Blocks: 102
UPDATE > NEXT
[x] L102-IntProf
That’s the Access Policies done, now for the mgmt Tenant configuration.
Part 3: mgmt Tenant Configuration
Before I can assign a static IP address to an APIC or switch, the GUI forces me to create a Node Management EPG, so begin by creating one – I’ll use the name Default because I don’t expect I’ll ever need another, but I’ll use an upper-case D to distinguish it from system-created defaults, which always use a lowercase d.
Tenants > Tenant mgmt > Node Management EPGs >+ Create In-Band Management EPG
Name: Default
Encap: vlan-100
Bridge Domain: inb
Now I can create the Static Node Management Addresses.
Tenants > Tenant mgmt > Node Management Addresses > Static Node Management Addresses >+ Create Static Node Management Addresses
Node Range: 1 – 3
Config: In-Band Addresses
In-Band Management EPG: Default
In-Band IPV4 Address: 192.168.99.111/24
In-Band IPV4 Gateway: 192.168.99.1/24
[Tip: If you are following my steps, ignore the warning (as shown below). I already set the Interface to use for external connections to ooband, and in spite of the implication in the warning, your preference for management will NOT switch to In-Band]
Tedious as it was, I resisted the temptation to resort to the CLI, and repeated the above step for Nodes 101-102, and 201-202.
That default gateway IP address I defined on the nodes will reside in the inb Bridge Domain.
Tenants > Tenant mgmt > Networking > Bridge Domains > inb > Subnets >+ Create subnet
Gateway IP: 192.168.99.1/24
That took care of the internal network – the APICs were able to ping the default gateway and the Leaf switches, verifying that the configuration was valid, although at this stage I was not able to ping the Spine switches. However, I took heart from this video and assumed that all was OK.
apic1# ping -c 3 192.168.99.1
PING 192.168.99.1 (192.168.99.1) 56(84) bytes of data.
64 bytes from 192.168.99.1: icmp_seq=1 ttl=63 time=2.86 ms
64 bytes from 192.168.99.1: icmp_seq=2 ttl=63 time=0.827 ms
64 bytes from 192.168.99.1: icmp_seq=3 ttl=63 time=0.139 ms
--- 192.168.99.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.139/1.276/2.862/1.156 ms

apic1# ping -c 3 192.168.99.101
PING 192.168.99.101 (192.168.99.101) 56(84) bytes of data.
64 bytes from 192.168.99.101: icmp_seq=1 ttl=63 time=0.969 ms
64 bytes from 192.168.99.101: icmp_seq=2 ttl=63 time=0.176 ms
64 bytes from 192.168.99.101: icmp_seq=3 ttl=63 time=0.209 ms
--- 192.168.99.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.176/0.451/0.969/0.366 ms

apic1# ping -c 3 192.168.99.201
PING 192.168.99.201 (192.168.99.201) 56(84) bytes of data.
From 192.168.99.111 icmp_seq=1 Destination Host Unreachable
From 192.168.99.111 icmp_seq=2 Destination Host Unreachable
From 192.168.99.111 icmp_seq=3 Destination Host Unreachable
--- 192.168.99.201 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3005ms
I’ll need a contract to put between the L2EPG and the special management In-Band EPG – life will be easier if I create that first.
Tenants > Tenant mgmt > Security Policies > Contracts >+ Create Contract
Name: inband.MgmtServices-Ct
Scope: VRF [Default]
(+) Subjects:
Name: inband.MgmtServices-Subj
Filter Chain
(+) Filters
Name: common/default
Now to create the L2Out and the L2EPG
Tenants > Tenant mgmt > Networking > External Bridged Networks >+ Create Bridged Outside
Name: inband.VLAN99-L2Out
External Bridged Domain: inband-ExtL2Dom
Bridge Domain: mgmt/inb
Encap: VLAN 99
Nodes And Interfaces Protocol Profiles
Path Type: port
Path: Pod1/Node-101/eth1/10
ADD
Path: Pod1/Node-102/eth1/10
ADD>NEXT
(+) External EPG Networks
Name: inband.VLAN99-L2EPG
Have the L2EPG consume the contract I created earlier:
Tenants > Tenant mgmt > Networking > External Bridged Networks > inband.VLAN99-L2Out > Networks > inband.VLAN99-L2EPG
(+) Consumed Contracts:
Name: mgmt/inband.MgmtServices-Ct
And have the In-Band EPG provide it:
Tenants > Tenant mgmt > Node Management EPGs > In-Band EPG Default
(+) Provided Contracts:
Name: mgmt/inband.MgmtServices-Ct
Time to test!
To be confident that I will now be able to deploy a VMM Domain with connectivity to the Virtual Machine Manager (vCenter in my case), I’ll ping the VMM server from the APIC.
apic1# ping -c 3 192.168.99.99
PING 192.168.99.99 (192.168.99.99) 56(84) bytes of data.
64 bytes from 192.168.99.99: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.99: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.99: icmp_seq=3 ttl=64 time=0.238 ms
--- 192.168.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms
And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address:
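If you’d rather confirm from the Mgmt Host’s command line before (or instead of) opening a browser, a quick HTTPS probe is enough – this is only a sketch, assuming curl is installed on the Mgmt Host and using APIC1’s in-band address from my addressing plan:

# From the Mgmt Host: any 2xx/3xx response means the APIC web UI is answering in-band
curl -k -s -o /dev/null -w "HTTP %{http_code}\n" https://192.168.99.111/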
Step-by-Step: Configuring In-Band management via a L2 Out using the CLI
The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail. The following commands are entered in configuration mode.
Part 1: Set the Connectivity Preference for the pod to ooband
mgmt_connectivity pref ooband
Part 2: Configure the Access Policy Chain
# First, create the VLAN Pool and External L2 Domain
# If you type the command below, you may notice a curious thing -
# at the point where the word "type" appears, if you press "?"
# you will see options for <CR> and "dynamic", but not "type".
# In other words, "type" is a hidden option - I discovered it
# by creating a domain in the GUI and looking at the running
# config later.
vlan-domain inband-ExtL2Dom type l2ext
  vlan-pool inband-VLAN.Pool
  vlan 99-100
  exit
# And an Access Port Policy Group linked to the inband-ExtL2Dom
template policy-group inband.LLDP-APPG
# Another curious thing with the CLI is that there is no way
# to create an AEP - one gets created for you whether you
# want it or not when you link the APPG to the Domain in the
# following command.
  vlan-domain member inband-ExtL2Dom type l2ext
  exit
# Not necessary to create an Interface Policy to Enable-LLDP in the
# CLI, Interface Policies are applied directly to the interfaces
# Now the Leaf Profiles, Interface Profiles and Port Selectors
leaf-profile L101-LeafProf
  leaf-group Leaf101
    leaf 101
    exit
  leaf-interface-profile L101-IntProf
  exit
leaf-profile L102-LeafProf
  leaf-group Leaf102
    leaf 102
    exit
  leaf-interface-profile L102-IntProf
  exit
leaf-interface-profile L101-IntProf
  leaf-interface-group 1:10
    description 'vCenter'
    interface ethernet 1/10
    policy-group inband.LLDP-APPG
    exit
  leaf-interface-group 1:46-48
    description 'APICs'
    interface ethernet 1/46-48
    policy-group inband.LLDP-APPG
    exit
  exit
leaf-interface-profile L102-IntProf
  leaf-interface-group 1:10
    description 'Mgmt Host'
    interface ethernet 1/10
    policy-group inband.LLDP-APPG
    exit
  leaf-interface-group 1:46-48
    description 'APICs'
    interface ethernet 1/46-48
    policy-group inband.LLDP-APPG
    exit
  exit
That’s the Access Policies done, now for the mgmt Tenant configuration.
Part 3: mgmt Tenant Configuration
# Node IP addressing is configured OUTSIDE the mgmt
# Tenant in the CLI, so I'll do the mgmt Tenant bits
# first, in the order that best fits - defining the
# contract first means I can configure the AP in one hit
tenant mgmt
  contract inband.MgmtServices-Ct
    subject inband.MgmtServices-Subj
      access-group default both
      exit
    exit
  external-l2 epg inband.VLAN99-L2Out:inband.VLAN99-L2EPG
    bridge-domain member inb
    contract consumer inband.MgmtServices-Ct
    exit
  inband-mgmt epg Default
    contract provider inband.MgmtServices-Ct
    bridge-domain inb
    vlan 100
    exit
  interface bridge-domain inb
    ip address 192.168.99.1/24 secondary
    exit
  exit
# Now the Node IP addressing
controller 1
  interface inband-mgmt0
    ip address 192.168.99.111/24 gateway 192.168.99.1
    inband-mgmt epg Default vlan 100
    exit
  exit
controller 2
  interface inband-mgmt0
    ip address 192.168.99.112/24 gateway 192.168.99.1
    inband-mgmt epg Default vlan 100
    exit
  exit
controller 3
  interface inband-mgmt0
    ip address 192.168.99.113/24 gateway 192.168.99.1
    inband-mgmt epg Default vlan 100
    exit
  exit
switch 101
  interface inband-mgmt0
    ip address 192.168.99.101/24 gateway 192.168.99.1
    inband-mgmt epg Default
    exit
  exit
switch 102
  interface inband-mgmt0
    ip address 192.168.99.102/24 gateway 192.168.99.1
    inband-mgmt epg Default
    exit
  exit
switch 201
  interface inband-mgmt0
    ip address 192.168.99.201/24 gateway 192.168.99.1
    inband-mgmt epg Default
    exit
  exit
switch 202
  interface inband-mgmt0
    ip address 192.168.99.202/24 gateway 192.168.99.1
    inband-mgmt epg Default
    exit
  exit
# Finally, apply vlan configuration to the
# physical interfaces where necessary
leaf 101
  interface ethernet 1/10
    switchport trunk allowed vlan 99 tenant mgmt external-l2 epg inband.VLAN99-L2Out:inband.VLAN99-L2EPG
    exit
  exit
leaf 102
  interface ethernet 1/10
    switchport trunk allowed vlan 99 tenant mgmt external-l2 epg inband.VLAN99-L2Out:inband.VLAN99-L2EPG
    exit
  exit
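Before testing, it doesn’t hurt to eyeball what the CLI actually wrote. I’m assuming here that your version of the NX-OS style CLI lets you scope show running-config to a tenant the way mine does – if not, a plain show running-config will do:

# Compare the CLI-generated tenant config with the GUI steps from earlier
show running-config tenant mgmt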
Time to test!
To be confident that I will now be able to manage the APIC from my management host, I’ll ping the Mgmt Host from the APIC.
apic1# ping -c 3 192.168.99.10
PING 192.168.99.10 (192.168.99.10) 56(84) bytes of data.
64 bytes from 192.168.99.10: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.10: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.10: icmp_seq=3 ttl=64 time=0.238 ms
--- 192.168.99.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms
And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address – only this time, for a change, I’ll use ssh and access APIC#2.
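A minimal sketch of that test from the Mgmt Host (assuming the admin account and APIC#2’s in-band address from my addressing plan – getting a password prompt at all proves the in-band path is working):

# From the Mgmt Host: ssh to APIC#2's in-band address
ssh admin@192.168.99.112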
One interesting thing to note in the CLI configuration is that nowhere do you create an Attachable Access Entity Profile (AEP). But, when you enter the above commands, one miraculously appears (called __ui_pg_inband.LLDP-APPG) when you view the GUI.
Another bunch of mysteries arises in the mgmt Tenant, even if you go through the CLI config from a clean configuration. While entering the commands above in the CLI, the APIC will automatically add an Application Profile (called default) with an EPG (also called default). But it doesn’t stop there! There is also another Node Management EPG (called default) magically created, and a mystical contract (called inband-default-contract) with a link to a mysterious filter (called inband-default). I have no idea why, but here are some commands to clean up the crap left behind.
# Remove crap left behind by previous CLI commands
tenant mgmt
  no application default
  no contract inband-default-contract
  no inband-mgmt epg default
  no access-list inband-default
Step-by-Step: Configuring In-Band management via a L2 Out using the API
The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail. The following sections can be saved to a text file (with a .xml extension) and posted to your config using the GUI (right-click > Post …), or you can copy and paste the sections below into Postman.
Right-click > Post … Tutorial
Assume one of the sections below is stored in a text file with a .xml extension, such as connectivityPrefs.xml.
In the APIC GUI, any configuration item that has Post … as one of the right-click options can be used to post the file.
The contents of the .xml file must be posted to the uni Parent Distinguished Name (DN) as shown below:
The configuration defined in the .xml file will have been pushed into your config:
End of tutorial
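If you’d rather script the posts than click through the GUI or Postman, the same .xml files can be pushed with curl. This is only a sketch – the address and credentials are placeholders from my lab, and I’m using the standard aaaLogin call to grab a session cookie before posting to uni:

#!/bin/bash
APIC=https://192.168.99.111     # any reachable APIC address (oob or in-band)
USER=admin
PASS='MySecretPassword'         # hypothetical - substitute your own credentials

# Authenticate and store the session cookie
curl -sk -X POST -c cookie.txt \
  -d "<aaaUser name=\"${USER}\" pwd=\"${PASS}\"/>" \
  ${APIC}/api/aaaLogin.xml

# Post a snippet (e.g. connectivityPrefs.xml) to the uni parent DN
curl -sk -X POST -b cookie.txt \
  -d @connectivityPrefs.xml \
  ${APIC}/api/mo/uni.xml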
Part 1: Set the Connectivity Preference for the pod to ooband
<?xml version="1.0" encoding="UTF-8"?>
<!-- connectivityPrefs.xml -->
<mgmtConnectivityPrefs dn="uni/fabric/connectivityPrefs" interfacePref="ooband"/>
Part 2: Configure the Access Policy Chain
Save each of these snippets in a separate .xml file and post one at a time. Or use Postman and copy and paste.
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the VLAN Pool -->
<fvnsVlanInstP allocMode="static" dn="uni/infra/vlanns-[inband-VLAN.Pool]-static" name="inband-VLAN.Pool">
  <fvnsEncapBlk from="vlan-99" to="vlan-100"/>
</fvnsVlanInstP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the External L2 Domain, assign it the VLAN Pool -->
<l2extDomP dn="uni/l2dom-inband-ExtL2Dom" name="inband-ExtL2Dom">
  <infraRsVlanNs tDn="uni/infra/vlanns-[inband-VLAN.Pool]-static"/>
</l2extDomP>
<!-- Create an Attachable Access Entity Profile (AEP) -->
<infraAttEntityP descr="" dn="uni/infra/attentp-inband-AEP" name="inband-AEP">
  <infraRsDomP tDn="uni/l2dom-inband-ExtL2Dom"/>
</infraAttEntityP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Enable-LLDP Interface Policy -->
<lldpIfPol adminRxSt="enabled" adminTxSt="enabled" dn="uni/infra/lldpIfP-Enable-LLDP"/>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Access Port Policy Group -->
<infraAccPortGrp dn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG" name="inband.LLDP-APPG">
  <infraRsAttEntP tDn="uni/infra/attentp-inband-AEP"/>
  <infraRsLldpIfPol tnLldpIfPolName="Enable-LLDP"/>
</infraAccPortGrp>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Two Interface Profiles will be needed - first one for Leaf101 -->
<infraAccPortP dn="uni/infra/accportprof-L101-IntProf" name="L101-IntProf">
  <!-- Add an interface selector for the vCenter Server -->
  <infraHPortS descr="vCenter" name="1:10" type="range">
    <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
    <infraPortBlk fromCard="1" fromPort="10" name="block1" toCard="1" toPort="10"/>
  </infraHPortS>
  <!-- Add the ports where the APICs are connected -->
  <infraHPortS descr="APICs" name="1:46-48" type="range">
    <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
    <infraPortBlk fromCard="1" fromPort="46" name="block1" toCard="1" toPort="48"/>
  </infraHPortS>
</infraAccPortP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Another Interface Profile for Leaf102 -->
<infraAccPortP dn="uni/infra/accportprof-L102-IntProf" name="L102-IntProf">
  <!-- Add an interface selector for the Mgmt Host -->
  <infraHPortS descr="Mgmt Host" name="1:10" type="range">
    <infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
    <infraPortBlk fromCard="1" fromPort="10" name="block2" toCard="1" toPort="10"/>
  </infraHPortS>
  <!-- Add the ports where the APICs are connected -->
  <infraHPortS descr="APICs" name="1:46-48" type="range">
    <infraRsAccBaseGrp fexId="102" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
    <infraPortBlk fromCard="1" fromPort="46" name="block2" toCard="1" toPort="48"/>
  </infraHPortS>
</infraAccPortP>
<?xml version="1.0" encoding="UTF-8"?>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L101-LeafProf" name="L101-LeafProf">
  <infraLeafS name="Leaf101" type="range">
    <infraNodeBlk name="Default" from_="101" to_="101"/>
  </infraLeafS>
  <infraRsAccPortP tDn="uni/infra/accportprof-L101-IntProf"/>
</infraNodeP>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L102-LeafProf" name="L102-LeafProf">
  <infraLeafS name="Leaf102" type="range">
    <infraNodeBlk name="Default" from_="102" to_="102"/>
  </infraLeafS>
  <infraRsAccPortP tDn="uni/infra/accportprof-L102-IntProf"/>
</infraNodeP>
That’s the Access Policies done, now for the mgmt Tenant configuration.
Part 3: mgmt Tenant Configuration
<?xml version="1.0" encoding="UTF-8"?>
<!-- api/policymgr/mo/.xml -->
<polUni>
  <fvTenant name="mgmt">
    <mgmtMgmtP name="default">
      <!-- Create a Node Management EPG -->
      <mgmtInB encap="vlan-100" name="Default">
        <!-- Assign Addresses for APICs In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.111/24" gw="192.168.99.1" tDn="topology/pod-1/node-1"/>
        <mgmtRsInBStNode addr="192.168.99.112/24" gw="192.168.99.1" tDn="topology/pod-1/node-2"/>
        <mgmtRsInBStNode addr="192.168.99.113/24" gw="192.168.99.1" tDn="topology/pod-1/node-3"/>
        <!-- Assign Addresses for switches In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.101/24" gw="192.168.99.1" tDn="topology/pod-1/node-101"/>
        <mgmtRsInBStNode addr="192.168.99.102/24" gw="192.168.99.1" tDn="topology/pod-1/node-102"/>
        <mgmtRsInBStNode addr="192.168.99.201/24" gw="192.168.99.1" tDn="topology/pod-1/node-201"/>
        <mgmtRsInBStNode addr="192.168.99.202/24" gw="192.168.99.1" tDn="topology/pod-1/node-202"/>
        <!-- The Node Management EPG will be the provider for the Contract -->
        <mgmtRsMgmtBD tnFvBDName="inb"/>
        <fvRsProv tnVzBrCPName="inband.MgmtServices-Ct"/>
      </mgmtInB>
    </mgmtMgmtP>
    <!-- Create the Contract Assigned to the Default Node Management EPG -->
    <vzBrCP name="inband.MgmtServices-Ct" scope="context">
      <vzSubj name="inband.MgmtServices-Subj">
        <!-- Use the common/default filter -->
        <vzRsSubjFiltAtt directives="" tnVzFilterName="default"/>
      </vzSubj>
    </vzBrCP>
    <!-- Assign IP address to inb BD -->
    <fvBD name="inb">
      <fvSubnet ip="192.168.99.1/24"/>
    </fvBD>
    <!-- Create the L2Out and its associated L2EPG -->
    <l2extOut name="inband.VLAN99-L2Out">
      <l2extLNodeP name="default">
        <l2extLIfP name="default">
          <l2extRsPathL2OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/10]"/>
          <l2extRsPathL2OutAtt tDn="topology/pod-1/paths-102/pathep-[eth1/10]"/>
        </l2extLIfP>
      </l2extLNodeP>
      <l2extRsL2DomAtt tDn="uni/l2dom-inband-ExtL2Dom"/>
      <l2extRsEBd encap="vlan-99" tnFvBDName="inb"/>
      <l2extInstP name="inband.VLAN99-L2EPG">
        <!-- The L2EPG will consume the Contract -->
        <fvRsCons tnVzBrCPName="inband.MgmtServices-Ct"/>
      </l2extInstP>
    </l2extOut>
  </fvTenant>
</polUni>
Again, I’ll test by pinging the vCenter server from apic#3 for a change, and by browsing to the Visore interface of the APIC from the Mgmt Host.
apic3# ping -c 3 192.168.99.99
PING 192.168.99.99 (192.168.99.99) 56(84) bytes of data.
64 bytes from 192.168.99.99: icmp_seq=1 ttl=64 time=0.302 ms
64 bytes from 192.168.99.99: icmp_seq=2 ttl=64 time=0.221 ms
64 bytes from 192.168.99.99: icmp_seq=3 ttl=64 time=0.204 ms
--- 192.168.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.204/0.242/0.302/0.044 ms
The fact that the login screen comes up is proof that the Mgmt Host has connectivity to the APICs.
In the next installment, I will configure In-Band management so the fabric can be managed from an external network via a L3 out.
RedNectar
Note: If you would like the author or one of my colleagues to assist with the setup of your ACI installation, contact acimentor@housley.com.au and refer to this article. Housley works mainly around APJC, but is not restricted to this area.
References:
Cisco’s official ACI management documentation – I have informed Cisco of the fact that this documentation is not up to scratch – hopefully it will be fixed soon.
The Cisco APIC NX-OS Style Command-Line Interface Configuration Guide – especially the chapter on Configuring Management Interfaces was particularly helpful – much better than the reference above.
Also Cisco’s ACI Troubleshooting Book had a couple of hints about how things hang together.
Carl Niger’s youtube video series was helpful – I recommend it to you.
Cisco’s pathetic video on configuring In-Band management is simply not worth wasting your time on. But it’s included here since I referred to it.