Configuring In-Band Management for the APIC on Cisco ACI (Part #1 – via an EPG)


Anyone unlucky enough to have tried configuring In-Band management on the Cisco APIC has probably realised that it is not a simple task – which is probably why many Cisco support forum experts advise using out-of-band (oob) management instead [link].

And anyone unlucky enough to have tried deciphering Cisco’s official documentation for configuring In-Band management on the Cisco APIC, or to have watched their pathetic video (which simply does not work – it does not complete the job), is probably frustrated to the point of giving up.

Let me ease your frustration and take you through a journey showing you how to configure In-Band management for ACI in a variety of ways:

  1. Via an EPG (in the mgmt Tenant): this post
    1. using the GUI
    2. using the CLI
    3. using the API
  2. Via an external bridged network (L2 Out) (Part#2 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API
  3. Via an external routed network (L3 Out) (Part#3 of this series)
    1. using the GUI
    2. using the CLI
    3. using the API

In-Band Management Via an EPG in the mgmt Tenant

Let’s begin with a diagram showing my test setup for the EPG approach.

IP addressing for the Leaf and Spine switches will use the switch ID in the fourth octet of the 192.168.99.0/24 network. E.g., Spine 201 will be 192.168.99.201. The default gateway address, to be configured on the inb Bridge Domain in the mgmt tenant, will be 192.168.99.1.
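For reference, here is the complete addressing plan used throughout this post:

  * 192.168.99.1 – default gateway (the subnet on the inb Bridge Domain)
  * 192.168.99.101–102 – Leaf101 and Leaf102
  * 192.168.99.201–202 – Spine201 and Spine202
  * 192.168.99.111–113 – APIC1, APIC2 and APIC3
  * 192.168.99.99 – vCenter host
  * 192.168.99.10 – ACI Management host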

So let me plan exactly what will need to be done:

The Access Policy Chain

I’ll need to allocate VLAN IDs for the internal inband management EPG (VLAN 100) and, because there have to be two EPGs involved, another for the user-facing EPG (VLAN 99). I’ll put them in a VLAN Pool, which will connect to a Physical Domain, which in turn will link to an AEP. The AEP will carry appropriate Access Port Policy Group assignments tying it to the relevant attachment ports for the APICs, the vCenter host and the ACI Management host, as the picture shows.
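In other words, using the names you’ll see in the steps below, the chain will look like this:

inband-VLAN.Pool → inband-PhysDom → inband-AEP → inband.LLDP-APPG → Interface Selectors (1/10 and 1/46-48) → Leaf Interface Profiles (L101-IntProf, L102-IntProf) → Leaf Profiles (L101-LeafProf, L102-LeafProf)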

The mgmt Tenant

In the mgmt Tenant there are a number of tasks I’ll have to do.

I’ll need to create a special EPG called an In-band EPG.  This will have to be done before assigning the static addresses I want to the APICs, Leaves and Spines.

I’ll assign the default gateway IP address to the pre-defined inb Bridge Domain in the mgmt Tenant, and then create a second EPG (inband.Default-EPG) linked to the same inb Bridge Domain and linked to the inband-PhysDom that was created in the Access Policy Chain.

I’ll statically assign the ports (Ethernet 1/10 on Leaf101 and Leaf102) to the inband.Default-EPG using VLAN 99 encapsulation, making sure the link to the management host (on Leaf102) is untagged.

Finally, I’ll need to create a Contract (inband.MgmtServices-Ct) which will use the common/default filter to allow all traffic, and of course I’ll have to link the contract to the special In-Band EPG (provider) and the inband.Default-EPG (consumer) mentioned above.

Again, a picture tells the story:

If all goes well, when both the Access Policies and the Tenant configuration are complete, the APIC will be able to manage the vCenter VMM, and the Management Station bare metal server will be able to manage the ACI fabric via the APIC IP addresses.

Enough of design, time to start configuring!

Step-by-Step: Configuring inband management for an EPG via the GUI

Conventions

Cisco APIC Advanced GUI Menu Selection sequences are displayed in Bolded Blue text, with >+ meaning Right-click and select, so that the following line:
Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool
should be interpreted as:
From the Cisco APIC Advanced GUI Main Menu, select Fabric
From the sub-menu, select Access Policies
In the Navigation Pane, expand Pools, then on the VLAN sub-item, right-click and select Create VLAN Pool.
If a particular tab in the Work Pane needs to be selected, it will be inserted into the sequence in square brackets, such as:
… > Networks > 0.0.0.0:0-L3EPG > [Contracts] tab 
Within the Work Pane and within some dialogues, it will be necessary to click on a + icon to add an item. This is indicated by a (+) followed by the name of the item that needs to be added, so that:
(+) Interface Selectors:
should be interpreted as
Click the + icon adjacent the Interface Selectors: prompt.
Text that needs to be typed at prompts is presented in orange italicised bold text, while items to be selected from a drop-down menu or by clicking options on the screen are shown in bolded underlined text.
Options like clicking OK, UPDATE or SUBMIT are assumed, so not specifically stated unless required between sub-steps. Use your intelligence.

Part 1: Set the Connectivity Preference for the pod to ooband

Firstly, since the default interface to use for external connections is the inband interface, I’m going to set the Connectivity Preference for the pod to ooband – just in case I lose access to the management GUI while configuring this.

Fabric > Fabric Policies > Global Policies > Connectivity Preferences

Interface to use for external connections: ooband

Part 2: Configure the Access Policy Chain

This is a long slog – if you are not familiar with Cisco ACI Access Policies, you might want to read my earlier ACI Tutorials, especially Tutorial #4.

Fabric > Access Policies > Pools > VLAN >+ Create VLAN Pool

Name: inband-VLAN.Pool
Allocation Mode: Static Allocation
(+) Encap Blocks:
Range: VLAN 99 – VLAN 100

Fabric > Access Policies > Physical and External Domains > Physical Domains >+ Create Physical Domain

Name: inband-PhysDom
VLAN Pool: inband-VLAN.Pool

Fabric > Access Policies > Global Policies > Attachable Access Entity Profiles >+ Create Attachable Access Entity Profile

Name: inband-AEP
(+) Domains (VMM, Physical or External) To Be Associated To Interfaces:
Domain Profile: inband-PhysDom

Fabric > Access Policies > Interface Policies > Policies > LLDP Interface >+ Create LLDP Interface Policy

Name: Enable-LLDP
[Leave default values – I just want to have a policy that spells out that LLDP is enabled]

Fabric > Access Policies > Interface Policies > Policy Groups > Leaf Policy Groups >+ Create Leaf Access Port Policy Group

Name: inband.LLDP-APPG
LLDP Policy: Enable-LLDP
Attached Entity Profile: inband-AEP

Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile

Name: L101-IntProf
(+) Interface Selectors:
Name: 1:10
Description: vCenter
Interface IDs: 1/10
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Now repeat for Leaf102

Fabric > Access Policies > Interface Policies > Profiles > Leaf Profiles >+ Create Leaf Interface Profile

Name: L102-IntProf
(+) Interface Selectors:
Name: 1:10
Description: Mgmt Host
Interface IDs: 1/10
Interface Policy Group: inband.LLDP-APPG
(+) Interface Selectors:
Name: 1:46-48
Description: APICs
Interface IDs: 1/46-48
Interface Policy Group: inband.LLDP-APPG

Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile

Name: L101-LeafProf
(+) Leaf Selectors:
Name: Leaf101
Blocks: 101
UPDATE > NEXT
[x] L101-IntProf

And again for Leaf102

Fabric > Access Policies > Switch Policies > Profiles > Leaf Profiles >+ Create Leaf Profile

Name: L102-LeafProf
(+) Leaf Selectors:
Name: Leaf102
Blocks: 102
UPDATE > NEXT
[x] L102-IntProf

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

Before I can assign a static IP address to an APIC or switch, the GUI forces me to create a Node Management EPG, so begin by creating one – I’ll use the name Default because I don’t expect I’ll ever need another, but I’ll use an upper-case D to distinguish it from system-created defaults, which always use a lowercase d.

Tenants > Tenant mgmt > Node Management EPGs >+ Create In-Band Management EPG

Name: Default
Encap: vlan-100
Bridge Domain: inb

Now I can create the Static Node Management Addresses.

Tenants > Tenant mgmt > Node Management Addresses > Static Node Management Addresses >+ Create Static Node Management Addresses

Node Range: 1 – 3
Config: In-Band Addresses
In-Band Management EPG: Default
In-Band IPV4 Address: 192.168.99.111/24
In-Band IPV4 Gateway: 192.168.99.1

[Tip: If you are following my steps, ignore the warning (as shown below). I already set the Interface to use for external connections to ooband, and in spite of the implication in the warning, your preference for management will NOT switch to In-Band]

[Image: warning displayed when creating static node management addresses]

Tedious as it was, I resisted the temptation to resort to the CLI, and repeated the above step for Nodes 101-102 and 201-202.
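[Aside: if you want to check that an address has actually landed on an APIC at this point, the in-band interface appears in the APIC’s underlying Linux shell as a VLAN sub-interface of bond0, named after the encap VLAN of the In-Band EPG – bond0.100 in my case. So a quick sanity check (nothing official, just the Linux ifconfig command) would be:

apic1# ifconfig bond0.100

Don’t expect to be able to ping anything yet though – the default gateway won’t exist until the subnet is added to the inb Bridge Domain in the next step.]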

That default gateway IP address I defined on the nodes will reside in the inb Bridge Domain.

Tenants > Tenant mgmt > Networking > Bridge Domains > inb > Subnets >+ Create Subnet

Gateway IP: 192.168.99.1/24

That took care of the internal network – the APICs were able to ping the default gateway and the Leaf switches, verifying that the configurations were valid, although at this stage I was not able to ping the Spine switches. However, I took heart from this video and assumed that all was OK.

apic1# ping -c 3 192.168.99.1
PING 192.168.99.1 (192.168.99.1) 56(84) bytes of data.
64 bytes from 192.168.99.1: icmp_seq=1 ttl=63 time=2.86 ms
64 bytes from 192.168.99.1: icmp_seq=2 ttl=63 time=0.827 ms
64 bytes from 192.168.99.1: icmp_seq=3 ttl=63 time=0.139 ms

--- 192.168.99.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.139/1.276/2.862/1.156 ms
apic1# ping -c 3 192.168.99.101
PING 192.168.99.101 (192.168.99.101) 56(84) bytes of data.
64 bytes from 192.168.99.101: icmp_seq=1 ttl=63 time=0.969 ms
64 bytes from 192.168.99.101: icmp_seq=2 ttl=63 time=0.176 ms
64 bytes from 192.168.99.101: icmp_seq=3 ttl=63 time=0.209 ms

--- 192.168.99.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.176/0.451/0.969/0.366 ms
apic1# ping -c 3 192.168.99.201
PING 192.168.99.201 (192.168.99.201) 56(84) bytes of data.
From 192.168.99.111 icmp_seq=1 Destination Host Unreachable
From 192.168.99.111 icmp_seq=2 Destination Host Unreachable
From 192.168.99.111 icmp_seq=3 Destination Host Unreachable

--- 192.168.99.201 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3005ms

To allow access from my vCenter VMM and external management host, I’ll need to create an EPG.  Since this EPG will need to have a contract with the mgmt Tenant’s special In-Band EPG, it will be easiest to create the EPG in the mgmt Tenant, but there is no reason why I couldn’t have created this EPG in a different Tenant so long as I had configured all the fiddly pieces that go along with consuming a contract from another Tenant.

But before I create the EPG, I’ll need an Application Profile:

Tenants > Tenant mgmt > Application Profiles >+ Create Application Profile

Name: inband.Default-AP

Now I can create the EPG.

Tenants > Tenant mgmt > Application Profiles > inband.Default-AP > Application EPGs >+ Create Application EPG

Name: inband.Default-EPG
Bridge Domain: mgmt/inb

I’ll need to provide the link between the EPG and the Access Policy chain via the Physical Domain:

Tenants > Tenant mgmt > Application Profiles > inband.Default-AP > Application EPGs > inband.Default-EPG > Domains >+ Add Physical Domain Association

Physical Domain Profile: inband-PhysDom

And finally, I’ll link the EPG to the ports connected to vCenter and the Mgmt Host using VLAN 99 – tagged on Leaf101 (vCenter) and untagged on Leaf102 (Mgmt Host).

Tenants > Tenant mgmt > Application Profiles > inband.Default-AP > Application EPGs > inband.Default-EPG > Static Ports >+ Deploy Static EPG on PC, VPC, or Interface

Path Type: Port
Path: Pod-1/Node-101/eth1/10
Port Encap (…): VLAN 99
Mode: Trunk [Default]

I had to remember to make interface 1/10 untagged on Leaf102 – that is where the Mgmt Host is attached.

Tenants > Tenant mgmt > Application Profiles > inband.Default-AP > Application EPGs > inband.Default-EPG > Static Ports >+ Deploy Static EPG on PC, VPC, or Interface

Path Type: Port
Path: Pod-1/Node-102/eth1/10
Port Encap (…): VLAN 99
Mode: Access (Untagged)

That’s created both the Application Profile and the EPG; now I’ll need a Contract with a Subject that links to the common/default filter to allow all traffic. If I had wanted to be more restrictive, I could of course have created and linked my own filter.

Tenants > Tenant mgmt > Security Policies > Contracts >+ Create Contract

Name: inband.MgmtServices-Ct
Scope: VRF [Default]
(+) Subjects:
Name: inband.MgmtServices-Subj
Filter Chain
(+) Filters
Name: common/default

And finally, I’ll apply the contract so that it is provided by the special In-Band EPG and consumed by the EPG I created (inband.Default-EPG).

Tenants > Tenant mgmt > Node Management EPGs > In-Band EPG Default

(+) Provided Contracts:
Name: mgmt/inband.MgmtServices-Ct

Tenants > Tenant mgmt > Application Profiles > inband.Default-AP > Application EPGs > EPG inband.Default-EPG > Contracts >+ Add Consumed Contract

Contract: mgmt/inband.MgmtServices-Ct

Time to test!

To be confident that I will now be able to deploy a VMM Domain with connectivity to the Virtual Machine Manager (vCenter in my case), I’ll ping the VMM server from the APIC.

apic1# ping -c 3 192.168.99.99
PING 192.168.99.99 (192.168.99.99) 56(84) bytes of data.
64 bytes from 192.168.99.99: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.99: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.99: icmp_seq=3 ttl=64 time=0.238 ms

--- 192.168.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address:

[Image: the APIC GUI login page, reached from the management PC via the In-Band address]

Step-by-Step: Configuring inband management for an EPG via the CLI

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail. The following commands are entered in configuration mode.


Part 1: Set the Connectivity Preference for the pod to ooband

mgmt_connectivity pref ooband

Part 2: Configure the Access Policy Chain

# First, create the VLAN Pool and Physical Domain
# If you type the command below, you may notice a curious thing -
# at the point where the word "type" appears, if you press "?"
# you will see options for <CR> and "dynamic", but not "type".
# In other words, "type" is a hidden option - I discovered it
# by creating a domain in the GUI and looking at the running
# config later.
  vlan-domain inband-PhysDom type phys
    vlan-pool inband-VLAN.Pool
    vlan 99-100
    exit

# And an Access Port Policy Group linked to the inband-PhysDom
  template policy-group inband.LLDP-APPG
# Another curious thing with the CLI is that there is no way
# to create an AEP - one gets created for you whether you
# want it or not when you link the APPG to the Domain in the
# following command.
    vlan-domain member inband-PhysDom type phys
    exit

# No need to create an Enable-LLDP Interface Policy in the
# CLI - Interface Policies are applied directly to the interfaces

# Now the Leaf Profiles, Interface Profiles and Port Selectors
  leaf-profile L101-LeafProf
    leaf-group Leaf101
      leaf 101
      exit
    leaf-interface-profile L101-IntProf
    exit
  leaf-profile L102-LeafProf
    leaf-group Leaf102
      leaf 102
      exit
    leaf-interface-profile L102-IntProf
    exit

  leaf-interface-profile L101-IntProf
    leaf-interface-group 1:10
      description 'vCenter'
      interface ethernet 1/10
      policy-group inband.LLDP-APPG
      exit
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

  leaf-interface-profile L102-IntProf
    leaf-interface-group 1:10
      description 'Mgmt Host'
      interface ethernet 1/10
      policy-group inband.LLDP-APPG
      exit
    leaf-interface-group 1:46-48
      description 'APICs'
      interface ethernet 1/46-48
      policy-group inband.LLDP-APPG
      exit
    exit

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

# Node IP addressing is configured OUTSIDE the mgmt
# Tenant in the CLI, so I'll do the mgmt Tenant bits
# first, in the order that best fits - defining the
# contract first means I can configure the AP in one hit

  tenant mgmt
    contract inband.MgmtServices-Ct
      subject inband.MgmtServices-Subj
        access-group default both
        exit
      exit

    application inband.Default-AP
      epg inband.Default-EPG
        bridge-domain member inb
        contract consumer inband.MgmtServices-Ct
        exit
      exit

    inband-mgmt epg Default
      contract provider inband.MgmtServices-Ct
      bridge-domain inb
      vlan 100
      exit

    interface bridge-domain inb
      ip address 192.168.99.1/24 secondary
      exit
    exit

# Now the Node IP addressing

  controller 1
    interface inband-mgmt0
      ip address 192.168.99.111/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 2
    interface inband-mgmt0
      ip address 192.168.99.112/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit
  controller 3
    interface inband-mgmt0
      ip address 192.168.99.113/24 gateway 192.168.99.1
      inband-mgmt epg Default
      vlan 100
      exit
    exit

  switch 101
    interface inband-mgmt0
      ip address 192.168.99.101/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 102
    interface inband-mgmt0
      ip address 192.168.99.102/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 201
    interface inband-mgmt0
      ip address 192.168.99.201/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit
  switch 202
    interface inband-mgmt0
      ip address 192.168.99.202/24 gateway 192.168.99.1
      inband-mgmt epg Default
      exit
    exit

# Finally, apply vlan configuration to the
# physical interfaces where necessary

  leaf 101
    interface ethernet 1/10
      switchport trunk allowed vlan 99 tenant mgmt application inband.Default-AP epg inband.Default-EPG
      exit
    exit

  leaf 102
    interface ethernet 1/10
      switchport access vlan 99 tenant mgmt application inband.Default-AP epg inband.Default-EPG
      exit
    exit

Time to test!

To be confident that I will now be able to manage the APIC from my management host, I’ll ping the Mgmt Host from the APIC.

apic1# ping -c 3 192.168.99.10
PING 192.168.99.10 (192.168.99.10) 56(84) bytes of data.
64 bytes from 192.168.99.10: icmp_seq=1 ttl=64 time=0.458 ms
64 bytes from 192.168.99.10: icmp_seq=2 ttl=64 time=0.239 ms
64 bytes from 192.168.99.10: icmp_seq=3 ttl=64 time=0.238 ms

--- 192.168.99.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.238/0.311/0.458/0.105 ms

And the final test is to see if my management PC can indeed manage the APIC via the In-Band management IP address – only this time, for a change, I’ll use SSH, and connect to APIC#2.

[Image: SSH session from the Mgmt Host to APIC#2 via its In-Band address]

One interesting thing to note in the CLI configuration is that nowhere do you create an Attachable Access Entity Profile (AEP). But when you enter the above commands, one miraculously appears in the GUI (called __ui_pg_inband.LLDP-APPG).

[Image: the miraculously-appearing __ui_pg_inband.LLDP-APPG AEP shown in the GUI]

A myriad of other mysteries happens in the mgmt Tenant, even if you go through the CLI config from a clean configuration. While entering the commands above in the CLI, the APIC will automatically add an Application Profile (called default) with an EPG (also called default). But it doesn’t stop there! There is also another Node Management EPG (called default) magically created, and a mystical contract (called inband-default-contract) with a link to a mysterious filter (called inband-default). I have no idea why, but here are some commands to clean up the crap left behind.

# Remove crap left behind by previous CLI commands
tenant mgmt
  no application default
  no contract inband-default-contract
  no inband-mgmt epg default
  no access-list inband-default
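To confirm the clean-up worked (and to review everything configured in this section), a scoped show command does the trick – assuming your APIC software supports scoping show running-config by tenant:

apic1# show running-config tenant mgmt

None of the default/inband-default objects removed above should appear in the output.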

Step-by-Step: Configuring inband management for an EPG via the API

The main narrative for the configuration steps is contained in the explanation of the GUI configuration, so you should read that for more detail. The following sections can be saved to a text file (with a .xml extension) and posted to your config using the GUI (using right-click > Post …), or you can copy and paste the sections below into Postman.


Right-click > Post … Tutorial

Assume one of the sections below is stored in a text file with a .xml extension, such as connectivityPrefs.xml.

In the APIC GUI, any configuration item that has Post … as one of the right-click options can be used to post the file.

[Image: the right-click menu showing the Post … option]

The contents of the .xml file must be posted to the uni Parent Distinguished Name (DN) as shown below:

[Image: posting the .xml file to the uni Parent DN]

The configuration defined in the .xml file will have been pushed into your config:

[Image: the Connectivity Preferences updated by the posted file]

End of tutorial
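Incidentally, if you prefer a command line over Postman, the same .xml files can be posted with curl. Here’s a minimal sketch (substitute your own APIC address and credentials) – the ACI REST API requires a login post first, which returns a session cookie that must accompany the configuration post:

# Authenticate; curl saves the session cookie to cookies.txt
curl -sk -X POST https://<apic-ip>/api/aaaLogin.json \
  -c cookies.txt \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"<password>"}}}'

# Post the saved .xml file to the uni DN, sending the cookie back
curl -sk -X POST https://<apic-ip>/api/mo/uni.xml \
  -b cookies.txt -d @connectivityPrefs.xml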


Part 1: Set the Connectivity Preference for the pod to ooband

<?xml version="1.0" encoding="UTF-8"?>
<!-- connectivityPrefs.xml -->
<mgmtConnectivityPrefs dn="uni/fabric/connectivityPrefs" interfacePref="ooband"/>

Part 2: Configure the Access Policy Chain

Save each of these snippets in a separate .xml file and post one at a time.  Or use Postman and copy and paste.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the VLAN Pool -->
<fvnsVlanInstP allocMode="static" dn="uni/infra/vlanns-[inband-VLAN.Pool]-static" name="inband-VLAN.Pool">
    <fvnsEncapBlk from="vlan-99" to="vlan-100"/>
</fvnsVlanInstP>

<?xml version="1.0" encoding="UTF-8"?>
<!-- Create the Physical Domain, assign it the VLAN Pool -->
<physDomP dn="uni/phys-inband-PhysDom" name="inband-PhysDom">
    <infraRsVlanNs tDn="uni/infra/vlanns-[inband-VLAN.Pool]-static"/>
</physDomP>

<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Attachable Access Entity Profile (AEP) and link it to the Physical Domain -->
<infraAttEntityP descr="" dn="uni/infra/attentp-inband-AEP" name="inband-AEP">
  <infraRsDomP tDn="uni/phys-inband-PhysDom"/>
</infraAttEntityP>

<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Enable-LLDP Interface Policy -->
<lldpIfPol adminRxSt="enabled" adminTxSt="enabled" dn="uni/infra/lldpIfP-Enable-LLDP" />

<?xml version="1.0" encoding="UTF-8"?>
<!-- Create an Access Port Policy Group -->
<infraAccPortGrp dn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG" name="inband.LLDP-APPG">
    <infraRsAttEntP tDn="uni/infra/attentp-inband-AEP"/>
    <infraRsLldpIfPol tnLldpIfPolName="Enable-LLDP"/>
</infraAccPortGrp>

<?xml version="1.0" encoding="UTF-8"?>
<!-- Two Interface Profiles will be needed - first one for Leaf101 -->
<infraAccPortP dn="uni/infra/accportprof-L101-IntProf" name="L101-IntProf">
    <!-- Add an interface selector for the vCenter Server -->
    <infraHPortS descr="vCenter" name="1:10" type="range">
        <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="10" name="block1" toCard="1" toPort="10"/>
    </infraHPortS>
    <!-- Add the ports where the APICs are connected -->
    <infraHPortS descr="APICs" name="1:46-48" type="range">
        <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="46" name="block1" toCard="1" toPort="48"/>
    </infraHPortS>
</infraAccPortP>

<?xml version="1.0" encoding="UTF-8"?>
<!-- Another Interface Profile for Leaf102 -->
<infraAccPortP dn="uni/infra/accportprof-L102-IntProf" name="L102-IntProf">
    <!-- Add an interface selector for the Mgmt Host -->
    <infraHPortS descr="Mgmt Host" name="1:10" type="range">
        <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="10" name="block2" toCard="1" toPort="10"/>
    </infraHPortS>
    <!-- Add the ports where the APICs are connected -->
    <infraHPortS descr="APICs" name="1:46-48" type="range">
        <infraRsAccBaseGrp fexId="101" tDn="uni/infra/funcprof/accportgrp-inband.LLDP-APPG"/>
        <infraPortBlk fromCard="1" fromPort="46" name="block2" toCard="1" toPort="48"/>
    </infraHPortS>
</infraAccPortP>

<?xml version="1.0" encoding="UTF-8"?>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L101-LeafProf" name="L101-LeafProf">
    <infraLeafS name="Leaf101" type="range">
        <infraNodeBlk name ="Default" from_="101" to_="101"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-L101-IntProf"/>
</infraNodeP>

<?xml version="1.0" encoding="UTF-8"?>
<!-- Create a Leaf Profile to own the corresponding Interface Profile -->
<infraNodeP dn="uni/infra/nprof-L102-LeafProf" name="L102-LeafProf">
    <infraLeafS name="Leaf102" type="range">
        <infraNodeBlk name ="Default" from_="102" to_="102"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-L102-IntProf"/>
</infraNodeP>
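If you post the snippets one at a time, it’s worth reading each object back before moving on. Any object can be fetched by its DN with a GET – for example, to check the AEP created above (reusing the login cookie from the curl sketch earlier):

curl -sk -b cookies.txt https://<apic-ip>/api/mo/uni/infra/attentp-inband-AEP.xml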

That’s the Access Policies done, now for the mgmt Tenant configuration.

Part 3: mgmt Tenant Configuration

<?xml version="1.0" encoding="UTF-8"?>
<!-- api/policymgr/mo/.xml -->
<polUni>
  <fvTenant name="mgmt">
    <mgmtMgmtP name="default">
      <!-- Create a Node Management EPG -->
      <mgmtInB encap="vlan-100" name="Default">
        <!-- Assign Addresses for APICs In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.111/24" gw="192.168.99.1" tDn="topology/pod-1/node-1"/>
        <mgmtRsInBStNode addr="192.168.99.112/24" gw="192.168.99.1" tDn="topology/pod-1/node-2"/>
        <mgmtRsInBStNode addr="192.168.99.113/24" gw="192.168.99.1" tDn="topology/pod-1/node-3"/>
        <!-- Assign Addresses for switches In-Band management network -->
        <mgmtRsInBStNode addr="192.168.99.101/24" gw="192.168.99.1" tDn="topology/pod-1/node-101"/>
        <mgmtRsInBStNode addr="192.168.99.102/24" gw="192.168.99.1" tDn="topology/pod-1/node-102"/>
        <mgmtRsInBStNode addr="192.168.99.201/24" gw="192.168.99.1" tDn="topology/pod-1/node-201"/>
        <mgmtRsInBStNode addr="192.168.99.202/24" gw="192.168.99.1" tDn="topology/pod-1/node-202"/>
        <mgmtRsMgmtBD tnFvBDName="inb"/>
        <fvRsProv tnVzBrCPName="inband.MgmtServices-Ct"/>
      </mgmtInB>
    </mgmtMgmtP>
    <!-- Create the Contract Assigned to the Default Node Management EPG -->
    <vzBrCP name="inband.MgmtServices-Ct" scope="context">
      <vzSubj name="inband.MgmtServices-Subj">
        <!-- Use the common/default filter -->
        <vzRsSubjFiltAtt directives="" tnVzFilterName="default"/>
      </vzSubj>
    </vzBrCP>
    <!-- Assign IP address to inb BD -->
    <fvBD name="inb">
      <fvSubnet ip="192.168.99.1/24" />
    </fvBD>
    <!-- Create the Application Profile and EPG -->
    <fvAp name="inband.Default-AP">
      <fvAEPg name="inband.Default-EPG">
        <fvRsCons tnVzBrCPName="inband.MgmtServices-Ct"/>
        <fvRsPathAtt encap="vlan-99" tDn="topology/pod-1/paths-101/pathep-[eth1/10]"/>
        <!-- Make sure Leaf 102, port 1/10 is configured for untagged traffic -->
        <fvRsPathAtt encap="vlan-99" mode="untagged" tDn="topology/pod-1/paths-102/pathep-[eth1/10]"/>
        <fvRsDomAtt tDn="uni/phys-inband-PhysDom"/>
        <fvRsBd tnFvBDName="inb"/>
      </fvAEPg>
    </fvAp>
  </fvTenant>
</polUni>
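Since this file wraps everything in polUni, the whole mgmt Tenant configuration can be pushed in one hit – with curl it would look something like this (the file name inband-mgmt-tenant.xml is just my choice; save the snippet above as anything you like):

curl -sk -X POST https://<apic-ip>/api/policymgr/mo/.xml \
  -b cookies.txt -d @inband-mgmt-tenant.xml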

Again, I’ll test by pinging the vCenter server – from apic#3 for a change – and browse to the Visore interface of the APIC from the Mgmt Host.

apic3# ping -c 3 192.168.99.99
PING 192.168.99.99 (192.168.99.99) 56(84) bytes of data.
64 bytes from 192.168.99.99: icmp_seq=1 ttl=64 time=0.302 ms
64 bytes from 192.168.99.99: icmp_seq=2 ttl=64 time=0.221 ms
64 bytes from 192.168.99.99: icmp_seq=3 ttl=64 time=0.204 ms

--- 192.168.99.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.204/0.242/0.302/0.044 ms

The fact that the login screen comes up is proof that the Mgmt Host has connectivity to the APICs.

[Image: the Visore login screen, reached from the Mgmt Host]

In the next instalment, I will configure In-Band management via an L2 Out.

RedNectar

Note: If you would like the author or one of my colleagues to assist with the setup of your ACI installation, contact acimentor@housley.com.au and refer to this article. Housley works mainly around APJC, but is not restricted to this area.

References:

Cisco’s official ACI management documentation – I have informed Cisco that this documentation is not up to scratch; hopefully it will be fixed soon.

The Cisco APIC NX-OS Style Command-Line Interface Configuration Guide – especially the chapter on Configuring Management Interfaces – was particularly helpful, and much better than the reference above.

Also Cisco’s ACI Troubleshooting Book had a couple of hints about how things hang together.

Carl Niger’s YouTube video series was helpful – I recommend it to you.

Cisco’s pathetic video on configuring In-Band management is simply not worth wasting your time on. But it’s included here since I referred to it.

