In a Cisco ACI deployment, Cisco recommends that “The TEP IP address pool should not overlap with existing IP address pools that may be in use by the servers (in particular, by virtualized servers).”
Let me tell you a reason much closer to reality why you might want to avoid overlapping your Cisco ACI TEP addresses with your locally configured addressing scheme.
When you first configure a Cisco ACI fabric, you need to configure a range of IP addresses that the ACI fabric uses internally for VTEP addressing of the APICs, leaf switches and spine switches, as well as for other internally used addresses such as the anycast addresses for the spine proxy functions.
As I mentioned, Cisco recommends that “The TEP IP address pool should not overlap with existing IP address pools that may be in use by the servers (in particular, by virtualized servers).” I can only guess from the wording of this advice that Cisco foresees some issue with the APICs being able to reach remote VTEPs on Cisco AVS virtual switches, but I see this as an outlier scenario.
The problem with VTEP IP address pools is the APICs. You see, the APICs can’t handle:
- having a management IP address that overlaps with the VTEP address space (it can’t figure out which interface to send management responses on), or
- being accessed from a workstation that is using an IP address that overlaps with the VTEP address space.
Since it is conceivable that any internal IP address may need to access the APIC for some reason sometime, I would recommend that you don’t overlap VTEP addresses with any currently used internal addresses.
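Before you commit to a TEP pool, a quick sanity check costs nothing. The sketch below (plain Python, using only the standard ipaddress module) checks a proposed pool against a list of subnets already in use; the pool and subnet values are made-up examples, so substitute your own.

```python
#!/usr/bin/env python3
"""Sanity-check a proposed ACI TEP pool against subnets already in use."""
import ipaddress

# Hypothetical values - substitute your own proposed pool and existing subnets
proposed_tep_pool = ipaddress.ip_network("10.0.0.0/16")

existing_subnets = [
    ipaddress.ip_network("10.0.11.0/24"),     # e.g. an OOB management subnet
    ipaddress.ip_network("172.16.11.0/24"),   # e.g. another internal subnet
    ipaddress.ip_network("192.168.10.0/24"),  # e.g. a virtualised server subnet
]

for subnet in existing_subnets:
    if proposed_tep_pool.overlaps(subnet):
        print(f"WARNING: TEP pool {proposed_tep_pool} overlaps {subnet}")
    else:
        print(f"OK: {subnet} does not overlap {proposed_tep_pool}")
```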
Below is an example of the routing table from an APIC:
```
apic1# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.16.11.1     0.0.0.0         UG        0 0          0 oobmgmt
10.0.0.0        10.0.0.30       255.255.0.0     UG        0 0          0 bond0.3967
10.0.0.30       0.0.0.0         255.255.255.255 UH        0 0          0 bond0.3967
10.0.32.64      10.0.0.30       255.255.255.255 UGH       0 0          0 bond0.3967
10.0.32.65      10.0.0.30       255.255.255.255 UGH       0 0          0 bond0.3967
169.254.1.0     0.0.0.0         255.255.255.0   U         0 0          0 teplo-1
169.254.254.0   0.0.0.0         255.255.255.0   U         0 0          0 lxcbr0
172.16.11.0     0.0.0.0         255.255.255.0   U         0 0          0 oobmgmt
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
apic1#
```
In this case, the VTEP address range is 10.0.0.0/16, and the APIC sees all 10.0.x.x IP addresses as being reachable via the bond0.3967 interface, as shown by the
10.0.0.0 10.0.0.30 255.255.0.0 UG 0 0 0 bond0.3967
routing table entry on the APIC.
Recall I said that the APICs can’t handle:
- having a management IP address that overlaps with the VTEP address space (it can’t figure out which interface to send management responses on), or
- being accessed from a workstation that is using an IP address that overlaps with the VTEP address space.
I’ll deal with case #2 first.
Now imagine for a minute that I have a workstation with an IP address of, say, 10.0.11.11 that wishes to communicate with the OOB (Out of Band) management IP address of the APIC, which happens to be 172.16.11.111. That remote workstation (10.0.11.11) may well have a perfectly good route to 172.16.11.111, and may indeed be able to send packets to the APIC.
The problem of course arises when the APIC tries to send the reply packets to 10.0.11.11. As per the APIC’s routing table, the APIC would expect to reach 10.0.11.11 via its bond0.3967 interface, as shown by the
10.0.0.0 10.0.0.30 255.255.0.0 UG 0 0 0 bond0.3967
routing table entry on the APIC.
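If you want to see why the reply goes astray, here is a small sketch that emulates the kernel’s longest-prefix-match lookup against the routes shown above. It is just an illustration of the lookup logic, not anything running on the APIC; the workstation address 10.0.11.11 is the example from the paragraph above.

```python
#!/usr/bin/env python3
"""Emulate the longest-prefix-match lookup the APIC does for its reply packets."""
import ipaddress

# (destination network, egress interface) pairs taken from the routing table above
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "oobmgmt"),       # default route
    (ipaddress.ip_network("10.0.0.0/16"), "bond0.3967"),  # VTEP pool
    (ipaddress.ip_network("172.16.11.0/24"), "oobmgmt"),  # OOB management subnet
]

def egress_interface(dst: str) -> str:
    """Return the egress interface of the most specific route that matches dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, ifname) for net, ifname in routes if addr in net]
    # Longest prefix wins, just like the kernel's route lookup
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(egress_interface("10.0.11.11"))  # -> bond0.3967: the reply heads into the fabric,
                                       #    not back out the oobmgmt interface
print(egress_interface("192.0.2.50"))  # -> oobmgmt: a non-overlapping address follows
                                       #    the default route as expected
```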
Similarly with case #1. This time, imagine I had used 10.0.11.0/24 as my OOB Management subnet. Since that overlaps with my VTEP range (10.0.0.0/16), there is the potential for IP addresses from my OOB subnet (10.0.11.0/24) to be allocated to VTEPs somewhere. If that happened, my APIC would be unable to communicate with any 10.0.11.0/24 address on the OOB subnet that clashes with a VTEP address. In theory, the APIC would still be able to communicate with the VTEP addresses themselves, because it adds a /32 route to its routing table for every VTEP, but in my experience, when I saw a customer with this configuration, there was a problem communicating with the OOB subnet.
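To round out case #1, the same toy lookup shows both halves of the story: a hypothetical VTEP that happens to be allocated 10.0.11.5 gets a /32 route via bond0.3967 (so the APIC can still reach the VTEP), while an OOB host unlucky enough to use that same address becomes unreachable, and the rest of 10.0.11.0/24 only stays reachable for as long as no VTEP claims those addresses. The /32 route and the host addresses below are invented purely for illustration.

```python
#!/usr/bin/env python3
"""Case #1: OOB subnet 10.0.11.0/24 overlapping the 10.0.0.0/16 TEP pool."""
import ipaddress

# Hypothetical routing table for this scenario (addresses invented for illustration)
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "oobmgmt"),        # default route
    (ipaddress.ip_network("10.0.0.0/16"), "bond0.3967"),   # VTEP pool
    (ipaddress.ip_network("10.0.11.0/24"), "oobmgmt"),     # connected OOB subnet
    (ipaddress.ip_network("10.0.11.5/32"), "bond0.3967"),  # a VTEP allocated 10.0.11.5
]

def egress_interface(dst: str) -> str:
    """Return the egress interface of the most specific matching route."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, ifname) for net, ifname in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(egress_interface("10.0.11.5"))   # -> bond0.3967: the VTEP /32 wins, so an OOB
                                       #    host using 10.0.11.5 can no longer be reached
print(egress_interface("10.0.11.99"))  # -> oobmgmt: still OK, but only until a VTEP
                                       #    claims this address too
```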
RedNectar
STOP PRESS
I’ve just been reading this discussion on the Cisco forum. It seems that the docker0 interface introduced in version 2.2 may also screw up the APIC’s view of the rest of the world in the same way (you can see it claiming 172.17.0.0/16 in the routing table above).
References:
This is an expansion of a reply I gave on the Cisco Support forum: https://supportforums.cisco.com/discussion/13311571/overlapping-or-non-overlapping-vtep-pool
More information on VTEP addressing in the Cisco Application Centric Infrastructure Best Practices Guide