The Brocade VCS HW VTEP solution uses two different IP interfaces for this connectivity: the VCS Virtual IP in the Management VRF for connecting to the NSX Controllers, and a Loopback or VRRP-E based VTEP in the Default VRF to talk to the ESXi VTEPs.

It is possible to set up special VRFs for both of these, if needed.

If we already have a functioning VCS fabric and just want to add the overlay-gateway function to it, the necessary IP routing may already be set up and in place.

In this case, configuring new Loopback interfaces on the rbridges that will participate in the overlay-gateway, and handling their reachability through the already-configured routing protocols (or static routes), should be fine.
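
As an illustration, here is roughly what that could look like. The addresses and rbridge IDs below match this lab, but the Loopback number is arbitrary and the exact CLI syntax may differ between NOS releases, so treat it as a sketch rather than a verified config:

rbridge-id 101
 interface Loopback 1
  ip address 192.168.150.1/32
  no shutdown
!
rbridge-id 102
 interface Loopback 1
  ip address 192.168.150.1/32
  no shutdown
!
rbridge-id 101
 ip route 192.168.50.0/24 192.168.150.2
!
rbridge-id 102
 ip route 192.168.50.0/24 192.168.150.2

Note that both rbridges carry the same Loopback address: with the Brocade HW VTEP the participating rbridges share the VTEP IP, which is why you'll see later that pings from both rbridge 101 and rbridge 102 are sourced from the same 192.168.150.1.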

Both ways, according to my contacts in the know, deliver the same functionality, so there are no benefits or drawbacks to help us decide.

Let’s turn to how things work then, and figure this one out.

To support management access, the nodes (rbridges) of the fabric maintain a shared VCS Virtual IP. It is typical for this IP to belong to the Management VRF and to be on the same subnet as the Management interfaces of your fabric nodes.

The Hardware Switch Controller (HSC) component of the overlay-gateway subsystem on VCS uses the Virtual IP for Management and Control plane communications, namely for talking to the NSX-v Controllers.
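
For reference, pointing the HSC at the NSX Controller cluster is a separate piece of configuration. The snippet below is a from-memory sketch: the profile name and controller IP are made up, and the exact keywords (nsx-controller, port, method) should be checked against the NOS VXLAN gateway configuration guide for your release:

nsx-controller nsx01
 ip address 10.0.0.41 port 6640 method ssl
 activate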

Ok, given that we can ping an ESXi VTEP just fine, let's ping in the other direction just for fun before closing off for today:

[[email protected]:~] vmkping ++netstack=vxlan 192.168.150.1
PING 192.168.150.1 (192.168.150.1): 56 data bytes
64 bytes from 192.168.150.1: icmp_seq=0 ttl=62 time=9.949 ms
64 bytes from 192.168.150.1: icmp_seq=1 ttl=62 time=2.229 ms
64 bytes from 192.168.150.1: icmp_seq=2 ttl=62 time=2.282 ms

--- 192.168.150.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 2.229/4.820/9.949 ms

Works, as expected.
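
(The ++netstack=vxlan option tells vmkping to source the probe from the host's dedicated VXLAN TCP/IP stack, i.e. from the VTEP vmkernel interface, rather than from the default management stack.)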

Once we’ve decided on the physical topology for our HW VTEP deployment, it’s time to ensure all the right network connectivity is in place.

HW VTEP requires connectivity to two separate, independent IP domains: Management and VTEP.
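
The two domains come together in the overlay-gateway configuration, which binds the gateway to the VTEP IP interface in the Default VRF, while the HSC side talks to the controllers over the Virtual IP. Here is a minimal from-memory sketch, with a made-up gateway name and this lab's rbridge IDs; keyword details may vary by NOS version:

overlay-gateway gw1
 type nsx
 ip interface Loopback 1
 attach rbridge-id add 101-102
 activate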

With a static route towards the ESXi VTEP subnet in place on rbridge 102, let's test reachability, sourcing the pings from our VTEP IP:

rbridge-id 102 ip route 192.168.50.0/24 192.168.150.2

VDX6740-101# ping 192.1 src-addr 192.168.150.1
Type Control-c to abort
PING 192.1 (192.1) from 192.168.150.1: 56 data bytes
--- 192.1 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss

VDX6740-102# ping 192.1 src-addr 192.168.150.1
Type Control-c to abort
PING 192.1 (192.1) from 192.168.150.1: 56 data bytes
64 bytes from 192.1: icmp_seq=0 ttl=62 time=2.198 ms
64 bytes from 192.1: icmp_seq=1 ttl=62 time=1.669 ms
64 bytes from 192.1: icmp_seq=2 ttl=62 time=1.989 ms
64 bytes from 192.1: icmp_seq=3 ttl=62 time=1.210 ms
64 bytes from 192.1: icmp_seq=4 ttl=62 time=1.729 ms
--- 192.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.210/1.759/2.198/0.333 ms

Works fine! Our little VCS is connected to the upstream network through a Port-channel, which means the return traffic is load-balanced.

Looks like in our case the return traffic for this particular conversation ends up on the link that's connected to rbridge 102, which would explain why the ping sourced from rbridge 101 saw 100% loss while the identical ping from rbridge 102 succeeded.
