Authenticates vSmart controllers and routers to the SD-WAN domain.
Orchestrates connectivity between Routers and Controllers (vSmart).
All SD-WAN devices MUST connect directly to vBond.
This requires a publicly reachable IP address.
i.e., vBond cannot be behind a NAT unless it is a 1:1 (static) NAT.
DTLS
vBond keeps a continuous DTLS tunnel established to each vSmart controller.
When an SD-WAN vEdge or cEdge comes online, it is configured to reach out to vBond via a DTLS tunnel. This facilitates authentication and joining the network with the vSmarts.
Authentication performed via certificates.
NAT Traversal
vBond is the middleman for SD-WAN devices authenticating and joining the network.
vBond allows all other SD-WAN devices to be behind NAT without issue.
Load Balancing
vBond automatically load balances between vSmart controllers as SD-WAN edges come online.
Internal Border Node:
Border of SD-Access fabric to all other internal networks
In and out of Fabric.
When a PC in the SD-Access fabric tries reaching shared services, it will query the control plane node (LISP MS/MR) and learn it needs to go to the internal border node. The internal border node has an eBGP peering with the fusion router, which provides reachability to the Shared Services block.
The internal border node is redistributing BGP into LISP and vice versa.
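A hedged sketch of that border-to-fusion handoff on the internal border node; the VRF name, AS numbers, and addresses are hypothetical, and in practice DNA Center generates this configuration:

```
! Internal border node: per-VRF eBGP peering to the fusion router
router bgp 65001
 address-family ipv4 vrf CAMPUS
  neighbor 10.50.1.2 remote-as 65002   ! fusion router (hypothetical IP/AS)
  neighbor 10.50.1.2 activate
  redistribute lisp metric 10          ! fabric (LISP) prefixes into BGP
```

The reverse direction (BGP routes back into LISP) is handled under the border node's LISP configuration.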
Default Border Node:
Border of SD-access fabric to all other networks that are not internal networks.
i.e., vendors, internet, etc. Similar to a default route.
Also known as PXTR (Proxy xTR).
Anywhere Border Node:
Can serve as both Internal and Default border node.
Note – According to Cisco Webinars, the underlay is already built in the CCIE Lab.
Manual:
Overlay can run over any type of underlay.
Can be Layer 2 or Layer 3
Highly recommended layer 3.
‘Lean and Mean’ underlay.
Spanning Tree is still needed if layer 2 is used as underlay.
Routing Protocol
Cisco recommends IS-IS.
Can be a different routing protocol; a different protocol is commonly used in brownfield deployments.
SD-Access supports EIGRP, OSPF, and IS-IS.
Each edge device must advertise loopback interfaces into underlay.
Loopbacks are used to form VXLAN tunnels.
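A minimal sketch of advertising an edge node's loopback (the VXLAN RLOC) into an IS-IS underlay; the NET, addresses, and interface names are hypothetical:

```
router isis
 net 49.0000.0000.0001.00
 metric-style wide
!
interface Loopback0
 ip address 192.168.255.1 255.255.255.255
 ip router isis
!
! Underlay point-to-point link toward the next switch
interface TenGigabitEthernet1/0/1
 ip address 10.0.0.1 255.255.255.252
 ip router isis
```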
Shared Services
DHCP, DNS, Domain Services, DNA Center, WLC.
These services sit outside of fabric domain.
Underlay needs to be routable to shared services.
ie. to internal border node.
A 0.0.0.0/0 default route will not work unless the internal and default (external) border nodes are the same device.
MTU
VXLAN requires an extra 50B for header.
54B if there’s a VLAN tag.
Cisco Recommends MTU of 9100B for the entire underlay.
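On many Catalyst platforms this is a single global command (a sketch; some platforms use per-interface `mtu` instead, and a reload may be required for it to take effect):

```
! Jumbo MTU for the underlay, per Cisco's 9100B recommendation
system mtu 9100
```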
The underlay can include devices not running as edge or border nodes:
Intermediate 'routing' devices, such as older switches, that just pass traffic.
Underlay link connectivity
P2P links between each switch in underlay.
Cisco recommends 10 Gbps of throughput between each switch.
TIMERS
DO NOT CHANGE IGP TIMERS.
Use BFD to improve failure detection.
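A hedged sketch of enabling BFD for an IS-IS underlay instead of tuning IGP timers; the interface name and timer values are hypothetical:

```
interface TenGigabitEthernet1/0/1
 ! 250ms tx/rx intervals, declare down after 3 missed hellos
 bfd interval 250 min_rx 250 multiplier 3
!
router isis
 bfd all-interfaces
```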
None of this is necessary if greenfield: use LAN Automation with factory-default IOS-XE switches. LAN Automation will build out the IS-IS underlay.
LAN Automation:
Initial task is running discovery to import a Border node into inventory.
Once a border node is added to inventory, DNA Center can hop from the border node to neighboring devices to auto-configure the underlay.
Note – ‘ip routing’ needs to be configured on seed/border node before starting LAN automation.
Behind the scenes, the border/seed node configures itself as a DHCP server, hands out leases to the other fabric devices, and then configures them.
The configuration is applied via the PnP (Plug and Play) agent on the unconfigured devices.
The additional fabric devices need to be completely factory reset.
Debugging logs at all severity levels (0-7). Monitor logging displays to the terminal when remoted into the device; console logging displays when physically plugged into the console port.
Buffered logging stores levels 0-7 in the local buffer.
'service timestamps' changes how log timestamps appear, e.g., including milliseconds (ms) in the logs.
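A sketch of those logging basics; the buffer size is an arbitrary example:

```
! Millisecond-resolution, local-time timestamps on log messages
service timestamps log datetime msec localtime
! Store levels 0-7 (up to debugging) in the local buffer
logging buffered 16384 debugging
```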
Conditional Debugging:
Allows logs to be generated only for specific things; can be tied to an access list.
The access-list was created permitting 10.10.10.10, then a route was created for that IP. In the logs we can see that there was a log generated for the additional static route.
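A sketch of the ACL-based conditional debug described above; the 10.10.10.10 address is from the example, while the next hop is hypothetical:

```
! Only generate routing debugs for prefixes matching ACL 1
access-list 1 permit host 10.10.10.10
debug ip routing 1
! Adding this static route then produces a debug log entry
ip route 10.10.10.10 255.255.255.255 10.30.1.4
```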
Sending logs to host:
Sending to syslog server 10.10.10.10; only logging up to level 4 (warning).
‘show logging’ will display where we’re sending logs to, and what levels we’re logging at for monitor, console and buffer.
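The example above amounts to two commands, verified afterward with `show logging`:

```
! Ship logs to the syslog server, severity 0-4 (warning and more severe)
logging host 10.10.10.10
logging trap warnings
```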
Allows tracking of IP service levels by using active traffic monitoring
Router generates packets to check service levels.
Used to measure and verify service levels.
QoS as example.
Uses different types of probes depending on app being monitored.
ICMP
TCP
UDP
Can be tied with object tracking to take actions.
Reliable static routes.
We’re running IP SLA examples between R5 and R4. R5 has a loopback with IP address 5.5.5.5 and R4 has loopback 4.4.4.4. The transit subnet is 10.30.1.0/24. R4 with .4 and R5 with .5. A backup path from R5 to R4 is through R3.
First config will just be ICMP from R5 to R4.
The only non-default configuration was changing the frequency to send a ping every 5 seconds. Next we need to schedule the probe to start now and run forever, then tie the SLA to a track object.
The current reachability between the devices is provided by OSPF. What we can do, though, is add a static route that will take precedence over OSPF's AD of 110 and make that static route depend on the IP SLA.
Once that is completed, the static route will be added or removed from the route table depending on the track succeeding.
After this we can add a floating static to R5 as well that has a higher administrative distance than our static with the track statement.
After shutting down the interface connecting R4 to R5, we see that the main static leaves the routing table and we have our alternate path with an AD of 15.
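The steps above can be sketched as follows on R5; the loopback, transit-subnet, and AD values are from the example, while the R3 next hop is hypothetical:

```
! ICMP probe from R5's loopback to R4's loopback every 5 seconds
ip sla 1
 icmp-echo 4.4.4.4 source-ip 5.5.5.5
 frequency 5
ip sla schedule 1 life forever start-time now
!
track 1 ip sla 1 reachability
!
! Primary static (AD 1, beats OSPF's 110) tied to the track
ip route 4.4.4.4 255.255.255.255 10.30.1.4 track 1
! Floating static via R3, AD 15 (next hop hypothetical)
ip route 4.4.4.4 255.255.255.255 10.35.1.3 15
```

When the probe fails, track 1 goes down, the primary static is withdrawn, and the AD-15 floating static takes over.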
Next example is going to use TCP probes instead of just ping.
R5
R4
On R5 we created a tcp-connect IP SLA and scheduled it to run forever. On R4 we had to configure an IP SLA TCP responder for the IP and port we're trying to hit. Now on R5 we can see the TCP connect is successful.
Same exact logic as the ping. We can change the tracking statement from IP SLA 1 to IP SLA 2, and now the tracking with the static route is taken care of.
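A sketch of the TCP-probe variant; the port number is hypothetical, since the example doesn't state it:

```
! R4: respond to TCP connects on the probed IP/port
ip sla responder tcp-connect ipaddress 4.4.4.4 port 5000
!
! R5: TCP-connect probe instead of ICMP
ip sla 2
 tcp-connect 4.4.4.4 5000 source-ip 5.5.5.5
 frequency 5
ip sla schedule 2 life forever start-time now
!
! Repoint the existing track from SLA 1 to SLA 2
track 1 ip sla 2 reachability
```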