Data Plane – vEdge/cEdge

  • Available as hardware, software and cloud functions.
  • Sit at geographically disparate locations.
  • Push data across WAN.
  • Function as normal routers and SD-WAN Overlay routers.
    • i.e. OMP, localized and centralized policies.
  • Each router sets up a DTLS tunnel to each vSmart controller in the SD-WAN fabric.
  • Forms OMP adjacencies over DTLS tunnels to vSmarts for routing information.
  • Sets up standard IPSEC tunnels to other SD-WAN routers.

  • vEdge
    • Viptela platform running Viptela Software.
  • cEdge
    • Viptela software running alongside Cisco IOS-XE.
    • Operates on the ISR, ASR1k, CSR1000v, and ISRv platforms.

Control Plane – vSmart/OMP

  • vSmart Controller
    • Control plane/brains of the solution.
    • Managed via the CLI or (most likely and recommended) by vManage.
    • Post router authentication, there is a permanent DTLS tunnel between each vSmart and each vEdge/cEdge.
      • These DTLS tunnels carry the OMP neighborship between the vSmart and each router.
    • vSmart controller uses OMP to determine topology and calculate best routes to different destinations across SD-WAN fabric.
    • Services vSmart offers (configured on vManage)
      • VPN segmentation
      • Traffic engineering
      • Service chaining.
      • QoS

Orchestration Plane – vBond

  • vBond
    • Authenticates vSmart controllers and routers to the SD-WAN domain.
    • Orchestrates connectivity between Routers and Controllers (vSmart).
    • All SD-WAN devices MUST connect to vBond first.
      • vBond requires a publicly reachable IP address.
        • i.e. vBond cannot be behind a NAT unless it is a 1:1 (static) NAT.

  • DTLS
    • vBond keeps a continuous DTLS tunnel established to each vSmart controller.
    • When an SD-WAN vEdge or cEdge comes online, it is configured to reach out to vBond via a DTLS tunnel. This facilitates authentication and joining the network with the vSmarts.
      • Authentication performed via certificates.
  • NAT Traversal
    • vBond is the middle man for SD-WAN devices authenticating and joining the network.
      • vBond allows all other SD-WAN devices to be behind NAT without issue.
  • Load Balancing
    • vBond automatically load balances between vSmart controllers as SD-WAN edges come online.
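
The behavior above is driven by a few lines of system configuration on each SD-WAN device; a minimal sketch in Viptela CLI, with hypothetical system-ip, site-id, organization name, and vBond address:

```
system
 host-name         vedge1
 system-ip         10.255.255.1
 site-id           100
 organization-name "EXAMPLE-ORG"
 vbond 203.0.113.10
```

Once the device authenticates with vBond, vBond hands it the list of vSmart controllers to connect to.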

Internal and Default Border Nodes

Internal Border Node:

  • Border of SD-Access fabric to all other internal networks
    • In and out of Fabric.
    • When a PC in the SD-Access fabric tries reaching shared services, it will query the control plane node (LISP MS/MR) and find out it needs to go to the internal border node. The internal border node has an eBGP peering setup with the fusion router which allows the reachability to the Shared Services block.
    • The internal border node is redistributing BGP into LISP and vice versa.
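
A hypothetical sketch of the border-to-fusion eBGP peering (made-up ASNs, VRF name, and addresses; the LISP side of the redistribution is configured separately under ‘router lisp’):

```
router bgp 65001
 address-family ipv4 vrf CAMPUS
  ! eBGP peering to the fusion router for shared-services reachability
  neighbor 10.10.10.2 remote-as 65002
  neighbor 10.10.10.2 activate
  ! advertise fabric (LISP) prefixes toward the fusion router
  redistribute lisp metric 10
```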

Default Border Node:

  • Border of SD-access fabric to all other networks that are not internal networks.
    • i.e. vendors, internet, etc. Similar to a default route.
    • Also known as the PXTR (LISP proxy xTR).

Anywhere Border Node:

  • Can serve as both Internal and Default border node.

SDA – Fabric domains

  • Fabric site
    • Consists of:
      • Edge Nodes
      • Border Nodes
      • Control Plane Nodes
    • Independent of physical location.
    • A single DNA Center instance can manage multiple sites.
    • Fabric in a Box
      • CP, Border and edge all in one box.
      • Specifically for a small site that has a single SD-Access fabric device.
        • i.e. one switch.
  • Fabric Site connectivity
    • Transit Network
      • Connects to each fabric site via their border nodes.
      • SDA Transit Network
        • Maintains VXLAN across sites.
        • Carries VNIDs and SGTs.
        • Typically dark fiber.
        • Contains Transit Control Plane nodes.
      • IP Based Transit Network
        • Typical private leased lines from carriers.
        • Cannot control things like MTU.
        • VXLAN not carried across transit.
        • Re-identifying traffic is necessary when crossing IP based transit.

SDA – Underlay Network – PnP/IS-IS

Note – According to Cisco Webinars, the underlay is already built in the CCIE Lab.


  • Overlay can run over any type of underlay.
    • Can be Layer 2 or Layer 3
      • Highly recommended layer 3.
        • ‘Lean and Mean’ underlay.
      • Spanning Tree is still needed if layer 2 is used as underlay.
  • Routing Protocol
    • Cisco recommends IS-IS.
    • A different routing protocol can be used; this is common in brownfield deployments.
    • SD-Access supports EIGRP, OSPF, and IS-IS.
    • Each edge device must advertise loopback interfaces into underlay.
      • Loopbacks are used to form VXLAN tunnels.
    • Shared Services
      • DHCP, DNS, Domain Services, DNA Center, WLC.
      • These services sit outside of fabric domain.
        • The underlay needs to be routable to shared services.
          • i.e. via the internal border node.
          • This will not work unless the internal and external border nodes are the same device.
  • MTU
    • VXLAN requires an extra 50B for header.
      • 54B if there’s a VLAN tag.
    • Cisco recommends an MTU of 9100B for the entire underlay.
      • This includes devices not running as edge or border nodes.
        • Middle ‘routing’ devices such as older switches just passing traffic.
  • Underlay link connectivity
    • P2P links between each switch in underlay.
    • Recommends 10Gbps of throughput between each switch.
    • Use BFD to improve failure detection.
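
The recommendations above can be sketched for a single underlay switch (hypothetical addressing and NET; the jumbo-MTU command is platform-dependent):

```
! Loopback advertised into IS-IS; used to source VXLAN tunnels
interface Loopback0
 ip address 10.255.0.1 255.255.255.255
 ip router isis
!
! P2P routed link to the next underlay switch
interface TenGigabitEthernet1/0/1
 no switchport
 ip address 10.0.1.0 255.255.255.254
 ip router isis
 isis network point-to-point
 bfd interval 250 min_rx 250 multiplier 3
!
router isis
 net 49.0000.0000.0001.00
 is-type level-2-only
 metric-style wide
 bfd all-interfaces
!
! jumbo frames for VXLAN overhead (on many Catalyst platforms)
system mtu 9100
```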

None of this is necessary if greenfield – use LAN Automation with factory-default IOS-XE switches. LAN Automation will build out the IS-IS underlay.

LAN Automation:

  • Initial task is running discovery to import a Border node into inventory.
    • Once a border node is added to inventory, DNA can hop from the border node into neighboring devices to auto configure underlay.
    • Note – ‘ip routing’ needs to be configured on seed/border node before starting LAN automation.
      • Border is actually behind the scenes configuring itself as a DHCP server, handing out leases to other fabric devices, and then configuring them.
      • In addition, the configuration is performed by a PnP agent on the un-configured devices.
        • The additional fabric devices need to be completely factory reset.
        • The last button is ‘Stop Automation’.
          • Counter-intuitive.


Local Logging:

The debugging level logs all severities 0-7. Monitor logging displays logs in the terminal session when remoted into the device (after ‘terminal monitor’); console logging displays them when physically plugged into the console port.

Buffered logging at level 7 stores severities 0-7 in the local buffer.

‘service timestamps’ changes how logs are timestamped, e.g. including milliseconds.
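
A sketch of the logging commands described above (the buffer size is arbitrary):

```
service timestamps log datetime msec localtime
logging buffered 16384 debugging      ! severities 0-7 to the local buffer
logging console informational
logging monitor debugging
```

In a remote (vty) session, ‘terminal monitor’ is also required before monitor logs appear on screen.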

Conditional Debugging:

Allows debug output to be generated only for specific conditions. Can be tied to an access-list.

An access-list was created permitting a host, then a static route was created for that IP. In the logs we can see that a message was generated for the additional static route.
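
A sketch of that example with a hypothetical host and next hop (‘debug ip routing’ accepts an access-list to filter its output):

```
access-list 1 permit host 192.0.2.100
! debug routing-table changes, filtered by ACL 1
debug ip routing 1
! adding a matching static route now generates a debug message
ip route 192.0.2.100 255.255.255.255 10.0.0.2
```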

Sending logs to host:

Sending to a syslog server; only logging up to level 4 (warnings).
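
A sketch, assuming a hypothetical syslog server at 192.0.2.50:

```
logging host 192.0.2.50
logging trap warnings      ! level 4 (warnings) and below
```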

‘show logging’ will display where we’re sending logs to, and what levels we’re logging at for monitor, console and buffer.


  • SNMP
    • Can be used for event driven or for pull.
    • v2c
      • Uses clear text community string for authentication.
      • Polling should be combined with an access-list for security.
        • Only authorized management station should be able to poll.
    • v3
      • Secure authentication and encryption.
      • authNoPriv
        • authentication but no encryption.
      • authPriv
        • authentication and encryption.
Enables all traps
Specifies 2c server and community ‘public’
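
The two lines above can be sketched as follows, adding the recommended access-list (hypothetical management station 192.0.2.50):

```
access-list 99 permit 192.0.2.50
snmp-server community public ro 99
snmp-server enable traps
snmp-server host 192.0.2.50 version 2c public
```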

For SNMPv3 we’ll need to do the following:

  • Create a group
  • Create a user
  • Add user to group
  • Specify authentication and encryption
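
A sketch of those steps (hypothetical group, user, and passwords; the user command covers creating the user, adding it to the group, and setting auth/encryption in one line):

```
snmp-server group ADMIN-GRP v3 priv
snmp-server user NMS-USER ADMIN-GRP v3 auth sha AuthPass1 priv aes 128 PrivPass1
```

Note that SNMPv3 users are not shown in the running-config; verify with ‘show snmp user’.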

Flexible Netflow

  • Provides stats for traffic flowing through router.
  • Network monitoring, capacity planning, security analysis, accounting.
  • Config steps
    • Create flow record
    • Configure flow exporter.
    • Create a flow monitor.
    • Apply flow monitor to link.
  • Verified locally with ‘show flow monitor <name> cache’

Flow Record:
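
A sketch of a flow record matching the standard 5-tuple (hypothetical name):

```
flow record RECORD1
 match ipv4 source address
 match ipv4 destination address
 match ipv4 protocol
 match transport source-port
 match transport destination-port
 collect counter bytes
 collect counter packets
```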

Flow Monitor:
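
A sketch, assuming a record named RECORD1:

```
flow monitor MONITOR1
 record RECORD1
 cache timeout active 60
```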

Apply to Link:
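
A sketch, assuming a monitor named MONITOR1 on a hypothetical interface:

```
interface GigabitEthernet0/1
 ip flow monitor MONITOR1 input
```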

Verification (only locally):
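
Assuming a monitor named MONITOR1:

```
show flow monitor MONITOR1 cache
show flow monitor MONITOR1 statistics
```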

Exporter (If server available):
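
A sketch with a hypothetical collector at 192.0.2.50, tied back into the monitor:

```
flow exporter EXPORTER1
 destination 192.0.2.50
 transport udp 2055
 export-protocol netflow-v9
!
flow monitor MONITOR1
 exporter EXPORTER1
```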

IP SLA – Object Tracking

  • IP SLA
    • Allows tracking of IP service levels by using active traffic monitoring
      • Router generates packets to check service levels.
    • Used to measure and verify service levels.
      • QoS as example.
    • Uses different types of probes depending on app being monitored.
      • ICMP
      • TCP
      • UDP
    • Can be tied with object tracking to take actions.
      • Reliable static routes.

We’re running IP SLA examples between R5 and R4. R5 and R4 each have a loopback interface. On the transit subnet, R4 is .4 and R5 is .5. A backup path from R5 to R4 runs through R3.

First config will just be ICMP from R5 to R4.

The only non-default configuration was changing the frequency to a ping every 5 seconds. Next we schedule the probe to start now and run forever, then assign the SLA to a track object.
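
A sketch of those steps, using hypothetical transit addresses (R4 = 10.45.0.4, R5 = 10.45.0.5):

```
ip sla 1
 icmp-echo 10.45.0.4 source-ip 10.45.0.5
 frequency 5
ip sla schedule 1 life forever start-time now
!
track 1 ip sla 1 reachability
```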

The current reachability between the devices is provided by OSPF. We can add a static route that will take precedence over OSPF’s AD of 110 and make the static depend on the IP SLA track.

Once that is completed, the static route will be added or removed from the route table depending on the track succeeding.

After this we can add a floating static to R5 as well that has a higher administrative distance than our static with the track statement.
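
The two statics can be sketched as follows, assuming track object 1 and hypothetical addresses (R4 loopback 4.4.4.4, transit next hop 10.45.0.4, backup next hop via R3 10.35.0.3):

```
! primary static - withdrawn from the routing table when track 1 fails
ip route 4.4.4.4 255.255.255.255 10.45.0.4 track 1
! floating static via R3 with AD 15
ip route 4.4.4.4 255.255.255.255 10.35.0.3 15
```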

After shutting down the interface connecting R4 to R5, we see that the main static leaves the routing table and we have our alternate path with an AD of 15.

Next example is going to use TCP probes instead of just ping.


On R5 we created a tcp-connect IP SLA and scheduled it to run forever. On R4 we had to create an IP SLA TCP responder for the IP and port we’re trying to hit. Now on R5 we can see the tcp-connect is successful.

Same exact logic as the ping: change the tracking statement from IP SLA 1 to IP SLA 2, and the tracked static route is taken care of.
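
A sketch of the TCP variant, using the same hypothetical addressing and an arbitrary port 5000:

```
! R4 - respond to TCP probes on the hypothetical port
ip sla responder tcp-connect ipaddress 10.45.0.4 port 5000
!
! R5 - tcp-connect probe, scheduled forever
ip sla 2
 tcp-connect 10.45.0.4 5000
 frequency 5
ip sla schedule 2 life forever start-time now
!
! re-point the existing track object at SLA 2
track 1 ip sla 2 reachability
```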