Blog Feed

FlexVPN w/out Defaults

  • Without smart defaults, all of the following must be configured (a consolidated example appears below):
    • IKEv2
      • Proposal
      • Policy
      • Keyring
      • Profile
    • IPSEC
      • Transform set
      • Profile
  • IKEv2 Proposal
    • Specifies the algorithms used during IKEv2 negotiation:
      • DH group
      • Encryption – AES
      • Integrity – SHA
  • Tunnel Interface
    • Attaches the VPN config.
  • IKEv2 Policy
    • Container for the proposal that was just created.
  • IKEv2 Keyring
    • Contains authentication and specifies the remote host.
      • PSK or RSA/Certificate
  • IKEv2 Profile
    • Contains the identity and authentication we want to use.
      • Does not contain the actual PSK.
      • Somewhat repetitive, but the keyring gets attached to this profile.
  • IPSEC Transform-Set
    • Specifies encryption and hashing algorithms.
      • The tunnel mode can be set under the transform set as well.
        • i.e., Tunnel or Transport; the default is Tunnel.
  • IPSEC Profile
    • Glues together the IKEv2 profile and Transform set
  • Tunnel Interface
    • Specifies normal GRE operation and attaches IPSEC profile to the tunnel.
  • Disable Smart Defaults (optional)
    • If needed, the smart defaults can be disabled:

‘no crypto ikev2 policy default’
‘no crypto ipsec profile default’
‘no crypto ipsec transform-set default’
‘no crypto ikev2 proposal default’
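
Putting it all together, a complete no-defaults configuration might look like the sketch below. The names, algorithm choices, and key value are hypothetical, and the GRE source/destination commands on the tunnel are omitted:

crypto ikev2 proposal IKEV2_PROP
 encryption aes-cbc-256
 integrity sha256
 group 14

crypto ikev2 policy IKEV2_POL
 proposal IKEV2_PROP

crypto ikev2 keyring KEYRING
 peer ANY
  address 0.0.0.0 0.0.0.0
  pre-shared-key cisco123

crypto ikev2 profile IKEV2_PROF
 match identity remote any
 authentication remote pre-share
 authentication local pre-share
 keyring local KEYRING

crypto ipsec transform-set TS esp-aes 256 esp-sha256-hmac
 mode tunnel

crypto ipsec profile IPSEC_PROF
 set transform-set TS
 set ikev2-profile IKEV2_PROF

interface Tunnel0
 tunnel protection ipsec profile IPSEC_PROF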

  • Verify Tunnel/Protection
    • Note the default IKEv2 policy is disabled.
    • Shows the specifics of the VPN’s IKEv2 settings.
    • Shows a security association exists for IKEv2.
    • Note the default IPSEC profile is disabled.
    • Shows there is a security association and we have packet encaps/decaps.
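
The exact commands behind these checks aren’t captured in the notes; standard IOS verification commands that produce this kind of output, in the same order, would be:

‘show crypto ikev2 policy’
‘show crypto ikev2 sa detailed’
‘show crypto ikev2 sa’
‘show crypto ipsec profile’
‘show crypto ipsec sa’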

Flex VPN Site to Site w/defaults – Simplicities

  • Ultimately, minimal configuration is needed to bring the VPN tunnels up.
  • IKEv2 VPN typical requirements:
    • IKEv2 proposal
    • IKEv2 keyring
    • IKEv2 profile
    • IKEv2 policy
    • IPSEC transform set
    • IPSEC profile

  • Smart Defaults allow the operator to only configure:
    • Keyring
    • IKEv2 profile
    • Tunnel interface
  • Keyring, IKEv2 profile, and tunnel interface examples (see the sketch below).

The IPSEC configuration for the tunnel interface only requires applying the IPSEC profile.
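
A minimal smart-defaults site to site sketch, assuming hypothetical names and addressing (the default proposal, policy, and transform set are used implicitly; the default IPSEC profile is simply pointed at our IKEv2 profile):

crypto ikev2 keyring KEYRING
 peer SITE_B
  address 10.0.0.2
  pre-shared-key cisco123

crypto ikev2 profile IKEV2_PROF
 match identity remote address 10.0.0.2 255.255.255.255
 authentication remote pre-share
 authentication local pre-share
 keyring local KEYRING

crypto ipsec profile default
 set ikev2-profile IKEV2_PROF

interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 10.0.0.2
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile default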

FlexVPN General

  • Cisco’s IOS implementation of IKEv2
    • Unified configuration framework for L2L, remote access and spoke-spoke VPNs
      • Tunnel interfaces.
  • FlexVPN components
    • Proposal, policy, credential store, profile
    • Tunnel interface.
  • Other
    • IPSEC Profile
    • Routing
  • FlexVPN Proposal
    • Set of algorithms used to protect IKE_SA_INIT
      • More than one function can be configured for the same security feature.
        • ‘crypto ikev2 proposal <name>’
          • ‘encryption <enc type>’
          • ‘integrity <inte type>’
          • ‘group <dh group>’
          • ‘prf <prf type>’
        • AES in Galois/Counter Mode (AES-GCM) is a combined-mode algorithm.
          • Requires the PRF to be manually configured.
        • DH Groups 19/20 are elliptic curve algorithms (ECDH).
  • FlexVPN Policy
    • Enables a proposal.
    • Policy can match based on FVRF or local IP.
      • ‘crypto ikev2 policy <name>’
        • ‘proposal <name>’
        • ‘match fvrf <name>’
        • ‘match address local <ipv4 or ipv6>’
  • Credential Store
    • Stores authentication data
      • Trustpoint (‘crypto pki trustpoint’)
      • Keys (can now be asymmetric)
        • Keyring (‘crypto ikev2 keyring’)
        • In-profile (‘authentication <local|remote> pre-share’)
  • IKEv2/FlexVPN profile
    • Stores non-negotiable IKE parameters.
      • Must be attached to an IPSEC profile.
      • ‘crypto ikev2 profile <name>’
        • ‘match <options>’
        • ‘authentication <local|remote> <pre-share|rsa-sig|ecdsa-sig|eap>’
        • ‘keyring <name>’
        • ‘pki trustpoint <name> <sign|verify>’
        • ‘identity local <address|dn|email|fqdn|key-id>’
        • ‘dpd interval <periodic|on-demand>’
        • ‘virtual-template nr’
        • ‘ivrf <ivrf name>’
    • NOTE – IKEv2 can use separate authentication mechanisms on the two sides of the tunnel, unlike IKEv1.
  • Profile selection
    • ‘Match’ statements
      • IP address(es), cert map, FVRF and IKEv2 ID
        • Same-type statements are ORed, different types are ANDed.
        • Cert map and IKEv2 ID are treated as the same type.
      • ‘match vrf CUST1’
      • ‘match local address 10.1.1.1’
      • ‘match local address 10.2.2.1’
      • ‘match certificate CMAP1’
    • Result
      • (VRF CUST1) AND (IP 10.1.1.1 OR 10.2.2.1) AND (cert match in CMAP1)
  • Flex Tunnels
    • Static:
      • ‘interface tunnel <nr>’
      • ‘<ip|ipv6> address <address>’
      • ‘tunnel source <interface|IP_add>’
      • ‘tunnel destination <IP_add>’
      • ‘tunnel mode ipsec <ipv4|ipv6>’
    • Dynamic
      • ‘interface virtual-template <nr> type tunnel’
        • ‘<ip|ipv6> unnumbered <interface>’
        • ‘tunnel source <interface|IP_address>’
        • ‘tunnel mode ipsec <ipv4|ipv6>’
  • IPSEC Profile (activates IPsec)
    • Requires a transform-set.
    • The IKEv2 profile must be attached on the initiator.
      • This is what enables IKEv2.
    • ‘crypto ipsec transform-set <set name>’
    • ‘crypto ipsec profile <profname>’
      • ‘set transform-set <set name>’
      • ‘set ikev2-profile <profname>’
        • ‘int tunnel <x>’
          • ‘tunnel protection ipsec profile <profname>’
  • ‘Smart Defaults’
    • Simplifies IKEv2 deployments
    • Group of predefined IKEv2 and IPsec components called ‘default’.
      • Proposal, Policy, Transform Set, IPSec profile
    • Verify with ‘show crypto ikev2 <group type> default’ and ‘show run all’
      • Examples
        • ‘show crypto ikev2 proposal default’
        • ‘show crypto ipsec profile default’
        • ‘show run all | sec crypto’
          • Look for default

DMVPN IKEv2

This topology runs DMVPN Phase 3: R1–R3 are spokes and R5 is the hub. Currently the routers are all running IPSEC via IKEv1, and we’re going to change this to IKEv2.

  • IKEv1 vs. IKEv2 (out of scope for CCIE)
    • IKEv2 can use asymmetric authentication.
      • ie. one side running PSK and other running PKI.
    • IKEv1 has to be the same on both.

By entering ‘no tunnel protection ipsec profile vpnprof’ on each tunnel interface, we remove the IKEv1 IPSEC protection from the tunnels. The IKEv1 configuration will still be on the box, just not applied to anything.

The configuration for IKEv2 will be the following:

crypto ipsec transform-set cisco-ts esp-aes esp-sha256-hmac
 mode transport

crypto ikev2 keyring cisco-ikev2-keyring
 peer dmvpn-node
  description symmetric pre-shared key for the hub/spoke
  address 0.0.0.0 0.0.0.0
  pre-shared-key cisco123

crypto ikev2 profile cisco-ikev2-profile
 match address local 0.0.0.0
 match identity remote any
 authentication remote pre-share
 authentication local pre-share
 keyring local cisco-ikev2-keyring

crypto ipsec profile cisco-ipsec-ikev2
 set transform-set cisco-ts
 set ikev2-profile cisco-ikev2-profile

int tunnel 0
 tunnel protection ipsec profile cisco-ipsec-ikev2

NOTE – Transport vs. Tunnel mode:

  • Transport
    • Encrypts only the payload of the packet; the original IP header is reused.
  • Tunnel
    • Encrypts the entire original packet (header and payload) and adds a new outer IP header.
    • Ultimately more secure.

After these lines are configured on ALL routers, the VPN tunnels and BGP adjacencies move into the Up state.

DMVPN IKEv1

IPSEC with IKEv1 is going to leverage the topology/lab that’s already set up with plain DMVPN mGRE.

The lab is already set up with BGP and R5 as the hub. The configuration below is the same on every router, and it adds IPSEC encryption over each tunnel.

crypto isakmp policy 1
 encr aes
 authentication pre-share
 group 14

crypto isakmp key Cisco47 address 0.0.0.0

crypto ipsec transform-set trans2 esp-aes esp-sha-hmac
 mode transport

crypto ipsec profile vpnprof
 set transform-set trans2

int tunnel 0
 tunnel protection ipsec profile vpnprof

Now on R1 we’re going to ping R3 and form the direct tunnel. On R1 we now see an ‘H’ route and the direct tunnel under ‘show dmvpn’.

The difference between this and plain GRE: ‘show crypto ipsec sa’ now shows a direct IPSEC tunnel between R1 and R3.

DMVPN Phase 3 – BGP

The same topology above will run DMVPN with BGP this time. Currently there are no routing protocols set up, only the DMVPN Phase 3 configs. R1–R3 are spokes and R5 is the hub.

BGP in DMVPN can leverage BGP peer groups: we set up a group name associated with an IP range, from which neighbors are added automatically, then assign that peer group the desired BGP neighbor parameters. The hub config is sketched below.
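
The original hub config isn’t shown; a plausible version using the BGP dynamic-neighbor (listen) feature would be the sketch below. The peer-group name and listen range are assumptions based on the 155.1.0.0/24 tunnel subnet, and route-reflector-client is included because the spokes are iBGP and their routes must be reflected between them:

router bgp 100
 neighbor DMVPN_SPOKES peer-group
 neighbor DMVPN_SPOKES remote-as 100
 neighbor DMVPN_SPOKES route-reflector-client
 bgp listen range 155.1.0.0/24 peer-group DMVPN_SPOKES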

All of this is running iBGP/ASN 100. A spoke configuration is sketched below.
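
A spoke would then only need a static neighbor statement pointing at the hub’s tunnel IP (the network statement for its loopback is an assumption):

router bgp 100
 neighbor 155.1.0.5 remote-as 100
 network 1.1.1.1 mask 255.255.255.255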

This can be applied to all spokes. After it’s entered, all neighbors come up and the spoke routing tables populate.

Note that the next hops are preserved instead of being modified by the hub. From spoke 2 to spoke 1 we’re going to run a ping and let the direct tunnel form. After we’re getting the following on R2:

R2 is receiving an ‘H’/NHRP route and we can see the tunnel formed directly between the two spokes.

On R5/Hub we can change the behavior of the next hop by enabling next-hop-self, as sketched below.
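
Since the spokes are iBGP peers, plain next-hop-self does not rewrite reflected iBGP routes, so the ‘all’ keyword is likely what’s in play (peer-group name carried over from the sketch above):

router bgp 100
 neighbor DMVPN_SPOKES next-hop-self all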

Now the route table on a spoke has the next hop set to the hub itself.

BGP Summary:

Similar to EIGRP, BGP allows us to summarize prefixes from any location. On the hub we’re going to summarize our transit/VPN range – 155.1.0.0/16.

R5(config-router)#aggregate-address 155.1.0.0 255.255.0.0 summary-only

After a ‘clear ip bgp * out’ on the hub, we begin seeing the summary address above on spokes.

Now from R1 we’ll ping R3 and let the dynamic tunnel come up. Again in the routing table we’re seeing the ‘%’ override symbol and an NHRP ‘H’ route.

A default route can be set up instead of the summary address on the hub as well. In BGP we remove the aggregate address and add a default-originate command.
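
Assuming the same peer group as above, that’s roughly:

router bgp 100
 no aggregate-address 155.1.0.0 255.255.0.0 summary-only
 neighbor DMVPN_SPOKES default-originate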

R5 is now advertising a default route AND all the specific routes. If we want to make this ONLY default route from R5 to the spokes, we can do that with a route-map/prefix list combination.

Then apply the route-map outbound to the neighbor group in BGP.
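
A sketch of that combination, with hypothetical names; the outbound route-map filters all the specific routes, while default-originate still injects 0.0.0.0/0:

ip prefix-list DEFAULT_ONLY seq 5 permit 0.0.0.0/0
!
route-map DEFAULT_ONLY_RM permit 10
 match ip address prefix-list DEFAULT_ONLY
!
router bgp 100
 neighbor DMVPN_SPOKES route-map DEFAULT_ONLY_RM out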

This now pushes only 0.0.0.0/0 from R5 to the spokes.

Now when trying to establish spoke to spoke connectivity, NHRP will kick in and create a more direct route as traffic is needed.

Above, R3 runs a trace to R1; initially it hits the DMVPN hub at .5. The second trace shows R3 going directly to R1. Once the dynamic tunnel is stood up, the NHRP route appears in R3’s route table.

The advantage of BGP in a DMVPN topology is that you can be very specific about which routes you advertise from hub to spoke. In addition, the BGP listen feature can be used for easier spoke turn-up.

DMVPN Phase 3 – EIGRP

  • Phase 3 supported Routing Protocols:
    • RIP
    • EIGRP
    • OSPF
    • BGP
    • ODR
      • EIGRP and BGP preferred
        • Their summarization options are better.
  • Phase 3 General
    • mGRE on hub and spokes
      • NHRP required for spoke registration to hub
      • NHRP required for spoke to spoke resolution
    • When a hub hairpins traffic over the same interface:
      • Sends NHRP redirect message back to packet source.
      • Forwards original packet down to spoke via RIB.
    • Routing
      • Summarization/default routing to hub allowed.
        • Results in NHRP routes for spoke to spoke tunnel.
        • Without summarization, NHO is performed for the spoke to spoke tunnel.
          • next hop is changed from hub IP to spoke IP.
      • Next hop on spokes is always changed by the hub.
        • Because of this, NHRP resolution is triggered by hub.
      • Multi-level hierarchy works without daisy-chaining.

Configuration:

The topology is the one above: R1–R3 are DMVPN Phase 2 spokes and R5 is the hub. Each router runs OSPF over its tunnel 0 and advertises its loopback.

R5/Hub:
Physical IP – Gig0/0 – 96.76.43.137/29
VPN/Tunnel IP – Tu0 – 155.1.0.5
Loopback – L5 – 5.5.5.5/32

R1/Spoke:
Physical IP – Gig0/0 – 96.76.43.140/29
VPN/Tunnel IP – Tu0 – 155.1.0.1
Loopback – L1 – 1.1.1.1/32

R2/Spoke:
Physical IP – Gig0/0 – 96.76.43.138/29
VPN/Tunnel IP – Tu0 – 155.1.0.2
Loopback – L2 – 2.2.2.2/32

R3/Spoke:
Physical IP – Gig0/0 – 96.76.43.139/29
VPN/Tunnel IP – Tu0 – 155.1.0.3
Loopback – L3 – 3.3.3.3/32

The OSPF network types are currently set to Broadcast with the hub as DR. If the network type on each tunnel interface is changed to ‘point-to-multipoint’, we’ll see that the DR/BDR process goes away.

Network type adjusted on all spokes as well as hub.

Now, if the routing table is shown on a spoke, we’ll see that each spoke’s next hop points to the hub, unlike with the broadcast network type before. This is because Next Hop Override is not functioning yet.

To enable NHO, we need to add some commands: each spoke gets ‘ip nhrp shortcut’ and the hub gets ‘ip nhrp redirect’. Both are added under the tunnel 0 interface (in general, the DMVPN tunnel interfaces).
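
On the tunnel interfaces, that looks like:

! Hub (R5)
interface Tunnel0
 ip nhrp redirect
!
! Spokes (R1-R3)
interface Tunnel0
 ip nhrp shortcut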

Now when doing a traceroute from R1 to R2 you can see that traffic first gets directed to the hub, then down to R2/spoke. If the traceroute is performed again though we’ll see the traffic goes directly from spoke to spoke.

Looking at R1’s routing table now shows a ‘%’ sign next to the route to R2 (2.2.2.2), which means next hop override is taking place. The next hop does not change in the routing table, but it can be seen in the CEF table.
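
For example, ‘show ip route 2.2.2.2’ on R1 still lists the hub as the next hop (flagged with ‘%’), while ‘show ip cef 2.2.2.2’ shows the overridden spoke next hop.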

EIGRP:

OSPF has pretty big limitations in this type of topology when it comes to route manipulation via areas and summarization. If a tool like summarization were to be used, it’s much better to use BGP or EIGRP.

The above topology was changed to run EIGRP. We’re enabling all the same interfaces in the EIGRP process and there are three EIGRP DMVPN neighbors on the hub. Each spoke is receiving routes over the DMVPN tunnels.

Reading the routing table on R1, notice that it only receives routes from the hub, R5 – none of the routes from the other spokes, R2 and R3. That’s because the hub needs the command ‘no ip split-horizon eigrp 1’.

Once that’s added, R1 begins receiving the routes from the other DMVPN spokes. The next hop still does not change; that’s the default behavior in Phase 3. In Phase 2 we’d have to disable next-hop-self, but that’s unnecessary in Phase 3 because NHRP’s redirect/resolution advises spokes to build direct tunnels to each other.

With EIGRP enabled we can run a traceroute from R1 to R2; the first packet is directed to the hub, R5. Running the traceroute again shows traffic going directly from spoke to spoke. In R1’s routing table the ‘%’ shows an NHO occurring, and the CEF table shows where that NHO actually takes place.

Now, because this is EIGRP, we can easily summarize from the hub. On R5 we’ll enter the commands below and then look at a spoke routing table.
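
The original commands aren’t captured here; mirroring the 155.1.0.0/16 summary used in the BGP section, the hub’s tunnel interface would get something like:

interface Tunnel0
 ip summary-address eigrp 1 155.1.0.0 255.255.0.0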

R1’s route table now shows the summary route from the hub.

Now from R1 a traceroute will be performed from R1’s loopback to R3’s loopback.

On the first try the packet hits the hub; on the second it goes directly over the dynamic tunnel.

Now in the routing table of R1 a route labeled ‘H’ is visible, which means it’s from NHRP. When using summarization in DMVPN, NHRP will take care of the more specific routes when they’re needed – ie. when traffic is actually going from spoke to spoke.

Default Route:

This summarization can be done with a single default route as well. On the hub the summary address has been removed and a 0.0.0.0/0 summary has been added.
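
That would be something like the following on the hub’s tunnel interface:

interface Tunnel0
 no ip summary-address eigrp 1 155.1.0.0 255.255.0.0
 ip summary-address eigrp 1 0.0.0.0 0.0.0.0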

On a spoke, ‘show ip route eigrp’ now shows only the default route from the hub.

After pinging R2 (spoke) and looking at R1’s routing table again, we’ll see that NHRP has added another more specific route in our DMVPN topology.

DMVPN Phase 2

We are setting up DMVPN Phase 2. R1–R3 are spokes and R5 is the hub. Current reachability shouldn’t matter much because it’s one broadcast domain between the four routers. IP addresses are assigned below:

R5/Hub:
Physical IP – Gig0/0 – 96.76.43.137/29
VPN/Tunnel IP – Tu0 – 155.1.0.5
Loopback – L5 – 5.5.5.5/32

R1/Spoke:
Physical IP – Gig0/0 – 96.76.43.140/29
VPN/Tunnel IP – Tu0 – 155.1.0.1
Loopback – L1 – 1.1.1.1/32

R2/Spoke:
Physical IP – Gig0/0 – 96.76.43.138/29
VPN/Tunnel IP – Tu0 – 155.1.0.2
Loopback – L2 – 2.2.2.2/32

R3/Spoke:
Physical IP – Gig0/0 – 96.76.43.139/29
VPN/Tunnel IP – Tu0 – 155.1.0.3
Loopback – L3 – 3.3.3.3/32

The config on the hub/R5 is below:
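
The original screenshot isn’t reproduced; a typical Phase 2 hub configuration consistent with the addressing above (the NHRP network-id value is an assumption) would be:

interface Tunnel0
 ip address 155.1.0.5 255.255.255.0
 no ip redirects
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint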
The config on any spoke is below, with minor IP changes:
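
Likewise for a spoke, e.g. R1, using the hub’s NBMA address 96.76.43.137 from the table above:

interface Tunnel0
 ip address 155.1.0.1 255.255.255.0
 no ip redirects
 ip nhrp network-id 1
 ip nhrp nhs 155.1.0.5
 ip nhrp map 155.1.0.5 96.76.43.137
 ip nhrp map multicast 96.76.43.137
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint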

On each router we’re running EIGRP AS 1, advertising each loopback and forming adjacencies over the tunnel interfaces. The main difference between the hub and spoke configurations is the NHS, map, and map multicast commands, all of which point at the hub’s tunnel and NBMA addresses.

Now on the hub, ‘show dmvpn’ lists a tunnel entry for each spoke. The attribute D on the right means they were set up dynamically.

Now, looking at routes sourced from EIGRP, we receive the following on a spoke:

R1 is receiving the loopback of R5, as well as a couple of routes from a router running EIGRP on the other side of R5 (outside the DMVPN), which advertises its own loopback, 4.4.4.4/32.

Unfortunately, we are not receiving routes from the other DMVPN spokes. This is because EIGRP is a (hybrid) distance vector protocol that uses split horizon for loop avoidance. To fix this, the hub needs split horizon turned off, as shown below.
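
On the hub’s tunnel interface (EIGRP AS 1, per the configuration above):

interface Tunnel0
 no ip split-horizon eigrp 1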

After this there is a DUAL resync and the routes start to come through.

Spoke to Spoke communication:

As of right now, if a spoke wants to talk to another spoke, traffic first has to traverse the hub. This is because EIGRP changes the next hop value to its tunnel 0 interface. This can be changed with the command below on R5’s tun0:

‘no ip next-hop-self eigrp <process id>’

Now the routes on R1 show the VPN address of each spoke as the next hop. If a traceroute is run from R1’s loopback to R2’s loopback, the first hop initially goes to the hub, then to the spoke. Performed again, the traceroute shows direct spoke to spoke communication. In addition, ‘show dmvpn’ on R1 shows a dynamic tunnel created between the two spokes.

OSPF:

All of this can be completed with other routing protocols. EIGRP has been removed from each router and now this will be completed with OSPF.

For the spokes on each tunnel and loopback interface we’re going to enable ‘ip ospf 1 area 0’, and on the tunnel interface we will change the network type via ‘ip ospf network broadcast’.

The two commands above will be added on the hub as well, but the hub also needs ‘ip ospf priority 255’ under the tunnel interface. The reason is that no non-hub device can be the Designated Router: the spokes have no direct connections to each other, so if a spoke became the DR, routing updates would eventually fail. The hub is the only device with connectivity to every spoke, which DMVPN spoke to spoke operation depends on.

An additional way to make sure a spoke never becomes the DR is to set its priority to 0, taking it out of the election process entirely.

With ‘ip ospf priority 0’ applied, all of the spoke neighbors show as DROTHERs, so they’ll never even be a BDR.
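
Pulling the OSPF pieces together, the tunnel-interface configuration would look roughly like this:

! Hub (R5)
interface Tunnel0
 ip ospf 1 area 0
 ip ospf network broadcast
 ip ospf priority 255
!
! Spokes (R1-R3)
interface Tunnel0
 ip ospf 1 area 0
 ip ospf network broadcast
 ip ospf priority 0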

Now on R1 when looking at ‘show ip route ospf’ we’ll see the routes for the neighbors come in. On the routes to other DMVPN spokes, we’ll see that the next hop is not modified like it was originally in EIGRP.

To reach 3.3.3.3, the next hop is the VPN address of R3 instead of R5, the hub. This is why the OSPF broadcast network type is used: the DR process does not modify the next hop. The limitations of OSPF in DMVPN, however, are the need to control DR placement and the inability to summarize; in this scenario all routers are in area 0.

DMVPN Phases Overview

  • DMVPN Phase 1
    • mGRE on Hub and p-pGRE on spokes
      • No direct spoke to spoke communication.
    • Routing
      • Summarization and default at hub is allowed.
      • Next-hop on spokes is always changed by the hub.
  • DMVPN Phase 2
    • mGRE on hub and spokes
      • NHRP required for spoke registration to hub.
      • NHRP required for spoke to spoke resolution.
      • Spoke to spoke tunnel triggered by spoke.
    • Routing
      • Summarization/default not allowed at hub.
      • Next hop on spokes is always preserved by hub.
      • Multi-level hierarchy requires hub daisy-chaining.
  • DMVPN Phase 3
    • mGRE on hub and spokes.
      • NHRP required for spoke registration to hub.
      • NHRP required for spoke to spoke resolution.
    • When a hub hairpins out same interface:
      • Sends NHRP redirect message back to the packet source.
      • Forwards original packet down to the spoke via the RIB.
    • Routing
      • Summarization and default at hub is recommended.
        • Results in NHRP routes for spoke to spoke tunnel.
        • With no-summary, NHO is performed for spoke to spoke tunnel.
          • Next-hop is changed from hub IP to spoke IP.
      • Next hop on spokes is always changed by the hub
        • NHRP resolution triggered by hub.
      • Multi-level hierarchy works without daisy-chaining.

DMVPN Overview

  • What is it?
    • Point to multipoint Layer 3 overlay VPN
      • Logical hub and spoke.
      • Direct spoke to spoke supported.
    • Uses combination of the following:
      • Multipoint GRE Tunnels
      • Next Hop Resolution Protocol
      • IPSEC Crypto Profiles
      • Routing
  • Why?
    • Independent of SP access method
      • Only requirement is IP connectivity
      • Can be used with different types of WAN connectivity
    • Routing policy not dictated by SP.
      • e.g., MPLS L3VPN restrictions
    • Highly scalable
      • If properly designed.
  • How?
    • Allows on-demand full mesh IPSEC tunnels with minimal configuration.
      • mGRE
      • NHRP
      • IPSEC Profiles
      • Routing
    • Reduces the number of tunnels required for a full mesh.
      • Uses one mGRE interface for all connections.
      • Tunnels are created on-demand between nodes.
      • Encryption is optional.
        • Almost always used.
    • On demand tunnels between nodes.
      • Initial tunnel-mesh is hub and spoke (always on)
      • Traffic patterns trigger spoke to spoke.
      • Solves management scalability.
    • Maintains tunnels based on traffic patterns.
      • Spoke to spoke on demand.
      • spoke to spoke lifetime based on traffic.
    • Requires 2 IGPs
      • Underlay and overlay.
      • IPv4 and IPv6 supported for both passenger and transport.
    • Main components
      • Hub/NHRP Server (NHS)
      • Spokes/NHRP Clients (NHC)
    • Spokes/Clients register with Hub/Server
      • Spokes manually specify Hub’s address
      • Sent via NHRP Registration Request
      • Hub dynamically learns spokes’ VPN address and NBMA address.
    • Spokes establish tunnels to hub
      • Exchange IGP routing info over tunnel.
  • Spoke 1 knows Spoke2’s routes via IGP.
    • Learned via tunnel to hub
    • Next-hop is spoke2’s VPN IP for DMVPN Phase 2.
    • Next-hop is hub’s VPN IP for DMVPN Phase 3.
  • Spoke 1 asks for Spoke2’s real address
    • Maps next-hop (VPN) IP to tunnel source (NBMA) IP
    • Sent via NHRP Resolution.
  • Spoke to Spoke tunnel is formed
    • Hub only used for control plane exchange
    • Spoke to spoke data plane may flow through hub initially.
  • NHRP Messages
    • Registration Request
      • Spokes register their NBMA and VPN IP to NHS
      • Required to build the spoke to hub tunnels.
    • NHRP Resolution Request
      • Spoke queries for the NBMA-to-VPN mappings of other spokes.
      • Required to build spoke to spoke tunnels
    • NHRP Redirect
      • The NHS’s answer to a spoke to spoke data plane packet flowing through it.
      • Similar to ICMP redirects, when the packet’s ingress and egress interface are the same.
      • Used only in DMVPN Phase 3 to build spoke to spoke tunnels.
        • Tells the source spoke to go directly to the next hop for spoke to spoke traffic.