MPLS Overview

  • Multiprotocol Label Switching
  • RFC3031
  • Multiprotocol
    • Can transport different payloads
  • Layer 2 Payloads
    • Ethernet, Frame Relay, ATM, PPP, HDLC, etc.
  • Layer 3
    • IPv4, IPv6, etc.
  • Extensible for new future payloads
  • Label Switching
    • Switches traffic between interfaces based on locally significant label values.
  • Similar to legacy virtual circuit switching
    • Frame Relay input/output DLCI
    • ATM input/output VPI/VCI
  • Why?
    • Transparent tunneling over SP network
    • BGP Free core
      • Saves routing table space on Provider routers
    • Offer L2/L3 VPN service to customers.
      • No need for overlay VPN model
    • Traffic Engineering
      • Distribute load over underutilized links
      • Give Bandwidth guarantees
      • Route based on service type
      • Detect and repair failures quickly
        • Fast Reroute (FRR)
  • Label format
    • 4 byte header used to “switch” packets
      • 20 bit label – locally significant
      • 3 bit EXP – Class of Service
      • S bit – Defines last label in label stack
      • 8 bit TTL – Time To Live
  • Labels
    • MPLS labels are bound to FECs
      • Forwarding equivalency class
      • IPv4 or IPv6 for CCIE purposes.
        • Binding between label and IP prefix.
    • Router uses MPLS LFIB instead of IP routing table to switch traffic.
    • Switching Logic
      • If traffic comes in if1 with label X, send it out if2 with label Y
  • MPLS Device Roles
    • Consists of three types of devices
      • Customer Edge (CE)
      • Provider Edge (PE)
      • Provider (P)
  • CE
    • Last hop device in customer’s network.
      • Connects to provider’s network.
    • Can be layer 2 or 3.
    • Typically not aware any MPLS is running.
  • PE
    • Also called Label Edge Router (LER)
    • Last hop device in provider’s network.
      • Connects to CE and provider core devices.
    • Performs both IP routing and MPLS lookups.
    • Traffic from customer to core
      • Receives unlabeled packets (e.g. IPv4/6)
      • Adds one or more MPLS labels
      • Forwards labeled packet to core
    • Traffic from core to customer
      • Receives MPLS labeled packets.
      • Removes one or more MPLS labels.
      • Forwards packet to customer.
  • P
    • Also called Label Switch Router (LSR)
    • Core devices in provider’s network
    • Connects to PEs and other P routers
    • Switches traffic based ONLY on MPLS labels
  • Operations
    • PE and P routers perform three major functions:
      • Label push
        • Add a label to an incoming packet
          • label imposition
      • Label Swap
        • Replace the label on an incoming packet
      • Label pop
        • Remove the label from an outgoing packet
  • Label Distribution
    • Advertised via a label distribution protocol
    • Label Distribution Protocol (LDP)
      • Advertises labels for IGP learned routes.
      • RFC 5036
    • MP-BGP
      • Advertises labels for BGP learned routes.
      • RFC 3107
    • RSVP
      • Used for MPLS Traffic Engineering (MPLS TE)
      • RFC 3209

BGP Tunneling/MPLS Start

Like any routing protocol, BGP can be run over a GRE tunnel to avoid the need for BGP peering or a full mesh across multiple routers. In the image below there are 10 routers, all running OSPF in area 0 except for the R10 to R8 and R7 to R9 links. OSPF is used simply for reachability.

R10, R8, R9 and R7 are all running BGP in different ASNs. The goal is to create a tunnel from R8 to R7, then use BGP to create connectivity from R9 to R10.

R9 = ASN 900
R7 = ASN 100
R8 = ASN 100
R10 = ASN 1000


Connectivity has already been established across the OSPF domain. First part is creating the tunnel.

The tunnel source is the local loopback and the tunnel destination is the remote loopback on R7. The same configuration, with source and destination reversed, will be set up on R7. Once entered, we should see Tunnel0 come up, and each end of the tunnel should be reachable.
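Since the original tunnel configuration isn't reproduced here, below is a minimal sketch of R8's side; the interface numbers and addresses in angle brackets are placeholders, because the lab addressing isn't shown.

```
! R8 -- sketch; <...> values are placeholders
interface Tunnel0
 ip address <tunnel-net-ip> 255.255.255.0
 tunnel source Loopback0
 tunnel destination <R7-loopback-ip>
```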

Next we’re going to create BGP peerings. Below are the completed peerings:

R8/ASN100 — R10/ASN1000 – EBGP
R8/ASN100 — R7/ASN100 – iBGP
R7/ASN100 — R9/ASN900 – EBGP

The BGP peering between R7 and R8 is not going through the tunnel; it is routed over the OSPF domain that's set up for reachability. Below is R8's BGP configuration:
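The configuration itself isn't shown above, so here is a hedged sketch of what R8's BGP stanza might look like; the neighbor addresses are placeholders.

```
! R8 -- sketch; neighbor addresses are placeholders
router bgp 100
 neighbor <R10-link-ip> remote-as 1000
 neighbor <R7-loopback-ip> remote-as 100
 neighbor <R7-loopback-ip> update-source Loopback0
```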

Now we're going to set up a route-map that changes the next hop of advertisements between R8 and R7.

We're setting up a route-map that changes the BGP next hop to the local GRE tunnel interface IP, and applying it outbound toward our iBGP peer. The reason is reachability between R9 and R10: without it, the locally learned next hop would be advertised, and that address is not reachable in the OSPF domain. Typically it wouldn't be, since it sits outside our domain; or it is inside our domain but we're trying to minimize BGP peerings. The mirror-image route-map and BGP change are completed on R7's side as well. The result is below: R9's loopback is now added into the RIB of R8 with a next hop of the tunnel interface of R7.
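A sketch of what that route-map and its application could look like on R8; the route-map name and addresses are made up for illustration.

```
! R8 -- sketch; TUNNEL-NH and <...> values are placeholders
route-map TUNNEL-NH permit 10
 set ip next-hop <local-Tunnel0-ip>
!
router bgp 100
 neighbor <R7-loopback-ip> route-map TUNNEL-NH out
```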


  • MPLS uses the same logic as GRE, but is more flexible.
    • Arbitrary transport
    • Arbitrary payload
    • Extensible applications

Example Case:

  • Form MPLS tunnel from ingress to egress.
    • Typically IGP + LDP is used for this.
    • Could be BGP or RSVP (MPLS TE)
  • Peer BGP from ingress to egress
  • Recurse BGP next-hop to MPLS label.
  • What is the core’s data plane result?
    • Core label switches ingress PE to egress PE
    • Core does not need end-to-end information.

In the image above we're now going to enable MPLS. First we remove the GRE tunnel between R8 and R7 with ‘no int tunnel0’. Then we go into the OSPF process on every router and enable MPLS autoconfig with LDP.
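Assuming OSPF process 1, the per-router change might look like this:

```
router ospf 1
 mpls ldp autoconfig
```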

Once this is entered on all of the routers, console messages appear showing LDP neighbors forming. The LDP neighbors will be the same neighbors we already have in OSPF.

Now, in the BGP configurations on R8 and R7, we need to remove the next-hop route-map and configure a normal ‘next-hop-self‘ on the neighbor statement, like below.
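A sketch of the change on R8; the old route-map name and the neighbor address are placeholders.

```
! R8 -- sketch; <...> values are placeholders
router bgp 100
 no neighbor <R7-loopback-ip> route-map <old-route-map> out
 neighbor <R7-loopback-ip> next-hop-self
```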

There is now reachability again between R10 and R9’s loopbacks.

On R8 we can see in the MPLS forwarding table that it has reachability to R7


  • MPLS is advantageous because it lets us avoid running BGP on all transit routers.
  • The BGP table is very large; keeping it out of the core minimizes table size.
  • Another form of tunneling, easier than doing VPN or GRE everywhere.

Layer 2 Multicast

  • Ethernet Multicasting
    • Supports L2 multicast natively.
      • Multicast bit in 48-bit address: lowest bit in the first byte.
      • Anything that starts with 01-
    • Multicast addresses used for various purposes.
      • CDP, L2 protocol tunneling.
      • Allocated by IEEE.
    • IPv4 addresses map to MAC to forward on LAN
      • Allows L2 switches to forward multicast intelligently.
    • MAC address range
      • 01-00-5E-00-00-00 to 01-00-5E-7F-FF-FF
        • First 25 bits fixed
        • Last 23 bits mapped from IPv4 address.
    • Implies overlap: 32 IPv4 group addresses map to each multicast MAC (5 bits of the group address are not carried).
  • Switches treat unknown unicast and multicast like broadcast
    • Multicast traffic flooded out all ports in broadcast domain.
    • IGMP Snooping needs to be used to avoid this.
  • IGMP Snooping:
    • Switch listens for IGMP Report/Leaves
      • L2 device inspects the L3 (IGMP) packets inside frames.
      • Extracts group address reported.
      • Prunes unneeded Multicast
    • Multicast Routers have to be discovered
      • Routers need to process all multicast traffic.
      • Switch listens to PIM messages.
        • PIM Snooping


  • ‘ip igmp snooping’
  • ‘ip igmp snooping vlan x’
  • Statically assign port to group:
    • ‘ip igmp snooping vlan <x> static <ip> interface <interface>’

  • IGMP Snooping and STP
    • STP TCN may signal receiver moving.
      • After TCN event switch floods all multicast groups out all ports.
      • ‘ip igmp snooping tcn flood query count <count>’
      • Above command means flood until <count> query intervals have expired.
    • Disabling flooding during TCN
      • ‘no ip igmp snooping tcn flood’
  • IGMP Profiles:
    • ‘ip igmp access-group’ works only on L3 interfaces.
    • Profile allows IGMP access-control at Layer 2.
      • Profiles are either in permit or deny mode.
      • Permit mode allows specified groups and blocks all others.
      • Deny mode blocks specified groups and allows all others.
      • Configuration:
        • ip igmp profile 1
        • permit
        • range
        • range
        • int gig1/0/45
          • ip igmp profile 1
  • IGMP Throttling
    • Limits amount of groups joined on interface
      • ip igmp max-groups NN
      • ip igmp max-groups action <deny|replace>
        • New groups are either denied or replace old ones.
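As a sketch, throttling on an access port might look like this; the interface and limit are arbitrary examples.

```
interface GigabitEthernet1/0/10
 ip igmp max-groups 5
 ip igmp max-groups action replace
```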

Bidirectional PIM

  • RFC 5015
  • Traditional Sparse Mode forms two trees.
    • Unidirectional SPT from source to RP
    • Undirectional shared tree from RP to receivers.
  • Results in (*,G) and (S,G) control plane.
    • Doesn’t scale well.
  • Bidirectional PIM solves this by only allowing the Shared Tree (*,G) and never a SPT (S,G).
  • Operations:
    • Define an RP and group range as bidirectional.
      • Stops formation of (S,G) for range.
    • Build single (*,G) tree towards RP
      • Traffic flows upstream from source to RP
      • Traffic flows downstream from RP to receivers
    • Removes PIM Register process
      • Traffic from sources always flow to RP.
    • Uses Designated Forwarder for loop prevention.
  • Bidir Designated Forwarder:
    • One DF is elected per PIM segment
      • Lowest metric to RP wins.
      • Highest IP in tie.
    • Only DF can forward traffic upstream towards RP.
    • All other interfaces in OIL are downstream facing.
    • Removes the need for RPF check.
      • Due to this all routers must agree on Bidir or loops can occur.


In this topology we’re going to setup R1 as the Rendezvous Point.

Bidirectional PIM first needs to be enabled globally, and then added on to the rp-address command. This needs to get turned on for every router in the path.
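A sketch of that configuration, with the RP address as a placeholder for R1's loopback:

```
! every router in the path
ip multicast-routing
ip pim bidir-enable
ip pim rp-address <R1-loopback-ip> bidir
```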

Now on R8 we’re going to join a group.
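The join itself is an interface-level command; a sketch, with the group address as a placeholder:

```
! R8
interface GigabitEthernet0/0
 ip igmp join-group <group-address>
```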

After the group Join reached the RP, R1 now sees the receiver/(*,G).

We’re going to now setup a continuous ping on R9 to the group address and see what the mroute shows on R1.

Which still shows only the (*,G), because this is Bidirectional PIM: the SPT does not exist and will not form.

The entire goal is to reduce the amount of source (S,G) state in the multicast table.

Source Specific Multicast (SSM)

  • Any Source Multicast (ASM)
    • Traditional PIM Sparse Mode with an RP.
    • Receiver does not yet know who the sender is.
    • Sender and Receiver are connected through the RP.
    • Both (S,G) and (*,G)
    • Source begins to send traffic.
      • PIM DR hears app feed (S,G)
      • Unicast PIM Register is sent from DR to RP.
      • RP acks DR with Register Stop.
      • RP now knows about (S,G)
    • Receiver signals group membership
      • App sends IGMPv1/2 Report for (*,G)
      • IGMP Querier translates to (*,G) PIM Join towards RP
      • PIM Join forwarded up RPF path to RP
      • RP now knows about receiver.
    • RP joins (S,G)
      • RP sends (S,G) PIM Join up RPF path to source.
      • App now flows from source to RP.
    • RP forwards to receiver via (*,G)
      • Receiver now gets app flow.
    • SPT Switchover
      • Last hop sends PIM join (S,G)
      • Last hop sends PIM Prune (*,G)
      • Receiver is now joined to the (S,G)
    • Issues with ASM Design
      • Receivers don’t know about senders in advance.
        • RP is used to find senders.
      • RP is a bottleneck in the control plane.
        • RP failure means that new trees can’t be built.
        • RP is at least temporarily in the data plane.
      • Solution
        • Have receiver pre-learn the source out of band.

  • Source Specific Multicast (SSM)
    • Group address range
  • Receiver knows app source before it signals membership.
    • Receiver uses IGMPv3 Report to signal (S,G) join.
  • RP is not needed to build the shared tree.
    • App already knows source.
    • RP not needed to build control plane.
  • Result is only (S,G) trees
    • Last hop router sends (S,G) PIM Join up RPF towards source
    • Each tree is SPT for (S,G)


  • Enable multicast routing
    • ‘ip multicast-routing’
  • Define global SSM Group range
    • ‘ip pim ssm <default|range>’
  • Enable PIM Sparse at interface level
    • ‘ip pim sparse-mode’
  • Enable IGMPv3 on links to receivers
    • ‘ip igmp version 3’

In the below topology we're going to set up SSM. We'll start by sending a ping from R9 to the group, which should fail right away because there is no configuration for that group yet.

On the other side of the topology we’re going to run a source specific join from R8. First step is enabling IGMPv3 on the link towards the source, which is also towards R10.

That is completed on both R10 and R8 because the last-hop router (the one closest to the receiver) is the one that sends the PIM (S,G) Join.

In addition, ‘ip pim ssm default’ needs to be turned on for every single router in the path.

Now, on R8 we're going to join the group. The command is similar to the IGMPv2 join but now also specifies the source.
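A sketch of the receiver-side interface configuration; the group and source addresses are placeholders.

```
! R8 -- receiver side
interface GigabitEthernet0/0
 ip igmp version 3
 ip igmp join-group <group-address> source <R9-source-ip>
```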

After entering the join, the extended ping from our source R9 begins to receive responses.

Now when doing a ‘show ip mroute’ on R4, a router in the source to destination path, we only see the (S,G) and no (*,G).

Multicast Anycast RP and MSDP

  • Anycast
    • One to nearest routing.
    • Multiple destinations share same address.
    • Route to closest one based on routing protocol.
    • Poor man’s load balancing and HA.
  • Anycast Operations:
    • Mirrors application data to multiple devices in topology.
    • Assign each device the same duplicate IP and advertise it
      • Same /32 Loopback into routing protocol.
    • Use the routing table for load balancing and HA.
      • Routing to specific destination depends on where you are physically in the topology.
      • If anycast device fails, use routing convergence to find the next closest device.
  • Anycast RP
    • Uses Anycast load balancing to decentralize placement of PIM Sparse Mode RPs.
      • PIM Register and Join messages go to closest RP in topology.
      • If one RP goes down, convergence is up to IGP.
      • As long as one anycast RP is up, new trees can be built.
      • RP failure does not necessarily impact current trees.
    • Design Issues:
      • Requires all RPs to share information about senders and receivers.
        • MSDP helps with this.
  • Multicast Source Discovery Protocol (MSDP)
    • Used to advertise (S,G) pairs between RPs.
      • Listen for PIM Registers regarding (S,G)
      • Tell other RPs about (S,G) through an MSDP Source Active (SA) message.
      • Essentially like an inter-RP PIM Register message.
    • Allows PIM domains to use independent RPs
      • Originally designed for Inter-AS Multicast
      • CCIE Use Case is Anycast RP for Intra-AS Multicast.
  • Setup:
    • Anycast RPs assign duplicate Loopback address and advertise into IGP.
    • All routers point to anycast RP address.
      • Can be static or dynamic assignment.
    • Anycast RPs are MSDP peers using a unique address.
      • Each device has a globally routable Loopback plus the Anycast Loopback.
      • If three or more RPs, usually a mesh group.
    • When PIM Register is received, MSDP SA is sent to MSDP peers.
      • Results in sync of (S,G) information.
      • RP that knows about receiver can now join the (S,G) tree.
  • Caveats:
    • Requires duplicate addresses
      • Ensure control plane protocols don’t use duplicate IP as identifier
        • ie. router-id
    • Requires unique address to sync the application data
      • App is hosted on Anycast
      • App data sync between Anycast peers needs to be routable (unique).
        • Similar to VIP in HA pairs, etc.


The image above is the topology being worked on. Every router has ip multicast enabled, is running OSPF in area 0, and is advertising unique loopbacks/addresses into OSPF. There are no IGMP groups currently running, but PIM is enabled on each transit interface.

The two routers that will be performing Anycast and acting as Rendezvous Points (RPs) are R6 and R4. On each router we'll create interface Loopback20 and give both the same anycast IP address. That interface joins the OSPF process with ‘ip ospf 1 area 0’.
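A sketch of that configuration on R4 and R6, with the shared anycast address as a placeholder:

```
! R4 and R6 -- duplicate anycast loopback
interface Loopback20
 ip address <anycast-ip> 255.255.255.255
 ip ospf 1 area 0
```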

After advertising the loopbacks into OSPF, R1, the router in the middle, now has two routes to the anycast destination, one via R6 and one via R4.

Next step is to have all the routers in the topology point to the anycast address for their PIM RP mapping. This command will be applied to all routers.

Now, to verify the routers are using the anycast address as their RP, the command below is run.

Now we’ll add a group into the mix. On R8 we’ll do an IGMP join then take a look at the mroute tables.

R4 Normal
R6 No Group

On R4 everything is behaving normally: the RP found the listener and is waiting for a sender to complete the SPT. On R6, however, the group is not found, because the two RPs are not syncing data with each other.

If I go over to R9 on the other side of the topology and ping the group, the two RPs' mroutes for this group will appear like below:

R6 shows it created the full tree, but it has an outgoing interface of Null. This is because it has no idea about where the listener/receiver is located.

R4 shows it only has the listener/receiver location and no idea where the source would be.

Below are the commands to enable MSDP.

On R6 we're peering with R4's unique loopback, and on R4 with R6's. Note the peering must use unique addresses; it cannot use the anycast address.
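A sketch of R6's side; R4's unique loopback address is a placeholder, and R4 mirrors this pointing back at R6.

```
! R6
ip msdp peer <R4-loopback-ip> connect-source Loopback0
ip msdp originator-id Loopback0
```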

After the commands are entered we get a session-up message in the terminal. Now, when pinging the group from R9, I'm getting responses, and the mroute tables on the two RPs look like this:

R4 (S,G)
R6 (S,G)

When R9 begins pinging, R6 receives a PIM Register message that gets passed on to its peer R4 via MSDP Source Active (SA).

Bootstrap Router

  • RFC 5059
    • Similar functionality to Cisco proprietary Auto RP.
  • Roles
    • RP Candidate
      • Similar to Auto-RP's Candidate RP
      • Uses Unicast PIM to advertise itself to Bootstrap Router
    • Bootstrap Router
      • Analogous to mapping agent
      • Advertises RP info to other routers with multicast PIM on a hop by hop basis.
  • By default Auto-RP and BSR messages are sent on all PIM enabled interfaces.
  • For added security, these messages should be filtered on network edge.
    • Auto-RP via Multicast Boundary
    • BSR via BSR Border
  • Filtering can occur via TTL as well.
    • ‘Administrative Scoping’


In the topology above all routers are running OSPF in area 0. They each have ip multicast enabled and we’re going to make R1 the RP via BSR.

The first two commands above will make R1 the BSR and RP Candidate. The ‘group-list 1’ refers to an access-list numbered 1 that defines which multicast group ranges this RP will service.
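Since the commands themselves aren't reproduced above, here is a sketch of R1's configuration; the group range in the ACL is a placeholder.

```
! R1
ip pim bsr-candidate Loopback0 0
ip pim rp-candidate Loopback0 group-list 1
access-list 1 permit <group-range> <wildcard>
```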

An RP Mapping will show the same prefixes:

On R9 we can confirm that R1 is now officially the RP of the Multicast domain.

Now on R8 we're going to create an IGMP join to the group and check the mroute table on R1. R8: ‘R8(config)#ip igmp join-group’

R1 now has the group set up as a (*,G) entry, meaning it does not yet have a sender but knows which interface to use to reach receivers/group members.

After hopping onto R9 and trying to ping the group address, R1’s mroute table looks like below:

It now has the full SPT completed with an (S,G) entry.


Command ‘show ip pim rp’ will show RP for all Multicast groups.

Auto RP

  • PIM Sparse mode
    • Traffic not flooded unless asked for it.
    • Uses RP as root of Shared Tree
    • PIM DR hears sender and reports (S,G) to RP through PIM Register
    • Last hop router hears IGMP Join and sends (*,G) PIM join towards RP.
    • RP sends (S,G) PIM join towards source to complete shared tree.
    • Last hop router can initiate PIM SPT Join and Shared Tree Prune once feed is end-to-end.
  • Without RP
    • Sources cannot register
    • Joins cannot be processed
  • All routers must agree on same RP address on a per-group basis
    • Registers and Joins are rejected for invalid RP.
  • RP address assignment.
    • Can be done statically or Dynamically.
    • Dynamically
      • Auto-RP
      • BSR

Auto RP:

  • Cisco proprietary
    • Two functional roles
      • Candidate RP
        • Devices willing to be the RP
      • Mapping Agent
        • Chooses the RP among candidates and relays this info to the rest of PIM domain.
    • Allows for redundancy of RPs

Auto RP Process:

  • Candidate RP sends announcement with group range they are willing to service.
    • Uses group 224.0.1.39, i.e. (S, 224.0.1.39)
  • Mapping agent then discovers candidate RP and advertises their mappings to all other routers
    • Joins (*, 224.0.1.39) to discover Candidate RPs
    • Announces final RP mappings with (S, 224.0.1.40)
  • Caveats:
    • Dynamically learned RP mapping preferred over static.
    • Auto-RP control plane messages are subject to RPF check.
    • Routers must join (*, 224.0.1.39) for Candidate RPs and (*, 224.0.1.40) for the mapping agent.
    • Pim Sparse Mode
      • Cannot join the Auto-RP groups without knowing where the RP is located.
      • Cannot know where the RP is without joining the Auto-RP groups.
      • Recursive Logic.
      • Solutions:
        • Default RP Assignment
          • Assign static RP for groups 224.0.1.39 and .40.
          • Defeats purpose of dynamic.
        • PIM Sparse-Dense Mode
          • Dense for groups without an RP
          • Sparse for all others.
        • Auto-RP Listener feature
          • Dense for 224.0.1.39 and .40 only
          • Sparse for others.
  • Auto-RP with multiple candidates.
    • For redundancy and load distribution, multiple Candidate RPs can be configured.
    • ACL applied on a Candidate RP controls which groups it services.
    • If multiple Candidate RPs overlap, the Mapping Agent chooses the highest RP address.
  • Mapping Agent Security
    • Needs to be protected against false RP Candidate RP advertisements.
    • RP Announce Filter feature can permit or deny Candidate RP to be accepted.

Auto-RP Configuration:

In the above topology we're running OSPF on all routers in area 0. Each advertises a loopback address into the domain and has multicast enabled. There is, however, no RP at the moment. To enable Auto-RP we'll first add the ‘ip pim autorp listener’ command on each device.
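The listener feature is a single global command on each router:

```
ip pim autorp listener
```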

In this environment the router R1 will become the RP. On R1 we'll first need to configure the router to send RP announcements. These announcements will be sourced from interface Loopback1 with a scope (TTL) of 255.

For the mapping agent we'll add the RP discovery configuration, again using interface Loopback1 and a TTL of 255.
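A sketch of both commands on R1, matching the Loopback1 and TTL 255 described above:

```
! R1
ip pim send-rp-announce Loopback1 scope 255
ip pim send-rp-discovery Loopback1 scope 255
```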

Shortly after, running ‘show ip pim rp mapping’ shows that this host is now the Auto-RP and mapping agent.

Now, if we head over to R9 and ping the group, the RP at R1 will have an (S, G) in its mroute table for the group, sourced from the router I just pinged from.

The incoming interface is the unicast outbound interface used to reach R9, G0/0, and the outgoing interface list is Null, because there is currently no receiver or group member.

To add a group member I'll go over to R8, go under the interface facing the RP, and add an IGMP join for our newly created multicast group.

Now the mroute on R10 will show the successful RPF check, and it will know the path for the IGMP group.

Again from R9 we'll try pinging the multicast group. Now that there's a group member, the ping receives a successful response. In addition, ‘show ip mroute’ will show the full (S, G) on everything in the path. Here's R5:

Auto-RP Security:

Currently on R1 we can see with the command ‘show ip pim rp mapping’ that we're servicing the entire multicast group range, so basically everything. If we wanted to restrict this with an access-list we could do the following:

Create an ACL:

Add ACL to the pim RP mapping:
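The screenshots aren't reproduced here; a sketch of the two steps on R1, with the group ranges as placeholders:

```
! R1
access-list 1 permit <group> <wildcard>
ip pim send-rp-announce Loopback1 scope 255 group-list 1
```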

After running ‘clear ip pim rp-mapping’, ‘show ip pim rp mapping’ shows the serviced group range has shrunk to match our ACL.

Same with another router outside of the RP

This essentially means that if any router on the network tries to join a group that does not fall within the permitted ranges, the network will not know what to do with it.

In the image above, after the ACL add, I'll try to have R5 join a group outside the permitted range.

On R4 now, checking the mroute for that group, it shows the incoming interface as Null. The reason is that the router does not know which Rendezvous Point serves the new group.

After adding a permit any to access-list 1 and pinging the group again, I can now see on R4 that it's building the full SPT.

Multicast RPF Failures

  • Reverse Path Forwarding Check
    • PIM does not exchange its own topology.
    • PIM relies on IGP for loop free path selection.
    • RPF check is an extra data plane loop prevention technique.
  • RPF check performed on all incoming multicast packets.
    • If incoming multicast interface == outgoing unicast interface, RPF check passed.
    • If incoming multicast interface DOES NOT == outgoing unicast interface, RPF check fails, packet dropped.
  • RPF check changes depending on type of tree.
    • Shortest Path Trees (SPT)
      • Performs check against source
      • Used in (S,G) trees.
        • Pim Dense Mode
        • Pim Sparse Mode from RP to Source
        • Pim Sparse mode from Receiver to Source after SPT Switchover (S-bit set)
        • Pim Source Specific Multicast (SSM)
    • Shared Trees (RPT)
      • Perform RPF check against RP.
      • Used in (*,G) trees
        • Pim Sparse Mode from Receiver to RP before SPT switchover.
  • RPF check also performed on RP against Register messages.
    • RP must have route back to source that is being registered.
    • If no route, (S, G) state cannot be created.
  • Verify/Troubleshooting commands:
    • show ip mroute
    • show ip mroute count
    • show ip rpf
    • mtrace
    • debug ip pim
    • debug ip mfib pak
  • RPF controls how trees must be built, not merely how incoming traffic is validated.
    • PIM join is sent out RPF interface for both (*,G) and (S,G)
    • Changing RPF results in traffic engineering for multicast.
    • You can override Unicast route table if there are static mroutes entered.
      • ip mroute x.x.x.x x.x.x.x x.x.x.x
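For example, with made-up addresses, forcing RPF for sources in 10.1.1.0/24 toward the neighbor 192.168.12.2:

```
ip mroute 10.1.1.0 255.255.255.0 192.168.12.2
```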

PIM Sparse-Mode Config

The image above shows 10 routers, all running OSPF in area 0. Each advertises its own loopback interface into the OSPF domain – ie. R<#> = #.#.#.#/24. Each will get multicast and PIM sparse mode enabled on its transit interfaces.

In addition, each router will point to R1 as its Rendezvous Point.
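A sketch of the per-router baseline, with the RP address as a placeholder for R1's loopback:

```
ip multicast-routing
!
interface GigabitEthernet0/0
 ip pim sparse-mode
!
ip pim rp-address <R1-loopback-ip>
```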

On R10, when trying to ping the multicast group, I will not get a response, but R1 should now have gathered some information.

‘show ip mroute’ on R1 will display a few things.

– Shows the sender (R10) and the group the sender was trying to reach.
– Shows incoming interface on Gig0/3, RPF neighbor (neighboring router interface).
– Outgoing interface in regular Unicast table is Incoming interface in Multicast table.

– Going to be used by the receivers. Right now it shows outgoing interface as NULL because there are no receivers.

On the reverse side, let's now have a receiver ‘join’ the multicast group. This will be done on R9, where an IGMP join will be set up.

On R9 there’s only one link towards the RP, which is gig 0/0.

Now on the RP when looking at the group, there is an interface in the outgoing interface list. Gig0/0 is the interface pointing to R6, which is the closest to the new IGMP group member.

Now if we move up to R5 and take a look at the mroute table, we’ll see the following:

Shared tree that is pointed towards the Rendezvous Point (RP).

  • (*, G) – the shared tree entry, rooted at the RP.

And the source tree (Shortest path tree) which is rooted at the group sender.

  • (S, G) – (Source, Group)
  • Notice the incoming interface is traffic coming from the source, and outgoing interface is where traffic goes to reach the RP.
  • The same can be seen on R4, which is another hop up the chain closer to the Rendezvous Point.

Mroute on R1 is going to show the shared tree and the (S,G).


  • The RP is ultimately about creating the control plane for traffic from sender to receiver(s). If the RP is not actually in the data path between the source and receivers, there's no point in sending traffic to the RP just to have it redirected somewhere else. To take the RP out of the forwarding path, SPT Switchover is performed.