Basic Multicast – A “There Are No Dumb Questions” Guide

I’ve often described network engineers talking about multicast as being like “a group of teenagers talking about driving a manual transmission car”.

“Do you guys know how to drive stick? I mean, yea – of course I know how to drive a stick! Anyone who’s anyone knows how to drive a stick! It’s easy. Why, if you can’t drive stick, you can pretty much hand in your driver’s license right now, along with your man card!”

The funny thing is that none of them can drive a manual transmission car, and three-quarters of them don’t even own a car to begin with.

In the spirit of honesty, a lot of engineers treat multicasting in a similar fashion. It’s an ethereal technology that they know is used for neighbor exchanges with most IGPs, but it doesn’t otherwise cross their desks to any real degree – and if it does, the senior engineers and architects will surely handle it, right?

The mentality you’ll want to adopt is that multicasting is quite valuable to know before you’re thrown into the mix of a serious issue involving it and find yourself striking out on how it works. For many CCNA-level engineers making the transition to CCNP-level positions through a promotion or a job change, multicasting may be the first truly difficult challenge they face, as they’ve likely never touched it, trained on it, or been tested on it thus far in their careers.

Let’s take a look at basic multicasting, starting from ground zero, and working our way into a couple of realistic deployment scenarios that many enterprises would find everyday use for. On the way, we’ll have a look at what not to do, what TO do, and when you’d want to think about doing it.

This is going to be a rather long read, but it’s my hope that it’s an informative one for you!

A Bit Of Backstory

What is multicast truly intended to do?

Conceptually, it’s used to take identical traffic that would otherwise exist as numerous individual unicast streams and convert it into a single stream directed only to the devices that need the traffic in question. Most engineers know it uses the 224.0.0.0/4 IP address range, AKA the “Class D” address range, to function.

It can be enabled on virtually any Layer 3 Cisco device with IP routing turned on. As with many other routed protocols, it can be distributed as a “static route”-type system or distributed via a dynamic routing protocol (specifically MP-BGP). Cisco devices can be manually, statically assigned to a multicast IP group (AKA, a multicast IP address to listen/relay on), or can add themselves into a group on-demand to create a traffic path between listening and transmitting clients.
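
To make the “static assignment” option concrete, here’s a minimal sketch of the two IOS interface commands involved – the interface name and group address are placeholders rather than part of any particular deployment:

Router(config)# interface GigabitEthernet 0/0/1
Router(config-if)# ! Forward this group out the interface without the router processing it itself
Router(config-if)# ip igmp static-group 224.0.10.20
Router(config-if)# ! Or: make the router itself a member of the group (handy for lab testing)
Router(config-if)# ip igmp join-group 224.0.10.20

The difference is worth knowing: ‘static-group’ simply forwards the group’s traffic out the interface, while ‘join-group’ makes the router a full member that answers for the group itself – which is why ‘join-group’ is best reserved for testing.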

As with IGPs such as OSPF or EIGRP, there needs to be a “master router” of sorts in most implementations that is the brains behind the entire multicast hierarchy – that master router is called the Rendezvous Point, or ‘RP’ for short. You can have either a single one of these, or a redundant set of them that share an anycast IP address together.

Multicasting in the IP world is often referred to as ‘PIM’, or Protocol Independent Multicast, because that’s the flavor of it that nearly everyone uses today. There are several different variants of PIM; the most common one, and the one that we’ll use in our demonstrations later on, is PIM-ASM. The ‘ASM’ bit stands for Any-Source Multicast, meaning any device in the hierarchy can take on the ‘speaker’ role – that is, be the device that’s transmitting to the group. PIM-ASM is also the easiest to set up.

PIM is joined at the hip to ‘IGMP’, or Internet Group Management Protocol. IGMP is the means by which clients request that their router join them to a specific multicast IP group. The client sends an IGMP request out to the multicast address it wants to be added to, and as long as the router (which is usually the default gateway) has multicast correctly configured, it will enable the multicast group on itself and attempt to find the RP that’s authoritative for the given group to initiate communications.

If you’re like me, and you’re already running redundant routers as your default gateway courtesy of something like HSRP or GLBP, they’ll go through what’s called a ‘Designated Router’ election for the subnet they receive the incoming IGMP request on. The Designated Router, or ‘DR’ for short, is the router that is authoritative for the local subnet – it’s the one that will initiate the multicast group communication with the RP. The router that loses the DR election stands by, ready to assume that function itself if the current DR fails. By default, the router with the highest configured interface IP address wins the DR election; you can also manually adjust DR priorities to make the router of your choice win, if needed. If you’re running a single router as your gateway, it wins the DR election by default since there’s nothing to compete with it.
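
If you ever need to steer that DR election by hand, it’s a single per-interface knob – higher priority wins, and priority is compared before falling back to the highest IP address. A minimal sketch, with a placeholder interface and an arbitrary priority value:

Router(config)# interface GigabitEthernet 0/0/0.10
Router(config-if)# ! Raise this router's DR priority on the client-facing subinterface (the default is 1)
Router(config-if)# ip pim dr-priority 200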

I’ve placed a considerable amount of information in front of you rather quickly; I apologize for that! Let’s use it in a couple quick examples to get a bit of a better grasp as to how it all connects together before we go any further.

Multicast Process & Flow – Sparse Mode, With an RP

To see how multicasting actually works in action, consider the topology below: a single client going through its gateway router for connectivity, an RP (presumably in a central datacenter), and a server that’s functioning as a multicast speaker in a different routed remote subnet.

[Topology diagram: Multicast Client – R1 – RP – R2 – Multicast Speaker]

We’re going to look at this in two ways – firstly, what this flow would look like if we were running PIM in Sparse Mode (AKA, the only routers that receive multicast traffic are those that actually have clients needing the traffic right now), and secondly, PIM in Dense Mode (AKA, where every single router in the topology gets the multicast traffic sent to it, regardless of any attached clients needing it).

For PIM in Sparse Mode:

  1. The Multicast Speaker server sends out an IGMP Request packet to multicast IP address ‘224.0.10.20’, asking to be added to this group. It also internally adds itself to this group so that it can receive any traffic on this group that’s local to the Layer 2 subnet it’s on.
  2. The R2 router receives the IGMP Request packet. It sees that it has a client that wants to be in ‘224.0.10.20’, but it isn’t in ‘224.0.10.20’ itself. It adds itself to this group before proceeding.
  3. R2 first checks if it has a static entry configured that tells it what the Rendezvous Point IP address is for the ‘224.0.10.20’ group. It finds that it does, and initiates a PIM tunneling request to that RP.
  4. The PIM tunnel is automatically negotiated and created between R2 and the RP; it’s a unidirectional point-to-point tunnel (it shows up as an automatically created ‘Tunnel’ interface) that R2 uses to encapsulate the speaker’s multicast traffic inside unicast Register messages and forward it upstream to the RP.
  5. Once this tunnel is created, the Multicast Speaker can begin to send traffic to the RP on this group through the tunnel. However, since the RP has a single speaker and no listeners at this point, it doesn’t know where to send the traffic to (as it has no other Reverse Path Forwarding adjacencies on this multicast group right now), so it drops the traffic.

Then:

  1. The Multicast Client computer sends out an IGMP Request packet to multicast IP address ‘224.0.10.20’, asking to be added to this group. It also internally adds itself to this group so that it can receive any traffic on this group that’s local to the Layer 2 subnet it’s on.
  2. The R1 router receives the IGMP Request packet. It sees that it has a client that wants to be in ‘224.0.10.20’, but it isn’t in ‘224.0.10.20’ itself. It adds itself to this group before proceeding.
  3. R1 first checks if it has a static entry configured that tells it what the Rendezvous Point IP address is for the ‘224.0.10.20’ group. It finds that it doesn’t.
  4. R1 then checks its multicast routing table (which is different from a conventional Layer 3 routing table) to see if it has an entry that ‘224.0.10.20’ falls into. It finds an entry that it received via Multi-Protocol BGP, which tells it the RP’s IP address for this group.
  5. R1 initiates a PIM tunneling request to the RP for the ‘224.0.10.20’ group. The tunnel is established.
  6. Now that the RP has a downstream adjacency in the ‘224.0.10.20’ group, the traffic that was being dropped earlier has a “next hop” that it can go to. The RP checks its Reverse Path Forwarding entry for R1, sees R1’s IP address (outside of the PIM tunnel) in the RPF entry, and sends a unicast packet with a PIM header to R1 carrying the traffic the Multicast Speaker is sending.
  7. R1 receives the unicast PIM packet, and checks it against the table of clients it has that are a member of the multicast group that’s noted in the PIM packet header. It notes that it has active listeners in the given group, and then forwards the packet as a multicast packet out the egress interface it has on the client’s subnet.
  8. The Multicast Client receives the packet, notes that it’s for a multicast group that it’s a member of, and processes it accordingly.

WOW! There’s a lot going on here!

There are a few things I want you to note in passing. For starters, notice that by default, Sparse Mode means that nothing in our equation is going to handle or receive the multicast traffic unless it specifically needs to. We also see the process change between using a static entry to learn about our RP at the downstream level versus using a routing protocol to learn about the RP – the routing protocol method is a much more scalable way to distribute this information if you’re working in a large-scale environment.
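
For reference, the “static entry” in question is a single global command on the downstream router – a minimal sketch, using the 10.0.10.1 RP address we’ll reuse in the deployment examples later:

R2(config)# ip pim rp-address 10.0.10.1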

We also see the “PIM Tunnel” used; I’m commonly asked by some engineers why Sparse Mode has to use a tunnel between the downstream router and the RP for direct uni-directional communication. The reason is that the initial multicast traffic has to get to the RP without anything else in the middle seeing it or reacting to it first. If a “middleman” router caught a raw PIM request packet in transit, it would add itself to the multicast group and then attempt to find the RP for the group – and if it couldn’t find the RP, it would notice that it has a PIM neighbor adjacency with its downstream router, and if that downstream router was the DR for that network segment, it would forward the multicast back the way it came and create a thrashed traffic loop.

This is why PIM in Sparse Mode has to function as a top-down, tree-style topology that’s heavily reliant on the Reverse Path Forwarding entries in its multicast routing table – it’s otherwise all-too-easy to accidentally create a network loop with multicast traffic.

Sparse Mode tends to be the preferred deployment mode for the majority of installations: while it’s more complicated and takes longer to set up and establish connections, it prunes traffic down to only the network segments that actually need it. If your company has 15 remote satellite offices with slow WAN links, but only three of them need the multicast traffic in question on a regular basis, Sparse Mode makes the most sense so that you’re not flooding all links with needless traffic.

Now, let’s take a look at the alternative to Sparse Mode.

Multicast Process & Flow – Dense Mode, With no RP

Dense Mode, in contrast to the Sparse Mode system we just examined, will by default put all multicast groups onto all the routers in your topology automatically unless you manually tell certain routers to exclude given groups. Because of this approach, Dense Mode doesn’t require a Rendezvous Point as a multicast headend – every router in the environment is going to get the traffic, so centralized flow/route control for it isn’t needed.

As a result, our topology has changed slightly – notice that we don’t have an RP anymore, and we simply have an ‘R3’ router in its place.

[Topology diagram: Multicast Client – R1 – R3 – R2 – Multicast Speaker]

Our traffic flow process now looks something like this:

  1. The Multicast Speaker server sends out an IGMP Request packet to multicast IP address ‘224.0.10.20’, asking to be added to this group. It also internally adds itself to this group so that it can receive any traffic on this group that’s local to the Layer 2 subnet it’s on.
  2. The R2 router receives the IGMP Request packet. It sees that it has a client that wants to be in ‘224.0.10.20’, but it isn’t in ‘224.0.10.20’ itself. It adds itself to this group before proceeding.
  3. It notes that it is configured in Dense Mode, and checks its PIM neighbor table to see what neighbors it has that are also configured in PIM Dense Mode. It sees an adjacency with the R3 router, and sends an IGMP Request to it notifying it that there’s a new multicast group that R3 needs to know about and add.
  4. R3 receives the IGMP Request, and notes that it isn’t in ‘224.0.10.20’ itself. It adds itself before proceeding.
  5. R3 will now check its PIM neighbor table to see if it has any neighbors in PIM Dense Mode that need to know about this group. It sees both R1 and R2, but since R2 has the same IP address as the Reverse Path Forwarding entry for the IGMP Request it received, it won’t send the request “back” to R2 but will rather only send it on to R1. It does this to prevent flooding and packet duplication.
  6. R1 receives the IGMP Request, and notes that it isn’t in ‘224.0.10.20’ itself. It adds itself before proceeding.
  7. R1 doesn’t otherwise have any further downstream routers that need to know about this group, and no listeners that are currently in it, either. After checking its PIM neighbor table to confirm this, it opts to stand by until further PIM group traffic is sent to it.
  8. R1, R2, and R3 will continue to maintain membership in the ‘224.0.10.20’ group in the hope that traffic will be sent, so that they can provide it to their PIM neighbors or connected multicast listeners. The inactive timeout interval for this traffic is five minutes, by default – if they don’t get traffic within that time frame, they will remove their membership from the ‘224.0.10.20’ group until they have reason to join it again.

Then:

  1. The Multicast Speaker begins sending out traffic on the ‘224.0.10.20’ group address.
  2. R2 receives the traffic, and sends it on to R3 as it notes that it has a neighbor entry for R3 that also corresponds to the group address in question.
  3. R3 receives the traffic, and sends it on to R1 as it notes that it has a neighbor entry for R1 that also corresponds to the group address in question. It doesn’t send the traffic back to R2 as it notes that its Reverse Path Forwarding entry in the PIM packet is the same IP address as the R2 egress interface that sent the original packet.
  4. R1 receives the traffic, and notes that it has no multicast listeners currently registered in the ‘224.0.10.20’ group. As it has no other neighbors that it can forward the traffic to and no other clients that need it, it drops the traffic.

Finally:

  1. The Multicast Client sends an IGMP Request to R1 requesting to be added to the ‘224.0.10.20’ multicast group. It also internally adds itself to the group so that it can receive any Layer 2 traffic for the group.
  2. R1 receives the request, notes that it is already in the needed multicast group, and that it has a neighbor entry in PIM for R3 on the same group. It doesn’t perform any further action at this time, as it knows the multicast group already exists in the topology.
  3. The Multicast Speaker sends more packets on ‘224.0.10.20’, which can now flow through R2, then R3, then R1, and can now be given to the Multicast Client as R1 now has a PIM table entry for the Multicast Client on the ‘224.0.10.20’ group.

OK, so what do we see? While the process seems long from a communication standpoint, understand that from a configuration perspective, all we’ve really had to do to get this set up is run two commands on our routers – one to enable multicast routing, and one to set PIM to Dense Mode on each interface that needs it. That’s it!
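
As a rough sketch of just how small that footprint is – hypothetical router and interface names, repeated on each router and each multicast-facing interface:

R3(config)# ip multicast-routing
R3(config)# interface GigabitEthernet 0/0/0
R3(config-if)# ip pim dense-mode
R3(config-if)# exit
R3(config)# interface GigabitEthernet 0/0/1
R3(config-if)# ip pim dense-mode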

While Dense Mode is substantially easier to set up and troubleshoot in comparison to Sparse Mode, we also notice the tradeoff for this – that if R1 didn’t have an active multicast listener on the ‘224.0.10.20’ group at the time the Multicast Speaker was sending traffic, it was getting all the traffic sent to it anyway, and was forced to drop it.

Dense Mode can create quite a bit of traffic overhead in an environment that’s not correctly scoped for it, as we see here in our example. It’s a good fit for environments where all (or almost all) locations need a relatively light traffic load, or for very small environments with a more junior engineer who simply doesn’t want to troubleshoot the complexities of Sparse Mode – nothing wrong with that, either!

Now that we’ve seen a couple of examples of what we can do with multicast in our environments, let’s take a look at our first in-depth example of this in action.

Deployment Scenario #1 – Workstation Imaging

Once companies hit the 100-employee mark (heck, I’ve even seen some 50-employee companies do this), it’s a good bet that their IT shop is starting to look at an imaging solution for their PCs and servers as opposed to manually building them out one-by-one each time a new or rebuilt machine is needed. As such, PC imaging is the most common day-to-day use of multicast that you’ll see out in the IT world.

For organizations that have either a single large office with users who work from home, or have spread out into several geographically-distant locations, PC imaging makes a good pairing for multicasting because it gives your IT shop the ability to remotely re-image PCs without a user having to manually bring them back to the corporate office for service. Multicast-based imaging can easily save a business thousands, if not tens of thousands of dollars each year in lost time, lost productivity, and user exasperation.

As such, it’s going to be our first practical example of multicast in action.

Our example is a five-location company with offices across the United States. They have roughly 500 employees, and have 100 Mbps-rated WAN links that are interconnected into an MPLS mesh topology with each other. They’re using single routers for egress at all their locations, as opposed to a redundant pair. For our purposes, we’ll say that they aren’t currently using multicast for anything beyond IGP neighbor exchanges, and PC imaging is going to be their first real foray into it.

[Topology diagram: five offices interconnected by 100 Mbps WAN links in an MPLS mesh, each with a single egress router]

The first question that needs to be asked is: Which PIM mode is the ideal fit for what we’re looking to do here?

We know that our traffic is going to consist of Windows OS images – which are fairly large files, especially when loaded up with a full corporate application suite. We also know that the odds are that we won’t be imaging remote PCs every day – it will be a troubleshooting or upgrade consideration on an irregular basis.

While a reasonable argument could possibly be made for either PIM mode, I would personally choose Sparse Mode to avoid sending imaging traffic across all locations at once when in all likelihood it only needs to go to a single location at a time – at most, two.

The second question that needs to be asked is: If we’re going to use Sparse Mode, where should we place the Rendezvous Point(s)?

Because of the nature of Sparse Mode in that it’s a tree-style topology, and all traffic will be tunneled back to the RP so that it can be distributed from that point in the network, it makes sense to put the RP in the spot that you’re estimating the majority of the “Speaker” traffic will come from. In our example, we’ll say that the Corporate Office in New York has the imaging servers in its datacenter, so the traffic will be originating from there. It then makes sense to place the RP in New York.

The final question that we’ll need to consider is: How do we want to propagate the PIM group membership, as well as inform routers of the RP for that group in our environment?

While this heavily comes down to personal preference, I tend to lean toward statically specifying the PIM group and RP entries if your environment has ten routers or fewer. If you want to do dynamic routing for PIM, the only real option you have is Multi-Protocol BGP; not everyone’s running MP-BGP in their networks currently, and it may be a steep learning curve for more junior engineers, as well as an additional protocol you’d now have to add to everything and manage. It makes more sense in large environments, and less sense in the five-router environment we’re dealing with here.

Deployment Scenario #1 Cont. – The Command Line

Alright, so we’re setting up a five-router static multicast topology. For the purposes of keeping our deployment simple, we’re going to assume the use of the ‘224.0.10.20’ address we used previously, as well as assume that there aren’t any internal firewalls or ACL rules that would otherwise impede traffic – we’re strictly only going to look at the multicast side of a deployment.

Everything we’re going to be dealing with is a router-on-a-stick type deployment, although this model could easily be adapted to a Layer 3 Core Switch / Routed WAN Link deployment by simply adding a downstream device into our topology that would need IGMP and PIM enabled, as well as the static pointer for our RP.
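
If you did have a Layer 3 core switch between the WAN router and the clients, its config would be a minimal variation of the branch router configs that follow – a rough sketch with hypothetical interface names (some switch platforms want the ‘distributed’ keyword on the multicast-routing command, and enabling PIM on an interface enables IGMP on it automatically):

CoreSW(config)# ip multicast-routing
CoreSW(config)# ip pim rp-address 10.0.10.1
CoreSW(config)# interface Vlan10
CoreSW(config-if)# ip pim sparse-mode
CoreSW(config-if)# exit
CoreSW(config)# interface TenGigabitEthernet 1/0/1
CoreSW(config-if)# ip pim sparse-mode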

Let’s start off by hopping into the New York router, since it’s going to be our RP. We’re going to use the following commands on it:

NewYork_ASR(config)# ip multicast-routing distributed
NewYork_ASR(config)# ip multicast multipath s-g-hash next-hop-based
NewYork_ASR(config)# ip pim log-neighbor-changes
NewYork_ASR(config)# interface Loopback100
NewYork_ASR(config-if)# description Anycast RP Address
NewYork_ASR(config-if)# ip address 10.0.10.1 255.255.255.255
NewYork_ASR(config-if)# ip pim sparse-mode
NewYork_ASR(config-if)# no shutdown
NewYork_ASR(config-if)# exit
NewYork_ASR(config)# ip pim rp-address 10.0.10.1
NewYork_ASR(config)# interface GigabitEthernet 0/0/0.10
NewYork_ASR(config-if)# ip pim sparse-mode
NewYork_ASR(config-if)# exit
NewYork_ASR(config)# interface GigabitEthernet 0/0/0.20
NewYork_ASR(config-if)# ip pim sparse-mode
NewYork_ASR(config-if)# exit
NewYork_ASR(config)# interface Serial 0/0/0
NewYork_ASR(config-if)# ip pim sparse-mode
NewYork_ASR(config-if)# exit

OK, let’s break down what we’re doing here. Firstly, we’re enabling multicast routing in distributed mode; the ‘distributed’ keyword enables distributed multicast forwarding (the MFIB) and is required on some platforms, while plain ‘ip multicast-routing’ works on others. Secondly, we’re specifying our multipathing method; this isn’t strictly needed for the current installation, and is merely future-proofing it for the point where they have redundant WAN or Core links down the road. Thirdly, logging our neighbor changes can also be incredibly helpful in the event we need to troubleshoot afterwards.

From there, we’re creating a Loopback interface to use as an RP address, and then assigning it a free IP address. I’d like you to note, here – try to use the lowest IP address you reasonably can for the RP loopback, to help ensure it doesn’t unintentionally become the router ID and interfere with something such as OSPF’s DR election.

Next, we’re going into what’s presumably our data VLAN subinterfaces on the LAN side, and enabling PIM in Sparse Mode. Then, we do the same thing on the Serial WAN interface.

This implementation is also making a further assumption that they’re doing static routing for their site-to-site communication instead of using a routing protocol. As such, we’ll need to ensure that all of our downstream routers have a route to reach the ‘10.0.10.1’ RP address, since it isn’t being advertised by a routing protocol. To facilitate this, we’ll assume that the IP address on the New York router’s Serial interface is 10.250.100.1.

Let’s hop on the Kansas City router, and demonstrate what the branch office config is going to look like:

KansasCity_ASR(config)# ip multicast-routing
KansasCity_ASR(config)# ip pim log-neighbor-changes
KansasCity_ASR(config)# ip multicast multipath s-g-hash next-hop-based
KansasCity_ASR(config)# access-list 10 permit 224.0.10.20
KansasCity_ASR(config)# ip pim rp-address 10.0.10.1 10
KansasCity_ASR(config)# ip route 10.0.10.1 255.255.255.255 10.250.100.1
KansasCity_ASR(config)# interface Serial 0/0/0
KansasCity_ASR(config-if)# ip pim sparse-mode
KansasCity_ASR(config-if)# exit
KansasCity_ASR(config)# interface GigabitEthernet 0/0/0.10
KansasCity_ASR(config-if)# ip pim sparse-mode
KansasCity_ASR(config-if)# exit
KansasCity_ASR(config)# interface GigabitEthernet 0/0/0.20
KansasCity_ASR(config-if)# ip pim sparse-mode
KansasCity_ASR(config-if)# exit

As you can see, our config is very similar to the RP’s config. Noteworthy changes are that we’re using a static route to reach our RP address, and we’re using a fairly explicit entry – the access list – to control which multicast group we point to the RP for.

You may be asking why we’re being as explicit as we are in terms of only using the RP for ‘224.0.10.20’ instead of just using it as a catch-all for all multicast traffic, period. The reason for this is that we don’t want, for example, routing protocol multicast traffic going through our RP, as that would cause a great deal of complication with neighbor exchanges and topology changes (depending on the protocol they might be using, it could actually cripple their network).

The second reason to be this explicit is that it’s one of two ways to exert a measure of control over which client multicast groups are allowed to use the RP. The alternate approach would be to use an ACL on each interface to restrict multicast traffic; the method I’m using arguably does nearly the same thing with far fewer commands.
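
For completeness, the per-interface alternative usually takes the shape of a multicast boundary rather than a plain traffic ACL – a rough sketch, assuming we only ever want the imaging group crossing the WAN interface:

KansasCity_ASR(config)# access-list 11 permit 224.0.10.20
KansasCity_ASR(config)# interface Serial 0/0/0
KansasCity_ASR(config-if)# ! Only groups permitted by ACL 11 may cross this interface
KansasCity_ASR(config-if)# ip multicast boundary 11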

That branch config, repeated onto the remainder of our routers, gives us exactly what we’re looking for. We now have a functional imaging multicast topology that kicks into gear as soon as the server is configured and clients start sending IGMP joins upstream to register with our imaging group.

Deployment Scenario #2 – Company Lobby eSigns

Our second deployment situation is for a handcrafted furniture company that has a 250-employee corporate office, and over 100 satellite locations with ten employees or less at them. What they do have is a newly-purchased digital signage suite that runs on a television in the lobby of their satellite locations, and provides product information and marketing adverts to potential customers. They use VDI thin clients for their endpoint workstations instead of desktop PCs, which has eliminated the need for PC imaging and subsequently means that they haven’t had to attempt multicasting for it yet.

They’re considering a multicast implementation to help manage the video traffic that’s going to their signage TVs, as it would substantially cut down on the amount of traffic that’s going from the video server at the corporate office out to each location.

Streaming some type of video or audio feed is quite possibly the second most practical usage of multicast, as it offers a way to cut a very large amount of data down into a single stream at the traffic headend.

For this deployment, let’s assume that we’re dealing with redundant routers at the corporate headend with redundant LAN/WAN links. The corporate office also has a Layer 3 Core switch that’s doing their subnet routing on the LAN side, so we’re not doing router-on-a-stick there. For the satellite locations, they’re using a single Cisco router in router-on-a-stick fashion, which is then uplinked into an SD-WAN solution that’s providing a DMVPN-type mesh topology in the WAN “cloud” that’s interconnecting all the sites.

With this in mind, let’s ask ourselves the same three questions we did in Scenario #1:

The first question that needs to be asked is: Which PIM mode is the ideal fit for what we’re looking to do here?

This topology is a much more definitive fit for running PIM in Dense Mode, as each location is always going to have this display system running during normal business hours. The only argument that could be made for Sparse Mode here is if they were previously using a Sparse deployment, and it would be simpler to continue to use it and just add another multicast group into each location’s config.

The second question isn’t going to be applicable here, as we’re going to be using Dense Mode PIM, which doesn’t use an RP.

The final question that we’ll need to consider is: How do we want to propagate the PIM group information throughout our environment?

For this environment, a very strong argument could be made towards using MP-BGP to automate routing propagation and failover – especially since they have redundant routers and links at the corporate office. The thing to consider as a counterargument to MP-BGP would be that they have an SD-WAN solution that may have to be re-configured or custom-tailored to work with a routing topology like this, as part of the functionality may overlap between systems.

In an effort to help demonstrate this type of setup in action, I’m going to opt to go the MP-BGP route (no pun intended).

Deployment Scenario #2 Cont. – The Command Line

To make this interesting, let’s suppose that our company is currently using EIGRP for routing, and MP-BGP is going to be the first foray for them into an alternate routing protocol. Also in the spirit of making this interesting, let’s suppose that we’re going to do this configuration on core switches for the corporate office as well.

This deployment is also going to have the advantage of having a network management suite that’s going to allow us to push a templated configuration to all of the satellite location routers – so we’re going to be using a “one size fits all” approach to our downstream config.

So, EIGRP is already going to be propagating our IPv4 unicast addressing, and MP-BGP is going to be the multicast icing on top of that cake.

Since our core switches at the corporate office are going to serve as the headend point for almost all of this traffic, let’s start our configuration there. We’re working with a pair of Catalyst 9410R chassis for this.

CoreSW1_9410R(config)# ip multicast-routing
CoreSW1_9410R(config)# ip pim log-neighbor-changes
CoreSW1_9410R(config)# interface Vlan100
CoreSW1_9410R(config-if)# ip pim dense-mode
CoreSW1_9410R(config-if)# exit
CoreSW1_9410R(config)# interface TenGigabitEthernet 3/1
CoreSW1_9410R(config-if)# ip pim dense-mode
CoreSW1_9410R(config-if)# exit
CoreSW1_9410R(config)# interface TenGigabitEthernet 4/1
CoreSW1_9410R(config-if)# ip pim dense-mode
CoreSW1_9410R(config-if)# exit
CoreSW1_9410R(config)# interface TenGigabitEthernet 3/2
CoreSW1_9410R(config-if)# ip pim dense-mode
CoreSW1_9410R(config-if)# exit
CoreSW1_9410R(config)# interface TenGigabitEthernet 4/2
CoreSW1_9410R(config-if)# ip pim dense-mode

Our setup this time around is MUCH simpler on the frontend. We’re enabling multicast routing in general, and we don’t need to specify a multipathing or hashing method because in Dense Mode, all Layer 3 PIM nodes are getting all the traffic anyway.

We’re enabling PIM on our server VLAN interface, then on our redundant links up to the routing edge, then on our redundant links to the other core switch. We’ll circle back around after we’ve got our base config spun up and do the MP-BGP side, but for now, we’re doing the PIM side.

Our config is going to be nearly identical on our routing edge at corporate, as well as downstream at a satellite office:

CorporateR1_ASR(config)# ip multicast-routing
CorporateR1_ASR(config)# ip pim log-neighbor-changes
CorporateR1_ASR(config)# interface TenGigabitEthernet 0/0/0
CorporateR1_ASR(config-if)# ip pim dense-mode
CorporateR1_ASR(config-if)# exit
CorporateR1_ASR(config)# interface TenGigabitEthernet 0/0/1
CorporateR1_ASR(config-if)# ip pim dense-mode
CorporateR1_ASR(config-if)# exit
CorporateR1_ASR(config)# interface GigabitEthernet 0/2/0
CorporateR1_ASR(config-if)# ip pim dense-mode

DesMoinesR1_ASR(config)# ip multicast-routing
DesMoinesR1_ASR(config)# ip pim log-neighbor-changes
DesMoinesR1_ASR(config)# interface GigabitEthernet 0/0/0
DesMoinesR1_ASR(config-if)# ip pim dense-mode
DesMoinesR1_ASR(config-if)# exit
DesMoinesR1_ASR(config)# interface GigabitEthernet 0/0/1.50
DesMoinesR1_ASR(config-if)# ip pim dense-mode

All we’re really doing here is enabling PIM Dense Mode on transport interfaces, and subinterfaces pertaining to the VLAN the eSign platform is on. Now, let’s circle back around and start our MP-BGP journey:

CoreSW1_9410R(config)# router bgp 65001
CoreSW1_9410R(config-router)# address-family ipv4 multicast
CoreSW1_9410R(config-router-af)# neighbor 10.10.0.1 remote-as 65001
CoreSW1_9410R(config-router-af)# neighbor 10.10.0.1 activate
CoreSW1_9410R(config-router-af)# neighbor 10.10.0.2 remote-as 65001
CoreSW1_9410R(config-router-af)# neighbor 10.10.0.2 activate
CoreSW1_9410R(config-router-af)# neighbor 10.10.10.2 remote-as 65001
CoreSW1_9410R(config-router-af)# neighbor 10.10.10.2 activate
CoreSW1_9410R(config-router-af)# network 224.0.10.20 mask 255.255.255.255
CoreSW1_9410R(config-router-af)# exit-address-family

Note what’s going on here. We’re using AS 65001 as it’s a local BGP AS, then we’re using one of the MP-BGP extensions to advertise a multicast neighbor and address set, activating our neighbors, and then exiting the address family – this is a VERY important step because, at the time of this writing, the Cisco device won’t commit the address-family config until you exit out of it.

The three neighbors that we’re specifying here are the second core switch, the first corporate router, and the second corporate router.

We don’t need to do any kind of redistribution into EIGRP because the multicast advertisements are a completely separate entity from the EIGRP domain. This also gives us the passive benefit of ensuring that our configuration stops at the company’s Internet edge and doesn’t accidentally bleed out into a provider’s or vendor’s route scope.

The configuration on the corporate routers is going to be much bigger, but not more complex:

CorporateR1_ASR(config)# router bgp 65001
CorporateR1_ASR(config-router)# address-family ipv4 multicast
CorporateR1_ASR(config-router-af)# neighbor 10.10.0.2 remote-as 65001
CorporateR1_ASR(config-router-af)# neighbor 10.10.0.2 activate
CorporateR1_ASR(config-router-af)# neighbor 10.10.10.1 remote-as 65001
CorporateR1_ASR(config-router-af)# neighbor 10.10.10.1 activate
CorporateR1_ASR(config-router-af)# neighbor 10.10.10.2 remote-as 65001
CorporateR1_ASR(config-router-af)# neighbor 10.10.10.2 activate
     (** These neighbor statements will continue for each satellite location's router neighborship**)
CorporateR1_ASR(config-router-af)# exit-address-family

A similar config will exist on the satellite locations:

DesMoinesR1_ASR(config)# router bgp 65001
DesMoinesR1_ASR(config-router)# address-family ipv4 multicast
DesMoinesR1_ASR(config-router-af)# neighbor 10.10.0.1 remote-as 65001
DesMoinesR1_ASR(config-router-af)# neighbor 10.10.0.1 activate
DesMoinesR1_ASR(config-router-af)# neighbor 10.10.0.2 remote-as 65001
DesMoinesR1_ASR(config-router-af)# neighbor 10.10.0.2 activate
DesMoinesR1_ASR(config-router-af)# exit-address-family

The neighbors we’re specifying on the Des Moines router are the two corporate ASRs. This gives us a functional setup for all of our routing. The core switches at the corporate office are advertising the multicast address to be distributed, then each other router that’s a member of BGP AS 65001 is relaying the information out to the rest of the environment.
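
If you want to verify that the multicast address-family sessions actually came up, the address-family-aware BGP show commands are the place to look – for example, on the Des Moines router from above:

DesMoinesR1_ASR# show bgp ipv4 multicast summary
DesMoinesR1_ASR# show bgp ipv4 multicast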

Simple, right?

Troubleshooting The Topology

There are several commands that you can use to check the status of your deployment:

  • show ip mroute [group address]
  • show ip pim neighbor
  • show ip pim tunnel
  • show ip igmp groups
  • show ip igmp interface

In my experience, the most common cause of multicast not working or PIM neighborships not forming is that you’re missing a PIM command on an interface, or are missing the command to denote the RP address for a given group.
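
If you suspect either of those two culprits, a quick sanity check looks something like the sketch below – the interface, source, and RP values are placeholders, so substitute your own. ‘show ip pim interface’ confirms PIM is actually enabled where you think it is, ‘show ip pim rp mapping’ confirms the router knows an RP for the group, and ‘show ip rpf’ confirms there’s a usable reverse path back toward the source.

Router# show ip pim interface
Router# show ip pim rp mapping
Router# show ip rpf 10.20.30.40
Router(config)# interface GigabitEthernet 0/0/0
Router(config-if)# ip pim sparse-mode
Router(config-if)# exit
Router(config)# ip pim rp-address 10.0.10.1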

Summary

This is the longest tech article I’ve ever written.

I’ve tried to keep the discussion simple, and geared towards someone who’s just starting out with multicasting. I’ve deliberately avoided getting into PIM-SSM, or any of the more complex extensions to PIM that aren’t used as commonly.

Hopefully, this has given you a bit more insight into how to roll out your own multicast deployment. For more information or more advanced topics in PIM, Cisco Press has published a couple of fantastic books on the subject of multicasting and how to scale it, if you’re interested in going deeper.

Caleb
Caleb Huggenberger is a 31 year-old systems engineer, owner of the non-profit animation streaming service 'Otaku Central', and Eastern culture enthusiast. Outside of long work days, he enjoys electronics engineering, cast iron campfire cooking, and homesteading on his acreage in the Indiana countryside.
