CCNA Introduction to QoS

Chapter 1 – Introduction

There’s no doubt that some traffic is more important than other traffic. We wouldn’t want the quality of a video conference or phone call to suffer because someone is streaming a 4K YouTube video.

QoS, or Quality of Service, is the collection of tools that we use to set rate limits and manage priorities.

This lesson, and the one to follow, will be a gentle introduction. If you’re interested in learning more, then you can go deeper with our QoS mini-series.

Chapter 2 – What is QoS?

Sometimes, a network link will experience congestion. This is much like the congestion you might experience on the roads. Once the roads are filled to capacity, traffic speed and performance suffer.

In times of congestion, QoS gives priority to more important traffic. This is at the expense of less important traffic.

Here we have a main office and a branch office. Between them is a 100Mbps WAN link, which is reaching capacity. This link carries all types of traffic, including web browsing and phone calls.

When the link is over capacity, some traffic will drop. There’s no way around that; there’s a limit to how much data this link can carry.

Without QoS, it’s first in, first served. There’s no more logic to it than that. We’ll find, then, that voice traffic and web browsing traffic are both affected. This might cause phone calls to sound bad or drop out.

Phone calls are usually more important. So we can configure QoS to prioritise voice traffic. This way, the voice traffic will pass through, while the less important traffic will drop.

Notice that this still drops some traffic. QoS does not magically make your links bigger, better, or faster. Neither does it make congestion go away. But it does allow the more important traffic through first.

We configure QoS to identify network traffic and categorize it. For example, we may create a real-time category, which contains voice and video traffic. This type of traffic is sensitive to network congestion, and is usually high priority.

Another category might be for network management, and contain routing updates. There might be a category for business-critical applications and websites. And finally a category for non-essential traffic.

QoS handles each of these categories in different ways, according to their needs.

For example, we may reserve an amount of bandwidth for real-time traffic. We may also make sure the router forwards these packets first.

For other traffic we use a strategy called shaping. Shaping buffers the forwarding of packets until there is available bandwidth.

For other traffic types we may consider policing. This is where we rate-limit the amount of bandwidth these applications can have.

We’ll talk about shaping and policing in the next video.

Here is the key point that you should remember. QoS is helpful for congested links, or for helping to prevent links from becoming congested.

If you’re regularly experiencing congestion, you may have a bigger problem. You may need to consider upgrading your links, or reviewing the traffic that you allow on them.

QoS is part of the solution, not the entire solution. Do not avoid upgrading your capacity because you’re using QoS.


Quality of Service includes several tools to get the job done. As we’ll see in this video and the next, we can combine them to achieve different outcomes.

Chapter 3 – Application Traffic

Not all applications have the same network requirements.

Some applications need to work in real time. Think of phone calls, Skype, Teams, and Zoom. You’re talking to a real person in real time, so your user experience relies on the regular delivery of packets.

Then there are applications like streaming video. That’s things like YouTube and Netflix. It doesn’t quite need to be real time, but for a good experience you don’t want it pausing and buffering.

To learn more about voice and converged networks, take a look at our voice mini-series.

And there’s transactional traffic. This includes traffic like web browsing, which uses the HTTP protocol. HTTP uses a series of commands, or transactions, to retrieve a web page for the browser to display.

In short, applications have different needs. To address those needs we must understand the different characteristics that affect them.

In particular, we want to look at bandwidth, loss, latency, and jitter.

In networking, bandwidth refers to the amount of data that can be transferred in a given time. We measure the maximum bandwidth of a link as the maximum number of bits per second it can transfer.

Some applications, such as video streaming, need a lot of bandwidth. Others, like using SSH to manage a router, need very little bandwidth.

Loss is where traffic does not reach its destination. There may be several places along the network where traffic can be lost.

We measure loss as the percentage of packets that didn’t arrive. For example, if we send 1000 packets and only 990 arrive, then this link has 1% loss.

Latency is the amount of time it takes a packet to travel from one endpoint to another. A similar term, Round Trip Time or RTT, is the time taken for the packet to reach the other end, and the response to come back.

Latency and bandwidth are often confused. Think of it like a water pipe. The bandwidth is how wide your pipe is, which controls how much water can fit into the pipe at one time. The latency is the length of the pipe, which affects how long it takes for water to travel the length of the pipe.

Applications like HTTP are tolerant to high latency. Others, particularly real-time traffic like voice and video, are more latency sensitive.

An interesting one is jitter. This relates to latency.

Latency measures the time for a packet to travel between two endpoints. Jitter measures the variance in this time.

Imagine we measure the latency of five packets. Delivery takes 10, 20, 7, 55, and 37 milliseconds. We have a lot of variance between the smallest and largest delivery times. This is jitter.

Jitter is particularly destructive to real time applications like voice and video. These need a steady stream of traffic. 

Of course, some jitter is normal. A lot of jitter means there may be a problem somewhere on the network. One possibility is that a link or a device is beyond its capacity. This slows down the delivery of some packets.


You can use your workstation to get some measurements. This can be helpful when troubleshooting a network issue.

Chapter 4 – Classification and Marking

There are two perspectives that we need to use when thinking of QoS.

On one hand, every device in the network has its own QoS configuration. Each device is responsible for making its own decisions and taking action. The term for this is PHB, or Per-Hop Behaviour.

But, all these independent devices work together to achieve a greater goal. That means we also need to think of QoS as ‘end-to-end’. That is, the configuration on all these devices should work toward the same goal.

Let’s consider this example network. We have a main office and a branch office. There are phones at each site, which make calls to each other.

Some network equipment is close to the end devices, such as the switches that the phones connect to. Other devices are in the middle of the network, while others are out on the edge.

We would consider voice call traffic between the phones to be of high importance. The phones definitely think so!

For this to be true, all routers and switches along the path also need to consider this to be of high importance.

While each device has its own QoS configuration, they all support the one end-to-end QoS goal.

So how does QoS do this? When a switch or router receives a packet, the first thing it will need to do is classify it. That is, when a router receives packets, it sorts them into categories called classes. We configure these classes ourselves.

We’ll continue with the example of a phone call. The phone generates the traffic and sends it to the switch. The switch trusts this traffic, and knows it is voice traffic. So, as each packet arrives, the switch categorises it as high-priority.

Each device will classify packets as they arrive.

After classification, a device may take action on the traffic.

It would be nice if we had a healthy network with low latency and high-bandwidth links. Then QoS wouldn’t need to do anything.

But that’s not always the case. Imagine that this device has recently received a lot of traffic, and its links are filling to capacity. There are too many packets, so it needs to decide which packets to send, and which to delay or drop.

This is per-hop-behaviour at work. This device uses its configuration to make its own independent decision about what to do. 

It will likely decide to send the voice call traffic first. It can then send the rest of the traffic when it can.

There are various actions that QoS can take. These include rate limiting traffic, queueing and scheduling packets, and marking the packets.

We’re going to look at all these actions. We’ll start with marking packets.

Before a device sends a packet out, it can add a marking in the Ethernet or IP header.

This marking is a value that identifies that traffic’s class. Its purpose is to allow other devices along the path to make informed decisions about how to handle the packet.

Marking is usually done as close to the source of the traffic as possible. In our example, the phone marks its own traffic. If the phone isn’t capable of doing this, we could configure the switch to do it.

When another device in the path receives the packet, it can read the marking, and classify it. While there are other ways to perform classification, this is the most useful in most cases.

These markings help us to achieve end-to-end QoS.

If we have a good reason, we can even re-mark packets. An example of this is if we receive traffic from the internet that contains markings.

We wouldn’t trust these markings, as we don’t know who set them or why. What if it’s an attacker who wants us to treat their packets as a priority?

In a case like this, we can change the markings on a packet, or remove them completely.

What we’ve seen here is the trust boundary. That is, the part of the network whose markings we trust.

There are three different ways we can mark packets and frames.

At layer-2, frames passing over a trunk link can have a marking added to the Ethernet header. We call this Class of Service.

At layer-3 there are two options; IP Precedence and DSCP. Both of these add a value in the IP header.

IP Precedence has been obsolete for some time, leaving DSCP as the preferred choice.

Ethernet frames consist of a header, payload, and trailer. In the header, there may be an 802.1q field. This field is present on trunk links.

Inside this field are a few subfields, one of which is the PRI field. This contains three Class of Service bits.

Three bits means we have eight different combinations. These are ‘Class Selectors’, or CS0 through to CS7.

The higher the class selector, the higher priority of the traffic.

We would use Class of Service markings if we’re dealing with layer-2 only traffic. Or, if we have switches that aren’t able to look at the IP headers.

The original type of layer-3 marking was IP Precedence. 

The IPv4 header used to have an 8-bit field called ‘Type of Service’. Like Class of Service, routers used three of these bits to mark packets. We call these values precedence 0 through to 7.

DSCP has since replaced IP Precedence. This is also known as DiffServ. Two reasons for this are:

  • It supports IPv6
  • It uses six bits for markings, not three

Of these six bits, three are the Class Selector. This makes DSCP backward compatible with IP Precedence. The remaining three are the drop probability.  

We’ll look at what these do soon. First, we need to understand classes, which are also known as Forwarding Classes.

A class is the category that we organise traffic into. For example, we could have a class for real-time traffic like voice and video. We could have another class for network traffic like OSPF and BGP. We can have a few classes or many classes.

When a packet is in a class, our devices can make decisions based on that class.

For example, a router would consider any traffic that is part of the real-time class to be very important. It will take action to make sure it delivers those packets on time.

The three class selector bits mark the forwarding class. That means that there are eight different forwarding classes. Each has its own name.

The special names used are ‘Assured Forwarding’, ‘Expedited Forwarding’, and ‘Best Effort’. You’ll see later on how these names make a bit more sense.

So far, this is not much different to IP Precedence, which also had eight categories. But, we still have the three bits from the Drop Probability field.

These act as a kind of subclass within each Assured Forwarding class. That means that we could have interactive video and streaming video in the same class, but assign them different drop probabilities.

This means that a router could prioritise interactive video over streaming video.

One last thing to notice here is that we don’t have multiple drop probabilities for every class. The reason is that we don’t really need that many different combinations. Even complicated networks are fine with the classes and drop probabilities shown here. 
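The Assured Forwarding names encode these two values directly, so you can convert a name into its decimal DSCP number. AFxy means forwarding class x with drop probability y, and the decimal value works out to 8x + 2y. A few worked examples:

```
AF41 = class 4, drop probability 1  →  DSCP (8×4)+(2×1) = 34
AF23 = class 2, drop probability 3  →  DSCP (8×2)+(2×3) = 22
EF   (Expedited Forwarding)         →  DSCP 46
CS0  (Best Effort)                  →  DSCP 0
```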


So you can see how marking traffic makes things easier on us. It’s especially useful for routers along the network path, as it helps them make good decisions.

Chapter 5 – Cisco’s MQC

On Cisco routers, we configure QoS using a tool called the MQC. This stands for Modular QoS CLI. This might sound fancy, but don’t let it intimidate you. All it is is a hierarchy of CLI commands, which we’ll see soon.

It works like this: We use one or more class maps to create classes, and assign traffic to them. For example, we may have a class for real-time traffic. We create a class-map to define this class, and place voice traffic in the class.

We then create a policy, using a policy-map. The policy map applies an action to traffic in a class. For example, we could mark traffic in the real-time class. We may also rate-limit traffic from another class.

And finally, we use a service policy to apply the policy map to an interface. We can apply this policy to traffic coming into or going out of an interface.

The router looks at traffic passing through the interface. It will check the service policy to see if it needs to perform an action on this traffic.

So to summarize, when we configure QoS, we:

  1. Create class-maps to classify traffic
  2. Create policy maps to apply actions to classes
  3. Assign a policy to an interface with a service-policy
  4. Verify settings

We’ll see this in action in a moment. First, a little warning for you. Some of these commands might seem intimidating at first. 

The key to getting comfortable with this is to try it yourself. The more you try it, the more it will make sense to you.

The lab at the end of the video will help with this.

We’ll now run through some QoS configuration, using a simple example. We’re going to treat all traffic to or from the server as important.

The first step in this is to classify the traffic. This means identifying traffic, and putting it into a class.

We can identify traffic using an Access List, also known as an ACL. We’ve got a separate video on that if you’re not familiar with them.

Often we think of ACLs as a way to allow or deny traffic, but they’re capable of much more. They are also used to identify interesting traffic.

So, we’ll create a new extended access-list called Web-Server.

We’re going to permit traffic from the server to anywhere. And we’re going to permit traffic from anywhere to the server as well.

When we use ACLs with Quality of Service, the permit keyword means ‘this is the traffic we’re looking for’. It doesn’t allow or block the traffic.
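On a Cisco router, the ACL could look something like the sketch below. The server’s address isn’t given in this lesson, so 198.51.100.10 is just a placeholder:

```
! 198.51.100.10 is a placeholder address for the server
ip access-list extended Web-Server
 permit ip host 198.51.100.10 any
 permit ip any host 198.51.100.10
```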

Now that we have the ACL, we can create the class-map. To do this, we use the class-map command.

Here, we’ve created a new class-map named Important-Traffic. We can choose whatever name we want. We’ve also used the match-all statement. I’ll come back to that in a moment.

Notice that we’re now in the class map configuration mode? This means we can add extra configuration within this class map.

Here we can add in our match criteria, using the match command. We’ve told the class-map to match the Web-Server ACL that we created a moment ago.

We can put several match criteria in here if we want to, to build more complicated class maps. For example, we might also look at markings, as well as the access list.

That’s where the match-all keyword comes in. This tells the class map that all the conditions we give it must match. If all conditions match, that traffic is part of this class map.

The alternative to match-all is match-any. This says that any of the conditions we supply must match. If any condition matches, that traffic will be part of this class map.
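Put together, a class-map like the one described here would look something like this:

```
class-map match-all Important-Traffic
 match access-group name Web-Server
```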

We can have more than one class-map of course. In fact, there’s already another class map here called class-default.

This is here to match all remaining traffic that we haven’t already put in a class map. If we try to configure this class, you’ll see this error message. This means that we can’t configure it.

Having this default class map here also means that all traffic will end up in some class-map.

The next step is to create a policy map, to take some action. We’ll call ours Super-Policy. Policy maps also have sub-configuration.

Inside the policy, we use the class command to add actions to the class maps that we defined. You’ll remember that we named our class-map Important-Traffic.

We’ll tell the router to give 50% of the link bandwidth to this class. This reserves bandwidth for our server. The server can use more than 50% if it’s available; this command just makes sure it gets at least 50% during congestion.

We can add the other 50% to the default class. We don’t strictly need to, as this class covers all remaining traffic anyway. But I’ve done it here to show you that it’s there.
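Sketching out the commands so far, the policy map would look something like this. Note that on older IOS versions you may need to adjust max-reserved-bandwidth before reserving a full 100% across classes:

```
policy-map Super-Policy
 class Important-Traffic
  bandwidth percent 50
 class class-default
  bandwidth percent 50
```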

There are several different actions that we can configure here. We’ll look at a few more in the next video.

The class maps and policies count for nothing if we don’t apply them somewhere. Let’s apply this to the gig0/0 interface. We do this in the interface configuration area.

We apply a policy with the service-policy command. Remember to use a hyphen between ‘service’ and ‘policy’, or you’ll make the same mistake I just did.

Notice that after the service-policy command, I’ve put the out keyword? This means that the policy will apply to traffic that’s leaving the interface. The alternative is the in keyword. This applies policy to traffic entering the interface.

In this way, we can apply different policies to traffic entering or leaving the router. A common example of this is marking traffic as it comes in, and rate-limiting some traffic as it leaves.
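The interface configuration described above would be along these lines. The full keyword is output, though IOS accepts the abbreviated out as well:

```
interface GigabitEthernet0/0
 service-policy output Super-Policy
```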

Some actions, like reserving bandwidth, only work on traffic leaving the router.

We can also add the bandwidth command. We saw this a few videos ago when talking about OSPF.

This doesn’t change the bandwidth of the link. It does give the router more information about the available bandwidth on the link. For example, this could be a 10Gig interface, but it’s connected to a 1Gig WAN link.

If we’re going to give 50% of the bandwidth to our server, it’s important that the router knows how much bandwidth that is.
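The bandwidth command takes a value in kilobits per second. For the 1Gig WAN example mentioned above, that might be:

```
interface GigabitEthernet0/0
 ! 1,000,000 kbps = 1 Gbps
 bandwidth 1000000
```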

That’s now the end of configuration. We can verify our settings with the show policy-map interface command.

This shows us the service-policy (that’s the policy-map) on this interface. We can also see whether it’s applied in the ingress or egress direction.

Under that we have our class-maps. At the bottom of each class is the action we’ve applied. Near the top, we can see the amount of traffic that has passed through this class-map.


So to summarize:

  • We identify traffic with class-maps
  • Policy-maps apply actions to the traffic in class-maps
  • Policy-maps are applied to an interface in a particular direction

Chapter 6 – MQC – Markings

Maybe we don’t want our router to reserve bandwidth. Maybe we’d rather mark the traffic, and let some other device manage the bandwidth.

Let’s edit our policy map to do this.

We’ll change our action under the Important-Traffic class. The first step is to remove the bandwidth command we used earlier.

We can now use the set dscp command to choose a marking to apply to this traffic. As this is important traffic, I’ll mark it as EF, or Expedited Forwarding.
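In the policy map, that change looks something like this:

```
policy-map Super-Policy
 class Important-Traffic
  no bandwidth percent 50
  set dscp ef
```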

And that’s all there is to marking traffic!

When we verify our QoS configuration, you’ll notice that there’s a new action listed. This shows that we’re setting the DSCP value of EF. So far, the router has not marked any packets.

Before finishing the video, I’d like to mention a nice tool called NBAR. That is, Network-Based Application Recognition.

NBAR is a tool built into Cisco routers that looks at the traffic passing through the router or switch. By looking at the traffic, it will work out which application is generating this traffic. This includes web applications like YouTube, Facebook, and others.

Using NBAR, we can apply QoS directly to applications, or types of applications. This makes marking so much easier.
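As a rough sketch, an NBAR-based class-map uses match protocol instead of an ACL. The exact protocol names available depend on your IOS version and NBAR protocol pack:

```
class-map match-any Streaming-Video
 match protocol youtube
 match protocol netflix
```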


In the lab today, we’re configuring QoS on a small network, as shown here. This includes creating classes, marking traffic, and allocating bandwidth.

We’ll also look at removing any markings that we receive on traffic from the internet.

End Scene

Well, we’ve covered a lot of how QoS works at a technical level. It’s important for us to remember the user though. When we configure QoS, we do this because we want our users to have a good experience.

In the next video, we’re going to look at some recommended class models, as well as shaping and rate limiting.