Test driving traffic shaping on Linux

In my last post, I shared a simple setup that does bandwidth limiting on Linux using TBF (Token Bucket Filter). The TBF-based approach applies a bandwidth throttle to the NIC as a whole.

Real-world requirements are often more complex than what that post described. Users typically want to control bandwidth based on the type of application generating the traffic.

Let's take a simple example: a user may want the available bandwidth to be shared by application traffic as follows

  • 50% of the bandwidth available to web traffic
  • 30% available to the mail service
  • 20% available to the rest of the applications

Traffic Control on Linux provides ways to achieve this using classful queuing disciplines.

In essence, this type of traffic control is achieved by first classifying the traffic into different classes and then applying traffic shaping rules to each of those classes.

The Hierarchical Token Bucket Queuing Discipline

Although Linux provides various classful queuing disciplines, in this post we will look at the Hierarchical Token Bucket (HTB) to solve the bandwidth sharing and control problem.

HTB is essentially a hierarchy of TBFs (the Token Bucket Filter we described in the last post) applied to a network interface. HTB works by classifying the traffic and applying a TBF to each individual class of traffic. So to understand HTB we must first understand the Token Bucket Filter.

How Token Bucket Filter works

Let's take a deeper look at how the Token Bucket Filter (TBF) works.

TBF works by attaching a bucket of tokens to the network interface; each time a packet needs to be sent over the interface, a token is consumed from the bucket. The kernel produces these tokens at a fixed rate to fill up the bucket.

When traffic flows at a slower pace than the rate of token generation, the bucket fills up. Once full, the bucket discards any extra tokens generated by the kernel. The tokens accumulated in the bucket allow a burst of traffic (limited by the size of the bucket) to pass over the interface.

When traffic flows at a pace higher than the rate of token generation, packets must wait until the next token is available in the bucket before being allowed to pass over the network interface.
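
For example, assuming a token generation rate equivalent to 100 kbit/s and a bucket (burst) size of 10 kB, an idle link accumulates enough tokens to let about 10 kB of queued packets go out back-to-back, after which traffic is paced down to the 100 kbit/s rate. (These numbers are only illustrative.)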

[Figure: Token Bucket Filter (TBF)]

In the tc command line, the size of the bucket corresponds to the burst parameter, the rate of token generation corresponds to rate, and the latency parameter specifies how long a packet can wait in the queue for a token before being dropped.
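
As a rough sketch (the interface name and numbers here are placeholders, not the exact values from the last post), a standalone TBF of this kind is attached with:

tc qdisc add dev eth0 root tbf rate 1mbit burst 10kb latency 50ms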

The HTB queuing discipline

The following figure describes the working of the HTB queuing discipline.

[Figure: HTB queuing discipline]

To apply the HTB discipline we have to go through the following steps

  • Define different classes with their rate limiting attributes
  • Add rules to classify the traffic into different classes

In this example we will implement the same traffic sharing requirement mentioned in the introduction: web traffic gets 50% of the bandwidth, mail gets 30%, and the remaining 20% is shared by all other traffic.

The following commands define the various classes with their traffic limits

tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 50kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 30kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 20kbps ceil 100kbps
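
Note that tc interprets "kbps" as kilobytes per second ("kbit" would mean kilobits per second), so the rates above are in kilobytes per second. Once the hierarchy is in place, the configured classes can be listed to confirm the setup:

tc class show dev eth0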


Now we must classify the traffic into these classes based on match conditions. The following rules classify web and mail traffic into classes 1:10 and 1:20. All other traffic is pushed to class 1:30 by default (as specified by the default 30 option on the root qdisc).

tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip dport 80 0xffff flowid 1:10 
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip dport 25 0xffff flowid 1:20
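
The installed filters can be inspected with tc filter show dev eth0, and the whole configuration can be removed at any time by deleting the root qdisc:

tc qdisc del dev eth0 root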

Verifying the results

We will use iperf to verify the results of the traffic control changes. Use the following command to start two instances of iperf on the server

iperf -s -p <port number> -i 1

For our example, we use the following commands to start the servers

iperf -s -p 25 -i 1
iperf -s -p 80 -i 1
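
Note that binding iperf to privileged ports such as 25 and 80 normally requires root; the ports only need to match the dport values used in the filters above so that the test traffic lands in the right classes.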

The clients are started with the following commands

iperf -c <server ip> -p <port number> -t <time period to run the test>

For this example we used the following commands to run the test for 60 seconds against the server at 192.168.90.4.

iperf -c 192.168.90.4 -p 25 -t 60
iperf -c 192.168.90.4 -p 80 -t 60

Here is a snapshot of output from the test.

[Screenshot: iperf test output]

It shows the web traffic at close to 500kbps and the mail traffic at close to 300kbps, the same ratio in which we wanted to shape the traffic. When excess bandwidth is available, HTB splits it in the same ratio that we configured for the classes.
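
The same split should also show up in the per-class byte and packet counters kept by tc, which can be dumped with:

tc -s class show dev eth0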
