Test driving OpenWRT

Recently I have been looking at tools for managing and monitoring my home network. In my previous post I talked about using a Network Namespace to control the download limit.

Now I wanted to look at more advanced tools for the job. OpenWRT is a Linux-based router firmware that supports a wide range of networking hardware. I am exploring the possibility of flashing OpenWRT onto my backup router at home.

To test OpenWRT I used a KVM image (which can be found here) and started a VM on my desktop. The following diagram shows the network topology.

Slide1

A little tweaking is required to make OpenWRT work with libvirt. The idea is to push the desktop's traffic through OpenWRT and apply traffic monitoring and policies there.

Libvirt provides a dnsmasq service that listens on the bridge virbr0 and hands out DHCP addresses to the VMs. It also configures NAT rules for traffic going out from the VMs through virbr0.

  • For this test we will remove the NAT rules on the bridge virbr0. All applications on the desktop will communicate through this bridge to OpenWRT, which will route the traffic to the Internet.
  • I also stopped the odhcpd and dnsmasq servers running on OpenWRT and started a DHCP client on the LAN interface (br-lan) to request an IP from libvirt. The commands involved are sketched below.
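
Roughly, the changes boil down to commands like the following (a sketch; the exact NAT rules depend on your libvirt network definition, and 192.168.122.0/24 is libvirt's default subnet):

# On the desktop: list libvirt's NAT rules for the default network and
# delete the ones that masquerade traffic from the VMs
sudo iptables -t nat -S POSTROUTING | grep 192.168.122
sudo iptables -t nat -D POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE

# On the OpenWRT VM: stop its own DHCP services and request an address
# on the LAN bridge from libvirt's dnsmasq (BusyBox udhcpc ships with
# OpenWRT; dhclient works as well if installed)
/etc/init.d/dnsmasq stop
/etc/init.d/odhcpd stop
udhcpc -i br-lan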

Once OpenWRT is booted you can log in to the web interface of the router to configure it.

The following figure shows the networking inside OpenWRT router.

Slide2

The routing table on my desktop is as follows:

Screenshot from 2016-03-06 20:34:41

The routing table on the OpenWRT server is shown below:

Screenshot from 2016-03-06 20:34:29

OpenWRT allows installation of extra packages to enhance its functionality. I could find packages like quagga and bird, which will be interesting to explore.

Screenshot from 2016-03-06 17:51:13.png

It also provides traffic monitoring and classification.

Screenshot from 2016-03-06 19:41:27

OpenWRT provides firewall configuration using iptables.

Screenshot from 2016-03-06 17:48:57

I will be exploring more of its features before deciding if I will flash it on my backup home router.

Rate Limiting ACT broadband on Ubuntu

ISPs have started providing high-bandwidth connections, but the FUP (Fair Usage Policy) limit is still not generous (I am using ACT Broadband). Spend most of your time on YouTube and the download limit gets exhausted rather quickly.

As I use Ubuntu on my desktop, I decided to use TC to throttle my Internet bandwidth and bring some control over my usage. Have a look at my previous posts about rate limiting and traffic shaping on Linux to learn about the usage of TC.

Here is my modest network setup at home.

Slide1

The problem is that TC can only shape traffic going out of an interface (egress), so shaping on the desktop's NIC will not limit the download bandwidth.

The Solution

To get around this problem I introduced a Linux network namespace into the topology. Here is how the topology looks now.

Slide2

I use this script to set up the upload/download bandwidth limits.
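
For reference, here is a rough sketch of the approach such a script can take. The interface names, the 10.200.0.0/24 addressing and the 1024kbit rate are illustrative and the actual script may differ; the idea is to move the uplink NIC into a namespace that routes and NATs the host's traffic, so both directions cross a veth pair where TBF can be applied.

#!/bin/sh
NS=wan
UPLINK=eth0                  # physical interface towards the ISP router
RATE=1024kbit

# Namespace with a veth pair back to the host; the uplink NIC moves into it
ip netns add $NS
ip link add veth-host type veth peer name veth-wan
ip link set veth-wan netns $NS
ip link set $UPLINK netns $NS

# Inside the namespace: addresses, uplink via DHCP, forwarding and NAT
ip netns exec $NS ip link set lo up
ip netns exec $NS ip addr add 10.200.0.2/24 dev veth-wan
ip netns exec $NS ip link set veth-wan up
ip netns exec $NS dhclient $UPLINK
ip netns exec $NS sysctl -w net.ipv4.ip_forward=1
ip netns exec $NS iptables -t nat -A POSTROUTING -o $UPLINK -j MASQUERADE

# On the host: the default route now points into the namespace
ip addr add 10.200.0.1/24 dev veth-host
ip link set veth-host up
ip route replace default via 10.200.0.2

# Upload limit: egress of veth-host carries traffic leaving the host
tc qdisc add dev veth-host root tbf rate $RATE latency 50ms burst 15k
# Download limit: egress of veth-wan carries traffic coming back to the host
ip netns exec $NS tc qdisc add dev veth-wan root tbf rate $RATE latency 50ms burst 15k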

Results

Here are the readings before and after applying the throttle.

Before

media-20160302

After rate-limiting to 1024Kbps upload and download

media-20160302-1.png

Neutron Extension to add a console to Virtual Router

I have written a small extension to Neutron that adds a console to virtual routers. This can help the tenant understand the networking setup and debug issues. The console allows a very limited set of commands to be executed within the virtual router (a Linux network namespace).

The demo below is a proof of concept of the idea; although it shows the console working with Linux network namespaces, it can easily be adapted to other implementations.
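
With the namespace-based implementation, the console essentially runs a whitelisted set of commands inside the router's qrouter-<router-id> namespace on the network node, roughly along these lines (a sketch of the mechanism, not the extension's actual code):

# List the virtual router namespaces present on the network node
sudo ip netns list | grep qrouter

# Run a whitelisted command inside one of them (the router ID is a placeholder)
sudo ip netns exec qrouter-<router-id> ip addr show
sudo ip netns exec qrouter-<router-id> ip route show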

The CLI of the console is very configurable.

Here is the video

Test driving LXD

Overview

LXD provides OS containers (unlike Docker, which provides application-level containers) and a more complete OS user-space environment. It is an improved version of LXC and adds image management, live migration of container instances and OpenStack integration.

LXD is composed of two components: the server daemon, called ‘lxd’, and the client, called ‘lxc’.

Installation

The latest version of LXD can be installed from the PPA repository:

sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master
sudo apt-get update
sudo apt-get install lxd

Image repositories

You will need an LXC image to start your container. To get a container image you first have to add an LXC image repository location:

sudo lxc remote add images images.linuxcontainers.org --debug
LXD1
sudo lxc remote list

LXD5

Next you can import the LXC image

sudo lxd-images import lxc ubuntu trusty amd64 --alias ubuntu --alias Ubuntu
sudo lxc image list

LXD2

Starting your Linux Container instance

sudo lxc launch ubuntu u1
sudo lxc list

LXD3
sudo lxc exec u1 /bin/bash

LXD4

 

Visualizing KVM (libvirt) network connections on a Hypervisor

At times I have found the need to visualize the network connectivity of KVM instances on a hypervisor host. I normally use libvirt and KVM for launching my VM workloads. In this post we will look at a simple script that can parse the information available from libvirt and the host kernel to plot the network connectivity between the KVM instances. The script can handle both Linux bridges and OVS-based switches.
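
The raw information the script stitches together can also be inspected by hand; for example (the domain name below is just a placeholder):

virsh domiflist <domain>     # interface, type, source bridge and MAC per VM
brctl show                   # ports attached to each Linux bridge
ovs-vsctl show               # ports attached to each OVS bridge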

It can generate a GraphViz-based rendering of the topology (requires pygraphviz), use networkx and d3.js to produce a web page served by a simple web server, or just emit a JSON description of the network graph.

The source of the script is available here.

The following is a sample output of my hypervisor host.

SVG using d3js
d3_svg

PNG using GraphViz
gviz

Json text
json_net

Fullscreen display for Core Linux on VirtualBox

The default screen resolution for Core Linux on VirtualBox is 1024×768 and does not make good use of the screen area.

default

I came across this post on the Core Linux forum that provides steps to set the correct screen resolution. I am capturing the steps in this post for my reference; all credit is due to the original poster.

I found 1440×762 to be a better resolution if you are going to use VirtualBox in windowed mode, while 1440×900 is better suited for full-screen mode. In the following commands, "linux" is the name of my VM.

VBoxManage setextradata "linux" "VBoxInternal2/UgaHorizontalResolution" 1440
VBoxManage setextradata "linux" "VBoxInternal2/UgaVerticalResolution" 762
VBoxManage setextradata "linux" "CustomVideoMode1" 1440x762x24
VBoxManage setextradata "linux" "GUI/CustomVideoMode1" 1440x762x24

Once done, you need to update the .xsession file in your home directory with the proper resolution:

/usr/local/bin/Xvesa -br -screen 1440x762x24 ...

Now reboot the VM to activate the new resolution, or exit to the prompt and restart the window manager with the "startx" command.

Here is the result after the configuration

custom

Implementing Basic Networking Constructs with Linux Namespaces

In this post, I will explain the use of Linux network namespaces to implement basic networking constructs like an L2 switched network and a routed network.

Let's start by looking at the basic commands to create, delete and list network namespaces on Linux.
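
For reference, the basic namespace operations look like this:

ip netns add ns1             # create a namespace
ip netns list                # list the namespaces on the host
ip netns exec ns1 ip addr    # run a command inside the namespace
ip netns delete ns1          # delete the namespace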

The next step is to create a LAN. We will use namespaces to simulate two nodes connected to a bridge, giving us a LAN inside the Linux host. We will implement a topology like the one shown below.

lan
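
A minimal sketch of this LAN using a Linux bridge and veth pairs (the names and the 192.168.50.0/24 subnet are illustrative):

# Two "nodes" (namespaces) and a Linux bridge acting as the switch
ip netns add node1
ip netns add node2
ip link add br0 type bridge
ip link set br0 up

# One veth pair per node: one end inside the namespace, the other on the bridge
ip link add veth1 type veth peer name veth1-br
ip link set veth1 netns node1
ip link set veth1-br master br0
ip link set veth1-br up

ip link add veth2 type veth peer name veth2-br
ip link set veth2 netns node2
ip link set veth2-br master br0
ip link set veth2-br up

# Address the nodes and test L2 connectivity across the bridge
ip netns exec node1 ip addr add 192.168.50.1/24 dev veth1
ip netns exec node1 ip link set veth1 up
ip netns exec node2 ip addr add 192.168.50.2/24 dev veth2
ip netns exec node2 ip link set veth2 up
ip netns exec node1 ping -c 3 192.168.50.2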

Finally, let's see how to simulate a router connecting two LAN segments. We will implement the simplest possible topology, with just two nodes connected to a router on different LAN segments.

router
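
A minimal sketch of the routed topology (the names and the 10.10.1.0/24 and 10.10.2.0/24 subnets are illustrative):

# Namespaces: two hosts and one router
ip netns add host1
ip netns add host2
ip netns add router

# veth pairs: host1 <-> router and host2 <-> router
ip link add h1-eth0 type veth peer name r-eth0
ip link add h2-eth0 type veth peer name r-eth1
ip link set h1-eth0 netns host1
ip link set h2-eth0 netns host2
ip link set r-eth0 netns router
ip link set r-eth1 netns router

# Router: one interface per LAN segment, with forwarding enabled
ip netns exec router ip addr add 10.10.1.1/24 dev r-eth0
ip netns exec router ip addr add 10.10.2.1/24 dev r-eth1
ip netns exec router ip link set r-eth0 up
ip netns exec router ip link set r-eth1 up
ip netns exec router sysctl -w net.ipv4.ip_forward=1

# Hosts: address each node and point its default route at the router
ip netns exec host1 ip addr add 10.10.1.2/24 dev h1-eth0
ip netns exec host1 ip link set h1-eth0 up
ip netns exec host1 ip route add default via 10.10.1.1

ip netns exec host2 ip addr add 10.10.2.2/24 dev h2-eth0
ip netns exec host2 ip link set h2-eth0 up
ip netns exec host2 ip route add default via 10.10.2.1

# host1 should now reach host2 across the router
ip netns exec host1 ping -c 3 10.10.2.2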

Test driving traffic shaping on Linux

In my last post, I shared a simple setup that does bandwidth limiting on Linux using TBF (Token Bucket Filter). The TBF based approach applies a bandwidth throttle on the NIC as a whole.

The situation in reality might be more complex than what that post described. Users typically want to control bandwidth based on the type of application generating the traffic.

Let's take a simple example: the user may want their bandwidth to be shared by application traffic as follows

  • 50% bandwidth available to web traffic
  • 30% available to mail service
  • 20% available for the rest of the applications

Traffic Control on Linux provides ways to achieve this using classful queuing disciplines.

In essence, this type of traffic control is achieved by first classifying the traffic into different classes and then applying traffic-shaping rules to each of those classes.

The Hierarchical Token Bucket Queuing Discipline

Although Linux provides various classful queuing disciplines, in this post we will look at the Hierarchical Token Bucket (HTB) to solve the bandwidth sharing and control problem.

HTB is essentially a hierarchy of TBFs (the Token Bucket Filter described in the last post) applied to a network interface. HTB works by classifying the traffic and applying a TBF to each individual class of traffic. So to understand HTB we must understand the Token Bucket Filter first.

How Token Bucket Filter works

Let's take a deeper look at how the Token Bucket Filter (TBF) works.

The TBF works by having a bucket of tokens attached to the network interface; each time a packet needs to be passed over the interface, a token is consumed from the bucket. The kernel produces these tokens at a fixed rate to fill up the bucket.

When the traffic is flowing at a slower pace than the rate of token generation, the bucket will fill up. Once full, the bucket discards all the extra tokens generated by the kernel. The tokens accumulated in the bucket allow a burst of traffic (limited by the size of the bucket) to pass over the interface.

When the traffic is flowing at a pace higher than the rate of token generation, packets must wait until the next token is available in the bucket before being allowed to pass over the network interface.

TBF

In the tc command line, the size of the bucket corresponds to the burst parameter, the rate of token generation to the rate parameter, and latency specifies how long a packet can wait in the queue for a token before being dropped.

The HTB queuing discipline

The following figure describes the working of HTB queuing discipline.

HTB

To apply the HTB discipline we have to go through the following steps:

  • Define different classes with their rate-limiting attributes
  • Add rules to classify the traffic into the different classes

In this example we will implement the same traffic-sharing requirement mentioned in the introduction: web traffic gets 50% of the bandwidth, mail gets 30%, and the remaining 20% is shared by all other traffic.

The following rules define the various classes with their traffic limits:

tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 50kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 30kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 20kbps ceil 100kbps

Slide1

Now we must classify the traffic into these classes based on match conditions. The following rules classify the web and mail traffic into classes 1:10 and 1:20. All other traffic is pushed to class 1:30 by default:

tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip dport 80 0xffff flowid 1:10 
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip dport 25 0xffff flowid 1:20

Verifying the results

We will use iperf to verify the results of the traffic control changes. Use the following command to start 2 instances of iperf on the server

iperf -s -p <port number> -i 1

For our example, we use the following commands to start the servers

iperf -s -p 25 -i 1
iperf -s -p 80 -i 1

The clients are started with the following commands

iperf -c <server ip> -p <port number> -t <time period to run the test>

For this example we used the following commands to run the test for 60 seconds against the server at 192.168.90.4:

iperf -c 192.168.90.4 -p 25 -t 60
iperf -c 192.168.90.4 -p 80 -t 60

Here is a snapshot of output from the test.

HTB

It shows the web traffic close to 500kbps and the mail traffic close to 300kbps, the same ratio in which we wanted to shape the traffic. When excess bandwidth is available, HTB splits it in the same ratio configured for the classes.

Network Bandwidth Limiting on Linux with TC

On Linux, the traffic queuing discipline attached to a NIC can be used to shape outgoing bandwidth. By default, Linux uses pfifo_fast as the queuing discipline. Use the following command to verify the setting on your network card:

# tc qdisc show dev eth0
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

Measuring the default bandwidth

I am using the iperf tool to measure the bandwidth between my VirtualBox instance (10.0.2.15) and my desktop (192.168.90.4), which acts as the iperf server.

Start the iperf server with the following command

iperf -s

The client can then connect using the following command

iperf -c <server address>

The following figure shows the bandwidth available with the default queue setting.

TBF1

Limiting Traffic with TC

We will use the Token Bucket Filter to throttle the outgoing traffic. The following command sets an egress rate of 1024kbit with a latency of 50ms and a burst size of 1540 bytes:

# tc qdisc add dev eth0 root tbf rate 1024kbit latency 50ms burst 1540

Run tc qdisc show again to verify that the tbf discipline has replaced pfifo_fast:

# tc qdisc show dev eth0

Verifying the result

I measured the bandwidth again to make sure the new queuing configuration is working and sure enough, the result from iperf confirmed it.

TBF2

The tc -s qdisc show dev eth0 command shows the detailed statistics of the queuing discipline.

TBF3

Impact of different parameters for Token Bucket Filter (TBF)

Decreasing the latency value leads to packet drops; the following figure captures the result after the latency was dropped to 1ms.

TBF5

Providing a big burst buffer defeats the rate limiting.

TBF4

TBF6

Both parameters need to be chosen properly to avoid packet loss and spikes in traffic beyond the rate limit.