SecureNet: Simulating a Secure Network with Mininet

I have been working with OpenStack (devstack) for a while, and I must say it is quite convenient to bring up a test setup with it. Still, at times devstack feels like overkill for a quick test to verify your understanding of the network, security rules, routing, etc.

This is where Mininet shines. It is very light on resources and extremely fast at getting your topology up and running. It cuts the setup down to the absolute necessities, and it has proven invaluable to me for trying out various topologies and tests.

Lacking Security Device simulation

The default Mininet toolbox comes with switch and host nodes. The switch is primarily a controller-based SDN switch such as Open vSwitch or IVS. The primary focus of Mininet has been L2 networks with SDN controllers.

My goal, however, was to test routing and security much like what is available on OpenStack. I found an example topology in the Mininet code simulating a LinuxRouter: basically, a Host node (a network namespace) configured to do L3 forwarding.

This gave me the initial idea of implementing security devices with Mininet; after all, the reference network services (routing and firewall) in OpenStack are based on namespaces.

A Simulated Perimeter Firewall

As IPTables is available within the namespace, I decided to use it to implement a perimeter firewall that inspects the traffic between the networks. I kept my firewall rules in a separate chain and redirected all traffic hitting the FORWARD chain to my custom chain, PFW. The last rule in the FORWARD chain drops all traffic.
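
A minimal sketch of the corresponding IPTables commands (assuming the custom chain is named PFW and the commands run inside the firewall node's namespace):

iptables -N PFW                 # custom chain holding the firewall rules
iptables -A FORWARD -j PFW      # send all forwarded traffic through PFW
iptables -A FORWARD -j DROP     # drop anything PFW did not accept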

So when I started my topology with the perimeter firewall, the ping test confirmed that no traffic was flowing between the networks.

MN1

To allow traffic, we need to explicitly configure the firewall to permit packets.

MN3
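
As an illustration (the addresses here are made up; the actual rules are in the screenshots), an allow rule in the PFW chain could look like this:

iptables -A PFW -s 10.0.0.0/24 -d 30.0.0.0/24 -j ACCEPT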

This took care of securing the network boundary, but what about traffic flowing within the network? OpenStack handles this with micro-segmentation using Security Groups.

Simulating Micro-Segmentation with OpenStack-like Security Groups

As I was using Open vSwitch in standalone mode for my topology, I decided to use the OVS packet filtering capabilities to implement a firewall within the network itself. Here is a sample OVS rule that matches and filters on L3 fields.

ovs-ofctl add-flow switch1 dl_type=0x0800,nw_src=30.0.0.100,nw_dst=30.0.0.101,actions=drop

This produces the following flow entry (which can be verified with ovs-ofctl dump-flows switch1):

cookie=0x0, duration=9.050s, table=0, n_packets=0, n_bytes=0, idle_age=9, ip,nw_src=30.0.0.100,nw_dst=30.0.0.101 actions=drop

Updated CLI to allow Firewall Rules

All this looks good, but it is a lot of work to configure all the rules manually. So I started thinking: how can this be automated?

I extended the switch node in Mininet to implement a secure switch, and while at it I also implemented the perimeter firewall as a specialized Host node. The intention behind these special nodes was to add the capability of configuring them with security settings. By this time, it was already looking like an interesting weekend project 🙂
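
A rough sketch of the idea (the class and method names here are illustrative, not the actual SecureNet code):

from mininet.node import OVSSwitch, Host

class SecureSwitch(OVSSwitch):
    "An OVS switch that can be programmed with coarse L3 filtering rules."

    def add_drop_rule(self, src_ip, dst_ip):
        # Install an OpenFlow rule dropping IP traffic from src_ip to dst_ip
        self.cmd('ovs-ofctl add-flow %s '
                 'dl_type=0x0800,nw_src=%s,nw_dst=%s,actions=drop'
                 % (self.name, src_ip, dst_ip))

class PerimeterFirewall(Host):
    "A host node (namespace) running IPTables as a perimeter firewall."

    def config(self, **params):
        super(PerimeterFirewall, self).config(**params)
        self.cmd('sysctl -w net.ipv4.ip_forward=1')
        self.cmd('iptables -N PFW')              # custom chain for firewall rules
        self.cmd('iptables -A FORWARD -j PFW')   # send forwarded traffic to PFW
        self.cmd('iptables -A FORWARD -j DROP')  # default: drop everything else

    def add_allow_rule(self, src_net, dst_net):
        # Permit traffic between two networks across the firewall
        self.cmd('iptables -A PFW -s %s -d %s -j ACCEPT' % (src_net, dst_net))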

So I got into it and implemented a set of CLI commands for Mininet to configure the firewall capabilities on the topology (using the security nodes I introduced). I was calling it the Secure Network.

I wanted to keep things simple to start with, so I kept the rule definitions very coarse: L3 filtering only (I may look at L4 in the future). Here is the CLI I came up with.

Securenet] secrule add allow [addr1] to [addr2]
Securenet] secrule add deny [addr1] to [addr3]

Here addr1, addr2 and addr3 are micro-segments (or Security Groups).

Now, to define firewall rules with micro-segments (let's call them Security Groups, as that is what I was trying to simulate in the first place), I needed a way to define the Security Groups and associate hosts with them.

To keep things simple, I went for automatic creation of Security Groups when a host association command is entered. This was done by extending the host commands. Here is an example of creating a Security Group and associating a host with it.

Securenet] h30 bind sg1

The above command creates a Security Group called sg1 and associates host h30 with it.

To add another host to the same Security Group, issue the same command with a different host name.

Securenet] h31 bind sg1

This adds h31 to sg1. To view the members of the security group use the sghosts command

Securenet] sghosts sg1
0: h30
1: h31

Reacting to Security Group changes with events

By this time, I was already too excited to stop and was thinking of the interactions between the Firewall rules and Security Group definitions.

As the high-level firewall rules are composed of Security Groups, the firewall must react to changes in the Security Group definitions. This means a change in a Security Group definition must produce some kind of event observable by the firewall, which then updates its configuration accordingly to keep the firewall rule constraints satisfied.

To do this, I extended the Topology and Mininet classes so that any change in a Security Group definition triggers a re-evaluation of the firewall rules.
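
A simplified sketch of the event hookup (illustrative names, not the actual SecureNet code): the Security Group notifies interested security nodes whenever its membership changes, and they rebuild their rules.

class SecurityGroup(object):
    "A named group of hosts that notifies listeners when its membership changes."

    def __init__(self, name):
        self.name = name
        self.hosts = []
        self.listeners = []   # e.g. the perimeter firewall and the secure switches

    def bind(self, host):
        self.hosts.append(host)
        self.notify()

    def notify(self):
        # Any membership change triggers a re-evaluation of the firewall rules
        for listener in self.listeners:
            listener.on_sg_change(self)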

MN4

Again, to keep things reasonably simple, I went with a rip-and-replace of the firewall configuration (both IPTables and OVS) and did not bother about the traffic impact (we are in a simulation, after all). This might be an interesting area to explore in the future.

Simulating IPSets

One thing I was missing while defining the firewall rules was a way to target a set of external IP addresses. As the IP addresses associated with a Security Group are derived from its member hosts, there was no way to define rules based on addresses that are not part of the topology.

So another set of CLI commands was introduced to define groups based on IP addresses manually, without any topology-based constraints.

Securenet] secrule ipg add ipg1 1.1.1.1
Securenet] secrule ipg add ipg2 2.1.1.1
Securenet] secrule ipg add ipg3 2.1.1.2

CLI to view a list of IPGs

Securenet] secrule ipg list
0: ipg2
1: ipg3
2: ipg1

And to view its content use the following

Securenet] secrule ipg show ipg1
0: ipgh-1.1.1.1/32

With IPGs it was possible to have rules with arbitrary addresses.
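
For example, assuming the secrule syntax accepts an IPG name wherever a Security Group is expected (a hypothetical usage, not taken from the actual CLI output), a rule against an external address set could look like:

Securenet] secrule add deny sg1 to ipg1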

Conclusion

Here is a run of a simple set of commands on the SecureNet topology.

MN5

The updated IPTables config in the Perimeter Firewall.

MN6

And the security rules in OVS

MN7

This was more of a fun long-weekend project for me. Once I dived in, I realized the number of challenges posed by such an undertaking: rule optimization, minimizing config changes, targeting the right device to push a security rule to, detecting duplicate and conflicting rules, and so on. Also, as I thought through more use cases, I realized that firewall rule definition needs a language of its own.

But at the end of the weekend, I feel it was mostly a lot of fun.


Control Plane for our virtual network

In the last two blogs, I went through the process of developing a VPN-based virtual network. One thing we ignored is the amount of configuration that needs to change to add or remove nodes or to provision new edge routers.

Some of these steps are part of infrastructure provisioning, like connecting the routers over L3 links and setting up the VPN tunnels; these can be considered static, one-time activities. Others are much more transient, like adding a device to the network or adding another edge router.

Managing such a network manually is time-consuming and error-prone. In this blog, I will explore ways to automate this activity.

Let’s talk about adding a device to our network

What does it take to add a new device to the network?

Slide4

In terms of provisioning

  • First, we need to know the IP address associated with the device.
  • Then we need to add a host route for it to all the edge routers (local and remote) to enable communication between this new device and the devices already present in the network.

The reverse needs to happen when a device is taken off the network: the host routes associated with the device need to be removed.
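
In plain routing terms this boils down to something like the following, reusing the vrouter names and tunnel addresses from the previous blog's topology and assuming the new device 20.0.0.5 attaches to a new interface vr_if3 on vrouter1 (both purely illustrative):

vrouter1 > ip route add 20.0.0.5/32 dev vr_if3
vrouter2 > ip route add 20.0.0.5/32 nexthop via 2.0.0.1

Removing the device is the matching ip route del on every edge router.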

How about adding an Edge Router?

Adding a new edge router is a little more involved. Let’s say we have to add a new edge router to an existing network and also add a new device to it (because that is mostly the only reason to add a new edge router).

In terms of provisioning

  • The new edge router must add host routes for all the devices existing in the network.
  • Then it must follow the same steps as described earlier to add the new device to the network.

Removing an edge router means removing all the devices connected to it. In most cases, removing an edge router that still has devices connected may not be a valid operation (but we can always think of a system failure causing such a situation).

How to automate?

From the above discussion, we can see that the job of adding and removing devices and edge routers is made up of two tasks.

  • Distributing information about devices joining and leaving the network, i.e. distributing a host route when a device is added and retracting it when the device leaves.
  • Adding the host routes to the edge routers. This step is about pushing (programming) the route into the routing table of each edge router.

This job description exactly fits the skills of a routing protocol 🙂

BGP as control plane

BGP is well known for its ability to distribute routes and for its scalability. Let's explore how BGP can be used to distribute the routes for both cases above. Fortunately, BGP is designed for exactly this kind of job and already takes care of cases like a new edge router joining the network.

For our example, we will use GoBGP to distribute the host routes. All the edge routers run BGP and peer with each other.

If you are interested in details, please look at my previous blogs exploring goBGP and how to publish routes.

Adding a device node must trigger a route publish on BGP.
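
With GoBGP, that publish step could be as simple as the following, run on the edge router where the device was added (the /32 prefix and the tunnel next hop are illustrative):

vrouter1 > gobgp global rib add 20.0.0.5/32 nexthop 2.0.0.1 -a ipv4

And when the device leaves the network:

vrouter1 > gobgp global rib del 20.0.0.5/32 -a ipv4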

Adding an edge router is automatically taken care of: the new edge router forms BGP peerings with the rest of the edge routers and receives all the existing host routes.

Other Options: Using a Controller

The other option is to use an SDN controller that listens for events like devices joining/leaving or new edge routers joining the network, and sends commands to orchestrate the edge routers.

This is a more centralized and ground up approach, but it gives you the flexibility of designing your own event handlers.

We have already discussed the events that need to be handled in the previous section; I will not go into the details of a controller design (not in this post :)).

Programming the routes

The second task in automating device join/leave is programming the routes in the edge routers. This can be done with agents running on the edge routers that monitor BGP (or listen to commands from the SDN controller) and configure the routes.
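
With GoBGP, for example, such an agent could watch the RIB for updates and translate each advertised or withdrawn host route into an ip route replace/del on the local routing table (a sketch of the idea, not a complete agent):

vrouter2 > gobgp monitor global rib -a ipv4 -j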

Designing a L2 virtual network

In the previous blog, we saw how we can connect two devices in the same subnet over an L3 device. With some configuration to enable Proxy ARP and host routes on the L3 device, we were able to simulate an L2 network over it.

The important thing to note from the previous blog is that we did not make any change on the device nodes. All changes were made to the router.

So, what can we achieve with this capability?

In this blog, I will be taking that experience to the next level by designing a virtual L2 network using routers.

But most importantly we will be using only common Linux utilities and kernel features to achieve this.

Connecting nodes across multiple routers

Let’s start with setting up a simple topology for this experiment. We have two routers each connecting 2 device nodes. The routers themselves are connected to each other using a L3 link.

Slide2

In a way, each router with its nodes is a replica of the setup from my first blog. The end nodes are the devices, while the middle nodes are the routers.

Device     Interface   IP Address
Node_1     node_if0    20.0.0.1/24
Node_2     node_if0    20.0.0.2/24
Vrouter1   vr_if1      None
           vr_if2      None
           eth1        1.0.0.1/24
Node_3     node_if0    20.0.0.3/24
Node_4     node_if0    20.0.0.4/24
Vrouter2   vr_if1      None
           vr_if2      None
           eth1        1.0.0.2/24

Look at the commands from my previous blog to see how to bring up this topology.

Making it work

To make it work, each router needs to be configured with Proxy ARP (refer to the previous blog). Each router must also be configured with host routes for all the device nodes. This means we have host routes for the devices directly connected to the router and also for the ones connected via the second router.

Here is an example routing table listing with routes for directly connected devices.

vrouter1 > route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
1.0.0.0         0.0.0.0         255.255.255.0   U     0      0        0 eth1
20.0.0.1        0.0.0.0         255.255.255.255 UH    0      0        0 vr_if1
20.0.0.2        0.0.0.0         255.255.255.255 UH    0      0        0 vr_if2

Use the following commands to set up the remote routes.

vrouter1 > ip route add 20.0.0.3/32 nexthop via 1.0.0.2
vrouter1 > ip route add 20.0.0.4/32 nexthop via 1.0.0.2

Similar route configuration will be needed for vrouter2 to reach node_1(20.0.0.1) and node_2(20.0.0.2)

Caveat: I found that L2 learning did not work properly when I connected multiple routers. To rescue the situation, I added static ARP entries.

This means manually creating ARP entries on each router for its directly connected devices. Use the following command to set up a static ARP entry (substituting the node number and MAC address):

arp -s 20.0.0.${node_no} $node_mac

With the above taken care of, you should be able to ping nodes across the multiple routers.

node_1 > ping 20.0.0.3
PING 20.0.0.3 (20.0.0.3) 56(84) bytes of data.
64 bytes from 20.0.0.3: icmp_seq=1 ttl=62 time=437 ms
64 bytes from 20.0.0.3: icmp_seq=2 ttl=62 time=1.02 ms
64 bytes from 20.0.0.3: icmp_seq=3 ttl=62 time=0.947 ms
^C
--- 20.0.0.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.947/146.511/437.563/205.804 ms

L3 fabric as a transport

Now that we have a workable L2 network that can span across multiple routers, is there anything else required to make this solution work in a generic environment?

If you look closely, you will realize that although the routers in our experiment are connected over an L3 link, the host routes in our end routers depend on knowledge of the exact path connecting them. For example, if we introduce a third router between them, our solution will not work, as the remote host routes now need to be adjusted to account for the extra hop introduced by the third router. Additionally, all the routers in the path connecting the end routers need to be aware of the host routes for the device nodes.

The VPN tunnels to the rescue

The way to solve this is to set up tunnels between the routers connecting the device nodes. This way, the L3 path does not need to know about the host routes, and the host routes are isolated from changes in the L3 path that connects the end routers.

We can set up GRE tunnels between the end routers, or we can make the connections over SSH tunnels.

Use the following commands for setting up GRE

vrouter1 > ip tunnel add tun0 mode gre remote 1.0.0.2 local 1.0.0.1 ttl 255
vrouter1 > ifconfig tun0 2.0.0.1/24 up

Or

Use the SSH based tunnel as described at

http://man.openbsd.org/ssh#SSH-BASED_VIRTUAL_PRIVATE_NETWORKS

So now we have our L3 Virtual Private Network, which isolates us from the impact of changes in the path. Make sure you use the tunneled path for reaching the remote nodes; to do this, adjust the host routes to use the tunnel path.

vrouter1 > ip route add 20.0.0.3/32 nexthop via 2.0.0.2
vrouter1 > ip route add 20.0.0.4/32 nexthop via 2.0.0.2

Putting it all together

Finally, this is how the complete topology looks.

Slide3

Here is a summary of the IP addresses and interfaces on each node.


Device     Interface   IP Address
Node_1     node_if0    20.0.0.1/24
Node_2     node_if0    20.0.0.2/24
Vrouter1   vr_if1      None
           vr_if2      None
           eth1        1.0.0.1/24
           tun0        2.0.0.1/24
Node_3     node_if0    20.0.0.3/24
Node_4     node_if0    20.0.0.4/24
Vrouter2   vr_if1      None
           vr_if2      None
           eth1        1.0.0.2/24
           tun0        2.0.0.2/24

The routing table with tunnel paths enabled looks like the following.

vrouter-routing-table

And the pings are still working 🙂

node_1 > ping 20.0.0.3
PING 20.0.0.3 (20.0.0.3) 56(84) bytes of data.
64 bytes from 20.0.0.3: icmp_seq=1 ttl=62 time=290 ms
64 bytes from 20.0.0.3: icmp_seq=2 ttl=62 time=1.01 ms
64 bytes from 20.0.0.3: icmp_seq=3 ttl=62 time=0.973 ms
64 bytes from 20.0.0.3: icmp_seq=4 ttl=62 time=1.17 ms
^C
--- 20.0.0.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 0.973/73.499/290.832/125.477 ms


Using a Router for Switching

We all know that routers are Layer 3 devices and switches are Layer 2. So how can a Layer 3 device be used to connect two or more devices at Layer 2?

In this blog, I will explore the mechanisms that make it possible to use a routing device as a switch.

The test setup

We start with a simple setup of three Linux network namespaces connected in series. The middle node acts as the router, and the end nodes act as the devices that need Layer 2 connectivity.

Use the following script for setting up the Namespaces and connections.

https://bitbucket.org/!api/2.0/snippets/xchandan/Bazz4/0a29fd79dc2ad2be4c816f8db788abfed3ac9282/files/router_topo_step1.sh

Configuration

The script adds the following IP addresses to the device nodes.

Device     Interface   IP Address
Node_1     node_if0    20.0.0.1/24
Node_2     node_if0    20.0.0.2/24
Vrouter1   vr_if1      None
           vr_if2      None

In this setup, we have two Layer 2 broadcast domains, A and B, connected using the router.

Slide1

It doesn’t work; testing what’s going on

Now we have two devices in the same subnet connected using a router instead of a switch. Start a ping from one device node to the other; predictably, this does not work. But running tcpdump can give you insight into the traffic flow.

node_1 > ping 20.0.0.2
PING 20.0.0.2 (20.0.0.2) 56(84) bytes of data.
From 20.0.0.1 icmp_seq=1 Destination Host Unreachable
From 20.0.0.1 icmp_seq=2 Destination Host Unreachable
From 20.0.0.1 icmp_seq=3 Destination Host Unreachable
^C
--- 20.0.0.2 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4024ms

And following is the tcpdump output

vrouter1 > sudo tcpdump -i vr_if1 -n
tcpdump: WARNING: vr_if1: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vr_if1, link-type EN10MB (Ethernet), capture size 65535 bytes
^C
13:29:08.265464 ARP, Request who-has 20.0.0.2 tell 20.0.0.1, length 28
13:29:09.263145 ARP, Request who-has 20.0.0.2 tell 20.0.0.1, length 28
13:29:10.263058 ARP, Request who-has 20.0.0.2 tell 20.0.0.1, length 28
13:29:11.280501 ARP, Request who-has 20.0.0.2 tell 20.0.0.1, length 28
13:29:12.278951 ARP, Request who-has 20.0.0.2 tell 20.0.0.1, length 28

The ARP requests are not resolving.

As the device nodes are part of the same subnet, when we try to ping one device node from another, the first node sends ARP queries to find the Layer 2 address for the destination IP. ARP uses Layer 2 broadcast to find the MAC address of the destination. As our destination is not part of the same Layer 2 domain, the ARP broadcast will never reach it and will never be answered.

The missing link that will make it work

We saw in the previous section the problem of unresolved ARP queries.

So how do we rescue this situation?

To make this setup work, we need to enable Proxy ARP on the router. When Proxy ARP is enabled on a router interface, the interface starts responding to ARP queries from the device nodes with its own MAC (Layer 2) address. In a way, it proxies for a device (with the destination IP) that is not actually present on the Layer 2 network.

To enable proxy ARP on the router, use the following commands.

vrouter1 > echo 1 > /proc/sys/net/ipv4/conf/vr_if1/proxy_arp
vrouter1 > echo 1 > /proc/sys/net/ipv4/conf/vr_if2/proxy_arp

Taste of success

So now the ARP problem is resolved. The device node gets a reply to its ARP query with a MAC address that actually belongs to the router's interface. The device now knows the MAC address associated with the destination IP.

But the pings are still failing.

This is because we have solved only half the problem.

What does the router do with the packet it received? Once the Layer 2 frame arrives on the router, it removes the Layer 2 header and looks at the destination IP address, which obviously does not belong to the router itself.

So, the router does what the router does with any other IP packet, looks up the routing table.

To make the ping work, we will have to add host routes to the destination IP address on the router.

The following commands will achieve our goal.

vrouter1 > ip route add 20.0.0.1/32 dev vr_if1
vrouter1 > ip route add 20.0.0.2/32 dev vr_if2

This is what the routing table on the router looks like:

vrouter1 > route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
20.0.0.1        0.0.0.0         255.255.255.255 UH    0      0        0 vr_if1
20.0.0.2        0.0.0.0         255.255.255.255 UH    0      0        0 vr_if2

One more thing to keep in mind. Don’t forget to enable IP forwarding on your router.

vrouter1 > echo 1 > /proc/sys/net/ipv4/ip_forward

Finally, we can test the end to end traffic flow 🙂


node_1 > ping 20.0.0.2
PING 20.0.0.2 (20.0.0.2) 56(84) bytes of data.
64 bytes from 20.0.0.2: icmp_seq=1 ttl=63 time=0.167 ms
64 bytes from 20.0.0.2: icmp_seq=2 ttl=63 time=0.061 ms
^C
--- 20.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.061/0.114/0.167/0.053 ms


IPTables: Matching A GRE packet based on tunnel key

I was trying to figure out a way to match packets with a certain GRE key and take some action. IPTables does not provide a direct solution to this problem, but it has the u32 extension module, which can extract 4 bytes from the IP packet at a given offset and match them against a pattern.

So, I decided to give a try to this extension.

Prepare the setup

I created a GRE tunnel between two of my VMs and assigned IP addresses to the tunnel interfaces.

On VM1

sudo ip tunnel add tun2 mode gre remote 192.168.122.103 local 192.168.122.134 ttl 255 key 22

sudo ifconfig tun2 6.5.5.1/24 up

On VM2

sudo ip tunnel add tun2 mode gre remote 192.168.122.134 local 192.168.122.103 ttl 255 key 22

sudo ifconfig tun2 6.5.5.2/24 up

Start with a basic rule

Next, I created an IPTables rule on the receiving system to log packet matches; you can also create an ACCEPT rule and check the built-in packet counter for the rule.

sudo iptables -I INPUT -p 47 -m limit --limit 20/min -j LOG --log-prefix "IPT GRE" --log-level 4

Now start a ping from VM2 to VM1.

ping 6.5.5.1

You can keep a watch on the packet counters with the following command

watch "sudo iptables -L -v -n"

The GRE header

Next, a look at the GRE header format (taken from RFC 2890, https://tools.ietf.org/html/rfc2890). The header contains an optional 32-bit key, which is the data of interest to us.

0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|C| |K|S| Reserved0       | Ver |         Protocol Type         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|      Checksum (optional)      |       Reserved1 (Optional)    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                         Key (optional)                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                 Sequence Number (Optional)                    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Run the following tcpdump command to capture the packets(My VMs don’t have GUI)

sudo tcpdump -s 0 -n -i ens3 proto GRE -w dump.pcap

The captured packets can be analyzed using wireshark

untitled1

Understanding the iptables u32 extension match rule

Basically, the u32 module can extract 4 bytes of data from the packet at a given offset and match them against a given hex number or range. Here is an example of a u32 match rule from the man page; it matches packets above a certain total length. The man page describes the format of the rule: you provide an offset, u32 extracts 4 bytes starting at that position, ANDs them with the mask, and finally compares the result with the hex value (or range).

Example:

match IP packets with total length >= 256
The IP header contains a total length field in bytes 2-3.

--u32 "0 & 0xFFFF = 0x100:0xFFFF"

read bytes 0-3
AND that with 0xFFFF (giving bytes 2-3), 
and test whether that is in the range [0x100:0xFFFF]

The man page has more details.

Craft a match for GRE Key

The IP header is 20 bytes long and the GRE key starts at byte 24, as can be confirmed in Wireshark. The u32 offset is counted from the beginning of the IP header (highlighted in the Wireshark screenshot).

untitled

Based on the example from the man page, I crafted the following rule to match the GRE key (22 = 0x16).

sudo iptables -I INPUT -p 47 -m u32 --u32 "24 & 0xFFFFFFFF = 0x16" -m limit --limit 20/min -j LOG --log-prefix "IPT GRE key 22" --log-level 4

Checking for Key Present Flag

But the key is optional, so let's also add a match for the Key Present flag in the GRE header.

sudo iptables -I INPUT -p 47 -m u32 --u32 "20 & 0x20000000 = 0x20000000 && 24 & 0xFFFFFFFF = 0x16" -m limit --limit 20/min -j LOG --log-prefix "IPT GRE key 22" --log-level 4

Here is a screen capture of the iptables packet counters

Chain INPUT (policy ACCEPT 294K packets, 78M bytes)
 pkts bytes target prot opt in out source destination
 711 79632 LOG 47 -- * * 0.0.0.0/0 0.0.0.0/0 u32 "0x14&0x20000000=0x20000000&&0x18&0xffffffff=0x16" limit: avg 20/min burst 5 LOG flags
 0 level 4 prefix "IPT GRE key 22"

The above rule is simplistic and good enough to get you started, but it has shortcomings, e.g. it assumes a constant IP header length of 20 bytes.

The man page describes examples of how to handle variable length headers, fragmentation check etc.

Running a standalone OpenStack Neutron server

One of the great advantages for an OpenStack developer is the ease with which a dev environment can be created. I cannot say enough good things about devstack, a tool that provides a very flexible way of creating a development environment for OpenStack.

Devstack can be configured using a simple config file (local.conf). Another advantage of a devstack-based environment is that it hardly has any special hardware prerequisites. A VM on your laptop is good enough to bring up an all-in-one OpenStack environment, although a good amount of RAM and CPU for the VM will yield better results.

As a developer interested in OpenStack networking (Neutron), my interest lies mostly in the Neutron service, and most of the time I find that a lot of OpenStack services are not really required for my day-to-day activity. So I decided to tweak my devstack config file to start only the minimum services, just enough to run the networking service, and save a little on my devstack VM's RAM and CPU requirements.

The OpenStack networking service itself depends on the common set of infrastructure services like the database server, rabbitmq etc. The following is the local.conf that I used for this purpose.

[[local|localrc]]
ADMIN_PASSWORD=XXXXXXXXX
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
 
GIT_BASE=https://git.openstack.org
LOGFILE=/opt/stack/logs/stack.sh.log 
#Q_PLUGIN_EXTRA_CONF_PATH += '/etc/neutron/fwaas_driver.ini'
RECLONE=yes
LIBS_FROM_GIT=python-neutronclient
 
disable_all_services

enable_service rabbit
enable_service database
enable_service mysql
enable_service infra
enable_service keystone
enable_service q-svc
enable_service neutron

With the above local.conf only the network service and basic infrastructure are started by devstack. Here is a list of windows in my screen session

0$ shell  1$(L) key  2$(L) key-access  3$(L) q-svc   4$ code*  5-$ log

The Last two windows are manually created for browsing the code and looking at my custom logs.

Test-driving arbitrary data publishing over BGP

BGP is a routing protocol known for its strength in scaling and resilience. It is also flexible and extensible. With its Multiprotocol extension, BGP can support distribution of various data types. Still, extending BGP for every new route data type requires introducing a new address family (AFI/SAFI) and making BGP aware of the new data type.

What if we could distribute arbitrary data over BGP without having to change BGP itself? Of course, the BGP endpoint cannot decipher the arbitrary data, but the data can be passed on to other applications that can make sense of it.

Well this feature is proposed in the following IETF draft

https://tools.ietf.org/html/draft-lapukhov-bgp-opaque-signaling-02

The draft proposes a generic opaque address family to allow distributing arbitrary key-value pairs over BGP.

Testing opaque data dissemination over BGP

While investigating GoBGP for my previous experiment, I realized that GoBGP supports key-value based opaque data dissemination. So let's see how we can disseminate an arbitrary key-value pair over BGP.

For this experiment I am using a simple linear topology as shown below.

slide1

I have used Mininet to simulate the above topology, with Linux network namespaces as the BGP speakers.

Starting the topology

Install mininet using apt-get commands as follows

$ sudo apt-get install mininet

Start the default linear topology as follows. This topology has two hosts, H1 (10.0.0.1) and H2 (10.0.0.2), connected through a switch.

$ sudo mn
mininet>

Start GoBGP instances on the hosts. To do this, create two config files, one for each GoBGP instance. The following is an example config file for H1 (10.0.0.1).

[global.config]
  as = 64512
  router-id = "10.0.0.1"
[[neighbors]]
[neighbors.config]
  neighbor-address = "10.0.0.2"
  peer-as = 64512
[[neighbors.afi-safis]]
  [neighbors.afi-safis.config]
  afi-safi-name = "opaque"

To start the GoBGP instance use the following command

mininet> h1 ./GO_WA/bin/gobgpd -f ./gobgpd_h1.conf &
mininet> h2 ./GO_WA/bin/gobgpd -f ./gobgpd_h2.conf &

Check the BGP neighbor status

mininet> h1 /home/ubuntu/GO_WA/bin/gobgp nei
Peer        AS  Up/Down State       |#Advertised Received Accepted
10.0.0.2 64512 00:00:24 Establ      |          0        0        0

Also make sure that the neighbor has the capability to publish and consume opaque data.

mininet> h1 ./GO_WA/bin/gobgp neigh 10.0.0.2
BGP neighbor is 10.0.0.2, remote AS 64512
  BGP version 4, remote router ID 10.0.0.2
  BGP state = BGP_FSM_ESTABLISHED, up for 1d 05:45:43
  BGP OutQ = 0, Flops = 0
  Hold time is 90, keepalive interval is 30 seconds
  Configured hold time is 90, keepalive interval is 30 seconds
  Neighbor capabilities:
    BGP_CAP_MULTIPROTOCOL:
        opaque: advertised and received
    BGP_CAP_ROUTE_REFRESH:      advertised and received
    BGP_CAP_FOUR_OCTET_AS_NUMBER:       advertised and received
  Message statistics:
                         Sent       Rcvd
    Opens:                  1          1
    Notifications:          0          0
    Updates:                4          0
    Keepalives:          3572       3572
    Route Refesh:           0          0
    Discarded:              0          0
    Total:               3577       3573
  Route statistics:
    Advertised:             3
    Received:               0
    Accepted:               0

Publishing an opaque Key-Value pair

Use the following commands to publish key-value pairs over the BGP connection.

mininet> h1 ./GO_WA/bin/gobgp global rib add key key1 value value1 -a opaque
mininet> h1 ./GO_WA/bin/gobgp global rib add key key2 value value2 -a opaque
mininet> h1 ./GO_WA/bin/gobgp global rib add key key3 value value3 -a opaque

Verify that the opaque data is received on the BGP peers i.e. H2

mininet> h2 ./GO_WA/bin/gobgp global rib -a opaque

  Network Next Hop  Age      Attrs
*>key1    10.0.0.1  00:01:10 [{Origin: ?} {LocalPref: 100} {Value: value1}]
*>key2    10.0.0.1  00:00:09 [{Origin: ?} {LocalPref: 100} {Value: value2}]
*>key3    10.0.0.1  00:00:03 [{Origin: ?} {LocalPref: 100} {Value: value3}]

Conclusion

Publishing arbitrary data over BGP can be used to distribute application-specific configuration. BGP will not try to interpret the opaque data; it must be consumed by the application. The draft proposes that BGP implementations may provide APIs for applications to publish and consume opaque data.


Test-driving EVPN route publishing with GoBGP

In recent times there has been a lot of interest in tunnel-based L2 networks, especially for cloud networks implemented with VXLAN. Tunnel-based networks were initially proposed with the idea of alleviating the 4k limit imposed by VLAN-based networks.

EVPN-based VXLAN tunneled networks use BGP as the control plane for L2 learning. In this post, we will test GoBGP for publishing L2 routes.

L2 learning in tunneled networks

Tunnel-based networks need special handling of L2 learning. L2 learning consists of learning how to reach a certain MAC (and IP) address in the network. In a normal VLAN-based network this is done in the data path, i.e. when two IP addresses communicate in a VLAN, the switch in the data path uses the source MAC address of each packet to learn which switch port a MAC address is connected to. This normally happens during the ARP resolution phase that converts the IP (L3) address to a MAC (L2) address, and it involves flooding all ports of the switch to query for the MAC address associated with the requested IP address.

In a tunnel-based network, each endpoint forms a full mesh of tunnels with all other endpoints. Unfortunately, data path learning is a very costly option for such a network, as it requires flooding all the tunnels.

Control Plane based L2 learning

Tunnel-based networks instead implement control plane based L2 learning, which works by proactively publishing the location of a MAC (L2) address on the overlay network to all the tunnel endpoints.

To implement control plane based learning, something with knowledge of where a MAC (L2) address lives on the overlay network must send this information to all the tunnel endpoints. This can be done using a central controller that has complete knowledge of the overlay network. This is how SDN controllers work: they have a complete view of the network and can publish L2 reachability information to all tunnel endpoints.

The EVPN solution uses a combination of data path and control plane learning. Each tunnel endpoint implements a local data path learning and publishes the learned L2 reachability information as L2 routes to the remote endpoints. It uses BGP to publish local and learn remote L2 addresses.

Publishing a L2 route

An L2 route consists of a MAC address (and optionally the associated IP address) and the next-hop IP address of the VTEP. This essentially means that any endpoint trying to send a packet to a MAC on the overlay network should look up the destination MAC in its local L2 route table and send the encapsulated L2 packet to the next-hop VTEP IP.

Publishing L2 routes with GoBGP

Now that we have the theory of L2 route publishing cleared up, let's look at a real example of L2 routes published over BGP.

GoBGP is an open source BGP implementation which supports publishing EVPN routes. The following figure represents the test topology. It consists of three BGP peers connected in full mesh.  The VTEP IPs are used to create the BGP peers.

slide1

The router details are as follows (these are KVM instances started using libvirt; the management IP is used as the router-id).

Device VTEP-IP
Router1 192.168.122.3
Router2 192.168.122.87
Router3 192.168.122.174

To install GoBGP refer to the install documentation available at

https://github.com/osrg/gobgp/blob/master/docs/sources/getting-started.md

We are going to use the following configuration file to start the BGP service on Router1. Similar configuration files are used to start the gobgpd instances on the other routers.

[global.config]
  as = 64512
  router-id = "192.168.122.3"
[[neighbors]]
[neighbors.config]
  neighbor-address = "192.168.122.87"
  peer-as = 64512
[[neighbors.afi-safis]]
  [neighbors.afi-safis.config]
  afi-safi-name = "l2vpn-evpn"
[[neighbors]]
[neighbors.config]
  neighbor-address = "192.168.122.174"
  peer-as = 64512
[[neighbors.afi-safis]]
  [neighbors.afi-safis.config]
  afi-safi-name = "l2vpn-evpn"

To start the gobgpd instance use the following command

# gobgpd -f gobgpd.conf

gobgp0

Check that all the neighbors have established BGP connections and that EVPN routes are supported.

gobgp1

To publish and view a L2 route use the following commands.

# gobgp global rib add macadv aa:bb:cc:dd:ee:04 2.2.2.4 1 1 rd 64512:10 rt 64512:10 encap vxlan -a evpn
# gobgp global rib -a evpn

Here is an example

Router3 >gobgp global rib add macadv aa:bb:cc:dd:ee:05 2.2.2.5 1 1 rd 64512:10 rt 64512:10 encap vxlan -a evpn
Router3 >gobgp global rib -a evpn
    Network                                                                                            Next Hop             AS_PATH              Age        Attrs
*>  [type:macadv][rd:64512:10][esi:single-homed][etag:1][mac:aa:bb:cc:dd:ee:05][ip:2.2.2.5][labels:[1]]0.0.0.0                                   00:00:02   [{Origin: ?} {Extcomms: [64512:10], [VXLAN]}]

The following screenshot captures the output.

gobgp2.png

The above route tells the router to send any L2 packet destined to “aa:bb:cc:dd:ee:05” to the VTEP IP of Router3 over a VXLAN tunnel.

Let’s check that the L2 routes show up on the other Routers. The following route is learned through BGP on Router1.

Router1 >gobgp global rib -a evpn
    Network                                                                                            Next Hop             AS_PATH              Age        Attrs
*>  [type:macadv][rd:64512:10][esi:single-homed][etag:1][mac:aa:bb:cc:dd:ee:05][ip:2.2.2.5][labels:[1]]192.168.122.174                           00:00:38   [{Origin: ?} {LocalPref: 100} {Extcomms: [64512:10], [VXLAN]}]

The following screenshot captures the output.

gobgp3

The route says: an L2 packet destined for “aa:bb:cc:dd:ee:05” should be encapsulated in a VXLAN tunnel and sent to 192.168.122.174 (the VTEP IP of Router3).

Conclusion

From the above demonstration, it can be seen how a BGP-based control plane is used by EVPN to distribute L2 routes. This approach removes the need for a centralized controller, which can become a single point of failure and a bottleneck when scaling overlay networks.


A simple metadata server to run cloud images on a standalone libvirt/KVM hypervisor

With all the interest in cloud computing and virtualization, OS vendors are providing ever easier ways to deploy VMs. Most distros now come with cloud images, which makes it really easy to deploy VMs with the distro of your choice on a cloud platform like OpenStack or AWS.

Here are a few mentions of the available cloud images

https://cloud-images.ubuntu.com/

https://getfedora.org/en/cloud/download/

http://cloud.centos.org/centos/

Complete Lock-down

But, what about users wanting to deploy virtual machines on standalone Hypervisors? With so many pre-built images available, why should anyone need to install virtual machines from scratch?

With this thought in mind I decided to give these cloud images a try on my Hypervisor. I use libvirt with KVM on a physical server as my Hypervisor.

I quickly realized that these images are completely locked down: there is no default password, no web interface, and most of them are configured with a serial console.

Auto-configuration daemon

These VM images ship with a configuration daemon called cloud-init. When a cloud image based VM boots for the first time, the cloud-init daemon tries to retrieve configuration information from various data sources and sets up things like the password for the default user, the hostname, SSH keys, etc. A complete manual for cloud-init is available here.

The cloud-init daemon can retrieve these configuration settings from sources like a ConfigDrive, a CDROM attached to the VM (NoCloud), or the network (EC2).

A Simple Solution

In my previous attempts to use a cloud image I followed the CDROM (attached ISO) based approach to provide the configuration data to the VMs. This quickly gets very cumbersome.

So the obvious thought was to try to mimic the metadata service provided by the cloud platform. The metadata service is a web service that provides configuration data to the cloud images. As a cloud image boots, it sends a DHCP request to get an IP address, then contacts the metadata service on the network to retrieve the configuration for the VM. The metadata service is expected to be available at the well-known IP address 169.254.169.254.

The libvirt configuration provides a default network to the VMs, along with a DHCP service that allocates IPs to the VMs from the 192.168.122.0/24 subnet pool. All the VMs connect to the default network bridge, virbr0. This bridge also acts as the network gateway and is configured with the IP 192.168.122.1. The topology is shown in the image below.

Slide1

With some trial and error, I came up with a simple metadata service that can be used on a standalone libvirt/KVM based hypervisor to support booting cloud images. The metadata service is a Python bottle app, so you will need to install the bottle web framework.

# pip install bottle
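
To get a feel for what such a service does, here is a minimal illustrative bottle app (not the actual md_server code); the paths are the ones cloud-init's EC2 data source queries on 169.254.169.254, and the values are placeholders:

from bottle import Bottle, run

app = Bottle()

# cloud-init's EC2 data source polls paths like these on 169.254.169.254
@app.route('/latest/meta-data/instance-id')
def instance_id():
    return 'i-00000001'

@app.route('/latest/meta-data/local-hostname')
def hostname():
    return 'testvm'

@app.route('/latest/user-data')
def user_data():
    # A minimal cloud-config that sets a password for the default user
    return ('#cloud-config\n'
            'password: password-test\n'
            'chpasswd: { expire: False }\n'
            'ssh_pwauth: True\n')

if __name__ == '__main__':
    # Must listen on the metadata IP and port 80 (requires root)
    run(app, host='169.254.169.254', port=80)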

To make the metadata service work add the metadata IP to the bridge interface as follows:

 # ip addr add 169.254.169.254 dev virbr0

Download the server code from

https://bitbucket.org/xchandan/md_server

Build and Install the mdserver package

# python setup.py bdist_rpm 
# rpm -ivh dist/mdserver-<version>.noarch.rpm

or

# python setup.py install

Then start the metadata server as follows

# systemctl start mdserver

or

# mdserver /etc/mdserver/mdserver.conf

Setting password and ssh-key

You can set the password and SSH public key using the /etc/mdserver/mdserver.conf file. An example can be found at md_server/etc/mdserver/mdserver.conf.test in the source tree.

[mdserver]
password = password-test

[public-keys]
default = ssh-rsa ....

Note: the hostname of the VM is set to the libvirt domain name. Although mdserver can run on ports other than 80, this is only for testing; the cloud images will always contact 169.254.169.254 on port 80 for metadata.

With the metadata server running you should be able to start VMs with cloud images.

The simple metadata service is able to set the hostname, set the password for the default user, install an SSH authorized key, and enable password-based SSH access.

I was able to test it with cirros, Ubuntu and Fedora images.


Test-Driving OSPF on RouterOS – Interoperability

So I wrote about OSPF on RouterOS in my previous post. It was a nice experiment to learn about routing protocols.

I wanted to take it a little further and test Interoperability of RouterOS with other open source solutions.

This post is an update to the previous one; I will add OSPF neighbor nodes to the setup. I decided to use Quagga, the most talked-about open-source routing protocol suite, and XORP, the eXtensible Open Router Platform.

Updated Setup

The following is the updated setup for the interoperability test. I have added two new Ubuntu nodes as OSPF neighbors.

  • Quagga on Ubuntu
  • XORP on Ubuntu

Slide3.jpg

Configuration

Quagga

The following configuration was added to the Quagga node.

Screenshot from 2016-03-27 12:33:55.png
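
For reference, the ospfd part of a Quagga configuration of this kind typically looks like the following (the router-id and networks here are illustrative; the actual values are in the screenshot above):

router ospf
 ospf router-id 10.0.3.2
 network 10.0.3.0/24 area 0.0.0.0
 network 192.168.10.0/24 area 0.0.0.0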

XORP

The XORP node did not advertise any new subnet but received OSPF updates.

XORP_Conf.png

Results

  • All the nodes could discover their neighbors

Screenshot from 2016-03-27 00:03:27.png

  • All nodes got route updates.

Screenshot from 2016-03-27 01:54:34.png

  • OSPF Traces

Screenshot from 2016-03-27 01:57:34.png