System Monitoring: Glances at Top

System monitoring is a topic that has received a lot of attention lately.

A lot of options already exist in the open, but every time I search for a system monitoring tool I am presented with options suited to large-scale clusters of machines: capturing metrics over time, showing trends of resource utilization, and requiring a storage system and a moderate-to-heavy installation. These systems are good for Ops teams interested in managing the stable operation of systems and optimizing resource utilization over time.

My requirements, on the other hand, are more on the debugging side and call for a real-time peek into the monitored system. To be frank, my favourite tool for the job has been the top command, and it is still one of the invaluable tools Linux provides. It gives you the load average, RAM utilization and per-process metrics with various selection and sorting options.

My requirements

More often than not I start with a couple of physical machines and run virtual machines on them. A quick and easy monitoring tool is what I need to debug. Till now I would keep a tmux session with windows running the top command over SSH to these physical machines. I am mostly interested in the status of the VMs (kvm processes) running on the system and the overall health of the hypervisor.

One problem with top is that it does not cover network and disk metrics (which are good-to-have features). More importantly, although top provides per-process stats, I could not find a way to get alerts when a process (a KVM instance in my case) starts to run high on resources.

Glances: a better top

I recently tried glances and was really impressed with its capabilities. The most important part is the easy installation. Although I created a virtual environment and installed the app within it, that is not a requirement. To install glances all you need to do is install the Python package using pip, and that's it, you are ready to go:

pip install glances

It has the same look and feel as top (although not as many selection and sorting options), but what it provides is a complete view of the system in real time, including disk and network stats.


In addition, you can define process-level alerts for resource utilization.
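The thresholds live in glances.conf. Here is a rough sketch, assuming the documented careful/warning/critical keys; the [monitor] process-watch block follows the format documented for older Glances releases (check your version's docs), and the regex and limits are made up for the KVM use case:

[cpu]
careful=50
warning=70
critical=90

[monitor]
# watch qemu/KVM processes; alert if the count falls outside the range
list_1_description=KVM instances
list_1_regex=.*qemu-system.*
list_1_countmin=1
list_1_countmax=8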

It can run in standalone, client or server mode. In standalone mode it monitors the status of the local system.


In server mode (with the -s option) it serves statistics to remote clients.

Or you can run it in web server mode to monitor the system from a web browser (make sure you have bottle installed for the web server mode to work):

glances -w
Glances Web User Interface started on http://0.0.0.0:61208/

In client mode you can create a list of servers that you need to monitor and start the client in browse mode (with the --browse option). The client presents you with a list of monitored servers with a summary of their status. You can connect to any of them to get the detailed view.
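Putting the modes together, the commands look roughly like this; the [serverlist] entries that feed browse mode live in glances.conf, and the names below are illustrative:

# on each monitored machine
glances -s

# on the workstation: connect directly, or browse the server list
glances -c <server_ip>
glances --browse

# glances.conf snippet consumed by --browse (61209 is the default server port)
[serverlist]
server_1_name=hypervisor1
server_1_port=61209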


A client connected to a remote server has the same look and feel as a standalone glances instance.

My current installation

I am monitoring three physical machines with glances. I have glances running in server mode on the physical machines and also in web server mode. (Currently you need to run two instances of Glances if you want both the server and web server modes; there is an open request to enable both modes on the same instance of Glances.)

I have installed glances in a virtual environment to keep its dependencies separated from the rest of the system.

I am running it under supervisord to make sure the process is always running and is restarted after a reboot.
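A minimal supervisord entry for this looks roughly as follows, assuming the virtual environment lives in /opt/glances (adjust the paths to your install); one program per mode, matching the two-instance note above:

[program:glances-server]
command=/opt/glances/bin/glances -s
autostart=true
autorestart=true

[program:glances-web]
command=/opt/glances/bin/glances -w
autostart=true
autorestart=true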


No surprise performance test

Recently I had a need to deploy a Python Flask-based application. Although Flask has a convenient CLI built in to run your application while developing, the deployment documentation provides a bunch of production-ready deployment methods. After going through the various documentation and learning about the event-loop-based implementation of gevent, I decided to try a gevent-based deployment and compare its performance with the default Flask CLI. I used the simplest of Flask apps (hello world) to try it out.
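For reference, the two contenders were started roughly like this. The post does not say which gevent server was used, so take gunicorn's gevent worker as one common stand-in, and app:app as a placeholder module:object name:

# default Flask development server
FLASK_APP=app.py flask run --port 8181

# gevent-based deployment (here via gunicorn's gevent worker class; needs gevent installed)
gunicorn --worker-class gevent --bind 127.0.0.1:8181 app:app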

And, no surprise, the gevent-based deployment came out better than the default Flask CLI, but the difference in performance is seen only when the number of requests and the concurrency are really high. I used Apache Bench to test the performance: a total of one hundred thousand requests were served by the application at a concurrency level of one hundred.

ab -c100 -n100000 http://localhost:8181/

And just for completeness I also tried a similar application written in Go, as Go also uses event loops with its goroutines. The final comparison was made with Nginx, which is also an event-loop-based web server (it looks very unfair to compare an app server with a static file, but the app was very simple and returns just a string). Nginx served the exact same content as the apps did.

Overall Nginx was the winner, as expected, followed by the Go app, Python 3.x asyncio-based aiohttp, Python gevent and finally the Flask CLI. What I liked about the experiment is that it gave me a perspective on how much of a performance difference to expect when choosing our tools, and how to weigh it against development time and effort.

[Chart: time per request (mean) in ms]

You can see the code and results here.

Test Driving transmission for multi-site file sync

As the industry moves towards more distributed deployment of services, syncing files across multiple locations is a problem that often needs to be solved. In the world of file syncing, two approaches stand out. One is rsync, a very efficient tool for syncing files. It works great when you have a few remotes that need to be kept in sync, but as soon as the number of remotes grows we hit problems of bandwidth and load. Rsync being a single-source-to-destination syncing mechanism puts a lot of load on the source, as the source node needs to transfer a lot of data to each of the remotes.


This is the use case where sync based on the BitTorrent protocol shines. As BitTorrent uses peer-to-peer file distribution, there is no single source that distributes the files. Files are broken down into chunks and each chunk is associated with its hash. To get a copy of the file, a participating peer asks for all the chunks of the file from the other peers. Any peer that has a chunk can be the source for that transfer. Additionally, the file chunks are not transferred sequentially; the transfer is randomized to increase the chance of finding the requested chunk on a peer other than the original source.


In this blog I will explore all the components needed to set up a BitTorrent-based file sync. A lot of these steps can be automated to come up with an efficient multi-site file-syncing solution. All the setup is done on a single Ubuntu 16.04 VM. The following diagram shows my test setup.

[Diagram: test setup]

How to use a BitTorrent network to share your data?

To transfer your data using torrents you will need the following things:

  • The full copy of the source files that you want to transfer
  • Torrent file generated for the source files
  • Some way to distribute this torrent file to all your remotes (peers)
  • A torrent tracking server
  • A torrent client

Installing the transmission client

Although you can use any torrent client, transmission is one of the most popular torrent clients for Linux. It provides a CLI (and an RPC API), which is handy for integration with other projects.

To install transmission on your machine use the following command

apt-get install transmission-cli transmission-common transmission-daemon transmission-gtk transmission-remote-gtk 

This should get transmission installed. To configure the torrent client edit the settings file

/etc/transmission-daemon/settings.json 

You might want to change the default download directory, which points to:

/var/lib/transmission-daemon/downloads/

Make sure that transmission has write access to the new download directory you set.
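One gotcha: the running daemon rewrites settings.json on exit, so stop it before editing. A sketch, with a made-up download directory (the debian-transmission user comes from the Ubuntu package):

sudo systemctl stop transmission-daemon
sudo mkdir -p /data/torrents
sudo chown -R debian-transmission:debian-transmission /data/torrents
# now edit /etc/transmission-daemon/settings.json, then
sudo systemctl start transmission-daemon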

Transmission uses a systemd unit to manage the client daemon. If you want to make changes to the transmission daemon, follow the standard practice for customizing any systemd service. For my experiment I disabled authentication for RPC calls. To do this, follow the steps below.

cp /lib/systemd/system/transmission-daemon.service     /etc/systemd/system/transmission-daemon.service 

Then edit the unit file in /etc/systemd/system/<service> with your local changes; mine adds the -T (--no-auth) flag, as the diff shows:

diff -Nur /etc/systemd/system/transmission-daemon.service /lib/systemd/system/transmission-daemon.service 
--- /etc/systemd/system/transmission-daemon.service 2019-03-15 22:30:59.736229363 +0530
+++ /lib/systemd/system/transmission-daemon.service 2018-02-06 23:25:40.000000000 +0530
@@ -5,7 +5,7 @@
 [Service]
 User=debian-transmission
 Type=notify
-ExecStart=/usr/bin/transmission-daemon -f --log-error -T
+ExecStart=/usr/bin/transmission-daemon -f --log-error
 ExecStop=/bin/kill -s STOP $MAINPID
 ExecReload=/bin/kill -s HUP $MAINPID

Reload systemd so it picks up the override, then start the transmission daemon:

systemctl daemon-reload
systemctl start transmission-daemon

And check its status with the following command

systemctl status transmission-daemon

The next step is to create a torrent file for your source files. Before you can create a torrent file you need a tracker URL. The next section shows how to set up your own tracker server, or you may choose to use a public tracker server.

What is a torrent tracker?

A torrent tracker is a server that tracks all the peers interested in a torrent file's distribution, and it helps peers interested in a file transfer find each other. When a torrent client adds a new torrent file, it looks for the tracker URL embedded in the torrent file, contacts the tracker server and gets a list of other peers participating in the source file distribution. The client can then contact these peers and start requesting chunks of the source file.

You can use a public torrent tracker server, but if you are using torrents to share private data you will probably want a private tracker server. For my experiment I tried chihaya:

https://github.com/chihaya/chihaya.git

To set up the tracker, follow the steps at https://github.com/chihaya/chihaya/blob/master/README.md

You will need to install Golang to compile it. Once compiled, you should have a binary named chihaya in the top directory of the sources. An example configuration file, example_config.yaml, is provided as part of the code:

cp example_config.yaml config.yaml

Then edit it to your liking. As I am running this experiment on my local machine, I changed the http addr to “127.0.0.1:6969”

http:
    addr: "127.0.0.1:6969"

In a real deployment your tracker should be accessible from all the peers, so it must listen on a public interface. The default setting is to listen on “0.0.0.0:6969”.

Once done start the tracker server as follows

./chihaya --config config.yaml 

This will start the server and your tracker url will be http://127.0.0.1:6969/announce or the IP you configured for the http listen address.

How to create a torrent?

To create the  torrent file use the following command.

transmission-create <source file|source dir> -t <tracker url>

For me

transmission-create <source file|source dir> -t http://127.0.0.1:6969/announce
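As a concrete (hypothetical) example, this writes data.torrent for everything under /srv/data:

transmission-create -o data.torrent -t http://127.0.0.1:6969/announce /srv/data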

The above command will create the torrent file for your source file(s). The torrent file contains the chunk definitions of the source file along with their hashes and the tracker URL. Each time a new peer adds the torrent file, its torrent client sends a request to the tracker to add itself to the list of peers interested in the source file.

How to publish your files with torrent?

To publish the source file you need to add its torrent (like the one we created in the steps above). The only difference between the publisher and all the other peers is that the publisher is the peer that starts out with the complete source file. Peers with complete files are known as seeders, while peers with incomplete source files are called leechers. In the beginning the publisher is the only seeder, but as more and more peers get a complete copy of the source file they also become seeders.

To add a torrent file and become a seeder use the following command.

 transmission-remote --add <torrent file> -w <parent dir of complete source files> [ -u <upload bandwidth limit in kb/s>]

The upload bandwidth limit is optional.

How to download files?

This should be the most familiar part to everyone, although for this blog we will use the transmission client for the reasons mentioned above. All the peers need to add the torrent file we created above to their transmission client. The torrent file can be hosted on a web server or sent to the peers over mail or any other means. To add the torrent file on all other peers, use the following command:

transmission-remote --add <torrent file> -w <parent dir where the source file will be saved> [ -u <upload bandwidth limit in kb/s>]

You can use the following commands to list the currently added torrents in transmission and to remove one:

transmission-remote -l
transmission-remote -t <torrent_id> -r

Here torrent_id is the first column in the output of transmission-remote -l. You can also use the web or GTK interface if you prefer:

http://localhost:9091/transmission

Or run transmission-gtk to use the GUI

Once you have added the torrent with the transmission-remote --add <torrent file> -w <parent dir where the source file will be saved> command, transmission will verify that the available source file matches the hashes in the torrent file.

Once complete, your transmission client will be marked as a seeder. For my test I used another torrent client to download the source files using the torrent file.

In production you would use transmission-remote on the peers to add the torrent file and start the download.

Summary

If you need to distribute files among multiple remotes, torrents can be a very efficient mechanism. You relieve the source file server of the syncing load and also benefit from better utilization of the peers' bandwidth, not to mention the increased speed of distribution.

Extending my Private LAN behind NAT to AWS

WordPress on EC2 with DB in private LAN

We have all known and talked about hybrid cloud solutions that spill an on-prem infrastructure over to cloud providers like AWS. Recently I had a POC that needed the same for my private lab: part of my software runs in the cloud while still communicating with the in-house components.

The standard solution for this is to set up a VPN connection between your private lab and AWS. The only problem is that I don't have a public IP available to set up the tunnels, and I am also behind the ISP's NAT. So the VPN solution does not work in my case.

Hence I used SSH to set up the poor man's version of VPN tunnels.

The setup in detail

The basic idea is to have a pair of tunnel hosts, one on the VPC and one on your private lab, as seen in the figure below, and use the tunnel to route traffic between the networks.

[Diagram: SSH tunnel between the private LAN and the AWS VPC]

Setting it up

The following steps bring up the setup.

  • First start the pair of tunnel hosts, one on the private LAN (TH1) and another on the VPC with a public IP (TH2). These hosts will act as the tunnel gateways. I used Ubuntu-based VMs for the tunnel hosts.

In our case the tunnel host on the private LAN will act as the SSH client and initiate the tunnel connection.

  • Update the SSH configuration file /etc/ssh/sshd_config on the server side (TH2) to allow creation of tunnels:
PermitTunnel yes

Note: SSH sessions are terminated if there is no activity for a certain amount of time. I found some helpful suggestions for keeping the connection alive; you can also read about it in the sshd_config man page. Here is the gist of it.

Update your SSH server (TH2) configuration with the following options

ClientAliveInterval 120
ClientAliveCountMax 720

Update the client (TH1) config ~/.ssh/ssh_config with the following:

ServerAliveInterval 120
  • Next, set up the tunnels using SSH connections.

As the tunnels are set up over an SSH connection, having a public IP (TH2_Public_IP) on the AWS side is enough. The client side can sit behind the ISP-provided NAT with no need for a public IP. The SSH connections are set up as the root user.

 On the client(TH1):
     # ssh -i AWS_SSH_KEY.pem -f -w 0:0 <TH2_Public_IP> true
     # ifconfig tun0 1.1.1.10 netmask 255.255.255.0 up

Similarly, configure the tunnel interface on the tunnel host on AWS.

On the server(TH2):
    # ifconfig tun0 1.1.1.20 netmask 255.255.255.0 up
  • Check that your tunnel connection is up by pinging the tunnel IPs (1.1.1.10 and 1.1.1.20) from each other.
  • Next, set up routing on the tunnel hosts to connect the private lab to your VPC.
On the client (TH1):
# ip route add 172.31.0.0/20 via 1.1.1.20

On the server (TH2):
# ip route add 192.168.56.0/24 via 1.1.1.10

Here is the route table on TH2 after the update:

[Screenshot: route table on TH2]
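Note: for the tunnel hosts to actually forward packets between the networks, IP forwarding must be enabled on both TH1 and TH2 (and on the AWS side the source/destination check should be disabled on TH2's instance, as EC2 otherwise drops traffic not addressed to the instance itself):

# on both TH1 and TH2
sysctl -w net.ipv4.ip_forward=1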

  • Then set up routes on the EC2 instances to redirect traffic for the private LAN to your tunnel host (TH2) on AWS using its private IP (TH2_Private_IP) (this can be automated using Systems Manager).

On each EC2 instance in the VPC:

# ip route add 192.168.56.0/24 via <TH2_Private_IP>

The same updates need to be done on the private LAN instances, using the private IP of the local tunnel host (TH1_Private_IP):

# ip route add 172.31.0.0/20 via <TH1_Private_IP>

Alternatively, you can update the route on the LAN's gateway, if you have access to it, instead of touching every individual host on your LAN.

Test application: WordPress on EC2 with DB in private LAN

To test my setup I ran a WordPress server on AWS while my DB remained on my local LAN.

You will have to make MySQL/MariaDB listen on all IPs of the machine, open up firewall rules to allow MySQL access on port 3306, and set up a DB user and database with proper privileges.
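A rough sketch of those steps on a stock Ubuntu MariaDB install (the config path comes from the Ubuntu package; database, user and password names are placeholders):

# listen on all interfaces and open the firewall
sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/mariadb.conf.d/50-server.cnf
sudo systemctl restart mariadb
sudo ufw allow 3306/tcp

# create the database and a user WordPress can log in as
mysql -u root -p -e "CREATE DATABASE wordpress; \
  GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'%' IDENTIFIED BY 'secret'; \
  FLUSH PRIVILEGES;"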

Once done, start an EC2 instance (I used an Ubuntu 18 image) on the VPC and make sure that you can connect to MySQL on the private LAN.

Also make sure that you add Security Group rules to allow access to the HTTP port for WordPress.


Install WordPress on this host along with PHP and a web server (I used Nginx). You should be able to use the private LAN IP of the MySQL server and complete the install.


And once installed, you should be able to use the WordPress app as usual.


Conclusion

This is definitely not a production kind of setup; I set it up for my personal experimentation. The tunnel depends on the liveness of the SSH connection, and although we have increased the timeout, the connection may still drop if there is no communication beyond the timeout period.

Test Driving Inter Regional VPC peering in AWS

Connect AWS VPCs hosted in different regions.

AWS Virtual Private Cloud (VPC) provides a way to isolate a tenant's cloud infrastructure. To a tenant, a VPC provides a view of their own virtual infrastructure in the cloud that is completely isolated and has its own compute, storage, network connectivity, security settings, etc.

In the physical world, Amazon's data centers are organized into different geographical locations called Regions. Each Region has multiple Availability Zones, which are data centers with independent power and connectivity to protect against natural disasters.
A VPC is associated with a single Region. A VPC may have multiple associated subnets (one per Availability Zone); subnets in a VPC do not span Availability Zones.

What is VPC Peering?

Although, as an AWS tenant, VPCs provide you with a secure way to isolate and control access to your cloud resources, they also create their own challenges. As your infrastructure grows, you are forced to create multiple VPCs within or across multiple Regions. The question is: how do you now connect your instances deployed in different VPCs?

One way to connect your resources across VPCs is over public IPs. This means the traffic has to traverse the internet to travel between your cloud resources hosted in different Regions.

Peering solves this problem by allowing instances in different VPCs to connect over a peering link. Until recently, VPC peering was only allowed within a single Region; AWS now allows creating VPC peering across Regions. With Inter-Regional VPC peering, the traffic between instances in VPCs in different Regions never has to leave the Amazon network.

Note: One thing to keep in mind while peering VPCs is that the peered VPCs must not have overlapping subnet CIDRs. As VPC peering involves route table updates to add routes to remote subnets, any CIDR overlap between the local and remote subnets will not work.

Creating Inter Regional VPC peering

To test Inter-Regional VPC peering, you will need two custom VPCs in different Regions. I explored custom VPCs and internet access in my previous post. For this test I am peering custom VPCs created in the N. Virginia (CIDR 10.0.1.0/24) and London (CIDR 10.0.2.0/24) Regions.

The peering links can be created in the “VPC” dashboard under the “Networking & Content Delivery” heading. To create a peering link, navigate to Networking & Content Delivery > VPC > VPC Peering Connections and click on “Create Peering Connection”.


To create a peering connection you must know the VPC IDs of the local VPC and of the peer VPC in the remote Region. While the local VPC can be chosen from the drop-down menu, you must find the remote VPC ID from the remote Region's VPC console. On completing the above step, a VPC peering request is created with status pending.


To activate the peering link, the VPC in the remote Region must accept the peering request.


Click “Accept Request” and “Yes, Accept”


Once accepted the status of the peering link changes to Active.


The next step is to update the route tables of your VPCs and add a route to the subnets associated with the peered VPC via the VPC peering link interface, which is named pcx-xxxxx…

Note: The route table must be updated on both the local and remote VPC to provide a complete forward and reverse path for the traffic.
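For the record, the same flow can be scripted with the AWS CLI; all IDs below are placeholders, and eu-west-2 is the London Region:

# request peering from the N. Virginia side
aws ec2 create-vpc-peering-connection --vpc-id vpc-11111111 \
    --peer-vpc-id vpc-22222222 --peer-region eu-west-2

# accept it on the London side
aws ec2 accept-vpc-peering-connection --region eu-west-2 \
    --vpc-peering-connection-id pcx-33333333

# add a route to the remote CIDR on each side
aws ec2 create-route --route-table-id rtb-44444444 \
    --destination-cidr-block 10.0.2.0/24 --vpc-peering-connection-id pcx-33333333
aws ec2 create-route --region eu-west-2 --route-table-id rtb-55555555 \
    --destination-cidr-block 10.0.1.0/24 --vpc-peering-connection-id pcx-33333333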


And that's all. Now you should be able to communicate between the peered VPCs. You can try running ping or SSH between instances in the VPCs. You may have to adjust the Security Group rules to make this work.


No transitive routing with VPC peering, and the problem of manual route table updates

One of the limitations of VPC peering is that it only allows communication across directly connected VPCs. [As of Oct 2018]

As an example, if VPC-A peers with VPC-B and VPC-B peers with VPC-C, traffic is only allowed between VPC-A and VPC-B, and between VPC-B and VPC-C. The peering link is not transitive; this means VPC-A cannot send traffic to or receive traffic from VPC-C. To enable this, VPC-A must peer directly with VPC-C.


This means multiple VPCs wanting to communicate with each other must form a full mesh of peering links. Additionally, the route table associated with each of the VPCs needs to be manually updated with the subnet CIDRs of all its peers and the correct peering links. This can be quite tricky to maintain with manual updates.

Third-party routing solutions can be used to enable transitive routing, but that undermines the routing infrastructure provided by AWS.

Recent Updates

AWS now supports accessing load balancers over Inter Regional Peering Links

https://aws.amazon.com/about-aws/whats-new/2018/10/network-load-balancer-now-supports-inter-region-vpc-peering/

Custom VPC and Internet Access in AWS

Create your VPC, launch EC2 instances and get internet access with Public IP.

With a Virtual Private Cloud (VPC), a tenant can create their own cloud-based infrastructure in AWS. While AWS provides a default VPC for a new tenant, there are always use cases that need a custom VPC.

While exploring custom VPCs, I found that getting my EC2 instances on the custom VPC to connect to the internet was not straightforward and involves a few steps.

The next sections describe the steps needed to give the EC2 instances internet access with a public IP.

1. Creating the Custom VPC and associated Subnet

To create a custom VPC, navigate to Networking & Content Delivery > VPC > VPCs, click on “Create VPC” and provide a name for your custom VPC and an associated CIDR.


VPCs are tied to the Region you are logged into. If you have multiple Availability Zones in this Region you may want to create one subnet per Availability Zone by further subnetting the CIDR associated with the VPC. In my case I have created only one subnet for the custom VPC.


2. Creating an Internet Gateway for the custom VPC

The next step is to create an internet gateway, which connects the VPC to the internet. To create it, navigate to Networking & Content Delivery > VPC > Internet Gateways and click on “Create internet gateway”.

Next, associate this internet gateway with your custom VPC using the Actions menu after selecting the newly created internet gateway. After this step the gateway should have an associated VPC.


3. Update Routing Table for the VPC to allow traffic to and from Internet

Next we have to update the routing table to add a default route, so that traffic can be sent to and received from the internet.

To do this, navigate to Networking & Content Delivery > VPC > Route Tables, select the routing table associated with the custom VPC, click on the “Routes” tab, then click edit and add a default route. Next click “Save” to update the routing table.
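The console steps so far map to a handful of AWS CLI calls, if you prefer scripting them; the IDs below are placeholders:

aws ec2 create-vpc --cidr-block 10.0.1.0/24
aws ec2 create-subnet --vpc-id vpc-11111111 --cidr-block 10.0.1.0/25
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-22222222 --vpc-id vpc-11111111
# default route to the internet gateway in the VPC's route table
aws ec2 create-route --route-table-id rtb-33333333 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-22222222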

4. Launching EC2 instances on the custom VPC and checking connectivity

Finally we are ready to launch our EC2 instance and make it internet accessible using a public IP.

To do this, navigate to Compute > EC2 and click on “Launch Instance”.

Follow the usual steps to launch the instance, except in the “Configure Instance” stage select your custom VPC for the network (this will automatically load the associated subnet) and enable Auto-assign Public IP. You will be asked to create a new Security Group for the VPC (allow SSH access for testing).

Once launched you should be able to SSH to the instance using its Public IP.


Stateful vs Stateless firewalls: Which one to use when?

Firewalls provide traffic filtering and protect the trusted environment from the untrusted. A firewall can be stateful or stateless.

A stateful firewall is capable of tracking connection states and is better equipped to allow or deny traffic based on such knowledge. A TCP connection, for example, goes through the handshake (SYN, SYN+ACK, ACK) to the ESTABLISHED state, and finally is CLOSED. A stateful firewall can detect these states. If a packet belongs to an already running flow it can be allowed, while a new connection from an untrusted host can be dropped.

Let’s take a scenario to understand this better.

A client sitting behind the firewall connects to a web server, www.example.com, and receives a reply.

Let’s see what configuration of the stateful and stateless firewalls is needed to make this communication work.

Stateful Firewall configuration:

# Generic rule to allow clients to connect to any
# webserver on the internet
Allow traffic going out to port  80
Allow traffic related to connections initiated by any internal client back to the same client
Deny any other traffic coming in to the client

Stateless Firewall configuration:

Allow traffic going out to port  80 on www.example.com
Allow traffic coming from host www.example.com and port 80
Deny any other traffic coming in to the client

From the above it is clear that the stateful firewall will allow incoming traffic only if it is related to connections the client has started. Also note that this makes it possible to write generic rules for a stateful firewall.

A stateless firewall, on the other hand, has no knowledge of which connections the client has initiated; it depends purely on the attributes of the packet, like source and destination address, to make the allow-or-deny decision.
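In iptables terms the two configurations might look roughly like this; the conntrack match is what makes the first variant stateful, and 93.184.216.34 stands in for www.example.com:

# stateful: generic rules, valid for any web server
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -j DROP

# stateless: per-peer rules matching packet headers only
iptables -A OUTPUT -p tcp -d 93.184.216.34 --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -s 93.184.216.34 --sport 80 -j ACCEPT
iptables -A INPUT -j DROP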

Why do we need stateless Firewalls?

A stateful firewall needs to track each connection that passes through it, maintaining the state of all active connections. New connections are actively added, and expired connections are purged from the connection state maintained by the firewall. This requires a lot of resources (memory, CPU) on the firewall and as such is costly.

Another consideration is load-balancing traffic across multiple firewalls. In the case of stateful firewalls, the connection state must be synchronized across the firewalls to provide a consistent view of active connections.

A stateless firewall, on the other hand, deals with a single packet at a time; the resources needed by such a filtering process are much lower.

When to use Stateless firewall?

A stateless firewall can be a faster and less resource-intensive alternative in the following cases:

  • Server-side firewall: if you are running a purely server application with well-known ports on a machine, the firewall can be explicitly programmed to allow connections to and from the server port. As the server ports are well known to the firewall and the server expects new connections anyway, stateless firewalls can handle this use case.
  • Client-side firewall: a client program which strictly connects to a small set of trusted (internal) hosts can be protected using stateless firewalls with specific rules.

A stateful firewall, on the other hand, can be used to protect client applications which connect to a large number of untrusted hosts (web servers on the internet, peer-to-peer traffic). The connection tracker on the stateful firewall will only allow incoming packets related to communications started by the internal clients; all new traffic trying to reach the client application will be dropped.

Home network traffic analysis with a Raspberry Pi 3 and Ntop

I had the Raspberry Pi lying around for some time without doing any major function, and so was the NetGear switch [1]. So I decided to do a weekend project to implement traffic analysis on my home network.

I have a PPPoE connection to my ISP that terminates on my home router [2]. The router provides both wired and wifi connectivity. As with most people, I have very few devices that connect to the router over an Ethernet cable; most devices are wifi capable. This makes traffic monitoring a bit of a problem on the LAN side.

To get around the problem I decided to put the traffic monitor on the WAN side of the router.

The following figure shows the connectivity.

[Diagram: home network connectivity with the mirrored WAN link]

Tapping the WAN side with port mirroring

The NetGear GS105E switch provides a port mirroring capability. I used this to mirror the traffic flowing between the router and the ISP connection. The mirrored traffic is passed on to the Raspberry Pi, and all traffic monitoring happens on the Pi.

Monitoring tools

Once the traffic is available on the mirrored port, I was able to run traffic monitors like wireshark, tshark and tcpdump on the mirror port to analyze all the traffic between the router and ISP. These tools give a live view of the packets going through my home network.

To monitor traffic over a longer period I used ntop [3]. It can aggregate data and produce a nice traffic analysis summary. I used the Raspbian [4] image for the Pi, and ntopng can be easily installed from their repository using apt.
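The install and launch boil down to a couple of commands; the interface name is assumed to be the Pi's wired port, and the ntopng web UI listens on port 3000 by default:

sudo apt-get install ntopng
# monitor the mirror (wired) port; reach the dashboard over wlan0
sudo ntopng -i eth0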

Accessing the monitoring results

As the Gigabit port of the Pi is used to receive mirrored traffic, the monitoring dashboard is accessed over the wlan0 interface. This will keep the monitored traffic separate from the monitoring traffic.

Refs:

[1] https://www.netgear.com/support/product/GS105Ev2.aspx

[2] https://www.amazon.in/3G-4G-LTE-Router-Multi-WAN/dp/B00N0W4FTM

[3] https://www.ntop.org/products/traffic-analysis/ntop/

[4] https://www.raspberrypi.org/downloads/raspbian/

Using Mininet to test OpenStack Firewall drivers

The OpenStack FWaaS project will be supporting a Layer 2 firewall based on OVS flow rules. While working on the OVS driver, I felt the need for some quick tests to check whether the flow rules are programmed correctly on the OVS bridge.

Although we can run a complete devstack system within a VM, a firewall test would mean running at least two nova instances and configuring the ports with FWaaS and SG policies, which needs a lot of manual configuration.

With multiple nova instances running within the devstack VM, it quickly becomes a resource hog on a laptop. Manually bringing up a topology, even the simplest one, is painful and error-prone for the quick setup and tear-down I needed.

This is where I decided to test the new OVS firewall driver using Mininet. Mininet provides an easy way to set up a network topology using network namespaces as hosts connected to OVS-based switches (although other switch types are also available).

The test setup for the Firewall Driver

For my tests, I needed a simple two-node topology connected to an OVS switch. The firewall policies are pushed using OpenFlow rules. Mininet supports OVS switches either connected to SDN controllers or as standalone bridges (OVSBridge).


For the tests we need a standalone OVS bridge; the firewall driver will be used to configure the flow rules.

To start the simulation, all we need is to specify the topology and Mininet can deploy it with bridges and namespaces. The topology can either be custom made using host, switch and link definitions e.g. link1, or we can use one of the prepackaged topologies that come along with Mininet link2.
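For example, a prepackaged one-switch, two-host topology with a standalone bridge comes up with a single command (a sketch; ovsbr is Mininet's standalone-OVS switch type):

sudo mn --topo single,2 --switch ovsbr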

Compared to the nova instance VMs this is very light on resources.

Configuration and access to hosts

One of the good things about a Mininet topology is that all the components come up with a default config. The namespaces are connected to the bridge and configured with IP addresses, all interfaces are up, and IP connectivity is available as soon as the topology is deployed. Mininet also provides APIs to access the configuration. This helps in easy customization of the configuration and integration of the topology with the rest of the automation.

In the default configuration, the OVS bridge is configured with a single flow rule that makes it work like a normal switch. For my tests I had to remove this rule, as more detailed flow configuration will be pushed by the firewall driver based on the security policy.
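Clearing that default rule is a one-liner (s1 is Mininet's default bridge name):

sudo ovs-ofctl del-flows s1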

It is also possible to run custom commands within the hosts. This is important, as we need to start some process to open TCP ports to check the firewall policy. I used netcat to start a TCP server on each of the hosts and checked the connectivity through the firewall.
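From the Mininet CLI that boils down to something like the following, using Mininet's default host addressing (note that the -l flag syntax differs between netcat variants):

mininet> h2 nc -l -p 8000 &
mininet> h1 nc -zv 10.0.0.2 8000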

Demo

Here is a demo of the simulator in action. The SG and FWaaS policies are configured to allow ICMP and TCP port 8000 traffic. The policies are tested with the firewall driver set to SG, FWaaS and both.

The following CLI can be used to view a running trace of the rule hit counts in OVS:

sudo watch 'ovs-ofctl dump-flows s1|grep -v n_packets=0|sed -r "s/\S+//2"|sed -r "s/\S+//5"| sed -r "s/\S+//1"'

The simulator code

The complete code for this automation is quite small and is available as a single file. It allows deploying the two-node topology and configuring the firewall policy; once the configuration is done, the user lands on the CLI prompt, which can be used to run the traffic tests.

The details of using the script can be found in the README.

SecureNet: Simulating a Secure Network with Mininet

I have been working with OpenStack (devstack) for a while, and I must say it is quite convenient to bring up a test setup using devstack. At times, I still feel it is overkill to use devstack for a quick test to verify your understanding of the network/security rules/routing etc.

This is where Mininet shines. It is very light on resources and extremely fast in getting your topology up and running. It cuts the setup down to the absolute necessities and, needless to say, it has proven invaluable to me while trying out various topologies and tests.

Lacking Security Device simulation

The default Mininet toolbox comes with switch and host nodes. The switch is primarily a controller-based SDN switch like Open vSwitch or IVS. The primary focus of Mininet has been L2 networks with SDN controllers.

For me, though, the goal was to test routing and security, much like what is available in OpenStack. I found an example topology in the Mininet code simulating a LinuxRouter: basically, a host node (namespace) configured to do L3 forwarding.

This gave me the initial idea of implementing security devices with Mininet; after all, the reference network services (routing and firewall) in OpenStack are based on namespaces.

A Simulated Perimeter Firewall

As IPTables are available within the namespace, I decided to use them to implement a perimeter firewall that inspects the traffic between the networks. I used a separate chain for my firewall rules and redirected all traffic hitting the FORWARD chain to my custom chain PFW. The last rule in the FORWARD chain drops all traffic.
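A sketch of that IPTables layout, with the 30.0.0.0/24 network taken from the OVS example further down and a second, assumed 10.0.0.0/24 network:

# custom chain holding the firewall rules
iptables -N PFW
iptables -A FORWARD -j PFW
iptables -A FORWARD -j DROP     # last rule: drop everything PFW did not accept

# example PFW content: allow the two networks to talk
iptables -A PFW -s 10.0.0.0/24 -d 30.0.0.0/24 -j ACCEPT
iptables -A PFW -s 30.0.0.0/24 -d 10.0.0.0/24 -j ACCEPT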

So now, when I started my topology with the perimeter firewall, the ping test confirmed that no traffic was flowing between the networks.


To allow traffic we need to specifically configure the Firewall to allow packets.


This took care of securing the network boundary, but what about traffic flowing within the network? OpenStack supports this using micro-segmentation with Security Groups.

Simulating Micro Segmentation with OpenStack-like Security Groups

Well, as I was using Open vSwitch for my topology in standalone mode, I decided to use the OVS packet filtering capabilities to implement a firewall within the network itself. Here is a sample OVS rule doing an L3 packet match and filter:

ovs-ofctl add-flow switch1 dl_type=0x0800,nw_src=30.0.0.100,nw_dst=30.0.0.101,actions=drop

This produces the following entry (verified with ovs-ofctl dump-flows switch1):

cookie=0x0, duration=9.050s, table=0, n_packets=0, n_bytes=0, idle_age=9, ip,nw_src=30.0.0.100,nw_dst=30.0.0.101 actions=drop

Updated CLI to allow Firewall Rules

All this looks good, but it's a lot of work to manually configure all the rules. And I was thinking: how can this be automated?

I extended the switch node in Mininet to implement a secure switch, and while at it I also implemented the perimeter firewall as a specialized host node. My intention in implementing these special nodes was to add the capability of configuring the nodes with security settings. By this time, it was already looking like an interesting Weekend Project 🙂

So I got into it and implemented a set of CLI commands for Mininet to configure the firewall capabilities on the topology (using the security nodes that I introduced). I was calling it the Secure Network.

I wanted to keep things simple to start with, so I kept the rule definitions very coarse, namely L3 filtering only (I may look at L4 in the future). So here is the CLI I came up with:

Securenet] secrule add allow [addr1] to [addr2]
Securenet] secrule add deny [addr1] to [addr3]

Here addr1, addr2 and addr3 are Micro Segments (or Security Groups).

Now, to define firewall rules with Micro Segments (let's call them Security Groups, as this is what I was trying to simulate in the first place), I needed a way to define the Security Groups and associate hosts with them.

To keep things simple I went for automatic creation of Security Groups when a host association command is entered. This was done by extending the host commands. Here is an example of creating a Security Group and associating a host with it:

Securenet] h30 bind sg1

The above command creates a Security Group called sg1 and associates host h30 with it.

To add another host to the same SG, issue the same command with a different host name:

Securenet] h31 bind sg1

This adds h31 to sg1. To view the members of the Security Group, use the sghosts command:

Securenet] sghosts sg1
0: h30
1: h31

Reacting to Security Group changes with events

By this time I was already too excited to stop, and was thinking about the interactions between the firewall rules and the Security Group definitions.

As the high-level firewall rules are composed of Security Groups, the firewall must react to changes in a Security Group's definition. This means a change in a Security Group definition must produce some kind of event that is observable by the firewall, which must then update its configuration accordingly to keep the firewall rule constraints satisfied.

To do this I extended the Topology and Mininet classes so that any change in an SG definition triggers a re-evaluation of the firewall rules.


Again, to keep things reasonably simple, I went with a rip-and-replace of the firewall configuration (both IPTables and OVS) and did not bother about the traffic impact (we are in a simulation after all). But this might be an interesting area to explore in the future.

Simulating IPSets

One thing I was missing while defining the firewall rules was a way to target a set of external IP addresses. As the IP addresses associated with a Security Group definition are derived from its member hosts, there was no way to define rules based on addresses that are not part of the topology.

So another set of CLI commands was introduced to define groups based on IP addresses, manually and without any topology-based constraints.

Securenet] secrule ipg add ipg1 1.1.1.1
Securenet] secrule ipg add ipg2 2.1.1.1
Securenet] secrule ipg add ipg3 2.1.1.2

CLI to view the list of IPGs:

Securenet] secrule ipg list
0: ipg2
1: ipg3
2: ipg1

And to view its contents use the following:

Securenet] secrule ipg show ipg1
0: ipgh-1.1.1.1/32

With IPGs it was possible to have rules with arbitrary addresses.

Conclusion

Here is a run of a simple set of commands on the SecureNet topology.

[Screenshot: sample SecureNet CLI session]

The updated IPTables config in the Perimeter Firewall.

[Screenshot: updated IPTables config]

And the security rules in OVS

[Screenshot: OVS security rules]

This was more of a fun long-weekend project for me. Once I dived in, I realized the number of challenges posed by such an undertaking: rule optimization, minimizing the config changes, targeting the right device to push a security rule to, figuring out duplicate rules and conflicts, etc. Also, as I thought through more use cases, I realized that firewall rule definition needs a language of its own.

But at the end of the weekend I feel mostly it was a lot of fun.