This blog is part of a three-part series on etcd.
What is Etcd?
In a multi-node cluster hosting a service, each node needs a consistent view of the shared service configuration. At the same time, the configuration must remain highly available and survive node failures. Etcd is a highly available, distributed configuration data store designed for such clustered environments.
For a distributed system to remain consistent, its data needs to be in agreement with at least a majority of the cluster members. Consensus building is the process of reaching agreement among a majority of the cluster members. A cluster must constantly strive to be in consensus for its data to remain consistent.
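To make the majority requirement concrete: the quorum of an n-member cluster is floor(n/2) + 1. A quick sketch in shell (the `quorum` helper is only for illustration, not part of etcd):

```shell
# Quorum: the smallest majority of an n-member cluster, floor(n/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # prints 2: a 3-node cluster stays writable with 1 node down
quorum 5   # prints 3: a 5-node cluster tolerates 2 failures
```

This is why odd cluster sizes are preferred: a 4-node cluster needs 3 members for quorum, so it tolerates no more failures than a 3-node cluster.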
Etcd Test Setup
To try out Etcd we will need a cluster. For our testing, we will create a very simple setup using Linux network namespaces, with a separate data-store directory per node, on an Ubuntu 14.10 virtual machine. The following diagram describes our setup.
The setup has three cluster nodes and a client node for testing. All the nodes are implemented as network namespaces and connect to a bridge on an Open vSwitch instance (a Linux bridge can be used as well) using veth pairs.
The scripts for bringing up a simple namespace-based etcd cluster are available here. The script setup.sh brings up a 2-node namespace-based setup; additional nodes can be added with the add_node.sh script.
- Run setup.sh to bring up 2 nodes.
$ sudo bash setup.sh
chandan@chandan-VirtualBox:~/TA/etcd_prebuilt/etcd/setup$ sudo bash setup.sh
+ cleanup
+ ip netns add ns1
+ ip netns add ns2
+ ip link add veth1-ovs type veth peer name veth1-ns
+ ip link add veth2-ovs type veth peer name veth2-ns
+ ip link set veth1-ns netns ns1
+ ip link set veth2-ns netns ns2
+ ip netns exec ns1 ifconfig veth1-ns 126.96.36.199/24 up
+ ip netns exec ns2 ifconfig veth2-ns 188.8.131.52/24 up
+ ifconfig veth1-ovs up
+ ifconfig veth2-ovs up
+ ovs-vsctl add-br br-ns
+ ovs-vsctl set bridge br-ns datapath_type=netdev
+ ovs-vsctl add-port br-ns veth1-ovs
+ ovs-vsctl add-port br-ns veth2-ovs
- Run add_node.sh to add the 3rd node.
$ sudo bash add_node.sh 3
chandan@chandan-VirtualBox:~/TA/etcd_prebuilt/etcd/setup$ sudo bash add_node.sh 3
+ node_no=3
+ [[ 3 == '' ]]
+ ip netns add ns3
+ ip link add veth3-ovs type veth peer name veth3-ns
+ ip link set veth3-ns netns ns3
+ ip netns exec ns3 ifconfig veth3-ns 184.108.40.206/24 up
+ ovs-vsctl add-port br-ns veth3-ovs
+ ifconfig veth3-ovs up
- Run add_node.sh to add a 4th node to act as the client.
$ sudo bash add_node.sh 4
Start Etcd on cluster nodes
Download a pre-built version of etcd from GitHub and extract the etcd and etcdctl binaries.
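For example, with a v2.x release tarball from the coreos/etcd GitHub releases page (the version number below is only an example; substitute the release you actually want):

```shell
# Fetch and unpack a pre-built etcd release; adjust VER to the release you need.
VER=v2.0.0
wget https://github.com/coreos/etcd/releases/download/${VER}/etcd-${VER}-linux-amd64.tar.gz
tar xzf etcd-${VER}-linux-amd64.tar.gz
# Copy the two binaries we need into the current directory.
cp etcd-${VER}-linux-amd64/etcd etcd-${VER}-linux-amd64/etcdctl .
```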
Etcd provides flags to configure its runtime behavior. For ease of use, I have wrapped the flag settings into a script, etcd_start.sh. This script assumes the etcd binary is in the current directory.
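The exact script is in the repository linked above, but a minimal sketch of such a wrapper might look like the following, assuming static cluster bootstrapping. The node names and client ports (2379/4001) match the member list shown later; the 10.0.0.x addresses are placeholders you would replace with your namespace IPs:

```shell
#!/bin/bash
# etcd_start.sh (sketch) -- start one member of a static 3-node etcd cluster.
# Usage: bash etcd_start.sh <node_number>
# The 10.0.0.x addresses below are placeholders for the namespace IPs.
N=$1
IP=10.0.0.${N}

./etcd \
  -name Node_${N} \
  -data-dir /tmp/etcd_node${N} \
  -listen-peer-urls http://${IP}:2380 \
  -initial-advertise-peer-urls http://${IP}:2380 \
  -listen-client-urls http://${IP}:2379,http://${IP}:4001 \
  -advertise-client-urls http://${IP}:2379,http://${IP}:4001 \
  -initial-cluster Node_1=http://10.0.0.1:2380,Node_2=http://10.0.0.2:2380,Node_3=http://10.0.0.3:2380 \
  -initial-cluster-state new
```

With static bootstrapping, every member is started with the same -initial-cluster list, so each node knows its peers up front; etcd also supports discovery-based bootstrapping for clusters whose membership is not known in advance.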
You can use screen or terminator to open multiple terminals to monitor the formation of the cluster. The setup script in the previous section created a namespace per node. On three different terminals, start a shell in each namespace using the following command, replacing node_number with the actual node number.
$ sudo ip netns exec ns<node_number> bash
This will start a root shell in the namespace.
Use the etcd_start.sh wrapper to start etcd as follows:
# bash etcd_start.sh
Cluster formation on my test setup is captured in the following screenshot.
Testing basic operation
Etcdctl is the command-line client used to interact with the cluster. It needs to be told about the cluster's client endpoints.
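The endpoints can be passed on each invocation or exported once in the environment. A sketch, using placeholder 10.0.0.x addresses for the client URLs reported by member list (depending on the etcdctl release, the option is --peers / ETCDCTL_PEERS or the newer --endpoints):

```shell
# Export the client URLs once for the session; etcdctl will try each peer.
export ETCDCTL_PEERS=http://10.0.0.1:2379,http://10.0.0.2:2379,http://10.0.0.3:2379
./etcdctl cluster-health

# Equivalently, pass the peers on a single invocation:
./etcdctl --peers http://10.0.0.1:2379 cluster-health
```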
- Let's check the cluster status.
Client $ ./etcdctl cluster-health
cluster is healthy
member 14fb055fe2856bf7 is healthy
member a19202a30ffa8431 is healthy
member c9899e68a83f6626 is healthy
Client $ ./etcdctl member list
14fb055fe2856bf7: name=Node_2 peerURLs=http://220.127.116.11:2380 clientURLs=http://18.104.22.168:2379,http://22.214.171.124:4001
a19202a30ffa8431: name=Node_1 peerURLs=http://126.96.36.199:2380 clientURLs=http://188.8.131.52:2379,http://184.108.40.206:4001
c9899e68a83f6626: name=Node_3 peerURLs=http://220.127.116.11:2380 clientURLs=http://18.104.22.168:2379,http://22.214.171.124:4001
Cluster status is healthy
- Let's try to set a data item and retrieve its value.
Client $ ./etcdctl set name "Chandan Dutta Chowdhury"
Chandan Dutta Chowdhury
Client $ ./etcdctl get name
Chandan Dutta Chowdhury
Cluster is functional
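Beyond set and get, a quick way to see the "shared configuration" idea in action is to watch a key from one shell while updating it from another; the transcript below is a sketch using etcdctl's standard watch command:

```shell
# In one shell on the client node: block until the key changes.
Client $ ./etcdctl watch name

# In a second shell: update the key.
# The watcher in the first shell wakes up and prints the new value.
Client $ ./etcdctl set name "new value"
```

This is the primitive that lets every node in a cluster react to a configuration change the moment it is made.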
- Let's stop the leader and check whether the data survived the loss of the leader.
Client $ ./etcdctl cluster-health
cluster is healthy
member 14fb055fe2856bf7 is healthy
member a19202a30ffa8431 is unhealthy
member c9899e68a83f6626 is healthy
Client $ ./etcdctl get name
Chandan Dutta Chowdhury
Client $ ./etcdctl set name chandan
chandan
The cluster is still functional after the loss of one member (the leader).
- Let's stop another member and check the cluster.
client $ ./etcdctl get name
chandan
client $ ./etcdctl set name "chandan dutta chowdhury"
Error: 501: All the given peers are not reachable (Tried to connect to each peer twice and failed)
client $
With only one of its three members left, the cluster has lost quorum: reads still succeed, but new key-value pairs cannot be set, leaving the cluster effectively read-only.
In this post, we looked at Etcd, a distributed, highly available configuration data store for clustered environments. We also looked at the cluster's behavior on the loss of minority and majority members. We now have a 3-node etcd cluster to try basic etcd features. In the following posts, we will look at the details of the etcd APIs, service discovery, and creating a simple application using etcd.