
Building My Odroid-C2 Docker Cloud Part 3 – Build and Test Drive

The journey so far

My last article documented how I built Docker 1.12.0 from source on my Odroid-C2. Docker 1.12 has swarm mode, i.e., clustering, built in. In this article, I assemble my 5 Odroid-C2 single board computers into a cluster and test-drive it with simple swarm mode commands. This is to make sure that Docker 1.12 is working before getting into more advanced swarm mode features and running a more realistic workload on the cluster in Part 4.

Cluster Hardware Setup

Here is the bill of materials for my Docker cluster:

  • 5 X Odroid-C2s
  • 1 X D-Link DGS-1008A 8 Port Gigabit Desktop Switch
  • 1 X 5V 15A switching power supply
  • 1 X USB disk drive
  • 5 X Ethernet cables
  • 1 X 1-to-8 power splitter cable
  • 1 X custom-built cluster holder (cannibalized from my Odroid-U3 cluster)

The front and back views of the assembled cluster are shown below:

 

 

[Image: Front View]

[Image: Back View]

Cluster Software Setup

The cluster consists of 5 Odroid-C2 single board computers. Their host names and static IP addresses are as follows:

  • c2-swarm-00 – 192.168.1.100 (manager)
  • c2-swarm-01 – 192.168.1.101 (node 1)
  • c2-swarm-02 – 192.168.1.102 (node 2)
  • c2-swarm-03 – 192.168.1.103 (node 3)
  • c2-swarm-04 – 192.168.1.104 (node 4)

Only c2-swarm-00 has a USB disk drive connected.
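
Since everything downstream assumes these names resolve, it helps if each board can reach its peers by host name. One way to do this is via /etc/hosts rather than DNS; a minimal sketch, using exactly the names and addresses listed above, replicated on all five nodes:

# /etc/hosts entries shared by all five nodes
192.168.1.100 c2-swarm-00
192.168.1.101 c2-swarm-01
192.168.1.102 c2-swarm-02
192.168.1.103 c2-swarm-03
192.168.1.104 c2-swarm-04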

The software packages installed on the cluster include:

  • SSH keys for password-less login
  • nfs-kernel-server on c2-swarm-00 and nfs-common on the rest
  • Go 1.6.2 on c2-swarm-00
  • Docker 1.12.0 compiled from source
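
For readers following along, here is a hedged sketch of how the SSH keys and the NFS share can be wired up. The odroid user name and the /srv/nfs export path are placeholders of mine, not necessarily what is on the actual cluster:

# On c2-swarm-00: generate a key pair once, then push it to every node
ssh-keygen -t rsa
for node in c2-swarm-01 c2-swarm-02 c2-swarm-03 c2-swarm-04; do
  ssh-copy-id odroid@$node
done

# On c2-swarm-00: export the USB drive over NFS (path is a placeholder)
echo '/srv/nfs 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On each of the other nodes: mount the share
sudo mount -t nfs c2-swarm-00:/srv/nfs /mnt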

Test-Driving Swarm Mode

My cluster is now assembled and ready for testing. To bring up swarm mode, issue the following command on the manager:

docker swarm init --advertise-addr 192.168.1.100

which returns

Swarm initialized: current node (8jw6y313hmt3vfa1fme1dinro) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join \
--token SWMTKN-1-2gvqzfx48uw8zcokwl5033iwdel2rl9n96lc0wj1qso7lrztub-aokks5xcm5v7c4usmeswsgg1k \
192.168.1.100:2377

To add a manager to this swarm, run the following command:

docker swarm join \
--token SWMTKN-1-2gvqzfx48uw8zcokwl5033iwdel2rl9n96lc0wj1qso7lrztub-1pjgcl8msc6ivn31quorcfsxg \
192.168.1.100:2377

docker info shows:

[Image: Swarm Mode]

To make the other nodes join the cluster, issue the following command on each node:

docker swarm join \
--token SWMTKN-1-2gvqzfx48uw8zcokwl5033iwdel2rl9n96lc0wj1qso7lrztub-aokks5xcm5v7c4usmeswsgg1k \
192.168.1.100:2377
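
If the join command scrolls away, there is no need to re-initialize the swarm; Docker 1.12 can re-print either token from the manager:

# On c2-swarm-00: re-print the worker or manager join command
docker swarm join-token worker
docker swarm join-token manager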

To see the result, issue the following command on the manager:

docker node ls

which shows:

[Image: docker node ls]

Now pull down a small busybox image and create a ping service which pings the manager continuously.

docker pull arm64el/busybox-arm64el

docker service create --replicas 1 --name pingservice arm64el/busybox-arm64el /bin/ping 192.168.1.100

docker service ls

docker service inspect --pretty pingservice
[Image: docker service create]
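
A caveat about the docker pull above: in swarm mode each node pulls the image itself when a task is scheduled on it, so pulling on the manager alone does not seed the workers. With the password-less SSH set up earlier, a pre-pull across the workers might look like this (a sketch, assuming the same user account on every node):

# Pre-pull the image on every worker so tasks start without a pull delay
for node in c2-swarm-01 c2-swarm-02 c2-swarm-03 c2-swarm-04; do
  ssh $node docker pull arm64el/busybox-arm64el
done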

We can see that there is only 1 instance of the service running. To scale it to 5 instances, issue the following commands:

docker service scale pingservice=5
docker service ps pingservice
[Image: docker service scale]

Since I used the service scale command to set the desired state to 5 instances of pingservice, I expect the swarm to spin up new containers to maintain that count when I shut down nodes c2-swarm-03 and c2-swarm-04. That is indeed the case, as can be seen below (now running 2 containers on c2-swarm-00, 2 on c2-swarm-02, and 1 on c2-swarm-01):

[Image: Maintaining Desired State]
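
Powering boards off is the blunt way to test rescheduling. If you want tasks moved without a hard failure, a gentler alternative is to drain the node first:

# Mark a node unavailable for tasks; the swarm reschedules its containers
docker node update --availability drain c2-swarm-03

# Bring it back into rotation afterwards
docker node update --availability active c2-swarm-03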

When done testing, issue the command:

docker service rm pingservice

and shut down the cluster.
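
For a clean teardown rather than simply cutting power, each node can also leave the swarm explicitly (--force is required on the last remaining manager):

# On each worker
docker swarm leave

# On the manager
docker swarm leave --force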

What Next?

This concludes the initial setup and test drive of my Docker cluster. So far, we are not doing any useful work with Docker. I shall remedy that in the next instalment, in which more advanced swarm mode commands will illustrate using Docker to run a more realistic production workload that requires data persistence. So, stay tuned!