Why Docker v1.12.1
In part 4 of this series, I tested Docker v1.12.0 and found that it had two serious issues:
- Load balancing not working
- Overlay network name service not working
Another reason is that I recently noticed the following warning in the log, which worried me:
WARN Udev sync is not supported. This will lead to unexpected behavior, data loss and errors
A ticket has been raised which suggests that the issue may be circumvented by using dynamically linked binaries.
I am hoping that v1.12.1 has resolved these issues, and I would also like to explore building dynamically linked binaries this time.
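To see whether the warning applies to a running daemon, `docker info` prints a "Udev Sync Supported" line when the devicemapper storage driver is in use. A small sketch of that check (assuming your daemon uses devicemapper, as mine did):

```shell
#!/bin/sh
# Extract the udev sync status from `docker info` output.
# The field name "Udev Sync Supported" is what devicemapper prints.
udev_sync_status() {
  grep -i 'udev sync supported' | awk -F': ' '{print $2}'
}

docker info 2>/dev/null | udev_sync_status
```

If this prints "false", the daemon is running with the behaviour the warning describes.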
Building Docker v1.12.1
I checked out the source code for v1.12.1 and followed the procedure I used successfully in building my v1.12.0 binaries in Part 2 of this series.
It turned out that following those steps was not enough to build v1.12.1. After executing:
sudo make build
I issued the following commands:
sudo apt-get update
sudo apt-get install -y btrfs-tools libsqlite3-dev libdevmapper-dev
AUTO_GOPATH=1 ./project/make.sh dynbinary
The last command instructs the build process to create dynamically linked binaries, which is what I wanted to experiment with. However, the compilation failed due to low memory:
Sep 26 20:40:25 c2-swarm-00 kernel: [40463.037797] lowmemorykiller: Killing 'compile' (17353), adj 0,...
Sep 26 20:40:25 c2-swarm-00 kernel: [40463.037797] to free 686512kB on behalf of 'kswapd0' (39) because...
Sep 26 20:40:25 c2-swarm-00 kernel: [40463.037797] cache 31460kB is below limit 65536kB for oom_score_adj 0...
Sep 26 20:40:25 c2-swarm-00 kernel: [40463.037797] Free memory is -1500kB above reserved. nonmove free (6344kB),(1862...
firstname.lastname@example.org posted the solution to this problem on my blog:
Original values:
sudo echo '0,1,6,12' > /sys/module/lowmemorykiller/parameters/adj
sudo echo '1536,2048,4096,16384' > /sys/module/lowmemorykiller/parameters/minfree

Replace them with:
sudo -s
sudo echo '9999' > /sys/module/lowmemorykiller/parameters/adj
sudo echo '1' > /sys/module/lowmemorykiller/parameters/minfree
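The workaround can be wrapped in a small script that saves the current thresholds, relaxes them for the build, and puts the originals back afterwards. This is only a sketch: the sysfs path exists only where the lowmemorykiller module is loaded (as on these ODROID kernels), the writes need root, and `LMK_DIR` is made overridable for testing:

```shell
#!/bin/sh
# Sketch: relax the lowmemorykiller thresholds for the duration of the
# build, then restore the originals. Run as root (cf. `sudo -s` above).
LMK_DIR=${LMK_DIR:-/sys/module/lowmemorykiller/parameters}

relax_lmk() {   # save current thresholds, then effectively disable the killer
  saved_adj=$(cat "$LMK_DIR/adj")
  saved_minfree=$(cat "$LMK_DIR/minfree")
  echo '9999' > "$LMK_DIR/adj"
  echo '1' > "$LMK_DIR/minfree"
}

restore_lmk() { # put the saved thresholds back after the build
  echo "$saved_adj" > "$LMK_DIR/adj"
  echo "$saved_minfree" > "$LMK_DIR/minfree"
}

if [ -d "$LMK_DIR" ]; then
  relax_lmk
  AUTO_GOPATH=1 ./project/make.sh dynbinary
  restore_lmk
fi
```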
After making these changes, I again ran:
AUTO_GOPATH=1 ./project/make.sh dynbinary
and this time it worked. To my surprise, the build only created three binaries: dockerd-1.12.1, docker-proxy-1.12.1 and docker-1.12.1 (not counting their symbolically linked counterparts). The binaries docker-containerd, docker-containerd-ctr, docker-containerd-shim and docker-runc were not created, unlike in the static binary build!
I saved the dynamically linked binaries and proceeded to build the statically linked binaries by issuing the command:
sudo make binary
This time it created all the binaries but they were statically linked.
Initial Testing of 1.12.1 Binaries
I proceeded to replace my 1.12.0 binaries with the 3 newly built dynamically linked binaries and the 4 statically linked ones. The “udev sync” warning did not appear. I was using the Armbian Xenial server, so I decided to try the binaries on the Armbian Jessie server as well. There, however, the dynamically linked binaries would not work: they failed to link to the right versions of the required libraries, even though I checked and found that these libraries were already installed. I then replaced all the dynamically linked binaries with the static ones, and Docker ran without issuing the “udev sync” warning. Consequently, I also switched to the statically linked binaries on my Xenial server.

As a service to the ODROID community, I am making all the 1.12.1 statically linked binaries, together with an installation script, available on Github:
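The installation script amounts to copying the versioned binaries into place and creating the unversioned symlinks Docker expects. A rough sketch (the source/destination paths and exact file names are assumptions; adjust them to where your build put the bundles):

```shell
#!/bin/sh
# Copy the 1.12.1 binaries to a destination directory and create the
# unversioned symlinks (docker -> docker-1.12.1, etc.).
# Usage: install_bins ./bundles/1.12.1/binary /usr/local/bin   (run as root)
install_bins() {
  src=$1; dest=$2
  for bin in docker dockerd docker-proxy docker-containerd \
             docker-containerd-ctr docker-containerd-shim docker-runc; do
    install -m 755 "$src/$bin-1.12.1" "$dest/$bin-1.12.1"
    ln -sf "$bin-1.12.1" "$dest/$bin"
  done
}
```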
I wanted to build the .deb package but failed. demaniak posted a comment on my blog saying:
The make deb target has some missing components. There is an issue open about this: https://github.com/docker/docker/issues/27045
That is why I wrote an installation script to go with the binaries instead of providing a .deb package for easy installation.
Load Balancing Issue
Swarm mode is supposed to perform load balancing: if you invoke a service on a swarm node that is not running it, the request is redirected to a node that is. This means that if the service is running on 3 nodes, you can invoke it on any node in the swarm cluster, not only on the nodes running it. That was not the behaviour I witnessed with 1.12.0 swarm mode. I could not access the service on the nodes not running it, and after scaling the number of replicas up and down, sometimes I could not even invoke the service on a node that was running it! I searched the Internet and found many people reporting the same issue. The problem was thought to be swarm mode not updating the IPVS (IP Virtual Server) tables correctly; IPVS is the kernel module responsible for load balancing. I am sorry to report that the problem is still not resolved in Docker 1.12.1.
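The check itself is simple: request the service's published port on every node, since with a working routing mesh each node should answer even if it runs no task. A sketch of that probe (the node addresses and port below are placeholders for my swarm nodes):

```shell
#!/bin/sh
# probe HOST PORT -> prints OK on HTTP 200, FAIL otherwise.
probe() {
  code=$(curl -s -o /dev/null -m 2 -w '%{http_code}' "http://$1:$2/") || true
  if [ "$code" = "200" ]; then
    echo "$1: OK"
  else
    echo "$1: FAIL (got $code)"
  fi
}

# Placeholder node addresses and published port:
for node in 192.168.1.100 192.168.1.101 192.168.1.102; do
  probe "$node" 8080
done
```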
When I pointed my web browser at a node that was not running the service, I could see the error messages shown in syslog:
Overlay Network Service Name Issue
This issue was reported in Part 4 of this series. rajkumar49 summed up the issue succinctly:
I am facing this issue with overlay networks. I started all Docker Swarm services on the same overlay network using the Docker 1.12.1 engine. I can access a container using the service name on the same host only; accessing containers on another host by service name does not work. I have even tried the --listen-addr method when launching the Swarm manager and Swarm worker. Related closed ticket: #23855. Also, I can see that the overlay network allocated IP addresses to all containers on all the hosts, and I can ping the VIP of the service from that service's containers. Please help.
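A hypothetical reproduction of rajkumar49's scenario might look like the following. The swarm must already be initialised, and the network and service names here are made up; the whole thing is guarded so it is a no-op where docker is not installed:

```shell
#!/bin/sh
# Made-up names: overlay network "mynet", services "web" and "probe".
svc=web
if command -v docker >/dev/null 2>&1; then
  docker network create -d overlay mynet
  docker service create --name "$svc" --replicas 3 --network mynet nginx
  docker service create --name probe --network mynet alpine sleep 1d
  # From a probe task's container, the service name should resolve on ANY
  # node; per the reports above, in 1.12.1 it only resolved on a host that
  # was also running a "$svc" task.
  cid=$(docker ps -q -f name=probe | head -n 1)
  docker exec "$cid" ping -c 1 "$svc"
fi
```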
Unfortunately, this is still not working in 1.12.1. When I tried to access the container by name, the following errors were found in syslog:
To say that I am a little disappointed after going so far in my experiments with Docker, and swarm mode in particular, is an understatement. In the meantime, I shall start experimenting with Kubernetes, an open-source system for automating deployment, scaling, and management of containerized applications. To many, it is a more feature-rich and more mature orchestration engine than Docker Swarm Mode. Of course, I am still interested in Docker Swarm Mode, and I shall try again once a newer version of Docker is available. Until then, I shall explore Kubernetes and report on the outcome… hopefully soon.