Using Docker In Continuous Delivery

One of our clients asked us to help them reduce the execution times of their tests. The main execution took about 5 hours. This time included about 30 minutes for preparing virtual machine images and setting up the system. The rest of the time was spent on the sequential execution of over 2000 tests, organised into about 50 test suites. The tests were executed overnight. If they failed, there was one chance to re-run the suite during the day, but only if all fixes were ready before lunch.

In addition to the long execution time, the quality of the tests was low. For example, because all the tests were executed in the same environment, developers had to create complex tear-down procedures. These procedures were lengthy and far from perfect, leading to unpredictable failures. Such problems significantly reduced trust in the tests and thus undermined the whole automation effort.

In this article, we will explore how one technology that requires a relatively small transitional effort can dramatically improve the situation described above. That technology is Docker.

 

Containers (OS-Level Virtualization)

Containers allow us to create multiple isolated and secure environments within a single instance of an operating system. As opposed to virtual machines (VMs), containers do not launch a separate OS but instead share the host kernel while maintaining the isolation of resources and processes.

This architectural difference leads to a drastic reduction in the overhead of starting and running instances. The start-up time for a container is about 100 milliseconds, compared with about 30 seconds for a virtual machine. Because of this low overhead, the number of containers running on a typical server can easily measure in the hundreds, whereas the same server would struggle to support ten to fifteen VMs.

 (from ‘Using Containers for Continuous Deployment’, Reznik, available at CMCrossroads. Last accessed: 19/06/2014)
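
This low start-up overhead is easy to observe directly. As a minimal illustration (assuming Docker is installed and a standard base image such as ubuntu has already been pulled), the following command starts a container, runs a trivial program inside it and removes the container again, typically in well under a second:

    # Start a disposable container, run a trivial command, then remove it.
    time docker run --rm ubuntu /bin/true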

 

Testing using Containers

The creation of a new, clean execution environment for each test suite has always been paramount for effective test automation, because test runs must be independent of one another. As a result, great care, effort and a lot of tooling are dedicated to setting up and later tearing down test suites.

Although it is technically possible to achieve clean test runs using VMs, the need to deploy and run a full instance of an operating system makes VMs impractical as a solution.

Containers implemented using Docker, on the other hand, are perfect candidates for such an approach.

Docker is built around images. A number of images can be combined in a ‘layered’ approach to achieve what would otherwise require a full VM. This layered approach to creating Docker images allows existing images to be reused without rebuilding the entire OS file system; instead, only the images that include the changed application itself are rebuilt.

For example, to create an image for a website we would typically use an image of an OS, such as Ubuntu; an image with the middleware, such as the Apache web server; and finally an image of the web application itself. This means that only the last, typically much smaller, image needs to be rebuilt and redeployed to the test servers.
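
As a minimal sketch of this layering (the image names, paths and commands below are purely illustrative), the middleware image could be built once from a short Dockerfile and then reused as the base of the much smaller application image:

    # Dockerfile for the reusable middleware image,
    # built rarely, e.g. as 'example/apache-base' (illustrative name):
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y apache2

    # Dockerfile for the application image, rebuilt on every change:
    FROM example/apache-base
    # Only this small layer changes when the web application changes.
    COPY ./site /var/www/html
    CMD ["apache2ctl", "-D", "FOREGROUND"]

Because the OS and Apache layers already exist on the test servers, only the small application layer has to be rebuilt and transferred.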

Once the image is deployed to the test servers, the container start-up time is less than a second, and the performance of the processes running inside is the same as if they were running natively. The time spent setting up and tearing down the tests is thus significantly reduced.

 

Let’s now see how these differences may improve the situation at the client mentioned earlier.

 

  • Setup Time. Creating a fresh environment, which previously took around 30 minutes of VM preparation, is now reduced to milliseconds.

  • Hardware Utilisation. Once the images are ready, we can deploy as many containers as our hardware can support. And since there is no additional OS instance in each container, we can deploy up to hundreds of them on each physical server.

  • Parallel Execution. Because of better hardware utilisation, each container can execute a separate test suite in parallel, so all tests can be executed in a few minutes instead of a few hours (see the sketch after this list).

  • No Need To Tear Down – Just Trash the Container. Since each test suite is executed in a fully protected environment, we can remove all the tear down procedures and use disposable containers instead, which we simply ‘let die’.

  • Ability to Run Locally (Or Anywhere). Since the same containers can run everywhere, we can allow developers to run the same test suites on their laptops without the need to install and configure complex test automation setups. We can of course allow them to run ‘in the cloud’ or on clusters of abandoned workstations, where we would get even better performance.
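
To make the parallel-execution and disposable-container points concrete, here is a minimal sketch (the image name app-tests, the suite names and the run_suite.sh script are assumptions for the sake of the example). Each suite gets its own throw-away container, and the --rm flag removes the container as soon as the suite finishes, so no tear-down procedure is needed:

    # Run each test suite in its own disposable container, in parallel.
    # 'app-tests' is a hypothetical image containing the application and its tests.
    for suite in login checkout search; do
        docker run --rm app-tests ./run_suite.sh "$suite" &
    done

    # Wait for all containers to finish.
    wait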

 

The changes in the preparation and execution times of the entire test-automation run can be so dramatic that it becomes possible to run the suite on every single commit, as suggested by Martin Fowler in his seminal article “Continuous Integration”.
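
As an illustrative sketch of such a per-commit step (the image name and test script are again assumptions), a CI server would only need to rebuild the small application image and run the containerised suite:

    # Rebuild the application test image and run the full suite on every commit.
    docker build -t app-tests .
    docker run --rm app-tests ./run_all_tests.sh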

 

Docker In Continuous Delivery and DevOps

Continuous Delivery relies on the ability to deploy applications very quickly onto development, testing, acceptance and production environments. At the same time, one of the core requirements for DevOps is the standardisation of tooling between Dev and Ops. Docker containers thus help to achieve the dream of both Continuous Delivery and the DevOps movement.

 

Conclusion

The implications of the adoption of containers for software development are significant. As explained in this article, the reduction in test execution time may be the most obvious benefit to automated testers. However, it is only part of a much more significant picture: Docker containers enable, on the one hand, Continuous Integration and, on the other, the standardisation of tools on which the DevOps movement is premised.

About the Author

Pini

Pini has 15+ years of experience in delivering software in Israel and the Netherlands. Starting as a developer and moving through technical, managerial and consulting positions in the Configuration Management and Operations areas, Pini has acquired a deep understanding of software delivery processes and currently helps organisations around Europe improve their software delivery pipelines by introducing Docker and other cutting-edge technologies.
Find out more about @pini-reznik