Sunday, December 11, 2016

Automated Integration Testing in a Microservices World.

Everyone's dabbling with microservices these days. It turns out that writing distributed applications is difficult. They offer advantages, to be sure, but there is no free lunch. One of the difficulties is automated integration testing: testing a microservice together with all the external resources it needs, including any other services it calls, the databases it uses, and any queues it uses. Just setting up and maintaining these resources can be a daunting task that often requires specialized labor. All too often, integration testing is difficult enough that the task is sloughed off. Fortunately, Docker and its companion product Docker Compose can make integration testing much easier.

Standardizing on Docker images as deployment artifacts makes integration testing easier. Most organizations writing microservices seem to adopt Docker as a deployment artifact anyway, as it greatly speeds up interaction between application developers and operations. It facilitates integration testing as well: you can temporarily deploy the services you consume (or mocks for them) and run integration tests against them. Additionally, consumers of your service can temporarily deploy your Docker image to perform their own integration tests. However, as most services also need databases, message queues, and possibly other resources to function properly, that isn't the end of the story. I've previously written about how to dockerize (is that a word?) your own services and applications here.

Docker images already exist for most database and messaging software, and it's possible to leverage these for your own integration testing. In other words, the community has done part of your setup work for you. For example, if my service needs a PostgreSQL database to function, I leverage the official Docker image for my integration tests. As it turns out, the Postgres Docker image is very easy to consume for integration testing. All I need to do is mount the directory '/docker-entrypoint-initdb.d' and make sure that directory contains any SQL files and/or shell scripts needed to set the database up for use by my application. The MySQL Docker image does something similar. For messaging, similar Docker distributions exist for RabbitMQ, ActiveMQ, and Kafka. Note that ActiveMQ and Kafka aren't yet "official" Docker images.
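As a minimal sketch of that approach (the 'init-scripts' directory and password here are hypothetical placeholders), starting a disposable PostgreSQL container for a test might look like this:

```bash
# Any *.sql or *.sh files in init-scripts run automatically on the
# container's first startup, initializing the database for the test.
docker run -d --name test-postgres \
    -e POSTGRES_PASSWORD=postgres \
    -p 5432:5432 \
    -v "$(pwd)/init-scripts:/docker-entrypoint-initdb.d" \
    postgres
```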

Docker Compose makes it very easy to assemble multiple images into a consistent and easily deployable environment. Docker-compose configurations are YAML files. Detailed documentation can be found here. It is out of scope for this blog entry to do a complete overview of Docker Compose, but I'll point you to an open source example and discuss a couple of snippets from the example as an illustration.

The snippet below comes from a docker-compose configuration; the full source is here. Note that each section under services describes a Docker image that's to be deployed and possibly built. In this snippet, the images vote, redis, worker, and db are to be deployed. Note that vote and worker will be built (i.e., turned into Docker images) before they are deployed. For images already built, it's only necessary to list the image name.
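Here's an approximation of that snippet (reconstructed from the example-voting-app project; image names and build paths may differ slightly from the current version of that repository):

```yaml
version: "2"

services:
  vote:
    build: ./vote              # built into an image, then deployed
    command: python app.py
    ports:
      - "5000:80"              # host port 5000 -> container port 80
  redis:
    image: redis:alpine        # already built; just list the image name
  worker:
    build: ./worker            # also built before deployment
  db:
    image: postgres:9.4
    volumes:
      - "db-data:/var/lib/postgresql/data"

volumes:
  db-data:
```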

Other common compose directives are as follows:
  • volumes -- links a directory on the host to a directory inside the container
  • ports -- links a port on the host to a port inside the container. For example, vote links host port 5000 to container port 80.
  • command -- specifies the command that will be run inside the Docker container at startup.
  • environment -- (not illustrated here) allows you to set environment variables within the Docker container


Assemble and maintain a Docker Compose configuration for your services. This serves your own integration tests, and it also lets your consumers easily see what resources you require should they want to run integration tests of their own. They can even use that compose configuration directly, including it when they set up their own integration tests.
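For example, Docker Compose can merge multiple configuration files, so a consumer can layer their own services on top of the file you publish (the file names here are hypothetical):

```bash
# Merges the published configuration with the consumer's additions
# and starts the combined environment in the background.
docker-compose -f my-service-compose.yml -f consumer-additions.yml up -d
```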

The Docker environment for your integration tests should be started and shut down as part of the test execution. This has many advantages over maintaining the environment separately in an "always on" state. When integration tests aren't running, they don't consume resources. Those integration tests, along with their environment, can easily be run locally by developers who need to debug issues; debugging separate environments is always more problematic. Furthermore, integration tests can easily and painlessly be hosted anywhere (e.g., on premises or in the cloud) and are host agnostic.

An Integration Test Example

I would be remiss if I didn't pull these concepts together into an integration test example for you. For my example, I'm leveraging a generic health check written to make sure that a RabbitMQ environment is up and functioning. The source for the check is here, but we're more interested in its integration test today.

This test utilizes the DockerProcessAPI toolset, as I don't currently work in environments that require docker-machine and the Docker Remote API (I work on Linux and Windows 10 Pro/Enterprise). If your environment requires docker-machine (e.g., it's a Mac or an earlier version of Windows), then I recommend the Spotify docker-client instead.

The integration test for the health check uses Docker to establish a RabbitMQ environment before the test and shut it down after the test. This part is written as a JUnit test using the @BeforeClass and @AfterClass annotations to bring the environment up once for the entire test class rather than for each test individually.

In this example, I first pull the latest RabbitMQ image (the official distribution). I then map a port for RabbitMQ to use and start the container. I wait five seconds for the environment to initialize, then log the Docker containers currently running.
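A sketch of that setup and teardown follows. The DockerProcessAPI method names below are illustrative placeholders rather than the library's verified signatures; consult the project's documentation for the actual calls:

```java
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class RabbitMQHealthCheckIntegrationTest {

    private static String containerId;

    @BeforeClass
    public static void startRabbitMQ() throws Exception {
        // Illustrative calls -- see the DockerProcessAPI docs for exact method names.
        DockerProcessAPI.pull("rabbitmq:latest");        // pull the official image
        containerId = DockerProcessAPI.run("rabbitmq:latest",
                "-p", "5672:5672");                      // map the AMQP port and start
        Thread.sleep(5000);                              // give RabbitMQ time to initialize
        System.out.println(DockerProcessAPI.ps());       // log running containers
    }

    @AfterClass
    public static void stopRabbitMQ() throws Exception {
        DockerProcessAPI.stop(containerId);              // shut down the instance
        System.out.println(DockerProcessAPI.ps());       // log again for later investigation
    }

    // ... test methods exercising the health check go here ...
}
```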

Logging the running Docker containers isn't technically required. It does help when there are port conflicts on the machine where the test runs, or other problems with a failed test that need to be investigated. As this test runs on a schedule, I don't always know the execution context.

After the test completes, the @AfterClass method shuts down the RabbitMQ instance I started and once again logs a container listing, just in case something needs to be investigated.

That's a very short example. Had the integration test environment been more complicated and had I needed Docker Compose, that would have been relatively simple with the DockerProcessAPI as well. Here's an example of bringing up a Docker Compose environment given a compose configuration YAML:
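(The compose-related method name below is an illustrative placeholder, not a verified DockerProcessAPI signature.)

```java
// Equivalent of 'docker-compose up -d' for the given configuration.
File composeConfig = new File("docker-compose.yml");
DockerProcessAPI.composeUp(composeConfig);
```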

And here's an example of bringing that same environment back down after the test:
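(Again, the method name is illustrative rather than verified.)

```java
// Equivalent of 'docker-compose down' for the same configuration.
DockerProcessAPI.composeDown(composeConfig);
```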

In addition, the DockerProcessAPI has convenience methods that can log the compose environments that are running, for investigative purposes later.

Thanks for taking time to read this entry. Feel free to comment or contact me if you have questions.  
