Running integration tests on Google Cloud Build using docker-compose

This post is a direct follow-up to Microservices test architecture, where I introduced new kinds of tests to our example project.

Wild Workouts uses Google Cloud Build as its CI/CD platform. It's configured in a continuous deployment manner, meaning changes land on production as soon as the pipeline passes. Considering our current setup, that's both brave and naive: no tests run there that could save us from obvious mistakes (the not-so-obvious mistakes can rarely be caught by tests anyway).

In this article I will show how to run integration, component, and end-to-end tests on Google Cloud Build using docker-compose.
This is not just another article with random code snippets.

This post is part of a bigger series where we show how to build Go applications that are easy to develop, maintain, and fun to work with in the long term. We do it by sharing proven techniques based on many experiments with the teams we lead, as well as on scientific research.

You can learn these patterns by building a fully functional example Go web application with us – Wild Workouts.

We did one thing differently – we included some subtle issues in the initial Wild Workouts implementation. Have we lost our minds? Not yet. These issues are common in many Go projects. In the long term, these small issues become critical and block the development of new features.

Keeping long-term implications in mind is one of the essential skills of a senior or lead developer.

We will fix them by refactoring Wild Workouts. That way, you will quickly understand the techniques we share.

Do you know that feeling when you read an article about some technique and try to implement it, only to be blocked by issues the guide skipped? Cutting these details makes articles shorter and increases page views, but this is not our goal. Our goal is to create content that provides enough know-how to apply the presented techniques. If you haven't read the previous articles in the series yet, we highly recommend doing so.

We believe that in some areas there are no shortcuts. If you want to build complex applications in a fast and efficient way, you need to spend some time learning how. If it were simple, we wouldn't have large amounts of scary legacy code.

So far, we have released 14 articles in this series.

The full source code of Wild Workouts is available on GitHub. Don’t forget to leave a star for our project! ⭐
The current config

Let's take a look at the current cloudbuild.yaml file. While it's pretty simple, most steps run several times because we keep 3 microservices in a single repository. I focus on the backend part, so I will skip all config related to frontend deployment for now.

steps:
  - id: trainer-lint
    name: golang
    entrypoint: ./scripts/lint.sh
    args: [trainer]
  - id: trainings-lint
    name: golang
    entrypoint: ./scripts/lint.sh
    args: [trainings]
  - id: users-lint
    name: golang
    entrypoint: ./scripts/lint.sh
    args: [users]

  - id: trainer-docker
    name: gcr.io/cloud-builders/docker
    entrypoint: ./scripts/build-docker.sh
    args: ["trainer", "$PROJECT_ID"]
    waitFor: [trainer-lint]
  - id: trainings-docker
    name: gcr.io/cloud-builders/docker
    entrypoint: ./scripts/build-docker.sh
    args: ["trainings", "$PROJECT_ID"]
    waitFor: [trainings-lint]
  - id: users-docker
    name: gcr.io/cloud-builders/docker
    entrypoint: ./scripts/build-docker.sh
    args: ["users", "$PROJECT_ID"]
    waitFor: [users-lint]

  - id: trainer-http-deploy
    name: gcr.io/cloud-builders/gcloud
    entrypoint: ./scripts/deploy.sh
    args: [trainer, http, "$PROJECT_ID"]
    waitFor: [trainer-docker]
  - id: trainer-grpc-deploy
    name: gcr.io/cloud-builders/gcloud
    entrypoint: ./scripts/deploy.sh
    args: [trainer, grpc, "$PROJECT_ID"]
    waitFor: [trainer-docker]
  - id: trainings-http-deploy
    name: gcr.io/cloud-builders/gcloud
    entrypoint: ./scripts/deploy.sh
    args: [trainings, http, "$PROJECT_ID"]
    waitFor: [trainings-docker]
  - id: users-http-deploy
    name: gcr.io/cloud-builders/gcloud
    entrypoint: ./scripts/deploy.sh
    args: [users, http, "$PROJECT_ID"]
    waitFor: [users-docker]
  - id: users-grpc-deploy
    name: gcr.io/cloud-builders/gcloud
    entrypoint: ./scripts/deploy.sh
    args: [users, grpc, "$PROJECT_ID"]
    waitFor: [users-docker]

Full source: github.com/ThreeDotsLabs/wild-workouts-go-ddd-example/cloudbuild.yaml

Notice the waitFor key. It makes a step wait only for the steps listed there, so independent jobs can run in parallel.

Here's a more readable version of what's going on: for each service we run a similar workflow – lint (static analysis), build the Docker image, and deploy it as one or two Cloud Run services.
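Each of these steps calls a small helper script from the scripts/ directory, with the service name (and the project ID, where needed) passed as arguments. As a rough sketch of what such a script might do (the actual scripts/build-docker.sh in the repository may differ, and the Dockerfile path below is just a placeholder), it could look like this:

#!/bin/bash
set -e

readonly service="$1"
readonly project_id="$2"

# Build the image under the tag that the deploy step and the CI override
# file expect; the real script may build from a different Dockerfile and
# may also push the image itself.
docker build -t "gcr.io/${project_id}/${service}" -f "docker/${service}/Dockerfile" .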

Since our test suite is ready and works locally, we need to figure out how to plug it into the pipeline.
Docker Compose

We already have one docker-compose definition, and I would like to keep it this way. We will use it for:

    running the application locally,
    running tests locally,
    running tests in the CI.

These three targets have different needs. For example, when running the application locally, we want hot code reloading, which is pointless in the CI. On the other hand, exposing ports on localhost is the easiest way to reach the application locally, but it's not possible in the CI.

Luckily docker-compose is flexible enough to support all of these use cases. We will use a base docker-compose.yml file and an additional docker-compose.ci.yml file with overrides just for the CI. You can run it by passing both files using the -f flag (notice there’s one flag for each file). Keys from the files will be merged in the provided order.

docker-compose -f docker-compose.yml -f docker-compose.ci.yml up -d

Typically, docker-compose looks for the docker-compose.yml file in the current directory or parent directories. Using the -f flag disables this behavior, so only specified files are parsed.
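If you're not sure how the two files end up merged, docker-compose can print the final, resolved configuration:

docker-compose -f docker-compose.yml -f docker-compose.ci.yml config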

To run it on Cloud Build, we can use the docker/compose image.

- id: docker-compose
  name: 'docker/compose:1.19.0'
  args: ['-f', 'docker-compose.yml', '-f', 'docker-compose.ci.yml', 'up', '-d']
  env:
    - 'PROJECT_ID=$PROJECT_ID'
  waitFor: [trainer-docker, trainings-docker, users-docker]

Full source: github.com/ThreeDotsLabs/wild-workouts-go-ddd-example/cloudbuild.yaml

Since we filled waitFor with the proper step names, we can be sure the correct images are present before docker-compose starts.

The first override we add to docker-compose.ci.yml makes each service use a Docker image referenced by tag instead of building one from docker/app/Dockerfile. This ensures our tests check the same images we're going to deploy.

Note the ${PROJECT_ID} variable in the image keys. This needs to be the production project, so we can’t hardcode it in the repository. Cloud Build provides this variable in each step, so we just pass it to the docker-compose up command (see the definition above).

services:
  trainer-http:
    image: "gcr.io/${PROJECT_ID}/trainer"

  trainer-grpc:
    image: "gcr.io/${PROJECT_ID}/trainer"

  trainings-http:
    image: "gcr.io/${PROJECT_ID}/trainings"

  users-http:
    image: "gcr.io/${PROJECT_ID}/users"

  users-grpc:
    image: "gcr.io/${PROJECT_ID}/users"

Full source: github.com/ThreeDotsLabs/wild-workouts-go-ddd-example/docker-compose.ci.yml
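Because docker-compose substitutes ${PROJECT_ID} from the environment of the process that runs it, you can check the resolved image names locally with the same config command as before; my-project below is just a placeholder value:

# my-project is a placeholder; in the CI, Cloud Build provides the real project ID.
PROJECT_ID=my-project docker-compose -f docker-compose.yml -f docker-compose.ci.yml config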
Network

Many CI systems use Docker today, typically running each step inside a container with the chosen image. Using docker-compose in a CI is a bit trickier, as it usually means running Docker containers from within a Docker container.

On Google Cloud Build, all containers live inside the cloudbuild network. Simply adding this network as the default one for our docker-compose.ci.yml is enough for CI steps to connect to the docker-compose services.

Here’s the second part of our override file:

networks:
  default:
    external:
      name: cloudbuild

Full source: github.com/ThreeDotsLabs/wild-workouts-go-ddd-example/docker-compose.ci.yml
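For example, once the services are up, a later build step attached to the same cloudbuild network can reach them by their docker-compose service names instead of localhost. Here's a sketch of such a check, where the port and path are placeholders for illustration only:

# From another Cloud Build step on the "cloudbuild" network, the compose
# services resolve by name; the port and endpoint below are made up.
curl -f "http://users-http:3000/some-endpoint"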
Environment variables

Using environment variables as configuration seems simple at first, but it quickly becomes complex considering how many scenarios we need to handle. Let’s try to list all of them:

    running the application locally,
    running component tests locally,
    running component tests in the CI,
    running end-to-end tests locally,
    running end-to-end tests in the CI.

I didn’t include running the application on production, as it doesn’t use docker-compose.

Why are component and end-to-end tests separate scenarios? The former spin up services on demand, while the latter communicate with services already running within docker-compose. This means the two types use different endpoints to reach the services.
For more details on component and end-to-end tests, see the previous article. The TL;DR version: we focus coverage on component tests, which don't include external services. End-to-end tests are there just to confirm the contract is not broken at a very high level, and only for the most critical path. This is key to keeping services decoupled.

We already keep a base .env file that holds most variables. It’s passed to each service in the docker-compose definition.

Additionally, docker-compose loads this file automatically when it finds it in the working directory. Thanks to this, we can use the variables inside the yaml definition as well.

services:
  trainer-http:
    build:
      context: docker/app
    ports:
      # The $PORT variable comes from the .env file
      - "127.0.0.1:3000:$PORT"
    env_file:
      # All variables from .env are passed to the service
      - .env
    # (part of the definition omitted)

Full source: github.com/ThreeDotsLabs/wild-workouts-go-ddd-example/docker-compose.yml

We also need these variables loaded when running tests.
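One simple way to do it (a sketch, not necessarily what the repository does) is to export the variables in the shell before invoking the tests:

# `set -a` exports every variable assigned while it's active, so sourcing
# .env makes all of its variables visible to the test binaries.
set -a
source .env
set +a
go test ./...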


