A Promising Methodology for Testing GenAI Applications in Java
https://www.docker.com/blog/testing-genai-applications-in-java/

In the vast universe of programming, the era of generative artificial intelligence (GenAI) has marked a turning point, opening up a plethora of possibilities for developers.

Tools such as LangChain4j and Spring AI have democratized access to the creation of GenAI applications in Java, allowing Java developers to dive into this fascinating world. With LangChain4j, for instance, setting up and interacting with large language models (LLMs) has become exceptionally straightforward. Consider the following Java code snippet:

public static void main(String[] args) {
    var llm = OpenAiChatModel.builder()
            .apiKey("demo")
            .modelName("gpt-3.5-turbo")
            .build();
    System.out.println(llm.generate("Hello, how are you?"));
}

This example illustrates how a developer can quickly instantiate an LLM within a Java application. By simply configuring the model with an API key and specifying the model name, developers can begin generating text responses immediately. This accessibility is pivotal for fostering innovation and exploration within the Java community. More than that, we have a wide range of models that can be run locally, and various vector databases for storing embeddings and performing semantic searches, among other technological marvels.
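
For example, here is a quick sketch of pointing LangChain4j at a locally running model instead of the OpenAI API. It assumes the langchain4j-ollama module is on the classpath and an Ollama server is running locally; the model name is illustrative:

var localLlm = OllamaChatModel.builder()
        .baseUrl("http://localhost:11434") // default Ollama endpoint
        .modelName("llama3")               // illustrative model name
        .build();
System.out.println(localLlm.generate("Hello, how are you?"));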

Despite this progress, however, we are faced with a persistent challenge: the difficulty of testing applications that incorporate artificial intelligence. This aspect seems to be a field where there is still much to explore and develop.

In this article, I will share a methodology that I find promising for testing GenAI applications.


Project overview

The example project focuses on an application that provides an API for interacting with two AI agents capable of answering questions. 

An AI agent is a software entity designed to perform tasks autonomously, using artificial intelligence to simulate human-like interactions and responses. 

In this project, one agent uses direct knowledge already contained within the LLM, while the other leverages internal documentation to enrich the LLM through retrieval-augmented generation (RAG). This approach allows the agents to provide precise and contextually relevant answers based on the input they receive.

I prefer to omit the technical details about RAG, as ample information is available elsewhere. I’ll simply note that this example employs a particular variant of RAG, which simplifies the traditional process of generating and storing embeddings for information retrieval.

Instead of dividing documents into chunks and making embeddings of those chunks, in this project, we will use an LLM to generate a summary of the documents. The embedding is generated based on that summary.

When the user writes a question, an embedding of the question will be generated and a semantic search will be performed against the embeddings of the summaries. If a match is found, the user’s message will be augmented with the original document.

This way, there’s no need to deal with the configuration of document chunks, worry about setting the number of chunks to retrieve, or worry about whether the way of augmenting the user’s message makes sense. If there is a document that talks about what the user is asking, it will be included in the message sent to the LLM.
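
As a rough illustration of the flow described above (not the project’s actual code), the summary-based indexing and retrieval could be sketched with LangChain4j as follows. A chat model, an embedding model, and an in-memory embedding store are assumed to be available, the helper name is made up for the example, and in a real application the indexing would run once at startup rather than per request:

static String augmentWithMatchingDocument(ChatLanguageModel llm,
                                          EmbeddingModel embeddingModel,
                                          EmbeddingStore<TextSegment> store,
                                          List<String> documents,
                                          String userQuestion) {
    // Index: summarize each document with the LLM, embed the summary,
    // and keep the original document as the stored segment.
    for (String document : documents) {
        String summary = llm.generate("Summarize the following document:\n" + document);
        store.add(embeddingModel.embed(summary).content(), TextSegment.from(document));
    }

    // Query: embed the user's question and search against the summary embeddings.
    Embedding questionEmbedding = embeddingModel.embed(userQuestion).content();
    List<EmbeddingMatch<TextSegment>> matches = store.findRelevant(questionEmbedding, 1, 0.7);

    // If a summary matches, augment the user's message with the original document.
    return matches.isEmpty()
            ? userQuestion
            : userQuestion + "\n\nUse this document to answer the question:\n"
                    + matches.get(0).embedded().text();
}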

Technical stack

The project is developed in Java and utilizes a Spring Boot application with Testcontainers and LangChain4j.

For setting up the project, I followed the steps outlined in Local Development Environment with Testcontainers and Spring Boot Application Testing and Development with Testcontainers.

I also use Testcontainers Desktop to facilitate database access, verify the generated embeddings, and review the container logs.

The challenge of testing

The real challenge arises when trying to test the responses generated by language models. Traditionally, we could settle for verifying that the response includes certain keywords, which is insufficient and prone to errors.

static String question = "How I can install Testcontainers Desktop?";
@Test
    void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
        String answer  = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
        assertThat(answer).contains("https://testcontainers.com/desktop/");
    }

This approach is not only fragile but also lacks the ability to assess the relevance or coherence of the response.

An alternative is to employ cosine similarity to compare the embeddings of a “reference” response and the actual response, providing a more semantic form of evaluation. 

This method measures the similarity between two vectors/embeddings by calculating the cosine of the angle between them. If both vectors point in the same direction, it means the “reference” response is semantically the same as the actual response.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
       - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
       - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
       - Answer must be less than 5 sentences
       """;
@Test
    void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
        String answer  = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
        double cosineSimilarity = getCosineSimilarity(reference, answer);
        assertThat(cosineSimilarity).isGreaterThan(0.8);
    }
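
The getCosineSimilarity() helper isn’t shown above. A minimal sketch of it, assuming a local LangChain4j embedding model such as AllMiniLmL6V2EmbeddingModel is on the classpath, could compute the similarity directly from the embedding vectors:

static double getCosineSimilarity(String reference, String answer) {
    var embeddingModel = new AllMiniLmL6V2EmbeddingModel();
    float[] a = embeddingModel.embed(reference).content().vector();
    float[] b = embeddingModel.embed(answer).content().vector();

    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.length; i++) {
        dot += a[i] * b[i];   // numerator: dot product
        normA += a[i] * a[i]; // squared magnitude of the reference embedding
        normB += b[i] * b[i]; // squared magnitude of the answer embedding
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}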

However, this method introduces the problem of selecting an appropriate threshold to determine the acceptability of the response, in addition to the opacity of the evaluation process.

Toward a more effective method

The real problem here arises from the fact that answers provided by the LLM are in natural language and non-deterministic. Because of this, using current testing methods to verify them is difficult, as these methods are better suited to testing predictable values. 

However, we already have a great tool for understanding non-deterministic answers in natural language: LLMs themselves. Thus, the key may lie in using one LLM to evaluate the adequacy of responses generated by another LLM. 

This proposal involves defining detailed validation criteria and using an LLM as a “Validator Agent” to determine whether the responses meet the specified requirements. This approach can be applied to validate answers to specific questions, drawing on both general knowledge and specialized information.

By incorporating detailed instructions and examples, the Validator Agent can provide accurate and justified evaluations, offering clarity on why a response is considered correct or incorrect.

static String question = "How I can install Testcontainers Desktop?";
    static String reference = """
            - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
            - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
            - Answer must be less than 5 sentences
            """;

    @Test
    void verifyStraightAgentFailsToAnswerHowToInstallTCD() {
        String answer  = restTemplate.getForObject("/chat/straight?question={question}", ChatController.ChatResponse.class, question).message();
        ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
        assertThat(validate.response()).isEqualTo("no");
    }

    @Test
    void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
        String answer  = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
        ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
        assertThat(validate.response()).isEqualTo("yes");
    }

We can even test more complex responses where the LLM should suggest a better alternative to the user’s question.

static String question = "How I can find the random port of a Testcontainer to connect to it?";
    static String reference = """
            - Answer must not mention using getMappedPort() method to find the random port of a Testcontainer
            - Answer must mention that you don't need to find the random port of a Testcontainer to connect to it
            - Answer must indicate that you can use the Testcontainers Desktop app to configure fixed port
            - Answer must be less than 5 sentences
            """;

    @Test
    void verifyRaggedAgentSucceedToAnswerHowToDebugWithTCD() {
        String answer  = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
        ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
        assertThat(validate.response()).isEqualTo("yes");
    }

Validator Agent

The configuration for the Validator Agent doesn’t differ from that of other agents. It is built using the LangChain4j AI Service and a list of specific instructions:

public interface ValidatorAgent {
    @SystemMessage("""
                ### Instructions
                You are a strict validator.
                You will be provided with a question, an answer, and a reference.
                Your task is to validate whether the answer is correct for the given question, based on the reference.
                
                Follow these instructions:
                - Respond only 'yes', 'no' or 'unsure' and always include the reason for your response
                - Respond with 'yes' if the answer is correct
                - Respond with 'no' if the answer is incorrect
                - If you are unsure, simply respond with 'unsure'
                - Respond with 'no' if the answer is not clear or concise
                - Respond with 'no' if the answer is not based on the reference
                
                Your response must be a json object with the following structure:
                {
                    "response": "yes",
                    "reason": "The answer is correct because it is based on the reference provided."
                }
                
                ### Example
                Question: Is Madrid the capital of Spain?
                Answer: No, it's Barcelona.
                Reference: The capital of Spain is Madrid
                ###
                Response: {
                    "response": "no",
                    "reason": "The answer is incorrect because the reference states that the capital of Spain is Madrid."
                }
                """)
    @UserMessage("""
            ###
            Question: {{question}}
            ###
            Answer: {{answer}}
            ###
            Reference: {{reference}}
            ###
            """)
    ValidatorResponse validate(@V("question") String question, @V("answer") String answer, @V("reference") String reference);

    record ValidatorResponse(String response, String reason) {}
}

As you can see, I’m using Few-Shot Prompting to guide the LLM on the expected responses. I also request a JSON format for responses to facilitate parsing them into objects, and I specify that the reason for the answer must be included, to better understand the basis of its verdict.

Conclusion

The evolution of GenAI applications brings with it the challenge of developing testing methods that can effectively evaluate the complexity and subtlety of responses generated by advanced artificial intelligences. 

The proposal to use an LLM as a Validator Agent represents a promising approach, paving the way towards a new era of software development and evaluation in the field of artificial intelligence. Over time, we hope to see more innovations that allow us to overcome the current challenges and maximize the potential of these transformative technologies.

Learn more

Revolutionize Your CI/CD Pipeline: Integrating Testcontainers and Bazel
https://www.docker.com/blog/revolutionize-your-ci-cd-pipeline-integrating-testcontainers-and-bazel/

One of the challenges in modern software development is being able to release software often and with confidence. This can only be achieved when you have a good CI/CD setup in place that can test your software and release it with minimal or even no human intervention. But modern software applications also use a wide range of third-party dependencies and often need to run on multiple operating systems and architectures.

In this post, I will explain how the combination of Bazel and Testcontainers helps developers build and release software by providing a hermetic build system.


Using Bazel and Testcontainers together

Bazel is an open source build tool developed by Google to build and test multi-language, multi-platform projects. Several big IT companies have adopted monorepos for various reasons, such as:

  • Code sharing and reusability 
  • Cross-project refactoring 
  • Consistent builds and dependency management 
  • Versioning and release management

With its multi-language support and focus on reproducible builds, Bazel shines in building such monorepos.

A key concept of Bazel is hermeticity, which means that when all inputs are declared, the build system can know when an output needs to be rebuilt. This approach brings determinism where, given the same input source code and product configuration, it will always return the same output by isolating the build from changes to the host system.

Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.

Using Bazel and Testcontainers together offers the following features:

  • Bazel can build projects using different programming languages like C, C++, Java, Go, Python, Node.js, etc.
  • Bazel can dynamically provision the isolated build/test environment with desired language versions.
  • Testcontainers can provision the required dependencies as Docker containers so that your test suite is self-contained. You don’t have to manually pre-provision the necessary services, such as databases, message brokers, and so on. 
  • All the test dependencies can be expressed through code using Testcontainers APIs, and you avoid the risk of breaking hermeticity by sharing such resources between tests.

Let’s see how we can use Bazel and Testcontainers to build and test a monorepo with modules using different languages. We are going to explore a monorepo with a customers module, which uses Java, and a products module, which uses Go. Both modules interact with relational databases (PostgreSQL) and use Testcontainers for testing.

Getting started with Bazel

To begin, let’s get familiar with Bazel’s basic concepts. The best way to install Bazel is by using Bazelisk. Follow the official installation instructions to install Bazelisk. Once it’s installed, you should be able to run the bazel version command, which prints both the Bazelisk and Bazel versions:

$ brew install bazelisk
$ bazel version

Bazelisk version: v1.12.0
Build label: 7.0.0

Before you can build a project using Bazel, you need to set up its workspace. 

A workspace is a directory that holds your project’s source files and contains the following files:

  • The WORKSPACE.bazel file, which identifies the directory and its contents as a Bazel workspace and lives at the root of the project’s directory structure.
  • A MODULE.bazel file, which declares dependencies on Bazel plugins (called “rulesets”).
  • One or more BUILD (or BUILD.bazel) files, which describe the sources and dependencies for different parts of the project. A directory within the workspace that contains a BUILD file is a package.

In the simplest case, a MODULE.bazel file can be an empty file, and a BUILD file can contain one or more generic targets as follows:

genrule(
    name = "foo",
    outs = ["foo.txt"],
    cmd_bash = "sleep 2 && echo 'Hello World' >$@",
)

genrule(
    name = "bar",
    outs = ["bar.txt"],
    cmd_bash = "sleep 2 && echo 'Bye bye' >$@",
)

Here, we have two targets: foo and bar. Now we can build those targets using Bazel as follows:

$ bazel build //:foo <- builds only the foo target; // refers to the root workspace
$ bazel build //:bar <- builds only the bar target
$ bazel build //... <- builds all targets

Configuring the Bazel build in a monorepo

We are going to explore using Bazel in the testcontainers-bazel-demo repository. This repository is a monorepo with a customers module using Java and a products module using Go. Its structure looks like the following:

testcontainers-bazel-demo
|____customers
| |____BUILD.bazel
| |____src
|____products
| |____go.mod
| |____go.sum
| |____repo.go
| |____repo_test.go
| |____BUILD.bazel
|____MODULE.bazel

Bazel uses different rules for building different types of projects: rules_java for building Java packages, rules_go for Go packages, rules_python for Python packages, and so on.

We may also need to load additional rules providing additional features. For building Java packages, we may want to use external Maven dependencies and use JUnit 5 for running tests. In that case, we should load rules_jvm_external to be able to use Maven dependencies. 

We are going to use Bzlmod, the new external dependency subsystem, to load the external dependencies. In the MODULE.bazel file, we can load the additional rules_jvm_external and contrib_rules_jvm as follows:

bazel_dep(name = "contrib_rules_jvm", version = "0.21.4")
bazel_dep(name = "rules_jvm_external", version = "5.3")

maven = use_extension("@rules_jvm_external//:extensions.bzl", "maven")
maven.install(
   name = "maven",
   artifacts = [
       "org.postgresql:postgresql:42.6.0",
       "ch.qos.logback:logback-classic:1.4.6",
       "org.testcontainers:postgresql:1.19.3",
       "org.junit.platform:junit-platform-launcher:1.10.1",
       "org.junit.platform:junit-platform-reporting:1.10.1",
       "org.junit.jupiter:junit-jupiter-api:5.10.1",
       "org.junit.jupiter:junit-jupiter-params:5.10.1",
       "org.junit.jupiter:junit-jupiter-engine:5.10.1",
   ],
)
use_repo(maven, "maven")

Let’s understand the above configuration in the MODULE.bazel file:

  • We have loaded the rules_jvm_external rules from Bazel Central Registry and loaded extensions to use third-party Maven dependencies.
  • We have configured all our Java application dependencies using Maven coordinates in the maven.install artifacts configuration.
  • We are loading the contrib_rules_jvm rules that support running JUnit 5 tests as a suite.

Now, we can run the @maven//:pin program to create a JSON lockfile of the transitive dependencies, in a format that rules_jvm_external can use later:

bazel run @maven//:pin

Rename the generated rules_jvm_external~&lt;version&gt;~maven~maven_install.json file to maven_install.json. Now update the MODULE.bazel to reflect that we pinned the dependencies.

Add a lock_file attribute to the maven.install() and update the use_repo call to also expose the unpinned_maven repository used to update the dependencies:

maven.install(
    ...
    lock_file = "//:maven_install.json",
)

use_repo(maven, "maven", "unpinned_maven")

Now, when you update any dependencies, you can run the following command to update the lock file:

bazel run @unpinned_maven//:pin

Let’s configure our build targets in the customers/BUILD.bazel file, as follows:

load(
    "@bazel_tools//tools/jdk:default_java_toolchain.bzl",
    "default_java_toolchain", "DEFAULT_TOOLCHAIN_CONFIGURATION", "BASE_JDK9_JVM_OPTS", "DEFAULT_JAVACOPTS"
)

default_java_toolchain(
    name = "repository_default_toolchain",
    configuration = DEFAULT_TOOLCHAIN_CONFIGURATION,
    java_runtime = "@bazel_tools//tools/jdk:remotejdk_17",
    jvm_opts = BASE_JDK9_JVM_OPTS + ["--enable-preview"],
    javacopts = DEFAULT_JAVACOPTS + ["--enable-preview"],
    source_version = "17",
    target_version = "17",
)

load("@rules_jvm_external//:defs.bzl", "artifact")
load("@contrib_rules_jvm//java:defs.bzl", "JUNIT5_DEPS", "java_test_suite")

java_library(
   name = "customers-lib",
   srcs = glob(["src/main/java/**/*.java"]),
   deps = [
       artifact("org.postgresql:postgresql"),
       artifact("ch.qos.logback:logback-classic"),
   ],
)

java_library(
   name = "customers-test-resources",
   resources = glob(["src/test/resources/**/*"]),
)

java_test_suite(
   name = "customers-lib-tests",
   srcs = glob(["src/test/java/**/*.java"]),
   runner = "junit5",
   test_suffixes = [
       "Test.java",
       "Tests.java",
   ],
   runtime_deps = JUNIT5_DEPS,
   deps = [
       ":customers-lib",
       ":customers-test-resources",
       artifact("org.junit.jupiter:junit-jupiter-api"),
       artifact("org.junit.jupiter:junit-jupiter-params"),
       artifact("org.testcontainers:postgresql"),
   ],
)

Let’s understand this BUILD configuration:

  • We have loaded default_java_toolchain and then configured the Java version to 17.
  • We have configured a java_library target with the name customers-lib that will build the production jar file.
  • We have defined a java_test_suite target with the name customers-lib-tests to define our test suite, which will execute all the tests. We also configured the dependencies on the other target customers-lib and external dependencies.
  • We also defined another target with the name customers-test-resources to add non-Java sources (e.g., logging config files) to our test suite target as a dependency.

In the customers package, we have a CustomerService class that stores and retrieves customer details in a PostgreSQL database. And we have CustomerServiceTest that tests CustomerService methods using Testcontainers. Take a look at the GitHub repository for the complete code.
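
To give an idea of what such a test looks like, here is a rough sketch (illustrative only; CustomerService, Customer, and their methods stand in for the repository’s actual classes):

class CustomerServiceTest {

    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16-alpine");

    @BeforeAll
    static void beforeAll() {
        postgres.start();
    }

    @AfterAll
    static void afterAll() {
        postgres.stop();
    }

    @Test
    void shouldStoreAndRetrieveCustomers() {
        // Connection details come straight from the running container.
        CustomerService service = new CustomerService(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
        service.createCustomer(new Customer(null, "Test Customer"));
        assertThat(service.getAllCustomers()).isNotEmpty();
    }
}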

Note: You can use Gazelle, which is a Bazel build file generator, to generate the BUILD.bazel files instead of manually writing them.

Running Testcontainers tests

For running Testcontainers tests, we need a Testcontainers-supported container runtime. Let’s assume you have a local Docker installed using Docker Desktop.

Now, with our Bazel build configuration, we are ready to build and test the customers package:

# to run all build targets of customers package
$ bazel build //customers/...

# to run a specific build target of customers package
$ bazel build //customers:customers-lib

# to run all test targets of customers package
$ bazel test //customers/...

# to run a specific test target of customers package
$ bazel test //customers:customers-lib-tests

When you run the build for the first time, it will take time to download the required dependencies and then execute the targets. But if you try to build or test again without any code or configuration changes, Bazel will not re-run the build/test and will show the cached result. Bazel has a powerful caching mechanism that detects code changes and runs only the targets that need to be run.

While using Testcontainers, you define the required dependencies as part of code using Docker image names along with tags, such as postgres:16. So, unless you change the code (e.g., the Docker image name or tag), Bazel will cache the test results.

Similarly, we can use rules_go and Gazelle for configuring Bazel build for Go packages. Take a look at the MODULE.bazel and products/BUILD.bazel files to learn more about configuring Bazel in a Go package.

As mentioned earlier, we need a Testcontainers-supported container runtime for running Testcontainers tests. Installing Docker on complex CI platforms might be challenging, and you might need to use a complex Docker-in-Docker setup. Additionally, some Docker images might not be compatible with the operating system architecture (e.g., Apple M1). 

Testcontainers Cloud solves these problems by eliminating the need to have Docker on the localhost or CI runners, running the containers on cloud VMs transparently.

Here is an example of running the Testcontainers tests using Bazel on Testcontainers Cloud using GitHub Actions:

name: CI

on:
 push:
   branches:
     - '**'

jobs:
 build:
   runs-on: ubuntu-latest
   steps:
   - uses: actions/checkout@v4

   - name: Configure TestContainers cloud
     uses: atomicjar/testcontainers-cloud-setup-action@main
     with:
       wait: true
       token: ${{ secrets.TC_CLOUD_TOKEN }}

   - name: Cache Bazel
     uses: actions/cache@v3
     with:
       path: |
         ~/.cache/bazel
       key: ${{ runner.os }}-bazel-${{ hashFiles('.bazelversion', '.bazelrc', 'WORKSPACE', 'WORKSPACE.bazel', 'MODULE.bazel') }}
       restore-keys: |
         ${{ runner.os }}-bazel-

   - name: Build and Test
     run: bazel test --test_output=all //...

GitHub Actions runners already come with Bazelisk installed, so we can use Bazel out of the box. We have configured the TC_CLOUD_TOKEN environment variable through Secrets and started the Testcontainers Cloud agent. If you check the build logs, you can see that the tests are executed using Testcontainers Cloud.

Summary

We have shown how to use the Bazel build system to build and test monorepos with multiple modules using different programming languages. Combined with Testcontainers, you can make the builds self-contained and hermetic.

Although Bazel and Testcontainers help us have a self-contained build, we need to take extra measures to make it a hermetic build: 

  • Bazel can be configured to use a specific version of SDK, such as JDK 17, Go 1.20, etc., so that builds always use the same version instead of what is installed on the host machine. 
  • For Testcontainers tests, using the Docker tag latest for container dependencies may result in non-deterministic behavior. Also, some Docker image publishers override existing images using the same tag. To make the build/test deterministic, always use the Docker image digest so that builds and tests use the exact same version of the image, which gives reproducible and hermetic builds (see the sketch after this list).
  • Using Testcontainers Cloud for running Testcontainers tests reduces the complexity of Docker setup and gives a deterministic container runtime environment.
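
As an example of pinning by digest, a Testcontainers test in Java could reference the image like this (a sketch; the digest below is a placeholder to be replaced with the real digest of the image you use):

// Pinning by digest keeps builds hermetic even if the tag is re-published.
PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>(
        DockerImageName.parse("postgres@sha256:<digest-of-the-postgres-image>")
                .asCompatibleSubstituteFor("postgres"));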

Visit the Testcontainers website to learn more, and get started with Testcontainers Cloud by creating a free account.

Learn more

How to Use Testcontainers on Jenkins CI
https://www.docker.com/blog/how-to-use-testcontainers-on-jenkins-ci/

Releasing software often and with confidence relies on a strong continuous integration and continuous delivery (CI/CD) process that includes the ability to automate tests. Jenkins offers an open source automation server that facilitates such releases of software projects.

In this article, we will explore how you can run tests based on the open source Testcontainers framework in a Jenkins pipeline using Docker and Testcontainers Cloud.


Jenkins, which streamlines the development process by automating the building, testing, and deployment of code changes, is widely adopted in the DevOps ecosystem. It supports a vast array of plugins, enabling integration with various tools and technologies, making it highly customizable to meet specific project requirements.

Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.

Testcontainers also provides support for many popular programming languages, including Java, Go, .NET, Node.js, Python, and more. This article will show how to test a Java Spring Boot application (testcontainers-showcase) using Testcontainers in a Jenkins pipeline. Please fork the repository into your GitHub account. To run Testcontainers-based tests, a Testcontainers-supported container runtime, like Docker, needs to be available to agents.

Note: As Jenkins CI servers are mostly run on Linux machines, the following configurations are tested on a Linux machine only.

Docker containers as Jenkins agents

Let’s see how to use dynamic Docker container-based agents. To be able to use Docker containers as agents, install the Docker Pipeline plugin.

Now, let’s create a file with name Jenkinsfile in the root of the project with the following content:

pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
            args '--network host -u root -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }

    triggers { pollSCM 'H/2 * * * *' } // poll every 2 mins

    stages {
        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}

We are using the eclipse-temurin:17.0.9_9-jdk-jammy Docker container as an agent to run the builds for this pipeline. Note that we are mapping the host’s Unix Docker socket as a volume with root user permissions to make it accessible to the agent, but this can potentially be a security risk.

Add the Jenkinsfile and push the changes to the Git repository.

Now, go to the Jenkins Dashboard and select New Item to create the pipeline. Follow these steps:

  • Enter testcontainers-showcase as pipeline name.
  • Select Pipeline as job type.
  • Select OK.
  • Under Pipeline section:
  • Branches to build: Branch Specifier (blank for ‘any’): */main.
  • Script Path: Jenkinsfile.
  • Select Save.
  • Choose Build Now to trigger the pipeline for the first time.

The pipeline should run the Testcontainers-based tests successfully in a container-based agent using the Docker-out-of-Docker configuration (the host’s Docker socket mounted into the agent).

Kubernetes pods as Jenkins agents

While running Testcontainers-based tests on Kubernetes pods, you can run a Docker-in-Docker (DinD) container as a sidecar. To use Kubernetes pods as Jenkins agents, install the Kubernetes plugin.

Now you can create the Jenkins pipeline using Kubernetes pods as agents as follows:

def pod =
"""
apiVersion: v1
kind: Pod
metadata:
 labels:
   name: worker
spec:
 serviceAccountName: jenkins
 containers:
   - name: java17
     image: eclipse-temurin:17.0.9_9-jdk-jammy
     resources:
       requests:
         cpu: "1000m"
         memory: "2048Mi"
     imagePullPolicy: Always
     tty: true
     command: ["cat"]
   - name: dind
     image: docker:dind
     imagePullPolicy: Always
     tty: true
     env:
       - name: DOCKER_TLS_CERTDIR
         value: ""
     securityContext:
       privileged: true
"""

pipeline {
   agent {
       kubernetes {
           yaml pod
       }
   }
   environment {
       DOCKER_HOST = 'tcp://localhost:2375'
       DOCKER_TLS_VERIFY = 0
   }

   stages {
       stage('Build and Test') {
           steps {
               container('java17') {
                   script {
                       sh "./mvnw verify"
                   }
               }
           }
       }
   }
}

Although we can use a Docker-in-Docker based configuration to make the Docker environment available to the agent, this setup also brings configuration complexities and security risks.

  • By volume-mounting the host’s Docker Unix socket into the agents (Docker-out-of-Docker, or DooD), the agents have direct access to the host’s Docker engine.
  • When using the DooD approach, file sharing via bind mounts doesn’t work because the containerized app and the Docker engine work in different contexts.
  • The Docker-in-Docker (DinD) approach requires the use of insecure privileged containers.

You can watch the Docker-in-Docker: Containerized CI Workflows presentation to learn more about the challenges of a Docker-in-Docker based CI setup.

This is where Testcontainers Cloud comes into the picture, making it possible to run Testcontainers-based tests more simply and reliably.

By using Testcontainers Cloud, you don’t even need a Docker daemon running on the agent. Containers will be run in on-demand cloud environments so that you don’t need to use powerful CI agents with high CPU/memory for your builds.

Let’s see how to use Testcontainers Cloud with minimal setup and run Testcontainers-based tests.

Testcontainers Cloud-based setup

Testcontainers Cloud helps you run Testcontainers-based tests at scale by spinning up the dependent services as Docker containers on the cloud and having your tests connect to those services.

If you don’t have a Testcontainers Cloud account already, you can create an account and get a Service Account Token as follows:

  1. Sign up for a Testcontainers Cloud account.
  2. Once logged in, create an organization.
  3. Navigate to the Testcontainers Cloud dashboard and generate a Service account (Figure 1).
Figure 1: Create a new Testcontainers Cloud service account.

To use Testcontainers Cloud, we need to start a lightweight testcontainers-cloud agent by passing TC_CLOUD_TOKEN as an environment variable.

You can store the TC_CLOUD_TOKEN value as a secret in Jenkins as follows:

  • From the Dashboard, select Manage Jenkins.
  • Under Security, choose Credentials.
  • You can create a new domain or use System domain.
  • Under Global credentials, select Add credentials.
  • Select Kind as Secret text.
  • Enter TC_CLOUD_TOKEN value in Secret.
  • Enter tc-cloud-token-secret-id as ID.
  • Select Create.

Next, you can update the Jenkinsfile as follows:

pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
        }
    }

    triggers { pollSCM 'H/2 * * * *' }

    stages {
        stage('TCC SetUp') {
            environment {
                TC_CLOUD_TOKEN = credentials('tc-cloud-token-secret-id')
            }
            steps {
                sh "curl -fsSL https://get.testcontainers.cloud/bash | sh"
            }
        }

        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}

We have set the TC_CLOUD_TOKEN environment variable using the value from the tc-cloud-token-secret-id credential we created and started a Testcontainers Cloud agent before running our tests.

Now if you commit and push the updated Jenkinsfile, then the pipeline will run the tests using Testcontainers Cloud. You should see log statements similar to the following indicating that the Testcontainers-based tests are using Testcontainers Cloud instead of the default Docker daemon.

14:45:25.748 [testcontainers-lifecycle-0] INFO  org.testcontainers.DockerClientFactory - Connected to docker: 
  Server Version: 78+testcontainerscloud (via Testcontainers Desktop 1.5.5)
  API Version: 1.43
  Operating System: Ubuntu 20.04 LTS
  Total Memory: 7407 MB

You can also leverage Testcontainers Cloud’s Turbo mode in conjunction with build tools that feature parallel run capabilities to run tests even faster.

In the case of Maven, you can use the -DforkCount=N system property to specify the degree of parallelization. For Gradle, you can specify the degree of parallelization using the maxParallelForks property.

We can enable parallel execution of our tests using four forks in Jenkinsfile as follows:

stage('Build and Test') {
      steps {
           sh './mvnw verify -DforkCount=4' 
      }
}

For more information, check out the article on parallelizing your tests with Turbo mode.

Conclusion

In this article, we have explored how to run Testcontainers-based tests on Jenkins CI using dynamic containers and Kubernetes pods as agents with Docker-out-of-Docker and Docker-in-Docker based configurations.

Then we learned how to create a Testcontainers Cloud account and configure the pipeline to run tests using Testcontainers Cloud. We also explored leveraging Testcontainers Cloud Turbo mode combined with your build tool’s parallel execution capabilities. 

Although we have demonstrated this setup using a Java project as an example, Testcontainers libraries exist for other popular languages, too, and you can follow the same pattern of configuration to run your Testcontainers-based tests on Jenkins CI in Golang, .NET, Python, Node.js, etc.

Get started with Testcontainers Cloud by creating a free account at the website.

Learn more

Testcontainers Best Practices
https://www.docker.com/blog/testcontainers-best-practices/

Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.

You can also use Testcontainers libraries for local development. Testcontainers libraries combined with Testcontainers Desktop provide a pleasant local development and testing experience. Testcontainers libraries are available for most of the popular languages like Java, Go, .NET, Node.js, Python, Ruby, Rust, Clojure, and Haskell.

In this article, we’ll explore some Do’s and Don’ts while using Testcontainers libraries. We’re going to show code snippets in Java, but the concepts are applicable to other languages as well.


Don’t rely on fixed ports for tests

If you’re just getting started with Testcontainers or converting your existing test setup to use Testcontainers, you might think of using fixed ports for the containers.

For example, let’s say you have a current testing setup where a PostgreSQL test database is installed and running on port 5432, and your tests talk to that database. When you try to leverage Testcontainers for running PostgreSQL database instead of using a manually installed database, you might think of starting the PostgreSQL containers and exposing it on the fixed port 5432 on the host.

But using fixed ports for containers while running tests is not a good idea for the following reasons:

  • You, or your team members, might have another process running on the same port, and if that’s the case, the tests will fail.
  • While running tests on a Continuous Integration (CI) environment, there can be multiple pipelines running in parallel. The pipelines might try to start multiple containers of the same type on the same fixed port, which will cause port collisions.
  • You want to parallelize your test suite locally, which results in multiple instances of the same container running simultaneously.

To avoid these issues altogether, the best approach is to use the Testcontainers built-in dynamic port mapping capabilities.

// Example 1:

GenericContainer<?> redis = 
      new GenericContainer<>("redis:5.0.3-alpine")
            .withExposedPorts(6379);
int mappedPort = redis.getMappedPort(6379);
// if there is only one port exposed then you can use redis.getFirstMappedPort()


// Example 2:

PostgreSQLContainer<?> postgres = 
     new PostgreSQLContainer<>("postgres:16-alpine");
int mappedPort = postgres.getMappedPort(5432);
String jdbcUrl = postgres.getJdbcUrl();

While it’s strongly discouraged to use a fixed port for tests, using a fixed port for local development can be convenient. It allows you to connect to services using a consistent port, for instance, when using database inspection tools. With Testcontainers Desktop, you can easily connect to those services on a fixed port.

Don’t hardcode the hostname

While using Testcontainers for your tests, you should always dynamically configure the host and port values. For example, here’s what a typical Spring Boot test using a Redis container looks like:

@SpringBootTest(webEnvironment = RANDOM_PORT)
@Testcontainers
class MyControllerTest {

   @Container
   static GenericContainer<?> redis = 
        new GenericContainer<>(DockerImageName.parse("redis:5.0.3-alpine"))
             .withExposedPorts(6379);

   @DynamicPropertySource
   static void overrideProperties(DynamicPropertyRegistry registry) {
      registry.add("spring.redis.host", () -> "localhost");
      registry.add("spring.redis.port", () -> redis.getMappedPort(6379));
   }

   @Test
   void someTest() {
      ....
   }
}

As a keen observer, you might’ve noticed we’ve hardcoded the Redis host as localhost. If you run the test, it’ll work locally and also run fine on your CI, as long as you’re using a local Docker daemon configured so that the mapped ports of the containers are accessible through localhost.

But if you configure your environment to use a Remote Docker daemon then your tests will fail because those containers aren’t running on localhost anymore. So, the best practice to make your tests fully portable is to use redis.getHost() instead of a hardcoded localhost as follows:

@DynamicPropertySource
static void overrideProperties(DynamicPropertyRegistry registry) {
    registry.add("spring.redis.host", () -> redis.getHost());
    registry.add("spring.redis.port", () -> redis.getMappedPort(6379));
}

Don’t hardcode the container name

You might think of giving a name to the containers using withCreateContainerCmdModifier(..) as follows:

PostgreSQLContainer<?> postgres= 
     new PostgreSQLContainer<>("postgres:16-alpine")
           .withCreateContainerCmdModifier(cmd -> cmd.withName("postgres"));

But giving a fixed/hardcoded name to containers will cause problems when trying to run multiple containers with the same name. This is most likely to happen in CI environments while running multiple pipelines in parallel.

As a rule of thumb, if a certain generic Docker feature (such as container names) is not available in the Testcontainers API, this tends to be an opinionated decision that fosters using integration testing best practices. The withCreateContainerCmdModifier() is available as an advanced feature for experienced users that have very specific use cases but shouldn’t be used to work around the Testcontainers design decisions.

Copy files into containers instead of mounting them

While configuring the containers for your tests, you might want to copy some local files into a specific location inside the container. A typical example would be copying database initialization SQL scripts into some location inside the database container.

You can configure this by mounting a local file into the container as follows:

PostgreSQLContainer<?> postgres =
   new PostgreSQLContainer<>("postgres:16-alpine")
    .withFileSystemBind(
          "src/test/resources/schema.sql",
          "/docker-entrypoint-initdb.d/01-schema.sql",
          BindMode.READ_ONLY);

This might work locally. But if you are using a Remote Docker daemon or Testcontainers Cloud, then those files won’t be found on the remote Docker host, and tests will fail.

Instead of mounting local files, you should use File copying as follows:

PostgreSQLContainer<?> postgres =
   new PostgreSQLContainer<>("postgres:16-alpine")
      .withCopyFileToContainer(
          MountableFile.forClasspathResource("schema.sql"),
          "/docker-entrypoint-initdb.d/01-schema.sql");

This approach works fine even while using Remote Docker daemon or Testcontainers Cloud, allowing tests to be portable.

Use the same container versions as in production

While specifying the container tag, don’t use latest, as it can introduce flakiness in your tests when a new version of the image is released. Instead, use the same version that you use in production to ensure you can trust the outcome of your tests.

For example, if you are using PostgreSQL 15.2 version in the production environment then use postgres:15.2 Docker image for testing and local development as well.

// DON'T DO THIS

PostgreSQLContainer<?> postgres = 
    new PostgreSQLContainer<>("postgres:latest");

// INSTEAD, DO THIS
PostgreSQLContainer<?> postgres = 
    new PostgreSQLContainer<>("postgres:15.2");

Use proper container lifecycle strategy

Typically the same container(s) will be used for all the tests in a class as follows:

@SpringBootTest(webEnvironment = RANDOM_PORT)
@Testcontainers
class MyControllerTest {

    @Container
    static GenericContainer<?> redis =
            new GenericContainer<>("redis:5.0.3-alpine")
                .withExposedPorts(6379);

    @DynamicPropertySource
    static void overrideProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.redis.host", () -> "localhost");
        registry.add("spring.redis.port", () -> redis.getMappedPort(6379));
    }

    @Test
    void firstTest() {
        ....
    }


    @Test
    void secondTest() {
        ....
    }
}

When you run MyControllerTest, only one Redis container will be started and used for executing both tests. This is because we make the Redis container a static field. If it isn’t a static field, then two Redis instances will be used for running the two tests, which might not be what you want and could even fail if you aren’t recreating the Spring Context. While using separate containers for each test is possible, it’ll be resource-intensive and may slow down the test execution.

Also, sometimes developers who aren’t familiar with the Testcontainers lifecycle use the JUnit 5 extension annotations @Testcontainers and @Container and also manually start/stop the container by calling the container.start() and container.stop() methods. Please read the Testcontainers container lifecycle management using JUnit 5 guide to thoroughly understand the Testcontainers lifecycle methods.

Another common approach to speed up test execution is the Singleton Containers Pattern, sketched below.
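
A minimal sketch of that pattern, along the lines of the Testcontainers documentation, is an abstract base class that starts the container once and is extended by every test class that needs it:

abstract class AbstractIntegrationTest {

    static final GenericContainer<?> REDIS =
            new GenericContainer<>("redis:5.0.3-alpine").withExposedPorts(6379);

    static {
        // Started once per JVM; the Testcontainers reaper (Ryuk) removes it
        // after the test run, so there is no explicit stop() call.
        REDIS.start();
    }
}

class FirstIntegrationTest extends AbstractIntegrationTest {
    // tests here talk to REDIS.getHost() / REDIS.getMappedPort(6379)
}

class SecondIntegrationTest extends AbstractIntegrationTest {
    // reuses the very same Redis container
}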

Leverage your framework’s integration for Testcontainers

Some frameworks, such as Spring Boot, Quarkus, and Micronaut, provide out-of-the-box integration for Testcontainers. When building applications with any of these frameworks, it’s recommended to use the framework’s Testcontainers integration support.
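
For example, with Spring Boot’s own Testcontainers support (available since Spring Boot 3.1 and assuming the spring-boot-testcontainers dependency), the @DynamicPropertySource boilerplate from the earlier examples can be replaced by @ServiceConnection:

@SpringBootTest(webEnvironment = RANDOM_PORT)
@Testcontainers
class CustomerControllerTest {

    // Spring Boot derives the datasource properties from the container,
    // so no @DynamicPropertySource block is needed.
    @Container
    @ServiceConnection
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16-alpine");

    @Test
    void someTest() {
        // ...
    }
}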

Use preconfigured technology-specific modules when possible

Testcontainers provides technology-specific modules for most popular technologies, such as SQL databases, NoSQL datastores, message brokers, and search engines. These modules provide technology-specific APIs that make it easy to retrieve the container’s information, such as getting the JDBC URL from a SQL database container or the bootstrap servers URL from a Kafka container. Most importantly, they take care of all necessary bootstrapping work, making it easy to run an application in a container and interact with it from your Java code.

For example, using GenericContainer to create a PostgreSQL container looks as follows:

GenericContainer<?> postgres = new GenericContainer<>("postgres:16-alpine")
       .withExposedPorts(5432)
       .withEnv("POSTGRES_USER", "test")
       .withEnv("POSTGRES_PASSWORD", "test")
       .withEnv("POSTGRES_DB", "test")
       .waitingFor(
          new LogMessageWaitStrategy()
              .withRegEx(".*database system is ready to accept connections.*\\s")
              .withTimes(2).withStartupTimeout(Duration.of(60L, ChronoUnit.SECONDS)));
postgres.start();

String jdbcUrl = String.format(
           "jdbc:postgresql://%s:%d/test", postgres.getHost(), 
           postgres.getFirstMappedPort());

By using the Testcontainers PostgreSQL module, you can create an instance of PostgreSQL container simply as follows:

PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16-alpine");
String jdbcUrl = postgres.getJdbcUrl();

The PostgreSQL module implementation already applies sensible defaults and also provides convenient methods to get container information.

So, instead of using GenericContainer, first, check if there’s a module already available in the Modules Catalog for your desired technology.

On the other hand, if you’re missing an important module from the catalog, chances are good that by using GenericContainer directly (or by writing your own custom class extending GenericContainer), you can get the technology working.

Use WaitStrategies to check the container is ready

If you’re using GenericContainer or creating your own module, then use the appropriate WaitStrategy to check whether the container is fully initialized and ready to use instead of using sleep for some (milli)seconds.

//DON'T DO THIS
GenericContainer<?> container = new GenericContainer<>("image:tag")
                                                       .withExposedPorts(9090);
container.start();
Thread.sleep(2 * 1000); //waiting for container to be ready

container.getHost();
container.getFirstMappedPort();

//DO THIS
GenericContainer<?> container = new GenericContainer<>("image:tag")
       .withExposedPorts(9090)
       .waitingFor(Wait.forLogMessage(".*Ready to accept connections.*\\n", 1));
container.start();

container.getHost();
container.getFirstMappedPort();

Check the Testcontainers language-specific documentation to see which WaitStrategies are available out of the box. You can also implement your own if need be, as sketched below.
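
For reference, here is a rough sketch of a custom strategy in Java: it extends AbstractWaitStrategy and implements waitUntilReady(), polling an HTTP endpoint until it responds (the port, path, and timings are illustrative):

class HttpHealthWaitStrategy extends AbstractWaitStrategy {

    @Override
    protected void waitUntilReady() {
        String url = "http://" + waitStrategyTarget.getHost() + ":"
                + waitStrategyTarget.getMappedPort(9090) + "/health";
        long deadline = System.currentTimeMillis() + startupTimeout.toMillis();
        while (System.currentTimeMillis() < deadline) {
            try {
                HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
                connection.setConnectTimeout(1000);
                connection.setReadTimeout(1000);
                if (connection.getResponseCode() == 200) {
                    return; // the container reported itself healthy
                }
            } catch (IOException e) {
                // not ready yet, retry below
            }
            try {
                Thread.sleep(250);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        throw new ContainerLaunchException("Container did not become healthy in time");
    }
}

// usage: new GenericContainer<>("image:tag").waitingFor(new HttpHealthWaitStrategy())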

Please note: If you don’t configure any WaitStrategy, Testcontainers will set up a default WaitStrategy that’ll check for connectivity of all exposed ports from the host.

Summary

We’ve explored some of the do’s and don’ts when using Testcontainers libraries and provided better alternatives. Check out the Testcontainers website to find more resources on how to use the framework effectively.

Learn more

Local Development of Go Applications with Testcontainers
https://www.docker.com/blog/local-development-of-go-applications-with-testcontainers/

When building applications, it’s important to have an enjoyable developer experience, regardless of the programming language. This experience includes having a great build tool to perform any task related to the development lifecycle of the project. This includes compiling it, building the release artifacts, and running tests.

Oftentimes, our build tool doesn’t support all our local development tasks, such as starting the runtime dependencies for our application. We’re then forced to manage them manually with a Makefile, a shell script, or an external Docker Compose file. This might involve calling them in a separate terminal or even maintaining code for that purpose. Thankfully, there’s a better way.

In this post, I’m going to show you how to use Testcontainers for Go. You’ll learn how to start and stop the runtime dependencies of your Go application while building it and how to run the tests simply and consistently. We’ll build a super simple Go app using the Fiber web framework, which will connect to a PostgreSQL database to store its users. Then, we’ll leverage Go’s built-in capabilities and use Testcontainers for Go to start the dependencies of the application.


You can find the source code in the testcontainers-go-fiber repository.

If you’re new to Testcontainers for Go, then watch this video to get started with Testcontainers for Go.

NOTE: I’m not going to show the code to interact with the users database, as the purpose of this post is to show how to start the dependencies of the application, not how to interact with them.

Introducing Fiber

From the Fiber website:

Fiber is a Go web framework built on top of Fasthttp, the fastest HTTP engine for Go. It’s designed to ease things up for fast development with zero memory allocation and performance in mind.

Why Fiber? There are various frameworks for working with HTTP in Go, such as gin or gobuffalo, and many Gophers simply stick with the net/http package of Go’s standard library. In the end, it doesn’t matter which library or framework we choose, as it’s independent of what we’re going to demonstrate here.

Let’s create the default Fiber application:

package main

import (
   "log"
   "os"

   "github.com/gofiber/fiber/v2"
)

func main() {
   app := fiber.New()

   app.Get("/", func(c *fiber.Ctx) error {
       return c.SendString("Hello, World!")
   })

   log.Fatal(app.Listen(":8000"))
}

As we said, our application will connect to a Postgres database to store its users. In order to share state across the application, we’re going to create a new type representing the App. This App type will include information about the Fiber application, and the connection string for the users database.

// MyApp is the main application, including the fiber app and the postgres container
type MyApp struct {
   // The name of the app
   Name string
   // The version of the app
   Version string
   // The fiber app
   FiberApp *fiber.App
   // The database connection string for the users database. The application will need it to connect to the database,
   // reading it from the USERS_CONNECTION environment variable in production, or from the container in development.
   UsersConnection string
}

var App *MyApp = &MyApp{
   Name:            "my-app",
   Version:         "0.0.1",
   // in production, the URL will come from the environment
   UsersConnection: os.Getenv("USERS_CONNECTION"),
}

func main() {
   app := fiber.New()

   app.Get("/", func(c *fiber.Ctx) error {
      return c.SendString("Hello, World!")
   })

   // register the fiber app
   App.FiberApp = app

   log.Fatal(app.Listen(":8000"))
}

For demonstration purposes, we’re going to use the main package to define the access to the users in the Postgres database. In a real-world application, this code wouldn’t live in the main package.

Running the application for local development at this point means providing the USERS_CONNECTION environment variable yourself before starting the app.

Testcontainers for Go

Testcontainers for Go is a Go library that allows us to start and stop Docker containers from our Go tests. It provides us with a way to define our own containers, so we can start and configure any container we want. It also provides us with a set of predefined containers in the form of Go modules that we can use to start those dependencies of our application.

Therefore, with Testcontainers, we’ll be able to interact with our dependencies in an abstract manner, as we could be interacting with databases, message brokers, or any other kind of dependency in a Docker container.

Starting the dependencies for development mode

Now that we have a library for it, we need to start the dependencies of our application. Remember that we’re talking about the local experience of building the application. So, we would like to start the dependencies only under certain build conditions, not in the production environment.

Go build tags

Go provides us with a way to define build tags that we can use to define build conditions. We can define a build tag in the form of a comment at the top of our Go files. For example, we can define a build tag called dev like this:

//go:build dev
// +build dev

Adding this build tag to a file means that the file will only be compiled when the dev build tag is passed to the go build command, so it never lands in the release artifact. The power of the go toolchain is that this build tag applies to any command that uses it, such as go run. Therefore, we can still use this build tag when running our application with the go run -tags dev . command.

Go init functions

The init functions in Go are special functions that are executed before the main function. We can define an init function in a Go file like this:

func init() {
   // Do something
}

They aren’t executed in a deterministic order, so please consider this when defining init functions.

For our example, in which we want to improve the local development experience in our Go application, we’re going to use an init function in a dev_dependencies.go file protected by a dev build tag. From there, we’ll start the dependencies of our application, which in our case is the PostgreSQL database for the users.

We’ll use Testcontainers for Go to start this Postgres database. Let’s combine all this information in the dev_dependencies.go file:

//go:build dev
// +build dev

package main

import (
   "context"
   "log"
   "path/filepath"
   "time"

   "github.com/jackc/pgx/v5"
   "github.com/testcontainers/testcontainers-go"
   "github.com/testcontainers/testcontainers-go/modules/postgres"
   "github.com/testcontainers/testcontainers-go/wait"
)

func init() {
   ctx := context.Background()

   c, err := postgres.RunContainer(ctx,
       testcontainers.WithImage("postgres:15.3-alpine"),
       postgres.WithInitScripts(filepath.Join(".", "testdata", "dev-db.sql")),
       postgres.WithDatabase("users-db"),
       postgres.WithUsername("postgres"),
       postgres.WithPassword("postgres"),
       testcontainers.WithWaitStrategy(
           wait.ForLog("database system is ready to accept connections").
               WithOccurrence(2).WithStartupTimeout(5*time.Second)),
   )
   if err != nil {
       panic(err)
   }

   connStr, err := c.ConnectionString(ctx, "sslmode=disable")
   if err != nil {
       panic(err)
   }

   // check the connection to the database
   conn, err := pgx.Connect(ctx, connStr)
   if err != nil {
       panic(err)
   }
   defer conn.Close(ctx)

   App.UsersConnection = connStr
   log.Println("Users database started successfully")
}

The c container is defined and started using Testcontainers for Go. We’re using:

  • The WithInitScripts option to copy and run a SQL script that creates the database and the tables. This script is located in the testdata folder.
  • The WithWaitStrategy option to wait for the database to be ready to accept connections, checking database logs.
  • The WithDatabase, WithUsername and WithPassword options to configure the database.
  • The ConnectionString method to get the connection string to the database directly from the started container.

The App variable is of the type we defined earlier, representing the application. This type includes information about the Fiber application and the connection string for the users database. Therefore, after the container is started, we fill in the connection string with the one obtained from the container we just started.

So far so good! We’ve leveraged the built-in capabilities in Go to execute the init functions defined in the dev_dependencies.go file only when the -tags dev flag is added to the go run command.

With this approach, running the application and its dependencies takes a single command!

go run -tags dev .

We’ll see that the Postgres database is started and the tables are created. We can also see that the App variable is filled with the information about the Fiber application and the connection string for the users database.

Stopping the dependencies for development mode

Now that the dependencies are started, and only when the dev build tag is passed to the go run command, we also need to stop them when the application is stopped.

We're going to reuse the build-tag approach to register a graceful shutdown that stops the application's dependencies before the application itself exits, again only when the dev build tag is passed to the go run command.

Our Fiber app stays untouched, and we’ll need to only update the dev_dependencies.go file:

//go:build dev
// +build dev

package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "os/signal"
    "path/filepath"
    "syscall"
    "time"

    "github.com/jackc/pgx/v5"
    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/modules/postgres"
    "github.com/testcontainers/testcontainers-go/wait"
)

func init() {
    ctx := context.Background()

    c, err := postgres.RunContainer(ctx,
        testcontainers.WithImage("postgres:15.3-alpine"),
        postgres.WithInitScripts(filepath.Join(".", "testdata", "dev-db.sql")),
        postgres.WithDatabase("users-db"),
        postgres.WithUsername("postgres"),
        postgres.WithPassword("postgres"),
        testcontainers.WithWaitStrategy(
            wait.ForLog("database system is ready to accept connections").
                WithOccurrence(2).WithStartupTimeout(5*time.Second)),
    )
    if err != nil {
        panic(err)
    }

    connStr, err := c.ConnectionString(ctx, "sslmode=disable")
    if err != nil {
        panic(err)
    }

    // check the connection to the database
    conn, err := pgx.Connect(ctx, connStr)
    if err != nil {
        panic(err)
    }
    defer conn.Close(ctx)

    App.UsersConnection = connStr
    log.Println("Users database started successfully")

    // register a graceful shutdown to stop the dependencies when the application is stopped
    // only in development mode
    var gracefulStop = make(chan os.Signal, 1) // buffered, as recommended for signal.Notify
    signal.Notify(gracefulStop, syscall.SIGTERM)
    signal.Notify(gracefulStop, syscall.SIGINT)
    go func() {
        sig := <-gracefulStop
        fmt.Printf("caught sig: %+v\n", sig)
        // terminate the container we started above
        err := shutdownDependencies(c)
        if err != nil {
            os.Exit(1)
        }
        os.Exit(0)
    }()
}

// helper function to stop the dependencies
func shutdownDependencies(containers ...testcontainers.Container) error {
    ctx := context.Background()
    for _, c := range containers {
        err := c.Terminate(ctx)
        if err != nil {
            log.Println("Error terminating the backend dependency:", err)
            return err
        }
    }

    return nil
}

In this code, at the bottom of the init function and right after setting the database connection string, we start a goroutine to handle the graceful shutdown, listening for the SIGTERM and SIGINT signals. When a signal arrives on the gracefulStop channel, the shutdownDependencies helper function is called with the container we started. This helper internally calls Testcontainers for Go's Terminate method on the database container, so the container is stopped when the application receives one of those signals.

What’s especially great about this approach is how dynamic the created environment is. Testcontainers takes extra effort to allow parallelization and binds containers on high-level available ports. This means the dev mode won’t collide with running the tests. Or you can have multiple instances of your application running without any problems!

Hey, what will happen in production?

Because our app is initializing the connection to the database from the environment:

var App *MyApp = &MyApp{
   Name:            "my-app",
   Version:         "0.0.1",
   // in production, the URL will come from the environment
   UsersConnection: os.Getenv("USERS_CONNECTION"),
}

We don’t have to worry about that value being overridden by our custom code for the local development. The UsersConnection won’t be set because everything that we showed here is protected by the dev build tag.

NOTE: Are you using Gin or net/http directly? You can directly benefit from everything we explained here: init functions and build tags to start and gracefully shut down the runtime dependencies.

Conclusion

In this post, we’ve learned how to use Testcontainers for Go to start and stop the dependencies of our application while building it and running the tests. And all we needed to leverage was the built-in capabilities of the Go language and the go toolchain.

The result is that we can start the dependencies of our application while building it and running the application. And we can stop them when the application is stopped. This means that our local development experience is improved, as we don’t need to start the dependencies in a Makefile, shell script, or an external Docker Compose file. And the most important thing, it only happens for development mode, passing the -tags dev flag to the go run command.

Learn more

Spring Boot Application Testing and Development with Testcontainers https://www.docker.com/blog/spring-boot-application-testing-and-development-with-testcontainers/ Wed, 17 May 2023 18:58:33 +0000 https://www.docker.com/?p=52219 Spring Boot 3.1.0 introduced great support for Testcontainers that’ll not only make writing integration tests easier, but also make local development a breeze.


“Clone & Run” Developer Experience

Gone are the days of maintaining a document with a long list of manual steps needed to set up an application locally before running it. With Docker, installing the application dependencies became easier. But you still had to maintain different versions of scripts, depending on your operating system, to manually spin up the application dependencies as Docker containers.

With the Testcontainers support added in Spring Boot 3.1.0, developers can now simply clone the repository and run the application! All the application dependencies, such as databases, message brokers, etc. can be configured to automatically start when we run the application.

If you’re new to Testcontainers, go through Getting started with Testcontainers in a Java Spring Boot Project guide to learn how to test your Spring Boot applications using Testcontainers.

Simplified integration testing using ServiceConnections

Prior to Spring Boot 3.1.0, we had to use @DynamicPropertySource to set the dynamic properties obtained from containers started by Testcontainers as follows:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
class CustomerControllerTest {

   @Container
   static PostgreSQLContainer<?> postgres = 
                  new PostgreSQLContainer<>("postgres:15-alpine");

   @DynamicPropertySource
   static void configureProperties(DynamicPropertyRegistry registry) {
       registry.add("spring.datasource.url", postgres::getJdbcUrl);
       registry.add("spring.datasource.username", postgres::getUsername);
       registry.add("spring.datasource.password", postgres::getPassword);
   }

   // your tests
}

Then, Spring Boot 3.1.0 introduced the new concept of ServiceConnection, which automatically configures the necessary Spring Boot properties for the supported containers.

First, add the spring-boot-testcontainers as a test dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-testcontainers</artifactId>
    <scope>test</scope>
</dependency>

Now, we can rewrite the previous example by adding @ServiceConnection without having to explicitly configure spring.datasource.url, spring.datasource.username, and spring.datasource.password using the @DynamicPropertySource approach.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
class CustomerControllerTest {

   @Container
   @ServiceConnection
   static PostgreSQLContainer<?> postgres = 
                   new PostgreSQLContainer<>("postgres:15-alpine");

   // your tests
}

Notice that we’re not registering the datasource properties explicitly anymore.

The @ServiceConnection support not only works for relational databases but also for many other commonly used dependencies, such as Kafka, RabbitMQ, Redis, MongoDB, Elasticsearch, and Neo4j. For the complete list of supported services, see the official documentation.
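For example, a test that needs MongoDB instead of a relational database looks almost identical. Here's a minimal sketch (the test class name is illustrative), assuming the org.testcontainers:mongodb module is on the test classpath:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
class ProductControllerTest {

   @Container
   @ServiceConnection
   static MongoDBContainer mongodb =
                   new MongoDBContainer(DockerImageName.parse("mongo:6.0"));

   // your tests
}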

You can also define all your container dependencies in one TestConfiguration class and import it into your integration tests.

For example, let’s say you’re using Postgres and Kafka in your application. You can then create a class called ContainersConfig as follows:

@TestConfiguration(proxyBeanMethods = false)
public class ContainersConfig {

   @Bean
   @ServiceConnection
   public PostgreSQLContainer<?> postgreSQLContainer() {
       return new PostgreSQLContainer<>("postgres:15.2-alpine");
   }

   @Bean
   @ServiceConnection
   public KafkaContainer kafkaContainer() {
       return new KafkaContainer(
                   DockerImageName.parse("confluentinc/cp-kafka:7.2.1"));
   }
}

Finally, you can import the ContainersConfig into your tests as follows:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Import(ContainersConfig.class)
class ApplicationTests {

   //your tests
}

How to use a container that doesn’t have ServiceConnection support

In your applications, you may need to use a dependency that doesn’t have a dedicated Testcontainers module or out-of-the-box ServiceConnection support from Spring Boot. Don’t worry, you can still use Testcontainers GenericContainer and register the properties using DynamicPropertyRegistry.

For example, you might want to use Mailhog for testing email functionality. In this case, you can use Testcontainers GenericContainer and register Spring Boot email properties as follows:

@TestConfiguration(proxyBeanMethods = false)
public class ContainersConfig {

   @Bean
   @ServiceConnection
   public PostgreSQLContainer<?> postgreSQLContainer() {
       return new PostgreSQLContainer<>("postgres:15.2-alpine");
   }

   @Bean
   @ServiceConnection
   public KafkaContainer kafkaContainer() {
       return new KafkaContainer(
                    DockerImageName.parse("confluentinc/cp-kafka:7.2.1"));
   }

   @Bean
   public GenericContainer mailhogContainer(DynamicPropertyRegistry registry) {
       GenericContainer container = new GenericContainer("mailhog/mailhog")
                                            .withExposedPorts(1025);
       registry.add("spring.mail.host", container::getHost);
       registry.add("spring.mail.port", container::getFirstMappedPort);
       return container;
   }
}

As we’ve seen, we can use any containerized service and register the application properties.

Local development using Testcontainers

In the previous section, we learned how to use Testcontainers for testing Spring Boot applications. With Spring Boot 3.1.0 Testcontainers support, we can also use Testcontainers during the development time to run the application locally.

To do this, create a TestApplication class in the test classpath under src/test/java as follows:

import org.springframework.boot.SpringApplication;

public class TestApplication {
   public static void main(String[] args) {
       SpringApplication
         .from(Application::main) //Application is main entrypoint class
         .with(ContainersConfig.class)
         .run(args);
   }
}

Observe that we’ve used the configuration class ContainersConfig using .with(...) to attach it to the application launcher.

Now you can run TestApplication from your IDE. It will automatically start all the containers defined in ContainersConfig and configure the properties.

You can also run TestApplication using the Maven or Gradle build tools as follows:

./mvnw spring-boot:test-run //Maven
./gradlew bootTestRun //Gradle

Using DevTools with Testcontainers at development time

We’ve now learned how to use Testcontainers for local development. But one challenge with this setup is that every time the application is modified and a build is triggered, the existing containers will be destroyed and new containers will be created. This can result in slowness or loss of data between application restarts.

Spring Boot provides devtools to improve the developer experience by refreshing the application upon code changes. We can use the @RestartScope annotation provided by devtools to indicate that certain beans should be reused instead of recreated.

First, let’s add the spring-boot-devtools dependency as follows:

<!-- For Maven -->
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-devtools</artifactId>
   <scope>runtime</scope>
   <optional>true</optional>
</dependency>

<!-- For Gradle -->
testImplementation "org.springframework.boot:spring-boot-devtools"

Now, add @RestartScope annotation on bean definitions in ContainersConfig as follows:

@TestConfiguration(proxyBeanMethods = false)
public class ContainersConfig {

   @Bean
   @ServiceConnection
   @RestartScope
   public PostgreSQLContainer<?> postgreSQLContainer() {
       return new PostgreSQLContainer<>("postgres:15.2-alpine");
   }

   @Bean
   @ServiceConnection
   @RestartScope
   public KafkaContainer kafkaContainer() {
       return new KafkaContainer(
                DockerImageName.parse("confluentinc/cp-kafka:7.2.1"));
   }

   ...
}

Now if you make any application code changes and the build is triggered, the application will restart but use the existing containers.

Please note: Eclipse automatically triggers a build when the code changes are saved, while in IntelliJ IDEA, you need to trigger a build manually.

Conclusion

Modern software development involves using lots of technologies and tools to tackle growing business needs. This has resulted in a significant increase in the complexity of the development environment setup. Improving the developer experience isn't a nice-to-have anymore: it's a necessity.

To improve this developer experience, Spring Boot 3.1.0 added out-of-the-box support for Testcontainers. Spring Boot and Testcontainers integration works seamlessly with your local Docker, on CI, and with Testcontainers Cloud too.

This is an impactful transformation for not only testing but also local development. And developers can now look forward to an experience that brings the clone & run philosophy into reality.

Learn more

How to Create a Testcontainers for .NET Module for Wider Ecosystem https://www.docker.com/blog/how-to-create-a-testcontainers-for-net-module-for-wider-ecosystem/ Mon, 06 Feb 2023 20:23:08 +0000 https://www.docker.com/?p=52253 Testcontainers libraries make it easy to create reliable tests by allowing your unit tests to run with real dependencies. Anything that runs in containers can become a part of your tests with just a few lines of code: from databases and message brokers to Kubernetes clusters and cloud solutions for testing.

Flexible API and attention to detail like automatic cleanup and mapped ports randomization made Testcontainers a widely-adopted solution. Still, one thing that elevates Testcontainers even more is the ecosystem of modules — pre-configured abstractions that allow you to test applications with specific technologies without configuring the containers yourself.

Tiled logos of different technologies including Couchbase, MongoDB, and more.

And now, with the recent Testcontainers for .NET release, there’s better support for modules than ever before.

In this article, we’ll look at how to create a Testcontainers for .NET module for your favorite technology, how to add capabilities to the module so common configuration options are added in the API, and where to look for a good example of a module.


How to implement a module for Testcontainers for .NET

Testcontainers for .NET offers two ways of implementing a module, depending on the complexity of the use case. For simple modules, developers can inherit from the ContainerBuilder class. It provides a straightforward way to build a module and configure it as needed.

For more advanced use cases, Testcontainers for .NET provides a second option for developers to inherit from ContainerBuilder<TBuilderEntity, TContainerEntity, TConfigurationEntity>. This class offers a more flexible and powerful way to build modules and provides access to additional features and configurations.

Both approaches allow developers to share and reuse their configurations and best practices. They’re also a simple and consistent way to spin up containers.

The Testcontainers for .NET repository contains a .NET template to scaffold advanced modules quickly. To create and add a new module to the Testcontainers solution file, check out the repository and install the .NET template first:

git clone --branch develop git@github.com:testcontainers/testcontainers-dotnet.git
cd ./testcontainers-dotnet/
dotnet new --install ./src/Templates

The following CLI commands create and add a new PostgreSQL module to the solution file:

dotnet new tcm --name PostgreSql --official-module true --output ./src
dotnet sln add ./src/Testcontainers.PostgreSql/Testcontainers.PostgreSql.csproj

A module in Testcontainers for .NET typically consists of three classes representing the builder, configuration, and container. The PostgreSQL module we just created above consists of the PostgreSqlBuilder, PostgreSqlConfiguration, and PostgreSqlContainer classes.

  1. The builder class sets the module default configuration and validates it. It extends the Testcontainers builder and adds or overrides members specifically to configure the module. The builder is responsible for creating a valid configuration and container instance.
  2. The configuration class stores optional members to configure the module and interact with the container. Usually, these are properties like a Username or Password that are required sometime later.
  3. The container class manages the container lifecycle and provides module-specific members to interact with the running container; it's the class developers interact with the most. The result of the builder is an instance of the container class.

The next steps guide you through the process of creating a new module for Testcontainers for .NET. We’ll first show how to override and extend the default configuration provided by the ContainerBuilder class.

After that, we’ll explain how to add new members to the builder and configuration classes. By doing this, you extend the capabilities of the builder and configuration to support more complex use cases.

Set module configuration

The configuration classes in Testcontainers for .NET are designed to be immutable. In other words, once an instance of a configuration class is created, its values cannot be changed. This makes it more reliable, easier to understand, and better to share between different use cases like A/B testing.

To set the PostgreSQL module default configuration, override the read-only DockerResourceConfiguration property in PostgreSqlBuilder and set its value in both constructors. The default constructor sets DockerResourceConfiguration to the return value of Init().DockerResourceConfiguration, while the overloaded private constructor just sets the argument value. It receives an updated instance of the immutable Docker resource configuration as soon as a property changes. The .NET template already includes this configuration, making it easy for developers to quickly get started by simply uncommenting the necessary parts.

public PostgreSqlBuilder()
    : this(new PostgreSqlConfiguration())
{
    DockerResourceConfiguration = Init().DockerResourceConfiguration;
}

private PostgreSqlBuilder(PostgreSqlConfiguration resourceConfiguration)
    : base(resourceConfiguration)
{
    DockerResourceConfiguration = resourceConfiguration;
}

protected override PostgreSqlConfiguration DockerResourceConfiguration { get; }

To append the PostgreSQL configurations to the default Testcontainers configurations, override or uncomment the member Init(). Then, add the necessary configurations, such as the Docker image and a wait strategy, to the base implementation.

protected override PostgreSqlBuilder Init()
{
    var waitStrategy = Wait.ForUnixContainer().UntilCommandIsCompleted("pg_isready");
    return base.Init().WithImage("postgres:15.1").WithPortBinding(5432, true).WithWaitStrategy(waitStrategy);
}

Add module capability

When using the PostgreSQL Docker image, it’s required to have a password set in order to run it. To demonstrate how to add a new builder capability, we’ll use this requirement as an example.

First, add a new property Password to the PostgreSqlConfiguration class. Then, add a password argument with a default value of null to the default constructor.

This allows the builder to set individual arguments or configurations. The overloaded PostgreSqlConfiguration(PostgreSqlConfiguration, PostgreSqlConfiguration) constructor takes care of merging the configurations together. The builder will receive and hold an updated instance that contains all the information:

public PostgreSqlConfiguration(string password = null)
{
    Password = password;
}

public PostgreSqlConfiguration(PostgreSqlConfiguration oldValue, PostgreSqlConfiguration newValue)
    : base(oldValue, newValue)
{
    Password = BuildConfiguration.Combine(oldValue.Password, newValue.Password);
}

public string Password { get; }

Since the PostgreSqlConfiguration class is now able to store the password value, we can add a member WithPassword(string) to PostgreSqlBuilder. We don't just store the password in the PostgreSqlConfiguration instance to construct the database connection string later; we also set the necessary environment variable POSTGRES_PASSWORD to run the container.

public PostgreSqlBuilder WithPassword(string password)
{
    return Merge(DockerResourceConfiguration, new PostgreSqlConfiguration(password: password)).WithEnvironment("POSTGRES_PASSWORD", password);
}

By following this approach, the PostgreSqlContainer class can access the configured values. This opens up additional functionality, such as constructing the database connection string, and lets the class provide a more streamlined and convenient experience for developers working with the module.

public string GetConnectionString()
{
    var properties = new Dictionary<string, string>();
    properties.Add("Host", Hostname);
    properties.Add("Port", GetMappedPublicPort(5432).ToString());
    properties.Add("Database", "postgres");
    properties.Add("Username", "postgres");
    properties.Add("Password", _configuration.Password);
    return string.Join(";", properties.Select(property => string.Join("=", property.Key, property.Value)));
}

Finally, there’re two approaches to ensure that the required password is provided. Either override the Validate() member and check the immutable configuration instance:

protected override void Validate()
{
    base.Validate();

    _ = Guard.Argument(DockerResourceConfiguration.Password, nameof(PostgreSqlConfiguration.Password))
        .NotNull()
        .NotEmpty();
}

or extend the Init() member as we have already done and add WithPassword(Guid.NewGuid().ToString()) to set a default value.
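The second approach could look roughly like this; a sketch that extends the Init() override shown earlier with a random default password:

protected override PostgreSqlBuilder Init()
{
    var waitStrategy = Wait.ForUnixContainer().UntilCommandIsCompleted("pg_isready");
    return base.Init()
        .WithImage("postgres:15.1")
        .WithPortBinding(5432, true)
        // a random default password; callers can still override it with WithPassword(...)
        .WithPassword(Guid.NewGuid().ToString())
        .WithWaitStrategy(waitStrategy);
}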

It’s always a good idea to add both approaches. This way, the user can be sure that the module is properly configured, whether by themself or by default. This helps maintain a consistent and reliable experience for the user. Following it, when creating your own modules, either in-house or public, you can be a role model for other developers too.

The Testcontainers for .NET repository provides a reference implementation of the Microsoft SQL Server module. This module is a comprehensive example and can serve as a guide for you to get a better understanding of how to implement an entire module including the tests!

Conclusion

Testcontainers for .NET offers a streamlined and flexible way to spin up test dependencies. By utilizing the .NET template for the new modules, developers can take advantage of the pre-existing configurations and easily extend them with custom abstractions.

This helps grow the ecosystem of technologies you can test your applications against with just a few lines of code, without requiring the end developer to handle low-level configuration such as which ports to expose or where to place config files inside the container.

Great use cases for the modules include public contributions to the Testcontainers for .NET project to support your favorite database or technology and also in-house abstractions to help your colleagues keep up with best practices.

All in all, by following the steps outlined in this article, you can easily extend the capabilities of Testcontainers for .NET and make the most out of your testing setup.

Learn more

Testcontainers: Testing with Real Dependencies https://www.docker.com/blog/testcontainers-testing-with-real-dependencies/ Mon, 14 Nov 2022 20:12:50 +0000 https://www.docker.com/?p=52240 Software evolves over time and automated testing is an essential prerequisite for Continuous Integration and Continuous Delivery. Developers write various types of tests, such as unit tests, integration tests, performance tests, and E2E tests for measuring different aspects of the software.

Usually, unit testing is done to verify only business logic. And depending on the part of the system that is tested, external dependencies tend to be mocked or stubbed.

But the unit tests alone don’t give much confidence because the actual end-to-end functionality depends on various external service integrations. So, integration tests are used to verify the overall behavior of the system by using real dependencies.

Traditionally integration testing is a complex process that can involve:

  • Installing and configuring the required dependent services such as databases, message brokers, etc.
  • Setting up the web or application server
  • Building and deploying the artifact (jar, war, native executable, etc) on the server
  • Running integration tests

With Testcontainers, you can have the lightweight experience and simplicity of unit tests, combined with the reliability of integration tests running against real dependencies.


Why is testing with real dependencies important?

Tests should enable the developers to verify application behavior with quick feedback cycles during the actual development activity.

Testing with mocks or in-memory services not only gives the wrong impression that the system is working fine but can also significantly delay the feedback cycle. Tests using real dependencies exercise the actual code and give more confidence.

Consider a common scenario of using in-memory databases like H2 for testing while using Postgres or SQL Server in production. There are a couple of reasons why this is a bad practice.

1. Compatibility Issues

Any non-trivial application will leverage some of the database-specific features that might not be supported by in-memory databases. For example, a common way to apply pagination is using LIMIT and OFFSET.

SELECT id, name FROM employee ORDER BY name LIMIT 25 OFFSET 50

Imagine using the H2 database for testing and MS SQL Server for production. When you test with H2, the tests will pass, giving the wrong impression that your code is working fine. But it will fail in production because MS SQL Server doesn't support the LIMIT … OFFSET syntax (it uses OFFSET … FETCH NEXT instead).

2. In-memory databases may not support all the features of your production database

Sometimes applications use advanced, vendor-specific database features that may not be fully supported by in-memory databases. Examples include XML/JSON transformation functions, window functions, and common table expressions (CTEs). In these cases, it's impossible to test using in-memory databases.

These problems grow even larger when you're mocking services in your own code. While mocks can help in scenarios where you can successfully extract the mock definition and use it as a contract for a service, verifying that compatibility oftentimes only adds complexity to the test setup.

The typical use of mocks won't allow you to reliably verify that your system's behavior will work in the production environment. It also won't give you confidence in the test suite's ability to catch issues caused by code incompatibilities and third-party integrations.

So, it’s strongly recommended to write tests using real dependencies as much as possible and use mocks only when needed.

Testing with real dependencies using Testcontainers

Testcontainers is a testing library that enables you to write tests using real dependencies with disposable Docker containers. It provides a programmable API to spin up required dependent services as Docker containers. This way, you can write tests using real services instead of mocks. So, regardless of whether you’re writing unit, API, or end-to-end tests, you can write tests using real dependencies with the same programming model.
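To give a feel for that programmable API, here's a minimal sketch in Java that starts a throwaway Redis container and reads back its host and mapped port (the image is just an example; any containerized service works the same way):

GenericContainer<?> redis =
        new GenericContainer<>(DockerImageName.parse("redis:7-alpine"))
                .withExposedPorts(6379);
redis.start();

// the exposed port is mapped to a random free host port to avoid clashes
String redisUrl = redis.getHost() + ":" + redis.getFirstMappedPort();

// ... run tests against redisUrl ...

redis.stop();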

Testcontainers diagram

Testcontainers libraries are available for the following languages and integrate well with most of the frameworks and testing libraries:

  • Java
  • Go
  • Node.js 
  • .NET
  • Python
  • Rust

Case study

Let’s see how Testcontainers can be used to test various slices of an application and how all of them look like “Unit tests with real dependencies”.

We’ll use example code from a SpringBoot application implementing a typical API service that’s consumed via a web app and uses Postgres for storing data. But since Testcontainers provides you with an idiomatic API for your favorite language, a similar setup can be achieved in all of them.

Treat these examples as illustrations to get a feel for what's possible. If you're in the Java ecosystem, you'll recognize tests you've written in the past, or you can take inspiration for how to write them.

Testing data repositories

Let’s say we have the following Spring Data JPA repository with one custom method.

public interface TodoRepository extends PagingAndSortingRepository<Todo, String> {
   @Query("select t from Todo t where t.completed is false")
   Iterable<Todo> getPendingTodos();
}

As we mentioned above, using an in-memory database for testing while using a different type of database for production isn’t recommended and can cause issues. A feature or query syntax supported by your production database type might not be supported by an in-memory database.

For example, the following query (which you might have in your data migration scripts) would work fine in PostgreSQL but will break with H2.

INSERT INTO todos (id, title)
VALUES ('1', 'Learn Modern Integration Testing with Testcontainers')
ON CONFLICT do nothing;

So, it’s always recommended to test with the same type of database that’s used for production.

We can write unit tests for TodoRepository using SpringBoot’s slice test annotation @DataJpaTest. We’ll do this by provisioning a Postgres container using Testcontainers as follows:

@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@Testcontainers
class TodoRepositoryTest {
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:14-alpine");

    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Autowired
    TodoRepository repository;

    @BeforeEach
    void setUp() {
        repository.deleteAll();
        repository.save(new Todo(null, "Todo Item 1", true, 1));
        repository.save(new Todo(null, "Todo Item 2", false, 2));
        repository.save(new Todo(null, "Todo Item 3", false, 3));
    }

    @Test
    void shouldGetPendingTodos() {
        assertThat(repository.getPendingTodos()).hasSize(2);
    }
}

The Postgres database dependency is provisioned by the Testcontainers JUnit 5 extension, and the test talks to a real Postgres database. For more information on container lifecycle management, see Testcontainers and JUnit integration.

By testing with the same type of database that's used in production instead of an in-memory database, we avoid database compatibility issues altogether and increase confidence in our tests.

For database testing, Testcontainers provides special JDBC URL support which makes it easier to work with SQL databases.
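For example, pointing a plain JDBC connection at a tc: URL is enough for Testcontainers to start the database on demand. This is a minimal sketch, assuming the org.testcontainers:postgresql dependency is on the test classpath; see the JDBC support documentation for all URL options:

// Testcontainers intercepts the tc: prefix and starts postgres:14-alpine automatically
String url = "jdbc:tc:postgresql:14-alpine:///todos";

// "test"/"test" are the default credentials of the Testcontainers PostgreSQL container
try (Connection connection = DriverManager.getConnection(url, "test", "test")) {
    // use the connection as usual; the container lifecycle is handled for you
}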

Testing REST API endpoints

We can test API endpoints by bootstrapping the application along with the required dependencies such as the database provisioned via Testcontainers. The programming model for testing REST API endpoints is the same as the Repository unit test.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
public class TodoControllerTests {
    @LocalServerPort
    private Integer port;
    
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:14-alpine");

    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Autowired
    TodoRepository todoRepository;

    @BeforeEach
    void setUp() {
        todoRepository.deleteAll();
        RestAssured.baseURI = "http://localhost:" + port;
    }

    @Test
    void shouldGetAllTodos() {
        List<Todo> todos = List.of(
                new Todo(null, "Todo Item 1", false, 1),
                new Todo(null, "Todo Item 2", false, 2)
        );
        todoRepository.saveAll(todos);

        given()
                .contentType(ContentType.JSON)
                .when()
                .get("/todos")
                .then()
                .statusCode(200)
                .body(".", hasSize(2));
    }
}

We’ve bootstrapped the application using the @SpringBootTest annotation and used RestAssured for making API calls and verifying the response. This will give us more confidence in our tests as there are no mocks involved, and it enables developers to do any kind of internal code refactoring without breaking API contact.

End-to-end testing using Selenium and Testcontainers

Selenium is a popular browser automation tool for performing end-to-end testing. Testcontainers provides a Selenium module that simplifies the execution of selenium-based tests in a Docker container.

@Testcontainers
public class SeleniumE2ETests {
   @Container
   static BrowserWebDriverContainer<?> chrome = new BrowserWebDriverContainer<>().withCapabilities(new ChromeOptions());
 
   static RemoteWebDriver driver;
   
   @BeforeAll
   static void beforeAll() {
       driver = new RemoteWebDriver(chrome.getSeleniumAddress(), new ChromeOptions());
   }
 
   @AfterAll
   static void afterAll() {
       driver.quit();
   }
 
   @Test
   void testViewHomePage() {
      String baseUrl = "https://myapp.com";
      driver.get(baseUrl);
      assertThat(driver.getTitle()).isEqualTo("App Title");
   }
}

We’re able to run Selenium tests using the same programming model with the WebDriver provided by Testcontainers. Testcontainers even makes it easy to record videos of the test execution without having to go through a complex configuration setup.

You can take a look at the Testcontainers Java SpringBoot QuickStart project for reference.

Conclusion

We looked at various types of tests that developers use for their applications: data access layer, API tests, and even end-to-end tests. We also discovered how using Testcontainers libraries simplifies the setup to run these with the real dependencies like the actual version of the database you’ll use in production. 

Testcontainers is available for multiple popular programming languages, for example Java, Go, .NET, and Python. It also offers an idiomatic approach to transforming your tests with real dependencies into the unit tests that developers know and love.

Testcontainers-based tests run the same way in your CI pipeline and locally, whether you choose to run an individual test via your IDE, a class of tests, or even the whole suite from the command line. This gives you unparalleled reproducibility of issues and developer experience.

Finally, Testcontainers enables writing tests using real dependencies without having to use mocks, which brings more confidence to your test suite. So, if you're a fan of a practical approach, check out the Testcontainers Java SpringBoot QuickStart, which has all the test types we looked at in this article available to run from the get-go.

Learn more
