A Promising Methodology for Testing GenAI Applications in Java
https://www.docker.com/blog/testing-genai-applications-in-java/
Wed, 24 Apr 2024

In the vast universe of programming, the era of generative artificial intelligence (GenAI) has marked a turning point, opening up a plethora of possibilities for developers.

Tools such as LangChain4j and Spring AI have democratized access to the creation of GenAI applications in Java, allowing Java developers to dive into this fascinating world. With LangChain4j, for instance, setting up and interacting with large language models (LLMs) has become exceptionally straightforward. Consider the following Java code snippet:

public static void main(String[] args) {
    var llm = OpenAiChatModel.builder()
            .apiKey("demo")
            .modelName("gpt-3.5-turbo")
            .build();
    System.out.println(llm.generate("Hello, how are you?"));
}

This example illustrates how a developer can quickly instantiate an LLM within a Java application. By simply configuring the model with an API key and specifying the model name, developers can begin generating text responses immediately. This accessibility is pivotal for fostering innovation and exploration within the Java community. Beyond that, a wide range of models can be run locally, and various vector databases are available for storing embeddings and performing semantic searches, among other technological marvels.

Despite this progress, however, we are faced with a persistent challenge: the difficulty of testing applications that incorporate artificial intelligence. This aspect seems to be a field where there is still much to explore and develop.

In this article, I will share a methodology that I find promising for testing GenAI applications.


Project overview

The example project focuses on an application that provides an API for interacting with two AI agents capable of answering questions. 

An AI agent is a software entity designed to perform tasks autonomously, using artificial intelligence to simulate human-like interactions and responses. 

In this project, one agent uses direct knowledge already contained within the LLM, while the other leverages internal documentation to enrich the LLM through retrieval-augmented generation (RAG). This approach allows the agents to provide precise and contextually relevant answers based on the input they receive.

I prefer to omit the technical details about RAG, as ample information is available elsewhere. I’ll simply note that this example employs a particular variant of RAG, which simplifies the traditional process of generating and storing embeddings for information retrieval.

Instead of dividing documents into chunks and embedding those chunks, this project uses an LLM to generate a summary of each document. The embedding is generated from that summary.

When the user writes a question, an embedding of the question will be generated and a semantic search will be performed against the embeddings of the summaries. If a match is found, the user’s message will be augmented with the original document.

This way, there’s no need to deal with chunk configuration, tune the number of chunks to retrieve, or worry about whether the way the user’s message is augmented makes sense. If there is a document that covers what the user is asking about, it will be included in the message sent to the LLM.
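
To make that flow concrete, here’s a minimal sketch using LangChain4j primitives. It isn’t code from the project; the llm, embeddingModel, and embeddingStore objects (and the single-result search) are assumptions for illustration:

import java.util.List;
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.store.embedding.EmbeddingMatch;

// Ingestion: summarize each document, embed the summary, store the full document as the payload.
String summary = llm.generate("Summarize the following document:\n\n" + documentText);
Embedding summaryEmbedding = embeddingModel.embed(summary).content();
embeddingStore.add(summaryEmbedding, TextSegment.from(documentText));

// Query time: embed the question and search against the summary embeddings.
Embedding questionEmbedding = embeddingModel.embed(question).content();
List<EmbeddingMatch<TextSegment>> matches = embeddingStore.findRelevant(questionEmbedding, 1);
if (!matches.isEmpty()) {
    // Augment the user's message with the matched original document.
    question += "\n\nAnswer using this document:\n" + matches.get(0).embedded().text();
}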

Technical stack

The project is developed in Java and utilizes a Spring Boot application with Testcontainers and LangChain4j.

For setting up the project, I followed the steps outlined in the “Local Development Environment with Testcontainers” and “Spring Boot Application Testing and Development with Testcontainers” guides.

I also use Testcontainers Desktop to facilitate database access, verify the generated embeddings, and review the container logs.

The challenge of testing

The real challenge arises when trying to test the responses generated by language models. Traditionally, we might settle for verifying that the response includes certain keywords, but this approach is insufficient and error-prone.

static String question = "How I can install Testcontainers Desktop?";

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    assertThat(answer).contains("https://testcontainers.com/desktop/");
}

This approach is not only fragile but also lacks the ability to assess the relevance or coherence of the response.

An alternative is to employ cosine similarity to compare the embeddings of a “reference” response and the actual response, providing a more semantic form of evaluation. 

This method measures the similarity between two vectors/embeddings by calculating the cosine of the angle between them. The closer the cosine is to 1 (both vectors pointing in the same direction), the more semantically similar the actual response is to the “reference” response.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
        - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
        - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
        - Answer must be less than 5 sentences
        """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    double cosineSimilarity = getCosineSimilarity(reference, answer);
    assertThat(cosineSimilarity).isGreaterThan(0.8);
}
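
The getCosineSimilarity helper isn’t shown above. A minimal sketch, computing cos(θ) = (A·B) / (‖A‖ ‖B‖) over LangChain4j embeddings, could look like this; the in-process all-MiniLM-L6-v2 embedding model is my assumption here, and any embedding model would do:

import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;

static double getCosineSimilarity(String reference, String answer) {
    // Embed both texts with the same model so the vectors are comparable.
    EmbeddingModel embeddingModel = new AllMiniLmL6V2EmbeddingModel();
    float[] a = embeddingModel.embed(reference).content().vector();
    float[] b = embeddingModel.embed(answer).content().vector();
    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}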

However, this method introduces the problem of selecting an appropriate threshold to determine the acceptability of the response, in addition to the opacity of the evaluation process.

Toward a more effective method

The real problem here arises from the fact that answers provided by the LLM are in natural language and non-deterministic. Because of this, using current testing methods to verify them is difficult, as these methods are better suited to testing predictable values. 

However, we already have a great tool for understanding non-deterministic answers in natural language: LLMs themselves. Thus, the key may lie in using one LLM to evaluate the adequacy of responses generated by another LLM. 

This proposal involves defining detailed validation criteria and using an LLM as a “Validator Agent” to determine whether the responses meet the specified requirements. This approach can be applied to validate answers to specific questions, drawing on both general knowledge and specialized information.

By incorporating detailed instructions and examples, the Validator Agent can provide accurate and justified evaluations, offering clarity on why a response is considered correct or incorrect.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
        - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
        - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
        - Answer must be less than 5 sentences
        """;

@Test
void verifyStraightAgentFailsToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/straight?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("no");
}

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("yes");
}

We can even test more complex responses where the LLM should suggest a better alternative to the user’s question.

static String question = "How I can find the random port of a Testcontainer to connect to it?";
static String reference = """
        - Answer must not mention using getMappedPort() method to find the random port of a Testcontainer
        - Answer must mention that you don't need to find the random port of a Testcontainer to connect to it
        - Answer must indicate that you can use the Testcontainers Desktop app to configure fixed port
        - Answer must be less than 5 sentences
        """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToDebugWithTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("yes");
}

Validator Agent

The configuration for the Validator Agent doesn’t differ from that of other agents. It is built using the LangChain4j AI Service and a list of specific instructions:

public interface ValidatorAgent {
    @SystemMessage("""
                ### Instructions
                You are a strict validator.
                You will be provided with a question, an answer, and a reference.
                Your task is to validate whether the answer is correct for the given question, based on the reference.
                
                Follow these instructions:
                - Respond only 'yes', 'no' or 'unsure' and always include the reason for your response
                - Respond with 'yes' if the answer is correct
                - Respond with 'no' if the answer is incorrect
                - If you are unsure, simply respond with 'unsure'
                - Respond with 'no' if the answer is not clear or concise
                - Respond with 'no' if the answer is not based on the reference
                
                Your response must be a json object with the following structure:
                {
                    "response": "yes",
                    "reason": "The answer is correct because it is based on the reference provided."
                }
                
                ### Example
                Question: Is Madrid the capital of Spain?
                Answer: No, it's Barcelona.
                Reference: The capital of Spain is Madrid
                ###
                Response: {
                    "response": "no",
                    "reason": "The answer is incorrect because the reference states that the capital of Spain is Madrid."
                }
                """)
    @UserMessage("""
            ###
            Question: {{question}}
            ###
            Answer: {{answer}}
            ###
            Reference: {{reference}}
            ###
            """)
    ValidatorResponse validate(@V("question") String question, @V("answer") String answer, @V("reference") String reference);

    record ValidatorResponse(String response, String reason) {}
}

As you can see, I’m using Few-Shot Prompting to guide the LLM on the expected responses. I also request a JSON format for responses to facilitate parsing them into objects, and I specify that the reason for the answer must be included, to better understand the basis of its verdict.
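
LangChain4j’s AiServices factory turns this interface into a working object. A minimal sketch of the wiring, reusing the chat model from the first snippet (the exact setup in the project may differ):

import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

var llm = OpenAiChatModel.builder()
        .apiKey("demo")
        .modelName("gpt-3.5-turbo")
        .build();

// AiServices generates the implementation: it fills the prompt templates,
// calls the model, and maps the JSON reply onto the ValidatorResponse record.
ValidatorAgent validatorAgent = AiServices.create(ValidatorAgent.class, llm);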

Conclusion

The evolution of GenAI applications brings with it the challenge of developing testing methods that can effectively evaluate the complexity and subtlety of responses generated by advanced artificial intelligences. 

The proposal to use an LLM as a Validator Agent represents a promising approach, paving the way towards a new era of software development and evaluation in the field of artificial intelligence. Over time, we hope to see more innovations that allow us to overcome the current challenges and maximize the potential of these transformative technologies.

Shorter Feedback Loops Developing Java Apps with Digma’s Free Docker Extension
https://www.docker.com/blog/shorter-feedback-loops-developing-java-apps-with-digmas-free-docker-extension/
Tue, 20 Jun 2023

Many engineering teams follow a similar process when developing Docker images. This process encompasses activities such as development, testing, and building, and the images are then released as the subsequent version of the application. During each of these distinct stages, a significant quantity of data can be gathered, offering valuable insights into the impact of code modifications and providing a clearer understanding of the stability and maturity of the product features.

Observability has made a huge leap in recent years, and advanced technologies such as OpenTelemetry (OTEL) and eBPF have simplified the process of collecting runtime data for applications. Yet, for all that progress, developers may still be on the fence about whether and how to use this new resource. The effort required to transform the raw data into something meaningful and beneficial for the development process may seem to outweigh the advantages offered by these technologies.

The practice of continuous feedback envisions a development flow in which this particular problem has been solved. By making the evaluation of code runtime data continuous, developers can benefit from shorter feedback loops and tighter control of their codebase. Instead of waiting for issues to develop in production systems, or even surface in testing, various developer tools can watch the code observability data for you and provide early warning and rigorous linting for regressions, code smells, or issues. 

At the same time, gathering information back into the code from multiple deployment environments, such as the test or production environment, can help developers understand how a specific function, query, or event is performing in the real world at a single glance.


Meet Digma, the first linter for your runtime data 

Digma is a free developer tool that was created to bridge the continuous feedback gap in the DevOps loop. It aims to make sense of the gazillion metrics, traces, and logs the code is spewing out — so that developers won’t have to. And it does this continuously and automatically.

To make the tool even more practical, Digma processes the observability data and uses it to lint the source code itself, right in the IDE. From a developer’s perspective, this means a whole new level of information about the code is revealed — allowing for better code design based on real-world feedback, as well as quick turnaround for fixing regression or avoiding common pitfalls and anti-patterns.

How Digma is deployed

Digma is packaged as a self-contained Docker extension to make it easy for developers to evaluate their code without registration to an external service or sharing any data that might not be allowed by corporate policy (Figure 1). As such, Digma acts as your own intelligent agent for monitoring code execution, especially in development and testing. The backend component is able to track execution over time and highlight any inadvertent changes to behavior, performance, or error patterns that should be addressed.  

To collect data about the code, behind the scenes Digma leverages OpenTelemetry, a widely used open standard for modern observability. To make getting started easier, Digma takes care of the configuration work for you, so from the developer’s point of view, getting started with continuous feedback is equivalent to flipping a switch.

Figure 1: Overview of Digma.

Once the information is collected using OTEL, Digma runs it through what could be described as a reverse pipeline, aggregating the data, analyzing it and providing it via an API to multiple outlets including the IDE plugin, Jaeger, and the Docker extension dashboard that we’ll describe in this example.

Note: Currently, Digma supports Java applications running on IntelliJ IDEA; however, support for additional languages and platforms will be rolling out in the coming months.

Installing Digma

Prerequisites: Docker Desktop 4.8 or later.

Note: You must ensure that Docker Extensions are enabled (Figure 2).

Figure 2: Enabling Docker Extensions.

Step 1. Installing the Digma Docker Extension

In the Extensions Marketplace, search for Digma and select Install (Figure 3).

Figure 3: Install Digma.

Step 2. Collecting feedback about your code

After installing the Digma extension from the marketplace, you’ll be directed to a quickstart page that will guide you through the next steps to collect feedback about your code. Digma can collect information from two different sources:

  • Your tests/debug sessions within the IDE
  • Any Docker containers you may have running in Docker Desktop

Getting feedback while debugging/running code in the IDE

Digma’s IDE extension makes the process of collecting data about your code in the IDE trivial. The entire setup is reduced to a single toggle button click (Figure 4). Behind the scenes, Digma adds variables to the runtime configuration so that it uses OpenTelemetry for observability. No prior knowledge of observability is needed, and no code changes are necessary to get this to work.

With the observability toggle enabled, you’ll start getting immediate feedback as you run your code locally, or even when you execute your tests. Digma will be on the lookout for any regressions and will automatically spot code smells and issues.

Figure 4: Observability toggle switch.

Getting feedback from running containers

In addition to the IDE, you can also choose to collect data from any container on your machine that runs Java code. The process is extremely simple and is documented on the Digma extension’s Getting Started page. It doesn’t involve changing any Dockerfile or code: you download the OTEL agent, mount it to a volume on the container, and set some environment variables that point the Java process to use it.

curl --create-dirs -O -L --output-dir ./otel \
  https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar

curl --create-dirs -O -L --output-dir ./otel \
  https://github.com/digma-ai/otel-java-instrumentation/releases/latest/download/digma-otel-agent-extension.jar

export JAVA_TOOL_OPTIONS="-javaagent:/otel/opentelemetry-javaagent.jar -Dotel.exporter.otlp.endpoint=http://localhost:5050 -Dotel.javaagent.extensions=/otel/digma-otel-agent-extension.jar"
export OTEL_SERVICE_NAME={--ENTER YOUR SERVICE NAME HERE--}
export DEPLOYMENT_ENV=LOCAL_DOCKER

docker run -d -v "/$(pwd)/otel:/otel" --env JAVA_TOOL_OPTIONS --env OTEL_SERVICE_NAME --env DEPLOYMENT_ENV {-- APPEND PARAMS AND REPO/IMAGE --}

Uncovering how the code behaves in runtime

Whether you collect data from running containers or in the IDE, Digma will continuously analyze the code behavior and make the analysis results available both as code annotations in the IDE and as dashboards in the Digma extension itself. To demonstrate a live example, we’ve created a sample application based on the Java Spring Boot PetClinic app. 

In this scenario, we’ll clone the repo and run the code from the IDE, triggering a few actions to see what they reveal about the code. For convenience, we’ve created run configurations to simulate common API calls and create interesting data:

  1. Open the PetClinic project here in your IntelliJ IDE.
  2. Run the petclinic-service configuration.
  3. Access the API and play around with it on http://localhost:9753/ or simply run the ClientTested run config included in the workspace.

Almost immediately, the result of the dynamic linting analysis will start appearing over the code in the IDE (Figure 5):

Figure 5: Results of Digma analysis.

At the same time, the Digma extension itself will present a dashboard cataloging all of the assets that have been identified, including code locations, API endpoints, database queries, and more (Figure 6). For each asset, you’ll be able to access basic statistics about its performance as well as a more detailed list of issues of concern that may need to be tracked about their runtime behavior.

Figure 6: Digma dashboard.

Using the observability data

One of the main problems Digma tries to solve is not how to collect observability data, but how to turn that data into a useful, practical asset that speeds up development processes and improves the code. Digma’s insights can be applied directly at design time, based on existing data, as well as to validate changes as they run in dev/test/prod, bringing early feedback and shorter loops into the development process.

Examples of design-time planning code insights:

  • See runtime usage for the modified functions and understand who will be affected by the change
  • Understand the concurrency, bottlenecks, and criticality you need to plan for
  • Identify existing errors and exceptions that need to be handled by the new component
  • See a complete visualization of the flow of control through the modified code

Examples of runtime code validations for shorter feedback loops:

  • Catch code and query modeling smells, such as N+1 selects
  • Identify new performance bottlenecks and regressions as a result of the changes
  • Spot scaling issues earlier and address them as a part of the dev cycle

As Digma evolves, it continues to track and provide clear visibility into specific areas that are important to developers with the goal of having complete clarity about how each piece of code is behaving in real-world scenarios and catching issues and regressions much earlier in the process.

Who should use Digma?

Unlike many other observability solutions, Digma is code first and developer first. Digma is completely free for developers, doesn’t require any code changes or data sharing, and will get you from zero to impactful data about your code within minutes. If you work on Java code in a JetBrains IDE and want to improve your code with actual execution data, you can get started by picking up the Digma extension from the marketplace.

You can provide feedback in our Slack channel and tell us where Digma improved your dev cycle.

Spring Boot Application Testing and Development with Testcontainers
https://www.docker.com/blog/spring-boot-application-testing-and-development-with-testcontainers/
Wed, 17 May 2023

Spring Boot 3.1.0 introduced great support for Testcontainers that’ll not only make writing integration tests easier, but also make local development a breeze.


“Clone & Run” Developer Experience

Gone are the days of maintaining a document with a long list of manual steps needed to set up an application locally before running it. Docker made installing application dependencies easier, but you still had to maintain different versions of scripts, depending on your operating system, to manually spin up those dependencies as Docker containers.

With the Testcontainers support added in Spring Boot 3.1.0, developers can now simply clone the repository and run the application! All the application dependencies, such as databases and message brokers, can be configured to start automatically when we run the application.

If you’re new to Testcontainers, go through the “Getting started with Testcontainers in a Java Spring Boot Project” guide to learn how to test your Spring Boot applications using Testcontainers.

Simplified integration testing using ServiceConnections

Prior to Spring Boot 3.1.0, we had to use @DynamicPropertySource to set the dynamic properties obtained from containers started by Testcontainers as follows:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
class CustomerControllerTest {

   @Container
   static PostgreSQLContainer<?> postgres = 
                  new PostgreSQLContainer<>("postgres:15-alpine");

   @DynamicPropertySource
   static void configureProperties(DynamicPropertyRegistry registry) {
       registry.add("spring.datasource.url", postgres::getJdbcUrl);
       registry.add("spring.datasource.username", postgres::getUsername);
       registry.add("spring.datasource.password", postgres::getPassword);
   }

   // your tests
}

Spring Boot 3.1.0 then introduced the new concept of a ServiceConnection, which automatically configures the necessary Spring Boot properties for the supported containers.

First, add the spring-boot-testcontainers as a test dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-testcontainers</artifactId>
    <scope>test</scope>
</dependency>

Now, we can rewrite the previous example by adding @ServiceConnection without having to explicitly configure spring.datasource.url, spring.datasource.username, and spring.datasource.password using the @DynamicPropertySource approach.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
class CustomerControllerTest {

   @Container
   @ServiceConnection
   static PostgreSQLContainer<?> postgres = 
                   new PostgreSQLContainer<>("postgres:15-alpine");

   // your tests
}

Notice that we’re not registering the datasource properties explicitly anymore.

The @ServiceConnection support works not only for relational databases but also for many other commonly used dependencies like Kafka, RabbitMQ, Redis, MongoDB, Elasticsearch, and Neo4j. For the complete list of supported services, see the official documentation.

You can also define all your container dependencies in one TestConfiguration class and import it into your integration tests.

For example, let’s say you’re using Postgres and Kafka in your application. You can then create a class called ContainersConfig as follows:

@TestConfiguration(proxyBeanMethods = false)
public class ContainersConfig {

   @Bean
   @ServiceConnection
   public PostgreSQLContainer<?> postgreSQLContainer() {
       return new PostgreSQLContainer<>("postgres:15.2-alpine");
   }

   @Bean
   @ServiceConnection
   public KafkaContainer kafkaContainer() {
       return new KafkaContainer(
                   DockerImageName.parse("confluentinc/cp-kafka:7.2.1"));
   }
}

Finally, you can import the ContainersConfig into your tests as follows:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Import(ContainersConfig.class)
class ApplicationTests {

   //your tests
}

How to use a container that doesn’t have ServiceConnection support

In your applications, you may need to use a dependency that doesn’t have a dedicated Testcontainers module or out-of-the-box ServiceConnection support from Spring Boot. Don’t worry: you can still use the Testcontainers GenericContainer and register the properties using DynamicPropertyRegistry.

For example, you might want to use Mailhog for testing email functionality. In this case, you can use Testcontainers GenericContainer and register Spring Boot email properties as follows:

@TestConfiguration(proxyBeanMethods = false)
public class ContainersConfig {

   @Bean
   @ServiceConnection
   public PostgreSQLContainer<?> postgreSQLContainer() {
       return new PostgreSQLContainer<>("postgres:15.2-alpine");
   }

   @Bean
   @ServiceConnection
   public KafkaContainer kafkaContainer() {
       return new KafkaContainer(
                    DockerImageName.parse("confluentinc/cp-kafka:7.2.1"));
   }

   @Bean
   public GenericContainer mailhogContainer(DynamicPropertyRegistry registry) {
       GenericContainer container = new GenericContainer("mailhog/mailhog")
                                            .withExposedPorts(1025);
       registry.add("spring.mail.host", container::getHost);
       registry.add("spring.mail.port", container::getFirstMappedPort);
       return container;
   }
}

As we’ve seen, we can use any containerized service and register the application properties.

Local development using Testcontainers

In the previous section, we learned how to use Testcontainers for testing Spring Boot applications. With Spring Boot 3.1.0’s Testcontainers support, we can also use Testcontainers at development time to run the application locally.

To do this, create a TestApplication class in the test classpath under src/test/java as follows:

import org.springframework.boot.SpringApplication;

public class TestApplication {
   public static void main(String[] args) {
       SpringApplication
         .from(Application::main) //Application is main entrypoint class
         .with(ContainersConfig.class)
         .run(args);
   }
}

Observe that we’ve attached the ContainersConfig configuration class to the application launcher using .with(...).

Now you can run TestApplication from your IDE. It will automatically start all the containers defined in ContainersConfig and configure the properties.

You can also run TestApplication using the Maven or Gradle build tools as follows:

./mvnw spring-boot:test-run   # Maven
./gradlew bootTestRun         # Gradle

Using DevTools with Testcontainers at development time

We’ve now learned how to use Testcontainers for local development. But one challenge with this setup is that every time the application is modified and a build is triggered, the existing containers will be destroyed and new containers will be created. This can result in slowness or loss of data between application restarts.

Spring Boot provides devtools to improve the developer experience by refreshing the application upon code changes. We can use the @RestartScope annotation provided by devtools to indicate that certain beans should be reused instead of recreated.

First, let’s add the spring-boot-devtools dependency as follows:

<!-- For Maven -->
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-devtools</artifactId>
   <scope>runtime</scope>
   <optional>true</optional>
</dependency>

<!-- For Gradle -->
testImplementation "org.springframework.boot:spring-boot-devtools"

Now, add the @RestartScope annotation to the bean definitions in ContainersConfig as follows:

@TestConfiguration(proxyBeanMethods = false)
public class ContainersConfig {

   @Bean
   @ServiceConnection
   @RestartScope
   public PostgreSQLContainer<?> postgreSQLContainer() {
       return new PostgreSQLContainer<>("postgres:15.2-alpine");
   }

   @Bean
   @ServiceConnection
   @RestartScope
   public KafkaContainer kafkaContainer() {
       return new KafkaContainer(
                DockerImageName.parse("confluentinc/cp-kafka:7.2.1"));
   }

   ...
}

Now, if you change the application code and a build is triggered, the application will restart but reuse the existing containers.

Please note: Eclipse automatically triggers a build when the code changes are saved, while in IntelliJ IDEA, you need to trigger a build manually.

Conclusion

Modern software development involves using lots of technologies and tools to tackle growing business needs. This has resulted in a significant increase in the complexity of development environment setup. Improving the developer experience isn’t a nice-to-have anymore — it’s a necessity.

To improve this developer experience, Spring Boot 3.1.0 added out-of-the-box support for Testcontainers. Spring Boot and Testcontainers integration works seamlessly with your local Docker, on CI, and with Testcontainers Cloud too.

This is an impactful transformation for not only testing but also local development. And developers can now look forward to an experience that brings the clone & run philosophy into reality.

How I Built My First Containerized Java Web Application
https://www.docker.com/blog/how-i-built-my-first-containerized-java-web-application/
Thu, 11 Aug 2022

It gives me immense pleasure to present this blog as I intern with Docker as a Product Marketer. This incredible experience has given me the chance to brainstorm many new ideas. As a prior Java developer, I’ve always been amazed by how Java and Spring Boot work wonders together! Shoutout to everyone who helped drive the completion of this blog!

Over the past 30 years, web development has become vital across multiple industries.

Java follows the “old is gold” philosophy. After evolving for more than 25 years, it’s still one of today’s most popular programming languages. More than fifty million websites, including Google, LinkedIn, eBay, Amazon, and Stack Overflow, use Java extensively.

Java Meets Docker

In this blog, we’ll create a simple Java Spring Boot web application and containerize it using Docker, which works by running our application as a software “image.” This image packages together the operating system, code, and any supporting libraries or dependencies. This makes it much easier to develop and deploy cross-platform applications. Let’s jump into the process.

Here’s what you’ll be doing

  1. Building your first Java Spring Boot web app
  2. Running and building your application without Docker, first
  3. Containerizing the Spring Boot web application

What you’ll need

1. JDK 17 or above
2. Spring Tool Suite for Eclipse
3. Docker Desktop

Building your first Java Spring Boot web app

We’re using Spring Tool Suite (STS) for our application. STS is Eclipse-based and tailored for creating Spring applications. It includes a built-in and customizable Tomcat server that offers Spring configuration file validation, coding assistance, and more.

Another advantage is that Spring Tool Suite 4 doesn’t need an explicit Maven plugin. It ships with its own Maven plugin, which is easy to enable by navigating to Windows > Preferences > Maven. This IDE also offers an approachable UI and tools to simplify Spring app development.

That said, let’s now create a new base project for our app. Create a new Spring starter project from the Package Explorer.

Since we’re building a Spring web app, we need to add our Spring Web and Thymeleaf dependencies. Thymeleaf is a Java template engine for implementing frontend functions like HTML, XML, JavaScript, CSS, or even plain-text files with Spring Boot.

You’ll notice that configuring your starter project takes some time, since we’re pulling from the official website. Once finished, you can further configure your project base. The project structure looks like this:

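(The tree below shows a typical Spring Initializr layout for illustration; your generated project may differ slightly.)

webapp
├── pom.xml
├── src/main/java/com/example/mypkg/WebappApplication.java
├── src/main/resources
│   ├── static
│   ├── templates
│   └── application.properties
└── src/test/java/com/example/mypkg/WebappApplicationTests.java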

By default, Maven compiles sources from src/main/java, while src/test/java is where your test cases reside. Meanwhile, src/main/resources is the standard Maven location for application resources like templates, images, and other configurations.

Maven’s fundamental unit of work is the pom.xml (Project Object Model). It contains information about the project, its dependencies, and configuration details that Maven uses while building.

Here’s our project’s POM. You’ll also notice the Spring Web and Thymeleaf dependencies that we added initially:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.2</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>webapp</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>webapp</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>17</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-thymeleaf</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

 

Next, if you inspect the source code inside src/main/java, you’ll see the generated WebappApplication.java class file:

package com.example.mypkg;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class WebappApplication {

    public static void main(String[] args) {
        SpringApplication.run(WebappApplication.class, args);
    }

}

 

This is the main class that your Spring Boot app executes from. The @SpringBootApplication annotation denotes a variety of features — including the ability to enable Spring Boot auto-configuration, Java-based Spring configuration, and component scanning.

Therefore, @SpringBootApplication is akin to using @Configuration, @EnableAutoConfiguration, and @ComponentScan with their default attributes. Here’s more information about each annotation:

    • @Configuration denotes that the particular class has @Bean definition methods. The Spring container may process it to provide bean definitions.
    • @EnableAutoConfiguration helps you auto-configure beans present in the classpath.
    • @ComponentScan lets Spring scan for configurations, controllers, services, and other predefined components.

A Spring application is bootstrapped as a standalone from the main method using SpringApplication.run(<Classname>.class, args).

As mentioned, you can embed both static and dynamic web pages in src/main/resources. Here, we’ve designated Products.html as the home page of our application.

We’ll use a simple RESTful web service to grab our application’s home page. First, create a Controller class in the same location as your main class. This’ll be responsible for processing incoming REST API calls, preparing a model, and returning the rendered view as a response.

package com.example.mypkg;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class HomeController {

    @GetMapping(value = "/DockerProducts")
    public String index() {
        return "Products";
    }

}

 

The @Controller annotation assigns the controller role to a particular class. You’d use this annotation to mark a class as a web request handler. @GetMapping commonly maps HTTP GET requests onto specific handler methods. Here, it’s the method “index” that returns our app’s homepage, called “Products.”

Building and running the application without Docker

It’s time to test our application by running it as a Spring Boot application.

Your application is now available at port 8080, which you can access by opening http://localhost:8080/DockerProducts in your browser.

We’ve tested our project by running it. Now it’s time to build our application by creating a JAR file. Run Maven clean and then Maven install from within Spring Tool Suite.

Watch the console for the ongoing build; you’ll see that STS has successfully built our JAR. You can access this JAR file in the target folder.

Containerizing our Spring Boot web application with Docker

Next, we’re using Docker to containerize our application. Before starting, download and install Docker Desktop. Docker Desktop includes multiple developer-focused tools like the Docker CLI, Docker Compose, and Docker Engine. It also features a user-friendly UI (the Docker Dashboard) that streamlines common container, image, and volume management tasks.

Once that’s installed, we’ll tackle containerization with the following steps:

  1. Creating a Dockerfile
  2. Building the Docker image
  3. Running Docker container to access the application

Creating a Dockerfile

A Dockerfile is a plain-text file that specifies instructions for building a Docker image. You can create this in your project’s root directory:

FROM eclipse-temurin:17-jdk-focal
ADD target/webapp-0.0.1-SNAPSHOT.jar webapp-0.0.1-SNAPSHOT.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "webapp-0.0.1-SNAPSHOT.jar"]

 

What does each instruction do?

  • FROM – Specifies the base image that your Dockerfile uses to build a new image. We’re using eclipse-temurin:17-jdk-focal as our base image. The Eclipse Temurin Project shares code and procedures that enable you to easily create Java SE runtime binaries. It also helps you leverage common, related technologies that appear throughout Java’s ecosystem.
  • ADD – Copies the new files and JAR into your Docker container’s filesystem at a specific destination
  • EXPOSE – Reveals specific ports to the host machine. We’re exposing port 8080 since the embedded Tomcat server uses it by default.
  • ENTRYPOINT – Sets executables that’ll run once the container spins up

Building the Docker image

Docker images are instructive templates for creating Docker containers. You’ll build your Docker image by opening the STS terminal at your project’s root directory, and entering the following command:

docker build -t docker_desktop_page .

 

Our image name is docker_desktop_page. The new image will now appear in your local image list.

Run your application as a Docker container

A Docker container is a running instance of a Docker image. It’s a lightweight, standalone, executable software package that includes everything needed to run an application. Enter this command to start your container:

docker run -p 8080:8080 docker_desktop_page


Access your application at http://localhost:8080/DockerProducts for a quick glimpse of our webpage!

You can also view your image and running container via the Docker Dashboard.

You can also manage these containers within the Container interface.

Containerization enables easier build and deploy

Congratulations! You’ve successfully built your first Java website. You can access the full project source code here.

You’ve now learned how easy containerizing an application is — even without prior Docker experience. To learn more about developing your next Java Spring Boot application, check out our Getting started with Java Overview.

Resources to Use JavaScript, Python, Java, and Go with Docker
https://www.docker.com/blog/resources-to-use-javascript-python-java-and-go-with-docker/
Fri, 08 Jul 2022

With so many programming and scripting languages out there, developers can tackle development projects any number of ways. However, some languages — like JavaScript, Python, and Java — have been perennial favorites. (We’ve previously touched on this while unpacking Stack Overflow’s 2022 Developer Survey results.)

Image courtesy of Joan Gamell, via Unsplash

Many developers use Docker in tandem with these languages. We’ve seen our users create some amazing applications! Here are some resources and recommendations to level up your container game with these languages.

Getting Started with Docker

If you’ve never used Docker, you may want to familiarize yourself with some basic concepts first. You can learn the technical fundamentals of Docker and containerization via our “Orientation and Setup” guide and our introductory page. You’ll learn how containers work, and even how to harness tools like the Docker CLI or Docker Desktop.

Our Orientation page also serves as a foundation for many of our own official walkthroughs. This is a great resource if you’re completely new to Docker!

If you prefer hands-on learning, look no further than Shy Ruparel’s “Getting Started with Docker” video guide. Shy will introduce you to Docker’s architecture, essential CLI commands, Docker Desktop tips, and sample applications.

If you’re feeling comfortable with Docker, feel free to jump to your language-specific section using the links below. We’ve created language-specific workflows for each top language within our documentation (AKA “Our Language Modules” in this blog). These steps are linked below alongside some extra exploratory resources. We’ll also include some awesome-compose code samples to accelerate similar development projects — or to serve as inspiration.


How to Use Docker with JavaScript

JavaScript has been the programming world’s leading language for 10 years running. Luckily, there are also many ways to use JavaScript and Docker together. Check out these resources to harness JavaScript, Node.js, and other runtimes or frameworks with Docker.

Docker Node.js Modules

Before exploring further, it’s worth completing our learning modules for Node. These take you through the basics and set you up for increasingly-complex projects later on. We recommend completing these in order:

  1. Overview for Node.js (covering learning objectives and containerization of your Node application)
  2. Build your Node image
  3. Run your image as a container
  4. Use containers for development
  5. Run your tests using Node.js and Mocha frameworks
  6. Configure CI/CD for your application
  7. Deploy your app

It’s also possible that you’ll want to explore more processes for building minimum viable products (MVPs) or pulling container images. You can read more by visiting the following links.

Other Essential Node Resources

How to Use Docker with Python

Python has consistently been one of our developer community’s favorite languages. From building simple sample apps to leveraging machine learning frameworks, the language supports a variety of workloads. You can learn more about the dynamic duo of Python and Docker via these links.

Docker Python Modules

Similar to Node.js, these pages from our documentation are a great starting point for harnessing Python and Docker:

  1. Overview for Python
  2. Build your Python image
  3. Run your image as a container
  4. Use containers for development (featuring Python and MySQL)
  5. Configure CI/CD for your application
  6. Deploy your app

Other Essential Python Resources

How to Use Docker with Java

Both its maturity and the popularity of Spring Boot have contributed to Java’s growth over the years. It’s easy to pair Java with Docker! Here are some resources to help you do it.

Docker Java Modules

Like with Python, these modules can help you hit the ground running with Java and Docker:

  1. Overview for Java
  2. Build your Java image
  3. Run your image as a container
  4. Use containers for development
  5. Run your tests
  6. Configure CI/CD for your application
  7. Deploy your app

Other Essential Java Resources

How to Use Docker with Go

Last, but not least, Go has become a popular language for Docker users. According to Stack Overflow’s 2022 Developer Survey, over 10,000 JavaScript users (of roughly 46,000) want to start or continue developing in Go or Rust. It’s often positioned as an alternative to C++, yet many Go users originally transition over from Python and Ruby.

There’s tremendous overlap there. Go’s ecosystem is growing, and it’s become increasingly useful for scaling workloads. Check out these links to jumpstart your Go and Docker development.

Docker Go Modules

  1. Overview for Go
  2. Build your Go image
  3. Run your image as a container
  4. Use containers for development
  5. Run your tests using Go test
  6. Configure CI/CD for your application
  7. Deploy your app

Other Essential Go Resources

Build in the Language You Want with Docker

Docker supports all of today’s leading languages. It’s easy to containerize your application and deploy cross-platform without having to make concessions. You can bring your workflows, your workloads, and, ultimately, your users along.

And that’s just the tip of the iceberg. We welcome developers who develop in other languages like Rust, TypeScript, C#, and many more. Docker images make it easy to create these applications from scratch.

We hope these resources have helped you discover and explore how Docker works with your preferred language. Visit our language-specific guides page to learn key best practices and image management tips for using these languages with Docker Desktop.

9 Tips for Containerizing Your Spring Boot Code
https://www.docker.com/blog/9-tips-for-containerizing-your-spring-boot-code/
Thu, 23 Jun 2022

At Docker, we’re incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. Are you working on something awesome with Docker? Send your contributions to Ajeet Singh Raina (@ajeetraina) on the Docker Community Slack and we might feature your work!

Tons of developers use Docker containers to package their Spring Boot applications. According to VMware’s State of Spring 2021 report, the number of organizations running containerized Spring apps spiked to 57% — compared to 44% in 2020.

What’s driving this significant growth? The ever-increasing demand to reduce startup times of web applications and optimize resource usage, which greatly boosts developer productivity.

Why is containerizing a Spring Boot app important?

Running your Spring Boot application in a Docker container has numerous benefits. First, Docker’s friendly, CLI-based workflow lets developers build, share, and run containerized Spring applications for other developers of all skill levels. Second, developers can install their app from a single package and get it up and running in minutes. Third, Spring developers can code and test locally while ensuring consistency between development and production.

Containerizing a Spring Boot application is easy. You can do this by copying the .jar or .war file right into a JDK base image and then packaging it as a Docker image. There are numerous articles online that can help you effectively package your apps. However, many important concerns like Docker image vulnerabilities, image bloat, missing image tags, and poor build performance aren’t addressed. We’ll tackle those common concerns while sharing nine tips for containerizing your Spring Boot code.

A Simple “Hello World” Spring Boot application

To better understand the unattended concern, let’s build a sample “Hello World” application. In our last blog post, you saw how easy it is to build the “Hello World!” application by downloading this pre-initialized project and generating a ZIP file. You’d then unzip it and complete the following steps to run the app.


Under the src/main/java/com/example/dockerapp/ directory, you can modify your DockerappApplication.java file with the following content:


package com.example.dockerapp;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@SpringBootApplication
public class DockerappApplication {

    @RequestMapping("/")
    public String home() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        SpringApplication.run(DockerappApplication.class, args);
    }

}

The following command takes your compiled code and packages it into a distributable format, like a JAR:

./mvnw package
java -jar target/*.jar

By now, you should be able to access “Hello World” via http://localhost:8080.

In order to Dockerize this app, you’d use a Dockerfile. A Dockerfile is a text document that contains every instruction a user could call on the command line to assemble a Docker image. A Docker image is composed of a stack of layers, each representing an instruction in our Dockerfile. Each subsequent layer contains changes to its underlying layer.

Typically, developers use the following Dockerfile template to build a Docker image.


FROM eclipse-temurin
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]

The first line defines the base image, which is around 457 MB. The ARG instruction specifies variables that are available to the COPY instruction. The COPY instruction copies the JAR file from the target/ folder to your Docker image’s root. The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. Lastly, ENTRYPOINT lets you configure a container that runs as an executable. It corresponds to your java -jar target/*.jar command.

You’d build your image using the docker build command, which looks like this:


$ docker build -t spring-boot-docker .
Sending build context to Docker daemon  15.98MB
Step 1/5 : FROM eclipse-temurin
 ---> a3562aa0b991
Step 2/5 : ARG JAR_FILE=target/*.jar
 ---> Running in a8c13e294a66
Removing intermediate container a8c13e294a66
 ---> aa039166d524
Step 3/5 : COPY ${JAR_FILE} app.jar
COPY failed: no source files were specified

One key drawback of our above example is that it isn’t fully containerized. You must first create a JAR file by running the ./mvnw package command on the host system. This requires you to manually install Java, set up the JAVA_HOME environment variable, and install Maven. In a nutshell, your JDK must reside outside of your Docker container — adding even more complexity into your build environment. There has to be a better way.

1) Automate all the manual steps

We recommend building the JAR during the build process within your Dockerfile itself. The following RUN instruction triggers a goal that resolves all project dependencies, including plugins, reports, and their dependencies:

FROM eclipse-temurin
WORKDIR /app

COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline

COPY src ./src

CMD ["./mvnw", "spring-boot:run"]

Avoid copying the JAR file manually while writing a Dockerfile

2) Use a specific base image tag, instead of latest

When building Docker images, it’s always recommended to specify useful tags which codify version information, intended destination (prod or test, for instance), stability, or other useful information for deploying your application in different environments. Don’t rely on the automatically-created latest tag. Using latest is unpredictable and may cause unexpected behavior. Every time you pull the latest image, it might contain a new build or untested release that could break your application.

For example, using the eclipse-temurin:latest Docker image as a base image isn’t ideal. Instead, you should use specific tags like eclipse-temurin:17-jdk-jammy, eclipse-temurin:8u332-b09-jre-alpine, etc.

Avoid using FROM eclipse-temurin:latest in your Dockerfile

3) Use Eclipse Temurin instead of JDK, if possible

On the OpenJDK Docker Hub page, you’ll find a list of recommended Docker Official Images that you should use while building Java applications. The upstream OpenJDK image no longer provides a JRE, so no official JRE images are produced. The official OpenJDK images just contain “vanilla” builds of the OpenJDK provided by Oracle or the relevant project lead.

One of the most popular official images with a build-worthy JDK is Eclipse Temurin. The Eclipse Temurin project provides code and processes that support the building of runtime binaries and associated technologies. These are high performance, enterprise-caliber, and cross-platform.

FROM eclipse-temurin:17-jdk-jammy

WORKDIR /app

COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline

COPY src ./src

CMD ["./mvnw", "spring-boot:run"]

4) Use a Multi-Stage Build

With multi-stage builds, a Docker build can use one base image for compilation, packaging, and unit tests. Another image holds the runtime of the application. This makes the final image more secure and smaller in size (as it does not contain any development or debugging tools). Multi-stage Docker builds are a great way to ensure your builds are 100% reproducible and as lean as possible. You can create multiple stages within a Dockerfile and control how you build that image.

You can containerize your Spring Boot applications using a multi-layer approach. Each layer may contain different parts of the application such as dependencies, source code, resources, and even snapshot dependencies. Alternatively, you can build any application as a separate image from the final image that contains the runnable application. To better understand this, let’s consider the following Dockerfile:

FROM eclipse-temurin:17-jdk-jammy
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]

Spring Boot uses a “fat JAR” as its default packaging format. When we inspect the fat JAR, we see that the application forms a very small part of the entire JAR. This portion changes most frequently. The remaining portion contains the Spring Framework dependencies. Optimization typically involves isolating the application into a separate layer from the Spring Framework dependencies. You only have to download the dependencies layer — which forms the bulk of the fat JAR — once, plus it’s cached in the host system.

The above Dockerfile assumes that the fat JAR was already built on the command line. You can also do this in Docker using a multi-stage build and copying the results from one image to another. Instead of using the Maven or Gradle plugin, we can also create a layered JAR Docker image with a Dockerfile. While using Docker, we must follow two more steps to extract the layers and copy those into the final image.
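As a hedged sketch of the layer-extraction approach (this is not the exact Dockerfile from this post, and it assumes a Spring Boot 2.3+ fat JAR with layering enabled, already built into target/), Spring Boot's layertools jarmode can perform the extraction, and the final stage can copy each layer separately:

FROM eclipse-temurin:17-jdk-jammy AS extractor
WORKDIR /opt/app
# Copy the pre-built fat JAR and split it into its layers
COPY target/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract

FROM eclipse-temurin:17-jre-jammy
WORKDIR /opt/app
# Copy layers from least to most frequently changing for better caching
COPY --from=extractor /opt/app/dependencies/ ./
COPY --from=extractor /opt/app/spring-boot-loader/ ./
COPY --from=extractor /opt/app/snapshot-dependencies/ ./
COPY --from=extractor /opt/app/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]

Because the dependencies land in their own layers, rebuilding after a source-only change reuses the cached dependency layers. Alternatively, you can skip the extraction and simply build in one stage and copy the JAR into a slim runtime stage, as the next example does.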

In the following Dockerfile, the first stage resolves dependencies and builds the fat JAR; the second stage copies the resulting JAR into a slimmer runtime image:

FROM eclipse-temurin:17-jdk-jammy as builder
WORKDIR /opt/app
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline
COPY ./src ./src
RUN ./mvnw clean install

FROM eclipse-temurin:17-jre-jammy
WORKDIR /opt/app
EXPOSE 8080
COPY --from=builder /opt/app/target/*.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]

The first stage is labeled builder. It runs on eclipse-temurin:17-jdk-jammy, resolves the project dependencies, and builds the fat JAR.

Notice that this Dockerfile has been split into two stages. The builder stage contains the complete Eclipse Temurin JDK image along with the build configuration and the application source code, while the final stage contains only the JRE and the application JAR. This also saves us from copying the target directory into the final image. Our final image is just 277 MB, compared to the first stage's 450 MB.

5) Use .dockerignore

To increase build performance, we recommend creating a .dockerignore file in the same directory as your Dockerfile. For this tutorial, your .dockerignore file should contain just one line:

target

This line excludes the target directory, which contains output from Maven, from the Docker build context. There are many good reasons to carefully structure a .dockerignore file, but this simple file is good enough for now. Let's now explain the build context and why it's essential. The docker build command builds Docker images from a Dockerfile and a "context." This context is the set of files located in your specified PATH or URL. The build process can reference any of these files.

Meanwhile, the build context is where the developer works: a directory on Mac, Windows, or Linux containing all the necessary application components, such as source code, configuration files, libraries, and plugins. With the .dockerignore file, we can choose which of these elements to exclude from the context while building the new image.

Here's how your .dockerignore file might look if you choose to exclude the conf, libraries, and plugins directories from your build:

conf
libraries
plugins

6) Favor Multi-Architecture Docker Images

Your CPU can only run binaries for its native architecture. For example, Docker images built for an x86 system can't run on an Arm-based system. With Apple fully transitioning to their custom Arm-based silicon, it's possible that your x86 (Intel or AMD) Docker image won't work with Apple's recent M-series chips. Consequently, we always recommend building multi-arch container images. Below, the mplatform/mquery Docker image lets you query the multi-platform status of any public image in any public registry:

docker run --rm mplatform/mquery eclipse-temurin:17-jre-alpine
Image: eclipse-temurin:17-jre-alpine (digest: sha256:ac423a0315c490d3bc1444901d96eea7013e838bcf7cc09978cf84332d7afc76)
* Manifest List: Yes (Image type: application/vnd.docker.distribution.manifest.list.v2+json)
* Supported platforms:
- linux/amd64

We introduced the docker buildx command to help you build multi-architecture images. Buildx is a Docker component that enables many powerful build features with a familiar Docker user experience. All builds executed via Buildx run via the Moby BuildKit builder engine. BuildKit is designed to excel at multi-platform builds, i.e., those not just targeting the user's local platform. When you invoke a build, you can set the --platform flag to specify the build output's target platforms (like linux/amd64, linux/arm64, or darwin/amd64):

docker buildx build --platform linux/amd64,linux/arm64 -t spring-helloworld .
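Note that multi-platform builds typically require a BuildKit builder instance, and the resulting manifest list usually has to be pushed to a registry rather than loaded into the local daemon. A hedged sketch (the builder and repository names are illustrative):

# Create and select a BuildKit builder, then build and push both platforms
docker buildx create --name mybuilder --use
docker buildx build --platform linux/amd64,linux/arm64 -t yourname/spring-helloworld --push .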


7) Run as non-root user for security purposes

Running applications with non-root privileges is safer, since it helps mitigate risks. The same applies to Docker containers. By default, Docker containers and their running apps have root privileges. It's therefore best to run Docker containers as non-root users. You can do this by adding a USER instruction to your Dockerfile. The USER instruction sets the preferred user name (or UID) and optionally the user group (or GID) while running the image, and it applies to any subsequent RUN, CMD, or ENTRYPOINT instructions:

FROM eclipse-temurin:17-jdk-alpine
# BusyBox adduser on Alpine: -D = no password, -G = primary group
RUN addgroup demogroup && adduser -D -G demogroup demo
USER demo

WORKDIR /app

COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline

COPY src ./src

CMD ["./mvnw", "spring-boot:run"]
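To verify that the container no longer runs as root, you can override the command with whoami; a quick sketch (the image name is illustrative):

docker build -t spring-helloworld-nonroot .
docker run --rm spring-helloworld-nonroot whoami

This should print demo rather than root.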

8) Fix security vulnerabilities in your Java image

Today's developers rely on third-party code and applications while building their services. If you use external software without care, your code may become more vulnerable. Leveraging trusted images and continually monitoring your containers is essential to combating this.

Docker Scout stands out among available security tools for its ability to provide detailed visibility into dependencies on specific images, along with remediation options integrated into developers’ workflows. It offers advanced analysis of vulnerabilities in dependencies and provides recommendations to quickly fix them, either by searching for updated or patched base images or by identifying the exact location of vulnerabilities in different layers.

Designed to simplify the lives of developers, Docker Scout integrates directly into Docker, allowing you to spend more time developing code while keeping vulnerabilities under control. Additionally, Docker Scout improves the developer experience by matching each image's SBOM against CVE databases, improving vulnerability detection and resolution without the need for additional scans.

Whenever you download a Docker image (for example, jsgiraldoh/spring-helloworld:latest), Docker Desktop recommends that you look at the summary of known vulnerabilities and image recommendations, such as Log4Shell:

$ docker pull jsgiraldoh/spring-helloworld:latest
latest: Pulling from jsgiraldoh/spring-helloworld
df9b9388f04a: Pull complete
d3277e9a7631: Pull complete
13f2c9707109: Pull complete
28060bb0256d: Pull complete
6c5e62ebc1c8: Pull complete
33c5943bedda: Pull complete
eac578d9fcdc: Pull complete
491863a4fe37: Pull complete
063454f659da: Pull complete
Digest: sha256:194b3574389ea82ee6cca08d9b707c1965c3242b0765f2830c1a511f3a1980e3
Status: Downloaded newer image for jsgiraldoh/spring-helloworld:latest
docker.io/jsgiraldoh/spring-helloworld:latest

What's Next?
  View a summary of image vulnerabilities and recommendations → docker scout quickview jsgiraldoh/spring-helloworld:latest

If you run the command recommended by Docker, you will get the following result:

Screenshot of image in Docker Scout showing summary of vulnerabilities.

The result of the command is a summary of the vulnerabilities found in each layer of our image, along with two additional options that let us inspect the CVEs and the recommendations for fixing them.
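Those two follow-up views can also be reached directly from the CLI with the cves and recommendations subcommands; assuming the same image:

docker scout cves jsgiraldoh/spring-helloworld:latest
docker scout recommendations jsgiraldoh/spring-helloworld:latest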

Screenshot of Docker Scout showing Critical and High vulnerabilities associated with the analyzed image.
Screenshot of Docker Scout showing Recommended Fixes for vulnerabilities in the analyzed image.

In addition to using the CLI, it is also possible to access Docker Scout functionality through Docker Desktop. To begin, install Docker Desktop 4.25.1 or later on your Mac, Windows, or Linux machine.

Screenshot of Docker Scout showing Advanced Image Analysis page.

After selecting the image you want to analyze, you must click on the View packages and CVEs section.

Screenshot of Docker Scout with view of packages and layers.

9) Use the OpenTelemetry API to measure Java performance

How do Spring Boot developers ensure that their apps are fast and performant? Generally, developers rely on third-party observability tools to measure the performance of their Java applications. Application performance monitoring is essential for all kinds of Java applications, and developers must deliver top-notch user experiences.

Observability isn't just limited to application performance. With the rise of microservices architectures, the three pillars of observability — metrics, traces, and logs — are front and center. Metrics help developers understand what's wrong with the system, traces help them discover how it's wrong, and logs tell them why, letting them dig into particular metrics and traces to holistically understand system behavior.

Observing Java applications requires monitoring your Java VM metrics via JMX, underlying host metrics, and Java app traces. Java developers should monitor, analyze, and diagnose application performance using the Java OpenTelemetry API. OpenTelemetry provides a single set of APIs, libraries, agents, and collector services to capture distributed traces and metrics from your application. Check out this video to learn more.
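As a small, hedged sketch of what instrumenting a request handler with the OpenTelemetry API might look like (the class, tracer, and span names are illustrative, and it assumes the opentelemetry-api dependency plus a configured SDK or agent):

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class OrderService {
    // Tracer names typically identify the instrumented library or class
    private final Tracer tracer = GlobalOpenTelemetry.getTracer("com.example.orders");

    public void processOrder(String orderId) {
        Span span = tracer.spanBuilder("process-order").startSpan();
        try (Scope scope = span.makeCurrent()) {
            span.setAttribute("order.id", orderId);
            // ... business logic runs inside the span ...
        } finally {
            span.end(); // always end the span so it gets exported
        }
    }
}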

Conclusion

In this blog post, you saw some of the many ways to optimize your Docker images by carefully crafting your Dockerfile and securing your images with Docker Scout. If you'd like to go further, check out these bonus resources that cover recommendations and best practices for building secure, production-grade Docker images.

]]>
Write Maintainable Integration Tests with Docker https://www.docker.com/blog/maintainable-integration-tests-with-docker/ Wed, 31 Jul 2019 17:29:30 +0000 https://blog.docker.com/?p=23771 Testcontainers is an open source community focused on making integration tests easier across many languages. Gianluca Arbezzano is a Docker Captain, SRE at InfluxData, and the maintainer of the Golang implementation of Testcontainers, which uses the Docker API to expose a test-friendly library that you can use in your test cases.

Photo by Markus Spiske on Unsplash.
The popularity of microservices and the use of third-party services for non-business-critical features has drastically increased the number of integrations that make up the modern application. These days, it is commonplace to use MySQL, Redis as a key-value store, MongoDB, PostgreSQL, and InfluxDB – and that is all just for the data layer – let alone the multiple services that make up other parts of the application.

All of these integration points require different layers of testing. Unit tests increase how fast you write code because you can mock all of your dependencies, set the expectation for your function and iterate until you get the desired transformation. But, we need more. We need to make sure that the integration with Redis, MongoDB or a microservice works as expected, not just that the mock works as we wrote it. Both are important but the difference is huge.

In this article, I will show you how to use Testcontainers to write integration tests in Go with very low overhead. So, I am not telling you to stop writing unit tests, just to be clear!

Back in the day, when I was interested in becoming a Java developer, I tried to write an integration between Zipkin, a popular open source tracer, and InfluxDB. I ultimately failed because I am not a Java developer, but I did understand how they wrote integration tests, and I became fascinated.

Getting Started: testcontainers-java

Zipkin provides a UI and an API to store and manipulate traces. It supports Cassandra, in-memory storage, Elasticsearch, MySQL, and many more platforms as storage backends. In order to validate that all the storage systems work, Zipkin uses a library called testcontainers-java, a wrapper around the docker-api designed to be "test-friendly." Here is the Quick Start example:

public class RedisBackedCacheIntTestStep0 {
    private RedisBackedCache underTest;

    @Before
    public void setUp() {
        // Assume that we have Redis running locally?
        underTest = new RedisBackedCache("localhost", 6379);
    }

    @Test
    public void testSimplePutAndGet() {
        underTest.put("test", "example");

        String retrieved = underTest.get("test");
        assertEquals("example", retrieved);
    }
}
In this Step 0 version, setUp simply assumes that Redis is already running locally. With Testcontainers, setUp can instead create a Redis container and expose a port, so the test interacts with a live Redis instance that it controls.
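A hedged sketch of the containerized version, roughly following the Testcontainers quick start (the Redis image tag is illustrative):

@Rule
public GenericContainer redis = new GenericContainer("redis:5.0.3-alpine")
        .withExposedPorts(6379);

@Before
public void setUp() {
    // Point the cache at the container's mapped host and port
    underTest = new RedisBackedCache(redis.getContainerIpAddress(), redis.getMappedPort(6379));
}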

Every time you start a new container, there is a "sidecar" called ryuk that keeps your Docker environment clean by removing containers, volumes, and networks after a certain amount of time. You can also remove them from inside the test. The example below comes from Zipkin. They are testing the Elasticsearch integration, and as the example shows, you can programmatically configure your dependencies from inside the test case.

public class ElasticsearchStorageRule extends ExternalResource {
 static final Logger LOGGER = LoggerFactory.getLogger(ElasticsearchStorageRule.class);
 static final int ELASTICSEARCH_PORT = 9200;

 final String image;
 final String index;
 GenericContainer container;
 Closer closer = Closer.create();

 public ElasticsearchStorageRule(String image, String index) {
   this.image = image;
   this.index = index;
 }
 @Override
 protected void before() {
   try {
     LOGGER.info("Starting docker image " + image);
     container =
         new GenericContainer(image)
             .withExposedPorts(ELASTICSEARCH_PORT)
             .waitingFor(new HttpWaitStrategy().forPath("/"));
     container.start();
     if (Boolean.valueOf(System.getenv("ES_DEBUG"))) {
       container.followOutput(new Slf4jLogConsumer(LoggerFactory.getLogger(image)));
     }
     System.out.println("Starting docker image " + image);
   } catch (RuntimeException e) {
     LOGGER.warn("Couldn't start docker image " + image + ": " + e.getMessage(), e);
    }
  }
}
That this happens programmatically is key because you do not need to rely on something external such as docker-compose to spin up your integration tests environment. By spinning it up from inside the test itself, you have a lot more control over the orchestration and provisioning, and the test is more stable. You can even check when a container is ready before you start a test.

Since I am not a Java developer, I ported the library to Golang (we are still working on all the features), and now it's in the main testcontainers/testcontainers-go organization.

func TestNginxLatestReturn(t *testing.T) {
    ctx := context.Background()
    req := testcontainers.ContainerRequest{
        Image:        "nginx",
        ExposedPorts: []string{"80/tcp"},
    }
    nginxC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    if err != nil {
        t.Error(err)
    }
    defer nginxC.Terminate(ctx)
    ip, err := nginxC.Host(ctx)
    if err != nil {
        t.Error(err)
    }
    port, err := nginxC.MappedPort(ctx, "80")
    if err != nil {
        t.Error(err)
    }
    resp, err := http.Get(fmt.Sprintf("http://%s:%s", ip, port.Port()))
    if err != nil {
        t.Error(err)
    }
    if resp.StatusCode != http.StatusOK {
        t.Errorf("Expected status code %d. Got %d.", http.StatusOK, resp.StatusCode)
    }
}

Creating the Test

This is what it looks like:

ctx := context.Background()
req := testcontainers.ContainerRequest{
    Image:        "nginx",
    ExposedPorts: []string{"80/tcp"},
}
nginxC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
    ContainerRequest: req,
    Started:          true,
})
if err != nil {
    t.Error(err)
}
defer nginxC.Terminate(ctx)
You create the nginx container, and with the defer nginxC.Terminate(ctx) call you clean up the container when the test is over. Remember ryuk? Terminating explicitly is not mandatory, because testcontainers-go relies on ryuk to remove containers eventually, but cleaning up from inside the test keeps your environment tidy.

Modules

The Java library has a feature called modules, where you get pre-canned containers such as databases (MySQL, PostgreSQL, Cassandra, etc.) or applications like nginx. The Go version is working on something similar, but it is still an open PR.

This is a great feature if you'd like to build a microservice your application relies on directly from its upstream source, or if you would like to test how your application behaves from inside a container (probably closer to how it will run in production). This is how it works in Java:

@Rule
public GenericContainer dslContainer = new GenericContainer(
    new ImageFromDockerfile()
            .withFileFromString("folder/someFile.txt", "hello")
            .withFileFromClasspath("test.txt", "mappable-resource/test-resource.txt")
            .withFileFromClasspath("Dockerfile", "mappable-dockerfile/Dockerfile"))

What I’m working on now

Something that I am currently working on is a new canned container that uses kind to spin up Kubernetes clusters inside a container. If your application uses the Kubernetes API, you can now test that integration against a real cluster:

ctx := context.Background()
k := &KubeKindContainer{}
err := k.Start(ctx)
if err != nil {
  t.Fatal(err.Error())
}
defer k.Terminate(ctx)
clientset, err := k.GetClientset()
if err != nil {
  t.Fatal(err.Error())
}
ns, err := clientset.CoreV1().Namespaces().Get("default", metav1.GetOptions{})
if err != nil {
  t.Fatal(err.Error())
}
if ns.GetName() != "default" {
  t.Fatalf("Expected default namespace got %s", ns.GetName())
}
This feature is still a work in progress as you can see from PR67.

Calling All Coders

The Java version of Testcontainers was the first one developed, and it has a lot of features that have not yet been ported to the Go version or to the other implementations, such as JavaScript, Rust, and .NET.

My suggestion is to try the one written in your language and to contribute to it. 

In Go we don't have a way to programmatically build images. I am thinking of embedding BuildKit or img in order to get a daemonless builder that doesn't depend on Docker. The great part about working with the Go version is that all the container-related libraries are already written in Go, so you can integrate with them very well.

This is a great chance to become part of this community! If you are passionate about testing frameworks, join us and send your pull requests, or come hang out on Slack.

Try It Out

I hope you are as excited as I am about the flavour and the power this library provides. Take a look at the testcontainers organization on GitHub to see if your language is covered, and try it out! And, if your language is not covered, let's write it! If you are a Go developer and you'd like to contribute, feel free to reach out to me @gianarb, or go check it out and open an issue or pull request!



]]>
Improved Docker Container Integration with Java 10 https://www.docker.com/blog/improved-docker-container-integration-with-java-10/ Tue, 03 Apr 2018 23:53:00 +0000 https://blog.docker.com/?p=20335 Docker and Java

Many applications that run in a Java Virtual Machine (JVM), including data services such as Apache Spark and Kafka and traditional enterprise applications, are run in containers. Until recently, running the JVM in a container presented problems with memory and CPU sizing and usage that led to performance loss. This was because Java didn't recognize that it was running in a container. With the release of Java 10, the JVM now recognizes constraints set by container control groups (cgroups). Both memory and CPU constraints can be used to manage Java applications directly in containers. These include:

  • adhering to memory limits set in the container
  • setting available cpus in the container
  • setting cpu constraints in the container

Java 10 improvements are realized in both Docker for Mac or Windows and Docker Enterprise Edition environments.

Container Memory Limits

Up to Java 9, the JVM did not recognize memory or CPU limits set on the container unless those limits were passed in explicitly using flags. In Java 10, memory limits are automatically recognized and enforced.

Java defines a server-class machine as having 2 CPUs and 2GB of memory, and the default heap size is ¼ of the physical memory. For example, consider a Docker Enterprise Edition installation with 2GB of memory and 4 CPUs. Compare the difference between containers running Java 8 and Java 10. First, Java 8:

docker container run -it -m512M --entrypoint bash openjdk:latest

$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
    uintx MaxHeapSize                              := 524288000                          {product}
openjdk version "1.8.0_162"

The max heap size is 512M, which is ¼ of the 2GB available to the Docker EE installation, not ¼ of the 512M limit set on the container. In comparison, running the same commands on Java 10 shows that the JVM respects the container's memory limit and sets a max heap size fairly close to the expected 128M:

docker container run -it -m512M --entrypoint bash openjdk:10-jdk

$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
   size_t MaxHeapSize                              = 134217728                                {product} {ergonomic}
openjdk version "10" 2018-03-20
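As an aside not covered in the original comparison: on Java 8u131 and later, similar cgroup awareness could already be opted into with experimental flags, for example:

$ java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+PrintFlagsFinal -version | grep MaxHeapSize

Java 10 makes this behavior the default, with no extra flags required.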

Setting Available CPUs

By default, each container’s access to the host machine’s CPU cycles is unlimited. Various constraints can be set to limit a given container’s access to the host machine’s CPU cycles. Java 10 recognizes these limits:

docker container run -it --cpus 2 openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2

All CPUs allocated to Docker EE get the same proportion of CPU cycles. This proportion can be modified by changing the container's CPU share weighting relative to the weighting of all other running containers. The proportion only applies when CPU-intensive processes are running; when tasks in one container are idle, other containers can use the leftover CPU time. The actual amount of CPU time will vary depending on the number of containers running on the system. CPU shares are also recognized by Java 10:

docker container run -it --cpu-shares 2048 openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2

The cpuset constraint restricts which CPUs a container may execute on, and Java 10 recognizes it as well:

docker run -it --cpuset-cpus="1,2,3" openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 3

Allocating memory and CPU

With Java 10, container settings can be used to estimate the allocation of memory and CPUs needed to deploy an application. Let's assume that the memory heap and CPU requirements for each process running in a container have already been determined and JAVA_OPTS set. For example, suppose you have an application distributed across 10 nodes: five nodes require 512Mb of memory with 1024 CPU-shares each, and another five nodes require 256Mb with 512 CPU-shares each. Note that a proportion of 1 CPU is represented by 1024 shares.
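For instance, one of the larger nodes might be launched with constraints like these (the image name is illustrative, and JAVA_OPTS only takes effect if the image's entrypoint passes it to the java command):

docker container run -d -m 512M --cpu-shares 1024 \
  -e JAVA_OPTS="-Xmx400m" my-java-app:latest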

For memory, the application would need about 3.84Gb allocated at minimum:

512Mb x 5 = 2.56 Gb

256Mb x 5 = 1.28 Gb

The application would require 8 CPUs to run efficiently.

1024 x 5 = 5 CPUs

512 x 5 = 2.5 CPUs, rounded up to 3

Best practice suggests profiling the application to determine the memory and CPU allocations for each process running in the JVM. However, Java 10 removes the guesswork when sizing containers, preventing out-of-memory errors in Java applications as well as allocating sufficient CPU to process workloads.




To learn more about Docker solutions for Java Developers:

 

]]>
Spring Boot Development with Docker https://www.docker.com/blog/spring-boot-development-docker/ Wed, 24 May 2017 19:00:00 +0000 https://blog.docker.com/?p=17776 The AtSea Shop is an example storefront application that can be deployed on different operating systems and can be customized to both your enterprise development and operational environments. In my last post, I discussed the architecture of the app. In this post, I will cover how to setup your development environment to debug the Java REST backend that runs in a container.

Building the REST Application

I used the Spring Boot framework to rapidly develop the REST backend that manages products, customers and orders tables used in the AtSea Shop. The application takes advantage of Spring Boot’s built-in application server, support for REST interfaces and ability to define multiple data sources. Because it was written in Java, it is agnostic to the base operating system and runs in either Windows or Linux containers. This allows developers to build against a heterogenous architecture.

Project setup

The AtSea project uses multi-stage builds, a new Docker feature, which allows me to use multiple images to build a single Docker image that includes all the components needed for the application. The multi-stage build uses a Maven container to build the application jar file. The jar file is then copied to a Java Development Kit image. This makes for a more compact and efficient image because Maven is not included with the application. Similarly, the React storefront client is built in a Node image, and the compiled application is also added to the final application image.
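A hedged sketch of that multi-stage structure (the image tags and paths are illustrative rather than the exact AtSea Dockerfile, and the Node stage for the React client is omitted for brevity):

# Build stage: Maven compiles and packages the jar
FROM maven:3.5-jdk-8 AS mavenbuild
WORKDIR /usr/src/atsea
COPY pom.xml .
COPY src ./src
RUN mvn -B package -DskipTests

# Runtime stage: only the JDK and the application jar
FROM openjdk:8-jdk
COPY --from=mavenbuild /usr/src/atsea/target/AtSea-0.0.1-SNAPSHOT.jar /app/AtSea-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]

Only the final image is shipped; the Maven image and everything it downloaded stay behind in the build cache.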

I used Eclipse to write the AtSea app. If you want info on configuring IntelliJ or NetBeans for remote debugging, you can check out the Docker Labs repository. You can also check out the code in the AtSea app GitHub repository.

I built the application by cloning the repository and importing the project into Eclipse, setting the Root Directory to the project and clicking Finish:

    File > Import > Maven > Existing Maven Projects 

Since I was using Spring Boot, I took advantage of spring-boot-devtools to do remote debugging in the application. I had to add the spring-boot-devtools dependency to the pom.xml file:

<dependency>
     <groupId>org.springframework.boot</groupId>
     <artifactId>spring-boot-devtools</artifactId>
</dependency>

Note that developer tools are automatically disabled when the application is fully packaged as a jar. To ensure that devtools are available during development, I set the <excludeDevtools> configuration to false in the spring-boot-maven build plugin:

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <excludeDevtools>false</excludeDevtools>
            </configuration>
        </plugin>
    </plugins>
</build>

This example uses a Docker Compose file that creates a simplified build of the containers specifically needed for development and debugging.

version: "3.1"

services:
  database:
    build: 
       context: ./database
    image: atsea_db
    environment:
      POSTGRES_USER: gordonuser
      POSTGRES_DB: atsea
    ports:
      - "5432:5432" 
    networks:
      - back-tier
    secrets:
      - postgres_password

  appserver:
    build:
       context: .
       dockerfile: app/Dockerfile-dev
    image: atsea_app
    ports:
      - "8080:8080"
      - "5005:5005"
    networks:
      - front-tier
      - back-tier
    secrets:
      - postgres_password

secrets:
  postgres_password:
    file: ./devsecrets/postgres_password
    
networks:
  front-tier:
  back-tier:
  payment:
    driver: overlay

The Compose file uses secrets to provision passwords and other sensitive information, such as certificates, without relying on environment variables. Although the example uses PostgreSQL, the application can use secrets to connect to any database defined as a Spring Boot datasource. From JpaConfiguration.java:

 public DataSourceProperties dataSourceProperties() {
        DataSourceProperties dataSourceProperties = new DataSourceProperties();

    // Set password to connect to database using Docker secrets.
    try(BufferedReader br = new BufferedReader(new FileReader("/run/secrets/postgres_password"))) {
        StringBuilder sb = new StringBuilder();
        String line = br.readLine();
        while (line != null) {
            sb.append(line);
            sb.append(System.lineSeparator());
            line = br.readLine();
        }
         dataSourceProperties.setDataPassword(sb.toString());
     } catch (IOException e) {
        System.err.println("Could not successfully load DB password file");
     }
    return dataSourceProperties;
}

Also note that the appserver opens port 5005 for remote debugging, and that the build uses the Dockerfile-dev file to produce a container with remote debugging turned on. This is set in the ENTRYPOINT, which specifies the transport and address for the debugger:

ENTRYPOINT ["java", 

"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005","-jar", 

"/app/AtSea-0.0.1-SNAPSHOT.jar"]

Remote Debugging

To start remote debugging on the application, run compose using the docker-compose-dev.yml file.

docker-compose -f docker-compose-dev.yml up --build

Docker will build the images and start the AtSea Shop database and appserver containers. However, the application will not fully load until Eclipse's remote debugger attaches to it. To start remote debugging, click Run > Debug Configurations…

Select Remote Java Application, then press the New button to create a configuration. In the Debug Configurations panel, give the configuration a name, select the AtSea project, and set the connection properties for the host and port 5005. Click Apply > Debug.


The appserver will start up.

appserver_1|2017-05-09 03:22:23.095 INFO 1 --- [main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)

appserver_1|2017-05-09 03:22:23.118 INFO 1 --- [main] com.docker.atsea.AtSeaApp                : Started AtSeaApp in 38.923 seconds (JVM running for 109.984)

To test remote debugging, set a breakpoint in ProductController.java where it returns the list of products.


You can test it using curl or your preferred tool for making HTTP requests:

curl -H "Content-Type: application/json" -X GET  http://localhost:8080/api/product/

Eclipse will switch to the debug perspective where you can step through the code.


The AtSea Shop example shows how easy it is to use containers as part of your normal development environment, using tools that you and your team are already familiar with. Download the application to try out developing with containers, or use it as a basis for your own Spring Boot REST application.

Interested in more? Check out these developer resources and videos from Dockercon 2017.




]]>