A Promising Methodology for Testing GenAI Applications in Java
https://www.docker.com/blog/testing-genai-applications-in-java/
Wed, 24 Apr 2024

In the vast universe of programming, the era of generative artificial intelligence (GenAI) has marked a turning point, opening up a plethora of possibilities for developers.

Tools such as LangChain4j and Spring AI have democratized access to the creation of GenAI applications in Java, allowing Java developers to dive into this fascinating world. With LangChain4j, for instance, setting up and interacting with large language models (LLMs) has become exceptionally straightforward. Consider the following Java code snippet:

import dev.langchain4j.model.openai.OpenAiChatModel;

// Illustrative wrapper class; requires the langchain4j-open-ai dependency on the classpath
public class ChatExample {

    public static void main(String[] args) {
        // Configure the model with an API key and model name, then generate a response
        var llm = OpenAiChatModel.builder()
                .apiKey("demo")
                .modelName("gpt-3.5-turbo")
                .build();
        System.out.println(llm.generate("Hello, how are you?"));
    }
}

This example illustrates how a developer can quickly instantiate an LLM within a Java application. By simply configuring the model with an API key and specifying the model name, developers can begin generating text responses immediately. This accessibility is pivotal for fostering innovation and exploration within the Java community. More than that, we have a wide range of models that can be run locally, and various vector databases for storing embeddings and performing semantic searches, among other technological marvels.

Despite this progress, however, we are faced with a persistent challenge: the difficulty of testing applications that incorporate artificial intelligence. This aspect seems to be a field where there is still much to explore and develop.

In this article, I will share a methodology that I find promising for testing GenAI applications.


Project overview

The example project focuses on an application that provides an API for interacting with two AI agents capable of answering questions. 

An AI agent is a software entity designed to perform tasks autonomously, using artificial intelligence to simulate human-like interactions and responses. 

In this project, one agent uses direct knowledge already contained within the LLM, while the other leverages internal documentation to enrich the LLM through retrieval-augmented generation (RAG). This approach allows the agents to provide precise and contextually relevant answers based on the input they receive.
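To make this concrete, here is a minimal sketch of how two such agents could be defined with LangChain4j AI Services. The interface, class, and variable names are hypothetical, and the RAG agent assumes a ContentRetriever wired to the embedding store described below:

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.rag.content.retriever.ContentRetriever;
import dev.langchain4j.service.AiServices;

// Hypothetical contract shared by both agents: answer a question with a plain-text response.
interface Agent {
    String answer(String question);
}

class AgentFactory {

    // Agent that relies only on the knowledge already contained in the LLM.
    static Agent straightAgent(ChatLanguageModel model) {
        return AiServices.create(Agent.class, model);
    }

    // Agent enriched with internal documentation through a content retriever (RAG).
    static Agent raggedAgent(ChatLanguageModel model, ContentRetriever retriever) {
        return AiServices.builder(Agent.class)
                .chatLanguageModel(model)
                .contentRetriever(retriever)
                .build();
    }
}

The /chat/straight and /chat/rag endpoints exercised by the tests later in this article would then delegate to agents of this shape.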

I prefer to omit the technical details about RAG, as ample information is available elsewhere. I’ll simply note that this example employs a particular variant of RAG, which simplifies the traditional process of generating and storing embeddings for information retrieval.

Instead of dividing documents into chunks and creating embeddings of those chunks, this project uses an LLM to generate a summary of each document. The embedding is then generated from that summary.

When the user writes a question, an embedding of the question will be generated and a semantic search will be performed against the embeddings of the summaries. If a match is found, the user’s message will be augmented with the original document.

This way, there’s no need to deal with chunk configuration, tune the number of chunks to retrieve, or worry about whether the way the user’s message is augmented makes sense. If there is a document that covers what the user is asking about, it will be included in the message sent to the LLM.
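As a rough sketch of this flow (not the project’s actual code): the ingestion side summarizes each document with the LLM, embeds the summary, and keeps the full document next to it, while the query side embeds the question, searches the summaries, and appends the matching document to the user’s message. All names and the 0.8 score threshold below are assumptions:

import java.util.List;

import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingMatch;
import dev.langchain4j.store.embedding.EmbeddingStore;

// Illustrative sketch of the summary-based RAG flow described above.
class SummaryRag {

    private final ChatLanguageModel llm;
    private final EmbeddingModel embeddingModel;
    private final EmbeddingStore<TextSegment> store;

    SummaryRag(ChatLanguageModel llm, EmbeddingModel embeddingModel, EmbeddingStore<TextSegment> store) {
        this.llm = llm;
        this.embeddingModel = embeddingModel;
        this.store = store;
    }

    // Ingestion: summarize the document, embed the summary, and store the full document alongside it.
    void ingest(String document) {
        String summary = llm.generate("Summarize the following document:\n" + document);
        Embedding embedding = embeddingModel.embed(summary).content();
        store.add(embedding, TextSegment.from(document));
    }

    // Query: embed the question, search the summary embeddings, and augment the message with the matching document.
    String augment(String question) {
        Embedding questionEmbedding = embeddingModel.embed(question).content();
        List<EmbeddingMatch<TextSegment>> matches = store.findRelevant(questionEmbedding, 1, 0.8);
        if (matches.isEmpty()) {
            return question;
        }
        return question + "\n\nUse this document to answer:\n" + matches.get(0).embedded().text();
    }
}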

Technical stack

The project is developed in Java and utilizes a Spring Boot application with Testcontainers and LangChain4j.

For setting up the project, I followed the steps outlined in Local Development Environment with Testcontainers and Spring Boot Application Testing and Development with Testcontainers.

I also use Testcontainers Desktop to facilitate database access, verify the generated embeddings, and review the container logs.

The challenge of testing

The real challenge arises when trying to test the responses generated by language models. Traditionally, we might settle for verifying that the response includes certain keywords, an approach that is both insufficient and error-prone.

static String question = "How I can install Testcontainers Desktop?";

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    assertThat(answer).contains("https://testcontainers.com/desktop/");
}

This approach is not only fragile but also lacks the ability to assess the relevance or coherence of the response.

An alternative is to employ cosine similarity to compare the embeddings of a “reference” response and the actual response, providing a more semantic form of evaluation. 

This method measures the similarity between two vectors/embeddings by calculating the cosine of the angle between them. If both vectors point in roughly the same direction, the “reference” response and the actual response are semantically similar.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
        - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
        - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
        - Answer must be less than 5 sentences
        """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    double cosineSimilarity = getCosineSimilarity(reference, answer);
    assertThat(cosineSimilarity).isGreaterThan(0.8);
}
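The getCosineSimilarity helper used above is not shown in the article; a minimal sketch, assuming a LangChain4j EmbeddingModel is available to embed both texts, could look like this:

import dev.langchain4j.model.embedding.EmbeddingModel;

class SimilarityUtils {

    // Embeds both texts and computes the cosine of the angle between the two embedding vectors.
    static double getCosineSimilarity(EmbeddingModel embeddingModel, String reference, String answer) {
        float[] a = embeddingModel.embed(reference).content().vector();
        float[] b = embeddingModel.embed(answer).content().vector();
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}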

However, this method introduces the problem of selecting an appropriate threshold to determine the acceptability of the response, in addition to the opacity of the evaluation process.

Toward a more effective method

The real problem here arises from the fact that answers provided by the LLM are in natural language and non-deterministic. Because of this, using current testing methods to verify them is difficult, as these methods are better suited to testing predictable values. 

However, we already have a great tool for understanding non-deterministic answers in natural language: LLMs themselves. Thus, the key may lie in using one LLM to evaluate the adequacy of responses generated by another LLM. 

This proposal involves defining detailed validation criteria and using an LLM as a “Validator Agent” to determine if the responses meet the specified requirements. This approach can be applied to validate answers to specific questions, drawing on both general knowledge and specialized information.

By incorporating detailed instructions and examples, the Validator Agent can provide accurate and justified evaluations, offering clarity on why a response is considered correct or incorrect.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
        - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
        - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
        - Answer must be less than 5 sentences
        """;

@Test
void verifyStraightAgentFailsToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/straight?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("no");
}

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("yes");
}

We can even test more complex responses where the LLM should suggest a better alternative to the user’s question.

static String question = "How I can find the random port of a Testcontainer to connect to it?";
static String reference = """
        - Answer must not mention using getMappedPort() method to find the random port of a Testcontainer
        - Answer must mention that you don't need to find the random port of a Testcontainer to connect to it
        - Answer must indicate that you can use the Testcontainers Desktop app to configure fixed port
        - Answer must be less than 5 sentences
        """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToDebugWithTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("yes");
}

Validator Agent

The configuration for the Validator Agent doesn’t differ from that of other agents. It is built using the LangChain4j AI Service and a list of specific instructions:

public interface ValidatorAgent {
    @SystemMessage("""
                ### Instructions
                You are a strict validator.
                You will be provided with a question, an answer, and a reference.
                Your task is to validate whether the answer is correct for the given question, based on the reference.
                
                Follow these instructions:
                - Respond only 'yes', 'no' or 'unsure' and always include the reason for your response
                - Respond with 'yes' if the answer is correct
                - Respond with 'no' if the answer is incorrect
                - If you are unsure, simply respond with 'unsure'
                - Respond with 'no' if the answer is not clear or concise
                - Respond with 'no' if the answer is not based on the reference
                
                Your response must be a json object with the following structure:
                {
                    "response": "yes",
                    "reason": "The answer is correct because it is based on the reference provided."
                }
                
                ### Example
                Question: Is Madrid the capital of Spain?
                Answer: No, it's Barcelona.
                Reference: The capital of Spain is Madrid
                ###
                Response: {
                    "response": "no",
                    "reason": "The answer is incorrect because the reference states that the capital of Spain is Madrid."
                }
                """)
    @UserMessage("""
            ###
            Question: {{question}}
            ###
            Answer: {{answer}}
            ###
            Reference: {{reference}}
            ###
            """)
    ValidatorResponse validate(@V("question") String question, @V("answer") String answer, @V("reference") String reference);

    record ValidatorResponse(String response, String reason) {}
}

As you can see, I’m using Few-Shot Prompting to guide the LLM on the expected responses. I also request a JSON format for responses to facilitate parsing them into objects, and I specify that the reason for the answer must be included, to better understand the basis of its verdict.

Conclusion

The evolution of GenAI applications brings with it the challenge of developing testing methods that can effectively evaluate the complexity and subtlety of responses generated by advanced artificial intelligences. 

The proposal to use an LLM as a Validator Agent represents a promising approach, paving the way towards a new era of software development and evaluation in the field of artificial intelligence. Over time, we hope to see more innovations that allow us to overcome the current challenges and maximize the potential of these transformative technologies.

Better Debugging: How the Signal0ne Docker Extension Uses AI to Simplify Container Troubleshooting
https://www.docker.com/blog/debug-containers-ai-signal0ne-docker-extension/
Wed, 24 Apr 2024

This post was written in collaboration with Szymon Stawski, project maintainer at Signal0ne.

Consider this scenario: You fire up your Docker containers, hit an API endpoint, and … bam! It fails. Now what? The usual drill involves diving into container logs, scrolling through them to understand the error messages, and spending time looking for clues that will help you understand what’s wrong. But what if you could get a summary of what’s happening in your containers and potential issues with the proposed solutions already provided?

In this article, we’ll dive into a solution that solves this issue using AI. AI can already help developers write code, so why not help developers understand their system, too? 

Signal0ne is a Docker Desktop extension that scans Docker containers’ state and logs in search of problems, analyzes the discovered issues, and outputs insights to help developers debug. We first learned about Signal0ne as the winning submission in the 2023 Docker AI/ML Hackathon, and we’re excited to show you how to use it to debug more efficiently. 


Introducing Signal0ne Docker extension: Streamlined debugging for Docker

The magic of the Signal0ne Docker extension is its ability to shorten feedback loops for working with and developing containerized applications. Forget endless log diving — the extension offers a clear and concise summary of what’s happening inside your containers after logs and states are analyzed by an AI agent, pinpointing potential issues and even suggesting solutions. 

Developing applications these days involves more than a block of code executed in a vacuum. It is a complex system of dependencies and different user flows that need debugging from time to time. AI can help filter out the system noise and focus on providing data about specific issues so that developers can debug faster and better. 

Docker Desktop is one of the most popular tools used for local development with a huge community, and Docker features like Docker Debug enhance the community’s ability to quickly debug and resolve issues with their containerized apps.

Signal0ne Docker extension’s suggested solutions and summaries can help you while debugging your container or editing your code so that you can focus on bringing value as a software engineer. The term “developer experience” is often used, but this extension focuses on one crucial aspect: shortening development time. This translates directly to increased productivity, letting you build containerized applications faster and more efficiently.

How does the Docker Desktop extension work?

Between AI copilots tightly integrated into IDEs that help write code, and browser-based AI chats that help explain software development concepts in a Q&A format, one piece is missing: logs and runtime system data. 

The Signal0ne Docker Desktop extension consists of three components: two hosted on the user’s local system (UI and agent) and one in the Signal0ne cloud backend service. The agent scans the user’s local environment for containers with invalid states, runtime issues, or warnings and errors in the logs; after discovering an issue, it collects additional data from the container definition for enhanced analysis. 

After the Signal0ne agent’s discovery phase, the data is sent to the backend service, where a combination of a pre-trained LLM and a solution-search retrieval service performs the analysis. The analysis of the issues can be viewed in the Signal0ne extension UI, including: 

  • Short log summary — Outlines what is happening within a particular container; the logs on which the analysis was based can be accessed from the sources dropdown if you wish.
  • Solutions summary — One paragraph about possible solutions to the analyzed issue with some explanation of why this particular solution was proposed.
  • Solution sources — Links to websites where the issue has already been solved by the community (for example, a GitHub issue or StackOverflow post).

Watch this short video demonstration for more details: 

Technical details

The issue analysis pipeline is powered by the LangChain tooling framework, which utilizes open source Mistral 7b LLM with LLM-optimized web search that browses sources of existing solutions on the open web (Figure 1).

 Illustration of Signal0ne architecture showing local development containers, UI, agent, backend service, Mistral 7b LLM, and search API.
Figure 1: Signal0ne architecture overview.

The following section shows sample issue analysis pipeline output:

Logs from Docker container:

$ irb
>> "12341:asdf"[/\d+/]
# => "12341"
>> "12341:asdf"[/\d*/]
# => "12341"
>> "12341:asdf"[0..5]
# => "12341:"
>> "12341:asdf"[0...5]
# => "12341"
>> "12341:asdf"[0, ':']
TypeError: can't convert String into Integer
  from (irb):5:in

Summary

The user is trying to extract a substring from the string 12341:asdf using different Ruby methods, such as [/\d+/], [/\d*/], [0..5], [0...5], and [0, ':']. However, when attempting to use the indexing method [0, ':'], an error occurs because the method expects an integer argument but receives a string instead.

Predicted solutions: 

“To extract a substring from the given string 12341:asdf without encountering an error when using indexing methods, consider the following options:

1. Using regular expressions:

To extract a substring using regular expressions, you can use the scan method with a regular expression pattern. For instance, to extract the digits before the colon, you can use the following code snippet:

str = "12341:asdf"
substring = str.scan(/[\d]+/)
puts substring.first
# Output: ["12341"]
puts substring.first[0]
# Output: 12341

2. Slicing:

Another alternative to indexing is slicing, which allows you to extract a specified range of characters from a string. To extract the digits before the colon using slicing, you can use the following code snippet:

str = "12341:asdf"
substring = str[0...str.index(":")]
puts substring
# Output: "12341"
puts substring.to_i
# Output: 12341

These two methods should help you extract the desired substring from the given string without encountering an error.

Getting started

Getting started with the Signal0ne Docker Desktop extension is a straightforward process that allows developers to leverage the benefits of unified development. 

Here are the steps for installing Signal0ne Docker extension:

1. Install Docker Desktop.

2. Choose Add Extensions in the left sidebar. The Browse tab will appear by default (Figure 2).

Screenshot of Docker Desktop Extensions Marketplace highlighting "Add Extensions" option and "Browse" tab.
Figure 2: Signal0ne extension installation from the marketplace.

3. In the Filters drop-down, select the Utility tools category.

4. Find Signal0ne and then select Install (Figure 3).

Screenshot of Signal0ne installation process.
Figure 3: Extension installation process.

5. Log in after the extension is installed (Figure 4).

Screenshot of Signal0ne login page.
Figure 4: Signal0ne extension login screen.

6. Start developing your apps, and, if you face some issues while debugging, have a look at the Signal0ne extension UI. The issue analysis will be there to help you with debugging.

Make sure the Signal0ne agent is enabled by toggling it on (Figure 5):

Screenshot of Signal0ne Agent Settings toggle bar.
Figure 5: Agent settings tab.

Figure 6 shows the summary and sources:

Screenshot of Signal0ne page showing search criteria and related insights.
Figure 6: Overview of the inspected issue.

Proposed solutions and sources are shown in Figures 7 and 8. Solution sources will redirect you to a webpage with the predicted solution:

Screenshot of Signal0ne page showing search criteria and proposed solutions.
Figure 7: Overview of proposed solutions to the encountered issue.
Screenshot of Signal0ne page showing search criteria and related source links.
Figure 8: Overview of the list of helpful links.

If you want to contribute to the project, you can leave feedback via the Like or Dislike button in the issue analysis output (Figure 9).

Screenshot of Signal0ne  sources page showing thumbs up/thumbs down feedback options at the bottom.
Figure 9: You can leave feedback about analysis output for further improvements.

To explore the Signal0ne Docker Desktop extension without utilizing your own containers, consider experimenting with dummy containers using the following Docker Compose file to observe how logs are analyzed and how helpful the resulting insights are:

services:
  broken_bulb: # c# application that cannot start properly
    image: 'Signal0neai/broken_bulb:dev'
  faulty_roger: # python API server connecting to an unreachable database
    image: 'Signal0neai/faulty_roger:dev'
  smoked_server: # nginx server hosting a website with a misconfiguration
    image: 'Signal0neai/smoked_server:dev'
    ports:
      - '8082:8082'
  invalid_api_call: # python webserver with a bug
    image: 'Signal0neai/invalid_api_call:dev'
    ports:
      - '5000:5000'
  • broken_bulb: This service uses the image Signal0neai/broken_bulb:dev. It’s a C# application that throws System.NullReferenceException during startup. With this application, you can observe how Signal0ne discovers the failed container, extracts the error logs, and analyzes them.
  • faulty_roger: This service uses the image Signal0neai/faulty_roger:dev. It is a Python API server that is trying to connect to an unreachable database on localhost.
  • smoked_server: This service utilizes the image Signal0neai/smoked_server:dev. The smoked_server service is an Nginx instance that throws a 403 Forbidden error when the user tries to access the root path (http://127.0.0.1:8082/). Signal0ne can help you debug that.
  • invalid_api_call: API service with a bug in one of its endpoints. To generate an error, call http://127.0.0.1:5000/create-table after running the container. Follow Signal0ne’s analysis and try to debug the issue.

Conclusion

Debugging containerized applications can be time-consuming and tedious, often involving endless scrolling through logs and searching for clues to understand the issue. However, with the introduction of the Signal0ne Docker extension, developers can now streamline this process and boost their productivity significantly.

By leveraging the power of AI and language models, the extension provides clear and concise summaries of what’s happening inside your containers, pinpoints potential issues, and even suggests solutions. With its user-friendly interface and seamless integration with Docker Desktop, the Signal0ne Docker extension is set to transform how developers debug and develop containerized applications.

Whether you’re a seasoned Docker user or just starting your journey with containerized development, this extension offers a valuable tool that can save you countless hours of debugging and help you focus on what matters most — building high-quality applications efficiently. Try the extension in Docker Desktop today, and check out the documentation on GitHub.

Docker Desktop 4.29: Docker Socket Mount Permissions in ECI, Advanced Error Management, Moby 26, and New Beta Features
https://www.docker.com/blog/docker-desktop-4-29/
Wed, 10 Apr 2024

The release of Docker Desktop 4.29 introduces enhancements to secure and streamline the development process and to improve error management and workflow efficiency. With the integration of Enhanced Container Isolation (ECI) with Docker socket mount permissions, the debut of Moby 26 within Docker Desktop, and exciting features such as Docker Compose enhancements via synchronized file shares reaching beta release, we’re equipping developers with the essential resources to tackle the complexities of modern development head-on.

Dive into the details to discover these new enhancements and get a sneak peek at exciting advancements currently in beta release.



Enhanced Container Isolation with Docker socket mount permissions 

We’re pleased to unveil a new feature in the latest Docker Desktop release, now in General Availability to Business subscribers, that further improves Desktop’s Enhanced Container Isolation (ECI) mode: Docker socket mount permissions. This update blends robust security with the flexibility you love, allowing you to enjoy key development tools like Testcontainers with the peace of mind provided by ECI’s unprivileged containers. Initially launched in beta with Docker Desktop 4.27, this update moves the ECI Docker socket mount permissions feature to General Availability (GA), demonstrating our commitment to making Docker Desktop the best modern application development platform.

The Docker Engine socket, a crucial component for container management, has historically been a vector for potential security risks. Unauthorized access could enable malicious activities, such as supply chain attacks. However, legitimate use cases, like the Testcontainers framework, require socket access for operational tasks.

With ECI, Docker Desktop enhances security by default, blocking unapproved bind-mounting of the Docker Engine socket into containers. Yet, recognizing the need for flexibility, we introduce controlled access through the admin-settings.json configuration, sketched after the list below. This allows specified images to bind-mount the Docker socket, combining security with functionality. 

Key features include:

  • Selective permissions: Admins can now specify which container images can access the Docker socket through a curated imageList, ensuring that only trusted containers have the necessary permissions.
  • Command restrictions: The commandList feature further tightens security by limiting the Docker commands approved containers can execute, acting as a secondary defense layer.
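As a rough illustration, an admin-settings.json entry combining both settings might look like the following. Treat the exact field names and structure as assumptions based on the Docker documentation at the time of writing, and consult the current docs for the authoritative schema:

{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "locked": true,
    "value": true,
    "dockerSocketMount": {
      "imageList": {
        "images": ["docker.io/testcontainers/ryuk:*"]
      },
      "commandList": {
        "type": "deny",
        "commands": ["push", "build"]
      }
    }
  }
}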

While we celebrate this release, our journey doesn’t stop here. We’re continuously exploring ways to expand Docker Desktop’s capabilities, ensuring our users can access the most secure, efficient, and user-friendly containerization tools.

Stay tuned for further security enhancements, including our beta release of air-gapped containers. Update to Docker Desktop 4.29 to start leveraging the full potential of Enhanced Container Isolation with Docker socket mount permissions today.

Advanced error management in Docker Desktop 

We’re redefining error management to significantly improve the developer experience. This update isn’t just about fixing bugs; it’s a comprehensive overhaul aimed at making the development process more efficient, reliable, and user-friendly.

Central to this update is our shift toward self-service troubleshooting and resilience, transforming errors from roadblocks into opportunities for growth and learning. The new system presents actionable insights for errors, ensuring developers can swiftly move toward a resolution.

Key enhancements include:

  • An enhanced error interface: Combining error codes with explanatory text and support links, making troubleshooting straightforward.
  • Direct diagnostic uploads: Allowing users to share diagnostics from the error screen, streamlining support. 
  • Reset and exit options: Offering quick fixes directly from the error interface.
  • Self-service remediation: Providing clear, actionable steps for users to resolve issues independently (Figure 1).
Figure 1: Error message displaying self-service remediation options.

This update marks a significant leap in our commitment to enhancing the Docker Desktop user experience, empowering developers, and reducing the need for support tickets. Read our blog post Next-Level Error Handling: How Docker Desktop 4.29 Aims to Simplify Developer Challenges to dive deeper into these enhancements and discover how Docker Desktop 4.29 is setting a new standard for error management and developer support.

New in Docker Engine: Volume subpath mounts, networking enhancements, BuildKit 0.13, and more 

In the latest Docker Engine update, Moby 26, packaged in Docker Desktop 4.29, introduces several enhancements aimed at enriching the developer experience. Here’s the breakdown of what’s new: 

  • Volume subpath mounts: Responding to widespread user requests, we’ve made it possible to mount a subdirectory as a named volume. This addition enhances flexibility and control over data management within containers. Detailed guidance on specifying these mounts is available in the docs.
  • Networking enhancements: Significant improvements have been made to bolster the stability of networking capabilities within the engine, along with preliminary efforts to support future IPv6 enhancements.
  • Integration of BuildKit 0.13: Among other updates, this BuildKit version includes experimental support for Windows Containers, ensuring builds remain dependable and efficient.
  • Streamlined API: Deprecated API versions have been removed, concentrating on quality enhancements and promoting a more secure, reliable environment.
  • Multi-platform image enhancements: In this release, you’ll see an improved docker images UX as we’ve combined image entries for multi-platform images.

Beta release highlights

Docker Debug in Docker Desktop GUI and CLI 

Docker Debug (Beta), a recent addition to Docker Desktop, streamlines the debugging process for developers. This feature, accessible in Docker Pro, Teams, and Business subscriptions, offers a shell for efficiently debugging both local and remote containerized applications — even those that fail to run. With Docker Debug, developers can swiftly pinpoint and address issues, freeing up more time for innovation.

Now, in beta release, Docker Debug introduces comprehensive debugging directly from the Docker Desktop CLI for active and inactive containers alike. Moreover, the Docker Desktop GUI has been enhanced with an intuitive option: Click the toggle in the Exec tab within a container to switch on Debug mode to start debugging with the necessary tools at your fingertips.

Figure 2: Docker Desktop containers view showcasing debugging a running container with Docker Debug.

To dive into Docker Debug, ensure you’re logged in with your subscription account, then initiate debugging by executing docker debug <Container or Image name> in the CLI or by selecting a container from the GUI container list for immediate debugging from any device, local or in the cloud.

Improved volume backup capabilities 

With our latest release, we’re elevating volume backup capabilities in Docker Desktop, introducing an upgraded feature set in beta release. This enhancement integrates the Volumes Backup & Share extension directly into Docker Desktop, streamlining your backup processes. 

Figure 3: Docker Desktop Volumes view showcasing new backup functionality.

This release marks a significant step forward, but it’s just the beginning. We’re committed to expanding these capabilities, adding even more value in future updates. Start exploring the new feature today and prepare for an enhanced backup experience soon.

Support for host network mode on Docker Desktop for Mac and Windows 

Support for host network mode (docker run --net=host), previously limited to Linux users, is now available for Mac and Windows Docker Desktop users, offering enhanced networking capabilities and flexibility.

With host network mode support, Docker Desktop becomes a more versatile tool for advanced networking tasks, such as dynamic network penetration testing, without predefined port mappings. This feature is especially useful for applications requiring the ability to dynamically accept connections on various ports, just as if they were running directly on the host. Features include:

  • Simplified networking: Eases the setup for complex networking tasks, facilitating security testing and the development of network-centric applications.
  • Greater flexibility: Allows containers to use the host’s network stack, avoiding the complexities of port forwarding.
Figure 4: The host network mode enhancement in Preview Beta reflects our commitment to improving Docker Desktop and is available after authenticating against all Docker subscriptions.

Enhancing security with Docker Desktop’s new air-gapped containers

Docker Desktop’s latest beta feature, air-gapped containers, is now available in version 4.29, reflecting our deep investment in security enhancements. This Business subscription feature empowers administrators to limit container access to network resources, tightening security across containerized applications by: 

  • Restricting network access: Ensuring containers communicate only with approved sources.
  • Customizing proxy rules: Allowing detailed control over container traffic.
  • Enhancing data protection: Preventing unauthorized data transfer in or out of containers.

The introduction of air-gapped containers is part of our broader effort to make Docker Desktop not just a development tool, but an even more secure development environment. We’re excited about the potential this feature holds for enhancing security protocols and simplifying the management of sensitive data.

Compose bind mount support with synchronized file shares 

We’re elevating the Docker Compose experience for our subscribers by integrating synchronized file shares (SFS) directly into Compose. This feature eradicates the sluggishness typically associated with managing large codebases in containers. Formerly known as Mutagen, synchronized file shares enhances bind mounts with native filesystem performance, accelerating file operations by an impressive 2-10x. This leap forward is incredibly impactful for developers handling extensive codebases, effortlessly streamlining their workflow.

With a Docker subscription, you’ll find that Docker Compose and SFS work together seamlessly, automatically optimizing bind mounts to significantly boost synchronization speeds. This integration requires no additional configuration; Compose intelligently activates SFS whenever a bind mount is used, instantly enhancing your development process.
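For example, a service with an ordinary bind mount like the one in this minimal, hypothetical Compose file is automatically accelerated once the feature is enabled:

services:
  web:
    build: .
    volumes:
      # A standard bind mount; Compose transparently backs it with a synchronized file share.
      - ./src:/app/src
    ports:
      - "8080:8080"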

Enabling synchronized file shares in Compose is simple:

  1. Log into Docker Desktop.
  2. Under Settings, navigate to Features in development and choose the Experimental features tab.
  3. Enable Access experimental features and Manage Synchronized file shares with Compose.

Once set up via Docker Desktop settings, these folders act as standard bind mounts with the added benefit of SFS speed enhancements. 

Figure 5: Docker Desktop settings displaying the option to turn on synchronized file shares with Docker Compose.
Figure 6: Demonstration of compose up creating and synching shares in the terminal.

If your Compose project relies on a bind mount that could benefit from synchronized file shares, the initial share creation must be done through the Docker Desktop GUI.

Embrace the future of Docker Compose with Docker Desktop’s synchronized file shares and transform your development workflow with unparalleled speed and efficiency.

Try Docker Desktop 4.29 now

Docker Desktop 4.29 introduces updates focused on innovation, security, and enhancing the developer experience. This release integrates community feedback and advances Docker’s capabilities, providing solutions that meet developers’ and businesses’ immediate needs while setting the stage for future features. We advise all Docker users to upgrade to version 4.29. Please note that access to certain features in this release requires authentication and may be contingent upon your subscription tier. We encourage you to evaluate your feature needs and select the subscription level that best suits your requirements.

Join the conversation

Dive into the discussion and contribute to the evolution of Docker Desktop. Use our feedback form to share your thoughts and let us know how to improve the Hardened Desktop features. Your input directly influences the development roadmap, ensuring Docker Desktop meets and exceeds our community and customers’ needs.

Get Started with the Latest Updates for Dockerfile Syntax (v1.7.0)
https://www.docker.com/blog/new-dockerfile-capabilities-v1-7-0/
Tue, 09 Apr 2024
Dockerfiles are fundamental tools for developers working with Docker, serving as a blueprint for creating Docker images. These text documents contain all the commands a user could call on the command line to assemble an image. Understanding and effectively utilizing Dockerfiles can significantly streamline the development process, allowing for the automation of image creation and ensuring consistent environments across different stages of development. Dockerfiles are pivotal in defining project environments, dependencies, and the configuration of applications within Docker containers.

With new versions of the BuildKit builder toolkit, Docker Buildx CLI, and Dockerfile frontend for BuildKit (v1.7.0), developers now have access to enhanced Dockerfile capabilities. This blog post delves into these new Dockerfile capabilities and explains how you can leverage them in your projects to further optimize your Docker workflows.


Versioning

Before we get started, here’s a quick reminder of how Dockerfile is versioned and what you should do to update it. 

Although most projects use Dockerfiles to build images, BuildKit is not limited only to that format. BuildKit supports multiple different frontends for defining the build steps for BuildKit to process. Anyone can create these frontends, package them as regular container images, and load them from a registry when you invoke the build.

With the new release, we have published two such images to Docker Hub: docker/dockerfile:1.7.0 and docker/dockerfile:1.7.0-labs.

To use these frontends, you need to specify a #syntax directive at the beginning of the file to tell BuildKit which frontend image to use for the build. Here we have set it to use the latest of the 1.x.x major version. For example:

#syntax=docker/dockerfile:1

FROM alpine
...

This means that BuildKit is decoupled from the Dockerfile frontend syntax. You can start using new Dockerfile features right away without worrying about which BuildKit version you’re using. All the examples described in this article will work with any version of Docker that supports BuildKit (the default builder as of Docker 23), as long as you define the correct #syntax directive on the top of your Dockerfile.

You can learn more about Dockerfile frontend versions in the documentation. 

Variable expansions

When you write Dockerfiles, build steps can contain variables that are defined using the build arguments (ARG) and environment variables (ENV) instructions. The difference between build arguments and environment variables is that environment variables are kept in the resulting image and persist when a container is created from it.

When you use such variables, you most likely use ${NAME} or, more simply, $NAME in COPY, RUN, and other commands.

You might not know that Dockerfile supports two forms of Bash-like variable expansion:

  • ${variable:-word}: Sets a value to word if the variable is unset
  • ${variable:+word}: Sets a value to word if the variable is set
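For instance, a contrived sketch showing both forms (the variable and value names here are arbitrary):

FROM alpine
ARG TAG
# RESOLVED_TAG falls back to "latest" when TAG is not provided
ENV RESOLVED_TAG=${TAG:-latest}
# SUFFIX becomes "-custom" only when TAG is set; otherwise it stays empty
ENV SUFFIX=${TAG:+-custom}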

Up to this point, these special forms were not that useful in Dockerfiles because the default value of ARG instructions can be set directly:

FROM alpine
ARG foo="default value"

If you are an expert in various shell applications, you know that Bash and other tools usually have many additional forms of variable expansion to ease the development of your scripts.

In Dockerfile v1.7, we have added:

  • ${variable#pattern} and ${variable##pattern} to remove the shortest or longest prefix from the variable’s value.
  • ${variable%pattern} and ${variable%%pattern} to remove the shortest or longest suffix from the variable’s value.
  • ${variable/pattern/replacement} to replace the first occurrence of a pattern
  • ${variable//pattern/replacement} to replace all occurrences of a pattern

How these rules are used might not be completely obvious at first. So, let’s look at a few examples seen in actual Dockerfiles.

For example, projects often can’t agree on whether versions for downloading your dependencies should have a “v” prefix or not. The following allows you to get the format you need:

# example VERSION=v1.2.3
ARG VERSION=${VERSION#v}
# VERSION is now '1.2.3'

In the next example, multiple variants are used by the same project:

ARG VERSION=v1.7.13
ADD https://github.com/containerd/containerd/releases/download/${VERSION}/containerd-${VERSION#v}-linux-amd64.tar.gz / 

To configure different command behaviors for multi-platform builds, BuildKit provides useful built-in variables like TARGETOS and TARGETARCH. Unfortunately, not all projects use the same values. For example, in containers and the Go ecosystem, we refer to 64-bit ARM architecture as arm64, but sometimes you need aarch64 instead.

ADD https://github.com/oven-sh/bun/releases/download/bun-v1.0.30/bun-linux-${TARGETARCH/arm64/aarch64}.zip /

In this case, the URL also uses a custom name for AMD64 architecture. To pass a variable through multiple expansions, use another ARG definition with an expansion from the previous value. You could also write all the definitions on a single line, as ARG accepts multiple parameters, but that may hurt readability.

ARG ARCH=${TARGETARCH/arm64/aarch64}
ARG ARCH=${ARCH/amd64/x64}
ADD https://github.com/oven-sh/bun/releases/download/bun-v1.0.30/bun-linux-${ARCH}.zip /

Note that the example above is written in a way that if a user passes their own --build-arg ARCH=value, then that value is used as-is.

Now, let’s look at how new expansions can be useful in multi-stage builds.

One of the techniques described in “Advanced multi-stage build patterns” shows how build arguments can be used so that different Dockerfile commands run depending on the build-arg value. For example, you can use that pattern if you build a multi-platform image and want to run additional COPY or RUN commands only for specific platforms. If this method is new to you, you can learn more about it from that post.

In summarized form, the idea is to define a global build argument and then define build stages that use the build argument value in the stage name while pointing to the base of your target stage via the build-arg name.

Old example:

ARG BUILD_VERSION=1

FROM alpine AS base
RUN …

FROM base AS branch-version-1
RUN touch version1

FROM base AS branch-version-2
RUN touch version2

FROM branch-version-${BUILD_VERSION} AS after-condition

FROM after-condition
RUN …

When using this pattern for multi-platform builds, one of the limitations is that all the possible values for the build-arg need to be defined by your Dockerfile. This is problematic as we want Dockerfile to be built in a way that it can build on any platform and not limit it to a specific set. 

You can see other examples here and here of Dockerfiles where dummy stage aliases must be defined for all architectures, and no other architecture can be built. Instead, the pattern we would like to use is that there is one architecture that has a special behavior, and everything else shares another common behavior.

With new expansions, we can write this to demonstrate running special commands only on RISC-V, which is still somewhat new and may need custom behavior:

#syntax=docker/dockerfile:1.7

ARG ARCH=${TARGETARCH#riscv64}
ARG ARCH=${ARCH:+"common"}
ARG ARCH=${ARCH:-$TARGETARCH}

FROM --platform=$BUILDPLATFORM alpine AS base-common
ARG TARGETARCH
RUN echo "Common build, I am $TARGETARCH" > /out

FROM --platform=$BUILDPLATFORM alpine AS base-riscv64
ARG TARGETARCH
RUN echo "Riscv only special build, I am $TARGETARCH" > /out

FROM base-${ARCH} AS base

Let’s look at these ARCH definitions more closely.

  • The first sets ARCH to TARGETARCH but removes riscv64 from the value.
  • Next, as we described previously, we don’t actually want the other architectures to use their own values but instead want them all to share a common value. So, we set ARCH to common except if it was cleared from the previous riscv64 rule. 
  • Now, if we still have an empty value, we default it back to $TARGETARCH.
  • The last definition is optional, as we would already have a unique value for both cases, but it makes the final stage name base-riscv64 nicer to read.

Additional examples of including multiple conditions with shared conditions, or conditions based on architecture variants can be found in this GitHub Gist page.

Comparing this example to the initial example of conditions between stages, the new pattern isn’t limited to just controlling the platform differences of your builds but can be used with any build-arg. If you have used this pattern before, then you can effectively now define an “else” clause, whereas previously, you were limited to only “if” clauses.

Copy with keeping parent directories

The following feature has been released in the “labs” channel. Define the following at the top of your Dockerfile to use this feature.

#syntax=docker/dockerfile:1.7-labs

When you are copying files in your Dockerfile, for example, do this:

COPY app/file /to/dest/dir/

This example means the source file is copied directly to the destination directory. If your source path was a directory, all the files inside that directory would be copied directly to the destination path.

What if you have a file structure like the following:

.
├── app1
│   ├── docs
│   │   └── manual.md
│   └── src
│       └── server.go
└── app2
    └── src
        └── client.go

You want to copy only files in app1/src, but so that the final files at the destination would be /to/dest/dir/app1/src/server.go and not just /to/dest/dir/server.go.

With the new COPY --parents flag, you can write:

COPY --parents /app1/src/ /to/dest/dir/  

This will copy the files inside the src directory and recreate the app1/src directory structure for these files.

Things get more powerful when you start to use wildcard paths. To copy the src directories for both apps into their respective locations, you can write:

COPY --parents */src/ /to/dest/dir/ 

This will create both /to/dest/dir/app1 and /to/dest/dir/app2, but it will not copy the docs directory. Previously, this kind of copy was not possible with a single command. You would have needed multiple COPY commands for individual files (as shown in this example) or used some workaround with the RUN --mount instruction instead.

You can also use double-star wildcard (**) to match files under any directory structure. For example, to copy only the Go source code files anywhere in your build context, you can write:

COPY --parents **/*.go /to/dest/dir/

If you are thinking about why you would need to copy specific files instead of just using COPY ./ to copy all files, remember that your build cache gets invalidated when you include new files in your build. If you copy all files, the cache gets invalidated when any file is added or changed, whereas if you copy only Go files, only changes in these files influence the cache.

The new --parents flag is not only for COPY instructions from your build context; you can also use it in multi-stage builds when copying files between stages with COPY --from.

Note that with COPY --from syntax, all source paths are expected to be absolute, meaning that if the --parents flag is used with such paths, they will be fully replicated as they were in the source stage. That may not always be desirable, and instead, you may want to keep some parents but discard and replace others. In that case, you can use a special /./ relative pivot point in your source path to mark which parents you wish to copy and which should be ignored. This special path component resembles how rsync works with the --relative flag.

#syntax=docker/dockerfile:1.7-labs
FROM ... AS base
RUN ./generate-lot-of-files -o /out/
# /out/usr/bin/foo
# /out/usr/lib/bar.so
# /out/usr/local/bin/baz

FROM scratch
COPY --from=base --parents /out/./**/bin/ /
# /usr/bin/foo
# /usr/local/bin/baz

The example above shows how only the bin directories are copied from the collection of files that the intermediate stage generated, while all the directories keep their paths relative to the out directory. 

Exclusion filters

The following feature has been released in the “labs” channel. Define the following at the top of your Dockerfile to use this feature:

#syntax=docker/dockerfile:1.7-labs

Another related case when moving files in your Dockerfile with COPY and ADD instructions is when you want to move a group of files but exclude a specific subset. Previously, your only options were to use RUN --mount or try to define your excluded files inside a .dockerignore file. 

.dockerignore files, however, are not a good solution for this problem: they only exclude files from the client-side build context, not from builds of remote Git/HTTP URLs, and they are limited to one per Dockerfile. You should use them similarly to .gitignore to mark files that are never part of your project, not as a way to define your application-specific build logic.

With the new --exclude=[pattern] flag, you can now define such exclusion filters for your COPY and ADD commands directly in the Dockerfile. The pattern uses the same format as .dockerignore.

The following example copies all the files in a directory except Markdown files:

COPY --exclude=*.md app /dest/

You can use the flag multiple times to add multiple filters. The next example excludes Markdown files and also a file called README:

COPY --exclude=*.md --exclude=README app /dest/

Double-star wildcards exclude not only Markdown files in the copied directory but also in any subdirectory:

COPY --exclude=**/*.md app /dest/

As in .dockerignore files, you can also define exceptions to the exclusions with ! prefix. The following example excludes all Markdown files in any copied directory, except if the file is called important.md — in that case, it is still copied.

COPY --exclude=**/*.md --exclude=!**/important.md app /dest/

This double negative may be confusing initially, but note that this is a reversal of the previous exclude rule, and “include patterns” are defined by the source parameter of the COPY instruction.

When using --exclude together with previously described --parents copy mode, note that the exclude patterns are relative to the copied parent directories or to the pivot point /./ if one is defined. See the following directory structure for example:

assets
├── app1
│   ├── icons32x32
│   ├── icons64x64
│   ├── notes
│   └── backup
├── app2
│   └── icons32x32
└── testapp
    └── icons32x32

COPY --parents --exclude=testapp assets/./**/icons* /dest/

This command would create the directory structure below. Note that only directories with the icons prefix were copied, the root parent directory assets was skipped as it was before the relative pivot point, and additionally, testapp was not copied as it was defined with an exclusion filter.

dest
├── app1
│   ├── icons32x32
│   └── icons64x64
└── app2
    └── icons32x32

Conclusion

We hope this post gave you ideas for improving your Dockerfiles and that the patterns shown here will help you describe your build more efficiently. Remember that your Dockerfile can start using all these features today by defining the #syntax line on top, even if you haven’t updated to the latest Docker yet.

For a full list of other features in the new BuildKit, Buildx, and Dockerfile releases, check out the changelogs:

Thanks to community members @tstenner, @DYefimov, and @leandrosansilva for helping to implement these features!

If you have issues or suggestions you want to share, let us know in the issue tracker.

Debian’s Dedication to Security: A Robust Foundation for Docker Developers
https://www.docker.com/blog/debian-for-docker-developers/
Thu, 04 Apr 2024

As security threats become more and more prevalent, building software with security top of mind is essential. Security has become an increasing concern for container workloads specifically and, commensurately, for container base-image choice. Many conversations around choosing a secure base image focus on CVE counts, but security involves a lot more than that. 

One organization that has been leading the way in secure software development is the Debian Project. In this post, I will outline how and why Debian operates as a secure basis for development.


For more than 30 years, Debian’s diverse group of volunteers has provided a free, open, stable, and secure GNU/Linux distribution. Debian’s emphasis on engineering excellence and clean design, as well as its wide variety of packages and supported architectures, have made it not only a widely used distribution in its own right but also a meta-distribution. Many other Linux distributions, such as Ubuntu, Linux Mint, and Kali Linux, are built on top of Debian, as are many Docker Official Images (DOI). In fact, more than 1,000 Docker Official Images variants use the debian DOI or the Debian-derived ubuntu DOI as their base image. 

Why Debian?

As a bit of a disclaimer, I have been using Debian GNU/Linux for a long time. I remember installing Debian from floppy disks in the 1990s on a PC that I cobbled together, and later reinstalling so I could test prerelease versions of the netinst network installer. Installing over the network took a while using a 56-kbps modem. At those network speeds, you had to be very particular about which packages you chose in dselect.

Having used a few other distributions before trying Debian, I still remember being amazed by how well-organized and architected the system was. No dangling or broken dependencies. No download failures. No incompatible shared libraries. No package conflicts, but rather a thoughtful handling of packages providing similar functionality. 

Much has changed over the years, no more floppies, dselect has been retired, my network connection speed has increased by a few orders of magnitude, and now I “install” Debian via docker pull debian. What has not changed is the feeling of amazement I have toward Debian and its community.

Open source software and security

Despite the achievements of the Debian project and the many other projects it has spawned, it is not without detractors. Like many other open source projects, Debian has received its share of criticism in the past few years from opportunists lamenting the state of open source security. Writing about the software supply chain while bemoaning high-profile CVEs and pointing to malware that has been uploaded to an open source package ecosystem, such as PyPI or NPM, has become all too common. 

The pernicious assumption in such articles is that open source software is the problem. We know this is not the case. We’ve been through this before. Back when I was installing Debian over a 56-kbps modem, all sorts of fear, uncertainty, and doubt (FUD) was being spread by various proprietary software vendors. We learned then that open source is not a security problem — it is a security solution. 

Being open source does not automatically convey an improved security status compared to closed-source software, but it does provide significant advantages. In his Secure Programming HOWTO, David Wheeler provides a balanced summary of the relationship between open source software and security. A purported advantage conveyed by closed-source software is the nondisclosure of its source code, but we know that security through obscurity is no security at all. 

The transparency of open source software and open ecosystems allows us to better know our security posture. Openness allows for the rapid identification and remediation of vulnerabilities. Openness enables the vast majority of the security and supply chain tooling that developers regularly use. How many closed-source tools regularly publish CVEs? With proprietary software, you often only find out about a vulnerability after it is too late.

Debian’s rapid response strategy

Debian has been criticized for moving too slowly on the security front. But this narrative, like the open vs. closed-source narrative, captures neither the nuance nor reality. Although several distributions wait to publish CVEs until a fixed version is available, Debian opts for complete transparency and urgency when communicating security information to its users.

Furthermore, Debian maintainers are not a mindless fleet of automatons hastily applying patches and releasing new package versions. As a rule, Debian maintainers are experts among experts, deeply steeped in software and delivery engineering, open source culture, and the software they package.

zlib vulnerability example

A recent zlib vulnerability, CVE-2023-45853, provides an insightful example of the Debian project’s diligent, thorough approach to security. Several distributions grabbed a patch for the vulnerability, applied it, rebuilt, packaged, and released a new zlib package. The Debian security community took a closer look.

As mentioned in the CVE summary, the vulnerability was in minizip, which is a utility under the contrib directory of the zlib source code. No minizip source files are compiled into the zlib library, libz. As such, this vulnerability did not actually affect any zlib packages.

If that were where the story had ended, the only harm would be in updating a package unnecessarily. But the story did not end there. As detailed in the Debian bug thread, the offending minizip code was copied (i.e., vendored) and used in a lot of other widely used software. In fact, the vendored minizip code in both Chromium and Node.js was patched about a month before the zlib CVE was even published. 

Unfortunately, other commonly used software packages also had vendored copies of minizip that were still vulnerable. Thanks to the diligence of the Debian project, either the patch was applied to those projects as well, or they were compiled against the patched system minizip (not zlib!) dev package rather than the vendored version. In other distributions, those buggy vendored copies are in some cases still being compiled into software packages, with nary a mention in any CVE.

Thinking beyond CVEs

In the past 30 years, we have seen an astronomical increase in the role open source software plays in the tech industry. Despite the productivity gains that software engineers get by leveraging the massive amount of high-quality open source software available, we are once again hearing the same FUD we heard in the early days of open source. 

The next time you see an article about the dangers lurking in your open source dependencies, don’t be afraid to look past the headlines and question the assumptions. Open ecosystems lead to secure software, and the Debian project provides a model we would all do well to emulate. Debian’s goal is security, which encompasses a lot more than a report showing zero CVEs. Consumers of operating systems and container images would be wise to understand the difference. 

So go ahead and build on top of the debian DOI. FROM debian is never a bad way to start a Dockerfile!

Learn more

Empower Your Development: Dive into Docker’s Comprehensive Learning Ecosystem https://www.docker.com/blog/docker-learning-ecosystem/ Tue, 02 Apr 2024 13:26:01 +0000 https://www.docker.com/?p=53438 Continuous learning is a necessity for developers in today’s fast-paced development landscape. Docker recognizes the importance of keeping developers at the forefront of innovation, and to do so, we aim to empower the developer community with comprehensive learning resources.

Docker has taken a multifaceted approach to developer education by forging partnerships with renowned platforms like Udemy and LinkedIn Learning, investing in our own documentation and guides, and highlighting the incredible learning content created by the developer community, including Docker Captains.


Commitment to developer learning

At Docker, our goal is to simplify the lives of developers, which begins with helping them understand how to maximize the power of Docker tools throughout their projects. We also recognize that developers have different learning styles, so we are taking a diversified approach, delivering this material across an array of platforms and formats so that developers can learn in the way that best suits them. 

Strategic partnerships for developer learning

Recognizing the diverse learning needs of developers, Docker has partnered with leading online learning platforms — Udemy and LinkedIn Learning. These partnerships offer developers access to a wide range of courses tailored to different expertise levels, from beginners looking to get started with Docker to advanced users aiming to deepen their knowledge. 

For teams already utilizing these platforms for other learning needs, this collaboration places Docker learning in a familiar platform next to other coursework.

  • Udemy: Docker’s collaboration with Udemy highlights an array of Endorsed Docker courses, designed by industry experts. Whether getting a handle on containerization or mastering Docker with Kubernetes, Udemy’s platform offers the flexibility and depth developers need to upskill at their own pace. Today, demand remains high for Docker content across the Udemy platform, with more than 350 courses offered and nearly three million enrollments to date.
  • LinkedIn Learning: Through LinkedIn Learning, developers can dive into curated Docker courses to earn a Docker Foundations Professional Certificate once they complete the program. These resources are not just about technical skills; they also cover best practices and practical applications, ensuring learners are job-ready.

Leveraging Docker’s documentation and guides

Although third-party platforms provide comprehensive learning paths, Docker’s own documentation and guides are indispensable tools for developers. Our documentation is continuously updated to serve as both a learning resource and a reference. From installation and configuration to advanced container orchestration and networking, Docker’s guides are designed to help you find your solution with step-by-step walk-throughs.

If it’s been a while since you’ve checked out Docker Docs, you can visit docs.docker.com to find manuals, a getting started guide, and many new use-case guides to help you with advanced applications, including generative AI and security.

Learners interested in live sessions can register for upcoming live webinars and training on the Docker Training site. There, you will find sessions where you can interact with the Docker support team and discuss best practices for using Docker Scout and Docker Admin.

The role of community in learning

Docker’s community is a vibrant ecosystem of learners, contributors, and innovators. We are thrilled to see the community creating content, hosting workshops, providing mentorship, and enriching the vast array of Docker learning resources. In particular, Docker Captains stand out for their expertise and dedication to sharing knowledge. From James Spurin’s Dive Into Docker course, to Nana Janashia’s Docker Crash Course, to Vladimir Mikhalev’s blog with guided IT solutions using Docker (just to name a few), it’s clear there’s much to learn from within the community.

We encourage developers to join the community and participate in conversations to seek advice, share knowledge, and collaborate on projects. You can also check out the Docker Community forums and join the Slack community to connect with other members of the community.

Conclusion

Docker’s holistic approach to developer learning underscores our commitment to empowering developers with knowledge and skills. By combining our comprehensive documentation and guides with top learning platform partnerships and an active community, we offer developers a robust framework for learning and growth. We encourage you to use all of these resources together to build a solid foundation of knowledge that is enhanced with new perspectives and additional insights as new learning offerings continue to be added.

Whether you’re a novice eager to explore the world of containers or a seasoned pro looking to refine your expertise, Docker’s learning ecosystem is designed to support your journey every step of the way.

Join us in this continuous learning journey, and come learn with Docker.

Learn more

Revolutionize Your CI/CD Pipeline: Integrating Testcontainers and Bazel https://www.docker.com/blog/revolutionize-your-ci-cd-pipeline-integrating-testcontainers-and-bazel/ Thu, 29 Feb 2024 15:00:00 +0000 https://www.docker.com/?p=51429 One of the challenges in modern software development is being able to release software often and with confidence. This can only be achieved when you have a good CI/CD setup in place that can test your software and release it with minimal or even no human intervention. But modern software applications also use a wide range of third-party dependencies and often need to run on multiple operating systems and architectures. 

In this post, I will explain how the combination of Bazel and Testcontainers helps developers build and release software by providing a hermetic build system.


Using Bazel and Testcontainers together

Bazel is an open source build tool developed by Google to build and test multi-language, multi-platform projects. Several big IT companies have adopted monorepos for various reasons, such as:

  • Code sharing and reusability 
  • Cross-project refactoring 
  • Consistent builds and dependency management 
  • Versioning and release management

With its multi-language support and focus on reproducible builds, Bazel shines in building such monorepos.

A key concept of Bazel is hermeticity, which means that when all inputs are declared, the build system can know when an output needs to be rebuilt. This approach brings determinism: given the same input source code and product configuration, the build system will always return the same output, because the build is isolated from changes to the host system.

Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.
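
If you have not used Testcontainers before, here is a minimal, self-contained Java sketch (not taken from the demo project; the class name and image tag are illustrative) showing how a throwaway PostgreSQL container can be started and inspected:

import org.testcontainers.containers.PostgreSQLContainer;

public class ThrowawayPostgresExample {
    public static void main(String[] args) {
        // Start a disposable PostgreSQL instance; the container is stopped and
        // removed automatically when the try-with-resources block exits.
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16")) {
            postgres.start();
            System.out.println("JDBC URL: " + postgres.getJdbcUrl());
        }
    }
}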

Using Bazel and Testcontainers together offers the following features:

  • Bazel can build projects using different programming languages like C, C++, Java, Go, Python, Node.js, etc.
  • Bazel can dynamically provision the isolated build/test environment with desired language versions.
  • Testcontainers can provision the required dependencies as Docker containers so that your test suite is self-contained. You don’t have to manually pre-provision the necessary services, such as databases, message brokers, and so on. 
  • All the test dependencies can be expressed through code using Testcontainers APIs, and you avoid the risk of breaking hermeticity by sharing such resources between tests.

Let’s see how we can use Bazel and Testcontainers to build and test a monorepo with modules using different languages.
We are going to explore a monorepo with a customers module, which uses Java, and a products module, which uses Go. Both modules interact with relational databases (PostgreSQL) and use Testcontainers for testing.

Getting started with Bazel

To begin, let’s get familiar with Bazel’s basic concepts. The best way to install Bazel is by using Bazelisk. Follow the official installation instructions to install Bazelisk. Once it’s installed, you should be able to run the Bazelisk version and Bazel version commands:

$ brew install bazelisk
$ bazel version

Bazelisk version: v1.12.0
Build label: 7.0.0

Before you can build a project using Bazel, you need to set up its workspace. 

A workspace is a directory that holds your project’s source files and contains the following files:

  • The WORKSPACE.bazel file, which identifies the directory and its contents as a Bazel workspace and lives at the root of the project’s directory structure.
  • A MODULE.bazel file, which declares dependencies on Bazel plugins (called “rulesets”).
  • One or more BUILD (or BUILD.bazel) files, which describe the sources and dependencies for different parts of the project. A directory within the workspace that contains a BUILD file is a package.

In the simplest case, a MODULE.bazel file can be an empty file, and a BUILD file can contain one or more generic targets as follows:

genrule(
    name = "foo",
    outs = ["foo.txt"],
    cmd_bash = "sleep 2 && echo 'Hello World' >$@",
)

genrule(
    name = "bar",
    outs = ["bar.txt"],
    cmd_bash = "sleep 2 && echo 'Bye bye' >$@",
)

Here, we have two targets: foo and bar. Now we can build those targets using Bazel as follows:

$ bazel build //:foo <- runs only foo target, // indicates root workspace
$ bazel build //:bar <- runs only bar target
$ bazel build //... <- runs all targets

Configuring the Bazel build in a monorepo

We are going to explore using Bazel in the testcontainers-bazel-demo repository. This repository is a monorepo with a customers module using Java and a products module using Go. Its structure looks like the following:

testcontainers-bazel-demo
|____customers
| |____BUILD.bazel
| |____src
|____products
| |____go.mod
| |____go.sum
| |____repo.go
| |____repo_test.go
| |____BUILD.bazel
|____MODULE.bazel

Bazel uses different rules for building different types of projects: rules_java for building Java packages, rules_go for building Go packages, rules_python for building Python packages, and so on.

We may also need to load additional rules providing additional features. For building Java packages, we may want to use external Maven dependencies and use JUnit 5 for running tests. In that case, we should load rules_jvm_external to be able to use Maven dependencies. 

We are going to use Bzlmod, the new external dependency subsystem, to load the external dependencies. In the MODULE.bazel file, we can load the additional rules_jvm_external and contrib_rules_jvm as follows:

bazel_dep(name = "contrib_rules_jvm", version = "0.21.4")
bazel_dep(name = "rules_jvm_external", version = "5.3")

maven = use_extension("@rules_jvm_external//:extensions.bzl", "maven")
maven.install(
   name = "maven",
   artifacts = [
       "org.postgresql:postgresql:42.6.0",
       "ch.qos.logback:logback-classic:1.4.6",
       "org.testcontainers:postgresql:1.19.3",
       "org.junit.platform:junit-platform-launcher:1.10.1",
       "org.junit.platform:junit-platform-reporting:1.10.1",
       "org.junit.jupiter:junit-jupiter-api:5.10.1",
       "org.junit.jupiter:junit-jupiter-params:5.10.1",
       "org.junit.jupiter:junit-jupiter-engine:5.10.1",
   ],
)
use_repo(maven, "maven")

Let’s understand the above configuration in the MODULE.bazel file:

  • We have loaded the rules_jvm_external rules from Bazel Central Registry and loaded extensions to use third-party Maven dependencies.
  • We have configured all our Java application dependencies using Maven coordinates in the maven.install artifacts configuration.
  • We are loading the contrib_rules_jvm rules, which support running JUnit 5 tests as a suite.

Now, we can run the @maven//:pin program to create a JSON lockfile of the transitive dependencies, in a format that rules_jvm_external can use later:

bazel run @maven//:pin

Rename the generated file rules_jvm_external~4.5~maven~maven_install.json to maven_install.json. Now update the MODULE.bazel to reflect that we pinned the dependencies.

Add a lock_file attribute to the maven.install() and update the use_repo call to also expose the unpinned_maven repository used to update the dependencies:

maven.install(
    ...
    lock_file = "//:maven_install.json",
)

use_repo(maven, "maven", "unpinned_maven")

Now, when you update any dependencies, you can run the following command to update the lock file:

bazel run @unpinned_maven//:pin

Let’s configure our build targets in the customers/BUILD.bazel file, as follows:

load(
 "@bazel_tools//tools/jdk:default_java_toolchain.bzl",
 "default_java_toolchain", "DEFAULT_TOOLCHAIN_CONFIGURATION", "BASE_JDK9_JVM_OPTS", "DEFAULT_JAVACOPTS"
)

default_java_toolchain(
 name = "repository_default_toolchain",
 configuration = DEFAULT_TOOLCHAIN_CONFIGURATION,
 java_runtime = "@bazel_tools//tools/jdk:remotejdk_17",
 jvm_opts = BASE_JDK9_JVM_OPTS + ["--enable-preview"],
 javacopts = DEFAULT_JAVACOPTS + ["--enable-preview"],
 source_version = "17",
 target_version = "17",
)

load("@rules_jvm_external//:defs.bzl", "artifact")
load("@contrib_rules_jvm//java:defs.bzl", "JUNIT5_DEPS", "java_test_suite")

java_library(
   name = "customers-lib",
   srcs = glob(["src/main/java/**/*.java"]),
   deps = [
       artifact("org.postgresql:postgresql"),
       artifact("ch.qos.logback:logback-classic"),
   ],
)

java_library(
   name = "customers-test-resources",
   resources = glob(["src/test/resources/**/*"]),
)

java_test_suite(
   name = "customers-lib-tests",
   srcs = glob(["src/test/java/**/*.java"]),
   runner = "junit5",
   test_suffixes = [
       "Test.java",
       "Tests.java",
   ],
   runtime_deps = JUNIT5_DEPS,
   deps = [
       ":customers-lib",
       ":customers-test-resources",
       artifact("org.junit.jupiter:junit-jupiter-api"),
       artifact("org.junit.jupiter:junit-jupiter-params"),
       artifact("org.testcontainers:postgresql"),
   ],
)

Let’s understand this BUILD configuration:

  • We have loaded default_java_toolchain and then configured the Java version to 17.
  • We have configured a java_library target with the name customers-lib that will build the production jar file.
  • We have defined a java_test_suite target with the name customers-lib-tests to define our test suite, which will execute all the tests. We also configured the dependencies on the other target customers-lib and external dependencies.
  • We also defined another target with the name customers-test-resources to add non-Java sources (e.g., logging config files) to our test suite target as a dependency.

In the customers package, we have a CustomerService class that stores and retrieves customer details in a PostgreSQL database, and a CustomerServiceTest class that tests the CustomerService methods using Testcontainers. Take a look at the GitHub repository for the complete code.
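
To give a feel for what such a test looks like, here is a simplified, self-contained sketch in the same spirit as CustomerServiceTest; the real test exercises the CustomerService methods, so treat the class name, table, and assertions below as illustrative:

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import static org.junit.jupiter.api.Assertions.assertEquals;

class CustomersPostgresTest {

    // One PostgreSQL container shared by all tests in this class
    static final PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");

    @BeforeAll
    static void startContainer() {
        // Pulls the image if needed and waits until PostgreSQL accepts connections
        postgres.start();
    }

    @AfterAll
    static void stopContainer() {
        postgres.stop();
    }

    @Test
    void shouldStoreAndRetrieveCustomers() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE customers (id BIGINT PRIMARY KEY, name TEXT)");
            stmt.execute("INSERT INTO customers VALUES (1, 'George')");
            try (ResultSet rs = stmt.executeQuery("SELECT count(*) FROM customers")) {
                rs.next();
                assertEquals(1, rs.getInt(1));
            }
        }
    }
}

Because the container lifecycle is declared in the test code itself, the bazel test targets need nothing pre-provisioned beyond a container runtime, which keeps the test suite self-contained.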

Note: You can use Gazelle, which is a Bazel build file generator, to generate the BUILD.bazel files instead of manually writing them.

Running Testcontainers tests

For running Testcontainers tests, we need a Testcontainers-supported container runtime. Let’s assume you have Docker installed locally via Docker Desktop.

Now, with our Bazel build configuration, we are ready to build and test the customers package:

# to run all build targets of customers package
$ bazel build //customers/...

# to run a specific build target of customers package
$ bazel build //customers:customers-lib

# to run all test targets of customers package
$ bazel test //customers/...

# to run a specific test target of customers package
$ bazel test //customers:customers-lib-tests

When you run the build for the first time, it will take some time to download the required dependencies and then execute the targets. But if you build or test again without any code or configuration changes, Bazel will not re-run the build or tests and will show the cached results instead. Bazel has a powerful caching mechanism that detects code changes and runs only the targets that actually need to run.

While using Testcontainers, you define the required dependencies as part of the code using Docker image names along with tags, such as postgres:16. So, unless you change the code (e.g., the Docker image name or tag), Bazel will cache the test results.

Similarly, we can use rules_go and Gazelle for configuring Bazel build for Go packages. Take a look at the MODULE.bazel and products/BUILD.bazel files to learn more about configuring Bazel in a Go package.

As mentioned earlier, we need a Testcontainers-supported container runtime for running Testcontainers tests. Installing Docker on some CI platforms can be challenging and might require a cumbersome Docker-in-Docker setup. Additionally, some Docker images might not be compatible with the operating system architecture (e.g., Apple M1). 

Testcontainers Cloud solves these problems by eliminating the need to have Docker on the local host or CI runners, transparently running the containers on cloud VMs instead.

Here is an example of running the Testcontainers tests using Bazel on Testcontainers Cloud using GitHub Actions:

name: CI

on:
 push:
   branches:
     - '**'

jobs:
 build:
   runs-on: ubuntu-latest
   steps:
   - uses: actions/checkout@v4

   - name: Configure TestContainers cloud
     uses: atomicjar/testcontainers-cloud-setup-action@main
     with:
       wait: true
       token: ${{ secrets.TC_CLOUD_TOKEN }}

   - name: Cache Bazel
     uses: actions/cache@v3
     with:
       path: |
         ~/.cache/bazel
       key: ${{ runner.os }}-bazel-${{ hashFiles('.bazelversion', '.bazelrc', 'WORKSPACE', 'WORKSPACE.bazel', 'MODULE.bazel') }}
       restore-keys: |
         ${{ runner.os }}-bazel-

   - name: Build and Test
     run: bazel test --test_output=all //...

GitHub Actions runners already come with Bazelisk installed, so we can use Bazel out of the box. We have configured the TC_CLOUD_TOKEN environment variable through Secrets and started the Testcontainers Cloud agent. If you check the build logs, you can see that the tests are executed using Testcontainers Cloud.

Summary

We have shown how to use the Bazel build system to build and test monorepos with multiple modules using different programming languages. Combined with Testcontainers, you can make the builds self-contained and hermetic.

Although Bazel and Testcontainers help us have a self-contained build, we need to take extra measures to make it a hermetic build: 

  • Bazel can be configured to use a specific version of SDK, such as JDK 17, Go 1.20, etc., so that builds always use the same version instead of what is installed on the host machine. 
  • For Testcontainers tests, using the latest Docker tag for container dependencies may result in non-deterministic behavior. Also, some Docker image publishers overwrite existing images using the same tag. To make builds and tests deterministic, always use the Docker image digest so that they run against exactly the same image version, which gives reproducible and hermetic builds (see the short sketch after this list).
  • Using Testcontainers Cloud for running Testcontainers tests reduces the complexity of Docker setup and gives a deterministic container runtime environment.
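
As a sketch of the digest recommendation above, pinning the PostgreSQL image used by Testcontainers could look like the following; the digest value is a placeholder that you would replace with the real digest of the image you want to pin (for example, as reported by docker images --digests):

import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.utility.DockerImageName;

public class PinnedPostgresExample {
    // Placeholder digest: substitute the actual digest of the image version you want to pin.
    static final DockerImageName POSTGRES_BY_DIGEST =
            DockerImageName.parse("postgres@sha256:<digest>");

    public static void main(String[] args) {
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>(POSTGRES_BY_DIGEST)) {
            postgres.start();
            System.out.println("Running " + postgres.getDockerImageName());
        }
    }
}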

Visit the Testcontainers website to learn more, and get started with Testcontainers Cloud by creating a free account.

Learn more

Docker Desktop 4.28: Enhanced File Sharing and Security Plus Refined Builds View in Docker Build Cloud https://www.docker.com/blog/docker-desktop-4-28/ Wed, 28 Feb 2024 14:00:00 +0000 https://www.docker.com/?p=52486 Docker Desktop 4.28 introduces updates to file-sharing controls, focusing on security and administrative ease. Responding to feedback from our business users, this update brings refined file-sharing capabilities and path allow-listing, aiming to simplify management and enhance security for IT administrators and users alike.

Along with our investments in bringing access to cloud resources within the local Docker Desktop experience with Docker Build Cloud Builds view, this release provides a more efficient and flexible platform for development teams.


Introducing enhanced file-sharing controls in Docker Desktop Business 

As we continue to innovate and elevate the Docker experience for our business customers, we’re thrilled to unveil significant upgrades to Docker Desktop’s Hardened Desktop feature. Recognizing the importance of administrative control over Docker Desktop settings, we’ve listened to your feedback and are introducing enhancements that prioritize security and ease of use.

For IT administrators and non-admin users, Docker now offers the much-requested capability to specify and manage file-sharing options directly via Settings Management (Figure 1). This includes:

  • Selective file sharing: Choose your preferred file-sharing implementation directly from Settings > General, where you can choose between VirtioFS, gRPC FUSE, or osxfs. VirtioFS is only available for macOS versions 12.5 and above and is turned on by default.
  • Path allow-listing: Precisely control which paths users can share files from, enhancing security and compliance across your organization.
Figure 1: Enhanced file-sharing settings in Docker Desktop Settings Management.

We’ve also reimagined the Settings > Resources > File Sharing interface to enhance your interaction with Docker Desktop (Figure 2). You’ll notice:

  • Clearer error messaging: Quickly understand and rectify issues with enhanced error messages.
  • Intuitive action buttons: Experience a smoother workflow with redesigned action buttons, making your Docker Desktop interactions as straightforward as possible.
Figure 2: Displaying settings management in Docker Desktop to notify business subscribers of their access rights.

These enhancements are not just about improving current functionalities; they’re about unlocking new possibilities for your Docker experience. From increased security controls to a more navigable interface, every update is designed with your efficiency in mind.

Refining development with Docker Desktop’s Builds view update 

Docker Desktop’s previous update introduced Docker Build Cloud integration, aimed at reducing build times and improving build management. In this release, we’re landing incremental updates that refine the Builds view, making it easier and faster to manage your builds.

New in Docker Desktop 4.28:

  • Dedicated tabs: Separates active from completed builds for better organization (Figure 3).
  • Build insights: Displays build duration and cache steps, offering more clarity on the build process.
  • Reliability fixes: Resolves issues with updates for a more consistent experience.
  • UI improvements: Updates the empty state view for a clearer dashboard experience (Figure 4).

These updates are designed to streamline the build management process within Docker Desktop, leveraging Docker Build Cloud for more efficient builds.

Figure 3: Dedicated tabs for Build history vs. Active builds to allow more space for inspecting your builds.
Figure 4: Updated view supporting empty state — no Active builds.

To explore how Docker Desktop and Docker Build Cloud can optimize your development workflow, read our Docker Build Cloud blog post. Experience the latest Builds view update to further enrich your local, hybrid, and cloud-native development journey.

These Docker Desktop updates support improved platform security and a better user experience. By introducing more detailed file-sharing controls, we aim to provide developers with a more straightforward administration experience and secure environment. As we move forward, we remain dedicated to refining Docker Desktop to meet the evolving needs of our users and organizations, enhancing their development workflows and agility to innovate.

Join the conversation and make your mark

Dive into the dialogue and contribute to the evolution of Docker Desktop. Use our feedback form to share your thoughts and let us know how to improve the Hardened Desktop features. Your input directly influences the development roadmap, ensuring Docker Desktop meets and exceeds our community and customers’ needs.

Learn more

Docker Desktop 4.27: Synchronized File Shares, Docker Init GA, Private Extensions Marketplace, Moby 25, Support for Testcontainers with ECI, Docker Build Cloud, and Docker Debug Beta https://www.docker.com/blog/docker-desktop-4-27/ Fri, 09 Feb 2024 14:17:02 +0000 https://www.docker.com/?p=51234 We’re pleased to announce Docker Desktop 4.27, packed with exciting new features and updates. The new release includes key advancements such as synchronized file shares, collaboration enhancements in Docker Build Cloud, the introduction of the private marketplace for extensions (available for Docker Business customers), and the much-anticipated release of Moby 25.

Additionally, we explore the support for Testcontainers with Enhanced Container Isolation, the general availability of docker init with expanded language support, and the beta release of Docker Debug. These updates represent significant strides in improving development workflows, enhancing security, and offering advanced customization for Docker users.


Docker Desktop synchronized file shares GA

We’re diving into some fantastic updates for Docker Desktop, and we’re especially thrilled to introduce our latest feature, synchronized file shares, which is available now in version 4.27 (Figure 1). Following our acquisition announcement in June 2023, we have integrated the technology behind Mutagen into the core of Docker Desktop.

You can now say goodbye to the challenges of using large codebases in containers with virtual filesystems. Synchronized file shares unlock native filesystem performance for bind mounts and provide a remarkable 2-10x boost in file operation speeds. For developers managing extensive codebases, this is a game-changer.

Figure 1: Shares have been created and are available for use in containers.

To get started, log in to Docker Desktop with your subscription account (Pro, Teams, or Business) to harness the power of Docker Desktop synchronized file shares. You can read more about this feature in the Docker documentation.

Collaborate on shared Docker Build Cloud builds in Docker Desktop

With the recent GA of Docker Build Cloud, your team can now leverage Docker Desktop to use powerful cloud-based build machines and shared caching to reduce unnecessary rebuilds and get your build done in a fraction of the time, regardless of your local physical hardware.

New builds can make instant use of the shared cache. Even if this is your first time building the project, you can immediately speed up build times with shared caches.

We know that team members have varying levels of Docker expertise. When a new developer has issues with their build failing, the Builds view makes it effortless for anyone on the team to locate the troublesome build using search and filtering. They can then collaborate on a fix and get unblocked in no time.

When all your team is building on the same cloud builder, it can get noisy, so we added filtering by specific build types, helping you focus on the builds that are important to you.

Link to builder settings for a build

Previously, to access builder settings, you had to jump back to the build list or the settings page, but now you can access them directly from a build (Figure 2).

Figure 2: Access builder settings directly from a build.

Delete build history for a builder

Also, until now you could only delete builds in batches, which meant that clearing the build history required a lot of clicks. This update enables you to clear all builds easily (Figure 3).

Figure 3: Painlessly clear the build history for an individual builder.

Refresh storage data for your builder at any point in time

Refreshing the storage data is an intensive operation, so it only happens periodically. Previously, when you were clearing data, you would have to wait a while to see the update. Now it’s just a one-click process (Figure 4).

Figure 4: Quickly refresh storage data for a builder to get an up-to-date view of your usage.

New feature: Private marketplace for extensions available for Docker Business subscribers

Docker Business customers now have exclusive access to a new feature: the private marketplace for extensions. This enhancement focuses on security, compliance, and customization, empowering developers by providing:

  • Controlled access: Manage which extensions developers can use through allow-listing.
  • Private distribution: Easily distribute company-specific extensions from a private registry.
  • Customized development: Deploy customized team processes and tools as unpublished/private Docker extensions tailored to a specific organization.

The private marketplace for extensions enables a secure, efficient, and tailored development environment, aligning with your enterprise’s specific needs. Get started today by learning how to configure a private marketplace for extensions.

Moby 25 release — containerd image store 

We are happy to announce the release of Moby 25.0 with Docker Desktop 4.27. In case you’re unfamiliar, Moby is the open source project for Docker Engine, which ships in Docker Desktop. We have dedicated significant effort to this release, which marks a major release milestone for the open source Moby project. You can read a comprehensive list of enhancements in the v25.0.0 release notes.

With the release of Docker Desktop 4.27,  support for the containerd image store has graduated from beta to general availability. This work began in September 2022 when we started extending the Docker Engine integration with containerd, so we are excited to have this functionality reach general availability.

This support provides a more robust user experience by natively storing and building multi-platform images and using snapshotters for lazy pulling images (e.g., stargz) and peer-to-peer image distribution (e.g., dragonfly, nydus). It also provides a foundation for you to run Wasm containers (currently in beta). 

Using the containerd image store is not currently enabled by default for all users but can be enabled in the general settings in Docker Desktop under Use containerd for pulling and storing images (Figure 5).

Figure 5: Enable use of the containerd image store in the general settings in Docker Desktop.

Going forward, we will continue improving the user experience of pushing, pulling, and storing images with the containerd image store, help migrate user images to use containerd, and work toward enabling it by default for all users. 

As always, you can try any of the features landing in Moby 25 in Docker Desktop.

Support for Testcontainers with Enhanced Container Isolation

Docker Desktop 4.27 introduces the ability to use the popular Testcontainers framework with Enhanced Container Isolation (ECI). 

ECI, which is available to Docker Business customers, provides an additional layer of security that prevents malicious workloads running in containers from compromising Docker Desktop or the host. It does this by running containers without root access to the Docker Desktop VM, by vetting sensitive system calls inside containers, and through other advanced techniques. It’s meant to better secure local development environments. 

Before Docker Desktop 4.27, ECI blocked mounting the Docker Engine socket into containers to increase security and prevent malicious containers from gaining access to Docker Engine. However, this also prevented legitimate scenarios (such as Testcontainers) from working with ECI.   

Starting with Docker Desktop 4.27, admins can now configure ECI to allow Docker socket mounts, but in a controlled way (e.g., on trusted images of their choice) and even restrict the commands that may be sent on that socket. This functionality, in turn, enables users to enjoy the combined benefits of frameworks such as Testcontainers (or any others that require containers to access the Docker engine socket) with the extra security and peace of mind provided by ECI.

Docker init GA with Java support 

Initially released in its beta form in Docker 4.18, docker init has undergone several enhancements. The docker init command-line utility aids in the initialization of Docker resources within a project. It automatically generates Dockerfiles, Compose files, and .dockerignore files based on the nature of the project, significantly reducing the setup time and complexity associated with Docker configurations. 

The initial beta release of docker init only supported Go and generic projects. The latest version, available in Docker 4.27, supports Go, Python, Node.js, Rust, ASP.NET, PHP, and Java (Figure 6).

Figure 6: Docker init will suggest the best template for the application.

The general availability of docker init offers an efficient and user-friendly way to integrate Docker into your projects. Whether you’re a seasoned Docker user or new to containerization, docker init is ready to enhance your development workflow. 

Beta release of Docker Debug 

As previously announced at DockerCon 2023, Docker Debug is now available as a beta offering in Docker Desktop 4.27.

Figure 7: Docker Debug.

Developers can spend as much as 60% of their time debugging their applications, with much of that time taken up by sorting out and configuring tools rather than actually debugging. Docker Debug (available in Pro, Teams, or Business subscriptions) provides a language-independent, integrated toolbox for debugging local and remote containerized apps — even when the container fails to launch — enabling developers to find and solve problems faster.

To get started, run docker debug <Container or Image name> in the Docker Desktop CLI while logged in with your subscription account.

Conclusion

Docker Desktop’s latest updates and features, from synchronized file shares to the first beta release of Docker Debug, reflect our ongoing commitment to enhancing developer productivity and operational efficiency. Integrating these capabilities into Docker Desktop streamlines development processes and empowers teams to collaborate more effectively and securely. As Docker continues to evolve, we remain dedicated to providing our community and customers with innovative solutions that address the dynamic needs of modern software development.

Stay tuned for further updates and enhancements, and as always, we encourage you to explore these new features to see how they can benefit your development workflow.

Upgrade to Docker Desktop 4.27 to explore these updates and experiment with Docker’s latest features.

Learn more

See 2-10x Faster File Operation Speeds with Synchronized File Shares in Docker Desktop https://www.docker.com/blog/announcing-synchronized-file-shares/ Tue, 06 Feb 2024 15:31:17 +0000 https://www.docker.com/?p=51170 We are happy to announce that Mutagen’s file-sharing technology, acquired by Docker, has been seamlessly integrated into Docker Desktop, and the synchronized file shares feature is available now in Docker Desktop. This enhancement brings fast and flexible host-to-VM file sharing, offering a performance boost for developers dealing with extensive codebases.

Synchronized file shares overcome the limitations of traditional bind mounts, providing native file system performance, so developers can enjoy 2-10x faster file operation speeds. Simply log in to Docker Desktop with your subscription account (Docker Pro, Teams, or Business) to experience this new time-saving feature.


Improving the developer experience 

Synchronized file shares transform the backend developer experience, increasing developer productivity through the time saved compared to traditional file-sharing approaches. Synchronized file sharing is ideal for developers who:

  • Manage large repositories or monorepos with more than 100,000 files, totaling significant storage.
  • Utilize virtual file systems (such as VirtioFS, gRPC FUSE, or osxfs) and face scalability issues with their workflows.
  • Encounter performance limitations and want a seamless file-sharing solution without worrying about ownership conflicts.

To get started, go to Settings and navigate to the File sharing tab within the Resources section (Figure 1). You can learn more about the functionality and how to use it in our documentation.

Figure 1: File sharing — shares have been created and are available for use in containers.

How Docker solves the problem 

Using synchronized file system caches to improve bind mount performance isn’t a new idea, but this functionality has never been available to developers as an ergonomic first-party solution. With Docker’s acquisition of Mutagen, we’re now in a position to offer an easy-to-use and transparent mechanism with potentially order-of-magnitude improvements to developer workflows.

Bind mounts are the mechanism that Linux uses to make files (like code, scripts, and images) available to containers. They’re what you get when you specify a host path to the -v/--volume flag in docker run or docker create commands (or a host path under volumes: in Compose). If folders are bind-mounted in read/write mode (the default), they also allow containers to write back to the host file system, which is great for getting files (like build products) out of containers.

When using containers natively on Linux, for example with Docker Engine, this functionality is enabled by the Linux kernel and comes with no performance impact. When using a cross-platform solution like Docker Desktop, the necessity of virtualization means that an additional file-sharing mechanism between the host system and the Linux VM is required to enable bind mounts.

Historically, Docker has used a number of virtual file system solutions to enable this host/VM file sharing, with different solutions available based on the host platform. The most recent of these mechanisms, VirtioFS, provides an excellent out-of-the-box file-sharing solution for most developers and projects, and we’re continuing to invest in further performance improvements. These virtual file systems operate by running a file server on the host, providing files on demand via FUSE-backed file systems within the VM.

Although virtual file systems work great for most cases, there are projects where additional performance is required. In cases where a project contains many thousands (or even millions) of files totaling hundreds of megabytes or gigabytes, the demanding system calls used by development tools can lead to extremely slow behavior. 

Your project might fall into this category even if it contains only a single file — look at the staggering tree of dependencies that modern frameworks bring into your node_modules directory, for example.  Modern developer tools like compilers, dynamic language runtimes, and package managers love to traverse file systems, issuing thousands or millions of readdir(), stat(), and open()/read()/write()/close() calls. With virtual file systems, each of these system calls has to be sent across the host/VM boundary (in addition to incurring the standard round trips between kernel space and user space within the Linux VM when using the FUSE stack).

Using synchronized file shares

This is where synchronized file shares come into play. With synchronized file shares, developers can create ext4-backed caches of host file system locations inside the Docker Desktop VM. This means all those expensive file system calls are now handled directly by the Linux kernel on a native file system. These caches are kept in sync with the host file system using the Mutagen file synchronization engine, so the files are propagated bidirectionally with ultra-low latency. For most developers, there should be no perceptible difference in the file-sharing experience, other than improved performance!

So what’s the trade-off? Well, you’ll pay to store the files twice (the originals on the host and the cache inside the VM). Given the relatively low cost of disk space, compared with the high cost of developer time, this trade-off is usually a no-brainer.

To keep you in control of what gets synced, we’ve made synchronized file shares a granular, opt-in experience (we don’t want to sync your entire hard drive by default). We’ve worked hard to make this step as easy as possible — select Create share in the File sharing settings pane and choose the location you want.

The opt-in nature of synchronized file shares also makes it easy to adopt either gradually or selectively — there’s no need to impose changes on your entire team. Any bind mount that can’t be provided by synchronized file shares’ caches will fall back to your default virtual file-sharing mechanism, meaning there’s no change to your existing workflows. Team members can opt-in to synchronized file shares as necessary, using the functionality as a strategic optimization for specific parts of a codebase.

Conclusion 

We’re excited about this latest time-saving feature and what it means to you — freeing up time, increasing productivity, and enabling a focus on innovation. Docker Desktop continues investing in modernizing the developer experience, and synchronized file shares is the latest enhancement. 

Learn more  
