A Quick Guide to Containerizing Llamafile with Docker for AI Applications
https://www.docker.com/blog/a-quick-guide-to-containerizing-llamafile-with-docker-for-ai-applications/ (May 15, 2024)

This post was contributed by Sophia Parafina.

Keeping pace with the rapid advancements in artificial intelligence can be overwhelming. Every week, new Large Language Models (LLMs), vector databases, and innovative techniques emerge, potentially transforming the landscape of AI/ML development. Our extensive collaboration with developers has uncovered numerous creative and effective strategies to harness Docker in AI development. 

This quick guide shows how to use Docker to containerize llamafile, an executable that brings together all the components needed to run an LLM chatbot in a single file, and walks you through getting a functioning chatbot running for experimentation.

Llamafile’s concept of bringing together LLMs and local execution has sparked a high level of interest in the GenAI space, as it aims to simplify the process of getting a functioning LLM chatbot running locally. 


Containerize llamafile

Llamafile is a Mozilla project that runs open source LLMs, such as Llama-2-7B, Mistral 7B, or any other model in the GGUF format. The Dockerfile below builds and containerizes llamafile, then runs it in server mode. It uses Debian trixie as the base image for the build stage, while the final output image is based on debian:stable.

To get started, copy, paste, and save the following in a file named Dockerfile.

# Use debian trixie for gcc13
FROM debian:trixie as builder

# Set work directory
WORKDIR /download

# Configure build container and build llamafile
RUN mkdir out && \
    apt-get update && \
    apt-get install -y curl git gcc make && \
    git clone https://github.com/Mozilla-Ocho/llamafile.git  && \
    curl -L -o ./unzip https://cosmo.zip/pub/cosmos/bin/unzip && \
    chmod 755 unzip && mv unzip /usr/local/bin && \
    cd llamafile && make -j8 LLAMA_DISABLE_LOGS=1 && \ 
    make install PREFIX=/download/out

# Create container
FROM debian:stable as out

# Create a non-root user
RUN addgroup --gid 1000 user && \
    adduser --uid 1000 --gid 1000 --disabled-password --gecos "" user

# Switch to user
USER user

# Set working directory
WORKDIR /usr/local

# Copy llamafile and man pages
COPY --from=builder /download/out/bin ./bin
COPY --from=builder /download/out/share ./share/man

# Expose 8080 port.
EXPOSE 8080

# Set entrypoint.
ENTRYPOINT ["/bin/sh", "/usr/local/bin/llamafile"]

# Set default command.
CMD ["--server", "--host", "0.0.0.0", "-m", "/model"]

To build the container, run:

docker build -t llamafile .

Running the llamafile container

To run the container, you first need a model such as Mistral-7B-v0.1 saved in a local model directory, which is then mounted into the container.
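If you don't already have the model locally, a quantized GGUF build can be fetched ahead of time. The Hugging Face path below is one commonly used community source and is an assumption on our part; adjust it to wherever you obtain your models:

# Download a quantized GGUF build of Mistral-7B-v0.1 into ./model
# (the URL is an example source; substitute your preferred model host).
$ mkdir -p model
$ curl -L -o ./model/mistral-7b-v0.1.Q5_K_M.gguf \
  https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/resolve/main/mistral-7b-v0.1.Q5_K_M.gguf

With the model in place, start the container: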

$ docker run -d -v ./model/mistral-7b-v0.1.Q5_K_M.gguf:/model -p 8080:8080 llamafile

With the container running, open http://localhost:8080 in your browser to reach the llama.cpp interface (Figure 1).

Figure 1: Llama.cpp is a C/C++ port of Facebook’s LLaMA model by Georgi Gerganov, optimized for efficient LLM inference across various devices, including Apple silicon, with a straightforward setup and advanced performance tuning features​.
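You can also query the model from the command line; the server exposes an OpenAI-compatible chat completions endpoint: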
$ curl -s http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."
    },
    {
      "role": "user",
      "content": "Compose a poem that explains the concept of recursion in programming."
    }
  ]
}' | python3 -c '
import json
import sys
json.dump(json.load(sys.stdin), sys.stdout, indent=2)
print()
'

Llamafile has many parameters to tune the model. You can list them with man llamafile or llamafile --help. Parameters can be set in the Dockerfile CMD directive.
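You can also override the default CMD at run time to experiment without rebuilding the image. Here is a minimal sketch; the extra flags (-c for context size, --temp for temperature) are assumed llama.cpp-style options, so check llamafile --help to confirm which your build supports:

# Override the default CMD with extra flags (flag names are assumptions;
# verify them against llamafile --help for your build).
$ docker run -d -v ./model/mistral-7b-v0.1.Q5_K_M.gguf:/model -p 8080:8080 \
  llamafile --server --host 0.0.0.0 -m /model -c 2048 --temp 0.7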

Now that you have a containerized llamafile, you can run the container with the LLM of your choice and begin your testing and development journey. 

What’s next?

To continue your AI development journey, read the Docker GenAI guide, review the additional AI content on the blog, and check out our resources.

Learn more

Automating Docker Image Builds with Pulumi and Docker Build Cloud
https://www.docker.com/blog/pulumi-and-docker-build-cloud/ (May 14, 2024)

This guest post was contributed by Diana Esteves, Solutions Architect, Pulumi.

Pulumi is an Infrastructure as Code (IaC) platform that simplifies resource management across any cloud or SaaS provider, including Docker. Pulumi providers are integrations with useful tools and vendors. Pulumi’s new Docker Build provider is about making your builds even easier, faster, and more reliable. 

In this post, we will dive into how Pulumi’s new Docker Build provider works with Docker Build Cloud to streamline building, deploying, and managing containerized applications. First, we’ll set up a project using Docker Build Cloud and Pulumi. Then, we’ll explore cool use cases that showcase how you can leverage this provider to simplify your build and deployment pipelines.  


Pulumi Docker Build provider features

Top features of the Pulumi Docker Build provider include the following:

  • Docker Build Cloud support: Offload your builds to the cloud and free up your local resources. Faster builds mean fewer headaches.
  • Multi-platform support: Build Docker images that work on different hardware architectures without breaking a sweat.
  • Advanced caching: Say goodbye to redundant builds. In addition to the shared caching available when you use Docker Build Cloud, this provider supports multiple cache backends, like Amazon S3, GitHub Actions, and even local disk, to keep your builds efficient.
  • Flexible export options: Customize where your Docker images go after they’re built — export to registries, filesystems, or wherever your workflow needs.

Getting started with Docker Build Cloud and Pulumi  

Docker Build Cloud is Docker’s newest offering that provides a pair of AMD and Arm builders in the cloud and a shared cache for your team, resulting in up to 39x faster image builds. Docker Personal, Pro, Team, and Business plans include a set number of Build Cloud minutes, or you can purchase a Build Cloud Team plan to add minutes. Learn more about Docker Build Cloud plans.

The example builds an NGINX image from a Dockerfile using a Docker Build Cloud builder. We will create a Docker Build Cloud builder, create a Pulumi program in TypeScript, and build our image.

Prerequisites

Step 1: Set up your Docker Build Cloud builder

Building images locally means being subject to local compute and storage availability. Pulumi allows users to build images with Docker Build Cloud.

The Pulumi Docker Build provider fully supports Docker Build Cloud, which unlocks new capabilities, as individual team members or a CI/CD pipeline can fully take advantage of improved build speeds, shared build cache, and native multi-platform builds.

If you still need to create a builder, follow the steps below; otherwise, skip to step 1C.

A. Log in to your Docker Build Cloud account.

B. Create a new cloud builder named my-cool-builder. 

Figure 1: Create the new cloud builder and call it my-cool-builder.

C. In your local machine, sign in to your Docker account.

$ docker login

D. Add your existing cloud builder endpoint.

$ docker buildx create --driver cloud ORG/BUILDER_NAME
# Replace ORG with the Docker Hub namespace of your Docker organization. 
# This creates a builder named cloud-ORG-BUILDER_NAME.

# Example:
$ docker buildx create --driver cloud pulumi/my-cool-builder
# cloud-pulumi-my-cool-builder

# check your new builder is configured
$ docker buildx ls

E. Optionally, see that your new builder is available in Docker Desktop.

Figure 2: The Builders view in the Docker Desktop settings lists all available local and Docker Build Cloud builders available to the logged-in account.

For additional guidance on setting up Docker Build Cloud, refer to the Docker docs.

Step 2: Set up your Pulumi project

To create your first Pulumi project, start with a Pulumi template. Pulumi has curated hundreds of templates that are directly integrated with the Pulumi CLI via pulumi new. In particular, the Pulumi team has created a Pulumi template for Docker Build Cloud to get you started.

The Pulumi programming model centers around defining infrastructure using popular programming languages. This approach allows you to leverage existing programming tools and define cloud resources using familiar syntaxes such as loops and conditionals.

To copy the Pulumi template locally:

$ pulumi new https://github.com/pulumi/examples/tree/master/dockerbuildcloud-ts --dir hello-dbc
# project name: hello-dbc 
# project description: (default)
# stack name: dev
# Note: Update the builder value to match yours
# builder: cloud-pulumi-my-cool-builder 
$ cd hello-dbc

# update all npm packages (recommended)
$ npm update --save

Optionally, explore your Pulumi program. The hello-dbc folder has everything you need to build a Dockerfile into an image with Pulumi. Your Pulumi program starts with an entry point, typically a function written in your chosen programming language. This function defines the infrastructure resources and configurations for your project. For TypeScript, that file is index.ts, and the contents are shown below:

import * as dockerBuild from "@pulumi/docker-build";
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();
const builder = config.require("builder");

const image = new dockerBuild.Image("image", {

   // Configures the name of your existing buildx builder to use.
   // See the Pulumi.<stack>.yaml project file for the builder configuration.
   builder: {
       name: builder, // Example, "cloud-pulumi-my-cool-builder",
   },
   context: {
       location: "app",
   },
   // Enable exec to run a custom docker-buildx binary with support
   // for Docker Build Cloud (DBC).
   exec: true,
   push: false,
});

Step 3: Build your Docker image

Run the pulumi up command to see the image being built with the newly configured builder:

$ pulumi up --yes

You can follow the browser link to the Pulumi Cloud dashboard and navigate to the Image resource to confirm it’s properly configured by noting the builder parameter.

Figure 3: Navigate to the Image resource to check the configuration.

Optionally, also check your Docker Build Cloud dashboard for build minutes usage:

Figure 4: The build.docker.com dashboard, with Cloud builders selected in the left menu and the product dashboard shown on the right.

Congratulations! You have built an NGINX Dockerfile with Docker Build Cloud and Pulumi. This was achieved by creating a new Docker Build Cloud builder and passing that to a Pulumi template. The Pulumi CLI is then used to deploy the changes.

Advanced use cases with buildx and BuildKit

To showcase popular buildx and BuildKit features, test one or more of the following Pulumi code samples. These include multi-platform builds, advanced caching, and exports. Note that each feature is available as an input (or parameter) on the Pulumi Docker Build Image resource.

Multi-platform image builds for Docker Build Cloud

Docker images can support multiple platforms, meaning a single image may contain variants for architectures and operating systems. 

The following code snippet is analogous to invoking a build from the Docker CLI with the --platform flag to specify the target platform for the build output.

import * as dockerBuild from "@pulumi/docker-build";

const image = new dockerBuild.Image("image", {
   // Build a multi-platform image manifest for ARM and AMD.
   platforms: [
       dockerBuild.Platform.Linux_amd64,
       dockerBuild.Platform.Linux_arm64,
   ],
   push: false,
});
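For comparison, this is analogous to the following CLI invocation against the cloud builder created earlier (the builder name is the one from step 1):

# Equivalent multi-platform build from the Docker CLI.
$ docker buildx build \
  --builder cloud-pulumi-my-cool-builder \
  --platform linux/amd64,linux/arm64 \
  .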

Deploy the changes made to the Pulumi program:

$ pulumi up --yes

Caching from and to AWS ECR

Maintaining cached layers while building Docker images saves precious time by enabling faster builds. However, utilizing cached layers has been historically challenging in CI/CD pipelines due to recycled environments between builds. The cacheFrom and cacheTo parameters allow programmatic builds to optimize caching behavior. 

Update your Docker image resource to take advantage of caching:

import * as dockerBuild from "@pulumi/docker-build";
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws"; // Required for ECR

// Create an ECR repository for pushing.
const ecrRepository = new aws.ecr.Repository("ecr-repository", {});

// Grab auth credentials for ECR.
const authToken = aws.ecr.getAuthorizationTokenOutput({
   registryId: ecrRepository.registryId,
});

const image = new dockerBuild.Image("image", {
   push: true,
   // Use the pushed image as a cache source.
   cacheFrom: [{
       registry: {
           ref: pulumi.interpolate`${ecrRepository.repositoryUrl}:cache`,
       },
   }],
   cacheTo: [{
       registry: {
           imageManifest: true,
           ociMediaTypes: true,
           ref: pulumi.interpolate`${ecrRepository.repositoryUrl}:cache`,
       },
   }],
   // Provide our ECR credentials.
   registries: [{
       address: ecrRepository.repositoryUrl,
       password: authToken.password,
       username: authToken.userName,
   }],
})

Notice the declaration of additional resources for AWS ECR. 
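If the stack doesn't yet have an AWS region (and credentials) configured, the ECR resources above will need them. A typical setup step looks like this (the region value is only an example):

# Set the AWS region for the current Pulumi stack (example value);
# credentials are read from your AWS profile or environment variables.
$ pulumi config set aws:region us-east-1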

Export builds as a tar file

Exporting allows us to share or back up the resulting image from a build invocation. 

To export the build as a local .tar file, modify your resource to include the exports Input:

const image = new dockerBuild.Image("image", {
   push: false,
   exports: [{
       docker: {
           tar: true,
       },
   }],
})

Deploy the changes made to the Pulumi program:

$ pulumi up --yes

Review the Pulumi Docker Build provider guide to explore other Docker Build features, such as build arguments, build contexts, remote contexts, and more.

Next steps

Infrastructure as Code (IaC) is key to managing modern cloud-native development, and Docker lets developers create and control images with Dockerfiles and Docker Compose files. But when the situation gets more complex, like deploying across different cloud platforms, Pulumi can offer additional flexibility and advanced infrastructure features. The Docker Build provider supports Docker Build Cloud, streamlining building, deploying, and managing containerized applications, which helps development teams work together more effectively and maintain agility.

Pulumi’s latest Docker Build provider, powered by BuildKit, improves flexibility and efficiency in Docker builds. By applying IaC principles, developers manage infrastructure with code, even in intricate scenarios. This means you can focus on building and deploying your containerized workloads without the hassle of complex infrastructure challenges.

Visit Pulumi’s launch announcement and the provider documentation to get started with the Docker Build provider. 

Register for the June 25 Pulumi and Docker virtual workshop: Automating Docker Image Builds using Pulumi and Docker.

Learn more

Navigating Proxy Servers with Ease: New Advancements in Docker Desktop 4.30
https://www.docker.com/blog/navigating-proxy-servers-docker-desktop-4-30/ (May 14, 2024)

Within the ecosystem of corporate networks, proxy servers stand as guardians, orchestrating the flow of information with a watchful eye toward security. These sentinels, while adding layers to navigate, play a crucial role in safeguarding an organization’s digital boundaries and keeping its network denizens — developers and admins alike — secure from external threats.

Recognizing proxy servers’ critical position, Docker Desktop 4.30 offers new enhancements, especially on the Windows front, to ensure seamless integration and interaction within these secured environments.


Traditional approach

The realm of proxy servers is intricate, a testament to their importance in modern corporate infrastructure. They’re not just barriers but sophisticated filters and conduits that enhance security, optimize network performance, and ensure efficient internet traffic management. In this light, the dance of authentication — while complex — is necessary to maintain this secure environment, ensuring that only verified users and applications gain access.

Traditionally, Docker Desktop approached corporate networks with a single option: basic authentication. Although functional, this approach often felt like navigating with an outdated map. It was a method that, while simple, sometimes led to moments of vulnerability and the occasional hiccup in access for those venturing into more secure or differently configured spaces within the network. 

This approach could also create roadblocks for users and admins, such as:

  • Repeated login prompts: A constant buzzkill.
  • Security faux pas: Your credentials, base64 encoded, might as well be on a billboard.
  • Access denied: Use a different authentication method? Docker Desktop is out of the loop.
  • Workflow whiplash: Nothing like a login prompt to break your coding stride.
  • Performance hiccups: Waiting on auth can slow down your Docker development endeavors.

Seamless interaction

Enter Docker Desktop 4.30, where the roadblocks are removed. Embracing the advanced authentication protocols of Kerberos and NTLM, Docker Desktop now ensures a more secure, seamless interaction with corporate proxies while creating a streamlined and frictionless experience. 

This upgrade is designed to help you easily navigate the complexities of proxy authentication, providing a more intuitive and unobtrusive experience that both developers and admins can appreciate:

  • Invisible authentication: Docker Desktop handles the proxy handshake behind the scenes.
  • No more interruptions: Focus on your code, not on login prompts.
  • Simplicity: No extra steps compared to basic auth. 
  • Performance perks: Less time waiting, more time doing.

A new workflow with Kerberos authentication scheme is shown in Figure 1:

Figure 1: Workflow with Kerberos authentication: connect, client authenticates, get service ticket, access service.

A new workflow with NTLM auth scheme is shown in Figure 2:

Figure 2: Workflow with NTLM authentication: auth request, NTLM challenge, NTLM response, confirm/deny, connect to service.

Integrating Docker Desktop into environments guarded by NTLM or Kerberos proxies no longer feels like a challenge but an opportunity. 

With Docker Desktop 4.30, we’re committed to facilitating this transition, prioritizing secure, efficient workflows catering to developers and admins who orchestrate these digital environments. Our focus is on bridging gaps and ensuring Docker Desktop aligns with today’s corporate networks’ security and operational standards.

FAQ

  • Who benefits? Both Windows-based developers and admins.
  • Continued basic auth support? Yes, providing flexibility while encouraging a shift to more secure protocols.
  • How to get started? Upgrade to Docker Desktop 4.30 for Windows.
  • Impact on internal networking? Absolutely none. It’s smooth sailing for container networking.
  • Validity of authentication? Enjoy 10 hours of secure access with Kerberos, automatically renewed with system logins.

Docker Desktop is more than just a tool — it’s a bridge to a more streamlined, secure, and productive coding environment, respecting the intricate dance with proxy servers and ensuring that everyone involved, from developers to admins, moves in harmony with the secure protocols of their networks. Welcome to a smoother and more secure journey with Docker Desktop.

Learn more

Docker Desktop 4.30: Proxy Support with SOCKS5, NTLM and Kerberos, ECI for Build Commands, Build View Features, and Docker Desktop on RHEL Beta
https://www.docker.com/blog/docker-desktop-4-30/ (May 14, 2024)

Docker Desktop is elevating its capabilities with crucial updates that streamline development workflows and enhance security for developers and enterprises alike. Key enhancements in Docker Desktop 4.30 include improved SOCKS5 proxy support for seamless network connectivity, advanced integration with NTLM and Kerberos for smoother authentication processes, and extended Enhanced Container Isolation (ECI) to secure build environments. Additionally, administrative ease is boosted by simplifying sign-in enforcement through familiar system settings, and WSL 2 configurations have been optimized to enhance performance.

In this blog post, we’ll describe these enhancements and also provide information on future features and available beta features such as Docker Desktop on Red Hat Enterprise Linux (RHEL). Read on to learn more about how these updates are designed to maximize the efficiency and security of your Docker Desktop experience.


Enhancing connectivity with SOCKS proxy support in Docker Desktop

Docker Desktop now supports SOCKS5 proxies, a significant enhancement that broadens its usability in corporate environments where SOCKS proxy is the primary means for internet access or is used to connect to company intranets. This new feature allows users to configure Docker Desktop to route HTTP/HTTPS traffic through SOCKS proxies, enhancing network flexibility and security.

Users can easily configure Docker Desktop to access the internet using socks5:// proxy URLs. This ensures that all outgoing requests, including Docker pulls and other internet access on ports 80/443, are routed through the chosen SOCKS proxy.

  • The proxy configuration can be specified manually in Settings > Resources > Proxies > Manual proxy configuration, by adding the socks5://host:port URL in the Secure Web Server HTTPS box.
  • Automatic detection of SOCKS proxies specified in .pac files is also supported.

This advancement not only improves Docker Desktop’s functionality for developers needing robust proxy support but also aligns with business needs for secure and versatile networking solutions. This new feature is available to Docker Business subscribers. 

Visit Docker Docs for detailed information on setting up and utilizing SOCKS proxy support in Docker Desktop.

Seamless integration of Docker Desktop with NTLM and Kerberos proxies

Proxy servers are vital in corporate networks, ensuring security and efficient traffic management. Recognizing their importance, Docker Desktop has evolved to enhance integration with these secured environments, particularly on Windows. Traditional basic authentication often presented challenges, such as repeated login prompts and security concerns. 

Docker Desktop 4.30 introduces major upgrades by supporting advanced authentication protocols such as Kerberos and NTLM, which streamline the user experience by handling the proxy handshake invisibly and reducing interruptions.

These updates simplify workflows and improve security and performance, allowing developers and admins to focus more on their tasks and less on managing access issues. The new version promises a seamless, secure, and more efficient interaction with corporate proxies, making Docker Desktop a more robust tool in today’s security-conscious corporate settings.

For a deeper dive into how Docker Desktop is simplifying proxy navigation and enhancing your development workflow within the Docker Business subscription, be sure to read the full blog post.

Docker Desktop with Enhanced Container Isolation for build commands

Docker Desktop’s latest update marks an important advancement in container security by extending Enhanced Container Isolation (ECI) to docker build and docker buildx commands. This means docker build/buildx commands run in rootless mode when ECI is enabled, thereby protecting the host machine against malicious containers inadvertently used as dependencies while building container images.

This update is significant as it addresses previous limitations where ECI protected containers initiated with docker run but did not extend the same level of security to containers created during the build processes — unless the build was done with the docker-container build driver. 

Prior limitations:

  • Limited protection: Before this update, while ECI effectively safeguarded containers started with docker run, those spawned by docker build or docker buildx commands, using the default “docker” build driver, did not benefit from this isolation, posing potential security risks.
  • Security vulnerabilities: Given the nature of build processes, they can be susceptible to various security vulnerabilities, which previously might not have been adequately mitigated. This gap in protection could expose Docker Desktop users to risks during the build phase.

Enhancements in Docker Desktop 4.30:

  • Rootless build operations: By extending ECI to include build commands, Docker Desktop now ensures that builds run rootless, significantly enhancing security.
  • Comprehensive protection: This extension of ECI now includes support for docker builds on all platforms (Mac, Windows with Hyper-V, Linux), except Windows with WSL, ensuring that all phases of container operation — both runtime and build — are securely isolated.

This development not only strengthens security across Docker Desktop’s operations but also aligns with Docker’s commitment to providing comprehensive security solutions. By safeguarding the entire lifecycle of container management, Docker ensures that users are protected against potential vulnerabilities from development to deployment.

To understand the full scope of these changes and how to leverage them within your Docker Business Subscription, visit the Enhanced Container Isolation docs for additional guidance.

Docker Desktop for WSL 2: A leap toward simplification and speed

We’re excited to announce an update to Docker Desktop that enhances its performance on Windows Subsystem for Linux (WSL 2) by reducing the complexity of the setup process. This update simplifies the WSL 2 setup by consolidating the previously required two Docker Desktop WSL distributions into one.

The simplification of Docker Desktop’s WSL 2 setup is designed to make the codebase easier to understand and maintain, improving our ability to handle failures more effectively. Most importantly, this change will also enhance the startup speed of Docker Desktop on WSL 2, allowing you to get to work faster than ever before.
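You can inspect which Docker Desktop distributions are registered on your machine with the standard WSL tooling. On older installations you will typically see both docker-desktop and docker-desktop-data; after the consolidation, a single docker-desktop distribution remains:

# List installed WSL distributions with their state and WSL version.
wsl -l -v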

What’s changing?

Phase 1: Starting with Docker Desktop 4.30, we are rolling out this update incrementally on all fresh installations. If you’re setting up Docker Desktop for the first time, you’ll experience a more streamlined installation process with reduced setup complexity right away.

Phase 2: We plan to introduce data migration in a future update, further enhancing the system’s efficiency and user experience. This upcoming phase will ensure that existing users also benefit from these improvements without any hassle.

To take advantage of phase 1, we encourage all new and existing users to upgrade to Docker Desktop 4.30. By doing so, you’ll be prepared to seamlessly transition to the enhanced version as we roll out subsequent phases.

Keep an eye out for more updates as we continue to refine Docker Desktop and enrich your development experience. 

Enhance your Docker Builds experience with new Docker Desktop Build features

Docker Desktop’s latest updates bring significant improvements to the Builds View, enhancing both the management and transparency of your build processes. These updates are designed to make Docker Desktop an indispensable tool for developers seeking efficiency and detailed insights into their builds.

Bulk delete enhancements:

  • Extended bulk delete capability: The ability to bulk delete builds has been expanded beyond the current page. Now, by defining a search or query, you can effortlessly delete all builds that match your specified criteria across multiple pages.
  • Simplified user experience: With the new Select all link next to the header, managing old or unnecessary builds becomes more straightforward, allowing you to maintain a clean and organized build environment with minimal effort (Figure 1).
Figure 1: Docker Desktop Build history view displaying the new option to select all (or several) builds and take action on them.

Build provenance and OpenTelemetry traces:

  • Provenance and dependency insights: The updated Builds View now includes an action menu that offers access to the dependencies and provenance of each build (Figure 2). This feature enables access to the origin details and the context of the builds for deeper inspection, enhancing security and compliance.
  • OpenTelemetry integration: For advanced debugging, Docker Desktop lets you download OpenTelemetry traces to inspect build performance in Jaeger. This integration is crucial for identifying and addressing performance bottlenecks efficiently. Also, depending on your build configuration, you can now download the provenance to inspect the origin details for the build.
Figure 2: Docker Desktop Builds View displaying Dependencies and Build results in more detail.

Overall, these features work together to provide a more streamlined and insightful build management experience, enabling developers to focus more on innovation and less on administrative tasks. 

For more detailed information on how to leverage these new functionalities and optimize your Docker Desktop experience, make sure to visit Builds documentation.

Reimagining Dev Environments: Streamlining development workflows

We are evolving our approach to development environments as part of our continuous effort to refine Docker Desktop and enhance user experience. Since its launch in 2021, Docker Desktop’s Dev Environments feature has been a valuable tool for developers to quickly start projects from GitHub repositories or local directories. However, to better align with our users’ evolving needs and feedback, we will be transitioning from the existing Dev Environments feature to a more robust and integrated solution in the near future. 

What does that mean to those using Dev Environments today? The feature is unchanged. Starting with the Docker Desktop 4.30 release, though, new users trying out Dev Environments will need to explicitly turn it on in Beta features settings. This change is part of our broader initiative to streamline Docker Desktop functionalities and introduce new features in the future (Figure 3).

Figure 3: Docker Desktop Settings page displaying available features in development and beta features.

We understand the importance of a smooth transition and are committed to providing detailed guidance and support to our users when we officially announce the evolution of Dev Environments. Until then, you can continue to leverage Dev Environments and look forward to additional functionality to come.

Docker Desktop support for Red Hat Enterprise Linux beta

As part of Docker’s commitment to broadening its support for enterprise-grade operating systems, we are excited to announce the expansion of Docker Desktop to include compatibility with Red Hat Enterprise Linux (RHEL) distributions, specifically versions 8 and 9. This development is designed to support our users in enterprise environments where RHEL is widely used, providing them with the same seamless Docker experience they expect on other platforms.

To provide feedback on this new beta functionality, engage your Account Executive or join the Docker Desktop Preview Program.

As Docker Desktop continues to evolve, the latest updates are set to significantly enhance the platform’s efficiency and security. From integrating advanced proxy support with SOCKS5, NTLM, and Kerberos to streamlining administrative processes and optimizing WSL 2 setups, these improvements are tailored to meet the needs of modern developers and enterprises. 

With the addition of exciting upcoming features and beta opportunities like Docker Desktop on Red Hat Enterprise Linux, Docker remains committed to providing robust, secure, and user-friendly solutions. Stay connected with us to explore how these continuous advancements can transform your development workflows and enhance your Docker experience.

Learn more

Wasm vs. Docker: Performant, Secure, and Versatile Containers
https://www.docker.com/blog/wasm-vs-docker/ (May 9, 2024)

Docker and WebAssembly (Wasm) represent two pivotal technologies that have reshaped the software development landscape. You’ve probably started to hear more about Wasm in the past few years as it has gained in popularity, and perhaps you’ve also heard about the benefits of using it in your application stack. This may have led you to think about the differences between Wasm and Docker, especially because the technologies work together so closely.

In this article, we’ll explore how these two technologies can work together to enable you to deliver consistent, efficient, and secure environments for deploying applications. By marrying these two tools, developers can easily reap the performance benefits of WebAssembly with containerized software development.


What’s Wasm?

Wasm is a compact binary instruction format governed by the World Wide Web Consortium (W3C). It’s a portable compilation target for more than 40 programming languages, like C/C++, C#, JavaScript, Go, and Rust. In other words, Wasm is a bytecode format encoded to run on a stack-based virtual machine.

Similar to the way Java can be compiled to Java bytecode and executed on the Java Virtual Machine (JVM), which can then be compiled to run on various architectures, a program can be compiled to Wasm bytecode and then executed by a Wasm runtime, which can be packaged to run on different architectures, such as Arm and x86.


What’s a Wasm runtime?

Wasm runtimes bridge the gap between portable bytecode and the underlying hardware architecture. They also provide APIs to communicate with the host environment and provide interoperability between other languages, such as JavaScript.

At a high level, a Wasm runtime runs your bytecode in three semantic phases:

  1. Decoding: Processing the module to convert it to an internal representation
  2. Validation: Checking to see that the decoded module is valid
  3. Execution: Installing and invoking a valid module

Wasm runtime examples include Spin, Wasmtime, WasmEdge, and Wasmer. Major browsers ship their own engines as well: Firefox uses SpiderMonkey, and Chrome uses V8.
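As a concrete sketch of a runtime in action, here is one way to compile a small Rust project to WASI and execute it with Wasmtime; this assumes the Rust toolchain and the wasmtime CLI are installed and that the crate is named hello:

# Add the WASI target, build in release mode, and run the module.
$ rustup target add wasm32-wasi
$ cargo build --target wasm32-wasi --release
$ wasmtime target/wasm32-wasi/release/hello.wasm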

Why use Wasm?

To understand why you might want to use WebAssembly in your application stack, let’s examine its main benefits — notably, security without sacrificing performance and versatility.

Security without sacrificing performance

Wasm enables code to run at near-native speed within a secure, sandboxed environment, protecting systems from malicious software. This performance is achieved through just-in-time (JIT) compilation of WebAssembly bytecode directly into machine code, bypassing the need for transpiling into an intermediate format. 

Wasm also uses shared linear memory — a contiguous block of memory that simplifies data exchange between modules or between WebAssembly and JavaScript. This design allows for efficient communication and enables developers to blend the flexibility of JavaScript with the robust performance of WebAssembly in a single application.

The security of this system is further enhanced by the design of the host runtime environment, which acts as a sandbox. It restricts the Wasm module from accessing anything outside of the designated memory space and from performing potentially dangerous operations like file system access, network requests, and system calls. WebAssembly’s requirement for explicit imports and exports to access host functionality adds another layer of control, ensuring a secure execution environment.

Use case versatility

Finally, WebAssembly is relevant for more than traditional web platforms (contrary to its name). It’s also an excellent tool for server-side applications, edge computing, game development, and cloud/serverless computing. If performance, security, or target device resources are a concern, consider using this compact binary format.

During the past few years, WebAssembly has become more prevalent on the server side because of the WebAssembly System Interface (or WASI). WASI is a modular API for Wasm that provides access to operating system features like files, filesystems, and clocks. 

Docker vs. Wasm: How are they related?

After reading about WebAssembly code, you might be wondering how Docker is relevant. Doesn’t WebAssembly handle sandboxing and portability? How does Docker fit in the picture? Let’s discuss further.

Docker helps developers build, run, and share applications — including those that use Wasm. This is especially true because Wasm is a complementary technology to Linux containers. However, handling these containers without solid developer experience can quickly become a roadblock to application development.

That’s where Docker comes in with a smooth developer experience for building with Wasm and/or Linux containers.

Benefits of using Docker and Wasm together

Using Docker and Wasm together affords great developer experience benefits as well, including:

  • Consistent development environments: Developers can use Docker to containerize their Wasm runtime environments. This approach allows for a consistent Wasm development and execution environment that works the same way across any machine, from local development to production.
  • Efficient deployment: By packaging Wasm applications within Docker, developers can leverage efficient image management and distribution capabilities. This makes deploying and scaling these types of applications easier across various environments.
  • Security and isolation: Although Docker isolates applications at the operating system level, Wasm provides a sandboxed execution environment. When used together, the technologies offer a robust layered security model against many common vulnerabilities.
  • Enhanced performance: Developers can use Docker containers to deploy Wasm applications in serverless architectures or as microservices. This lets you take advantage of Wasm’s performance benefits in a scalable and manageable way.

How to enable Wasm on Docker Desktop

If you’re interested in running WebAssembly containers, you’re in luck! Support for Wasm workloads is now in beta, and you can enable it on Docker Desktop by checking Enable Wasm on the Features in development tab under Settings (Figure 2).

Note: Make sure you have containerd image store support enabled first.

Figure 2: Enable Wasm in Docker Desktop.

After enabling Wasm in Docker Desktop, you’re ready to go. Docker currently supports many Wasm runtimes, including Spin, WasmEdge, and Wasmtime. You can also find detailed documentation that explains how to run these applications.
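As a quick smoke test, a Wasm workload can be started by naming a Wasm runtime and platform explicitly. The image below is a public sample used in Docker's Wasm documentation (assumed to still be available on Docker Hub):

# Run a hello-world Wasm module with the WasmEdge runtime shim.
$ docker run --rm \
  --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm \
  secondstate/rust-example-hello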

How Docker supports WebAssembly

To explain how Docker supports WebAssembly, we’ll need to quickly review how the Docker Engine works.

The Docker Engine builds on a higher-level container runtime called containerd. This runtime provides fundamental functionality to control the container lifecycle. Using a shim process, containerd can leverage runc (a low-level runtime) under the hood. Then, runc can interact directly with the operating system to manage various aspects of containers.

What’s neat about this design is that anyone can write a shim to integrate other runtimes with containerd, including WebAssembly runtimes. As a result, you can plug-and-play with various Wasm runtimes in Docker, like WasmEdge, Spin, and Wasmtime.

The future of WebAssembly and Docker

WebAssembly is continuously evolving, so you’ll need a tight pulse to keep up with ecosystem developments. One recent advancement relates to how the new WebAssembly Component model will impact shims for the various container runtimes. At Docker, we’re working to make it simple for developers to create Wasm containers and enhance the developer experience.

In a famous 2019 tweet thread, Docker founder Solomon Hykes described the future of cloud computing. In this future, he describes a world where Docker runs Windows, Linux, and WebAssembly containers side by side. Given all the recent developments in the ecosystem, that future is well and truly here.

Recent advancements include:

  • The launch of WASI Preview 2 fully rebased WASI on the component model type system and semantics: This makes it modular, fully virtualizable, and accessible to various source languages.
  • Fermyon, Microsoft, SUSE, LiquidReply, and others have also released the SpinKube open source project: The project provides a straightforward path for deploying Wasm-based serverless functions into Kubernetes clusters. Developers can use SpinKube with Docker via k3s (Rancher Labs’ lightweight, minimal Kubernetes distribution). Docker Desktop also includes the shim, which enables you to run Kubernetes containers on your local machine.

In 2024, we expect the combination of Wasm and containers to be highly regarded for its efficiency, scalability, and cost.

Wrapping things up

In this article, we explained how Docker and Wasm work together and how to use Docker for Wasm workloads. We’re excited to see Wasm’s adoption grow in the coming years and will continue to enhance our support to meet developers both where they’re at and where they’re headed. 

Check out the following related materials for details on Wasm and how it works with Docker:

Learn more

Thanks to Sohan Maheshwar, Developer Advocate Lead at Fermyon, for collaborating on this post.

Creating AI-Enhanced Document Management with the GenAI Stack
https://www.docker.com/blog/creating-ai-enhanced-document-management-with-the-genai-stack/ (May 7, 2024)

Organizations must deal with countless reports, contracts, research papers, and other documents, but managing, deciphering, and extracting pertinent information from these documents can be challenging and time-consuming. In such scenarios, an AI-powered document management system can offer a transformative solution.

Developing Generative AI (GenAI) technologies with Docker offers endless possibilities not only for summarizing lengthy documents but also for categorizing them and generating detailed descriptions and even providing prompt insights you may have missed. This multi-faceted approach, powered by AI, changes the way organizations interact with textual data, saving both time and effort.

In this article, we’ll look at how to integrate Alfresco, a robust document management system, with the GenAI Stack to open up possibilities such as enhancing document analysis, automating content classification, transforming search capabilities, and more.


High-level architecture of Alfresco document management 

Alfresco is an open source content management platform designed to help organizations manage, share, and collaborate on digital content and documents. It provides a range of features for document management, workflow automation, collaboration, and records management.

You can find the Alfresco Community platform on Docker Hub. The Docker image for the UI, named alfresco-content-app, has more than 10 million pulls, while other core platform services have more than 1 million pulls.

Alfresco Community platform (Figure 1) provides various open source technologies to create a Content Service Platform, including:

  • Alfresco content repository is the core of Alfresco and is responsible for storing and managing content. This component exposes a REST API to perform operations in the repository.
  • Database: PostgreSQL, among others, serves as the database management system, storing the metadata associated with a document.
  • Apache Solr: Enhancing search capabilities, Solr enables efficient content and metadata searches within Alfresco.
  • Apache ActiveMQ: As an open source message broker, ActiveMQ enables asynchronous communication between various Alfresco services. Its Messaging API handles asynchronous messages in the repository.
  • UI reference applications: Share and Alfresco Content App provide intuitive interfaces for user interaction and accessibility.

For detailed instructions on deploying Alfresco Community with Docker Compose, refer to the official Alfresco documentation.

Figure 1: Basic diagram for Alfresco Community deployment with Docker.

Why integrate Alfresco with the GenAI Stack?

Integrating Alfresco with the GenAI Stack unlocks a powerful suite of GenAI services, significantly enhancing document management capabilities. Enhancing Alfresco document management with GenAI Stack services offers several benefits:

  • Use different deployments according to resources available: Docker allows you to easily switch between different Large Language Models (LLMs) of different sizes. Additionally, if you have access to GPUs, you can deploy a container with a GPU-accelerated LLM for faster inference. Conversely, if GPU resources are limited or unavailable, you can deploy a container with a CPU-based LLM.
  • Portability: Docker containers encapsulate the GenAI service, its dependencies, and runtime environment, ensuring consistent behavior across different environments. This portability allows you to develop and test the AI model locally and then deploy it seamlessly to various platforms.
  • Production-ready: The stack provides support for GPU-accelerated computing, making it well suited for deploying GenAI models in production environments. Docker’s declarative approach to deployment allows you to define the desired state of the system and let Docker handle the deployment details, ensuring consistency and reliability.
  • Integration with applications: Docker facilitates integration between GenAI services and other applications deployed as containers. You can deploy multiple containers within the same Docker environment and orchestrate communication between them using Docker networking. This integration enables you to build complex systems composed of microservices, where GenAI capabilities can be easily integrated into larger applications or workflows.

How does it work?

Alfresco provides two main APIs for integration purposes: the Alfresco REST API and the Alfresco Messaging API (Figure 2).

  • The Alfresco REST API provides a set of endpoints that allow developers to interact with Alfresco content management functionalities over HTTP. It enables operations such as creating, reading, updating, and deleting documents, folders, users, groups, and permissions within Alfresco. 
  • The Alfresco Messaging API provides a messaging infrastructure for asynchronous communication built on top of Apache ActiveMQ and follows the publish-subscribe messaging pattern. Integration with the Messaging API allows developers to build event-driven applications and workflows that respond dynamically to changes and updates within the Alfresco Repository.

The Alfresco Repository can be updated with the enrichment data provided by GenAI Service using both APIs:

  • The Alfresco REST API may retrieve metadata and content from existing repository nodes to be sent to GenAI Service, and update back the node.
  • The Alfresco Messaging API may be used to consume new and updated nodes in the repository and obtain the result from the GenAI Service.
Figure 2: Alfresco provides two main APIs for integration purposes: the Alfresco REST API and the Alfresco Messaging API.

Technically, Docker deployment includes both the Alfresco and GenAI Stack platforms running over the same Docker network (Figure 3). 

The GenAI Stack works as a REST API service with endpoints available in genai:8506, whereas Alfresco uses a REST API client (named alfresco-ai-applier) and a Messaging API client (named alfresco-ai-listener) to integrate with AI services. Both clients can also be run as containers.

Figure 3: Deployment architecture for Alfresco integration with GenAI Stack services.

The GenAI Stack service provides the following endpoints:

  • summary: Returns a summary of a document together with several tags. It allows some customization, like the language of the response, the number of words in the summary and the number of tags.
  • classify: Returns a term from a list that best matches the document. It requires a list of terms as input in addition to the document to be classified.
  • prompt: Replies to a custom user prompt using retrieval-augmented generation (RAG) for the document to limit the scope of the response.
  • describe: Returns a text description for an input picture.

The implementation of the GenAI Stack services splits the document text into chunks and loads them into the Neo4j vector database, improving QA chains with embeddings and helping prevent hallucinations in the response. Pictures are processed using an LLM with a visual encoder (LLaVA) to generate descriptions (Figure 4). Note that the Docker GenAI Stack allows for the use of multiple LLMs for different goals.

Figure 4: The GenAI Stack services are implemented using RAG and an LLM with a visual encoder (LLaVA) for describing pictures.

Getting started 

To get started, check the following:

Obtaining the amount of RAM available for Docker Desktop can be done using the following command (the value is reported in bytes):

docker info --format '{{json .MemTotal}}'

If the result is under 20 GiB, follow the instructions in Docker official documentation for your operating system to boost the memory limit for Docker Desktop.

Clone the repository

Use the following command to clone the repository:

git clone https://github.com/aborroy/alfresco-genai.git

The project includes the following components:

  • genai-stack folder is using https://github.com/docker/genai-stack project to build a REST endpoint that provides AI services for a given document.
  • alfresco folder includes a Docker Compose template to deploy Alfresco Community 23.1.
  • alfresco-ai folder includes a set of projects related to Alfresco integration.
    • alfresco-ai-model defines a custom Alfresco content model to store summaries, terms and prompts to be deployed in Alfresco Repository and Share App.
    • alfresco-ai-applier uses the Alfresco REST API to apply summaries or terms for a populated Alfresco Repository.
    • alfresco-ai-listener listens to messages and generates summaries for created or updated nodes in Alfresco Repository.
  • compose.yaml file describes a deployment for Alfresco and GenAI Stack services using include directive.

Starting Docker GenAI service

The Docker GenAI Service for Alfresco, located in the genai-stack folder, is based on the Docker GenAI Stack project, and provides the summarization service as a REST endpoint to be consumed from Alfresco integration.

cd genai-stack

Before running the service, modify the .env file to adjust available preferences:

# Choose any of the on premise models supported by ollama
LLM=mistral
LLM_VISION=llava
# Any language name supported by chosen LLM
SUMMARY_LANGUAGE=English
# Number of words for the summary
SUMMARY_SIZE=120
# Number of tags to be identified with the summary
TAGS_NUMBER=3

Start the Docker Stack using the standard command:

docker compose up --build --force-recreate

After the service is up and ready, the summary REST endpoint becomes accessible. You can test its functionality using a curl command.

Use a local PDF file (file.pdf in the following sample) to obtain a summary and a number of tags.

curl --location 'http://localhost:8506/summary' \
--form 'file=@"./file.pdf"'
{ 
  "summary": " The text discusses...", 
  "tags": " Golang, Merkle, Difficulty", 
  "model": "mistral"
}

Use a local PDF file (file.pdf in the following sample) and a list of terms (such as Japanese or Spanish) to obtain a classification of the document.

curl --location \
'http://localhost:8506/classify?termList=%22Japanese%2CSpanish%22' \
--form 'file=@"./file.pdf"'
{
    "term": " Japanese",
    "model": "mistral"
}

Use a local PDF file (file.pdf in the following sample) and a prompt (such as “What is the name of the son?”) to obtain a response regarding the document.

curl --location \
'http://localhost:8506/prompt?prompt=%22What%20is%20the%20name%20of%20the%20son%3F%22' \
--form 'file=@"./file.pdf"'
{
    "answer": " The name of the son is Musuko.",
    "model": "mistral"
}

Use a local picture file (picture.jpg in the following sample) to obtain a text description of the image.

curl --location 'http://localhost:8506/describe' \
--form 'image=@"./picture.jpg"'
{
    "description": " The image features a man standing... ",
    "model": "llava"
}

Note that, in this case, the LLaVA LLM is used instead of Mistral.

Make sure to stop Docker Compose before continuing to the next step.
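If the stack is running in the foreground, Ctrl+C stops it; otherwise, bring it down explicitly from the genai-stack folder:

$ docker compose down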

Starting Alfresco

The Alfresco Platform, located in the alfresco folder, provides a sample deployment of the Alfresco Repository including a customized content model to store results obtained from the integration with the GenAI Service.

Because we want to run both Alfresco and GenAI together, we’ll use the compose.yaml file located in the project’s main folder.

include:
  - genai-stack/compose.yaml
  - alfresco/compose.yaml
#  - alfresco/compose-ai.yaml

In this step, we’re deploying only the GenAI Stack and Alfresco, so make sure to leave the compose-ai.yaml line commented out.

Start the stack using the standard command:

docker compose up --build --force-recreate

After the service is up and ready, the Alfresco Repository becomes accessible. You can test the platform by signing in with the default credentials (admin/admin).
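For example, assuming the repository is published on localhost:8080 (as in the sample Compose template), a quick call to the standard Alfresco public REST API confirms it is up:

# List the children of the repository root node using default credentials.
$ curl -u admin:admin \
  'http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-/children'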

Enhancing existing documents within Alfresco 

The AI Applier application, located in the alfresco-ai/alfresco-ai-applier folder, contains a Spring Boot application that retrieves documents stored in an Alfresco folder, obtains the response from the GenAI Service and updates the original document in Alfresco.

Before running the application for the first time, you’ll need to build the source code using Maven.

cd alfresco-ai/alfresco-ai-applier
mvn clean package

As we have GenAI Service and Alfresco Platform up and running from the previous steps, we can upload documents to the Alfresco Shared Files/summary folder and run the program to update the documents with the summary.

java -jar target/alfresco-ai-applier-0.8.0.jar \
--applier.root.folder=/app:company_home/app:shared/cm:summary \
--applier.action=SUMMARY
...
Processing 2 documents of a total of 2
END: All documents have been processed. The app may need to be executed again for nodes without existing PDF rendition.

Once the process has been completed, every Alfresco document in the Shared Files/summary folder will include the information obtained by the GenAI Stack service: summary, tags, and LLM used (Figure 5).

Screenshot of Document details in Alfresco, showing Document properties, Summary, tags, and LLM used.
Figure 5: The document has been updated in Alfresco Repository with summary, tags and model (LLM).

You can now upload documents to the Alfresco Shared Files/classify folder to prepare the repository for the next step.

The classify action can be applied to documents in the Alfresco Shared Files/classify folder using the following command. The GenAI Service will pick the term from the list (English, Spanish, Japanese) that best matches each document in the folder.

java -jar target/alfresco-ai-applier-0.8.0.jar \
--applier.root.folder=/app:company_home/app:shared/cm:classify \
--applier.action=CLASSIFY \
--applier.action.classify.term.list=English,Spanish,Japanese
...
Processing 2 documents of a total of 2
END: All documents have been processed. The app may need to be executed again for nodes without existing PDF rendition.

Upon completion, every Alfresco document in the Shared Files/classify folder will include the information obtained by the GenAI Stack service: a term from the list of terms and the LLM used (Figure 6).

Screenshot showing document classification update in Alfresco Repository.
Figure 6: The document has been updated in Alfresco Repository with term and model (LLM).

To obtain a text description of pictures, create a new folder named picture under the Shared Files folder. Upload any image file to this folder and run the following command:

java -jar target/alfresco-ai-applier-0.8.0.jar \
--applier.root.folder=/app:company_home/app:shared/cm:picture \
--applier.action=DESCRIBE
...
Processing 1 documents of a total of 1
END: All documents have been processed. The app may need to be executed again for nodes without existing PDF rendition.

Following this process, every Alfresco image in the picture folder will include the information obtained by the GenAI Stack service: a text description and the LLM used (Figure 7).

Screenshot showing document description update in Alfresco repository.
Figure 7: The document has been updated in Alfresco Repository with text description and model (LLM).

Enhancing new documents uploaded to Alfresco

The AI Listener application, located in the alfresco-ai/alfresco-ai-listener folder, contains a Spring Boot application that listens to Alfresco messages, obtains the response from the GenAI Service and updates the original document in Alfresco.

Before running the application for the first time, you'll need to build the source code using Maven and build the Docker image.

cd alfresco-ai/alfresco-ai-listener
mvn clean package
docker build . -t alfresco-ai-listener

As we are running the AI Listener application as a container, stop the Alfresco deployment and uncomment the alfresco/compose-ai.yaml line in the compose.yaml file:

include:
  - genai-stack/compose.yaml
  - alfresco/compose.yaml
  - alfresco/compose-ai.yaml

Start the stack using the standard command:

docker compose up --build --force-recreate

After the service is again up and ready, the Alfresco Repository becomes accessible. You can verify that the platform is working by logging in with the default credentials (admin/admin).

Summarization

Next, upload a new document and apply the “Summarizable with AI” aspect to the document. After a while, the document will include the information obtained by the GenAI Stack service: summary, tags, and LLM used.

Description

If you want to use AI enhancement, you might want to set up a folder that automatically applies the necessary aspect, instead of doing it manually.

Create a new folder named pictures in Alfresco Repository and create a rule with the following settings in it:

  • Name: description
  • When: Items are created or enter this folder
  • If all criteria are met: All Items
  • Perform Action: Add “Descriptable with AI” aspect

Upload a new picture to this folder. After a while, without manual setting of the aspect, the document will include the information obtained by the GenAI Stack service: description and LLM used.

Classification

Create a new folder named classifiable in the Alfresco Repository. Apply the "Classifiable with AI" aspect to this folder and add a list of terms separated by commas in the "Terms" property (such as English, Japanese, Spanish).

Create a new rule for the classifiable folder with the following settings:

  • Name: classifiable
  • When: Items are created or enter this folder
  • If all criteria are met: All Items
  • Perform Action: Add “Classified with AI” aspect

Upload a new document to this folder. After a while, the document will include the information obtained by the GenAI Stack service: term and LLM used.

A degree of automation can be achieved when using classification with AI. To do this, create a simple Alfresco Repository script named classify.js in the folder "Repository/Data Dictionary/Scripts" with the following content.

// Move the document into the sibling folder whose name matches
// the term stored by the GenAI Service in the genai:term property
document.move(
  document.parent.childByNamePath(
    document.properties["genai:term"]));

Create a new rule for the classifiable folder to apply this script, with the following settings:

  • Name: move
  • When: Items are updated
  • If all criteria are met: All Items
  • Perform Action: Execute classify.js script

For every term defined in the "Terms" property, create a child folder of the classifiable folder named after that term.

When you set up this configuration, any documents uploaded to the folder will automatically be moved to a subfolder based on the identified term. This means that the documents are classified automatically.

Prompting

Finally, to use the prompting GenAI feature, apply the “Promptable with AI” aspect to an existing document. Type your question in the “Question” property.

After a while, the document will include the information obtained by the GenAI Stack service: answer and LLM used.

A new era of document management

By embracing this framework, you can not only unlock a new level of efficiency, productivity, and user experience but also lay the foundation for limitless innovation. With Alfresco and GenAI Stack, the possibilities are endless — from enhancing document analysis and automating content classification to revolutionizing search capabilities and beyond.

If you're unsure about any part of this process, check out the video "Creating AI-Enhanced Document Management with the Docker GenAI Stack," which demonstrates all the steps live.

Learn more

Docker and JFrog Partner to Further Secure Docker Hub and Remove Millions of Imageless Repos with Malicious Links https://www.docker.com/blog/docker-jfrog-partner-to-further-secure-docker-hub/ Tue, 30 Apr 2024 14:00:55 +0000 https://www.docker.com/?p=54468 Like any large platform on the internet (such as GitHub, YouTube, GCP, AWS, Azure, and Reddit), Docker Hub, known for its functionality and collaborative environment, can become a target for large-scale malware and spam campaigns. Today, security researchers at JFrog announced that they identified millions of spam repositories on Docker Hub without images that have malicious links embedded in the repository descriptions/metadata. To be clear, no malicious container images were discovered by JFrog. Rather, these were pages buried in the web interface of Docker Hub that a user would have to discover and click on to be at any risk. We thank our partner JFrog for this report, and Docker has deleted all reported repositories. Docker also has a security@docker.com mailbox, which is monitored by the Security team. All malicious repositories are removed once validated.


The JFrog report highlights methods employed by bad actors, such as using fake URL shorteners and Google's open redirect vulnerabilities to mask their malicious intent. These attacks are not simple to detect: many involve no malware at all, only links, so they are unlikely to be flagged by automated security tools and generally require human review to identify.

JFrog identified millions of "imageless" repositories on Docker Hub. These repositories, devoid of actual Docker images, serve merely as fronts for distributing malware or phishing attacks. Approximately 3 million repositories were found to contain no substantive content, just misleading documentation intended to lure users to harmful websites. Maintaining Hub against this kind of abuse requires enormous investment on many fronts.

These repositories are not high-traffic repositories and would not be highlighted within Hub. The repository below is an example highlighted in JFrog's blog. Since there is no image in the repository, there will not be any pulls.

docker jfrog security screenshot 1

Normally, an image with a corresponding tag would be displayed below; these repositories are empty.

docker jfrog security screenshot 2

Conclusion

Docker is committed to security and has made substantial investments this past year to demonstrate that commitment to our customers. We have recently completed our SOC 2 Type 2 audit and ISO 27001 certification review, and we are awaiting certification. Both SOC 2 and ISO 27001 reflect Docker's commitment to customer trust and to securing our products.

We urge all Docker users to use trusted content. Docker Hub users should remain vigilant, verify the credibility of repositories before use, and report any suspicious activities. If you have discovered a security vulnerability in one of Docker’s products or services, we encourage you to report it responsibly to security@docker.com. Read our Vulnerability Disclosure Policy to learn more.

Docker is committed to collaborating with security experts like JFrog and the community to ensure that Docker Hub remains a safe and robust platform for developers around the globe. 

Better Debugging: How the Signal0ne Docker Extension Uses AI to Simplify Container Troubleshooting https://www.docker.com/blog/debug-containers-ai-signal0ne-docker-extension/ Wed, 24 Apr 2024 15:58:35 +0000 https://www.docker.com/?p=53996 This post was written in collaboration with Szymon Stawski, project maintainer at Signal0ne.

Consider this scenario: You fire up your Docker containers, hit an API endpoint, and … bam! It fails. Now what? The usual drill involves diving into container logs, scrolling through them to understand the error messages, and spending time looking for clues that will help you understand what’s wrong. But what if you could get a summary of what’s happening in your containers and potential issues with the proposed solutions already provided?

In this article, we’ll dive into a solution that solves this issue using AI. AI can already help developers write code, so why not help developers understand their system, too? 

Signal0ne is a Docker Desktop extension that scans Docker containers’ state and logs in search of problems, analyzes the discovered issues, and outputs insights to help developers debug. We first learned about Signal0ne as the winning submission in the 2023 Docker AI/ML Hackathon, and we’re excited to show you how to use it to debug more efficiently. 


Introducing Signal0ne Docker extension: Streamlined debugging for Docker

The magic of the Signal0ne Docker extension is its ability to shorten feedback loops for working with and developing containerized applications. Forget endless log diving — the extension offers a clear and concise summary of what’s happening inside your containers after logs and states are analyzed by an AI agent, pinpointing potential issues and even suggesting solutions. 

Developing applications these days involves more than a block of code executed in a vacuum; it is a complex system of dependencies and different user flows that need debugging from time to time. AI can help filter out the system noise and focus on providing data about specific issues in the system, so that developers can debug faster and better.

Docker Desktop is one of the most popular tools used for local development with a huge community, and Docker features like Docker Debug enhance the community’s ability to quickly debug and resolve issues with their containerized apps.

Signal0ne Docker extension’s suggested solutions and summaries can help you while debugging your container or editing your code so that you can focus on bringing value as a software engineer. The term “developer experience” is often used, but this extension focuses on one crucial aspect: shortening development time. This translates directly to increased productivity, letting you build containerized applications faster and more efficiently.

How does the Docker Desktop extension work?

Between AI copilots tightly integrated into IDEs that help write code and browser-based AI chats that help explain software development concepts in a Q&A format, one piece is missing: logs and runtime system data.

The Signal0ne Docker Desktop extension consists of three components: two hosted on the user's local system (UI and agent) and one in the Signal0ne cloud backend service. The agent scans the user's local environment for containers with invalid states, runtime issues, or warnings and errors in the logs; after discovering an issue, it collects additional data from the container definition for enhanced analysis.

After the agent's discovery pass, the data is sent to the backend service, where a combination of a pre-trained LLM and a solution-search retrieval service performs the analysis. The analysis of each issue can be viewed in the Signal0ne extension UI, including:

  • Short log summary — Outlines what is happening within a particular container; the logs on which the analysis was based can be accessed from the sources dropdown if you wish.
  • Solutions summary — One paragraph about possible solutions to the analyzed issue with some explanation of why this particular solution was proposed.
  • Solution sources — Links to websites where the issue has already been solved by the community (for example, a GitHub issue or StackOverflow post).

Watch this short video demonstration for more details: 

Technical details

The issue analysis pipeline is powered by the LangChain tooling framework, which uses the open source Mistral 7B LLM together with an LLM-optimized web search that browses sources of existing solutions on the open web (Figure 1).

 Illustration of Signal0ne architecture showing local development containers, UI, agent, backend service, Mistral 7b LLM, and search API.
Figure 1: Signal0ne architecture overview.

The following section shows sample issue analysis pipeline output:

Logs from Docker container:

$ irb
>> "12341:asdf"[/\d+/]
# => "12341"
>> "12341:asdf"[/\d*/]
# => "12341"
>> "12341:asdf"[0..5]
# => "12341:"
>> "12341:asdf"[0...5]
# => "12341"
>> "12341:asdf"[0, ':']
TypeError: can't convert String into Integer
  from (irb):5:in

Summary

The user is trying to extract a substring from the string 12341:asdf using different Ruby methods, such as [/\d+/], [/\d*/], [0..5], [0...5], and [0, ':']. However, when attempting to use the indexing method [0, ':'], an error occurs because the method expects an integer argument but receives a string instead.

Predicted solutions: 

“To extract a substring from the given string 12341:asdf without encountering an error when using indexing methods, consider the following options:

1. Using regular expressions:

To extract a substring using regular expressions, you can use the scan method with a regular expression pattern. For instance, to extract the digits before the colon, you can use the following code snippet:

				str = "12341:asdf"                           
				substring = str.scan(/[\\\\d]+/)                          
				puts substring.first
				# Output: ["12341"]                         
				puts substring.first[0]                           
				# Output: 12341

2. Slicing:

Another alternative to indexing is slicing, which allows you to extract a specified range of characters from a string. To extract the digits before the colon using slicing, you can use the following code snippet:

str = "12341:asdf"                       
	   substring = str[0...str.index(":")]                         
               puts substring               
	   # Output: "12341"                         
	   puts substring.to_i                           
	   # Output: 12341 

These two methods should help you extract the desired substring from the given string without encountering an error.”

Getting started

Getting started with the Signal0ne Docker Desktop extension is a straightforward process that allows developers to start benefiting from AI-assisted debugging right away.

Here are the steps for installing Signal0ne Docker extension:

1. Install Docker Desktop.

2. Choose Add Extensions in the left sidebar. The Browse tab will appear by default (Figure 2).

Screenshot of Docker Desktop Extensions Marketplace highlighting "Add Extensions" option and "Browse" tab.
Figure 2: Signal0ne extension installation from the marketplace.

3. In the Filters drop-down, select the Utility tools category.

4. Find Signal0ne and then select Install (Figure 3).

Screenshot of Signal0ne installation process.
Figure 3: Extension installation process.

5. Log in after the extension is installed (Figure 4).

Screenshot of Signal0ne login page.
Figure 4: Signal0ne extension login screen.

6. Start developing your apps, and, if you face some issues while debugging, have a look at the Signal0ne extension UI. The issue analysis will be there to help you with debugging.

Make sure the Signal0ne agent is enabled by toggling it on (Figure 5):

Screenshot of Signal0ne Agent Settings toggle bar.
Figure 5: Agent settings tab.

Figure 6 shows the summary and sources:

Screenshot of Signal0ne page showing search criteria and related insights.
Figure 6: Overview of the inspected issue.

Proposed solutions and sources are shown in Figures 7 and 8. Solution sources will redirect you to a webpage with the predicted solution:

Screenshot of Signal0ne page showing search criteria and proposed solutions.
Figure 7: Overview of proposed solutions to the encountered issue.
Screenshot of Signal0ne page showing search criteria and related source links.
Figure 8: Overview of the list of helpful links.

If you want to contribute to the project, you can leave feedback via the Like or Dislike button in the issue analysis output (Figure 9).

Screenshot of Signal0ne  sources page showing thumbs up/thumbs down feedback options at the bottom.
Figure 9: You can leave feedback about analysis output for further improvements.

To explore the Signal0ne Docker Desktop extension without involving your own containers, consider experimenting with dummy containers using the following Docker Compose file and observing how the logs are analyzed and how helpful the output and insights are:

services:
  broken_bulb: # C# application that cannot start properly
    image: 'Signal0neai/broken_bulb:dev'
  faulty_roger: # Python API server that cannot reach its database
    image: 'Signal0neai/faulty_roger:dev'
  smoked_server: # nginx server hosting a website with a misconfiguration
    image: 'Signal0neai/smoked_server:dev'
    ports:
      - '8082:8082'
  invalid_api_call: # Python web server with a bug
    image: 'Signal0neai/invalid_api_call:dev'
    ports:
      - '5000:5000'

  • broken_bulb: This service uses the image Signal0neai/broken_bulb:dev. It's a C# application that throws a System.NullReferenceException during startup. With this application, you can observe how Signal0ne discovers the failed container, extracts the error logs, and analyzes them.
  • faulty_roger: This service uses the image Signal0neai/faulty_roger:dev. It is a Python API server that tries to connect to an unreachable database on localhost.
  • smoked_server: This service uses the image Signal0neai/smoked_server:dev. It is an Nginx instance that throws a 403 Forbidden error when the user tries to access the root path (http://127.0.0.1:8082/). Signal0ne can help you debug that.
  • invalid_api_call: An API service with a bug in one of its endpoints. To generate an error, call http://127.0.0.1:5000/create-table after running the container. Follow Signal0ne's analysis and try to debug the issue.
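
To bring up the dummy stack, save the snippet above as compose.yaml (the file name is only a suggestion; any file passed with -f works) and run:

# Start all four misbehaving services in the background
docker compose up -d

Once the containers are running (and failing in their intended ways), open the Signal0ne extension UI in Docker Desktop to watch the analyses appear.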

Conclusion

Debugging containerized applications can be time-consuming and tedious, often involving endless scrolling through logs and searching for clues to understand the issue. However, with the introduction of the Signal0ne Docker extension, developers can now streamline this process and boost their productivity significantly.

By leveraging the power of AI and language models, the extension provides clear and concise summaries of what’s happening inside your containers, pinpoints potential issues, and even suggests solutions. With its user-friendly interface and seamless integration with Docker Desktop, the Signal0ne Docker extension is set to transform how developers debug and develop containerized applications.

Whether you’re a seasoned Docker user or just starting your journey with containerized development, this extension offers a valuable tool that can save you countless hours of debugging and help you focus on what matters most — building high-quality applications efficiently. Try the extension in Docker Desktop today, and check out the documentation on GitHub.

Learn more

Next-Level Error Handling: How Docker Desktop 4.29 Aims to Simplify Developer Challenges https://www.docker.com/blog/next-level-error-handling/ Wed, 10 Apr 2024 14:22:38 +0000 https://www.docker.com/?p=53632 Imagine you’re deep in the zone, coding away on a groundbreaking project. The ideas are flowing, the coffee’s still warm, and then — bam! An error message pops up, halting your progress like a red light at a busy intersection. We’ve all been there, staring at cryptic codes or vague advice, feeling more like ancient mariners navigating by the stars than modern developers armed with cutting-edge technology.

This scenario is all too familiar in the world of software development. With an arsenal of tools, languages, platforms, and security protocols at our disposal, the complexity of our work environment has skyrocketed. For developers charting the unexplored territories of innovation, encountering errors can feel like facing a tempest with a leaky boat. But fear not, for Docker Desktop sails to the rescue with a lighthouse’s guidance: a new, intuitive prompt that sheds light on the mysterious seas of error messages.


Enhancing Docker Desktop with advanced error management

In our Docker Desktop 4.29 release, we’ve embarked on an ambitious journey to elevate the user experience by fundamentally redefining error management. This initiative goes far beyond simple bug fixes; it aims to create a development environment that is not only more efficient and reliable but also more satisfying for developers. At the heart of these enhancements is our unwavering commitment to empowering users and providing them with the tools they need to recover swiftly from any setbacks they may encounter.

This strategic update is built around a core objective: pivoting Docker Desktop toward a model that supports self-service troubleshooting and fosters user resilience. By reimagining errors as opportunities for learning and growth, we’re doing more than just solving technical problems. We’re transforming how developers interact with Docker Desktop, enabling them to overcome challenges confidently and enhance their skills in the process. The changes introduced in Docker Desktop 4.29 signify a significant leap forward in our mission to address user issues and enhance their ability to navigate the complexities of software development with ease and efficiency.

Bridging the gap: From frustration to resolution

Previously, encountering an error in Docker Desktop could feel like reaching a dead end. Users were often greeted with cryptic error codes or minimal guidance, lacking the necessary context for a swift resolution. This outdated approach impeded user experience, efficiency, and overall satisfaction. The contrast with our new system couldn’t be more stark: Users now receive actionable insights when an error arises, ensuring every issue is a step toward a solution (Figure 1).

Figure 1: The previous Docker Desktop error message (4.28 and earlier) did not provide intuitive instructions on how to remediate.

Empowering users, reducing support tickets

This latest update introduces an intuitive error management interface, direct diagnostic uploads, and self-service remediation options. These enhancements make troubleshooting more accessible and reduce the need for support inquiries, improving Docker Desktop’s usability and reliability specifically in the following ways: 

1. Enhanced error interface: An updated error interface combines raw error codes with helpful explanatory text, including links for streamlined support. This not only makes troubleshooting more accessible but also significantly enhances the support process.

2. Direct diagnostic uploads: Users can now easily collect and upload diagnostics directly from the error screen. This feature enhances our support and troubleshooting capabilities, making it easier for users to get the help they need without navigating away from the error context.

3. Reset and exit options: Recognizing that some situations may require more drastic measures, the updated error interface also allows users to reset the application to factory settings or exit the application directly from the error screen (Figure 2).

Figure 2: The new Docker Desktop error message in the 4.29 release provides remediation information and diagnostic sharing options.

4. Self-service options: For errors within the user's ability to remedy, the error message now provides a user-friendly technical error description accompanied by clear, actionable buttons for immediate remediation (Figure 3). This reduces the need for support tickets and fosters a sense of user empowerment.

Figure 3: Error message displaying self-service remediation options.

Conclusion

This update is evidence of our continuous focus on refining and enhancing our Docker Desktop users' experiences — and there are more updates to come. We're committed to making every aspect of application development as intuitive and empowering as possible. Look for further improvements as we continue to advance the state of user support and error remediation, helping to boost your innovation trajectory and productivity.

Learn more

Docker Desktop 4.29: Docker Socket Mount Permissions in ECI, Advanced Error Management, Moby 26, and New Beta Features  https://www.docker.com/blog/docker-desktop-4-29/ Wed, 10 Apr 2024 14:20:02 +0000 https://www.docker.com/?p=53616 The release of Docker Desktop 4.29 introduces enhancements to secure and streamline the development process and to improve error management and workflow efficiency. With the integration of Enhanced Container Isolation (ECI) with Docker socket mount permissions, the debut of Moby 26 within Docker Desktop, and exciting features such as Docker Compose enhancements via synchronized file shares reaching beta release, we’re equipping developers with the essential resources to tackle the complexities of modern development head-on.

Dive into the details to discover these new enhancements and get a sneak peek at exciting advancements currently in beta release.



Enhanced Container Isolation with Docker socket mount permissions 

We’re pleased to unveil a new feature in the latest Docker Desktop release, now in General Availability to Business subscribers, that further improves Desktop’s Enhanced Container Isolation (ECI) mode: Docker socket mount permissions. This update blends robust security with the flexibility you love, allowing you to enjoy key development tools like Testcontainers with the peace of mind provided by ECI’s unprivileged containers. Initially launched in beta with Docker Desktop 4.27, this update moves the ECI Docker socket mount permissions feature to General Availability (GA), demonstrating our commitment to making Docker Desktop the best modern application development platform.

The Docker Engine socket, a crucial component for container management, has historically been a vector for potential security risks. Unauthorized access could enable malicious activities, such as supply chain attacks. However, legitimate use cases, like the Testcontainers framework, require socket access for operational tasks.

With ECI, Docker Desktop enhances security by default, blocking unapproved bind-mounting of the Docker Engine socket into containers. Yet, recognizing the need for flexibility, we introduce controlled access through the admin-settings.json configuration. This allows specified images to bind-mount the Docker socket, combining security with functionality (an illustrative configuration follows the list below).

Key features include:

  • Selective permissions: Admins can now specify which container images can access the Docker socket through a curated imageList, ensuring that only trusted containers have the necessary permissions.
  • Command restrictions: The commandList feature further tightens security by limiting the Docker commands approved containers can execute, acting as a secondary defense layer.
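
As a rough sketch of how these two controls fit together, an admin-settings.json entry might look like the following. Treat the field names as illustrative and verify them against the Hardened Docker Desktop documentation for your release; the image and command values are example choices, not recommendations.

{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "locked": true,
    "value": true,
    "dockerSocketMount": {
      "imageList": {
        "images": ["docker.io/testcontainers/ryuk:*"]
      },
      "commandList": {
        "type": "deny",
        "commands": ["push"]
      }
    }
  }
}

Here, the imageList grants socket access only to the Testcontainers ryuk image, while the commandList blocks docker push through the mounted socket as a second layer of defense.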

While we celebrate this release, our journey doesn’t stop here. We’re continuously exploring ways to expand Docker Desktop’s capabilities, ensuring our users can access the most secure, efficient, and user-friendly containerization tools.

Stay tuned for further security enhancements, including our beta release of air-gapped containers. Update to Docker Desktop 4.29 to start leveraging the full potential of Enhanced Container Isolation with Docker socket mount permissions today.

Advanced error management in Docker Desktop 

We’re redefining error management to significantly improve the developer experience. This update isn’t just about fixing bugs; it’s a comprehensive overhaul aimed at making the development process more efficient, reliable, and user-friendly.

Central to this update is our shift toward self-service troubleshooting and resilience, transforming errors from roadblocks into opportunities for growth and learning. The new system presents actionable insights for errors, ensuring developers can swiftly move toward a resolution.

Key enhancements include:

  • An enhanced error interface: Combining error codes with explanatory text and support links, making troubleshooting straightforward.
  • Direct diagnostic uploads: Allowing users to share diagnostics from the error screen, streamlining support. 
  • Reset and exit options: Offering quick fixes directly from the error interface.
  • Self-service remediation: Providing clear, actionable steps for users to resolve issues independently (Figure 1).
Figure 1: Error message displaying self-service remediation options.

This update marks a significant leap in our commitment to enhancing the Docker Desktop user experience, empowering developers, and reducing the need for support tickets. Read Next-Level Error Handling: How Docker Desktop 4.29 Aims to Simplify Developer Challenges to dive deeper into these enhancements in our blog and discover how Docker Desktop 4.29 is setting a new standard for error management and developer support.

New in Docker Engine: Volume subpath mounts, networking enhancements, BuildKit 0.13, and more 

In the latest Docker Engine update, Moby 26, packaged in Docker Desktop 4.29, introduces several enhancements aimed at enriching the developer experience. Here’s the breakdown of what’s new: 

  • Volume subpath mounts: Responding to widespread user requests, we've made it possible to mount a subdirectory as a named volume (see the sketch after this list). This addition enhances flexibility and control over data management within containers. Detailed guidance on specifying these mounts is available in the docs.
  • Networking enhancements: Significant improvements have been made to bolster the stability of networking capabilities within the engine, along with preliminary efforts to support future IPv6 enhancements.
  • Integration of BuildKit 0.13: Among other updates, this BuildKit version includes experimental support for Windows Containers, ensuring builds remain dependable and efficient.
  • Streamlined API: Deprecated API versions have been removed, concentrating on quality enhancements and promoting a more secure, reliable environment.
  • Multi-platform image enhancements: In this release, you’ll see an improved docker images UX as we’ve combined image entries for multi-platform images.
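
As a minimal sketch of the new volume subpath support (the volume and directory names are invented for illustration; check the docs for the exact mount syntax in your Engine version):

# Mount only the logs/app1 subdirectory of the named volume "shared"
# into the container, instead of the volume root
docker run --rm \
  --mount type=volume,source=shared,target=/var/log/app,volume-subpath=logs/app1 \
  alpine ls /var/log/app

Note that the subdirectory is expected to exist inside the volume already; mounting a subpath does not create it.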

Beta release highlights

Docker Debug in Docker Desktop GUI and CLI 

Docker Debug (Beta), a recent addition to Docker Desktop, streamlines the debugging process for developers. This feature, accessible in Docker Pro, Teams, and Business subscriptions, offers a shell for efficiently debugging both local and remote containerized applications — even those that fail to run. With Docker Debug, developers can swiftly pinpoint and address issues, freeing up more time for innovation.

Now, in beta release, Docker Debug introduces comprehensive debugging directly from the Docker Desktop CLI for active and inactive containers alike. Moreover, the Docker Desktop GUI has been enhanced with an intuitive option: Click the toggle in the Exec tab within a container to switch on Debug mode to start debugging with the necessary tools at your fingertips.

Figure 2: Docker Desktop containers view showcasing debugging a running container with Docker Debug.

To dive into Docker Debug, ensure you're logged in with your subscription account, then initiate debugging by executing docker debug <Container or Image name> in the CLI or by selecting a container from the GUI container list, whether the container is local or in the cloud.
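
For example, with a running container named my-app (a hypothetical name used here for illustration):

# Open a debug shell, with common tools available, inside the my-app container
docker debug my-app

As noted above, the same command also accepts an image name, which helps when a container fails before it ever starts.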

Improved volume backup capabilities 

With our latest release, we're elevating volume backup capabilities in Docker Desktop, introducing an upgraded feature set in beta release. This enhancement integrates the Volumes Backup & Share extension directly into Docker Desktop, streamlining your backup processes.

Figure 3: Docker Desktop Volumes view showcasing new backup functionality.

This release marks a significant step forward, but it’s just the beginning. We’re committed to expanding these capabilities, adding even more value in future updates. Start exploring the new feature today and prepare for an enhanced backup experience soon.

Support for host network mode on Docker Desktop for Mac and Windows 

Support for host network mode (docker run --net=host), previously limited to Linux users, is now available for Mac and Windows Docker Desktop users, offering enhanced networking capabilities and flexibility.

With host network mode support, Docker Desktop becomes a more versatile tool for advanced networking tasks, such as dynamic network penetration testing, without predefined port mappings. This feature is especially useful for applications requiring the ability to dynamically accept connections on various ports, just as if they were running directly on the host (see the example after the following list). Features include:

  • Simplified networking: Eases the setup for complex networking tasks, facilitating security testing and the development of network-centric applications.
  • Greater flexibility: Allows containers to use the host’s network stack, avoiding the complexities of port forwarding.
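
As a quick sketch (the image choice is arbitrary, and the feature must first be enabled in Docker Desktop's settings while signed in, as noted in Figure 4):

# With host networking enabled, the container shares the host's network stack,
# so no -p port mappings are required
docker run --rm --net=host nginx:alpine
# nginx is now reachable on the host at http://localhost:80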
Figure 4: The host network mode enhancement, in beta, reflects our commitment to improving Docker Desktop and is available on all Docker subscriptions after authenticating.

Enhancing security with Docker Desktop’s new air-gapped containers

Docker Desktop’s latest beta feature, air-gapped containers, is now available in version 4.29, reflecting our deep investment in security enhancements. This Business subscription feature empowers administrators to limit container access to network resources, tightening security across containerized applications by: 

  • Restricting network access: Ensuring containers communicate only with approved sources.
  • Customizing proxy rules: Allowing detailed control over container traffic.
  • Enhancing data protection: Preventing unauthorized data transfer in or out of containers.

The introduction of air-gapped containers is part of our broader effort to make Docker Desktop not just a development tool, but an even more secure development environment. We’re excited about the potential this feature holds for enhancing security protocols and simplifying the management of sensitive data.

Compose bind mount support with synchronized file shares 

We’re elevating the Docker Compose experience for our subscribers by integrating synchronized file shares (SFS) directly into Compose. This feature eradicates the sluggishness typically associated with managing large codebases in containers. Formerly known as Mutagen, synchronized file shares enhances bind mounts with native filesystem performance, accelerating file operations by an impressive 2-10x. This leap forward is incredibly impactful for developers handling extensive codebases, effortlessly streamlining their workflow.

With a Docker subscription, you’ll find that Docker Compose and SFS work together seamlessly, automatically optimizing bind mounts to significantly boost synchronization speeds. This integration requires no additional configuration; Compose intelligently activates SFS whenever a bind mount is used, instantly enhancing your development process.

Enabling synchronized file shares in Compose is simple:

  1. Log into Docker Desktop.
  2. Under Settings, navigate to Features in development and choose the Experimental features tab.
  3. Enable Access experimental features and Manage Synchronized file shares with Compose.

Once set up via Docker Desktop settings, these folders act as standard bind mounts with the added benefit of SFS speed enhancements. 
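
For instance, a hypothetical Compose service with an ordinary bind mount needs no changes at all to benefit:

services:
  web:
    image: node:20
    volumes:
      - ./src:/app/src # a plain bind mount; Compose activates SFS for it automatically

Once the corresponding file share has been created, Compose transparently applies the speed enhancements to this mount.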

Figure 5: Docker Desktop settings displaying the option to turn on synchronized file shares with Docker Compose.
Figure 6: Demonstration of compose up creating and synching shares in the terminal.

If your Compose project relies on a bind mount that could benefit from synchronized file shares, the initial share creation must be done through the Docker Desktop GUI.

Embrace the future of Docker Compose with Docker Desktop’s synchronized file shares and transform your development workflow with unparalleled speed and efficiency.

Try Docker Desktop 4.29 now

Docker Desktop 4.29 introduces updates focused on innovation, security, and enhancing the developer experience. This release integrates community feedback and advances Docker’s capabilities, providing solutions that meet developers’ and businesses’ immediate needs while setting the stage for future features. We advise all Docker users to upgrade to version 4.29. Please note that access to certain features in this release requires authentication and may be contingent upon your subscription tier. We encourage you to evaluate your feature needs and select the subscription level that best suits your requirements.

Join the conversation

Dive into the discussion and contribute to the evolution of Docker Desktop. Use our feedback form to share your thoughts and let us know how to improve the Hardened Desktop features. Your input directly influences the development roadmap, ensuring Docker Desktop meets and exceeds our community and customers’ needs.

Learn more
