Navigating Proxy Servers with Ease: New Advancements in Docker Desktop 4.30
https://www.docker.com/blog/navigating-proxy-servers-docker-desktop-4-30/ (May 14, 2024)

Within the ecosystem of corporate networks, proxy servers stand as guardians, orchestrating the flow of information with a watchful eye toward security. These sentinels, while adding layers to navigate, play a crucial role in safeguarding an organization’s digital boundaries and keeping its network denizens — developers and admins alike — secure from external threats.

Recognizing proxy servers’ critical position, Docker Desktop 4.30 offers new enhancements, especially on the Windows front, to ensure seamless integration and interaction within these secured environments.


Traditional approach

The realm of proxy servers is intricate, a testament to their importance in modern corporate infrastructure. They’re not just barriers but sophisticated filters and conduits that enhance security, optimize network performance, and ensure efficient internet traffic management. In this light, the dance of authentication — while complex — is necessary to maintain this secure environment, ensuring that only verified users and applications gain access.

Traditionally, Docker Desktop approached corporate networks with a single option: basic authentication. Although functional, this approach often felt like navigating with an outdated map. It was a method that, while simple, sometimes led to moments of vulnerability and the occasional hiccup in access for those venturing into more secure or differently configured spaces within the network. 

This approach could also create roadblocks for users and admins, such as:

  • Repeated login prompts: A constant buzzkill.
  • Security faux pas: Your credentials, base64 encoded, might as well be on a billboard.
  • Access denied: Use a different authentication method? Docker Desktop is out of the loop.
  • Workflow whiplash: Nothing like a login prompt to break your coding stride.
  • Performance hiccups: Waiting on auth can slow down your Docker development endeavors.

Seamless interaction

Enter Docker Desktop 4.30, where the roadblocks are removed. Embracing the advanced authentication protocols of Kerberos and NTLM, Docker Desktop now ensures a more secure, seamless interaction with corporate proxies while creating a streamlined and frictionless experience. 

This upgrade is designed to help you easily navigate the complexities of proxy authentication, providing a more intuitive and unobtrusive experience that both developers and admins can appreciate:

  • Invisible authentication: Docker Desktop handles the proxy handshake behind the scenes.
  • No more interruptions: Focus on your code, not on login prompts.
  • Simplicity: No extra steps compared to basic auth. 
  • Performance perks: Less time waiting, more time doing.

A new workflow with the Kerberos authentication scheme is shown in Figure 1:

Illustration of Kerberos authentication process showing the following steps: Connect, Client authenticate, Get Service Ticket, Access service.
Figure 1: Workflow with Kerberos authentication.

A new workflow with the NTLM authentication scheme is shown in Figure 2:

Illustration of NTLM authentication process showing the following steps: Auth request, NTLM challenge, NTLM response, Confirm/Deny, Connect service.
Figure 2: Workflow with NTLM authentication scheme.

Integrating Docker Desktop into environments guarded by NTLM or Kerberos proxies no longer feels like a challenge but an opportunity. 

With Docker Desktop 4.30, we’re committed to facilitating this transition, prioritizing secure, efficient workflows catering to developers and admins who orchestrate these digital environments. Our focus is on bridging gaps and ensuring Docker Desktop aligns with today’s corporate networks’ security and operational standards.

FAQ

  • Who benefits? Both Windows-based developers and admins.
  • Continued basic auth support? Yes, providing flexibility while encouraging a shift to more secure protocols.
  • How to get started? Upgrade to Docker Desktop 4.30 for Windows.
  • Impact on internal networking? Absolutely none. It’s smooth sailing for container networking.
  • Validity of authentication? Enjoy 10 hours of secure access with Kerberos, automatically renewed with system logins.
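
For admins who roll out these settings centrally, the sketch below shows roughly what a proxy block looks like in a Settings Management (admin-settings.json) file. Treat it as a hedged example: the key names follow Docker’s Settings Management documentation at the time of writing, but you should confirm them against the docs for your Docker Desktop version, and every value here is a placeholder for your own environment.

{
  "configurationFileVersion": 2,
  "proxy": {
    "locked": true,
    "http": "http://proxy.corp.example.com:3128",
    "https": "http://proxy.corp.example.com:3128",
    "exclude": "localhost,127.0.0.1"
  }
}

Because JSON carries no comments, the caveats live in the paragraph above; note that "locked": true pins the setting so individual developers cannot override it.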

Docker Desktop is more than just a tool — it’s a bridge to a more streamlined, secure, and productive coding environment, respecting the intricate dance with proxy servers and ensuring that everyone involved, from developers to admins, moves in harmony with the secure protocols of their networks. Welcome to a smoother and more secure journey with Docker Desktop.

Docker and JFrog Partner to Further Secure Docker Hub and Remove Millions of Imageless Repos with Malicious Links
https://www.docker.com/blog/docker-jfrog-partner-to-further-secure-docker-hub/ (April 30, 2024)

Like any large platform on the internet (such as GitHub, YouTube, GCP, AWS, Azure, and Reddit), Docker Hub, known for its functionality and collaborative environment, can become a target for large-scale malware and spam campaigns. Today, security researchers at JFrog announced that they identified millions of spam repositories on Docker Hub that contain no images but have malicious links embedded in their descriptions and metadata. To be clear, JFrog discovered no malicious container images; rather, these were pages buried in the web interface of Docker Hub that a user would have to discover and click on to be at any risk. We thank our partner JFrog for this report, and Docker has deleted all reported repositories. Docker also maintains a security@docker.com mailbox, monitored by the Security team, and all malicious repositories are removed once validated.


The JFrog report highlights methods employed by bad actors, such as using fake URL shorteners and Google’s open redirect vulnerabilities to mask their malicious intent. These attacks are not simple to detect — many involve no malware at all, just links, which security tools would not flag as malicious and which would otherwise be caught only by human review.

JFrog identified millions of “imageless” repositories on Docker Hub. These repositories, devoid of actual Docker images, serve merely as fronts for distributing malware or phishing attacks. Approximately 3 million repositories were found to contain no substantive content, just misleading documentation intended to lure users to harmful websites. Maintaining Hub against abuse at this scale requires enormous, ongoing investment on many fronts.

These are not high-traffic repositories and would not be surfaced prominently within Hub. The repository below is an example highlighted in JFrog’s blog. Because the repository contains no image, it records no pulls.

Screenshot: an example imageless spam repository on Docker Hub, highlighted in JFrog’s report.

Normally, an image and its corresponding tag would be listed here; for these spam repositories, the list is empty.

Screenshot: the example repository’s empty tag list.

Conclusion

Docker is committed to security and has made substantial investments this past year, demonstrating our commitment to our customers. We have recently completed our SOC 2 Type 2 audit and ISO 27001 certification review, and we are waiting on certification. Both SOC 2 and ISO 27001 demonstrate Docker’s commitment to Customer Trust and securing our products. 

We urge all Docker users to use trusted content. Docker Hub users should remain vigilant, verify the credibility of repositories before use, and report any suspicious activities. If you have discovered a security vulnerability in one of Docker’s products or services, we encourage you to report it responsibly to security@docker.com. Read our Vulnerability Disclosure Policy to learn more.

Docker is committed to collaborating with security experts like JFrog and the community to ensure that Docker Hub remains a safe and robust platform for developers around the globe. 

Debian’s Dedication to Security: A Robust Foundation for Docker Developers
https://www.docker.com/blog/debian-for-docker-developers/ (April 4, 2024)

As security threats become more and more prevalent, building software with security top of mind is essential. Security has become an increasing concern for container workloads specifically and, commensurately, for container base-image choice. Many conversations around choosing a secure base image focus on CVE counts, but security involves a lot more than that.

One organization that has been leading the way in secure software development is the Debian Project. In this post, I will outline how and why Debian operates as a secure basis for development.


For more than 30 years, Debian’s diverse group of volunteers has provided a free, open, stable, and secure GNU/Linux distribution. Debian’s emphasis on engineering excellence and clean design, as well as its wide variety of packages and supported architectures, have made it not only a widely used distribution in its own right but also a meta-distribution. Many other Linux distributions, such as Ubuntu, Linux Mint, and Kali Linux, are built on top of Debian, as are many Docker Official Images (DOI). In fact, more than 1,000 Docker Official Images variants use the debian DOI or the Debian-derived ubuntu DOI as their base image. 

Why Debian?

As a bit of a disclaimer, I have been using Debian GNU/Linux for a long time. I remember installing Debian from floppy disks in the 1990s on a PC that I cobbled together, and later reinstalling so I could test prerelease versions of the netinst network installer. Installing over the network took a while using a 56-kbps modem. At those network speeds, you had to be very particular about which packages you chose in dselect.

Having used a few other distributions before trying Debian, I still remember being amazed by how well-organized and architected the system was. No dangling or broken dependencies. No download failures. No incompatible shared libraries. No package conflicts, but rather a thoughtful handling of packages providing similar functionality. 

Much has changed over the years: no more floppies, dselect has been retired, my network connection speed has increased by a few orders of magnitude, and now I “install” Debian via docker pull debian. What has not changed is the feeling of amazement I have toward Debian and its community.

Open source software and security

Despite the achievements of the Debian project and the many other projects it has spawned, it is not without detractors. Like many other open source projects, Debian has received its share of criticism in the past few years from opportunists lamenting the state of open source security. Writing about the software supply chain while bemoaning high-profile CVEs and pointing to malware that has been uploaded to an open source package ecosystem, such as PyPI or NPM, has become all too common.

The pernicious assumption in such articles is that open source software is the problem. We know this is not the case. We’ve been through this before. Back when I was installing Debian over a 56-kbps modem, all sorts of fear, uncertainty, and doubt (FUD) was being spread by various proprietary software vendors. We learned then that open source is not a security problem — it is a security solution. 

Being open source does not automatically convey an improved security status compared to closed-source software, but it does provide significant advantages. In his Secure Programming HOWTO, David Wheeler provides a balanced summary of the relationship between open source software and security. A purported advantage conveyed by closed-source software is the nondisclosure of its source code, but we know that security through obscurity is no security at all. 

The transparency of open source software and open ecosystems allows us to better know our security posture. Openness allows for the rapid identification and remediation of vulnerabilities. Openness enables the vast majority of the security and supply chain tooling that developers regularly use. How many closed-source tools regularly publish CVEs? With proprietary software, you often only find out about a vulnerability after it is too late.

Debian’s rapid response strategy

Debian has been criticized for moving too slowly on the security front. But this narrative, like the open vs. closed-source narrative, captures neither the nuance nor reality. Although several distributions wait to publish CVEs until a fixed version is available, Debian opts for complete transparency and urgency when communicating security information to its users.

Furthermore, Debian maintainers are not a mindless fleet of automatons hastily applying patches and releasing new package versions. As a rule, Debian maintainers are experts among experts, deeply steeped in software and delivery engineering, open source culture, and the software they package.

zlib vulnerability example

A recent zlib vulnerability, CVE-2023-45853, provides an insightful example of the Debian project’s diligent, thorough approach to security. Several distributions grabbed a patch for the vulnerability, applied it, rebuilt, packaged, and released a new zlib package. The Debian security community took a closer look.

As mentioned in the CVE summary, the vulnerability was in minizip, which is a utility under the contrib directory of the zlib source code. No minizip source files are compiled into the zlib library, libz. As such, this vulnerability did not actually affect any zlib packages.

If that were where the story had ended, the only harm would be in updating a package unnecessarily. But the story did not end there. As detailed in the Debian bug thread, the offending minizip code was copied (i.e., vendored) and used in a lot of other widely used software. In fact, the vendored minizip code in both Chromium and Node.js was patched about a month before the zlib CVE was even published. 

Unfortunately, other commonly used software packages also had vendored copies of minizip that were still vulnerable. Thanks to the diligence of the Debian project, either the patch was applied to those projects as well, or they were compiled against the patched system minizip (not zlib!) dev package rather than the vendored version. In other distributions, those buggy vendored copies are in some cases still being compiled into software packages, with nary a mention in any CVE.

Thinking beyond CVEs

In the past 30 years, we have seen an astronomical increase in the role open source software plays in the tech industry. Despite the productivity gains that software engineers get by leveraging the massive amount of high-quality open source software available, we are once again hearing the same FUD we heard in the early days of open source. 

The next time you see an article about the dangers lurking in your open source dependencies, don’t be afraid to look past the headlines and question the assumptions. Open ecosystems lead to secure software, and the Debian project provides a model we would all do well to emulate. Debian’s goal is security, which encompasses a lot more than a report showing zero CVEs. Consumers of operating systems and container images would be wise to understand the difference. 

So go ahead and build on top of the debian DOI. FROM debian is never a bad way to start a Dockerfile!
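
As a minimal sketch of that starting point (the binary name and package list here are placeholders, not a prescription):

# Illustrative only: my-app and the installed packages are placeholders.
FROM debian:bookworm-slim

# Install just what the application needs, then clear apt metadata to keep
# the image small.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*

COPY my-app /usr/local/bin/my-app

# Drop root privileges for the application process.
USER nobody
ENTRYPOINT ["/usr/local/bin/my-app"]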

From Misconceptions to Mastery: Enhancing Security and Transparency with Docker Official Images
https://www.docker.com/blog/enhancing-security-and-transparency-with-docker-official-images/ (April 4, 2024)

Docker Official Images are a curated set of Docker repositories hosted on Docker Hub that provide a wide range of pre-configured images for popular language runtimes and frameworks, cloud-first utilities, data stores, and Linux distributions. These images are maintained and vetted, ensuring they meet best practices for security, usability, and versioning, making it easier for developers to deploy and run applications consistently across different environments.

Docker Official Images are an important component of Docker’s commitment to the security of both the software supply chain and open source software. Docker Official Images provide thousands of images you can use directly or as a base image when building your own images. For example, there are Docker Official Images for Alpine Linux, NGINX, Ubuntu, PostgreSQL, Python, and Node.js. Visit Docker Hub to search through the currently available Docker Official Images.

In this blog post, we address three common misconceptions about Docker Official Images and outline seven ways they help secure the software supply chain.


3 common misconceptions about Docker Official Images

Even though Docker Official Images have been around for more than a decade and have been used billions of times, they are somewhat misunderstood. Who “owns” Docker Official Images? What is with all those tags? How should you use Docker Official Images? Let’s address some of the more common misconceptions.

Misconception 1: Docker Official Images are controlled by Docker

Docker Official Images are maintained through a partnership between upstream maintainers, community volunteers, and Docker engineers. External developers maintain the majority of Docker Official Images Dockerfiles, with Docker engineers providing insight and review to ensure best practices and uniformity across the Docker Official Images catalog. Additionally, Docker provides and maintains the Docker Official Images build infrastructure and logic, ensuring consistent and secure build environments that allow Docker Official Images to support more than 10 architecture/operating system combinations.

Misconception 2: Docker Official Images are designed for a single use case

Most Docker Official Images repositories offer several image variants and maintain multiple supported versions. In other words, the latest tag of a Docker Official Image might not be the right choice for your use case. 

Docker Official Images tags

The documentation for each Docker Official Images repository contains a “Supported tags and respective Dockerfile links” section that lists all the current tags with links to the Dockerfiles that created the image with those tags (Figure 1). This section can be a little intimidating for first-time users, but keeping in mind a few conventions will allow even novices to understand what image variants are available and, more importantly, which variant best fits their use case.

Figure 1: Documentation showing the current tags with links to the Dockerfiles that created the image with those tags.
  • Tags listed on the same line all refer to the same underlying image. (Multiple tags can point to the same image.) For example, Figure 1 shows the ubuntu Docker Official Images repository, where the 20.04, focal-20240216, and focal tags all refer to the same image.
  • Often the latest tag for a Docker Official Images repository is optimized for ease of use and includes a wide variety of software that is helpful, but not strictly necessary, when using the main software packaged in the Docker Official Image. For example, latest images often include tools like Git and build tools. Because of their ease of use and wide applicability, latest images are often used in getting-started guides.
  • Some operating system and language runtime repositories offer “slim” variants that have fewer packages installed and are therefore smaller. For example, the python:3.12.2-bookworm image contains not only the Python runtime, but also any tool you might need to build and package your Python application — more than 570 packages! Compare this to the python:3.12.2-slim-bookworm image, which has about 150 packages.
  • Many Docker Official Images repositories offer “alpine” variants built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than “slim” variants. For example, the linux/amd64 node:latest image is 382 MB, the node:slim image is 70 MB, and the node:alpine image is 47 MB.
  • If you see tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as jammy, focal, and bionic), those indicate the codename of the Linux distribution they use as a base image. Debian-release codenames are based on Toy Story characters, and Ubuntu releases use alliterative adjective-animal appellations. Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye).
  • Tags may contain other hints to the purpose of their image variant. Often these are explained later in the Docker Official Images repository documentation. Check the “How to use this image” and/or “Image Variants” sections.
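
A practical way to internalize these conventions is to pull a couple of variants and compare them locally. A quick sketch (the sizes shown are the figures quoted above and will drift as new versions are published):

$ docker pull node:slim
$ docker pull node:alpine
$ docker images --format "{{.Repository}}:{{.Tag}}  {{.Size}}" node
node:alpine  47MB
node:slim    70MB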

Misconception 3: Docker Official Images do not follow software development best practices

Some critics argue that Docker Official Images go against the grain of best practices, such as not running container processes as root. While it’s true that we encourage users to embrace a few opinionated standards, we also recognize that different use cases require different approaches. For example, some use cases may require elevated privileges for their workloads, and we provide options for them to do so securely.

7 ways Docker Official Images help secure the software supply chain

We recognize that security is a continuous process, and we’re committed to providing the best possible experience for our users. Since the company’s inception in 2013, Docker has been a leader in the software supply chain, and our commitment to security — including open source security — has helped to protect developers from emerging threats all along the way.

With the availability of open source software, efficiently building powerful applications and services is easier than ever. The transparency of open source allows unprecedented insight into the security posture of the software you create. But to take advantage of the power and transparency of open source software, fully embracing software supply chain security is imperative. A few ways Docker Official Images help developers build a more secure software supply chain include:

  1. Open build process 

Because visibility is an important aspect of the software supply chain, Docker Official Images are created from a transparent and open build process. The Dockerfile inputs and build scripts are all open source, all Docker Official Images updates go through a public pull request process, and the logs from all Docker Official Images builds are available to inspect (Jenkins / GitHub Actions).

  2. Principle of least privilege

The Docker Official Images build system adheres strictly to the principle of least privilege (POLP), for example, by restricting writes for each architecture to architecture-specific build agents. 

  3. Updated build system

Ensuring the security of Docker Official Images builds and images is paramount. The Docker Official Images build system is kept up to date through automated builds, regular security audits, collaboration with upstream projects, ongoing testing, and security patches. 

  4. Vulnerability reports and continuous monitoring

Courtesy of Docker Scout, vulnerability insights are available for all Docker Official Images and are continuously updated as new vulnerabilities are discovered. We are committed to continuously monitoring our images for security issues and addressing them promptly. For example, we were among the first to provide reasoned guidance and remediation for the recent xz supply chain attack. We also use insights and remediation guidance from Docker Scout, which surfaces actionable insights in near-real-time by updating CVE results from 20+ CVE databases every 20-60 minutes.

  5. Software Bill of Materials (SBOM) and provenance attestations

We are committed to providing a complete and accurate SBOM and detailed build provenance as signed attestations for all Docker Official Images. This allows our users to have confidence in the origin of Docker Official Images and easily identify and mitigate any potential vulnerabilities.

  6. Signature validation

We are working on integrating signature validation into our image pull and build processes. This will ensure that all Docker Official Images are verified before use, providing an additional layer of security for our users.

  7. Increased update frequency

Docker Official Images provide the best of both worlds: the latest version of the software you want, built upon stable versions of Linux distributions. This allows you to use the latest features and fixes of the software you are running without having to wait for a new package from your Linux distribution or being forced to use an unstable version of your Linux distribution. Further, we are working to increase the throughput of the Docker Official Images build infrastructure to allow us to support more frequent updates for larger swaths of Docker Official Images. As part of this effort, we are piloting builds on GitHub Actions and Docker Build Cloud.
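
Several of these items are directly visible from the Docker Scout CLI. A short sketch, using a Docker Official Image as the subject (output formats evolve, so treat the commands as illustrative):

# CVE summary for the image and its base image
$ docker scout quickview debian:bookworm

# Full vulnerability listing
$ docker scout cves debian:bookworm

# Packages recorded in the image's SBOM
$ docker scout sbom debian:bookworm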

Conclusion

Docker’s leadership in security and protecting open source software has been established through Docker Official Images and other trusted content we provide our customers. We take a comprehensive approach to security, focusing on best practices, tooling, and community engagement, and we work closely with upstream projects and SIGs to address security issues promptly and proactively.

Docker Official Images provide a flexible and secure way for developers to build, ship, test, and run their applications. Docker Official Images are maintained through a partnership between the Docker Official Images community, upstream maintainers/volunteers, and Docker engineers, ensuring best practices and uniformity across the Docker Official Images catalog. Each Docker Official Image offers numerous image variants that cater to different use cases, with tags indicating the purpose of each variant. 

Developers can build using Docker tools and products with confidence, knowing that their applications are built on a secure, transparent foundation. 

Looking to dive in? Get started building with Docker Official Images today.

OpenSSH and XZ/liblzma: A Nation-State Attack Was Thwarted, What Did We Learn?
https://www.docker.com/blog/openssh-and-xz-liblzma/ (April 1, 2024)

I have been recently watching The Americans, a decade-old TV series about undercover KGB agents living disguised as a normal American family in Reagan’s America in a paranoid period of the Cold War. I was not expecting this weekend to be reading mailing list posts describing the same type of operation being performed on open source maintainers by agents with equally shadowy identities (CVE-2024-3094).

I have been recently watching The Americans, a decade-old TV series about undercover KGB agents living disguised as a normal American family in Reagan’s America in a paranoid period of the Cold War. I was not expecting this weekend to be reading mailing list posts of the same type of operation being performed on open source maintainers by agents with equally shadowy identities (CVE-2024-3094).

As The Grugq explains, “The JK-persona hounds Lasse (the maintainer) over multiple threads for many months. Fortunately for Lasse, his new friend and star developer is there, and even more fortunately, Jia Tan has the time available to help out with maintenance tasks. What luck! This is exactly the style of operation a HUMINT organization will run to get an agent in place. They will position someone and then create a crisis for the target, one which the agent is able to solve.”

The operation played out over two years, getting the agent in place, setting up the infrastructure for the attack, hiding it from various tools, and then rushing to get it into Linux distributions before some recent changes in systemd were shipped that would have stopped this attack from working.

An equally unlikely accident led to the discovery: Andres Freund, a Postgres maintainer, found the attack before it had reached the vast majority of systems, tipped off by a performance slowdown that was probably accidental. Andres says, “I didn’t even notice it while logging in with SSH or such. I was doing some micro-benchmarking at the time and was looking to quiesce the system to reduce noise. Saw sshd processes were using a surprising amount of CPU, despite immediately failing because of wrong usernames etc. Profiled sshd. Which showed lots of cpu time in code with perf unable to attribute it to a symbol, with the dso showing as liblzma. Got suspicious. Then I recalled that I had seen an odd valgrind complaint in my automated testing of Postgres, a few weeks earlier, after some package updates were installed. Really required a lot of coincidences.”

It is hard to overstate how lucky we were here, as there are no tools that will detect this vulnerability. Even ex-post it is not possible to detect externally as we do not have the private key needed to trigger the vulnerability, and the code is very well hidden. While Linus’s law has been stated as “given enough eyeballs all bugs are shallow,” we have seen in the past this is not always true, or there are just not enough eyeballs looking at all the code we consume, even if this time it worked.

In terms of immediate actions, the attack appears to have been targeted at a subset of OpenSSH servers patched to integrate with systemd. Running SSH servers in containers is rare, and the initial priority should be container hosts, although as the issue was caught early it is likely that few people updated. There is a stream of fixes to liblzma, the xz compression library where the exploit was placed, as the commits from the last two years are examined, although at present there is no evidence that there are exploits for any software other than OpenSSH included. In the Docker Scout web interface, you can search for “lzma” in package names, and issues will be flagged in the “high profile vulnerabilities” policy.
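
As a companion to the web interface, here is a CLI sketch for checking your own images. It assumes your Docker Scout CLI version supports the --only-package filter; the package names and image reference are placeholders to adapt to your images.

# Filter findings to the xz/liblzma packages in a specific image
$ docker scout cves --only-package xz-utils <image-reference>
$ docker scout cves --only-package liblzma5 <image-reference>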

So many commentators have simple technical solutions, and so many vendors are using this to push their tools. As a technical community, we want there to be technical solutions to problems like this. Vendors want to sell their products after events like this, even though none of them detected it. Rewrite it in Rust, shoot autotools, stop using GitHub tarballs and checked-in artifacts: the list goes on. These are not bad things to do, and there is no doubt that understandability and clarity are valuable for security, although we often trade them off for performance. It is the case that m4 and autotools are pretty hard to read and understand, while tools like ifunc allow dynamic dispatch even in a mostly static ecosystem. Large investments in the ecosystem to fix these issues would be worthwhile, but we know that attackers would simply find new vectors and weird machines. Equally, there are many naive suggestions about the people side, as if having a verified identity for open source developers would solve the problem, when there are very genuine people who wish to stay private while state actors can easily create fake identities, or as if maintainers could “just say no” to untrusted people. Beware of people bringing easy solutions; there are so many in this hot-take world.

Where can we go from here? Awareness and observability first. Hyper awareness even, as we see in this case small clues matter. Don’t focus on the exact details of this attack, which will be different next time, but think more generally. Start by understanding your organization’s software consumption, supply chain, and critical points. Ask what you should be funding to make it different. Then build in resilience. Defense in depth, and diversity — not a monoculture. OpenSSH will always be a target because it is so widespread, and the OpenBSD developers are doing great work and the target was upstream of them because of this. But we need a diverse ecosystem with multiple strong solutions, and as an organization you need second suppliers for critical software. The third critical piece of security in this era is recoverability. Planning for the scenario in which the worst case has happened and understanding the outcomes and recovery process is everyone’s homework now, and making sure you are prepared with tabletop exercises around zero days. 

This is an opportunity for all of us to continue working together to strengthen the open source supply chain, and to work on resilience for when this happens next. We encourage dialogue and discussion on this within Docker communities.

Is Your Container Image Really Distroless?
https://www.docker.com/blog/is-your-container-image-really-distroless/ (March 27, 2024)

Containerization helped drastically improve the security of applications by providing engineers with greater control over the runtime environment of their applications. However, a significant time investment is required to maintain the security posture of those applications, given the daily discovery of new vulnerabilities as well as regular releases of languages and frameworks.

The concept of “distroless” images offers the promise of greatly reducing the time needed to keep applications secure by eliminating most of the software contained in typical container images. This approach also reduces the amount of time teams spend remediating vulnerabilities, allowing them to focus only on the software they are using. 

In this article, we explain what makes an image distroless, describe tools that make the creation of distroless images practical, and discuss whether distroless images live up to their potential.


What’s a distro?

A Linux distribution is a complete operating system built around the Linux kernel, comprising a package management system, GNU tools and libraries, additional software, and often a graphical user interface.

Common Linux distributions include Debian, Ubuntu, Arch Linux, Fedora, Red Hat Enterprise Linux, CentOS, and Alpine Linux (which is more common in the world of containers). These, like most Linux distros, take security seriously, with teams working diligently to release frequent patches and updates for known vulnerabilities. A key challenge that all Linux distributions must face involves the usability/security dilemma.

On its own, the Linux kernel is not very usable, so many utility commands are included in distributions to cover a large array of use cases. Having the right utilities included in the distribution without having to install additional packages greatly improves a distro’s usability. The downside of this increase in usability, however, is an increased attack surface area to keep up to date. 

A Linux distro must strike a balance between these two elements, and different distros have different approaches to doing so. A key aspect to keep in mind is that a distro that emphasizes usability is not “less secure” than one that does not emphasize usability. What it means is that the distro with more utility packages requires more effort from its users to keep it secure.

Multi-stage builds

Multi-stage builds allow developers to separate build-time dependencies from runtime ones. Developers can now start from a full-featured build image with all the necessary components installed, perform the necessary build step, and then copy only the result of those steps to a more minimal or even an empty image, called “scratch”. With this approach, there’s no need to clean up dependencies and, as an added bonus, the build stages are also cacheable, which can considerably reduce build time. 

The following example shows a Go program taking advantage of multi-stage builds. Because the Golang runtime is compiled into the binary, only the binary and root certificates need to be copied to the blank slate image.

FROM golang:1.21.5-alpine AS build
WORKDIR /
COPY go.* .
RUN go mod download
COPY . .
RUN go build -o my-app


FROM scratch
# Copy only the root certificates and the compiled binary; nothing else ships.
COPY --from=build \
  /etc/ssl/certs/ca-certificates.crt \
  /etc/ssl/certs/ca-certificates.crt
COPY --from=build /my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]

BuildKit

BuildKit, the current engine used by docker build, helps developers create minimal images thanks to its extensible, pluggable architecture. It provides the ability to specify alternative frontends (with the default being the familiar Dockerfile) to abstract and hide the complexity of creating distroless images. These frontends can accept more streamlined and declarative inputs for builds and can produce images that contain only the software needed for the application to run. 

The following example shows the input for a frontend for creating Python applications called mopy by Julian Goede.

#syntax=cmdjulian/mopy
apiVersion: v1
python: 3.9.2
build-deps:
  - libopenblas-dev
  - gfortran
  - build-essential
envs:
  MYENV: envVar1
pip:
  - numpy==1.22
  - slycot
  - ./my_local_pip/
  - ./requirements.txt
labels:
  foo: bar
  fizz: ${mopy.sbom}
project: my-python-app/

So, is your image really distroless?

Thanks to new tools for creating container images like multi-stage builds and BuildKit, it is now a lot more practical to create images that only contain the required software and its runtime dependencies. 

However, many images claiming to be distroless still include a shell (usually Bash) and/or BusyBox, which provides many of the commands a Linux distribution does — including wget — that can leave containers vulnerable to Living off the land (LOTL) attacks. This raises the question, “Why would an image trying to be distroless still include key parts of a Linux distribution?” The answer typically involves container initialization. 

Developers often have to make their applications configurable to meet the needs of their users. Most of the time, those configurations are not known at build time so they need to be configured at run time. Often, these configurations are applied using shell initialization scripts, which in turn depend on common Linux utilities such as sed, grep, cp, etc. When this is the case, the shell and utilities are only needed for the first few seconds of the container’s lifetime. Luckily, there is a way to create true distroless images while still allowing initialization using tools available from most container orchestrators: init containers.

Init containers

In Kubernetes, an init container is a container that starts and must complete successfully before the primary container can start. By using a non-distroless container as an init container that shares a volume with the primary container, the runtime environment and application can be configured before the application starts. 

The lifetime of that init container is short (often just a couple seconds), and it typically doesn’t need to be exposed to the internet. Much like multi-stage builds allow developers to separate the build-time dependencies from the runtime dependencies, init containers allow developers to separate initialization dependencies from the execution dependencies. 

The concept of an init container may be familiar if you use relational databases, where an init container is often used to perform schema migration before a new version of an application is started.

Kubernetes example

Here are two examples of using init containers. First, using Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: kubecon-postgress-pod
  labels:
    app.kubernetes.io/name: KubeConPostgress
spec:
  containers:
  - name: postgress
    image: laurentgoderre689/postgres-distroless
    securityContext:
      runAsUser: 70
      runAsGroup: 70
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  initContainers:
  - name: init-postgress
    image: postgres:alpine3.18
    env:
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            name: kubecon-postgress-admin-pwd
            key: password
    command: ['docker-ensure-initdb.sh']
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  volumes:
  - name: db
    emptyDir: {}

- - - 

> kubectl apply -f pod.yml && kubectl get pods
pod/kubecon-postgress-pod created
NAME                    READY   STATUS     RESTARTS   AGE
kubecon-postgress-pod   0/1     Init:0/1   0          0s
> kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
kubecon-postgress-pod   1/1     Running   0          10s

Docker Compose example

The init container concept can also be emulated in Docker Compose for local development using service dependencies and conditions.

services:
 db:
   image: laurentgoderre689/postgres-distroless
   user: postgres
   volumes:
     - pgdata:/var/lib/postgresql/data/
   depends_on:
     db-init:
       condition: service_completed_successfully

 db-init:
   image: postgres:alpine3.18
   environment:
      POSTGRES_PASSWORD: example
   volumes:
     - pgdata:/var/lib/postgresql/data/
   user: postgres
   command: docker-ensure-initdb.sh

volumes:
 pgdata:

- - - 
> docker-compose up 
[+] Running 4/0
 ✔ Network compose_default      Created                                                                                                                      
 ✔ Volume "compose_pgdata"      Created                                                                                                                     
 ✔ Container compose-db-init-1  Created                                                                                                                      
 ✔ Container compose-db-1       Created                                                                                                                      
Attaching to db-1, db-init-1
db-init-1  | The files belonging to this database system will be owned by user "postgres".
db-init-1  | This user must also own the server process.
db-init-1  | 
db-init-1  | The database cluster will be initialized with locale "en_US.utf8".
db-init-1  | The default database encoding has accordingly been set to "UTF8".
db-init-1  | The default text search configuration will be set to "english".
db-init-1  | [...]
db-init-1 exited with code 0
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  starting PostgreSQL 16.1 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db-1       | 2024-02-23 14:59:33.194 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1       | 2024-02-23 14:59:33.196 UTC [9] LOG:  database system was shut down at 2024-02-23 14:59:32 UTC
db-1       | 2024-02-23 14:59:33.198 UTC [1] LOG:  database system is ready to accept connections

As demonstrated by the previous example, an init container can be used alongside a container to remove the need for general-purpose software and allow the creation of true distroless images. 

Conclusion

This article explained how Docker build tools allow for the separation of build-time dependencies from run-time dependencies to create “distroless” images. For example, using init containers allows developers to separate the logic needed to configure a runtime environment from the environment itself and provide a more secure container. This approach also helps teams focus their efforts on the software they use and find a better balance between security and usability.

Filter Out Security Vulnerability False Positives with VEX
https://www.docker.com/blog/filter-out-security-vulnerability-false-positives-with-vex/ (March 5, 2024)

Development and security teams are becoming overwhelmed by an ever-growing backlog of security vulnerabilities requiring their attention. Although these vulnerability insights are essential to safeguard organizations and their customers from potential threats, the findings are often bloated with a high volume of noise, especially from false positives.

The 2022 Cloud Security Alert Fatigue Report states that more than 40% of alerts from security tools are false positives, which means that teams can be inundated with vulnerabilities that pose no actual risk. The impact of these false positives includes delayed releases, wasted productivity, internal friction, burnout, and eroding customer trust, all of which accumulate significant financial loss for organizations.

How can developers and security professionals cut through the noise so that they can more effectively manage vulnerabilities and focus on what truly matters? That is where the Vulnerability Exploitability eXchange (VEX) comes in.

In this article, we’ll explain how VEX works with Docker Scout and walk through how you can get started. 


What is VEX?

VEX, developed by the National Telecommunications and Information Administration (NTIA), is a specification aimed at capturing and conveying information about exploitable vulnerabilities within a product. Among other details, the framework classifies vulnerability status into four key categories, forming the core of a VEX document:

  • Not affected — No remediation is required regarding this vulnerability.
  • Affected — Actions are recommended to remediate or address this vulnerability.
  • Fixed — These product versions contain a fix for the vulnerability.
  • Under Investigation — Whether these product versions are affected by the vulnerability is still unknown. An update will be provided in a later release.

By ingesting the context from VEX, organizations can distinguish the noise from the confirmed exploitable vulnerabilities to get a more accurate picture of their attack surface and bring focus to their remediation activities. For example, vulnerabilities assigned a “not affected” status in the VEX document may potentially be ruled out as false positives and hidden from tool outputs to minimize distraction.
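
To make that concrete, here is a minimal OpenVEX document with a single not_affected statement. It is a sketch: the author and product identifier are placeholders, and the statement reuses CVE-2023-45853 (the zlib/minizip case discussed in the Debian article above), where the vulnerable minizip code is not present in the shipped library.

{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/2024-001",
  "author": "Example Security Team",
  "timestamp": "2024-03-05T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2023-45853" },
      "products": [
        { "@id": "pkg:docker/example/my-app@1.0" }
      ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_present"
    }
  ]
}

A tool that ingests this document can then suppress CVE-2023-45853 for that product, removing the false positive from its reports.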

Although the practice of documenting software vulnerability context is not novel per se, VEX itself represents an advancement over solutions that have traditionally ruled over the vulnerability management processes, such as emails, spreadsheets, Confluence pages, and Jira tickets. 

What sets VEX apart are its standardized and machine-readable features, which make it much better suited for integration and automation within an organization’s vulnerability ecosystem, resulting in a more streamlined and effective approach to vulnerability management without unnecessary resource drain. However, to yield these results — repeatedly and at scale — the technology landscape surrounding VEX must first evolve to deliver tools and experiences that can successfully put VEX data into action in verifiable, automatable, and meaningful ways. 

For more information on VEX, refer to the one-page summary (PDF) by NTIA.

Want to get started with VEX? Docker can help

The implementation of VEX is still nascent in the industry and widespread utilization and adoption will be key in unleashing its full potential. Docker, too, is early in its VEX journey, but read on for how we’re helping our users get started.

Use Docker Scout with local VEX documents

If you want to try how VEX works with Docker Scout, the quickest way to get up and running is to create a local VEX document with the tool of your choice, such as vexctl, and incorporate it into your image analysis with the --vex-location flag for the docker scout cves command.

$ mkdir -p /usr/local/share/vex
$ vexctl create [options] --file /usr/local/share/vex/example.vex.json
$ docker scout cves --vex-location /usr/local/share/vex <image-reference>

Embed VEX documents as attestations

The new docker scout attestation add command lets you attach VEX documents to images as in-toto attestations, which means VEX statements are available on and distributed together with the image.

$ docker scout attestation add \
  --file /usr/local/share/vex/example.vex.json \
  --predicate-type https://openvex.dev/ns/v0.2.0 \
  <image>

Docker Scout automatically incorporates any VEX attestations into the results when you analyze images on the CLI. It also works with attestations signed with Sigstore and attached using vexctl attest --attest --sign.

Automatically create VEX documents with Sysdig

The Sysdig integration for Docker Scout detects what packages are being loaded into memory in your runtime environment and automatically creates VEX statements for filtering out non-applicable CVEs.

Try it out

We are working on embedding the above capability and more into Docker Scout so that users can effortlessly generate and apply VEX to vanquish their false positives for good. Simultaneously, we are exploring VEX for Docker Official Images to allow upstream maintainers to indicate non-applicable CVEs in their images, which can improve tooling (e.g., scanner) accuracy if VEX is taken into account. 

In the meantime, if you are curious about how this all works in practice, we’ve created a guide that walks you through the steps of creating a VEX document, applying it to image analysis, and creating VEX attestations. 

Azure Container Registry and Docker Hub: Connecting the Dots with Seamless Authentication and Artifact Cache
https://www.docker.com/blog/azure-container-registry-and-docker-hub-connecting-the-dots-with-seamless-authentication-and-artifact-cache/ (February 29, 2024)

By leveraging the wide array of public images available on Docker Hub, developers can accelerate development workflows, enhance productivity, and, ultimately, ship scalable applications that run like clockwork. When building with public content, acknowledging the potential operational risks associated with using that content without proper authentication is crucial.

In this post, we will describe best practices for mitigating these risks and ensuring the security and reliability of your containers.


Import public content locally

There are several advantages to importing public content locally. Doing so improves the availability and reliability of your public content pipeline and protects you from failed CI builds. By importing your public content, you can easily validate, verify, and deploy images to help run your business more reliably.

For more information on this best practice, check out the Open Container Initiative’s guide on Consuming Public Content.
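
For example, with Azure Container Registry you can import an image from Docker Hub into your own registry using the Azure CLI. A sketch, where the registry name, image, and credentials are placeholders (authenticating with a Docker Hub username and access token avoids anonymous rate limits):

$ az acr import \
    --name myregistry \
    --source docker.io/library/nginx:latest \
    --image nginx:latest \
    --username <docker-hub-username> \
    --password <docker-hub-access-token>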

Configure Artifact Cache to consume public content

Another best practice is to configure Artifact Cache to consume public content. Azure Container Registry’s (ACR) Artifact Cache feature allows you to cache your container artifacts in your own Azure Container Registry, even for private networks. This approach limits the impact of rate limits and dramatically increases pull reliability when combined with geo-replicated ACR, allowing you to pull artifacts from the region closest to your Azure resource. 

Additionally, ACR offers various security features, such as private networks, firewall configuration, service principals, and more, which can help you secure your container workloads. For complete information on using public content with ACR Artifact Cache, refer to the Artifact Cache technical documentation.
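
Setting up a cache rule looks roughly like this with the Azure CLI (names are placeholders, and exact flags may vary by CLI version, so check the Artifact Cache documentation):

# Create a cache rule that pulls docker.io/library/nginx through myregistry
$ az acr cache create \
    --registry myregistry \
    --name dockerhub-nginx \
    --source-repo docker.io/library/nginx \
    --target-repo nginx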

Authenticate pulls with public registries

We recommend authenticating your image pulls from Docker Hub using subscription credentials. Docker Hub offers developers the ability to authenticate when building with public library content. Authenticated users also have access to pull content directly from private repositories. For more information, visit the Docker subscriptions page. Microsoft Artifact Cache also supports authenticating with other public registries, providing an additional layer of security for your container workloads.

Following these best practices when using public content from Docker Hub can help mitigate security and reliability risks in your development and operational cycles. By importing public content locally, configuring Artifact Cache, and setting up preferred authentication methods, you can ensure your container workloads are secure and reliable.

How to Use OpenPubkey to Solve Key Management via SSO
https://www.docker.com/blog/how-to-use-openpubkey-to-solve-key-management-via-sso/ (February 20, 2024)

This post was contributed by BastionZero.

Giving people the ability to sign messages under their identity is extremely powerful. For instance, this functionality lets you SSH into servers, sign software artifacts, and create end-to-end encrypted communications under your single sign-on (SSO) identity.

The OpenPubkey protocol and open source project brings the power of digital signatures to both people and workloads without adding trusted parties. OpenPubkey is built on the OpenID Connect (OIDC) SSO protocol, which is supported by major identity providers, including Google, Microsoft, Okta, and Facebook. 

This article will explore how OpenPubkey works and look at three use cases in detail.


What can OpenPubkey do?

Public key cryptography was invented in the 1970s and has become the most powerful tool in the security engineering toolbox. It allows anything holding a public key, and its associated signing key, to create a cryptographic identity. This identity is extremely secure because the party cannot only use their signing key to prove they are who they say they are but also sign messages under this identity. 

Servers often authenticate themselves to people using public keys associated with the server’s identity, yet the process rarely works the other way. People rarely authenticate to servers using public keys associated with a person’s identity. Instead, less secure authentication methods are employed, such as authentication secrets stored in cookies, which must be transmitted on every request.

Let’s say that Alice wanted to sign the message “Flee at once — all is discovered” under her email address alice@example.com. How would she do it? One approach would be for Alice to create a public key (PK) and signing key (SK) and then publish the mapping between her email and the PK. 

This approach has two problems. First, you and everyone verifying this message must trust that the webpage has honestly mapped Alice’s email to her public key and has not maliciously replaced her public key with another key that could be used to impersonate Alice. Second, Alice must now protect and manage the signing key associated with this public key. History has shown that users can be terrible at protecting signing keys. Probably the most famous example is of the man who lost a signing key controlling half a billion dollars worth of Bitcoin.

Human authentication on the web was originally supposed to work the same way as server authentication. Much like a certificate authority (CA) issues a certificate to a server, which associates a public key with the server’s identity (`example.com`), the plan was to have a CA issue a client certificate to a person that associates a public key with that person’s identity. These client certificates are still around and are well-used for certain applications, but they never caught on for widespread personal use, likely because of the terrible user experience (UX) of asking people to secure and manage secret signing keys.

OpenPubkey addresses both of these problems. It uses your identity provider to perform the mapping between identity and your public key. Because you already trust your identity provider, having your identity provider perform this mapping does not add any new trusted parties. For instance, Alice must already trust her identity provider, Example.com, to manage her identity (alice@example.com), so it is natural to use Example.com to perform the mapping between Alice’s public key and her Example.com identity (alice@example.com). Example.com already knows how to authenticate @example.com users, so Alice doesn’t need to set up a new account or create new authentication factors.

Second, to solve the problem of lost or stolen signing keys, OpenPubkey public keys and signing keys are ephemeral. That means the signing keys can be deleted and recreated at will. OpenPubkey generates a fresh public key and signing key for a user every time that user authenticates to their identity provider. This approach to making public keys ephemeral removes one of the most significant UX barriers to authenticating people with public keys. It also provides a security win; it creates a much smaller window of exposure if a signing key is stolen, as signing keys can be deleted when the user idles or logs out.

How does OpenPubkey work?

Let’s return to our situation: Alice wants to sign the message “Flee at once — all is discovered” under her identity (alice@example.com). First, Alice’s computer generates a fresh public key and signing key. Next, she needs her identity provider, Example.com, to associate her identity with this public key. How does OpenPubkey do this? To understand the process, we first need to provide details about how SSO/OpenID Connect works.

Example.com, which is the identity provider for @example.com, knows how to check that Alice is really alice@example.com. Example.com does this every time Alice signs into Example.com. In OIDC, the identity provider signs a statement, called an ID Token, which roughly says “this is alice@example.com”. Part of the authentication process in OIDC allows the user (or their software) to submit a random value that will be included in the issued ID Token. 

Alice’s OpenPubkey client sets this value to the cryptographic hash of Alice’s public key. The client then extends the ID Token into an object called a PK Token, which essentially says: “this is alice@example.com and her public key is 0xABCE…“. We’re skipping a few details of OpenPubkey, but this is the basic idea.
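
Here is a simplified sketch, in Go, of the core binding trick. OpenPubkey’s real nonce construction differs (it involves additional randomness and metadata), so treat the function below as an illustration of the idea only.

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// nonceFor derives an OIDC nonce that commits to an encoding of the user's
// fresh public key (for example, a JWK).
func nonceFor(encodedPublicKey []byte) string {
	h := sha256.Sum256(encodedPublicKey)
	return base64.RawURLEncoding.EncodeToString(h[:])
}

func main() {
	// The client sends this nonce in the OIDC authentication request; the
	// identity provider echoes it back inside the signed ID Token, so the
	// token now commits to Alice's public key.
	fmt.Println("nonce:", nonceFor([]byte(`{"kty":"EC","crv":"P-256","x":"...","y":"..."}`)))
}
```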

Now that Alice has a PK Token signed by Example.com, which binds her public key to her identity, she can sign the statement “Flee at once — all is discovered” and broadcast the message, the signature, and her PK Token. Bob, or anyone else for that matter, can check whether this message is really from alice@example.com by checking that the PK Token is signed by Example.com and then checking that Alice’s signature matches the public key in the PK Token.
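
In code, Bob’s two checks might look roughly like the following Go sketch. PKToken and verifyProviderSignature are hypothetical stand-ins, not OpenPubkey’s actual API.

```go
package verify

import (
	"crypto/ecdsa"
	"crypto/sha256"
	"errors"
)

// PKToken is a hypothetical stand-in for OpenPubkey's PK Token: an ID Token
// extended to carry the user's public key.
type PKToken struct {
	Email     string           // e.g., "alice@example.com"
	PublicKey *ecdsa.PublicKey // the ephemeral key bound by the provider
}

// verifyProviderSignature stands in for validating the token against the
// identity provider's published OIDC signing keys (its JWKS).
func verifyProviderSignature(tok PKToken) error {
	return errors.New("sketch only: fetch Example.com's JWKS and verify here")
}

// VerifyFrom reports whether msg was signed by the holder of tok's identity.
func VerifyFrom(tok PKToken, msg, sig []byte) error {
	// 1. The provider's signature vouches for the email -> public key binding.
	if err := verifyProviderSignature(tok); err != nil {
		return err
	}
	// 2. Alice's signature must verify under the public key in the token.
	digest := sha256.Sum256(msg)
	if !ecdsa.VerifyASN1(tok.PublicKey, digest[:], sig) {
		return errors.New("signature does not match " + tok.Email)
	}
	return nil
}
```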

OpenPubkey use cases

Let’s look at three in detail: SSH, secure messaging, and signing container images.

SSH

OpenPubkey is useful for more than just telling your friends that “Flee at once — all is discovered.” Because most security protocols are built on public key cryptography, OpenPubkey can easily plug human identities into these protocols.

SSH supports the authentication of both machines and users with public keys (also known as SSH keys). However, these SSH keys are not associated with identities. With an SSH key, you can say “allow root access for key 0xABCD…”, but not “allow root access for alice@example.com.” This presents several UX and security problems. As mentioned previously, people struggle with managing their secret signing keys, and SSH is no exception.

Even more problematic, because public keys are not associated with identities, it is difficult to tell if an SSH key represents a person or machine that should no longer have access. As Tatu Ylonen, the inventor of SSH, writes in his recent paper Challenges in Managing SSH Keys — and a Call for Solutions:

“In analyzing SSH keys for dozens of large enterprises, it has turned out that in many environments 90% of all authorized keys are no longer used. They represent access that was provisioned, but never terminated when the person left or the need for access ceased to exist. Some of the authorized keys are 10-20 years old, and typically about 10% of them grant root access or other privileged access. The vast majority of private user keys found in most environments do not have passphrases.”

OpenPubkey can be used to solve this problem by binding SSH keys to user identities. That way, the server can check whether the identity (alice@example.com) is allowed to connect to the server. This means that Alice can access her SSH server using SSO; she can log in to Example.com as alice@example.com and then gain access to the server as long as her SSO is valid.

OpenPubkey authentication can be added to SSH with a small change to the SSH config. No code changes to SSH are required. To try it out, or learn more about how OpenPubkey’s SSH works, see our recent post: How to Use OpenPubkey to SSH Without SSH Keys.
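
As a rough illustration of how small that configuration change is, an sshd_config could delegate key approval to an OpenPubkey verifier rather than a static authorized_keys file. The verifier binary name and arguments below are hypothetical; see the linked post for the real setup.

```
# Hypothetical: hand each offered key to an OpenPubkey verifier instead of
# consulting a static authorized_keys file. %u = user, %t = key type,
# %k = base64-encoded key; these tokens are standard sshd_config placeholders.
AuthorizedKeysCommand /usr/local/bin/opk-verify %u %t %k
AuthorizedKeysCommandUser nobody
```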

Secure messaging

OpenPubkey can also be used to solve one of the major issues with end-to-end encrypted messaging. Suppose someone sends you a message on a secure messaging app: How do you know they are actually that person? Some secure messaging apps let you look up the public key that is securing your communication, but how do you know that that public key is actually the public key of the person you want to privately communicate with?

This connection between public key and identity is the core problem that OpenPubkey solves. With OpenPubkey, Bob can learn the public key for alice@example.com by checking an ID Token signed by Example.com, which includes Alice’s public key and her email address. This does involve trusting Example.com, but you generally already have to trust Example.com to handle SSO for @example.com users.
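
As a hypothetical sketch (not any messaging app’s real API), the check Bob performs amounts to comparing key fingerprints from two sources: the key his app displays for “Alice” and the key bound to alice@example.com in a verified PK Token.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// fingerprint hashes an encoded public key so that keys from two sources
// can be compared, much like a messaging app's "safety number".
func fingerprint(encodedKey []byte) []byte {
	h := sha256.Sum256(encodedKey)
	return h[:]
}

func main() {
	appKey := []byte("public-key-shown-by-the-messaging-app")   // placeholder
	pkTokenKey := []byte("public-key-from-a-verified-pk-token") // placeholder
	if bytes.Equal(fingerprint(appKey), fingerprint(pkTokenKey)) {
		fmt.Println("match: the app's key really belongs to alice@example.com")
	} else {
		fmt.Println("mismatch: the conversation may not be with Alice")
	}
}
```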

While it’s not discussed here, OpenPubkey does support an optional protocol — the MFA cosigner — which removes the requirement of trusting the identity provider. But even without the MFA cosigner protocol, OpenPubkey provides stronger security for end-to-end encrypted messaging because it allows Bob to learn Alice’s public key directly from Alice’s identity provider.

Signing container images

OpenPubkey is not limited to human use cases. OpenPubkey developers are working on a solution to allow workflows (rather than people) to sign images using GitHub’s identity provider and GitHub Actions. You can learn more about this use case by reading How to Use OpenPubkey with GitHub Actions Workloads.

Help us expand the utility of OpenPubkey

These three use cases should not be seen as the limits of what OpenPubkey can do. This approach is highly flexible and can be used for VPNs, cosigning, container service meshes, cryptocurrencies, web applications, and even physical access. 

We invite anyone who wants to contribute to OpenPubkey to visit and star our GitHub repo. We are building an open and friendly community and welcome pull requests from anyone — see the contribution guidelines to learn more.    

Learn more

Docker Security Advisory: Multiple Vulnerabilities in runc, BuildKit, and Moby https://www.docker.com/blog/docker-security-advisory-multiple-vulnerabilities-in-runc-buildkit-and-moby/ Wed, 31 Jan 2024 20:05:23 +0000 https://www.docker.com/?p=51378 February 1 updates:

  • Docker Desktop 4.27.1 is now available and includes the patched runc, BuildKit, and Moby (Docker Engine) binaries.

January 31 updates:

  • Patches for runc, BuildKit, and Moby (Docker Engine) are now available.
  • Updates have been rolled out to Docker Build Cloud builders.

We at Docker prioritize the security and integrity of our software and the trust of our users. Security researchers at Snyk Labs recently identified and reported four security vulnerabilities in the container ecosystem. One of the vulnerabilities, CVE-2024-21626, concerns the runc container runtime, and the other three affect BuildKit (CVE-2024-23651, CVE-2024-23652, and CVE-2024-23653). We want to assure our community that our team, in collaboration with the reporters and open source maintainers, has been diligently working on coordinating and implementing necessary remediations.


We are committed to maintaining the highest security standards. We will publish patched versions of runc, BuildKit, and Moby on January 31 and release an update for Docker Desktop on February 1 to address these vulnerabilities. Additionally, our latest Moby and BuildKit releases will include fixes for CVE-2024-23650 and CVE-2024-24557, discovered respectively by an independent researcher and through Docker’s internal research initiatives.

Versions impacted:

  • runc: <= 1.1.11
  • BuildKit: <= 0.12.4
  • Moby (Docker Engine): <= 25.0.1 and <= 24.0.8
  • Docker Desktop: <= 4.27.0

These vulnerabilities can only be exploited if a user actively engages with malicious content by incorporating it into the build process or running a container from a suspect image (particularly relevant for the CVE-2024-21626 container escape vulnerability). Potential impacts include unauthorized access to the host filesystem, compromising the integrity of the build cache, and, in the case of CVE-2024-21626, a scenario that could lead to full container escape. 

We strongly urge all customers to prioritize security by applying the available updates as soon as they are released. Timely application of these updates is the most effective measure to safeguard your systems against these vulnerabilities and maintain a secure and reliable Docker environment.

What should I do if I’m on an affected version?

If you are using affected versions of runc, BuildKit, Moby, or Docker Desktop, make sure to update to the latest versions as soon as patched versions become available (all to be released no later than February 1 and linked in the following list):

Patched versions:

  • runc: >= 1.1.12
  • BuildKit: >= 0.12.5
  • Moby (Docker Engine): >= 25.0.2 and >= 24.0.9*
  • Docker Desktop: >= 4.27.1

* Only CVE-2024-21626 and CVE-2024-24557 were fixed in Moby 24.0.9.

If you are unable to update to an unaffected version promptly after it is released, follow these best practices to mitigate risk: 

  • Only use trusted Docker images (such as Docker Official Images).
  • Don’t build Docker images from untrusted sources or untrusted Dockerfiles.
  • If you are a Docker Business customer using Docker Desktop and unable to update to v4.27.1 immediately after it’s released, make sure to enable Hardened Docker Desktop features, such as Enhanced Container Isolation and Image Access Management.
  • For CVE-2024-23650, CVE-2024-23651, CVE-2024-23652, and CVE-2024-23653, avoid using a BuildKit frontend from an untrusted source. A frontend image is usually specified with the # syntax= directive at the top of your Dockerfile or with the --frontend flag when using the buildctl build command.
  • To mitigate CVE-2024-24557, make sure to either use BuildKit or disable caching when building images. From the CLI this can be done via the DOCKER_BUILDKIT=1 environment variable (default for Moby >= v23.0 if the buildx plugin is installed) or the --no-cache flag. If you are using the HTTP API directly or through a client, the same can be done by setting nocache to true or version to 2 for the /build API endpoint; a sketch using the Go SDK follows this list.
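
For API users, a minimal sketch with the Docker Go SDK might look like the following. The image tag and context path are placeholders; the two options mirror the nocache and version settings of the /build endpoint described above.

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The build context must be a tar stream; "context.tar" is a placeholder.
	buildCtx, err := os.Open("context.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer buildCtx.Close()

	resp, err := cli.ImageBuild(context.Background(), buildCtx, types.ImageBuildOptions{
		Tags:    []string{"myapp:latest"}, // placeholder tag
		NoCache: true,                     // same effect as nocache=true / --no-cache
		Version: types.BuilderBuildKit,    // same effect as version=2 (use BuildKit)
	})
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // stream the build output
}
```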

Technical details and impact

CVE-2024-21626 (High)

In runc v1.1.11 and earlier, certain leaked file descriptors allow an attacker to gain access to the host filesystem, either by causing a newly spawned container process (from runc exec) to have a working directory in the host filesystem namespace, or by tricking a user into running a malicious image whose container process gains access to the host filesystem through runc run. The attacks can also be adapted to overwrite semi-arbitrary host binaries, allowing for complete container escapes. Note that when using higher-level runtimes (such as Docker or Kubernetes), this vulnerability can be exploited by running a malicious container image without additional configuration or by passing specific workdir options when starting a container. In the case of Docker, the vulnerability can also be exploited from within Dockerfiles.

  • The issue has been fixed in runc v1.1.12.

CVE-2024-23651 (High)

In BuildKit <= v0.12.4, two malicious build steps running in parallel and sharing the same cache mounts with subpaths could cause a race condition that leads to files from the host system becoming accessible to the build container. This can only occur if a user builds a Dockerfile from a malicious project.

  • The issue will be fixed in BuildKit v0.12.5.

CVE-2024-23652 (High)

In BuildKit <= v0.12.4, a malicious BuildKit frontend or a Dockerfile using RUN --mount could trick the feature that removes empty files created for the mountpoints into removing a file outside the container, on the host system. This can only occur if a user builds a malicious Dockerfile.

  • The issue will be fixed in BuildKit v0.12.5.

CVE-2024-23653 (High)

In addition to running containers as build steps, BuildKit also provides APIs for running interactive containers based on built images. In BuildKit <= v0.12.4, it is possible to use these APIs to ask BuildKit to run a container with elevated privileges, even though running such containers is normally only allowed if the special security.insecure entitlement is both enabled in the buildkitd configuration and allowed by the user initializing the build request.

  • The issue will be fixed in BuildKit v0.12.5.

CVE-2024-23650 (Medium)

In BuildKit <= v0.12.4, a malicious BuildKit client or frontend could craft a request that causes the BuildKit daemon to crash with a panic.

  • The issue will be fixed in BuildKit v0.12.5.

CVE-2024-24557 (Medium)

In Moby <= v25.0.1 and <= v24.0.8, the classic builder cache system is prone to cache poisoning if the image is built FROM scratch. Additionally, changes to some instructions (most importantly HEALTHCHECK and ONBUILD) would not cause a cache miss. An attacker with knowledge of the Dockerfile someone is using could poison their cache by making them pull a specially crafted image that would be considered a valid cache candidate for some build steps.

  • The issue will be fixed in Moby >= v25.0.2 and >= v24.0.9.

How are Docker products affected? 

The following Docker products are affected. No other products are affected by these vulnerabilities.

Docker Desktop

Docker Desktop v4.27.0 and earlier are affected. Docker Desktop v4.27.1 will be released on February 1 and includes patches for the runc, BuildKit, and dockerd binaries. In addition to updating to this new version, we encourage all Docker users to be diligent about which Docker images and Dockerfiles they use and to ensure they build only from trusted content.

As always, you should check Docker Desktop system requirements for your operating system (Windows, Linux, Mac) before updating to ensure full compatibility.

Docker Build Cloud

Any new Docker Build Cloud builder instances will be provisioned with the latest Docker Engine and BuildKit versions after fixes are released and will, therefore, be unaffected by these CVEs. Docker will also be rolling out gradual updates to any existing builder instances.

Security at Docker

At Docker, we know that part of being developer-obsessed is providing secure software to developers. We appreciate the responsible disclosure of these vulnerabilities. If you’re aware of potential security vulnerabilities in any Docker product, report them to security@docker.com. For more information on Docker’s security practices, see our website.

Advisory links
