Wasm vs. Docker: Performant, Secure, and Versatile Containers https://www.docker.com/blog/wasm-vs-docker/ Thu, 09 May 2024 18:39:15 +0000 Docker and WebAssembly (Wasm) represent two pivotal technologies that have reshaped the software development landscape. You’ve probably started to hear more about Wasm in the past few years as it has gained in popularity, and perhaps you’ve also heard about the benefits of using it in your application stack. This may have led you to think about the differences between Wasm and Docker, especially because the technologies work together so closely.

In this article, we’ll explore how these two technologies can work together to enable you to deliver consistent, efficient, and secure environments for deploying applications. By marrying these two tools, developers can easily reap the performance benefits of WebAssembly with containerized software development.


What’s Wasm?

Wasm is a compact binary instruction format governed by the World Wide Web Consortium (W3C). It’s a portable compilation target for more than 40 programming languages, like C/C++, C#, JavaScript, Go, and Rust. In other words, Wasm is a bytecode format encoded to run on a stack-based virtual machine.

Similar to the way Java can be compiled to Java bytecode and executed on the Java Virtual Machine (JVM), which can then be compiled to run on various architectures, a program can be compiled to Wasm bytecode and then executed by a Wasm runtime, which can be packaged to run on different architectures, such as Arm and x86.

Figure 1: A program can be compiled to Wasm bytecode and then executed by a Wasm runtime, which can be packaged to run on different architectures, such as Arm and x86.
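To make the analogy concrete, here is a minimal sketch of that workflow, assuming the Rust toolchain and the Wasmtime CLI are installed (newer Rust toolchains may name the target wasm32-wasip1 instead of wasm32-wasi):

# Add the WASI compilation target and build a hello-world program to Wasm
rustup target add wasm32-wasi
cargo new hello && cd hello
cargo build --target wasm32-wasi --release

# Run the resulting .wasm binary with a Wasm runtime; the bytecode is
# architecture-neutral, so this works on Arm and x86 hosts alike
wasmtime target/wasm32-wasi/release/hello.wasm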

What’s a Wasm runtime?

Wasm runtimes bridge the gap between portable bytecode and the underlying hardware architecture. They also provide APIs to communicate with the host environment and provide interoperability between other languages, such as JavaScript.

At a high level, a Wasm runtime runs your bytecode in three semantic phases:

  1. Decoding: Processing the module to convert it to an internal representation
  2. Validation: Checking to see that the decoded module is valid
  3. Execution: Installing and invoking a valid module

Wasm runtime examples include Spin, Wasmtime, WasmEdge, and Wasmer. Major browsers also ship their own Wasm runtimes: Firefox uses SpiderMonkey and Chrome uses V8.

Why use Wasm?

To understand why you might want to use WebAssembly in your application stack, let’s examine its main benefits — notably, security without sacrificing performance, and versatility across use cases.

Security without sacrificing performance

Wasm enables code to run at near-native speed within a secure, sandboxed environment, protecting systems from malicious software. This performance is achieved through just-in-time (JIT) compilation of WebAssembly bytecode directly into machine code, bypassing the need for transpiling into an intermediate format. 

Wasm also uses shared linear memory — a contiguous block of memory that simplifies data exchange between modules or between WebAssembly and JavaScript. This design allows for efficient communication and enables developers to blend the flexibility of JavaScript with the robust performance of WebAssembly in a single application.

The security of this system is further enhanced by the design of the host runtime environment, which acts as a sandbox. It restricts the Wasm module from accessing anything outside of the designated memory space and from performing potentially dangerous operations like file system access, network requests, and system calls. WebAssembly’s requirement for explicit imports and exports to access host functionality adds another layer of control, ensuring a secure execution environment.

Use case versatility

Finally, WebAssembly is relevant for more than traditional web platforms (contrary to its name). It’s also an excellent tool for server-side applications, edge computing, game development, and cloud/serverless computing. If performance, security, or target device resources are a concern, consider using this compact binary format.

During the past few years, WebAssembly has become more prevalent on the server side because of the WebAssembly System Interface (or WASI). WASI is a modular API for Wasm that provides access to operating system features like files, filesystems, and clocks. 
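This capability-based model shows up directly on the command line: a module gets no filesystem access unless the host runtime explicitly grants it. A sketch with Wasmtime (hello.wasm is a hypothetical module; the --dir flag preopens a directory for the guest):

# Without --dir, attempts to open files under ./data fail inside the module
wasmtime hello.wasm

# Preopen ./data so the module can read and write within it, and nothing else
wasmtime --dir=./data hello.wasm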

Docker vs. Wasm: How are they related?

After reading about WebAssembly code, you might be wondering how Docker is relevant. Doesn’t WebAssembly handle sandboxing and portability? How does Docker fit in the picture? Let’s discuss further.

Docker helps developers build, run, and share applications — including those that use Wasm. This is especially true because Wasm is a complementary technology to Linux containers. However, handling these containers without solid developer experience can quickly become a roadblock to application development.

That’s where Docker comes in with a smooth developer experience for building with Wasm and/or Linux containers.

Benefits of using Docker and Wasm together

Using Docker and Wasm together affords great developer experience benefits as well, including:

  • Consistent development environments: Developers can use Docker to containerize their Wasm runtime environments. This approach allows for a consistent Wasm development and execution environment that works the same way across any machine, from local development to production.
  • Efficient deployment: By packaging Wasm applications within Docker, developers can leverage efficient image management and distribution capabilities. This makes deploying and scaling these types of applications easier across various environments.
  • Security and isolation: Although Docker isolates applications at the operating system level, Wasm provides a sandboxed execution environment. When used together, the technologies offer a robust layered security model against many common vulnerabilities.
  • Enhanced performance: Developers can use Docker containers to deploy Wasm applications in serverless architectures or as microservices. This lets you take advantage of Wasm’s performance benefits in a scalable and manageable way.

How to enable Wasm on Docker Desktop

If you’re interested in running WebAssembly containers, you’re in luck! Support for Wasm workloads is now in beta, and you can enable it on Docker Desktop by checking Enable Wasm on the Features in development tab under Settings (Figure 2).

Note: Make sure you have containerd image store support enabled first.

Figure 2: Enable Wasm in Docker Desktop.

After enabling Wasm in Docker Desktop, you’re ready to go. Docker currently supports many Wasm runtimes, including Spin, WasmEdge, and Wasmtime. You can also find detailed documentation that explains how to run these applications.
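With the beta enabled, running a Wasm workload looks much like running any other container. The following sketch follows the naming conventions in Docker’s Wasm documentation; the runtime flag selects the WasmEdge containerd shim, and the image is a community-published example, so adjust both to your setup:

docker run --rm \
  --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm \
  secondstate/rust-example-hello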

How Docker supports WebAssembly

To explain how Docker supports WebAssembly, we’ll need to quickly review how the Docker Engine works.

The Docker Engine builds on a higher-level container runtime called containerd. This runtime provides fundamental functionality to control the container lifecycle. Using a shim process, containerd can leverage runc (a low-level runtime) under the hood. Then, runc can interact directly with the operating system to manage various aspects of containers.


What’s neat about this design is that anyone can write a shim to integrate other runtimes with containerd, including WebAssembly runtimes. As a result, you can plug-and-play with various Wasm runtimes in Docker, like WasmEdge, Spin, and Wasmtime.

The future of WebAssembly and Docker

WebAssembly is continuously evolving, so you’ll need to keep a close eye on ecosystem developments to stay current. One recent advancement relates to how the new WebAssembly Component Model will impact shims for the various container runtimes. At Docker, we’re working to make it simple for developers to create Wasm containers and enhance the developer experience.

In a famous 2019 tweet thread, Docker founder Solomon Hykes described the future of cloud computing. In this future, he describes a world where Docker runs Windows, Linux, and WebAssembly containers side by side. Given all the recent developments in the ecosystem, that future is well and truly here.

Recent advancements include:

  • The launch of WASI Preview 2 fully rebased WASI on the component model type system and semantics: This makes it modular, fully virtualizable, and accessible to various source languages.
  • Fermyon, Microsoft, SUSE, LiquidReply, and others have also released the SpinKube open source project: The project provides a straightforward path for deploying Wasm-based serverless functions into Kubernetes clusters. Developers can use SpinKube with Docker via k3s, Rancher Labs’ lightweight Kubernetes distribution. Docker Desktop also includes the shim, which enables you to run these Kubernetes workloads on your local machine.

In 2024, we expect the combination of Wasm and containers to be highly regarded for its efficiency, scalability, and cost.

Wrapping things up

In this article, we explained how Docker and Wasm work together and how to use Docker for Wasm workloads. We’re excited to see Wasm’s adoption grow in the coming years and will continue to enhance our support to meet developers both where they’re at and where they’re headed. 

Check out the following related materials for details on Wasm and how it works with Docker:

Learn more

Thanks to Sohan Maheshwar, Developer Advocate Lead at Fermyon, for collaborating on this post.

containerd vs. Docker: Understanding Their Relationship and How They Work Together https://www.docker.com/blog/containerd-vs-docker/ Wed, 27 Mar 2024 13:41:27 +0000 During the past decade, containers have revolutionized software development by introducing higher levels of consistency and scalability. Now, developers can work without the challenges of dependency conflicts, environment inconsistency, and fragmented collaborative workflows.

When developers explore containerization, they might learn about container internals, architecture, and how everything fits together. And, eventually, they may find themselves wondering about the differences between containerd and Docker and how they relate to one another.

In this blog post, we’ll explain what containerd is, how Docker and containerd work together, and how their combined strengths can improve developer experience.


What’s a container?

Before diving into what containerd is, let’s briefly review what containers are. Simply put, containers are processes with added isolation and resource management. Containers get their own virtualized view of the operating system while retaining mediated access to host system resources.

Containers also use operating system kernel features. They use namespaces to provide isolation and cgroups to limit and monitor resources like CPU, memory, and network bandwidth. As you can imagine, container internals are complex, and not everyone has the time or energy to become an expert in the low-level bits. This is where container runtimes, like containerd, can help.
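You can poke at these kernel primitives directly. The sketch below assumes a Linux host with util-linux installed and, for the cgroup file path, a cgroup v2 setup:

# A new PID namespace: ps inside sees only its own process tree
sudo unshare --pid --fork --mount-proc sh -c 'ps aux'

# cgroups enforce resource limits; Docker exposes them via flags, and the
# resulting limit is visible from inside the container on a cgroup v2 host
docker run --rm --memory=256m alpine cat /sys/fs/cgroup/memory.max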

What’s containerd?

In short, containerd is a runtime built to run containers. This open source tool builds on top of operating system kernel features and improves container management with an abstraction layer, which manages namespaces, cgroups, union file systems, networking capabilities, and more. This way, developers don’t have to handle the complexities directly. 

In March 2017, Docker pulled its core container runtime into a standalone project called containerd and donated it to the Cloud Native Computing Foundation (CNCF).  By February 2019, containerd had reached the Graduated maturity level within the CNCF, representing its significant development, adoption, and community support. Today, developers recognize containerd as an industry-standard container runtime known for its scalability, performance, and stability.

Containerd is a high-level container runtime with many use cases. It’s perfect for handling container workloads across small-scale deployments, but it’s also well-suited for large, enterprise-level environments (including Kubernetes). 

A key component of containerd’s robustness is its default use of Open Container Initiative (OCI)-compliant runtimes. By using runtimes such as runc (a lower-level container runtime), containerd ensures standardization and interoperability in containerized environments. It also efficiently deals with core operations in the container life cycle, including creating, starting, and stopping containers.
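If containerd is installed on its own, you can exercise these lifecycle operations with ctr, the debugging CLI that ships with it (a sketch; the image reference and the task ID demo are arbitrary):

# Pull an image into containerd's image store, then create, start, and
# remove a container from it
sudo ctr image pull docker.io/library/alpine:latest
sudo ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"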

How is containerd related to Docker?

But how is containerd related to Docker? To answer this, let’s take a high-level look at Docker’s architecture (Figure 1). 

Containerd facilitates operations on containers by directly interfacing with your operating system. The Docker Engine sits on top of containerd and provides additional functionality and developer experience enhancements.

Figure 1: High-level Docker architecture, with the Docker Engine building on containerd and containerd leveraging runc to interact with the operating system.

How Docker interacts with containerd

To better understand this interaction, let’s talk about what happens when you run the docker run command:

  • After you press Enter, the Docker CLI will send the run command and any command-line arguments to the Docker daemon (dockerd) via a REST API call.
  • dockerd will parse and validate the request, and then it will check that things like container images are available locally. If they’re not, it will pull the image from the specified registry.
  • Once the image is ready, dockerd will shift control to containerd to create the container from the image.
  • Next, containerd will set up the container environment. This process includes tasks such as setting up the container file system, networking interfaces, and other isolation features.
  • containerd will then delegate running the container to runc using a shim process. This will create and start the container.
  • Finally, once the container is running, containerd will monitor the container status and manage the lifecycle accordingly.
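You can observe part of this chain on a Linux host: each running container is parented by a shim process rather than by dockerd. A quick sketch (the shim binary is typically named containerd-shim-runc-v2 on current installs, though the exact name can vary):

docker run -d --name demo nginx

# One shim process per running container sits between containerd and runc
ps -ef | grep containerd-shim

docker rm -f demo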

Docker and containerd: Better together 

Docker has played a key role in the creation and adoption of containerd, from its inception to its donation to the CNCF and beyond. This involvement helped standardize container runtimes and bolster the open source community’s involvement in containerd’s development. Docker continues to support the evolution of the open source container ecosystem by continuously maintaining and evolving containerd.

Containerd specializes in the core functionality of running containers. It’s a great choice for developers needing access to lower-level container internals and other advanced features. Docker builds on containerd to create a cohesive developer experience and comprehensive toolchain for building, running, testing, verifying, and sharing containers.

Build + Run

In development environments, tools like Docker Desktop, the Docker CLI, and Docker Compose allow developers to easily define, build, and run single or multi-container environments, and they integrate seamlessly with your favorite editors or IDEs and even your CI/CD pipeline.

Test

One of the largest developer experience pain points involves testing and environment consistency. With Testcontainers, developers don’t have to worry about reproducibility across environments (for example, dev, staging, testing, and production). Testcontainers also allows developers to use containers for isolated dependency management, parallel testing, and simplified CI/CD integration.

Verify

By analyzing your container images and creating a software bill of materials (SBOM), Docker Scout works with Docker Desktop, Docker Hub, or Docker CLI to help organizations shift left. It also empowers developers to find and fix software vulnerabilities in container images, ensuring a secure software supply chain.

Share

Docker Registry serves as a store for developers to push container images to a shared repository securely. This functionality streamlines image sharing, making maintaining consistency and efficiency in development and deployment workflows easier. 

With Docker building on top of containerd, the software development lifecycle benefits at every stage, from the inner loop and testing through to secure deployment in production.

Wrapping up

In this article, we discussed the relationship between Docker and containerd. We showed how containers, as isolated processes, leverage operating system features to provide efficient and scalable development and deployment solutions. We also described what containerd is and explained how Docker leverages containerd in its stack. 

Docker builds upon containerd to enhance the developer experience, offering a comprehensive suite of tools for the entire development lifecycle across building, running, verifying, sharing, and testing containers. 

Start your next projects with containerd and other container components by checking out Docker’s open source projects and most popular open source tools

Learn more

11 Years of Docker: Shaping the Next Decade of Development https://www.docker.com/blog/docker-11-year-anniversary/ Thu, 21 Mar 2024 15:31:36 +0000 Eleven years ago, Solomon Hykes walked onto the stage at PyCon 2013 and revealed Docker to the world for the first time. The problem Docker was looking to solve? “Shipping code to the server is hard.”

And the world of application software development changed forever.


Docker was built on the shoulders of giants of the Linux kernel, copy-on-write file systems, and developer-friendly git semantics. The result? Docker has fundamentally transformed how developers build, share, and run applications. By “dockerizing” an app and its dependencies into a standardized, open format, Docker dramatically lowered the friction between devs and ops, enabling devs to focus on their apps — what’s inside the container — and ops to focus on deploying any app, anywhere — what’s outside the container, in a standardized format. Furthermore, this standardized “unit of work” that abstracts the app from the underlying infrastructure enables an “inner loop” for developers of code, build, test, verify, and debug, which results in 13X more frequent releases of higher-quality, more secure updates.

The subsequent energy over the past 11 years from the ecosystem of developers, community, open source maintainers, partners, and customers cannot be overstated, and we are so thankful and appreciative of your support. This has shown up in many ways, including the following:

  • Ranked #1 “most-wanted” tool/platform by Stack Overflow’s developer community for the past four years
  • 26 million monthly active IPs accessing 15 million repos on Docker Hub, pulling them 25 billion times per month
  • 17 million registered developers
  • Moby project has 67.5k stars, 18.5k forks, and more than 2,200 contributors; Docker Compose has 32.1k stars and 5k forks
  • A vibrant network of 70 Docker Captains across 25 countries serving 167 community meetup groups with more than 200k members and 4800 total meetups
  • 79,000+ customers

The next decade

In our first decade, we changed how developers build, share, and run any app, anywhere — and we’re only accelerating in our second!

Specifically, you’ll see us double down on meeting development teams where they are to enable them to rapidly ship high-quality, secure apps via the following focus areas:

  • Dev Team Productivity. First, we’ll continue to help teams take advantage of the right tech for the right job — whether that’s Linux containers, Windows containers, serverless functions, and/or Wasm (WebAssembly) — with the tools they love and the skills they already possess. Second, by bringing together the best of local and the best of cloud, we’ll enable teams to discover and address issues even faster in the “inner loop,” as you’re already seeing today with our early efforts with Docker Scout, Docker Build Cloud, and Testcontainers Cloud.
  • GenAI. This tech is ushering in a “golden age” for development teams, and we’re investing to help in two areas: First, our GenAI Stack — built through collaboration with partners Ollama, LangChain, and Neo4j — enables dev teams to quickly stand up secure, local GenAI-powered apps. Second, our Docker AI is uniquely informed by anonymized data from dev teams using Docker, which enables us to deliver automations that eliminate toil and reduce security risks.
  • Software Supply Chain. The software supply chain is heterogeneous, extensive, and complex for dev teams to navigate and secure, and Docker will continue to help simplify, make more visible, and manage it end-to-end. Whether it’s the trusted content “building blocks” of Docker Official Images (DOI) in Docker Hub, the transformation of ingredients into runnable images via BuildKit, verifying and securing the dev environment with digital signatures and enhanced container isolation, consuming metadata feedback from running containers in production, or making the entire end-to-end supply chain visible and issues actionable in Docker Scout, Docker has it covered and helps make a more secure internet!

Dialing it past 11

While our first decade was fantastic, there’s so much more we can do together as a community to serve app development teams, and we couldn’t be more excited as our second decade together gets underway and we dial it past 11! If you haven’t already, won’t you join us today?!

How has Docker influenced your approach to software development? Share your experiences with the community and join the conversation on LinkedIn.

Let’s build, share, and run — together!

Docker Partners with NVIDIA to Support Building and Running AI/ML Applications https://www.docker.com/blog/docker-nvidia-support-building-running-ai-ml-apps/ Mon, 18 Mar 2024 22:06:12 +0000 The domain of GenAI and LLMs has been democratized: tasks that were once purely the province of AI/ML specialists must now be woven by regular application developers into everyday products and business logic. This is leading to new products and services across banking, security, healthcare, and more with generative text, images, and videos. Moreover, GenAI’s potential economic impact is substantial, with estimates that it could add trillions of dollars annually to the global economy.

Docker offers an ideal way for developers to build, test, run, and deploy the NVIDIA AI Enterprise software platform — an end-to-end, cloud-native software platform that brings generative AI within reach for every business. The platform is available to use in Docker containers, deployable as microservices. This enables teams to focus on cutting-edge AI applications where performance isn’t just a goal — it’s a necessity.

This week, at the NVIDIA GTC global AI conference, the latest release of NVIDIA AI Enterprise was announced, providing businesses with the tools and frameworks necessary to build and deploy custom generative AI models with NVIDIA AI foundation models, the NVIDIA NeMo framework, and the just-announced NVIDIA NIM inference microservices, which deliver enhanced performance and efficient runtime. 

This blog post summarizes some of the Docker resources available to customers today.


Docker Hub

Docker Hub is the world’s largest repository for container images with an extensive collection of AI/ML development-focused container images, including leading frameworks and tools such as PyTorch, TensorFlow, Langchain, Hugging Face, and Ollama. With more than 100 million pull requests for AI/ML-related images, Docker Hub’s significance to the developer community is self-evident. It not only simplifies the development of AI/ML applications but also democratizes innovation, making AI technologies accessible to developers across the globe.

NVIDIA’s Docker Hub library offers a suite of container images that harness the power of accelerated computing, supplementing NVIDIA’s API catalog. Docker Hub’s vast audience — which includes approximately 27 million monthly active IPs, showcasing an impressive 47% year-over-year growth — can use these container images to enhance AI performance. 

Docker Hub’s extensive reach, underscored by an astounding 26 billion monthly image pulls, suggests immense potential for continued growth and innovation.

Docker Desktop with NVIDIA AI Workbench

Docker Desktop on Windows and Mac helps deliver a smooth experience for NVIDIA AI Workbench developers on local and remote machines.

NVIDIA AI Workbench is an easy-to-use toolkit that allows developers to create, test, and customize AI and machine learning models on their PC or workstation and scale them to the data center or public cloud. It simplifies interactive development workflows while automating technical tasks that halt beginners and derail experts. AI Workbench makes workstation setup and configuration fast and easy. Example projects are also included to help developers get started even faster with their own data and use cases.   

Docker engineering teams are collaborating with NVIDIA to improve the user experience with NVIDIA GPU-accelerated platforms through recent improvements to the AI Workbench installation on WSL2.

To see how NVIDIA AI Workbench can be used locally to tune a generative image model to produce more accurate prompted results, check out NVIDIA’s video “NVIDIA AI Workbench | Fine Tuning Generative AI.”

In a near-term update, AI Workbench will use the Container Device Interface (CDI) to govern local and remote GPU-enabled environments. CDI is a CNCF-sponsored project led by NVIDIA and Intel, which exposes NVIDIA GPUs inside of containers to support complex device configurations and CUDA compatibility checks. This simplifies how research, simulation, GenAI, and ML applications utilize local and cloud-native GPU resources.  

With Docker Desktop 4.29 (which includes Moby 25), developers can configure CDI support in the daemon and then easily make all NVIDIA GPUs available in a running container by using the --device option via support for CDI devices.

docker run --device nvidia.com/gpu=all <image> <command>
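For CDI device names like nvidia.com/gpu=all to resolve, a CDI specification describing the host’s GPUs must exist. NVIDIA’s container toolkit can generate one; this sketch follows nvidia-ctk’s documented defaults, so verify the paths and flags against your installed version:

# Generate a CDI spec for the local NVIDIA GPUs, then list the device names
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
nvidia-ctk cdi list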

LLM-powered apps with Docker GenAI Stack

The Docker GenAI Stack lets teams easily integrate NVIDIA accelerated computing into their AI workflows. This stack, designed for seamless component integration, can be set up on a developer’s laptop using Docker Desktop for Windows. It helps deliver the power of NVIDIA GPUs and NVIDIA NIM to accelerate LLM inference, providing tangible improvements in application performance. Developers can experiment and modify five pre-packaged applications to leverage the stack’s capabilities.

Accelerate AI/ML development with Docker Desktop

Docker Desktop facilitates an accelerated machine learning development environment on a developer’s laptop. By tapping NVIDIA GPU support for containers, developers can leverage tools distributed via Docker Hub, such as PyTorch and TensorFlow, to see significant speed improvements in their projects, underscoring the efficiency gains possible with NVIDIA technology on Docker.

Securing the software supply chain

Securing the software supply chain is a crucial aspect of continuously developing ML applications that can run reliably and securely in production. Building with verified, trusted content from Docker Hub and staying on top of security issues through actionable insights from Docker Scout is key to improving security posture across the software supply chain. By following these best practices, customers can minimize the risk of security issues hitting production, improving the overall reliability and integrity of applications running in production. This comprehensive approach not only accelerates the development of ML applications built with the Docker GenAI Stack but also allows for more secure images when building on images sourced from Hub that interface with LLMs, such as LangChain. Ultimately, this provides developers with the confidence that their applications are built on a secure and reliable foundation.

“With exploding interest in AI from a huge range of developers, we are excited to work with NVIDIA to build tooling that helps accelerate building AI applications. The ecosystem around Docker and NVIDIA has been building strong foundations for many years, and this is enabling a new community of enterprise AI/ML developers to explore and build GPU-accelerated applications.”

Justin Cormack, Chief Technology Officer, Docker

“Enterprise applications like NVIDIA AI Workbench can benefit enormously from the streamlining that Docker Desktop provides on local systems. Our work with the Docker team will help improve the AI Workbench user experience for managing GPUs on Windows.”

Tyler Whitehouse, Principal Product Manager, NVIDIA

Learn more 

By leveraging Docker Desktop and Docker Hub with NVIDIA technologies, developers are equipped to harness the revolutionary power of AI, grow their skills, and seize opportunities to deliver innovative applications that push the boundaries of what’s possible. Check out NVIDIA’s Docker Hub library  and NVIDIA AI Enterprise to get started with your own AI solutions.

Are Containers Only for Microservices? Myth Debunked https://www.docker.com/blog/are-containers-only-for-microservices-myth-debunked/ Wed, 06 Mar 2024 14:44:54 +0000 In the ever-evolving software delivery landscape, containerization has emerged as a transformative force, reshaping how organizations build, test, deploy, and manage their applications.

Whether you are maintaining a monolithic legacy system, navigating the complexities of Service-Oriented Architecture (SOA), or orchestrating your digital strategy around application programming interfaces (APIs), containerization offers a pathway to increased efficiency, resilience, and agility. 

In this post, we’ll debunk the myth that containerization is solely the domain of microservices by exploring its applicability and advantages across different architectural paradigms. 


Containerization across architectures

Although containerization is commonly associated with microservices architecture because of its agility and scalability, the potential extends far beyond, offering compelling benefits to a variety of architectural styles. From the tightly integrated components of monolithic applications to the distributed nature of SOA and the strategic approach of API-led connectivity, containerization stands as a universal tool, adaptable and beneficial across the board.

Beyond the immediate benefits of improved resource utilization, faster deployment cycles, and streamlined maintenance, the true value of containerization lies in its ability to ensure consistent application performance across varied environments. This consistency is a cornerstone for reliability and efficiency, pivotal in today’s fast-paced software delivery demands.

Here, we will provide examples of how this technology can be a game-changer for your digital strategy, regardless of your adopted style. Through this exploration, we invite technology leaders and executives to broaden their perspective on containerization, seeing it not just as a tool for one architectural approach but as a versatile ally in the quest for digital excellence.

1. Event-driven architecture

Event-driven architecture (EDA) represents a paradigm shift in how software components interact, pivoting around the concept of events — such as state changes or specific action occurrences — as the primary conduit for communication. This architectural style fosters loose coupling, enabling components to operate independently and react asynchronously to events, thereby augmenting system flexibility and agility. EDA’s intrinsic support for scalability, by allowing components to address fluctuating workloads independently, positions it as an ideal candidate for handling dynamic system demands.

Within the context of EDA, containerization emerges as a critical enabler, offering a streamlined approach to encapsulate applications alongside their dependencies. This encapsulation guarantees that each component of an event-driven system functions within a consistent, isolated environment — a crucial factor when managing components with diverse dependency requirements. Containers’ scalability becomes particularly advantageous in EDA, where fluctuating event volumes necessitate dynamic resource allocation. By deploying additional container instances in response to increased event loads, systems maintain high responsiveness levels.

Moreover, containerization amplifies the deployment flexibility of event-driven components, ensuring consistent event generation and processing across varied infrastructures (Figure 1). This adaptability facilitates the creation of agile, scalable, and portable architectures, underpinning the deployment and management of event-driven components with a robust, flexible infrastructure. Through containerization, EDA systems achieve enhanced operational efficiency, scalability, and resilience, embodying the principles of modern, agile application delivery.

Figure 1: Event-driven architecture.

2. API-led architecture

API-led connectivity represents a strategic architectural approach focused on the design, development, and management of APIs to foster seamless connectivity and data exchange across various systems, applications, and services within an organization (Figure 2). This methodology champions a modular and scalable framework ideal for the modern digital enterprise.

The principles of API-led connectivity — centering on system, process, and experience APIs — naturally harmonize with the benefits of containerization. By encapsulating each API within its container, organizations can achieve unparalleled modularity and scalability. Containers offer an isolated runtime environment for each API, ensuring operational independence and eliminating the risk of cross-API interference. This isolation is critical, as it guarantees that modifications or updates to one API can proceed without adversely affecting others, which is a cornerstone of maintaining a robust API-led ecosystem.

Moreover, the dual advantages of containerization — ensuring consistent execution environments and enabling easy scalability — align perfectly with the goals of API-led connectivity. This combination not only simplifies the deployment and management of APIs across diverse environments but also enhances the resilience and flexibility of the API infrastructure. Together, API-led connectivity and containerization empower organizations to develop, scale, and manage their API ecosystems more effectively, driving efficiency and innovation in application delivery.

Figure 2: API-led architecture.

3. Service-oriented architecture

Service-oriented architecture (SOA) is a design philosophy that emphasizes the use of discrete services within an architecture to provide business functionalities. These services communicate through well-defined interfaces and protocols, enabling interoperability and facilitating the composition of complex applications from independently developed services. SOA’s focus on modularity and reusability makes it particularly amenable to the benefits offered by containerization.

Containerization brings a new dimension of flexibility and efficiency to SOA by encapsulating these services into containers. This encapsulation provides an isolated environment for each service, ensuring consistent execution regardless of the deployment environment. Such isolation is crucial for maintaining the integrity and availability of services, particularly in complex, distributed architectures where services must communicate across different platforms and networks.

Moreover, containerization enhances the scalability and manageability of SOA-based systems. Containers can be dynamically scaled to accommodate varying loads, enabling organizations to respond swiftly to changes in demand. This scalability, combined with the ease of deployment and rollback provided by container orchestration platforms, supports the agile delivery and continuous improvement of services.

The integration of containerization with SOA essentially results in a more resilient, scalable, and manageable architecture. It enables organizations to leverage the full potential of SOA by facilitating faster deployment, enhancing performance, and simplifying the lifecycle management of services. Together, SOA and containerization create a powerful framework for building flexible, future-proof applications that can adapt to the evolving needs of the business.

4. Monolithic applications

Contrary to common perceptions, monolithic applications stand to gain significantly from containerization. This technology can encapsulate the full application stack — including the core application, its dependencies, libraries, and runtime environment within a container. This encapsulation ensures uniformity across various stages of the development lifecycle, from development and testing to production, effectively addressing the infamous ‘it works on my machine’ challenge. Such consistency streamlines the deployment process and simplifies scaling efforts, which is particularly beneficial for applications that need to adapt quickly to changing demands.

Moreover, containerization fosters enhanced collaboration among development teams by standardizing the operational environment, thereby minimizing discrepancies that typically arise from working in divergent development environments. This uniformity is invaluable in accelerating development cycles and improving product reliability.

Perhaps one of the most strategic benefits of containerization for monolithic architectures is the facilitation of a smoother transition to microservices. By containerizing specific components of the monolith, organizations can incrementally decompose their application into more manageable, loosely coupled microservices. This approach not only mitigates the risks associated with a full-scale migration but also allows teams to gradually adapt to microservices’ architectural patterns and principles.

Containerization presents a compelling proposition for monolithic applications, offering a pathway to modernization that enhances deployment efficiency, operational consistency, and the flexibility to evolve toward a microservices-oriented architecture. Through this lens, containerization is not just a tool for new applications but a bridge that allows legacy applications to step into the future of software development.

Conclusion

The journey of modern software development, with its myriad architectural paths, is markedly enhanced by the adoption of containerization. This technology transcends architectural boundaries, bringing critical advantages such as isolation, scalability, and portability to the forefront of application delivery. Whether your environment is monolithic, service-oriented, event-driven, or API-led, containerization aligns perfectly with the ethos of modern, distributed, and cloud-native applications. 

By embracing the adaptability and transformative potential of containerization, you can open your architectures to a future where agility, efficiency, and resilience are not just aspirations but achievable realities. Begin your transformative journey with Docker Desktop today and redefine what’s possible within the bounds of your existing architectural framework.

Learn more

Azure Container Registry and Docker Hub: Connecting the Dots with Seamless Authentication and Artifact Cache https://www.docker.com/blog/azure-container-registry-and-docker-hub-connecting-the-dots-with-seamless-authentication-and-artifact-cache/ Thu, 29 Feb 2024 14:48:05 +0000 By leveraging the wide array of public images available on Docker Hub, developers can accelerate development workflows, enhance productivity, and, ultimately, ship scalable applications that run like clockwork. When building with public content, acknowledging the potential operational risks associated with using that content without proper authentication is crucial.

In this post, we will describe best practices for mitigating these risks and ensuring the security and reliability of your containers.


Import public content locally

There are several advantages to importing public content locally. Doing so improves the availability and reliability of your public content pipeline and protects you from failed CI builds. By importing your public content, you can easily validate, verify, and deploy images to help run your business more reliably.

For more information on this best practice, check out the Open Container Initiative’s guide on Consuming Public Content.
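In practice, importing an image means pulling it, retagging it for your registry, and pushing it, or letting ACR perform the copy server-side. A sketch with placeholder registry and repository names:

# Option 1: pull, retag, and push through the Docker CLI
docker pull nginx:latest
docker tag nginx:latest myregistry.azurecr.io/mirror/nginx:latest
docker push myregistry.azurecr.io/mirror/nginx:latest

# Option 2: server-side import with the Azure CLI
az acr import --name myregistry \
  --source docker.io/library/nginx:latest \
  --image mirror/nginx:latest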

Configure Artifact Cache to consume public content

Another best practice is to configure Artifact Cache to consume public content. Azure Container Registry’s (ACR) Artifact Cache feature allows you to cache your container artifacts in your own Azure Container Registry, even for private networks. This approach limits the impact of rate limits and dramatically increases pull reliability when combined with geo-replicated ACR, allowing you to pull artifacts from the region closest to your Azure resource. 

Additionally, ACR offers various security features, such as private networks, firewall configuration, service principals, and more, which can help you secure your container workloads. For complete information on using public content with ACR Artifact Cache, refer to the Artifact Cache technical documentation.

Authenticate pulls with public registries

We recommend authenticating your pull requests to Docker Hub using subscription credentials. Docker Hub offers developers the ability to authenticate when building with public library content. Authenticated users also have access to pull content directly from private repositories. For more information, visit the Docker subscriptions page. Microsoft Artifact Cache also supports authenticating with other public registries, providing an additional layer of security for your container workloads.
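On the Docker Hub side, authenticating the CLI is a one-time docker login (a sketch; use a personal access token rather than your account password):

docker login -u <your-dockerhub-username>
# Paste an access token at the prompt; subsequent pulls are then authenticated
docker pull nginx:latest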

Following these best practices when using public content from Docker Hub can help mitigate security and reliability risks in your development and operational cycles. By importing public content locally, configuring Artifact Cache, and setting up preferred authentication methods, you can ensure your container workloads are secure and reliable.

Learn more about securing containers

Additional resources for improving container security for Microsoft and Docker customers

How to Use Testcontainers on Jenkins CI https://www.docker.com/blog/how-to-use-testcontainers-on-jenkins-ci/ Mon, 26 Feb 2024 15:22:16 +0000 Releasing software often and with confidence relies on a strong continuous integration and continuous delivery (CI/CD) process that includes the ability to automate tests. Jenkins offers an open source automation server that facilitates such releases.

In this article, we will explore how you can run tests based on the open source Testcontainers framework in a Jenkins pipeline using Docker and Testcontainers Cloud


Jenkins, which streamlines the development process by automating the building, testing, and deployment of code changes, is widely adopted in the DevOps ecosystem. It supports a vast array of plugins, enabling integration with various tools and technologies, making it highly customizable to meet specific project requirements.

Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.

Testcontainers also provides support for many popular programming languages, including Java, Go, .NET, Node.js, Python, and more. This article will show how to test a Java Spring Boot application (testcontainers-showcase) using Testcontainers in a Jenkins pipeline. Please fork the repository into your GitHub account. To run Testcontainers-based tests, a Testcontainers-supported container runtime, like Docker, needs to be available to agents.

Note: As Jenkins CI servers are mostly run on Linux machines, the following configurations are tested on a Linux machine only.

Docker containers as Jenkins agents

Let’s see how to use dynamic Docker container-based agents. To be able to use Docker containers as agents, install the Docker Pipeline plugin

Now, let’s create a file with name Jenkinsfile in the root of the project with the following content:

pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
            args '--network host -u root -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }

    triggers { pollSCM 'H/2 * * * *' } // poll every 2 mins

    stages {
        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}

We are using the eclipse-temurin:17.0.9_9-jdk-jammy Docker container as an agent to run the builds for this pipeline. Note that we are mapping the host’s Unix Docker socket as a volume with root user permissions to make it accessible to the agent, but this can potentially be a security risk.

Add the Jenkinsfile and push the changes to the Git repository.

Now, go to the Jenkins Dashboard and select New Item to create the pipeline. Follow these steps:

  • Enter testcontainers-showcase as pipeline name.
  • Select Pipeline as job type.
  • Select OK.
  • Under Pipeline section:
      • Branches to build: Branch Specifier (blank for ‘any’): */main.
      • Script Path: Jenkinsfile.
  • Select Save.
  • Choose Build Now to trigger the pipeline for the first time.

The pipeline should run the Testcontainers-based tests successfully in a container-based agent using the Docker-out-of-Docker configuration (the host’s Docker socket mounted into the agent).

Kubernetes pods as Jenkins agents

While running Testcontainers-based tests on Kubernetes pods, you can run a Docker-in-Docker (DinD) container as a sidecar. To use Kubernetes pods as Jenkins agents, install the Kubernetes plugin.

Now you can create the Jenkins pipeline using Kubernetes pods as agents as follows:

def pod =
"""
apiVersion: v1
kind: Pod
metadata:
 labels:
   name: worker
spec:
 serviceAccountName: jenkins
 containers:
   - name: java17
     image: eclipse-temurin:17.0.9_9-jdk-jammy
     resources:
       requests:
         cpu: "1000m"
         memory: "2048Mi"
     imagePullPolicy: Always
     tty: true
     command: ["cat"]
   - name: dind
     image: docker:dind
     imagePullPolicy: Always
     tty: true
     env:
       - name: DOCKER_TLS_CERTDIR
         value: ""
     securityContext:
       privileged: true
"""

pipeline {
   agent {
       kubernetes {
           yaml pod
       }
   }
   environment {
       DOCKER_HOST = 'tcp://localhost:2375'
       DOCKER_TLS_VERIFY = 0
   }

   stages {
       stage('Build and Test') {
           steps {
               container('java17') {
                   script {
                       sh "./mvnw verify"
                   }
               }
           }
       }
   }
}

Although we can use a Docker-in-Docker based configuration to make the Docker environment available to the agent, this setup brings configuration complexities and security risks:

  • By volume mounting the host’s Docker Unix socket (Docker-out-of-Docker) with the agents, the agents have direct access to the host Docker engine.
  • When using the DooD approach, file sharing via bind mounts doesn’t work as expected because the containerized app and the Docker engine operate in different filesystem contexts.
  • The Docker-in-Docker (DinD) approach requires the use of insecure privileged containers.

You can watch the Docker-in-Docker: Containerized CI Workflows presentation to learn more about the challenges of a Docker-in-Docker based CI setup.

This is where Testcontainers Cloud comes into the picture, making it simpler and more reliable to run Testcontainers-based tests.

By using Testcontainers Cloud, you don’t even need a Docker daemon running on the agent. Containers will be run in on-demand cloud environments so that you don’t need to use powerful CI agents with high CPU/memory for your builds.

Let’s see how to use Testcontainers Cloud with minimal setup and run Testcontainers-based tests.

Testcontainers Cloud-based setup

Testcontainers Cloud helps you run Testcontainers-based tests at scale by spinning up the dependent services as Docker containers on the cloud and having your tests connect to those services.

If you don’t have a Testcontainers Cloud account already, you can create an account and get a Service Account Token as follows:

  1. Sign up for a Testcontainers Cloud account.
  2. Once logged in, create an organization.
  3. Navigate to the Testcontainers Cloud dashboard and generate a Service account (Figure 1).
Figure 1: Create a new Testcontainers Cloud service account.

To use Testcontainers Cloud, we need to start a lightweight testcontainers-cloud agent by passing TC_CLOUD_TOKEN as an environment variable.

You can store the TC_CLOUD_TOKEN value as a secret in Jenkins as follows:

  • From the Dashboard, select Manage Jenkins.
  • Under Security, choose Credentials.
  • You can create a new domain or use System domain.
  • Under Global credentials, select Add credentials.
  • Select Kind as Secret text.
  • Enter TC_CLOUD_TOKEN value in Secret.
  • Enter tc-cloud-token-secret-id as ID.
  • Select Create.

Next, you can update the Jenkinsfile as follows:

pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
        }
    }

    triggers { pollSCM 'H/2 * * * *' }

    stages {
        stage('TCC SetUp') {
            environment {
                TC_CLOUD_TOKEN = credentials('tc-cloud-token-secret-id')
            }
            steps {
                sh "curl -fsSL https://get.testcontainers.cloud/bash | sh"
            }
        }

        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}

We have set the TC_CLOUD_TOKEN environment variable using the value from the tc-cloud-token-secret-id credential we created, and we start a Testcontainers Cloud agent before running our tests.

Now if you commit and push the updated Jenkinsfile, then the pipeline will run the tests using Testcontainers Cloud. You should see log statements similar to the following indicating that the Testcontainers-based tests are using Testcontainers Cloud instead of the default Docker daemon.

14:45:25.748 [testcontainers-lifecycle-0] INFO  org.testcontainers.DockerClientFactory - Connected to docker: 
  Server Version: 78+testcontainerscloud (via Testcontainers Desktop 1.5.5)
  API Version: 1.43
  Operating System: Ubuntu 20.04 LTS
  Total Memory: 7407 MB

You can also leverage Testcontainers Cloud’s Turbo mode in conjunction with build tools that feature parallel run capabilities to run tests even faster.

In the case of Maven, you can use the -DforkCount=N system property to specify the degree of parallelization. For Gradle, you can specify the degree of parallelization using the maxParallelForks property.

We can enable parallel execution of our tests using four forks in Jenkinsfile as follows:

stage('Build and Test') {
    steps {
        sh './mvnw verify -DforkCount=4'
    }
}
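For a Gradle-based project, the equivalent knob lives on the Test task; a sketch of a build.gradle fragment:

// build.gradle: fork up to four JVMs when running the test task
test {
    maxParallelForks = 4
}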

For more information, check out the article on parallelizing your tests with Turbo mode.

Conclusion

In this article, we have explored how to run Testcontainers-based tests on Jenkins CI using dynamic containers and Kubernetes pods as agents with Docker-out-of-Docker and Docker-in-Docker based configuration. 

Then we learned how to create a Testcontainers Cloud account and configure the pipeline to run tests using Testcontainers Cloud. We also explored leveraging Testcontainers Cloud Turbo mode combined with your build tool’s parallel execution capabilities. 

Although we have demonstrated this setup using a Java project as an example, Testcontainers libraries exist for other popular languages, too, and you can follow the same pattern of configuration to run your Testcontainers-based tests on Jenkins CI in Golang, .NET, Python, Node.js, etc.

Get started with Testcontainers Cloud by creating a free account at the website.

Learn more

Scaling Docker Compose Up https://www.docker.com/blog/scaling-docker-compose-up/ Tue, 06 Feb 2024 14:02:50 +0000 Docker Compose‘s simplicity — just run compose up — has been an integral part of developer workflows for a decade, with the first commit occurring in 2013, back when it was called Plum. Although the feature set has grown dramatically in that time, maintaining that experience has always been integral to the spirit of Compose.

In this post, we’ll walk through how to manage microservice sprawl with Docker Compose by importing subprojects from other Git repos.


Maintaining simplicity

Now, perhaps more than ever, that simplicity is key. The complexity of modern software development is undeniable regardless of whether you’re using microservices or a monolith, deploying to the cloud or on-prem, or writing in JavaScript or C. 

Compose has not kept up with this “development sprawl” and is even sometimes an obstacle when working on larger, more complex projects. Maintaining Compose to accurately represent your increasingly complex application can require its own expertise, often resulting in out-of-date configuration in YAML or complex makefile tasks.

As an open source project, Compose serves everyone from home lab enthusiasts to transcontinental corporations, which is no small feat, and our commitment to maintaining Compose’s signature simplicity for all users hasn’t changed.

The increased flexibility afforded by Compose watch and include means your project no longer needs to be one-size-fits-all. Now, it’s possible to split your project across Git repos and import services as needed, customizing their configuration in the process.

Application architecture

Let’s take a look at a hypothetical application architecture. To begin, the application is split across two Git repos:

  • backend — Backend in Python/Flask
  • frontend — Single-page app (SPA) frontend in JavaScript/Node.js

While working on the frontend, the developers run without Docker or Compose, launching npm start directly on their laptops and proxying API requests to a shared staging server (as opposed to running the backend locally). Meanwhile, while working on the backend, developers and CI (for integration tests) share a Compose file and rely on command-line tools like cURL to manually test functionality locally.

We’d like a flexible configuration that enables each group of developers to use their optimal workflow (e.g., leveraging hot reload for the frontend) while also allowing reuse to share project configuration between repos. At first, this seems like an impossible situation to resolve.

Frontend

We can start by adding a compose.yaml file to frontend:

services:
  frontend:
    pull_policy: build
    build:
      context: .
    environment:
      BACKEND_HOST: ${BACKEND_HOST:-https://staging.example.com}
    ports:
      - 8000:8000

Note: If you’re wondering what the Dockerfile looks like, take a look at this samples page for an up-to-date example of best practices generated by docker init.

This is a great start! Running docker compose up will now build the Node.js frontend and make it accessible at http://localhost:8000/.

The BACKEND_HOST environment variable can be used to control where upstream API requests are proxied to and defaults to our shared staging instance.
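
For example, a developer who wants to proxy to a backend running locally rather than staging can override the default at launch; the URL here is illustrative:

# Point the frontend at a locally running backend instead of staging
BACKEND_HOST=http://localhost:8080 docker compose up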

Unfortunately, we’ve lost the great developer experience afforded by hot module reload (HMR) because everything is inside the container. By adding a develop.watch section, we can preserve that:

services:
  frontend:
    pull_policy: build
    build:
      context: .
    environment:
      BACKEND_HOST: ${BACKEND_HOST:-https://staging.example.com}
    ports:
      - 8000:8000
    develop:
      watch:
        - path: package.json
          action: rebuild
        - path: src/
          target: /app/src
          action: sync

Now, while working on the frontend, developers continue to benefit from rapid iteration cycles thanks to HMR. Whenever a file is modified locally in the src/ directory, it’s synchronized into the container at /app/src.

If the package.json file is modified, the entire container is rebuilt, so that the RUN npm install step in the Dockerfile will be re-executed and install the latest dependencies. The best part is the only change to the workflow is running docker compose watch instead of npm start.
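
The switch, side by side:

# Before: hot reload on the host
npm start

# After: hot reload in the container, plus automatic rebuilds on dependency changes
docker compose watch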

Backend

Now, let’s set up a Compose file in backend:

services:
  backend:
    pull_policy: build
    build:
      context: .
    ports:
      - 1234:8080
    develop:
      watch:
        - path: requirements.txt
          action: rebuild
        - path: ./
          target: /app/
          action: sync

include:
  - path: git@github.com:myorg/frontend.git
    env_file: frontend.env

frontend.env

BACKEND_HOST=http://backend:8080

Much of this looks very similar to the frontend compose.yaml.

When files in the project directory change locally, they’re synchronized to /app inside the container, so the Flask dev server can handle hot reload. If the requirements.txt is changed, the entire container is rebuilt, so that the RUN pip install step in the Dockerfile will be re-executed and install the latest dependencies.

However, we’ve also added an include section that references the frontend project by its Git repository. The custom env_file points to a local path (in the backend repo), which sets BACKEND_HOST so that the frontend service container will proxy API requests to the backend service container instead of the default.

Note: Remote includes are an experimental feature. You’ll need to set COMPOSE_EXPERIMENTAL_GIT_REMOTE=1 in your environment to use Git references.
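
Putting it together, running the combined stack from the backend repo looks like this (assuming the frontend repo is reachable at the Git URL above):

# Opt in to the experimental Git remote includes, then start backend + frontend
export COMPOSE_EXPERIMENTAL_GIT_REMOTE=1
docker compose up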

With this configuration, developers can now run the full stack while keeping the frontend and backend Compose projects independent and even in different Git repositories.

As developers, we’re used to sharing code library dependencies, and the include keyword brings this same reusability and convenience to your Compose development configurations.

What’s next?

There are still some rough edges. For example, the remote project is cloned to a temporary directory, which makes it impractical to use with watch mode when imported, as the files are not available for editing. Enabling bigger and more complex software projects to use Compose for flexible, personal environments is something we’re continuing to improve upon.

If you’re a Docker customer using Compose across microservices or repositories, we’d love to hear how we can better support you. Get in touch!

Learn more

EJBCA and Docker — Streamlining PKI Management and TLS Certificate Issuance https://www.docker.com/blog/ejbca-and-docker-streamlining-pki-management-and-tls-certificate-issuance/ Thu, 25 Jan 2024 15:26:58 +0000 https://www.docker.com/?p=49823 This post was contributed by Keyfactor.

Docker has revolutionized how we develop and deploy modern applications, making it easier and more efficient for developers to create and manage containerized applications. 

If you’re in the world of enterprise-level security, public key infrastructure (PKI), and certificate management, you might already be familiar with EJBCA, an open source tool for implementing PKIs. In this blog post, we will explore how to deploy EJBCA as a Docker container, making your infrastructure setup more modern, efficient, and flexible for your security and certificate management needs. 


Why deploy EJBCA as a Docker container?

EJBCA is a robust PKI and certificate management solution, but sometimes setting up and managing it can be challenging, especially if you need to deploy it from source. Deploying EJBCA as a Docker container can simplify the process and offer various benefits, including:

  • Portability — Docker containers are lightweight and portable, containing all the software needed to run an application. Once you have an EJBCA container image, you can run it on any system that supports Docker, ensuring consistency across environments.
  • Easy scaling — Containers make it straightforward to scale your EJBCA instance. You can spin up multiple containers with ease, and orchestration tools like Kubernetes can manage the scaling for you.
  • Simplified deployment — With EJBCA in a Docker container, you can deploy and upgrade it quickly without worrying about complex installation procedures or dependencies such as Java, database drivers, the WildFly application server, the operating system, and so on. A from-source installation of EJBCA requires all of these components, whereas the container ships with these critical dependencies already installed and configured.

Advantages of open source PKI and EJBCA

When it comes to implementing a PKI solution, EJBCA’s open source nature provides distinct advantages over other software tools or utilities. Tools such as OpenSSL may serve well for testing, but they often prove inadequate for production. A Microsoft PKI or another PKI service tailored to specific use cases can be robust but is often limited in flexibility, scalability, interoperability, and compliance.

EJBCA is one of the most used open source PKIs in the world. It can be built from source using the code from GitHub or be deployed as a Docker container. Here are advantages that you can expect from EJBCA:

  • Comprehensive feature set — EJBCA offers a comprehensive feature set for certificate management, including certificate issuance, revocation, and key management for many use cases. You can run hundreds of CAs in a single installation. This is efficient compared with, for example, Microsoft ADCS, which can run only one CA per server installation. One installation of EJBCA can also support multiple use cases.
  • Robust certificate authority — EJBCA functions as a full-fledged certificate authority (CA), registration authority, and validation authority, including support for both online certificate status protocol (OCSP) and certificate revocation lists (CRLs), essential for being able to support a serious PKI. 
  • Scalability and automation — In production scenarios, scalability is critical when EJBCA is under load and more instances are needed to serve PKI operations. EJBCA can be easily scaled using Docker orchestration tools, Helm charts, and by leveraging EJBCA open source Ansible playbooks, ensuring that your PKI infrastructure can handle the demands of your organization. 
  • User management and role-based access control — EJBCA offers user management and role-based access control, allowing you to define who can perform specific tasks within your PKI. 
  • Active community and support — EJBCA benefits from an active open source community and professional support options for the EJBCA Enterprise editions, ensuring you can find the right assistance when needed. EJBCA Enterprise edition is available as software and hardware appliances, Cloud AWS and Azure Marketplace options, and SaaS.
  • Compliance and auditing — EJBCA is designed with compliance and auditing in mind, helping you meet regulatory requirements and maintain a robust and signed audit trail. For example, you can enforce certificate policy for each CA to prevent the CA from signing any type of certificate signing request (CSR) that is submitted.

Getting started

Let’s walk through the process of deploying EJBCA as a Docker container. You can learn more through our introductory video on YouTube.

Step 1: Install Docker

You must have Docker installed on your system. 

Step 2: Pull the EJBCA Docker image

EJBCA provides an official Docker image, making it easy to get started. You can pull the image using the following command:

docker pull keyfactor/ejbca-ce:latest

Step 3: Run EJBCA container

Now that you have the EJBCA image, you can run it as a container:

docker run -d --rm --name ejbca-node1 -p 80:8080 -p 443:8443 -h "127.0.0.1" --memory="2048m" --memory-swap="2048m" --cpus="2" keyfactor/ejbca-ce:latest

This command starts the EJBCA container in the background and makes it accessible at https://localhost/ejbca/adminweb.
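
First startup can take a little while; following the container log is an easy way to see when EJBCA is ready:

# Follow startup progress; Ctrl-C detaches without stopping the container
docker logs -f ejbca-node1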

Step 4: Access the EJBCA web console

Open your web browser and navigate to https://localhost/ejbca/adminweb to access the EJBCA web console.

Custom installation configuration

If you need to customize your EJBCA instance, you can mount a configuration file or use an external database with the container. This step allows you to tailor the PKI to your specific needs.
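
As a rough sketch, connecting the container to an external database typically comes down to a few environment variables. The variable names below (DATABASE_JDBC_URL, DATABASE_USER, DATABASE_PASSWORD) and the database host are assumptions to verify against the EJBCA container documentation for your version:

# Illustrative only: point EJBCA at an external MariaDB instance
# (variable names and the DB host are assumptions; check the EJBCA docs)
docker run -d --name ejbca-custom -p 443:8443 \
  -e DATABASE_JDBC_URL="jdbc:mariadb://db.example.internal:3306/ejbca" \
  -e DATABASE_USER="ejbca" \
  -e DATABASE_PASSWORD="change-me" \
  keyfactor/ejbca-ce:latest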

Issuing a TLS certificate as a PKI admin  

Private TLS certificates play a crucial role in authenticating users and devices within closed network environments such as enterprise networks and business applications. When public trust isn’t necessary, private TLS certificates are the most cost-efficient and convenient option. Even so, they deserve to be taken seriously: the PKI software setup and certificate issuance process matter even in private trust environments.

You can generate TLS client or server certificates easily by following our best practices video tutorials. EJBCA allows you to initiate on a small scale and expand as your use case evolves. This series commences with a guide on setting up EJBCA as a Docker container. Read more and find additional options for how to issue your TLS certificates with EJBCA on the website.

Conclusion

Deploying EJBCA as a Docker container simplifies the management of your PKI setup. It provides portability, isolation, and scalability, making it easier to handle security and certificate management. Whether you are a security professional or a developer working on PKI solutions, using Docker to run EJBCA can streamline your workflow and enhance your security practices.

In this blog post, we’ve covered the basics of setting up EJBCA as a Docker container and explained how a PKI admin can configure the software to issue TLS certificates. We encourage you to explore the EJBCA documentation and tutorial videos for more advanced configurations and guidance on issuing certificates for your products or workloads. With the power of Docker and EJBCA, you can take control of your certificate authority and PKI efficiently and securely.

Now, go ahead and secure your digital world with EJBCA and Docker! If you have any questions or want to share your experiences, connect with us on the Keyfactor discussions page.

Learn more

How to Use OpenPubkey to SSH Without SSH Keys https://www.docker.com/blog/how-to-use-openpubkey-to-ssh-without-ssh-keys/ Thu, 18 Jan 2024 13:22:05 +0000 https://www.docker.com/?p=50520 This post was contributed by BastionZero.

What if you could SSH without having to worry about SSH keys? Without the need to worry about SSH keys getting lost, stolen, shared, rotated, or forgotten? In this article, we’ll walk you through how to SSH to your remote Docker setups with just your email account or Single Sign-On (SSO). Find instructions for setting up OpenPubkey SSH in our documentation.


What’s wrong with SSH?

We love SSH and use it all the time, but don’t often stop to count how many keys we’ve accumulated over the years. As of writing this, I have eight. I can tell you what five of them are for, I definitely shouldn’t have at least two of them, and I’m pretty sure of the swift firing that would happen if I lost at least one other. What on earth is “is_key.pem”? I have no idea, and it sounds like I didn’t know when I made it.

There’s rarely an SSH key that’s actually harmless, even if you’re only using it to access or debug remote Docker setups. Test environments get cryptojacked and proxyjacked frequently, and entire swaths of the internet are dedicated to SSH hacking. 

When was the last time you patched sshd? The tool is ubiquitous yet so rarely updated that those threats are not going away anytime soon. Managing keys is a hassle that is bound to lead to compromise, and simple mistakes can lead to horrible outcomes. Even GitHub exposed their SSH private key in a public repository last year. 

So, what can we do? How can we do better? And is it free? Yes, yes, and yes. 

Now, there’s a new way to use SSH with OpenPubkey. Instead of juggling SSH keys, OpenPubkey SSH (OPK SSH) allows you to use your regular email account or SSO to log in and securely connect to an SSH server with a quick, one-time setup. No more guessing which keys get you fired, and no cursing your past self for poor naming conventions. No keys.

OpenPubkey SSH is the first fully developed use case for OpenPubkey, an open source project led by BastionZero, Docker, and The Linux Foundation. It will continue to grow and improve as we enhance its features and adapt it to meet evolving user needs and security challenges. Read on to learn what OpenPubkey is and how it works.

Getting started with OpenPubkey SSH 

Currently, OPK SSH only supports logging in via Google. If there’s a particular provider you’d prefer, come visit us on GitHub or learn more in the Getting involved section below.

OpenPubkey SSH is being offered as part of BastionZero’s zero-trust command-line utility: the zli. Instructions for installing the zli can be found in the BastionZero documentation.

After installing the zli, you’ll need to:

  1. Configure your SSH server (<1 minute)
  2. Log in with Google (<1 minute)
  3. Test your configuration
  4. Use OPK SSH for Docker remote access
  5. Manage users

Configure your SSH server

The first step is to configure your SSH server. For your first-time setup, we assume you have a Google account and at least sudoer access to the SSH server you’re trying to set up.

zli configure opk <your Google email> <user>@<hostname>
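
With placeholder values filled in, that might look like this (the email, user, and host are hypothetical):

# Grant alice@example.com OPK SSH access as "admin" on the target host
zli configure opk alice@example.com admin@203.0.113.10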

Log in with Google

Then, you need to log in. This will open a browser window so you can authenticate with Google:

zli login --opk

Test your configuration

Now, you can use SSH using OPK. To test that everything configured correctly and access is working via OPK SSH, you can run the following command:

ssh -F /dev/null -o IdentityFile=~/.ssh/id_ecdsa -o IdentitiesOnly=yes user@server_ip

Because the certificate is saved at a default location, SSH will use it to authenticate automatically. Once you’ve removed your existing SSH keys, it’s no longer necessary to specify the IdentityFile at all.
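
In other words, the everyday invocation needs no extra flags:

# Plain ssh picks up the OPK certificate from its default location
ssh user@server_ip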

Use OPK SSH for Docker remote access

If you’re already using SSH with Docker then you’re all set, you get to keep your existing remote Docker setup with no need to do anything else. Otherwise, you can set your local Docker client to connect to a remote Docker instance by doing one of the following:

# Set an environment variable
$ export DOCKER_HOST=ssh://user@server-ip

# Or, create a new context
$ docker context create ssh-box --docker "host=ssh://user@server-ip"

Then you can use Docker as usual, and it will use SSH under the hood to connect to your remote Docker instance.
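
For example, with the context created above (or DOCKER_HOST exported), an ordinary command runs against the remote daemon:

# List containers on the remote host over SSH
docker --context ssh-box ps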

Manage users

Now that you’ve set it up for one user, let’s discuss how to configure it for many. OPK SSH means that you don’t have to coordinate with users to give them access. Who you choose to allow access to your server is specified in an easy-to-read YAML policy file that might look like this:

$ cat policy.yaml
users:
    - email: alice@acme.co
      principals:
        - root
        - luffy
    - email: bob@co.acme
      principals:
        - luffy

Note that principals is SSH-speak for the users you’re allowed to SSH in as.

If you’re flying solo or in a small group, then you’ll likely never have to deal with this file directly; our zli configuration command takes care of this for you. However, larger groups may be more interested in how this works at scale, and we’ve got answers for you. To discuss how OPK SSH can specifically fit your needs, reach out to us at BastionZero. For any issues or troubleshooting questions during the process, visit our guide.

How it works

Docker already lets you use SSH to execute Docker commands on remote containers by specifying a different host either as an environment variable or as part of a context.

# Set an environment variable
$ export DOCKER_HOST=ssh://user@server-ip

# Or, create a new context
$ docker context create ssh-box --docker "host=ssh://user@server-ip"

For OPK SSH, you don’t need to change any of that. Docker is using your pre-configured SSH under the hood for you. OpenPubkey is a different configuration that’s more secure yet completely compatible with Docker or any other access use case that relies on SSH (Figure 1).

Figure 1: Accessing a container using OpenPubkey SSH (Docker client, SSH client, SSO provider, Docker host, and OPK verifier).

OpenPubkey slides in nicely with how SSH is already designed. We only use integration mechanisms that are well-used and widely deployed. First, we use SSH certificates instead of SSH keys, and second, we use the AuthorizedKeysCommand to invoke the OpenPubkey verifier program. This is all taken care of for you by our zli configure command.

$ cat /etc/ssh/sshd_config
...
AuthorizedKeysCommand /etc/opk/opk-ssh verify %u %k %t
AuthorizedKeysCommandUser root
...

SSH certificates remove the need for any keys. Instead of using certificates as in a traditional certificate ecosystem, such as X.509, our goal is to embed a special token in them that we can verify on the server. That’s where the AuthorizedKeysCommand comes in.

The AuthorizedKeysCommand allows access to be evaluated by a program instead of by comparison against preconfigured public keys in an authorized_keys file. Once you’ve configured your sshd to use our OPK verifier, it can grant or deny access for all OPK-generated SSH certificates you give it going forward.

What is OpenPubkey?

OpenPubkey isn’t just about SSH; it is so much more. Docker is using it to sign Docker Official Images and BastionZero is using it for zero-trust infrastructure access. OpenPubkey is a joint effort between the Linux Foundation, BastionZero, and Docker. It is an open source project built on top of OpenID Connect (OIDC) that adds new functionality without impacting any of the old. 

OIDC is a protocol that lets you log into websites or applications using your personal (or work) email accounts. When you log in, you’re actually generating an identity token (ID token) that’s only for the specific application and that attests to the fact that you’re you. It also includes some handy personal information — essentially whatever you’ve given that application permission to request. 

Basically, OpenPubkey adds a temporary public key to your ID token so that you can sign messages. Because it’s attested to by trusted identity providers like Google, Microsoft, Okta, etc., anyone can verify it anywhere, at any time.

But OpenPubkey isn’t just about adding a public key to your ID token; it’s also about how you use it. One issue with vanilla OIDC is that any application that respects that token assumes you are you. With OpenPubkey, proving that you’re you isn’t just about presenting a public token, but also a single-use, signed message. So, the only way to impersonate you is to steal your public token and a private secret that never leaves your machine.  

Getting involved

There are plenty of ways to get involved. We’re building a passionate and engaged community. We discuss things at both a high level for those who like to architect and at a fun, gritty, technical level for those who like to be a different kind of architect. Come to hang out; we appreciate the support in whatever capacity you can provide.

If you’d like to get involved, visit our OpenPubkey repo. And if you’re ready to try OPK SSH to SSH without SSH keys, refer to our documentation’s comprehensive guide.

Learn more
