Wasm vs. Docker: Performant, Secure, and Versatile Containers
https://www.docker.com/blog/wasm-vs-docker/ (May 9, 2024)

Docker and WebAssembly (Wasm) represent two pivotal technologies that have reshaped the software development landscape. You’ve probably started to hear more about Wasm in the past few years as it has gained in popularity, and perhaps you’ve also heard about the benefits of using it in your application stack. This may have led you to think about the differences between Wasm and Docker, especially because the technologies work together so closely.

In this article, we’ll explore how these two technologies can work together to enable you to deliver consistent, efficient, and secure environments for deploying applications. By marrying these two tools, developers can easily reap the performance benefits of WebAssembly with containerized software development.


What’s Wasm?

Wasm is a compact binary instruction format governed by the World Wide Web Consortium (W3C). It’s a portable compilation target for more than 40 programming languages, like C/C++, C#, JavaScript, Go, and Rust. In other words, Wasm is a bytecode format encoded to run on a stack-based virtual machine.

Similar to the way Java can be compiled to Java bytecode and executed on the Java Virtual Machine (JVM), which can then be compiled to run on various architectures, a program can be compiled to Wasm bytecode and then executed by a Wasm runtime, which can be packaged to run on different architectures, such as Arm and x86.

Figure 1: A program compiled to Wasm bytecode is executed by a Wasm runtime, which can be packaged to run on different architectures, such as Arm and x86.
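
To make this concrete, here is a minimal sketch of that compile-once, run-anywhere flow, assuming the Rust toolchain and the Wasmtime CLI are installed (hello.rs is a hypothetical "Hello, world!" program):

# Add the WASI compilation target and compile hello.rs to a Wasm binary
rustup target add wasm32-wasi
rustc --target wasm32-wasi hello.rs

# Execute the resulting module with a standalone Wasm runtime
wasmtime hello.wasm

The resulting hello.wasm is architecture-neutral; only the runtime executing it is built per platform.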

What’s a Wasm runtime?

Wasm runtimes bridge the gap between portable bytecode and the underlying hardware architecture. They also provide APIs to communicate with the host environment and enable interoperability with other languages, such as JavaScript.

At a high level, a Wasm runtime runs your bytecode in three semantic phases:

  1. Decoding: Processing the module to convert it to an internal representation
  2. Validation: Checking to see that the decoded module is valid
  3. Execution: Instantiating and invoking a valid module

Wasm runtime examples include Spin, Wasmtime, WasmEdge, and Wasmer. Major browsers also ship their own Wasm runtimes: Firefox uses SpiderMonkey, and Chrome uses V8.

Why use Wasm?

To understand why you might want to use WebAssembly in your application stack, let’s examine its main benefits — notably, security without sacrificing performance and versatility.

Security without sacrificing performance

Wasm enables code to run at near-native speed within a secure, sandboxed environment, protecting systems from malicious software. This performance is achieved through just-in-time (JIT) compilation of WebAssembly bytecode directly into machine code, bypassing the need for transpiling into an intermediate format. 

Wasm also uses shared linear memory — a contiguous block of memory that simplifies data exchange between modules or between WebAssembly and JavaScript. This design allows for efficient communication and enables developers to blend the flexibility of JavaScript with the robust performance of WebAssembly in a single application.

The security of this system is further enhanced by the design of the host runtime environment, which acts as a sandbox. It restricts the Wasm module from accessing anything outside of the designated memory space and from performing potentially dangerous operations like file system access, network requests, and system calls. WebAssembly’s requirement for explicit imports and exports to access host functionality adds another layer of control, ensuring a secure execution environment.

Use case versatility

Finally, WebAssembly is relevant for more than traditional web platforms (contrary to its name). It’s also an excellent tool for server-side applications, edge computing, game development, and cloud/serverless computing. If performance, security, or target device resources are a concern, consider using this compact binary format.

During the past few years, WebAssembly has become more prevalent on the server side because of the WebAssembly System Interface (or WASI). WASI is a modular API for Wasm that provides access to operating system features like files, filesystems, and clocks. 
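
For example, WASI follows a capability-based model: a module gets no filesystem access unless the host grants it explicitly. A minimal sketch with the Wasmtime CLI, where app.wasm is a hypothetical WASI module that reads a local file:

# Grant the module read access to the current directory;
# without --dir, the module cannot open any files at all
wasmtime run --dir=. app.wasm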

Docker vs. Wasm: How are they related?

After reading about WebAssembly code, you might be wondering how Docker is relevant. Doesn’t WebAssembly handle sandboxing and portability? How does Docker fit into the picture? Let’s discuss further.

Docker helps developers build, run, and share applications — including those that use Wasm. This is especially true because Wasm is a complementary technology to Linux containers. However, handling these containers without a solid developer experience can quickly become a roadblock to application development.

That’s where Docker comes in with a smooth developer experience for building with Wasm and/or Linux containers.

Benefits of using Docker and Wasm together

Using Docker and Wasm together affords great developer experience benefits as well, including:

  • Consistent development environments: Developers can use Docker to containerize their Wasm runtime environments. This approach allows for a consistent Wasm development and execution environment that works the same way across any machine, from local development to production.
  • Efficient deployment: By packaging Wasm applications within Docker, developers can leverage efficient image management and distribution capabilities. This makes deploying and scaling these types of applications easier across various environments.
  • Security and isolation: Although Docker isolates applications at the operating system level, Wasm provides a sandboxed execution environment. When used together, the technologies offer a robust layered security model against many common vulnerabilities.
  • Enhanced performance: Developers can use Docker containers to deploy Wasm applications in serverless architectures or as microservices. This lets you take advantage of Wasm’s performance benefits in a scalable and manageable way.

How to enable Wasm on Docker Desktop

If you’re interested in running WebAssembly containers, you’re in luck! Support for Wasm workloads is now in beta, and you can enable it on Docker Desktop by checking Enable Wasm on the Features in development tab under Settings (Figure 2).

Note: Make sure you have containerd image store support enabled first.

Screenshot of Docker Desktop Settings showing checkmark beside "Enable Wasm" option.
Figure 2: Enable Wasm in Docker Desktop.

After enabling Wasm in Docker Desktop, you’re ready to go. Docker currently supports many Wasm runtimes, including Spin, WasmEdge, and Wasmtime. You can also find detailed documentation that explains how to run these applications.
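
For example, here is a sketch of running a Wasm workload with the WasmEdge runtime. The image name is the Rust-based example used in Docker's Wasm documentation; treat the exact runtime string and platform value as version-dependent:

docker run \
  --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm \
  secondstate/rust-example-hello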

How Docker supports WebAssembly

To explain how Docker supports WebAssembly, we’ll need to quickly review how the Docker Engine works.

The Docker Engine builds on a higher-level container runtime called containerd. This runtime provides fundamental functionality to control the container lifecycle. Using a shim process, containerd can leverage runc (a low-level runtime) under the hood. Then, runc can interact directly with the operating system to manage various aspects of containers.


What’s neat about this design is that anyone can write a shim to integrate other runtimes with containerd, including WebAssembly runtimes. As a result, you can plug various Wasm runtimes into Docker, like WasmEdge, Spin, and Wasmtime.

The future of WebAssembly and Docker

WebAssembly is continuously evolving, so you’ll need to keep a close pulse on ecosystem developments. One recent advancement relates to how the new WebAssembly Component Model will impact shims for the various container runtimes. At Docker, we’re working to make it simple for developers to create Wasm containers and enhance the developer experience.

In a famous 2019 tweet thread, Docker founder Solomon Hykes described the future of cloud computing: a world where Docker runs Windows, Linux, and WebAssembly containers side by side. Given all the recent developments in the ecosystem, that future is well and truly here.

Recent advancements include:

  • The launch of WASI Preview 2 fully rebased WASI on the Component Model type system and semantics, making it modular, fully virtualizable, and accessible to various source languages.
  • Fermyon, Microsoft, SUSE, LiquidReply, and others released the SpinKube open source project, which provides a straightforward path for deploying Wasm-based serverless functions into Kubernetes clusters. Developers can use SpinKube with Docker via k3s, Rancher Labs’ lightweight, minimal Kubernetes distribution. Docker Desktop also includes the shim, which enables you to run Kubernetes containers on your local machine.

In 2024, we expect the combination of Wasm and containers to be highly regarded for its efficiency, scalability, and cost savings.

Wrapping things up

In this article, we explained how Docker and Wasm work together and how to use Docker for Wasm workloads. We’re excited to see Wasm’s adoption grow in the coming years and will continue to enhance our support to meet developers both where they’re at and where they’re headed. 

Check out the following related materials for details on Wasm and how it works with Docker:

Learn more

Thanks to Sohan Maheshwar, Developer Advocate Lead at Fermyon, for collaborating on this post.

AI Trends Report 2024: AI’s Growing Role in Software Development
https://www.docker.com/blog/ai-trends-report-2024/ (April 16, 2024)

The landscape of application development is rapidly evolving, propelled by the integration of artificial intelligence (AI) into the development process. The Docker AI Trends Report 2024, a precursor to the upcoming State of Application Development Report, highlights interesting AI trends among developers.

The most recent Docker State of Application Development Survey results offer insights into how developers are adopting and utilizing AI, reflecting a shift toward more intelligent, efficient, and adaptable development methodologies. This transformation is part of a larger trend observed across the tech industry as AI becomes increasingly central to software development.

The annual Docker State of Application Development survey, conducted by our User Research Team, is one way Docker product managers, engineers, and designers gather insights from Docker users to continuously develop and improve the suite of tools the company offers. For example, in Docker’s 2022 State of Application Development Survey, we found that the task for which Docker users most often refer to support/documentation was creating a Dockerfile (reported by 60% of respondents). This finding helped spur the innovation of Docker AI.

More than 1,300 developers participated in the latest Docker State of Application Development survey, conducted in late 2023. The online survey asked respondents about what tools they use, their application development processes and frustrations, feelings about industry trends, Docker usage, and participation in developer communities. We wanted to know where developers are focused, what they’re working on, and what is most important to them.

Of the approximately 1,300 respondents to the survey, 885 completed it. The findings in this report are based on the 885 completed responses.


Who responded to the Docker survey?

Respondents who took our survey ranged from home hobbyists to professionals at companies with more than 5,000 employees. Forty-two percent of respondents work for small companies (up to 100 employees), 28% work for mid-sized companies (between 100 and 1,000 employees), and 25% work for large companies (more than 1,000 employees).

Well over half of the respondents were in engineering roles — for example, 36% of respondents identified as back-end or full-stack developers; 21% were DevOps, infrastructure managers, or platform engineers; and 4% were front-end developers. Other roles of respondents included dev/engineering managers, company leadership, product managers, security roles, and AI/ML roles. There was nearly an even split between respondents with more experience (6+ years, 54%) and those with less experience (0-5 years, 46%).

Our survey underscored a marked growth in roles focused on machine learning (ML) engineering and data science within the Docker ecosystem. In our 2022 survey, approximately 1% of respondents represented this demographic, whereas they made up 8% in the most recent survey. ML engineers and data scientists represent a rapidly expanding user base. This signals the growing relevance of AI to the software development field, and the blurring of the lines between tools used by developers and tools used by AI/ML scientists. 

More than 34% of respondents said they work in the computing or IT/SaaS industry, but we also saw responses from individuals working in accounting, banking, or finance (8%); business, consultancy, or management (7%); engineering or manufacturing (6%), and education (5%). Other responses came in from professionals in a wide range of fields, including media; academic research; transport or logistics; retail; marketing, advertising, or PR; charity or volunteer work; healthcare; construction; creative arts or design; and environment or agriculture.

Docker users made up 87% of our respondents, whereas 13% reported that they do not use Docker.  

AI as an up-and-coming trend

We asked participants what they felt were the most important trends currently in the industry. GenAI (40% of respondents) and AI assistants for software engineering (38% of respondents) were the top-selected options identified as important industry trends in software development. More senior developers (back-end, front-end, and full-stack developers with over 5 years of experience) tended to view GenAI as most important, whereas more junior developers (less than 5 years of experience) view AI assistants for software engineering as most important. This difference may signal varied and unique uses of AI throughout a career in software development.

It’s clearly trendy, but how do developers really feel about AI? The majority agree that AI is a positive option (65%), that it makes their jobs easier (61%), and that it allows them to focus on more important tasks (55%). A much smaller number of respondents see AI as a threat to their jobs (23%) or say it makes their jobs more difficult (19%).

Interestingly, despite high usage and generally positive feelings towards AI, 45% of respondents also reported that they feel AI is over-hyped. Why might this be? It’s not fully clear, but when this finding is considered alongside responses on perceived job threat, one possible interpretation emerges: respondents may view AI as a critical and useful tool for their work, without worrying much that the hype means it will replace them anytime soon.

How AI is used in the developer’s world

We asked users what they use AI for, how dependent they feel on AI, and what AI tools they use most often. A majority of developers (64%) already report using AI for work, underscoring AI’s penetration into the software development field. Developers leverage AI at work mainly for coding (33% of respondents), writing documentation (29%), research (28%), writing tests (23%), troubleshooting/debugging (21%), and CLI commands (20%). 

For the 568 respondents who indicated they use AI for work, we also asked how dependent they felt on AI to get their job done on a scale of 0 (not at all dependent) to 10 (completely dependent). Responses ranged substantially and varied by role and years of experience, but the overall average reported dependence was about 4 out of 10, indicating relatively low dependence.

In the developer toolkit, respondents indicate that AI tools like ChatGPT (46% of respondents), GitHub Copilot (30%), and Bard (19%) stand out as most frequently used. 

Conclusion

To conclude our 2024 Docker AI Trends Report: artificial intelligence is already shifting the way software development is approached. The insights from more than 800 respondents in our latest survey illuminate a path toward a future where AI is seamlessly integrated into every aspect of application development. From coding and documentation to debugging and writing tests, AI tools are becoming indispensable in enhancing efficiency and problem-solving capabilities, allowing developers to focus on more creative and important work.

The uptake of AI tools such as ChatGPT, GitHub Copilot, and Bard among developers is a testament to AI’s value in the development process. Moreover, the growing interest in machine learning engineering and data science within the Docker community signals a broader acceptance and integration of AI technologies.

As Docker continues to innovate and support developers in navigating these changes, the evolving landscape of AI in software development presents both opportunities and challenges. Embracing AI as a positive force that can augment human capabilities rather than replace them is crucial. Docker is committed to facilitating this transition by providing tools and resources that empower developers to leverage AI effectively, ensuring they can remain at the forefront of technological innovation.

Looking ahead, Docker will continue to monitor these trends, adapt our offerings accordingly, and support our user community in harnessing the full potential of AI in software development. As the industry evolves, so too will Docker’s role in shaping the future of application development, ensuring our users are equipped to meet the challenges and seize the opportunities that lie ahead in this exciting era of AI-driven development.

Learn more

Docker’s User Research Team — Olga Diachkova, Julia Wilson, and Rebecca Floyd — conducted this survey, analyzed the results, and provided insights.

For a complete methodology, contact uxresearch@docker.com.

Empower Your Development: Dive into Docker’s Comprehensive Learning Ecosystem
https://www.docker.com/blog/docker-learning-ecosystem/ (April 2, 2024)

Continuous learning is a necessity for developers in today’s fast-paced development landscape. Docker recognizes the importance of keeping developers at the forefront of innovation, and to do so, we aim to empower the developer community with comprehensive learning resources.

Docker has taken a multifaceted approach to developer education by forging partnerships with renowned platforms like Udemy and LinkedIn Learning, investing in our own documentation and guides, and highlighting the incredible learning content created by the developer community, including Docker Captains.


Commitment to developer learning

At Docker, our goal is to simplify the lives of developers, which begins with helping devs understand how to maximize the power of Docker tools throughout their projects. We also recognize that developers have different learning styles, so we are taking a diversified approach to delivering this material across an array of platforms and formats, which means developers can learn in the format that best suits them.

Strategic partnerships for developer learning

Recognizing the diverse learning needs of developers, Docker has partnered with leading online learning platforms — Udemy and LinkedIn Learning. These partnerships offer developers access to a wide range of courses tailored to different expertise levels, from beginners looking to get started with Docker to advanced users aiming to deepen their knowledge. 

For teams already utilizing these platforms for other learning needs, this collaboration places Docker learning in a familiar platform next to other coursework.

  • Udemy: Docker’s collaboration with Udemy highlights an array of Endorsed Docker courses, designed by industry experts. Whether getting a handle on containerization or mastering Docker with Kubernetes, Udemy’s platform offers the flexibility and depth developers need to upskill at their own pace. Today, demand remains high for Docker content across the Udemy platform, with more than 350 courses offered and nearly three million enrollments to date.
  • LinkedIn Learning: Through LinkedIn Learning, developers can dive into curated Docker courses to earn a Docker Foundations Professional Certificate once they complete the program. These resources are not just about technical skills; they also cover best practices and practical applications, ensuring learners are job-ready.

Leveraging Docker’s documentation and guides

Although third-party platforms provide comprehensive learning paths, Docker’s own documentation and guides are indispensable tools for developers. Our documentation is continuously updated to serve as both a learning resource and a reference. From installation and configuration to advanced container orchestration and networking, Docker’s guides are designed to help you find your solution with step-by-step walk-throughs.

If it’s been a while since you’ve checked out Docker Docs, you can visit docs.docker.com to find manuals, a getting-started guide, and many new use-case guides to help you with advanced applications, including generative AI and security.

Learners interested in live sessions can register for upcoming live webinars and training on the Docker Training site. There, you will find sessions where you can interact with the Docker support team and discuss best practices for using Docker Scout and Docker Admin.

The role of community in learning

Docker’s community is a vibrant ecosystem of learners, contributors, and innovators. We are thrilled to see the community creating content, hosting workshops, providing mentorship, and enriching the vast array of Docker learning resources. In particular, Docker Captains stand out for their expertise and dedication to sharing knowledge. From James Spurin’s Dive Into Docker course, to Nana Janashia’s Docker Crash Course, to Vladimir Mikhalev’s blog with guided IT solutions using Docker (just to name a few), it’s clear there’s much to learn from within the community.

We encourage developers to join the community and participate in conversations to seek advice, share knowledge, and collaborate on projects. You can also check out the Docker Community forums and join the Slack community to connect with other members of the community.

Conclusion

Docker’s holistic approach to developer learning underscores our commitment to empowering developers with knowledge and skills. By combining our comprehensive documentation and guides with top learning platform partnerships and an active community, we offer developers a robust framework for learning and growth. We encourage you to use all of these resources together to build a solid foundation of knowledge that is enhanced with new perspectives and additional insights as new learning offerings continue to be added.

Whether you’re a novice eager to explore the world of containers or a seasoned pro looking to refine your expertise, Docker’s learning ecosystem is designed to support your journey every step of the way.

Join us in this continuous learning journey, and come learn with Docker.

Learn more

containerd vs. Docker: Understanding Their Relationship and How They Work Together
https://www.docker.com/blog/containerd-vs-docker/ (March 27, 2024)

During the past decade, containers have revolutionized software development by introducing higher levels of consistency and scalability. Now, developers can work without many of the old challenges around dependency management, environment consistency, and collaborative workflows.

When developers explore containerization, they might learn about container internals, architecture, and how everything fits together. And, eventually, they may find themselves wondering about the differences between containerd and Docker and how they relate to one another.

In this blog post, we’ll explain what containerd is, how Docker and containerd work together, and how their combined strengths can improve developer experience.


What’s a container?

Before diving into what containerd is, I should briefly review what containers are. Simply put, containers are processes with added isolation and resource management. Containers get their own virtualized view of the operating system while sharing the host system’s kernel and resources.

Containers also use operating system kernel features. They use namespaces to provide isolation and cgroups to limit and monitor resources like CPU, memory, and network bandwidth. As you can imagine, container internals are complex, and not everyone has the time or energy to become an expert in the low-level bits. This is where container runtimes, like containerd, can help.
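
As a quick illustration of cgroups in action, here is a sketch assuming a Linux host running cgroup v2 (the file path differs under cgroup v1):

# Start a container with a 512 MB memory limit and read the limit
# the kernel enforces, as seen from inside the container
docker run --rm --memory=512m alpine cat /sys/fs/cgroup/memory.max
# Prints 536870912 (512 * 1024 * 1024)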

What’s containerd?

In short, containerd is a runtime built to run containers. This open source tool builds on top of operating system kernel features and improves container management with an abstraction layer, which manages namespaces, cgroups, union file systems, networking capabilities, and more. This way, developers don’t have to handle the complexities directly. 

In March 2017, Docker pulled its core container runtime into a standalone project called containerd and donated it to the Cloud Native Computing Foundation (CNCF).  By February 2019, containerd had reached the Graduated maturity level within the CNCF, representing its significant development, adoption, and community support. Today, developers recognize containerd as an industry-standard container runtime known for its scalability, performance, and stability.

Containerd is a high-level container runtime with many use cases. It’s perfect for handling container workloads across small-scale deployments, but it’s also well-suited for large, enterprise-level environments (including Kubernetes). 

A key component of containerd’s robustness is its default use of Open Container Initiative (OCI)-compliant runtimes. By using runtimes such as runc (a lower-level container runtime), containerd ensures standardization and interoperability in containerized environments. It also efficiently deals with core operations in the container life cycle, including creating, starting, and stopping containers.
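
If you want to see these lifecycle operations without the Docker Engine in the picture, containerd ships with a low-level CLI called ctr. A minimal sketch, assuming containerd is installed and running and you have root privileges:

# Pull an OCI image and run a container directly on containerd
ctr images pull docker.io/library/alpine:latest
ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"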

How is containerd related to Docker?

But how is containerd related to Docker? To answer this, let’s take a high-level look at Docker’s architecture (Figure 1). 

Containerd facilitates operations on containers by directly interfacing with your operating system. The Docker Engine sits on top of containerd and provides additional functionality and developer experience enhancements.

Figure 1: Docker architecture, showing the Docker Engine built on top of containerd.

How Docker interacts with containerd

To better understand this interaction, let’s talk about what happens when you run the docker run command:

  • After you press Enter, the Docker CLI will send the run command and any command-line arguments to the Docker daemon (dockerd) via a REST API call.
  • dockerd will parse and validate the request, and then it will check that things like container images are available locally. If they’re not, it will pull the image from the specified registry.
  • Once the image is ready, dockerd will shift control to containerd to create the container from the image.
  • Next, containerd will set up the container environment. This process includes tasks such as setting up the container file system, networking interfaces, and other isolation features.
  • containerd will then delegate running the container to runc using a shim process. This will create and start the container.
  • Finally, once the container is running, containerd will monitor the container status and manage the lifecycle accordingly.
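
You can observe this delegation chain yourself on a Linux host: once a container is running, the shim shows up as a parent of the container process. A sketch (exact process names, such as containerd-shim-runc-v2, vary by containerd version):

docker run -d --name demo nginx     # start a container in the background
ps -ef | grep containerd-shim       # the shim process managing the container
docker rm -f demo                   # clean up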

Docker and containerd: Better together 

Docker has played a key role in the creation and adoption of containerd, from its inception to its donation to the CNCF and beyond. This involvement helped standardize container runtimes and bolster the open source community’s involvement in containerd’s development. Docker continues to support the evolution of the open source container ecosystem by continuously maintaining and evolving containerd.

Containerd specializes in the core functionality of running containers. It’s a great choice for developers needing access to lower-level container internals and other advanced features. Docker builds on containerd to create a cohesive developer experience and comprehensive toolchain for building, running, testing, verifying, and sharing containers.

Build + Run

In development environments, tools like Docker Desktop, the Docker CLI, and Docker Compose allow developers to easily define, build, and run single- or multi-container environments, seamlessly integrating with your favorite editors or IDEs and even your CI/CD pipeline.

Test

One of the largest developer experience pain points involves testing and environment consistency. With Testcontainers, developers don’t have to worry about reproducibility across environments (for example, dev, staging, testing, and production). Testcontainers also allows developers to use containers for isolated dependency management, parallel testing, and simplified CI/CD integration.

Verify

By analyzing your container images and creating a software bill of materials (SBOM), Docker Scout works with Docker Desktop, Docker Hub, or Docker CLI to help organizations shift left. It also empowers developers to find and fix software vulnerabilities in container images, ensuring a secure software supply chain.
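
For example, recent Docker releases bundle Docker Scout as a CLI plugin, so you can get a vulnerability summary straight from your terminal (myorg/myapp:latest is a hypothetical image name):

docker scout quickview myorg/myapp:latest   # at-a-glance vulnerability summary
docker scout cves myorg/myapp:latest        # detailed CVE listing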

Share

Docker Registry serves as a store for developers to push container images to a shared repository securely. This functionality streamlines image sharing, making maintaining consistency and efficiency in development and deployment workflows easier. 

With Docker building on top of containerd, the benefits to the software development lifecycle span from the inner loop and testing to secure deployment in production.

Wrapping up

In this article, we discussed the relationship between Docker and containerd. We showed how containers, as isolated processes, leverage operating system features to provide efficient and scalable development and deployment solutions. We also described what containerd is and explained how Docker leverages containerd in its stack. 

Docker builds upon containerd to enhance the developer experience, offering a comprehensive suite of tools for the entire development lifecycle across building, running, verifying, sharing, and testing containers. 

Start your next projects with containerd and other container components by checking out Docker’s open source projects and most popular open source tools.

Learn more

11 Years of Docker: Shaping the Next Decade of Development
https://www.docker.com/blog/docker-11-year-anniversary/ (March 21, 2024)

Eleven years ago, Solomon Hykes walked onto the stage at PyCon 2013 and revealed Docker to the world for the first time. The problem Docker was looking to solve? “Shipping code to the server is hard.”

And the world of application software development changed forever.


Docker was built on the shoulders of giants: the Linux kernel, copy-on-write file systems, and developer-friendly git semantics. The result? Docker has fundamentally transformed how developers build, share, and run applications. By “dockerizing” an app and its dependencies into a standardized, open format, Docker dramatically lowered the friction between devs and ops, enabling devs to focus on their apps — what’s inside the container — and ops to focus on deploying any app, anywhere — what’s outside the container, in a standardized format. Furthermore, this standardized “unit of work” that abstracts the app from the underlying infrastructure enables an “inner loop” for developers of code, build, test, verify, and debug, which results in 13X more frequent releases of higher-quality, more secure updates.

The subsequent energy over the past 11 years from the ecosystem of developers, community, open source maintainers, partners, and customers cannot be overstated, and we are so thankful and appreciative of your support. This has shown up in many ways, including the following:

  • Ranked #1 “most-wanted” tool/platform by Stack Overflow’s developer community for the past four years
  • 26 million monthly active IPs accessing 15 million repos on Docker Hub, pulling them 25 billion times per month
  • 17 million registered developers
  • Moby project has 67.5k stars, 18.5k forks, and more than 2,200 contributors; Docker Compose has 32.1k stars and 5k forks
  • A vibrant network of 70 Docker Captains across 25 countries serving 167 community meetup groups with more than 200k members and 4800 total meetups
  • 79,000+ customers

The next decade

In our first decade, we changed how developers build, share, and run any app, anywhere — and we’re only accelerating in our second!

Specifically, you’ll see us double down on meeting development teams where they are to enable them to rapidly ship high-quality, secure apps via the following focus areas:

  • Dev Team Productivity. First, we’ll continue to help teams take advantage of the right tech for the right job — whether that’s Linux containers, Windows containers, serverless functions, and/or Wasm (WebAssembly) — with the tools they love and the skills they already possess. Second, by bringing together the best of local and the best of cloud, we’ll enable teams to discover and address issues even faster in the “inner loop,” as you’re already seeing today with our early efforts with Docker Scout, Docker Build Cloud, and Testcontainers Cloud.
  • GenAI. This tech is ushering in a “golden age” for development teams, and we’re investing to help in two areas: First, our GenAI Stack — built through collaboration with partners Ollama, LangChain, and Neo4j — enables dev teams to quickly stand up secure, local GenAI-powered apps. Second, our Docker AI is uniquely informed by anonymized data from dev teams using Docker, which enables us to deliver automations that eliminate toil and reduce security risks.
  • Software Supply Chain. The software supply chain is heterogeneous, extensive, and complex for dev teams to navigate and secure, and Docker will continue to help simplify it, make it more visible, and manage it end-to-end. Whether it’s the trusted content “building blocks” of Docker Official Images (DOI) in Docker Hub, the transformation of ingredients into runnable images via BuildKit, verifying and securing the dev environment with digital signatures and enhanced container isolation, consuming metadata feedback from running containers in production, or making the entire end-to-end supply chain visible and issues actionable in Docker Scout, Docker has it covered and helps make a more secure internet!

Dialing it past 11

While our first decade was fantastic, there’s so much more we can do together as a community to serve app development teams, and we couldn’t be more excited as our second decade together gets underway and we dial it past 11! If you haven’t already, won’t you join us today?!

How has Docker influenced your approach to software development? Share your experiences with the community and join the conversation on LinkedIn.

Let’s build, share, and run — together!

Rev Up Your Cloud-Native Journey: Join Docker at KubeCon + CloudNativeCon EU 2024 for Innovation, Expert Insight, and Motorsports
https://www.docker.com/blog/docker-at-kubecon-eu-2024/ (March 13, 2024)

We are racing toward the finish line at KubeCon + CloudNativeCon Europe, March 19 – 22, 2024, in Paris, France. Join the Docker “pit crew” at Booth #J3 for an incredible racing experience, new product demos, and limited-edition SWAG.

Meet us at our KubeCon booth, sessions, and events to learn about the latest trends in AI productivity and best practices in cloud-native development with Docker. At our KubeCon booth (#J3), we’ll show you how building in the cloud accelerates development and simplifies multi-platform builds with a side-by-side demo of Docker Build Cloud. Learn how Docker and Testcontainers Cloud provide a seamless integration within the testing framework to improve the quality and speed of application delivery.

It’s not all work, though — join us at the booth for our Megennis Motorsport Racing experience and try to beat the best!

Take advantage of this opportunity to connect with the Docker team, learn from the experts, and contribute to the ever-evolving cloud-native landscape. Let’s shape the future of cloud-native technologies together at KubeCon!


Deep dive sessions from Docker experts

Is Your Image Really Distroless? — Docker software engineer Laurent Goderre will dive into the world of “distroless” Docker images on Wednesday, March 20. In this session, Goderre will explain the significance of separating build-time and run-time dependencies to enhance container security and reduce vulnerabilities. He’ll also explore strategies for configuring runtime environments without compromising security or functionality. Don’t miss this must-attend session for KubeCon attendees keen on fortifying their Docker containers.

Simplified Inner and Outer Cloud Native Developer Loops — Docker Staff Community Relations Manager Oleg Šelajev and Diagrid Customer Success Engineer Alice Gibbons tackle the challenges of developer productivity in cloud-native development. On Wednesday, March 20, they will present tools and practices to bridge the gap between development and production environments, demonstrating how a unified approach can streamline workflows and boost efficiency across the board.

Engage, learn, and network at these events

Security Soiree: Hands-on cloud-native security workshop and party — Join Sysdig, Snyk, and Docker on March 19 for cocktails, team photos, music, prizes, and more at the Security Soiree. Listen to a compelling panel discussion led by industry experts, including Docker’s Director of Security, Risk & Trust, Rachel Taylor, followed by an evening of networking and festivities. Get tickets to secure your invitation.

Docker Meetup at KubeCon: Development & data productivity in the age of AI — Join us at our meetup during KubeCon on March 21 and hear insights from Docker, Pulumi, Tailscale, and New Relic. This networking mixer at Tonton Becton Restaurant promises candid discussions on enhancing developer productivity with the latest AI and data technologies. Reserve your spot now for an evening of casual conversation, drinks, and delicious appetizers.

See you March 19 – 22 at KubeCon + CloudNativeCon Europe

We look forward to seeing you in Paris — safe travels and prepare for an unforgettable experience!

Learn more

Are Containers Only for Microservices? Myth Debunked
https://www.docker.com/blog/are-containers-only-for-microservices-myth-debunked/ (March 6, 2024)

In the ever-evolving software delivery landscape, containerization has emerged as a transformative force, reshaping how organizations build, test, deploy, and manage their applications.

Whether you are maintaining a monolithic legacy system, navigating the complexities of Service-Oriented Architecture (SOA), or orchestrating your digital strategy around application programming interfaces (APIs), containerization offers a pathway to increased efficiency, resilience, and agility. 

In this post, we’ll debunk the myth that containerization is solely the domain of microservices by exploring its applicability and advantages across different architectural paradigms. 


Containerization across architectures

Although containerization is commonly associated with microservices architecture because of its agility and scalability, its potential extends far beyond that, offering compelling benefits to a variety of architectural styles. From the tightly integrated components of monolithic applications to the distributed nature of SOA and the strategic approach of API-led connectivity, containerization stands as a universal tool, adaptable and beneficial across the board.

Beyond the immediate benefits of improved resource utilization, faster deployment cycles, and streamlined maintenance, the true value of containerization lies in its ability to ensure consistent application performance across varied environments. This consistency is a cornerstone for reliability and efficiency, pivotal in today’s fast-paced software delivery demands.

Here, we will provide examples of how this technology can be a game-changer for your digital strategy, regardless of your adopted style. Through this exploration, we invite technology leaders and executives to broaden their perspective on containerization, seeing it not just as a tool for one architectural approach but as a versatile ally in the quest for digital excellence.

1. Event-driven architecture

Event-driven architecture (EDA) represents a paradigm shift in how software components interact, pivoting around the concept of events — such as state changes or specific action occurrences — as the primary conduit for communication. This architectural style fosters loose coupling, enabling components to operate independently and react asynchronously to events, thereby augmenting system flexibility and agility. EDA’s intrinsic support for scalability, by allowing components to address fluctuating workloads independently, positions it as an ideal candidate for handling dynamic system demands.

Within the context of EDA, containerization emerges as a critical enabler, offering a streamlined approach to encapsulate applications alongside their dependencies. This encapsulation guarantees that each component of an event-driven system functions within a consistent, isolated environment — a crucial factor when managing components with diverse dependency requirements. Containers’ scalability becomes particularly advantageous in EDA, where fluctuating event volumes necessitate dynamic resource allocation. By deploying additional container instances in response to increased event loads, systems maintain high responsiveness levels.

Moreover, containerization amplifies the deployment flexibility of event-driven components, ensuring consistent event generation and processing across varied infrastructures (Figure 1). This adaptability facilitates the creation of agile, scalable, and portable architectures, underpinning the deployment and management of event-driven components with a robust, flexible infrastructure. Through containerization, EDA systems achieve enhanced operational efficiency, scalability, and resilience, embodying the principles of modern, agile application delivery.

 Illustration of event-driven architecture showing Event Broker connected to Orders Service, Fulfillment Service, and Notification Service.
Figure 1: Event-driven architecture.

2. API-led architecture

API-led connectivity represents a strategic architectural approach focused on the design, development, and management of APIs to foster seamless connectivity and data exchange across various systems, applications, and services within an organization (Figure 2). This methodology champions a modular and scalable framework ideal for the modern digital enterprise.

The principles of API-led connectivity — centering on system, process, and experience APIs — naturally harmonize with the benefits of containerization. By encapsulating each API within its container, organizations can achieve unparalleled modularity and scalability. Containers offer an isolated runtime environment for each API, ensuring operational independence and eliminating the risk of cross-API interference. This isolation is critical, as it guarantees that modifications or updates to one API can proceed without adversely affecting others, which is a cornerstone of maintaining a robust API-led ecosystem.

Moreover, the dual advantages of containerization — ensuring consistent execution environments and enabling easy scalability — align perfectly with the goals of API-led connectivity. This combination not only simplifies the deployment and management of APIs across diverse environments but also enhances the resilience and flexibility of the API infrastructure. Together, API-led connectivity and containerization empower organizations to develop, scale, and manage their API ecosystems more effectively, driving efficiency and innovation in application delivery.

Illustration of API-led architecture showing separate layers for Channel APIs, Orchestration APIs, and System APIs.
Figure 2: API-led architecture.

3. Service-oriented architecture

Service-oriented architecture (SOA) is a design philosophy that emphasizes the use of discrete services within an architecture to provide business functionalities. These services communicate through well-defined interfaces and protocols, enabling interoperability and facilitating the composition of complex applications from independently developed services. SOA’s focus on modularity and reusability makes it particularly amenable to the benefits offered by containerization.

Containerization brings a new dimension of flexibility and efficiency to SOA by encapsulating these services into containers. This encapsulation provides an isolated environment for each service, ensuring consistent execution regardless of the deployment environment. Such isolation is crucial for maintaining the integrity and availability of services, particularly in complex, distributed architectures where services must communicate across different platforms and networks.

Moreover, containerization enhances the scalability and manageability of SOA-based systems. Containers can be dynamically scaled to accommodate varying loads, enabling organizations to respond swiftly to changes in demand. This scalability, combined with the ease of deployment and rollback provided by container orchestration platforms, supports the agile delivery and continuous improvement of services.

The integration of containerization with SOA essentially results in a more resilient, scalable, and manageable architecture. It enables organizations to leverage the full potential of SOA by facilitating faster deployment, enhancing performance, and simplifying the lifecycle management of services. Together, SOA and containerization create a powerful framework for building flexible, future-proof applications that can adapt to the evolving needs of the business.

4. Monolithic applications

Contrary to common perceptions, monolithic applications stand to gain significantly from containerization. This technology can encapsulate the full application stack — including the core application, its dependencies, libraries, and runtime environment — within a container. This encapsulation ensures uniformity across various stages of the development lifecycle, from development and testing to production, effectively addressing the infamous ‘it works on my machine’ challenge. Such consistency streamlines the deployment process and simplifies scaling efforts, which is particularly beneficial for applications that need to adapt quickly to changing demands.

Moreover, containerization fosters enhanced collaboration among development teams by standardizing the operational environment, thereby minimizing discrepancies that typically arise from working in divergent development environments. This uniformity is invaluable in accelerating development cycles and improving product reliability.

Perhaps one of the most strategic benefits of containerization for monolithic architectures is the facilitation of a smoother transition to microservices. By containerizing specific components of the monolith, organizations can incrementally decompose their application into more manageable, loosely coupled microservices. This approach not only mitigates the risks associated with a full-scale migration but also allows teams to gradually adapt to microservices’ architectural patterns and principles.

Containerization presents a compelling proposition for monolithic applications, offering a pathway to modernization that enhances deployment efficiency, operational consistency, and the flexibility to evolve toward a microservices-oriented architecture. Through this lens, containerization is not just a tool for new applications but a bridge that allows legacy applications to step into the future of software development.

Conclusion

The journey of modern software development, with its myriad architectural paths, is markedly enhanced by the adoption of containerization. This technology transcends architectural boundaries, bringing critical advantages such as isolation, scalability, and portability to the forefront of application delivery. Whether your environment is monolithic, service-oriented, event-driven, or API-led, containerization aligns perfectly with the ethos of modern, distributed, and cloud-native applications. 

By embracing the adaptability and transformative potential of containerization, you can open your architectures to a future where agility, efficiency, and resilience are not just aspirations but achievable realities. Begin your transformative journey with Docker Desktop today and redefine what’s possible within the bounds of your existing architectural framework.

Learn more

Docker Desktop 4.28: Enhanced File Sharing and Security Plus Refined Builds View in Docker Build Cloud
https://www.docker.com/blog/docker-desktop-4-28/ (February 28, 2024)

Docker Desktop 4.28 introduces updates to file-sharing controls, focusing on security and administrative ease. Responding to feedback from our business users, this update brings refined file-sharing capabilities and path allow-listing, aiming to simplify management and enhance security for IT administrators and users alike.

Along with our investments in bringing access to cloud resources within the local Docker Desktop experience with Docker Build Cloud Builds view, this release provides a more efficient and flexible platform for development teams.


Introducing enhanced file-sharing controls in Docker Desktop Business 

As we continue to innovate and elevate the Docker experience for our business customers, we’re thrilled to unveil significant upgrades to Docker Desktop’s Hardened Desktop feature. Recognizing the importance of administrative control over Docker Desktop settings, we’ve listened to your feedback and are introducing enhancements prioritizing security and ease of use.

For IT administrators and non-admin users, Docker now offers the much-requested capability to specify and manage file-sharing options directly via Settings Management (Figure 1). This includes:

  • Selective file sharing: Choose your preferred file-sharing implementation directly from Settings > General, selecting between VirtioFS, gRPC FUSE, or osxfs. VirtioFS is only available for macOS versions 12.5 and above and is turned on by default.
  • Path allow-listing: Precisely control which paths users can share files from, enhancing security and compliance across your organization.
Screenshot of Docker Desktop showing Synchronized file shares page.
Figure 1: Docker Desktop’s enhanced file-sharing settings.
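
As a sketch of how an administrator might express path allow-listing with Settings Management: Docker Desktop reads an admin-settings.json file placed in its application-support directory, and individual settings can be locked so users cannot override them. The key names and schema below are illustrative assumptions; consult the Settings Management documentation for the authoritative format.

# Hypothetical admin-settings.json fragment (key names are assumptions)
cat <<'EOF' > admin-settings.json
{
  "configurationFileVersion": 2,
  "filesharingAllowedDirectories": {
    "value": ["/Users", "/tmp"],
    "locked": true
  }
}
EOF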

We’ve also reimagined the Settings > Resources > File Sharing interface to enhance your interaction with Docker Desktop (Figure 2). You’ll notice:

  • Clearer error messaging: Quickly understand and rectify issues with enhanced error messages.
  • Intuitive action buttons: Experience a smoother workflow with redesigned action buttons, making your Docker Desktop interactions as straightforward as possible.
Screenshot of Docker Desktop showing Resources page with options for File Sharing, Synchronized file shares, and Virtual sharing.
Figure 2: Settings Management in Docker Desktop notifying business subscribers of their access rights.

These enhancements are not just about improving current functionalities; they’re about unlocking new possibilities for your Docker experience. From increased security controls to a more navigable interface, every update is designed with your efficiency in mind.

Refining development with Docker Desktop’s Builds view update 

Docker Desktop’s previous update introduced Docker Build Cloud integration, aimed at reducing build times and improving build management. In this release, we’re landing incremental updates that refine the Builds view, making it easier and faster to manage your builds.

New in Docker Desktop 4.28:

  • Dedicated tabs: Separates active from completed builds for better organization (Figure 3).
  • Build insights: Displays build duration and cache steps, offering more clarity on the build process.
  • Reliability fixes: Resolves issues with updates for a more consistent experience.
  • UI improvements: Updates the empty state view for a clearer dashboard experience (Figure 4).

These updates are designed to streamline the build management process within Docker Desktop, leveraging Docker Build Cloud for more efficient builds.

Screenshot of Builds view showing tabs for Build history and Active builds.
Figure 3: Dedicated tabs for Build history vs. Active builds to allow more space for inspecting your builds.
Screenshot of Builds view with Active builds tab selected and showing "No builds currently active".
Figure 4: Updated view supporting empty state — no Active builds.

To explore how Docker Desktop and Docker Build Cloud can optimize your development workflow, read our Docker Build Cloud blog post. Experience the latest Builds view update to further enrich your local, hybrid, and cloud-native development journey.

These Docker Desktop updates support improved platform security and a better user experience. By introducing more detailed file-sharing controls, we aim to provide developers with a more straightforward administration experience and secure environment. As we move forward, we remain dedicated to refining Docker Desktop to meet the evolving needs of our users and organizations, enhancing their development workflows and agility to innovate.

Join the conversation and make your mark

Dive into the dialogue and contribute to the evolution of Docker Desktop. Use our feedback form to share your thoughts and let us know how to improve the Hardened Desktop features. Your input directly influences the development roadmap, ensuring Docker Desktop meets and exceeds our community and customers’ needs.

Learn more

How to Use Testcontainers on Jenkins CI
https://www.docker.com/blog/how-to-use-testcontainers-on-jenkins-ci/ (February 26, 2024)

Releasing software often and with confidence relies on a strong continuous integration and continuous delivery (CI/CD) process that includes the ability to automate tests. Jenkins offers an open source automation server that facilitates the automated release of software projects.

In this article, we will explore how you can run tests based on the open source Testcontainers framework in a Jenkins pipeline using Docker and Testcontainers Cloud.


Jenkins, which streamlines the development process by automating the building, testing, and deployment of code changes, is widely adopted in the DevOps ecosystem. It supports a vast array of plugins, enabling integration with various tools and technologies, making it highly customizable to meet specific project requirements.

Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.

Testcontainers also provides support for many popular programming languages, including Java, Go, .NET, Node.js, Python, and more. This article will show how to test a Java Spring Boot application (testcontainers-showcase) using Testcontainers in a Jenkins pipeline. Please fork the repository into your GitHub account. To run Testcontainers-based tests, a Testcontainers-supported container runtime, like Docker, needs to be available to agents.
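
To make that concrete, here is a minimal sketch of what a Testcontainers-based test does, written in Groovy for brevity and assuming the org.testcontainers:postgresql module is on the classpath; the showcase project's actual tests are Java-based and differ:

import org.testcontainers.containers.PostgreSQLContainer
import org.testcontainers.utility.DockerImageName

// Start a throwaway PostgreSQL container, exercise it, then discard it.
def postgres = new PostgreSQLContainer(DockerImageName.parse('postgres:16-alpine'))
postgres.start()
try {
    // A real test would point its datasource at this JDBC URL.
    assert postgres.jdbcUrl.startsWith('jdbc:postgresql://')
} finally {
    postgres.stop()
}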

Note: As Jenkins CI servers are mostly run on Linux machines, the following configurations are tested on a Linux machine only.

Docker containers as Jenkins agents

Let’s see how to use dynamic Docker container-based agents. To be able to use Docker containers as agents, install the Docker Pipeline plugin.

Now, let’s create a file named Jenkinsfile in the root of the project with the following content:

pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
            args '--network host -u root -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }

    triggers { pollSCM 'H/2 * * * *' } // poll every 2 mins

    stages {
        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}

We are using the eclipse-temurin:17.0.9_9-jdk-jammy Docker container as the agent that runs the builds for this pipeline. Note that we bind-mount the host’s Docker Unix socket into the agent and run it as the root user so that the agent can reach the host’s Docker engine; this grants the agent full control of that engine and is a potential security risk.

Add the Jenkinsfile and push the changes to the Git repository.

Now, go to the Jenkins Dashboard and select New Item to create the pipeline. Follow these steps:

  • Enter testcontainers-showcase as the pipeline name.
  • Select Pipeline as the job type.
  • Select OK.
  • Under the Pipeline section, set:
      • Branches to build: Branch Specifier (blank for ‘any’): */main.
      • Script Path: Jenkinsfile.
  • Select Save.
  • Choose Build Now to trigger the pipeline for the first time.

The pipeline should run the Testcontainers-based tests successfully in a container-based agent using the mounted host Docker socket (Docker-out-of-Docker) configuration.

Kubernetes pods as Jenkins agents

While running Testcontainers-based tests on Kubernetes pods, you can run a Docker-in-Docker (DinD) container as a sidecar. To use Kubernetes pods as Jenkins agents, install the Kubernetes plugin.

Now you can create the Jenkins pipeline using Kubernetes pods as agents as follows:

def pod =
"""
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: worker
spec:
  serviceAccountName: jenkins
  containers:
    - name: java17
      image: eclipse-temurin:17.0.9_9-jdk-jammy
      resources:
        requests:
          cpu: "1000m"
          memory: "2048Mi"
      imagePullPolicy: Always
      tty: true
      command: ["cat"]
    - name: dind
      image: docker:dind
      imagePullPolicy: Always
      tty: true
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
      securityContext:
        privileged: true
"""

pipeline {
    agent {
        kubernetes {
            yaml pod
        }
    }

    environment {
        DOCKER_HOST = 'tcp://localhost:2375'
        DOCKER_TLS_VERIFY = '0'
    }

    stages {
        stage('Build and Test') {
            steps {
                container('java17') {
                    script {
                        sh "./mvnw verify"
                    }
                }
            }
        }
    }
}

Although we can use a Docker-in-Docker based configuration to make the Docker environment available to the agent, this setup also brings configuration complexities and security risks.

  • By volume-mounting the host’s Docker Unix socket (Docker-out-of-Docker, or DooD) into the agents, the agents gain direct access to the host’s Docker engine.
  • With the DooD approach, file sharing via bind mounts doesn’t work, because the containerized application and the Docker engine operate in different filesystem contexts.
  • The Docker-in-Docker (DinD) approach requires running insecure privileged containers.

You can watch the Docker-in-Docker: Containerized CI Workflows presentation to learn more about the challenges of a Docker-in-Docker based CI setup.

This is where Testcontainers Cloud comes into the picture, making it easy to run Testcontainers-based tests simply and reliably.

By using Testcontainers Cloud, you don’t even need a Docker daemon running on the agent. Containers run in on-demand cloud environments, so you don’t need powerful CI agents with high CPU/memory for your builds.

Let’s see how to use Testcontainers Cloud with minimal setup and run Testcontainers-based tests.

Testcontainers Cloud-based setup

Testcontainers Cloud helps you run Testcontainers-based tests at scale by spinning up the dependent services as Docker containers on the cloud and having your tests connect to those services.

If you don’t have a Testcontainers Cloud account already, you can create an account and get a Service Account Token as follows:

  1. Sign up for a Testcontainers Cloud account.
  2. Once logged in, create an organization.
  3. Navigate to the Testcontainers Cloud dashboard and generate a Service account (Figure 1).
Figure 1: Create a new Testcontainers Cloud service account.

To use Testcontainers Cloud, we need to start a lightweight testcontainers-cloud agent by passing TC_CLOUD_TOKEN as an environment variable.

You can store the TC_CLOUD_TOKEN value as a secret in Jenkins as follows:

  • From the Dashboard, select Manage Jenkins.
  • Under Security, choose Credentials.
  • You can create a new domain or use the System domain.
  • Under Global credentials, select Add credentials.
  • Select Kind as Secret text.
  • Enter the TC_CLOUD_TOKEN value in Secret.
  • Enter tc-cloud-token-secret-id as the ID.
  • Select Create.

Next, you can update the Jenkinsfile as follows:

pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
        }
    }

    triggers { pollSCM 'H/2 * * * *' }

    stages {

        stage('TCC SetUp') {
            environment {
                TC_CLOUD_TOKEN = credentials('tc-cloud-token-secret-id')
            }
            steps {
                sh "curl -fsSL https://get.testcontainers.cloud/bash | sh"
            }
        }

        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}

We set the TC_CLOUD_TOKEN environment variable using the value from the tc-cloud-token-secret-id credential we created, and the pipeline starts a Testcontainers Cloud agent before running the tests.

Now, if you commit and push the updated Jenkinsfile, the pipeline will run the tests using Testcontainers Cloud. You should see log statements similar to the following, indicating that the Testcontainers-based tests are using Testcontainers Cloud instead of the default Docker daemon:

14:45:25.748 [testcontainers-lifecycle-0] INFO  org.testcontainers.DockerClientFactory - Connected to docker: 
  Server Version: 78+testcontainerscloud (via Testcontainers Desktop 1.5.5)
  API Version: 1.43
  Operating System: Ubuntu 20.04 LTS
  Total Memory: 7407 MB

You can also leverage Testcontainers Cloud’s Turbo mode in conjunction with build tools that feature parallel run capabilities to run tests even faster.

In the case of Maven, you can use the -DforkCount=N system property to specify the degree of parallelization. For Gradle, you can specify the degree of parallelization using the maxParallelForks property.

We can enable parallel execution of our tests using four forks in the Jenkinsfile as follows:

stage('Build and Test') {
    steps {
        sh './mvnw verify -DforkCount=4'
    }
}
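
If your project builds with Gradle instead, a minimal sketch of the equivalent setting would go in build.gradle; maxParallelForks is a standard property of Gradle's Test task:

// build.gradle: a minimal sketch that forks up to four test JVMs in parallel
test {
    maxParallelForks = 4
}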

For more information, check out the article on parallelizing your tests with Turbo mode.

Conclusion

In this article, we explored how to run Testcontainers-based tests on Jenkins CI using dynamic containers and Kubernetes pods as agents, with Docker-out-of-Docker and Docker-in-Docker based configurations.

Then we learned how to create a Testcontainers Cloud account and configure the pipeline to run tests using Testcontainers Cloud. We also explored leveraging Testcontainers Cloud Turbo mode combined with your build tool’s parallel execution capabilities. 

Although we have demonstrated this setup using a Java project as an example, Testcontainers libraries exist for other popular languages, too, and you can follow the same configuration pattern to run your Testcontainers-based tests on Jenkins CI in Go, .NET, Python, Node.js, and more.

Get started with Testcontainers Cloud by creating a free account at the website.

Learn more

How to Use OpenPubkey to Solve Key Management via SSO https://www.docker.com/blog/how-to-use-openpubkey-to-solve-key-management-via-sso/ Tue, 20 Feb 2024 14:14:55 +0000 https://www.docker.com/?p=51668 This post was contributed by BastionZero.

Giving people the ability to sign messages under their identity is extremely powerful. For instance, this functionality lets you SSH into servers, sign software artifacts, and create end-to-end encrypted communications under your single sign-on (SSO) identity.

The OpenPubkey protocol and open source project brings the power of digital signatures to both people and workloads without adding trusted parties. OpenPubkey is built on the OpenID Connect (OIDC) SSO protocol, which is supported by major identity providers, including Google, Microsoft, Okta, and Facebook. 

This article will explore how OpenPubkey works and look at three use cases in detail.


What can OpenPubkey do?

Public key cryptography was invented in the 1970s and has become the most powerful tool in the security engineering toolbox. It allows anything holding a public key, and its associated signing key, to create a cryptographic identity. This identity is extremely secure because the party can not only use their signing key to prove they are who they say they are, but also sign messages under this identity.

Servers often authenticate themselves to people using public keys associated with the server’s identity, yet the process rarely works the other way. People rarely authenticate to servers using public keys associated with a person’s identity. Instead, less secure authentication methods are employed, such as authentication secrets stored in cookies, which must be transmitted on every request.

Let’s say that Alice wanted to sign the message “Flee at once — all is discovered” under her email address alice@example.com. How would she do it? One approach would be for Alice to create a public key (PK) and signing key (SK) and then publish the mapping between her email address and the PK on a public webpage.

This approach has two problems. First, you and everyone verifying this message must trust that the webpage has honestly mapped Alice’s email to her public key and has not maliciously replaced her public key with another key that could be used to impersonate Alice. Second, Alice must now protect and manage the signing key associated with this public key. History has shown that users can be terrible at protecting signing keys. Probably the most famous example is the man who lost a signing key controlling half a billion dollars’ worth of Bitcoin.

Human authentication on the web was originally supposed to work the same way as server authentication. Much like a certificate authority (CA) issues a certificate to a server, which associates a public key with the server’s identity (`example.com`), the plan was to have a CA issue a client certificate to a person that associates a public key with that person’s identity. These client certificates are still around and are well-used for certain applications, but they never caught on for widespread personal use, likely because of the terrible user experience (UX) of asking people to secure and manage secret signing keys.

OpenPubkey addresses both of these problems. It uses your identity provider to perform the mapping between identity and your public key. Because you already trust your identity provider, having your identity provider perform this mapping does not add any new trusted parties. For instance, Alice must already trust her identity provider, Example.com, to manage her identity (alice@example.com), so it is natural to use Example.com to perform the mapping between Alice’s public key and her Example.com identity (alice@example.com). Example.com already knows how to authenticate @example.com users, so Alice doesn’t need to set up a new account or create new authentication factors.

Second, to solve the problem of lost or stolen signing keys, OpenPubkey public keys and signing keys are ephemeral. That means the signing keys can be deleted and recreated at will. OpenPubkey generates a fresh public key and signing key for a user every time that user authenticates to their identity provider. This approach to making public keys ephemeral removes one of the most significant UX barriers to authenticating people with public keys. It also provides a security win; it creates a much smaller window of exposure if a signing key is stolen, as signing keys can be deleted when the user idles or logs out.

How does OpenPubkey work?

Let’s return to our situation: Alice wants to sign the message “Flee at once — all is discovered” under her identity (alice@example.com). First, Alice’s computer generates a fresh public key and signing key. Next, she needs her identity provider, Example.com, to associate her identity with this public key. How does OpenPubkey do this? To understand the process, we first need to provide details about how SSO/OpenID Connect works.

Example.com, which is the identity provider for @example.com, knows how to check that Alice is really alice@example.com. Example.com does this every time Alice signs into Example.com. In OIDC, the identity provider signs a statement, called an ID Token, which roughly says “this is alice@example.com”. The OIDC authentication process also allows the user (or their software) to submit a random value, called the nonce, that will be included in the issued ID Token.

Alice’s OpenPubkey client puts the cryptographic hash of Alice’s public key into this value in her ID Token. Alice’s OpenPubkey client modifies the ID Token into an object called a PK Token, which essentially says: “this is alice@example.com and her public key is 0xABCE…“. We’re skipping a few details of OpenPubkey, but this is the basic idea.

Now that Alice has a PK Token signed by Example.com, which binds her public key to her identity, she can sign the statement “Flee at once — all is discovered” and broadcast the message, the signature, and her PK Token. Bob, or anyone else for that matter, can check whether this message is really from alice@example.com by verifying that the PK Token is signed by Example.com and then checking that Alice’s signature matches the public key in the PK Token.
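
To make the flow concrete, here is a minimal, self-contained Groovy sketch of the client-side steps just described: generate an ephemeral keypair, commit to the public key by hashing it into the OIDC nonce, and sign a message under the ephemeral key. This is illustrative only, not the OpenPubkey library's API:

import java.security.KeyPairGenerator
import java.security.MessageDigest
import java.security.Signature

// 1. Generate a fresh (ephemeral) keypair for this session.
def keyPair = KeyPairGenerator.getInstance('EC').generateKeyPair()

// 2. Commit to the public key: its hash becomes the OIDC nonce, so the
//    provider-signed ID Token binds the key to alice@example.com.
def nonce = MessageDigest.getInstance('SHA-256')
        .digest(keyPair.getPublic().getEncoded())
        .encodeBase64()
        .toString()
// The client now authenticates to Example.com, submitting the nonce, and
// receives an ID Token that includes it (details skipped, as above).

// 3. Sign the message under the ephemeral key.
def signer = Signature.getInstance('SHA256withECDSA')
signer.initSign(keyPair.getPrivate())
signer.update('Flee at once — all is discovered'.getBytes('UTF-8'))
def messageSignature = signer.sign()

// Bob verifies by checking the provider's signature on the token and the
// nonce, then verifying messageSignature against the embedded public key.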

OpenPubkey use cases

Now let’s look at OpenPubkey use cases.

SSH

OpenPubkey is useful for more than just telling your friends that “Flee at once — all is discovered.” Because most security protocols are built on public key cryptography, OpenPubkey can easily plug human identities into these protocols.

SSH supports the authentication of both machines and users with public keys (also known as SSH keys).  However, these SSH keys are not associated with identities. With an SSH key, you can say “allow root access for key 0xABCD…”, but not “allow root access for alice@example.com.” This presents several UX and security problems. As mentioned previously, people struggle with managing their secret signing keys, and SSH is no exception. 

Even more problematic, because public keys are not associated with identities, it is difficult to tell if an SSH key represents a person or machine that should no longer have access. As Tatu Ylonen, the inventor of SSH, writes in his recent paper Challenges in Managing SSH Keys — and a Call for Solutions:

“In analyzing SSH keys for dozens of large enterprises, it has turned out that in many environments 90% of all authorized keys are no longer used. They represent access that was provisioned, but never terminated when the person left or the need for access ceased to exist. Some of the authorized keys are 10-20 years old, and typically about 10% of them grant root access or other privileged access. The vast majority of private user keys found in most environments do not have passphrases.”

OpenPubkey can be used to solve this problem by binding SSH keys to user identities. That way, the server can check whether the identity (alice@example.com) is allowed to connect. This means Alice can access her SSH server using SSO: she logs in to Example.com as alice@example.com and gains access to the server for as long as her SSO session is valid.

OpenPubkey authentication can be added to SSH with a small change to the SSH config. No code changes to SSH are required. To try it out, or learn more about how OpenPubkey’s SSH works, see our recent post: How to Use OpenPubkey to SSH Without SSH Keys.
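
As a rough illustration of the kind of configuration involved, OpenSSH's standard AuthorizedKeysCommand hook can delegate key verification to an external program. The binary name and path below are hypothetical placeholders, not the actual OpenPubkey tooling:

# /etc/ssh/sshd_config (sketch; opk-verify is a hypothetical verifier binary)
# %u = username, %k = base64-encoded key, %t = key type (standard OpenSSH tokens)
AuthorizedKeysCommand /usr/local/bin/opk-verify %u %k %t
AuthorizedKeysCommandUser root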

Secure messaging

OpenPubkey can also be used to solve one of the major issues with end-to-end encrypted messaging. Suppose someone sends you a message on a secure messaging app: How do you know they are actually that person? Some secure messaging apps let you look up the public key that is securing your communication, but how do you know that that public key is actually the public key of the person you want to privately communicate with?

This connection between public key and identity is the core problem that OpenPubkey solves. With OpenPubkey, Bob can learn the public key for alice@example.com by checking an ID Token signed by Example.com, which includes Alice’s public key and her email address. This does involve trusting Example.com, but you generally already have to trust Example.com to SSO @example.com users.

While it’s not discussed here, OpenPubkey does support an optional protocol — the MFA cosigner — which removes the requirement of trusting the identity provider. But even without the MFA cosigner protocol, OpenPubkey provides stronger security for end-to-end encrypted messaging because it allows Bob to learn Alice’s public key directly from Alice’s identity provider.

Signing container images

OpenPubkey is not limited to human use cases. OpenPubkey developers are working on a solution to allow workflows (rather than people) to sign images using GitHub’s identity provider and GitHub Actions. You can learn more about this use case by reading How to Use OpenPubkey with GitHub Actions Workloads.

Help us expand the utility of OpenPubkey

These three use cases should not be seen as the limits of what OpenPubkey can do. This approach is highly flexible and can be used for VPNs, cosigning, container service meshes, cryptocurrencies, web applications, and even physical access. 

We invite anyone who wants to contribute to OpenPubkey to visit and star our GitHub repo. We are building an open and friendly community and welcome pull requests from anyone — see the contribution guidelines to learn more.    

Learn more
