Kubernetes – Docker

KubeCon EU 2024: Highlights from Paris
https://www.docker.com/blog/kubecon-eu-2024-highlights-from-paris/ (Wed, 03 Apr 2024)

Are in-person tech conferences back in fashion? Or are engineers just willing to travel for fresh baguettes? In this post, I round up a few highlights from KubeCon Europe 2024, held March 19-22 in Paris.

My last KubeCon was in Detroit in 2022, when tech events were still slowly recovering from COVID. But KubeCon EU in Paris was buzzing, with more than 12,000 attendees! I couldn’t even get into a few of the most popular talks because the lines to get in wrapped around the exhibition hall even after the rooms were full. Fortunately, the CNCF has already posted all the talk recordings so we can catch up on what we missed in person.

Now that I’ve been back home for a bit, here are a few highlights I rounded up from KubeCon EU 2024.


Docker at KubeCon

If you stopped by the Docker booth, you may have seen our Megennis Motorsport Racing experience.

The KubeCon EU 2024 Docker booth featured a Megennis Motorsport Racing experience.

Or you may have talked to one of our engineers about our new fast Docker Build Cloud experience. Everyone I talked to about Build Cloud got it immediately. I’m proud of all the work we did to make fast, hosted image builds work seamlessly with the existing docker build. 
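
If you haven't set it up before, the switch is small. Here's a minimal sketch, assuming you've created a cloud builder named default under your Docker organization (replace <ORG> with your own org name):

# One-time setup: register a cloud builder for your organization
docker buildx create --driver cloud <ORG>/default

# Then point an ordinary build at it
docker build --builder cloud-<ORG>-default -t my-app .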

KubeCon booth visitors ran 173 identical builds using both Docker Build Cloud and GitHub Actions (GHA). Build Cloud builds took an average of 32 seconds each, compared to GHA's 159 seconds per build.

Docker Build Cloud wasn’t the only new product we highlighted at KubeCon this year. I also got a lot of questions about Docker Scout and how to track image dependencies. Our Head of Security, Rachel Taylor, was available to demo Docker Scout for curious customers.

Chloe Cucinotta, a Director in Docker’s marketing org, hands out eco-friendly swag for booth visitors. Photo by Docker Captain Mohammad-Ali A’râbi.

Docker Scout and Sysdig Security Day

In addition to live demos at the booth, Docker Scout was represented at KubeCon through a co-sponsored AMA panel and party with Sysdig Security Day. The event aimed to raise awareness of Docker's impact on securing the software supply chain and of how to solve concrete security issues with Docker Scout. It was an opportunity to explore topics in the cloud-native and open source security space alongside industry leaders Snyk and Sysdig.

The AMA panel featured Rachel Taylor, Director of Information Security, Risk, & Trust at Docker, who discussed approaches to securing the software supply chain. The party that followed gave Docker a chance to hear about our shared customers' unique challenges one-on-one. Through the event, Docker customers learned how the Sysdig runtime monitoring integration within Docker Scout yields even more actionable insights and remediation recommendations.

Live from the show floor

Docker CEO Scott Johnston spoke with theCUBE hosts Savannah Peterson and Rob Strechay to discuss Docker Build Cloud. “What used to take an hour is now a minute and a half,” he explained.

Testcontainers and OpenShift

During KubeCon, we announced that Red Hat and Testcontainers have partnered to provide Testcontainers in OpenShift. This collaboration simplifies the testing process, allowing developers to efficiently manage their workflows without compromising on security or flexibility. By streamlining development tasks, this solution promises a significant boost in productivity for developers working within containerized environments. Read Improving the Developer Experience with Testcontainers and OpenShift to learn more.

Eli Aleyner (Head of Technical Alliances at Docker) and Daniel Oh (Senior Principal Technical Marketing Manager at Red Hat) take a selfie at the Red Hat booth.

Eli and Daniel also provided a demo and an AMA at the Red Hat booth.

Must-watch talks

During the Friday keynote, Bob Wise, CEO of Heroku, describes a lightbulb moment when he first heard about Docker as part of his discussion about the beginnings of cloud-native.

For a long time, I've felt that the Kubernetes API model has been its superpower. The investment in easy ways to extend Kubernetes with CRDs and the controller-runtime project is unlocking a bunch of exciting platform engineering projects.

Here are a few of the many talks that my team and I really liked; they're all on YouTube now.

Platform 

I loved how Sébastien Guilloux (Elastic), in his talk Building a Large Scale Multi-Cloud Multi-Region SaaS Platform with Kubernetes Controllers, explained how to put all the pieces together to build a multi-region platform. His design takes advantage of the nice bits of Kubernetes controllers while questioning assumptions about how global state should work.

Stefan Prodan (ControlPlane) gave a talk on GitOps Continuous Delivery at Scale with Flux. Flux has a strong, opinionated point of view on how CI/CD tools should interact with CRDs and the events API. There were also a few talks on Crossplane that I'd like to go back and watch. We've been experimenting a lot with Crossplane at Docker, and we like how it works with Helm and image registries in a way that complements our existing image and registry tooling.

AI 

Of course, people at KubeCon are infra nerds, so when we think about AI, we first think about all those GPUs the AIs are going to need.

There was an armful of GPU provisioning talks. I attended How KubeVirt Improves Performance with CI-Driven Benchmarking, and You Can Too, in which speakers Ryan Hallisey and Alay Patel from Nvidia talked about driving down the time to allocate VMs with GPUs. But how is AI going to fit into how we run and operate servers on Kubernetes? There was less consensus on this point, but it was fun to make random guesses about what it might look like. When I was hanging out at the AuthZed booth, I made a joke about asking an AI to write my infra access control rules, and they mostly laughed and rolled their eyes.

Slimming and debugging 

Here’s a container journey I see a lot these days:

  • I have a fat container image.
  • I get a security alert about a vulnerability in one of that image’s dependencies that I don’t even use.
  • I switch to a slimmer base image, like a distroless image.
  • Oops! Now the image doesn’t work and is annoying to debug because there’s no shell.

But we’re making progress on making this easier!

In his KubeCon talk Is Your Image Really Distroless?, Docker’s Laurent Goderre walked through how to use multi-stage builds and init containers to separate out the build + init dependencies from the steady-state runtime dependencies. 

Ephemeral containers in Kubernetes graduated to stable in 2022. In their talk, Building a Tool to Debug Minimal Container Images in Kubernetes, Docker, and ContainerD, Kyle Quest (AutonomousPlane) and Saiyam Pathak (Civo) showed how you can use the ephemeral containers API to build tooling for creating a shell in a distroless container without a shell.

Talk slide showing the comparison of different approaches for running commands in a minimal container image with no shell.
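
You can get a feel for what the ephemeral containers API unlocks without any special tooling, since kubectl exposes it directly. A minimal sketch, assuming a pod named my-pod with an app container named my-app (hypothetical names):

# Attach a throwaway debug container with a shell to a running pod
kubectl debug -it my-pod --image=busybox:1.36 --target=my-app -- sh

The --target flag shares the process namespace with the application container, so you can inspect its processes even though the application image itself ships no shell.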

One thing that Kyle and Saiyam mentioned was how useful Nix and Nixery.dev are for building these kinds of debugging tools. We're also using Nix in docker debug. Docker engineer Johannes Grossman says that Nix solves some problems around dynamic linking, a property he calls "the clash-free composability property of Nix."
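
Nixery is fun to try on its own: it builds and serves container images on demand, where each image path segment names a Nix package to include. A quick sketch (the package list is up to you):

# Pull an ad hoc image containing a shell plus curl and jq
docker run -it nixery.dev/shell/curl/jq sh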

See you in Salt Lake City!

Now that we’ve recovered from the action-packed KubeCon in Paris, we can start planning for KubeCon + CloudNativeCon North America 2024. We’ll see you in beautiful Salt Lake City!

Is Your Container Image Really Distroless?
https://www.docker.com/blog/is-your-container-image-really-distroless/ (Wed, 27 Mar 2024)

Containerization helped drastically improve the security of applications by providing engineers with greater control over their runtime environment. However, a significant time investment is required to maintain the security posture of those applications, given the daily discovery of new vulnerabilities as well as regular releases of languages and frameworks.

The concept of “distroless” images offers the promise of greatly reducing the time needed to keep applications secure by eliminating most of the software contained in typical container images. This approach also reduces the amount of time teams spend remediating vulnerabilities, allowing them to focus only on the software they are using. 

In this article, we explain what makes an image distroless, describe tools that make the creation of distroless images practical, and discuss whether distroless images live up to their potential.


What’s a distro?

A Linux distribution is a complete operating system built around the Linux kernel, comprising a package management system, GNU tools and libraries, additional software, and often a graphical user interface.

Common Linux distributions include Debian, Ubuntu, Arch Linux, Fedora, Red Hat Enterprise Linux, CentOS, and Alpine Linux (which is more common in the world of containers). These, like most Linux distros, take security seriously, with teams working diligently to release frequent patches and updates for known vulnerabilities. A key challenge that all Linux distributions must face involves the usability/security dilemma.

On its own, the Linux kernel is not very usable, so many utility commands are included in distributions to cover a large array of use cases. Having the right utilities included in the distribution without having to install additional packages greatly improves a distro’s usability. The downside of this increase in usability, however, is an increased attack surface area to keep up to date. 

A Linux distro must strike a balance between these two elements, and different distros have different approaches to doing so. A key aspect to keep in mind is that a distro that emphasizes usability is not “less secure” than one that does not emphasize usability. What it means is that the distro with more utility packages requires more effort from its users to keep it secure.

Multi-stage builds

Multi-stage builds allow developers to separate build-time dependencies from runtime ones. Developers can now start from a full-featured build image with all the necessary components installed, perform the necessary build step, and then copy only the result of those steps to a more minimal or even an empty image, called “scratch”. With this approach, there’s no need to clean up dependencies and, as an added bonus, the build stages are also cacheable, which can considerably reduce build time. 

The following example shows a Go program taking advantage of multi-stage builds. Because the Golang runtime is compiled into the binary, only the binary and root certificates need to be copied to the blank slate image.

FROM golang:1.21.5-alpine as build
WORKDIR /
COPY go.* .
RUN go mod download
COPY . .
RUN go build -o my-app


FROM scratch
COPY --from=build \
  /etc/ssl/certs/ca-certificates.crt \
  /etc/ssl/certs/ca-certificates.crt
COPY --from=build /my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]
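
To see the payoff, build the image and inspect the result; the final stage contains little more than the compiled binary and the root certificates (assuming the Dockerfile above is saved in the current directory):

docker build -t my-app .
docker image ls my-app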

BuildKit

BuildKit, the current engine used by docker build, helps developers create minimal images thanks to its extensible, pluggable architecture. It provides the ability to specify alternative frontends (with the default being the familiar Dockerfile) to abstract and hide the complexity of creating distroless images. These frontends can accept more streamlined and declarative inputs for builds and can produce images that contain only the software needed for the application to run. 

The following example shows the input for a frontend for creating Python applications called mopy by Julian Goede.

#syntax=cmdjulian/mopy
apiVersion: v1
python: 3.9.2
build-deps:
  - libopenblas-dev
  - gfortran
  - build-essential
envs:
  MYENV: envVar1
pip:
  - numpy==1.22
  - slycot
  - ./my_local_pip/
  - ./requirements.txt
labels:
  foo: bar
  fizz: ${mopy.sbom}
project: my-python-app/

So, is your image really distroless?

Thanks to new tools for creating container images like multi-stage builds and BuildKit, it is now a lot more practical to create images that only contain the required software and its runtime dependencies. 

However, many images claiming to be distroless still include a shell (usually Bash) and/or BusyBox, which provides many of the commands a Linux distribution does — including wget — that can leave containers vulnerable to Living off the land (LOTL) attacks. This raises the question, “Why would an image trying to be distroless still include key parts of a Linux distribution?” The answer typically involves container initialization. 

Developers often have to make their applications configurable to meet the needs of their users. Most of the time, those configurations are not known at build time so they need to be configured at run time. Often, these configurations are applied using shell initialization scripts, which in turn depend on common Linux utilities such as sed, grep, cp, etc. When this is the case, the shell and utilities are only needed for the first few seconds of the container’s lifetime. Luckily, there is a way to create true distroless images while still allowing initialization using tools available from most container orchestrators: init containers.

Init containers

In Kubernetes, an init container is a container that starts and must complete successfully before the primary container can start. By using a non-distroless container as an init container that shares a volume with the primary container, the runtime environment and application can be configured before the application starts. 

The lifetime of that init container is short (often just a couple seconds), and it typically doesn’t need to be exposed to the internet. Much like multi-stage builds allow developers to separate the build-time dependencies from the runtime dependencies, init containers allow developers to separate initialization dependencies from the execution dependencies. 

The concept of an init container may be familiar if you use relational databases, where an init container is often used to perform schema migration before a new version of an application starts.

Kubernetes example

Here are two examples of using init containers. First, using Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: kubecon-postgress-pod
  labels:
    app.kubernetes.io/name: KubeConPostgress
spec:
  containers:
  - name: postgress
    image: laurentgoderre689/postgres-distroless
    securityContext:
      runAsUser: 70
      runAsGroup: 70
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  initContainers:
  - name: init-postgress
    image: postgres:alpine3.18
    env:
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            name: kubecon-postgress-admin-pwd
            key: password
    command: ['docker-ensure-initdb.sh']
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  volumes:
  - name: db
    emptyDir: {}

- - - 

> kubectl apply -f pod.yml && kubectl get pods
pod/kubecon-postgress-pod created
NAME                    READY   STATUS     RESTARTS   AGE
kubecon-postgress-pod   0/1     Init:0/1   0          0s
> kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
kubecon-postgress-pod   1/1     Running   0          10s

Docker Compose example

The init container concept can also be emulated in Docker Compose for local development using service dependencies and conditions.

services:
  db:
    image: laurentgoderre689/postgres-distroless
    user: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/
    depends_on:
      db-init:
        condition: service_completed_successfully

  db-init:
    image: postgres:alpine3.18
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data/
    user: postgres
    command: docker-ensure-initdb.sh

volumes:
  pgdata:

- - - 
> docker-compose up 
[+] Running 4/0
 ✔ Network compose_default      Created                                                                                                                      
 ✔ Volume "compose_pgdata"      Created                                                                                                                     
 ✔ Container compose-db-init-1  Created                                                                                                                      
 ✔ Container compose-db-1       Created                                                                                                                      
Attaching to db-1, db-init-1
db-init-1  | The files belonging to this database system will be owned by user "postgres".
db-init-1  | This user must also own the server process.
db-init-1  | 
db-init-1  | The database cluster will be initialized with locale "en_US.utf8".
db-init-1  | The default database encoding has accordingly been set to "UTF8".
db-init-1  | The default text search configuration will be set to "english".
db-init-1  | [...]
db-init-1 exited with code 0
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  starting PostgreSQL 16.1 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db-1       | 2024-02-23 14:59:33.194 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1       | 2024-02-23 14:59:33.196 UTC [9] LOG:  database system was shut down at 2024-02-23 14:59:32 UTC
db-1       | 2024-02-23 14:59:33.198 UTC [1] LOG:  database system is ready to accept connections

As demonstrated by the previous example, an init container can be used alongside a container to remove the need for general-purpose software and allow the creation of true distroless images. 

Conclusion

This article explained how Docker build tools allow for the separation of build-time dependencies from run-time dependencies to create “distroless” images. For example, using init containers allows developers to separate the logic needed to configure a runtime environment from the environment itself and provide a more secure container. This approach also helps teams focus their efforts on the software they use and find a better balance between security and usability.

Get Started with the Microcks Docker Extension for API Mocking and Testing
https://www.docker.com/blog/get-started-with-the-microcks-docker-extension-for-api-mocking-and-testing/ (Thu, 28 Sep 2023)

In the dynamic landscape of software development, collaborations often lead to innovative solutions that simplify complex challenges. The Docker and Microcks partnership is a prime example, demonstrating how the relationship between two industry leaders can reshape local application development.

This article delves into the collaborative efforts of Docker and Microcks, spotlighting the emergence of the Microcks Docker Desktop Extension and its transformative impact on the development ecosystem.


What is Microcks?

Microcks is an open source Kubernetes and cloud-native tool for API mocking and testing. It has been a Cloud Native Computing Foundation Sandbox project since summer 2023.  

Microcks addresses two primary use cases: 

  • Simulating (or mocking) an API or a microservice from a set of descriptive assets (specifications or contracts) 
  • Validating (or testing) the conformance of your application against your API specification by conducting contract tests

The unique thing about Microcks is that it offers a uniform and consistent approach for all kinds of request/response APIs (REST, GraphQL, gRPC, SOAP) and event-driven APIs (currently supporting eight different protocols) as shown in Figure 1.

Illustration of various APIs and protocols covered by Microcks, including REST, GraphQL, gRPC, SOAP Kafka broker, MQTT, and RabbitMQ.
Figure 1: Microcks covers all kinds of APIs.

Microcks speeds up the API development life cycle by shortening the feedback loop from the design phase and easing the pain of provisioning environments with many dependencies. All these features establish Microcks as a great help in enforcing backward compatibility of your API and microservice interfaces.

So, for developers, Microcks brings consistency, convenience, and speed to your API lifecycle.

Why run Microcks as a Docker Desktop Extension?

Although Microcks is a powerhouse, running it as a Docker Desktop Extension takes the developer experience, ease of use, and rapid iteration in the inner loop to new levels. With Docker’s containerization capabilities seamlessly integrated, developers no longer need to navigate complex setups or wrestle with compatibility issues. It’s a plug-and-play solution that transforms the development environment into a playground for innovation.

The simplicity of running Microcks as a Docker extension is a game-changer. Developers can effortlessly set up and deploy Microcks in their existing Docker environment, eliminating the need for extensive configurations. This ease of use empowers developers to focus on what they do best — building and testing APIs rather than grappling with deployment intricacies.

In agile development, rapid iterations in the inner loop are paramount. Microcks, as a Docker extension, accelerates this process. Developers can swiftly create, test, and iterate on APIs without leaving the Docker environment. This tight feedback loop ensures developers identify and address issues early, resulting in faster development cycles and higher-quality software.

The combination of two best-of-breed projects, Docker and Microcks, provides: 

  • Streamlined developer experience
  • Ease of use at its core
  • Rapid iterations in the inner loop

Extension architecture

The Microcks Docker Desktop Extension has an evolving architecture depending on which features you enable. The UI that runs in Docker Desktop manages your preferences in a ~/.microcks-docker-desktop-extension folder and starts, stops, and cleans up the needed containers.

At its core, the architecture (Figure 2) embeds two minimal elements: the Microcks main container and a MongoDB database. The different containers of the extension run in an isolated Docker network where only the HTTP port of the main container is bound to your local host.

Illustration showing basic elements of Microcks extension architecture, including Microcks Docker network and MongoDB.
Figure 2: Microcks extension default architecture.

Through the Settings panel offered by the extension (Figure 3), you can tune the port binding and enable more features, such as:

  • Support for mocking and testing asynchronous APIs via AsyncAPI, with Kafka and WebSocket
  • The ability to run Postman collection tests in Microcks
Screenshot of Microcks Settings panel showing "Enable asynchronous APIs" and "Enable testing with Postman" options.
Figure 3: Microcks extension Settings panel.

When applied, your settings are persistent in your ~/.microcks-docker-desktop-extension folder, and the extension augments the initial architecture with the required services. Even though the extension starts with additional containers, they are carefully crafted and chosen to be lightweight and consume as few resources as possible. For example, we selected the Redpanda Kafka-compatible broker for its super-light experience. 

The schema shown in Figure 4 illustrates such a “maximal architecture” for the extension:

 Illustration showing maximal architecture of Microcks extension including MongoDB, Microcks Postman runtime, Microcks Async Minion, and Redpanda Kafka Broker.
Figure 4: Microcks extension maximal architecture.

The Docker Desktop Extension architecture encapsulates the convergence of Docker’s containerization capabilities and Microcks’ API testing prowess. This collaborative endeavor presents developers with a unified interface to toggle between these functionalities seamlessly. The architecture ensures a cohesive experience, enabling developers to harness the power of both Docker and Microcks without the need for constant tool switching.

Getting started

Getting started with the Docker Desktop Extension is a straightforward process that empowers developers to leverage the benefits of unified development. The extension can be easily integrated into existing workflows, offering a familiar interface within Docker. This seamless integration streamlines the setup process, allowing developers to dive into their projects without extensive configuration.

Here are the steps for installing Microcks as a Docker Desktop Extension:
1. Choose Add Extensions in the left sidebar (Figure 5).

Screenshot of Docker Desktop with red arrow pointing to the Add Extensions option in the left sidebar.
Figure 5: Add extensions in the Docker Desktop.

2. Switch to the Browse tab.

3. In the Filters drop-down, select the Testing Tools category.

4. Find Microcks and then select Install (Figure 6).

Screenshot of Microcks extension with red arrow pointing to Open in upper right corner.
Figure 6: Find and open Microcks.

Launching Microcks

The next step is to launch Microcks (Figure 7).

Screenshot of Microcks showing red arrow pointing to rectangular blue button that says "Launch Microcks"
Figure 7: Launch Microcks.

The Settings panel allows you to configure some options, like whether you'd like to enable the asynchronous APIs features (disabled by default) and whether you need to set an offset for the ports used to access the services (Figures 8 and 9).

 Screenshot of Microcks showing green oval that says "Running" next to text reading: Microcks is running. To access the UI navigate to: http://localhost:8080.
Figure 8: Microcks is up and running.
Screenshot of Microcks dashboard showing green button that says APIs | Services. This option lets you browse, get info, and request/response mocks on Microcks managed APIs & Services.
Figure 9: Access asynchronous APIs and services.

Sample app deployment

To illustrate the real-world implications of the Docker Desktop Extension, consider a sample application deployment. As developers embark on local application development, the Docker Desktop Extension enables them to create, test, and iterate on their containers while leveraging Microcks’ API mocking and testing capabilities.

This combined approach ensures that the application’s containerization and API aspects are thoroughly validated, resulting in a higher quality end product. Check out the three-minute “Getting Started with Microcks Docker Desktop Extension” video for more information.

Conclusion

The Docker and Microcks partnership, exemplified by the Docker Desktop Extension, signifies a milestone in collaborative software development. By harmonizing containerization and API testing, this collaboration addresses the challenges of fragmented workflows, accelerating development cycles and elevating the quality of applications.

By embracing the capabilities of Docker and Microcks, developers are poised to embark on a journey characterized by efficiency, reliability, and collaborative synergy.

Remember that Microcks is a Cloud Native Computing Foundation Sandbox project supported by an open community, which means you, too, can help make Microcks even greater. Come and say hi on our GitHub discussion or Zulip chat 🐙, send some love through GitHub stars ⭐️, or follow us on Twitter, Mastodon, LinkedIn, and our YouTube channel.

Simplifying Kubernetes Development: Docker Desktop + Red Hat OpenShift
https://www.docker.com/blog/blog-docker-desktop-red-hat-openshift/ (Thu, 25 May 2023)

It's Red Hat Summit week, and we wanted to use this as an opportunity to highlight several aspects of the Docker and Red Hat partnership. In this post, we highlight Docker Desktop and Red Hat OpenShift. In another post, we talk about 160% year-over-year growth in pulls of Red Hat's Universal Base Image on Docker Hub.

Why Docker Desktop + Red Hat OpenShift?

Docker Desktop + Red Hat OpenShift allows the thousands of enterprises that depend on Red Hat OpenShift to leverage the Docker Desktop platform that more than 20 million active developers already know and trust to eliminate daily friction and empower them to deliver results.

Red Hat logo with the word OpenShift on a purple background

What problem it solves

Sometimes, it feels like coding is easy compared to the sprint demo and getting everybody’s approval to move forward. Docker Desktop does the yak shaving to make developing, using, and testing containerized applications on Mac and Windows local environments easy, and the Red Hat OpenShift extension for Docker Desktop extends that with one-click pushes to Red Hat’s cloud container platform.

One especially convenient use of the Red Hat OpenShift extension for Docker Desktop is for quickly previewing or sharing work, where the ease of access and friction-free deploys without waiting for CI can reduce cycle times early in the dev process and enable rapid iteration leading up to full CI and production deployment.

Getting started

If you don’t already have Docker Desktop installed, refer to our guide on Getting Started with Docker.

Installing the extension

The Red Hat OpenShift extension is one of many in the Docker Extensions Marketplace. Select Install to install the extension.

Screenshot of Docker Extensions Marketplace showing search for OpenShift

👋 Pro tip: Have an idea for an extension that isn’t in the marketplace? Build your own and let us know about it! Or don’t tell us about it and keep it for internal use only — private plugins for company or team-specific workflows are also a thing.

Signing into the OpenShift cluster in the extension

From within your Red Hat OpenShift cluster web console, select the copy login command from the user menu:

Screenshot of Red Hat OpenShift cluster web console, with “copy login command” selected in the user menu

👋 Don’t have an OpenShift cluster? Red Hat supports OpenShift exploration and development with a developer sandbox program that offers immediate access to a cluster, guided tutorials, and more. To sign up, go to their Developer Sandbox portal.

That leads to a page with the details you need to fetch the Kubernetes context for use in the Red Hat OpenShift extension and elsewhere:

Screenshot of the issued API token

From there, you can come back to the Red Hat OpenShift extension in Docker Desktop. In the Deploy to OpenShift screen, you can change or log in to a Kubernetes context. In the login screen, paste the whole oc login line, including the token and server URL from the previous step:

Screenshot of "Login to OpenShift" dialog box

Your first deploy

Using the extension is even easier than installing it; you just need a containerized app. A sample racing-game-app is ready to go with a Dockerfile (docs) and OpenShift manifest (docs), and it’s a joy to play.

To get started, clone https://github.com/container-demo/HexGL.

👋 Pro tip: Have an app you want to containerize? Docker’s newly introduced docker init command automates the creation of Dockerfiles, Compose manifests, and .dockerignore files.

Build your image

Using your local clone of sample racing-game-app, cd into the repo on your command line, where we’ll build the image:

docker build --platform linux/amd64 -t sample-racing-game .

The --platform linux/amd64 flag forces an x86-compatible image, even if you're building on a Mac with an Apple Silicon/ARM CPU. The -t sample-racing-game flag names and tags your image with something more recognizable than a SHA256 digest. Finally, the trailing dot (".") tells Docker to build from the current directory.

Starting with Docker version 23, docker build uses BuildKit by default, with more performance, better caching, and support for features like build secrets that make our lives as developers easier and more secure.
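
Build secrets are worth a quick illustration. Here's a sketch with hypothetical file and secret-ID names; the matching RUN line lives in your Dockerfile:

# Expose a local file to the build without baking it into any image layer
docker build --secret id=npm_token,src=$HOME/.npm_token -t sample-racing-game .

# In the Dockerfile, the step that needs it opts in and reads it
# from /run/secrets, for example:
# RUN --mount=type=secret,id=npm_token \
#     NPM_TOKEN=$(cat /run/secrets/npm_token) npm install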

Test the image locally

Now that you’ve built your image, you can test it out locally. Just run the image using:

docker run -d -p 8080:8080 sample-racing-game

And open your browser to http://localhost:8080/ to take a look at the result.

Deploy to OpenShift

With the image built, it’s time to test it out on the cluster. We don’t even have to push it to a registry first.

Visit the Red Hat OpenShift extension in Docker Desktop, select our new sample-racing-game image, and select Push to OpenShift and Deploy from the action button pulldown:

Screenshot of Deploy to OpenShift page with sample-racing-game selected for deployment

The Push to OpenShift and Deploy action will push the image to the OpenShift cluster’s internal private registry without any need to push to another registry first. It’s not a substitute for a private registry, but it’s convenient for iterative development where you don’t plan on keeping the iterative builds.

Once you click the button, the extension will do exactly as you intend and keep you updated about progress:

Screenshot of Deploy to OpenShift page showing deployment progress for sample-racing-game

Finally, the extension will display the URL for the app and attempt to open your default browser to show it off:

Screenshot of racing game demo app start page with blue and orange racing graphic

Cleanup

Once you’re done with your app, or before attempting to redeploy it, use the web terminal to all traces of your deployment using this one-liner:

oc get all -oname | grep sample-racing-game | xargs oc delete
Screenshot of web terminal commands showing deleted files

That avoids wasting resources and prevents errors if you try to redeploy to the same namespace later.

So…what did we learn?

The Red Hat OpenShift extension gives you a one-click deploy from Docker Desktop to an OpenShift cluster. It’s especially convenient for previewing iterative work in a shared cluster for sprint demos and approval as a way to accelerate iteration.

The above steps showed you how to install and configure the extension, build an image, and deploy it to the cluster with a minimum of clicks. Now show us what you'll build next with Docker and Red Hat; tag @docker on Twitter or me @misterbisson@techub.social to share!

Boost Your Local Testing Game with the LambdaTest Tunnel Docker Extension
https://www.docker.com/blog/boost-your-local-testing-game-with-lambdatest-tunnel-docker-extension/ (Tue, 16 May 2023)

As the demand for web applications continues to rise, so does the importance of testing them thoroughly. One challenge that testers face is how to test applications that are hosted locally on their machines. This is where the LambdaTest Tunnel Docker Extension comes in handy. This extension allows you to establish a secure connection between your local environment and the LambdaTest platform, making it possible to test your locally hosted pages and applications on a remote browser.

In this article, we’ll explore the benefits of using the LambdaTest Tunnel Docker Extension and describe how it can streamline your testing workflow.

White Docker logo on black background with LambdaTest logo in blue

Overview of LambdaTest Tunnel

LambdaTest Tunnel is a secure and encrypted tunneling feature that allows devs and QAs to test their locally hosted web applications or websites on cloud-based real machines. It establishes a secure connection between the user's local machine and the real machine in the cloud (Figure 1).

By downloading the LambdaTest Tunnel binary, you can securely connect your local machine to LambdaTest cloud servers even when behind corporate firewalls. This allows you to test locally hosted websites or web applications across various browsers, devices, and operating systems available on the LambdaTest platform. Whether your web files are written in HTML, CSS, PHP, Python, or similar languages, you can use LambdaTest Tunnel to test them.

Diagram of LambdaTest Tunnel network setup, showing connection from the tunnel client to the API gateway to the LambdaTest's private network with proxy server and browser VMs.
Figure 1: Overview of LambdaTest Tunnel.

Why use LambdaTest Tunnel?

LambdaTest Tunnel offers numerous benefits for web developers, testers, and QA professionals, including a secure and encrypted connection, cross-browser compatibility testing, and localhost testing.

Let’s see the LambdaTest Tunnel benefits one by one:

  • It provides a secure and encrypted connection between your local machine and the virtual machines in the cloud, thereby ensuring the privacy of your test data and online communications.
  • With LambdaTest Tunnel, you can test your web applications or websites, local folder, and files across a wide range of browsers and operating systems without setting up complex and expensive local testing environments.
  • It lets you test your locally hosted web applications or websites on cloud-based real OS machines.
  • You can even run accessibility tests on desktop browsers while testing locally hosted web applications and pages.

Why run LambdaTest Tunnel as a Docker Extension?

With Docker Extensions, you can build and integrate software applications into your daily workflow. Using LambdaTest Tunnel as a Docker extension provides a seamless and hassle-free experience for establishing a secure connection and performing cross-browser testing of locally hosted websites and web applications on the LambdaTest platform without manually launching the tunnel through the command line interface (CLI).

The LambdaTest Tunnel Docker Extension opens up a world of options for your testing workflows by adding a variety of features. Docker Desktop has an easy-to-use one-click installation feature that allows you to use the LambdaTest Tunnel Docker Extension directly from Docker Desktop.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a LambdaTest account. Note: You must ensure Docker Extensions are enabled (Figure 2).

Screenshot of Docker Desktop with Docker Extensions enabled.
Figure 2: Enable Docker Extensions.

Step 1: Install the LambdaTest Docker Extension

In the Extensions Marketplace, search for LambdaTest Tunnel extension and select Install (Figure 3).

Screen shot of Extensions Marketplace, showing blue Install button for LambdaTest Tunnel.
Figure 3: Install LambdaTest Tunnel.

Step 2: Set up the Docker LambdaTest Tunnel

Open the LambdaTest Tunnel extension and select Setup Tunnel to configure the tunnel (Figure 4).

Screenshot of LambdaTest Tunnel setup page.
Figure 4: Configure the tunnel.

Step 3: Enter your LambdaTest credentials

Provide your LambdaTest Username, Access Token, and preferred Tunnel Name. You can get your Username and Access Token from your LambdaTest Profile under Password & Security. Once these details have been entered, select Launch Tunnel (Figure 5).

Screenshot of LambdaTest Docker Tunnel page with black "Launch Tunnel" button.
Figure 5: Launch LambdaTest Tunnel.

The LambdaTest Tunnel will be launched, and you can see the running tunnel logs (Figure 6).

Screenshot of LambdaTest Docker Tunnel page with list of running tunnels.
Figure 6: Running logs.

Once you have configured the LambdaTest Tunnel via Docker Extension, it should appear on the LambdaTest Dashboard (Figure 7).

Screenshot of new active tunnel in LambdaTest dashboard.
Figure 7: New active tunnel.

Local testing using LambdaTest Tunnel Docker Extension

Let’s walk through a scenario using LambdaTest Tunnel. Suppose a web developer has created a new web application that allows users to upload and view images. The developer needs to ensure that the application can handle a variety of different image formats and sizes and that it can render display images across different browsers and devices.

To do this, the developer first sets up a local development environment and installs the LambdaTest Tunnel Docker Extension. They then use the web application to open and manipulate local image files.

Next, the developer uses the LambdaTest Tunnel to securely expose their local development environment to the internet. This step allows them to test the application in real-time on different browsers and devices using LambdaTest’s cloud-based digital experience testing platform.

Now let’s see the steps to perform local testing using the LambdaTest Tunnel Docker Extension.

1. Go to the LambdaTest Dashboard and navigate to Real Time Testing > Browser Testing (Figure 8).

Screenshot of LambdaTest Tunnel showing Browser Testing selected.
Figure 8: Navigate to Browser Testing.

2. In the console, enter the localhost URL, select browser, browser version, operating system, etc. and select START (Figure 9).

Screenshot of LambdaTest Tunnel page showing browser options to choose from.
Figure 9: Configure testing.

3. A cloud-based real operating system will fire up where you can perform testing of local files or folders (Figure 10).

Screenshot of LambdaTest showing local test example.
Figure 10: Perform local testing.

Learn more about how to set up the LambdaTest Tunnel Docker Extension in the documentation.

Conclusion

The LambdaTest Tunnel Docker Extension makes it easy to perform local testing without launching the tunnel from the CLI. You can run localhost tests over an online cloud grid of 3000+ real browser and operating system combinations. You don't have to worry about the challenges of local infrastructure because LambdaTest provides a cloud grid with zero downtime.

Check out the LambdaTest Tunnel Docker Extension on Docker Hub. The extension's source code is available on GitHub, and contributions are welcome.

Build Kubernetes Local Development Environments with Gefyra
https://www.docker.com/blog/building-a-local-application-development-environment-for-kubernetes-with-the-gefyra-docker-extension/ (Wed, 03 May 2023)

If you use Docker for development, you're already well on your way to creating cloud-native software. Containerization takes care of all the essential elements for running code, like system dependencies, language requirements, and application configurations. But can it handle more intricate use cases like Kubernetes local development?

In more complex systems, you might need to connect your code with several auxiliary services (like databases, storage volumes, APIs, caching layers, and message brokers). In modern Kubernetes systems, you’ll also need to manage service meshes and cloud-native deployment patterns (like probes, configuration, and structural and behavioral patterns).

Kubernetes offers a uniform interface for orchestrating scalable, resilient, and services-based applications. However, its complexity can be overwhelming, especially for developers without extensive experience setting up local Kubernetes clusters. Gefyra helps developers work in local Kubernetes development environments and create secure, reliable, and scalable software more easily.

Gefyra and Docker logos on a dark background with a lighter purple outline of two puzzle pieces

What is Gefyra? 

Gefyra, named after the Greek word for "bridge," is a comprehensive toolkit that facilitates Docker-based development with Kubernetes. If you plan to use Kubernetes as your production platform, it's essential to work with the same environment during development. Keeping the two environments aligned minimizes problems during the transition.

Gefyra is an open source project that provides docker run on steroids. You can use it to connect your local Docker with any Kubernetes cluster and run a container locally that behaves as if it were running in the cluster. You can write code locally in your favorite code editor using the tools you love.

Gefyra also doesn’t require you to perform multiple tasks when making code changes. Instead, Gefyra lets you connect your code to the cluster without making any changes to your Dockerfile. No need to create a Docker image, send it to a registry, or restart the cluster.

This method is helpful for new code and when inspecting existing code with a debugger connected to a container. That makes Gefyra a productivity superstar for any Kubernetes-based development work.

How does Gefyra work?

Gefyra installs several cluster-side components that allow it to control the local development machine and the development cluster. 

These components include a tunnel between the local development machine and the Kubernetes cluster, a local DNS resolver that behaves like the cluster DNS, and sophisticated IP routing mechanisms. To build on these components, Gefyra uses popular open source technologies, such as Docker, WireGuard, CoreDNS, Nginx, and Rsync.

To set up local development, the developer runs the application in a container on their machine, accompanied by a sidecar container named Cargo. The sidecar acts as a network gateway and forwards all requests to the cluster using a CoreDNS server (Figure 1).

Cargo encrypts all the passing traffic with WireGuard using ad hoc connection secrets. Developers can use their existing tooling, including their favorite code editor and debuggers, to develop their applications.

Yellow graphic with white text boxes showing development setup, including: IDE, Volumes, Shell, Logs, Debugger, and connection to Gefyra, including App Container and Cargo sidecar container.
Figure 1: Local development setup.

Gefyra manages two ends of a WireGuard connection and automatically establishes a VPN tunnel between the developer and the cluster. This creates a robust and fast connection without stressing the Kubernetes API server (Figure 2). The client side of Gefyra also manages a local Docker network with a VPN endpoint. This lets the container join the VPN that directs all traffic into the cluster.

Yellow graphic with boxes and arrows showing connection between Developer Machine and Developer Cluster.
Figure 2: Connecting developer machine and cluster.

Gefyra also allows bridging existing traffic from the cluster to the local container, enabling developers to test their code with real-world requests from the cluster and collaborate on changes in a team. The local container instance remains connected to auxiliary services and resources in the cluster while receiving requests from other Pods, Services, or the Ingress. This setup removes the need to create container images in a CI pipeline and update clusters for small changes.

Why run Gefyra as a Docker Extension?

The Python library containing Gefyra’s core functionality is available in its repository. The CLI that comes with the project has a long list of arguments that may be overwhelming for some users. To make it more accessible for developers, Gefyra developed the Docker Desktop extension.

The Gefyra extension lets developers work with a variety of Kubernetes clusters in Docker Desktop. These include the built-in cluster, local providers (like Minikube, K3d, Kind, or Getdeck Beiboot), and remote clusters. Let's get started.

Installing the Gefyra Docker Desktop extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

First, make sure you have enabled Docker Extensions in Docker Desktop (it should be enabled by default). In Settings | Extensions select the Enable Docker Extensions box (Figure 3).

 Screenshot showing Docker Desktop interface with "Enable Docker Extensions" selected.
Figure 3: Enable Docker Extensions.

You’ll also need to enable Kubernetes under Settings (Figure 4).

Screenshot of Docker Desktop with "Enable Kubernetes" and "Show system containers (advanced)" selected.
Figure 4: Enable Kubernetes.

Gefyra is in the Docker Extensions Marketplace. Next, we’ll install Gefyra in Docker Desktop.

Step 2: Add the Gefyra extension

Open Docker Desktop and select Add Extensions to find the Gefyra extension in the Extensions Marketplace (Figure 5).

Screenshot showing search for "Gefyra" in Docker Extensions Marketplace.
Figure 5: Locate Gefyra in the Docker Extensions Marketplace.

Once you install Gefyra, you can open the extension and find the Gefyra start screen. Here you’ll find a list of all the containers connected to a Kubernetes cluster. Of course, this section is empty on a fresh install.

To start a local container using Gefyra, you need to click the Run Container button. You can find this button in the top right (Figure 6).

 Screenshot showing Gefyra start screen.
Figure 6: Gefyra start screen.

The next steps will vary based on whether you’re working with a local or remote Kubernetes cluster. If you’re using a local cluster, select the matching kubeconfig file and optionally set the context (Figure 7). 

For remote clusters, you may need to manually specify additional parameters. We’ll give a detailed example for you to follow along with in the next section.

Screenshot of Gefyra interface showing blue "Choose Kubeconfig" button.
Figure 7: Selecting Kubeconfig.

The Kubernetes demo workloads

The following example showcases how Gefyra uses the Kubernetes functionality in Docker Desktop to create a development environment for a simple application (Figure 8).

The demo consists of two services, a backend and a frontend, both implemented as Python processes. The frontend service uses a color property obtained from the backend to generate an HTML document. The two services communicate over HTTP, with the backend address given to the frontend as an environment variable.

 Yellow graphic showing connection of frontend and backend services.
Figure 8: Frontend and backend services.

The Gefyra team has created a repository for the Kubernetes demo workloads, which you can find on GitHub.

If you prefer video tutorials, check out this video on YouTube.

Prerequisite

You’ll want to make sure to switch the current Kubernetes context to Docker Desktop. This step lets the user interact with the Kubernetes cluster and deploy applications to it using kubectl.

kubectl config current-context
docker-desktop
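
If the command prints a different context, switch to the Docker Desktop cluster first:

kubectl config use-context docker-desktop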

1. Clone the repository

First, you’ll need to clone the repository:

git clone https://github.com/gefyrahq/gefyra-demos

2. Apply the workload

The following YAML file sets up a simple two-tier app consisting of a backend service and a frontend service. The SVC_URL environment variable passed to the frontend container establishes communication between the two services.

It defines two pods, named backend and frontend, and two services, named backend and frontend, respectively. The backend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-backend image on port 5002.

The frontend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-frontend image on port 5003. The frontend container has an environment variable named SVC_URL set to the value backend.default.svc.cluster.local:5002.

The backend service is defined to select the backend pod using the app: backend label, and expose port 5002. The frontend service is defined to select the frontend pod using the app: frontend label, and expose port 80 as a load balancer, which routes traffic to port 5003 of the frontend container.
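
The full manifest lives in the cloned repository. Based on the description above, a condensed sketch of what manifests/demo.yaml contains looks roughly like this (illustrative only; the authoritative version is in the gefyra-demos repo):

apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
  - name: backend
    image: quay.io/gefyra/gefyra-demo-backend
    ports:
    - containerPort: 5002
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
  - name: frontend
    image: quay.io/gefyra/gefyra-demo-frontend
    env:
    - name: SVC_URL
      value: "backend.default.svc.cluster.local:5002"
    ports:
    - containerPort: 5003
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - port: 5002
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 5003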

/gefyra-demos/kcd-munich> kubectl apply -f manifests/demo.yaml
pod/backend created
pod/frontend created
service/backend created
service/frontend created

Let’s watch the workload getting ready:

kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
backend    1/1     Running   0          2m6s
frontend   1/1     Running   0          2m6s

After the backend and frontend pods have initialized (check for the READY column in the output), you can access the application at http://localhost in your web browser. This URL is served from the Kubernetes environment of Docker Desktop.

Upon loading the page, you'll see the application's output displayed in your browser. The output may not be visually stunning, but it's functional and provides what we need.

Blue bar displaying "Hello World" in black text.

Now, let’s explore how we can correct or adjust the color of the output generated by the frontend component.

3. Using Gefyra “Run Container” with the frontend process

In the first part of this section, you’ll learn how to execute a frontend process on your local machine that is associated with a resource based on the Kubernetes cluster: the backend API. This can be anything ranging from a database to a message broker or other service utilized in the architecture.

Kick off a local container with Run Container from the Gefyra start screen (Figure 9).

Screenshot of Gefyra interface showing blue "Run container" button.
Figure 9: Run a local container.

Once you’ve entered the first step of this process, you’ll see that the kubeconfig and context are set automatically. That’s a lifesaver if you don’t know where to find the default kubeconfig on your host.

Just hit the Next button and proceed with the container settings (Figure 10).

Screenshot of Gefyra interface showing the "Set Kubernetes Settings" step.
Figure 10: Container settings.

In the Container Settings step, you can configure the Kubernetes-related parameters for your local container. In this example, everything happens in the default Kubernetes namespace. Select it in the first drop-down input (Figure 11). 

In the drop-down input below Image, you can specify the image to run locally. Note that it lists all images used in the selected namespace (from the Namespace selector). This way, you don’t need to worry about what images are in the cluster or find them yourself.

Instead, you get a suggestion to work with the image at hand, which is what we want to do in this example (Figure 12). You can still specify any image you want, even if it’s a new one you made on your machine.

Screenshot of Gefyra interface showing "Select a Workload" drop-down menu under Container Settings.
Figure 11: Select namespace and workload.
Screenshot of Gefyra interface showing drop-down menu of images.
Figure 12: Select image to run.

Next, we’ll copy the environment of the frontend container running in the cluster. To do this, you’ll need to select pod/frontend from the Copy Environment From selector (Figure 13). This step is important because you need the backend service address, which is passed to the pod in the cluster using an environment variable.

In the upper part of the container settings, you need to overwrite the container image's run command with the following to enable code reloading:

poetry run flask --app app --debug run --port 5002 --host 0.0.0.0
Screenshot of Gefyra interface showing selection of  “pod/frontend” under “Copy Environment From."
Figure 13: Copy environment of frontend container.

Let’s start the container process on port 5002 and expose this port on the local machine. We’ll also mount the code directory (/gefyra-demos/kcd-munich/frontend) to make code changes immediately visible. Then, click on the Run button to start the process.

Screenshot of Gefyra interface showing installation progress bar.
Figure 14: Installing Gefyra components.

Next, we’ll quickly install Gefyra’s cluster-side components, prepare the local networking part, and pull the container image to start locally (Figure 14). Once it’s ready, you’ll redirect to the native container view of Docker Desktop from this container (Figure 15).

Screenshot showing native container view of Docker Desktop.
Figure 15: Log view.

You can look around in the container using the Terminal tab (Figure 16). Type in the env command in the shell, and you’ll see all the environment variables coming with Kubernetes.

Screenshot showing Terminal view of running container.
Figure 16: Terminal view.
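For example, filtering the environment for the backend address shows where the frontend will send its requests (output reconstructed from the values discussed below, so treat it as illustrative):

$ env | grep SVC
SVC_URL=backend.default.svc.cluster.local:5002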

We’re particularly interested in the SVC_URL variable that points the frontend to the backend process, which is still running in the cluster. Now, when browsing to the URL http://localhost:5002, you’ll get a slightly different output:

Blue bar displaying "Hello KCD" in black text

Why is that? To investigate, let’s look at the code that we already mounted into the local container, specifically the app.py that runs a Flask server (Figure 17).

Screenshot of colorful app.py code on black background.
Figure 17: App.py code.

The last line of the code in the Gefyra example displays the text Hello KCD!, and any changes made to this code are immediately updated in the local container. This feature is noteworthy because developers can freely modify the code and see the changes reflected in real time without having to rebuild or redeploy the container.

Line 12 of the code in the Gefyra example sends a request to a service URL, which is stored in the variable SVC. The value of SVC is read from an environment variable named SVC_URL, which is copied from the pod in the Kubernetes cluster. The URL, backend.default.svc.cluster.local:5002, is a fully qualified domain name (FQDN) that points to a Kubernetes service object and a port.

Applications in Kubernetes commonly use such URLs to communicate with each other. The local container process can send requests to services running in Kubernetes using these native connection parameters, without the developer changing anything, which can feel like magic at times.
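Pieced together from that description, the relevant part of the frontend might look roughly like the following Flask sketch. This is a reconstruction for illustration only, not the verbatim demo code; the route and the HTML details are assumptions:

import os

import requests
from flask import Flask

app = Flask(__name__)

# Kubernetes injects the backend address; Gefyra copied it into the local container
SVC = os.environ["SVC_URL"]  # e.g., backend.default.svc.cluster.local:5002

@app.route("/")
def index():
    # Ask the backend (still running in the cluster) which color to render
    color = requests.get(f"http://{SVC}/color", timeout=5).json()["color"]
    return f'<h1 style="background-color: {color};">Hello KCD!</h1>'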

In most development scenarios, the capabilities of Gefyra we just discussed are sufficient. In other words, you can use Gefyra to run a local container that can communicate with resources in the Kubernetes cluster, and you can access the app on a local port. However, what if you need to modify the backend while the frontend is still running in Kubernetes? This is where the “bridge” feature of Gefyra comes in, which we will explore next.

4. Gefyra “bridge” with the backend process

We could choose to run the frontend process locally and connect it to the backend process running in Kubernetes through a bridge. But this approach may not always be necessary or desirable for backend developers not concerned with the frontend. In this case, it may be more convenient to leave the frontend running in the cluster and stop the local instance by selecting the stop button in Docker Desktop’s container view.

Next, we have to run a local instance of the backend service. The procedure is the same as for the frontend, but this time with the backend container image (Figure 18).

Screenshot of Gefyra interface showing "pod/backend" setup.
Figure 18: Running a backend container image.

Just as in the frontend example above, run the backend container image (quay.io/gefyra/gefyra-demo-backend:latest), which the drop-down selector suggests. This time, copy the environment from the backend pod running in Kubernetes, and note that the volume mount now points to the code of the backend service.

After starting the container, you can check http://localhost:5002/color, which serves the backend API response. Looking at the app.py of the backend service shows the source of this response. In line 8, this app returns a JSON response with the color property set to green (Figure 19).

Screenshot showing app.py code with "color" set to "green".
Figure 19: Checking the color.
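Reconstructed from that description, the backend handler might look roughly like this; again, an illustrative sketch rather than the exact demo code:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/color")
def color():
    # Changing "green" here is picked up immediately thanks to the code mount
    return jsonify({"color": "green"})

With the local backend running, a quick check from your shell should return the JSON shown in Figure 19:

$ curl -s localhost:5002/color
{"color":"green"}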

At this point, keep in mind that we’re only running a local instance of the backend service. This time, you won’t need a connection to a Kubernetes-based resource since this container runs without external dependencies.

The idea is to make the frontend process served from the Kubernetes cluster on http://localhost (still blue) pick up our local backend’s response to render its output. That’s done using Gefyra’s bridge feature. In the next step, we’ll overlay the backend process running in the cluster with our local container instance so that the local code becomes effective in the cluster.

Getting back to the Gefyra container list on the start screen, you’ll find a Bridge button for each locally running container, in the Bridge column (Figure 20). Once you click this button, you can create a bridge from your local container into the cluster.

Screenshot of Gefyra interface showing "Bridge" column on far right.
Figure 20: The Bridge column is visible on the far right.

In the next dialog, we need to enter the bridge configuration (Figure 21).

Screenshot of Gefyra interface showing Bridge Settings.
Figure 21: Enter the bridge configuration.

Let’s set the “Target” of the bridge to the backend pod, which currently serves requests from the frontend process in the cluster, and set the bridge timeout to 60 seconds. We also need to map the port of the proxy running in the cluster to the local instance.

If your local container is configured to listen on a different port than the one in the cluster, you can specify the mapping here (Figure 22). In this example, the service runs on port 5003 both in the cluster and on the local machine, so we map that port. After clicking the Bridge button, it takes a few seconds to return to the container list on Gefyra’s start view.

 Screenshot of Gefyra interface showing port mapping configuration.
Figure 22: Specify port mapping.

Observe the change in the icon of the Bridge button, which now depicts a stop symbol (Figure 23). This means the bridge function is operational and can be terminated by clicking this button again.

Screenshot of Gefyra showing closeup view of Bridge column and blue stop button.
Figure 23: The Bridge column showing a stop symbol.

At this point, the local code is able to handle requests from the frontend process in the cluster using the URL stored in the SVC_URL variable. This can be done without making any changes to the frontend process itself. 

To confirm this, you can open http://localhost in your browser (served from Docker Desktop’s Kubernetes cluster) and check that the output is green. That’s because the local code now returns the value green for the color property. You can change this value to any valid one in your IDE, and the change is immediately reflected in the cluster. That is the amazing power of this tool.

Remember to release the bridge of your container once you are finished making changes to your backend. This will reset the cluster to its original state, and the frontend will display the original “beautiful” blue H1 again. 

This approach allows us to intercept containers running in Kubernetes with our local code without modifying the Kubernetes cluster itself. We simply bridged our local container into the cluster and released that intercept again afterward.

Conclusion

Gefyra is an easy-to-use Docker Desktop extension that connects with Kubernetes to improve development workflows and team collaboration. It lets you run containers as usual while being connected with Kubernetes, thereby saving time and ensuring high dev/prod parity.

The Blueshoe development team would appreciate a star on GitHub and welcomes you to join their Discord community for more information.

About the Author

Michael Schilonka is a strong believer that Kubernetes can be a software development platform, too. He is the co-founder and managing director of the Munich-based agency Blueshoe and the technical lead of Gefyra and Getdeck. He speaks about Kubernetes in general and about how Blueshoe uses Kubernetes for development. Follow him on LinkedIn to stay connected.

]]>
Announcing Docker+Wasm Technical Preview 2 https://www.docker.com/blog/announcing-dockerwasm-technical-preview-2/ Wed, 22 Mar 2023 16:12:16 +0000 https://www.docker.com/?p=41468 We recently announced the first Technical Preview of Docker+Wasm, a special build that makes it possible to run Wasm containers with Docker using the WasmEdge runtime. Starting from version 4.15, everyone can try out the features by activating the containerd image store experimental feature.

We didn’t want to stop there, however. Since October, we’ve been working with our partners to make running Wasm workloads with Docker easier and to support more runtimes.

Now we are excited to announce a new Technical Preview of Docker+Wasm with the following three new runtimes:

  • spin (Fermyon)
  • slight (Deislabs)
  • wasmtime (Bytecode Alliance)

All of these runtimes, including WasmEdge, use the runwasi library.

What is runwasi?

Runwasi is a multi-company effort to build a Rust library that makes it easier to write containerd shims for Wasm workloads. Last December, the runwasi project was donated and moved to the Cloud Native Computing Foundation’s containerd organization on GitHub.

With a lot of work from people at Microsoft, Second State, Docker, and others, runwasi now has enough features to run Wasm containers with Docker or in a Kubernetes cluster. There is still plenty of work ahead, but it’s ready for people to start testing.

If you would like to chat with us or other runwasi maintainers, join us on the CNCF’s #runwasi channel.

Get the update

Ready to dive in and try it for yourself? Great! Before you do, understand that this is a technical preview build of Docker Desktop, so things might not work as expected. Be sure to back up your containers and images before proceeding.

Download and install the appropriate version for your system, then activate the containerd image store (Settings > Features in development > Use containerd for pulling and storing images), and you’ll be ready to go.

Features in development screen with "Use containerd" option selected.
Figure 1: Docker Desktop beta features in development.

Let’s take Wasm for a spin 

The WasmEdge runtime is still present in Docker Desktop, so you can run: 

$ docker run --rm --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

You can even run the same image with the wasmtime runtime:

$ docker run --rm --runtime=io.containerd.wasmtime.v1 \
  --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

In the next example, we will deploy a Wasm workload to Docker Desktop’s Kubernetes cluster using the slight runtime. To begin, make sure to activate Kubernetes in Docker Desktop’s settings, then create an example.yaml file:

cat > example.yaml <<EOT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-slight
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm-slight
  template:
    metadata:
      labels:
        app: wasm-slight
    spec:
      runtimeClassName: wasmtime-slight-v1
      containers:
        - name: hello-slight
          image: dockersamples/slight-rust-hello:latest
          command: ["/"]
          resources:
            requests:
              cpu: 10m
              memory: 10Mi
            limits:
              cpu: 500m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: wasm-slight
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: wasm-slight
EOT

Note the runtimeClassName; Kubernetes will use this to select the right runtime for your application.
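The class name must match a RuntimeClass registered in the cluster. If you’re curious which ones this Technical Preview registers, a standard kubectl query lists them (the exact set may vary by build):

$ kubectl get runtimeclass

The wasmtime-slight-v1 class referenced above should appear in that list.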

You can now run:

$ kubectl apply -f example.yaml

Once Kubernetes has downloaded the image and started the container, you should be able to curl it:

$ curl localhost/hello
hello

You now have a Wasm container running locally in Kubernetes. How exciting! 🎉
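If the curl doesn’t respond right away, the image may still be downloading; the usual Kubernetes checks apply, and nothing here is Wasm-specific:

$ kubectl get pods -l app=wasm-slight
$ kubectl get svc wasm-slight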

Note: You can take this same YAML file and run it in AKS.

Now let’s see how we can use this to run Bartholomew. Bartholomew is a micro-CMS made by Fermyon that works with the spin runtime. You’ll need to clone this repository; it’s a slightly modified Bartholomew template.

The repository already contains a Dockerfile that you can use to build the Wasm container:

FROM scratch
COPY . .
ENTRYPOINT [ "/modules/bartholomew.wasm" ]

The Dockerfile copies the entire contents of the repository into the image and defines the built bartholomew Wasm module as the image’s entrypoint.

$ cd docker-wasm-bartholomew
$ docker build -t my-cms .
[+] Building 0.0s (5/5) FINISHED
 => [internal] load build definition from Dockerfile          	0.0s
 => => transferring dockerfile: 147B                          	0.0s
 => [internal] load .dockerignore                             	0.0s
 => => transferring context: 2B                               	0.0s
 => [internal] load build context                             	0.0s
 => => transferring context: 2.84kB                           	0.0s
 => CACHED [1/1] COPY . .                                     	0.0s
 => exporting to image                                        	0.0s
 => => exporting layers                                       	0.0s
 => => exporting manifest sha256:cf85929e5a30bea9d436d447e6f2f2e  0.0s
 => => exporting config sha256:0ce059f2fe907a91a671f37641f4c5d73  0.0s
 => => naming to docker.io/library/my-cms:latest              	0.0s
 => => unpacking to docker.io/library/my-cms:latest           	0.0s

You are now ready to run your first WebAssembly micro-CMS:

$ docker run --runtime=io.containerd.spin.v1 -p 3000:80 my-cms

If you go to http://localhost:3000, you should be able to see the Bartholomew landing page (Figure 2).

The Bartholomew landing page showing an example homepage written in Markdown.
Figure 2: Bartholomew landing page.

We’d love your feedback

All of this work is fresh from the oven and relies on the containerd image store in Docker, which is an experimental feature we’ve been working on for almost a year now. The good news is that we already see how this hard work can benefit everyone by adding more features to Docker. We’re still working on it, so let us know what you need. 

If you want to help us shape the future of WebAssembly with Docker, try it out, let us know what you think, and leave feedback on our public roadmap.

]]>
February Extensions: Easily Connect Local Containers to a Kubernetes Cluster and More https://www.docker.com/blog/new-docker-extensions-february-2023/ Tue, 28 Feb 2023 15:00:00 +0000 https://www.docker.com/?p=40846

Although February is the shortest month of the year, we’ve been busy at Docker and we have new Docker Extensions to share with you. Docker extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. Let’s look at the exciting new extensions from February.

And, if you’d like to see everything that’s available, check out our full Extensions Marketplace.

banner 2023 feb extensions roundup

Get visibility on your Kubernetes cluster 

Do you need to harden your Kubernetes cluster but lack the visibility to do so? With the Kubescape extension for Docker Desktop, you can secure your Kubernetes cluster and gain insight into your cluster’s security posture via an easy-to-use online dashboard.

The Kubescape extension works by installing the Kubescape in-cluster components, connecting them to the ARMO Platform, and providing insights into the Kubernetes cluster deployed by Docker Desktop via the platform’s dashboard.

With the Kubescape extension, you can:

  • Regularly scan your configurations and images
  • Visualize your RBAC rules
  • Receive automatic fix suggestions where applicable

Read Secure Your Kubernetes Clusters with the Kubescape Docker Extension to learn more.

kubescape extension docker setup

Connect your local containers to any Kubernetes cluster

Do you need a fast and dependable way to connect your local containers to any Kubernetes cluster? With Gefyra for Docker Desktop, you can easily bridge running containers into Kubernetes clusters. Gefyra aims to ease the burdens of Kubernetes-based development for developers with a seamless integration as an extension in Docker Desktop. 

The Gefyra extension lets you run a container locally and connect it to a Kubernetes cluster so you can:

  • Talk to other services
  • Let other services talk to your local container
  • Debug
  • Achieve faster iterations — no build/push/deploy/repeat

The Gefyra extension setup process in Docker Desktop, including Kubernetes settings, container settings, and container start.

Deploy Alfresco using Docker containers

The Alfresco Docker extension simplifies deploying the Alfresco Digital Business Platform for testing purposes. This extension provides a single Run button in the UI to run all the containers required behind the scenes, so you can spend less time configuring and more time building and testing your product.

With the Alfresco extension on Docker Desktop, you can:

  • Pull latest Alfresco Docker images
  • Run Alfresco Docker containers
  • Use Alfresco deployment locally in your browser
  • Stop deployment and recover your system to initial status

alfresco deployment tool docker desktop

Easily deploy and test NebulaGraph

NebulaGraph is a popular open source graph database that can handle large volumes of data with millisecond latency, scale up quickly, and perform fast graph analytics. With the NebulaGraph extension on Docker Desktop, you can test, learn, and develop on top of the distributed version of NebulaGraph core, in one click.

nebulargraph extension docker setup

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. Check out these resources for more information on Docker Extensions:

]]>
Secure Your Kubernetes Clusters with the Kubescape Docker Extension https://www.docker.com/blog/secure-kubernetes-with-kubescape-extension/ Tue, 21 Feb 2023 15:00:00 +0000 https://www.docker.com/?p=40587 Container adoption in enterprises continues to grow, and Kubernetes has become the de facto standard for deploying and operating containerized applications. At the same time, security is shifting left and should be addressed earlier in the software development lifecycle (SDLC). Security has morphed from being a static gateway at the end of the development process to something that (ideally) is embedded every step of the way. This can potentially increase the effort for engineering and DevOps teams.

kubescape extension banner

Kubescape, a CNCF project initially created by ARMO, is intended to solve this problem. Kubescape provides a self-service, simple, and easily actionable security solution that meets developers where they are: Docker Desktop.

What is Kubescape?

Kubescape is an open source Kubernetes security platform for your IDE, CI/CD pipelines, and clusters.

Kubescape includes risk analysis, security compliance, and misconfiguration scanning. Targeting all security stakeholders, Kubescape offers an easy-to-use CLI interface, flexible output formats, and automated scanning capabilities. Kubescape saves Kubernetes users and admins time, effort, and resources.

How does Kubescape work?

Security researchers and professionals codify best practices in controls: preventative, detective, or corrective measures that can be taken to avoid — or contain — a security breach. These are grouped in frameworks by government and non-profit organizations such as the US Cybersecurity and Infrastructure Security Agency, MITRE, and the Center for Internet Security.

Kubescape contains a library of security controls that codify Kubernetes best practices derived from the most prevalent security frameworks in the industry. These controls can be run against a running cluster or manifest files under development. They’re written in Rego, the purpose-built declarative policy language that supports Open Policy Agent (OPA).

Kubescape is commonly used as a command-line tool. It can be used to scan code manually or can be triggered by an IDE integration or a CI tool. By default, the CLI results are displayed in a console-friendly manner, but they can be exported to JSON or JUnit XML, rendered to HTML or PDF, or submitted to ARMO Platform (a hosted backend for Kubescape).

kubescape command line interface diagram
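As a rough sketch of that workflow, a scan that saves machine-readable results might look like the following; flags can change between releases, so check kubescape --help for your version:

$ kubescape scan --format json --output results.json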

Regular scans can be run using an in-cluster operator, which also enables the scanning of container images for known vulnerabilities.

kubescape scan in cluster operator

Why run Kubescape as a Docker extension?

Docker extensions are fundamental for building and integrating software applications into daily workflows. With the Kubescape Docker Desktop extension, engineers can easily shift security left without changing work habits.

The Kubescape Docker Desktop extension helps developers adopt security hygiene as early as the first lines of code. As shown in the following diagram, Kubescape enables engineers to adopt security as they write code during every step of the SDLC.

Specifically, the Kubescape in-cluster component triggers periodic scans of the cluster and shows results in ARMO Platform. Findings shown in the dashboard can be further explored, and the extension provides users with remediation advice and other actionable insights.

kubescape scan in cluster operator

Installing the Kubescape Docker extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) In Settings > Extensions, select the Enable Docker Extensions box.

step one enable kubescape extension

You must also enable Kubernetes under Preferences.

step one enable kubernetes

Kubescape is in the Docker Extensions Marketplace. 

In the following instructions, we’ll install Kubescape in Docker Desktop. The extension scans automatically, and the results are shown in ARMO Platform.

Step 2: Add the Kubescape extension

Open Docker Desktop and select Add Extensions to find the Kubescape extension in the Extensions Marketplace.

step two add kubescape extension

Step 3: Installation

Install the Kubescape Docker Extension.

step three install kubescape extension

Step 4: Register and deploy

Once the Kubescape Docker Extension is installed, you’re ready to deploy Kubescape.

step four register deploy kubescape select provider

Currently, the only hosting provider available is ARMO Platform. We’re looking forward to adding more soon.

step four register deploy kubescape sign up

To link up your cluster, you’ll need an ARMO account.

step four register deploy kubescape connect armo account

After you’ve linked your account, you can deploy Kubescape.

step four register deploy kubescape deploy

Accessing the dashboard

Once your cluster is deployed, you can view the scan output on your host (ARMO Platform) and start improving your cluster’s security posture immediately.

armo scan dashboard

Security compliance

One step to improve your cluster’s security posture is to protect against the threats posed by misconfigurations.

ARMO Platform will display any misconfigurations in your YAML, offer information about severity, and provide remediation advice. These scans can be run against one or more of the frameworks offered and can run manually or be scheduled to run periodically.

armo scan misconfigurations
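For reference, the CLI form of a framework-targeted scan looks roughly like this (framework names as documented by the Kubescape project; verify against your installed version):

$ kubescape scan framework nsa,mitre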

Vulnerability scanning

Another step to improve your cluster’s security posture is protecting against threats posed by vulnerabilities in images.

The Kubescape vulnerability scanner scans the container images in the cluster right after the first installation and uploads the results to ARMO Platform. It can also scan new images as they are deployed to the cluster. Scans can be carried out manually or periodically, based on configurable cron jobs.

armo kubescape vulnerability scanner

RBAC Visualization

With ARMO Platform, you can also visualize Kubernetes RBAC (role-based access control), which allows you to dive deep into account access controls. The visualization makes pinpointing over-privileged accounts easy, and you can reduce your threat landscape with well-defined privileges. The following example shows a subject with all privileges granted on a resource.

armo kubescape rbac visualizer

Kubescape, using ARMO Platform as a portal for additional inquiry and investigation, helps you strengthen and maintain your security posture.

Next steps

The Kubescape Docker extension brings security to where you’re working. Kubescape lets you shift security to the beginning of the development process by implementing security best practices from the first line of code. You can use the Kubescape CLI tool to get insights, or export them to ARMO Platform for easy review and remediation advice.

Give the Kubescape Docker extension a try, and let us know what you think at cncf-kubescape-users@lists.cncf.io.

]]>
Enable No-Code Kubernetes with the harpoon Docker Extension https://www.docker.com/blog/no-code-kubernetes-harpoon-docker-extension/ Wed, 01 Feb 2023 15:00:00 +0000 https://www.docker.com/?p=40167 (This post is co-written by Dominic Holt, Founder & CEO of harpoon.)

No-code deploy Kubernetes with the harpoon Docker Extension.

Kubernetes has been a game-changer for ensuring scalable, high availability container orchestration in the Software, DevOps, and Cloud Native ecosystems. While the value is great, it doesn’t come for free. Significant effort goes into learning Kubernetes and all the underlying infrastructure and configuration necessary to power it. Still more effort goes into getting a cluster up and running that’s configured for production with automated scalability, security, and cluster maintenance.

All told, Kubernetes can take an incredible amount of effort, and you may end up wondering if there’s an easier way to get all the value without all the work.

Meet harpoon

With harpoon, anyone can provision a Kubernetes cluster and deploy their software to the cloud without writing code or configuration. Get your software up and running in seconds with a drag and drop interface. When it comes to monitoring and updating your software, harpoon handles that in real-time to make sure everything runs flawlessly. You’ll be notified if there’s a problem, and harpoon can re-deploy or roll back your software to ensure a seamless experience for your end users. harpoon does this dynamically for any software — not just a small, curated list.

To run your software on Kubernetes in the cloud, just enter your credentials and click the start button. In a few minutes, your production environment will be fully running with security baked in. Adding any software is as simple as searching for it and dragging it onto the screen. Want to add your own software? Connect your GitHub account with only a couple clicks and choose which repository to build and deploy in seconds with no code or complicated configurations.

harpoon enables you to do everything you need, like logging and monitoring, scaling clusters, creating services and ingress, and caching data in seconds with no code. harpoon makes DevOps attainable for anyone, leveling the playing field by delivering your software to your customers at the same speed as the largest and most technologically advanced companies at a fraction of the cost.

The architecture of harpoon

harpoon works in a hybrid SaaS model and runs on top of Kubernetes itself, which hosts the various microservices and components that form the harpoon enterprise platform. This is what you interface with when you’re dragging and dropping your way to nirvana. By providing cloud service provider credentials to an account owned by you or your organization, harpoon uses Terraform to provision all of the underlying virtual infrastructure in your account, including your own Kubernetes cluster. In this way, you have complete control over all of your infrastructure and clusters.

The architecture for harpoon to no-code deploy Kubernetes to AWS.

Once fully provisioned, harpoon’s UI can send commands to various harpoon microservices in order to communicate with your cluster and create Kubernetes deployments, services, configmaps, ingress, and other key constructs.

If the cloud’s not for you, we also offer a fully on-prem, air-gapped version of harpoon that can be deployed essentially anywhere.

Why harpoon?

Building production software environments is hard, time-consuming, and costly, with average costs to maintain often starting at $200K for an experienced DevOps engineer and going up into the tens of millions for larger clusters and teams. Using harpoon instead of writing custom scripts can save hundreds of thousands of dollars per year in labor costs for small companies and millions per year for mid to large size businesses.

Using harpoon will enable your team to have one of the highest quality production environments available in mere minutes. Without writing any code, harpoon automatically sets up your production environment in a secure environment and enables you to dynamically maintain your cluster without any YAML or Kubernetes expertise. Better yet, harpoon is fun to use. You shouldn’t have to worry about what underlying technologies are deploying your software to the cloud. It should just work. And making it work should be simple. 

Why run harpoon as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With the harpoon Docker Extension, you can simplify the deployment process with drag and drop, visually deploying and configuring your applications directly into your Kubernetes environment. Currently, the harpoon extension for Docker Desktop supports the following features:

  • Link harpoon to a cloud service provider like AWS and deploy a Kubernetes cluster and the underlying virtual infrastructure.
  • Easily accomplish simple or complex enterprise-grade cloud deployments without writing any code or configuration scripts.
  • Connect your source code repository and set up an automated deployment pipeline without any code in seconds.
  • Supercharge your DevOps team with real-time visual cues to check the health and status of your software as it runs in the cloud.
  • Drag and drop container images from Docker Hub, source, or private container registries.
  • Manage your K8s cluster with visual pods, ingress, volumes, configmaps, secrets, and nodes.
  • Dynamically manipulate routing in a service mesh with only simple clicks and port numbers.

How to use the harpoon Docker Extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Enable Docker Extensions

You’ll need to enable Docker Extensions under the Settings tab in Docker Desktop.

Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Go to Settings > Extensions and check the “Enable Docker Extensions” box.

Enable Docker Extensions under Settings on Docker Desktop.

Step 2: Install the harpoon Docker Extension

The harpoon extension is available on the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for harpoon in the Extensions Marketplace, then select Install.

The harpoon Docker Extension on the Extension Marketplace.

This will download and install the latest version of the harpoon Docker Extension from Docker Hub.

Installation process for the harpoon Docker Extension.

Step 3: Register with harpoon

If you’re new to harpoon, you’ll need to register by clicking the Register button. Otherwise, you can use your credentials to log in.

Register with harpoon or log into your account.

Step 4: Link your cloud service provider

While you can drag out any software or Kubernetes components you like, if you want to do actual deployments, you will first need to link your cloud service provider account. At the moment, harpoon supports Amazon Web Services (AWS). Over time, we’ll be supporting all of the major cloud service providers.

If you want to deploy software on top of AWS, you will need to provide harpoon with an access key ID and a secret access key. Since harpoon is deploying all of the necessary infrastructure in AWS in addition to the Kubernetes cluster, we require fairly extensive access to the account in order to successfully provision the environment. Your keys are only used for provisioning the necessary infrastructure to stand up Kubernetes in your account and to scale up/down your cluster as you designate. We take security very seriously at harpoon, and aside from using an extensive and layered security approach for harpoon itself, we use both disk and field level encryption for any sensitive data.

Link your AWS account to deploy Kubernetes with harpoon through Docker Desktop.

The following are the specific permissions harpoon needs to successfully deploy a cluster:

  • AmazonRDSFullAccess
  • IAMFullAccess
  • AmazonEC2FullAccess
  • AmazonVPCFullAccess
  • AmazonS3FullAccess
  • AWSKeyManagementServicePowerUser

Step 5: Start the cluster

Once you’ve linked your cloud service provider account, you just click the “Start” button on the cloud/node element in the workspace. That’s it. No, really! The cloud/node element will turn yellow and provide a countdown. While your experience may vary a bit, we tend to find that you can get a cluster up in under 6 minutes. When the cluster is running, the cloud will return and the element will glow a happy blue color.

Start the Kubernetes cluster through the harpoon Docker Extension.

Step 6: Deployment

You can search for any container image you’d like from Docker Hub, or link your GitHub account to search any GitHub repository (public or private) to deploy with harpoon. You can drag any search result over to the workspace for a visual representation of the software.

Deploying containers is as easy as hitting the “Deploy” button. Containers from GitHub will require you to build the repository first. In order for harpoon to successfully build a GitHub repository, we currently require the repository to have a top-level Dockerfile, which is industry best practice. If the Dockerfile is there, once you click the “Build” button, harpoon will automatically find it and build a container image. After a successful build, the “Deploy” button will become enabled, and you can deploy the software directly.

Deploy software to Kubernetes through the harpoon Docker Extension.
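As a rough illustration, a minimal top-level Dockerfile for a Node.js repository might look like the following; the base image and commands are placeholders that depend entirely on your project:

FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]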

Once you have a deployment, you can attach any Kubernetes element to it, including ingress, configmaps, secrets, and persistent volume claims.

You can find more info here if you need help: https://docs.harpoon.io/en/latest/usage.html 

Next steps

The harpoon Docker Extension makes it easy to provision and manage your Kubernetes clusters. You can visually deploy your software to Kubernetes and configure it without writing code or configuration. By integrating directly with Docker Desktop, we hope to make it easy for DevOps teams to dynamically start and maintain their cluster without any YAML, Helm chart, or Kubernetes expertise.

Check out the harpoon Docker Extension for yourself!

]]>