For an individual, setting up a workstation with small auxiliary tools is done quickly. In a growing team, however, the setup quickly becomes a challenge. Containers can help by bringing everything, ready configured, to each team member's workplace.

Setting up the new laptop

The new laptop has arrived with the preconfigured standard software. But before work can start, some tools have to be installed. First, there is the Git client. Then Maven and a Java runtime are still missing. While installing these, more tools come to mind: the kubectl command for accessing the Kubernetes cluster, the Tekton client, and the Argo CD client. Rarely used, but not unimportant, are OpenSSL and the commands jq, yq, and base64. On the old machine, I had set up a Cygwin environment. However, with Chocolatey, most tools can also be installed natively under Windows. That shouldn't make any difference, should it?
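Before installing anything, it helps to know what is already there. The following is only a sketch; the tool list mirrors the commands named above and should be adjusted to your team's stack:

```shell
# Hypothetical checklist of the tools mentioned above; adjust as needed.
TOOLS="git mvn java kubectl tkn argocd openssl jq yq base64"

# Report for each tool whether it is on the PATH.
for tool in $TOOLS; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: MISSING"
  fi
done
```

The script works the same under Cygwin, WSL, macOS, and Linux, which already hints at the portability problem discussed below.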

WSL2 Linux as an alternative

Recently I heard that the new Windows Subsystem for Linux 2, WSL2 for short, offers a good alternative. I decided to use an Ubuntu image as a base and set up some of the tools there. If everyone uses WSL, setting up the computers becomes more efficient. That sounds good, and maybe I can convince my colleagues.

The first colleague grins. He had the same idea months ago when he got his new laptop. At first glance, our computers look very similar. But at second glance, I use much newer versions of many tools. My setup always installs the newest version, so I advise the colleague to bring his workstation up to date. Then the collaboration works better. A few weeks later, I experienced a déjà vu with the roles reversed. Well, then I have to update my system. After all, it's also a matter of security.

When things become more complex

Discussions with colleagues reveal that the situation is more complex than initially thought. The web developers use macOS, while the service engineers swear by their Linux. In my team, Windows has prevailed, and it's challenging to find a general solution for everyone.

When it became clear that we were using different Java versions, the optimism disappeared. Attempts to set up computers automatically with scripts have failed several times. And retooling, e.g., switching to a different Java version, is nerve-racking, especially when you are working on several projects with different versions in parallel.

What are the problems?

Different operating systems and versions are the first challenge. Here, we can divide the world into three main areas: Windows, Mac, and Linux. However, several flavors exist for the latter, so a more detailed look is necessary.

Setting the operating systems aside, there is a package manager for each environment. It offers setup for the most common tools. However, one always gets the latest version. And the definition of "latest" differs from package manager to package manager, and, of course, over time.

And then, there are the tools that no package manager properly supports. Here, only the download from the manufacturer's site and a manual setup on the local machine remain.

Whether installed by a package manager or by hand, up to this point exactly one version of each tool exists on the computer. Every additional version must be installed manually at a different location on the disk. Using a particular version then requires setting environment variables, which in the simplest case happens with a script.
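Such a version-switching script could look like the following sketch. The function name `use_java` and the JDK paths are assumptions for illustration, not a fixed convention:

```shell
# Hypothetical helper to switch between parallel JDK installations.
# The paths /opt/jdk-11 and /opt/jdk-17 are placeholders for real install locations.
use_java() {
  case "$1" in
    11) JAVA_HOME="/opt/jdk-11" ;;
    17) JAVA_HOME="/opt/jdk-17" ;;
    *)  echo "unknown Java version: $1" >&2; return 1 ;;
  esac
  export JAVA_HOME
  # Put the selected JDK first on the PATH.
  PATH="$JAVA_HOME/bin:$PATH"
  export PATH
  echo "JAVA_HOME=$JAVA_HOME"
}

# Switch the current shell to JDK 17.
use_java 17
```

Every tool with multiple versions needs its own variant of this, and every team member needs the same paths. This is exactly the coordination effort the next paragraph describes.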

Many tools, even more versions, and scripts require coordination. The configuration of the environment is complex and error-prone.

The container as a solution

Containers offer a completely different approach to this problem. Instead of installing the tools on the workstation, a container contains them. The workstation then only has to load the container, which simplifies the installation considerably. Multiple containers can coexist, and retooling the environment is accomplished by simply switching containers.
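Such a tool container could be described by a Dockerfile along these lines. This is only a sketch: the base image, the package selection, and the pinned kubectl version are assumptions, not recommendations:

```dockerfile
# Hypothetical toolbox image bundling the team's CLI tools.
FROM ubuntu:22.04

# Tools available from the distribution's package manager.
RUN apt-get update && apt-get install -y --no-install-recommends \
        git maven openjdk-17-jdk-headless openssl jq curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# kubectl is not in the Ubuntu repositories; fetch a pinned release instead.
RUN curl -fsSLo /usr/local/bin/kubectl \
        https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl \
    && chmod +x /usr/local/bin/kubectl

WORKDIR /work
CMD ["/bin/bash"]
```

Because the versions are fixed in the image, every team member who pulls the same tag gets exactly the same environment, regardless of the host operating system.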

For this idea to work, the container connects to the local file system: it mounts the current working directory. Starting the container requires many parameters. Fortunately, a shell script can hide these details. This script exists in one form for each operating system. A double click executes the script and connects the current directory with the container. Consequently, a double click is enough to activate the environment in the container.
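A minimal version of such a start script, assuming a hypothetical image name `team-toolbox:latest`, might assemble the `docker run` call like this. Here the command is only printed, so you can inspect it before actually running it:

```shell
# Hypothetical start script for the container environment.
# "team-toolbox:latest" is an assumed image name.
IMAGE="team-toolbox:latest"

# --rm: discard the container on exit; -it: interactive shell;
# -v/-w: mount the current directory and use it as the working directory.
DOCKER_ARGS="--rm -it -v $(pwd):/work -w /work"

echo "docker run $DOCKER_ARGS $IMAGE"
# Remove the echo above and uncomment the line below to actually start it:
# docker run $DOCKER_ARGS $IMAGE
```

The per-operating-system variants differ only in syntax (e.g., a `.cmd` file with `%cd%` instead of `$(pwd)` on Windows); the mounted directory and the image stay the same everywhere.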

Conclusion

The idea of environments in containers is not new. Docker Desktop offers a very similar concept with Dev Environments, and Kubernetes also offers similar models, which Tekton applies, for example: in its work steps, containers connect to volumes, do their work, and pipelines apply further containers to the result. The serverless computing approach is now reaching the workplace. I start an environment and link it to a folder in the file system. My computer takes the role of a terminal; in effect, the container links input and output to my terminal.
