
Containers – The Essentials

09 SEPTEMBER, 2021

by Eric Goebelbecker

Let’s talk about container essentials. Over the past few years, containers have transitioned from the hottest new trend to essential IT architecture. But are they a good fit for you? Are you wondering whether you’re using them effectively? Or have you been afraid to pull the trigger and add containers to your IT portfolio?

Maybe you’re not clear on how containers differ from virtual machines (VMs). What’s the difference? Why would you use one instead of the other?

Containers help you use your hardware more efficiently. They give you a way to fit more applications into a single system safely. They’re also a powerful packaging mechanism for moving applications from one system to another easily. Unlike the mythical boast of some programming languages, containers truly allow you to write once and run anywhere.

In this article, we’ll cover what containers are, what they’re not, and how you can use them to build a clean, efficient, and easy-to-maintain IT infrastructure.

Containers Are Not Virtual Machines

Containers and virtual machines are not the same thing. They share some similarities, especially when you look at them from a distance, but the differences can’t be overemphasized. Containers provide applications with an isolated environment. Virtual machines emulate complete computer systems that usually run more than one application.

What’s the Difference?

Servers running containers have a single operating system. The containers share that server’s kernel and operating system resources. The shared portions are read-only (with copy-on-write semantics where necessary) and, depending on how the containers are configured, have shared access to the server’s networking interfaces. Meanwhile, the applications run just as they would on any other computer.
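You can see that sharing in action. Here’s a minimal sketch using the Docker SDK for Python (the docker package): containers built from two different images report the same kernel release, because there’s only one kernel, the host’s.

    import docker  # pip install docker

    client = docker.from_env()

    # Containers share the host's kernel, so both images report the same
    # kernel release. (On macOS or Windows, the "host" kernel belongs to
    # the Linux VM that Docker runs containers in.)
    for image in ("alpine", "debian:stable-slim"):
        release = client.containers.run(image, "uname -r", remove=True)
        print(image, "->", release.decode().strip())

Both lines print the same release, even though the two images ship very different userlands.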

Servers that run VMs run a hypervisor that supports the operating system running in each VM. The virtual machines are well isolated from each other, but the applications inside a single VM are not isolated from one another. Similar to containers, though, the applications still run as they would on a physical computer.

The key difference is that containers are very lightweight when compared to virtual machines.

Starting a container is simply starting an application in an isolated environment. Starting a virtual machine, on the other hand, is booting an entire operating system.

Moving or copying a container from one system to another means moving the application and the libraries needed to support its environment, bundled in a single package. A virtual machine is, again, an entire operating system. You measure containers in megabytes and virtual machines in gigabytes. VMs are usually contained in a single package too, but they are significantly larger than containers.
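For instance, Docker can export an image as a single tarball that you copy to another host and load there. A sketch with the Docker SDK for Python, assuming an alpine image is already pulled:

    import docker  # pip install docker

    client = docker.from_env()

    # The "package" is the application plus its filesystem layers --
    # typically megabytes, not gigabytes.
    image = client.images.get("alpine")
    with open("alpine.tar", "wb") as tarball:
        for chunk in image.save():
            tarball.write(chunk)

    # On the destination host, loading it back is a single call:
    # with open("alpine.tar", "rb") as tarball:
    #     client.images.load(tarball.read())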

Are Containers Better?

Are containers better? It depends on what you’re trying to accomplish.

Because containers only contain what they need to support a single application, they’re smaller, require less memory, and can be stopped and started very quickly.

Virtual machines come with all of the overhead required to support a complete operating system. They need more memory and take up more space, and while you can often start and stop a VM faster than the same operating system on commodity hardware, they’re still slower than a container.

Do these differences make containers a better choice? Only if your goal is to run individual applications. Sometimes you need the support of a complete operating system, or you need to run several apps together on the same system. If that’s the case, a VM makes more sense.

Both containers and virtual machines have come a long way in terms of portability. While there are only a few container implementations, the most popular, Docker, supports Windows, macOS, and all major Linux distributions. VMs have the Open Virtualization Format (OVF), which allows you to move VMs between hypervisors, with some limitations.

That said, containers make it possible to package an application built for one operating system version and run it on another. So, for example, you can containerize a legacy application and run it on a newer version of your operating system.
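As a sketch, suppose a directory named legacy-app/ holds the application and a Dockerfile that pins the old userland it expects (the directory, base image, and binary here are hypothetical):

    import docker  # pip install docker

    client = docker.from_env()

    # legacy-app/Dockerfile might look like:
    #   FROM ubuntu:14.04
    #   COPY legacy-app /usr/local/bin/legacy-app
    #   CMD ["/usr/local/bin/legacy-app"]
    image, build_log = client.images.build(path="legacy-app", tag="legacy-app:latest")

    # The resulting image runs unchanged on any host with a container
    # runtime, no matter how new the host's operating system is.
    client.containers.run("legacy-app:latest", detach=True)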

Why Use Containers?

Containers run applications in isolation. While containers running on the same host still share operating system resources, the operating system keeps them isolated from each other. This provides some important benefits.

Containers Are Portable

Containers can run Windows, Linux, FreeBSD, and Solaris applications. Docker itself runs on Windows, Linux, and macOS (the macOS version actually runs containers in a Linux VM, so it’s not as robust as the other two platforms). This means you can use Docker to run applications across platforms without managing a virtual machine yourself.

But this is only the beginning of the portability containers have to offer.

Containers can also run applications from different versions of operating systems on the same host. So, if you need to build or test code for several different versions of a Linux distribution or even different distributions, you can set up your CI/CD pipeline with build containers instead of a set of build servers or VMs.
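Here’s a minimal sketch of that idea with the Docker SDK for Python. The command is a stand-in for your real build or test step:

    import docker  # pip install docker

    client = docker.from_env()

    # One throwaway "build server" per distribution.
    for image in ("ubuntu:20.04", "ubuntu:22.04", "debian:11", "fedora:36"):
        # Replace this command with your real build or test step.
        output = client.containers.run(image, ["cat", "/etc/os-release"], remove=True)
        print(image, "->", output.decode().splitlines()[0])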

If you need to run an older version of an application in a new environment, a container is the way to go.

Containers Are Efficient

When you set up a virtual machine, you have to allocate memory and disk in advance. Both of those resources are permanently associated with that VM. In some circumstances, you can get away with a “sparse” disk that doesn’t use all of the space right away, but there’s a performance penalty for that. Memory, however, is a fixed resource. VMs can’t share it. When you set up a VM with 16 gigabytes of memory, you’ve used that memory, whether the VM needs it all the time or not.

Containers, however, don’t have this limitation. You can set a memory limit for a container, but that’s only a maximum. Containers share host memory just like other applications. They can also share a disk. You can set aside volumes for them if you want, but that’s up to you.
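A small sketch of both knobs with the Docker SDK for Python. mem_limit is a ceiling rather than a reservation, and the volume mount only exists because we asked for it (the host path is hypothetical):

    import docker  # pip install docker

    client = docker.from_env()

    container = client.containers.run(
        "alpine",
        "sleep 60",
        detach=True,
        mem_limit="256m",          # a ceiling, not an up-front allocation
        volumes={
            "/srv/app-data": {     # host directory (hypothetical)
                "bind": "/data",   # where the container sees it
                "mode": "rw",
            }
        },
    )
    print(container.name, container.status)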

So, containers only consume resources as they need them. They’re also easier to move between systems since they don’t require dedicated resources. The onus is on you to make sure they have what they need, of course. But their flexibility and portability make that easy. You can also use orchestration systems like Kubernetes, and they’ll manage the resources for you.

Why Not Use Containers?

Containers are a powerful tool, but they’re not the solution to every problem. There are plenty of situations where a VM is the better option. The obvious case is when you need to virtualize an entire system.

For example, many companies have moved to virtual desktop infrastructure (VDI) as a cost-effective and secure way to provide workstations to their employees. Containers are not a replacement for VDI. Desktop users need an entire operating system and the services it provides.

If you run an application that requires significant resources, it may work best when you allocate them in advance. In that case, a VM is the better option. Containers are flexible and efficient, but sometimes that flexibility isn’t what you need, and the relative rigidity of VMs is an asset.

Time to Look at Containers

We’ve taken a brief look at container essentials. Their flexibility and efficiency make them a powerful tool that you can use to save time, effort, and money. Can you add containers to your test environment? Do you have legacy applications that need to move to updated systems? It’s time to see how containers can help you upgrade your infrastructure.

Post Author

This post was written by Eric Goebelbecker. Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and financial information exchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective!).

