
Containers – The Essentials

09 SEPTEMBER, 2021

by Eric Goebelbecker

Let’s talk about container essentials. Over the past few years, containers have transitioned from the hottest new trend to essential IT architecture. But are they a good fit for you? Are you wondering whether you’re using them effectively? Or have you been afraid to pull the trigger and add containers to your IT portfolio?

Maybe you’re not clear on how containers differ from virtual machines (VMs). What sets them apart? Why would you use one instead of the other?

Containers help you use your hardware more efficiently. They give you a way to fit more applications into a single system safely. They’re also a powerful packaging mechanism for moving applications from one system to another easily. Unlike the mythical boast of some programming languages, containers come far closer to delivering on “write once, run anywhere.”

In this article, we’ll cover what containers are, what they’re not, and how you can use them to build a clean, efficient, and easy-to-maintain IT infrastructure.

Containers Are Not Virtual Machines

Containers and virtual machines are not the same thing. They share some similarities, especially when you look at them from a distance, but the differences can’t be overemphasized. Containers provide applications with an isolated environment. Virtual machines emulate complete computer systems that usually run more than one application.

What’s the Difference?

Servers running containers have a single operating system. The containers share that server’s kernel and operating system resources. The shared portions are read-only (with copy-on-write semantics where necessary) and, depending on how the containers are configured, have shared access to the server’s networking interfaces. Meanwhile, the applications run just as they would on any other computer.
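You can see this sharing directly. The sketch below assumes a Linux host with Docker installed; the Alpine image tag is just an example:

```shell
# All containers on a host report the host's kernel version,
# because they share that one kernel rather than booting their own.
uname -r                              # kernel version on the host
docker run --rm alpine:3.18 uname -r  # prints the same version
```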

Servers that run VMs run a hypervisor that supports the operating system running in each VM. The virtual machines are well isolated from each other, while the applications inside them are not. Similar to the containers, though, the applications still run as they would on a physical computer.

The key difference is that containers are very lightweight when compared to virtual machines.

Starting a container is simply starting an application in an isolated environment. Starting a virtual machine, on the other hand, is booting an entire operating system.
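To make the contrast concrete, here’s a sketch using the Docker CLI (the image tag is just an example; any small image works):

```shell
# Starting a container just launches a process in an isolated
# environment; no guest operating system boots.
time docker run --rm alpine:3.18 echo "hello from a container"
# On most systems this completes in about a second. Booting a VM
# to run the same command would take tens of seconds or more.
```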

Moving or copying a container from one system to another means moving only the application and the libraries that support its environment, and those components are bundled in a single package. A virtual machine is, again, an entire operating system. You measure container images in megabytes and virtual machine images in gigabytes. VMs are usually distributed as a single package too, but they’re significantly larger than a container.

Are Containers Better?

Are containers better? It depends on what you’re trying to accomplish.

Because containers only contain what they need to support a single application, they’re smaller, require less memory, and can be stopped and started very quickly.

Virtual machines come with all of the overhead required to support a complete operating system. They need more memory and take up more space, and while you can often start and stop a VM faster than the same operating system on commodity hardware, they’re still slower than a container.

Do these differences make containers a better choice? Only if your goal is to run individual applications. Sometimes you need the support of a complete operating system, or you need to run several apps together on the same system. If that’s the case, a VM makes more sense.

Both containers and virtual machines have come a long way in terms of portability. While there are only a few container implementations, the most popular, Docker, supports Windows, macOS, and all major Linux distributions. VMs have the Open Virtualization Format (OVF), which lets you move VMs between hypervisors, with some limitations.

That said, containers make it possible to package an application built for one version of an operating system and run it on another. So, for example, you can containerize a legacy application and run it on a newer version of your operating system.
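As an illustrative sketch (the image tag, paths, and script name are all hypothetical), an unmodified Python 2 script can run on a modern host by borrowing an archived base image:

```shell
# Run a legacy Python 2 script on a host that no longer ships
# Python 2, using an older base image as its environment.
docker run --rm -v "$PWD":/app -w /app python:2.7 python legacy_report.py
```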

Why Use Containers?

Containers run applications in isolation. While containers running on the same host still share operating system resources, the operating system keeps them isolated from each other. This provides some important benefits.

Containers Are Portable

Containers can run Windows, Linux, FreeBSD, and Solaris applications. Docker itself runs on Windows, Linux, and macOS (the macOS version actually runs Linux in a VM, so it’s not as robust as the other two). This means you can use Docker to run applications across platforms without using a virtual machine.

But this is only the beginning of the portability containers have to offer.

Containers can also run applications from different versions of operating systems on the same host. So, if you need to build or test code for several different versions of a Linux distribution or even different distributions, you can set up your CI/CD pipeline with build containers instead of a set of build servers or VMs.
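For example, a pipeline stage might run the same test script against several distribution images instead of maintaining a build server per distribution (the image tags and script name here are illustrative):

```shell
# Exercise one source tree against two distributions by mounting
# it into a throwaway container for each run.
docker run --rm -v "$PWD":/src -w /src ubuntu:20.04 ./run-tests.sh
docker run --rm -v "$PWD":/src -w /src debian:11    ./run-tests.sh
```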

If you need to run an older version of an application in a new environment, a container is the way to go.

Containers Are Efficient

When you set up a virtual machine, you have to allocate memory and disk in advance. Both of those resources are permanently associated with that VM. In some circumstances, you can get away with a “sparse” disk that doesn’t use all of the space right away, but there’s a performance penalty for that. Memory, however, is a fixed resource. VMs can’t share it. When you set up a VM with 16 gigabytes of memory, you’ve used that memory, whether the VM needs it all the time or not.

Containers, however, don’t have this limitation. You can set a memory limit for a container, but that’s only a maximum. Containers share host memory just like other applications. They can also share a disk. You can set aside volumes for them if you want, but that’s up to you.
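A sketch of both ideas with the Docker CLI (the image tag and paths are examples):

```shell
# --memory sets a ceiling, not a reservation: the container only
# consumes host memory as the application actually uses it.
docker run --rm --memory=512m nginx:1.21

# Volumes are opt-in. Create one only when you want dedicated storage.
docker volume create app-data
docker run --rm -v app-data:/usr/share/nginx/html nginx:1.21
```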

So, containers only consume resources as they need them. They’re also easier to move between systems since they don’t require dedicated resources. The onus is on you to make sure they have what they need, of course. But their flexibility and portability make that easy. You can also use orchestration systems like Kubernetes, and they’ll manage the resources for you.

Why Not Use Containers?

Containers are a powerful tool, but they’re not the solution to every problem. There are plenty of situations where a VM is the better option. The obvious case is when you need to virtualize an entire system.

For example, many companies have moved to virtual desktop infrastructure (VDI) as a cost-effective and secure way to provide workstations to their employees. Containers are not a replacement for VDI. Desktop users need an entire operating system and the services it provides.

If you run an application that requires significant resources, it may work best when you allocate them in advance. In that case, a VM is the better option. Containers are flexible and efficient, but sometimes that flexibility isn’t what you need, and the relative rigidity of VMs is an asset.

Time to Look at Containers

We’ve taken a brief look at container essentials. Their flexibility and efficiency make them a powerful tool that you can use to save time, effort, and money. Can you add containers to your test environment? Do you have legacy applications that need to move to updated systems? It’s time to see how containers can help you upgrade your infrastructure.

Post Author

This post was written by Eric Goebelbecker. Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and financial information exchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective!).

 
