DevOps and TEM Go Hand in Glove
by Mark Henke
DevOps is overall a healthy practice for most development teams, but it doesn’t come for free. Enterprises are eager to adopt the practice, but their tools often lag behind. This is a bit like walking out into the winter cold with bare hands and being overwhelmed by Mother Nature’s bitter force.
Like hands and gloves, DevOps and test environment management (TEM) are a practical pair. In this post, we’ll summarize what DevOps and TEM are and look at the specific practices through which they complement each other.
DevOps: The Hands
DevOps is about marrying operations and development to achieve full software ownership. Dev teams take responsibility for their code after it deploys, across all of their environments. This gives us great power and autonomy.
However, in an enterprise, we often have many dependencies—other software systems we rely on—and managing all of these environments gets increasingly difficult. In the current landscape, managing these dependencies with DevOps is a bit like walking into cold, wintry weather with no gloves on. Without proper tools, our poor, operations-soaked hands will get frostbite. This frostbite comes in the form of time we must spend on production support and operational activities. That time steals away our capacity for new development and numbs us to the joy of writing code.
TEM: The Gloves
Test environment management (TEM) is the set of gloves we can put on to protect us from the bitter costs of supporting our software. This post is a good primer on why TEM is so important for DevOps. Here, we are going to cover a few specific cases where it’s valuable.
The Chilling Wind of Black-Box Testing
Black-box testing is a way to test systems without deep knowledge of how they work. It’s described as a black box because your dependencies are opaque to you. You know what goes in and what should come out, but you don’t care about what happens in between. Types of black-box tests include contract testing, end-to-end system testing, and smoke testing.
Unlike unit tests, these verifications can’t run in isolation. It can be hard to set up certain data the way we need it or ensure a clean slate every time we run the test suite. Data from one team can collide with another team’s data.
The environments we’re testing can also be unstable or unreliable, changing on a cadence that constantly disrupts the teams that use them. These sorts of tests are like a chilling wind blowing through your environments, causing flaky, inconsistent failures and false positives. Without TEM, the cost of these tests can outweigh their benefits.
When we treat test environments as first-class citizens of our software suite, we can mitigate these problems. When we can manage test environments explicitly, we can cohesively change them in sync so that teams know what to expect. We can script these environments so that our consumers can customize them or manage them directly. And we can partition our environments properly so that teams won’t collide with each other.
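As a rough sketch of what explicit partitioning might look like, the idea is to derive isolated resource names per team and test suite so that black-box test data never collides. The helper, field names, and naming scheme below are all invented for illustration:

```python
import re
from dataclasses import dataclass


@dataclass(frozen=True)
class TestEnvironment:
    """An isolated, per-team test environment definition (illustrative)."""
    team: str
    database: str
    queue_prefix: str


def provision_environment(team: str, suite: str) -> TestEnvironment:
    """Derive isolated resource names from the team and test suite.

    Partitioning by name means two teams running black-box tests at
    the same time never read or write each other's data.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", f"{team}-{suite}".lower()).strip("-")
    return TestEnvironment(
        team=team,
        database=f"testdb_{slug.replace('-', '_')}",
        queue_prefix=f"test.{slug}.",
    )


# Each team gets its own database and queue namespace.
env = provision_environment("Payments", "contract tests")
print(env.database)
print(env.queue_prefix)
```

Because the environment is expressed in code, teams can version it, review changes to it, and script it into their pipelines like any other artifact.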
The Snow Drift of Performance Testing
Another area that can pile up on us without TEM is performance testing. It’s prudent to ensure that we can meet our service-level objectives, but we don’t control the full stack.
It’s traditionally difficult to control or even understand the total performance you can provide. Your system is just one of many, each trying to meet customers’ latency needs. Not only is this tricky, but performance testing puts a lot of stress on systems. You can unintentionally stop another team from running their own suite. Or they may be deploying, causing your metrics to act abnormally.
Test environment management makes performance tests cohesive. You can coordinate the timing and cadence of these test runs across systems. Or you can self-service your dependencies and spin up the environments yourself for testing. We’ll dive into this later. TEM is the snow shovel you can use to clear your way to smooth performance metrics.
The Icy Road of Tracing and Logging
When practicing DevOps, the ability to quickly track down and fix weird, subtle problems is paramount to strong product support. This is a hard task, and the flood of immature products out there doesn’t quite enable us to do it.
There are two main tools we have to combat this: logging and tracing. Tracing helps us pinpoint problem spots, and logging helps us diagnose why those spots cause problems. But in a complex, distributed system, there are often hurdles to using these tools effectively.
Loggers may be inconsistent across applications. Or logs may be stored in disparate places. Imagine getting a critical problem at 2 AM and having to click through and download half a dozen log files from four different places! Or imagine trying to walk a request across seven different systems with only a few admin dashboards to guide your way.
These types of problems result in multi-hour phone calls with a dozen different people, all trying to stitch together an elaborate puzzle. The unknowable types of incidents that plague us on production support are like an icy road. We travel down it trying to reach our home, but keep sliding out of control and often end up in a ditch.
The first step is for us to have consistent, aggregated logging across all applications. The second is to implement some distributed monitoring tooling.
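One minimal sketch of the first step, using Python’s standard `logging` module: inject a shared trace ID into every log record so that logs from different services can be correlated in one aggregated store. The field names and format here are assumptions, not a prescribed standard:

```python
import logging
import uuid


class TraceIdFilter(logging.Filter):
    """Stamp a shared trace ID onto every log record so logs from
    different services can be correlated after aggregation."""

    def __init__(self, trace_id: str):
        super().__init__()
        self.trace_id = trace_id

    def filter(self, record: logging.LogRecord) -> bool:
        record.trace_id = self.trace_id
        return True


def make_logger(service: str, trace_id: str) -> logging.Logger:
    """Build a logger with one shared format across all applications."""
    logger = logging.getLogger(service)
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler()
    # Same fields, same order, everywhere -- that's the consistency part.
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s trace=%(trace_id)s %(levelname)s %(message)s"
    ))
    logger.addHandler(handler)
    logger.addFilter(TraceIdFilter(trace_id))
    return logger


# The same trace ID travels with a request through every system it touches.
trace_id = uuid.uuid4().hex
make_logger("orders", trace_id).info("order received")
make_logger("billing", trace_id).info("invoice created")
```

In a real system, the trace ID would be propagated between services (for example, via an HTTP header) rather than generated locally, and a tool like a distributed tracing backend would consume it.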
But how do we ensure these tools are doing a good job? Enter, once more, TEM. TEM is the salt on our road: it lets us see, across systems, how to get these tools into place. We can also use our environment management to bolt these tools on consistently, so each application team need not worry about how to bring them to life.
The Inner Lining of Self-Service
In all these different areas where TEM warms our hands, a common theme emerges. There’s an inner lining in our gloves. Treating test environments as first-class concepts lets us move away from team hand-offs into the warmth of self-service.
Traditionally, teams were very protective of their code. Over time they’ve become less so, but many teams still hold onto their tools, their data, and their deployments. To get anything done, you would often need to schedule things in advance with your downstream teams.
On the other side, you would need to hand off completed work for someone else to actually bring it to customers. Each hand-off increases the lead time for a feature and becomes a crack for potential miscommunication. These miscommunications develop into defects.
With ideas like TEM, we realize that our environments are often as important as our code! We thus learn to package and automate them as much as we automate our applications. Doing this opens the door to sharing them easily with others. We can publish our test environments, and with strong tooling, we can maintain them explicitly. Then developers can self-service the test environments they need for performance testing, black-box testing, and the like.
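A hedged sketch of what “publishing” an environment could mean in practice: the owning team checks a declarative spec in alongside their code, and consuming teams turn that spec into a running environment themselves instead of filing a request. The spec format, field names, and output below are invented for illustration:

```python
import json

# A published test-environment spec, versioned alongside the code.
PUBLISHED_SPEC = json.dumps({
    "name": "payments-testenv",
    "version": "1.4.0",
    "services": {
        "payments-api": {"image": "payments-api:1.4.0", "port": 8080},
        "payments-db": {"image": "postgres:13", "port": 5432},
    },
    "seed_data": "fixtures/smoke.sql",
})


def self_service(spec_json: str, consumer_team: str) -> list:
    """Turn a published spec into the steps a consuming team would run
    themselves -- no hand-off to the owning team required."""
    spec = json.loads(spec_json)
    prefix = f"{consumer_team}-{spec['name']}"
    return [
        f"start {prefix}/{svc} ({cfg['image']}) on port {cfg['port']}"
        for svc, cfg in spec["services"].items()
    ]


for step in self_service(PUBLISHED_SPEC, "checkout"):
    print(step)
```

The key design point is that the consuming team’s name is baked into the resource prefix, so a self-serviced copy of the environment is isolated from every other copy by construction.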
Now developers who depend on our team don’t need to wait for us. They don’t need to hand something off to ensure it works end to end. By managing and publishing test environments, we can deliver features faster than ever, with the highest quality.
Braving the DevOps Arctic With TEM
While DevOps gives us the autonomy to own our software, it can be a cold landscape. Many of the old tools are no longer sufficient. We need to warm ourselves with gloves that let us treat our test environments as first-class citizens. We’ll find that handling complex testing and monitoring scenarios across our dependencies keeps us warm as we venture into the cold.
Sometimes, though, even gloves don’t provide enough warmth. Sometimes we need a space heater for our TEM. That’s where a tool like Enov8 can come in.
This post was written by Mark Henke. Mark has spent over 10 years architecting systems that talk to other systems, doing DevOps before it was cool, and matching software to its business function. Every developer is a leader of something on their team, and he wants to help them see that.