Software Security Anti-Patterns



by Eric Boersma

Originally published 22 October 2019

If you’re like a lot of developers, you might not think much about software security. Sure, you hash your users’ passwords before they’re stored in your database. You don’t return sensitive information in error messages. Each time you log something, you make sure sensitive data is scrubbed out first. But beyond that, you don’t spend much time thinking about the security of your app.
Chances are, you and your team think that’s OK. After all, nobody’s compromised your application yet. Thinking about security is hard, and nobody on your management team is pressuring you to make your app more secure. They definitely are pressuring you to ship that next feature tomorrow, though.



What is a Software Security Anti-Pattern?

My goal here isn’t to make you feel guilty about your application security practices. But it is to raise some awareness of things in your application that you might be missing. Many teams make similar mistakes when it comes to software security. They make those mistakes because they’re easy to make, and doing things the right way is usually hard or more expensive. I call these habits “anti-patterns” because they’re common practices of software teams that actually harm the teams that adopt them.
The goal of this article is to dive into some of the most common security anti-patterns teams fall into and just why they can be harmful.

Using Libraries Without Knowing Everything They Do

This is probably the single most common security anti-pattern teams fall into. I’m guilty of it myself sometimes, if we’re being honest. Modern software relies on dozens of open source libraries written by developers outside your team. The truth is that most developers don’t know what all of the libraries in their project actually do.
Not knowing what all of the libraries in your application do is a very dangerous position to be in. Consider the tale of the “left-pad” npm package, which in 2016 was abruptly removed from the npm registry. Removing this one package broke builds for thousands of other npm packages and, subsequently, the applications of thousands of businesses. Why did that happen? Because the developer who built left-pad was upset at the npm admins and pulled his packages. Most of the people who relied on packages that relied on left-pad had no idea they were using it. What’s more embarrassing is that all left-pad does is pad the left side of a provided string with spaces. It’s embarrassingly simple.
If the author of left-pad had been actively malicious instead of simply pulling his library from npm, thousands of developers could have had their application details secretly stolen by updates to left-pad. They never would’ve known, because they didn’t know they were using the library in the first place. They’d never audited the libraries their application depends on. Even setting aside malicious authors, many npm packages contain known security vulnerabilities, and npm even provides a built-in scanner (`npm audit`) for finding them. Yet many teams have never run that scanner even a single time.
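To see how a dependency you never asked for ends up in your application, here’s a minimal sketch that walks a parsed, package-lock-style dependency tree and lists every package it pulls in, including transitive ones. The package names and tree shape below are hypothetical; this only loosely mirrors npm’s real lock format.

```javascript
// Walk a nested dependency tree and collect every package name it contains,
// including transitive dependencies you never installed directly.
function collectTransitiveDeps(deps, found = new Set()) {
  for (const [name, info] of Object.entries(deps || {})) {
    if (!found.has(name)) {
      found.add(name);
      collectTransitiveDeps(info.dependencies, found);
    }
  }
  return found;
}

// Hypothetical lock data: "left-pad" hides two levels down.
const lock = {
  "web-framework": {
    version: "1.0.0",
    dependencies: {
      "string-utils": {
        version: "2.1.0",
        dependencies: { "left-pad": { version: "1.3.0" } },
      },
    },
  },
};

const all = collectTransitiveDeps(lock);
console.log([...all]); // "left-pad" appears even though you never asked for it
```

A walk like this only tells you what you depend on; for known vulnerabilities in those packages, `npm audit` checks the tree against a database of published advisories.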

Using Production Data on Test Servers

This is another of the most common security anti-patterns. Let’s be honest: it’s difficult to come up with good test data. There are only so many cartoon characters you can name fake users after. Inventing fake addresses becomes pretty difficult after a while. Generating convincing fake medical data can be nearly impossible for someone who doesn’t understand the intricacies of what they’re dealing with.
So, many teams just use their production data right on the test system. This carries a whole bunch of potential problems along with it.
For starters, most production data contains sensitive information. Your users trust your company enough to share things like addresses or phone numbers. If you work in certain environments, they might share even more sensitive information, like their banking or government ID numbers, or even medical histories. They share that information because they trust that unauthorized people won’t access that data. When you move production data to a test system, you run the risk of breaking that trust.
Most test environments aren’t nearly as secure as your production environments. It’s common to share administrator logins for test systems, while you wouldn’t ever do that for production systems. This means that unauthorized people—like junior associates, or contractors—can access that data. In many countries, this data may even be legally protected, and using it in a test environment might be breaking the law.
Thankfully, great tools exist that can help deal with this problem. Top-tier test environment management platforms make it easy to manage the data used in your test environments, which keeps production data safe.
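One common technique those tools use is data masking: scrubbing sensitive fields from production records before they ever reach a test database. Here’s a deliberately tiny sketch of the idea; the field names are assumptions, and real masking tools handle far more cases (referential integrity, format preservation, and so on).

```javascript
// Sketch: anonymize a user record before loading it into a test environment.
// Non-sensitive fields (like the id) keep their real values so the data
// still behaves realistically; sensitive fields are replaced.
function maskRecord(user) {
  return {
    ...user,
    name: `Test User ${user.id}`,
    email: `user${user.id}@example.test`,
    // Keep the shape of a phone number but none of the real digits.
    phone: user.phone ? user.phone.replace(/\d/g, "5") : null,
  };
}

const prodUser = {
  id: 42,
  name: "Ada Lovelace",
  email: "ada@real.example.com",
  phone: "555-867-5309",
};
console.log(maskRecord(prodUser));
// → { id: 42, name: 'Test User 42', email: 'user42@example.test', phone: '555-555-5555' }
```

The key property is that the masked record is still structurally valid test data, but nothing in it can identify a real user.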


Using Blacklists for Input Validation

To be honest, a lot of software would be simpler without users. Unfortunately, it’d be a lot less useful too. So you need to accept input from your users, but you need to make sure they’re not sending you anything bad: nothing malicious, and nothing that might accidentally break something. In short, you start validating user input.
This is a good thing. Validating user input helps your software stay safe and stable. But a really common mistake teams make is trying to enumerate the bad inputs. They create a blacklist of patterns, and any input that matches one is rejected. Everything else is accepted. This is a good first step, but it has some pretty serious holes, making it another security anti-pattern. Specifically, “everything else” is a pretty wide data set.
These kinds of rules tend to be reactionary, where teams will add to them when a new bug or security flaw is found. This means that you’re protected against every flaw you know about, but there’s a whole universe of threats you don’t know about.
Instead, teams should focus on creating a whitelist of acceptable inputs and only allowing those. This makes your surface area for bugs and attacks much smaller.
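Here’s a minimal sketch of the whitelist approach. The specific rule (usernames are 3–20 letters, digits, or underscores) is an assumed policy for illustration; the point is that anything not matching the known-good pattern is rejected, with no need to enumerate attacks.

```javascript
// Whitelist validation: define what valid input looks like and reject
// everything else, instead of blacklisting known-bad patterns.
const USERNAME_PATTERN = /^[A-Za-z0-9_]{3,20}$/;

function isValidUsername(input) {
  return typeof input === "string" && USERNAME_PATTERN.test(input);
}

console.log(isValidUsername("alice_42"));                  // true
console.log(isValidUsername("<script>alert(1)</script>")); // false
console.log(isValidUsername("a"));                         // false: too short
```

Notice that the script-injection attempt is rejected even though nothing in the code mentions scripts; it simply isn’t on the whitelist.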

Carelessly Concatenating Strings

This is the sort of thing a lot of developers think they don’t need to worry about anymore. Sure, unsafe string handling in C was a big deal, but the attitude is that modern languages make it totally safe these days.
Unfortunately, there are lots of situations where that’s just not true, making it another security anti-pattern. As it turns out, it’s really difficult to effectively separate data from code, especially when pushing strings together. This means that carelessly concatenated input can crash your servers or expose security flaws.
An example of this happening in a language that many believed to be secure was 2013’s YAML deserialization vulnerability in Ruby on Rails. It was possible for attacker-supplied YAML, which was only supposed to hold data, to instead be deserialized into live Ruby objects and effectively evaluated as code, opening the door to remote code execution. YAML was used everywhere in Ruby, and still is (the vulnerability has since been patched). Developers were unknowingly injecting security vulnerabilities into their apps without any sense of the danger present.
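The classic illustration of mixing data into code is SQL injection. The sketch below is a toy: no real database is involved, and the `{ text, values }` shape just stands in for the parameterized-query form that real database drivers accept. It shows why concatenation lets input rewrite the query itself, while parameter binding keeps code and data separate.

```javascript
// A hostile input a user might type into a "name" field.
const userInput = "'; DROP TABLE users; --";

// Anti-pattern: splicing input directly into the query string. The input
// escapes its quotes and becomes part of the SQL code itself.
const unsafeQuery = `SELECT * FROM users WHERE name = '${userInput}'`;
console.log(unsafeQuery);
// → SELECT * FROM users WHERE name = ''; DROP TABLE users; --'

// Safer: keep the code and the data separate. The query text never changes,
// and the driver binds the value to $1 without ever treating it as SQL.
const safeQuery = {
  text: "SELECT * FROM users WHERE name = $1",
  values: [userInput],
};
console.log(safeQuery.text);
```

The same principle applies to the YAML case above: load untrusted input with a parser that only produces plain data, never one that can instantiate arbitrary objects.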

Shipping Systems You Can’t Patch

This is an anti-pattern that has caught up with nearly every software team I’ve ever worked with. No one ever intends to create a system that they can’t patch. The system grows into an unmaintainable state over a long time. As the dev team attaches new libraries and features to the system, early decisions ossify. Over time, changing anything about how the system works increases the likelihood of breaking existing functionality.

At the same time, libraries and programming languages continue to evolve. Languages change, deprecate, or remove APIs, and some libraries don’t keep up. Suddenly, adopting a new version of a programming language doesn’t just mean updating a configuration file. It means potentially rewriting whole sections of the application. For most teams, this results in the application freezing at a particular language and library version. The problem is that those language and library updates often contain security fixes. So now, the development team is actively choosing to sacrifice the security of the application in exchange for avoiding the work necessary to update the code. This can be a reasonable tradeoff, but it’s unquestionably an anti-pattern.

Not Monitoring Admin Access

Providing an administrator interface to your software is a very common development pattern. There are users who need to do things that most users shouldn’t be able to do in your system. Often, these admin UIs provide very powerful access to your system, in ways that affect all of your customers, and not just one. To their credit, most software teams are thoughtful and thorough about limiting who has access to which parts of these UIs. That’s a positive thing.

Unfortunately, most teams aren’t nearly so thorough about logging who does what inside these admin UIs. Because these UIs are extremely powerful, someone misusing them is often capable of inflicting a lot of damage on a system’s users. It doesn’t even require malice. Sometimes, a user takes the wrong action unintentionally. The damage persists either way. Many times, development teams treat admin UIs no differently from other parts of the application when it comes to capabilities like logging. But because of the sensitive nature of the data and actions within admin UIs, development teams should log access and actions there far more comprehensively to avoid falling into this application security anti-pattern.

Is Your Team Falling Into Bad Habits?

This article doesn’t cover every security anti-pattern in the book. These are just the ones that I’ve run into most frequently. They happen because teams cut corners in the way they handle data or think about their inputs. Every team receives pressure from management to improve time to release, which means simple things slip through the cracks. It isn’t until months or years later that someone discovers the flaws present in the code.
The best way to catch these anti-patterns is to be vigilant while you’re working. The best engineers can recognize their teams slipping into these anti-patterns and ensure that they’re addressed before they become an issue. If your team is struggling with these kinds of problems, now is the time to address them. You can make your team’s code more secure, but it requires that you put in the time now. What bad habits can you stop your team from carrying on?


Eric Boersma

This post was written by Eric Boersma. Eric is a software developer and development manager who’s done everything from IT security in pharmaceuticals to writing intelligence software for the US government to building international development teams for non-profits. He loves to talk about the things he’s learned along the way, and he enjoys listening to and learning from others as well.
