
The History of SRE

17 January, 2020

by Sylvia Fronczak

Site reliability engineering (SRE) uses techniques and approaches from software engineering to tackle reliability problems with a team’s operations and a site’s infrastructure.

Knowing the history of SRE and understanding which problems it solves helps you make it work for your organization. And as with the spread and adoption of agile and DevOps, that history guides you, so you know you're making choices for the right reasons and with the right goals in mind.

In this post, we’ll look at the history of SRE in the industry. We’ll explore what problems caused people to combine engineering principles with operations and system administration.

Let’s kick it off with a look at how all this started.


In the Beginning

To start, let’s talk about software reliability engineering over the last 50 years. Over that span, people have put a great deal of analysis and thought into software reliability. The literature covers many of the terms and topics that present-day site reliability engineering uses, such as mean time to recovery, using metrics to analyze and predict failures, and creating fault tolerance through application redundancy.
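
To make those metrics a bit more concrete, here's a minimal sketch (my own illustration, not something drawn from the SRE literature or from Google) of how mean time to recovery, mean time between failures, and availability might be computed from a handful of hypothetical incident records:

```python
# Minimal sketch: classic reliability metrics from hypothetical incident data.
from datetime import datetime, timedelta

# Hypothetical incident records: (start of outage, end of outage).
incidents = [
    (datetime(2020, 1, 3, 2, 15), datetime(2020, 1, 3, 2, 45)),
    (datetime(2020, 1, 9, 14, 0), datetime(2020, 1, 9, 15, 30)),
    (datetime(2020, 1, 20, 23, 10), datetime(2020, 1, 20, 23, 25)),
]

observation_window = timedelta(days=30)

downtime = sum((end - start for start, end in incidents), timedelta())
mttr = downtime / len(incidents)              # mean time to recovery
uptime = observation_window - downtime
mtbf = uptime / len(incidents)                # mean time between failures
availability = uptime / observation_window    # fraction of the window spent up

print(f"MTTR: {mttr}")
print(f"MTBF: {mtbf}")
print(f"Availability: {availability:.4%}")
```

The numbers are invented, but the idea is the one the early literature cared about: you can't reason about reliability, or predict failures, without measuring it first.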

During this period, people learned that software can and will fail. It also became clear that people can take steps to reduce the occurrence of failure, or at least reduce the impact of failure.

But much of the literature still looked at software as one black box, irrespective of its ecosystem. Experts divided hardware and software into two different worlds. Keeping an application running wasn’t the same as building it in the first place.

For example, back when I began writing software, the organizations I worked with kept application developers and operations separate. The developers worked on products and features, while the sysadmins worked on hardware, configuration, and monitoring, much of which was manual.

Now, don’t think that this was unusual. Many companies took on this model. And no one questioned it for a long time.

However, companies like Google saw that there could be a better way. But what problem did they notice that drove them to try something different?

The Problem Worth Solving

In an application life cycle, people first wrote the application, putting forth a lot of engineering discipline and thought. And then they pushed it into production, whether all at once or iteratively. Then, once it was live and in production, they put it in maintenance mode. They didn’t actively work on it unless users wanted more features or unless there was a problem. In fact, they put very little engineering effort into running the application for the years that it provided value.

Though people looked at design and engineering principles during the development of the software, they didn’t take the same amount of care in keeping the software up and running. Instead, they hired operations teams to restart the server occasionally, fiddle with the load balancing, deploy a patch or an upgrade, or manually intervene when a problem occurred.

But they didn’t look for engineering solutions to these menial maintenance tasks. They didn’t even see this as a problem. And they repeatedly completed these same manual tasks on every one of their applications and systems.

When new applications were deployed to production, a new application maintenance and operations team would be ready to take on the additional responsibility.

So why was this a bad thing? Mainly it had to do with costs.

Costs of the Old Way: Devs vs. Sysadmins

First, because manual processes ruled in this paradigm, the sysadmin team had to grow over time to handle additional systems and additional traffic. The size of the sysadmin team therefore increased with the number of applications, the complexity of the systems, and the success of the applications, since success meant more traffic.

So a company like Google, which continually adds applications and traffic, required a larger and larger operations staff to keep it all running. And Google isn’t alone. You’ve probably worked in places where a new team formed for each application that went live.

Second, because the development team and the sysadmin team were separate, a divide began to form between them. The devs wanted to ship new features, while the sysadmins wanted to reduce the amount of change happening to the system. The sysadmins and the rest of the operations team felt good when few changes to the application took place. So they wanted fewer new features, fewer config changes, and less complexity overall.

Why did the sysadmins resist new features? Because changes like these introduce the potential for something to break. And the sysadmins didn’t want to be up all night dealing with it.

On the other hand, developers wanted to push new features for the end user. Change is what they lived for!

So they were two sides of the same coin. Both wanted to improve the life of the customer. The operations team wanted to do that through improved reliability. The development team wanted to do that through new features and applications.

As you might expect, the two paradigms conflicted and slowed each other down. The operations teams worked to slow change, while the development team worked to speed up change.

How do we get out of this cycle? How do we get features out quickly while also keeping reliability and stability at acceptable levels?

The Path Forward

The journey that Google and other companies took to adopt and create SRE wasn’t always smooth. It also wasn’t a large, coordinated, heavily structured plan that management laid out. It grew from engineers solving engineering problems. And all of this reportedly began back in 2003, when Ben Treynor coined the term SRE at Google.

So how did Google make it work?

To begin with, people there decided to look at operations and system administration in a different way. They looked at it through the eyes of an engineer.

In addition to hiring software engineers for the operations team, Google looked for sysadmins who understood the internal workings of operations while still having a solid level of engineering skills. Combined, these people were well equipped to improve operations through engineering practices that reduced toil and improved reliability.

Why did the folks at Google do this?

When you look at teams and operations through the eyes of an engineer, you see opportunities to automate and streamline processes. You begin to notice common problems and find elegant solutions that you can use in multiple places. And you believe that engineering skills can solve complex software problems.
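
As a concrete illustration of that mindset, here's a hedged sketch of turning one routine operations chore into code. The service names, health-check endpoints, and the systemctl restart command are all assumptions made for the example, not anything Google describes:

```python
# Hedged illustration (hypothetical services and commands) of replacing toil
# with automation: instead of a sysadmin manually checking and restarting
# services, a script does the routine work and only escalates to a human
# when automation isn't enough.
import subprocess
import urllib.request

# Hypothetical services and their health-check endpoints.
SERVICES = {
    "frontend": "http://localhost:8080/healthz",
    "billing": "http://localhost:8081/healthz",
}

def is_healthy(url: str) -> bool:
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except OSError:
        return False

def restart(service: str) -> None:
    """Restart a service; assumes a systemd-managed host."""
    subprocess.run(["systemctl", "restart", service], check=True)

for name, url in SERVICES.items():
    if not is_healthy(url):
        print(f"{name} unhealthy, attempting automated restart")
        restart(name)
        if not is_healthy(url):
            # Only now does a human get involved; escalation is left abstract here.
            print(f"{name} still unhealthy after restart, escalating to on-call")
```

A script like this is trivial on its own, but applied across hundreds of services it's exactly the kind of repeated manual task that an engineering approach removes from the operations workload.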

And then Google iterated.

The team members at Google didn’t look at failures as being a drawback of SRE. Instead, they were opportunities to improve and learn. And over time, Google team members came up with a wealth of knowledge that we can now find in their SRE handbook and workbook.

What does all this mean for you?

The next step involves your path forward. How will you look at costs and benefits with your existing teams and infrastructure? And how will you begin your journey toward SRE?

Key Takeaways

The history of reliability engineering spans many decades. People have spent a lot of time making systems more reliable. However, relying on manual processes to maintain reliability resulted in rising costs for operating systems in production. In an effort to find a better way, Google team members took it upon themselves to combine engineering skills with system administration. They did all this to reduce application operation costs—both direct and indirect.

Even if your path to SRE takes a different route from the ones that Google staffers took, you can be sure that you’ll still end up in the right place. Why? You’ll be looking at problems from a software engineer’s perspective and from a system administrator’s perspective. Combining the two will provide maintainable and reliable systems for your customers. You’ll increase automation to save time, money, and hassles. And you’ll find that problems that occur on your journey will provide chances to learn and grow.


This post was written by Sylvia Fronczak. Sylvia is a software developer who has worked in various industries using a range of software methodologies. She’s currently focused on design practices that the whole team can own, understand, and evolve over time.
