IT Environments – Why Use Microservices?

08 January, 2019

By Eric Goebelbecker.

New technologies and new practices continually shape how our IT environments evolve and how we operate them. One area of growing popularity is microservices.

Why Use Microservices?

Why should you start using microservices? Why all the hype? Are they just the latest fad, or are there good reasons why so many companies have migrated to them? The fact is, microservices improve many aspects of how systems are designed, built, and operated, along with the underlying environment automation. We'll take a look at why microservices are an excellent option for distributed applications. So, if you're not already familiar with how they can help you create better systems, you're in the right place.

What are Microservices?

Microservice architecture breaks applications down into collections of small services. Each service is designed to address a specific business capability, and designers address technical concerns only after they choose the service's role. An excellent example of a service's area of concern is "customer information"; a poor one would be "relational database access". This distinction is an important factor in why microservices are so successful, as we'll see below.

A monolithic application, by contrast, has libraries built around technical or business capabilities. You may link third-party API libraries into the application, or use "plug-ins" that make it easier to add new behavior. Either way, the monolith runs in a single process, so it can be more complicated to manage and upgrade.

Microservices use a radically different approach. Developers add new abilities by creating new services, each running in its own process. Every business concern becomes a discrete service, and the only libraries and APIs in that process are those needed to fulfill its business concern. The service communicates directly with clients over a lightweight protocol, usually a RESTful API delivered over HTTP, and microservice systems generally need little central management.

So, what makes component systems better than monolithic ones?
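Before answering that, a concrete sketch helps. Here is a minimal illustration of what the "customer information" service described above might look like as a standalone HTTP process. The Flask framework, the route, and the in-memory data are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of a "customer information" microservice (assumes Flask).
from flask import Flask, abort, jsonify

app = Flask(__name__)

# In a real service this data would live in the service's own datastore;
# a dict keeps the sketch self-contained.
CUSTOMERS = {
    1: {"id": 1, "name": "Ada Lovelace", "email": "ada@example.com"},
}


@app.route("/customers/<int:customer_id>", methods=["GET"])
def get_customer(customer_id):
    """Return one customer as JSON, or 404 if it doesn't exist."""
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        abort(404)
    return jsonify(customer)


if __name__ == "__main__":
    # Each microservice runs in its own process and owns its own port.
    app.run(port=5001)
```

Everything in the service is scoped to one business capability; clients only ever see the HTTP endpoint.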

The Single Responsibility Principle Writ Large

The single responsibility principle is a guideline for designing software. It states that software modules or classes should have responsibility for only one thing. A well-written application or library consists of components that enjoy solid boundaries. Robert C. Martin defines the principle this way:

"The Single Responsibility Principle (SRP) states that each software module should have one and only one reason to change."

That single "reason to change" is a crucial part of Martin's definition. When you map a software component to a distinct business concern, it responds better to business requirements. The principle also protects it from conflicting or unrelated dependencies that can distort its purpose. Martin continues:

"When you write a software module, you want to make sure that when changes are requested, those changes can only originate from a single person, or rather, a single tightly coupled group of people representing a single narrowly defined business function. (emphasis added) You want to isolate your modules from the complexities of the organization as a whole, and design your systems such that each module is responsible (responds) to the needs of just that one business function."

This principle is a crucial reason why microservices are a good architectural strategy. A well-designed microservice has only one reason to change. So, new requirements cause minimal disruption since areas of concern are well-defined. A system made of loosely coupled services can adapt to changing business requirements.
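To make the principle concrete, here is a small, hypothetical sketch of a module whose only reason to change is the customer-information business function. The class and interface names are invented for illustration; the point is that storage details stay behind a boundary, so a database change never forces a change to the business logic.

```python
# Hedged illustration of the Single Responsibility Principle.
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class Customer:
    id: int
    name: str
    email: str


class CustomerRepository(Protocol):
    """Technical concern (how customers are stored) stays behind this boundary."""

    def find(self, customer_id: int) -> Optional[Customer]: ...
    def save(self, customer: Customer) -> None: ...


class CustomerService:
    """Business concern: customer information, and nothing else.

    The only people who should ever request changes here are the ones
    who own the customer-information function.
    """

    def __init__(self, repository: CustomerRepository) -> None:
        self._repository = repository

    def update_email(self, customer_id: int, new_email: str) -> Customer:
        customer = self._repository.find(customer_id)
        if customer is None:
            raise LookupError(f"no customer with id {customer_id}")
        customer.email = new_email
        self._repository.save(customer)
        return customer
```

Swapping the relational database for something else means writing a new repository, not touching CustomerService.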

Divide and Conquer

When you break an application down into services, you can divide your development up too. Each team takes an area of concern and focuses on delivering an application that fulfills it. This ability to create parallel development teams isn't a coincidental side effect of microservices. It's a key benefit of component architecture that helped drive its popularity. Microservices make both technology and business teams happy, so it's no surprise that both large and small companies have embraced them.

Monolithic architectures need centralized management and standardized tooling. Even if you can break the development effort down into individual teams, they're still working on the same application. Any change requires a complete release cycle of the service. If you need to make more than one change, you have to combine them into a single release, or queue releases one after the other and make stakeholders wait.

Microservices avoid these conflicts and require almost no centralized governance. Development teams can work on individual schedules. They can release their services on their own schedules too. Rather than trying to coordinate the efforts of several groups, management can focus on results.

The individual development teams can also select the tools and infrastructure they need. A monolithic application often means a development monoculture. Not so with microservices. Teams can choose the best tool for the job, which can mean different languages, database platforms, or storage technologies.
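That independence shows up in code as well: one team consumes another team's service purely through its HTTP contract. The sketch below assumes the requests library and a hypothetical service address; nothing about the customer team's language, database, or release schedule appears in it.

```python
# Sketch of a consumer that depends only on another team's API contract.
import requests

# Hypothetical address for the customer-information service.
CUSTOMER_SERVICE_URL = "http://customer-service:5001"


def fetch_customer_name(customer_id: int) -> str:
    """Look up a customer's name via the customer service's REST API."""
    response = requests.get(
        f"{CUSTOMER_SERVICE_URL}/customers/{customer_id}",
        timeout=2,  # fail fast rather than blocking the calling service
    )
    response.raise_for_status()
    return response.json()["name"]
```

The customer team can rewrite its service in a different language or swap its database, and this code never needs to know.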

Faster Iteration

Many technologists think of microservices and agile as going together, and there's a good reason for that. Short development cycles that create tight feedback loops are at the heart of agile. Microservice architectures are a natural fit for agile since they let you deploy new code faster and more often.

Slow and complicated release cycles plague monolithic architectures. First, every change risks triggering a conflict. Does it need a full regression test, or do we risk skipping it? Second, you can only schedule deployments on weekends or during tight release windows. These factors combine to slow releases down to every few weeks at best, and often even less frequently.

A microservice architecture is more compatible with agile methodology. Releases are easier since they are, in every way, smaller in scope. Only one service needs a QA cycle, and you only need to deploy that single service. You have more freedom about when you deploy since the process is smaller in scope and represents less risk. So, you can release more often, development receives timely feedback, and the system is responsive to changing requirements.

Improved Scalability

How do you scale a monolithic service? Usually, it involves adding load balancing so more than one server shares the traffic. Or you can go with that most revered of solutions, "throw more hardware at it": buy a bigger server with more memory and faster storage. But what happens when those approaches aren't available or don't work? What if the problem lies inside the service? It's back to the drawing board.

Microservices tend to be easier to scale. It's easier to identify bottlenecks in a collection of small services. So, even if you do have to resort to throwing hardware at the problem, you can throw it at the single service that needs it. If a service needs redesigning or replacing, the team responsible can do the job without disrupting the rest of the system.

Also, as more and more enterprises embrace the cloud, containerization has risen in popularity. Microservices lend themselves to containerization since they are small applications with a limited set of dependencies. So, you can quickly scale your services horizontally with technologies like Docker and Kubernetes without writing any additional code.

Solving Design Issues with Focus and Agility

Microservices solve problems by applying time-tested software design principles at the system level. There are many benefits to this approach. First, the system tends to focus on business problems first and technical concerns second. Second, the applications adapt to new requirements quickly, making developers and stakeholders both happy. Finally, microservice-based systems are easy to deploy to the cloud and scale well.
This post was written by Eric Goebelbecker. Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and Financial Information eXchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective!).
