by Eric Goebelbecker
Serverless architecture has gained a lot of attention over the past few years, and it's now a mainstream option for cloud applications. Amazon released AWS Lambda to the public in 2014 and soon after made Lambda the primary method for writing Alexa “skills.” Microsoft followed with Azure Functions less than two years later, and Google released Cloud Functions in 2017.
These are only the “big four” vendors; there are many others. Demand for serverless has increased steadily over the past five years, and vendors have rushed to meet it. What is serverless computing? What has driven its popularity? Is it the right choice for your IT infrastructure? Can it save you money? How does it affect testing? Will it improve your system’s performance and uptime?
Let’s take a look at what serverless computing means. We’ll discuss how it can improve your infrastructure, how it affects your testing environments, and how it can lower your IT costs.
What Is Serverless Architecture?
Serverless architecture still has servers, but you don’t have to maintain them. Your cloud provider manages the systems that run your applications for you.
So, serverless simplifies deploying and maintaining applications in the cloud. Your development teams write discrete pieces of business logic that run on the cloud platform. These pieces are often called “functions,” giving serverless its other common name: “functions as a service,” or FaaS. Instead of maintaining servers, you deploy these functions to a cloud platform.
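To make the idea concrete, here is a minimal sketch of what such a function might look like. It follows the shape of an AWS Lambda handler (an event payload plus a runtime context), but the event fields and the greeting logic are purely illustrative assumptions:

```python
import json


def handler(event, context):
    """A minimal, hypothetical serverless function.

    The platform invokes this with an event payload and a runtime
    context; the servers, scaling, and routing are the provider's
    problem, not yours.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You deploy this one function, and the platform runs a fresh instance of it for each burst of incoming events.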
MartinFowler.com provides a couple of examples of serverless architectures in a post from last year. Both cases share one common attribute. They break down the interaction with the client application into functions that run on a serverless platform. The pet store example replaces a monolithic server with microservices. The services respond to client requests and scale on demand. The advertising application uses serverless functions to respond to clicks. The system models user clicks as asynchronous events. It also scales up based on traffic, like the pet store. As you can see from both examples, serverless and microservice architecture are well-suited to each other.
We’ve mentioned the major platforms: AWS Lambda, Azure Functions, and Google Cloud Functions. There are many others, including webtask.io, IBM Bluemix, and hook.io.
When you build an application around serverless, you tend to rely on the cloud platform’s managed services. This makes sense. Why set aside application servers but maintain them for a database, storage, or other infrastructure? But, this direction has consequences.
If you’re migrating applications from an on-premises platform, there may be compatibility issues. You’re limited to the capabilities of the managed services. And, of course, you’re handing a large piece of your application to a third-party vendor.
What Does Serverless Mean for Testing?
Testing serverless is a bit of a mixed bag. Unit and local testing serverless applications should be simple and straightforward. A well-designed serverless application consists of isolated functions with discrete inputs and outputs. Designing and implementing tests for each piece, hopefully as you create them, is easy.
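Because each function is an isolated unit with a plain input and output, a unit test is just a function call and an assertion. The following sketch uses a hypothetical order-validation function; the event shape and field names are assumptions for illustration:

```python
import json


def create_order(event, context=None):
    """Hypothetical serverless function: validate an order payload."""
    body = json.loads(event["body"])
    if not body.get("items"):
        return {"statusCode": 400, "body": json.dumps({"error": "empty order"})}
    return {"statusCode": 201, "body": json.dumps({"item_count": len(body["items"])})}


def test_rejects_empty_order():
    response = create_order({"body": json.dumps({"items": []})})
    assert response["statusCode"] == 400


def test_accepts_valid_order():
    response = create_order({"body": json.dumps({"items": ["leash", "collar"]})})
    assert response["statusCode"] == 201


# These run locally, with no cloud account involved at all.
test_rejects_empty_order()
test_accepts_valid_order()
```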
But reliance on managed services can complicate integration testing. Should your tests use the external systems or is cost a prohibitive factor? Running extra instances of managed services can be expensive. Is testing against “stubbed” databases and storage enough?
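One common middle ground is to inject the managed-service client into the function so tests can substitute a stub. This sketch assumes a hypothetical `save_pet` function and a table client modeled loosely on a DynamoDB-style `put_item` call; the names are illustrative, not a real API contract:

```python
import json
from unittest import mock


def save_pet(event, context=None, table=None):
    """Hypothetical function that writes a record to a managed NoSQL
    table. The table client is a parameter, so tests can pass a stub
    instead of a real cloud resource."""
    pet = json.loads(event["body"])
    table.put_item(Item=pet)
    return {"statusCode": 201, "body": json.dumps({"saved": pet["name"]})}


# Stub the managed service instead of calling the real cloud API.
fake_table = mock.Mock()
response = save_pet({"body": json.dumps({"name": "Rex"})}, table=fake_table)

fake_table.put_item.assert_called_once_with(Item={"name": "Rex"})
assert response["statusCode"] == 201
```

This verifies your function's behavior cheaply; it doesn't replace at least some integration tests against real cloud resources.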
Even if you decide to go “all-in” and test against cloud services, it isn’t always easy. You have to make sure that your tests don’t interfere with production resources. So, you need to set up your CI/CD system to deploy to different resources. Maybe you need a separate cloud account or a virtual private cloud.
But, this doesn’t mean that serverless is a net loss for testing. Orchestrating test environments requires care and planning. But, you can rest assured that Amazon and Microsoft are better at testing their managed services than you are. You can focus your testing efforts on your application’s behavior instead.
What Does Serverless Mean for IT?
Serverless architecture means deploying code to someone else’s servers. For some organizations, that’s already a problem. Serverless has its drawbacks, and there’s no such thing as a free lunch, but it’s close. It’s at least as good as getting a free drink or extra fries.
You don’t have to maintain backend servers. You don’t need to dedicate resources to managing networking, operating system patches, filesystems, and the other overhead associated with them. Plus, as mentioned earlier, moving to managed services tends to come along with serverless. So, maintaining those servers goes away too.
But maintaining server infrastructure isn’t limited to keeping them running and up-to-date, is it? You need scalability, too. Your application probably experiences different loads depending on the time of day, week, or year. How do you plan for that?
Do you size your system for the highest expected load and pay for servers that sit unused? Whether you’re running your system on-premises or in the cloud, unused system capacity is wasted money. But, if you can’t manage capacity in real time, you need to either size for the worst-case scenario or risk not being able to serve client requests.
Do you take on the expense of managing an orchestration system that can spin servers up and down based on load? Containers and management infrastructure such as Kubernetes can scale your app based on demand. This removes the need for idle systems that you run in anticipation of demand. But, this capability doesn’t come for free. You need to have the expertise to design and implement a dynamic infrastructure.
Serverless takes care of scaling your application infrastructure for you. The platform creates and destroys instances of your code as they are needed. So, if your application needs to accommodate a heavy load for a brief period, the platform creates the resources required and destroys them when they’re no longer needed. You pay for exactly what you need, only when you need it, and you don’t have to build and run the infrastructure.
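A back-of-envelope calculation shows why pay-per-use can be attractive for bursty workloads. Every number below is an illustrative assumption, not a current vendor price; plug in your own traffic profile and your provider's actual rates:

```python
# Illustrative cost sketch: all prices below are assumptions.
requests_per_month = 3_000_000
avg_duration_s = 0.2            # 200 ms per invocation
memory_gb = 0.5                 # a 512 MB function

price_per_million_requests = 0.20   # assumed request price
price_per_gb_second = 0.0000167     # assumed compute price

gb_seconds = requests_per_month * avg_duration_s * memory_gb
serverless_cost = (
    requests_per_month / 1_000_000 * price_per_million_requests
    + gb_seconds * price_per_gb_second
)

always_on_server = 70.0         # one modest VM running 24/7 (assumed)

print(f"serverless: ${serverless_cost:.2f}/month vs idle-capable server: ${always_on_server:.2f}/month")
```

With these assumed numbers, three million short invocations cost a few dollars a month, while the idle-capable server costs the same whether traffic arrives or not. The comparison flips for sustained, heavy, predictable load, which is why this math is worth doing per workload.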
Serverless Simplifies Managing Your Infrastructure
With serverless architecture, you focus your efforts on your application. You deploy your business logic as one or more functional pieces of behavior, and the platform takes care of running and scaling it based on user demand. You can save both money and time by offloading the responsibility of running and maintaining infrastructure to the cloud platform. But, serverless is a significant shift. You’ll need to adjust how you manage your use of cloud services. Your test environments will need to either move to the cloud or at least embrace it.
Serverless has been a hot topic for a long time, and for good reasons. You should take a close look at it and see if it can help you build better applications.
Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and financial information exchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective!)