Serverless computing (SC) isn’t just a hot topic in IT nowadays. It’s also a major force in development, deployment, and the business world. From harnessing the power of global computing resources to extensibility and flexibility, its applications are diverse and seemingly limitless. SC increases velocity and responsiveness, making it easier to implement use cases that older models couldn’t support. But unlike some other areas in IT, there’s more substance than hype to SC. And as people’s interest in it grows, so does its importance.

Serverless computing (SC) in a nutshell

SC is an architecture in which the cloud provider is responsible for setting up the environments needed to run your applications at scale. The provider also manages and maintains that infrastructure and recovers it from failures. Because SC adds a layer of abstraction over the hardware running the software, a developer, DevOps engineer, or company deploying an application to the cloud doesn’t have to worry about which servers the application runs on or how to manage them. As developers, we like that feature very much. But perhaps SC’s main selling point for businesses is that pricing is based on the resources an application actually consumes, rather than on pre-purchased physical units executing the work. With such a feature, it’s easy to see why more and more businesses are embracing SC. But to really understand why it’s so useful, let’s take a look at its historical development.

Bare metal

The first evolutionary step dates back to the 1960s, long before the cloud model, when developers and systems administrators had to prepare physical servers for software in a way that is rarely done nowadays. Running and maintaining software properly required a great deal of care and work. Installing operating systems and device drivers, allocating memory, disk, and processor resources, and then maintaining it all cost systems administrators many nights of lost sleep. As if that weren’t enough, software tightly bound to a physical server in a building had other downsides, such as high vulnerability to environmental factors like power outages and natural disasters.

Virtual machines

During the 1970s, virtual machines were introduced. In this paradigm, developers no longer deployed their software to specific hardware; instead, they used simulated servers. This eased the deployment process, as minor hardware differences were no longer a problem and there was much more flexibility around updates. It also meant that software was no longer so tightly coupled to a specific piece of hardware: in the case of hardware failure, virtual machines could simply be migrated elsewhere. Still, the model shared several of the downsides of bare metal.

Containerization

Next came containerized deployment environments. The idea emerged in the 1980s, but development only got serious after 2000. Systems administrators would partition an operating system so that more than one application could run on a single machine without the applications interfering with one another. At this stage, tools for easier environment initialization and deployment maintenance started to become popular; development time decreased while the number of deployment cycles increased. All of this led to higher development and deployment quality standards.

The paradigm shifts

We can see that the deployment of client-server applications started with isolated and irreplaceable hardware, progressed to isolated but replaceable hardware, and ended with partitioned operating systems running different applications separately. While the models used at each stage varied in their nature and benefits, the common denominator was always that software depended, in one form or another, on hardware, and that hardware therefore required a lot of attention. But then a new, revolutionary model was developed, one that marked a paradigm shift in computing: the cloud.

The dawn of serverless computing

SC seriously stepped onto the scene around 2008, when Google launched the beta version of Google App Engine. With this engine, developers could deploy their programs to the cloud without worrying about server provisioning details. In the early years of serverless, pioneers focused on development and deployment while convincing organizations to go all-in on the model. Over time, various serverless challenges were tackled and improved upon, expanding serverless capabilities into testing, monitoring, security, and application lifecycles. In 2010, Microsoft released Microsoft Azure, a cloud computing service that builds, tests, deploys, and manages applications through Microsoft-managed data centers. Later, other major providers such as IBM and AWS, as well as many smaller operators, followed suit.

Fast-forward to the present day, and SC has become the tool of choice for many organizations. But as with other major technologies, there are misconceptions about its use. Because SC runs deployed applications on servers managed by a third-party provider, there are usually fewer server worries for DevOps teams and other parts of an organization. In spite of this benefit, the word serverless still makes some people skeptical about SC, or leads them to mistakenly believe that SC doesn’t involve servers at all. Perhaps the serverless part of the term is a misnomer. But the take-home message for anyone new to the technology is that there are fewer server worries for you and your organization. It also means that, thanks to automatic scaling, there are fewer idle server resources up and running that you need to pay for.

Another misunderstanding occurs when the terms serverless and function-as-a-service (FaaS) are used interchangeably. Both are related to the cloud, but they are not synonyms. Serverless is closer to a definition of cloud architecture, while FaaS is a subset of cloud services: FaaS is one way of implementing a serverless architecture, and it can be just one piece of a larger serverless system. FaaS provides the infrastructure and services needed to deploy and manage individual application functions, without the complexity of building and maintaining the infrastructure typically associated with a full application.
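
To make the FaaS idea concrete, here is a minimal sketch of a single function written in Python, following the AWS Lambda handler convention. The handler signature matches Lambda’s, but the event shape and the greeting logic are illustrative assumptions rather than a prescribed implementation; the point is that the provider invokes this code once per event and handles provisioning, scaling, and billing around it.

```python
import json


def handler(event, context):
    """Minimal FaaS-style function in the AWS Lambda handler convention.

    The platform calls this once per event (e.g. an HTTP request routed
    through an API gateway); we never provision or manage the server it
    runs on.
    """
    # Illustrative assumption: the event carries a JSON body with a "name" field.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # Return an HTTP-style response; the gateway translates it for the caller.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deployed behind an HTTP trigger, such a function is one independently scalable piece of a serverless system; a full application would compose many functions like it with managed queues, storage, and databases.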

Wrap-up

Having reviewed its history, as well as its general workings, it’s clear that SC can make a world of difference for some organizations. But it’s also worth remembering that what works for some business cases may not work for others. SC isn’t meant to completely replace the computing models and practices that preceded it; rather, it opens up new possibilities alongside them. Here’s a recap of some of SC’s main benefits.

Lower costs

One of the biggest constraints on quality software development is usually its cost. Businesses are always searching for ways to be more cost-effective, investing in the technology stack that helps them maximize the use of their resources. SC helps here through its pay-per-use pricing: you pay for the resources your application actually consumes rather than for pre-purchased capacity sitting idle.

Focus on business

This benefit carries a lot of weight, since several of the others feed into it. With SC, costs and time to deploy decrease while the number of deployment cycles increases, leaving more resources for work on business workflows. And with new serverless capabilities, use cases that were once impractical are now ready for implementation, further enriching the user experience.

Resource elasticity

Cloud providers manage load balancing on the servers for you, taking that responsibility and work off your hands. They build on the scalability paradigm to deliver resource elasticity: cloud-native systems that inherently scale both up and down as demand changes.
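
Most of this elasticity requires no code at all: the provider simply runs more or fewer copies of a function as traffic changes. If you want guardrails around that behavior, providers typically expose simple settings. As a sketch, assuming an existing AWS Lambda function named orders-api (the name is purely illustrative), the boto3 SDK can cap how far the platform scales it out:

```python
import boto3

# Assumptions: AWS credentials are configured, and a Lambda function named
# "orders-api" already exists; both are illustrative, not from the article.
lambda_client = boto3.client("lambda")

# Reserve (and thereby cap) the function's concurrent executions.
# The provider still handles load balancing and instance lifecycle;
# this only sets an upper bound on how far the function scales out.
lambda_client.put_function_concurrency(
    FunctionName="orders-api",
    ReservedConcurrentExecutions=50,
)
```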

 
