The field of serverless (SLS) recently emerged as a new cloud computing paradigm, allowing developers to efficiently build and deploy scalable applications without managing any underlying infrastructure. It is rapidly gaining momentum within the cloud industry and is even regarded as the next fundamental evolution in cloud-native software development, as it further abstracts away the hardware and operational concerns of cloud-based software engineering. Developers can focus on the business logic rather than worry about managing infrastructure resources.
In addition to these and other development benefits, practice and research show how this new paradigm has made software architects rethink application design and technical architectures. In this blog post, serverless architectures and their special features are presented and described. Several serverless architecture patterns outlined here should give you a first starting point for your concrete implementation.
Within the last year I have been able to study the topic intensively from both the academic side (as part of my master’s thesis) and the application-oriented side as a developer at MaibornWolff. In this blog post, I want to share and summarize the insights I gained about serverless architectures and their key properties.
The IEEE Recommended Practice for Architectural Description for Software-Intensive Systems defines software architecture as follows:
"Software architecture is the “[…] fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution."
It is recognized as a critical part of the software design process, as multiple architecture decisions and their inherited tradeoffs directly affect future performance, quality, and maintainability. Missing or poor architecture is likely to result in slow and costly software, and adding new features later will be expensive. Therefore, software architecture is often the primary step towards designing a software system.
This is (also) particularly true for cloud native software, which refers to applications developed specifically to run within a cloud environment and to take full advantage of cloud computing platforms and their services. Key aspects of such application architectures are a strong focus on an open-source software stack and a microservice-based architecture. A decomposition of a monolithic application into smaller, specialized, independent and event-driven microservices is recommended. It enables better scalability properties and advantages in achieving reliability.
Hardly any other architectural pattern has received more attention in recent years than the concept of microservices. Instead of deploying a large software system in one piece (deployment monolith), it gets modularized into autonomous services. The concept gains its appeal also from its characteristic advantages when deployed in a cloud environment, such as independent development and deployment, technology freedom and the possibility of autonomous scaling for each service. These characteristics result primarily from the reduced service size and the elimination of hard dependencies among services, which tempts us to push this decomposition even further, into ever smaller services.
In the literature, a service that has been split to the extreme, where each service component only handles one particular operation for one specific business domain with minimal resource allocation, is also referred to as a nanoservice.
The term was particularly influenced by Eberhard Wolff, who also adds that new technological approaches are needed to handle such nanoservices: if services become too fine-grained, the communication, coordination, scaling and infrastructure effort will rise and can have a negative impact on the overall system.
Serverless technology can help to overcome this trade-off and can enable further downsizing of microservices. In fact, in order to be able to use function-as-a-service platforms efficiently, microservices even need to be broken down further to the level of functions and events. That is demonstrated schematically in the following figure.
Comparison between Monolith, Microservice- and Serverless Architectures
FaaS technology addresses the shortcomings of the microservice model, as the infrastructure and scaling overhead is delegated to the platform provider. Additionally, most FaaS platforms offer function composition capabilities, so even the communication and coordination efforts between functions are simplified. For example, the OpenWhisk programming model introduces sequences and conductor actions that take over most communication and coordination tasks.
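To illustrate the idea behind such composition, here is a minimal sketch (not the OpenWhisk SDK) of how a sequence chains actions so that each action’s result becomes the next action’s input; the `validate` and `greet` actions are hypothetical examples.

```python
def validate(params):
    # Hypothetical first action: reject requests without a "name" field.
    if "name" not in params:
        raise ValueError("missing name")
    return params

def greet(params):
    # Hypothetical second action: build the response payload.
    return {"greeting": f"Hello, {params['name']}!"}

def sequence(*actions):
    """Chain actions so the platform, not the functions, handles coordination."""
    def composed(params):
        for action in actions:
            params = action(params)
        return params
    return composed

# The composed pipeline behaves like a single invocable action.
pipeline = sequence(validate, greet)
```

On a real platform the chaining is declared in deployment configuration rather than in code, but the data flow is the same.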
Despite this, the rising number of cohesive, fine-grained functions and their interdependencies further increases overall system complexity. The comparison figure above illustrates this shift. A serverless architecture requires considerably more effort and tooling to deal with this complexity, especially to keep an overview of a rapidly evolving system.
Another upcoming blog post will continue to address this issue — stay tuned!
Serverless Architecture Properties
An architecture which is mainly built upon functions hosted on a FaaS platform and complemented by various BaaS services such as databases, message queues, API gateways and storage options can be described as a serverless architecture.
FaaS can be used to enhance existing microservice architectures by providing “glue” code between services, i.e., connecting multiple services and forwarding events or messages, or to efficiently replace rarely used, event-based functionality; this is often referred to as a hybrid model in the literature. On the other hand, fully serverless architectures arise, where the whole application is built upon serverless functions to reap the full benefits of serverless and to eliminate concerns about the infrastructure.
Such architectures especially need to cope with the characteristics of FaaS, such as being event-driven and stateless. FaaS is by definition event-driven, and this property is therefore also inherited by serverless architectures. Architects should (and need to) strongly promote this paradigm so that services produce, detect, consume, and respond to events.
Asynchronous function calls
Asynchronous function calls in particular are well suited and preferred. Otherwise, hidden double billing is a common pitfall: the requesting function is also billed while it simply waits for a response. This becomes even more significant when cold starts occur!
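The double-billing effect can be made concrete with a small back-of-the-envelope sketch; the durations and the per-millisecond rate below are made up for illustration.

```python
RATE_PER_MS = 0.000002  # hypothetical price per millisecond of execution

def billed_cost_sync(call_ms, work_ms):
    # Function A waits synchronously on B: A is billed for its own work
    # plus the entire wait, and B is billed for its work as well.
    caller_ms = call_ms + work_ms
    return (caller_ms + work_ms) * RATE_PER_MS

def billed_cost_async(call_ms, work_ms):
    # A only emits an event and returns; B is billed independently.
    return (call_ms + work_ms) * RATE_PER_MS
```

With a 10 ms call overhead and 500 ms of downstream work, the synchronous variant is billed for the downstream duration twice, which a cold start on the callee would inflate further.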
Fine service granularity
Another important consideration regarding cold starts is the service granularity. The more individual FaaS functions exist in the system, the more cold starts occur due to their individual life cycles. This is a trade-off against the previously discussed advantages of multiple granular services.
An essential characteristic of serverless architectures is the implementation of stateless services.
FaaS functions have serious limitations in handling state that needs to be persistent. Any such state must be externalized from the service itself. That’s why functions within a serverless architecture typically make use of BaaS databases or network storage. This needs to be considered especially when it comes to caching.
Pay for what you use
An additional consideration in serverless architectures is operational cost. The basis on which operating costs are incurred is fundamentally different from traditional cloud architectures. Instead of being billed for running VMs (regardless of their utilization), each single function execution is billed by its duration in combination with its reserved memory (pay per use). Architectural considerations towards cost optimization therefore concern each component individually and in a more granular way within a serverless architecture. Thus, the factor of cost is present in every development decision.
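This billing model can be sketched as a simple formula: cost scales with the product of reserved memory and execution duration (often expressed in GB-seconds). The rate below mirrors the order of magnitude commonly published by providers, but treat it as illustrative, not authoritative.

```python
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative rate, not a quoted price

def invocation_cost(duration_ms, memory_mb):
    # Cost per invocation = GB-seconds consumed * price per GB-second.
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND
```

For example, a function with 1024 MB reserved memory running for one second consumes exactly one GB-second; halving the memory or the duration halves the cost.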
No central arbiter
Mike Roberts introduces another important aspect. In serverless architectures there is no central arbiter of processes. “Instead we see a preference for choreography over orchestration, with each component playing a more architecturally aware role […]”, he concludes. More independent responsibility lies directly in the various services accordingly and thus also in the individual development teams.
However, additional limitations of common FaaS platforms, such as runtime duration restrictions or memory limits, need to be strongly considered in every architectural decision.
Serverless Architecture Patterns
Considering the special characteristics of serverless architectures and the crucial technical and conceptual challenges identified in dealing with FaaS, patterns of recommended practice are valuable for broader adoption. The developer community is already active in publishing its experiences and architectural approaches in blog posts, but these are often tailored to a specific use case. Nevertheless, some of these sources are considered in this section to emphasize practical reflections. Some fairly solid patterns with a full focus on serverless architectures were selected and are explained in more detail below. When using individual patterns, their disadvantages must also be taken into account: there are rarely advantages without a drawback on the other side.
Some rare academic research tries to cluster and summarize recurring patterns using mixed-method empirical studies. One example is a study by the SPEC RG Cloud Working Group, in which 89 serverless use cases were observed and conclusions were drawn about general types of FaaS usage. Taibi et al. also published a multivocal literature review and identified a list of fairly consistent patterns categorized into the five groups “Orchestration and Aggregation”, “Event Management”, “Availability”, “Communication” and “Authorization”.
The cold start behavior of function calls, which can lead to high response latency, is an important aspect to consider in serverless architectures. Cold starts occur because FaaS platforms discard the runtime environment of unused functions to better utilize resources. The function warmer architecture pattern avoids this by regularly triggering the desired functions, preventing their runtime environments from being discarded.
Cold starts are avoided in this way, but it must be stressed that every call also incurs additional costs. This pattern is also called function pinging. The following figure illustrates such an implementation by using a cronjob to keep certain functions warm.
Function Warmer pattern
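A minimal sketch of a warmed handler could look like this; the event shape (`"source": "warmer.cron"`) is an assumption, and the `time.sleep` merely stands in for a cold start penalty.

```python
import time

warm_since = None  # None means the runtime would start cold

def handler(event):
    global warm_since
    if warm_since is None:
        time.sleep(0.05)  # stand-in for the cold start penalty
        warm_since = time.monotonic()
    if event.get("source") == "warmer.cron":
        # Ping from the scheduled warmer: keep the runtime alive,
        # but short-circuit before any business logic runs.
        return {"warmed": True}
    return {"result": f"processed {event.get('payload')}"}
```

The important detail is the early return on ping events: the warming call should keep the container alive without touching databases or downstream services.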
Oversizing of functions is also an observed pattern. Most FaaS platforms allocate the CPU power of a function linearly in proportion to its memory configuration. Therefore, if the computing power is insufficient, the only way to get more CPU performance and speed up the computation is to increase the memory allocation, even if memory is not a bottleneck.
However, as the costs are mainly made up of both the execution duration and the amount of allocated memory, a suitable balance needs to be found in order not to drastically increase the total costs.
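The balance point follows directly from the billing formula: if doubling the memory (and thus the CPU share) halves the duration, the GB-second cost stays the same, so oversizing only pays off while the workload is actually CPU-bound. The numbers below are illustrative.

```python
def cost(duration_ms, memory_mb, price_per_gb_s=0.0000166667):
    # Illustrative GB-second billing, as discussed above.
    return (memory_mb / 1024) * (duration_ms / 1000) * price_per_gb_s

# Assumed: doubling memory doubles CPU and halves a CPU-bound duration.
baseline = cost(duration_ms=800, memory_mb=512)
oversized = cost(duration_ms=400, memory_mb=1024)
```

Under that assumption the two configurations cost the same but the oversized one responds twice as fast; once extra memory no longer shortens the duration, the cost rises linearly with no benefit.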
Instead of using a dedicated API gateway, which might be cumbersome to configure, a dedicated function is implemented that receives all requests and forwards them based on their payload using composition techniques. The exposed interface gets simplified this way. However, double billing will occur, as the routing function is active until the target function returns.
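A minimal sketch of such a router might look as follows; the action names and handlers are hypothetical.

```python
def create_order(payload):
    # Hypothetical target function.
    return {"status": "created", "order": payload["item"]}

def cancel_order(payload):
    # Hypothetical target function.
    return {"status": "cancelled", "order": payload["item"]}

ROUTES = {"create": create_order, "cancel": cancel_order}

def router(event):
    handler = ROUTES.get(event.get("action"))
    if handler is None:
        return {"status": "error", "message": "unknown action"}
    # On a real FaaS platform the router keeps running (and being billed)
    # until this call returns -- the double-billing drawback noted above.
    return handler(event["payload"])
```

The single `router` function is the only exposed entry point; everything behind it stays internal.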
Similar to the router pattern, the aggregator consists of one dedicated function. Instead of sharing multiple endpoints, only this aggregator function gets exposed. It calls each required service separately, aggregates all results and returns a single response to the client.
However, the architect must be aware that the aggregator function represents a single point of failure, and double billing will also occur. The aggregator pattern can be seen as an extension of the router pattern.
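A sketch of the aggregator could look like this; the three “services” are stand-ins for separately deployed functions or BaaS endpoints.

```python
def user_service(user_id):
    # Hypothetical backend function.
    return {"name": f"user-{user_id}"}

def order_service(user_id):
    # Hypothetical backend function.
    return {"orders": [101, 102]}

def billing_service(user_id):
    # Hypothetical backend function.
    return {"balance": 42.0}

def aggregator(user_id):
    # The single exposed function: call each service, merge the results,
    # and return one consolidated response to the client.
    response = {}
    for service in (user_service, order_service, billing_service):
        response.update(service(user_id))
    return response
```

Because the aggregator waits on every downstream call, its billed duration covers all of them, which is exactly the double-billing drawback mentioned above.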
Composition techniques are used to circumvent the platform timeout threshold. Practitioners split up functions and chain them to prolong the maximum execution duration defined by the FaaS platform. A function passes the initial parameter and preliminary result to the next one, until the last one terminates. This pattern will lead to strong coupling between the chained functions and increase complexity.
Function Chain Pattern
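The chaining idea can be sketched as follows; the `chain` loop models the platform re-invoking the next link with the accumulated state, so no single invocation exceeds the duration limit.

```python
def step(state):
    # One link of the chain: process at most one chunk of the workload
    # and hand the preliminary result to the next invocation.
    work = min(state["chunk"], state["remaining"])
    state["done"] += work
    state["remaining"] -= work
    return state

def chain(state):
    # Models successive function invocations; on a real platform each
    # iteration would be a separate, freshly billed function call.
    while state["remaining"] > 0:
        state = step(state)
    return state

result = chain({"done": 0, "remaining": 10, "chunk": 4})
```

The passed-along `state` dict makes the coupling visible: every link must agree on the exact shape of the intermediate result, which is the complexity cost this pattern incurs.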
Fan-out and Fan-in
Like the function chain variant, the Fan-out and Fan-in pattern extends the execution duration threshold. However, instead of sequential function calls, the workload is divided into multiple parallel processes. The high scalability of FaaS is exploited and functions are invoked in parallel. A storage service is often used to collect the results of the individual processes, and another function is finally triggered for consolidation.
With this pattern the total computation time is reduced by making use of parallelism and double billing is avoided. Nevertheless, it can only be applied to workloads that can be parallelized.
Fan-out and Fan-in pattern
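A compact sketch of the pattern, using a thread pool to stand in for parallel function invocations and a plain list to stand in for the intermediate storage service:

```python
from concurrent.futures import ThreadPoolExecutor

def worker(chunk):
    # Stand-in for an expensive but parallelizable task.
    return sum(chunk)

def fan_out_fan_in(data, workers=4):
    # Fan-out: split the workload and invoke the workers in parallel.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(worker, chunks))
    # Fan-in: consolidate the partial results (the role a storage service
    # plus a final trigger function typically plays on a FaaS platform).
    return sum(partials)
```

Since each worker is billed only for its own slice, no function waits on another, which is why this pattern avoids double billing.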
A very commonly used pattern to handle the stateless property of FaaS is called externalized state. External storage services, such as Redis as a key-value store, are used to save and share state across function activations. Developers need to be aware of a substantial latency overhead and high coupling.
Externalized State pattern
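A minimal sketch, where a plain dict stands in for an external store such as Redis: each handler invocation is stateless and reads/writes the shared store instead of keeping local state.

```python
store = {}  # stand-in for Redis or another BaaS key-value store

def counter_handler(event):
    # Each invocation is stateless: read the externalized state,
    # update it, and write it back for the next activation.
    key = event["session"]
    count = store.get(key, 0) + 1
    store[key] = count
    return {"session": key, "count": count}
```

On a real platform every `get`/`set` is a network round trip, which is the latency overhead the pattern warns about.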
Read-heavy Reporting Engine (Caching)
This pattern focuses on services with read-intense workloads and helps to overcome the downstream limitations and to reduce resulting latency by using a cache.
Eduardo Romero proposes creating materialized views of frequently used data in databases. Others recommend storing common responses keyed by their function input using BaaS caching services such as AWS ElastiCache.
Read-heavy Reporting Engine pattern
Serverless architectures can bring great benefits to developers, architects and software application owners. The argument of cost reduction is promising and development can be streamlined. However, the challenges of serverless need to be known by the architect and considered carefully. There are already good practices and early design patterns that architects can use for orientation. Besides all the benefits, this novel architecture style introduces more complexity into a system, and there is a clear demand for better observability and visualization.
The OpenWhisk Visualizer (OWVIS) project is a step towards coping with this increasing complexity. OWVIS is a metaphor-based visualization tool for Function-as-a-Service architectures deployed on Apache OpenWhisk, with the ability to propose improvements to a FaaS architecture with reference to established serverless architecture patterns.