The Serverless Revolution

Steven Yue
13 min read · Jul 22, 2019


Cloud Computing at its finest.

What is Serverless

I came across the term Serverless Computing a while ago while browsing Medium. It didn’t make sense to me at the time, because the modern internet wouldn’t even exist without servers.

Recently, the term came to mind again while I was reading about how to set up IoT infrastructure on AWS. The article talked about AWS Lambda, which is a Serverless platform. That’s when I began to understand that Serverless basically means offloading all the server hosting and setup work to infrastructure providers like AWS.

Imagine we want to set up a web service with an endpoint /foo where users can query today’s temperature. Traditionally, we would first set up an EC2 instance on AWS running Linux. Then we would use a simple server framework such as Flask to define the /foo endpoint handler, which requests the temperature from some public weather database and returns the value. Finally, we would build the server application, deploy it to our EC2 instance, and spin it up.
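To make this concrete, here is a minimal sketch of that traditional handler in Flask. The weather API URL and its response shape are hypothetical placeholders, not a real service:

```python
# app.py: a minimal sketch of the traditional server approach.
# The weather API URL and its response shape are hypothetical placeholders.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

WEATHER_API = "https://api.example-weather.com/today"  # hypothetical endpoint

@app.route("/foo")
def foo():
    # Fetch today's temperature from the (hypothetical) public weather database.
    resp = requests.get(WEATHER_API, timeout=5)
    resp.raise_for_status()
    return jsonify({"temperature": resp.json()["temperature"]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

This process runs all day on our EC2 instance, whether anyone calls /foo or not.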

AWS Sample Weather Application

Now with Serverless, we can simply create an AWS Lambda (or a GCP Cloud Function) that acts as the /foo handler. We can use Lambda’s online editor to write a Python function that queries the weather API/database and returns the temperature. Then we use AWS API Gateway to route the endpoint www.my-website.com/foo to the Lambda. The rest is all taken care of by AWS — setting up the server, scaling up the system, load balancing, etc.
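In contrast, the same handler as a Lambda might look roughly like the sketch below. The weather API is again a hypothetical placeholder, and I use urllib because the stock Lambda Python runtime does not bundle the requests library:

```python
# lambda_function.py: sketch of the same /foo handler as a Lambda.
# API Gateway (proxy integration) invokes lambda_handler on each HTTP request.
# The weather API URL is a hypothetical placeholder.
import json
import urllib.request

WEATHER_API = "https://api.example-weather.com/today"  # hypothetical endpoint

def lambda_handler(event, context):
    # No server to manage: AWS runs this function only when /foo is hit.
    with urllib.request.urlopen(WEATHER_API, timeout=5) as resp:
        temperature = json.load(resp)["temperature"]
    return {
        "statusCode": 200,
        "body": json.dumps({"temperature": temperature}),
    }
```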

What’s amazing about Serverless is that your “server” does not exist when no one is using the endpoint. When someone hits the endpoint with an HTTP request, AWS immediately provisions a “server” with your customized Lambda attached to handle the request. Therefore, you don’t pay for server costs when no one is using your service. You only pay for the Lambda requests, which costs as little as a few dimes per million requests.

Another benefit we get from Serverless is maintenance. Normally, when we run our own monolith servers and micro-services, we need to maintain them properly and make sure they are always online. Orchestration and provisioning tools like Kubernetes and Terraform make this much easier now. But still, why would I want to manage all the deployed services myself if the platform can take care of them?

AWS also provides a lot of useful services for building Serverless applications:

- DynamoDB is Amazon’s NoSQL database that enables fast data access.
- AppSync provides a GraphQL endpoint for querying multiple resources on AWS — you can customize the resolvers to point to a Lambda function, a DynamoDB table, or your own micro-service.
- Cognito handles user login, authentication, etc. — you can also set up Google/Facebook SSO with it.
- Amplify helps you develop your React application and bridges it seamlessly with AppSync and Cognito.
- AWS IoT allows browsers to use MQTT over WebSocket to pub/sub information through a live broker; on the other end of the broker there might be a Lambda, a DynamoDB resolver, an SNS/SQS message queue, or even your own micro-service.

Photo by Mohammed Ali on Unsplash

How to achieve Serverless

Despite all the benefits we can get from Serverless, the transition itself is a big problem. Most existing solutions require some kind of server hosting to function, and we tend to build monolith applications that handle a lot of things at the same time. To go Serverless, we need to convert every server functionality into a stateless Lambda function — which means we can no longer take global variables, shared resources, etc. for granted.

Serverless is about breaking the system down into functional pieces. I browsed a few Medium articles that talk about building Serverless blogs or chat applications. The essence of building these applications is to first reduce the complicated application to a simple graph that models the flow of data.

Simple Applications

To put this into practice, let’s build a simple Serverless blog.

If we want to build just a normal blog, what are its key pieces? A frontend, a backend, and a database. The frontend displays the blog articles, the database stores the content, and the backend responds to frontend requests with data from the database. We also need a few other things to make sure everyone can access our blog — a custom domain name that routes to the hosted address, and a hosting service (like AWS) that lets us do all of the above.

A modern approach to building web applications is to make the frontend as static as possible and move all the logic into the backend. We don’t really need a dedicated server for static pages, because tools such as GitHub Pages and S3 already provide hosting for static content.

The static page itself usually contains just the View components and some metadata of the web app. The real magic happens when the user loads the static page. The page opens a connection to its pre-baked backend server address. When the user interacts with the components on the page, the handlers in the backend return the corresponding data (from the DB) to the frontend, and JavaScript then inflates the view using the returned data.

Let’s assume that our blog has three very basic endpoints — /posts for querying posts based on user_id, /users for querying all the users, and finally, / for the web page itself.

Let’s think about this — what are the actual dynamic pieces in this scenario? You will realize it’s nothing but the handlers. In fact, if we implemented those handlers in a different way, we wouldn’t even need a backend server to begin with. This is where Serverless Computing comes in handy: we can simply rewrite all the data handlers (handlers that perform data-related queries) as Lambdas.

Serverless Structure (AWS)

For example, let’s focus on the handler that maps to the endpoint /posts, which fetches blog posts by user_id. To make this data path Serverless, we first need to put the blog post data into a database (such as DynamoDB) that Lambdas can access. Then we define a Lambda function that does exactly the same fetching operation as our original handler — parsing the user_id and fetching the corresponding data through the DB connector. Finally, to map this function to the /posts endpoint, we use Amazon API Gateway to route the endpoint to the Lambda.
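A minimal sketch of that Lambda, assuming a hypothetical BlogPosts DynamoDB table with user_id as its partition key:

```python
# Sketch of the /posts handler as a Lambda backed by DynamoDB.
# The table name, key schema, and query-string shape are assumptions.
import json

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("BlogPosts")  # assumed table, partition key: user_id

def lambda_handler(event, context):
    # API Gateway (proxy integration) passes query-string parameters in the event.
    user_id = event["queryStringParameters"]["user_id"]
    result = table.query(KeyConditionExpression=Key("user_id").eq(user_id))
    return {
        "statusCode": 200,
        "body": json.dumps(result["Items"], default=str),
    }
```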

Pretty easy, right? In fact, we can convert the /users endpoint using the same paradigm. Finally, we map the / address to an S3 bucket that stores the static webpage, and our whole blog becomes Serverless.

Flowcharts / State Machines

Although a Serverless blog sounds simple to build, modern web applications, especially SaaS applications, are extremely complicated. They have multiple servers or micro-services that handle different things. On top of that, each server also maintains complex internal state that can affect how each handler works from time to time.

AWS Step Functions is a product that aims to tackle this challenge. Basically, it provides a flowchart that chains a bunch of simple Lambdas together, using their return values as the edges of the graph. Such flowchart graphs can be useful when performing tasks such as authentication.

AWS Step Functions

To capture more internal state and more complicated logic, we can use a special logic-flow construct called a State Machine. A State Machine follows a predetermined state-transition graph: the system changes its internal state based on its current state and a given input.
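As a concrete sketch, here is what a tiny authentication flow might look like in Amazon States Language, built as a Python dict and registered with boto3. The Lambda ARNs, state names, account IDs, and IAM role below are all hypothetical placeholders:

```python
# Sketch: chaining two Lambdas with a branch, in Amazon States Language.
# All ARNs, names, and the IAM role are hypothetical placeholders.
import json
import boto3

definition = {
    "StartAt": "CheckCredentials",
    "States": {
        "CheckCredentials": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:check-credentials",
            "Next": "IsAuthenticated",
        },
        "IsAuthenticated": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.authenticated", "BooleanEquals": True, "Next": "IssueToken"}
            ],
            "Default": "Reject",
        },
        "IssueToken": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:issue-token",
            "End": True,
        },
        "Reject": {"Type": "Fail", "Cause": "AuthenticationFailed"},
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="auth-flow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",  # hypothetical
)
```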

Serverless Solution Providers

Just as we rely on containerization technologies when building normal web services, we also need good abstractions when building Serverless applications.

Serverless (yes, same name :D) is a startup leading this field. It helps with deploying Serverless apps across multiple cloud service providers, so developers can work on their Serverless applications without worrying about whether they will be deployed on AWS or GCP.

Converting an existing web service to Serverless can be nontrivial and hard. However, I still believe this is the right direction to move in, because of its huge potential.

Future of Serverless Computing

The current state of Serverless Computing is still kind of a work in progress. Every cloud platform supports it, but not a lot of companies have adopted the Serverless paradigm. We still need years of work on infrastructure, tooling, and shared libraries to prove the value and efficiency of Serverless.

Serverless Computing could become the final stage of Cloud Computing.

Many years ago, before Cloud Computing was even a thing, we built our own server racks and hooked them up to the Internet. At that time we also needed to buy public static IP addresses from ISPs so other people could access our server. Soon after that, people discovered that building their own servers was not cost-efficient, so centralized server farms containing massive racks of machines emerged and became the mainstream.

AWS and GCP disrupted the market by virtualizing the servers entirely. Instead of getting a physical server rack, we get an “instance,” which is basically a virtualized resource with some computing power and network access.

The next step is getting rid of the virtualized instances. The only reason we still have those Linux instances in the cloud is that we are used to having something that resembles a “computer,” with a command-line interface, that can run any generic program. With the Serverless abstraction, however, we no longer need generic computing resources. All we need are machines that can run these functions, and big storage to hold the functions and computation graphs.

Graph-based System

Services written on a Serverless stack can be described by a master graph whose edges model the flow of data and whose nodes represent functions. I find it awfully similar to one of my previous posts.

In my previous article, I talked about the possibility of graph-based computing and how it impacts DevOps. It seems like with Serverless Computing, that dream can come true. If we can reduce all our web services to computation graphs and leverage a Serverless framework to deploy them, we might be able to reduce the overhead of every running service instance and maximize our Cloud Computing efficiency.

Reusable Components

Imagine that a large fraction of the world adopts the graph-based Serverless paradigm. What happens next? One soon realizes that there will be a lot of duplicated functionality: me building a Serverless blog will be almost no different from you building your own Serverless GitHub Pages site.

This is important, because if we can exploit the similarities between different web services, we can greatly reduce the size of our current cloud infrastructure. There could be organizations out there creating standardized Serverless functions that anyone can use. Users could also publish their customized Serverless functions to places like GitHub.

Eventually, every function just becomes a globally unique hash that we can store in some kind of master (or decentralized) hash table. Every program then becomes a graph of data flows and function nodes, where each node carries the hash that one can use to look up its functionality.
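Here is a toy sketch of the idea: functions stored and resolved by the hash of their source. A real system would need a distributed table, sandboxing, and versioning; the names below are my own illustration:

```python
# Toy sketch of content-addressed functions: each function's source is
# stored under its SHA-256 hash, and a "program" is just a graph of hashes.
import hashlib

registry = {}  # hash -> source; a stand-in for a global/decentralized table

def publish(source: str) -> str:
    # Store the source under its content hash and return the hash.
    digest = hashlib.sha256(source.encode()).hexdigest()
    registry[digest] = source
    return digest

def resolve(digest: str):
    # Look up the source by hash and compile it into a callable.
    namespace = {}
    exec(registry[digest], namespace)
    return namespace["handler"]

h = publish("def handler(x):\n    return x * 2\n")
print(h[:12], resolve(h)(21))  # the same hash always resolves to the same function
```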

If this sounds familiar, it should. This is how DApps work on Ethereum: decentralized apps contain resources stored on IPFS under a unique hash, so every application can be built up from just a handful of hashes.

No Language Barriers

Another great step toward this future is that Serverless Computing allows different programmers to collaborate on the same project even if they write in completely different languages. The graph itself already describes the flow of data and how the function nodes interact with each other, so everyone can work on their own node and the final product still comes together. This is rebuilding the Tower of Babel with computers, after “God” separated us with multiple programming languages long ago.

Serverless also allows organizations to hire engineers anywhere in the world. Engineers can work on their part of the graph remotely without touching the rest of the system. With such a powerful abstraction, we can truly become digital nomads and work from anywhere.

Localized Executions

One day, while playing video games, I suffered a lot from network latency. I was trying to play with my friend in LA, but the game server was located in Chicago. I kept thinking: it would be nice if the game server could magically form right between me and my friend, instead of every packet making a round trip to the Midwest.

With Serverless, this becomes a possibility. The game company could convert their game into a Serverless graph and distribute its servers across the country. When two peers connect, the system automatically finds the midpoint of their connection and allocates the corresponding “server” to execute the graph functions for their game.
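As a toy sketch, server placement could be as simple as picking the region that minimizes the worse of the two players’ latencies. The region names and numbers below are made up for illustration:

```python
# Toy sketch: pick the region minimizing the worse of two players' latencies.
def pick_region(latency_a: dict, latency_b: dict) -> str:
    # latency_x maps region -> measured RTT in ms for each player.
    return min(latency_a, key=lambda region: max(latency_a[region], latency_b[region]))

me = {"us-west-1": 12, "us-central-1": 48, "us-east-1": 70}      # player in LA
friend = {"us-west-1": 15, "us-central-1": 45, "us-east-1": 68}  # also near LA

print(pick_region(me, friend))  # -> "us-west-1", not a Chicago round trip
```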

Moreover, if we take a bolder approach, the game company could use decentralized computing resources — basically, asking people with idle computers to contribute their computing power to the network in exchange for crypto tokens. Once they have a decentralized network of peers providing compute, they can do the same thing described in the previous paragraph: localize the graph-function executions to the machines closest to the connection.

SaaSaaS — Software-as-a-Service as a Service

Forgive my poor nomenclature — but this is cool.

Currently, SaaS companies offer their software as a service. What this really means is that companies like Salesforce and Atlassian host their applications on their own servers (or on AWS servers they pay for), then charge users a monthly or yearly subscription fee for access to the web application.

However, I used to work at a SaaS startup, and I noticed that as the company grew, managing customers became extremely painful because each customer had their own specific needs around both features and privacy. Keeping one or several instances of the application running in our cloud was not a fun thing to do. Sometimes a customer would even want to run the application on a separate physical machine, or on their own server.

Of course, it is impossible to accommodate everyone’s needs. But what if we had a Serverless stack? We could authorize customers to access our Serverless graph and simply create a copy of our application on their own server or under their own AWS account. We could charge them a monthly fee for using our technology, and under the hood, all the cloud functions would be delivered as compiled binaries, so we wouldn’t need to worry about securing the IP. By doing so, we could truly achieve data security, because all the functions, servers, and databases would be managed by the customers themselves.

If we expand this idea a little further, we can open up a marketplace for Serverless applications. Imagine you are a very large corporation that wants to use SaaS applications across the organization. Instead of opening a Salesforce account, a Workday account, and a G Suite account, what if you could just buy their Serverless applications and host them on your internal servers or under your AWS account? You would basically create an exclusive fork of the application that you have complete control over, and you would never again suffer from the intern-season major outages that are pretty common these days :)

Ultimately, this is Software-as-a-Service as a Service. You can shop for SaaS applications and deploy them as your own.

Catch

Everything, in the end, has a catch, and Serverless Computing, however promising (and cool), is no exception.

The very first problem has already been covered in previous sections — it’s hard to convert an existing architecture to Serverless.

The second problem is about designing the computation graphs. When we transition to Serverless, we are not getting rid of any technical challenges — we are just funneling them to the people who design the graphs. We will need a lot of talent to design the overall graph and how data flows from one node to the next. Getting that talent is hard.

Finally, there is the future of Cloud Computing in general. People are already starting to lose faith in the big clouds: some companies are moving off GCP and AWS to build their own servers so they can stay fault-tolerant and robust during big outages, and other players use hybrid-cloud solutions to ensure their product works even if one of the service providers goes down.

There will be a lot of challenges along the way. But behold — the future is almost here.

Finally

A lot of what I talked about here is just a collection of random thoughts I’ve had over the last six months. I sort of started this post, wrote a few bullet points, and logged off, multiple times.

If any of these points make sense to you and you believe they’re right, let me know :)


Written by Steven Yue

Software Engineer & Part-time student. I code and do photography. Instagram: stevenyue. https://higashi.tech