Who needs servers? Building a donation funnel with AWS Lambda
Serverless architecture – or making use of third-party services to run code, instead of running it from your own provisioned servers – promises to make delivering digital products and services faster and to reduce operating costs. We recently embarked on our first venture into the serverless world: rebuilding the donation funnel for a charity client. In this post, I’ll share some insights into the process and give a quick rundown of the technologies we implemented. We will follow this up with a more technical blog on how everything came together to form what we consider a great product.
What is serverless architecture?
Ok, so ‘serverless’ is a contentious term…
Servers are still needed in the traditional sense; it’s just that they’re fully managed by a third party. What ‘serverless’ refers to is more of an event-driven architecture, where code is executed within stateless containers or Functions as a Service (FaaS). This allows the code to run without the developer worrying about having to provision and maintain servers.
What exactly do we mean by event driven? A simple example of this is having an image automatically resized on upload to an S3 (Simple Storage Service – Amazon Web Services’ object storage service) bucket. The event is the receipt of the image by S3. Code is then automatically executed which operates on the image. Other events could be AWS SDK calls, HTTP requests or changes to a DynamoDB table.
Enter AWS Lambda
AWS Lambda is Amazon’s FaaS offering, and it’s what we used for this project. This all sounds really good, but there are some things to consider when using Lambda functions.
Because we were using Node on this project, we had to make sure all callbacks (asynchronous code) completed before the function finished. A subsequent invocation may or may not run in the same container, and if callbacks left over from a previous invocation resume at that point, all sorts of issues can follow.
Because Lambda has to bootstrap and start up a container, you may notice some latency (a ‘cold start’) the first time a function is executed. Once the container is up, it remains up in anticipation of further Lambda invocations.
You are charged for the resources required to complete your Lambda function, so to prevent functions running indefinitely an execution timeout is specified (the default is 3 seconds, configurable from 1 to 300 seconds per invocation). Payload sizes are also restricted. When using Lambda in conjunction with API Gateway, you need to be a bit more careful, as the gateway will time out after 30 seconds and this cannot be adjusted.
There is an account level limit of 1,000 concurrent requests, so be careful of load testing.
The total packaged size of the deployed code must be less than 50MB zipped. Keeping third-party libraries (e.g. npm modules) out of the package helps you stay under this limit.
View the entire list of AWS Lambda limits.
How we built a ‘serverless’ donation funnel
The challenge was to replace the charity’s existing donation system with one that was more flexible and scalable – so that it could be used in a wider variety of contexts for campaigns which spanned multiple channels – while maintaining stability.
We built out three APIs:
- Appeals – manages the donation appeal, including images, text and email content.
- Flows – manages the steps of the donation form, making it flexible and customisable.
- Transactions – stores the transaction data until passed on to a third-party CRM.
API Gateway allows us to expose Lambda functions via RESTful API calls, which serve as our events. It also lets us describe endpoints, acceptable parameters and responses.
Some key features are its low cost, monitoring, scalability and (for us) Swagger integration.
The third piece in the puzzle is DynamoDB. As mentioned above, Lambda execution environments are stateless, so we have to provide our own storage. A fully managed NoSQL database, DynamoDB provides seamless scalability and low-latency responses.
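To give a flavour of what writing to DynamoDB looks like, here is a sketch that builds the parameters for a put. The table name `Transactions` and the item fields are illustrative, and `buildPutParams` is our own helper; the commented-out lines show how the params would be handed to the AWS SDK’s DocumentClient:

```javascript
// Sketch: building a DynamoDB put request. Table and field names are
// illustrative, not the production schema.
function buildPutParams(tableName, item) {
  return { TableName: tableName, Item: item };
}

var params = buildPutParams('Transactions', {
  transactionId: 'txn-123',
  appealId: 'appeal-7',
  amount: 25.0
});

// With the AWS SDK installed, the write itself would look like:
// var AWS = require('aws-sdk');
// var docClient = new AWS.DynamoDB.DocumentClient();
// docClient.put(params, function (err) { /* handle result */ });
```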
Automating our build process
AWS provides direct access to all of the above via the console, which is great when you’re still learning but, as you add resources and build out the API endpoints, this soon becomes tedious. We needed some automation to help us build out the infrastructure. We chose to use Cloudformation and SAM (Serverless Application Model) templates as a way to describe the resources. This made it easier to manage and make changes to our infrastructure.
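To illustrate, a SAM template fragment for one of the functions might look something like the following – the resource name, handler and paths are made up for this example, and the `Timeout` line shows how the 3-second default mentioned earlier can be raised:

```yaml
# Illustrative SAM template fragment -- resource and handler names are
# invented for this example.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  AppealsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: appeals.handler
      Runtime: nodejs6.10
      Timeout: 10            # seconds; the default is 3
      Events:
        GetAppeals:
          Type: Api
          Properties:
            Path: /appeals
            Method: get
```

Running this through CloudFormation creates the function, its API Gateway endpoint and the permissions between them in one step.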
This was great – we now had some AWS resources. We needed to make the API visible to developers and the client without saying “You can go view this in the AWS console”.
Swagger (now the OpenAPI Specification) is an awesome tool for describing RESTful APIs. It allows both humans and automated tools to understand a service without needing access to any code. We used it as part of our CloudFormation templates, which automatically built out our API endpoints based on the Swagger definitions, as well as linking them to our newly created Lambda functions.
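The wiring between a Swagger path and a Lambda is done through API Gateway’s `x-amazon-apigateway-integration` extension. A sketch, with an invented API title and function name:

```yaml
# Illustrative Swagger (OpenAPI 2.0) fragment wiring a path to a Lambda
# via API Gateway's proxy integration. Names are invented.
swagger: '2.0'
info:
  title: appeals-api
  version: '1.0'
paths:
  /appeals:
    get:
      responses:
        '200':
          description: A list of appeals
      x-amazon-apigateway-integration:
        type: aws_proxy
        httpMethod: POST        # Lambda proxy integrations always use POST
        uri:
          Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${AppealsFunction.Arn}/invocations
```

Because this lives inside the CloudFormation template, the same file serves as both human-readable API documentation and deployable infrastructure.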
This felt like a happy place, but one more thing would make me happier: I didn’t want to run anything manually to deploy this – there are tools for that. Having Jenkins pipelines run ‘npm run deploy’ – whether building for the first time or making changes to our database, API or Lambdas – based on the branch, certainly made development easier. It makes deployment simpler and gets changes shipped to different environments faster.
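The ‘npm run deploy’ script itself can be a thin wrapper over the AWS CLI. A sketch of what the package.json entry might look like – the bucket and stack names are placeholders:

```json
{
  "scripts": {
    "deploy": "aws cloudformation package --template-file template.yaml --s3-bucket my-deploy-bucket --output-template-file packaged.yaml && aws cloudformation deploy --template-file packaged.yaml --stack-name donation-api --capabilities CAPABILITY_IAM"
  }
}
```

`package` uploads the zipped code to S3 and rewrites the template to point at it; `deploy` then creates or updates the CloudFormation stack.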
Main Express.js app
The main web application communicates with the APIs to build out the donation application based on configuration. It chooses specific appeals and flows to create the user journey and build up transactional data.
We opted to deploy this using a package from awslabs. It also uses CloudFormation to deploy the architecture, so we were already familiar with the approach. It creates a proxy API into our Node/Express.js application, allowing the Express routing to operate normally within the application.
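Assuming the awslabs package in question is aws-serverless-express, the wrapper is only a few lines – this sketch won’t run without `npm install express aws-serverless-express`, and the route is invented:

```javascript
// Sketch assuming the awslabs package is aws-serverless-express.
// Requires: npm install express aws-serverless-express
var express = require('express');
var awsServerlessExpress = require('aws-serverless-express');

var app = express();
app.get('/donate', function (req, res) {
  res.json({ step: 'choose-amount' });
});

var server = awsServerlessExpress.createServer(app);

// API Gateway proxies every request to this single handler; the Express
// router then dispatches it exactly as it would on a normal server.
exports.handler = function (event, context) {
  awsServerlessExpress.proxy(server, event, context);
};
```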
The resulting Lambda is much larger as it packages up the entire Node application including npm libraries: 36.5 MB compared with our APIs at 3.5 kB, which means you can no longer view/edit the Lambda source code in the AWS console (you will have to download it).
For a relatively new technology, we’ve had a very positive experience with AWS Lambda. We found huge benefits delivering and focusing on the new code features. It gave us the ability to operate in a lean environment, shipping changes rapidly to the client and not having to worry too much about architecture and deployments. While serverless is unlikely to be the correct approach for every problem, there seems to be a huge amount of work going into this at the moment, so we’re expecting to see a lot more of it in the future.