Lambda Go Starter Project

Serverless development allows deploying low-cost, low-maintenance applications. Go is an ever more popular language for developing backend applications, and with first-rate support in both AWS Lambda and GCP Cloud Functions, it is an excellent choice for serverless development.

Unfortunately, setting up a flexible infrastructure and efficient deployment pipelines for serverless applications is not always easy, especially if we are interested in not just deploying a function but exposing it through HTTP for other services to use. This post describes the key elements of a small starter project for deploying a Go Lambda that exposes a simple, extensible REST API.

This project uses the following technologies:

  • Go 1.16
  • AWS Lambda
  • AWS API Gateway
  • Terraform for defining all infrastructure
  • Goldstack Template Framework for deployment orchestration (note this is based on Node.js/Yarn)

The source code for the project is available on GitHub: go-lambda-starter-project

The live API is deployed here: API root

To quickly create a customised starter project like this one: Go Gin Lambda Template on Goldstack

Go Project Setup

Setting up a new Go project is straightforward and can be done with the go mod init command. This creates a go.mod file, which manages all dependencies for the project. For our project, we have also added a number of dependencies that are viewable in this file:

  • aws-lambda-go: For a low-level API for interacting with AWS Lambda
  • gin: Gin is used as the framework for building our HTTP server
  • aws-lambda-go-api-proxy: For linking AWS Lambda with our HTTP framework Gin
  • gin-contrib/cors: For providing our Gin server with CORS configuration

This is the resulting go.mod file for the project:

module goldstack.party/templates/lambda-go-gin

go 1.16

require (
    github.com/aws/aws-lambda-go v1.23.0
    github.com/awslabs/aws-lambda-go-api-proxy v0.9.0
    github.com/gin-contrib/cors v1.3.1
    github.com/gin-gonic/gin v1.6.3
)

Server Implementation

The HTTP server deployed through the Lambda is defined in a couple of Go files.

main.go

main.go is the file that is run when the Lambda is invoked. It also supports being invoked locally for easy testing of the Lambda. When run locally, it will start a server equivalent to the Lambda on localhost.

package main

import (
    "os"

    "github.com/aws/aws-lambda-go/lambda"
)

func main() {
    // when no 'PORT' environment variable defined, process lambda
    if os.Getenv("PORT") == "" {
        lambda.Start(Handler)
        return
    }
    // otherwise start a local server
    StartLocal()
}
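
main.go references two functions that are defined in separate files of the template: Handler and StartLocal. As a rough sketch of how the pieces fit together (assuming the ginadapter package from aws-lambda-go-api-proxy and the CreateServer function shown in server.go below; the actual implementation in the project may differ), they could look like this:

package main

import (
    "context"
    "os"

    "github.com/aws/aws-lambda-go/events"
    ginadapter "github.com/awslabs/aws-lambda-go-api-proxy/gin"
)

// ginLambda wraps the Gin engine from server.go so it can answer API Gateway proxy events.
var ginLambda = ginadapter.New(CreateServer())

// Handler converts the API Gateway proxy request into an HTTP request for Gin
// and translates Gin's response back into the format Lambda expects.
func Handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    return ginLambda.ProxyWithContext(ctx, req)
}

// StartLocal serves the same routes as a plain HTTP server, using the port
// from the 'PORT' environment variable (e.g. 8084 for the watch script).
func StartLocal() {
    CreateServer().Run(":" + os.Getenv("PORT"))
}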

server.go

The server.go file is where the HTTP server and its routes are defined. In this example, it provides just one endpoint, /status, which returns a hard-coded JSON response. This file also sets up the CORS configuration, in case we want to call the API from a frontend application hosted on a different domain.

package main

import (
    "os"

    "github.com/gin-contrib/cors"
    "github.com/gin-gonic/gin"
)

func CreateServer() *gin.Engine {
    r := gin.Default()
    corsEnv := os.Getenv("CORS")
    if corsEnv != "" {
        config := cors.DefaultConfig()
        config.AllowOrigins = []string{corsEnv}
        r.Use(cors.New(config))
    }
    r.GET("/status", func(c *gin.Context) {
        c.JSON(200, gin.H{
            "status": "ok",
        })
    })
    return r
}
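
Since CreateServer returns a plain *gin.Engine, the route can also be exercised in-process with Go's httptest package, without Lambda or a running server. A minimal sketch (this test file is my own addition, not part of the template):

package main

import (
    "net/http"
    "net/http/httptest"
    "testing"
)

// TestStatusRoute calls the /status endpoint in-process and checks the hard-coded JSON response.
func TestStatusRoute(t *testing.T) {
    r := CreateServer()

    req := httptest.NewRequest(http.MethodGet, "/status", nil)
    w := httptest.NewRecorder()
    r.ServeHTTP(w, req)

    if w.Code != http.StatusOK {
        t.Fatalf("expected status 200, got %d", w.Code)
    }
    // gin.H{"status": "ok"} is serialised without whitespace by Gin's JSON renderer
    if w.Body.String() != `{"status":"ok"}` {
        t.Fatalf("unexpected body: %s", w.Body.String())
    }
}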

Infrastructure

The infrastructure for this starter project is defined using Terraform. There are a couple of things we need to configure to get this project running:

  • Route 53 mappings for the domain we want to deploy the API to, as well as an SSL certificate for calling the API via HTTPS: domain.tf
  • An API Gateway for exposing our Lambda through a public endpoint: api_gateway.tf
  • The definition of the Lambda function that we will deploy our code into: lambda.tf

The details of the infrastructure are configured in a config file: goldstack.json.

{
  "$schema": "./schemas/package.schema.json",
  "name": "lambda-go-gin",
  "template": "lambda-go-gin",
  "templateVersion": "0.1.1",
  "configuration": {},
  "deployments": [
    {
      "name": "dev",
      "configuration": {
        "lambdaName": "go-gin-starter-project",
        "apiDomain": "go-gin-starter-project.dev.goldstack.party",
        "hostedZoneDomain": "dev.goldstack.party"
      },
      "awsUser": "awsUser",
      "awsRegion": "us-west-2"
    }
  ]
}

The configuration options are documented in the Goldstack documentation as well as in the project readme. Note that these configuration options can be created using the Goldstack project builder or manually in the JSON file.

The infrastructure can easily be stood up by using a Yarn script:

yarn
cd packages/lambda-go-gin
yarn infra up dev

Here, dev denotes the environment for which we want to stand up the infrastructure. Currently the project only contains one environment, dev, but it is easy to define (and stand up) others by adding them to the goldstack.json file quoted above.

Note that it may seem unconventional to use a Yarn script, and ultimately an npm module, to deploy the infrastructure for our Go Lambda. However, using a scripting language to support the build and deployment of a compiled language is nothing unusual, and using Yarn allows us to use one language/framework for managing more complex projects that also involve frontend modules defined in React.

Deployment

Deployment, like standing up the infrastructure, can easily be achieved with a Yarn script referencing the environment we want to deploy to:

yarn deploy dev

This will build our Go project, package and zip it, and upload it to AWS. The credentials for the upload need to be defined in the file config/infra/aws/config.json, with contents such as the following:

{
  "users": [
    {
      "name": "awsUser",
      "type": "apiKey",
      "config": {
        "awsAccessKeyId": "[Access Key Id]",
        "awsSecretAccessKey": "[Secret Access Key]",
        "awsDefaultRegion": "us-west-2"
      }
    }
  ]
}

A guide on how to obtain these credentials is available in the Goldstack documentation. It is also possible to provide these credentials as environment variables, which can be useful for CI/CD.

Development

To adapt this starter project for your requirements, you will need to do the following:

  • Provide a config file with AWS credentials (or environment variables with the same)
  • Update packages/lambda-go-gin/goldstack.json with the infrastructure definitions for your project
  • Initialise the Yarn project with yarn
  • Deploy infrastructure using cd packages/lambda-go-gin; yarn infra up dev
  • Deploy the lambda using cd packages/lambda-go-gin; yarn deploy dev
  • Start developing your server code in packages/lambda-go-gin/server.go

This project also contains a script for local development. Simply run it with

cd packages/lambda-go-gin
yarn watch

This will spin up a local Gin server on http://localhost:8084 that provides the same API that is exposed via API Gateway, which makes for easy local testing of the Lambda. Deployment of the Lambda should also only take a few seconds and can be triggered at any time using yarn deploy dev.
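
As a quick smoke test against the watch server (a sketch of my own, assuming the server from above is running on port 8084), the /status route can be called from a small Go program, or simply with curl:

package main

import (
    "fmt"
    "io"
    "net/http"
)

// Calls the locally running Gin server started via 'yarn watch' and prints the /status response.
func main() {
    resp, err := http.Get("http://localhost:8084/status")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status, string(body))
}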

If you want to skip the steps listed above to configure the project, you can generate a customised project with the Goldstack project builder.

This template is just a very basic Go project to be deployed to Lambda and actually my first foray into Go development. I haven’t based any larger projects on it yet, so any feedback to improve the template is welcome. I will also keep updating the template on Goldstack with any future learnings.

Express.js on Lambda Getting Started

AWS Lambda is a cost-efficient and easy way to deploy server applications. Express.js is a very popular Node.js framework that makes it easy to develop REST APIs. This post goes through the basics of deploying an Express.js application to AWS Lambda.

You can also check out the sample project on GitHub.

Develop Express.js Server

We first need to implement our Express.js server. There is nothing in particular to keep in mind here; we can simply define routes etc. as we normally would:

import express from 'express';
import cors from 'cors';
import helmet from 'helmet';

import { rootHandler } from './root';

export const app: express.Application = express();

app.use(helmet());

if (process.env.CORS) {
  console.info(`Starting server with CORS domain: ${process.env.CORS}`);
  app.use(cors({ origin: process.env.CORS, credentials: true }));
}

app.use(express.json());

app.get('/', rootHandler);

In order to publish this server as a Lambda, we need to add the aws-serverless-express package to our project. Once that is done, we can define a new file lambda.ts with the following content:

require('source-map-support').install();

import awsServerlessExpress from 'aws-serverless-express';
import { app } from './server';

const server = awsServerlessExpress.createServer(app);

exports.handler = (event: any, context: any): any => {
  awsServerlessExpress.proxy(server, event, context);
};

Note that we are importing the app object from our server.ts file. We have also added the package source-map-support. Initialising this module in our code results in much easier-to-read stack traces in the Lambda console (since we will package up our Lambda with webpack).

Please see the sample project for all files required for the server, including the handler.

Package Server

In order to deploy our server to AWS Lambda, we need to package it up into a ZIP file. Lambda generally accepts any Node.js application in the ZIP file, but we will bundle our application using webpack. This drastically reduces the size of our server, which results in much improved cold-start times for our Lambda.

For this, we simply add the webpack package to our project and define a webpack.config.js as follows:

/* eslint-disable @typescript-eslint/no-var-requires */
const path = require('path');
const PnpWebpackPlugin = require('pnp-webpack-plugin');

module.exports = {
  entry: './dist/src/lambda.js',
  output: {
    path: path.resolve(__dirname, 'distLambda'),
    filename: 'lambda.js',
    libraryTarget: 'umd',
  },
  target: 'node',
  mode: 'production',
  devtool: 'source-map',
  resolve: {
    plugins: [PnpWebpackPlugin],
  },
  resolveLoader: {
    plugins: [PnpWebpackPlugin.moduleLoader(module)],
  },
  module: {
    rules: [
      // this is required to load source maps of libraries
      {
        test: /\.(js|js\.map|map)$/,
        enforce: 'pre',
        use: [require.resolve('source-map-loader')],
      },
    ],
  },
};

Note that we are adding some configuration here to include source maps for easy-to-read stack traces, as well as an additional plugin to support Yarn PnP, which is used in the sample project.

Running webpack should result in the bundled files being generated in the distLambda/ folder. The generated lambda.js should be around 650 kB, which includes the whole Express server plus Helmet, which we included earlier. Cold starts for the Lambda should be below 1 s with this file size.

Deploy to AWS

Lastly, we need to deploy this Lambda to AWS. For this we first need to define the infrastructure for the Lambda. In the sample project, this is done in Terraform:

resource "aws_lambda_function" "main" {
  function_name = var.lambda_name
  filename      = data.archive_file.empty_lambda.output_path
  handler       = "lambda.handler"
  runtime       = "nodejs12.x"
  memory_size   = 2048
  timeout       = 900
  role          = aws_iam_role.lambda_exec.arn

  lifecycle {
    ignore_changes = [
      filename,
    ]
  }
}

Important here is handler, which should match the file name and exported handler name of our packaged Node.js application. The file name is lambda.js and we defined an export handler in lambda.ts above; therefore, the handler we need to define for the Lambda is lambda.handler. Also note that we set the runtime to nodejs12.x. This ensures that Lambda knows to run our application as a Node.js application. To see how to define a Lambda function manually, see this post.

Note that there is a bit more we need to configure, including an API Gateway that will send HTTP requests through to our Lambda. To see all infrastructure definitions, see the AWS infrastructure definitions in the sample project. One important thing to note is that we need to use a proxy integration in our API Gateway. This ensures that our Lambda receives all HTTP calls for the gateway and allows our Express server to do the routing.

resource "aws_api_gateway_integration" "lambda" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  resource_id = aws_api_gateway_method.proxy.resource_id
  http_method = aws_api_gateway_method.proxy.http_method

  # Lambdas can only be invoked via Post – but the gateway will also forward GET requests etc.
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.main.invoke_arn
}

Once our Lambda is created, we can simply ZIP up the folder distLambda/ and upload this to AWS using the Lambda console.

In the supplied sample project, we use @goldstack/template-lambda-express to help us with uploading our Lambda. Under the hood, this is using the AWS CLI. You can also use the AWS CLI directly using the update-function-code operation.

Next Steps

This post described a few fundamentals of deploying an AWS Lambda with an Express server. There are actually quite a number of steps involved to get a project working end to end. The easiest way I can recommend for getting a project with an Express server deployed on Lambda off the ground is to use Goldstack, the tool I have developed, specifically the Express Lambda template. It has also been used to create the sample project for this post.

Otherwise, feel free to check out the sample project on GitHub and modify it to your needs. Note that one thing you will need to do is update the goldstack.json configuration in packages/lambda-express/goldstack.json. Specifically, you will need to change the domain configuration.

{
  "$schema": "./schemas/package.schema.json",
  "name": "lambda-express",
  "template": "lambda-express",
  "templateVersion": "0.1.19",
  "configuration": {},
  "deployments": [
    {
      "name": "prod",
      "configuration": {
        "lambdaName": "expressjs-lambda-getting-started",
        "apiDomain": "expressjs-lambda.examples.goldstack.party",
        "hostedZoneDomain": "goldstack.party"
      },
      "awsUser": "awsUser",
      "awsRegion": "us-west-2"
    }
  ]
}

More details about the properties in this configuration can be found here.

You will also need to create a config.json in config/infra/aws/config.json with AWS credentials for creating the infrastructure and deploying the Lambda.

{
  "users": [
    {
      "name": "awsUser",
      "type": "apiKey",
      "config": {
        "awsAccessKeyId": "your access key id",
        "awsSecretAccessKey": "your secret access key",
        "awsDefaultRegion": "us-west-2"
      }
    }
  ]
}

If you simply use the Goldstack UI to configure your project, all these files will be prepared for you, and you can also easily create a fully configured monorepo that also includes modules for a React application or email sending.

Deploy Next.js to AWS

Next.js is becoming ever more popular these days and rightfully so. It is an extremely powerful and well-made framework that provides some very useful abstractions for developing React applications.

An easy way to deploy a Next.js application is to use Vercel. However, it can also be useful to deploy Next.js applications into other environments to simplify governance and billing. One popular way to deploy frontend applications is to use the AWS services S3 and CloudFront.

This article describes how to set up the infrastructure required for running a Next.js application using Terraform on AWS, and some of the gotchas to keep in mind. You can also check out the code on GitHub.

Build Project

There are numerous ways to deploy a Next.js application. In our case, we will need to deploy our application as a static website.

For this, we will need to define the following script in package.json:

"scripts": {
  "build:next": "next build && next export -o webDist/"
}

Running this script will output a bundled, stand-alone version of the Next.js application that can be deployed on any webserver that can host static files.

Next.js bundle files

S3

We will need an S3 bucket to store the files resulting from the Next.js build process. This essentially is just a public S3 bucket.

Below is the Terraform for creating such a bucket using the aws_s3_bucket resource. Note here:

  • The permissions for public read are set by acl = "public-read", but we also define a public-read bucket policy
  • The index_document and error_document correspond to the files output in the previous step.

resource "aws_s3_bucket" "website_root" {
  bucket = "${var.website_domain}-root"

  acl = "public-read"

  # Remove this line if you want to prevent accidental deletion of bucket
  force_destroy = true

  website {
    index_document = "index.html"
    error_document = "404.html"
  }

  tags = {
    ManagedBy = "terraform"
    Changed   = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp())
  }

  policy = <<EOF
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${var.website_domain}-root/*"
    }
  ]
}
EOF

  lifecycle {
    ignore_changes = [tags]
  }
}

Next we need to be able to upload our files to this S3 bucket. There are many ways to do this, for instance using the AWS CLI. I have also created an open source package that provides an integrated way of handling the upload of the files (plus setting up the infrastructure through Terraform): template-nextjs. This library is also used when you choose to create a starter project on Goldstack.

One thing to keep in mind when uploading website resources to S3 is that we want to avoid errors on the user’s end during the deployment process. For instance, we do not want to delete the files before we re-upload them, since this may result in a small window in which the files are unavailable to users.

This can be solved by uploading the new files first using the AWS S3 sync command and then uploading them a second time with the --delete flag. The resulting commands will look somewhat like this:

aws s3 sync . s3://[bucketname]/[path]
aws s3 sync . s3://[bucketname]/[path] --delete

Since Next.js generally generates hashed file names, you could also just keep all old files in the bucket. However, Next.js deployments can quickly become large (> 50 MB), so depending on how often you deploy, this could quickly result in a significant amount of unnecessary data stored in your S3 bucket.

For reference, see also the utility used by the template-nextjs package above: utilsS3Deployment.

CloudFront

While it is possible to use an S3 bucket by itself to host a static website, it is usually preferable to also put a CloudFront distribution in front of the bucket. This results in significantly faster load times for the user and also enables us to make our website available through a secure https:// link.

There are quite a few moving pieces involved in setting up a CloudFront distribution, so the best reference point here is the complete Terraform available in the example project on GitHub. However, I will put a few excerpts from the Terraform configuration below.

We first need to configure an origin that points to our S3 bucket defined earlier:

  origin {
    domain_name = aws_s3_bucket.website_root.website_endpoint

    origin_id   = "origin-bucket-${aws_s3_bucket.website_root.id}"
    
    custom_origin_config {
      http_port = 80
      https_port = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols = ["TLSv1.2"]
    }
  }

And specify a root object that matches the index of our page:

  default_root_object = "index.html"

Then we need to set up the cache settings for CloudFront. A Next.js application consists of some files that should be loaded fresh by the user on every page load (e.g. index.html), but most files should only be downloaded once and can then be cached safely. These files can also be cached on CloudFront’s edge locations. This can be achieved in CloudFront using cache behaviours.

  # Priority 0
  ordered_cache_behavior {
    path_pattern     = "_next/static/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = "origin-bucket-${aws_s3_bucket.website_root.id}"

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

We can also configure CloudFront to serve the correct 404 page provided by Next.js:

  custom_error_response {
    error_caching_min_ttl = 60
    error_code            = 404
    response_page_path    = "/404.html"
    response_code         = 404
  }

Finally, we can link the CloudFront distribution with an AWS-provided SSL certificate (free of charge). There is more involved in that than can be shown here; for more information, please have a browse around the source files.

In the example project, we also configure two CloudFront distributions: one to serve the main website, and one to forward users from an alternate domain. So if a user goes to https://www.yourcompany.com, they can be forwarded to https://yourcompany.com. You can find the configuration for that in redirect.tf.

Route53

Lastly, we need to link our CloudFront distribution to a domain, and for this we can use the Route 53 service on AWS. This service works both for domains purchased on AWS and for domains purchased through another domain provider.

This can be defined in Terraform easily:

# Creates the DNS record to point on the main CloudFront distribution ID
resource "aws_route53_record" "website_cdn_root_record" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = var.website_domain
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.website_cdn_root.domain_name
    zone_id                = aws_cloudfront_distribution.website_cdn_root.hosted_zone_id
    evaluate_target_health = false
  }
}

The example project assumes that a Route 53 hosted zone is already configured for the domain you would like to deploy your application to. You can find instructions on how to achieve that in the Goldstack documentation.

Gotchas

When deploying a Next.js application to AWS using the method described in this post, there are a few gotchas we need to keep in mind.

Chiefly, this deployment will not support Next.js API routes or server-side rendering. There are ways to provide some support for these, and there is a good Serverless module for that. However, things can quickly become complicated, so I would recommend instead deploying an API separately using Lambdas. Goldstack projects make that easy by allowing you to define the Next.js application and the Lambdas providing the API in one monorepo.

There are furthermore some issues related to pre-fetching pages in some cases; these can be avoided by not using the Next.js <Link> component.

Final Thoughts

If you are looking for a fully integrated experience for deploying Next.js applications and you do not worry too much about costs and system governance, I would recommend deploying to Vercel. You can still use the Goldstack Next.js templates if you are interested in that (see Vercel Deployment).

However, there are some benefits in being able to have your entire infrastructure defined in one cloud provider, and hosting a Next.js website on AWS is very cost effective and provides a good experience for users. Also, AWS has a great reputation for service uptime, especially for the services involved in hosting this solution (S3, CloudFront, Route 53).

Feel free to clone the sample project on GitHub or check out Goldstack, which provides an easy UI tool to configure your Next.js project (plus link it to additional services such as email sending and data storage on S3).

Deploy Java Lambda using SAM and Buildkite

I’ve recently covered how to deploy a Node.js-based Lambda using SAM and Buildkite. I would say that this should cover most use cases, since I believe a majority of AWS Lambdas are implemented in JavaScript.

However, Lambda supports many more programming languages than just JavaScript and one of the more important ones among them is certainly Java. In principle, deployment for Java and JavaScript is very similar: we provide a SAM template and an archive of a packaged application. However, Java uses a different toolset than JavaScript, so the build process of the app will be different.

In the following, I will briefly explain how to build and deploy a Java Lambda using Maven, SAM and Buildkite. If you want to get to the code straight away, find a complete sample project here:

https://github.com/mxro/lambda-java-sam-buildkite

First we define a simple Java based Lambda:

package com.amazonaws.handler;

import com.amazonaws.services.lambda.runtime.Context; 
import com.amazonaws.services.lambda.runtime.LambdaLogger;

public class SimpleHandler {
    public String myHandler(int myCount, Context context) {
        LambdaLogger logger = context.getLogger();
        logger.log("received : " + myCount);
        return String.valueOf(myCount);
    }
}

Then add a pom.xml to define the build and dependencies of our Java application:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.amazonaws</groupId>
    <artifactId>lambda-java-sam-buildkite</artifactId>
    <version>1.0.0</version>
    <packaging>jar</packaging>
    <name>Sample project for deploying a Java AWS Lambda function using SAM and Buildkite.</name>

    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <maven.compiler.plugin.version>3.8.0</maven.compiler.plugin.version>
        <aws.lambda.java.core.version>1.1.0</aws.lambda.java.core.version>
        <junit.version>4.12</junit.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>${aws.lambda.java.core.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>    
    </dependencies>

    
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>${maven.compiler.plugin.version}</version>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
    
</project>

We define the Lambda using a SAM template. Note that we are referencing the JAR that Maven assembles: target/lambda-java-sam-buildkite-1.0.0.jar.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
    sam-app

Globals:
    Function:
        Timeout: 20
        Environment: 

Resources:
  SimpleFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: target/lambda-java-sam-buildkite-1.0.0.jar
      Handler: com.amazonaws.handler.SimpleHandler::myHandler
      Runtime: java8

Then we need a Dockerfile that will run our build. Here we simply start with an image that has Maven preinstalled and then install Python and the AWS SAM CLI.

FROM zenika/alpine-maven:3-jdk8

# Installing python
RUN apk add --update \
    python \
    python-dev \
    py-pip \
    build-base \
  && pip install virtualenv \
  && rm -rf /var/cache/apk/*

RUN python --version

# Installing AWS CLI and SAM CLI
RUN apk update && \
    apk upgrade && \
    apk add bash && \
    apk add --no-cache --virtual build-deps build-base gcc && \
    pip install awscli && \
    pip install aws-sam-cli && \
    apk del build-deps

RUN mkdir /app
WORKDIR /app
EXPOSE 3001

The following build script runs within a container built from this Dockerfile. It first packages the Java application into a JAR file using mvn package and then uses the SAM CLI to package and deploy the template and application.

#!/bin/bash -e

mvn package

echo "### SAM Deploy"

sam --version

sam package --template-file template.yaml --s3-bucket sam-buildkite-deployment-test --output-template-file packaged.yml

sam deploy --template-file ./packaged.yml --stack-name sam-buildkite-deployment-test --capabilities CAPABILITY_IAM

Finally, we define the Buildkite pipeline. Note that this pipeline assumes the environment variables AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are provided by Buildkite.

steps:
  - name: Build and deploy to AWS
    command:
      - './.buildkite/deploy.sh'
    plugins:
      - docker-compose#v2.1.0:
          run: app
          config: 'docker-compose.yml'
          env:
            - AWS_DEFAULT_REGION
            - AWS_ACCESS_KEY_ID
            - AWS_SECRET_ACCESS_KEY

Now we simply need to create a Buildkite pipeline and link it to a repository with our source code.

Deploy Lambda using SAM and Buildkite

One of the many good things about Lambdas on AWS is that they are quite easy to deploy. Simply speaking, all that one requires is a zip file of an application that then can be uploaded using an API call.

Things unfortunately quickly become more complicated, especially if the Lambda depends on other resources on AWS, as it often does. Thankfully there is a solution for this in the form of the AWS Serverless Application Model (SAM). AWS SAM makes it possible to specify Lambdas along with their resources and dependencies in a simple and coherent way.

AWS being AWS, there are plenty of examples of deploying Lambdas defined using SAM with AWS tooling such as CodePipeline and CodeBuild. In this article, I will show that it is just as easy to deploy Lambdas using Buildkite.

For those wanting to skip straight to the code, here is the link to the GitHub repo with an example project:

lambda-sam-buildkite

This example uses the Buildkite Docker Compose Plugin with a Dockerfile that provides the AWS SAM CLI:

FROM python:alpine
# Install awscli and aws-sam-cli
RUN apk update && \
    apk upgrade && \
    apk add bash && \
    apk add --no-cache --virtual build-deps build-base gcc && \
    pip install awscli && \
    pip install aws-sam-cli && \
    apk del build-deps
RUN mkdir /app
WORKDIR /app

The Buildkite pipeline ensures the correct environment variables are passed to the Docker container so that the AWS CLI can authenticate with AWS:

steps:
  - label: SAM deploy
    command: ".buildkite/deploy.sh"
    plugins:
      - docker-compose#v2.1.0:
          run: app
          env:
            - AWS_DEFAULT_REGION
            - AWS_ACCESS_KEY_ID
            - AWS_SECRET_ACCESS_KEY

The script that is called in the pipeline simply calls the AWS SAM CLI to package the CloudFormation template and then deploy it:

#!/bin/bash -e

# Create packaged template and upload to S3
sam package --template-file template.yml \ 
            --s3-bucket sam-buildkite-deployment-test \
            --output-template-file packaged.yml

# Apply CloudFormation template
sam deploy --template-file ./packaged.yml \
           --stack-name sam-buildkite-deployment-test \
           --capabilities CAPABILITY_IAM

And that’s it already. This pipeline can easily be extended to deploy to different environments such as development, staging and production and to run unit and integration tests.

Resize EC2 Volume (without Resizing Partition)

Problem

You would like to resize a volume attached to an EC2 instance.

Solution

Do the following:

  • Create a snapshot of your volume (instructions)
  • Stop your instance
  • Go to EBS / Volumes and select Actions / Modify Volume


  • Enter the new size for your volume (note you can only ever make the volume larger) and click on Modify


  • Wait for the modification to be complete (this might take a while, like 30 min or so)
  • Start your instance

Now, if everything went well, you should have more space available on the disk for the virtual machine. To confirm this, run:

df -h

You should see the new size of the volume as the size of your main partition:


Notes

  • If the size of your partition does not match the size of the volume, you probably need to resize your partition (instructions).
  • Resizing the partition is a very painful process that I think should best be avoided at all costs. I think it helps if the EC2 instance attached to the volume is stopped when the resize is performed, so make sure that this is the case before you do the resize.
  • If you forgot to stop your instance and need to do a partition resize, there is a little workaround: wait for six hours, then resize your volume again (this time while the instance is stopped). Then it will hopefully adjust your partition size to the correct size.
  • In the above, you might be able to start up your instance even while the new volume is still optimizing. I haven’t tested this, though, but my guess is that it would work.

 

Set up MySQL Replication with Amazon RDS

Problem

You have an existing server that runs a MySQL database (either on EC2 or not) and you would like to replicate this server with an Amazon RDS MySQL instance.

After you follow the instructions from Amazon, your slave reports the IO status:

Slave_IO_State: Connecting to master

… and the replication does not work.

Solution

AWS provides very good documentation on how to set up the replication: Replication with a MySQL or MariaDB Instance Running External to Amazon RDS.

Follow the steps there but be aware of the following pitfall:

In step 6, `create a user that will be used for replication`, it says you should create a user for the domain ‘mydomain.com’. That will in all likelihood not work. Instead, try to find out the IP address of the Amazon RDS instance that will be the replication slave.

One way to do this is as follows:

  • Create the ‘repl_user’ for the domain ‘%’, e.g.:
CREATE USER 'repl_user'@'%' IDENTIFIED BY '<password>';
  • Also do the grants for this user
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'%' IDENTIFIED BY '<password>';
  • Open port 3306 on your server for any IP address.
  • Then the replication should work.
  • Go to your master and run the following command:
SHOW PROCESSLIST;
  • Find the process with the user repl_user and get the IP address from there. This is the IP address for your Amazon RDS slave server.
  • Delete the user ‘repl_user’@’%’ on the master
  • Create the user ‘repl_user’@'[IP address of slave]’ on the master
  • Modify your firewall of your master to only accept connections on port 3306 from the IP address of the slave.
  • Restart replication with
call mysql.rds_stop_replication;
call mysql.rds_start_replication;
  • And check the status with
show slave status\G

The slave IO status should now be “Waiting for master to send event”.

Route 53 Cannot Find CloudFront Distribution

Problem

You have created a CloudFront distribution with a custom domain name (such as yourdomain.com).

Now if you try to link this distribution to your domain using Route 53, you get the following error message:

`No AWS resource exists with the Alias Target that you specified.`


Solution

Try the following to solve this problem:

  • Make sure that the CNAME you specified for the CloudFront distribution matches your domain name exactly. For instance, if your domain name is www.yourdomain.com, make sure that this is also the CNAME.
  • When creating the record set in Route 53, make sure to select the record type `A – IPv4 Address` and not CNAME.


Solving ‘One or more of your origins do not exist’ for Cloud Front

Problem

You are trying to create a CloudFront distribution using Amazon’s API.

You get the error:

“One or more of your origins do not exist”

Solution

In my case, I provided a different value for these two properties:

DistributionConfig.DefaultCacheBehavior.TargetOriginId

and

DistributionConfig.Origins.Items[0].Id

Just make sure that the Id for one of your origins matches the TargetOriginId of the DefaultCacheBehavior and the error should disappear.

Automatically Make Snapshots for EC2

A quick Google search reveals that there are quite a few different approaches for automatically creating snapshots for EC2 images (such as here, here and here).

All of these are rather difficult to do.

Thankfully, after some more searching around I found a great way to schedule regular snapshots using AWS CloudWatch.

CloudWatch supports a built-in target for ‘Create a snapshot of an EBS volume’:


For details of how this can be set up, see the excellent step-by-step instructions on the CloudWatch Documentation.