Deploy Java Lambda using SAM and Buildkite

I’ve recently covered how to deploy a Node.js based Lambda using SAM and Buildkite. That should cover most use cases, since I believe the majority of AWS Lambda functions are implemented in JavaScript.

However, Lambda supports many more programming languages than just JavaScript, and one of the most important among them is certainly Java. In principle, deployment for Java and JavaScript is very similar: we provide a SAM template and an archive of the packaged application. Java, however, uses a different toolchain than JavaScript, so the build process for the app differs.

In the following, I will briefly explain how to build and deploy a Java Lambda using Maven, SAM and Buildkite. If you want to get straight to the code, you can find a complete sample project here:

https://github.com/mxro/lambda-java-sam-buildkite

First we define a simple Java based Lambda:

package com.amazonaws.handler;

import com.amazonaws.services.lambda.runtime.Context; 
import com.amazonaws.services.lambda.runtime.LambdaLogger;

public class SimpleHandler {
    public String myHandler(int myCount, Context context) {
        LambdaLogger logger = context.getLogger();
        logger.log("received : " + myCount);
        return String.valueOf(myCount);
    }
}
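
Before wiring up the pipeline, we can test the handler locally with the SAM CLI (a quick sketch; it assumes Docker is running, the template shown further below is in place, and event.json simply contains the raw payload 5):

# create a minimal test event; the handler expects a plain integer
echo '5' > event.json

# build the JAR and invoke the function defined in template.yaml
mvn package
sam local invoke SimpleFunction --event event.json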

Then add a pom.xml to define the build and dependencies of our Java application:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.amazonaws</groupId>
    <artifactId>lambda-java-sam-buildkite</artifactId>
    <version>1.0.0</version>
    <packaging>jar</packaging>
    <name>Sample project for deploying a Java AWS Lambda function using SAM and Buildkite.</name>

    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <maven.compiler.plugin.version>3.8.0</maven.compiler.plugin.version>
        <aws.lambda.java.core.version>1.1.0</aws.lambda.java.core.version>
        <junit.version>4.12</junit.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>${aws.lambda.java.core.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>    
    </dependencies>

    
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>${maven.compiler.plugin.version}</version>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
    
</project>

We define the Lambda using a SAM template. Note that we reference the JAR that Maven assembles: target/lambda-java-sam-buildkite-1.0.0.jar.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
    sam-app

Globals:
    Function:
        Timeout: 20

Resources:
  SimpleFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: target/lambda-java-sam-buildkite-1.0.0.jar
      Handler: com.amazonaws.handler.SimpleHandler::myHandler
      Runtime: java8

Then we need a Dockerfile that will run our build. Here we simply start from an image with Maven preinstalled and install Python and the AWS SAM CLI on top.

FROM zenika/alpine-maven:3-jdk8

# Installing python
RUN apk add --update \
    python \
    python-dev \
    py-pip \
    build-base \
  && pip install virtualenv \
  && rm -rf /var/cache/apk/*

RUN python --version

# Installing AWS CLI and SAM CLI
RUN apk update && \
    apk upgrade && \
    apk add bash && \
    apk add --no-cache --virtual build-deps build-base gcc && \
    pip install awscli && \
    pip install aws-sam-cli && \
    apk del build-deps

RUN mkdir /app
WORKDIR /app
EXPOSE 3001

The following build script runs inside this Docker image. It first packages the Java application into a JAR file using mvn package and then uses the SAM CLI to package and deploy the template and application.

#!/bin/bash -e

mvn package

echo "### SAM Deploy"

sam --version

sam package --template-file template.yaml --s3-bucket sam-buildkite-deployment-test --output-template-file packaged.yml

sam deploy --template-file ./packaged.yml --stack-name sam-buildkite-deployment-test --capabilities CAPABILITY_IAM

Finally, we define the Buildkite pipeline. Note that this pipeline assumes the environment variables AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are provided by Buildkite.

steps:
  - name: Build and deploy to AWS
    command:
      - './.buildkite/deploy.sh'
    plugins:
      - docker-compose#v2.1.0:
          run: app
          config: 'docker-compose.yml'
          env:
            - AWS_DEFAULT_REGION
            - AWS_ACCESS_KEY_ID
            - AWS_SECRET_ACCESS_KEY

Now we simply need to create a Buildkite pipeline and link it to a repository with our source code.
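
The pipeline above references a docker-compose.yml with a service named app; a minimal version might look like this (a sketch assuming the Dockerfile sits in the repository root):

version: '3'
services:
  app:
    build: .
    volumes:
      - .:/app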

Deploy Lambda using SAM and Buildkite

One of the many good things about Lambdas on AWS is that they are quite easy to deploy. Simply speaking, all one requires is a zip file of the application, which can then be uploaded using an API call.
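
For instance, updating the code of an existing function is a single CLI call (the function name and zip file here are placeholders):

aws lambda update-function-code \
    --function-name my-function \
    --zip-file fileb://app.zip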

Things unfortunately become more complicated quickly, especially if the Lambda depends on other resources on AWS, as they often do. Thankfully, there is a solution for this in the form of the AWS Serverless Application Model (SAM). AWS SAM makes it possible to specify Lambdas along with their resources and dependencies in a simple and coherent way.

AWS being AWS, there are plenty of examples of deploying SAM-defined Lambdas using AWS tooling, such as CodePipeline and CodeBuild. In this article, I will show that it is just as easy to deploy Lambdas using Buildkite.

For those wanting to skip straight to the code, here is the link to the GitHub repo with an example project:

lambda-sam-buildkite

This example uses the Buildkite Docker Compose Plugin that leverages a Dockerfile, which provides the AWS SAM CLI:

FROM python:alpine
# Install awscli and aws-sam-cli
RUN apk update && \
    apk upgrade && \
    apk add bash && \
    apk add --no-cache --virtual build-deps build-base gcc && \
    pip install awscli && \
    pip install aws-sam-cli && \
    apk del build-deps
RUN mkdir /app
WORKDIR /app

The Buildkite pipeline ensures the correct environment variables are passed to the Docker container so that the AWS CLI can authenticate with AWS:

steps:
  - label: SAM deploy
    command: ".buildkite/deploy.sh"
    plugins:
      - docker-compose#v2.1.0:
          run: app
          env:
            - AWS_DEFAULT_REGION
            - AWS_ACCESS_KEY_ID
            - AWS_SECRET_ACCESS_KEY

The script that is called in the pipeline simply calls the AWS SAM CLI to package the CloudFormation template and then deploys it:

#!/bin/bash -e

# Create packaged template and upload to S3
sam package --template-file template.yml \
            --s3-bucket sam-buildkite-deployment-test \
            --output-template-file packaged.yml

# Apply CloudFormation template
sam deploy --template-file ./packaged.yml \
           --stack-name sam-buildkite-deployment-test \
           --capabilities CAPABILITY_IAM

And that’s it! This pipeline can easily be extended to deploy to different environments, such as development, staging and production, and to run unit and integration tests.
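
For example, the deploy script could take the target environment as a parameter and derive the stack name from it (a sketch; the environment names are made up):

#!/bin/bash -e

ENVIRONMENT=${1:-staging}

sam package --template-file template.yml \
            --s3-bucket sam-buildkite-deployment-test \
            --output-template-file packaged.yml

sam deploy --template-file ./packaged.yml \
           --stack-name "sam-buildkite-deployment-test-${ENVIRONMENT}" \
           --capabilities CAPABILITY_IAM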

Resize EC2 Volume (without Resizing Partition)

Problem

You would like to resize a volume attached to an EC2 instance.

Solution

Do the following:

  • Create a snapshot of your volume (see the AWS documentation for instructions)
  • Stop your instance
  • Go to EBS / Volumes and select Actions / Modify Volume


  • Enter the new size for your volume (note you can only ever make the volume larger) and click on Modify


  • Wait for the modification to complete (this might take a while, 30 minutes or so)
  • Start your instance
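
The same steps can also be scripted with the AWS CLI (a rough sketch; the volume and instance IDs are placeholders, and you still need to wait for each step to complete):

# snapshot the volume as a backup
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "backup before resize"

# stop the instance, grow the volume to e.g. 100 GiB, then check progress
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0

# start the instance again once the modification has gone through
aws ec2 start-instances --instance-ids i-0123456789abcdef0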

Now, if everything went well, you should have more space available on the disk for the virtual machine. To confirm this, run:

df -h

You should see the new size of the volume as the size of your main partition.

Notes

  • If the size of your partition does not match the size of the volume, you probably need to resize your partition (see the AWS documentation and the sketch below).
  • Resizing the partition is a painful process that is best avoided. I think it helps to avoid it if the EC2 instance attached to the volume is stopped while the volume resize is performed, so make sure that is the case before you do the resize.
  • If you forgot to stop your instance and need to do a partition resize, there is a little workaround: wait six hours, then resize your volume again (this time while the instance is stopped). Hopefully your partition will then be adjusted to the correct size.
  • You might be able to start your instance while the new volume is still optimizing. I haven’t tested this, but my guess is that it would work.
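
For reference, if you do end up having to grow the partition and filesystem yourself, it usually comes down to something like the following (a sketch assuming an ext4 root filesystem on /dev/xvda1; for XFS you would use xfs_growfs instead of resize2fs):

# grow partition 1 of /dev/xvda to fill the volume (growpart is part of cloud-utils)
sudo growpart /dev/xvda 1

# grow the ext4 filesystem to fill the partition
sudo resize2fs /dev/xvda1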

 

Set up MySQL Replication with Amazon RDS

Problem

You have an existing server that runs a MySQL database (either on EC2 or elsewhere) and you would like to replicate this server with an Amazon RDS MySQL instance.

After you follow the instructions from Amazon, your slave reports the IO status:

Slave_IO_State: Connecting to master

… and the replication does not work.

Solution

AWS provides very good documentation on how to set up the replication: Replication with a MySQL or MariaDB Instance Running External to Amazon RDS.

Follow the steps there but be aware of the following pitfall:

In step 6, `create a user that will be used for replication`, it says you should create a user for the domain ‘mydomain.com’. That will in all likelihood not work. Instead, try to find out the IP address of the Amazon RDS instance that is to become the replication slave.

One way to do this is as follows:

  • Create the ‘repl_user’ for the domain ‘%’, e.g.:
CREATE USER 'repl_user'@'%' IDENTIFIED BY '<password>';
  • Also do the grants for this user
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'%' IDENTIFIED BY '<password>';
  • Open port 3306 on your server for any IP address.
  • Then the replication should work.
  • Go to your master and run the following command:
SHOW PROCESSLIST;
  • Find the process with the user repl_user and get the IP address from there. This is the IP address for your Amazon RDS slave server.
  • Delete the user ‘repl_user’@’%’ on the master
  • Create the user ‘repl_user’@'[IP address of slave]’ on the master
  • Modify the firewall of your master to accept connections on port 3306 only from the IP address of the slave (see the iptables sketch below).
  • Restart replication with
call mysql.rds_stop_replication;
call mysql.rds_start_replication;
  • And check the status with
show slave status\G

The slave IO status should now be “Waiting for master to send event”.
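
For the firewall step above, a minimal iptables version could look like this (a sketch; 203.0.113.10 stands in for the IP address of your slave):

# accept MySQL connections from the RDS slave, drop all others on port 3306
iptables -A INPUT -p tcp --dport 3306 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP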


Route 53 Cannot Find CloudFront Distribution

Problem

You have created a CloudFront distribution with a custom domain name (such as yourdomain.com).

Now if you try to link this distribution to your domain using Route 53, you get the following error message:

`No AWS resource exists with the Alias Target that you specified.`


Solution

Try the following to solve this problem:

  • Make sure that the CNAME you specified for the CloudFront distribution matches your domain name exactly. For instance, if your domain name is www.yourdomain.com, make sure that this is also the CNAME.
  • When creating the record set in Route 53, make sure to select the record type `A – IPv4 Address` and not CNAME.
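
If you prefer the CLI, the same record can be created with an UPSERT (a sketch; the hosted zone ID and the distribution’s domain name are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID shared by all CloudFront distributions):

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE12345 --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.yourdomain.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d111111abcdef8.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'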


Solving ‘One or more of your origins do not exist’ for CloudFront

Problem

You are trying to create a CloudFront distribution using Amazon’s API.

You get the error:

“One or more of your origins do not exist”

Solution

In my case, I provided a different value for these two properties:

DistributionConfig.DefaultCacheBehavior.TargetOriginId

and

DistributionConfig.Origins.Items[0].Id

Just make sure that the Id of one of your origins matches the TargetOriginId of the DefaultCacheBehavior, and the error should disappear.
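
For illustration, this is the shape the config needs to have (all other required fields omitted; the origin Id is a made-up name):

{
  "Origins": {
    "Quantity": 1,
    "Items": [{ "Id": "my-origin" }]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "my-origin"
  }
}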


Automatically Make Snapshots for EC2

A quick Google search reveals quite a few different approaches to automatically creating snapshots of EC2 volumes.

All of these are rather involved to set up.

Thankfully, after some more searching around I found a great way to schedule regular snapshots using AWS CloudWatch.

CloudWatch supports a built-in target, ‘Create a snapshot of an EBS volume’.

For details of how this can be set up, see the excellent step-by-step instructions in the CloudWatch documentation.
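
For comparison, what the built-in target does on each scheduled run is essentially this single call (the volume ID is a placeholder):

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "scheduled snapshot"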

AWS Lambda: Cross-account pass role is not allowed.

Today I came across the following exception while working with the AWS SDK for Amazon Lambda:

com.amazonaws.AmazonServiceException: Cross-account pass role is not allowed. (Service: AWSLambda; Status Code: 403; Error Code: AccessDeniedException; Request ID: xxx)

At first I was a bit puzzled about where this exception might come from, but when I found out what the problem was, it seemed pretty obvious:

I tried to upload a lambda function to one AWS account while specifying an execution role that belonged to another AWS account.

So that could easily be fixed by providing a role belonging to the correct account!
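
In other words, the account ID in the --role ARN has to match the account you are deploying the function to (a sketch with placeholder names and a placeholder account ID):

aws lambda create-function \
    --function-name my-function \
    --runtime java8 \
    --handler com.example.Handler::handleRequest \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/my-lambda-execution-role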

Alternatively, this error might also occur when you deploy a Lambda function whose template.yaml references a role from another account (as mentioned by Steven T in the comments below).

UPDATE

As mentioned in the comments by rjhintz, if you need to use the role from another user, you can do so by modifying the policy for the role as follows:

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Principal":{
            "AWS":[
               "arn:aws:iam::123456789012:user/user1",
               "arn:aws:iam::123456789012:user/user2"
            ],
            "Service":"ec2.amazonaws.com"
         },
         "Action":"sts:AssumeRole"
      }
   ]
}

What is Amazon Flourish (for AWS)

According to a recent article on the New Stack blog, the Amazon serverless team (responsible, for instance, for AWS Lambda) is about to release a new open source product called ‘Flourish’.

Currently, there are very few details available on this product. These are some points I could find:

  • It will be a platform to manage components of serverless applications.
  • This includes versioning lambda functions and packaging lambda functions with other components such as database dependencies.
  • It will be open source (under Apache license)

As more details become available, I will update this post.


Bulk Change ACL for Amazon S3

Using bucket policies, it is easy to set ACL settings for all new objects that are uploaded to Amazon S3.

However, I wanted to remove ‘public’ read rights from a whole bunch of objects at the same time, and such policies do not apply to objects that are already stored on S3.

I found an easy way to change the ACL settings for many objects at the same time. To bulk-change ACLs, do the following:

  • Download the free tool CloudBerry Explorer for Amazon S3
  • Install it
  • In the AWS management console, go to Security Credentials
  • Create a new user ‘s3-super’. Save the access and secret key.
  • Assign the policy ‘AmazonS3FullAccess’ to the user


  • Start CloudBerry Explorer and connect to your S3 with the access and secret key of the s3-super user
  • Now in this tool navigate to the bucket with the objects you would like to change
  • Select one or more objects for which you want to change the ACL settings in the left-hand column.
  • Click on the button ACL Settings


  • In the dialog that pops up, change the settings to what you like and click OK.


The ACL settings for your objects should now be changed.
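
Alternatively, the same bulk change can be scripted with the AWS CLI (a sketch; my-bucket is a placeholder, and the simple loop assumes object keys without whitespace):

# set every object in the bucket to private
for key in $(aws s3api list-objects --bucket my-bucket --query 'Contents[].Key' --output text); do
  aws s3api put-object-acl --bucket my-bucket --key "$key" --acl private
done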