Lambda Go Starter Project

Serverless development allows deploying low-cost, low-maintenance applications. Go is an increasingly popular language for developing backend applications. With first-rate support in both AWS Lambda and GCP Cloud Functions, Go is an excellent choice for serverless development.

Unfortunately, setting up a flexible infrastructure and efficient deployment pipelines for serverless applications is not always easy, especially if we are interested in not just deploying a function but exposing it through HTTP for other services to use. This post describes the key elements of a small starter project for deploying a Go Lambda that exposes a simple, extensible REST API.

This project uses the following technologies:

  • Go 1.16
  • AWS Lambda
  • AWS API Gateway
  • Terraform for defining all infrastructure
  • Goldstack Template Framework for deployment orchestration (note this is based on Node.js/Yarn)

The source code for the project is available on GitHub: go-lambda-starter-project

The live API is deployed here: API root

To quickly create a customised starter project like this one: Go Gin Lambda Template on Goldstack

Go Project Setup

Setting up a new Go project is very straightforward and can be achieved using the go mod init command. This results in the creation of a go.mod file, which helps manage all dependencies for the project. For our project, we have also added a number of dependencies that are viewable in this file:

  • aws-lambda-go: Provides a low-level API for interacting with AWS Lambda
  • gin: Gin is used as the framework for building our HTTP server
  • aws-lambda-go-api-proxy: For linking AWS Lambda with our HTTP framework Gin
  • gin-contrib/cors: For providing our Gin server with CORS configuration
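The setup described above can be sketched with the following commands (the module name is illustrative; the template uses its own):

```shell
mkdir lambda-go-gin && cd lambda-go-gin
go mod init lambda-go-gin                 # creates go.mod
go get github.com/gin-gonic/gin@v1.6.3    # records a dependency in go.mod
```

Running go get for each dependency fills in the require block shown below.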

This is the resulting go.mod file for the project:

go 1.16

require (
	github.com/aws/aws-lambda-go v1.23.0
	github.com/awslabs/aws-lambda-go-api-proxy v0.9.0
	github.com/gin-contrib/cors v1.3.1
	github.com/gin-gonic/gin v1.6.3
)

Server Implementation

The HTTP server deployed through the Lambda is defined in a couple of Go files.


main.go is the file that is run when the Lambda is invoked. It also supports being invoked locally for easy testing of the Lambda. When run locally, it will start a server equivalent to the Lambda on localhost.

package main

import (
	"os"

	"github.com/aws/aws-lambda-go/lambda"
	ginadapter "github.com/awslabs/aws-lambda-go-api-proxy/gin"
)

func main() {
	r := CreateServer()
	// when no 'PORT' environment variable is defined, process Lambda events
	if os.Getenv("PORT") == "" {
		ginLambda := ginadapter.New(r)
		lambda.Start(ginLambda.ProxyWithContext)
		return
	}
	// otherwise start a local server
	r.Run()
}


The server.go file is where the HTTP server and its routes are defined. In this example, it provides just one endpoint, /status, that returns a hard-coded JSON response. This file also configures CORS, in case we want to call the API from a frontend application hosted on a different domain.

package main

import (
	"os"

	"github.com/gin-contrib/cors"
	"github.com/gin-gonic/gin"
)

func CreateServer() *gin.Engine {
	r := gin.Default()
	// allow calls from a configured origin, e.g. a frontend on another domain
	corsEnv := os.Getenv("CORS")
	if corsEnv != "" {
		config := cors.DefaultConfig()
		config.AllowOrigins = []string{corsEnv}
		r.Use(cors.New(config))
	}
	r.GET("/status", func(c *gin.Context) {
		c.JSON(200, gin.H{
			"status": "ok",
		})
	})
	return r
}


Infrastructure

The infrastructure for this starter project is defined using Terraform. There are a couple of things we need to configure to get this project running:

  • Route 53 mappings for the domain we want to deploy the API to, as well as an SSL certificate for being able to call the API via HTTPS
  • An API Gateway for exposing our Lambda through a public endpoint
  • The definition of the Lambda function that we will deploy our code into

The details of the infrastructure are configured in a config file: goldstack.json.

{
  "$schema": "./schemas/package.schema.json",
  "name": "lambda-go-gin",
  "template": "lambda-go-gin",
  "templateVersion": "0.1.1",
  "configuration": {},
  "deployments": [
    {
      "name": "dev",
      "configuration": {
        "lambdaName": "go-gin-starter-project",
        "apiDomain": "",
        "hostedZoneDomain": ""
      },
      "awsUser": "awsUser",
      "awsRegion": "us-west-2"
    }
  ]
}

The configuration options are documented in the Goldstack documentation as well as in the project readme. Note that these configuration options can be created using the Goldstack project builder or manually in the JSON file.

The infrastructure can easily be stood up by using a Yarn script:

cd packages/lambda-go-gin
yarn infra up dev

Here, dev denotes the environment for which we want to stand up the infrastructure. Currently the project only contains one environment, dev, but it is easy to define (and stand up) others by adding them in the goldstack.json file quoted above.
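For instance, an additional prod deployment could be declared alongside dev (the names and values here are illustrative, not part of the template):

```json
"deployments": [
  { "name": "dev",  "configuration": { "lambdaName": "go-gin-starter-project" },      "awsUser": "awsUser", "awsRegion": "us-west-2" },
  { "name": "prod", "configuration": { "lambdaName": "go-gin-starter-project-prod" }, "awsUser": "awsUser", "awsRegion": "us-west-2" }
]
```

Each entry can then be targeted by the Yarn scripts, e.g. yarn infra up prod.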

Note it may seem unconventional to use a Yarn script, and ultimately an npm module, to deploy the infrastructure for our Go Lambda. However, using a scripting language to support the build and deployment of a compiled language is nothing unusual, and using Yarn allows us to use one language/framework for managing more complex projects that also involve frontend modules defined in React.


Deployment

Deployment, like standing up the infrastructure, can easily be achieved with a Yarn script referencing the environment we want to deploy to:

yarn deploy dev

This will build our Go project, package and zip it, and upload it to AWS. The credentials for the upload need to be defined in the file config/infra/aws/config.json, with contents such as the following:

{
  "users": [
    {
      "name": "awsUser",
      "type": "apiKey",
      "config": {
        "awsAccessKeyId": "[Access Key Id]",
        "awsSecretAccessKey": "[Secret Access Key]",
        "awsDefaultRegion": "us-west-2"
      }
    }
  ]
}

A guide on how to obtain these credentials is available in the Goldstack documentation. It is also possible to provide these credentials as environment variables, which can be useful for CI/CD.
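For CI/CD, the credentials can be supplied via the standard AWS environment variables (whether Goldstack reads these exact names is an assumption based on the common AWS SDK convention; check the Goldstack documentation for specifics):

```shell
export AWS_ACCESS_KEY_ID="<access key id>"
export AWS_SECRET_ACCESS_KEY="<secret access key>"
export AWS_DEFAULT_REGION="us-west-2"
```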


Adapting the Project

To adapt this starter project for your requirements, you will need to do the following:

  • Provide a config file with AWS credentials (or environment variables with the same)
  • Update packages/lambda-go-gin/goldstack.json with the infrastructure definitions for your project
  • Initialise the Yarn project with yarn
  • Deploy infrastructure using cd packages/lambda-go-gin; yarn infra up dev
  • Deploy the lambda using cd packages/lambda-go-gin; yarn deploy dev
  • Start developing your server code in packages/lambda-go-gin/server.go

This project also contains a script for local development. Simply run it with

cd packages/lambda-go-gin
yarn watch

This will spin up a local Gin server on http://localhost:8084 that provides the same API that is exposed via API Gateway, making for easy local testing of the Lambda. Also, deployment of the Lambda should only take a few seconds and can be triggered anytime using yarn deploy dev.
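With the watch server running, the status endpoint can be exercised from a second terminal (port as configured above):

```shell
curl http://localhost:8084/status
# → {"status":"ok"}
```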

If you want to skip the steps listed above to configure the project, you can generate a customised project with the Goldstack project builder.

This template is just a very basic Go project to be deployed to Lambda and actually my first foray into Go development. I haven’t based any larger projects on this yet to test it out, so any feedback to improve the template is welcome. I will also keep updating the template on Goldstack with any future learnings.

Optimise Next.js SEO

Next.js is an awesome framework for building websites and web applications. I have covered Next.js in multiple posts on this blog, such as Next.js with Bootstrap Getting Started. One of the advantages of Next.js is that it can generate static or server-side rendered versions of pages developed with React. This is great for making it easy for search engines to crawl your site.

However, there are a few additional steps that we need to take to optimise a Next.js page for search engines. This post lists the most important ones.

Ensure Every Page has a Title

In addition to the content of the page, the title of the page is also very important for search engine optimisation. The title may be displayed as the heading of search results for your page and also helps the search engine algorithm determine what your page is about.

Thankfully it is very easy to add a title to a page in Next.js. Simply import the Head component and you can define a title for your page:

import Head from 'next/head';

const Index = (): JSX.Element => {
  return (
    <>
      <Head>
        <title>My Page title</title>
        <meta property="og:title" content="My page title" key="title" />
      </Head>
      <h1>My page title</h1>
    </>
  );
};

export default Index;

Provide a Page Description

While the title of a page will be shown as the heading of search results, the description is used to provide further details about your page.

The description can be added in a similar way to the title, by again utilising the Head component. This time we add a <meta name="description"> element:

import Head from 'next/head';

const Index = (): JSX.Element => {
  return (
    <>
      <Head>
        <title>My Page title</title>
        <meta property="og:title" content="My page title" key="title" />
        <meta name="description" content="My description" />
        <meta property="og:description" content="My description" />
      </Head>
      <h1>My page title</h1>
    </>
  );
};

export default Index;

Ensure Irrelevant Pages Are Not Indexed

It is likely that your application will have pages that do not provide value for users coming in through a search result. These may for instance include test pages or pages that only make sense in the context of having viewed another page before.

For these pages, it makes sense to prevent search engines from indexing them. To do so, simply add the following meta element to your <Head> as shown above.

<meta name="robots" content="noindex" />

Generate a Sitemap

The best way to generate a sitemap is to compile a sitemap.xml file and place it into the public folder. This can be easily accomplished using the nextjs-sitemap package, which requires defining a basic script with some configuration (adjust this configuration to the needs of your project):

const { configureSitemap } = require('@sergeymyssak/nextjs-sitemap');

const Sitemap = configureSitemap({
  baseUrl: 'https://yourdomain.com', // replace with your domain
  exclude: ['/admin'],
  excludeIndex: true,
  pagesConfig: {
    '/about': {
      priority: '0.5',
      changefreq: 'daily',
    },
  },
  isTrailingSlashRequired: true,
  nextConfigPath: __dirname + '/next.config.js',
  targetDirectory: __dirname + '/public',
  pagesDirectory: __dirname + '/src/pages',
});

Sitemap.generateSitemap();

Install the package in your project:

npm install @sergeymyssak/nextjs-sitemap

And add a script into your package.json:

"scripts": {
  "generate-sitemap": "node sitemap-generator.js"
}

Now you can run the script to generate the sitemap:

npm run generate-sitemap

Note that a sitemap may also do more harm than good. So if you want to provide a sitemap, spend some time on the configuration and ensure that it is helpful to search engines.

While there are many further steps one can take to improve performance in search engines, the above three really help us get most of the way there. From here, the key is to provide high-quality, relevant content.

Express.js on Lambda Getting Started

AWS Lambda is a cost-efficient and easy way to deploy server applications. Express.js is a very popular Node.js framework that makes it very easy to develop REST APIs. This post will go through the basics of deploying an Express.js application to AWS Lambda.

You can also check out the sample project on GitHub.

Develop Express.js Server

We first need to implement our Express.js server. There is nothing in particular we need to keep in mind here; we can simply define routes etc. as we normally would:

import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import { rootHandler } from './root';

export const app: express.Application = express();

app.use(helmet());

if (process.env.CORS) {
  console.info(`Starting server with CORS domain: ${process.env.CORS}`);
  app.use(cors({ origin: process.env.CORS, credentials: true }));
}

app.get('/', rootHandler);

In order to publish this server in a Lambda, we need to add the aws-serverless-express package to our project. Once that is done, we can define a new file lambda.ts with the following content:

import awsServerlessExpress from 'aws-serverless-express';
import { app } from './server';

const server = awsServerlessExpress.createServer(app);

exports.handler = (event: any, context: any): any => {
  awsServerlessExpress.proxy(server, event, context);
};

Note that we are importing the app object from our server.ts file. We have also added the package source-map-support. Initialising this module in our code results in much easier-to-read stack traces in the Lambda console (since we will package up our Lambda with Webpack).

Please see all files that are required for the server, including the handler in the sample project.

Package Server

In order to deploy our server to AWS Lambda, we need to package it up into a ZIP file. Generally, Lambda accepts any Node.js application definition in the ZIP file, but we will package up our application using Webpack. This will drastically reduce the size of our server, which results in much improved cold start times for our Lambda.

For this, we simply add the webpack package to our project and define a webpack.config.js as follows:

/* eslint-disable @typescript-eslint/no-var-requires */
const path = require('path');
const PnpWebpackPlugin = require('pnp-webpack-plugin');

module.exports = {
  entry: './dist/src/lambda.js',
  output: {
    path: path.resolve(__dirname, 'distLambda'),
    filename: 'lambda.js',
    libraryTarget: 'umd',
  },
  target: 'node',
  mode: 'production',
  devtool: 'source-map',
  resolve: {
    plugins: [PnpWebpackPlugin],
  },
  resolveLoader: {
    plugins: [PnpWebpackPlugin.moduleLoader(module)],
  },
  module: {
    rules: [
      {
        // this is required to load source maps of libraries
        test: /\.(js|js\.map|map)$/,
        enforce: 'pre',
        use: [require.resolve('source-map-loader')],
      },
    ],
  },
};

Note that we are adding some configuration here to include source maps for easy-to-read stack traces, as well as to load an additional plugin to support Yarn PnP, which is used in the sample project.

Running webpack should result in the following files being generated:

lambda.js should be around 650 KB, which includes the whole Express server plus Helmet which we included earlier. Cold starts for the Lambda should be under one second with this file size.

Deploy to AWS

Lastly, we need to deploy this Lambda to AWS. For this we first need to define the infrastructure for the Lambda. In the sample project, this is done in Terraform:

resource "aws_lambda_function" "main" {
  function_name = var.lambda_name
  filename      = data.archive_file.empty_lambda.output_path
  handler       = "lambda.handler"
  runtime       = "nodejs12.x"
  memory_size   = 2048
  timeout       = 900
  role          = aws_iam_role.lambda_exec.arn

  lifecycle {
    # code uploads happen outside of Terraform
    ignore_changes = [
      filename,
    ]
  }
}

Important here is handler, which should match the file name and exported handler name of our packaged Node.js application. The file name is lambda.js and we defined exports.handler in lambda.ts above; therefore the handler we need to define for the Lambda is lambda.handler. Also note we set the runtime to nodejs12.x. This ensures that Lambda knows to run our application as a Node.js application. To see how to define a Lambda function manually, see this post.

Note that there is a bit more we need to configure, including an API gateway that will send through HTTP requests to our Lambda. To see all infrastructure definitions, see the AWS Infrastructure definitions in the sample project. One important thing to note is that we need to use a proxy integration in our API gateway. This will ensure that our lambda receives all HTTP calls for the gateway and allow our Express server to do the routing.

resource "aws_api_gateway_integration" "lambda" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  resource_id = aws_api_gateway_method.proxy.resource_id
  http_method = aws_api_gateway_method.proxy.http_method

  # Lambdas can only be invoked via POST – but the gateway will also forward GET requests etc.
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.main.invoke_arn
}

Once our Lambda is created, we can simply ZIP up the folder distLambda/ and upload this to AWS using the Lambda console.

In the supplied sample project, we use @goldstack/template-lambda-express to help us with uploading our Lambda. Under the hood, this is using the AWS CLI. You can also use the AWS CLI directly using the update-function-code operation.
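A minimal sketch of such a manual deployment with the AWS CLI could look as follows (the function name matches the sample project's goldstack.json below; adjust paths to your setup):

```shell
cd distLambda
zip -r ../lambda.zip .
aws lambda update-function-code \
  --function-name expressjs-lambda-getting-started \
  --zip-file fileb://../lambda.zip
```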

Next Steps

This post described a few fundamentals about deploying an AWS Lambda with an Express server. There are actually quite a number of steps involved to get a project working end to end. The easiest way I would recommend for getting a project with an Express server deployed on Lambda off the ground would be to use the tool Goldstack that I have developed, specifically the Express Lambda template. This has also been used to create the sample project for this post.

Otherwise, be welcome to check out the sample project on GitHub and modify it to your need. Note that one thing you will need to do is to update the goldstack.json configuration in packages/lambda-express/goldstack.json. Specifically you will need to change the domain configuration.

{
  "$schema": "./schemas/package.schema.json",
  "name": "lambda-express",
  "template": "lambda-express",
  "templateVersion": "0.1.19",
  "configuration": {},
  "deployments": [
    {
      "name": "prod",
      "configuration": {
        "lambdaName": "expressjs-lambda-getting-started",
        "apiDomain": "",
        "hostedZoneDomain": ""
      },
      "awsUser": "awsUser",
      "awsRegion": "us-west-2"
    }
  ]
}

More details about the properties in this configuration can be found here.

You will also need to create a config.json in config/infra/aws/config.json with AWS credentials for creating the infrastructure and deploying the Lambda.

{
  "users": [
    {
      "name": "awsUser",
      "type": "apiKey",
      "config": {
        "awsAccessKeyId": "your access key id",
        "awsSecretAccessKey": "your secret access key",
        "awsDefaultRegion": "us-west-2"
      }
    }
  ]
}

If you simply use the Goldstack UI to configure your project, all these files will be prepared for you, and you can also easily create a fully configured monorepo that also includes modules for a React application or email sending.

Deploy Next.js to AWS

Next.js is becoming ever more popular these days and rightfully so. It is an extremely powerful and well-made framework that provides some very useful abstractions for developing React applications.

An easy way to deploy a Next.js application is by using Vercel. However, it can also be useful to deploy Next.js applications into other environments to simplify governance and billing. One popular way to deploy frontend applications is by using the AWS services S3 and CloudFront.

This article describes how to set up the infrastructure required for running a Next.js application using Terraform on AWS, and some of the gotchas to keep in mind. You can also check out the code on GitHub.

Build Project

There are numerous ways to deploy a Next.js application. In our case, we will need to deploy our application as a static website.

For this, we will need to define the following script.

"scripts": {
  "build:next": "next build && next export -o webDist/"
}

Running this script will output a bundled, stand-alone version of the Next.js application that can be deployed on any webserver that can host static files.

Next.js bundle files


S3 Bucket

We will need an S3 bucket to store the files resulting from the Next.js build process. This essentially is just a public S3 bucket.

Below is the Terraform for generating such a bucket using the aws_s3_bucket resource. Note here:

  • The permissions for public read are set by acl = "public-read", but we also need the public-read bucket policy defined in the resource
  • The index_document and error_document correspond to those output in the previous step.
resource "aws_s3_bucket" "website_root" {
  bucket = "${var.website_domain}-root"

  acl = "public-read"

  # Remove this line if you want to prevent accidental deletion of bucket
  force_destroy = true

  website {
    index_document = "index.html"
    error_document = "404.html"
  }

  tags = {
    ManagedBy = "terraform"
    Changed   = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp())
  }

  policy = <<EOF
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${var.website_domain}-root/*"
    }
  ]
}
EOF

  lifecycle {
    ignore_changes = [tags]
  }
}

Next, we need to be able to upload our files to this S3 bucket. There are many ways to do this, for instance using the AWS CLI. I have also created an open-source package that provides an integrated way of handling the upload of the files (plus setting up infrastructure through Terraform): template-nextjs. This library will also be used if you choose to create a starter project on Goldstack.

One thing to keep in mind when uploading website resources to S3 is that we want to avoid errors on the user’s end during the deployment process. We for instance do not want to delete the files before we re-upload them, since this may result in a small window in which the files are unavailable for users.

This can be solved by uploading the new files first using the AWS S3 sync command, and then syncing a second time with the --delete flag to remove stale files. The resulting commands will look somewhat like this:

aws s3 sync . s3://[bucketname]/[path]
aws s3 sync . s3://[bucketname]/[path] --delete

Since Next.js generally generates hashed file names, you could also just keep all old files in the bucket. However, Next.js deployments can quickly become large (> 50 MB), so depending on how often you deploy, this could quickly result in a significant amount of unnecessary data stored in your S3 bucket.

For reference, see also the utility used by the template-nextjs package above: utilsS3Deployment.


CloudFront Distribution

While it is possible to use an S3 bucket by itself to host a static website, it is usually preferable to also use a CloudFront distribution in front of the bucket. This results in significantly faster load times for the user, and also enables us to make our website available through a secure https:// link.

There are quite a few moving pieces involved in setting up a CloudFront distribution, so the best reference point here will be the complete Terraform available in the example project on GitHub. However, I will put a few excerpts from the Terraform configuration below.

We first need to configure an origin that points to our S3 bucket defined earlier:

  origin {
    domain_name = aws_s3_bucket.website_root.website_endpoint
    origin_id   = "origin-bucket-${aws_s3_bucket.website_root.id}"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

And specify a root object that matches the index of our page:

  default_root_object = "index.html"

Then we need to set up the cache settings for CloudFront. A Next.js application consists of some files that should be loaded fresh by the user on every page load (e.g. index.html) but most files should only be downloaded once, and then can be cached safely. These pages can also be cached on CloudFront’s edge locations. This can be achieved in CloudFront using cache behaviours.

  # Priority 0
  ordered_cache_behavior {
    path_pattern     = "_next/static/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = "origin-bucket-${aws_s3_bucket.website_root.id}"

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

We can also configure CloudFront to service the correct 404 page provided by Next.js:

  custom_error_response {
    error_caching_min_ttl = 60
    error_code            = 404
    response_page_path    = "/404.html"
    response_code         = 404
  }

Finally, we can link the CloudFront distribution with an AWS-provided SSL certificate (free). There is more to this than the following excerpt (the certificate resource name here is illustrative); for more information, please have a browse around the source files.

  viewer_certificate {
    acm_certificate_arn      = aws_acm_certificate.wildcard_website.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2019"
  }

In the example project, we also configure two CloudFront distributions: one to serve the main website, and one to forward users from an alternate domain. So if a user goes to the alternate domain, they can be forwarded to the main one. You can find the configuration for that here.


Route 53

Lastly, we need to be able to link our CloudFront distribution to a domain, and for this we can use the Route 53 service on AWS. This service works both for domains purchased on AWS and for domains purchased through another domain provider.

This can be defined in Terraform easily:

# Creates the DNS record to point to the main CloudFront distribution
resource "aws_route53_record" "website_cdn_root_record" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = var.website_domain
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.website_cdn_root.domain_name
    zone_id                = aws_cloudfront_distribution.website_cdn_root.hosted_zone_id
    evaluate_target_health = false
  }
}

The example project assumes that a Route 53 hosted zone is already configured for the domain you would like to deploy your application to. You can find instructions on how to achieve that in the Goldstack documentation.


Gotchas

When deploying a Next.js application to AWS using the method described in this post, there are a few gotchas we need to keep in mind.

Chiefly, this deployment will not support Next.js API routes or server-side rendering. There are ways to provide some support for these, and there is a good Serverless module for that. However, things can quickly become complicated, so I would recommend instead deploying an API separately using Lambdas. Goldstack projects make that easy by allowing you to define the Next.js application and the Lambdas providing the API in one monorepo.

There are also some issues related to pre-fetching pages in certain cases. These can be avoided by not using the Next.js <Link> component.

Final Thoughts

If you are looking for a fully integrated experience for deploying Next.js applications and you do not worry too much about costs and system governance, I would recommend deploying to Vercel. You can still use the Goldstack Next.js templates if you are interested in that (see Vercel Deployment).

However, there are some benefits in being able to have your entire infrastructure defined in one cloud provider, and hosting a Next.js website on AWS is very cost effective and provides a good experience for users. Also, AWS has a great reputation for service uptime, especially for the services involved in hosting this solution (S3, CloudFront, Route 53).

Feel free to clone the sample project on GitHub or be welcome to check out Goldstack which provides an easy UI tool to configure your Next.js project (plus link it to additional services such as email sending and data storage on S3).

Overwrite Author in Git History

With every commit, Git records the name of the author as well as the committer, along with their respective email addresses. These will be public once you push your project to GitHub, so sometimes it may be advisable to change the email addresses of the author and committer for all the past commits in your repository.

This can easily be verified by running git log.

Git keeping track of my email address …
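For instance, a quick way to print the recorded identities per commit (using git log's standard format placeholders):

```shell
git log --format='author: %an <%ae>  committer: %cn <%ce>'
```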

Thankfully, it is surprisingly easy to change the email addresses of the author and committer in the repository. Simply run the following command in the top-level directory of your working tree:

git filter-branch -f --env-filter "GIT_AUTHOR_EMAIL='new@email.com' GIT_COMMITTER_EMAIL='new@email.com';" HEAD

Finally just do a push.

git push --force

Note that adding --force is important here, since otherwise the changes will be rejected by the remote with the error message:

 ! [rejected]        master -> master (non-fast-forward)
error: failed to push some refs to ''
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

Do not do a git pull in that case since that will undo the updating of the author and committer.

If you only want to update the author or committer of some of the commits, you can also use git filter-branch with a commit filter, for instance as follows:

git filter-branch --commit-filter '
      if [ "$GIT_AUTHOR_EMAIL" = "to_update@mail" ];
      then
              GIT_AUTHOR_NAME="New Name";
              git commit-tree "$@";
      else
              git commit-tree "$@";
      fi' HEAD

Note that it is easy for things to go wrong when providing the multi-line commit-filter – the easiest way is to put this command into a separate script file.

Next.js with Bootstrap Getting Started

Next.js is an open-source framework for React that aspires to reduce the amount of boilerplate code required for developing React applications. Key features that Next.js provides out of the box are:

  • Routing
  • Code Splitting
  • Server side rendering

I recently developed a small example application with Next.js and came across some minor difficulties when trying to use React Bootstrap within the Next.js application. I will therefore provide a quick guide on how to get Next.js and Bootstrap working together.

Next.js Application with Bootstrap Styling

Thankfully, it is quite easy to get React Bootstrap and Next.js working together once one knows what to do. Essentially this can be accomplished in three steps.

Step 1: Initialise project

We first create a new Next.js project and install the required dependencies:

yarn init
yarn add react bootstrap next @zeit/next-css react-bootstrap react-dom

Then add the scripts to build and deploy Next.js to package.json:

  "scripts": {
    "dev": "next",
    "build": "next build",
    "start": "next start"
  }

Step 2: Customise Next.js Configuration

In order for Next.js to be able to load the Bootstrap CSS for all pages, we need to create a next.config.js file in our project root and provide the following configuration:

const withCSS = require('@zeit/next-css')

module.exports = withCSS({
  cssLoaderOptions: {
    url: false
  }
})

Step 3: Load Bootstrap CSS for All Pages

Next, we need to create the file pages/_app.js. This allows us to define logic that runs for every page that Next.js renders, across client-side, server-side and static rendering.

We only need to ensure that the Bootstrap CSS is loaded:

// ensure all pages have Bootstrap CSS
import 'bootstrap/dist/css/bootstrap.min.css';

function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />;
}

export default MyApp;

And that’s it! We can now start developing pages and components using Bootstrap styled React components:

import Container from 'react-bootstrap/Container';
import Row from 'react-bootstrap/Row';
import Col from 'react-bootstrap/Col';

function Landing() {
  return (
    <Container>
      <Row>
        <Col><h1>Next.js React Bootstrap</h1></Col>
      </Row>
    </Container>
  );
}

export default Landing;

I have also put together a small project on GitHub that makes use of Next.js and React Bootstrap:


If you are looking for something a bit more comprehensive, please have a look at the following template as well. This includes scripts for deployment into AWS, ESLint/TypeScript configuration and is regularly updated:

Goldstack Next.js + Bootstrap Template

Strapi 2020 Quick Review

I have recently reviewed KeystoneJS for a little project I am planning. I found it overall quite good but lacking in a few aspects, particularly in the way migrations are handled (or not handled). After some research, it seems that Strapi could be a possible alternative to KeystoneJS and so I decided to give this solution a quick review as well.

As I’ve done for KeystoneJS, I will first go through a little example and then conclude with my thoughts.

Getting Started

I started the project by simply running yarn create:

yarn create strapi-app strapi --quickstart

Since Strapi uses SQLite for local development, the Strapi server is ready to go after running this command (no database connection configuration is required). I then logged into the administration console and created an admin user and password.

I then went into the Content Type Builder and created two types/tables: Quote, which holds an author and a quote, and Tag, which holds tag names.

Content Type Builder

Creating these was really straightforward. In the background, Strapi created matching definitions in the project directory api/:


  "connection": "default",
  "collectionName": "quotes",
  "info": {
    "name": "Quote"
  "options": {
    "increments": true,
    "timestamps": true
  "attributes": {
    "Author": {
      "type": "string"
    "tags": {
      "collection": "tag"
    "Quote": {
      "type": "string"


  "connection": "default",
  "collectionName": "tags",
  "info": {
    "name": "Tag"
  "options": {
    "increments": true,
    "timestamps": true
  "attributes": {
    "Name": {
      "type": "string",
      "required": true,
      "unique": true

The schema defined in these JSON files is dynamically translated into operations modifying the schema of the database Strapi is connected to. Upon deployment to a production system, Strapi will create correct schemas in the attached production database; e.g. for MongoDB or Postgres (see Running Strapi in production and version control sync, Create db schema from models).
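For instance, a production Postgres connection can be configured in a file such as config/environments/production/database.json. The exact path, connector and variable names depend on the Strapi version, so the values below are purely illustrative:

```json
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "bookshelf",
      "settings": {
        "client": "postgres",
        "host": "${process.env.DATABASE_HOST}",
        "port": "${process.env.DATABASE_PORT}",
        "database": "${process.env.DATABASE_NAME}",
        "username": "${process.env.DATABASE_USERNAME}",
        "password": "${process.env.DATABASE_PASSWORD}"
      },
      "options": {}
    }
  }
}
```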

I then installed the GraphQL plugin. For me, this did not work through the admin web interface (it would just get stuck).

Strapi getting stuck after trying to install GraphQL plugin

I needed to run yarn again after this to fix the project. However, installing using the Strapi CLI worked without problems:

yarn strapi install graphql

Next, I went to the Roles & Permissions plugin to configure public access to the endpoints generated from the models:

Setting Permissions in Strapi

It must be noted here that permission settings are not reflected in the source code of the Strapi project. They are therefore not versioned and cannot easily be deployed to testing and production environments (see #672 Permissions flow between environments).

After the permissions have been set, it is very easy to query the GraphQL API:

GraphQL query against API
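In text form, a query like the one in the screenshot looks roughly as follows (field names follow the Quote type defined earlier; Strapi pluralises the collection name for the query):

```graphql
query {
  quotes {
    id
    Author
    Quote
    tags {
      Name
    }
  }
}
```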

I finally developed a little Next.js application that queries the GraphQL API exposed by Strapi. This is as simple as hooking up an Apollo Client with the endpoint exposed by Strapi:

const client = new ApolloClient({
  uri: 'http://localhost:1337/graphql',
});

Which then makes it very easy to write dynamic pages with React:

import { useQuery } from '@apollo/react-hooks';
import { gql } from "apollo-boost";

const QUOTES = gql`
  {
    quotes {
      id
      Author
    }
  }
`;

const QuoteList: any = () => {
  const { loading, error, data } = useQuery(QUOTES);
  if (error) return "Error loading quotes";
  if (loading) return "Loading ...";

  const { quotes } = data;

  return <ul>{{ Author, id }) => {
    return <li key={id}>{Author}</li>;
  })}</ul>;
};

export default QuoteList;
Next.js app powered by Strapi Backend

All code I’ve developed for this example is available on GitHub:

Quick Review

Based on my experiences building the simple example above and studies of the documentation, my initial impressions of Strapi are:

  • I was very impressed with the speed of development using Strapi. I especially liked the Content Type Builder to quickly design the schema for the data.
  • Strapi provides both a very powerful Restful and GraphQL API for the defined data.
  • In contrast to KeystoneJS, database migrations are handled seamlessly.
  • Strapi still feels a bit rough around the edges: for instance, some plugins lack proper descriptions, and it crashed on me when trying to install the GraphQL plugin. I probably wouldn’t feel comfortable rolling it out for a mission-critical production system.
  • For some reason, permissions are not migrated between environments; they are only stored in the database of the local system. I believe this can make deploying Strapi quite painful.

Overall, I think Strapi is a great technology, and so far it appears the best fit for the small project I am planning. I am especially impressed by the ‘no code’ approach to define the data models.

See also:

5 Things I love about Strapi, a Node.js headless CMS

KeystoneJS 5 Quick Review

I have recently started on a little project to organise the quotes that I have collected over years of reading (see kindle-citation-extractor). I originally got my quotes into Airtable, but I quickly hit the limit for the free tier.

I figured that it would be great if I could develop a simple database with a simple user interface. Ideally I would not want to implement the basic CRUD views and so I had a look around for tools that can generate simple UIs for databases. My initial search revealed Keystone and Strapi.

I really liked the looks of KeystoneJS (Version 5) since it appears simple and clean. In this article, I will first document my experiences with the Getting Started example for KeystoneJS and conclude with my first impressions and comparison to similar solutions.

Getting Started

After some browsing around, I decided to follow the getting started guide from the Keystone documentation.

I am particularly interested in running Keystone with Postgres, so to get my local example running, I quickly spun up a Postgres server using Docker:

docker run --name keystone-pg -e POSTGRES_PASSWORD=password -d -v db:/var/lib/postgresql/data -p 5432:5432 postgres


Then I configured the keystone project as per instructions:

yarn create keystone-app keystone-playground

Provided answers for the prompts:

Prompts for Keystone Project initialisation

Then I connected to the Postgres instance in Docker and created a keystone database:

Create keystone database

And finally ran the example:

DATABASE_URL=postgres://postgres:password@localhost:5432/keystone yarn dev

Unfortunately, loading the AdminUI then resulted in the following error:

> GraphQL error: select count(*) from "public"."Todo" as "t0" where true – relation "public.Todo" does not exist

There appears to be an open issue for this already: Trouble running starter

I was able to fix this issue by modifying index.js as follows:

const keystone = new Keystone({
  adapter: new Adapter({
    dropDatabase: true,
    knexOptions: {
      client: 'postgres',
      connection: process.env.DATABASE_URL,
    },
  }),
});
Adding the dropDatabase option here seems to force Keystone to create the required tables in the database upon startup. Note that this option drops all existing data every time the application starts, so it is only suitable for local experimentation.

Keystone example

The interface on localhost:3000 is also up and running:

Keystone 5 Example To Do list App

Quick Review

Based on looking around the documentation and my experiences with the sample app, my observations for Keystone JS 5 are as follows:

  • KeystoneJS 5 appears very modern, with excellent capabilities for GraphQL.
  • Based on my experiences with the Getting Started example, it seems that the documentation for KeystoneJS leaves some things to be desired.
  • I like how lightweight KeystoneJS feels. It runs fast and the code to configure it seems very straightforward and simple.
  • A few lines of declarative code can yield impressive outcomes, such as a fully featured GraphQL API and a nice admin interface.
  • Seems like it is possible to deploy Keystone in Serverless environments, see Serverless deployment using Now.
  • KeystoneJS does not manage migrations when the data model is changed (see this comment). This requires creating any additional lists and fields manually in the database. Here is an example of how this can be accomplished using Knex migrations.

Potential alternatives for KeystoneJS are:

  • Strapi: Very similar to Keystone but based on a REST API first (GraphQL is available as a plugin). Allows creating and editing table schemas using the Admin UI. Overall it is more of a CMS than KeystoneJS.
  • Prisma: Prisma is closer to traditional ORM tools than KeystoneJS. The recently released Prisma Admin is similar to the Admin interface of KeystoneJS. Prisma offers a client library, whereas KeystoneJS depends on clients interfacing with the data through the GraphQL API.

Overall I still believe that KeystoneJS is a viable technology for my use case. My biggest concern is around migrations; I believe it may be quite difficult to orchestrate these across development, test and production systems. I will probably continue to poke around a bit more in the KeystoneJS examples and documentation and possibly try out one of the alternatives.

I have uploaded my project resulting from following the Getting Started guide to GitHub. I think it can be quite useful for complementing the existing Getting Started documentation, particularly when wanting to get started using Postgres:


Knex and Typescript Starter Project

SQL is a very expressive and powerful language. Unfortunately, it has often been difficult to interact with a database using SQL from object-oriented languages due to a mismatch between the data structures in the database and those in the application programming language. One common solution to this problem were Object-relational mapping (ORM) frameworks, which often come with their own issues.

I was most delighted when I started working with Knex, a simple yet versatile framework for connecting and working with relational databases in JavaScript. Using it feels like working with an ORM but it only provides a very thin abstraction layer on top of SQL; this helps avoid many of the pitfalls potentially introduced by ORMs while still providing us with most of their conveniences.

As it turns out, Knex has excellent TypeScript support and I think building applications relying on a database using Knex and TypeScript is an excellent starting point.

I have put together a small project on GitHub that sets up the basics of getting started with Knex and TypeScript. This project specifically focuses on setting up Knex and TypeScript; no other framework, for instance Express, is included.

You can go ahead and clone the project from here:

After running yarn the following scripts can be run:

  • yarn test: Which will set up Jest in watch mode.
  • yarn build: Which will transpile TypeScript to ES6.
  • yarn watch: Which will run index.ts after every change (compiling any changes using tsc).

The scripts for defining the database schema are placed in the folder migrations. Here is the only currently defined migration:

import Knex from "knex";
import { Migration } from "./../migrationUtil";

export const migrations: Migration[] = [
    {
        name: "000_define_quotes",
        async up(knex: Knex) {
            await knex.schema.createTable("quotes",
                (table) => {
                    table.timestamp("created", { useTz: true });
                    table.string("author", 512).notNullable();
                    table.string("book", 1024).notNullable();
                    table.dateTime("date_collected", { useTz: true });
                    table.string("location", 1024).notNullable();
                    table.string("link", 1024).notNullable();
                    table.index(["document_id"], "document_id_index");
                });

            await knex.schema.createTable("tags",
                (table) => {
                    table.timestamp("created", { useTz: true });
                    table.string("name", 512).notNullable();
                });
        },
        /* eslint-disable-next-line  @typescript-eslint/no-empty-function */
        async down(knex: Knex) {},
    },
];

This migration is then registered as one of the migrations for the application:

import { migrations as mig001 } from "./migrations/001_define_quotes";
import { runMigration, Migration } from "./migrationUtil";
import Knex from "knex";

export async function runMigrations({ knex }: { knex: Knex }): Promise<void> {
  const migrations: Migration[] = [].concat(mig001);
  await runMigration({ migrations, knex });
}

The migrations are run upon application start up or before tests are run. See the test in migrations.test.ts:

import { runMigrations } from "../src/migrations";
import Knex from "knex";

describe("Test migrations.", () => {
  it("Should run migrations without error.", async () => {
    const knex = Knex({
      client: "sqlite3",
      connection: { filename: ":memory:" },
      pool: { min: 1, max: 1 },
    });
    await runMigrations({ knex });
    await knex.destroy();
  });
});


Note that this way of running migrations differs a bit from the default suggested on the Knex website, namely to define migrations in individual files and run them through the Knex CLI. I find that default a bit cumbersome; defining the migrations as a native part of the application allows for more flexibility. Specifically, it makes it easier to test the application and allows us to develop the application in a more modular way, by defining migrations per module rather than for the application as a whole (as long as foreign keys are used sparingly).

This is just a very simple starter project. There are other starter projects for TypeScript, such as TypeScript-Node.

Textures and Lighting with React and Three.js

In my previous three posts, I have developed a simple WebGL application using react-three-fiber and three.js. In this post, I am adding texture loading and proper lighting to the application.

For reference, here are the links to the previous versions of the app:

  • Version 1: Just being able to drag a shape on the screen
  • Version 2: Dragging and dropping shapes using physics
  • Version 3: Being able to move the camera

Here is the version developed for this post:


Source Code

You can click to add objects, click and drag them as well as move the camera using WASD keys and mouse wheel to zoom.

Loading Textures

Textures can be loaded easily in react-three-fiber using the useLoader hook.

All that is required is to place the texture in the public/ folder of the React application, load it, and then link it to the material by setting the map property:

    const [texture] = useLoader(TextureLoader, 'textures/grasslight-big.jpg');

    if (texture) {
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
        texture.repeat.set(1500, 1500);
        texture.anisotropy = 16;
    }

    return (
        <mesh ref={ref} receiveShadow position={position}>
            <planeBufferGeometry attach="geometry" args={[10000, 10000]} />
            {texture &&
                <meshPhongMaterial attach="material" map={texture} />
            }
        </mesh>
    );

I found that textures are often quite large; larger than 1 MB. This significantly extends loading times, so I added a simple loading screen. Unfortunately, to display the text ‘loading’ I had to create a TextGeometry, which in turn required a font to be loaded (I prepared the Roboto font using facetype.js). This font by itself is more than 300 KB, so even the loading screen takes a moment to appear.


Lighting

The goal of this application is to have a simple, very large plane onto which any number of objects may be added. The issue I encountered was that getting shadows to work with a DirectionalLight turned out to be very tricky. In the end, I used a combination of an AmbientLight with a SpotLight.

        <ambientLight intensity={0.9} />

        <primitive object={lightTarget} position={lightTargetPosition} />
        <spotLight
            position={lightPosition}
            target={lightTarget}
            castShadow
            angle={Math.PI / 3}
            shadow-mapSize={new Vector2(2048 * 5, 2048 * 5)}
        />

Since the SpotLight would not be able to cover the whole of the plane (which, as mentioned, is meant to be very large) while providing accurate shadows, I opted for moving the SpotLight whenever the user moves the camera:

    const lightTargetYDelta = 120;
    const lightTargetXDelta = 80;
    const [lightPosition, setLightPosition] = useState([-lightTargetXDelta, -lightTargetYDelta, 200]);
    const [lightTargetPosition, setLightTargetPosition] = useState([0, 0, 0]);
    const onCameraMoved = (delta) => {
        const newLightPosition =, idx) => lightPosition[idx] + e);
        setLightPosition(newLightPosition);
        const newLightTargetPosition = [newLightPosition[0] + lightTargetXDelta, newLightPosition[1] + lightTargetYDelta, 0];
        setLightTargetPosition(newLightTargetPosition);
    };

This required both updating the position of the light (setLightPosition) and moving the light target (setLightTargetPosition).


Since the amount of code for this example increased quite a bit over the past three iterations, I broke up the application into multiple modules, with most React components now sitting in their own file.

I think this really shows the advantage of using React with Three.js, since it is easy for each component to manage its own state.

For the next iteration, I will most likely be looking at how I can remove the textures or use much smaller textures. I would like the application to be able to load as quickly as possible, and textures clearly do not seem a great option for this.