Optimise Next.js SEO

Next.js is an awesome framework for building websites and web applications. I have covered Next.js in multiple posts on this blog, such as Next.js with Bootstrap Getting Started. One of the advantages of Next.js is that it can generate static or server-side rendered versions of pages developed with React. This is great for making it easy for search engines to crawl your site.

However, there are a few additional steps that we need to take to optimise a Next.js page for search engines. This post lists the most important ones.

Ensure Every Page has a Title

In addition to the content of the page, the title is also very important for the search engine optimisation of your page. The title may be displayed as the heading of search results for your page and also helps the search engine algorithm determine what your page is about.

Thankfully it is very easy to add a title to a page in Next.js. Simply import the Head component and you can define a title for your page:

import Head from 'next/head';

const Index = (): JSX.Element => {
  return (
    <>
      <Head>
        <title>My page title</title>
        <meta property="og:title" content="My page title" key="title" />
      </Head>
      <h1>My page title</h1>
    </>
  );
};

export default Index;

Provide a Page Description

While the title of a page will be shown as the heading of search results, the description is used to provide further details about your page.

The description can be added in a similar way to the title, again utilising the Head component. This time we add a <meta name="description"> element:

import Head from 'next/head';

const Index = (): JSX.Element => {
  return (
    <>
      <Head>
        <title>My page title</title>
        <meta property="og:title" content="My page title" key="title" />
        <meta name="description" content="My description" />
        <meta property="og:description" content="My description" />
      </Head>
      <h1>My page title</h1>
    </>
  );
};

export default Index;

Ensure Irrelevant Pages are not Indexed

It is likely that your application will have pages that do not provide value for users coming in through a search result. These may, for instance, include test pages or pages that only make sense in the context of having viewed another page before.

For these pages, it makes sense to prevent search engines from indexing them. For this, simply add the following meta element into your <Head> as shown above.

<meta name="robots" content="noindex">

Generate a Sitemap

The best way to provide a sitemap is to compile a sitemap.xml file and place it into the public folder. This can be easily accomplished using the nextjs-sitemap package, which requires defining a basic script with some configuration (adjust this configuration to the needs of your project):

const { configureSitemap } = require('@sergeymyssak/nextjs-sitemap');

const Sitemap = configureSitemap({
  baseUrl: 'https://example.com',
  exclude: ['/admin'],
  excludeIndex: true,
  pagesConfig: {
    '/about': {
      priority: '0.5',
      changefreq: 'daily',
    },
  },
  isTrailingSlashRequired: true,
  nextConfigPath: __dirname + '/next.config.js',
  targetDirectory: __dirname + '/public',
  pagesDirectory: __dirname + '/src/pages',
});

Sitemap.generateSitemap();

Install the package in your project:

npm install @sergeymyssak/nextjs-sitemap

And add a script into your package.json:

"scripts": {
"generate-sitemap": "node sitemap-generator.js",
},

Now you can run the script to generate the sitemap:

npm run generate-sitemap
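
You can additionally point crawlers at the generated sitemap from a robots.txt file in the public folder; a minimal sketch, assuming the site is served at https://example.com:

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml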

Note that a sitemap may also do more harm than good. So if you want to provide a sitemap, spend some time on the configuration and ensure that it is helpful to search engines.

While there are many further steps that one can take to improve performance in search engines, the above steps really help us get most of the way there. From here, the key is to provide high-quality, relevant content.

Express.js on Lambda Getting Started

AWS Lambda is a cost-efficient and easy way to deploy server applications. Express.js is a very popular Node.js framework that makes it very easy to develop REST APIs. This post will go through the basics of deploying an Express.js application to AWS Lambda.

You can also check out the sample project on GitHub.

Develop Express.js Server

We first need to implement our Express.js server. There is nothing in particular we need to keep in mind here; we can simply define routes etc. as we normally would:

import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import { rootHandler } from './root';

export const app: express.Application = express();

app.use(helmet());
if (process.env.CORS) {
  console.info(`Starting server with CORS domain: ${process.env.CORS}`);
  app.use(cors({ origin: process.env.CORS, credentials: true }));
}
app.use(express.json());
app.get('/', rootHandler);
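The rootHandler imported above lives in a root.ts file; a hypothetical sketch of what it could look like (the response body is just an example):

import { Request, Response } from 'express';

export const rootHandler = (req: Request, res: Response): void => {
  // respond with a simple JSON payload
  res.json({ message: 'Hello from Express!' });
};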

In order to publish this server in a Lambda, we will need to add the aws-serverless-express package to our project. Once that is done, we can define a new file lambda.ts with the following content:

require('source-map-support').install();

import awsServerlessExpress from 'aws-serverless-express';
import { app } from './server';

const server = awsServerlessExpress.createServer(app);

exports.handler = (event: any, context: any): any => {
  awsServerlessExpress.proxy(server, event, context);
};

Note that we are importing the app object from our server.ts file. We have also added the package source-map-support; initialising this module in our code results in much easier-to-read stack traces in the Lambda console (since we will package up our Lambda with Webpack).

Please see all files that are required for the server, including the handler, in the sample project.

Package Server

In order to deploy our server to AWS Lambda, we need to package it up into a ZIP file. Lambda generally accepts any Node.js application definition in the ZIP file, but we will bundle our application using Webpack. This drastically reduces the size of our server, which results in much improved cold start times for our Lambda.

For this, we simply add the webpack package to our project and define a webpack.config.js as follows:

/* eslint-disable @typescript-eslint/no-var-requires */
const path = require('path');
const PnpWebpackPlugin = require('pnp-webpack-plugin');

module.exports = {
  entry: './dist/src/lambda.js',
  output: {
    path: path.resolve(__dirname, 'distLambda'),
    filename: 'lambda.js',
    libraryTarget: 'umd',
  },
  target: 'node',
  mode: 'production',
  devtool: 'source-map',
  resolve: {
    plugins: [PnpWebpackPlugin],
  },
  resolveLoader: {
    plugins: [PnpWebpackPlugin.moduleLoader(module)],
  },
  module: {
    rules: [
      // this is required to load source maps of libraries
      {
        test: /\.(js|js\.map|map)$/,
        enforce: 'pre',
        use: [require.resolve('source-map-loader')],
      },
    ],
  },
};

Note that we are adding some configuration here to include source maps for easy-to-read stack traces, as well as loading an additional plugin to support Yarn PnP, which is used in the sample project.
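
The bundle can then be produced by first transpiling the TypeScript sources and then running Webpack; a sketch, assuming tsc is configured to emit to dist/ (matching the entry point above):

yarn tsc
yarn webpack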

Running webpack should result in the bundled application being generated in the distLambda/ folder. The main file lambda.js should be around 650 kB, which includes the whole Express server plus Helmet, which we included earlier. Cold starts for the Lambda should be sub 1 s with this file size.

Deploy to AWS

Lastly, we need to deploy this Lambda to AWS. For this we first need to define the infrastructure for the Lambda. In the sample project, this is done in Terraform:

resource "aws_lambda_function" "main" {
function_name = var.lambda_name
filename = data.archive_file.empty_lambda.output_path
handler = "lambda.handler"
runtime = "nodejs12.x"
memory_size = 2048
timeout = 900
role = aws_iam_role.lambda_exec.arn
lifecycle {
ignore_changes = [
filename,
]
}
}

Important here is the handler attribute, which should match the file name and exported handler name of our packaged Node.js application. The file name is lambda.js and we defined an export named handler in lambda.ts above; therefore the handler we need to define for the Lambda is lambda.handler. Also note that we set the runtime to nodejs12.x. This ensures that Lambda knows to run our application as a Node.js application. To see how to define a Lambda function manually, see this post.

Note that there is a bit more we need to configure, including an API Gateway that will pass HTTP requests through to our Lambda. To see all infrastructure definitions, see the AWS infrastructure definitions in the sample project. One important thing to note is that we need to use a proxy integration in our API Gateway. This ensures that our Lambda receives all HTTP calls for the gateway and allows our Express server to do the routing.

resource "aws_api_gateway_integration" "lambda" {
rest_api_id = aws_api_gateway_rest_api.main.id
resource_id = aws_api_gateway_method.proxy.resource_id
http_method = aws_api_gateway_method.proxy.http_method
# Lambdas can only be invoked via Post – but the gateway will also forward GET requests etc.
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.main.invoke_arn
}

Once our Lambda is created, we can simply ZIP up the folder distLambda/ and upload this to AWS using the Lambda console.
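
For example, assuming the bundle was generated in distLambda/:

cd distLambda
zip -r ../lambda.zip .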

In the supplied sample project, we use @goldstack/template-lambda-express to help us with uploading our Lambda. Under the hood, this uses the AWS CLI. You can also use the AWS CLI directly via the update-function-code operation.
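
A sketch of the direct CLI call (the function name is a placeholder and must match the Lambda created above):

aws lambda update-function-code \
  --function-name my-express-lambda \
  --zip-file fileb://lambda.zip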

Next Steps

This post described a few fundamentals of deploying an AWS Lambda with an Express server. There are actually quite a number of steps involved in getting a project working end to end. The easiest way I would recommend for getting a project with an Express server deployed on Lambda off the ground is to use the tool Goldstack that I have developed, specifically the Express Lambda template. This has also been used to create the sample project for this post.

Otherwise, feel free to check out the sample project on GitHub and modify it to your needs. Note that one thing you will need to do is update the goldstack.json configuration in packages/lambda-express/goldstack.json; specifically, you will need to change the domain configuration.

{
  "$schema": "./schemas/package.schema.json",
  "name": "lambda-express",
  "template": "lambda-express",
  "templateVersion": "0.1.19",
  "configuration": {},
  "deployments": [
    {
      "name": "prod",
      "configuration": {
        "lambdaName": "expressjs-lambda-getting-started",
        "apiDomain": "expressjs-lambda.examples.goldstack.party",
        "hostedZoneDomain": "goldstack.party"
      },
      "awsUser": "awsUser",
      "awsRegion": "us-west-2"
    }
  ]
}

More details about the properties in this configuration can be found here.

You will also need to create a config.json in config/infra/aws/config.json with AWS credentials for creating the infrastructure and deploying the Lambda.

{
  "users": [
    {
      "name": "awsUser",
      "type": "apiKey",
      "config": {
        "awsAccessKeyId": "your access key id",
        "awsSecretAccessKey": "your secret access key",
        "awsDefaultRegion": "us-west-2"
      }
    }
  ]
}

If you simply use the Goldstack UI to configure your project, all these files will be prepared for you, and you can also easily create a fully configured monorepo that also includes modules for a React application or email sending.

Overwrite Author in Git History

With every commit, Git records the name of the author as well as the committer, along with their respective email addresses. These will be public once you push your project to GitHub. So it may sometimes be advisable to change the email addresses of the author and committer for all past commits in your repository.

The recorded addresses can easily be inspected by running git log.
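
For instance, to show both author and committer emails for each commit:

git log --format='%h author: %an <%ae> committer: %cn <%ce>'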

Thankfully, it is surprisingly easy to change the email addresses of author and committer in the repository. Simply run the following command at the top level of your working tree:

git filter-branch -f --env-filter "GIT_AUTHOR_EMAIL='newemail@site.com' GIT_COMMITTER_EMAIL='newemail@site.com';" HEAD

Finally just do a push.

git push --force

Note that adding --force is important here, since otherwise the changes will be rejected by the remote with the error message:

 ! [rejected]        master -> master (non-fast-forward)
error: failed to push some refs to 'git@github.com:repo/repo.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

Do not do a git pull in that case since that will undo the updating of the author and committer.

If you only want to update the author or committer of some of the commits, you can also use git filter-branch, for instance as follows:

git filter-branch --commit-filter '
      if [ "$GIT_AUTHOR_EMAIL" = "to_update@mail" ];
      then
              GIT_AUTHOR_NAME="New Name";
              GIT_AUTHOR_EMAIL="new@email.com";
              git commit-tree "$@";
      else
              git commit-tree "$@";
      fi' HEAD

Note that it is easy for things to go wrong here when providing the multi-line commit-filter; the easiest way is to put this command into a separate script file.

Next.js with Bootstrap Getting Started

Next.js is an open-source framework for React that aspires to reduce the amount of boilerplate code required for developing React applications. Key features that Next.js provides out of the box are:

  • Routing
  • Code Splitting
  • Server side rendering

I recently developed a small example application with Next.js and came across some minor difficulties when trying to use React Bootstrap within the Next.js application. I will therefore provide a quick guide here on how to get Next.js and Bootstrap working together.


Thankfully, it is quite easy to get React Bootstrap and Next.js working together once one knows what to do. Essentially this can be accomplished in three steps.

Step 1: Initialise project

We first create a new Next.js project and install the required dependencies:

yarn init
yarn add react bootstrap next @zeit/next-css react-bootstrap react-dom

Then add the scripts to build and deploy Next.js to package.json:

  "scripts": {
    "dev": "next",
    "build": "next build",
    "start": "next start"
  },

Step 2: Customise Next.js Configuration

In order for Next.js to be able to load the Bootstrap CSS for all pages, we need to create a next.config.js file in our project root and provide the following configuration:

const withCSS = require('@zeit/next-css')

module.exports = withCSS({
  cssLoaderOptions: {
    url: false
  }
});

Step 3: Load Bootstrap CSS for All Pages

Next, we need to create the file pages/_app.js. This allows us to define logic that runs for every page that Next.js renders, whether for client-side, server-side or static rendering.

We only need to ensure that the Bootstrap CSS is loaded:

// ensure all pages have Bootstrap CSS
import 'bootstrap/dist/css/bootstrap.min.css';

function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />;
}

export default MyApp;

And that’s it! We can now start developing pages and components using Bootstrap styled React components:

import Container from 'react-bootstrap/Container';
import Row from 'react-bootstrap/Row';
import Col from 'react-bootstrap/Col';

function Landing() {
  return (
    <Container>
      <Row>
        <Col>
          <h1>Next.js React Bootstrap</h1>
        </Col>
      </Row>
    </Container>
  );
}

export default Landing;

I have also put together a small project on GitHub that makes use of Next.js and React Bootstrap:

next-js-react-bootstrap

If you are looking for something a bit more comprehensive, please have a look at the following template as well. This includes scripts for deployment into AWS, ESLint/TypeScript configuration and is regularly updated:

Goldstack Next.js + Bootstrap Template

KeystoneJS 5 Quick Review

I have recently started on a little project to organise the quotes that I have collected in years of reading (see kindle-citation-extractor). I originally got my quotes into Airtable, but I quickly hit the limit for the free tier.

I figured that it would be great if I could develop a simple database with a basic user interface. Ideally, I would not want to implement the usual CRUD views myself, so I had a look around for tools that can generate simple UIs for databases. My initial search revealed Keystone and Strapi.

I really liked the look of KeystoneJS (version 5) since it appears simple and clean. In this article, I will first document my experiences with the Getting Started example for KeystoneJS and conclude with my first impressions and a comparison to similar solutions.

Getting Started

After some browsing around, I decided to follow the getting started guide from the Keystone documentation.

I am particularly interested in running Keystone with Postgres, so to get my local example running, I quickly spun up a Postgres server using Docker:

docker run --name keystone-pg -e POSTGRES_PASSWORD=password -d -v db:/var/lib/postgresql/data -p 5432:5432 postgres

(db-start.sh)

Then I configured the Keystone project as per the instructions:

yarn create keystone-app keystone-playground

I provided answers for the Keystone project initialisation prompts.

Then I connected to the Postgres instance in Docker and created a keystone database.
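
One way to do this, assuming the container name from the Docker command above:

docker exec keystone-pg psql -U postgres -c 'CREATE DATABASE keystone;'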

And finally ran the example:

DATABASE_URL=postgres://postgres:password@localhost:5432/keystone yarn dev

Unfortunately, loading the AdminUI then resulted in the following error:

> GraphQL error: select count(*) from "public"."Todo" as "t0" where true – relation "public.Todo" does not exist

There appears to be an open issue for this already: Trouble running starter

I was able to fix this issue by modifying index.js as follows:

...
const keystone = new Keystone({
  name: PROJECT_NAME,
  adapter: new Adapter({
    dropDatabase: true,
    knexOptions: {
      client: 'postgres',
      connection: process.env.DATABASE_URL,
    }
  }),
});
...

Adding the dropDatabase option here seems to force Keystone to recreate the tables in the database upon startup. Note that this drops existing data on every start, so it is only suitable for local experimentation.


The interface on localhost:3000 is also up and running:

Keystone 5 Example To Do list App

Quick Review

Based on looking around the documentation and my experiences with the sample app, my observations for KeystoneJS 5 are as follows:

  • KeystoneJS 5 appears very modern, with excellent capabilities for GraphQL
  • Based on my experiences with the Getting Started example, it seems that the documentation for KeystoneJS leaves some things to be desired.
  • I like how lightweight KeystoneJS feels. It runs fast and the code to configure it seems very straightforward and simple.
  • A few lines of declarative code can yield impressive outcomes, such as a fully featured GraphQL API and a nice admin interface.
  • Seems like it is possible to deploy Keystone in Serverless environments, see Serverless deployment using Now.
  • KeystoneJS does not manage migrations when the data model is changed (see this comment). This requires creating any additional lists and fields manually in the database. Here is an example of how this can be accomplished using Knex migrations.

Potential alternatives for KeystoneJS are:

  • Strapi: Very similar to Keystone but based on a REST API first (GraphQL is available as a plugin). Allows creating and editing table schemas using the Admin UI. Overall it is more of a CMS than KeystoneJS.
  • Prisma: Prisma is closer to traditional ORM tools than KeystoneJS. The recently released Prisma Admin is similar to the Admin interface of KeystoneJS. Prisma offers a client library, whereas KeystoneJS depends on clients interfacing with the data through the GraphQL API.

Overall, I still believe that KeystoneJS is a viable technology for my use case. My biggest concern is around migrations; I believe it may be quite difficult to orchestrate these easily across development, test and production systems. I will probably continue to poke around a bit more in the KeystoneJS examples and documentation, and possibly try out one of the alternatives.

I have uploaded my project resulting from following the Getting Started guide to GitHub. I think it can be quite useful for complementing the existing Getting Started documentation, particularly when wanting to get started using Postgres:

keystone-playground

Knex and Typescript Starter Project

SQL is a very expressive and powerful language. Unfortunately, it has often been difficult to interact with databases using SQL from object-oriented languages, due to a mismatch between the data structures in the database and the structures in the application programming language. One common solution to this problem were object-relational mapping (ORM) frameworks, which often come with their own issues.

I was most delighted when I started working with Knex, a simple yet versatile framework for connecting to and working with relational databases in JavaScript. Using it feels like working with an ORM, but it only provides a very thin abstraction layer on top of SQL; this helps avoid many of the pitfalls potentially introduced by ORMs while still providing us with most of their conveniences.

As it turns out, Knex has excellent TypeScript support, and I think Knex combined with TypeScript is an excellent starting point for building applications that rely on a database.

I have put together a small project on GitHub that sets up the basics of getting started with Knex and TypeScript. This project specifically focuses on setting up Knex and TypeScript; no other framework, for instance Express, is included.

You can go ahead and clone the project from here:

https://github.com/mxro/knex-typescript-starter-project

After running yarn the following scripts can be run:

  • yarn test: Which will set up Jest in watch mode.
  • yarn build: Which will transpile TypeScript to ES6.
  • yarn watch: Which will run index.ts after every change (compiling any changes using tsc).

The scripts for defining the database schema are placed in the folder migrations. Here is the only currently defined migration:

import Knex from "knex";
import { Migration } from "./../migrationUtil";

export const migrations: Migration[] = [
    {
        name: "000_define_quotes",
        async up(knex: Knex) {
            await knex.schema.createTable("quotes",
                (table) => {
                    table.bigIncrements("id").unsigned().primary();
                    table.uuid("document_id").notNullable();
                    table.uuid("user_id").notNullable();
                    table.timestamp("created", { useTz: true });
                    table.text("quote").notNullable();
                    table.string("author", 512).notNullable();
                    table.string("book", 1024).notNullable();
                    table.text("raw_source").notNullable();
                    table.dateTime("date_collected", { useTz: true });
                    table.string("location", 1024).notNullable();
                    table.string("link", 1024).notNullable();
                    table.index(["document_id"], "document_id_index");
                });

            await knex.schema.createTable("tags",
                (table) => {
                    table.bigIncrements("id").unsigned().primary();
                    table.uuid("tag_id").notNullable();
                    table.timestamp("created", { useTz: true });
                    table.string("name", 512).notNullable();
                    table.uuid("document_id").notNullable();
                });
        },
        /* eslint-disable-next-line  @typescript-eslint/no-empty-function */
        async down(knex: Knex) {
        },
    }
];

This migration is then registered as one of the migrations for the application:

import { migrations as mig001 } from "./migrations/001_define_quotes";
import { runMigration, Migration } from "./migrationUtil";
import Knex from "knex";

export async function runMigrations({ knex }: { knex: Knex }): Promise<void> {

  const migrations: Migration[] = [].concat(mig001);

  await runMigration({ migrations, knex });

}

The migrations are run upon application startup or before tests are run. See the test in migrations.test.ts:

import { runMigrations } from "../src/migrations";
import Knex from "Knex";

describe("Test migrations.", () => {

  it("Should run migrations without error.", async () => {
    const knex = Knex({
      client: "sqlite3",
      connection: { filename: ":memory:" },
      pool: { min: 1, max: 1 },
    });
    await runMigrations({ knex });
    await knex.destroy();
  });

});
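
In the application itself, the migrations can then be run at startup; a sketch, assuming a Postgres connection string is provided in DATABASE_URL:

import Knex from "knex";
import { runMigrations } from "./migrations";

export async function startApp(): Promise<void> {
  const knex = Knex({
    client: "postgres",
    connection: process.env.DATABASE_URL, // assumed connection string
  });
  // ensure the schema is up to date before the application serves requests
  await runMigrations({ knex });
  // ... continue with application startup
}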

Note that this way of running migrations differs a bit from the default way suggested on the Knex website, namely defining migrations in individual files and running them through the Knex CLI. I find this suggested default a bit cumbersome, and I think defining the migrations as a native part of the application allows for more flexibility; specifically, it makes it easier to test the application and allows developing the application in a more modular way, by defining migrations per module rather than for the application as a whole (as long as foreign keys are used sparingly).

This is just a very simple starter project. There are other starter projects for TypeScript, such as TypeScript-Node.

Textures and Lighting with React and Three.js

In my previous three posts, I have developed a simple WebGL application using react-three-fiber and three.js. In this post, I am adding texture loading and proper lighting to the application.

For reference, here are the links to the previous versions of the app:

  • Version 1: Just being able to drag a shape on the screen
  • Version 2: Dragging and dropping shapes using physics
  • Version 3: Being able to move the camera

Here is the version developed for this post:

threejs-react-textures-light

Source Code

You can click to add objects, click and drag them, and move the camera using the WASD keys and the mouse wheel to zoom.

Loading Textures

Textures can be loaded easily in react-three-fiber using the useLoader hook.

All that is required is to place the texture in the public/ folder of the React application, load the texture, and then link it to the material by setting the map property.

    const [texture] = useLoader(TextureLoader, 'textures/grasslight-big.jpg');

    if (texture) {
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
        texture.repeat.set(1500, 1500);
        texture.anisotropy = 16;
    }

    return (
        <mesh ref={ref} receiveShadow position={position}
            onClick={onPlaneClick}>
            <planeBufferGeometry attach="geometry" args={[10000, 10000]} />
            {texture &&
                <meshPhongMaterial attach="material" map={texture} />
            }

        </mesh>
    )

I found that textures are often quite large in size, larger than 1 MB, which significantly extends loading times. Thus I have added a simple loading screen. Unfortunately, to be able to display the text ‘loading’ I had to create a TextGeometry, which in turn required a font to be loaded (I prepared the Roboto font using facetype.js). This font by itself is more than 300 kB, so even loading the loading screen takes a bit of time.

Lighting

The goal of this application is to have a simple, very large plane on which any number of objects may be added. The issue I encountered was that getting shadows to work with a DirectionalLight turned out to be very tricky. In the end, I used a combination of an AmbientLight with a SpotLight.

        <ambientLight intensity={0.9} />

        <primitive object={lightTarget} position={lightTargetPosition} />
        <spotLight
            castShadow
            intensity={0.25}
            position={lightPosition}
            angle={Math.PI / 3}
            penumbra={1}
            shadow-mapSize={new Vector2(2048 * 5, 2048 * 5)}
            target={lightTarget}
        />

Since the SpotLight would not be able to cover the whole of the plane (as said, it is meant to be very large) while providing accurate shadows, I opted to move the SpotLight when the user moves the camera.

    const lightTargetYDelta = 120;
    const lightTargetXDelta = 80;
    const [lightPosition, setLightPosition] = useState([-lightTargetXDelta, -lightTargetYDelta, 200]);
    const [lightTargetPosition, setLightTargetPosition] = useState([0, 0, 0]);
    const onCameraMoved = (delta) => {
        const newLightPosition = delta.map((e, idx) => lightPosition[idx] + e);
        setLightPosition(newLightPosition);
        const newLightTargetPosition = [newLightPosition[0] + lightTargetXDelta, newLightPosition[1] + lightTargetYDelta, 0];
        setLightTargetPosition(newLightTargetPosition);
    };

This required both updating the position of the light (setLightPosition) and moving the light target (setLightTargetPosition).

Modularity

Since the amount of code for this example has increased quite a bit over the past three iterations, I broke the application up into multiple modules, with most React components now sitting in their own files.

I think this really shows the advantage of using React with Three.js, since it is easy for each component to manage its own state.

For the next iteration, I will most likely look at how I can remove the textures or use much smaller ones. I would like the application to load as quickly as possible, and textures clearly do not seem a great option for this.

Medooze Media Server Demo

I’ve recently done some research into WebRTC, specifically on how to stream media captured in the browser to a server. Initially I thought I could use something like Kinesis Video Streams and have AWS do the heavy lifting for me. Unfortunately this turned out to be way more complicated than I had anticipated, so I started looking for other options.

That is when I came across media servers such as Medooze, OpenVidu, Janus and Jitsi. Medooze caught my attention since it appears to scale very well and offers a Node.js-based server.

It did take me some time to find a meaningful demo for Medooze and then to get it running. So I thought I would briefly document the steps here to get a demo for Medooze up and running (note this only works on Linux or Mac OS X):

  1. Head over to the media-server-client-js project and clone it.
  2. Run the following commands:
npm i
npm run-script dist
cd demo
npm i
  3. Get the IP address of your current machine:
ifconfig | grep "inet "
  4. Using this IP, launch the Medooze server in the demo directory:
node index.js [your IP]
  5. Head to a browser and open the URL https://[your IP]:8000 (for instance https://10.0.2.15:8000) and accept the SSL certificate warning for your localhost.

You should now see the demo page:

Clicking the buttons will create video streams:

The animation on top of the remote button is a video stream taken from a local canvas, and the animation/video to the right is the same stream relayed through the server.

So the client sends a stream to the server, the server sends that stream right back and the client then renders that stream.

During my local testing I encountered an issue when adding tracks with codecs (VP8, H264), which I have filed and link here for reference: Adding tracks with Codecs does not work.

Advantages of Using React Hooks

I always had the feeling that React is just a bit too complex, a bit too ‘heavy’, to be a truly elegant solution to the problem of building complex user interfaces in JavaScript. Two issues, for instance, are the general project setup, exemplified by the need for create-react-app, and class-based components, with all their componentDidMount and this references.

While React Hooks are no solution to the first issue, they provide, in my mind, an elegant solution to the second: a better way to do what we used to do with class-based components.

To illustrate this, I will first provide an implementation of a simple component using a class-based component and then refactor this into an implementation using React Hooks.

Here is the initial implementation using a class-based component:

class User1 extends Component {
  constructor(props) {
    super(props);

    this.state = {
      userId: props.userId,
      userName: null,
      isLoading: false,
      error: null,
    };
    // track unmounting on the instance, since setState has no effect in componentWillUnmount
    this.unmounted = false;
  }

  getUser() {
    this.setState({ isLoading: true, error: null });
  
    axios.get(`https://jsonplaceholder.typicode.com/users/${this.state.userId}`)
      .then(result => {
        if (this.unmounted) {
          return;
        }
        this.setState({
          userName: result.data.name,
          isLoading: false
        })
      }
      )
      .catch(error => {
        if (this.unmounted) {
          return;
        }

        this.setState({
          error,
          isLoading: false
        })
      });
  }

  componentDidMount() {
    this.getUser();
  }

  componentDidUpdate() {
    // this.getUser();
  }

  componentWillUnmount() {
    this.unmounted = true;
  }

  render() {
    return (<>
      {this.state.isLoading ? <p>Loading ...</p> : <></>}
      {this.state.error ? <p>Cannot load user</p> : <></>}
      {!this.state.isLoading && !this.state.error ? <p>{this.state.userName}</p> : <></>}
      <button onClick={() => {
        const newUserId = this.state.userId + 1;
        this.setState({ userId: newUserId }, this.getUser);
      }} >Next</button>
    </>);
  }
}

As can be seen in the above code, this component requests data about a user from JSONPlaceholder and then displays this data. There is also a button that will trigger loading of the next user.

Simple enough, but we still need a fair amount of code to handle this scenario in a robust manner, including instances where we start the request for a new user before the previous request has completed, or where a request only completes after the component has been unmounted.

A component with the exact same functionality can be implemented using React Hooks:

function User2(props) {
  const [userId, setUserId] = useState(props.userId);
  const [name, setName] = useState(null);
  const [isLoading, setIsLoading] = useState(true);
  const [isError, setIsError] = useState(false);

  useEffect(() => {
    let cancelled = false;
    const fetchData = async () => {
      setIsLoading(true);
      setIsError(false);
      let response;
      try {
        response = await axios.get(`https://jsonplaceholder.typicode.com/users/${userId}`);
      } catch (e) {
        if (cancelled) return;
        setIsError(true);
        setIsLoading(false);
        return;
      }
      if (cancelled) return;
      setIsLoading(false);
      setName(response.data.name);
    };
    fetchData();
    return () => {
      cancelled = true;
    };
  }, [userId]);

  return (<>
    {isLoading ? <p>Loading ...</p> : <></>}
    {isError ? <p>Cannot load user</p> : <></>}
    {!isLoading && !isError && name ? <p>{name}</p> : <></>}
    <button onClick={() => setUserId(userId + 1)} >Next</button>
  </>);
}

Here we use useState to define a number of state variables and useEffect to deal with state updates. useState is of course essential in allowing us to define a functional component that also uses state. One major advantage of the hooks-based approach, in my mind, is that we don’t need to worry about using this and are in no danger of forgetting it.

useEffect replaces the functionality of componentDidMount and componentDidUpdate from class-based components. I think it allows reacting to state changes in a much more elegant way. Firstly, by linking the useEffect handler to userId, it will only trigger when userId has been updated, without us having to add any additional tests and logic around that. Secondly, it elegantly handles both the case where the component mounts and the case where the component state changes: by always triggering on component mount, and subsequently on changes to userId. Thirdly, by returning a function as the result of the useEffect handler …

    return () => {
      cancelled = true;
    };

… we have a very easy way to deal with the component unmounting when a request is in flight.

However, the real power of React Hooks, in my mind, lies in their composability. The following example implements the same features using a custom open-source hook, use-data-api:

import useDataApi from 'use-data-api';

function User3(props) {
  const [userId, setUserId] = useState(props.userId);
  const [{ data, isLoading, isError }, performFetch] = useDataApi(null, null);

  useEffect(() => {
    performFetch(`https://jsonplaceholder.typicode.com/users/${userId}`);
  }, [userId, performFetch]);

  return (<>
    {isLoading ? <p>Loading ...</p> : <></>}
    {isError ? <p>Cannot load user</p> : <></>}
    {!isLoading && !isError && data ? <p>{data.name}</p> : <></>}
    <button onClick={() => setUserId(userId + 1)} >Next</button>
  </>);
}

Above we use the custom hook useDataApi, which takes care of the details of dealing with requests to an API (use-data-api/blob/master/src/index.js).

As can be seen, this last example is much shorter and easier to understand than the previous ones. This shows the biggest advantage of React Hooks: the ability to extract complex behaviour into external functions that can be easily reused within a project and across projects.

To summarise, here are all the advantages of using React Hooks discussed above:

  • Ability to create composite Hooks defining cross-cutting functionality concerns in an application.
  • Enables writing functional components with state (no more this).
  • useEffect provides a more concise and elegant way to handle component mount, update and unmount events.

Here is the complete source code of the examples used in this post:

react-hooks-tutorial

GraphQL, Node.JS and React Monorepo Starter Kit

Following the GraphQL Apollo Starter Kit (Lerna, Node.js), I wanted to dig deeper into developing a monorepo for a GraphQL/React client-server application.

Unfortunately, things are not as easy as I thought at first. Chiefly, the create-react-app template does not appear to work very well with monorepos and local dependencies on other packages.

That’s why I put together a small, simple starter template for developing modular client-server applications using React, GraphQL and Node.js. Here is the code on GitHub:

nodejs-react-monorepo-starter-kit

Some things to note:

  • There are four packages in the project
    • client-main: The React client, based on create-react-app
    • client-components: Contains a definition of the component app. Used by client-main
    • server-main: The Node.js server definition
    • server-books: Contains schema and resolver for GraphQL backend. Used by server-main.
  • Each package defines its own package.json and can be built independently of the other packages.
  • The main entry point for the dependent packages (client-components and server-books) is set to dist/index.js. This way, packages that use them consume the transpiled version created by Babel and don’t need to worry about the specific JS features used in the dependent packages; see the sketch below.
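
A sketch of the relevant part of such a dependent package’s package.json (the build script is an assumption based on the Babel setup):

{
  "name": "client-components",
  "version": "1.0.0",
  "main": "dist/index.js",
  "scripts": {
    "build": "babel src --out-dir dist"
  }
}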

Like GraphQL Apollo Starter Kit (Lerna, Node.js) this starter kit is meant to be very basic to allow easy exploration of the source code.