Next.js with Bootstrap Getting Started

Next.js is an open-source framework for React that aspires to reduce the amount of boilerplate code required for developing React applications. Key features that Next.js provides out of the box are:

  • Routing
  • Code Splitting
  • Server-side rendering

I recently developed a small example application with Next.js and came across some minor difficulties when trying to use React Bootstrap within the Next.js application. I will therefore provide a quick guide here on how to get Next.js and Bootstrap working together.

Next.js Application with Bootstrap Styling

Thankfully, it is quite easy to get React Bootstrap and Next.js working together once one knows what to do. Essentially this can be accomplished in three steps.

Step 1: Initialise project

We first create a new Next.js project and install the required dependencies:

yarn init
yarn add react bootstrap next @zeit/next-css react-bootstrap react-dom

Then add the scripts to build and deploy Next.js to package.json:

  "scripts": {
    "dev": "next",
    "build": "next build",
    "start": "next start"
  },
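With these scripts in place, the development server can be started (by default, Next.js serves the application on http://localhost:3000):

yarn dev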

Step 2: Customise Next.js Configuration

In order for Next.js to be able to load the Bootstrap CSS for all pages, we need to create a next.config.js file in our project root and provide the following configuration:

const withCSS = require('@zeit/next-css')

module.exports = withCSS({
  cssLoaderOptions: {
    url: false
  }
});

Step 3: Load Bootstrap CSS for All Pages

Next, we need to create the file pages/_app.js. This allows us to define logic that runs for every page that Next.js renders, whether through client-side, server-side or static rendering.

We only need to ensure that the Bootstrap CSS is loaded:

// ensure all pages have Bootstrap CSS
import 'bootstrap/dist/css/bootstrap.min.css';

function MyApp({ Component, pageProps }) {
  // note: the JSX must start on the same line as the return statement,
  // otherwise automatic semicolon insertion makes the function return undefined
  return <Component {...pageProps} />;
}

export default MyApp;

And that’s it! We can now start developing pages and components using Bootstrap styled React components:

import Container from 'react-bootstrap/Container';
import Row from 'react-bootstrap/Row';
import Col from 'react-bootstrap/Col';

function Landing() {
  return <Container>
    <Row>
      <Col>
        <h1>Next.js React Bootstrap</h1>
      </Col>
    </Row>
  </Container>
}

export default Landing;

I have also put together a small project on GitHub that makes use of Next.js and React Bootstrap:

next-js-react-bootstrap

KeystoneJS 5 Quick Review

I have recently started on a little project to organise the quotes that I have collected over years of reading (see kindle-citation-extractor). I originally got my quotes into Airtable, but I quickly hit the limit of the free tier.

I figured that it would be great if I could develop a simple database with a simple user interface. Ideally, I would not have to implement the basic CRUD views myself, so I had a look around for tools that can generate simple UIs for databases. My initial search revealed Keystone and Strapi.

I really liked the looks of KeystoneJS (Version 5) since it appears simple and clean. In this article, I will first document my experiences with the Getting Started example for KeystoneJS and conclude with my first impressions and comparison to similar solutions.

Getting Started

After some browsing around, I decided to follow the getting started guide from the Keystone documentation.

I am particularly interested in running Keystone with Postgres, so to get my local example running, I quickly spun up a Postgres server using Docker:

docker run --name keystone-pg -e POSTGRES_PASSWORD=password -d -v db:/var/lib/postgresql/data -p 5432:5432 postgres

(db-start.sh)

Then I configured the keystone project as per instructions:

yarn create keystone-app keystone-playground

I provided answers for the prompts:

Prompts for Keystone Project initialisation

Then I connected to the Postgres instance in Docker and created a keystone database:

Create keystone database
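For reference, assuming the keystone-pg container started above, the database can also be created in one line:

docker exec -it keystone-pg createdb -U postgres keystone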

And finally ran the example:

DATABASE_URL=postgres://postgres:password@localhost:5432/keystone yarn dev

Unfortunately, loading the AdminUI then resulted in the following error:

> GraphQL error: select count(*) from "public"."Todo" as "t0" where true – relation "public.Todo" does not exist

There appears to be an open issue for this already: Trouble running starter

I was able to fix this issue by modifying index.js as follows:

...
const keystone = new Keystone({
  name: PROJECT_NAME,
  adapter: new Adapter({
    dropDatabase: true,
    knexOptions: {
      client: 'postgres',
      connection: process.env.DATABASE_URL,
    }
  }),
});
...

Adding the dropDatabase option here seems to force Keystone to create the required tables upon startup. Note that, as the name suggests, this option appears to drop existing data on every start, so it should only be used for development.

Keystone example

The interface on localhost:3000 is also up and running:

Keystone 5 Example To Do list App

Quick Review

Based on looking around the documentation and my experiences with the sample app, my observations for KeystoneJS 5 are as follows:

  • KeystoneJS 5 appears very modern, with excellent capabilities for GraphQL
  • Based on my experiences with the Getting Started example, the documentation for KeystoneJS leaves something to be desired.
  • I like how lightweight KeystoneJS feels. It runs fast and the code to configure it seems very straightforward and simple.
  • A few lines of declarative code can yield impressive outcomes, such as a fully featured GraphQL API and a nice admin interface.
  • It seems possible to deploy Keystone in serverless environments; see Serverless deployment using Now.
  • KeystoneJS does not manage migrations when the data model is changed (see this comment). This requires creating any additional lists and fields manually in the database. Here is an example of how this can be accomplished using Knex migrations.

Potential alternatives for KeystoneJS are:

  • Strapi: Very similar to Keystone but based on a REST API first (GraphQL is available as a plugin). Allows creating and editing table schemas using the Admin UI. Overall it is more of a CMS than KeystoneJS.
  • Prisma: Prisma is closer to traditional ORM tools than KeystoneJS. The recently released Prisma Admin is similar to the Admin interface of KeystoneJS. Prisma offers a client library, whereas KeystoneJS depends on clients interfacing with the data through the GraphQL API.

Overall, I still believe that KeystoneJS is a viable technology for my use case. My biggest concern is around migrations; I believe it may be quite difficult to orchestrate these easily across development, test and production systems. I will probably continue to poke around a bit more in the KeystoneJS examples and documentation and possibly try out one of the alternatives.

I have uploaded my project resulting from following the Getting Started guide to GitHub. I think it can be quite useful for complementing the existing Getting Started documentation, particularly when wanting to get started using Postgres:

keystone-playground

Knex and TypeScript Starter Project

SQL is a very expressive and powerful language. Unfortunately, it has often been difficult to interact with databases using SQL from object-oriented languages, due to the mismatch between the data structures in the database and those in the application programming language. One common solution to this problem were Object-Relational Mapping (ORM) frameworks, which often come with their own issues.

I was most delighted when I started working with Knex, a simple yet versatile framework for connecting and working with relational databases in JavaScript. Using it feels like working with an ORM but it only provides a very thin abstraction layer on top of SQL; this helps avoid many of the pitfalls potentially introduced by ORMs while still providing us with most of their conveniences.

As it turns out, Knex has excellent TypeScript support and I think building applications relying on a database using Knex and TypeScript is an excellent starting point.

I have put together a small project on GitHub that sets up the basics of getting started with Knex and TypeScript. This project specifically focuses on setting up Knex and TypeScript; no other framework, for instance Express, is included.

You can go ahead and clone the project from here:

https://github.com/mxro/knex-typescript-starter-project

After running yarn, the following scripts can be run:

  • yarn test: Which will set up Jest in watch mode.
  • yarn build: Which will transpile TypeScript to ES6.
  • yarn watch: Which will run index.ts after every change (and compile any changes using tsc).

The scripts for defining the database schema are placed in the folder migrations. Here is the only migration currently defined:

import Knex from "knex";
import { Migration } from "./../migrationUtil";

export const migrations: Migration[] = [
    {
        name: "000_define_quotes",
        async up(knex: Knex) {
            await knex.schema.createTable("quotes",
                (table) => {
                    table.bigIncrements("id").unsigned().primary();
                    table.uuid("document_id").notNullable();
                    table.uuid("user_id").notNullable();
                    table.timestamp("created", { useTz: true });
                    table.text("quote").notNullable();
                    table.string("author", 512).notNullable();
                    table.string("book", 1024).notNullable();
                    table.text("raw_source").notNullable();
                    table.dateTime("date_collected", { useTz: true });
                    table.string("location", 1024).notNullable();
                    table.string("link", 1024).notNullable();
                    table.index(["document_id"], "document_id_index");
                });

            await knex.schema.createTable("tags",
                (table) => {
                    table.bigIncrements("id").unsigned().primary();
                    table.uuid("tag_id").notNullable();
                    table.timestamp("created", { useTz: true });
                    table.string("name", 512).notNullable();
                    table.uuid("document_id").notNullable();
                });
        },
        /* eslint-disable-next-line  @typescript-eslint/no-empty-function */
        async down(knex: Knex) {
        },
    }
];

This migration is then registered as one of the migrations for the application:

import { migrations as mig001 } from "./migrations/001_define_quotes";
import { runMigration, Migration } from "./migrationUtil";
import Knex from "knex";

export async function runMigrations({ knex }: { knex: Knex }): Promise<void> {

  const migrations: Migration[] = [].concat(mig001);

  await runMigration({ migrations, knex });

}
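The Migration type and runMigration helper come from the migrationUtil module in the starter project, which is not reproduced in this post. As a rough sketch of the idea (the bookkeeping table name and exact signatures below are illustrative; the actual implementation in the repository may differ), it could look something like this:

import Knex from "knex";

export interface Migration {
    name: string;
    up(knex: Knex): Promise<void>;
    down(knex: Knex): Promise<void>;
}

export async function runMigration({ migrations, knex }:
    { migrations: Migration[]; knex: Knex }): Promise<void> {
    // keep track of applied migrations in a bookkeeping table
    if (!(await knex.schema.hasTable("migrations"))) {
        await knex.schema.createTable("migrations", (table) => {
            table.string("name").primary();
            table.timestamp("applied_at").defaultTo(knex.fn.now());
        });
    }
    // apply each migration that has not been run yet, in order
    for (const migration of migrations) {
        const applied = await knex("migrations")
            .where({ name: migration.name })
            .first();
        if (!applied) {
            await migration.up(knex);
            await knex("migrations").insert({ name: migration.name });
        }
    }
}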

The migrations are run upon application start up or before tests are run. See the test in migrations.test.ts:

import { runMigrations } from "../src/migrations";
import Knex from "knex";

describe("Test migrations.", () => {

  it("Should run migrations without error.", async () => {
    const knex = Knex({
      client: "sqlite3",
      connection: { filename: ":memory:" },
      pool: { min: 1, max: 1 },
    });
    await runMigrations({ knex });
    await knex.destroy();
  });

});

Note that this way of running migrations differs a bit from the default suggested on the Knex website, namely defining migrations in individual files and running them through the Knex CLI. I find that default approach a bit cumbersome, and I think defining the migrations as a native part of the application allows for more flexibility; specifically, it makes it easier to test the application and allows developing the application in a more modular way, by defining migrations per module rather than for the application as a whole (as long as foreign keys are used sparingly).

This is just a very simple starter project. There are other starter projects for TypeScript, such as TypeScript-Node.

Textures and Lighting with React and Three.js

In my previous three posts, I have developed a simple WebGL application using react-three-fiber and three.js. In this post, I am adding texture loading and proper lighting to the application.

For reference, here are the links to the previous versions of the app:

  • Version 1: Just being able to drag a shape on the screen
  • Version 2: Dragging and dropping shapes using physics
  • Version 3: Being able to move the camera

Here is the version developed for this post:

threejs-react-textures-light

Source Code

You can click to add objects, click and drag them, move the camera using the WASD keys, and zoom with the mouse wheel.

Loading Textures

Textures can be loaded easily in react-three-fiber using the useLoader hook.

All that is required is to place the texture in the public/ folder of the React application, load the texture and then link it to the material by setting the map property.

    const [texture] = useLoader(TextureLoader, 'textures/grasslight-big.jpg');

    if (texture) {
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
        texture.repeat.set(1500, 1500);
        texture.anisotropy = 16;
    }

    return (
        <mesh ref={ref} receiveShadow position={position}
            onClick={onPlaneClick}>
            <planeBufferGeometry attach="geometry" args={[10000, 10000]} />
            {texture &&
                <meshPhongMaterial attach="material" map={texture} />
            }

        </mesh>
    )

I found that textures are often quite large in size; larger than 1 MB. This significantly extends loading times, so I added a simple loading screen. Unfortunately, to be able to display the text ‘loading’, I had to create a TextGeometry, which in turn required a font to be loaded (I prepared the Roboto font using facetype.js). This font by itself is more than 300 KB, so even loading the loading screen takes a bit of time.

Lighting

The goal of this application is to have a simple, very large plane onto which any number of objects may be added. The issue I encountered was that getting shadows to work with a DirectionalLight turned out to be very tricky. In the end, I used a combination of an AmbientLight and a SpotLight.

        <ambientLight intensity={0.9} />

        <primitive object={lightTarget} position={lightTargetPosition} />
        <spotLight
            castShadow
            intensity={0.25}
            position={lightPosition}
            angle={Math.PI / 3}
            penumbra={1}
            shadow-mapSize={new Vector2(2048 * 5, 2048 * 5)}
            target={lightTarget}
        />

Since the SpotLight would not be able to cover the whole of the plane (as said, it is meant to be very large) while providing accurate shadows, I opted for moving the SpotLight whenever the user moves the camera.

    const lightTargetYDelta = 120;
    const lightTargetXDelta = 80;
    const [lightPosition, setLightPosition] = useState([-lightTargetXDelta, -lightTargetYDelta, 200]);
    const [lightTargetPosition, setLightTargetPosition] = useState([0, 0, 0]);
    const onCameraMoved = (delta) => {
        const newLightPosition = delta.map((e, idx) => lightPosition[idx] + e);
        setLightPosition(newLightPosition);
        const newLightTargetPosition = [newLightPosition[0] + lightTargetXDelta, newLightPosition[1] + lightTargetYDelta, 0];
        setLightTargetPosition(newLightTargetPosition);
    };

This required both updating the position of the light (setLightPosition) as well as moving the light target (setLightTargetPosition).

Modularity

Since the amount of code for this example increased quite a bit over the past three iterations, I broke up the application into multiple modules, with most React components now sitting in their own file.

I think this really shows the advantage of using React with Three.js, since it is easy for each component to manage its own state.

For the next iteration, I will most likely be looking at how I can remove the textures or use much smaller textures. I would like the application to be able to load as quickly as possible, and textures clearly do not seem a great option for this.

Medooze Media Server Demo

I’ve recently done some research into WebRTC and specifically into how to stream media captured in the browser to a server. Initially I thought I could use something like Kinesis Video Streams and have AWS do the heavy lifting for me. Unfortunately, this turned out to be way more complicated than I had anticipated, so I started looking for other options.

That is when I came across media servers such as Medooze, OpenVidu, Janus and Jitsi. Medooze caught my attention since it appears to scale very well and offers a Node.js based server.

It did take me some time to find a meaningful demo for Medooze and then to get it running. So I thought I would briefly document the steps for getting a Medooze demo up and running here (note: this only works on Linux or Mac OS X):

  1. Head over to the media-server-client-js project and clone it.
  2. Run the following commands:
npm i
npm run-script dist
cd demo
npm i
  3. Get the IP address of your current machine:
ifconfig | grep "inet "
  4. Using this IP, launch the Medooze server in the demo directory:
node index.js [your IP]
  5. Head to a browser and open the URL https://[your IP]:8000 (for instance https://10.0.2.15:8000). Accept the SSL certificate for your localhost.

You should now see the demo page:

Clicking the buttons will create video streams:

The animation on top of the remote button is a video stream taken from a local canvas, and the animation/video to the right is the same stream relayed through the server.

So the client sends a stream to the server, the server sends that stream right back and the client then renders that stream.

During my local testing I encountered an issue when adding tracks with codecs (VP8, H264) that I have filed and link here for reference: Adding tracks with Codecs does not work

Advantages of Using React Hooks

I always had the feeling that React is just a bit too complex, a bit too ‘heavy’, to be a truly elegant solution to the problem of building complex user interfaces in JavaScript. Two issues, for instance, are the general project setup, exemplified by the need for create-react-app, and class-based components, with all their componentDidMount and this references.

While React Hooks are no solution to the first issue, they provide, in my mind, an elegant solution to the second; they provide a better way to do what we used to do with class-based components.

To illustrate this, I will first provide an implementation of a simple component using a class-based component and then refactor this into an implementation using React Hooks.

Here the initial implementation using a class-based component:

class User1 extends Component {
  constructor(props) {
    super(props);

    // track unmounting with an instance field rather than state:
    // calling setState after unmounting is a no-op and triggers a React warning
    this.unmounted = false;

    this.state = {
      userId: props.userId,
      userName: null,
      isLoading: false,
      error: null,
    };
  }

  getUser() {
    this.setState({ isLoading: true, error: null });

    axios.get(`https://jsonplaceholder.typicode.com/users/${this.state.userId}`)
      .then(result => {
        if (this.unmounted) {
          return;
        }
        this.setState({
          userName: result.data.name,
          isLoading: false
        });
      })
      .catch(error => {
        if (this.unmounted) {
          return;
        }

        this.setState({
          error,
          isLoading: false
        });
      });
  }

  componentDidMount() {
    this.getUser();
  }

  componentWillUnmount() {
    this.unmounted = true;
  }

  render() {
    return (<>
      {this.state.isLoading ? <p>Loading ...</p> : <></>}
      {this.state.error ? <p>Cannot load user</p> : <></>}
      {!this.state.isLoading && !this.state.error ? <p>{this.state.userName}</p> : <></>}
      <button onClick={() => {
        const newUserId = this.state.userId + 1;
        // pass getUser as a callback so it only runs after the state update
        this.setState({ userId: newUserId }, () => this.getUser());
      }} >Next</button>
    </>);
  }
}

As can be seen in the above code, this component requests data about a user from JSONPlaceholder and then displays this data. There is also a button that triggers loading the next user.

Simple enough, but we still need a fair amount of code to handle this scenario in a robust manner, including cases where we start the request for a new user before the previous request has completed, or where a request only completes after the component has been unmounted.

A component with the exact same functionality can be implemented using React Hooks:

function User2(props) {
  const [userId, setUserId] = useState(props.userId);
  const [name, setName] = useState(null);
  const [isLoading, setIsLoading] = useState(true);
  const [isError, setIsError] = useState(false);

  useEffect(() => {
    let cancelled = false;
    const fetchData = async () => {
      setIsLoading(true);
      setIsError(false);
      let response;
      try {
        response = await axios.get(`https://jsonplaceholder.typicode.com/users/${userId}`);
      } catch (e) {
        // ignore the result if the effect has already been cleaned up
        if (cancelled) return;
        setIsError(true);
        setIsLoading(false);
        return;
      }
      if (cancelled) return;
      setName(response.data.name);
      setIsLoading(false);
    };
    fetchData();
    return () => {
      cancelled = true;
    };
  }, [userId]);

  return (<>
    {isLoading ? <p>Loading ...</p> : <></>}
    {isError ? <p>Cannot load user</p> : <></>}
    {!isLoading && !isError && name ? <p>{name}</p> : <></>}
    <button onClick={() => setUserId(userId + 1)} >Next</button>
  </>);
}

Here we use useState to define a number of state variables and useEffect to deal with state updates. useState is, of course, essential in allowing us to define a functional component that also uses state. One major advantage of the hooks-based approach, in my mind, is that we don’t need to worry about using this and are in no danger of forgetting it.

useEffect replaces the functionality of componentDidMount and componentDidUpdate in class-based components. I think it allows reacting to state changes in a much more elegant way. Firstly, by linking it to the userId state, the useEffect handler we have defined will only trigger when userId has been updated, without us having to add any additional tests and logic around that. Secondly, it elegantly handles both the case where the component mounts and where the component state changes: by always triggering on component mount, and subsequently on changes to userId. Thirdly, by returning a function as the result of the useEffect handler …

    return () => {
      cancelled = true;
    };

… we have a very easy way to deal with the component unmounting when a request is in flight.

However, the real power of React Hooks, in my mind, lies in their composability. The following example implements the same features as the components above using a custom open source hook, use-data-api:

import useDataApi from 'use-data-api';

function User3(props) {
  const [userId, setUserId] = useState(props.userId);
  const [{ data, isLoading, isError }, performFetch] = useDataApi(null, null);

  useEffect(() => {
    performFetch(`https://jsonplaceholder.typicode.com/users/${userId}`);
  }, [userId, performFetch]);

  return (<>
    {isLoading ? <p>Loading ...</p> : <></>}
    {isError ? <p>Cannot load user</p> : <></>}
    {!isLoading && !isError && data ? <p>{data.name}</p> : <></>}
    <button onClick={() => setUserId(userId + 1)} >Next</button>
  </>);
}

Above we use the custom hook useDataApi, which takes care of the details of dealing with requests to an API (use-data-api/blob/master/src/index.js).

As can be seen, this last example is much shorter and easier to understand than the previous ones. This demonstrates the biggest advantage of React Hooks: the ability to extract complex behaviour into external functions that can easily be reused within a project and across projects.

To summarise, here are all the advantages of using React Hooks discussed above:

  • Ability to create composite Hooks defining cross-cutting functionality concerns in an application.
  • Enables writing functional components with state (no more this).
  • useEffect provides a more concise and elegant way to handle component mount, update and unmount events.

Here is the complete source code of the examples used in this post:

react-hooks-tutorial

GraphQL, Node.js and React Monorepo Starter Kit

Following the GraphQL Apollo Starter Kit (Lerna, Node.js), I wanted to dig deeper into developing a monorepo for a GraphQL/React client-server application.

Unfortunately, things are not as easy as I initially thought. Chiefly, the create-react-app template does not appear to work very well with monorepos and local dependencies on other packages.

That’s why I put together a small, simple starter template for developing modular client-server applications using React, GraphQL and Node.js. Here is the code on GitHub:

nodejs-react-monorepo-starter-kit

Some things to note:

  • There are four packages in the project
    • client-main: The React client, based on create-react-app
    • client-components: Contains the definition of the app component. Used by client-main.
    • server-main: The Node.js server definition
    • server-books: Contains schema and resolver for GraphQL backend. Used by server-main.
  • Each package defines its own package.json and can be built independently of the other packages.
  • The main entry point for the dependent packages (client-components and server-books) is set to dist/index.js. This way, packages which use them can use the transpiled version created by Babel and don’t need to worry about specific JS features used in the dependent packages (see the sketch below).
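For illustration, here is a sketch of what the relevant part of such a dependent package's package.json could look like (the exact fields and build script in the starter kit may differ):

{
  "name": "client-components",
  "version": "1.0.0",
  "main": "dist/index.js",
  "scripts": {
    "build": "babel src --out-dir dist"
  }
}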

Like GraphQL Apollo Starter Kit (Lerna, Node.js) this starter kit is meant to be very basic to allow easy exploration of the source code.

Mastering Modular JavaScript

Today I was having a look around for best practices for defining JavaScript modules. In that search, I came across the book Mastering Modular JavaScript. This book offers a good selection of best practices for JS module development. Also, all chapters are freely available on GitHub.

For a more basic introduction to modules, see the chapter JavaScript Modules from the book Practical Modern JavaScript.

Everything new in JavaScript since ES6

It is no secret that things in the tech world change rather rapidly. It’s difficult to keep track of everything at the same time. For instance, I worked with JavaScript quite extensively some years ago but have recently been more involved with other tech stacks. Thus I have only followed the developments in the JavaScript world sporadically and was quite surprised by how many things have changed since the days of JavaScript: The Good Parts.

Since things had not changed much for a long time before ES6, I imagine I am not the only one who could benefit from a little refresher on all the things that have changed since. Thus I have compiled some of the changes I think are most important for ordinary development work. The idea is to provide a quick overview rather than explain every feature in detail, assuming that more information on any of the changes is readily available on the web.

This is not a complete list of everything that has changed. For instance, I included promises but omitted the changes made to the way regular expressions work in ECMAScript 2018, since we are likely to come across promises many times per day whereas the changes to regular expressions only affect us in particular edge cases.

ECMAScript 6 / ECMAScript 2015

Variable Scoping

  • let x = 1;: To define block-scoped variables
  • const x = 1;: To define block-scoped constants
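For example (let is scoped to the enclosing block, whereas var is scoped to the enclosing function):

if (true) {
  let blockScoped = 1;
  var functionScoped = 2;
}
console.log(typeof blockScoped); // "undefined": not visible outside the block
console.log(functionScoped);     // 2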

Arrow Functions

  • x => x + 1: Concise closure syntax with an expression body
  • x => { return x + 1; }: Concise closure syntax with a statement body
  • this within arrow functions refers to the enclosing scope (rather than to the function itself)
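A small example of this lexical this behaviour; inside the arrow function, this still refers to the Counter instance rather than to the callback:

function Counter() {
  this.count = 0;
  setInterval(() => {
    // 'this' is taken from the enclosing scope, i.e. the Counter instance
    this.count++;
  }, 1000);
}

const counter = new Counter();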

Promises

Promises for wrapping asynchronous code.


let p = new Promise((resolve, reject) => {

   resolve("hello");

});

p.then((msg) => console.log(msg)); 

Executing asynchronous operations in parallel

let parallelOperation = Promise.all([p1, p2]);
parallelOperation.then((data) => {let [res1, res2] = data; } );

Default Parameters and Spread Operator

  • function (x = 1, y = 2): Default values for function parameters
  • function (x, y, ...arr) {}: Capturing all remaining arguments in array for variadic functions
  • var newarr = [ 1, 2, ...oldarr]: ‘Spreading’ of elements from an array as literal elements
  • multiply(1, 2, ...arr): Spreading of elements from an array as individual function parameters
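These can be combined; for example:

function multiply(factor = 2, ...numbers) {
  // 'numbers' collects all remaining arguments into a real array
  return numbers.map((n) => n * factor);
}

const arr = [1, 2, 3];
multiply(10, ...arr);            // [10, 20, 30]
const extended = [0, ...arr, 4]; // [0, 1, 2, 3, 4]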

Multiline Strings and Templates

  • `My String⏎NewLine`: Multi-line string literals
  • `Hello ${person.name}`: Intuitive string interpolation
  • `const proc = sh`kill -9 ${pid}`;`: Tagged template literals for parsing custom languages. The example would result in calling the function sh with the parameters (['kill -9 ', ''], pid)

Object Properties

  • let obj = { x, y }: Property shorthand for defining let object = { x: x, y: y }
  • obj = { func1 (x, y) { } }: Methods allowed as object properties

Destructuring Assignment

  • var [ x, y, z ] = list: Destructuring arrays into individual variables by assignment.
  • var [ x=0, y=0 ] = list: Default values when destructuring arrays.
  • function( [ x, y ] ): Destructuring arrays in function calls.
  • var { x, y, z } = getPoint(): Destructuring objects into individual variables by assignment.
  • var { name: name, address: { street: street }, age: age } = getData(): Destructuring objects into individual variables by assignment, including nested properties.
  • var { x = 0, y = 0 } = getPoint(): Default values when destructuring objects.
  • function( { x, y } ): Destructuring objects in function calls.

Modularity

  • export function add(x,y) { return x + y; }: Exporting functions
  • export var universe = 42;: Exporting variables
  • import { add, universe } from 'lib/module';: Importing functions and variables
  • import * as module from 'lib/module': Wildcard import
  • export default (x, y) => x + y;: Defining default export
  • import add from 'lib/add': Importing default export
  • import add, { universe } from 'lib/add': Importing default export and additional exports
  • export * from 'lib/module';: Reexporting from other modules

Classes

class keyword for constructing simple classes.

class Point {

  constructor (x, y) {
     this.x = x;
     this.y = y;
  }

  move (deltax, deltay) {
     return new Point(this.x + deltax, this.y + deltay);
  }

}

extends keyword for extending classes:


class Car extends Vehicle {

  constructor (name) {
     super(name);
  }

}

static keyword for static methods


class Math {

  static add(x, y) {
    return x + y;
  }

}

get and set keywords for decorated property access.


class Rectangle {

  constructor (x, y) {
     this.x = x;
     this.y = y;
  }

  get area() { return this.x * this.y; }

}

...

new Rectangle(2, 2).area === 4;

Iteration Through Object Values

  • for (let value of arr) { }: The for … of loop for iterating through the values of iterable objects.
  • Also note that objects can define their own iterators and generators
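For example, a generator function provides a straightforward way to create a custom iterable that works with for … of:

function* fibonacci(limit) {
  let [a, b] = [0, 1];
  for (let i = 0; i < limit; i++) {
    yield a;
    [a, b] = [b, a + b];
  }
}

for (let value of fibonacci(5)) {
  console.log(value); // 0, 1, 1, 2, 3
}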

Data Structures

  • new Set(): For sets
  • new Map(): For maps
  • new WeakSet(): For sets whose items may be garbage collected when they are no longer referenced elsewhere
  • new WeakMap(): For maps whose keys may be garbage collected when they are no longer referenced elsewhere
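Basic usage of the new collections:

const set = new Set([1, 2, 2, 3]);
set.has(2); // true
set.size;   // 3: duplicates are ignored

const map = new Map();
map.set("answer", 42);
map.get("answer");  // 42
map.has("missing"); // false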

Symbols

  • Symbol(): For creating an object with a unique identity.
  • Symbol("note"): For creating a unique object with a descriptor.
  • Note: Symbol("note") !== Symbol("note")

ECMAScript 2016

  • **: Exponentiation operator
  • Array.prototype.includes: Like indexOf, but returning true/false and with support for NaN
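Both in action:

2 ** 10;                   // 1024
[1, 2, NaN].includes(NaN); // true
[1, 2, NaN].indexOf(NaN);  // -1: indexOf cannot find NaN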

ECMAScript 2017

async/await for more expressive asynchronous operations

async function add1(x) {
  return x + 1;
}

async function add2(x) {
  let y = await add1(x);
  return await add1(y);
}

add2(5).then(console.log);

ECMAScript 2018

Rest/Spread Operators for Object Properties

Collect all properties of an object that have not been destructured in another object:


var person = { firstName: "Paul", lastName: "Hendricks", password: "secret"};
var {password, ...sanitisedPerson } = person;
// sanitisedPerson = {firstName: "Paul", lastName: "Hendricks"}

Spread object properties

let details = { firstName: "Paul", lastName: "Hendricks" };

let user = { ...details, password: "secret" };

Finally for Promises

The finally callback is guaranteed to be executed whether the promise succeeds or fails.


async function sayHello() {
  console.log("hello");
}

sayHello().then(() => console.log("success"))
  .catch((e) => console.log(e))
  .finally(() => console.log("runs always"));

for await Loop

A special form of the for loop that awaits promises before every iteration.


const promises = [
  new Promise(resolve => resolve(1) ),
  new Promise(resolve => resolve(2) )
];

async function runAll() {
  for await (const p of promises) {
    console.log(p);
  }
}

runAll();


Designing Micro Services the Right Way

For a few years now, micro services have been all the rage when it comes to the architecture of large applications. Personally, I have always been a bit puzzled about what was so new and great about micro services in comparison to what came before them: Service-Oriented Architecture (SOA). Indeed, SOA itself is often portrayed as a frightful antipattern from our past, to be mentioned in the same breath as CORBA.

To me, the move from CORBA et al. to SOA to micro services has not been one of disruptive innovation but one of continuous learning, chiefly in relation to the technologies we employ. It makes a world of difference whether one is setting up a big old monolithic application from the past or an Express server in Node.js (which is also a ‘monolith’ in its own right, just hopefully a smaller one).

The core problem we are trying to solve has not changed: distributed computing. Unfortunately, one of the first things we learnt about distributed computing seems to have been given less attention recently: that it is best avoided wherever possible. Why? Because it introduces great complexity into an application and can result in many development and operational problems (see YouTube: 10 Tips for failing badly at Microservices by David Schmitz).

One of the most problematic areas is data or persisted state. If the same piece of data needs to be used by multiple services, things become very complicated since it is often required to keep data in sync between multiple places (see YouTube: Managing Data in Microservices by Randy Shoup).

Recently I came across a presentation which I think outlined a very nice approach for dealing with micro services – one that relied heavily on code generation, enforcing common standards and automated testing. Furthermore in the presented architecture one language was used primarily, which I think is a very good approach. I highly recommend viewing this presentation for anyone interested in a way to deal with the complexity of micro services:

YouTube: Design Microservice Architectures the Right Way by Michael Bryzek

What I personally took away from this:

  • Focus on testability. Allow for fast unit and integration tests and even testing with production data. Only code that is easy to test and heavily tested allows for fast and bold development. The organisation presented, for instance, updates all their dependencies once per week automatically, since they have full confidence that their tests will pick up any issues.
  • Utilise code generation. The sad truth of micro services is that we will have to duplicate things, such as commonly used entities – especially if multiple programming languages are involved. Code generation provides an elegant way to deal with this unfortunate situation.
  • Enforce common standards. Although micro services are intended to reduce complexity by dividing up a complex system into small manageable chunks, they can actually result in increased overall complexity, especially if many different technologies are employed. In that case, enforcing strict common standards can help in keeping things simple for developers and ops.
  • Embrace events. Triggering services into action by using events rather than direct API calls can help in making a distributed system more predictable and easier to debug.

I think this presentation provides an excellent overview of best practices for micro services and I couldn’t think of anything to criticise or add. I think it represents the best way of building micro services I am aware of as of now.

I do think, however, there is one important additional issue to consider: a micro service built according to the best principles and standards will still be a liability if it was not necessary to build a micro service to begin with. This is not so much a question of whether we should use micro services (in any organisation of a certain size they are an imperative) but of how many.

One of the key drivers of success for micro services within a larger system is to get the boundaries of the services right (see bounded context), and I think we should aim to make micro services as large as possible so that we have as few of them as possible, taking into consideration the restrictions of team size, data and complexity:

  • Team: It might sound like heresy, but I do think that one ‘physical’ micro service could be maintained by up to three to five teams (rather than just one team per micro service). That, of course, would be the upper maximum; there is nothing wrong with having just one team per micro service. It really depends on what service you are building.
  • Data: And some more heresy: I think that for data it is often better to scale up rather than scale out. Why? Data is all about state, and being able to keep state within the physical confines of one system leads to much improved performance and reduced complexity. Thus we should think about the database management system we will be using for our service and the maximum it can be scaled up to; then take 20% of that and ask ourselves whether our data will stay within that limit. If not, it might be prudent to break the micro service apart, or maybe change the DBMS.
  • Complexity: The main drivers of complexity in software are code size, inter-dependencies and heterogeneity. If our micro service would contain large amounts of code with many intricate inter-dependencies, tackling many different problems in different ways, it may be advisable to think about breaking the service up.

As mentioned, distributed systems are inherently more complex than non-distributed ones. Therefore, if we have larger micro services, our system becomes less distributed overall and we hopefully have less accidental complexity to deal with.

Thus, to sum things up: we must be aware of the dangers of micro services and deploy tooling strategically as outlined in the presentation, and we must be mindful of building our system in a way that avoids the complexities of distributed systems as much as possible.