Overwrite Author in Git History

With every commit, git records the name of the author as well as the committer, along with their respective email addresses. These become public once you push your project to GitHub, so it may sometimes be advisable to change the email addresses of the author and committer for all past commits in your repository.

This can easily be verified by running git log.

Git keeping track of my email address …

Thankfully, it is surprisingly easy to change the email addresses of the author and committer throughout the repository. Simply run the following command at the top level of your working tree:

git filter-branch -f --env-filter "GIT_AUTHOR_EMAIL='newemail@site.com' GIT_COMMITTER_EMAIL='newemail@site.com';" HEAD

Finally just do a push.

git push --force

Note that adding --force is important here, since otherwise the changes will be rejected by the remote with the error message:

 ! [rejected]        master -> master (non-fast-forward)
error: failed to push some refs to 'git@github.com:repo/repo.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

Do not do a git pull in that case, since that would undo the update of the author and committer.

If you only want to update the author or committer of some of the commits, you can also use git filter-branch, for instance as follows:

git filter-branch --commit-filter '
      if [ "$GIT_AUTHOR_EMAIL" = "to_update@mail" ];
      then
              GIT_AUTHOR_NAME="New Name";
              GIT_AUTHOR_EMAIL="new@email.com";
              git commit-tree "$@";
      else
              git commit-tree "$@";
      fi' HEAD

Note that it is easy to get the quoting of this multi-line commit-filter wrong when providing it directly on the command line; the easiest approach is to put the command into a separate script file.
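
One way to avoid these quoting issues is to save the command into a small shell script and run that instead. Here is a sketch of such a script; the file name and the placeholder name and email addresses are just illustrations:

#!/bin/sh
# rewrite-author.sh - wraps the multi-line commit-filter so the quoting only
# has to be gotten right once; replace the placeholder addresses before running
git filter-branch --commit-filter '
      if [ "$GIT_AUTHOR_EMAIL" = "to_update@mail" ];
      then
              GIT_AUTHOR_NAME="New Name";
              GIT_AUTHOR_EMAIL="new@email.com";
              git commit-tree "$@";
      else
              git commit-tree "$@";
      fi' HEAD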

Next.js with Bootstrap Getting Started

Next.js is an open-source framework for React that aspires to reduce the amount of boilerplate code required for developing React applications. Key features that Next.js provides out of the box are:

  • Routing
  • Code Splitting
  • Server-side rendering

I recently developed a small example application with Next.js and came across some minor difficulties when trying to use React Bootstrap within the Next.js application. I will therefore provide a quick guide here on how to get Next.js and Bootstrap working together.

Next.js Application with Bootstrap Styling

Thankfully, it is quite easy to get React Bootstrap and Next.js working together once one knows what to do. Essentially this can be accomplished in three steps.

Step 1: Initialise project

We first create a new Next.js project and install the required dependencies:

yarn init
yarn add react bootstrap next @zeit/next-css react-bootstrap react-dom

Then add the scripts to build and deploy Next.js to package.json:

  "scripts": {
    "dev": "next",
    "build": "next build",
    "start": "next start"
  },

Step 2: Customise Next.js Configuration

In order for Next.js to be able to load the Bootstrap CSS for all pages, we need to create a next.config.js file in our project root and provide the following configuration:

const withCSS = require('@zeit/next-css')

module.exports = withCSS({
  cssLoaderOptions: {
    url: false
  }
});

Step 3: Load Bootstrap CSS for All Pages

Next, we need to create the file pages/_app.js. This allows us to define logic that runs for every page Next.js renders, whether it is rendered on the client, on the server or statically.

We only need to ensure that the Bootstrap CSS is loaded:

// ensure all pages have Bootstrap CSS
import 'bootstrap/dist/css/bootstrap.min.css';

function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />;
}

export default MyApp;

And that’s it! We can now start developing pages and components using Bootstrap styled React components:

import Container from 'react-bootstrap/Container';
import Row from 'react-bootstrap/Row';
import Col from 'react-bootstrap/Col';

function Landing() {
  return <Container>
    <Row>
      <Col>
        <h1>Next.js React Bootstrap</h1>
      </Col>
    </Row>
  </Container>
}

export default Landing;
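
Assuming the Landing component above is saved as pages/index.js, running yarn dev serves it on the root route of the development server (by default http://localhost:3000).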

I have also put together a small project on GitHub that makes use of Next.js and React Bootstrap:

next-js-react-bootstrap

If you are looking for something a bit more comprehensive, please have a look at the following template as well. This includes scripts for deployment into AWS, ESLint/TypeScript configuration and is regularly updated:

Goldstack Next.js + Bootstrap Template

Strapi 2020 Quick Review

I have recently reviewed KeystoneJS for a little project I am planning. I found it overall quite good but lacking in a few aspects, particularly in the way migrations are handled (or not handled). After some research, it seems that Strapi could be a possible alternative to KeystoneJS and so I decided to give this solution a quick review as well.

As I’ve done for KeystoneJS, I will first go through a little example and then conclude with my thoughts.

Getting Started

I started the project by simply running yarn create:

yarn create strapi-app strapi --quickstart

Since Strapi uses SQLite for local development, the Strapi server is ready to go after running this command (no database connection needs to be configured). I then logged into the administration console and created an admin user and password.

I then went into the Content Type Builder and created two types/tables: Quote, which holds an author and a quote, and Tag, which holds tag names.

Content Type Builder

Creating these was really straightforward and simple. In the background, Strapi created matching definitions for these in the project directory api/:

api/quote/models/quote.settings.json

{
  "connection": "default",
  "collectionName": "quotes",
  "info": {
    "name": "Quote"
  },
  "options": {
    "increments": true,
    "timestamps": true
  },
  "attributes": {
    "Author": {
      "type": "string"
    },
    "tags": {
      "collection": "tag"
    },
    "Quote": {
      "type": "string"
    }
  }
}

api/tag/models/tag.settings.json

{
  "connection": "default",
  "collectionName": "tags",
  "info": {
    "name": "Tag"
  },
  "options": {
    "increments": true,
    "timestamps": true
  },
  "attributes": {
    "Name": {
      "type": "string",
      "required": true,
      "unique": true
    }
  }
}

The schema defined in these JSON files is dynamically translated into operations modifying the schema of the database Strapi is connected to. Upon deployment to a production system, Strapi will create correct schemas in the attached production database; e.g. for MongoDB or Postgres (see Running Strapi in production and version control sync, Create db schema from models).

I then installed the GraphQL plugin. For me, this did not work through the admin web interface (it would just get stuck).

Strapi getting stuck after trying to install GraphQL plugin

I needed to run yarn again after this to fix the project. However, installing it using the Strapi CLI worked without problems:

yarn strapi install graphql

Next I went to the Roles & Permissions plugin to configure public access to the endpoints generated from models:

Setting Permissions in Strapi

It must be noted here that permission settings are not reflected in the source code of the Strapi project. Therefore they are not versioned and cannot easily be deployed to testing and production environments (see #672 Permissions flow between environments).

After the permissions have been set, it is very easy to query the GraphQL API:

GraphQL query against API

I finally developed a little Next.js application that queries the GraphQL API exposed by Strapi. This is as simple as hooking up an Apollo Client to the endpoint:

const client = new ApolloClient({
  uri: 'http://localhost:1337/graphql',
});

This then makes it very easy to write dynamic pages with React:

import { useQuery } from '@apollo/react-hooks';
import { gql } from "apollo-boost";

const QUOTES = gql`
  {
    quotes {
      id
      Author
    }
  }
`;

const QuoteList: any = () => {
  const { loading, error, data } = useQuery(QUOTES);
  if (error) return "Error loading quotes";
  if (loading) return "Loading ...";

  const { quotes } = data;

  return <ul>{
    quotes.map(({ Author, id }) => {
      return <li key={id}>{Author}</li>;
    })
  }
  </ul>
};

export default QuoteList;
Next.js app powered by Strapi Backend
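
For reference, here is a minimal sketch of how such a component could be mounted in a page. The client is the one defined above, while the page name and import path are assumptions that may differ from the example project:

// pages/quotes.js - hypothetical page wiring the Apollo client into React
import React from 'react';
import ApolloClient from 'apollo-boost';
import { ApolloProvider } from '@apollo/react-hooks';
import QuoteList from '../components/QuoteList';

// Strapi exposes its GraphQL endpoint on port 1337 by default when running locally
const client = new ApolloClient({
  uri: 'http://localhost:1337/graphql',
});

export default function Quotes() {
  return (
    <ApolloProvider client={client}>
      <QuoteList />
    </ApolloProvider>
  );
}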

All code I’ve developed for this example is available on GitHub:

https://github.com/mxro/strapi-playground

Quick Review

Based on my experiences building the simple example above and studies of the documentation, my initial impressions of Strapi are:

  • I was very impressed with the speed of development using Strapi. I especially liked the Content Type Builder to quickly design the schema for the data.
  • Strapi provides both a powerful RESTful API and a GraphQL API for the defined data.
  • In contrast to KeystoneJS, database migrations are handled seamlessly.
  • Strapi still feels a bit rough around the edges; for instance, some plugins lack proper descriptions, and it crashed on me when I tried to install the GraphQL plugin. I probably wouldn’t feel comfortable rolling it out for a mission-critical production system.
  • For some reason, permissions are not migrated between environments; they are only stored in the database of the local system. I believe this can make deploying Strapi quite painful.

Overall, I think Strapi is a great technology, and so far it appears the best fit for the small project I am planning. I am especially impressed by the ‘no code’ approach to define the data models.

See also:

5 Things I love about Strapi, a Node.js headless CMS

KeystoneJS 5 Quick Review

I have recently started on a little project to organise the quotes that I have collected over years of reading (see kindle-citation-extractor). I originally got my quotes into Airtable, but I quickly hit the limit for the free tier.

I figured that it would be great if I could develop a simple database with a simple user interface. Ideally I would not want to implement the basic CRUD views and so I had a look around for tools that can generate simple UIs for databases. My initial search revealed Keystone and Strapi.

I really liked the looks of KeystoneJS (Version 5) since it appears simple and clean. In this article, I will first document my experiences with the Getting Started example for KeystoneJS and conclude with my first impressions and comparison to similar solutions.

Getting Started

After some browsing around, I decided to follow the getting started guide from the Keystone documentation.

I am particularly interested in running Keystone with Postgres, so to get my local example running, I quickly spun up a Postgres server using Docker:

docker run --name keystone-pg -e POSTGRES_PASSWORD=password -d -v db:/var/lib/postgresql/data -p 5432:5432 postgres

(db-start.sh)

Then I configured the keystone project as per instructions:

yarn create keystone-app keystone-playground

I provided answers for the prompts:

Prompts for Keystone Project initialisation

Then I connected to the Postgres instance in Docker and created a keystone database:

Create keystone database
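
For example, the database can be created directly against the Docker container started above; a one-line sketch using the container and database names from the earlier commands:

docker exec -it keystone-pg psql -U postgres -c "CREATE DATABASE keystone;"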

And finally ran the example:

DATABASE_URL=postgres://postgres:password@localhost:5432/keystone && yarn dev

Unfortunately, loading the AdminUI then resulted in the following error:

> GraphQL error: select count(*) from "public"."Todo" as "t0" where true – relation "public.Todo" does not exist

There appears to be an open issue for this already: Trouble running starter

I was able to fix this issue by modifying index.js as follows:

...
const keystone = new Keystone({
  name: PROJECT_NAME,
  adapter: new Adapter({
    dropDatabase: true,
    knexOptions: {
      client: 'postgres',
      connection: process.env.DATABASE_URL,
    }
  }),
});
...

Adding the dropDatabase option here seems to force Keystone to create the data in the database upon startup.

Keystone example

The interface on localhost:3000 is also up and running:

Keystone 5 Example To Do list App

Quick Review

Based on looking around the documentation and my experiences with the sample app, my observations for KeystoneJS 5 are as follows:

  • KeystoneJS 5 appears very modern, with excellent capabilities for GraphQL.
  • Based on my experiences with the Getting Started example, the documentation for KeystoneJS leaves some things to be desired.
  • I like how lightweight KeystoneJS feels. It runs fast and the code to configure it seems very straightforward and simple.
  • A few lines of declarative code can yield impressive outcomes, such as a fully featured GraphQL API and a nice admin interface (see the sketch after this list).
  • It seems possible to deploy Keystone in serverless environments; see Serverless deployment using Now.
  • KeystoneJS does not manage migrations when the data model is changed (see this comment). This requires creating any additional lists and fields manually in the database. Here is an example of how this can be accomplished using Knex migrations.
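
To illustrate the declarative style mentioned above, here is a hedged sketch of a Keystone 5 list definition in the spirit of the getting-started Todo example; the exact field options and adapter settings in the generated project may differ:

// Sketch of a Keystone 5 setup with a single declarative list definition
const { Keystone } = require('@keystonejs/keystone');
const { KnexAdapter } = require('@keystonejs/adapter-knex');
const { Text } = require('@keystonejs/fields');

const keystone = new Keystone({
  name: 'keystone-playground',
  adapter: new KnexAdapter({
    knexOptions: {
      client: 'postgres',
      connection: process.env.DATABASE_URL,
    },
  }),
});

// This single call yields the GraphQL type, queries and mutations for Todo
// as well as the corresponding screens in the Admin UI.
keystone.createList('Todo', {
  fields: {
    name: { type: Text, isRequired: true },
  },
});

module.exports = { keystone };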

Potential alternatives for KeystoneJS are:

  • Strapi: Very similar to Keystone but based on a REST API first (GraphQL is available as a plugin). It allows creating and editing table schemas using the Admin UI. Overall, it is more of a CMS than KeystoneJS.
  • Prisma: Prisma is closer to traditional ORM tools than KeystoneJS. The recently released Prisma Admin is similar to the Admin interface of KeystoneJS. Prisma offers a client library, whereas KeystoneJS depends on clients interfacing with the data through the GraphQL API.

Overall I still believe that KeystoneJS is a viable technology for my use case. My biggest concern is around migrations; I believe it may be quite difficult to orchestrate these easily across development, test and production systems. I will probably continue to poke around a bit more in the KeystoneJS examples and documentation and possibly try out one of the alternatives.

I have uploaded my project resulting from following the Getting Started guide to GitHub. I think it can be quite useful for complementing the existing Getting Started documentation, particularly when wanting to get started using Postgres:

keystone-playground

Knex and TypeScript Starter Project

SQL is a very expressive and powerful language. Unfortunately, it has often been difficult to interact with databases using SQL from object-oriented languages due to a mismatch between the data structures in the database and the structures in the application programming language. One common solution to this problem were object-relational mapping (ORM) frameworks, which often come with their own issues.

I was most delighted when I started working with Knex, a simple yet versatile framework for connecting and working with relational databases in JavaScript. Using it feels like working with an ORM but it only provides a very thin abstraction layer on top of SQL; this helps avoid many of the pitfalls potentially introduced by ORMs while still providing us with most of their conveniences.

As it turns out, Knex has excellent TypeScript support and I think building applications relying on a database using Knex and TypeScript is an excellent starting point.

I have put together a small project on GitHub that sets up the basics of getting started with Knex and TypeScript. This project specifically focuses on setting up Knex and TypeScript; no other framework, for instance Express, is included.

You can go ahead and clone the project from here:

https://github.com/mxro/knex-typescript-starter-project

After running yarn, the following scripts can be run:

  • yarn test: Which will set up Jest in watch mode.
  • yarn build: Which will transpile TypeScript to ES6.
  • yarn watch: Which will run index.ts after every change (and compiles any changes using tsc)

The scripts for defining the database schema are placed in the folder migrations. Here is the only currently defined migration:

import Knex from "knex";
import { Migration } from "./../migrationUtil";

export const migrations: Migration[] = [
    {
        name: "000_define_quotes",
        async up(knex: Knex) {
            await knex.schema.createTable("quotes",
                (table) => {
                    table.bigIncrements("id").unsigned().primary();
                    table.uuid("document_id").notNullable();
                    table.uuid("user_id").notNullable();
                    table.timestamp("created", { useTz: true });
                    table.text("quote").notNullable();
                    table.string("author", 512).notNullable();
                    table.string("book", 1024).notNullable();
                    table.text("raw_source").notNullable();
                    table.dateTime("date_collected", { useTz: true });
                    table.string("location", 1024).notNullable();
                    table.string("link", 1024).notNullable();
                    table.index(["document_id"], "document_id_index");
                });

            await knex.schema.createTable("tags",
                (table) => {
                    table.bigIncrements("id").unsigned().primary();
                    table.uuid("tag_id").notNullable();
                    table.timestamp("created", { useTz: true });
                    table.string("name", 512).notNullable();
                    table.uuid("document_id").notNullable();
                });
        },
        /* eslint-disable-next-line  @typescript-eslint/no-empty-function */
        async down(knex: Knex) {
        },
    }
];

This migration is then registered as one of the migrations for the application:

import { migrations as mig001 } from "./migrations/001_define_quotes";
import { runMigration, Migration } from "./migrationUtil";
import Knex from "knex";

export async function runMigrations({ knex }: { knex: Knex }): Promise<void> {

  const migrations: Migration[] = [].concat(mig001);

  await runMigration({ migrations, knex });

}

The migrations are run upon application start up or before tests are run. See the test in migrations.test.ts:

import { runMigrations } from "../src/migrations";
import Knex from "Knex";

describe("Test migrations.", () => {

  it("Should run migrations without error.", async () => {
    const knex = Knex({
      client: "sqlite3",
      connection: { filename: ":memory:" },
      pool: { min: 1, max: 1 },
    });
    await runMigrations({ knex });
    await knex.destroy();
  });

});

Note that this way of running migrations differs a bit from the default approach suggested on the Knex website, namely to define migrations in individual files and run them through the Knex CLI. I find that default approach a bit cumbersome; defining the migrations as a native part of the application allows for more flexibility. Specifically, it makes the application easier to test and allows it to be developed in a more modular way, since migrations can be defined per module rather than for the application as a whole (as long as foreign keys are used sparingly).
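
To illustrate the idea, here is a minimal sketch of what a helper along the lines of runMigration could look like. This is not the actual migrationUtil implementation from the starter project; it assumes a simple bookkeeping table named applied_migrations:

import Knex from "knex";

export interface Migration {
    name: string;
    up(knex: Knex): Promise<void>;
    down(knex: Knex): Promise<void>;
}

export async function runMigration({ migrations, knex }:
    { migrations: Migration[]; knex: Knex }): Promise<void> {
    // create the bookkeeping table on first run
    if (!(await knex.schema.hasTable("applied_migrations"))) {
        await knex.schema.createTable("applied_migrations", (table) => {
            table.string("name", 512).primary();
            table.timestamp("applied_at", { useTz: true }).defaultTo(knex.fn.now());
        });
    }
    // apply each migration that has not been recorded yet, in order
    for (const migration of migrations) {
        const applied = await knex("applied_migrations")
            .where({ name: migration.name })
            .first();
        if (!applied) {
            await migration.up(knex);
            await knex("applied_migrations").insert({ name: migration.name });
        }
    }
}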

This is just a very simple starter project. There are other starter projects for TypeScript, such as TypeScript-Node.

Textures and Lighting with React and Three.js

In my previous three posts, I have developed a simple WebGL application using react-three-fiber and three.js. In this post, I am adding texture loading and proper lighting to the application.

For reference, here are the links to the previous versions of the app:

  • Version 1: Just being able to drag a shape on the screen
  • Version 2: Dragging and dropping shapes using physics
  • Version 3: Being able to move the camera

Here is the version developed for this post:

threejs-react-textures-light

Source Code

You can click to add objects, click and drag them, move the camera using the WASD keys and zoom using the mouse wheel.

Loading Textures

Textures can be loaded easily in react-three-fiber using the useLoader hook.

All that is required is to place the texture in the public/ folder of the React application, load it and then link it to the material by setting the map property.

    const [texture] = useLoader(TextureLoader, 'textures/grasslight-big.jpg');

    if (texture) {
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
        texture.repeat.set(1500, 1500);
        texture.anisotropy = 16;
    }

    return (
        <mesh ref={ref} receiveShadow position={position}
            onClick={onPlaneClick}>
            <planeBufferGeometry attach="geometry" args={[10000, 10000]} />
            {texture &&
                <meshPhongMaterial attach="material" map={texture} />
            }

        </mesh>
    )

I found that textures are often quite large in size; larger than 1 MB. This significantly extends loading times, so I have added a simple loading screen. Unfortunately, to be able to display the text ‘loading’ I had to create a TextGeometry, which in turn required a font to be loaded (I prepared the Roboto font using facetype.js). This font by itself is more than 300 kB, so even loading the loading screen takes a bit of time.
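
As a rough illustration of the loading text, here is a sketch (not the exact code from the project; the font path is an assumption, and at the time FontLoader and TextGeometry were still part of the three.js core):

import React from 'react';
import { useLoader } from 'react-three-fiber';
import { FontLoader } from 'three';

// Renders a simple 'loading' text mesh once the converted font has loaded.
function LoadingText() {
    const font = useLoader(FontLoader, 'fonts/roboto.json');
    return (
        <mesh position={[0, 0, 0]}>
            <textGeometry attach="geometry" args={['loading', { font, size: 10, height: 1 }]} />
            <meshStandardMaterial attach="material" color="white" />
        </mesh>
    );
}

export default LoadingText;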

Lighting

The goal of this application is to have a simple, very large plane on which any number of objects may be added. The issue I encountered was that getting shadows to work with a DirectionalLight turned out to be very tricky. In the end, I used a combination of an AmbientLight with a SpotLight.

        <ambientLight intensity={0.9} />

        <primitive object={lightTarget} position={lightTargetPosition} />
        <spotLight
            castShadow
            intensity={0.25}
            position={lightPosition}
            angle={Math.PI / 3}
            penumbra={1}
            shadow-mapSize={new Vector2(2048 * 5, 2048 * 5)}
            target={lightTarget}
        />

Since the SpotLight would not be able to cover the whole of the plane (as said, it is meant to be very large) and provide accurate shadows, I opted for moving the SpotLight when a user moves the camera.

    const lightTargetYDelta = 120;
    const lightTargetXDelta = 80;
    const [lightPosition, setLightPosition] = useState([-lightTargetXDelta, -lightTargetYDelta, 200]);
    const [lightTargetPosition, setLightTargetPosition] = useState([0, 0, 0]);
    const onCameraMoved = (delta) => {
        const newLightPosition = delta.map((e, idx) => lightPosition[idx] + e);
        setLightPosition(newLightPosition);
        const newLightTargetPosition = [newLightPosition[0] + lightTargetXDelta, newLightPosition[1] + lightTargetYDelta, 0];
        setLightTargetPosition(newLightTargetPosition);
    };

This required both updating the position of the light (setLightPosition) and moving the light target (setLightTargetPosition).
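
For completeness, here is a sketch of how the light target itself can be created; the small hook below is hypothetical and the project may organise this differently:

import { useState } from 'react';
import * as THREE from 'three';

// A plain Object3D serves as the spotlight target. Rendering it via
// <primitive object={lightTarget} position={lightTargetPosition} /> (as above)
// adds it to the scene, which three.js requires for the target to take effect.
function useLightTarget() {
    const [lightTarget] = useState(() => new THREE.Object3D());
    return lightTarget;
}

export default useLightTarget;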

Modularity

Since the amount of code for this example increased quite a bit over the past three iterations, I broke up the application into multiple modules, with most React components now sitting in their own file.

I think this really shows the advantage of using React with Three.js, since it is easy for each component to manage its own state.

For the next iteration, I will most likely be looking at how I can remove the textures or use much smaller textures. I would like the application to be able to load as quickly as possible, and textures clearly do not seem a great option for this.

Camera Movement with Three.js

I have recently been working on a small example application using three.js and react-three-fiber. In the first two iterations, I first developed a simple draggable shape floating in space and then supported multiple shapes that can be moved on a physical plane. In this post, I am going to extend the example to support camera movements. Here are links to the previous two iterations and to the one developed in this post:

Prototypes

Iteration 3: Camera Movements (this post)

Source Code: threejs-test

Published App: three-js-camera.surge.sh

You can move the camera using WASD and zoom in and out using the mouse scroll wheel. You can create new objects by clicking on any empty spot on the plane.

Iteration 2: Movable objects on Plane

Blog Post: Create and Drag Shapes with Three.js, React and Cannon.js

Published App: react-three-fiber-draggable-v2

Iteration 1: Draggable Shape in Space

Blog Post: Creating a Draggable Shape with React Three Fiber

Published App: react-three-fiber-draggable.surge.sh

In the following I will describe the two ways in which camera movement is supported:

Camera Movements

Using the Keyboard

The easiest way to move the camera using the keyboard would be to change the position of the camera on every key press. That is what I tried first, and it turned out to be a rather unsatisfactory solution. Instead, I store how long each key has been pressed in an object and then calculate the camera movement for every frame.

const keyPressed = {
}

function App() {
    ...
    const handleKeyDown = (e) => {
        if (!keyPressed[e.key]) {
            keyPressed[e.key] = new Date().getTime();
        }
    };

    const handleKeyUp = (e) => {
        delete keyPressed[e.key];
    };
    ...
}

react-three-fiber provides the useful useFrame hook, in which we then calculate the camera movement:

    useFrame((_, delta) => {
        // move camera according to key pressed
        Object.entries(keyPressed).forEach((e) => {
            const [key, start] = e;
            const duration = new Date().getTime() - start;

            // increase momentum if key pressed longer
            let momentum = Math.sqrt(duration + 200) * 0.01 + 0.05;

            // adjust for actual time passed
            momentum = momentum * delta / 0.016;

            // increase momentum if camera higher
            momentum = momentum + camera.position.z * 0.02;

            switch (key) {
                case 'w': camera.translateY(momentum); break;
                case 's': camera.translateY(-momentum); break;
                case 'd': camera.translateX(momentum); break;
                case 'a': camera.translateX(-momentum); break;
                default:
            }
        });
    });

We first calculate how long a key has been pressed and then use this duration to determine the momentum. Finally, we use this momentum to update the position of the camera.

Using the Mouse Wheel

We use the mouse wheel to zoom in and out. For this, the position of the camera needs to change along the z axis.

    const mouseWheel = (e) => {
        let delta = e.wheelDelta;
        delta = delta / 240;
        delta = -delta;
        if (delta <= 0) {
            delta -= camera.position.z * 0.1;
        } else {
            delta += camera.position.z * 0.1;
        }
        if (camera.position.z + delta > 1 && camera.position.z + delta < 200) {
            camera.translateZ(delta);
        }
    };

Here we simply determine the direction in which the wheel is scrolled and adjust the position of the camera to be higher or lower accordingly. There is a certain range within which the camera is permitted to move: if it gets too close to the ground or too far away from it, its position is no longer changed even if the mouse wheel is turned.
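
The handlers above still need to be registered on the document. A minimal sketch of that wiring, assuming useEffect is imported from React and this sits inside the App component next to the handlers (the actual project may attach them differently, and note that wheelDelta is a non-standard property):

    useEffect(() => {
        // attach keyboard and wheel handlers; the cleanup removes them again,
        // so re-registering on every render is harmless for this sketch
        document.addEventListener('keydown', handleKeyDown);
        document.addEventListener('keyup', handleKeyUp);
        document.addEventListener('wheel', mouseWheel);
        return () => {
            document.removeEventListener('keydown', handleKeyDown);
            document.removeEventListener('keyup', handleKeyUp);
            document.removeEventListener('wheel', mouseWheel);
        };
    });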

I further had to change the way the drag movement was implemented. This originally worked using only the position of the mouse on the screen, since the camera never moved.

In the case of a dynamic camera, a bit more calculation is required, which I have encapsulated in the get3DPosition method.

const get3DPosition = ({ screenX, screenY, camera }) => {
    var vector = new THREE.Vector3(screenX, screenY, 0.5);
    vector.unproject(camera);
    var dir = vector.sub(camera.position).normalize();
    var distance = - camera.position.z / dir.z;
    var pos = camera.position.clone().add(dir.multiplyScalar(distance));
    return [pos.x, pos.y, 0];
};

The major limitation I want to work on next is the lighting. There is currently a SpotLight light source that only covers a small part of the plane, and objects that are not within the cone of this spotlight are not rendered in an aesthetically pleasing fashion.

The full source code is available on GitHub.

Create and Drag Shapes with Three.js, React and Cannon.js

Following up from my article published a few days ago, I have now extended and improved the simple WebGL application that I originally developed using Three.js and react-three-fiber.

Version 1 of the application allowed dragging a simple shape around on the screen:

App: https://react-three-fiber-draggable.surge.sh/

Source Code: https://github.com/mxro/threejs-test/tree/master/test1

Version 2 combines this basic premise with the cannon.js physics engine. Multiple objects can now be created and they drop down onto a solid plane, on which they can be moved.

App: https://react-three-fiber-draggable-v2.surge.sh/

Source Code: https://github.com/mxro/threejs-test/tree/master/test2

Simply click the canvas to add new shapes that then can be dragged around the plane.

The most important logic for this solution is in the DraggableDodecahedron component:

function DraggableDodecahedron({ position: initialPosition }) {
    const { size, viewport } = useThree();
    const [position, setPosition] = useState(initialPosition);
    const [quaternion, setQuaternion] = useState([0, 0, 0, 0]);
    const aspect = size.width / viewport.width;

    const { ref, body } = useCannon({ bodyProps: { mass: 100000 } }, body => {
        body.addShape(new CANNON.Box(new CANNON.Vec3(1, 1, 1)))
        body.position.set(...position);
    }, []);

    const bind = useDrag(({ offset: [,], xy: [x, y], first, last }) => {
        if (first) {
            body.mass = 0;
            body.updateMassProperties();
        } else if (last) {
            body.mass = 10000;
            body.updateMassProperties();
        }
        body.position.set((x - size.width / 2) / aspect, -(y - size.height / 2) / aspect, -0.7);
    }, { pointerEvents: true });

    useFrame(() => {
        // Sync cannon body position with three js
        const deltaX = Math.abs(body.position.x - position[0]);
        const deltaY = Math.abs(body.position.y - position[1]);
        const deltaZ = Math.abs(body.position.z - position[2]);
        if (deltaX > 0.001 || deltaY > 0.001 || deltaZ > 0.001) {
            setPosition(body.position.clone().toArray());
        }
        const bodyQuaternion = body.quaternion.toArray();
        const quaternionDelta = bodyQuaternion.map((n, idx) => Math.abs(n - quaternion[idx]))
            .reduce((acc, curr) => acc + curr);
        if (quaternionDelta > 0.01) {
            setQuaternion(body.quaternion.toArray());
        }
    });
    return (
        <mesh ref={ref} castShadow position={position} quaternion={quaternion} {...bind()}
            onClick={e => {
                e.stopPropagation();
            }}
        >

            <dodecahedronBufferGeometry attach="geometry" />
            <meshLambertMaterial attach="material" color="yellow" />

        </mesh>
    )
}

Most notable here are three React hooks:

With the first hook we create a Cannon body that is set to the same dimension and position as the shape.

     const { ref, body } = useCannon({ bodyProps: { mass: 100000 } }, body => {
        body.addShape(new CANNON.Box(new CANNON.Vec3(1, 1, 1)))
        body.position.set(...position);
    }, []);

In the second hook, we use react-use-gesture to react to drag events. We temporarily set the mass of the body/shape to be moved to 0 and reset it to the original mass once the drag gesture is complete. Finally, we also set the position of the cannon.js body to the position that the drag gesture currently indicates.

      const bind = useDrag(({ offset: [,], xy: [x, y], first, last }) => {
        if (first) {
            body.mass = 0;
            body.updateMassProperties();
        } else if (last) {
            body.mass = 10000;
            body.updateMassProperties();
        }
        body.position.set((x - size.width / 2) / aspect, -(y - size.height / 2) / aspect, -0.7);
    }, { pointerEvents: true });

The third hook, useFrame(), runs a callback function before every frame is rendered (this hook is supplied by react-three-fiber). It is used here to synchronise the position of the body in cannon.js with the three.js shape. Since cannon.js updates positions with a very fine granularity, we first check that the body has changed its position or orientation by a significant margin. Only if this is the case do we update the shape. This helps React avoid unnecessary updates of the ‘virtual DOM’.

    useFrame(() => {
        // Sync cannon body position with three js
        const deltaX = Math.abs(body.position.x - position[0]);
        const deltaY = Math.abs(body.position.y - position[1]);
        const deltaZ = Math.abs(body.position.z - position[2]);
        if (deltaX > 0.001 || deltaY > 0.001 || deltaZ > 0.001) {
            setPosition(body.position.clone().toArray());
        }
        const bodyQuaternion = body.quaternion.toArray();
        const quaternionDelta = bodyQuaternion.map((n, idx) => Math.abs(n - quaternion[idx]))
            .reduce((acc, curr) => acc + curr);
        if (quaternionDelta > 0.01) {
            setQuaternion(body.quaternion.toArray());
        }
    });

Apart from these hooks there is a simple click handler that stops click events from propagating. This prevents the event handler defined for the plane from triggering (which would add a new shape to the scene).
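
For context, the plane's click handler then only needs to add a new shape at the clicked position. Here is a sketch of the idea; the component and the onAddShape callback are simplified stand-ins for the actual implementation:

function Plane({ onAddShape }) {
    return (
        <mesh position={[0, 0, -0.7]} onClick={e => {
            // react-three-fiber click events carry the 3D intersection point
            onAddShape([e.point.x, e.point.y, 0]);
        }}>
            <planeBufferGeometry attach="geometry" args={[100, 100]} />
            <meshLambertMaterial attach="material" color="lightblue" />
        </mesh>
    );
}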

Next I will be adding camera movement to this example.

Creating a Draggable Shape with React Three Fiber

I recently became interested in how to render 3D graphics in the browser. I think WebGL is an extremely powerful technology and may one day become an important way of rendering content on the web.

There are various frameworks and tools available to use WebGL such as Babylon.js and three.js. To me, three.js looks the most promising for the use cases I am interested in.

For simple examples, three.js works beautifully but I think more complex applications can easily become unwieldy when using this framework. Thus I was very happy to come across react-three-fiber, which provides a wrapper around three.js using React. React, for all its shortcomings, is a powerful way to keep code modular and maintainable.

To get my hands dirty with this library, I have created a little example of an application that renders a Dodecahedron and allows dragging this shape by tapping or clicking and dragging with the mouse.

Here the link to the deployed application:

react-three-fiber-draggable.surge.sh

And here the link to the source code:

github.com/mxro/threejs-test/tree/master/test1

I think the source code is pretty self-explanatory. Essentially all logic is encapsulated into index.js:

import ReactDOM from "react-dom"
import React, { useRef, useState } from "react"
import { Canvas, useThree, useFrame } from "react-three-fiber"
import { useDrag } from "react-use-gesture"
import "./index.css"

function DraggableDodecahedron() {
    const colors = ["hotpink", "red", "blue", "green", "yellow"];
    const ref = useRef();
    const [colorIdx, setColorIdx] = useState(0);
    const [position, setPosition] = useState([0, 0, 0]);
    const { size, viewport } = useThree();
    const aspect = size.width / viewport.width;
    useFrame(() => {
        ref.current.rotation.z += 0.01
        ref.current.rotation.x += 0.01
    });
    const bind = useDrag(({ offset: [x, y] }) => {
        const [,, z] = position;
        setPosition([x / aspect, -y / aspect, z]);
    }, { pointerEvents: true });

    return (
        <mesh position={position} {...bind()}
            ref={ref}
            onClick={e => {
                if (colorIdx === 4) {
                    setColorIdx(0);
                } else {
                    setColorIdx(colorIdx+1);
                }
            }}
            onPointerOver={e => console.log('hover')}
            onPointerOut={e => console.log('unhover')}>

            <dodecahedronBufferGeometry attach="geometry" />
            <meshLambertMaterial attach="material" color={colors[colorIdx]} />

        </mesh>
    )
}

ReactDOM.render(
    <Canvas>
        <spotLight intensity={1.2} position={[30, 30, 50]} angle={0.2} penumbra={1} castShadow />
        <DraggableDodecahedron />
    </Canvas>,
    document.getElementById("root")
)

Noteworthy here is that instead of creating a Material, Geometry and Mesh directly, they are defined in JSX. Also, instead of having to request an animation frame, we are using the hook useFrame to drive the animation for our component.

I think it can easily be seen how react-three-fiber could be used to make three.js applications more modular, for instance by handling the animation specifically for each component. I think this project is also testament to the power of React in that it can be used not only with the DOM but also with other rendering technologies.

Medooze Media Server Demo

I’ve recently done some research into WebRTC and specifically on how to stream media captured in the browser to a server. Initially I thought I could use something like Kinesis Video Streams and have AWS do the heavy lifting for me. Unfortunately, this turned out to be way more complicated than I had anticipated, so I started looking for other options.

That is when I came across Media Servers such as Medooze, OpenVidu, Janus and Jitsi. Medooze caught my attention since it appears to scale very well and offers a NodeJS based server.

It did take me some time to find a meaningful demo for Medooze and then to get it running. So I thought I would briefly document the steps to get a demo for Medooze up and running (note that this only works on Linux or Mac OS X):

  1. Head over to the media-server-client-js project and clone it.
  2. Run the following commands:
npm i
npm run-script dist
cd demo
npm i
  3. Get the IP address of your current machine:
ifconfig | grep "inet "
  4. Using this IP, launch the Medooze server in the demo directory:
node index.js [your IP]
  5. Head to a browser and open the URL https://[your IP]:8000 (for instance https://10.0.2.15:8000). Accept the SSL certificate for your localhost.

You should now see the demo page:

Clicking the buttons will create video streams:

The animation on top of the remote button is a video stream taken from a local canvas, and the animation/video to the right is the same stream relayed through the server.

So the client sends a stream to the server, the server sends that stream right back and the client then renders that stream.

During my local testing I encountered an issue when adding tracks with codecs (VP8, H264) that I have filed and link here for reference: Adding tracks with Codecs does not work