Textures and Lighting with React and Three.js

In my previous three posts, I developed a simple WebGL application using react-three-fiber and three.js. In this post, I add texture loading and proper lighting to the application.

For reference, here are the links to the previous versions of the app:

  • Version 1: Just being able to drag a shape on the screen
  • Version 2: Dragging and dropping shapes using physics
  • Version 3: Being able to move the camera

Here is the version developed for this post:

threejs-react-textures-light

Source Code

You can click to add objects, click and drag them, move the camera using the WASD keys, and zoom using the mouse wheel.

Loading Textures

Textures can be loaded easily in react-three-fiber using the useLoader hook.

All that is required is to place the texture in the public/ folder of the React application, load it, and then link it to the material by setting the map property.

    const [texture] = useLoader(TextureLoader, 'textures/grasslight-big.jpg');

    if (texture) {
        // tile the texture so it repeats across the very large plane
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
        texture.repeat.set(1500, 1500);
        // anisotropic filtering keeps the texture sharp at shallow viewing angles
        texture.anisotropy = 16;
    }

    return (
        <mesh ref={ref} receiveShadow position={position}
            onClick={onPlaneClick}>
            <planeBufferGeometry attach="geometry" args={[10000, 10000]} />
            {texture &&
                <meshPhongMaterial attach="material" map={texture} />
            }

        </mesh>
    )

I found that textures are often quite large; larger than 1 MB. This significantly extends loading times, so I added a simple loading screen. Unfortunately, to be able to display the text ‘loading’ I had to create a TextGeometry, which in turn required a font to be loaded (I prepared the Roboto font using facetype.js). The font by itself is more than 300 KB, so even loading the loading screen takes a bit of time.
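For illustration, the text mesh for the loading screen might look something like the following (a sketch only; the font path, sizing and position are assumptions, and the exact element names depend on the react-three-fiber version used):

    // Sketch of a 'loading' text mesh; the font JSON is assumed to have been
    // generated with facetype.js and placed in the public/ folder
    import { useLoader } from 'react-three-fiber';
    import { FontLoader } from 'three';

    function LoadingText() {
        const font = useLoader(FontLoader, 'fonts/roboto.json');
        return (
            <mesh position={[-2, 0, 1]}>
                <textGeometry attach="geometry" args={['loading', { font, size: 1, height: 0.2 }]} />
                <meshPhongMaterial attach="material" color="grey" />
            </mesh>
        );
    }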

Lighting

The goal of this application is to have a simple, very large plane onto which any number of objects can be added. The issue I encountered was that getting shadows to work with a DirectionalLight turned out to be very tricky. In the end, I used a combination of an AmbientLight and a SpotLight.

        <ambientLight intensity={0.9} />

        <primitive object={lightTarget} position={lightTargetPosition} />
        <spotLight
            castShadow
            intensity={0.25}
            position={lightPosition}
            angle={Math.PI / 3}
            penumbra={1}
            shadow-mapSize={new Vector2(2048 * 5, 2048 * 5)}
            target={lightTarget}
        />

Since the SpotLight could not cover the whole plane (which, as mentioned, is meant to be very large) while still providing accurate shadows, I opted to move the SpotLight whenever the user moves the camera.

    const lightTargetYDelta = 120;
    const lightTargetXDelta = 80;
    const [lightPosition, setLightPosition] = useState([-lightTargetXDelta, -lightTargetYDelta, 200]);
    const [lightTargetPosition, setLightTargetPosition] = useState([0, 0, 0]);
    const onCameraMoved = (delta) => {
        const newLightPosition = delta.map((e, idx) => lightPosition[idx] + e);
        setLightPosition(newLightPosition);
        const newLightTargetPosition = [newLightPosition[0] + lightTargetXDelta, newLightPosition[1] + lightTargetYDelta, 0];
        setLightTargetPosition(newLightTargetPosition);
    };

This requires both updating the position of the light (setLightPosition) and moving the light target (setLightTargetPosition), as sketched below.
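To give an idea of how this connects, the camera-movement code could report how far the camera moved each frame roughly as follows (a sketch; applyKeyboardMovement is a hypothetical helper wrapping the WASD logic from the camera-movement post, and the exact wiring is an assumption):

    // Sketch: measure the camera's movement per frame and notify the light
    useFrame((_, delta) => {
        const before = camera.position.clone();
        applyKeyboardMovement(delta); // hypothetical WASD movement helper
        const moved = camera.position.clone().sub(before);
        if (moved.lengthSq() > 0) {
            onCameraMoved([moved.x, moved.y, moved.z]);
        }
    });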

Modularity

Since the amount of code for this example increased quite a bit over the past three iterations, I broke up the application into multiple modules, with most React components now sitting in their own file.

I think this really shows the advantage of using React with Three.js, since it is easy for each component to manage its own state.

For the next iteration, I will most likely be looking at how I can remove the textures or use much smaller textures. I would like the application to be able to load as quickly as possible, and textures clearly do not seem a great option for this.

Camera Movement with Three.js

I have recently been working on a small example application using three.js and react-three-fiber. In the first two iterations, I developed a simple draggable shape floating in space and then supported multiple shapes that can be moved on a physical plane. In this post, I extend the example to support camera movements. Here are links to the previous two iterations and to the one developed in this post:

Prototypes

Iteration 3: Camera Movements (this post)

Source Code: threejs-test

Published App: three-js-camera.surge.sh

You can move the camera using WASD and zoom in and out using the mouse scroll wheel. You can create new objects by clicking on any empty spot on the plane.

Iteration 2: Movable objects on Plane

Blog Post: Create and Drag Shapes with Three.js, React and Cannon.js

Published App: react-three-fiber-draggable-v2

Iteration 1: Draggable Shape in Space

Blog Post: Creating a Draggable Shape with React Three Fiber

Published App: react-three-fiber-draggable.surge.sh

In the following, I will describe the two ways in which camera movement is supported.

Camera Movements

Using the Keyboard

The easiest way to move the camera using the keyboard would be to change the position of the camera on every key press. That is what I tried first, and it turned out to be an unsatisfactory solution. Instead, I store for how long each key has been pressed in an object and then calculate the camera movement for every frame.

const keyPressed = {
}

function App() {
    ...
    const handleKeyDown = (e) => {
        if (!keyPressed[e.key]) {
            keyPressed[e.key] = new Date().getTime();
        }
    };

    const handleKeyUp = (e) => {
        delete keyPressed[e.key];
    };
    ...
}

react-three-fiber provides the useful useFrame hook, in which we then calculate the camera movement:

    useFrame((_, delta) => {
        // move camera according to key pressed
        Object.entries(keyPressed).forEach((e) => {
            const [key, start] = e;
            const duration = new Date().getTime() - start;

            // increase momentum if key pressed longer
            let momentum = Math.sqrt(duration + 200) * 0.01 + 0.05;

            // adjust for actual time passed
            momentum = momentum * delta / 0.016;

            // increase momentum if camera higher
            momentum = momentum + camera.position.z * 0.02;

            switch (key) {
                case 'w': camera.translateY(momentum); break;
                case 's': camera.translateY(-momentum); break;
                case 'd': camera.translateX(momentum); break;
                case 'a': camera.translateX(-momentum); break;
                default:
            }
        });
    });

We first calculate how long a key has been pressed and then use this duration to determine the momentum. Finally, we use this momentum to update the position of the camera.

Using the Mouse Wheel

We use the mouse wheel to zoom in and out. For this, the position of the camera needs to change along the z axis.

    const mouseWheel = (e) => {
        // note: wheelDelta is non-standard; on standard 'wheel' events, deltaY serves the same purpose
        let delta = e.wheelDelta;
        delta = delta / 240;
        delta = -delta;
        if (delta <= 0) {
            delta -= camera.position.z * 0.1;
        } else {
            delta += camera.position.z * 0.1;
        }
        if (camera.position.z + delta > 1 && camera.position.z + delta < 200) {
            camera.translateZ(delta);
        }
    };

Here we simply determine the direction in which the wheel is scrolled and adjust the position of the camera accordingly. The camera is only permitted to move within a certain range: if it gets too close to the ground or too far away from it, its position is not changed any more, even if the mouse wheel is turned.
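For completeness, the handler also needs to be registered while the component is mounted. That might be done along the following lines (a sketch; whether to attach the listener to the document or to the canvas element is an assumption):

    // Sketch: register the wheel handler and clean it up on unmount
    useEffect(() => {
        document.addEventListener('wheel', mouseWheel);
        return () => document.removeEventListener('wheel', mouseWheel);
    });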

I further had to change the way the drag movement was implemented. It originally worked using only the position of the mouse on the screen, which was sufficient as long as the camera never moved.

With a dynamic camera, a bit more calculation is required, which I encapsulated in the get3DPosition method.

const get3DPosition = ({ screenX, screenY, camera }) => {
    var vector = new THREE.Vector3(screenX, screenY, 0.5);
    vector.unproject(camera);
    var dir = vector.sub(camera.position).normalize();
    var distance = - camera.position.z / dir.z;
    var pos = camera.position.clone().add(dir.multiplyScalar(distance));
    return [pos.x, pos.y, 0];
};
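Note that screenX and screenY are expected in normalized device coordinates (from -1 to 1) rather than in pixels. Converting a pointer event before calling the method might look like this (an assumption about the calling code, which is not shown here):

    // Sketch: convert pixel coordinates to normalized device coordinates
    const screenX = (event.clientX / window.innerWidth) * 2 - 1;
    const screenY = -(event.clientY / window.innerHeight) * 2 + 1;
    const [x, y, z] = get3DPosition({ screenX, screenY, camera });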

The major limitation I want to work on next is the lighting. There is currently a SpotLight light source that covers only a small part of the plane, and objects that are not within the cone of this spotlight are not rendered in an aesthetically pleasing fashion.

The full source code is available on GitHub.

Create and Drag Shapes with Three.js, React and Cannon.js

Following up from my article published a few days ago, I have now extended and improved the simple WebGL application that I originally developed using Three.js and react-three-fiber.

Version 1 of the application allowed dragging a simple shape around on the screen:

App: https://react-three-fiber-draggable.surge.sh/

Source Code: https://github.com/mxro/threejs-test/tree/master/test1

Version 2 combines this basic premise with the cannon.js physics engine. Multiple objects can now be created and they drop down onto a solid plane, on which they can be moved.

App: https://react-three-fiber-draggable-v2.surge.sh/

Source Code: https://github.com/mxro/threejs-test/tree/master/test2

Simply click the canvas to add new shapes that then can be dragged around the plane.

The most important logic for this solution is in the DraggableDodecahedron component:

function DraggableDodecahedron({ position: initialPosition }) {
    const { size, viewport } = useThree();
    const [position, setPosition] = useState(initialPosition);
    const [quaternion, setQuaternion] = useState([0, 0, 0, 0]);
    const aspect = size.width / viewport.width;

    const { ref, body } = useCannon({ bodyProps: { mass: 100000 } }, body => {
        body.addShape(new CANNON.Box(new CANNON.Vec3(1, 1, 1)))
        body.position.set(...position);
    }, []);

    const bind = useDrag(({ offset: [,], xy: [x, y], first, last }) => {
        if (first) {
            body.mass = 0;
            body.updateMassProperties();
        } else if (last) {
            body.mass = 10000;
            body.updateMassProperties();
        }
        body.position.set((x - size.width / 2) / aspect, -(y - size.height / 2) / aspect, -0.7);
    }, { pointerEvents: true });

    useFrame(() => {
        // Sync cannon body position with three js
        const deltaX = Math.abs(body.position.x - position[0]);
        const deltaY = Math.abs(body.position.y - position[1]);
        const deltaZ = Math.abs(body.position.z - position[2]);
        if (deltaX > 0.001 || deltaY > 0.001 || deltaZ > 0.001) {
            setPosition(body.position.clone().toArray());
        }
        const bodyQuaternion = body.quaternion.toArray();
        const quaternionDelta = bodyQuaternion.map((n, idx) => Math.abs(n - quaternion[idx]))
            .reduce((acc, curr) => acc + curr);
        if (quaternionDelta > 0.01) {
            setQuaternion(body.quaternion.toArray());
        }
    });
    return (
        <mesh ref={ref} castShadow position={position} quaternion={quaternion} {...bind()}
            onClick={e => {
                e.stopPropagation();
            }}
        >

            <dodecahedronBufferGeometry attach="geometry" />
            <meshLambertMaterial attach="material" color="yellow" />

        </mesh>
    )
}

Most notable here are three React hooks:

With the first hook, we create a Cannon body that is given the same dimensions and position as the shape.

     const { ref, body } = useCannon({ bodyProps: { mass: 100000 } }, body => {
        body.addShape(new CANNON.Box(new CANNON.Vec3(1, 1, 1)))
        body.position.set(...position);
    }, []);

In the second hook, we use react-use-gesture to react to drag events. We temporarily set the mass of the dragged body to 0 and restore it once the drag gesture is complete. Finally, we set the position of the cannon.js body to the position that the drag gesture currently indicates.

      const bind = useDrag(({ offset: [,], xy: [x, y], first, last }) => {
        if (first) {
            body.mass = 0;
            body.updateMassProperties();
        } else if (last) {
            body.mass = 10000;
            body.updateMassProperties();
        }
        body.position.set((x - size.width / 2) / aspect, -(y - size.height / 2) / aspect, -0.7);
    }, { pointerEvents: true });

The third hook, useFrame(), runs a callback function before every frame is rendered (this hook is supplied by react-three-fiber). It is used here to synchronise the position of the cannon.js body with the three.js shape. Since cannon.js updates positions at a very fine granularity, we first check that the body has changed its position or orientation by a significant margin; only if this is the case do we update the shape. This helps React avoid unnecessary updates of the virtual DOM.

    useFrame(() => {
        // Sync cannon body position with three js
        const deltaX = Math.abs(body.position.x - position[0]);
        const deltaY = Math.abs(body.position.y - position[1]);
        const deltaZ = Math.abs(body.position.z - position[2]);
        if (deltaX > 0.001 || deltaY > 0.001 || deltaZ > 0.001) {
            setPosition(body.position.clone().toArray());
        }
        const bodyQuaternion = body.quaternion.toArray();
        const quaternionDelta = bodyQuaternion.map((n, idx) => Math.abs(n - quaternion[idx]))
            .reduce((acc, curr) => acc + curr);
        if (quaternionDelta > 0.01) {
            setQuaternion(body.quaternion.toArray());
        }
    });

Apart from these hooks, there is a simple click handler that stops click events from propagating. This prevents the event handler defined for the plane from triggering, which would add a new shape to the scene, as sketched below.
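For illustration, the plane's click handler might look roughly like the following (a sketch; the shape-list state and the use of the event's intersection point are assumptions about code not shown here):

    // Sketch: clicking the plane adds a new shape at the clicked position
    const [shapes, setShapes] = useState([]);

    const onPlaneClick = (e) => {
        // e.point is where the click ray intersects the plane
        const position = [e.point.x, e.point.y, 3]; // drop the new shape from above
        setShapes([...shapes, { id: shapes.length, position }]);
    };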

Next I will be adding camera movement to this example.

Creating a Draggable Shape with React Three Fiber

I recently became interested in how to render 3D graphics in the browser. I think WebGL is an extremely powerful technology that may one day become an important way of rendering content on the web.

There are various frameworks and tools available to use WebGL such as Babylon.js and three.js. To me, three.js looks the most promising for the use cases I am interested in.

For simple examples, three.js works beautifully, but I think more complex applications can easily become unwieldy when using this framework. Thus I was very happy to come across react-three-fiber, which provides a wrapper around three.js using React. React, for all its shortcomings, is a powerful way to keep code modular and maintainable.

To get my hands dirty with this library, I created a little example application that renders a dodecahedron and allows dragging this shape by tapping or by clicking and dragging with the mouse.

Here is the link to the deployed application:

react-three-fiber-draggable.surge.sh

And here is the link to the source code:

github.com/mxro/threejs-test/tree/master/test1

I think the source code is pretty self-explanatory. Essentially all logic is encapsulated into index.js:

import ReactDOM from "react-dom"
import React, { useRef, useState } from "react"
import { Canvas, useThree, useFrame } from "react-three-fiber"
import { useDrag } from "react-use-gesture"
import "./index.css"

function DraggableDodecahedron() {
    const colors = ["hotpink", "red", "blue", "green", "yellow"];
    const ref = useRef();
    const [colorIdx, setColorIdx] = useState(0);
    const [position, setPosition] = useState([0, 0, 0]);
    const { size, viewport } = useThree();
    const aspect = size.width / viewport.width;
    useFrame(() => {
        ref.current.rotation.z += 0.01
        ref.current.rotation.x += 0.01
    });
    const bind = useDrag(({ offset: [x, y] }) => {
        const [,, z] = position;
        setPosition([x / aspect, -y / aspect, z]);
    }, { pointerEvents: true });

    return (
        <mesh position={position} {...bind()}
            ref={ref}
            onClick={e => {
                if (colorIdx === 4) {
                    setColorIdx(0);
                } else {
                    setColorIdx(colorIdx+1);
                }
            }}
            onPointerOver={e => console.log('hover')}
            onPointerOut={e => console.log('unhover')}>

            <dodecahedronBufferGeometry attach="geometry" />
            <meshLambertMaterial attach="material" color={colors[colorIdx]} />

        </mesh>
    )
}

ReactDOM.render(
    <Canvas>
        <spotLight intensity={1.2} position={[30, 30, 50]} angle={0.2} penumbra={1} castShadow />
        <DraggableDodecahedron />
    </Canvas>,
    document.getElementById("root")
)

Noteworthy here is that instead of creating a Material, Geometry and Mesh directly, they are declared in JSX. Also, instead of having to request animation frames ourselves, we use the useFrame hook to drive the animation for our component.
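For comparison, a roughly equivalent setup in plain three.js, without React, might look like this (a sketch using the standard three.js API; creating the scene, camera and renderer is omitted):

// Sketch: the same mesh and animation in plain three.js
const geometry = new THREE.DodecahedronBufferGeometry();
const material = new THREE.MeshLambertMaterial({ color: 'hotpink' });
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);

function animate() {
    requestAnimationFrame(animate);
    mesh.rotation.z += 0.01;
    mesh.rotation.x += 0.01;
    renderer.render(scene, camera);
}
animate();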

I think it can easily be seen how react-three-fiber could be used to make three.js applications more modular, for instance by handling the animation for each component separately. I think this project is also testament to the power of React, in that it can be used not only with the DOM but also with other rendering technologies.

Medooze Media Server Demo

I’ve recently done some research into WebRTC, specifically into how to stream media captured in the browser to a server. Initially I thought I could use something like Kinesis Video Streams and have AWS do the heavy lifting for me. Unfortunately this turned out to be far more complicated than I had anticipated, so I started looking for other options.

That is when I came across Media Servers such as Medooze, OpenVidu, Janus and Jitsi. Medooze caught my attention since it appears to scale very well and offers a NodeJS based server.

It did take me some time to find a meaningful demo for Medooze and then to get it running. So I thought I would briefly document the steps to get a Medooze demo up and running (note: this only works on Linux or Mac OS X):

  1. Head over to the media-server-client-js project and clone it.
  2. Run the following commands:
npm i
npm run-script dist
cd demo
npm i
  3. Get the IP address of your current machine:
ifconfig | grep "inet "
  4. Using this IP, launch the Medooze server in the demo directory:
node index.js [your IP]
  5. Head to a browser and open the URL https://[your IP]:8000 (for instance https://10.0.2.15:8000) and accept the SSL certificate for your localhost.

You should now see the demo page:

Clicking the buttons will create video streams:

The animation on top of the remote button is a video stream taken from a local canvas, and the animation/video to the right is the same stream relayed through the server.

So the client sends a stream to the server, the server sends that stream right back, and the client renders the returned stream.

During my local testing I encountered an issue when adding tracks with codecs (VP8, H264) that I have filed and link here for reference: Adding tracks with Codecs does not work

Advantages of Using React Hooks

I always had the feeling that React is just a bit too complex, a bit too ‘heavy’, to be a truly elegant solution to the problem of building complex user interfaces in JavaScript. Two issues, for instance, are the general project setup, exemplified by the need for create-react-app, and class-based components, with all their componentDidMount and this references.

While React Hooks are no solution to the first issue, they provide, in my mind, an elegant solution to the second: a better way to do what we used to do with class-based components.

To illustrate this, I will first provide an implementation of a simple component using a class-based component and then refactor this into an implementation using React Hooks.

Here is the initial implementation using a class-based component:

class User1 extends Component {
  constructor(props) {
    super(props);

    this.state = {
      userId: props.userId,
      userName: null,
      isLoading: false,
      error: null,
      unmounted: false,
    };
  }

  getUser() {
    this.setState({ isLoading: true, error: null });
  
    axios.get(`https://jsonplaceholder.typicode.com/users/${this.state.userId}`)
      .then(result => {
        if (this.state.unmounted) {
          return;
        }
        this.setState({
          userName: result.data.name,
          isLoading: false
        })
      }
      )
      .catch(error => {
        if (this.state.unmounted) {
          return;
        }

        this.setState({
          error,
          isLoading: false
        })
      });
  }

  componentDidMount() {
    this.getUser();
  }

  componentDidUpdate() {
    // this.getUser();
  }

  componentWillUnmount() {
    this.setState({ unmounted: true });
  }

  render() {
    return (<>
      {this.state.isLoading ? <p>Loading ...</p> : <></>}
      {this.state.error ? <p>Cannot load user</p> : <></>}
      {!this.state.isLoading && !this.state.error ? <p>{this.state.userName}</p> : <></>}
      <button onClick={() => {
        const newUserId = this.state.userId + 1;
        // pass getUser as a setState callback so it runs with the updated state
        this.setState({ userId: newUserId }, this.getUser);
      }} >Next</button>
    </>);
  }
}

As can be seen in the above code, this component requests data about a user from JSONPlaceholder and then displays it. There is also a button that triggers loading of the next user.

Simple enough, but we still need a fair amount of code to handle this scenario in a robust manner, including the cases where we start the request for a new user before the previous request has completed, or where a request only completes after the component has been unmounted.

A component with the exact same functionality can be implemented using React Hooks:

function User2(props) {
  const [userId, setUserId] = useState(props.userId);
  const [name, setName] = useState(null);
  const [isLoading, setIsLoading] = useState(true);
  const [isError, setIsError] = useState(false);

  useEffect(() => {
    let cancelled = false;
    const fetchData = async () => {
      setIsLoading(true);
      setIsError(false);
      let response;
      try {
        response = await axios.get(`https://jsonplaceholder.typicode.com/users/${userId}`);
      } catch (e) {
        // ignore results of requests that are no longer relevant
        if (cancelled) return;
        setIsError(true);
        setIsLoading(false);
        return;
      }
      if (cancelled) return;
      setIsLoading(false);
      setName(response.data.name);
    };
    fetchData();
    return () => {
      cancelled = true;
    };
  }, [userId]);

  return (<>
    {isLoading ? <p>Loading ...</p> : <></>}
    {isError ? <p>Cannot load user</p> : <></>}
    {!isLoading && !isError && name ? <p>{name}</p> : <></>}
    <button onClick={() => setUserId(userId + 1)} >Next</button>
  </>);
}

Here we use useState to define a number of state variables and useEffect to deal with state updates. useState is essential in allowing us to define a functional component that also has state. One major advantage of the hooks-based approach, in my mind, is that we don't need to worry about using this and are in no danger of forgetting it.

useEffect replaces the functionality of componentDidMount and componentDidUpdate in class-based components. I think it allows reacting to state changes in a much more elegant way. First, by linking the handler to the userId state, the useEffect we have defined only triggers when userId has been updated, without us having to add any additional tests and logic around that. Second, it elegantly handles both the component mounting and the component state changing: it always triggers on mount, and subsequently on every change to userId. Third, by returning a function as the result of the useEffect handler …

    return () => {
      cancelled = true;
    };

… we have a very easy way to deal with the component unmounting when a request is in flight.

However, the real power of React Hooks, in my mind, lies in their composability. The following example implements the same functionality using a custom open-source hook, use-data-api:

import useDataApi from 'use-data-api';

function User3(props) {
  const [userId, setUserId] = useState(props.userId);
  const [{ data, isLoading, isError }, performFetch] = useDataApi(null, null);

  useEffect(() => {
    performFetch(`https://jsonplaceholder.typicode.com/users/${userId}`);
  }, [userId, performFetch]);

  return (<>
    {isLoading ? <p>Loading ...</p> : <></>}
    {isError ? <p>Cannot load user</p> : <></>}
    {!isLoading && !isError && data ? <p>{data.name}</p> : <></>}
    <button onClick={() => setUserId(userId + 1)} >Next</button>
  </>);
}

Above we use the custom hook useDataApi, which takes care of the details of dealing with requests to an API (use-data-api/blob/master/src/index.js).

As can be seen, this last example is much shorter and easier to understand than the previous ones. This shows the biggest advantage of React Hooks: the ability to extract complex behaviour into external functions that can easily be reused within a project and across projects.

To summarise, here are all the advantages of using React Hooks discussed above:

  • Ability to create composite Hooks defining cross-cutting functionality concerns in an application.
  • Enables writing functional components with state (no more this).
  • useEffect provides a more concise and elegant way to handle component mount, update and unmount events.

Here is the complete source code of the examples used in this post:

react-hooks-tutorial

Deploy Lambda using SAM and Buildkite

One of the many good things about Lambdas on AWS is that they are quite easy to deploy. Simply speaking, all that is required is a zip file of the application, which can then be uploaded using an API call.

Things unfortunately quickly become more complicated, especially if the Lambda depends on other AWS resources, as it often does. Thankfully there is a solution for this in the form of the AWS Serverless Application Model (SAM). AWS SAM enables specifying Lambdas along with their resources and dependencies in a simple and coherent way.
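To give an idea of what this looks like, here is a minimal SAM template sketch (the function name, runtime and code location are assumptions for illustration):

# Sketch of a minimal SAM template (template.yml)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ExampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      CodeUri: ./src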

AWS being AWS, there are plenty of examples of deploying SAM-defined Lambdas using AWS tooling such as CodePipeline and CodeBuild. In this article, I will show that it is just as easy to deploy Lambdas using Buildkite.

For those wanting to skip straight to the code, here is the link to the GitHub repo with an example project:

lambda-sam-buildkite

This example uses the Buildkite Docker Compose Plugin that leverages a Dockerfile, which provides the AWS SAM CLI:

FROM python:alpine
# Install awscli and aws-sam-cli
RUN apk update && \
    apk upgrade && \
    apk add bash && \
    apk add --no-cache --virtual build-deps build-base gcc && \
    pip install awscli && \
    pip install aws-sam-cli && \
    apk del build-deps
RUN mkdir /app
WORKDIR /app

The Buildkite pipeline ensures that the correct environment variables are passed to the Docker container so that the AWS CLI can authenticate with AWS:

steps:
  - label: SAM deploy
    command: ".buildkite/deploy.sh"
    plugins:
      - docker-compose#v2.1.0:
          run: app
          env:
            - AWS_DEFAULT_REGION
            - AWS_ACCESS_KEY_ID
            - AWS_SECRET_ACCESS_KEY

The script that is called in the pipeline simply calls the AWS SAM CLI to package the CloudFormation template and then deploys it:

#!/bin/bash -e

# Create packaged template and upload to S3
sam package --template-file template.yml \
            --s3-bucket sam-buildkite-deployment-test \
            --output-template-file packaged.yml

# Apply CloudFormation template
sam deploy --template-file ./packaged.yml \
           --stack-name sam-buildkite-deployment-test \
           --capabilities CAPABILITY_IAM

And that’s it. This pipeline can easily be extended to deploy to different environments, such as development, staging and production, and to run unit and integration tests.
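For instance, the deploy step could be parameterised with an environment variable along these lines (a sketch; DEPLOY_ENV is an assumed variable, not part of the example project):

# Sketch: parameterise the stack name by environment
sam deploy --template-file ./packaged.yml \
           --stack-name "sam-buildkite-deployment-test-${DEPLOY_ENV}" \
           --capabilities CAPABILITY_IAM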

Setting up Continuous Deployment with Lerna and Buildkite

Buildkite is a great tool for running multi-step build and deployment pipelines. Lerna is a tool for managing multiple JavaScript packages within one git repository.

Given that both Lerna and Buildkite are quite popular, it is surprising how difficult it is to set up a basic build and deployment with these tools.

It is very easy to configure a deployment pipeline in Buildkite that will run every time a new commit has been made to a git branch. However, using Lerna, we want to build only those packages in a repository that have actually changed, rather than building all packages in the repository.

Lerna provides some built-in tooling for this, chiefly the ls command, which provides a list of all the packages defined in the monorepo. Using the --since filter with this command, we can easily determine all packages that have changed since the last commit as follows:

lerna ls --json --since HEAD^

Where HEAD^ is the git reference to the commit preceding HEAD. The --json flag provides output that is a bit easier to parse, for instance using jq.
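For instance, to extract just the names of the changed packages (standard jq syntax):

# List the names of all packages changed since the previous commit
lerna ls --json --since HEAD^ | jq -r '.[].name'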

However, in a CD environment, we usually build not only when a new merge to master has occurred but also when changes to a branch have been submitted. In that instance, we are not interested in the changes which occurred since the last commit but all the changes that have been made in the branch in comparison to the current master branch.

In this instance, the lerna ls command with a --since filter can help us when comparing the current branch with master.

lerna ls --json --since refs/heads/master

Internally, Lerna would run a command such as the following:

git --no-pager diff --name-only refs/heads/master..refs/remotes/origin/$BUILDKITE_BRANCH

This diff goes both ways, so a file changed in a package on master only, or on the branch only, will cause that package to be listed among the packages to be built. However, we are only interested in packages that have changed on the branch. This can be somewhat assured by running a git pull before the lerna ls command:

git pull --no-edit origin master
lerna ls --json --since refs/heads/master

Unfortunately Buildkite by default does something of a ‘lazy clone’ of the repository. It only ensures that the branch currently being built is checked out at its latest commit; other branches, including master, might be cached from previous builds and sit on an old commit. This prevents the above approach for building branches from working. Thankfully there is an environment variable in Buildkite we can use to force it to fetch the latest commits for all branches: BUILDKITE_CLEAN_CHECKOUT=true.
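This variable can be set, for instance, at the top of the pipeline definition (standard Buildkite pipeline syntax):

# Force a clean checkout so all branches are at their latest commits
env:
  BUILDKITE_CLEAN_CHECKOUT: "true"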

Having this list of packages that have changed, we can then trigger pipelines specific to building each changed package. This can be accomplished using a trigger step:

- trigger: "[name of pipeline for package x]"
  label: ":rocket: Trigger: Build for package x"
  async: false
  build:
    message: "${BUILDKITE_MESSAGE}"
    commit: "${BUILDKITE_COMMIT}"
    branch: "${BUILDKITE_BRANCH}"

Another way to go about deploying with Lerna might be lerna publish, where we would push npm packages to an npm registry and then trigger builds from there. I haven't tested this approach, and I think it would require a private npm registry, which the approach outlined in this article does not.

If anyone has a more elegant way to go about orchestrating the builds of packages in a Lerna repository, please let everyone know in the comments.

Testing Apollo Client/Server Applications

Following up on the GraphQL, Node.JS and React Monorepo Starter Kit and GraphQL Apollo Starter Kit (Lerna, Node.js), I have now created an extended example which includes facilities to run unit and integration tests using Jest.

The code can be found on GitHub:

apollo-client-server-tests

The following tests are included:

React Component Test

This test asserts that a React component is rendered correctly. Backend data from GraphQL is supplied via a mock (packages/client-components/src/Books/Books.test.js):

import React from 'react';
import Books from './Books';

import renderer from 'react-test-renderer'
import { MockedProvider } from 'react-apollo/test-utils';

import GET_BOOK_TITLES from './graphql/queries/booktitles';

import wait from 'waait';

const mocks = [
  {
    request: {
      query: GET_BOOK_TITLES
    },
    result: {
      data: {
        books: [
          {
            title: 'Harry Potter and the Chamber of Secrets',
            author: 'J.K. Rowling',
          },
          {
            title: 'Jurassic Park',
            author: 'Michael Crichton',
          }
        ]
      },
    },
  },
];

it('Renders one book', async () => {

  const component = renderer.create(<MockedProvider mocks={mocks} addTypename={false}>
    <Books />
  </MockedProvider>);
  expect(component.toJSON()).toEqual('Loading...');

  // to wait for event loop to complete - after which component should be loaded
  await wait(0);

  const pre = component.root.findByType('pre');
  expect(pre.children).toContain('Harry Potter and the Chamber of Secrets');

});

GraphQL Schema Test

Based on the article Extensive GraphQL Testing in 3 minutes, this test verifies that the GraphQL schema is defined correctly for running the relevant queries (packages/server-books/src/schema/index.test.js):

import {
    makeExecutableSchema,
    addMockFunctionsToSchema,
    mockServer
} from 'graphql-tools';

import { graphql } from 'graphql';

import booksSchema from './index';

const titleTestCase = {
    id: 'Query Title',
    query: `
      query {
        books {
            title
        }
      }
    `,
    variables: {},
    context: {},
    expected: { data: { books: [{ title: 'Title'} , { title: 'Title' }] } }
};

const cases = [titleTestCase];

describe('Schema', () => {
    const typeDefs = booksSchema;
    const mockSchema = makeExecutableSchema({ typeDefs });

    addMockFunctionsToSchema({
        schema: mockSchema,
        mocks: {
            Boolean: () => false,
            ID: () => '1',
            Int: () => 1,
            Float: () => 1.1,
            String: () => 'Title',
        }
    });

    test('Has valid type definitions', async () => {
        expect(async () => {
            const MockServer = mockServer(typeDefs);

            await MockServer.query(`{ __schema { types { name } } }`);
        }).not.toThrow();
    });

    cases.forEach(obj => {
        const { id, query, variables, context: ctx, expected } = obj;

        test(`Testing Query: ${id}`, async () => {
            return await expect(
                graphql(mockSchema, query, null, { ctx }, variables)
            ).resolves.toEqual(expected);
        });
    });

});

GraphQL Schema and Resolver Test

Extending the previous test as suggested by the article Effective Testing a GraphQL Server, this test affirms that the GraphQL schema and resolvers are working together correctly (packages/server-books/tests/Books.test.js):

import { makeExecutableSchema } from 'graphql-tools'
import { graphql } from 'graphql'
import resolvers from '../src/resolvers'
import typeDefs from '../src/schema'

const titleTestCase = {
    id: 'Query Title',
    query: `
      query {
        books {
            title
        }
      }
    `,
    variables: {},
    context: {},
    expected: { data: { books: [{ title: 'Harry Potter and the Chamber of Secrets' }, { title: 'Jurassic Park' }] } }
};

describe('Test Cases', () => {

    const cases = [titleTestCase]
    const schema = makeExecutableSchema({ typeDefs: typeDefs, resolvers: { Query: resolvers } })

    cases.forEach(obj => {
        const { id, query, variables, context, expected } = obj

        test(`query: ${id}`, async () => {
            const result = await graphql(schema, query, null, context, variables)
            return expect(result).toEqual(expected)
        })
    })
})

Conclusion

As with the previous two articles on getting started with Apollo, the code developed here again aims to be as minimalistic as possible. It shows how Apollo client/server code may be tested in three different ways. Together, these approaches are quite exhaustive, even if the presented tests are simplistic.

The only test missing is an integration test that exercises the React component against a live Apollo server back-end. I am not sure whether it is possible to run an ‘embedded’ Apollo server in the browser; running such a server for testing the React component would be a good addition.

GraphQL, Node.JS and React Monorepo Starter Kit

Following the GraphQL Apollo Starter Kit (Lerna, Node.js), I wanted to dig deeper into developing a monorepo for a GraphQL/React client-server application.

Unfortunately, things are not as easy as I thought at first. Chiefly, the create-react-app template does not appear to work very well with monorepos and local dependencies on other packages.

That’s why I put together a small, simple starter template for developing modular client-server applications using React, GraphQL and Node.js. Here is the code on GitHub:

nodejs-react-monorepo-starter-kit

Some things to note:

  • There are four packages in the project
    • client-main: The React client, based on create-react-app
    • client-components: Contains a definition of the component app. Used by client-main
    • server-main: The Node.js server definition
    • server-books: Contains schema and resolver for GraphQL backend. Used by server-main.
  • Each package defines its own package.json and can be built independently of the other packages.
  • The main entry point for the dependent packages (client-components and server-books) is set to dist/index.js (see the sketch after this list). This way, packages which use them can consume the transpiled version created by Babel and don't need to worry about the specific JS features used in the dependent packages.
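For illustration, the package.json of a dependent package might contain entries along these lines (a sketch; the build script is an assumption):

    {
      "name": "client-components",
      "main": "dist/index.js",
      "scripts": {
        "build": "babel src --out-dir dist"
      }
    }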

Like GraphQL Apollo Starter Kit (Lerna, Node.js) this starter kit is meant to be very basic to allow easy exploration of the source code.