Textures and Lighting with React and Three.js

In my previous three posts, I developed a simple WebGL application using react-three-fiber and three.js. In this post, I am adding texture loading and proper lighting to the application.

For reference, here are the links to the previous versions of the app:

  • Version 1: Just being able to drag a shape on the screen
  • Version 2: Dragging and dropping shapes using physics
  • Version 3: Being able to move the camera

Here is the version developed for this post:

threejs-react-textures-light

Source Code

You can click to add objects, click and drag them, move the camera using the WASD keys, and zoom with the mouse wheel.

Loading Textures

Textures can be loaded easily in react-three-fiber using the useLoader hook.

All that is required is to place the texture in the public/ folder of the React application, load it, and then link it to the material by setting its map property.

    // useLoader is provided by react-three-fiber; TextureLoader comes from three
    const texture = useLoader(TextureLoader, 'textures/grasslight-big.jpg');

    if (texture) {
        // tile the texture across the very large plane
        texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
        texture.repeat.set(1500, 1500);
        texture.anisotropy = 16;
    }

    return (
        <mesh ref={ref} receiveShadow position={position}
            onClick={onPlaneClick}>
            <planeBufferGeometry attach="geometry" args={[10000, 10000]} />
            {texture &&
                <meshPhongMaterial attach="material" map={texture} />
            }

        </mesh>
    )

I found that textures are often quite large; frequently more than 1 MB. This significantly extends loading times, so I added a simple loading screen. Unfortunately, to be able to display the text ‘loading’ I had to create a TextGeometry, which in turn required a font to be loaded (I prepared the Roboto font using facetype.js). The font by itself is more than 300 KB, so even loading the loading screen takes a bit of time.
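To illustrate the approach, here is a minimal sketch of such a loading text component (the component name and font path are my assumptions, not necessarily what the actual source uses):

    import React from 'react';
    import { useLoader } from 'react-three-fiber';
    import { FontLoader } from 'three';

    // Sketch of a 'loading' text mesh: the font JSON produced by facetype.js
    // must itself be loaded before the text can be shown, which is why even
    // the loading screen takes a moment to appear.
    function LoadingText() {
        const font = useLoader(FontLoader, 'fonts/roboto.json');
        return (
            <mesh position={[-2, 0, 0]}>
                <textGeometry attach="geometry" args={['loading', { font, size: 1, height: 0.1 }]} />
                <meshBasicMaterial attach="material" color="white" />
            </mesh>
        );
    }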

Lighting

The goal of this application is to have a simple, very large plane onto which any number of objects may be added. The issue I encountered was that getting shadows to work with a DirectionalLight turned out to be very tricky. In the end, I used a combination of an AmbientLight and a SpotLight.

        <ambientLight intensity={0.9} />

        {/* the spotlight's target must itself be rendered into the scene,
            hence the primitive element; Vector2 is imported from three */}
        <primitive object={lightTarget} position={lightTargetPosition} />
        <spotLight
            castShadow
            intensity={0.25}
            position={lightPosition}
            angle={Math.PI / 3}
            penumbra={1}
            shadow-mapSize={new Vector2(2048 * 5, 2048 * 5)}
            target={lightTarget}
        />
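Note that lightTarget itself is not defined in this snippet; it is a plain Object3D that is rendered into the scene via the primitive element above. A minimal sketch of creating it (the useMemo wrapping is my assumption):

    import { useMemo } from 'react';
    import { Object3D } from 'three';

    // A spotlight shines at its target object, which must be part of the
    // scene graph; hence it is rendered via <primitive> above.
    const lightTarget = useMemo(() => new Object3D(), []);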

Since the SpotLight cannot cover the whole plane (as said, it is meant to be very large) while still providing accurate shadows, I opted to move the SpotLight whenever the user moves the camera.

    const lightTargetYDelta = 120;
    const lightTargetXDelta = 80;
    const [lightPosition, setLightPosition] = useState([-lightTargetXDelta, -lightTargetYDelta, 200]);
    const [lightTargetPosition, setLightTargetPosition] = useState([0, 0, 0]);
    const onCameraMoved = (delta) => {
        // shift the light by the same amount the camera moved
        const newLightPosition = delta.map((e, idx) => lightPosition[idx] + e);
        setLightPosition(newLightPosition);
        // keep the target at a fixed offset so the lighting angle stays constant
        const newLightTargetPosition = [newLightPosition[0] + lightTargetXDelta, newLightPosition[1] + lightTargetYDelta, 0];
        setLightTargetPosition(newLightTargetPosition);
    };

This requires both updating the position of the light (setLightPosition) and moving the light target (setLightTargetPosition).
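How onCameraMoved gets invoked is not shown above. Hypothetically, a small helper component could track the camera every frame and report its movement (a sketch with assumed names, not the actual wiring in the source):

    import React, { useRef } from 'react';
    import { useFrame, useThree } from 'react-three-fiber';

    // Hypothetical helper: report the camera's x/y movement each frame so
    // the light and its target can follow.
    function CameraTracker({ onCameraMoved }) {
        const { camera } = useThree();
        const last = useRef(camera.position.clone());
        useFrame(() => {
            const delta = camera.position.clone().sub(last.current);
            if (delta.lengthSq() > 0) {
                onCameraMoved([delta.x, delta.y, 0]);
                last.current.copy(camera.position);
            }
        });
        return null;
    }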

Modularity

Since the amount of code for this example increased quite a bit over the past three iterations, I broke up the application into multiple modules, with most React components now sitting in their own file.

I think this really shows the advantage of using React with Three.js, since it is easy for each component to manage its own state.

For the next iteration, I will most likely look at how I can remove the textures or use much smaller ones. I would like the application to load as quickly as possible, and large textures clearly work against that goal.

Camera Movement with Three.js

I have recently been working on a small example application using three.js and react-three-fiber. In the first two iterations, I developed a simple draggable shape floating in space and then added support for multiple shapes that can be moved on a physical plane. In this post, I extend the example to support camera movement. Here are links to the previous two iterations and to the version developed in this post:

Prototypes

Iteration 3: Camera Movements (this post)

Source Code: threejs-test

Published App: three-js-camera.surge.sh

You can move the camera using WASD and zoom in and out using the mouse scroll wheel. You can create new objects by clicking on any empty spot on the plane.

Iteration 2: Movable objects on Plane

Blog Post: Create and Drag Shapes with Three.js, React and Cannon.js

Published App: react-three-fiber-draggable-v2

Iteration 1: Draggable Shape in Space

Blog Post: Creating a Draggable Shape with React Three Fiber

Published App: react-three-fiber-draggable.surge.sh

In the following I will describe the two ways in which camera movement is supported:

Camera Movements

Using the Keyboard

The easiest way to move the camera using the keyboard would be to change its position on every key press. That is what I tried first, and it turned out to be a very unsatisfactory solution. Instead, I store in an object how long each key has been pressed and then calculate the camera movement for every frame.

const keyPressed = {};

function App() {
    ...
    // remember when each key was first pressed
    const handleKeyDown = (e) => {
        if (!keyPressed[e.key]) {
            keyPressed[e.key] = new Date().getTime();
        }
    };

    // forget the key as soon as it is released
    const handleKeyUp = (e) => {
        delete keyPressed[e.key];
    };
    ...
}
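The snippet does not show how these handlers are registered; a minimal sketch, assuming global listeners (the published source may instead wire them to the Canvas):

    import { useEffect } from 'react';

    // Minimal sketch: register the handlers on the document so key presses
    // are captured regardless of which element has focus.
    useEffect(() => {
        document.addEventListener('keydown', handleKeyDown);
        document.addEventListener('keyup', handleKeyUp);
        return () => {
            document.removeEventListener('keydown', handleKeyDown);
            document.removeEventListener('keyup', handleKeyUp);
        };
    }, []);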

react-three-fiber provides the useful useFrame hook, in which we then calculate the camera movement:

    // camera is obtained via the useThree hook
    useFrame((_, delta) => {
        // move camera according to key pressed
        Object.entries(keyPressed).forEach((e) => {
            const [key, start] = e;
            const duration = new Date().getTime() - start;

            // increase momentum if key pressed longer
            let momentum = Math.sqrt(duration + 200) * 0.01 + 0.05;

            // adjust for actual time passed
            momentum = momentum * delta / 0.016;

            // increase momentum if camera higher
            momentum = momentum + camera.position.z * 0.02;

            switch (key) {
                case 'w': camera.translateY(momentum); break;
                case 's': camera.translateY(-momentum); break;
                case 'd': camera.translateX(momentum); break;
                case 'a': camera.translateX(-momentum); break;
                default:
            }
        });
    });

We first calculate how long each key has been pressed and then use this duration to determine the momentum, which we finally apply to the camera's position. For example, a key held for 800 ms with the camera at z = 10 yields a momentum of roughly sqrt(800 + 200) * 0.01 + 0.05 + 10 * 0.02 ≈ 0.57 per frame (with the delta adjustment being 1 at 60 fps).

Using the Mouse Wheel

We use the mouse wheel to zoom in and out. For this, the position of the camera needs to change along the z axis.

    const mouseWheel = (e) => {
        // note: e.wheelDelta is non-standard; e.deltaY is the
        // standards-based alternative
        let delta = e.wheelDelta;
        delta = delta / 240;
        delta = -delta;
        if (delta <= 0) {
            delta -= camera.position.z * 0.1;
        } else {
            delta += camera.position.z * 0.1;
        }
        if (camera.position.z + delta > 1 && camera.position.z + delta < 200) {
            camera.translateZ(delta);
        }
    };

Here we simply determine the direction in which the wheel is scrolled and adjust the height of the camera accordingly. The camera is only permitted within a certain range; if it gets too close to the ground or too far away from it, its position is not changed any more, even if the mouse wheel is turned.
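The registration of this handler is likewise not shown; a minimal sketch, assuming a global listener (browsers that do not expose wheelDelta on 'wheel' events would need the e.deltaY fallback mentioned above):

    import { useEffect } from 'react';

    // Minimal sketch: attach the wheel handler to the document.
    useEffect(() => {
        document.addEventListener('wheel', mouseWheel);
        return () => document.removeEventListener('wheel', mouseWheel);
    }, []);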

I further had to change the way the drag movement was implemented. It originally worked using only the position of the mouse on the screen, which was sufficient as long as the camera never moved.

With a dynamic camera, a bit more calculation is required, which I encapsulated in the get3DPosition method.

// screenX and screenY are expected in normalized device coordinates (-1 to 1)
const get3DPosition = ({ screenX, screenY, camera }) => {
    const vector = new THREE.Vector3(screenX, screenY, 0.5);
    // project the screen coordinates into the 3D scene
    vector.unproject(camera);
    const dir = vector.sub(camera.position).normalize();
    // intersect the resulting ray with the z = 0 plane
    const distance = -camera.position.z / dir.z;
    const pos = camera.position.clone().add(dir.multiplyScalar(distance));
    return [pos.x, pos.y, 0];
};
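As a usage sketch (my assumption about the call site): pointer coordinates need to be converted to normalized device coordinates before being passed in, since unproject expects values between -1 and 1.

    // Sketch: convert a pointer event to normalized device coordinates
    // (-1..1) before unprojecting into the scene.
    const onPointerMove = (event) => {
        const screenX = (event.clientX / window.innerWidth) * 2 - 1;
        const screenY = -(event.clientY / window.innerHeight) * 2 + 1;
        const [x, y, z] = get3DPosition({ screenX, screenY, camera });
        // ... move the dragged object to [x, y, z] ...
    };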

The major limitation I want to work on next is the lighting. There is currently a SpotLight source that covers only a small part of the plane, and objects outside the cone of this spotlight are not rendered in an aesthetically pleasing fashion.

The full source code is available on GitHub.

Create and Drag Shapes with Three.js, React and Cannon.js

Following up on my article published a few days ago, I have now extended and improved the simple WebGL application that I originally developed using three.js and react-three-fiber.

Version 1 of the application allowed dragging a simple shape around on the screen:

App: https://react-three-fiber-draggable.surge.sh/

Source Code: https://github.com/mxro/threejs-test/tree/master/test1

Version 2 combines this basic premise with the cannon.js physics engine. Multiple objects can now be created and they drop down onto a solid plane, on which they can be moved.

App: https://react-three-fiber-draggable-v2.surge.sh/

Source Code: https://github.com/mxro/threejs-test/tree/master/test2

Simply click the canvas to add new shapes that then can be dragged around the plane.

The most important logic for this solution is in the DraggableDodecahedron component:

function DraggableDodecahedron({ position: initialPosition }) {
    const { size, viewport } = useThree();
    const [position, setPosition] = useState(initialPosition);
    const [quaternion, setQuaternion] = useState([0, 0, 0, 1]); // identity quaternion
    const aspect = size.width / viewport.width;

    const { ref, body } = useCannon({ bodyProps: { mass: 100000 } }, body => {
        body.addShape(new CANNON.Box(new CANNON.Vec3(1, 1, 1)))
        body.position.set(...position);
    }, []);

    const bind = useDrag(({ offset: [,], xy: [x, y], first, last }) => {
        if (first) {
            body.mass = 0;
            body.updateMassProperties();
        } else if (last) {
            body.mass = 10000;
            body.updateMassProperties();
        }
        body.position.set((x - size.width / 2) / aspect, -(y - size.height / 2) / aspect, -0.7);
    }, { pointerEvents: true });

    useFrame(() => {
        // Sync cannon body position with three js
        const deltaX = Math.abs(body.position.x - position[0]);
        const deltaY = Math.abs(body.position.y - position[1]);
        const deltaZ = Math.abs(body.position.z - position[2]);
        if (deltaX > 0.001 || deltaY > 0.001 || deltaZ > 0.001) {
            setPosition(body.position.clone().toArray());
        }
        const bodyQuaternion = body.quaternion.toArray();
        const quaternionDelta = bodyQuaternion.map((n, idx) => Math.abs(n - quaternion[idx]))
            .reduce((acc, curr) => acc + curr);
        if (quaternionDelta > 0.01) {
            setQuaternion(body.quaternion.toArray());
        }
    });
    return (
        <mesh ref={ref} castShadow position={position} quaternion={quaternion} {...bind()}
            onClick={e => {
                e.stopPropagation();
            }}
        >

            <dodecahedronBufferGeometry attach="geometry" />
            <meshLambertMaterial attach="material" color="yellow" />

        </mesh>
    )
}

Most notable here are three React hooks.

With the first hook, we create a Cannon body that is given the same dimensions and position as the shape.

     const { ref, body } = useCannon({ bodyProps: { mass: 100000 } }, body => {
        body.addShape(new CANNON.Box(new CANNON.Vec3(1, 1, 1)))
        body.position.set(...position);
    }, []);
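useCannon is a custom hook from the example's source code. As a rough sketch of what such a hook does (the WorldContext and the exact signature here are my assumptions):

    import { useRef, useEffect, useContext } from 'react';
    import * as CANNON from 'cannon';

    // Rough sketch of a useCannon-style hook: create a body, let the caller
    // configure it, and register it with the shared physics world.
    function useCannon({ bodyProps }, fn, deps = []) {
        const ref = useRef();
        const world = useContext(WorldContext); // assumed context holding the CANNON.World
        const body = useRef(new CANNON.Body(bodyProps)).current;
        useEffect(() => {
            fn(body);            // caller adds shapes and sets the initial position
            world.addBody(body); // register the body with the simulation
            return () => world.removeBody(body);
        }, deps);
        return { ref, body };
    }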

In the second hook, we use react-use-gesture to react to drag events. We temporarily set the mass of the dragged body to 0 (which makes it static in cannon.js, so gravity does not pull it down while dragging) and restore a positive mass once the drag gesture is complete. Finally, we set the position of the cannon.js body to the position that the drag gesture currently indicates.

      const bind = useDrag(({ offset: [,], xy: [x, y], first, last }) => {
        if (first) {
            body.mass = 0;
            body.updateMassProperties();
        } else if (last) {
            body.mass = 10000;
            body.updateMassProperties();
        }
        body.position.set((x - size.width / 2) / aspect, -(y - size.height / 2) / aspect, -0.7);
    }, { pointerEvents: true });

The third hook, useFrame(), runs a callback function before every frame is rendered (this hook is supplied by react-three-fiber). It is used here to synchronise the position of the cannon.js body with the three.js shape. Since cannon.js updates positions at a very fine granularity, we first check whether the body has changed its position or orientation by a significant margin; only if this is the case do we update the shape. This helps React avoid unnecessary updates of the virtual DOM.

    useFrame(() => {
        // Sync cannon body position with three js
        const deltaX = Math.abs(body.position.x - position[0]);
        const deltaY = Math.abs(body.position.y - position[1]);
        const deltaZ = Math.abs(body.position.z - position[2]);
        if (deltaX > 0.001 || deltaY > 0.001 || deltaZ > 0.001) {
            setPosition(body.position.clone().toArray());
        }
        const bodyQuaternion = body.quaternion.toArray();
        const quaternionDelta = bodyQuaternion.map((n, idx) => Math.abs(n - quaternion[idx]))
            .reduce((acc, curr) => acc + curr);
        if (quaternionDelta > 0.01) {
            setQuaternion(body.quaternion.toArray());
        }
    });

Apart from these hooks, there is a simple click handler that stops click events from propagating. This prevents the event handler defined for the plane from triggering (that handler adds a new shape to the scene).
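For illustration, the plane's handler might look roughly like this (a sketch; addShape and the drop height are my assumptions):

    // Sketch of the counterpart handler on the plane: without the
    // stopPropagation above, clicking an existing shape would also add
    // a new one.
    const onPlaneClick = (e) => {
        // e.point is the 3D intersection point supplied by react-three-fiber
        const position = [e.point.x, e.point.y, 5]; // drop the new shape from above
        addShape(position); // hypothetical helper that appends a DraggableDodecahedron
    };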

Next I will be adding camera movement to this example.

Creating a Draggable Shape with React Three Fiber

I recently became interested in how to render 3D graphics in the browser. I think WebGL is an extremely powerful technology that may one day become an important way of rendering content on the web.

There are various frameworks and tools available for working with WebGL, such as Babylon.js and three.js. To me, three.js looks the most promising for the use cases I am interested in.

For simple examples, three.js works beautifully, but I think more complex applications can easily become unwieldy with this framework alone. Thus I was very happy to come across react-three-fiber, which provides a wrapper around three.js using React. React, for all its shortcomings, is a powerful way to keep code modular and maintainable.

To get my hands dirty with this library, I created a little example application that renders a dodecahedron and allows dragging this shape by tapping it, or by clicking and dragging with the mouse.

Here is the link to the deployed application:

react-three-fiber-draggable.surge.sh

And here is the link to the source code:

github.com/mxro/threejs-test/tree/master/test1

I think the source code is pretty self-explanatory. Essentially all logic is encapsulated into index.js:

import ReactDOM from "react-dom"
import React, { useRef, useState } from "react"
import { Canvas, useThree, useFrame } from "react-three-fiber"
import { useDrag } from "react-use-gesture"
import "./index.css"

function DraggableDodecahedron() {
    const colors = ["hotpink", "red", "blue", "green", "yellow"];
    const ref = useRef();
    const [colorIdx, setColorIdx] = useState(0);
    const [position, setPosition] = useState([0, 0, 0]);
    const { size, viewport } = useThree();
    const aspect = size.width / viewport.width;
    // rotate the shape a little on every frame
    useFrame(() => {
        ref.current.rotation.z += 0.01;
        ref.current.rotation.x += 0.01;
    });
    // translate screen-space drag offsets into scene coordinates
    const bind = useDrag(({ offset: [x, y] }) => {
        const [, , z] = position;
        setPosition([x / aspect, -y / aspect, z]);
    }, { pointerEvents: true });

    return (
        <mesh position={position} {...bind()}
            ref={ref}
            onClick={e => {
                // cycle through the available colors on each click
                setColorIdx((colorIdx + 1) % colors.length);
            }}
            onPointerOver={e => console.log('hover')}
            onPointerOut={e => console.log('unhover')}>

            <dodecahedronBufferGeometry attach="geometry" />
            <meshLambertMaterial attach="material" color={colors[colorIdx]} />

        </mesh>
    )
}

ReactDOM.render(
    <Canvas>
        <spotLight intensity={1.2} position={[30, 30, 50]} angle={0.2} penumbra={1} castShadow />
        <DraggableDodecahedron />
    </Canvas>,
    document.getElementById("root")
)

Noteworthy here is that instead of creating a Material, a Geometry and a Mesh directly, they are declared in JSX. Also, instead of having to request animation frames ourselves, we use the useFrame hook to drive the animation for our component.
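For contrast, this is roughly what the same mesh setup looks like in plain three.js without JSX (a sketch of the imperative equivalent):

    // The imperative three.js equivalent of the JSX above: create the
    // geometry, material and mesh by hand and add them to the scene.
    const geometry = new THREE.DodecahedronBufferGeometry();
    const material = new THREE.MeshLambertMaterial({ color: "hotpink" });
    const mesh = new THREE.Mesh(geometry, material);
    scene.add(mesh);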

I think it is easy to see how react-three-fiber can be used to make three.js applications more modular, for instance by letting each component handle its own animation. I think this project is also a testament to the power of React, in that it can be used not only with the DOM but also with other rendering technologies.