Medooze Media Server Demo

I’ve recently done some research into WebRTC and specifically into how to stream media captured in the browser to a server. Initially I thought I could use something like Kinesis Video Streams and have AWS do the heavy lifting for me. Unfortunately this turned out to be way more complicated than I had anticipated, so I started looking for other options.

That is when I came across media servers such as Medooze, OpenVidu, Janus and Jitsi. Medooze caught my attention since it appears to scale very well and offers a Node.js-based server.

It did take me some time to find a meaningful demo for Medooze and then to get it running. So I thought I would briefly document the steps here for getting a Medooze demo up and running (note this only works on Linux or Mac OS X):

  1. Head over to the media-server-client-js project and clone it.
  2. Run the following commands:
npm i
npm run-script dist
cd demo
npm i
  3. Get the IP address of your current machine:
ifconfig | grep "inet "
  4. Using this IP, launch the Medooze server in the demo directory:
node index.js [your IP]
  5. Head to a browser and open the URL https://[your IP]:8000 (for instance https://10.0.2.15:8000). Accept the SSL certificate for your localhost.

You should now see the demo page.

Clicking the buttons will create video streams.

The animation on top of the remote button is a video stream taken from a local canvas, and the animation/video to the right is the same stream relayed through the server.

So the client sends a stream to the server, the server sends that stream right back and the client then renders that stream.
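
Conceptually, the browser side of such a loopback is standard WebRTC: capture a stream (here from a canvas), add its tracks to a peer connection and render whatever the server sends back. The following is a generic sketch using plain browser APIs; it is not the actual demo code, which uses the medooze client library and its own signalling:

// Generic browser-side sketch of a media loopback (not the Medooze demo code).
async function startLoopback() {
  const canvas = document.querySelector('canvas');
  const localStream = canvas.captureStream(30); // 30 fps stream from the canvas animation

  const pc = new RTCPeerConnection();

  // Send the canvas stream to the server.
  localStream.getTracks().forEach(track => pc.addTrack(track, localStream));

  // Render whatever the server relays back.
  pc.ontrack = (event) => {
    document.querySelector('video#remote').srcObject = event.streams[0];
  };

  // Offer/answer exchange; the signalling transport is application-specific.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // ... send offer.sdp to the server and apply its answer via pc.setRemoteDescription(...)
}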

During my local testing I encountered an issue when adding tracks with codecs (VP8, H264) that I have filed and link here for reference: Adding tracks with Codecs does not work

Advantages of Using React Hooks

I always had the feeling that React is just a bit too complex, a bit too ‘heavy’ to be a truly elegant solution to the problem of building complex user interfaces in JavaScript. Two issues, for instance, are the general project setup, exemplified by the need to have create-react-app, and class-based components, with all their componentDidMount and this references.

While React Hooks are no solution to the first issue, they provide, in my mind, an elegant solution to the second; they provide a better way to do what we used to do with class-based components.

To illustrate this, I will first provide an implementation of a simple component as a class-based component and then refactor it into an implementation using React Hooks.

Here is the initial implementation using a class-based component:

class User1 extends Component {
  constructor(props) {
    super(props);

    this.state = {
      userId: props.userId,
      userName: null,
      isLoading: false,
      error: null,
      unmounted: false,
    };
  }

  getUser() {
    this.setState({ isLoading: true, error: null });
  
    axios.get(`https://jsonplaceholder.typicode.com/users/${this.state.userId}`)
      .then(result => {
        if (this.state.unmounted) {
          return;
        }
        this.setState({
          userName: result.data.name,
          isLoading: false
        })
      }
      )
      .catch(error => {
        if (this.state.unmounted) {
          return;
        }

        this.setState({
          error,
          isLoading: false
        })
      });
  }

  componentDidMount() {
    this.getUser();
  }

  componentDidUpdate() {
    // this.getUser();
  }

  componentWillUnmount() {
    this.setState({ unmounted: true });
  }

  render() {
    return (<>
      {this.state.isLoading ? <p>Loading ...</p> : <></>}
      {this.state.error ? <p>Cannot load user</p> : <></>}
      {!this.state.isLoading && !this.state.error ? <p>{this.state.userName}</p> : <></>}
      <button onClick={() => {
        const newUserId = this.state.userId + 1;
        this.setState({ userId: newUserId }, this.getUser);
      }} >Next</button>
    </>);
  }
}

As can be seen in the above code, this component requests data about a user from JSONPlaceholder and then displays this data. There is also a button that will trigger loading of another user.

Simple enough – but we still need a fair amount of code to handle this scenario in a robust manner, including instances where we start the request for a new user before the previous request has been completed or where a request only completes after the component has been unmounted.

A component with the exact same functionality can be implemented using React Hooks:

function User2(props) {
  const [userId, setUserId] = useState(props.userId);
  const [name, setName] = useState(null);
  const [isLoading, setIsLoading] = useState(true);
  const [isError, setIsError] = useState(false);

  useEffect(() => {
    let cancelled = false;
    const fetchData = async () => {
      setIsLoading(true);
      setIsError(false);
      let response;
      try { 
        response = await axios.get(`https://jsonplaceholder.typicode.com/users/${userId}`);
      } catch (e) {
        if (cancelled) return;
        setIsError(true);
        setIsLoading(false);
        return;
      }
      if (cancelled) return;
      setName(response.data.name);
      setIsLoading(false);
    };
    fetchData();
    return () => {
      cancelled = true;
    };
  }, [userId]);

  return (<>
    {isLoading ? <p>Loading ...</p> : <></>}
    {isError ? <p>Cannot load user</p> : <></>}
    {!isLoading && !isError && name ? <p>{name}</p> : <></>}
    <button onClick={() => setUserId(userId + 1)} >Next</button>
  </>);
}

Here we use useState to define a number of state variables and useEffect to deal with state updates. useState is, of course, essential in allowing us to define a functional component that also uses state. One major advantage of the hooks-based approach, in my mind, is that we don’t need to worry about using this and are in no danger of forgetting it.

useEffect replaces the functionality of componentDidMount and componentDidUpdate in the class-based component. I think it allows reacting to state changes in a much more elegant way. Firstly, by linking it to the userId state, the useEffect handler we have defined will only trigger when userId has been updated, without us having to add any additional checks and logic around that. Secondly, it elegantly handles both the case when the component mounts and the case when the component state changes: it always triggers on component mount, and subsequently on changes to userId. Thirdly, by returning a function as the result of the useEffect handler …

    return () => {
      cancelled = true;
    };

… we have a very easy way to deal with the component unmounting when a request is in flight.

However, the real power of React Hooks, in my mind, lies in their composability. The following example implements the same features for the component using a custom open-source hook, use-data-api:

import useDataApi from 'use-data-api';

function User3(props) {
  const [userId, setUserId] = useState(props.userId);
  const [{ data, isLoading, isError }, performFetch] = useDataApi(null, null);

  useEffect(() => {
    performFetch(`https://jsonplaceholder.typicode.com/users/${userId}`);
  }, [userId, performFetch]);

  return (<>
    {isLoading ? <p>Loading ...</p> : <></>}
    {isError ? <p>Cannot load user</p> : <></>}
    {!isLoading && !isError && data ? <p>{data.name}</p> : <></>}
    <button onClick={() => setUserId(userId + 1)} >Next</button>
  </>);
}

Above we use the custom hook useDataApi, which takes care of the details of dealing with requests to an API (use-data-api/blob/master/src/index.js).

As can be seen, this last example is much shorter and easier to understand than the previous examples. This shows the biggest advantage of React Hooks: the ability to extract complex behaviour into external functions that can easily be reused within a project and across projects.
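
To make this extraction more concrete, here is a minimal sketch of what such a custom hook could look like, bundling the fetch-and-cancel logic from the useEffect example above (this useUser hook is hypothetical and not part of the example repository):

// Hypothetical custom hook extracting the fetch logic from User2 (not part of the example repo).
function useUser(initialUserId) {
  const [userId, setUserId] = useState(initialUserId);
  const [name, setName] = useState(null);
  const [isLoading, setIsLoading] = useState(true);
  const [isError, setIsError] = useState(false);

  useEffect(() => {
    let cancelled = false;
    setIsLoading(true);
    setIsError(false);
    axios.get(`https://jsonplaceholder.typicode.com/users/${userId}`)
      .then(result => {
        if (cancelled) return;
        setName(result.data.name);
        setIsLoading(false);
      })
      .catch(() => {
        if (cancelled) return;
        setIsError(true);
        setIsLoading(false);
      });
    return () => { cancelled = true; };
  }, [userId]);

  return [{ name, isLoading, isError }, setUserId];
}

A component can then consume this with a single line, const [{ name, isLoading, isError }, setUserId] = useUser(props.userId);, which is essentially the shape that use-data-api provides in the next example.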

To summarise, here are all the advantages of using React Hooks discussed above:

  • Ability to create composite Hooks defining cross-cutting functionality concerns in an application.
  • Enables writing functional components with state (no more this).
  • useEffect provides a more concise and elegant way to handle component mount, update and unmount events.

Here is the complete source code of the examples used in this post:

react-hooks-tutorial

Deploy Java Lambda using SAM and Buildkite

I’ve recently covered how to deploy a Node.js-based Lambda using SAM and Buildkite. I would say that this should cover most use cases, since I believe a majority of AWS Lambdas are implemented in JavaScript.

However, Lambda supports many more programming languages than just JavaScript and one of the more important ones among them is certainly Java. In principle, deployment for Java and JavaScript is very similar: we provide a SAM template and an archive of a packaged application. However, Java uses a different toolset than JavaScript, so the build process of the app will be different.

In the following, I will briefly explain how to build and deploy a Java Lambda using Maven, SAM and Buildkite. If you want to get to the code straight away, find a complete sample project here:

https://github.com/mxro/lambda-java-sam-buildkite

First we define a simple Java based Lambda:

package com.amazonaws.handler;

import com.amazonaws.services.lambda.runtime.Context; 
import com.amazonaws.services.lambda.runtime.LambdaLogger;

public class SimpleHandler {
    public String myHandler(int myCount, Context context) {
        LambdaLogger logger = context.getLogger();
        logger.log("received : " + myCount);
        return String.valueOf(myCount);
    }
}

Then add a pom.xml to define the build and dependencies of our Java application:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.amazonaws</groupId>
    <artifactId>lambda-java-sam-buildkite</artifactId>
    <version>1.0.0</version>
    <packaging>jar</packaging>
    <name>Sample project for deploying a Java AWS Lambda function using SAM and Buildkite.</name>

    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <maven.compiler.plugin.version>3.8.0</maven.compiler.plugin.version>
        <aws.lambda.java.core.version>1.1.0</aws.lambda.java.core.version>
        <junit.version>4.12</junit.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>${aws.lambda.java.core.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>    
    </dependencies>

    
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>${maven.compiler.plugin.version}</version>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
    
</project>

We define the Lambda using a SAM template. Note that we are referencing the JAR assembled by Maven: target/lambda-java-sam-buildkite-1.0.0.jar.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
    sam-app

Globals:
    Function:
        Timeout: 20

Resources:
  SimpleFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: target/lambda-java-sam-buildkite-1.0.0.jar
      Handler: com.amazonaws.handler.SimpleHandler::myHandler
      Runtime: java8

Then we need a Dockerfile that will run our build. Here we simply start with an image that has Maven preinstalled and then install Python and the AWS SAM CLI.

FROM zenika/alpine-maven:3-jdk8

# Installing python
RUN apk add --update \
    python \
    python-dev \
    py-pip \
    build-base \
  && pip install virtualenv \
  && rm -rf /var/cache/apk/*

RUN python --version

# Installing AWS CLI and SAM CLI
RUN apk update && \
    apk upgrade && \
    apk add bash && \
    apk add --no-cache --virtual build-deps build-base gcc && \
    pip install awscli && \
    pip install aws-sam-cli && \
    apk del build-deps

RUN mkdir /app
WORKDIR /app
EXPOSE 3001

The following build script runs within this Docker image. It first packages the Java application into a JAR file using mvn package and then uses the SAM CLI to package and deploy the template and application.

#!/bin/bash -e

mvn package

echo "### SAM Deploy"

sam --version

sam package --template-file template.yaml --s3-bucket sam-buildkite-deployment-test --output-template-file packaged.yml

sam deploy --template-file ./packaged.yml --stack-name sam-buildkite-deployment-test --capabilities CAPABILITY_IAM

Finally we define the Buildkite pipeline. Note that this pipeline assumes the environment variables AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are provided by Buildkite.

steps:
  - name: Build and deploy to AWS
    command:
      - './.buildkite/deploy.sh'
    plugins:
      - docker-compose#v2.1.0:
          run: app
          config: 'docker-compose.yml'
          env:
            - AWS_DEFAULT_REGION
            - AWS_ACCESS_KEY_ID
            - AWS_SECRET_ACCESS_KEY
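
The docker-compose.yml referenced by the plugin is not reproduced in this post. A minimal sketch that builds the Dockerfile above and exposes it as the app service could look like this (mounting the repository into /app):

version: '3'
services:
  app:
    build: .
    volumes:
      - .:/app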

Now we simply need to create a Buildkite pipeline and link it to a repository with our source code.

Deploy Lambda using SAM and Buildkite

One of the many good things about Lambdas on AWS is that they are quite easy to deploy. Simply speaking, all that one requires is a zip file of an application that then can be uploaded using an API call.

Things unfortunately quickly become more complicated, especially if the Lambda depends on other resources on AWS, as they often do. Thankfully there is a solution for this in the form of the AWS Serverless Application Model (SAM). AWS SAM makes it possible to specify Lambdas along with their resources and dependencies in a simple and coherent way.

AWS being AWS, there are plenty of examples of deploying Lambdas defined using SAM with AWS tooling such as CodePipeline and CodeBuild. In this article, I will show that it is just as easy to deploy Lambdas using Buildkite.

For those wanting to skip straight to the code, here is the link to the GitHub repo with an example project:

lambda-sam-buildkite

This example uses the Buildkite Docker Compose Plugin, which leverages a Dockerfile that provides the AWS SAM CLI:

FROM python:alpine
# Install awscli and aws-sam-cli
RUN apk update && \
    apk upgrade && \
    apk add bash && \
    apk add --no-cache --virtual build-deps build-base gcc && \
    pip install awscli && \
    pip install aws-sam-cli && \
    apk del build-deps
RUN mkdir /app
WORKDIR /app

The Buildkite pipeline ensures that the correct environment variables are passed to the Docker container so that the AWS CLI can authenticate with AWS:

steps:
  - label: SAM deploy
    command: ".buildkite/deploy.sh"
    plugins:
      - docker-compose#v2.1.0:
          run: app
          env:
            - AWS_DEFAULT_REGION
            - AWS_ACCESS_KEY_ID
            - AWS_SECRET_ACCESS_KEY

The script that is called in the pipeline simply calls the AWS SAM CLI to package the CloudFormation template and then deploys it:

#!/bin/bash -e

# Create packaged template and upload to S3
sam package --template-file template.yml \
            --s3-bucket sam-buildkite-deployment-test \
            --output-template-file packaged.yml

# Apply CloudFormation template
sam deploy --template-file ./packaged.yml \
           --stack-name sam-buildkite-deployment-test \
           --capabilities CAPABILITY_IAM
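
The template.yml packaged by this script is not shown in this post. For reference, a minimal template for a Node.js function might look roughly like the following; the handler path, code location and runtime here are assumptions and not taken from the example repository:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Minimal example function

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: index.handler
      Runtime: nodejs10.x
      Timeout: 10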

And that’s it already. This pipeline can easily be extended to deploy to different environments such as development, staging and production and to run unit and integration tests.
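
For instance, one straightforward way to support multiple environments is to vary the stack name in the deploy step; the ENVIRONMENT variable below is an assumption that would need to be provided per pipeline:

# Sketch: one stack per environment (ENVIRONMENT provided by the pipeline)
sam deploy --template-file ./packaged.yml \
           --stack-name "sam-buildkite-deployment-test-${ENVIRONMENT}" \
           --capabilities CAPABILITY_IAM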

Setting up Continuous Deployment with Lerna and Buildkite

Buildkite is a great tool for running multi-step build and deployment pipelines. Lerna is a tool for managing multiple JavaScript packages within one git repository.

Given that both Lerna and Buildkite are quite popular, it is surprising how difficult it is to set up a basic build and deployment with these tools.

It is very easy to configure a deployment pipeline in Buildkite that will run every time a new commit has been made to a git branch. However, using Lerna, we want to build only those packages in a repository that have actually changed, rather than building all packages in the repository.

Lerna provides some built-in tooling for this, chiefly the ls command, which provides us with a list of all the packages defined in the monorepo. Using the --since filter with this command, we can easily determine all packages that have changed since the last commit as follows:

lerna ls --json --since HEAD^

Where HEAD^ is the git reference to the commit preceding HEAD. The --json flag provides us with output that is a bit easier to parse, for instance using jq.
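
The output is a JSON array with one entry per package, roughly of the following shape (package name and path are placeholders):

[
  {
    "name": "package-x",
    "version": "1.2.0",
    "private": false,
    "location": "/path/to/repo/packages/package-x"
  }
]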

However, in a CD environment, we usually build not only when a new merge to master has occurred but also when changes to a branch have been submitted. In that instance, we are not interested in the changes that occurred since the last commit but in all the changes that have been made in the branch in comparison to the current master branch.

In this instance, the lerna ls command with a --since filter can help us when comparing the current branch with master.

lerna ls --json --since refs/heads/master

Internally, Lerna would run a command such as the following:

git --no-pager diff --name-only refs/heads/master..refs/remotes/origin/$BUILDKITE_BRANCH

This diff goes both ways, so when a file is changed in a package in master only or in the branch only, it will cause the package to be listed among the packages to be built. We are, however, only interested in those packages that have changed in the branch. This can be somewhat assured by running a git pull before the lerna ls command:

git pull --no-edit origin master
lerna ls --json --since refs/heads/master

Unfortunately Buildkite by default does something of a ‘lazy clone’ of the repository. It will only ensure that the branch currently being built is checked out at the latest commit; other branches, including master, might be cached from previous builds and still be on an old commit. This would prevent the above approach for building branches from working. Thankfully there is an environment variable in Buildkite we can use to force it to fetch the latest commits for all branches: BUILDKITE_CLEAN_CHECKOUT=true.
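
This can be enabled per pipeline in the Buildkite settings or, for example, at the top of the pipeline definition:

env:
  BUILDKITE_CLEAN_CHECKOUT: "true"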

Having this list of packages that have changed, we can then trigger pipelines specific to building the changed packages. This can be accomplished using a trigger step.

- trigger: "[name of pipeline for package x]"
  label: ":rocket: Trigger: Build for package x"
  async: false
  build:
    message: "${BUILDKITE_MESSAGE}"
    commit: "${BUILDKITE_COMMIT}"
    branch: "${BUILDKITE_BRANCH}"

Another way to go about deploying with Lerna might be to use lerna publish, where we push npm packages to an npm registry and then trigger builds from there. I haven’t tested this approach, and I think it would require a private npm registry, which the approach outlined in this article does not.

If anyone has a more elegant way to go about orchestrating the builds of packages in a Lerna repository, please let everyone know in the comments.

A gentle introduction to vim

vim is a great tool for editing any form of text, be it a blog post, plain text or programming code. Unfortunately there is a bit of a learning curve when first getting started with vim. In this article I want to provide an overview of some of the key features of vim and how to get started using this editor.

If you are using Linux, it is very easy to use vim. Just type vi in the terminal and vim will open. On Windows, I find the easiest way is to install cmder. Once it is installed, just start cmder and type vi just as one would on a Linux system.

Initially, when you start up vim, you will be in a new empty file and in what is called normal mode. In normal mode, it is not possible to enter new text. To do so, we need to switch to insert mode, which is easily done by typing i. You can now write text to your heart’s content. Finally, pressing Esc will get us back into normal mode.

To save our work, we must go into command line mode, which is reached from normal mode by prefixing commands with :. Typing :w test.txt followed by Enter will save the text we have entered, and :q will quit the vim editor.

Switching between these modes may seem a bit cumbersome at the start but splitting up the task of working with text into these different modes allows for very efficient and effective workflows.

In the following I will first describe how each of the individual modes works and then show how we can use operators and text objects to unlock the full potential of vim.

Modes

Normal Mode

Normal mode is the default mode that vim will be in once it is started and can usually be enabled by hitting the Esc key. It is the mode in which it is easiest to enable the other modes and also to navigate within a document. Some of the most useful commands for normal mode are the following:

  • j, k: Go one line down or up
  • h, l: Move one character left or right
  • i: Enable insert mode
  • v: Enable visual mode for selecting by character
  • V: Enable visual mode for selecting lines
  • :: Enable command line mode

Insert Mode

Insert mode is closest to how a normal editor works. It is possible to add and delete text. This also renders most keys on the keyboard unavailable for composing commands. So, most importantly, we only need to remember the Esc key to get back to normal mode.

Visual Mode

Visual mode is used for selecting text. The most important commands are:

  • j, k: Select the line below or above
  • h, l: Select the character to the left or right
  • d: Delete selected characters
  • y: Copy (yank) the selection into a register (press p in normal mode to paste)
  • >, <: Indent lines to the right or left

Command Line Mode

Command line mode is for system-level tasks such as saving or loading files and, importantly, for closing vim. Some important commands (as entered from normal mode):

  • :wq: Save currently open file and quit
  • :q!: Quit vim and discard any unsaved changes

Operators and Text Objects

Since keys on the keyboard are only used for writing text in insert mode, they become available for other operations in the remaining modes. That is great for making complex operations quickly accessible using the keys that are easiest to type. Since it is not required to use modifier keys such as Ctrl and Alt, it also becomes very easy to string different commands together. For instance, the following command will delete a word:

daw

These more complex commands in vim are composed of operators, text objects and motions. In the example above, d is the operator for deleting, whereas aw denotes the text object of a word including its trailing space. So, given that c is the operator for changing, we could also write:

caw

This would enable us to change the word at our current cursor position rather than deleting it. Given that p references a paragraph text object, we could also write:

dap

This would delete a whole paragraph, with a paragraph being any block of text delimited by empty lines.

Here is a quick reference of some operators:

  • d: Delete
  • c: Change
  • y: Copy/yank into register

And some text objects:

  • aw: A complete word
  • as: A complete sentence
  • ap: A paragraph
  • i{: Everything within curly braces
  • i": Everything within quotation marks
  • a": The content of the quotation marks plus the quotation marks themselves.
  • /word: Everything up to (but not including) the next occurrence of ‘word’ (strictly a search motion rather than a text object)

There are usually two flavours of text objects: those prefixed with a and those prefixed with i. The a prefix denotes that we are including whitespace, for instance the trailing whitespace after a word, whereas the i prefix does not include the whitespace. Therefore, when deleting a superfluous word, we would usually use daw, whereas when we want to replace the word (and preserve the whitespace around it), we would use ciw.

Motions

Motions can be used for two purposes. Without a preceding operator, a motion can be used to navigate within the text when in normal mode. With a preceding operator, the operator will be applied to whatever is captured by the motion.

For instance, entering $ in normal mode will get us to the end of a line. If we enter d$, we will delete all text until the end of the line.

Here a list of a few of the motions available in vim:

  • 0, $: Go to the beginning or end of a line
  • j, k: Go one line down or up
  • h, l: Move one character left or right
  • fc: Find the next occurrence of character c
  • Fc: Find the previous occurrence of character c
  • w, b: Go to the next/previous word
  • G: Go to the last line of the document
  • gg: Go to the first line of the document
  • (, ): Sentence backward / forward
  • {, }: Paragraph backward / forward


Testing Apollo Client/Server Applications

Following up on the GraphQL, Node.JS and React Monorepo Starter Kit and GraphQL Apollo Starter Kit (Lerna, Node.js), I have now created an extended example which includes facilities to run unit and integration tests using Jest.

The code can be found on GitHub:

apollo-client-server-tests

The following tests are included:

React Component Test

This test asserts that a React component is rendered correctly. Backend data from GraphQL is supplied via a mock [packages/client-components/src/Books/Books.test.js].

import React from 'react';
import Books from './Books';

import renderer from 'react-test-renderer'
import { MockedProvider } from 'react-apollo/test-utils';

import GET_BOOK_TITLES from './graphql/queries/booktitles';

import wait from 'waait';

const mocks = [
  {
    request: {
      query: GET_BOOK_TITLES
    },
    result: {
      data: {
        books: [
          {
            title: 'Harry Potter and the Chamber of Secrets',
            author: 'J.K. Rowling',
          },
          {
            title: 'Jurassic Park',
            author: 'Michael Crichton',
          }
        ]
      },
    },
  },
];

it('Renders one book', async () => {

  const component = renderer.create(<MockedProvider mocks={mocks} addTypename={false}>
    <Books />
  </MockedProvider>);
  expect(component.toJSON()).toEqual('Loading...');

  // to wait for event loop to complete - after which component should be loaded
  await wait(0);

  const pre = component.root.findByType('pre');
  expect(pre.children).toContain('Harry Potter and the Chamber of Secrets');

});

GraphQL Schema Test

Based on the article Extensive GraphQL Testing in 3 minutes, this test verifies the GraphQL schema is defined correctly for running the relevant queries [packages/server-books/src/schema/index.test.js].

import {
    makeExecutableSchema,
    addMockFunctionsToSchema,
    mockServer
} from 'graphql-tools';

import { graphql } from 'graphql';

import booksSchema from './index';

const titleTestCase = {
    id: 'Query Title',
    query: `
      query {
        books {
            title
        }
      }
    `,
    variables: {},
    context: {},
    expected: { data: { books: [{ title: 'Title'} , { title: 'Title' }] } }
};

const cases = [titleTestCase];

describe('Schema', () => {
    const typeDefs = booksSchema;
    const mockSchema = makeExecutableSchema({ typeDefs });

    addMockFunctionsToSchema({
        schema: mockSchema,
        mocks: {
            Boolean: () => false,
            ID: () => '1',
            Int: () => 1,
            Float: () => 1.1,
            String: () => 'Title',
        }
    });

    test('Has valid type definitions', async () => {
        expect(async () => {
            const MockServer = mockServer(typeDefs);

            await MockServer.query(`{ __schema { types { name } } }`);
        }).not.toThrow();
    });

    cases.forEach(obj => {
        const { id, query, variables, context: ctx, expected } = obj;

        test(`Testing Query: ${id}`, async () => {
            return await expect(
                graphql(mockSchema, query, null, { ctx }, variables)
            ).resolves.toEqual(expected);
        });
    });

});

GraphQL Schema and Resolver Test

Extending the previous test as suggested by the article Effective Testing a GraphQL Server, this test affirms that GraphQL schema and resolvers are working correctly [packages/server-books/tests/Books.test.js].

import { makeExecutableSchema } from 'graphql-tools'
import { graphql } from 'graphql'
import resolvers from '../src/resolvers'
import typeDefs from '../src/schema'

const titleTestCase = {
    id: 'Query Title',
    query: `
      query {
        books {
            title
        }
      }
    `,
    variables: {},
    context: {},
    expected: { data: { books: [{ title: 'Harry Potter and the Chamber of Secrets' }, { title: 'Jurassic Park' }] } }
};

describe('Test Cases', () => {

    const cases = [titleTestCase]
    const schema = makeExecutableSchema({ typeDefs: typeDefs, resolvers: { Query: resolvers } })

    cases.forEach(obj => {
        const { id, query, variables, context, expected } = obj

        test(`query: ${id}`, async () => {
            const result = await graphql(schema, query, null, context, variables)
            return expect(result).toEqual(expected)
        })
    })
})

Conclusion

As with the previous two articles on getting started with Apollo, the code developed here again aims to be as minimalistic as possible. It shows how Apollo client/server code may be tested in three different ways. Together these are quite exhaustive, even if the presented tests themselves are simplistic.

The only test missing is an integration test that exercises the React component linked to a live Apollo server back-end. I am not sure if it is possible to run an ‘embedded’ Apollo server in the browser. Running such a server for testing the React component would also be a good addition.

GraphQL, Node.JS and React Monorepo Starter Kit

Following the GraphQL Apollo Starter Kit (Lerna, Node.js), I wanted to dig deeper into developing a monorepo for a GraphQL/React client-server application.

Unfortunately, things are not as easy as I had thought at first. Chiefly, the create-react-app template does not appear to work very well with monorepos and local dependencies on other packages.

That’s why I put together a small, simple starter template for developing modular client-server applications using React, GraphQL and Node.js. Here is the code on GitHub:

nodejs-react-monorepo-starter-kit

Some things to note:

  • There are four packages in the project
    • client-main: The React client, based on create-react-app
    • client-components: Contains a definition of the component app. Used by client-main
    • server-main: The Node.js server definition
    • server-books: Contains schema and resolver for GraphQL backend. Used by server-main.
  • Each package defines its own package.json and can be built independently of the other packages.
  • The main entry point for the dependent packages (client-components and server-books) is set to dist/index.js. This way, packages which use them can use the transpiled version created by Babel and don’t need to worry about specific JS features used in the dependent packages (see the sketch below).
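
As an illustration, the relevant part of such a package’s package.json could look like the following; the exact build script may differ from what the starter kit uses:

{
  "name": "client-components",
  "version": "1.0.0",
  "main": "dist/index.js",
  "scripts": {
    "build": "babel src --out-dir dist"
  }
}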

Like the GraphQL Apollo Starter Kit (Lerna, Node.js), this starter kit is meant to be very basic to allow easy exploration of the source code.

GraphQL Apollo Starter Kit (Lerna, Node.js)

In many ways developing in Node.js is very fast and lightweight. However, it can also be bewildering at times, mostly because for everything there seems to be more than one established way of doing things. Moreover, the right way to do something can change within 3 to 6 months, so best practices documented in books and blogs quickly become obsolete.

Some days ago I wanted to create a simple sample project that shows how to develop a simple client/server application using Apollo and React. I thought this would be a simple undertaking. I imagined I would begin with a simple tutorial or ‘starter kit’ and quickly be up and running.

I found that things were not as easy as I imagined. There are plenty of examples for spinning up a simple React app, chiefly using create-react-app, and I also found the awesome apollo-universal-starter-kit. However, the former lacks insight into how to link this to a Node.js back-end server, and the latter is a bit complex and opinionated in some ways as to which frameworks to use.

This motivated me to develop a small, simple ‘starter kit’ for Apollo Client and Server in GraphQL with the following features:

  • Uses ES 2018 ‘vanilla’ JavaScript
  • Uses Lerna to define project with two packages (client and server)
  • Uses Node.js / Express as web server
  • Uses Babel to allow using ES 6 style modules for Node.js
  • Provides easy ways to start a development server and spin up a production instance

Here is a link to the project on GitHub:

graphql-apollo-starter-kit

The README provides the instructions to run and build the project.

A few things of note about the implementation:

This project is aimed at developers new to Node.js/React/Apollo. It is not implemented in the most elegant way possible but in a way that makes it easy to understand what is going on by browsing through the code. Feel free to check out the project and modify it!