Medooze Media Server Demo

I’ve recently done some research into WebRTC and specifically into how to stream media captured in the browser to a server. Initially I thought I could use something like Kinesis Video Streams and have AWS do the heavy lifting for me. Unfortunately, this turned out to be far more complicated than I had anticipated, so I started looking for other options.

That is when I came across media servers such as Medooze, OpenVidu, Janus and Jitsi. Medooze caught my attention since it appears to scale very well and offers a Node.js-based server.

It did take me some time to find a meaningful demo for Medooze and then to get it running. So I thought I would briefly document the steps here to get a demo for Medooze up and running (note this only works on Linux or Mac OS X):

  1. Head over to the media-server-client-js project and clone it.
  2. Run the following commands:
npm i
npm run-script dist
cd demo
npm i
  3. Get the IP address of your current machine:
ifconfig | grep "inet "
  4. Using this IP, launch the Medooze server in the demo directory:
node index.js [your IP]
  5. Head to a browser and open the URL https://[your IP]:8000 (for instance https://10.0.2.15:8000). Accept the SSL certificate for your localhost.

You should now see the demo page.

Clicking the buttons will create video streams.

The animation on top of the remote button is a video stream taken from a local canvas, and the animation/video to the right is the same stream relayed through the server.

So the client sends a stream to the server, the server sends that stream right back and the client then renders that stream.
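
In code, the client side of this loopback might look roughly as follows. This is a minimal sketch using standard WebRTC browser APIs; the element selectors are placeholders and the signalling with the Medooze server is omitted, so it is not the actual demo code:

// Minimal sketch of the client side using standard WebRTC APIs.
// Element selectors are placeholders; the offer/answer exchange with
// the Medooze server is omitted.
const canvas = document.querySelector('canvas');
const localStream = canvas.captureStream(30); // 30 fps MediaStream from the canvas

const pc = new RTCPeerConnection();

// Send the canvas video track to the server ...
localStream.getVideoTracks().forEach(track => pc.addTrack(track, localStream));

// ... and render the track the server relays back.
pc.ontrack = (event) => {
  document.querySelector('video#remote').srcObject = event.streams[0];
};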

During my local testing I encountered an issue when adding tracks with codecs (VP8, H264) that I have filed and link here for reference: Adding tracks with Codecs does not work

Advantages of Using React Hooks

I always had the feeling that React is just a bit too complex, a bit too ‘heavy’, to be a truly elegant solution to the problem of building complex user interfaces in JavaScript. Two issues, for instance, are the general project setup, exemplified by the need for create-react-app, and class-based components, with all their componentDidMount and this references.

While React Hooks are no solution to the first issue, they provide, in my mind, an elegant solution to the second; they provide a better way to do what we used to do with class-based components.

To illustrate this, I will first provide an implementation of a simple component using a class-based component and then refactor this into an implementation using React Hooks.

Here is the initial implementation using a class-based component:

import React, { Component } from 'react';
import axios from 'axios';

class User1 extends Component {
  constructor(props) {
    super(props);

    this.state = {
      userId: props.userId,
      userName: null,
      isLoading: false,
      error: null,
    };

    // Tracked on the instance rather than in state: calling setState
    // after the component has unmounted is a no-op and triggers a warning.
    this.unmounted = false;
  }

  getUser() {
    this.setState({ isLoading: true, error: null });
  
    axios.get(`https://jsonplaceholder.typicode.com/users/${this.state.userId}`)
      .then(result => {
        if (this.unmounted) {
          return;
        }
        this.setState({
          userName: result.data.name,
          isLoading: false
        })
      }
      )
      .catch(error => {
        if (this.unmounted) {
          return;
        }

        this.setState({
          error,
          isLoading: false
        })
      });
  }

  componentDidMount() {
    this.getUser();
  }

  componentDidUpdate() {
    // this.getUser();
  }

  componentWillUnmount() {
    this.unmounted = true;
  }

  render() {
    return (<>
      {this.state.isLoading ? <p>Loading ...</p> : <></>}
      {this.state.error ? <p>Cannot load user</p> : <></>}
      {!this.state.isLoading && !this.state.error ? <p>{this.state.userName}</p> : <></>}
      <button onClick={() => {
        const newUserId = this.state.userId + 1;
        // getUser runs as the setState callback, after the new userId is applied.
        this.setState({ userId: newUserId }, this.getUser);
      }} >Next</button>
    </>);
  }
}

As can be seen in the above code, this component requests data about a user from JSONPlaceholder and then displays this data. There is also a button that triggers loading of the next user.

Simple enough – but we still need a fair amount of code to handle this scenario in a robust manner, including cases where we start the request for a new user before the previous request has completed, or where a request only completes after the component has been unmounted.

A component with the exact same functionality can be implemented using React Hooks:

import { useState, useEffect } from 'react';
import axios from 'axios';

function User2(props) {
  const [userId, setUserId] = useState(props.userId);
  const [name, setName] = useState(null);
  const [isLoading, setIsLoading] = useState(true);
  const [isError, setIsError] = useState(false);

  useEffect(() => {
    let cancelled = false;
    const fetchData = async () => {
      setIsLoading(true);
      setIsError(false);
      let response;
      try {
        response = await axios.get(`https://jsonplaceholder.typicode.com/users/${userId}`);
      } catch (e) {
        // Only update state if the effect has not been cleaned up in the meantime.
        if (cancelled) return;
        setIsError(true);
        setIsLoading(false);
        return;
      }
      if (cancelled) return;
      setIsLoading(false);
      setName(response.data.name);
    };
    fetchData();
    return () => {
      cancelled = true;
    };
  }, [userId]);

  return (<>
    {isLoading ? <p>Loading ...</p> : <></>}
    {isError ? <p>Cannot load user</p> : <></>}
    {!isLoading && !isError && name ? <p>{name}</p> : <></>}
    <button onClick={() => setUserId(userId + 1)} >Next</button>
  </>);
}

Here we use useState to define a number of state variables and useEffect to deal with state updates. useState of course is essential in allowing us to define a functional component that also uses state. One major advantage in my mind of the hooks-based approach is that we don’t need to worry about using this and are in no danger of forgetting it.

useEffect replaces the functionality of componentDidMount and componentDidUpdate in class-based components. I think it allows reacting to state changes in a much more elegant way. Firstly, by linking it to the userId state, the useEffect handler we have defined will only trigger when userId has been updated, without us having to add any additional tests and logic around that. Secondly, it elegantly handles both the case where the component mounts and the case where the component state changes: by always triggering on component mount, and subsequently on changes to userId. Thirdly, by returning a function as the result of the useEffect handler …

    return () => {
      cancelled = true;
    };

… we have a very easy way to deal with the component unmounting when a request is in flight.

However, the real power of React Hooks, in my mind, lies in their composability. The following example implements the same features using the custom open source hook use-data-api:

import { useState, useEffect } from 'react';
import useDataApi from 'use-data-api';

function User3(props) {
  const [userId, setUserId] = useState(props.userId);
  const [{ data, isLoading, isError }, performFetch] = useDataApi(null, null);

  useEffect(() => {
    performFetch(`https://jsonplaceholder.typicode.com/users/${userId}`);
  }, [userId, performFetch]);

  return (<>
    {isLoading ? <p>Loading ...</p> : <></>}
    {isError ? <p>Cannot load user</p> : <></>}
    {!isLoading && !isError && data ? <p>{data.name}</p> : <></>}
    <button onClick={() => setUserId(userId + 1)} >Next</button>
  </>);
}

Above we use the custom hook useDataApi, which takes care of the details of making requests to an API (use-data-api/blob/master/src/index.js).

As can be seen, this last example is much shorter and easier to understand than the previous ones. This shows the biggest advantage of React Hooks: the ability to extract complex behaviour into external functions that can easily be reused within a project and across projects.

To summarise, here are all the advantages of using React Hooks discussed above:

  • Ability to create composite Hooks defining cross-cutting functionality concerns in an application.
  • Enables writing functional components with state (no more this).
  • useEffect provides a more concise and elegant way to handle component mount, update and unmount events.

Here is the complete source code of the examples used in this post:

react-hooks-tutorial

Deploy Lambda using SAM and Buildkite

One of the many good things about Lambdas on AWS is that they are quite easy to deploy. Simply speaking, all that one requires is a zip file of an application that can then be uploaded using an API call.
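
For illustration, here is a minimal sketch of such an upload using the AWS SDK for JavaScript; the function name and zip path are placeholders:

// Sketch: deploy a zipped function with a single API call (AWS SDK v2).
// 'my-function' and './app.zip' are placeholders.
const AWS = require('aws-sdk');
const fs = require('fs');

const lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.updateFunctionCode({
  FunctionName: 'my-function',
  ZipFile: fs.readFileSync('./app.zip')
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Deployed', data.FunctionName);
});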

Things unfortunately quickly become more complicated, especially if the Lambda depends on other AWS resources, as they often do. Thankfully there is a solution for this in the form of the AWS Serverless Application Model (SAM). AWS SAM makes it possible to specify Lambdas along with their resources and dependencies in a simple and coherent way.

AWS being AWS, there are plenty of examples of deploying Lambdas defined using SAM with AWS tooling such as CodePipeline and CodeBuild. In this article, I will show that it is just as easy to deploy Lambdas using Buildkite.

For those wanting to skip straight to the code, here the link to the GitHub repo with an example project:

lambda-sam-buildkite

This example uses the Buildkite Docker Compose Plugin that leverages a Dockerfile, which provides the AWS SAM CLI:

FROM python:alpine
# Install awscli and aws-sam-cli
RUN apk update && \
    apk upgrade && \
    apk add bash && \
    apk add --no-cache --virtual build-deps build-base gcc && \
    pip install awscli && \
    pip install aws-sam-cli && \
    apk del build-deps
RUN mkdir /app
WORKDIR /app

The Buildkite pipeline ensures the correct environment variables are passed to the Docker container so that the AWS CLI can authenticate with AWS:

steps:
  - label: SAM deploy
    command: ".buildkite/deploy.sh"
    plugins:
      - docker-compose#v2.1.0:
          run: app
          env:
            - AWS_DEFAULT_REGION
            - AWS_ACCESS_KEY_ID
            - AWS_SECRET_ACCESS_KEY

The script that is called in the pipeline simply calls the AWS SAM CLI to package the CloudFormation template and then deploys it:

#!/bin/bash -e

# Create packaged template and upload to S3
sam package --template-file template.yml \
            --s3-bucket sam-buildkite-deployment-test \
            --output-template-file packaged.yml

# Apply CloudFormation template
sam deploy --template-file ./packaged.yml \
           --stack-name sam-buildkite-deployment-test \
           --capabilities CAPABILITY_IAM

And that’s it. This pipeline can easily be extended to deploy to different environments, such as development, staging and production, and to run unit and integration tests.

Setting up Continuous Deployment with Lerna and Buildkite

Buildkite is a great tool for running multi-step build and deployment pipelines. Lerna is a tool for managing multiple JavaScript packages within one git repository.

Given that both Lerna and Buildkite are quite popular, it is surprising how difficult it is to set up a basic build and deployment with these tools.

It is very easy to configure a deployment pipeline in Buildkite that will run every time a new commit has been made to a git branch. However, using Lerna, we want to build only those packages in a repository that have actually changed, rather than building all packages in the repository.

Lerna provides some built-in tooling for this, chiefly the ls command, which provides a list of all the packages defined in the monorepo. Using the --since filter with this command, we can easily determine all packages that have changed since the last commit as follows:

lerna ls --json --since HEAD^

Where HEAD^ is the git reference to the commit preceding HEAD. The --json flag provides us with an output that is a bit easier to parse, for instance using jq.

However, in a CD environment, we usually build not only when a new merge to master has occurred but also when changes have been pushed to a branch. In that instance, we are not interested in the changes since the last commit but in all the changes made on the branch in comparison to the current master branch.

In this instance, the lerna ls command with a --since filter can help us compare the current branch with master:

lerna ls --json --since refs/heads/master

Internally, Lerna would run a command such as the following:

git --no-pager diff --name-only refs/heads/master..refs/remotes/origin/$BUILDKITE_BRANCH

This diff goes both ways, so a file changed in a package on master only or on the branch only will cause that package to be listed among the packages to be built. However, we are only interested in packages that have changed on the branch. This can be somewhat assured by running a git pull before the lerna ls command:

git pull --no-edit origin master
lerna ls --json --since refs/heads/master

Unfortunately, Buildkite by default does something of a ‘lazy clone’ of the repository: it only ensures that the branch currently being built is checked out at the latest commit, while other branches, including master, may be cached from previous builds and stuck on an old commit. This prevents the above approach for building branches from working. Thankfully, there is an environment variable we can use to force Buildkite to fetch the latest commits for all branches: BUILDKITE_CLEAN_CHECKOUT=true.

Having this list of changed packages, we can then trigger pipelines specific to building each changed package. This can be accomplished using a trigger step:

- trigger: "[name of pipeline for package x]"
  label: ":rocket: Trigger: Build for package x"
  async: false
  build:
    message: "${BUILDKITE_MESSAGE}"
    commit: "${BUILDKITE_COMMIT}"
    branch: "${BUILDKITE_BRANCH}"
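
These trigger steps can also be generated dynamically from the lerna ls output and uploaded via buildkite-agent pipeline upload. A rough sketch follows; the per-package pipeline naming is an assumption, not something Lerna or Buildkite prescribe:

// Sketch: emit Buildkite trigger steps for every package changed relative to master.
// Assumes each package has a pipeline named `build-<package>` (a naming convention).
const { execSync } = require('child_process');

const changed = JSON.parse(
  execSync('lerna ls --json --since refs/heads/master').toString()
);

const steps = changed.map(pkg => ({
  trigger: `build-${pkg.name}`,
  label: `:rocket: Trigger: Build for ${pkg.name}`,
  async: false,
  build: {
    message: process.env.BUILDKITE_MESSAGE,
    commit: process.env.BUILDKITE_COMMIT,
    branch: process.env.BUILDKITE_BRANCH
  }
}));

// Pipe the output into `buildkite-agent pipeline upload`.
console.log(JSON.stringify({ steps }, null, 2));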

Another way to go about deploying with Lerna might be to use lerna publish to push npm packages to an npm registry and then trigger builds from there. I haven’t tested this approach, and I think it would require a private npm registry, which the way outlined in this article does not.

If anyone has a more elegant way to go about orchestrating the builds of packages in a Lerna repository, please let everyone know in the comments.

Testing Apollo Client/Server Applications

Following up on the GraphQL, Node.JS and React Monorepo Starter Kit and GraphQL Apollo Starter Kit (Lerna, Node.js), I have now created an extended example which includes facilities to run unit and integration tests using Jest.

The code can be found on GitHub:

apollo-client-server-tests

The following tests are included:

React Component Test

This test asserts that a React component is rendered correctly. Backend data from GraphQL is supplied via a mock [packages/client-components/src/Books/Books.test.js]:

import React from 'react';
import Books from './Books';

import renderer from 'react-test-renderer'
import { MockedProvider } from 'react-apollo/test-utils';

import GET_BOOK_TITLES from './graphql/queries/booktitles';

import wait from 'waait';

const mocks = [
  {
    request: {
      query: GET_BOOK_TITLES
    },
    result: {
      data: {
        books: [
          {
            title: 'Harry Potter and the Chamber of Secrets',
            author: 'J.K. Rowling',
          },
          {
            title: 'Jurassic Park',
            author: 'Michael Crichton',
          }
        ]
      },
    },
  },
];

it('Renders one book', async () => {

  const component = renderer.create(<MockedProvider mocks={mocks} addTypename={false}>
    <Books />
  </MockedProvider>);
  expect(component.toJSON()).toEqual('Loading...');

  // to wait for event loop to complete - after which component should be loaded
  await wait(0);

  const pre = component.root.findByType('pre');
  expect(pre.children).toContain('Harry Potter and the Chamber of Secrets');

});

GraphQL Schema Test

Based on the article Extensive GraphQL Testing in 3 minutes, this test verifies that the GraphQL schema is defined correctly for running the relevant queries [packages/server-books/src/schema/index.test.js]:

import {
    makeExecutableSchema,
    addMockFunctionsToSchema,
    mockServer
} from 'graphql-tools';

import { graphql } from 'graphql';

import booksSchema from './index';

const titleTestCase = {
    id: 'Query Title',
    query: `
      query {
        books {
            title
        }
      }
    `,
    variables: {},
    context: {},
    expected: { data: { books: [{ title: 'Title'} , { title: 'Title' }] } }
};

const cases = [titleTestCase];

describe('Schema', () => {
    const typeDefs = booksSchema;
    const mockSchema = makeExecutableSchema({ typeDefs });

    addMockFunctionsToSchema({
        schema: mockSchema,
        mocks: {
            Boolean: () => false,
            ID: () => '1',
            Int: () => 1,
            Float: () => 1.1,
            String: () => 'Title',
        }
    });

    test('Has valid type definitions', async () => {
        expect(async () => {
            const MockServer = mockServer(typeDefs);

            await MockServer.query(`{ __schema { types { name } } }`);
        }).not.toThrow();
    });

    cases.forEach(obj => {
        const { id, query, variables, context: ctx, expected } = obj;

        test(`Testing Query: ${id}`, async () => {
            return await expect(
                graphql(mockSchema, query, null, { ctx }, variables)
            ).resolves.toEqual(expected);
        });
    });

});

GraphQL Schema and Resolver Test

Extending the previous test as suggested by the article Effective Testing a GraphQL Server, this test affirms that the GraphQL schema and resolvers are working together correctly [packages/server-books/tests/Books.test.js]:

import { makeExecutableSchema } from 'graphql-tools'
import { graphql } from 'graphql'
import resolvers from '../src/resolvers'
import typeDefs from '../src/schema'

const titleTestCase = {
    id: 'Query Title',
    query: `
      query {
        books {
            title
        }
      }
    `,
    variables: {},
    context: {},
    expected: { data: { books: [{ title: 'Harry Potter and the Chamber of Secrets' }, { title: 'Jurassic Park' }] } }
};

describe('Test Cases', () => {

    const cases = [titleTestCase]
    const schema = makeExecutableSchema({ typeDefs: typeDefs, resolvers: { Query: resolvers } })

    cases.forEach(obj => {
        const { id, query, variables, context, expected } = obj

        test(`query: ${id}`, async () => {
            const result = await graphql(schema, query, null, context, variables)
            return expect(result).toEqual(expected)
        })
    })
})

Conclusion

As with the previous two articles on getting started with Apollo, the code developed here again aims to be as minimalistic as possible. It shows how Apollo client/server code may be tested in three different ways. Together, these approaches are quite exhaustive, even if the presented tests themselves are simplistic.

The only test missing is an integration test that exercises the React component against a live Apollo server back-end. I am not sure if it is possible to run an ‘embedded’ Apollo server in the browser; running such a server for testing the React component would be a good addition.

GraphQL, Node.JS and React Monorepo Starter Kit

Following the GraphQL Apollo Starter Kit (Lerna, Node.js), I wanted to dig deeper into developing a monorepo for a GraphQL/React client-server application.

Unfortunately, things are not as easy as I thought at first. Chiefly, the create-react-app template does not appear to work very well with monorepos and local dependencies on other packages.

That’s why I put together a small, simple starter template for developing modular client-server applications using React, GraphQL and Node.js. Here is the code on GitHub:

nodejs-react-monorepo-starter-kit

Some things to note:

  • There are four packages in the project
    • client-main: The React client, based on create-react-app
    • client-components: Contains a definition of the component app. Used by client-main
    • server-main: The Node.js server definition
    • server-books: Contains schema and resolver for GraphQL backend. Used by server-main.
  • Each package defines its own package.json and can be built independently of the other packages.
  • The main entry point for the dependent packages (client-components and server-books) is set to dist/index.js. This way, packages which use them can use the transpiled version created by Babel and don’t need to worry about the specific JS features used in the dependent packages.

Like GraphQL Apollo Starter Kit (Lerna, Node.js) this starter kit is meant to be very basic to allow easy exploration of the source code.

GraphQL Apollo Starter Kit (Lerna, Node.js)

In many ways, developing in Node.js is very fast and lightweight. However, it can also be bewildering at times, chiefly because for everything there seems to be more than one established way of doing things. Moreover, the right way to do something can change within 3 to 6 months, so best practices documented in books and blogs quickly become obsolete.

Some days ago I wanted to create a small sample project that shows how to develop a simple client/server application using Apollo and React. I thought this would be a simple undertaking. I imagined I would begin with a simple tutorial or ‘starter kit’ and quickly be up and running.

I found that things were not as easy as I imagined. There are plenty of examples of spinning up a simple React app, chiefly using create-react-app, and I also found the awesome apollo-universal-starter-kit. However, the former lacks insight into how to link the app to a Node.js back-end server, and the latter is a bit complex and somewhat opinionated as to which frameworks to use.

This motivated me to develop a small, simple ‘starter kit’ for Apollo Client and Server in GraphQL with the following features:

  • Uses ES 2018 ‘vanilla’ JavaScript
  • Uses Lerna to define project with two packages (client and server)
  • Uses Node.js / Express as web server
  • Uses Babel to allow using ES 6 style modules for Node.js
  • Provides easy ways to start a development server and spin up a production instance

Here a link to the project on GitHub:

graphql-apollo-starter-kit

The README provides the instructions to run and build the project.

A few things of note about the implementation:

This project is aimed at developers new to Node.js/React/Apollo. It is not implemented in the most elegant way possible but in a way that makes it easy to understand what is going on by browsing through the code. Feel free to check out the project and modify it!

Mastering Modular JavaScript

Today I was having a look around for best practices for defining JavaScript modules. In that search, I came across the book Mastering Modular JavaScript, which offers a good selection of best practices for JS module development. All chapters are freely available on GitHub.

For a more basic introduction to modules, see the chapter JavaScript Modules from the book Practical Modern JavaScript.

Everything new in JavaScript since ES6

It is no secret that things in the tech world change rather rapidly, and it is difficult to keep track of everything at the same time. For instance, I worked with JavaScript quite extensively some years ago but have recently been more involved with other tech stacks. Thus I have only followed the developments in the JavaScript world sporadically, and I was quite surprised by how many things have changed since the days of JavaScript: The Good Parts.

Since things had not changed much for a long time before ES6, I imagine I am not the only one who could benefit from a little refresher on everything that has changed since. Thus I have compiled some of the changes I think are most important for ordinary development work. The idea is to provide a quick overview rather than explain every feature in detail – assuming that more information on any of the changes is readily available on the web.

This is not a complete list of everything that has changed. For instance, I included promises but omitted changes made to the way regular expressions work in ECMAScript 2018; since we are likely to come across promises many times per day whereas the changes to regular expressions only affect us in particular edge cases.

ECMAScript 6 / ECMAScript 2015

Variable Scoping

  • let x = 1;: To define block-scoped variables

Arrow Functions

  • x => x + 1: Concise closure syntax
  • x => { return x + 1; }: Concise closure syntax with a statement body
  • this: within arrow functions refers to the this of the enclosing scope (rather than to the function itself) – see the example below
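
For illustration, a minimal sketch of the lexical this behaviour (the Counter example is hypothetical):

function Counter() {
  this.count = 0;
  // The arrow function has no own `this`; it sees the `this` of the
  // enclosing Counter constructor, i.e. the instance being created.
  setInterval(() => this.count++, 1000);
}

new Counter(); // count increases every second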

Promises

Promises for wrapping asynchronous code.


let p = new Promise((resolve, reject) => {
  resolve("hello");
});

p.then((msg) => console.log(msg));

Executing asynchronous operations in parallel

let parallelOperation = Promise.all([p1, p2]);
parallelOperation.then((data) => {let [res1, res2] = data; } );

Default Parameters and Spread Operator

  • function (x = 1, y = 2): Default values for function parameters
  • function (x, y, ...arr) {}: Capturing all remaining arguments in array for variadic functions
  • var newarr = [ 1, 2, ...oldarr]: ‘Spreading’ of elements from an array as literal elements
  • multiply(1, 2, ...arr): Spreading of elements from an array as individual function parameters

Multiline Strings and Templates

  • `My String⏎NewLine`: Multi-line string literals
  • `Hello ${person.name}`: Intuitive string interpolation
  • `const proc = sh`kill -9 ${pid}`;`: Tagged template literals for parsing custom languages. The example would result in calling the function sh with the parameters (['kill -9 ', ''], pid) – see the sketch below
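
For illustration, a minimal sketch of such a tag function; the naive joining is an assumption, and a real sh would need to escape its arguments:

// Sketch of a tag function: `strings` holds the literal fragments,
// the remaining arguments hold the interpolated values.
function sh(strings, ...values) {
  // For sh`kill -9 ${pid}`: strings = ['kill -9 ', ''], values = [1234]
  return strings[0] + values.join('');
}

const pid = 1234;
const proc = sh`kill -9 ${pid}`; // 'kill -9 1234'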

Object Properties

  • let obj = { x, y }: Property shorthand for defining let obj = { x: x, y: y }
  • obj = { func1 (x, y) { } }: Methods allowed as object properties

Destructuring Assignment

  • var [ x, y, z ] = list: Destructuring arrays into individual variables by assignment.
  • var [ x=0, y=0 ] = list: Default values when destructuring arrays.
  • function( [ x, y ] ): Destructuring arrays in function calls.
  • var { x, y, z } = getPoint(): Destructuring objects into individual variables by assignment.
  • var { name: name, address: { street: street }, age: age } = getData(): Destructuring objects into individual variables by assignment, including nested properties.
  • var { x = 0, y = 0 } = p: Default values when destructuring objects.
  • function( { x, y } ): Destructuring objects in function calls.
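
A small combined example of the above (getData is a hypothetical helper used only for illustration):

// Hypothetical data source
const getData = () => ({ name: 'Ada', address: { street: 'Main St' }, age: 36 });

// Destructure nested properties into individual variables
const { name, address: { street }, age } = getData();
// name === 'Ada', street === 'Main St', age === 36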

Modularity

  • export function add(x,y) { return x + y; }: Exporting functions
  • export var universe = 42;: Exporting variables
  • import { add, universe } from 'lib/module';: Importing functions and variables
  • import * as module from 'lib/module': Wildcard import
  • export default (x, y) => x + y;: Defining default export
  • import add from 'lib/add': Importing default export
  • import add, { universe } from 'lib/add': Importing default export and additional exports
  • export * from 'lib/module';: Reexporting from other modules

Classes

class keyword for constructing simple classes.

class Point {

  constructor (x, y) {
     this.x = x;
     this.y = y;
  }

  move (deltax, deltay) {
     return new Point(this.x + deltax, this.y + deltay);
  }

}

extends keyword for extending classes:


class Car extends Vehicle {

  constructor (name) {
     super(name);
  }

}

static keyword for static methods


class Math {

  static add(x, y) {
    return x + y;
  }

}

get and set keywords for decorated property access.


class Rectangle {

  constructor (x, y) {
    this.x = x;
    this.y = y;
  }

  get area() { return this.x * this.y; }

}

...

new Rectangle(2, 2).area === 4;

Iteration Through Object Values

  • for (let value of arr) { }: for … of loop for iterating through the values of an iterable.
  • Also note that objects can define their own iterators and generators – see the example below.
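
For illustration, a minimal custom iterable defined with a generator:

// An object that defines its own iterator via a generator function
const range = {
  from: 1,
  to: 3,
  *[Symbol.iterator]() {
    for (let i = this.from; i <= this.to; i++) {
      yield i;
    }
  }
};

for (let value of range) {
  console.log(value); // 1, 2, 3
}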

Data Structures

  • new Set(): For sets
  • new Map(): For maps
  • new WeakSet(): For sets whose items can be garbage collected when no longer referenced elsewhere
  • new WeakMap(): For maps whose keys can be garbage collected when no longer referenced elsewhere
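
Basic usage, for illustration:

const ages = new Map();
ages.set('Ada', 36);
ages.get('Ada');   // 36
ages.has('Grace'); // false

const ids = new Set([1, 2, 2, 3]);
ids.size;          // 3 – duplicates are dropped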

Symbols

  • Symbol(): For creating an object with a unique identity.
  • Symbol("note"): For creating a unique object with a descriptor.
  • Note: Symbol("note") !== Symbol("note")

ECMAScript 2016

  • **: Exponentiation operator
  • Array.prototype.includes: Like indexOf but with a true/false result and support for NaN
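
Both in action; note the NaN case that distinguishes includes from indexOf:

2 ** 10;                // 1024

[1, NaN].includes(NaN); // true
[1, NaN].indexOf(NaN);  // -1 (indexOf cannot find NaN)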

ECMAScript 2017

async/await for more expressive asynchronous operations

async function add1(x) {
  return x + 1;
}

async function add2(x) {
  let y = await add1(x);
  return await add1(y);
}

add2(5).then(console.log);

ECMAScript 2018

Rest/Spread Operators for Object Properties

Collect all properties that were not destructured from an object into another object:


var person = { firstName: "Paul", lastName: "Hendricks", password: "secret"};
var {password, ...sanitisedPerson } = person;
// sanitisedPerson = {firstName: "Paul", lastName: "Hendricks"}

Spread object properties

let details = { firstName: "Paul", lastName: "Hendricks" };

let user = { ...details, password: "secret" };

Finally for Promises

The finally callback is guaranteed to be executed whether the promise succeeds or fails.


async function sayHello() {
  console.log("hello");
}

sayHello().then(() => console.log("success"))
  .catch((e) => console.log(e))
  .finally(() => console.log("runs always"));

for await Loop

A special for loop that resolves promises before every iteration.


const promises = [
  new Promise(resolve => resolve(1) ),
  new Promise(resolve => resolve(2) )
];

async function runAll() {
  for await (const p of promises) {
    console.log(p);
  }
}

runAll();


Test if Firebase is Initialized on Node.JS / Lambda

Firebase is built on the assumption that it will only be initialized once.

This can sometimes be a problem in Node.js applications, especially when they run as part of an AWS Lambda function.

This can lead to errors such as the following:

Firebase App named '[DEFAULT]' already exists.

Thankfully, there is an easy way to check whether Firebase has already been initialized. Just wrap your call to firebase.initializeApp in the following:

if (firebase.apps.length === 0) {
    firebase.initializeApp({
        serviceAccount: {
            ...
        },
        databaseURL: ...
    });
}
