Mastering Modular JavaScript

Today I was having a look around for best practices for defining JavaScript modules. In that search, I came across the book Mastering Modular JavaScript. This book offers a good selection of best practices for JS module development. Also, all chapters are freely available on GitHub:

For a more basic introduction to modules, see the chapter JavaScript Modules from the book Practical Modern JavaScript.

Everything new in JavaScript since ES6

It is no secret that things in the tech world change rather rapidly, and it is difficult to keep track of everything at the same time. For instance, I worked with JavaScript quite extensively some years ago but have recently been more involved with other tech stacks. Thus I have only followed developments in the JavaScript world sporadically and was quite surprised by how many things have changed since the days of JavaScript: The Good Parts.

Since things had not changed much for a long time before ES6, I imagine I am not the only one who could benefit from a little refresher on everything that has changed since then. Thus I have compiled some of the changes I think are most important for ordinary development work. The idea is to provide a quick overview rather than explain every feature in detail – assuming that more information on any of the changes is readily available on the web.

This is not a complete list of everything that has changed. For instance, I included promises but omitted the changes made to regular expressions in ECMAScript 2018, since we are likely to come across promises many times per day, whereas the regular expression changes only affect us in particular edge cases.

ECMAScript 6 / ECMAScript 2015

Variable Scoping

  • let x = 1;: To define block-scoped variables
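
A quick sketch of the difference to function-scoped var:

for (let i = 0; i < 3; i++) {
  // i is only visible inside this block
}
// console.log(i); // would throw a ReferenceError: i is not defined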

Arrow Functions

  • x => x + 1: Concise closure syntax (expression body)
  • x => { return x + 1; }: Concise closure syntax (block body)
  • this: within arrow functions refers to the enclosing lexical scope (rather than to the function itself)
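
Because arrow functions take this from the surrounding scope, callbacks can refer to the enclosing object directly – a quick sketch:

const counter = {
  count: 0,
  start() {
    setInterval(() => {
      this.count++; // `this` refers to counter, not to the callback function
    }, 1000);
  }
};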

Promises

Promises for wrapping asynchronous code.


let p = new Promise((resolve, reject) => {
  resolve("hello");
});

p.then((msg) => console.log(msg));

Executing asynchronous operations in parallel

let parallelOperation = Promise.all([p1, p2]);
parallelOperation.then((data) => { let [res1, res2] = data; });

Default Parameters and Spread Operator

  • function (x = 1, y = 2): Default values for function parameters
  • function (x, y, ...arr) {}: Capturing all remaining arguments in an array for variadic functions
  • var newarr = [ 1, 2, ...oldarr]: ‘Spreading’ of elements from an array as literal elements
  • multiply(1, 2, ...arr): Spreading of elements from an array as individual function parameters
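
A small combined sketch (multiply is just an illustrative function):

function multiply(factor = 1, ...numbers) {  // default value and rest parameter
  return numbers.map((n) => n * factor);
}

const oldarr = [2, 3];
const newarr = [1, ...oldarr];               // [1, 2, 3]
multiply(10, ...newarr);                     // [10, 20, 30]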

Multiline Strings and Templates

  • `My String⏎NewLine`: Multi-line string literals
  • `Hello ${person.name}`: Intuitive string interpolation
  • const proc = sh`kill -9 ${pid}`;: Tagged template literals for parsing custom languages. The example would result in calling the function sh with the parameters (['kill -9 ', ''], pid)
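
A minimal sketch of such a tag function (sh here is a hypothetical helper, not a built-in):

function sh(strings, ...values) {
  // strings: ['kill -9 ', ''], values: [1234]
  return { template: strings.join('?'), args: values };
}

const pid = 1234;
const proc = sh`kill -9 ${pid}`; // { template: 'kill -9 ?', args: [1234] }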

Object Properties

  • let obj = { x, y }: Property shorthand for defining let obj = { x: x, y: y }
  • obj = { func1 (x, y) { } }: Methods allowed as object properties
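
A short example combining both:

const x = 1, y = 2;

const point = {
  x,                 // shorthand for x: x
  y,
  toString() {       // method shorthand
    return `(${this.x}, ${this.y})`;
  }
};

point.toString(); // "(1, 2)"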

Destructuring Assignment

  • var [ x, y, z ] = list: Destructuring arrays into individual variables by assignment.
  • var [ x=0, y=0 ] = list: Default values when destructuring arrays.
  • function( [ x, y ] ): Destructuring arrays in function parameters.
  • var { x, y, z } = getPoint(): Destructuring objects into individual variables by assignment.
  • var { name: name, address: { street: street }, age: age } = getData(): Destructuring objects into individual variables by assignment, including nested properties.
  • var { x = 0, y = 0 } = getPoint(): Default values when destructuring objects.
  • function( { x, y } ): Destructuring objects in function parameters.
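
A few quick sketches (getData is a hypothetical function returning a nested object):

const [a, b = 0] = [1, 2];                   // a = 1, b = 2

const { name, address: { street } } = getData();

function printPoint({ x, y }) {              // destructuring in the parameter list
  console.log(`${x}/${y}`);
}

printPoint({ x: 1, y: 2 });                  // "1/2"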

Modularity

  • export function add(x,y) { return x + y; }: Exporting functions
  • export var universe = 42;: Exporting variables
  • import { add, universe } from 'lib/module';: Importing functions and variables
  • import * as module from 'lib/module': Wildcard import
  • export default (x, y) => x + y;: Defining default export
  • import add from 'lib/add': Importing default export
  • import add, { universe } from 'lib/add': Importing default export and additional exports
  • export * from 'lib/module';: Reexporting from other modules

Classes

class keyword for constructing simple classes.

class Point {

  constructor (x, y) {
    this.x = x;
    this.y = y;
  }

  move (deltax, deltay) {
    return new Point(this.x + deltax, this.y + deltay);
  }

}

extends keyword for extending classes:


class Car extends Vehicle {

  constructor (name) {
    super(name);
  }

}

static keyword for static methods:


class Math {

  static add(x, y) {
    return x + y;
  }

}

get and set keywords for defining property getters and setters:


class Rectangle {

  constructor (x, y) {
    this.x = x;
    this.y = y;
  }

  get area() { return this.x * this.y; }

  set width(w) { this.x = w; }

}

...

new Rectangle(2, 2).area === 4;

Iteration Through Object Values

  • for (let value of arr) { }: for … of loop for iterating over the values of iterable objects
  • Also note that objects can define their own iterators and generators, as sketched below
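
For example, a generator makes a sequence of values iterable with for … of:

function* range(start, end) {
  for (let i = start; i < end; i++) {
    yield i;
  }
}

for (const value of range(1, 4)) {
  console.log(value); // 1, 2, 3
}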

Data Structures

  • new Set(): For sets
  • new Map(): For maps
  • new WeakSet(): For sets whose items are held weakly and can be garbage collected once no longer referenced elsewhere
  • new WeakMap(): For maps whose keys are held weakly and can be garbage collected once no longer referenced elsewhere
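
Basic usage looks like this:

const set = new Set([1, 2, 2, 3]);  // duplicates are dropped: Set {1, 2, 3}
set.has(2);                         // true

const map = new Map([["a", 1]]);
map.get("a");                       // 1

let key = {};
const cache = new WeakMap();        // keys must be objects
cache.set(key, "metadata");
key = null;                         // the entry may now be garbage collected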

Symbols

  • Symbol(): For creating an object with a unique identity.
  • Symbol("note"): For creating a unique symbol with a description.
  • Note: Symbol("note") !== Symbol("note")
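
Symbols are often used as unique property keys, for example:

const id = Symbol("id");
const user = { [id]: 42, name: "Ada" }; // symbol-keyed property

user[id];                               // 42
Object.keys(user);                      // ["name"] – symbol keys are not enumerated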

ECMAScript 2016

  • **: Exponentiation operator
  • Array.prototype.includes: Like indexOf but with a true/false result and support for NaN
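
For example:

2 ** 10;                    // 1024

[1, 2, NaN].includes(NaN);  // true
[1, 2, NaN].indexOf(NaN);   // -1 – indexOf cannot find NaN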

ECMAScript 2017

async/await for more expressive asynchronous operations

async function add1(x) {
  return x + 1;
}

async function add2(x) {
  let y = await add1(x);
  return await add1(y);
}

add2(5).then(console.log);

ECMAScript 2018

Rest/Spread Operators for Object Properties

Collect all remaining (not destructured) properties of an object into another object:


var person = { firstName: "Paul", lastName: "Hendricks", password: "secret"};
var {password, ...sanitisedPerson } = person;
// sanitisedPerson = {firstName: "Paul", lastName: "Hendricks"}

Spread object properties

let details = { firstName: "Paul", lastName: "Hendricks" };

let user = { ...details, password: "secret" };

Finally for Promises

The finally callback is guaranteed to be executed whether the promise succeeds or fails.


async function sayHello() {
  console.log("hello");
}

sayHello().then(() => console.log("success"))
  .catch((e) => console.log(e))
  .finally(() => console.log("runs always"));

for await Loop

A special for loop that awaits each promise before the loop body runs.


const promises = [
  new Promise(resolve => resolve(1) ),
  new Promise(resolve => resolve(2) )
];

async function runAll() {
  for await (const p of promises) {
    console.log(p);
  }
}

runAll();


Designing Micro Services the Right Way

For a few years now, micro services have been all the rage when it comes to the architecture of large applications. Personally, I have always been a bit puzzled about what was so new and great about micro services in comparison to what came before them: Service-Oriented Architecture (SOA). Indeed, SOA itself is often portrayed as a frightful antipattern from our past, to be mentioned in the same breath as CORBA.

To me, the move from CORBA et al. to SOA to micro services has not been one of disruptive innovation but one of continuous learning, chiefly in relation to the technologies we employ. It makes a world of difference whether you are setting up a big old monolithic application from the past or an Express server in Node.js (which is also a ‘monolith’ in its own right, just a smaller one – hopefully).

The core problem we are trying to solve has not changed: distributed computing. Unfortunately, one of the first things we learnt about distributed computing seems to have been given less attention recently: that it is best avoided wherever possible. Why? Because it introduces great complexity into an application and can result in many development and operational problems (see YouTube: 10 Tips for failing badly at Microservices by David Schmitz).

One of the most problematic areas is data or persisted state. If the same piece of data needs to be used by multiple services, things become very complicated since it is often required to keep data in sync between multiple places (see YouTube: Managing Data in Microservices by Randy Shoup).

Recently I came across a presentation which I think outlines a very nice approach to dealing with micro services – one that relies heavily on code generation, enforcing common standards and automated testing. Furthermore, the presented architecture primarily uses one language, which I think is a very good approach. I highly recommend this presentation to anyone interested in a way to deal with the complexity of micro services:

YouTube: Design Microservice Architectures the Right Way by Michael Bryzek

What I personally took away from this:

  • Focus on testability. Allow for fast unit and integration tests and even testing with production data. Only code that is easy to test and heavily tested allows for fast and bold development. The organisation in the presentation, for instance, updates all their dependencies once per week – automatically, since they have full confidence that their tests will pick up any issues.
  • Utilise code generation. The sad truth of micro services is that we will have to duplicate things, such as commonly used entities – especially if multiple programming languages are involved. Code generation provides an elegant way to deal with this unfortunate situation.
  • Enforce common standards. Although micro services are intended to reduce complexity by dividing up a complex system into small manageable chunks, they can actually result in increased overall complexity, especially if many different technologies are employed. In that case, enforcing strict common standards can help in keeping things simple for developers and ops.
  • Embrace events. Triggering services into action by using events rather than direct API calls can help in making a distributed system more predictable and easier to debug.

I think this presentation provides an excellent overview of best practices for micro services, and I couldn’t think of anything to criticise or add; it represents the best way of building micro services that I am aware of as of now.

I do think, however, there is one important additional issue to consider: a micro service built according to the best principles and standards will still be a liability if it was not necessary to build it as a micro service to begin with. This is not so much a question of whether we should use micro services (in any organisation of a certain size they are an imperative) but of how many.

One of the key drivers of success for micro services within a larger system is getting the boundaries of the services right (see bounded context). I think we should aim to make micro services as large as possible, so we have as few of them as possible, while taking into consideration the constraints of team size, data and complexity:

  • Team: It might sound like heresy, but I do think that one ‘physical’ micro service could be maintained by up to three to five teams (and not just one team per micro service). That, of course, would be the upper limit; there is nothing wrong with having just one team per micro service. It really depends on what service you are building.
  • Data: And some more heresy: I think that for data it is often better to scale up rather than scale out. Why? Data is all about state, and being able to keep state within the physical confines of one system leads to much improved performance and reduced complexity. Thus we should think about the database management system we will be using for our service and what is the maximum we can scale it up to. Then take 20% of that and ask yourself whether your data will stay within that limit. If not, it might be prudent to break the micro service apart or perhaps change the DBMS.
  • Complexity: The main drivers of complexity in software are code size, inter-dependencies and heterogeneity. If our micro service contains large amounts of code with many intricate inter-dependencies, tackling many different problems in different ways, it may be advisable to think about breaking the service up.

As mentioned, distributed systems are inherently more complex than non-distributed ones. Therefore, if we have larger micro services, our system becomes less distributed overall and we hopefully have less accidental complexity to deal with.

Thus, to sum things up, we must be aware of the dangers of micro services, deploy tooling strategically as outlined in the presentation, and be mindful of how we can build our system in a way that avoids the complexities of distributed systems as much as possible.

Move git repository

Sometimes it is necessary to move the location of a git repository; be it from one GitHub repo to another or moving a repo from GitHub to Bitbucket. This can be surprisingly tricky since one needs to make sure to include all branches, tags, etc. when copying the data.

Thankfully git magic allows doing this fairly easily. Just run the following commands:

git clone --mirror <old-repo-url>
cd <repo-name>
git remote add new-origin <new-repo-url>
git push new-origin --mirror

That should be it!

Note that if you are copying a GitHub repo you might get lovely messages such as the following. That should be fine and nothing to worry about.

 ! [remote rejected] refs/pull/1/head -> refs/pull/1/head (deny updating a hidden ref)
 ! [remote rejected] refs/pull/10/head -> refs/pull/10/head (deny updating a hidden ref)
 ! [remote rejected] refs/pull/100/head -> refs/pull/100/head (deny updating a hidden ref)
 ! [remote rejected] refs/pull/101/head -> refs/pull/101/head (deny updating a hidden ref)
 ! [remote rejected] refs/pull/102/head -> refs/pull/102/head (deny updating a hidden ref)

 

Free Tool for Renaming Files with Current Date

For many files, it is good to keep track of when they were originally created. In theory, every file created on any mainstream operating system should have its ‘creation date’ and ‘last date modified’ recorded as part of its metadata. This mechanism, however, is often unreliable, especially when using file synchronisation tools such as Dropbox or Google Drive.

Thus it is often handy to prefix file names with the date they were created, such as:

2018 09 30 Letter.pdf

There are some nice tools available for this purpose, for instance Bulk Rename Utility. However, I often find these a bit too complex for what I need.

I have thus developed a little tool – Date Namer – which does just the one thing I require: prefixing file names with the current date.

[Screenshot: Date Namer]

You are welcome to download this tool from here:

Upcoming releases will be published on the Releases page. Further, all the source code for this tool is available on GitHub.

For those interested in the implementation details: for this project, I tried using Electron, which allows developing a desktop application using Node.js. I found it overall quite easy to use. Internally, the application runs an instance of Chromium to render the UI, so it takes around 50 MB of RAM – not too bad for this use case – and performance is very good.
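
For reference, a minimal Electron main process looks roughly like the sketch below (this is not the actual Date Namer source; main.js and index.html are assumed names):

// main.js – minimal Electron entry point (illustrative sketch only)
const { app, BrowserWindow } = require('electron');

function createWindow() {
  const win = new BrowserWindow({ width: 400, height: 300 });
  win.loadFile('index.html'); // index.html would contain the UI markup
}

app.whenReady().then(createWindow);

// Quit when all windows are closed (except on macOS, by convention)
app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') app.quit();
});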

Tech Tip: Make Spotlight Searches Faster on Mac OS X

One of the things I really like about Windows 10 is the ability to hit the Windows key and type the first few letters of an application’s name to find and open that application. Mac OS X in theory provides the same feature: hitting Command + Space opens a Spotlight search.

Unfortunately, I found this search to be inferior to the one in Windows since it is slower – even on my very powerful Mac machine, it often takes more than two to three seconds to ‘find’ the application I am trying to open.

Last week, I found a way to somewhat mitigate this. Just head to System Preferences, then to Spotlight, and disable all the categories apart from ‘Applications’.

[Screenshot: Spotlight settings]

While this does make the search faster, also note that it won’t search for the other types of content anymore.

Upgrade to Oracle JDK 10 on CentOS/RHEL

With the release of Java 10 only a few days ago, it seems only prudent to update to Java 10 on suitable systems, since official support for Java 9 ends with the release of Java 10. (Note that Java 8 still enjoys long-term support, so it might be the best choice to stick with that on systems which are difficult to change.)

  • Go to the official download site and indicate you agree to their terms.
  • Copy the link for jdk-10_linux-x64_bin.rpm
  • Log into your CentOS machine
  • Download the RPM file using the following command (Don’t forget to provide the link you have copied)

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" [paste copied link here]

  • Install the JDK

sudo yum localinstall jdk-*_linux-x64_bin.rpm

  • Set the default Java version to 10 using alternatives

sudo alternatives --config java

  • Lastly, make sure you are running the correct version of Java:

java -version

 

Upload Elastic Beanstalk Application using Maven

AWS Elastic Beanstalk is a well-established service in the AWS cloud and can be used as a powerful platform for deploying applications in various languages. In this short tutorial, I will outline how to conveniently deploy a Tomcat application to AWS Elastic Beanstalk using the beanstalk-maven-plugin.

The following assumes that you already have a project which is configured to be deployed as WAR and provides a valid web.xml to start answering requests. If you are unsure of how to set this up, please have a look at the example project web-api-example-v2 (on GitHub).

Step 1: Create IAM User

  • Create a new user in AWS IAM with programmatic access

[Screenshot: IAM user creation]

  • For Permissions, select ‘Attach existing policies directly’ and add the following policy

[Screenshot: Elastic Beanstalk access policy]

  • Save the access key and secret key

Step 2: Add Server to Local Maven Configuration

  • Add the following declaration to the <servers> element in your $HOME/.m2/settings.xml and provide the access key and secret key for the user you’ve just created

<server>
  <id>aws.amazon.com</id>
  <username>[aws access key]</username>
  <password>[aws secret key]</password>
</server>

Step 3: Add Beanstalk Maven Plugin


  • Add the following plugin definition to the <plugins> section of your pom.xml

<plugin>
  <groupId>br.com.ingenieux</groupId>
  <artifactId>beanstalk-maven-plugin</artifactId>
  <version>1.5.0</version>
</plugin>

  • Test your security credentials and connection to AWS

mvn beanstalk:check-availability -Dbeanstalk.cnamePrefix=test-war

Step 4: Create S3 Bucket for Application

  • Create a new S3 bucket with a name of your choice (e.g. the name of your application)

[Screenshot: S3 bucket creation]

Step 5: Update Plugin Configuration

  • Provide the following configuration for the beanstalk-maven-plugin
<plugin>
  <groupId>br.com.ingenieux</groupId>
  <artifactId>beanstalk-maven-plugin</artifactId>  
  <version>1.5.0</version>
  <configuration>
    <applicationName>[Provide your application name]</applicationName>
    <!-- Path of the deployed application: cnamePrefix.us-east-1.elasticbeanstalk.com -->
    <cnamePrefix>${project.artifactId}</cnamePrefix>
    <environmentName>devenv</environmentName>
    <environmentRef>devenv</environmentRef>
    <solutionStack>64bit Amazon Linux 2015.03 v1.4.5 running Tomcat 8 Java 8</solutionStack>

    <!-- Bucket name ideally equals the artifactId, but that name is not guaranteed to be available, so the bucket name is given statically -->
    <s3Bucket>[Provide your S3 bucket name]</s3Bucket>
    <s3Key>${project.artifactId}/${project.build.finalName}-${maven.build.timestamp}.war</s3Key>
    <versionLabel>${project.version}</versionLabel>
  </configuration>
</plugin>

Step 6: Deploy project

  • Run the following to upload the project to the S3 bucket:

mvn beanstalk:upload-source-bundle

  • If this succeeds, deploy the application

mvn beanstalk:upload-source-bundle beanstalk:create-application-version beanstalk:create-environment

Your application should now be deployed to Elastic Beanstalk. It will be available under


cname.us-east-1.elasticbeanstalk.com

where cname is the cnamePrefix you specified in Step 5.

Good To Know

  • To find out which solution stacks are available (for the solutionStack configuration parameter), simply run

mvn beanstalk:list-stacks


PlantUML (Open Source Awesomeness)

I’ve always had a soft spot for diagrams. I think that representing information in various visual ways tremendously helps our thinking and understanding. Unfortunately it is often a big headache to create (and maintain) diagrams.

So I was very pleased today when I came across PlantUML. PlantUML is a Java library and web service which renders UML diagrams from text input. Take the following text definition for example:


@startuml
object Object01
object Object02
object Object03
object Object04
object Object05
object Object06
object Object07
object Object08

Object01 <|-- Object02
Object03 *-- Object04
Object05 o-- "4" Object06
Object07 .. Object08 : some labels
@enduml

This will be rendered into the following diagram:

[Rendered UML object diagram]

PlantUML does not just support object diagrams but also many other types of diagrams. There is another service called WebSequenceDiagrams, which focuses only on sequence diagrams (and is not open source) but can be useful if more visually pleasing sequence diagrams are required.

Configuring an initd Service for node_exporter

I recently wrote an article showing how to configure Prometheus and Grafana for easy metrics collection. In that article, I assumed that the system which should be monitored would use the systemd approach for defining services.

I now had to set up the node_exporter utility on a system which uses the initd approach. Thus, I provide some simple instructions here on how to accomplish that.


  • Download node_exporter to /opt (adjust the version if required):

wget https://github.com/prometheus/node_exporter/releases/download/v0.15.2/node_exporter-0.15.2.linux-amd64.tar.gz

  • Extract the archive

tar xvfz node_exporter-*.tar.gz

  • Create a link

ln -s node_exporter-0.15.2.linux-amd64 node_exporter

  • Create the file /opt/node_exporter/node_exporter.sh and add the following content:

#!/bin/sh

/opt/node_exporter/node_exporter --no-collector.diskstats


  • Create the file /etc/init.d/node_exporter and add the following content:

#!/bin/sh
### BEGIN INIT INFO
# Provides: node_exporter
# Required-Start: $local_fs $network $named $time $syslog
# Required-Stop: $local_fs $network $named $time $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Description:
### END INIT INFO

SCRIPT=/opt/node_exporter/node_exporter.sh
RUNAS=root

PIDFILE=/var/run/node_exporter.pid
LOGFILE=/var/log/node_exporter.log

start() {
  if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE"); then
    echo 'Service already running' >&2
    return 1
  fi
  echo 'Starting service…' >&2
  local CMD="$SCRIPT &> \"$LOGFILE\" & echo \$!"
  su -c "$CMD" $RUNAS > "$PIDFILE"
  echo 'Service started' >&2
}

stop() {
  if [ ! -f "$PIDFILE" ] || ! kill -0 $(cat "$PIDFILE"); then
    echo 'Service not running' >&2
    return 1
  fi
  echo 'Stopping service' >&2
  kill -15 $(cat "$PIDFILE") && rm -f "$PIDFILE"
  echo 'Service stopped' >&2
}

uninstall() {
  echo -n "Are you really sure you want to uninstall this service? That cannot be undone. [yes|No] "
  local SURE
  read SURE
  if [ "$SURE" = "yes" ]; then
    stop
    rm -f "$PIDFILE"
    echo "Notice: the log file will not be removed: '$LOGFILE'" >&2
    update-rc.d -f node_exporter remove
    rm -fv "$0"
  fi
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  uninstall)
    uninstall
    ;;
  restart)
    stop
    start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|uninstall}"
esac

Note 1: This sample script runs node_exporter as the user root. For production environments, it is highly recommended to configure another user (such as ‘prometheus’) to run the script.

Note 2: Also check out this init.d script made specifically for node_exporter: node.exporter.default by eloo.

  • Make both files executable

chmod +x /etc/init.d/node_exporter

chmod +x /opt/node_exporter/node_exporter.sh

  • Test the script

/etc/init.d/node_exporter start

/etc/init.d/node_exporter stop

  • Enable start with chkconfig

chkconfig --add node_exporter

All done! Now you can configure your Prometheus server to grab the metrics from the node_exporter instance.