Making the backend of your React App configurable

Nowadays, the frontend and backend of a web application are usually separate parts – often implemented with different technologies – that communicate with each other via HTTP or websockets. For simplicity and in smaller deployments they are hosted on the same web server, but there are several reasons to deploy them on different servers: load distribution, security, different environments running the same frontend against differing backends, and so on.

To allow separate deployments without changing the frontend code per deployment we need to make the backend transparently configurable. Fortunately, this is relatively easy for a frontend written in React and set up with create-react-app. To make this fully transparent for your frontend code we need to

  1. Make the backend URL configurable
  2. Replace the fetch() function to use the configured backend
  3. Activate the setup at the start of our app

Configuring a React App

Create-react-app provides a configuration mechanism for custom environment variables using .env files. We can simply provide different .env files for our environments, where we can configure different aspects of our application – in our use case, the backend URL.

# The base URL of the backend API. Add a path prefix if the API does not run at the server root.
REACT_APP_BACKEND_API_BASE_URL=http://some.other.server:5000
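Create-react-app also picks up environment-specific files like .env.development and .env.production, so each deployment target can get its own value without code changes (the URLs below are just placeholders):

# .env.development
REACT_APP_BACKEND_API_BASE_URL=http://localhost:5000

# .env.production
REACT_APP_BACKEND_API_BASE_URL=https://api.example.com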

Inside our React App we can reference the configured values using {process.env.REACT_APP_BACKEND_API_BASE_URL}.

Making the use of our configured backend transparent

In a modern JavaScript app the main means of communicating with the backend is the fetch() API. To make the use of our configured backend transparent we can replace the global fetch() function with our own version like so:

// remember the original fetch-function to delegate to
const originalFetch = global.fetch;

export const applyBaseUrlToFetch = (baseUrl) => {
  // replace the global fetch() with our version where we prefix the given URL with a baseUrl
  global.fetch = (url, options) => {
    const finalUrl = baseUrl + url;
    return originalFetch(finalUrl, options);
  };
};

That way all of our fetch() calls are re-routed to the configured backend.
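For example, a call against a relative URL like the following would now transparently hit the configured server (showWeatherData stands in for one of our handlers):

// with the base URL above, this request goes to http://some.other.server:5000/api/weather
fetch('/api/weather')
  .then(response => response.json())
  .then(showWeatherData);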

Activating our fetch()-customization

Now that we have all the pieces of our infrastructure in place we need to activate the changes to fetch on application startup. So we add code like below to our index.js:

// If we have a differing backend configured, replace the global fetch()
if (process.env.REACT_APP_BACKEND_API_BASE_URL !== undefined && process.env.REACT_APP_BACKEND_API_BASE_URL !== '') {
  applyBaseUrlToFetch(process.env.REACT_APP_BACKEND_API_BASE_URL);
}

Now all our calls to a relative URL will be prefixed with a configurable base and that way different backends can be used with the same application code.

Caveats

The above approach works nicely if you have exactly one backend for your app and do not fetch from other sources. If you do, you may want to expose the original fetch function as something like fetchExternal() to be able to explicitly fetch from other sources.
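A minimal sketch of what that could look like, placed in the same module that holds originalFetch:

// expose the unmodified fetch() for requests that must not be prefixed with the backend base URL
export const fetchExternal = (url, options) => originalFetch(url, options);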

In addition, if frontend and backend reside on different servers/sites using differing DNS names, you will have to configure CORS for your backends or your browser will refuse to make the requests!

Client-side web development: Drink the Kool-Aid or be cautious?

Client-side web development is a fast-changing world. JavaScript libraries and frameworks come and go monthly. A couple of years ago jQuery was a huge thing, then AngularJS, and nowadays people use React or Vue.js with a state container like Redux. And so do we for new projects. Unfortunately, these modern client-side frameworks are based on the npm ecosystem, which is notorious for its dependency bloat. Even if you only have a couple of direct dependencies, the package manager lock file will list hundreds of indirect dependencies. Experience has shown that lots of dependencies result in a maintenance burden as time passes, especially when you have to do major version updates. Also, as mentioned above, frameworks go out of fashion, and their maintainers move on to the next big pet project, leaving you and your project sitting on a barely or no longer maintained base. And frameworks can't easily be replaced, because they tend to permeate every aspect of your application.

With this frustrating experience in mind we recently did an experiment for a new medium-sized web project: we avoided frameworks and the npm ecosystem and only used JavaScript libraries that were really necessary and have no or very few indirect dependencies. Browsers have become better at complying with web standards, at least regarding the basics, so libraries like jQuery and polyfills that paper over the incompatibilities can mostly be avoided – an interesting resource on this is the website You Might Not Need jQuery.

We still organised our views as components, and they communicate via a very simple event dispatcher. Some things had to be done by hand, but not too many. It works, although the result is not as pure as it would have been with declarative views as facilitated by React and a functional state container like Redux. We're still fans of the React+Redux approach and we're using it happily (at least for now) in other projects, but we're also skeptical regarding the long-term costs, especially of relying on the npm ecosystem. Which approach will result in less maintenance burden? We don't know yet. Time will tell.

Decoding non-utf8 server responses using the Fetch API

The JavaScript Fetch API is a really nice addition to the language and my preferred – in fact, the only bearable – way to do server requests.
Its Promise-based API is a lot nicer than the older, purely callback-based approaches.

The usual approach to get a text response from a server using the Fetch API looks like this:

let request = fetch(url)
  .then(response => response.text())
  .then(handleText);

But this has one subtle problem:

I was building a client application that reads weather data from a small embedded device. We did not have direct access to changing the functionality of that device, but we could upload static pages to it, and use its existing HTML API to query the amount of registered rainfall and lightning strikes.

Using the fetch API, I quickly got the data and extracted it from the HTML but some of the identifiers had some screwed up characters that looked like decoding problems. So I checked whether the HTTP Content-Type was set correctly in the response. To my surprise it was correctly set as Content-Type: text/html; charset=iso-8859-1.

So why did my JavaScript application not get that? After some digging, it turned out that Response's text() function always decodes the payload as UTF-8. The mismatch between that and the declared charset explained the problem!

Obviously, I had to do the decoding myself. The solution I picked was to use the TextDecoder class. It can decode an ArrayBuffer with a given encoding. Luckily, that is easy to get from the response:

let request = fetch(url)
  .then(response => response.arrayBuffer())
  .then(buffer => {
    let decoder = new TextDecoder("iso-8859-1");
    let text = decoder.decode(buffer);
    handleText(text);
  });

Since I only had to support that single encoding, this worked well for me. Keep in mind that TextDecoder is still experimental technology. However, we had a specific browser as a target and it works there. Lucky us!
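If you cannot hard-code the encoding, a more general sketch (with deliberately simple header parsing) could read the charset from the Content-Type header and fall back to UTF-8:

// Sketch: decode the payload with the charset announced by the server, defaulting to utf-8
function fetchTextWithCharset(url) {
  return fetch(url)
    .then(response => {
      const contentType = response.headers.get('Content-Type') || '';
      const match = contentType.match(/charset=([^;]+)/i);
      const charset = match ? match[1].trim() : 'utf-8';
      return response.arrayBuffer()
        .then(buffer => new TextDecoder(charset).decode(buffer));
    });
}

fetchTextWithCharset(url).then(handleText);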

For what the javascript!

The setting

We are developing and maintaining an important web application for one of our clients. Our application scrapes a web page and embeds our own content into that page frame.

One day our client told us of an additional block of elements at the bottom of each page. The block had a heading “Image Credits” and a broken image link strangely labeled “inArray”. We did not change anything on our side and the new blocks were not part of the HTML code of the pages.

Ok, so some new Javascript code must be the source of these strange elements on our pages.

The investigation

I started the investigation using the development tools of the browser (opened with F12). A search for the string “Image Credits” instantly brought me to the right place: a JavaScript function called on document.ready(). The code was basically getting all images with a copyright attribute and putting the findings into an array, with the text as the key and the image URL as the value. Then it would iterate over the array and add the copyright information at the bottom of each page.

But wait! Our array was empty and we had no images with copyright attributes, yet the block was still put out. I verified all this using the debugger in the browser and was a bit puzzled at first, especially by the strange name “inArray”, which sounded more like code than copyright information.

The cause

Then I looked at the iteration and it struck me like lightning: the code used for (name in copyrightArray) to iterate over the elements. Sounds correct, but it is not! Now we have to elaborate a bit, especially for all you folks without a special degree in JavaScript coding:

In JavaScript there is no distinct notion of associative arrays, but you can access all enumerable properties of an object using square brackets (example taken from Mozilla's JavaScript docs):

var string1 = "";
var object1 = {a: 1, b: 2, c: 3};

for (var property1 in object1) {
  string1 = string1 + object1[property1];
}

console.log(string1);
// expected output: "123"

In the case of an array the indices are “just enumerable properties with integer names and are otherwise identical to general object properties“.

So in our case we had an array object with a length of 0 and a property called inArray. Where did that come from? Digging further revealed that one of our third-party libraries added a function to the array prototype like so:

Array.prototype.inArray = function (value) {
  var i;
  for (i = 0; i < this.length; i++) {
    if (this[i] === value) {
      return true;
    }
  }
  return false;
};
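With that in place even an empty array suddenly carries an enumerable property, which is exactly what the for…in loop picked up:

var copyrights = []; // empty, but Array.prototype now has inArray

for (var name in copyrights) {
  console.log(name); // prints "inArray" - the inherited function shows up in the iteration
}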

The solution

Usually you would iterate over an array using the integer index (old school) or, better, using the more modern and readable for…of (which also works on other iterable types). In this case that does not work because we do not use integer indices but string properties. So you have to use Object.keys().forEach() or check with hasOwnProperty() in your for…in loop whether a property is inherited or not, to avoid picking up unwanted properties of prototypes.
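A sketch of the fixed iteration (appendImageCredit() stands in for whatever actually renders the credit block):

// only iterate over the object's own enumerable properties
Object.keys(copyrightArray).forEach(function (text) {
  appendImageCredit(text, copyrightArray[text]);
});

// or, keeping the for...in loop but filtering out inherited properties
for (var text in copyrightArray) {
  if (copyrightArray.hasOwnProperty(text)) {
    appendImageCredit(text, copyrightArray[text]);
  }
}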

The takeaways

Iteration in JavaScript is hard! See this lengthy discussion… The different constructs are named quite similarly and all have subtle differences in behaviour. In addition, libraries can mess with the objects you think you know. So finally some advice from me:

  • Arrays are only true arrays with positive integer indices/property names!
  • Do not mess with the prototypes of well known objects, like our third-party library did…
  • Use for…of to iterate over true arrays or other iterables
  • Do not use associative arrays if other options are available. If you do, make sure to check if the properties are own properties and enumerable.

 

Analysing a React web app using SonarQube

Many developers, especially from the Java world, may know the code analysis platform SonarQube (formerly SONAR). While its focus was mostly on integrating all the great analysis tools for Java, its modular architecture allows plugging in tools for other languages to provide linter results and code coverage under the same web interface.

We are a polyglot bunch and are using more and more React in addition to our Java, C++, .NET and “what not” projects. Of course we would like the same quality overview for these JavaScript projects as we are used to in other ecosystems. So I tried SonarQube for React.

The start

Using SonarQube to analyse a JavaScript project is as easy as for the other languages: Just provide a sonar-project.properties file specifying the sources and some paths for analysis results and there you go. It may look similar to the following for a create-react-app:

sonar.projectKey=myproject:webclient
sonar.projectName=Webclient for my cool project
sonar.projectVersion=0.3.0

#sonar.language=js
sonar.sources=src
sonar.exclusions=src/tests/**
sonar.tests=src/tests
sonar.sourceEncoding=UTF-8

#sonar.test.inclusions=src/tests/**/*.test.js
sonar.coverage.exclusions=src/tests/**

sonar.junit.reportPaths=test-results/test-report.junit.xml
sonar.javascript.lcov.reportPaths=coverage/lcov.info

For the coverage you need to add some settings to your package.json, too:

{ ...
  "devDependencies": {
    "enzyme": "^3.3.0",
    "enzyme-adapter-react-16": "^1.1.1",
    "eslint": "^4.19.1",
    "eslint-plugin-react": "^7.7.0",
    "jest-junit": "^3.6.0"
  },
  "jest": {
    "collectCoverageFrom": [
      "src/**/*.{js,jsx}",
      "!**/node_modules/**",
      "!build/**"
    ],
    "coverageReporters": [
      "lcov",
      "text"
    ]
  },
  "jest-junit": {
    "output": "test-results/test-report.junit.xml"
  },
  ...
}
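To actually produce both report files the tests have to run with coverage and the jest-junit results processor enabled. One way to do that – an assumption, the exact flags depend on your react-scripts and jest-junit versions – is a dedicated script in package.json:

"scripts": {
  "test:ci": "CI=true react-scripts test --coverage --testResultsProcessor=jest-junit"
}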

This is all nice but the set of built-in rules for JavaScript is a bit thin and does not fit React apps nicely.

ESLint to the rescue

But you can make SonarQube use ESLint and thus become more useful.

First you have to install the ESLint plugin for SonarQube from GitHub.

Second you have to set up ESLint to your liking using eslint --init in your project. That results in an .eslintrc.js similar to this:

module.exports = {
  'env': {
    'browser': true,
    'commonjs': true,
    'es6': true
  },
  'extends': 'eslint:recommended',
  'parserOptions': {
    'ecmaFeatures': {
      'experimentalObjectRestSpread': true,
      'jsx': true
    },
    'sourceType': 'module'
  },
  'plugins': [
    'react'
  ],
  'rules': {
    'indent': [
      'error',
      2
    ],
    'linebreak-style': [
      'error',
      'unix'
    ],
    'quotes': [
      'error',
      'single'
    ],
    'semi': [
      'error',
      'always'
    ]
  }
};

Lastly, enable the ESLint ruleset for your project in SonarQube and look at the results. You may need to tune one thing or another, but you will get some useful static analysis helping you to improve your code quality further.

Some tricks for working with SVG in JavaScript

Scalable vector graphics (SVG) is a part of the document object model (DOM) and thus can be modified just like any other DOM node from JavaScript. But SVG has some pitfalls like having its own coordinate system and different style attributes which can be a headache. What follows is a non comprehensive list of hints and tricks which I found helpful while working with SVG.

Coordinate system

From screen coordinates to SVG

function screenToSVG(svg, x, y) { // svg is the svg DOM node
  var pt = svg.createSVGPoint();
  pt.x = x;
  pt.y = y;
  var cursorPt = pt.matrixTransform(svg.getScreenCTM().inverse());
  return {x: Math.floor(cursorPt.x), y: Math.floor(cursorPt.y)}
}

From SVG coordinates to screen

function svgToScreen(element) {
  var rect = element.getBoundingClientRect();
  return {x: rect.left, y: rect.top, width: rect.width, height: rect.height};
}

Zooming and panning

Getting the view box

function viewBox(svg) {
    var box = svg.getAttribute('viewBox').split(' ');
    return {
        x: parseInt(box[0], 10),
        y: parseInt(box[1], 10),
        width: parseInt(box[2], 10),
        height: parseInt(box[3], 10)
    };
}

Zooming using the view box

function zoom(svg, initialBox, factor) {
  svg.setAttribute('viewBox', initialBox.x + ' ' + initialBox.y + ' ' + initialBox.width / factor + ' ' + initialBox.height / factor);
}

function zoomFactor(svg) {
  // assumes the height attribute ends in a two-character unit, e.g. '400px'
  var height = parseInt(svg.getAttribute('height').substring(0, svg.getAttribute('height').length - 2), 10);
  return 1.0 * viewBox(svg).height / height;
}

Panning (with zoom factor support)

function pan(svg, panX, panY) {
  var pos = viewBox(svg);
  var factor = zoomFactor(svg);
  svg.setAttribute('viewBox', (pos.x - factor * panX) + ' ' + (pos.y - factor * panY) + ' ' + pos.width + ' ' + pos.height);
}

Misc

Embedding HTML

function svgEmbedHTML(width, height, html) {
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "foreignObject");
    svg.setAttribute('width', '' + width);
    svg.setAttribute('height', '' + height);
    var body = document.createElementNS('http://www.w3.org/1999/xhtml', 'body');
    body.style.background = 'none';
    svg.appendChild(body);
    body.appendChild(html);
    return svg;
}

Making an invisible rectangular click/touch area

function addTouchBackground(svgRoot) {
    // svgRect() is a small helper creating an SVG <rect> element, see the sketch below
    var rect = svgRect(0, 0, '100%', '100%');
    rect.style.fillOpacity = 0.01;
    svgRoot.appendChild(rect);
}
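svgRect() is not part of the listing above; a sketch of what such a helper might look like (an assumption, not the original code):

function svgRect(x, y, width, height) {
    var rect = document.createElementNS('http://www.w3.org/2000/svg', 'rect');
    rect.setAttribute('x', '' + x);
    rect.setAttribute('y', '' + y);
    rect.setAttribute('width', '' + width);
    rect.setAttribute('height', '' + height);
    return rect;
}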

Using groups as layers

This one needs an explanation. The render order of the SVG children depends on the order in the DOM: the last one in the DOM is rendered last and thus shows above all others. If you want to have certain elements below or above others, I found it helpful to use groups in SVG as layers and add the elements to them.

function svgGroup(id) {
    var group = document.createElementNS('http://www.w3.org/2000/svg', 'g');
    if (id) {
        group.setAttribute('id', id);
    }
    return group;
}

// and later on:
document.getElementById(id).appendChild(yourElement);

Lessons learned developing hybrid web apps (using Apache Cordova)

In the past year we started exploring a new (at least for us) terrain: hybrid web apps. We had already developed mobile web apps and native apps, but this year we took a first step into the combination of both worlds. Here are some lessons learned so far.

Just develop a web app

After all, the hybrid app is a (mobile) web app at its core. Encapsulating the native interactions helped us test in a browser and iterate much faster. A clean architecture also supports deferring decisions about the environment to the last possible moment.

Chrome remote debugging is a boon

The tools provided by Chrome for remote debugging of Android web views and browsers are really great. You can even see and control the remote UI. The app has some redraw problems while the debugger is connected, but overall it works well.

Versioning is really important

When developing web apps, the user always has the latest version. But since our app can run offline and is installed as a normal Android app, you have to have versions. These versions must be visible to the user, so he can tell you which version he runs.

Android app update fails silently

Sometimes updating our app only worked in parts. It seemed that the web view cached some files and didn’t update others. The problem: the updater told the user everything went smoothly. We need to investigate that further…

Cordova plugins helped to speed up

Talking to Bluetooth devices? Checked. Saving lots of data in a local SQLite database? Plugins have got you covered. Writing and reading local files? No problemo. There are some great plugins out there covering your needs without you having to go native yourself.

JavaScript isn’t as bad as you think

Working with JavaScript needs some discipline. But using a clean architecture approach and our beloved event bus to flatten and expose all handlers and callbacks makes it a breeze to work with UIs and logic.

SVG is great

Our app uses a complex visualization which can be edited, changed, moved and zoomed by the user. SVG really helps here and works great with CSS and JavaScript.

Use log files

When your app runs on a mobile device without a connection (to the internet) you need to get information from the device to you. Just a console won’t cut it. You need log files to record the actions and errors the user provokes.

Accessibility is harder than you think

Modern design trends sometimes make it hard to achieve good accessibility. Common problems are low contrast, using only icons on buttons, color as the sole information bearer, and touch targets that are too small or indiscernible.

These are just the first lessons we learned tackling hybrid development but we are sure there are more to come.

Internationalization of a React application with react-intl

For the internationalization of a React application I have recently used the seemingly popular react-intl package by Yahoo.

The basic usage is simple. To resolve a message use the FormattedMessage tag in the render method of a React component:

import {FormattedMessage} from "react-intl";

class Greeting extends React.Component {
  render() {
    return (
      <div>
        <FormattedMessage id="greeting.message"
            defaultMessage={"Hello, world!"}/>
      </div>
    );
  }
}
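For the message lookup to work the component tree has to be wrapped in an IntlProvider near the application root. A minimal sketch – the locale handling and the messages object are assumptions specific to your app:

import {IntlProvider} from "react-intl";
import {render} from "react-dom";

const messagesDe = {
  "greeting.message": "Hallo, Welt!",
  "search.field.placeholder": "Suche"
};

render(
  <IntlProvider locale="de" messages={messagesDe}>
    <Greeting/>
  </IntlProvider>,
  document.getElementById("root")
);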

Injecting the “intl” property

If you have a text in your application that can’t be simply resolved with a FormattedMessage tag, because you need it as a string variable in your code, you have to inject the intl property into your React component and then resolve the message via the formatMessage method on the intl property.

To inject this property you have to wrap the component class via the injectIntl() function and then re-assign the wrapped class to the original class identifier:

import {intlShape, injectIntl} from "react-intl";

class SearchField extends React.Component {
  render() {
    const intl = this.props.intl;
    const placeholder = intl.formatMessage({
        id: "search.field.placeholder",
        defaultMessage: "Search"
      });
    return (<input type="search" name="query"
               placeholder={placeholder}/>);
  }
}
SearchField.propTypes = {
    intl: intlShape.isRequired
};
SearchField = injectIntl(SearchField);

Preserving references to components

In one of the components I had captured a reference to a child component with the React ref attribute:

ref={(component) => this.searchInput = component}

After wrapping the parent component class via injectIntl() as described above in order to internationalize it, the internal reference stopped working. It took me a while to figure out how to fix it, since it’s not directly mentioned in the documentation. You have to pass the “withRef: true” option to the injectIntl() call:

SearchForm = injectIntl(SearchForm, {withRef: true});

Here’s a complete example:

import {intlShape, injectIntl} from "react-intl";

class SearchForm extends React.Component {
  render() {
    const intl = this.props.intl;
    const placeholder = intl.formatMessage({
        id: "search.field.placeholder",
        defaultMessage: "Search"
      });
    return (
      <form>
        <input type="search" name="query"
               placeholder={placeholder}
               ref={(c) => this.searchInput = c}/>
      </form>
    );
  }
}
SearchForm.propTypes = {
  intl: intlShape.isRequired
};
SearchForm = injectIntl(SearchForm,
                        {withRef: true});

Conclusion

Although react-intl appears to be one of the more mature internationalization packages for React, the overall experience isn’t too great. Unfortunately, you have to litter the code of your components with dependency injection boilerplate code, and the documentation is lacking.

Arbitrary 2D curves with Highcharts

Highcharts is a versatile JavaScript charting library for the web. The library supports all kinds of charts: scatter plots, line charts, area charts, bar charts, pie charts and more.

A data series of type line or spline connects the points in the order of the x-axis when rendered. It is possible to invert the axes by setting the property inverted: true in the chart options.

var chart = $('#chart').highcharts({
  chart: {
    inverted: true
  },
  series: [{
    name: 'Example',
    type: 'line',
    data: [
      {x: 10, y: 50},
      {x: 20, y: 56.5},
      {x: 30, y: 46.5},
      // ...
    ]
  }]
});

[Figure: line chart with inverted axes]

Connecting points in an arbitrary order

Connecting the points in an arbitrary order, however, is not supported by default. I couldn’t find a Highcharts plugin which supports this either, so I implemented a solution by modifying the getGraphPath function of the series object:

var series = $('#chart').highcharts().series[0];
var oldGetGraphPath = series.getGraphPath;
Object.getPrototypeOf(series).getGraphPath = function(points, nullsAsZeroes, connectCliffs) {
  points = points || this.points;
  points.sort(function(a, b) {
    return a.sortIndex - b.sortIndex;
  });
  return oldGetGraphPath.call(this, points, nullsAsZeroes, connectCliffs);
};

The new function sorts the points by a new property called sortIndex before the line path of the chart gets rendered. This new property must be assigned to each point object of the series data:

series.setData([
  {x: 10, y: 50, sortIndex: 1},
  {x: 20, y: 56.5, sortIndex: 2},
  {x: 30, y: 46.5, sortIndex: 3},
  // ...
], true);

Now we can render charts with points connected in an arbitrary order like this:

[Figure: A line chart with points connected in arbitrary order]

Modern developer #3: Framework independent JavaScript architecture

Usually small JavaScript projects start with simple wiring of callbacks onto DOM elements. This works fine while the project is in its initial state, but in a short time it gets out of hand. Now we have spaghetti wiring and callback hell. Often at this point we look for help by adopting a framework, hoping that its codified best practices will pull us out of the mud. But now our project is tied to the new framework.
In search of another, framework-independent way I stumbled upon the scalable architecture by Nicholas Zakas.
It starts by defining modules as independent units. This means:

  • Separate JavaScript and DOM elements from the rest of the application
  • Modules must not reference other modules
  • Modules may not register callbacks or even reference DOM elements outside their DOM tree
  • To communicate with the outside world, modules can only call the sandbox

The sandbox is a central hub. We use a pub/sub system:

sandbox.publish({type: 'event_type', data: {}});

sandbox.subscribe('event_type', this.callback.bind(this));
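A minimal sandbox can be little more than an in-memory event dispatcher. A sketch (the names are mine, not taken from Zakas’ implementation):

function Sandbox() {
  this.subscribers = {}; // event type -> list of callbacks
}

Sandbox.prototype.subscribe = function(type, callback) {
  (this.subscribers[type] = this.subscribers[type] || []).push(callback);
};

Sandbox.prototype.publish = function(event) {
  (this.subscribers[event.type] || []).forEach(function(callback) {
    callback(event);
  });
};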

Besides being an event bus, the sandbox is responsible for access control and provides the modules with a consistent interface.
Modules are started and stopped (in case they misbehave) by the application core. You could also use the core as an anti-corruption layer for third-party libraries.
This architecture gives a frame for implementation. But implementing it raises other questions:

  • how do the modules update their state?
  • where do we call the backend?

Handling state

A global model would reside in or be referenced by the application core. In addition, every module has its own model. Updates are always done in application event handlers, not directly in the DOM event handlers.
Let me illustrate. Say we have a module which keeps track of selected entries:

function Module(sandbox) {
  this.sandbox = sandbox;
  this.selectedEntries = [];
}

Traditionally our DOM event handler would update our model:

button.on('click', function(e) {
  this.selectedEntries.push(entry);
});

A better way would be to publish an application event, subscribe the module to this event and handle it in the application event handler:

this.sandbox.subscribe('entry_selected', this.entrySelected.bind(this));

Module.prototype.entrySelected = function(event) {
  this.selectedEntries.push(event.entry);
};

button.on('click', function(e) {
  this.sandbox.publish({type: 'entry_selected', entry: entry});
});

Other modules can now listen for the selection of entries. The module itself does not need to know who selected the entry. All the internal communication about selection is explicit and visible. This makes it possible to use event sourcing.

Calling the backend

No module should directly call the backend. For this a special kind of module called an extension is used. Extensions encapsulate cross-cutting concerns and shield the communication with other systems.
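Such an extension could, for example, wrap the HTTP calls and translate responses back into application events. A rough sketch (the event names and the endpoint are made up for illustration):

function BackendExtension(sandbox) {
  this.sandbox = sandbox;
  this.sandbox.subscribe('save_requested', this.save.bind(this));
}

BackendExtension.prototype.save = function(event) {
  var sandbox = this.sandbox;
  fetch('/api/entries', {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify(event.data)
  }).then(function(response) {
    sandbox.publish({type: 'save_succeeded', data: {status: response.status}});
  }).catch(function(error) {
    sandbox.publish({type: 'save_failed', data: {error: error}});
  });
};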

Summary

This architecture keeps UI parts together with their corresponding code, flattens callbacks, centralizes communication with the help of application events, and encapsulates outside communication. On top of that it is simple and small.