Making the backend of your React App configurable

Nowadays, the frontend and backend of a web application are usually separate parts – oftentimes implemented using different technologies – that communicate with each other using HTTP or websockets. For simplicity and in smaller deployments both are hosted on the same web server. There are several reasons to deploy them on different servers, though: load distribution, security, different environments running the same frontend against differing backends, and so on.

To allow separate deployments without changing the frontend code per deployment we need to make the backend transparently configurable. Fortunately, this is relatively easy for a frontend written in React and set up with create-react-app. To make this fully transparent for your frontend code we need to

  1. Make the backend URL configurable
  2. Replace the fetch() function to use the configured backend
  3. Activate the setup at the start of our app

Configuring a React App

Create-react-app provides a configuration mechanism with custom environment variables using .env files. We can simply provide different .env files for our environments (e.g. .env.development and .env.production) and configure different aspects of our application there. In our use case this is the backend URL.

# The base URL of the backend API. Add a path prefix if the API does not run at the server root.
REACT_APP_BACKEND_API_BASE_URL=http://some.other.server:5000

Inside our React app we can reference the configured value using process.env.REACT_APP_BACKEND_API_BASE_URL (inside JSX: {process.env.REACT_APP_BACKEND_API_BASE_URL}).
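
For example, in a plain module this could look like the following minimal, hypothetical usage:

// read the configured backend base URL (undefined if not configured)
const backendUrl = process.env.REACT_APP_BACKEND_API_BASE_URL;
console.log('Using backend at', backendUrl);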

Making the use of our configured backend transparent

In a modern JavaScript app the main means of communicating with the backend is the fetch() API. To make the use of our configured backend transparent we can replace the global fetch() function with our own version like so:

// remember the original fetch-function to delegate to
const originalFetch = global.fetch;

export const applyBaseUrlToFetch = (baseUrl) => {
  // replace the global fetch() with our version where we prefix the given URL with a baseUrl
  global.fetch = (url, options) => {
    const finalUrl = baseUrl + url;
    return originalFetch(finalUrl, options);
  };
};

That way all of our fetch() calls are re-routed to the configured backend.

Activating our fetch()-customization

Now that we have all the pieces of our infrastructure in place we need to activate the changes to fetch on application startup. So we add code like below to our index.js:

// If we have a differing backend configured, replace the global fetch()
if (process.env.REACT_APP_BACKEND_API_BASE_URL !== undefined && process.env.REACT_APP_BACKEND_API_BASE_URL !== '') {
  applyBaseUrlToFetch(process.env.REACT_APP_BACKEND_API_BASE_URL);
}

Now all our fetch() calls to relative URLs will be prefixed with a configurable base, and that way different backends can be used with the same application code.
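
For example, with the configuration from above, a call to a hypothetical endpoint like /api/users transparently hits the configured backend:

// resolves to http://some.other.server:5000/api/users after our setup
fetch('/api/users').then((response) => response.json());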

Caveats

The above approach works nicely if you have exactly one backend for your app and do not fetch from other sources. If you do, you may want to expose the original fetch function as something like fetchExternal() to be able to explicitly fetch from other sources, as sketched below.
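
A minimal sketch of that idea, reusing the originalFetch reference captured above (the name fetchExternal is our suggestion, not an established API):

// expose the unmodified fetch() for explicit requests to other origins
export const fetchExternal = (url, options) => originalFetch(url, options);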

In addition, if frontend and backend reside on different servers/sites using differing DNS names you will have to configure CORS for your backends or the browser will refuse to make the requests!
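
What that looks like depends entirely on your backend stack. Just as an illustration, assuming a Node.js backend using Express and the cors middleware, it could be as simple as:

// hypothetical Express backend allowing requests from the frontend origin
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors({ origin: 'http://frontend.example.com' }));
app.listen(5000);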

Object slicing with Grails and GORM

Some may know the problem called object slicing that occurs when passing or assigning polymorphic objects by value in C++. The issue is not limited to C++, as we recently experienced in one of our web applications based on Grails. If you are curious, just stay awhile and listen…

Our setting

Some of our domain entities use inheritance, and the entities containing them decide what to do based on some of their properties. You may call that bad design, but for now let us take it as it is and show some code to clarify the situation:

@Entity
class Container {
  A a

  def doSomething() {
    // hasActuallyB() checks a persistent property of the container
    // telling us that a actually holds a B instance; elided for brevity
    if (hasActuallyB()) {
      return a.bMethod()
    }
    return a.something()
  }
}

@Entity
class A {

  def something() {
    return 'Something A does'
  }
}

@Entity
class B extends A {

  def bMethod() {
    return 'Something only B can do'
  }
}

class ContainerController {

  def save = {
    new Container(a: new B()).save()
  }

  def show = {
    def container = Container.get(params.id)
    [result: container.doSomething()]
  }
}

Such code worked for us without problems until we upgraded to Grails 3. Suddenly we got exceptions like:

2019-02-18 17:03:43.370 ERROR --- [nio-8080-exec-1] o.g.web.errors.GrailsExceptionResolver   : MissingMethodException occurred when processing request: [GET] /container/show
No signature of method: A.bMethod() is applicable for argument types: () values: []. Stacktrace follows:

Caused by: groovy.lang.MissingMethodException: No signature of method: A.bMethod() is applicable for argument types: () values: []
at Container.doSomething(Container.groovy:123)

Debugging showed that our assumptions and checks were still true and the Container member was saved correctly as a B. Still, the groovy method call using duck typing did not work…

What is happening here?

Since the domain entities are persistent objects mapped by GORM and (in our case) Hibernate, they do not always behave like your average POGO (plain old groovy object). When fetched from the database they may in reality be Javassist proxy instances. These proxies are set up to respond to the declared type of the member, not its actual type! Clearly, an A does not respond to bMethod().

A workaround

Ok, the class hierarchy is not that great but we cannot rewrite everything. So what now?

Fortunately there is a workaround: you can explicitly unwrap the proxy object using GrailsHibernateUtil.unwrapIfProxy(). Then you have a real instance of B, and your groovy duck typing and polymorphic calls work as expected again.
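
Applied to doSomething() from above, the workaround could look like this sketch (the import shown is what we would expect for Grails 3/GORM; check the package name for your version):

import org.grails.orm.hibernate.cfg.GrailsHibernateUtil

def doSomething() {
  // unwrap the potential Javassist proxy to get a real B instance
  def realA = GrailsHibernateUtil.unwrapIfProxy(a)
  if (hasActuallyB()) {
    return realA.bMethod()
  }
  return realA.something()
}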

Unexpected RESTEasy application upgrade surprise

The setting

A few months ago we got to maintain a RESTEasy application running in a Wildfly 10 container. The application uses RESTEasy as both server and client and contains a few custom interceptors and providers.

Now our client wants to move on to Wildfly 13 as the deployment target. Most of the application works out-of-the-box or just by upgrading some dependencies in the new container, but some critical parts, like the REST client requests, stopped working.

The investigation

After some digging through the error messages it became clear that our interceptors and providers were not called anymore. What had changed? Wildfly 13 comes with RESTEasy 3.5.1 while we were using 3.0 in Wildfly 10. Looking at the upgrade documentation leaves us puzzled, though:

RESTEasy 3.5 series is a spin-off of the old RESTEasy 3.0 series, featuring JAX-RS 2.1 implementation.

The reason why 3.5 comes from 3.0 instead of the 3.1 / 4.0 development streams is basically providing users with a selection of RESTEasy 4 critical / strategic new features, while ensuring full backward compatibility. As a consequence, no major issues are expected when upgrading RESTEasy from 3.0.x to 3.5.x.

We are using the standard classpath scanning method, which discovers annotated RESTEasy classes and registers them for the application. Trying to register them explicitly in the application yielded the message that our providers were already registered:

RESTEASY002155: Provider class mypackage.MyProvider is already registered. 2nd registration is being ignored.
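
For reference, such a provider looks roughly like the following (a hypothetical example; the @Provider annotation is what classpath scanning picks up):

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.ext.Provider;

// picked up automatically on the server side by classpath scanning
@Provider
public class MyProvider implements ContainerRequestFilter {
  @Override
  public void filter(ContainerRequestContext requestContext) {
    // inspect or modify incoming requests here
  }
}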

Scanning and registration seemed to just work alright. So what was happening here?

The resolution

After a bit more investigation we realized the issue was on the client side only! In Wildfly 10/RESTEasy 3.0 the providers were automatically registered for the client, too. This is no longer the case in Wildfly 13/RESTEasy 3.5! You have to register them with the client yourself, either on the ResteasyClientBuilder or on the ResteasyClient you are using, as mentioned in the documentation:

// Activate gzip compression on the client by registering the providers explicitly
Client client = new ResteasyClientBuilder()
                    .register(AcceptEncodingGZIPFilter.class)
                    .register(GZIPDecodingInterceptor.class)
                    .register(GZIPEncodingInterceptor.class)
                    .build();
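
Registering on an already built client instance works as well, since the JAX-RS Client interface is Configurable. For example, with the provider from the log message above:

// register a custom provider on an existing client
client.register(mypackage.MyProvider.class);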

This subtle change in (undocumented?) behaviour took several hours to debug. Nevertheless, we actually like the change because we prefer doing things explicitly over relying on some magic. So now it is clear which interceptors and providers our REST client is using.

Bringing your Grails app from 2.4 to 3.3

Updating to a new framework version often requires a lot of work and investigation into how to fix the problems that arise. Usually there are upgrade guides that take you most of the way and make upgrading mostly a grind.

This is also true for Grails and our upgrade experience with it. Still, there are parts where you have to invest extra work and creativity. The current upgrade of our application from 2.4.5 to 3.3.8 is no exception:

The grind

The major changes and upgrade notes are part of the documentation so I will only mention them briefly:

  • Switch to the gradle build system
  • Using YAML as the main configuration format
  • Migration from filters to interceptors (see the sketch after this list)
  • New testing framework (partly optional, because you can still use the old mixin framework with a plugin)
  • Package name changes
  • Former core features are now available as plugins, like GSPs, data sources and GORM
  • Functional tests need to use Spock+Geb or you will face weird problems and extra work (we had selenium tests using selenium-server before)
  • Integration tests work differently, so they need to be migrated
  • Logging using Logback
  • Entities often need an @Entity annotation
  • Moving some files to new locations
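
A minimal interceptor replacing an old filter could look like the following sketch (the authentication check is hypothetical):

// placed under grails-app/controllers, picked up by the *Interceptor naming convention
class AuthInterceptor {

  AuthInterceptor() {
    matchAll().excludes(controller: 'login')
  }

  boolean before() {
    // hypothetical check: redirect to login if no user is in the session
    if (!session.user) {
      redirect(controller: 'login')
      return false
    }
    true
  }
}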

The tricky stuff

  • A service named CounterService conflicts with spring boot autowiring, so we had to rename it
  • Our TagLib tests using JUnit4 were failing with obscure errors; porting them to Spock fixed them
  • We have so many dependencies that running the application with gradle:bootRun fails with: CreateProcess error=206; the filename or extension is too long. Fortunately, adding grails { pathingJar = true } to build.gradle fixes the issue (see the snippet after this list)
  • Environment variables for gradle:bootRun are swallowed if not prefixed with “grails.”. We are using environment variables to customize running the application on the dev machines
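
For reference, the pathing jar workaround in build.gradle looks like this:

// build.gradle: work around the Windows command line length limit
grails {
    pathingJar = true
}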


The hard parts

The most painful part was that two central plugins we use are not available anymore: shiro and searchable.

Shiro

For shiro there are some initial ports that work well for our needs, so the challenge was mostly finding the best fitting of the forks on github. We went with the fork of Alin Pandichi and forked it ourselves to upgrade some version definitions.

Searchable becomes ElasticSearch

The real odyssey began when looking for a replacement for the abandoned searchable plugin. Fortunately there is the compelling ElasticSearch plugin, which uses almost the same API as the searchable plugin:

The plugin focus on exposing Grails domain classes for the moment. It highly takes the existing Searchable Plugin as reference for its syntax and behaviour.

Unfortunately, we were unable to get it to work with our project despite trying many different versions, so we decided to fork and fix it for us. The main problems were:

  • Essentially, it does not work properly with hibernate as a data store because it chokes on the Javassist proxies hibernate often creates for domain objects
  • An easy-to-fix concurrency issue
  • Converters that are not flexible enough

After a lot of debugging, a couple of fixes and the new ability to use a spring bean as a converter, we had search working smoothly and better than ever.

Wrapping it all up

The upgrade of our application to the newest incarnation of Grails was a rocky ride and took us quite some time.

On the other hand, the framework got a lot better. Especially the gradle build is much easier to manage than the previous build system.

So we are looking forward to a much better and more robust development experience in the future and hope for less revolutionary releases and easier upgrades.

The sorry state of Grails (Plugins)

We have been developing and maintaining a complex web application on Grails since the summer of 2008. Back then Grails had passed the 1.0 release milestone and was really hot. A good 10 years later the application is still in use and we are trying to upgrade from Grails 2.4 to 3.3.

Upgrading Grails – a rough ride

Similar to past upgrade experiences, the ride is not very smooth. Besides major changes like the much-welcomed switch to the gradle build system, interceptors instead of filters and a streamlined configuration, there is again a host of more subtle changes. The biggest problem for us, though, is the plugin situation.

It’s the plugins

In the past we had tough breaks like the selenium plugin being abandoned in favor of the much better geb for functional testing. That cost us a lot of work and many lost, not yet rewritten functional tests.

This time it seems especially hard because two of our central plugins are not readily available anymore:

  1. Apache Shiro Plugin
  2. Compass-based Searchable Plugin

1. Shiro authentication

There is still no official release of the shiro plugin for Grails 3.x. After some searching and after researching the initial port on github we decided to fork and maintain the most current forked version ourselves and try to work with it. Fortunately, it was relatively easy to integrate and to update some dependencies. Our authentication and authorization works at least as well as before and we do not face additional problems. Working with interceptors feels quite good, too.

2. Search

The situation is harder with search. Compass and the searchable plugin are dead – plain and simple. The replacement for grails is the elasticsearch plugin, which mostly adopted the API of the searchable plugin. Getting it to work is not that easy, though. There are different plugin versions depending on the grails 3 version you are targeting, each plugin version targets a specific elasticsearch server version, and so on. Oftentimes (like in the default configuration) you will need a matching mapper-attachment plugin that is not available on maven in newer versions. This is mentioned somewhere in the midst of the plugin documentation.

Furthermore, the plugin itself has some problems with hibernate proxies and concurrency, so here we have to mess around with the plugin code once more. Once everything works for us like before we will try to get our patches upstream.

Marching forward

The upgrade from 2.x to 3.x is the biggest (and best) step of Grails in the right direction. On the downside it places a lot of burden on application and plugin developers. That again further increases the cost of maintaining proven applications.

Right now we are close to a Grails 3.3 version of our application but have invested considerable effort into this upgrade.

Our current recommendation and practice is to not start new web applications based on the grails framework because there have been too many breaking changes and the maintenance cost is high. But we are keeping a close look at grails because the increased modularization and new options like the grails-react-profile may keep grails interesting in the future.

Using PostgreSQL for time-series data

The number of sensors and other devices that periodically collect data is ever growing. This advent of the internet of things (IoT) demands a way of storing and analyzing all this so-called time-series data. There are many options for such data – the most prominent being specialized time-series databases like InfluxDB or well-suited, nicely scaling databases like Apache Cassandra.

The problem is that you have to tailor your solution to one of these technologies, whereas SQL already exists, with mature database management systems (DBMS) and drivers/bindings for almost any programming language.

Why not use a plain SQL database?

Relational SQL databases are a mature and well-understood piece of technology, albeit not as sexy as all those new NoSQL databases. Using them for time-series data may not be a problem for smaller datasets, but sooner or later your ingestion and query performance will degrade massively. So in general it is not a good option to store all your time-series data in a traditional relational DBMS (RDBMS).

Why use PostgreSQL with TimescaleDB?

With the PostgreSQL extension TimescaleDB you get the best of both worlds: a well known query language, robust tools and scalability.

You access and manage your time-series database just like your ordinary PostgreSQL database. Almost everything including replication and backups will continue to work like before.

You do not have to deal with limitations of specialized solutions or learn a completely new ecosystem just for one aspect of your solution.
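
A small sketch of what that means in practice (the table and column names are made up; create_hypertable() is the central TimescaleDB function):

-- enable the extension in an ordinary PostgreSQL database
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- a plain SQL table for sensor readings
CREATE TABLE conditions (
  time        TIMESTAMPTZ      NOT NULL,
  device_id   TEXT             NOT NULL,
  temperature DOUBLE PRECISION
);

-- turn the table into a time-partitioned hypertable
SELECT create_hypertable('conditions', 'time');

-- queries remain plain SQL
SELECT device_id, avg(temperature)
FROM conditions
WHERE time > now() - INTERVAL '1 day'
GROUP BY device_id;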

The future

We are successfully using TimescaleDB in one of our projects and will continue to share tips and experience with this technology as its importance rises.


Using Ansible vault for sensitive data

We like using ansible for our automation because it has minimal requirements for the target machines and the surrounding infrastructure: you need nothing more than ssh and python with some libraries. In contrast to alternatives like puppet and chef you do not need special server and client programs running all the time and communicating with each other.

The problem

When setting up remote machines and deploying software systems for your customers you will often have to use sensitive data like private keys, passwords and maybe machine or account names. On the one hand you want to put your automation scripts and their data under version control and use them from your continuous integration infrastructure. On the other hand you do not want to spread your customers' secrets all around your infrastructure and definitely never ever put them in your source code repository.

The solution

Ansible supports encrypting sensitive data and using it in playbooks with the concept of vaults and the accompanying commands. Setting it up requires some work, but then usage is straightforward and works seamlessly.

The high-level conversion process is the following:

  1. create a directory for the data to substitute on a host or group basis
  2. extract all sensitive variables into vars.yml
  3. copy vars.yml to vault.yml
  4. prefix variables in vault.yml with vault_
  5. use vault variables in vars.yml

Then you can encrypt vault.yml using the ansible-vault command providing a password.

All you have to do subsequently is provide the vault password along with your usual playbook commands. Decryption for playbook execution is done transparently on-the-fly, so you do not need to care about decryption and encryption of your vault unless you need to update the data in there.

The step-by-step guide

Suppose we want to work on a target machine run by our customer, who provides us access via ssh. We do not want to store the ssh user name and password in our repository but want to be able to run the automation scripts unattended, e.g. from a jenkins job. Let us call the target machine ceres.

So first we set up the directory structure by creating a directory for the target machine called $ansible_script_root$/host_vars/ceres.

To log into the machine we need two sensitive variables: ansible_user and ansible_ssh_pass. We put them into a file called $ansible_script_root$/host_vars/ceres/vars.yml:

ansible_user: our_customer_ssh_account
ansible_ssh_pass: our_target_machine_pwd

Then we copy vars.yml to vault.yml and prefix the variables with vault_, resulting in $ansible_script_root$/host_vars/ceres/vault.yml with the following content:

vault_ansible_user: our_customer_ssh_account
vault_ansible_ssh_pass: our_target_machine_pwd

Now we use these new variables in our vars.yml like this:

ansible_user: "{{ vault_ansible_user }}"
ansible_ssh_pass: "{{ vault_ansible_ssh_pass }}"

Now it is time to encrypt the vault. ansible-vault asks for the vault password interactively, or you can point it to a password file (here a file called vault_pass.txt):

ansible-vault encrypt --vault-password-file vault_pass.txt host_vars/ceres/vault.yml

resulting in an encrypted vault that can be put into source control. It looks something like

$ANSIBLE_VAULT;1.1;AES256
35323233613539343135363737353931636263653063666535643766326566623461636166343963
3834323363633837373437626532366166366338653963320a663732633361323264316339356435
33633861316565653461666230386663323536616535363639383666613431663765643639383666
3739356261353566650a383035656266303135656233343437373835313639613865636436343865
63353631313766633535646263613564333965343163343434343530626361663430613264336130
63383862316361363237373039663131363231616338646365316236336362376566376236323339
30376166623739643261306363643962353534376232663631663033323163386135326463656530
33316561376363303339383365333235353931623837356362393961356433313739653232326638
3036

Using your playbook looks similar to before; you just need to provide the vault password using one of several options like specifying a password file, the ANSIBLE_VAULT_PASSWORD_FILE environment variable or interactive input (--ask-vault-pass). In our example we point ansible to the password file using the environment variable inline:

ANSIBLE_VAULT_PASSWORD_FILE=vault_pass.txt ansible-playbook -i inventory work-on-customer-machines.yml

After setting up your environment appropriately, e.g. with a password file and the ANSIBLE_VAULT_PASSWORD_FILE environment variable exported in your shell profile, your playbook commands are exactly the same as without using a vault.
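
For example (the path to the password file is an assumption):

# e.g. in your CI configuration or shell profile
export ANSIBLE_VAULT_PASSWORD_FILE=$HOME/.ansible/vault_pass.txt

# afterwards, playbook commands look exactly like before
ansible-playbook -i inventory work-on-customer-machines.yml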

Conclusion

The ansible vault feature allows you to safely store and use sensitive data in your infrastructure without changing too much about the way you use your automation scripts.