Be precise, round twice

Recently, after implementing a new feature in an application that outputs lots of floating point numbers, I noticed that the last digit was off by one for about one in a hundred numbers. As you might suspect at this point, the culprit was floating point arithmetic. This post is about a solution that turned out to be surprisingly easy.

The code I was working on loads a couple of thousand numbers from a database, stores them as doubles, does some calculations and outputs the results rounded half-up to two decimal places. The new feature I had to implement involved adding constants to those numbers. For one value, 0.315, the constant in one of my test cases was 0.80. The original output was “0.32” and I expected to see “1.12” as the new rounded result, but what I saw instead was “1.11”.

What happened?

After the fact, nothing too surprising – I had just hit decimals which do not have a finite representation as a binary floating point number. Let me explain, in case you are not familiar with this phenomenon: 1/3 happens to be a fraction which does not have a finite representation as a decimal:

1/3=0.333333333333…

Whether or not a fraction has a finite representation depends not only on the fraction, but also on the base of your number system. And so it happens that an innocent looking decimal like 0.8 = 4/5 has the following representation in base 2:

4/5=0.1100110011001100… (base 2)

So if you represent 4/5 as a double, the stored value will be slightly off. In my example, neither 0.315 nor 0.8 has a finite binary representation, and with those errors their sum turns out to be slightly less than 1.115, which yields “1.11” after rounding. On a very rough count, this problem appeared for about one in a hundred numbers in my output.
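
To see this for yourself, here is a minimal sketch (plain Java, nothing project-specific assumed): constructing a BigDecimal directly from a double reveals the exact value the double actually stores.

import java.math.BigDecimal;

public class DoubleInspection {
    public static void main(String[] args) {
        // the exact value stored for the literal 0.8 – not exactly 4/5
        System.out.println(new BigDecimal(0.8d));
        // the sum lands slightly below 1.115, hence "1.11" after rounding
        System.out.println(new BigDecimal(0.315d + 0.8d));
    }
}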

What now?

The customer decided that the problem should be fixed if it appears too often and does not take too much time to fix. When I started to think about an automated way to count the mistakes, I began to realize that I actually had all the information I needed to compute the correct output – I just had to round twice: once, say, at the fourth decimal place, and a second time at the required second decimal place:

import java.math.BigDecimal;
import java.math.RoundingMode;

new BigDecimal(0.8d + 0.315d)
    .setScale(4, RoundingMode.HALF_UP) // first, at the fourth place
    .setScale(2, RoundingMode.HALF_UP) // then, at the required second place

Which produces the desired result “1.12”.

If doubles are used, the errors explained above can only make a difference of about 10^-15. So as long as we just add a double to a number with a short decimal representation while staying in the same order of magnitude, we can reproduce the precise numbers from the doubles by setting the scale (which amounts to rounding) of our double as a BigDecimal.

But of course, this can go wrong if we use numbers that do not have a short, neat decimal representation like 0.315 does. In my case, I was lucky. First, I knew that all the input numbers have a precision of three decimal places. There are some calculations to be done with those numbers, but: all numbers stay roughly in the same order of magnitude, and there is only comparing, sorting and filtering – the only honest calculation is taking arithmetic means. And the latter only meant I had to increase the first scale from 4 to 8 to never see the error again.
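
To make this concrete, here is a hedged sketch of how the output function could encapsulate the double rounding – the method name roundTwice and the hard-coded scales are my choices for illustration, not taken from the original code base:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class Output {

    // round far to the right of the last reliable digit first, to absorb
    // the tiny floating point error, then round to the required two places
    static String roundTwice(double value) {
        return new BigDecimal(value)
            .setScale(8, RoundingMode.HALF_UP)
            .setScale(2, RoundingMode.HALF_UP)
            .toPlainString();
    }

    public static void main(String[] args) {
        System.out.println(roundTwice(0.315d + 0.8d)); // prints 1.12
    }
}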

So, this solution might look a bit sketchy, but in the end it solves the problem within the limited time budget, since the only change happens in the output function. And it can also be a valid first step of a migration to numbers with managed precision.

PSA: logrotate does not support time-travel

When diagnostics explode

A great many things can break in a software system. However, diagnostics breaking the rest of the software is especially ironic. These tools are supposed to help you find bugs and other problems after the fact, not become one.
The system in question was a small data recorder running on a BeagleBone Black (BBB), continuously recording measurements from specialized hardware.
These measurements are stored in an SQLite database and can be retrieved (and purged) via a very simple HTTP interface.
For context: the BeagleBone Black is a small GNU/Linux ARM device, not unlike a Raspberry Pi.

During development, we noticed that logfiles would quickly grow to hundreds of megabytes, which could potentially become a problem if the data in the SQLite database is not retrieved, and subsequently purged, for a while. So as a precaution, we set a file-size limit of 5 MB in /etc/logrotate.conf. We figured that should solve it, and during testing the logs never got very big again.
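
For reference, a stanza of the kind we used – the paths and the surrounding options here are illustrative, not a copy of our production config:

/var/log/daemon.log /var/log/syslog {
    size 5M
    rotate 4
    compress
    missingok
}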

Fun in production

Imagine my surprise when I saw a 1.4 GB /var/log folder that prevented any successful writes and subsequently corrupted the SQLite database. SQLite does not deal well with full disks, so this was a huge problem.

Two files in particular, daemon.log and syslog, were huge at ~950 MB and ~450 MB respectively – clearly bigger than 5 MB. logrotate was configured to run daily and weekly respectively. We were admittedly spamming the log files, but we estimated at most 50 MB of growth in either file per day, which should have limited the files to 50 MB and 350 MB. Obviously, it didn’t.

Time travel

The production environment has several special properties:

  1. The BBB is not connected to the internet.
  2. There are semi-frequent power losses.
  3. The BBB does not have a battery, so power-cycling it means its internal date is reset.

What all this amounts to is: the system doesn’t know the current time and can’t get it via NTP. And whenever the system starts again, it resumes from the fixed date the disk was flashed with.

logrotate, on the other hand, doesn’t like that one bit. It gets confused by files written in the future and, even worse, it remembers when it last ran – and it doesn’t run if that timestamp is in the future. So if the BBB runs nicely from January 1st to July 1st and then power-cycles, you’ll have to wait half a year for your next daily logrotate run. And every time it does run successfully, the problem gets worse.
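
You can see this for yourself in logrotate’s state file (on Debian-based systems typically /var/lib/logrotate/status; the path and exact format may vary between distributions):

> cat /var/lib/logrotate/status
logrotate state -- version 2
"/var/log/syslog" 2020-7-1
"/var/log/daemon.log" 2020-7-1

If these dates lie in the future relative to the system clock, logrotate simply does nothing.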

So, in general, it’s not a good idea to run a full GNU/Linux system without a working clock!

Using CSV data as external table in Oracle DB

If you want to import CSV data into an Oracle database you can use the SQL*Loader command line tool. You simply create a control file that describes how to load the data and then call the sqlldr command with the control file name as an argument:

example.ctl

LOAD DATA
INFILE example.csv
INTO TABLE example_table
FIELDS TERMINATED BY ';'
(ID, NAME, AMOUNT, DESCRIPTION)

> sqlldr username/password example.ctl

But there’s another way to load CSV data into an Oracle database: External tables.

External tables

Oracle’s external tables feature allows you to query data from a file on the filesystem like a regular database table.

First you have to create a directory in the file system and put your CSV file inside:

mkdir -p /path/to/directory

example.csv

1;Water;250
2;Beer;500
3;Wine;150

Now connect to the database as “SYS as SYSDBA”, define the directory as a database object and grant read/write access to your user:

CREATE OR REPLACE DIRECTORY
  external_tables_dir AS '/path/to/directory';
GRANT READ,WRITE ON DIRECTORY
  external_tables_dir TO example_user;

Now you can connect as example_user and create an external table for the CSV file:

CREATE TABLE example_table (
  id NUMBER(4,0),
  name VARCHAR2(50),
  amount NUMBER(8,0)
)
ORGANIZATION EXTERNAL (
  DEFAULT DIRECTORY external_tables_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ';'
  )
  LOCATION ('example.csv')  
);

The relevant part here is the ORGANIZATION EXTERNAL block. It references the directory and the CSV file inside the directory and allows you to specify format parameters of the CSV file such as record and field delimiters.

Now you can query the table like a regular table:

SELECT * FROM example_table;

ID NAME  AMOUNT
-- ----- ------
1  Water 250
2  Beer  500
3  Wine  150

Access information and errors such as bad or discarded records are stored in log files in the specified directory. The default names of these log files consist of the table name and an ID, e.g. example_table_12345.log, example_table_12345.bad and example_table_12345.dsc.
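
If you prefer deterministic file names, the ACCESS PARAMETERS clause of the CREATE TABLE statement above can name them explicitly. A sketch – the file names are arbitrary examples:

ACCESS PARAMETERS (
  RECORDS DELIMITED BY NEWLINE
  BADFILE external_tables_dir:'example.bad'
  LOGFILE external_tables_dir:'example.log'
  FIELDS TERMINATED BY ';'
)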

Docker runtime breaking your container

Docker (or container technology in general) is a great tool to clearly separate the concerns of developers and operations. We use it to simplify various tasks like building projects, packaging them for different platforms and deploying our software onto target machines like staging and production servers. All the specifics of the projects are contained and version-controlled in the Dockerfiles and compose files.

Our operations team only needs to provide infrastructure that is able to build container images and run them. This works great most of the time and removes a lot of the friction between developers and operations, where in the past snowflake servers needed to be set up and maintained. Developers often had to ask for specific setups and environments because each project had its own needs. That is all gone with this great container technology. Brave new world. Except when it suddenly does not work anymore.

Help, my deployment container stopped working!

As mentioned above, we use Docker to deploy our software to the target machines. These machines are often part of a corporate network protected by firewalls and only accessible using VPN. I already talked about how to use openvpn in a docker container for deployment. So the other day I was making a release of one of my long-running projects and pressed the deploy button for that project on our Jenkins continuous integration server.

But instead of just leaning back, relaxing and watching the magic work, the deployment failed and the red light lit up! A look into the job output showed that the connection to the target machine was refused. A quick check from the developer machine showed no problem on the receiving side: VPN, target machine and everything else was up and running as usual.

After a quick manual deployment, performed with care and my administrator hat on, I went on an investigation journey…

What was going on?

The deployment job had not changed for several months, the container image had not changed, and the rest of the infrastructure was working as expected. After more digging, debugging and narrowing down the problem, I found out that openvpn no longer worked in the container because of a strange permission denied error:

Tue May 19 15:24:14 2020 /sbin/ip addr add dev tap0 1xx.xxx.xxx.xxx/22 broadcast 1xx.xxx.xxx.xxx
Tue May 19 15:24:14 2020 /sbin/ip -6 addr add 2axx:1xxx:4:5xxx:9xx:5xxx:5xxx:4xxx/64 dev tap0
RTNETLINK answers: Permission denied
Tue May 19 15:24:14 2020 Linux ip -6 addr add failed: external program exited with error status: 2
Tue May 19 15:24:14 2020 Exiting due to fatal error

This hot trace made it easy to google and revealed the following issue on GitHub: https://github.com/dperson/openvpn-client/issues/75. The cause of all the trouble was a change in the behaviour of the Docker runtime. Our automatic updates had run over the weekend and installed a new package version of the runtime (see this excerpt from the apt history log):

containerd.io:amd64 (1.2.13-1, 1.2.13-2)

This subtle change broke my container! After some sacrifices to the whale gods I went on to implement the fix. Fortunately, there is an easy way to get it working like before: just pass the following command line switch to docker run and everything works as expected:

--sysctl net.ipv6.conf.all.disable_ipv6=0
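
Since our deployments are driven by compose files, the setting can also be persisted there instead of being typed on every docker run. A sketch – the service name and image are placeholders:

services:
  vpn-deploy:
    image: our/openvpn-client
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0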

As nice as containers are for abstracting away hardware, operating systems and other environment details, sometimes the container runtime shines through. It is just a shame that such things happen on minor releases or package release upgrades…

The ALARA principle in software engineering

The ALARA principle originates in radiation protection and stands for “As Low As Reasonably Achievable”. It means that you have to weigh the purpose of an action involving radiation against its disadvantages, like radiation damage or long-term risks. The word “reasonably” means that while some disadvantages are not avoidable, a practicable amount of protection should be in place to lower them. The principle calls for a balancing act: not without safety measures, but don’t overextend your means by trying to achieve a safety level that isn’t helpful anymore.
To put the ALARA principle into practice with an example: you shouldn’t get an X-ray every time you go to the dentist, but given enough time since the last one and reasonable doubt about a tooth, the X-ray examination will benefit your dental (and overall) health more than denying it would. It isn’t healthy by itself, but the information gained by it will be used to improve your health.

I learnt about the ALARA principle when my father (a nuclear physicist at heart) explained it to me in the context of the current corona pandemic: use protection like face masks and distance, but don’t stress yourself too much over that one time you grabbed a pen in the post office. While preparation and watchfulness are helpful, fear is detrimental to your mental health. And even the most resilient mind has bounded resources that can better be spent on constructive things instead of fear.

A fun fact from radiation protection is that at least three of the four main rules of protection can be applied to corona, too:

  • Distance yourself from the radiation source
  • Use appropriate protection gear
  • Avoid incorporation (keep the thing outside your body)
  • Limit your exposure time (this doesn’t fit as nicely, because the virus is probably not cumulative)

But how can we apply the ALARA principle to software engineering? I was instantly reminded of the “Thorough” rule of unit testing. In the book “Pragmatic Unit Testing”, Andy Hunt and Dave Thomas, the two original Pragmatic Programmers, state that good unit tests have to follow the A-TRIP rules. The T stands for “Thorough”, which is often misinterpreted as “test everything at least twice”. In reality, the rule states that:

  • all mission critical functionality needs to be tested
  • for every occurring bug, there needs to be an additional test that ensures that the bug cannot happen again

The first thing that meets the eye is that the rule doesn’t define a bug as a failure of your testing effort. It takes a bug that probably happened in production and caused some damage as a motivation to strengthen your test coverage in that particular area. The second part of the rule calls for directed, well-aimed testing effort. It is easy to follow because it has a clear trigger: A bug happened, now you have to write a test.

The first part of the rule is more complicated: what is mission critical functionality? And what does “tested thoroughly” mean in this context? Here, the ALARA principle can help us. The bug rate in the important parts of your code should be as low as reasonably achievable. “Reasonably achievable” is defined by the resources at your disposal (like time to market), your expertise in testing and the potential damage that could happen if something in your code goes wrong.

If the potential damage is high or even life-threatening, your reasonable effort should be much higher than if the most critical thing that happens is a 15 minute downtime while you restart the server. There are use cases where even 15 minutes mean subsequent damage, but most software is written for a more relaxed context.

I’ve always found the “Thorough” rule of good unit tests pleasant and comforting: If you made reasonable effort to test your most important code and write a test for every bug you or your users encounter, you can say that your bug rate is ALARA – “As Low As Reasonably Achievable”. And that is good enough for most cases.

What was your first thought when you heard about the ALARA principle? Tell us in the comment section!

Still thinking about managing time…

Now that I’ve actually read what I wrote a few weeks ago 😉 … I’ve obviously had some time to reflect: about more models of managing your time, about integrating such models into your daily life, about their limits and, of course, about the underlying force, the “why” behind all that.

While trying to adjust to the spacious world of the home office, I came to notice that time management itself probably wasn’t the actual issue I was trying to improve. Sure enough, there are several antipatterns of time mismanagement that can easily lead to excessive spending of time, something you can improve with simple home office installations, e.g. a clock clearly visible from your point of view – and, sensibly, one clearly visible from your coffee machine… These are about making time perceptible, especially when you’re not the type of person with absolutely fixed times for lunch, or similarly mundane concepts.

But then, there’s a certain limit to the amount of improvement you can easily gain from managing time alone. Sure, you could try to apply every single life hack you find online, but the internet isn’t very good at accounting for differences in personal psychology. What you can do is establish a few recommendations at a time, shake up established habits – and then evaluate their effect, because not everything is pure gold. For example, my last blog post pointed to the Pomodoro technique: there are classes of work that can easily be scheduled into 25-30 minute blocks, but there are others where this restriction leads to more complications than it solves. Another “life hack” the internet throws at you at every opportunity is that there is a certain super-best time to set your alarm clock to. I would advise you to shuffle this once in a while to find out whether there’s some setting that is best for you – but never think that you need, say, the same rising pattern as Elon Musk in order to finish your blog post in time. Just keep track of yourself. How would other people know your default mode?

Now, overall, each day feels a bit different, and that’s a function of your emotional state as well as some generic randomness whose effects on your productivity are no less important than any set of rules you could adhere to. So, instead of focussing on managing your time on a given day, we could think of actually trying to manage your productivity. But then again, “productivity management” sounds so abstract that the handles we would think of concern stuff like

  • what you eat
  • how you sleep
  • how much sport you’re willing to do
  • how much coffee you consume

and other very profound parameters of your existence. That’s also something you can play around with a bit until you find an obvious optimum. (Did you know that the optimum amount of breakfast beer for you is likely to be zero? … :P)

However, if you keep fine-tuning every single aspect for the rest of your life like a maniac, you risk losing yourself in marginal details without gaining anything.

So if you’re still reading – we now return to pinning down what it actually is that we want to manage. And for me, the best model is thinking about managing motivation. Not the general “I guess I am better off with a job than without one” motivation, but the very real daily motivation that makes you jump from one task into the next one. The one that drives maximum output from your given time without you actually having to manage the time itself. There are always days of unsteady condition, but by avoiding systematic interference with your motivation, you can maximize the output of those days, too.

At a very general level (and as outlined above), one crucial ingredient in motivational management, for me, is following a self-made schedule. By which, of course, I mean: you arrange your day to cooperate with your colleagues and customers, but it has to feel like as much of a voluntary choice as possible within your given circumstances.

Then, there’s planning ahead. Sounds trivial. But you can be the type of person who plans several weeks in advance, or the one who is actually unsure about what happens next Monday – the common denominator is avoiding having to worry about the default course of events for a few days in a row. We all know that tasks like to fill out more time than they actually require, so you will get some backlog one way or another; but if you manage to feel like your time is full of doing something worthwhile, it’s way easier to start your day at a given moment than when you try to arrange tasks of varying importance on the fly.

One major point – which I was absolutely amazed by, once I chose to believe it – is that you can stop a task at many points without losing your train of thought, not just when it’s finished. So often, one fears the expected loss of concentration on realizing that a single task will not fit into a limited time box. But unless you are involuntarily interrupted, and unless you somehow give in to the illusion that the brain is capable of multi-tasking*, you can e.g. consciously shift whole subsections of a given task to the next morning and then quickly return to your old concentration.

On the other hand, there’s the concept of Maker vs. Manager cycles. Briefly:

  • Someone in “Manager” mode has a lot of (mostly) smaller issues, spread over many different topics, often only loosely connected, often urgent, and sometimes without intense technical depth. The Manager gains his (“/her” implied henceforth) motivation from getting a lot of different topics done in a short time, and thus benefits from a tight, low-overhead schedule. He can apply artificial limits to his time boxes and apply the Pareto rule thoroughly (“about 80% of any result usually stems from about 20% of the tasks”).
  • Someone in “Maker” mode, however, probably has a more constrained set of tasks – like a programmer trying to construct a new feature with clearly defined requirements, or a number of high-attention issues wisely bundled into blocks of similar type – and will benefit from being left alone for some hours.

For a more thorough discussion, I’ll gladly point to Paul Graham’s essay “Maker’s Schedule, Manager’s Schedule”, which Claudia thankfully left in our comment section last time 😉

Which brings me to my final point. I found that one of the strongest keys to daily motivation lies in the fundamental acceptance of these realities. As outlined above, there just are different subconscious modes, and different external circumstances, that drive your productivity on a larger scale than you can manipulate. If you have already adopted a set of measures and found they did a good job for you, you had better not worry if there’s some kind of blue day where everything seems to lead nowhere. You can lose more time by over-optimization than you could ever gain from super-finely-tuned efficiency. You probably already know this, but do you also embrace it?

(* in my experience, and while I sometimes still catch myself trying, multi-tasking is not an existing thing. If you firmly believe otherwise, be sure to drop me a note in the comments.)

Math development practices

As a mathematician who recently switched to almost full-time software development, I often compare the two fields. During the last years of my mathematics career I was in the rather unique position of doing both at once – developing software of some sort and doing research in pure mathematics. This was due to a quite new mathematical discipline called Homotopy Type Theory, which uses a different foundation than the mathematics you might have learned at a university. While it has been possible for quite some time to check formalized mathematics using computers, the usual way to do this entails a crazy amount of work if you want to apply it to recent mathematics. By some lucky coincidence this was different for my area of work, and I was able to write down my math research notes in the functional language Agda and have them checked for correctness.

As a disclaimer, I should mention that what I mean by “math” in this post is very far from applied mathematics, and very little of the kind of math I talk about is implemented in computer algebra systems. So this post is about looking at pure, abstract math as if it were a software project. Of course, this comparison is a bit off from the start, since there is no compiler for the math written in articles, but it is a common belief that it should always be possible to translate correct math to a common foundation like Zermelo-Fraenkel set theory – and that, at least, is something we can type-check with software (e.g. Isabelle).

Refactoring is not well supported in math

In mathematics, you want to refactor what you write from time to time, pretty much in the same way and for the same reasons as you would while developing software. The problem is that you do not have tools which tell you immediately whether your change introduces bugs, like automated tests and compilers checking your types.

Most of the time, this does not cause problems. Judging from my experience with refactoring software, when a refactoring breaks something detected by a test or the compiler, it is usually just a matter of adjusting some details. And, in fact, I would conjecture that almost all math articles contain exactly that kind of error – which is no problem at all, since the mathematicians reading those articles can fix them or won’t even notice.

As with refactoring in software development, what does matter are the rare cases where it is crucial that some easy-to-overlook details match exactly. And this is a real issue in math – sometimes a statement gets reformulated while proving it, and the changes are so subtle that you do not even realize you have to check whether what you prove still matches your original problem. The lack of tools that help you catch those bugs is something that could really help math – but the math has to be formalized to have tools like that, and so far that is not feasible for most of it.

Retrospectively, being able to refactor my math research was the biggest advantage of having fully formal research notes in Agda. There is no powerful IDE like the ones used in mainstream software development, just a good Emacs mode. But being able to make a change and check afterwards whether things still compile was already enough to let me do things I would not have attempted in pen-and-paper math.

Not being able to refactor might also be the root cause of other problems in math. One which would be really horrible for a software project is that sometimes important articles do not contain working versions of the theorems used in some field of study, and you essentially need to find an expert in the field to tell you things like that. In a software project, that would mean you have to find someone who allegedly made the code base run at some time in the past by applying lots of patches which are not in the repository – and which he hopefully is still able to find.

The point I wanted to make so far is: in some respects, this comparison looks pretty bad for math, and it becomes surprising that it works in spite of these deficiencies. So the remainder of this post is about the things on the upside that make math check out almost all the time.

Math spent person-centuries on designing its datatypes

This might be exaggerated, but it is probably not that far off. When I started studying math, one of my lecturers said that “inventing good definitions is no less important than proving new results”. Today, I could not agree more: immense work went into the definitions in pure math, and they allowed me to solve problems I would have been too dumb to even think about otherwise. One analogue in programming is finding the ‘correct’ datatypes, which, if achieved, can make your algorithms a lot simpler. Another analogue is using good libraries.

Math certainly reaps a great benefit from its well-thought-through definitions, but I must also admit that the comparison is pretty unfair, since pure mathematicians usually take the freedom to choose nice things to reason about. But this is a point to consider when analysing why math still works, even though some of its practices would doom a software project.

I chose to speak about ‘datatypes’ instead of, say, ‘interfaces’, since I think that mathematics does not make much use of polymorphism as I learned it in school around 2000. Instead, I think, in this respect mathematical practice is more in line with a data-oriented approach (as we saw last week here on this blog): in math, if you want your X to behave like a Y, you usually give a map that turns your X into a Y, and then you use Y.

All code is reviewed

Obviously, as everywhere in science, there is a peer-review process if you want to publish an article. But there are actually more instances of things that can be called a review of your math research. Possibly surprising to outsiders, mathematicians talk a lot about their ideas with each other, and those talks can be even closer to code reviews than the actual peer reviews are. This might also be compared to pair programming. These review processes are also used to determine success in math. Or, more to the point: your math only counts if you manage to communicate your ideas successfully and convince your audience that they work.

So having the same processes in software development would mean that you have to explain your code to your customer, who would be a software developer like yourself, and he would pay you for every convincing implementation idea. While there is a lot of nonsense in that thought, please note that in a world like that, you could not get paid for a working 300-line block of code that nobody understands. On the other hand, you could get paid for understanding the problem your software is supposed to solve, even if your code fails to compile. And in total, the interesting thing here for me is that this shift in incentives, and the emphasis on practices that force you to understand your code by communicating it to others, can save a very large project with some quite bad circumstances.

Generating Rows in Oracle Database

Sometimes you want to automatically populate a database table with a number of rows. Maybe you need a big table with lots of entries for a performance experiment or some dummy data for development. Unfortunately, there’s no standard SQL statement to achieve this task; there are different possibilities for the various database management systems. For the Oracle database (10g or later) I will show you the simplest one I have encountered so far. It actually “abuses” an unrelated functionality: the CONNECT BY clause for hierarchical queries, in combination with the DUAL table.

Here’s how it can be used:

SELECT ROWNUM id
FROM dual
CONNECT BY LEVEL <= 1000;

This SELECT creates a result set with the numbers from 1 to 1000. You can combine it with an INSERT to populate the following table with rows:

CREATE TABLE example (
  id   NUMBER(5,0),
  name VARCHAR2(200)
);

INSERT INTO example (id, name)
SELECT ROWNUM, 'Name '||ROWNUM
FROM dual
CONNECT BY LEVEL <= 10;

The resulting table is:

ID  NAME
1   Name 1
2   Name 2
3   Name 3
...
10  Name 10

Of course, you can use the incrementing ROWNUM in more creative ways. The following example populates a table for time series data with a million values forming a sine curve with equidistant timestamps (in this case 15 minute intervals) starting at a specified time:

CREATE TABLE example (
  id    NUMBER(7,0),
  time  TIMESTAMP,
  value NUMBER
);

INSERT INTO example (id, time, value)
SELECT
  ROWNUM,
  TIMESTAMP'2020-05-01 12:00:00'
     + (ROWNUM-1)*(INTERVAL '15' MINUTE),
  SIN(ROWNUM/10)
FROM dual
CONNECT BY LEVEL <= 1000000;

The first rows of the resulting table are:

ID  TIME              VALUE
1   2020-05-01 12:00  0.099833
2   2020-05-01 12:15  0.198669
3   2020-05-01 12:30  0.295520
...

As mentioned at the beginning, there are other row generator techniques to achieve this. But this one is the simplest so far, at least for Oracle.
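
For comparison, a sketch of one such alternative – recursive subquery factoring, available since Oracle 11g Release 2:

WITH numbers (n) AS (
  SELECT 1 FROM dual
  UNION ALL
  SELECT n + 1 FROM numbers WHERE n < 1000
)
SELECT n FROM numbers;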

Updating Grails 3.3.x to 4.0.x

We have a long history of maintaining a fairly large Grails application which we took from Grails 1.0 to 4.0. We sometimes decided to skip intermediate releases of the framework because of problems or missing incentives to upgrade. If you are interested in our experiences of the past, feel free to have a look at our earlier stories.

This is the next installment of our journey to the latest and greatest version of the Grails framework. This time the changes do not seem as intimidating as going from 2.x to 3.x. There are fewer moving parts, at least from the perspective of an application developer, where almost everything stayed the same (Gradle build system, YAML configuration, Geb functional tests etc.). Under the hood there are of course some bigger changes, like new major versions of GORM/Hibernate and Spring Boot and the switch to Micronaut as the parent application context.


The hurdles we faced

  • For historical reasons our application uses flush mode “auto”. This still does not work today, see https://github.com/grails/grails-core/issues/11376
  • The most work-intensive change is that Hibernate 5 requires you to perform your work inside transactions. So we have dozens of places where we need to add missing @Transactional annotations to make especially the saving of domain objects work (see the sketch after this list). Therefore we essentially have to test the whole application.
  • The handling of HibernateProxies again became more opaque, which led to numerous IllegalArgumentExceptions (“object is not an instance of declaring class”). Sometimes we could move from generated hashCode()/equals() implementations to the Groovy annotation @EqualsAndHashCode (actually a good thing), whereas in other places we did manual unwrapping or switched to eager fetching to avoid these problems.
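
A sketch of the kind of change this meant in practice – the service and domain class are made-up examples, the annotation is the real one from GORM:

import grails.gorm.transactions.Transactional

@Transactional
class PersonService {

    void rename(long id, String newName) {
        def person = Person.get(id)
        person.name = newName
        // without the surrounding transaction, this save() is where
        // Hibernate 5 now bails out
        person.save(flush: true)
    }
}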

In addition we faced minor gotchas like changed configuration entries. The one that cost us some hours was the subtle change from server.contextPath to server.servlet.context-path, but nothing major or blocking.
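
In the application.yml this boils down to – the context path value being a made-up example:

server:
    servlet:
        context-path: '/myapp'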

We also had to convert many of our unit and integration tests to Spock and the new Grails Testing Support framework. This makes the tests more readable anyway and feels more fruitful than trying to debug the old-style Grails Test Mixin based tests.

Improvements

One major improvement for us in the Grails ecosystem is the good news that the Shiro plugin is again officially available, maintained and cleaned up (see https://github.com/nerdErg/grails-shiro). Now we do not need to use our own poor man’s port anymore.

Open questions

Regarding the proclaimed performance improvements and reduced memory consumption we do not have final numbers or impressions yet. We will deliver results on this front in the future.

More important is an inconvenience we are still facing regarding hot code reloading: it does not work for us, even using OpenJDK 8 with the old spring-loaded mechanism. The new restart style of Micronaut/Spring Boot is not really productive for us because the startup times can easily reach the minute range even on fast hardware.

Pro-Tip

My hottest advice for you is this one:

Create a fresh Grails 4 app and compare central files like application.yml and build.gradle to get up to the state of the art.

Conclusion

While this upgrade still was a lot of work and meant many places had to be touched, it was a lot smoother than many of the previous ones. We hope that things improve further in the future, as the technological stack is now up-to-date and much more mature than in the early days…

Thoughts about Time Management

Now that we live in a time where many project-driven jobs have been forced out of their natural habitat and into the home office, one might ask oneself more than ever: “How do I get the most out of my day?” This is especially interesting when coordination within a team cannot happen ad hoc – as it would be possible in an open office environment – but has to fit into every involved person’s schedule.

So, maybe, within these constraints it is helpful to remind oneself of some key principles of how humans and their tasks interact with each other. The following compilation of ideas is largely based on fragments acquired by the author and is in no way fresh, groundbreaking research in any field 😉

1. Parkinson’s Law and Time Boxes.

Sounding like absolutely commonplace knowledge, it is often circulated that “work expands so as to fill the time available for its completion” (originally published in 1955). This can be understood by acknowledging that the human mind is usually quite capable of abstraction and, therefore, of solving hypothetical problem statements: given one task, one can find an arbitrary depth of sub-tasks and scent out complications between them, all of which stands in the very way of what one might call one’s claim to “perfection”, whatever that means.

In theory, however, this tendency can be confined somewhat by thinking of any task as living only in a static, smallest-possible “time box”. This aims at removing the buffer resources (or “cushions”) around any single task. By defining such a time box first with words that are easy to grasp, and secondly by allocating the minimum amount of time one can barely imagine, one can prevent one’s mind from wandering off into these depths of abstraction (e.g. to “make my home great again” could involve several complicated and hitherto unknown steps, but to “empty the trash bins in 10 minutes” feels quite palpable, even if it’s only a minor aspect of the overall picture).

For singular tasks that need to be split over several time boxes, one might use the idea of the “pomodoro technique” as guidance; it estimates that productive, uninterrupted work usually happens in intervals of about 25 minutes. In any case, the typical work day should be booked out completely with time slots, as any added “buffer zone” will probably directly calm the mind in every preceding time box (“relax, I don’t have to respect the end of my time box that much, that’s what the cushions are for”). It might even be preferable to book out the whole work week in advance.

The point of all this is not that one can always manage to fulfil one’s time box, but to give quick and emotional feedback to the mind: “Stay focused or postpone the current time box, but don’t enter a state of limbo in which you feel like you are working on the issue at hand but actually create new tasks, and with impunity.” On the other hand, if one manages to calibrate the time box duration to its projected task, one can very well thrive on the motivation resulting from finishing this task, amplifying focus in a most natural way. Which leads over to…

2. “Eat the Frog”.

The completion of a task usually leads to one of several effects: as mentioned above, one can feel a motivational push and an immediate drive to tackle the next problem; or it can lead to a temporary deflation due to the relief of the nearing-the-end-of-time-box stress. Also, some tasks might feel so routine that they lead to no excitement or fatigue at all. Either way, it is nearly impossible for anyone to predict at 9 am which state of mind they will be in at 3 pm. The key to upholding a certain level of progress, then, is to schedule the most off-putting item at the very beginning of every undertaking. This holds both in the general sense of “Risk First”, i.e. “tackle the problem that is potentially hardest to contain first”, and on a daily basis, i.e. “start your day with the most annoying problem (the most sluggish blocker)”. This is described as metaphorically eating a frog (indeed, it’s a metaphor – it won’t really help you if you actually devour one).

Solving the “maximum risk time box” first has the advantage that if the project unexpectedly starts to go awry at this early stage, there is still plenty of time to communicate with one’s customer, to outsource some duties to other contributors or to generally refine the original vision of the schedule. Of course, the whole point of time boxes is not just to put on artificial pressure for its own sake, but to lay out a road map to any given goal – just as when hiking, the need for some re-orientation is usually not problematic if it is identified early enough.

Additionally, tackling the “most annoying thing of the day” first adds the pleasure of feeling progress early in the day. Now that the disgusting frog has been eaten and is out of the way, you can view any other task of the day as a comparatively low-threshold obstacle: easier to put on your plate, easier to digest (it’s still a metaphor). Furthermore, you can actually measure the progress in the superordinate picture by logging the successfully eaten daily frogs.

To be continued…

Of course, such conceptions aren’t very good advice if they don’t align well with a specific project or the general psychology of their bearer. You might still need to flexibly adjust for interruptions (e.g. video calls), and there are still many complications in which future tasks depend on the results of present tasks, so don’t waste too much time trying to devise the perfect vision of time boxes and frogs-to-be-eaten. With enough practice, however, this mindset can very well be useful in giving some feedback of accomplishment back to the architect.

I’ll keep you updated when some undeniable drawbacks catch my eye.