Leibniz would have known how to override equals

Equality is a subtle and thorny business, in programming as well as in pure mathematics, physics and philosophy. Probably every software developer has been annoyed at some point by the unexpected behaviour of some ‘equals’ method or the corresponding operators and assertions. There are lots of questions that depend on context, and answering them for some particular context might cost some pain and time – here is a list of examples:

  • What about objects that come with database ids? Do the ids have to match for the objects to be equal?
  • Are dates with time zones equal if they represent the same instant but have a different time zone?
  • What about numbers represented by functions that compute digits up to a given precision?

Leibniz’ Law

This post is about applying an idea of Leibniz I like to the problem of finding good answers to the questions above. It is called “Leibniz’ law” and can be phrased as a definition or characterization of equality:

Two objects are equal if and only if they agree in all properties.

If you are not familiar with the phrase “if and only if”: it comes from mathematics and is a shorthand for saying that two things are true:

  • If two objects are equal, then they agree in all properties.
  • If two objects agree in all properties, then they are equal.

Leibniz’ law is sometimes stated using mathematical symbols like “\forall”, but this would be beside the point of this post – what those properties are will not be defined in a formal mathematical way. If I am in doubt about equality while programming, I am concerned about properties relevant to the problem I want to solve. For example, in almost all circumstances I can imagine, a relevant property of a list would be its length, but not the place in the computer’s memory where it is stored.

But what are relevant properties in general? For me, such a property is the result of running some piece of meaningful code. And what meaningful code is depends on your judgement of how the object in question should be used. So in total, this boils down to the following:

Two instances of a type are equal if and only if they yield the same results in any meaningful piece of code.

Has this gotten us anywhere? My answer is yes, since the question about equality was reduced to a question about the use cases of a type, which might be a starting point of defining a new type anyway.
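
To make this concrete, here is a small Java sketch (the Article class and its properties are made up for illustration). The database id is deliberately left out of equals, because no meaningful use of an Article in this imagined context depends on it:

import java.util.Objects;

public final class Article {
    private final Long databaseId;  // technical detail, not a relevant property
    private final String name;      // relevant in meaningful code
    private final int priceInCents; // relevant in meaningful code

    public Article(Long databaseId, String name, int priceInCents) {
        this.databaseId = databaseId;
        this.name = name;
        this.priceInCents = priceInCents;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof Article)) return false;
        Article that = (Article) other;
        // two articles agree in all relevant properties => they are equal
        return priceInCents == that.priceInCents && Objects.equals(name, that.name);
    }

    @Override
    public int hashCode() {
        // must be consistent with equals, so the id stays out here as well
        return Objects.hash(name, priceInCents);
    }
}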

Turtles all the way down

Please take a moment to note what a sneaky beast equality can be: Above I explained equality by using equality – right where I said “same results”. It is really hard to make statements about anything at all without using some notion of equality in some way. Even in programming, where you can freely define when two objects are equal, you can very well forget that you are using a system, namely your programming language, which usually already comes with an intricate notion of equality defined on the syntax you are using to define your notion of equality…

On a more practical note, that means that messed up notions of equality usually propagate if you define new kinds of objects from known ones.

Relation to Liskov’s Principle

With our above definition, we are very close to an informal interpretation of Liskov’s Substitution Principle, which we can rephrase as:

In all meaningful code for a type, an instance of a subtype has to behave the same way.

For comparison, the message of this post stated in the same tongue:

In all meaningful code for a type, two equal instances of the type should behave the same way.
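
A small Java sketch to make the parallel concrete (the helper method is made up; java.util.List just serves as a familiar example of a type with subtypes):

import java.util.List;

class MeaningfulCode {
    // meaningful code for the List type: appending grows the list by one
    static <T> void appendAndCheck(List<T> list, T element) {
        int sizeBefore = list.size();
        list.add(element);
        assert list.size() == sizeBefore + 1;
        // Liskov: every List subtype passed in here has to behave this way;
        // equality: two equal lists have to behave the same way in here, too
    }
}

A read-only list wrapper that throws on add is the classic example that strains this contract.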

Mastering programming like a martial artist

Gichin Funakoshi, who is sometimes called the “father of modern karate”, issued a list of twenty guiding principles for his students, called the “Shōtōkan nijū kun”. While these principles as a whole are directed at karate practitioners, many of them are very useful for other disciplines as well.
In my understanding, lifelong learning is a fundamental pillar of both karate and programming, and many of those principles focus on “learning” as a more fundamental action. I’d like to focus on a particular one now:

Formal stances are for beginners; later, one stands naturally.

While, at first, this seems to focus on stances, the more important concept is progression, and how it relates to formalities.

Shuhari

It is a variation of the concept of Shuhari, the three stages of mastery in martial arts. I think they map rather beautifully to mastery in programming too.

The first stage, shu, is about learning traditions and movements, and how to apply them strictly and faithfully. For programming, this is learning to write your first programs with strict rules, like coding conventions, programming patterns and all the processes needed to release your programs to the world. There is no room for innovation; this stage is about absorbing the knowledge and practices that already exist. While learning these rules may seem like a burden, the restrictions are also a gift: because it is always clear what is right and what is wrong, decisions are easy.

The second stage, ha, is about breaking away from these rules and traditions. Coding conventions, programming patterns etc. are still followed. But more and more, exceptions are allowed, and even encouraged, when they serve a greater purpose. A hack might no longer seem so bad when you consider how much time it saved. Technical debt is no longer just avoided, but managed instead. Decisions are a little harder here, but there are always the established conventions to fall back on.

The final stage, ri, is about transcendence. Rules lose their inherent meaning to purpose. Coding conventions, best practices and patterns can still be observed, but they are seen for what they are: merely tools to achieve a goal. As such, all conventions are subject to scrutiny here. They can be ignored, changed or even abandoned completely if necessary. This is the stage for true innovation, but also for mastery. To make decisions on this level, a lot of practice and knowledge and a bit of wisdom are certainly required.

How to use this for teaching

When I am teaching programming, I try to find out what stage my student is in, and adapt my style appropriately (although I am not always successful in this).

Are they beginners? Then it is better to teach rigid concepts. Do not leave room for options, do not try to explore alternatives or trade-offs. Instead, take away some of the complexity and propose concrete solutions. Set up rigid guidelines for how to code, how to use the IDEs, how to use tools, how to communicate. Explain exactly how they are to fulfill all their tasks. Taking decisions away will make things a lot easier for them.

Students in the second, or even in the final stage, are much more receptive to freedom. While students in the second stage will still need guidance in the form of rules and conventions, those in the final stage will naturally adapt or reject them. It is much more useful to talk about goals and purpose with advanced students.

Querying gaps between date ranges in Oracle SQL

Let’s say we have a database table with date ranges, each range designated by a RANGE_START and a RANGE_END column:

CREATE TABLE date_ranges (
  range_start DATE,
  range_end   DATE
);

Assume the table contains the following rows:

RANGE_START	RANGE_END
-----------	---------
05/02/2020	01/04/2020
02/04/2020	15/04/2020
16/04/2020	01/05/2020
01/06/2020	20/06/2020
21/06/2020	01/07/2020
02/07/2020	31/07/2020
05/08/2020	30/08/2020

We are now interested in finding the gaps between these date ranges. If we look at this example data set we can see that there are two gaps:

RANGE_START	RANGE_END
05/02/2020	01/04/2020
02/04/2020	15/04/2020
16/04/2020	01/05/2020
-- gap --
01/06/2020	20/06/2020
21/06/2020	01/07/2020
02/07/2020	31/07/2020
-- gap --
05/08/2020	30/08/2020

What would be the SQL query to find these gaps automatically? With standard SQL this would be a difficult task. However, Oracle SQL provides special functions, called analytic functions, that greatly help with this task. Analytic functions compute an aggregate value based on a group of rows. They differ from aggregate functions in that they return multiple rows for each group. In this case we will use the analytic functions MAX and LEAD:

SELECT * FROM (
  SELECT
    -- the latest end date so far, plus one day: the first day of a potential gap
    MAX(range_end)
      OVER(ORDER BY range_start) + 1 gap_start,
    -- the start date of the next range, minus one day: the last day of the gap
    LEAD(range_start)
      OVER(ORDER BY range_start) - 1 gap_end
  FROM date_ranges
) WHERE gap_start <= gap_end;

The result of this query is exactly the set of date range gaps we are interested in:

GAP_START	GAP_END
---------	-------
02/05/2020	31/05/2020
01/08/2020	04/08/2020

Note that the MAX function in the query is the analytic MAX function, not the aggregate MAX function, as indicated by the OVER keyword with its analytic clause. It operates on a sliding window. The LEAD analytic function gives you access to the following row from the current row without using a self-join. Also note that adding or subtracting 1 to or from a DATE value in Oracle shifts it by one day.
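
For illustration, here is LEAD on its own against the same table (a sketch; the column alias next_start is arbitrary):

-- for each range, show the start date of the following range
SELECT
  range_start,
  range_end,
  LEAD(range_start) OVER (ORDER BY range_start) next_start
FROM date_ranges;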

Building the right software

When we talk about software development, a lot of the discussion revolves around programming languages, frameworks and the latest in technology.

While all of the above, and also the knowledge and skill of the developers, certainly matter a great deal for the success of a software project, the interaction between the involved individuals is highly undervalued in my opinion. Some weeks ago I watched a great talk connecting airplane crashes and interaction in a software development team. The golden quote for me was certainly this one:

“Building software takes technical skill, but building the right software takes human interaction and lots of it”

Nickolas Means (“How to crash an airplane”, The Lead Developer UK 2016)

I could not word it better and it matches my personal experience. Many, if not most, of the problems in software projects are about human communication, values, feelings and opinions, and not about technicalities.

In his talk Nickolas Means focuses on internal team communication and I completely agree with him. My focus as a team lead shifted a lot from the technical side to fostering diversity, opinions and communication within the team. I am less strict in enforcing certain rules and styles in a project. I think this leads to more freedom and better opportunities for experimentation and exploration of ways to approach a problem.

Extend it to your customer

As we work on projects in different domains with a variety of customers, we are working really hard to understand them. Building up open, trustworthy and stable communication is key to forming a fruitful and productive partnership in a (software) project. It will help you to produce great software that meets the customer’s needs instead of just great software. It may also help you in situations where you mess up or technical problems plague the project.

The aspect of human interaction in software projects rightfully has its place in the Manifesto for Agile Software Development:

Through this work we have come to value:

Individuals and interactions over processes and tools

The Authors of the Agile Manifesto

Almost 20 years later this is still undervalued, and many software developers still lean way too much towards the technical side. We are striving to steadily improve our skills on the human interaction side and think it proves fruitful every time we succeed.

I hope that more and more software developers will grasp the value of this shifted viewpoint and that it will increase the quality and value of the software solutions provided to all users.

Maybe it will make working in this field friendlier for not-so-tech-savvy people and allow for more of the much-needed diversity in tech.

Your most precious resource

When I was a young student, living on my own for the first time, my most scarce resource seemed to be money. Money’s too tight to mention was (and probably still is) a motto that every student could understand. So we traded our time for money and participated in experiments and underpaid student assistant jobs.

Soon after I graduated, money began to accumulate. I have a rather frugal lifestyle, so my expenses didn’t suddenly surge. Instead, my perception of time and money shifted: money isn’t the bottleneck (anymore), time is. Suddenly, time was much more valuable than money and I would gladly pay money if that meant some hours of additional leisure time or one less problem to tend to. It seems that time is the most precious thing there is.

Traditional economic wisdom supports this idea: “Time is money” is true, but the reverse is not – “money is time” doesn’t cut it. The richest man on earth still has only as much time available to him as anybody else.

If time is the most precious resource, the drive to automation as a time-saving effort can be understood directly. Automation also reduces learning costs if you scale horizontally by parallelizing production.

But soon after I had enough money to optimize my time, I hit another resource bottleneck. Suddenly, I had more time on hand than attention to spare. It turns out that attention is the most valuable resource you can spend. It is just entangled enough with time that it’s hard to distinguish which runs out first. If you reflect a bit, it becomes obvious. The term “to pay attention” is pretty spot on.

Let me take up the thought of automation again, this time in the domain of software development, in the form of automated tests. Here, automation is not in its most profitable shape. You don’t gain much from scaling your tests horizontally. If you don’t change the code, it doesn’t matter if your tests run once or a thousand times in parallel, the result will be the same (except if you run hardware-dependent tests, but even then you probably don’t gain much after covering all hardware variations).

You also don’t gain much from scaling your tests vertically by making them run faster and faster. It sure helps to have them run continuously in the background (think of a user-modded compiler – look into Continuous Test Runners if you are interested), but after one test run per meaningful change, the profit hits a limit.

So why else is automated testing an economically sound practice? My take on it is: delegated attention. You write a small piece of software (your test) that extends your attention area onto code that will probably fade from your own attention pretty soon. Automated tests provide automated attention in a sustainable manner (except for those tests that cry wolf for no good reason; those are attention sinks and should be removed from your portfolio). Because of the automation, this delegated attention never fades – even after many years, the test keeps a close eye on “its” code.

If you are a developer, you have automation and zero-cost copying (aka parallelization without upfront costs) intrinsically in your solution portfolio. Look for ways to make money with those superpowers. Or even better, look for ways to save time. But if you want the best return on investment for your efforts, you should look for ways to expand your attention area.

Do you agree that attention is our most precious resource? What do you do to lower your attention expenses? Perhaps you have experience in the Ops/DevOps area that resonates with these thoughts? Share your opinion by commenting below or writing your own blog entry!

We will pay attention to you.

State Management for emotionally overwhelmed React rookies

The topic of React state management is nowhere near new. Just to be clear what we’re talking about here: consider a multitude of components which, in nice React fashion, are finely interleaved into each other, each one with a single responsibility, each one only holding as much information as it needs. Depending on the complexity of your application, there can now be a lot of complex dependencies, where one small component somewhere might cause another small component totally elsewhere to update (re-render), without these having to really know much about each other, because we strive for low coupling. In front-end development, this matters not only in terms of “cleaner code”, but also because of the performance problem of having to re-render stuff that is not actually changing.

So, just a few months back, a new competitor appeared in the field of React state management: Recoil, open-sourced by Facebook. The older top dog in this field is the widely used Redux, with smaller contenders like MobX that also aim to offer an alternative way of managing state in smaller applications; and when React, in version 16.3, introduced its new Context API, it officially advanced quite a step into the direction of a built-in React solution to these questions.

There’s probably not a single web developer on earth who wouldn’t agree that in our field, one of the most fun…fundamental challenges is the effort of staying afloat on top of the turbulent JavaScript-osphere. If you are the type of person who doesn’t want to jump on every bandwagon, but still doesn’t want to miss out on all the amazing opportunities that all this progress could give you, you better start a bunch of side projects (call them “recreational” if you like) and give yourself a chance to dive into particular technologies with confined scope, for some research time.

This is what I’ve done now, trying to focus completely on the issues that an ambitious developer can experience when having all these choices. This is what I want to outline for you, because as usual: if you have lots of time to study a single technology, you can succeed in spite of many limitations, and you also get used to certain kinds of things you might have to do that you originally didn’t want to do, and so on.

Starting with Redux: hardly anybody appears to say bad things about it, and there even are some Mariuses who seem to be absolutely in love with the official Redux documentation, which is actually more of a guide to time-tested best practices, giving you the opportunity to do things right and have a scalable state container that supports you even if your application grows to large dimensions. Then there’s stuff like a time-travelling state debugger and the flexible middleware integration, which I didn’t even touch yet. When your project has a number of unrelated data structures, there’s the Ducks pattern that advises you to organize your required reducers, actions and action creators in coherently arranged files. This, however, turned complicated in one of my projects, in which the types of data objects aren’t as unrelated: I had to remove all the combineReducers() logic and ended up with one large global state object. I now have one source file that just consists of everything related to Redux, and for my purpose this seems fine, but I still have to write the rather cumbersome connect(mapStateToProps, mapDispatchToProps) structure in every component in which I want to access the state. I would prefer smaller state containers, but maybe it’s the structure of my project that makes these complicated.

It really is that way: due to the ever-changing recommendations that come with the evolution of React, the question of how to do things best (read: best for your specific purpose) always stays fresh. Since React 16.8 and the arrival of Hooks, there has been a progression towards less boilerplate code, favoring functional components with a leaner appearance. In this spirit, I strived for something less Redux-y. E.g. if I want some text in my state to be set, I would have to do something like this:

// ./ducks/TextDucks.js
// avoid having to rely on a magical string, therefore save the string to a constant
const SET_TEXT = 'SET_TEXT';

const initialState = {text: ''};

// action creator
export const setTextCreator = (text) => ({type: SET_TEXT, payload: {text}});

const Reducer = (state = initialState, {type, payload}) => {
  //... other state stuff
  if (type === SET_TEXT) {
    return {...state, text: payload.text};
  }
  return state; // always return the previous state for unhandled actions
};

// ================
// Component file
import {connect} from 'react-redux';
import {setTextCreator} from './ducks/TextDucks.js';

const mapDispatchToProps = (dispatch) => ({
  setText: text => dispatch(setTextCreator(text))
});
const Component = ({setText}) => {
  // here I can actually use setText()
};
// no state is read here, so mapStateToProps can be null
export default connect(null, mapDispatchToProps)(Component);

This is more organized than passing some setText('') function down through my whole component tree, but it still feels like quite some overhead.

Then there was MobX, which seemed to be quite lightweight and clearly laid out a coherent use of the Observable pattern, implemented with its own JavaScript decorators that follow a simple structure. Still, the look and feel of this code would differ quite a lot from my usual coding style, which kept me from actually using it. This decision was reinforced by certain voices online that, some years ago, actually predicted that the advancement of React’s own Context API would make any third-party library redundant. Now, to be fair, React’s current Context API, with useReducer() and useContext(), already makes it possible to imitate a Redux-like structure, but consider it in terms of ways of thinking: if you write your code in the same style as you would with Redux’s recommendations, why not use Redux directly? Clearly, the point of avoiding Redux should go in the direction of thinking differently.

The Context API actually supplies the underlying structure on which Redux’s own <Provider> builds, so it is no big surprise that you can accomplish similar things with it. Using the Context API, you wrap your whole application like this:

// myContext.js
import React from "react";
const TextContext = React.createContext();
export default TextContext;

// App.js
import React from 'react';
import TextContext from './myContext';
const App = () => {
  // hold the state here and provide the [value, setter] pair,
  // so consuming components can read and update the text
  const [text, setText] = React.useState("initial text");
  return (
    <TextContext.Provider value={[text, setText]}>
      {/* actual app components here */}
    </TextContext.Provider>
  );
};

// some subComponent.js
import React from 'react';
import TextContext from './myContext';
const SubComponent = (props) => {
  const [text, setText] = React.useContext(TextContext);
  // now use setText() as you would with a local React useState dispatch function
};

Personally, I found this arrangement a bit clearer than the Redux structure, at least if you’re not used to Redux’s way of thinking anyway. However, if your state grows and is more than just a text, you would either keep the state information in one large object again, or wrap your <App/> in a ton of different Contexts – which I personally disdained already when I had just three different global state variables to manage.

Having all these possibilities at hand, one might wonder why Facebook felt the need to implement a new way of state management with Recoil. Recoil is still in its experimental phase, but it didn’t take long to find one aspect very promising: the coding style of managing state in Recoil feels a lot like writing React code itself, which makes it very smooth to operate, as you don’t have to treat global state much differently than local state. Our example would look like this:

// textState.js
import * as Recoil from 'recoil';
export const text = Recoil.atom({key: 'someUniqueKey', default: 'initial text'});

// App.js
import React from 'react';
import {RecoilRoot} from 'recoil';
const App = () => <RecoilRoot>{/* here the actual app components */}</RecoilRoot>;

// some Component.js
import React from 'react';
import * as Recoil from 'recoil';
import * as TextState from './textState';
const Component = () => {
  // hooks may only be called inside a component (or a custom hook)
  const [text, setText] = Recoil.useRecoilState(TextState.text);
  // from then on, you can use setText() like a React useState dispatch function
};

Even simpler, Recoil directly gives you the useRecoilValue() function to just read the text value, and the useSetRecoilState() function to just dispatch a new text. This avoids the complication of having to treat whatever-in-your-state-is-global differently from what is local. Your <App/> component doesn’t grow to ugly levels of indentation, and you can neatly organize everything state-related in a separate file.
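
A quick sketch of those two hooks, reusing the atom from above (the component names are made up for illustration):

// Display.js – reads the text and re-renders whenever it changes
import React from 'react';
import {useRecoilValue} from 'recoil';
import * as TextState from './textState';

const Display = () => {
  const text = useRecoilValue(TextState.text);
  return <span>{text}</span>;
};

// Editor.js – only dispatches new values and does not subscribe to changes
import React from 'react';
import {useSetRecoilState} from 'recoil';
import * as TextState from './textState';

const Editor = () => {
  const setText = useSetRecoilState(TextState.text);
  return <input onChange={event => setText(event.target.value)} />;
};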

As someone who considers himself quite eager to learn new technologies, but also wants to quickly see some results without having to acquire a lot of fresh basic understanding first, I have to admit I had the most fun trying out Recoil in my projects. However, I totally believe that the demise of Redux is not closing in anytime soon, due to its focus on sustainability. For the future, I aim to see my one Recoil project grow, and I’ll keep you updated on how well this goes…

Be precise, round twice

Recently, after implementing a new feature in a software that outputs lots of floating point numbers, I realized that the last digits were off by one for about one in a hundred numbers. As you might suspect at this point, the culprit was floating point arithmetic. This post is about a solution that turned out to be surprisingly easy.

The code I was working on loads a couple of thousand numbers from a database, stores them all as doubles, does some calculations with them and outputs the results rounded half-up to two decimal places. The new feature I had to implement involved adding constants to those numbers. For one value, 0.315, the constant in one of my test cases was 0.80. The original output was “0.32” and I expected to see “1.12” as the new rounded result, but what I saw instead was “1.11”.

What happened?

After the fact, nothing too surprising – I had just hit decimals which do not have a finite representation as a binary floating point number. Let me explain, in case you are not familiar with this phenomenon: 1/3 happens to be a fraction which does not have a finite representation as a decimal:

1/3=0.333333333333…

Whether a fraction has a finite representation depends not only on the fraction, but also on the base of your number system. And so it happens that an innocent-looking decimal like 0.8=4/5 has the following representation in base 2:

4/5=0.1100110011001100… (base 2)

So if you represent 4/5 as a double, it will turn out to be slightly off. In my example, both numbers, 0.315 and 0.8, do not have a finite binary representation, and with those errors, their sum turns out to be slightly less than 1.115, which yields “1.11” after rounding. On a very rough count, this problem appeared for about one in a hundred numbers in my output.
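
You can make these representation errors visible in Java, since the BigDecimal(double) constructor preserves the exact binary expansion of its argument – a minimal sketch:

import java.math.BigDecimal;

public class RepresentationDemo {
    public static void main(String[] args) {
        // the exact value stored in the double for 0.8 – slightly above 0.8
        System.out.println(new BigDecimal(0.8d));
        // the exact value of the computed sum – slightly below 1.115,
        // which is why rounding half-up to two places yields "1.11"
        System.out.println(new BigDecimal(0.8d + 0.315d));
    }
}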

What now?

The customer decided that the problem should be fixed if it appears too often and does not take too much time to fix. When I started to think about an automated way to count the mistakes, I began to realize that I actually had all the information I needed to compute the correct output – I just had to round twice: once, say, at the fourth decimal place, and a second time at the required second decimal place:

import java.math.BigDecimal;
import java.math.RoundingMode;
// 0.8d + 0.315d is slightly below 1.115; rounding twice recovers "1.12"
new BigDecimal(0.8d + 0.315d)
    .setScale(4, RoundingMode.HALF_UP)
    .setScale(2, RoundingMode.HALF_UP)

This produces the desired result “1.12”.

If doubles are used, the errors explained above can only make a difference of about 10^-15, so as long as we just add a double to a number with a short decimal representation while staying in the same order of magnitude, we can recover the precise numbers from doubles by setting the scale (which amounts to rounding) of our double as a BigDecimal.

But of course, this can go wrong if we use numbers that do not have a short, neat decimal representation, like 0.315. In my case, I was lucky. First, I knew that all the input numbers have a precision of three decimal places. There are some calculations to be done with those numbers, but all numbers are roughly in the same order of magnitude, and there is only comparing, sorting and filtering; the only honest calculation is taking arithmetic means. And the latter only meant I had to increase the scale from 4 to 8 to never see any error again.

So this solution might look a bit sketchy, but in the end it solves the problem within the limited time budget, since the only change happens in the output function. And it can also be a valid first step of a migration to numbers with managed precision.

PSA: logrotate does not support time-travel

When diagnostics explode

A great many things can break in a software system. However, diagnostics breaking the rest of the software is especially ironic. These tools are supposed to help you find bugs and other problems after the fact, not become one.
The system in question was a small data recorder running on a BeagleBone Black (BBB), continuously recording measurements from specialized hardware.
These measurements are stored in an SQLite database and can be retrieved (and purged) via a very simple HTTP interface.
For context: the BeagleBone Black is a small GNU/Linux ARM device, not unlike a Raspberry Pi.

During development, we noticed that logfiles would quickly grow to hundreds of megabytes, which could become a problem if the data in the SQLite database is not retrieved, and subsequently purged, for a while. So as a precaution, we set the file-size limit to 5 MB in /etc/logrotate.conf. We figured that should solve it, and during testing the logs never got very big again.
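
A sketch of what such a limit looks like in logrotate terms (the exact directives we used may have differed):

# /etc/logrotate.conf (excerpt)
/var/log/daemon.log /var/log/syslog {
    # rotate as soon as a log exceeds 5 MB, keep a few compressed generations
    size 5M
    rotate 4
    compress
    missingok
}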

Fun in production

Imagine my surprise when I saw a 1.4 GB /var/log folder that prevented any successful writes and subsequently corrupted the SQLite database. SQLite does not deal well with full disks, so this was a huge problem.

Two files especially, daemon.log and syslog, were huge at ~950 MB and ~450 MB respectively. They were clearly bigger than 5 MB. logrotate was configured to rotate them daily and weekly respectively. We were kind of spamming the log files and estimated a maximum growth of 50 MB per day in either file, which should have limited them to 50 MB and 350 MB. But obviously, it didn’t.

Time travel

The production environment has several special properties:

  1. The BBB is not connected to the internet.
  2. There are semi-frequent power losses.
  3. The BBB does not have a battery, so power-cycling it means its internal date is reset.

What all this amounts to is: the system doesn’t know the current time and can’t get it via NTP. And whenever the system starts again, it resumes from a fixed date the disk was flashed with.

logrotate, on the other hand, doesn’t like that one bit. It gets confused by files written in the future, and even worse, it remembers when it last ran – and it doesn’t run if that’s in the future. So if the BBB runs nicely from January 1st to July 1st and then power-cycles, you’ll have to wait half a year for your daily logrotate run. And with every run that does succeed, the problem gets worse.
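
The timestamp of the last run lives in logrotate’s state file (often /var/lib/logrotate/status or /var/lib/logrotate.status, depending on the distribution), with entries roughly like this:

logrotate state -- version 2
"/var/log/syslog" 2020-7-1-6:0:0
"/var/log/daemon.log" 2020-7-1-6:0:0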

So, in general, it’s not a good idea to run a full GNU/Linux system without a working clock!

Using CSV data as external table in Oracle DB

If you want to import CSV data into an Oracle database you can use the SQL*Loader command line tool. You simply create a control file that describes how to load the data and then call the sqlldr command with the control file name as an argument:

example.ctl

LOAD DATA
INFILE example.csv
INTO TABLE example_table
FIELDS TERMINATED BY ';'
(ID, NAME, AMOUNT, DESCRIPTION)

> sqlldr username/password example.ctl

But there’s another way to load CSV data into an Oracle database: External tables.

External tables

Oracle’s external tables feature allows you to query data from a file on the filesystem like a regular database table.

First you have to create a directory in the file system and put your CSV file inside:

mkdir -p /path/to/directory

example.csv

1;Water;250
2;Beer;500
3;Wine;150

Now connect to the database as “SYS as SYSDBA”, define the directory as a database object and grant read/write access to your user:

CREATE OR REPLACE DIRECTORY
  external_tables_dir AS '/path/to/directory';
GRANT READ,WRITE ON DIRECTORY
  external_tables_dir TO example_user;

Now you can connect as example_user and create an external table for the CSV file:

CREATE TABLE example_table (
  id NUMBER(4,0),
  name VARCHAR2(50),
  amount NUMBER(8,0)
)
ORGANIZATION EXTERNAL (
  DEFAULT DIRECTORY external_tables_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ';'
  )
  LOCATION ('example.csv')  
);

The relevant part here is the ORGANIZATION EXTERNAL block. It references the directory and the CSV file inside the directory and allows you to specify format parameters of the CSV file such as record and field delimiters.

Now you can query the table like a regular table:

SELECT * FROM example_table;

ID NAME  AMOUNT
-- ----- ------
1  Water 250
2  Beer  500
3  Wine  150

Access information and errors, such as bad or discarded records, are stored in log files in the specified directory. The default names of these log files consist of the table name and an ID, e.g. example_table_12345.log, example_table_12345.bad and example_table_12345.dsc.

Docker runtime breaking your container

Docker (or container technology in general) is a great tool to clearly separate the concerns of developers and operations. We use it to simplify various tasks like building projects, packaging them for different platforms and deployment of our software onto the target machines like staging and production servers. All the specifics of the projects are contained and version controlled using the Dockerfiles and compose files.

Our operations team only needs to provide some infrastructure able to build container images and run them. This works great most of the time and removes a lot of the friction between developers and operations, where in the past snowflake servers needed to be set up and maintained. Developers often had to ask for specific setups and environments because each project had its own needs. That is all gone with this great container technology. Brave new world. Except when it suddenly does not work anymore.

Help, my deployment container stopped working!

As mentioned above, we use Docker to deploy our software to the target machines. These machines are often part of a corporate network protected by firewalls and only accessible using VPN. I have already talked about how to use openvpn in a Docker container for deployment. So the other day I was making a release of one of my long-running projects and pressed the deploy button for that project on our Jenkins continuous integration server.

But instead of being able to just lean back, relax and watch the magic happen, the deployment failed and the red light lit up! A look into the job output showed that the connection to the target machine was refused. A quick check from the developer machine showed no problem on the receiving side. VPN, target machine and everything else was up and running as usual.

After a quick manual deployment performed with care and administrator hat I went on an investigation journey…

What was going on?

The deployment job had not changed for several months, the container image had not changed, and the rest of the infrastructure was working as expected. After more digging, debugging and narrowing down the problem, I found out that openvpn no longer worked in the container because of a strange permission denied error:

Tue May 19 15:24:14 2020 /sbin/ip addr add dev tap0 1xx.xxx.xxx.xxx/22 broadcast 1xx.xxx.xxx.xxx
Tue May 19 15:24:14 2020 /sbin/ip -6 addr add 2axx:1xxx:4:5xxx:9xx:5xxx:5xxx:4xxx/64 dev tap0
RTNETLINK answers: Permission denied
Tue May 19 15:24:14 2020 Linux ip -6 addr add failed: external program exited with error status: 2
Tue May 19 15:24:14 2020 Exiting due to fatal error

This hot trace was easy to google and revealed the following issue on GitHub: https://github.com/dperson/openvpn-client/issues/75. The cause of all the trouble was changed behaviour of the docker runtime. Our automatic updates had run over the weekend and installed a new package version of the container runtime (see the excerpt from the apt history log):

containerd.io:amd64 (1.2.13-1, 1.2.13-2)

This subtle change broke my container! After some sacrifices to the whale gods I went on to implement the fix. Fortunately, there is an easy way to get it working like before. You just have to pass the following command line switch to docker run and everything works as expected:

--sysctl net.ipv6.conf.all.disable_ipv6=0
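
In context, the run command then looks something like this (the image name and the VPN-related options are placeholders for our actual setup):

docker run --rm \
  --cap-add=NET_ADMIN --device /dev/net/tun \
  --sysctl net.ipv6.conf.all.disable_ipv6=0 \
  my-deployment-image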

As nice as containers are for abstracting away hardware, operating systems and other environment details, sometimes the container runtime shines through. It is just a shame that such things happen on minor releases or package release upgrades…