How my display usage changed over time

When I was eight years old, my parents bought our first computer. With it came a tiny monochrome display that showed 80×25 characters in amber yellow. I’m typing this text on my most recent computer, which is equipped with several displays showing a combined total of nearly 27 million pixels with at least 2^24 colors each. I don’t dare to count the number of characters that are on screen right now. Something happened along the way.

The formative years

As an eight-year-old boy, I immediately “clicked” with that first computer. It became my destiny to unlock its full potential. I was delighted when my parents upgraded to a much better PC years later, with a color display that could actually show 256 colors at once on a 14″ frame. It was still a CRT monitor, so the refresh rate was probably around 30 Hz, and I remember the “fishbowl eyes” you got from longer computer sessions.

If we want to have a visual representation of this monitor, it looks like this:

14″ CRT, 4:3

My first own computer came with a 17″ CRT monitor, which was considered a luxury size and didn’t really fit on the desk. I used this monitor up until the first year of my studies. Nothing in my world suggested that using more than one monitor per computer was even possible. This computer had a mouse without a scroll wheel and no internet access:

17″ CRT, 4:3

When I studied computer science, I came in contact with a lot of people who all took computing and programming seriously. Some had monitors the size of a freezer, which hinted to me (and my peers) that 17″ was not as lavish as we thought. But still, a computer had one CPU and one monitor. I scraped my money together and bought a 19″ flat panel CRT monitor. “Flat panel” just meant that the display area didn’t resemble a fish bowl by itself. It could run at up to 60 Hz:

19″ CRT, 4:3

The professional setup

That was my personal computing situation when I founded my company in the third year of my studies. I knew that the equipment had to be better and more professional. Our first work computers still had one CPU and one monitor each. They just happened to be gigantic 21″ CRTs. Our desks had extra depth to provide a healthy distance between eyes and display area. Those monitors were delicate enough to provide a “de-gauss” button that would unhinge random electronics around it if pressed:

21″ CRT, 4:3

This was how software was developed in the early 2000s. A “fast” computer with lots of RAM (1 GB was not unheard of), a magnetic hard disk with 160 GB of storage, and still one CPU and one monitor. At least they had internet access and a mouse with a scroll wheel now.

Everything we did, we did in the same place

This setup lasted for four or five years, with better computers, but still the same old monitors. Then, virtually overnight, the prices for the new and very cool TFT “flat panel” monitors dropped to reasonable numbers. These monitors were really flat and very thin compared to the CRT fish bowls that hogged our desks. We were thrilled and replaced all of our monitors within one year.

The double setup

But just as the CPUs now got two “cores”, we didn’t just replace our one monitor, we doubled it. Every workplace now had two monitors:

2x 24″ TFT, 16:10

And not just that. The monitors had a bigger screen area, yet a smaller footprint, were easier on the eyes and had a greater resolution (called WUXGA, essentially FullHD with some extra pixels on the vertical axis).

And we had two of them! This was a game changer because things that used to be done one after another could now be done in parallel – on the CPU and on the monitors. We began to dedicate screen estate to fixed activities:

The monitors are now assigned to certain tasks

The left monitor was the “coding space”, while the right monitor was the “tryout space”. The actual distribution of activities to screen locations differed from developer to developer, but we all agreed that we would not go back to a single monitor.

During that time, I was sometimes asked if two monitors “are worth the investment”. I blogged about it, and I’m still convinced that a second monitor is the single most profitable investment you can make for a developer.

The triple setup

In the blog post above, I made one statement that I soon took back: A third monitor is not a game changer like the transition from one monitor to two, but – once the hardware issues are solved – the next step in an evolution that truly separates work, work result and communication:

3x 27″ TFT, 16:9

This setup is probably wider than your standard desk and requires a dedicated monitor stand, but for the first time you can do the three essential things a programmer does in parallel:

  • Browse the internet (like stackoverflow or an API documentation)
  • Edit your source code in a fullscreen IDE
  • Watch the result of your changes live (with hot reloading)

Your workflow essentially moves your head from the left (gather new knowledge) over the middle (apply the new knowledge) to the right (evaluate the result of the new knowledge) and back again for the next step:

A typical left-to-right setup

This has been our default workplace setup since 2018, with two possible resolution levels:

  • QHD: 3x 2560 x 1440 pixels. This results in 11 million pixels per computer
  • UHD: 3x 3840 x 2160 pixels. You now have almost 25 million pixels at your disposal

There is a biological limit to what a human can see at once. This setup nearly fills your complete field of view. You cannot fit a fourth monitor to the sides and still really see it. The only way to expand is now the vertical axis, with additional monitors above and maybe below.

The pandemic setup

I would probably still use the triple monitor setup if a fundamental change in the way we develop software hadn’t happened in early 2020. In March 2020, we decided within days to abandon our office desks and retreat into home office workplaces that were improvised at first. Now, nearly two years later, all these workplaces are fully equipped and still continually improved. But not only our places changed, our communication did as well. Video calls are a natural component of our workday now. And in my case, they happen in parallel to my normal work. So I had to dedicate screen space to videoconferencing. And I’ve done it by adding a fourth monitor:

3x 27″ TFT, 1x 10″ TFT, 16:9

This small monitor sits right next to the webcam, so when I look at my dialog partner, I also seem to look right into the camera. This setup adds a new distinct activity to the mix:

You can guess what I’m doing by following my gaze

I’ve described the other ingredients for a fully equipped home office in a previous blog post. You can see an early photo of my setup in this post.

Conclusion

And this is the setup I’m writing this blog post on: 27 million pixels that I can use to speed up my workflow by assigning dedicated working zones. If you had asked me in 2009 whether I could imagine doubling the number of monitors and having nearly six times more pixels available, I would have said “no way”.

But looking back to the beginning, I can see how the fundamentals of personal computing changed in every aspect. A computer is no longer “one CPU” and it doesn’t have only one monitor. Today’s display technology is capable of providing a lot of screen estate. The main limiting factor is our own imagination. Reaping the benefits of dedicated display areas is satisfying and increases your work throughput effortlessly.

If you ask yourself what your ideal monitor setup should look like, try to reflect on how you move your application windows around or how often you switch applications without moving your head. If you would like to make that switch without hiding the previous context, you’ve just found a use case for an additional monitor.

What is your monitor setup and your usage pattern with it? Tell us in the comments!

The ever-connecting WebSocket

This is another of those “funny how we live in a time where we take connectivity for granted” posts. But what is taken for granted usually still is somewhat cumbersome under the hood. As in our current episode.

Admittedly, the arrival of WebSockets in the last decade was one of the more significant steps towards a fluid internet experience. The WebSocket protocol is an advancement over the old “some client asks some server to handle some stuff” way in that it is bi-directional: after the mutual agreement (“handshake”), the connection stays open for the server to send data to the client without the client having to ask first. Consider the server to be a complex application which processes lots of tasks and from time to time creates some “news” for the client, which the user might want to read in real time.

Nowadays, the WebSocket itself is long established. What surprised us a few weeks ago, however – and what made us invest several days in actual research – is its behaviour when paired with a loss of internet connection.

Now, this is a real scenario for one of our customers: you have a web application running on a mobile device, and this device moves in and out of WiFi-accessible areas all the time. The application should simply show this circumstance and attempt to reconnect. The straightforward thing was to use the native WebSocket API class, or the “websocket” npm package (which acts as a small wrapper around that API); this comes with a small enough set of event handlers (onopen, onclose, onerror, onmessage). But the less obvious thing was: how is “connection lost” actually noticed? Is it onerror? Is it onclose?
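For reference, this is a minimal sketch of that straightforward wiring – the URL is a placeholder and the log messages are ours:

// Naive first attempt: wire up the four native event handlers directly.
const socket = new WebSocket("wss://example.com/news");
socket.onopen = () => console.log("connected");
socket.onmessage = (event) => console.log("news:", event.data);
socket.onerror = () => console.log("error – but does this mean disconnected?");
socket.onclose = () => console.log("closed – but when does this actually fire?");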

In reality, this is not clear at all, and it depends on the type of internet loss. A delay of several minutes can occur until onclose fires, and onerror alone seems not to imply any closing at all. How do you even simulate “mobile device walked away from WiFi” as accurately as possible? While disconnecting our WiFi seemed to register with almost no delay, this was too far from the real scenario. It was only after switching to an ethernet cable and then unplugging it that we saw the effect. And we found that the onclose event is actually quite confused if we reconnect our cable before it has fired. It could happen, then, that an old onclose did not fire until a new WebSocket was already opened – not a good indicator of “no connection” at all.

This confusion made it clear that the WebSocket technology is not as well defined as we thought it was. We actually resorted to one of the most basic ideas in order to notice our “(dis)connected” state: Continuously checking for it. Indeed – as low-level as it sounds.

We found the following solution to work quite well:

  • The server continuously sends a “heartbeat” over the WebSocket. We are aware that there is a websocket.ping() method, but we didn’t want to run into more surprises here.
  • WebSocket handling is done inside our own module which
    • wraps the WebSocket onmessage event in order to expect that heartbeat, or else “the watchdog gets angry”
    • has its own onclose event which communicates the problem to the outside as early as possible
    • also instantly tries to reconnect
    • wraps the WebSocket onclose event in order to silence it if the watchdog got angry and it would fire too late, but otherwise lets it fire (if the watchdog is happy and the WebSocket is closed normally).

The latter implements Loose Coupling / the Principle of Least Knowledge / Separation of Concerns. We do not want our module to have a much larger interface than the original WebSocket implementation. In fact, the only information from our application to our new module is “is the user logged in?”. In our application, this is part of the Redux state, but we want our module to know nothing of React, Redux or other magic; it should be vanilla TypeScript in order to be testable – or even better, so straightforward that any tests would be trivial.
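To illustrate, here is a minimal sketch of such a module in vanilla TypeScript. All names and the timeout value are hypothetical, and the reconnect logic is reduced to a fixed delay:

// Hypothetical sketch – names and timing are ours, not from any library.
const HEARTBEAT_TIMEOUT_MS = 5000; // assumes the server heartbeat arrives more often

class GuardedWebSocket {
    private watchdog: ReturnType<typeof setTimeout> | null = null;

    constructor(
        private url: string,
        private onDisconnected: () => void,
        private onMessage: (data: string) => void
    ) {
        this.connect();
    }

    private connect(): void {
        const socket = new WebSocket(this.url);
        let muted = false; // per socket: true once the watchdog has declared it dead

        const watchdogBites = () => {
            muted = true;          // silence the (possibly very late) onclose of this socket
            socket.close();
            this.onDisconnected(); // communicate the problem as early as possible
            setTimeout(() => this.connect(), 1000); // instantly-ish try to reconnect
        };

        const resetWatchdog = () => {
            if (this.watchdog) clearTimeout(this.watchdog);
            this.watchdog = setTimeout(watchdogBites, HEARTBEAT_TIMEOUT_MS);
        };

        socket.onmessage = (event) => {
            resetWatchdog(); // every message (including the heartbeat) calms the watchdog
            this.onMessage(String(event.data));
        };

        socket.onclose = () => {
            if (muted) return; // the watchdog already handled this socket
            if (this.watchdog) clearTimeout(this.watchdog);
            this.onDisconnected(); // a normal close is passed through
            setTimeout(() => this.connect(), 1000);
        };

        resetWatchdog();
    }
}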

So there we have it. If you are interested in the code, I’d be glad to share that, but the actual deed here was in finding out what we actually need.

I have no idea why the WebSocket specification is the way it is, but if you ever encounter such a problem, that would be my advice – take the thing, put it in your own thing, and couple the things loosely.

But anyway, it was fun to realize that even in 2021, a two-way-connected client-server system still might need a small guardian that tells you whether everything’s fine.

Addendum: Monkey-patching an existing class in TypeScript

I leave this here for quick reference. As stated above, we needed to equip our WebSocket instances with a flag to ignore their onclose events. Some sources might readily give you the quick advice to do it like this:

const socket = new w3cwebsocket(...);
(socket as any).silent = false;

But why use TypeScript if you want to work around the type system anyway? Just extend it.

import { w3cwebsocket } from "websocket";

// Subclassing gives us a properly typed flag instead of an "any" cast.
class CustomWebSocket extends w3cwebsocket {
    silent: boolean = false;
    constructor(url: string) {
        super(url);
    }
}

const socket = new CustomWebSocket(...);

Understanding, identifying and fixing the N+1 query problem

One of the most common performance pitfalls for applications accessing data from databases is the so-called “N+1 query problem”, sometimes also called the “N+1 selects problem”. It is the first thing you should look for when an application has performance issues related to database access. It is especially easy to run into with object-relational mappers (ORMs).

The problem

The problem typically arises when your entity-relationship model has a 1:n or n:m association. It exists when application code executes one query to get objects of one entity and then executes another query for each of these objects to get the objects of an associated entity. An example would be a blog application that executes one query to fetch all authors whose names start with the letter ‘B’, and then another query for each of these authors to fetch their articles. In pseudocode:

# The 1 query
authors = sql("SELECT * FROM author WHERE name LIKE 'B%'");

# The N queries
articles = []
FOR EACH author IN authors:
    articles += sql("SELECT * FROM article WHERE author_id=:aid", aid: author.id)

The first query is the “1” in “N+1”, the following queries in the loop are the “N”.

Of course, to anybody who knows SQL this is a very naive way to get the desired result. However, OR mappers often seduce their users into writing inefficient database access code by hiding the SQL queries and letting their users reach for the normal tools of their favorite programming language, like loops or collection operations such as map. A lot of popular web application frameworks ship with OR mappers: Rails with ActiveRecord, Grails with GORM (Hibernate-based), Laravel with Eloquent.

How to detect

The easiest way to detect the problem in an application is to log the database queries. Virtually all ORMs have a configuration option to enable query logging.

For Grails/GORM the logging can be enabled per data source in the application.yml config file:

dataSource:
    logSql: true
    formatSql: true

For Rails/ActiveRecord query logging is automatically enabled in the development environment. Since Rails 5.2 the Verbose Query Logs format is enabled by default, which you had to explicitly enable in earlier versions.

For Laravel/Eloquent you can enable and access the query log with these two methods/functions:

DB::connection()->enableQueryLog();
DB::getQueryLog();

Once query logging is enabled you will quickly see if the same query is executed over and over again, usually indicating the presence of the N+1 problem.

How to fix

The goal is to replace the N+1 queries with a single query. In SQL this means joining. The example above would be written as a single query:

SELECT article.*
FROM article
JOIN author
  ON article.author_id=author.id
WHERE author.name LIKE 'B%'

The query interface of ORMs usually allows you to write joins as well. Here is the example in ActiveRecord:

Article.joins(:author).where("authors.name LIKE ?", "B%")

Another option when using ORMs is to enable eager loading for associations. In GORM this can be enabled via the fetchMode static property:

class Author {
    static hasMany = [articles: Article]
    static fetchMode = [articles: 'eager']
}

REST APIs

The problem isn’t limited to SQL databases and SQL queries. For REST APIs it’s the “N+1 requests problem”, describing the situation where a client application has to call the server N+1 times to fetch one collection resource plus N child resources. Here the REST API has to be extended or modified to serve the client’s use cases with a single request. Another option is to offer a GraphQL API instead of a REST API. GraphQL is a query language for HTTP APIs that allows complex queries, so the client application can specify exactly what resources it needs in a single request.
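For the blog example above, such a GraphQL query could look roughly like this – the exact field names depend on the schema the server defines, so treat them as hypothetical:

{
  authors(nameStartsWith: "B") {
    name
    articles {
      title
    }
  }
}

One request returns the authors together with their articles, where the naive REST version needed N+1 requests.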

Migrating a Grails application from Oracle to PostgreSQL

In my previous post I explained how to migrate an Oracle schema with data to a PostgreSQL database management system (DBMS). Besides the general tasks and issues there are additional topics to migrate a complete application using the database to the other DBMS.

In our specific case we have a Grails application which we have maintained since Grails 1.0 times, for more than 12 years. During that time we did a ton of feature development with lots of refactoring and many database migrations. So the source database will most likely not be perfectly consistent and clean.

General approach

Since Grails/GORM and the DatabaseMigration-Plugin (DBM-Plugin) do a great job at preparing an empty database with a matching schema for the application to run, we let the framework tools generate the schema and only migrate the data using Ora2Pg.

Sounds simple, but how is it done in detail, and what else is there to look out for?

Generating the initial database schema

The DBM-Plugin provides a script to create a database changelog with a schema matching the domain model of your Grails application. It is integrated with Gradle, so you can run grails dbm-generate-gorm-changelog initialdb.groovy to create the migration scripts providing a fitting schema. You then include this script in grails-app/migrations/changelog.groovy or replace all the migrations previously included there with this initial database changelog.

To prepare an empty database to run with your application you call the Gradle task dbmUpdate.
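Condensed into commands, that bootstrap workflow could look like this sketch (the task names are the ones used above, the changelog file name is ours):

# generate a changelog matching the GORM domain model
grails dbm-generate-gorm-changelog initialdb.groovy

# after including it in grails-app/migrations/changelog.groovy,
# prepare an empty database
./gradlew dbmUpdate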

Checking all plain SQL code

If you are only using GORM’s dynamic finders, the save()/update()/delete() methods, HQL and the criteria API, you are probably fine to run your application or perform the data migration step.

Our application has some specific parts where we use plain SQL. Because of syntactical differences you will want to check whether all the plain SQL works with PostgreSQL. The most obvious stuff is dealing with sequences or other queries where you need the dual table in Oracle.
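The sequence syntax is a typical example of such a difference (my_seq is a placeholder name):

-- Oracle: pseudo-column on the sequence, selected from the dual table
SELECT my_seq.NEXTVAL FROM dual;

-- PostgreSQL: an ordinary function call, no dual table needed
SELECT nextval('my_seq');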

Migrating the data

This is probably the part where the most things can go wrong. We had quite some work with data inconsistencies and left-overs from manual corrections that happened over the course of running and upgrading the application for so many years. For younger and simpler applications this may not present any challenges, but for us it was quite time-consuming.

Now you can use Ora2Pg to import the data. After the whole data import using Ora2Pg worked as intended, you should check the value of the hibernate_sequence. This sequence is used to generate the ids of all Grails domain objects.

Do not let the sequences from the autoincrement columns of the tables of your domain objects confuse you! They are not used by Grails/GORM. To avoid this confusion you can remove the default values of the id columns and the accompanying sequences.
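Checking and fixing the sequence then boils down to something like this sketch, where author stands for whichever domain table holds the highest id:

-- where does the sequence currently stand?
SELECT last_value FROM hibernate_sequence;

-- move it past the highest id in use, so new domain objects get fresh ids
SELECT setval('hibernate_sequence', (SELECT max(id) FROM author));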

Checking the result

You should always run acceptance or manual tests to make sufficiently sure that the migration worked as intended. There is always the possibility of a configuration or software error or some oversights in checking the application code.

If possible, test the result on a dedicated system with a snapshot of the real world data before making the switch on the production system. Good luck!

Hyperfocus on Non-Essentials

When tasked with managing a complex and potentially overwhelming project, a common behaviour of inexperienced managers/developers is to focus on things that are easy to achieve (“low-hanging fruits”), fun to produce (“cherry-picking”) or within the comfort zone.

This means that, in the extreme, the developer exclusively focuses on things that are of no interest to the business client but can simulate progress and results.

This behaviour is an application of the “path of least resistance” and I know exactly what it feels like. Here’s the story why:

When I was fourteen years old, my programming career was already six years in the making. Of course, I only wrote code for myself, teaching myself new concepts and new errors alike. My only scale of success was “does it run?” and “is it still fun for me?”. My only programming language was BASIC – first the dialect GW-BASIC (still with line numbers!), then the more advanced QBasic (with named jump markers instead of line numbers).

I grew up in small cities and was basically alone with my hobby. But a friend’s parent owned an optometrist shop and was interested in using computers for the day-to-day operations. I was asked to write a program to handle the shop’s inventory and sales. The task was interesting, but I had no idea how any shop, let alone this particular one, handles its business. I agreed to build a prototype and work from there.

I knew that this project was bigger and more ambitious than any hobby project of my own before, but it was programming after all – how hard could it be?

My plan was to do two things in parallel: buy and read a book about real software development with BASIC, and try to sketch out the application as a “coded paper prototype”.

The book turned out to be the confessions of a frustrated software developer that basically assured the reader on every page that BASIC was not dead and appended dozens of pages with code listings to every chapter. There was probably a lot of wisdom in this book, too, but it missed me by miles.

The sketch of the application began with a menu of all the things I thought would be necessary, like “inventory” or “sales process”. I also included an “Extras” menu, and one thing in that menu should be a decent screen saver. Back in those days, CRT monitors suffered from burn-in if the same image was shown for a long time, and I figured that this application would run all day every day, so it seemed logical and important to have a screen saver that is automatically turned on after some period of inactivity.

This presented itself as a really hard problem, because BASIC was essentially single-threaded (or at least it was to my knowledge back then) and I had to invent some construct that can perhaps be described as “obscure co-routines”. That was some fun programming!

After I solved the automatic activation of the screen saver functionality, I discovered that I could easily make the actual screen saver that gets shown a parameter. So I programmed not one but several cool and innovative ASCII art screen savers that you could choose from in the extras menu. One screen saver was inspired by the game of snake, another one was “colored blocks” that would appear and disappear to form a captivating mood picture.

That was the state of the application when my friend’s parent asked for a demo. I had:

  • No additional knowledge about application design
  • A menu of things I invested no second thought in
  • Several very cool screen savers that activated themselves automatically. Isn’t that great?

You can probably guess how that demo went. None of the things I had developed mattered in the slightest for the optometrist shop. My passion for my creation didn’t translate to the business very well.

I had worked intensively on this project. I hyperfocused on totally non-essential stuff and stayed mostly in my comfort zone, even if I felt as if I had made great progress.

It is easy to fall into this trap. It is easy to mistake one’s own feelings of progress and success for the external (real) ones. It feels very good to work frantically on things that matter to oneself. It becomes a tragedy if the things only matter to oneself and nobody else.

So what can we do to avoid this trap? If you have an idea, write a comment about it! I hope to hear lots of different takes on this problem.

Here is my solution: “Risk first”. With this project strategy, the first task in a project is to solve the hardest part, to cut the biggest knot or to chart the most relevant area. It means that after the first milestone is a success, the project will gradually become easier. It’s the precursor to “fail fast”, which is a “risk first” project that didn’t meet its first milestone.

It is almost guaranteed that the first milestone in a “risk first” project will not be in your comfort zone and is no low-hanging fruit that you can pick without effort. And while it might be fun to work on, it’s probably something your customer has a real interest in.

By starting a project “risk first”, I postpone my tendency to focus on non-essentials towards the end of the project. And with concepts like “business value”, I can see very clearly when my work becomes irrelevant for the customer. That’s when I stop my professional work and my hobby begins.

You are not safe with Semantic Versioning (right now).

TL;DR: Several recent hijacks of widely spread npm libraries should make you think twice about whether to trust the package.json semantic-versioning notation using carets and tildes.

So what’s that about?

Version updates are one of the most haunting things any person with any kind of computer ever encounters. On the one hand, it’s good news – something that you use has evolved again, and you are right at the source. Have better tools, fewer bugs, new functionality, usually delivered by just a few clicks. But there are always these question marks – do I want to know what happened there? Do I want that now? Wasn’t that old version totally working?

So we all know that dilemma. I mean, as we speak, every Windows user is kindly reminded that one could now let that version 11 live on one’s system. But is it compatible with your system? Will this work out of the box? Is your time worth trying now, or do you wait until the waters have settled? And while Microsoft is asking that question only once every few years, there’s software like Notepad++ that wants me to have its updates on every single start. Because apparently, text editors can grow up so fast..?

Now imagine this problem, times a few googol, and you are in the everyday world of every web developer. In the npm universe, you have nearly total modular flexibility, which comes with many small packages that update for whatever reason – one likes to believe that mostly these are bugfixes. Or might these be breaking changes? Did JavaScript evolve again? Do you want that stuff? Are you state-of-the-art, or are you in dependency hell?

Issues like these are the source of semantic versioning. The idea is that you apply a certain level of trust to the usually three-membered “major.minor.patch” version label. Version “3.10.1” means:

  • Major version 3; the API of your software is promised to be compatible within each major version (except for the initial version 0)
  • Minor version 10; a new minor version is used for new API functionality, e.g. there have been 10 releases in which version 3.0.0 was extended without breaking backwards compatibility
  • Patch version 1, which is incremented for bugfixes within that API specification, i.e. backwards compatible within the whole major version.

This is done in good faith: only when the package maintainer re-thinks his API is the major version incremented, and that mediates a level of trust – you, in return, only have to re-think your usage of that API if you switch major versions. Therefore, dependency management systems like the npm / yarn package.json allow for a convenient notation, e.g.

"dependencies": {
    /* ... */
    "styled-components": "^5.1.1",
    "websocket": "~1.0.31"
  },

The caret (^) notation tells us that when the styled-components package was added to our project, we installed version 5.1.1, but we trust the npm universe far enough that every future execution of “npm install” / “yarn install” may increase this version within the same major version – e.g. if version 5.2.0 was released in the meantime, it updates to that new content. As we speak, we are at version 5.3.3, so this project is well up-to-date with whatever the good folks put in there.

Similarly, the tilde (~) notation only allows that behavior within the given minor version: at the moment, any call of “… install” would retrieve the current version 1.0.34 but would not get version 1.1.0 whenever that is released.

The opposite of using these is called dependency pinning, and there is lots of further reading available, e.g. here.
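Pinned, the dependencies block from above would simply lose its prefixes – every install then resolves to exactly these versions:

"dependencies": {
    /* ... */
    "styled-components": "5.1.1",
    "websocket": "1.0.31"
  },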

There is a certain misconception that “… install” will only update any of these versions if there is no package-lock.json (for npm) or yarn.lock (for yarn) around. That is not the case – see below. But first, my actual point.

So the point of semantic versioning is the establishment of trust between the package developer and the user: “This update only changes about that much”.

Problem: You cannot trust the npm universe right now.

Now the last weeks showed us not only a hijacking of the npm package ua-parser-js at the end of October, but also another one of the packages coa and rc 11 days ago. These appeared somewhat correlated and came with a mixture of password-stealing and secret installing of crypto-mining tools – all in all the result of some bad folks getting access to these package repositories, making them execute malicious code in their install scripts. Note that install scripts are not uncommon for widely spread npm packages. This means that while you can complain that these hackers did not really adhere to the Semantic Versioning code (oh..??), and also that these breaches were noticed within a couple of hours each, think about this:

anyone with a certain caret or tilde in her package.json might have infected herself just by an unluckily-timed call of “npm install”.

Think of an automated script. Think of CI. Think of anyone who just wants to build his project and be as up-to-date as one can get. A survey of npm developers last year showed that usage of two-factor authentication is just below 10%, and while this doesn’t mean that the other 90% are completely irresponsible, there just is no system in place that promises us that such attacks will go away soon.

Of course, we cannot write every dependency of our projects ourselves, especially the ones that are not direct dependencies. But think of it as Russian Roulette: at least you can minimize the number of times you pull the trigger.

You cannot know which package is affected next. You better make sure to pin every version to exactly one you can trust right now, and if you are ever in need of updating it, at least do a quick search first – to see whether there’s some sh*t going down right now.

Do you have further ideas on how to isolate your development / CI environments from whatever just happens in the outer rims of the npm universe? Please feel free to share.

How to make npm / yarn respect their respective lockfiles (package-lock.json / yarn.lock)

In principle, you can even live with the caret / tilde if you make sure that you never actually call “npm / yarn install” itself, but make these tools actually treat their so-called lockfiles as lockfiles. In their current versions, these calls should lead to that behaviour:

# instead of npm install
npm ci

# instead of yarn install
# for yarn 1.x:
yarn install --frozen-lockfile
# for yarn 2:
yarn install --immutable

As you can see from the npm call, this is especially suited for CI environments. It also means you have to make sure the package-lock.json / yarn.lock is part of your repository.

One disadvantage of this approach is that npm really likes to notify you that you are not very up to date, producing lots of noise that you may want to get rid of. Just be sure to pay some amount of attention when you update.

WPF Redux Sample Application

A while ago, I wrote about how we are using the redux architecture in our C# applications. I have just pushed an example showing ReduxSimple with WPF and our extensions in a .NET 5 application to our GitHub account. The example itself is just a counter with an increment and a decrement button, but it already shows the whole redux cycle.

The store setup in App.xaml.cs shows how the ReducerBuilder can be used to build a State reducer from the Reducer class via reflection.

I also added a small prime-number factorization to show how to use ‘expensive’ functions in the view part of the application using our SelectorGraph. This makes it possible to properly derive view data from the state, only updating them when one of their inputs changes. In the example, that input is the counter. So the number will only be factorized when the counter changes, while all other future state changes leave the selector untouched.

The example does not use the UIDuplexBinder yet. It allows read/write binding of WPF controls to an IObservable and an action-creator, and is hopefully pretty straightforward to use. Please enjoy!

Linking separate PostgreSQL servers with a Foreign Data Wrapper

If you want to query a table in one database server from another you need a way to connect these two servers with each other. For PostgreSQL databases the feature that makes this possible is called Foreign Data Wrapper.

To use this feature you have to load its extension into the current database:

CREATE EXTENSION postgres_fdw;

The postgres_fdw extension ships with a default PostgreSQL installation. It allows you to connect a PostgreSQL database with another PostgreSQL database. You can connect a PostgreSQL database with databases by other vendors, too, but then you need external extensions like oracle_fdw for Oracle databases or mysql_fdw for MySQL databases. In this article we will only use postgres_fdw.

You can check if the extension was loaded successfully. The following query should return a row for the extension:

SELECT * FROM pg_extension WHERE extname='postgres_fdw';

The next step is to set up the remote server instance:

CREATE SERVER remotesrv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (
    host '127.0.0.1',
    port '5432',
    dbname 'remotedb'
  );

This statement registers a remote server under the name remotesrv, a name you can choose freely. You have to specify which Foreign Data Wrapper to use (postgres_fdw in this case) as well as the target host, port and database name.

The CREATE SERVER statement didn’t contain any user login information; you have to provide it via a user mapping:

CREATE USER MAPPING
  FOR CURRENT_USER
  SERVER remotesrv
  OPTIONS (
    user 'joe',
    password 'secret'
  );

In this case we map the remote user joe to the current user (CURRENT_USER) of the local server. It doesn’t have to be the current user; you could specify any local user name.

Now you have to import tables from the remote database. You can either explicitly import individual tables or a whole schema. Here’s how to import the public schema of the remote (“foreign”) database into the public schema of the local database:

IMPORT FOREIGN SCHEMA public
  FROM SERVER remotesrv
  INTO public;

You can restrict which tables to import with the LIMIT TO or EXCEPT clauses. The following statement will only import the tables books and students:

IMPORT FOREIGN SCHEMA public
  LIMIT TO (students, books)
  FROM SERVER remotesrv
  INTO public;

Now you can access these tables as if they were in the local database.
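For example, the students table imported above can now be queried like a local table, while the rows actually come from the remote server:

SELECT count(*) FROM students;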

Migrating from Oracle to PostgreSQL

We maintain several applications with an SQL database as their data storage. If we can decide freely, we usually opt for PostgreSQL as the database management system (DBMS). But sometimes our clients have specific requirements, because they are running the services on-premises, so we use our customers’ choice. SQL is SQL anyway, is it not?

No, it isn’t. And this year one of our customers asked us to migrate our application from Oracle to PostgreSQL. The migration was challenging, even though we are using an object-relational mapper (ORM) and the necessary changes to our application code were very limited.

In this post I want to explain the general, application-agnostic challenges of such a migration. A follow-up will cover the application- and framework-specific issues.

Why is it not easy?

Luckily, PostgreSQL supports the most common SQL features of Oracle – especially sequences, PL/SQL-like scripts, triggers, foreign keys etc. – and all the important datatypes. So you are mostly migrating from an inferior to a more powerful solution, at least feature- and capability-wise from a client perspective. Please note that I am not judging the performance, replication, clustering and other administrative features here!

Unfortunately, there is no simple and powerful enough tool to just dump the Oracle database into some standard SQL text format that you could pipe into psql or use with pg_restore. In addition, there is the challenge of converting the different number types of Oracle to semantically equivalent PostgreSQL types etc.

Another challenge is coping with referential integrity. Especially data in complex schemas with a lot of foreign keys is harder to migrate without proper tool support, as you have to figure out the correct order of tables to restore.

Nevertheless, such a migration is doable, especially if you do not have too much scripting logic in your database. And there is a free tool to help you with all this stuff called Ora2Pg.

What can Ora2Pg do for you?

It can export the full database schema including constraints, convert datatypes based on configuration provided by you, and offers a basic automatic conversion of PL/SQL code to PL/pgSQL. When running the migration you can interactively choose what to migrate and what to skip. That allows you to only migrate the data into a readily prepared schema, for example.

How to run Ora2Pg?

Ora2Pg is a collection of Perl scripts and configuration files, so you need a system capable of running these. If you do not want to mess with your whole system and install all of the dependencies, I prepared a Dockerfile able to run Ora2Pg:

FROM centos:7

# Prepare the system for ora2pg 
RUN yum install -y wget
RUN wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

COPY ol7-temp.repo /etc/yum.repos.d/
RUN yum install -y oraclelinux-release-el7
RUN mv /etc/yum.repos.d/ol7-temp.repo /etc/yum.repos.d/ol7-temp.repo.disabled
RUN yum install -y oracle-instantclient-release-el7
RUN yum install -y oracle-instantclient-basic
RUN yum install -y oracle-instantclient-devel
RUN yum install -y oracle-instantclient-sqlplus

RUN yum install -y perl perl-CPAN perl-DBI perl-Time-HiRes perl-YAML perl-local-lib make gcc
RUN yum install -y perl-App-cpanminus

RUN cpanm CPAN::Config
RUN cpanm CPAN::FirstTime

ENV LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib
ENV ORACLE_HOME=/usr/lib/oracle/21/client64

RUN perl -MCPAN -e 'install DBD::Oracle'

COPY ora2pg-21.1.tar.gz /tmp

WORKDIR /tmp
RUN tar zxf ora2pg-21.1.tar.gz && cd ora2pg-21.1 && perl Makefile.PL && make && make install

RUN mkdir -p /migration
RUN ora2pg --project_base /migration --init_project my_project
WORKDIR /migration/my_project

# uncomment this if you have a customized ora2pg.conf
#COPY ora2pg.conf /migration/my_project/config/

CMD ora2pg -t SHOW_VERSION -c config/ora2pg.conf && ora2pg -t SHOW_TABLE -c config/ora2pg.conf\
 && ora2pg -t SHOW_REPORT --estimate_cost -c config/ora2pg.conf\
 && ./export_schema.sh && ora2pg -t INSERT -o data.sql -b ./data -c ./config/ora2pg.conf

Here are the commands and the workflow to export the Oracle database using the above docker image:

docker build -t o2pg .
# this will fail initially but create the project structure and generate a default configuration file
docker run --name oracle-export o2pg
# copy the project structure to the host system
docker cp oracle-export:/migration/my_project ./my_project_migration/

Now you can edit the configuration in my_project_migration/config and copy it to the directory where you build and run the docker commands. Most importantly, you have to change the connection parameters at the top of the ora2pg.conf file. When you are ready for the first go, you need to enable configuration copying in the Dockerfile (the commented-out COPY line) and rebuild the image. Now you should get your first somewhat usable export.

The most important config options we changed for our projects are (a hypothetical configuration excerpt follows the list):

  • Connection parameters
  • Excluded tables that you do not want to migrate
  • Deletion of the contents of the target tables
  • Conversion of some datatypes, like NUMBER(*,0) to bigint and NUMBER(1) to boolean for some columns
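Such an excerpt of config/ora2pg.conf could look roughly like this – the directive names exist in Ora2Pg, but all values, table and column names here are placeholders for illustration:

# connection parameters
ORACLE_DSN	dbi:Oracle:host=oracle-db.intranet;sid=MYAPP;port=1521
ORACLE_USER	my_oracle_user
ORACLE_PWD	my_oracle_password

# tables we do not want to migrate
EXCLUDE		legacy_table temp_import

# empty the target tables before importing data
TRUNCATE_TABLE	1

# treat a NUMBER(1) column as a boolean
REPLACE_AS_BOOLEAN	AUTHOR:ACTIVE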

Most of the defaults are sensible to begin with, but you can tailor the export specifically to your needs. If you feel ready to try the import, you can run it using a second docker image based on the following Dockerfile-import:

FROM centos:7

# Prepare the system for ora2pg 
RUN yum install -y wget
RUN wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

COPY ol7-temp.repo /etc/yum.repos.d/
RUN yum install -y oraclelinux-release-el7
RUN mv /etc/yum.repos.d/ol7-temp.repo /etc/yum.repos.d/ol7-temp.repo.disabled
RUN yum install -y oracle-instantclient-release-el7
RUN yum install -y oracle-instantclient-basic
RUN yum install -y oracle-instantclient-devel
RUN yum install -y oracle-instantclient-sqlplus
RUN yum install -y postgresql-server

RUN yum install -y perl perl-CPAN perl-DBI perl-Time-HiRes perl-YAML perl-local-lib make gcc
RUN yum install -y perl-App-cpanminus

RUN cpanm CPAN::Config
RUN cpanm CPAN::FirstTime

ENV LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib
ENV ORACLE_HOME=/usr/lib/oracle/21/client64

RUN perl -MCPAN -e 'install DBD::Oracle'

COPY ora2pg-21.1.tar.gz /tmp

WORKDIR /tmp
RUN tar zxf ora2pg-21.1.tar.gz && cd ora2pg-21.1 && perl Makefile.PL && make && make install

# you need to mount the project volume to /my_project
WORKDIR /my_project

ENV pg_port=5432

CMD ./import_all.sh -d $pg_db -h $pg_host -p $pg_port -U $pg_user -o $pg_user

To run the import with your exported project, build and run the import container as follows:

docker build -t postgres-import -f Dockerfile-import .
docker run -it --rm -e pg_host=target-db.intranet -e pg_db=my_project_db -e pg_user=my_db_user -v ./my_project_migration:/my_project postgres-import

Then you can interactively provide the database password and decide which migration steps to perform.

Caveat

Depending on your schema, data and privileges in the target database, it may be necessary to disable all triggers before importing and re-enable them after a successful import. This can be done by replacing all occurrences of TRIGGER USER with TRIGGER ALL in the file data/data.sql. You may need appropriate privileges for this to work.
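Per table, that replacement amounts to statements like these (author again being a placeholder table name):

-- before the import: ALL also disables system triggers such as foreign key checks
ALTER TABLE author DISABLE TRIGGER ALL;

-- after the import
ALTER TABLE author ENABLE TRIGGER ALL;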

Final words

Such a migration is not an easy task, but may be worth it in terms of total cost of ownership and maybe developer satisfaction, as Oracle has some oddities and limitations for backend developers.

In a follow-up article I will tackle some application-specific issues that we encountered when migrating our system from Oracle to PostgreSQL using the above approach and tools.

WPF: Recipe for customizable User Controls with flexible Interactivity

The most striking feature of WPF is its peculiar understanding of flexibility. Which means that you are usually free to do anything, anywhere, but it instantly hands the responsibility back to you to pay super-close attention to where you actually are.

As projects grow, their user interfaces usually grow with them, and over time the need appears to re-use a given component of that user interface.

This is not only the DRY principle at work on the code level; consistency is also one of the Nielsen-Norman usability heuristics, i.e. a good plan as to not confuse your users with needless irritations. This establishes trust. Good stuff.

Now say that you have a re-usable custom button that should

  1. Look a certain way at a given place,
  2. Show custom interactivity (handling of mouse events)
  3. Be fully integrated in the XAML workflow, especially accepting Bindings from outside, e.g. inside an ItemsControl or other list-type Control.

As usual, this was a multi-layered problem. It took me a while to find my optimum-for-now solution, but I think I managed, so let me try to break it down a bit. Consider the basic structure:

<ItemsControl ItemsSource="{Binding ListOfChildren}">
	<ItemsControl.ItemTemplate>
		<DataTemplate>
			<Button Style="{StaticResource FancyButton}"
				Command="{Binding SomeAwesomeCommand}"
				Content="{Binding Title}"
				/>
		</DataTemplate>
	</ItemsControl.ItemTemplate>
</ItemsControl>

Quick Note about the Style

We see the Styling of FancyButton (defined in some ResourceDictionary, merged together with a lot of stuff in the App.xaml Application.Resources), and I want to define the styling there so I can modify it in some other places, i.e. it could be defined in said ResourceDictionary like

<Style TargetType="{x:Type Button}" x:Key="FancyButton"> ... </Style>

<Style TargetType="{x:Type Button}" BasedOn="{StaticResource FancyButton}" x:Key="SmallFancyButton"> ... </Style>
<Style TargetType="{x:Type Button}" BasedOn="{StaticResource FancyButton}" x:Key="FancyAlertButton"> ... </Style>
... as you wish ...

Quick Note about the Command

We also see SomeAwesomeCommand, defined in the view model of whatever ListOfChildren actually consists of. So, SomeAwesomeCommand is a property of a custom ICommand-implementing class, but there’s a catch:

Commands on a Button work on the Click event. There’s no native way to assign them to different events like e.g. DragOver, so it sounds like our new User Control would need quite some code-behind in order to wire up any non-Click event with that Command. Thankfully, there is a surprisingly simple solution, called Interaction.Triggers. Apply it by

  1. installing System.Windows.Interactivity from NuGet
  2. adding the namespace to your XAML namespaces: xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
  3. adding the trigger inside:
<Button ...>
    <i:Interaction.Triggers>
        <i:EventTrigger EventName="DragOver">
            <i:InvokeCommandAction Command="WhateverYouFeelLike"/>
        </i:EventTrigger>
    </i:Interaction.Triggers>
</Button>

But that only as a side note; remember that the point of having our separate User Control is still valid, considering that you would want to have some extra interactivity in your own use cases.

Now: Extracting the functionality to our own User Control

I chose not to derive some class from the Button class itself because it would couple me closer to the internal workings of Button; i.e. an application of Composition over Inheritance. So the first step looks easy: right-click in the VS Solution Explorer -> Add -> User Control (WPF) -> create it under some name (say, MightyButton) -> move the <Button.../> there -> include the XAML namespace and place the MightyButton in our old code:

// old place
<Window ...
	xmlns:ui="clr-namespace:WhereYourMightyButtonLives
	>
	...
	<ItemsControl ItemsSource="{Binding ListOfChildren}">
		<ItemsControl.ItemTemplate>
			<DataTemplate>
				<ui:MightyButton Command="{Binding SomeAwesomeCommand}"
						 Content="{Binding Title}"
						 />
			</DataTemplate>
		</ItemsControl.ItemTemplate>
	</ItemsControl>
	...
</Window>

// MightyButton.xaml
<UserControl ...>
	<Button Style="{StaticResource FancyButton}"/>
</UserControl>

But now it gets tricky. This could compile, but still not work, because of several Binding mismatches.

I’ve written a lot already, so let me just define the main problems. I want my call to look like this (DropTargetButton being the real name of our MightyButton from above):

<ui:DropTargetButton Style="{StaticResource FancyButton}"
		     Command="{Binding OnBranchFolderSelect}"
		     ...
		     />

But again, these are two parts. Let me clarify.

Side Quest: I want the Style to be applied from outside.

Remember the idea of having SmallFancyButton, FancyAlertButton or whatsoever? The problem is that I can’t just pass it to <ui:MightyButton.../> as intended (see the last code block), because FancyButton has its definition of TargetType="{x:Type Button}" – not TargetType="{x:Type ui:MightyButton}".

Surely I could change that. But I would regret it when I change my component again; I would always have to adjust the FancyButton definition (at several places) even though it always describes a Button.

So let’s keep the Style TargetType to be Button, and just treat the Style as something to be passed to the inner-lying Button.

Main Quest: Passing through Properties from the ListOfChildren members

Remember that any WPF Control inherits a lot of Properties (like Style, Margin, Height, …) from its ancestors like FrameworkElement, and you can always extend that with custom Dependency Properties. Know that Command is actually not one of these inherited Properties – it only exists for several UI elements like the Button, but not in a general sense – so we can easily add it ourselves.

Go to the code-behind, and at some suitable place make a new Dependency Property. There is a Visual Studio shorthand of writing “propdp” and pressing Tab twice. Then adjust it to read like

public ICommand Command
{
    get { return (ICommand)GetValue(CommandProperty); }
    set { SetValue(CommandProperty, value); }
}

public static readonly DependencyProperty CommandProperty =
    DependencyProperty.Register("Command", typeof(ICommand), typeof(DropTargetButton), new PropertyMetadata(null));

With Style, we have one of these inherited Properties. Nevertheless, I want my Property to be called Style, which is quite straightforward by just employing the new keyword (i.e. we really want to shadow the inherited property, which is tolerable because we already know our FancyButton Style to its full extent.)

public new Style Style
{
    get { return (Style)GetValue(StyleProperty); }
    set { SetValue(StyleProperty, value); }
}

public static readonly new DependencyProperty StyleProperty =
    DependencyProperty.Register("Style", typeof(Style), typeof(DropTargetButton), new PropertyMetadata(null));

And then we’re nearly there; we just have to make the Button inside know where to take these Properties from. In an easy setting, this could be accomplished by making the UserControl constructor set DataContext = this; – but STOP!

If you do that, you lose easy access to the outer ItemsControl elements. Sure, you could work around that – remember the WPF philosophy of allowing you many ways – but more practicable, imo, is an ElementName. Let’s be boring and take “Root”.

<UserControl x:Class="ComplianceManagementTool.UI.DropTargetButton"
	     ...
             xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
             xmlns:local="clr-namespace:ComplianceManagementTool.UI"
             x:Name="Root"
             >
    <Button Style="{Binding Style, ElementName=Root}"
            AllowDrop="True"
            Command="{Binding Command, ElementName=Root}"
            Content="{Binding Text, ElementName=Root}"
            >
        <i:Interaction.Triggers>
            <i:EventTrigger EventName="DragOver">
                <i:InvokeCommandAction Command="{Binding Command, ElementName=Root}"/>
            </i:EventTrigger>
        </i:Interaction.Triggers>
    </Button>
</UserControl>

As some homework, I’ve left you the Content property to add as a Dependency Property as well. You could go ahead and add as many DPs to your User Control as you like, and inside that Control (which is quite maiden-like still, if we ignore all that DP boilerplate code) you could have as much complex interactivity as you require, without losing the flexibility of passing the corresponding Commands from the outside.

Of course, this is just one way of about seventeen plusminus thirtythree, add one or two, which is about the usual number of WPF ways of doing things. Nevertheless, this solution now lives in our blog, and maybe it is of some help to you. Or Future-Me.