Create custom jre in Docker

I recently wrote a Java application and wanted to run it in a Docker container. Unfortunately, my program needs a module from the Java JDK, which is why I can’t run it on the standard JRE. When I run the program with the JDK, everything works, but I ship far more than I actually need and end up with a 390 MB Docker container.

That’s why I set out to build my own JRE, cut down to exactly what my application needs.

For this I found two JDK tools that help me: jdeps and jlink. jdeps checks the dependencies of an application and can summarise them in a file. This file can be used as input for jlink, which builds a custom JRE from it.

In the following, I will present a way to build a custom JRE with Gradle and these two tools in a multi-stage Dockerfile, and run an application on it.

Configuration in Gradle

First we need to do some configuration in the build.gradle file. To use jdeps, the application must be packaged as an executable jar file and all dependencies must be copied to a separate folder. So first we create a task that copies all runtime dependencies to a folder named lib. For the jar, we need to define the main class and set the class path so that all dependencies are looked up in the lib folder.

// copies all the jar dependencies of your app to the lib folder
task copyDependencies(type: Copy) {
    from configurations.runtimeClasspath
    into "$buildDir/lib"
}

jar {
    manifest {
        attributes["Main-Class"] = "Main"
        attributes["Class-Path"] = configurations.runtimeClasspath
            .collect{'lib/'+it.getName()}.join(' ')
    }
}
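
To verify that the manifest turned out as intended, you can (optionally) peek into the built jar – the jar name here is just a placeholder:

# print the generated manifest of the built jar
unzip -p build/libs/app.jar META-INF/MANIFEST.MF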

Docker Image

Now we can get to work on the Docker image.

As a first step, we build our JRE in a JDK image. To do this, we run the copyDependencies task we just created and build the Java application with Gradle. Then we let jdeps collect all module dependencies of the jar and the libraries in the lib folder.
The output is written to the file jre-deps.info. It is therefore important that no errors or warnings end up in this output, which is what the -q parameter is for. The --print-module-deps option is crucial so that the required modules are printed and saved in the file.

The file is now passed to jlink, which builds a custom-fit JRE for the application from it. The other parameters set in the call further reduce the size; they are described in detail in the jlink documentation.

FROM eclipse-temurin:17-jdk-alpine AS jre-build

COPY . /app
WORKDIR /app

RUN chmod u+x gradlew && ./gradlew copyDependencies && ./gradlew build

# find JDK dependencies dynamically from jar
RUN jdeps \
--ignore-missing-deps \
-q \
--multi-release 17 \
--print-module-deps \
--class-path build/lib/* \
build/libs/app-*.jar > jre-deps.info

RUN jlink \
--compress 2 \
--strip-java-debug-attributes \
--no-header-files \
--no-man-pages \
--output jre \
--add-modules $(cat jre-deps.info)

FROM alpine:3.17.0
WORKDIR /deployment

# copy the custom JRE produced from jlink
COPY --from=jre-build /app/jre jre
# copy the app dependencies
COPY --from=jre-build /app/build/lib/* lib/
# copy the app
COPY --from=jre-build /app/build/libs/app-*.jar app.jar

# run in user context
# alpine has no useradd, so use busybox adduser instead
RUN adduser -D schneide
RUN chown schneide:schneide .
USER schneide

# run the app on startup
CMD jre/bin/java -jar app.jar

In the second Docker stage, a smaller image is used and only the created jre, the dependencies and the app.jar are copied into it. The app is then executed with the jre.
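
Building and running the image then works as usual; the image name here is just a placeholder:

# build the multi-stage image and run the app
docker build -t my-app .
docker run --rm my-app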

In my case, I was able to reduce the Docker image size to 110 MB with the alpine base image, which is less than a third of the original 390 MB. With an ubuntu:jammy base image it comes to 182 MB, which is still a reduction of more than half.

Using custom Docker containers for development with WebStorm & Co.

Docker has become one of the go-to tools of many developers these days. Not because every project should cram in as many technology buzzwords as possible, but because containers offer a great deal of flexibility for comparatively little setup hassle.

For stuff like Node-based applications, using a Dev Container is useful because, in principle, you do not need to have any of the npm tooling on your actual machine – not only do you avoid those monstrous node_modules folders, you also avoid accidental dependencies on some specific configuration that might hold true on your device, but not generally.

Probably for some of these reasons, JetBrains included Docker Dev Containers as a kind of “remote” development. In a sense, a Docker container can be thought of as a remote machine, regardless of the fact that it shares your local hardware and is just a software abstraction.

In my opinion, JetBrains usually makes great software, but there is some weird behaviour in their handling of Docker Dev Containers, and it took us a while to find a reasonably general and IDE-independent solution; I’ll just use WebStorm as an example of something that turned out unusually hard to tame. I guess it will get better eventually.

For now, one might think of using the built-in config like:

  1. New Run Configuration -> npm
  2. Node Interpreter: “…”
  3. “+” -> Add Remote… -> “Docker”
  4. Use an image of your choice, either one of the node base images or a custom one (see below) with its corresponding tag

Now, for reasons that seem to be completely undocumented and unavoidable (tell me if you know more!), the IDE then forces you to mount your project to /opt/project inside this container, where it gets mirrored at runtime to somewhere under /tmp/<temporary uuid>/ – and in several of our projects (due to a folder structure that is not even particularly unusual) this made the option completely unusable.

The way one can work without these strange idiosyncrasies is as follows:

First, create a Dockerfile in which you do all the required setup. Optionally, you might want to switch the user away from “root” to something more restricted like “node” (even though in development you probably keep an eye on everything anyway). You can do more custom setup here. This can look like:

FROM node:16.18.0-bullseye-slim

WORKDIR /your-home-inside-container
RUN chown node .

COPY package.json package-lock.json /your-home-inside-container/

USER node

RUN npm ci --ignore-scripts

# COPY <whatever you might want> <where you want it inside>

EXPOSE 3000

CMD npm start

From that Dockerfile, build a local image in the same folder like:

# you might need -f if the Dockerfile is not named "Dockerfile"
docker build -t your-dev-image .

Then, create a new Run Configuration, but choose “Shell script” (not npm):

docker run -it --rm --entrypoint= -v ${PWD}/src:/your-home-inside-container/src -p 0.0.0.0:3000:3000 your-dev-image

You might use a different “-p” port mapping if you do not want your development server listening on port 3000 (another advantage of Dev Containers: you can easily run multiple instances on different ports).

That is about all the magic there is. But there are two further things that could be important here:

Hot Reloading (live updating whenever source files change)

This is done rather easily, but the details seem to change once in a while. We figured out that, at least if you are using react-scripts@5.0.1 (which is what “npm start” invokes, unless you configured it differently), you just need to set the environment variable “WATCHPACK_POLLING=true”. I.e. put this in your Dockerfile:

ENV WATCHPACK_POLLING true

or pass it into your docker run ... -e WATCHPACK_POLLING=true ... your-dev-image line.

Routing a development proxy to some “local host”

If your software e.g. addresses a backend that is running on your development machine or in another Docker Dev Container, it cannot just access that host from inside its own container. Nor is the port mapping via “-p …:…” of any use, because that covers the opposite direction – i.e. which port of the container is exposed to outside access – whereas here we want to go from the inside out.

Where the software inside the container would normally address “localhost”, it needs to be directed at the hostname under which your local machine appears. Docker has a special hostname for that: host.docker.internal.

I.e. if your local backend is running on “localhost:8080” on your machine, you need to tell your Dev Container to direct its requests to “host.docker.internal:8080”.

In one of our projects, we needed some specific control over the proxy that the React development server gives you, and here is a way to gain that control – add a “setupProxy.js” inside your src/ folder and put something like this in it:

const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function(app) {
    if (process.env.LOCAL_DEVELOPMENT) {
        return;
    }

    let httpProxyMiddleware = createProxyMiddleware({
        target: process.env.REACT_APP_PROXY || 'http://localhost:8080',
        changeOrigin: true,
    });
    app.use('/api', httpProxyMiddleware); // change to your needs accordingly
};

This way, one can always change the address by setting a REACT_APP_PROXY environment variable as in the step above, and one can also disable the proxying altogether by setting the LOCAL_DEVELOPMENT env variable. Name these as you like, and you can even extend this setupProxy to include web sockets or different proxies for different routes – if you have any questions on that, just comment below 🙂
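
Putting the pieces together, the “Shell script” run configuration from above could then look roughly like this – paths, ports and names are the placeholders from the examples above:

docker run -it --rm --entrypoint= \
  -v ${PWD}/src:/your-home-inside-container/src \
  -p 0.0.0.0:3000:3000 \
  -e WATCHPACK_POLLING=true \
  -e REACT_APP_PROXY=http://host.docker.internal:8080 \
  your-dev-image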

Improving my C++ time queue

Another code snippet that can be found in a few of my projects is the “time queue”, which is a simple ‘priority queue’ style data structure that I use to defer actions to a later time.

With this specific data structure, I have multiple implementations that clearly came from the same source. One indicator for that is a snarky comment in both about how std::list is clearly not the best choice for the underlying data structure. They have diverged a bit since then though.

Requirements

In my use case I do not use time points, but only durations, in standard-library nomenclature. This is a pretty restrictive requirement, because otherwise any priority queue (e.g. from boost or even from the standard library) could be used quite well. On the other hand, it allows me to use floating-point durations with predictable accuracy. The queue has two important functions:

  1. insert to insert a timeout duration and a payload.
  2. tick is called with a specific duration and then reports the payloads that have timed out since their insertions.

Typically tick is called a lot more frequently than insert, and it should be fast. The payload is typically something like a std::function or an id for a state-machine that needs to be pulsed.

The basic idea is to only keep the duration difference to the previous item in the list. Only the first item keeps its total timeout. This way, when tick is called, usually only the first item needs to be updated. tick only has to touch more items when they time out.

Simple Implementation

One of the implementations for void insert(TimeType timeout, PayloadType payload) looks like this:

if (tick_active_)
{
  deferred_.push_back({ .remaining = timeout, .payload = std::move(payload) });
  return;
}

auto i = queue_.begin();
for (; i != queue_.end() && timeout > i->remaining; ++i)
  timeout -= i->remaining;

if (i != queue_.end())
  i->remaining -= timeout;

queue_.insert(i, { .remaining = timeout, .payload = std::move(payload) });

There is a special case there that guards against inserting into queue_ (which is still a very bad std::list) by instead inserting into deferred_ (which is a std::vector, phew). We will see why this is useful in the implementation of template <class Executor> void tick(TimeType delta, Executor execute):

tick_active_ = true;
auto i = queue_.begin();
for (; i != queue_.end() && delta >= i->remaining; ++i)
{
  delta -= i->remaining;
  execute(i->payload);
}

if (i != queue_.end())
  i->remaining -= delta;

queue_.erase(queue_.begin(), i);
tick_active_ = false;

while (!deferred_.empty())
{
  auto& entry = deferred_.back();
  insert(entry.remaining, std::move(entry.payload));
  deferred_.pop_back();
}

The timed out items are reported via a callback that is supplied as Executor execute. Of course, these can do anything, including inserting new items, which can invalidate the iterator. This is a common use case, in fact, as many deferred actions will naturally want follow ups (let’s ignore for the moment that the implementation is nowhere near exception safe…). The items that were deferred to deferred_ in insert get added to queue_ after the iteration is complete.
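
For reference, here is a minimal sketch of the members these snippets assume – the names queue_, deferred_ and tick_active_ are taken from the code above, while the entry type is my guess:

struct Entry
{
  TimeType remaining;    // time left after the previous entry in the queue
  PayloadType payload;
};

std::list<Entry> queue_;      // pending items, ordered by timeout
std::vector<Entry> deferred_; // items inserted while a tick is in progress
bool tick_active_ = false;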

This worked well enough to ship, but the other implementation had another good idea. Instead of reporting the timed-out items to a callback, it just returned them in a vector. The whole tick_active_ guard becomes unnecessary, as any processing on the returned items is naturally deferred until after the iteration:

std::vector<PayloadType> tick(TimeType delta)
{
  std::vector<PayloadType> result;
  auto i = queue_.begin();
  for (; i != queue_.end() && delta >= i->remaining; ++i)
  {
    delta -= i->remaining;
    result.push_back(i->payload);
  }

  if (i != queue_.end())
    i->remaining -= delta;

  queue_.erase(queue_.begin(), i);
  return result;
}

This solves the insert-while-tick problem, and lets us use the result neatly in a range-based for-loop like this: for (auto const& payload : queue.tick(delta)) {}. Which I personally always find a little bit nicer than inversion-of-control. However, the cost is at least one extra allocation for timed-out items. This might be acceptable, but maybe we can do better for very little extra complexity.

No copies at all

Note that the items are already in a list, just interleaved with the remaining time. Let’s split that up into a std::vector for each. Now when we tick and have timed-out items, we just mark the range at the beginning of the payload vector and return the items as a std::span. They are subsequently erased on the next call to either tick or insert. Let’s start with a private helper function that I will call slide, for lack of a better name:

std::tuple<TimeType, std::size_t> slide(TimeType delta)
{
  if (timeout_end_ > 0)
  {
    payloads_.erase(payloads_.begin(), payloads_.begin() + timeout_end_);
    timeout_end_ = 0;
  }
  auto i = remaining_.begin();
  for (; i != remaining_.end() && delta >= *i; ++i)
    delta -= *i;

  if (i != remaining_.end())
    *i -= delta;

  auto offset = i - remaining_.begin();
  return {delta, offset};
}

It is used in both interface functions:

void insert(TimeType delta, PayloadType payload)
{
  auto [remaining, offset] = slide(delta);
    
  remaining_.insert(remaining_.begin() + offset, remaining);
  payloads_.insert(payloads_.begin() + offset, std::move(payload));
}

std::span<PayloadType> tick(TimeType delta)
{
  auto [remaining, offset] = slide(delta);

  timeout_end_ = offset;
  remaining_.erase(remaining_.begin(), remaining_.begin() + offset);
  return {payloads_.data(), timeout_end_};
}
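
For completeness, the members this version assumes would look something like this – again, the names are inferred from the snippets:

std::vector<TimeType> remaining_;   // per-item time left after the previous item
std::vector<PayloadType> payloads_; // payloads, in the same order as remaining_
std::size_t timeout_end_ = 0;       // number of timed-out payloads still parked at the front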

Now this is pretty neat. We can still use the timed-out items from tick directly in a loop like for (auto payload : q.tick(500ms)), but there are no more allocations and not even moves. Of course, the std::span returned from tick is only valid until the next call to either tick or insert, but I find that to be an acceptable trade-off.

I am pretty happy with how that improvement turned out, except for the name of the helper function. Maybe you have a nice idea? Let me know!

Copying and moving rows between tables in PostgreSQL

In this article, I’ll show some helpful tips for copying and moving data between database tables in PostgreSQL.

Copying

The simplest operation is copying rows from one table to another table. The associated SQL query is known to most. You can simply combine an INSERT with a SELECT:

INSERT INTO short_books
  SELECT *
    FROM books
    WHERE pages < 50;

Of course, if you want to copy rows into a new table, you must first create the target table with the same columns. Instead of just repeating the original CREATE TABLE with all its column definitions under a different name, there is a shortcut in the form of CREATE TABLE … LIKE.

CREATE TABLE books_copy (LIKE books);

If you want the copy to inherit all constraints, indices and defaults of the source table you can add INCLUDING ALL:

CREATE TABLE books_copy (LIKE books INCLUDING ALL);

Instead of executing a CREATE TABLE first and then an INSERT, you can also directly combine CREATE TABLE with a SELECT:

CREATE TABLE books_copy AS
  SELECT * FROM books;

Moving

The direct method of moving specific rows from one table to another table is a bit less known. You can of course first copy the rows into the target table and then delete the rows from the source table. However, this is also possible with just one statement, in one go. To do this, you need to know the RETURNING clause. It can be appended to a DELETE or UPDATE statement and causes the affected rows to be returned as the result set after the respective action:

DELETE FROM books
  WHERE pages < 50
  RETURNING
    title, author, pages;

This can be used in combination with the WITH … AS clause to move rows between tables with just one SQL statement:

WITH selection AS (
  DELETE FROM books
  WHERE pages < 50
  RETURNING *
)
INSERT INTO short_books
  SELECT * FROM selection;

The function of WITH can be thought of as defining a named temporary view that can only be used in the current statement.
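
If the target table does not have exactly the same columns as the source, the same pattern works with explicit column lists – here with the columns from the earlier examples:

WITH selection AS (
  DELETE FROM books
  WHERE pages < 50
  RETURNING title, author, pages
)
INSERT INTO short_books (title, author, pages)
  SELECT title, author, pages FROM selection;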

Web Security for Frontend and Backend

The web is everywhere and we use it for tons of important tasks like online banking, shopping and communication. So it becomes increasingly important to implement proper security. As attacks like cross-site scripting (XSS) or cross-site request forgery (CSRF) are widespread, browsers, web standards designers and web application developers implement more and more mechanisms to make such attacks harder or even impossible. This puts a certain burden on both frontend and backend developers.

Since security is hard and should not be an afterthought, I would like to give you some advice for implementing a web app with a Javascript frontend and a backend service written in one of the common languages/frameworks like .NET, Micronaut, Javalin, Flask or the like.

Frontend advice

I prefer traditional cookie-based sessions to JWT-based approaches for interactive web frontends because of their simplicity, browser support and the possibility to use them without Javascript. For service-to-service communication, bearer tokens of some kind may be more appropriate. Your Javascript client has to include the credentials in its fetch() calls to cause the browser to send the cookie.
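
With the Fetch API, this is done via the credentials option; for cross-origin requests to the backend that could look like this (the URL is just a placeholder):

// include credentials so the browser also sends the session cookie
// on cross-origin requests to the backend
async function loadData() {
  const response = await fetch('https://api.example.com/data', {
    credentials: 'include',
  });
  return response.json();
}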

Unfortunately, incorrect use of cookies may be insecure, so be sure to check up-to-date advice on cookies; see some hints below in the backend part because cookies are configured and issued there.

Backend advice

Modern web security requires additional measures on the server side to ensure secure authentication and communication with web clients. You should use https wherever possible to gain at least transport security and avoid many cases of sniffed credentials or content manipulated between client and backend.

Improving security of cookies

First of all, cookies should be HttpOnly so that scripts cannot access the contents of a cookie. Furthermore, you should set the SameSite and Secure attributes appropriately and use https whenever possible. That way you have mitigated the most common attacks on your session handling and authentication.
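
A session cookie with these attributes could then look like this in the HTTP response (name and value are placeholders):

Set-Cookie: SESSIONID=abc123; Path=/; HttpOnly; Secure; SameSite=Lax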

Another bonus of cookies is that browsers can inform you about problems with your cookie setup, for example in their developer tools.

Configuring Cross-Origin Resource Sharing (CORS)

Nowadays it is common for the web app to be served from a different host than the backend API. This is a potential problem because attackers may sneak scripts into the browser of a user and use the existing session to access resources in an illegitimate way. Therefore another means of improving the security of web apps running in browsers was introduced: access control using CORS.

For browsers to be able to prevent or allow requests to certain resources, the backend has to provide appropriate Access-Control headers, most notably Access-Control-Allow-Origin and Access-Control-Allow-Credentials. Make sure to set these values correctly, or your frontend will have trouble accessing your backend, or you introduce a potential security hole.
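
For a frontend served from, say, https://app.example.com that uses cookie-based sessions, the backend would need to answer with headers along these lines:

Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Credentials: true

Note that Access-Control-Allow-Origin must not be the wildcard * when credentials are allowed.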

Fortunately many web frameworks make it easy to configure CORS, see Micronaut documentation for example.

Conclusion

Security is always important and browser vendors keep implementing additional measures to mitigate problems in the current web environment. Make sure you keep up with the latest advice and measures and implement them in your applications.

Applying the Golden Circle to software development

When I was a young and impressionable software developer (1998), the slogan of the year was “the code is the documentation”. This essentially meant that comments, and (inline) code comments in particular, were a sign of bad code. The reasoning was (and still is) that if you can’t articulate your ideas clear enough in source code, adding additional text won’t rescue your communication. “Communication” is the conveying of what you want to accomplish with your code towards two listeners: the computer and the next human that works with your code. The computer is often the easier part, because it just does as it is told without interpretation.

Some years later (2004), Jeff Atwood, the author of the influential codinghorror blog, still condemns most of the comments we could find in contemporary source code. But there is more nuance than just “comments are bad” and it is cleared up later (2006) that there is a difference in what should be expressed in source code and what should be expressed in comments. Yes, you should write comments, but the “right kind”.

According to Jeff Atwood, the source code contains the “how”. It tells the story in all its details for the next human and the computer. The comments, on the other hand, are not intended for the computer and shouldn’t contain details. They should contain the “why”, the high-level picture and the motivation behind the specific “how” that we find.

“The code tells you how, the comments tell you why” (2006) is a great way to describe the expected content of both “layers”.

But my feeling was (and still is) that something is missing in that description. My programs aren’t just code and comments, there are more things that I try to tell my story with. And just a few weeks ago, I had a sudden idea that I might be able to describe what that missing piece could be. It is just an idea and the puzzle probably is still missing pieces, but it feels “right” enough that I want to write this blog post to discuss the idea. But I need to talk about something else first.

In 2009, Simon Sinek presents the Golden Circle to the world. The TED talk is probably the most energetic piece of explanation in human history. The Golden Circle is “the world’s simplest idea”, defining three layers of “clarity” to actions:

  • What: The “lowest” level, meaning that every person knows “what they are doing”.
  • How: The “intermediate” level. Some people know “how they do it”. They explicitly choose their method of doing and can reason about it.
  • Why: The “highest” level. According to Simon Sinek, only a few people know “why they do it”. He talks about “purpose, cause and belief”.

If you don’t know about the Golden Circle yet, please watch the 20-minute TED talk while I wait here for you. If you want to think about it first – I’m patient. The Golden Circle has inspired and guided me ever since. Not that I’m very good at applying it in my business or personal life, but it stayed with me and gave me a coordinate system to categorize things.

And with that categorization practice, I feel as if the layers in Jeff Atwood’s blog post from 2006 are misnamed and one crucial layer is missing:

  • What: The code tells you what (not how!). It is the detailed step-by-step recipe to replicate a behaviour. It is so simple that even a computer can do it, without being aware of it.
  • How: This is the missing layer. I want to talk about it in a minute.
  • Why: That’s the part that is correctly placed and named: The why of a story requires the spoken word. Only comments are viable for this kind of information. The computer (as of today) has no understanding what any of it means.

The missing layer is the “How”. In my idea, I envisioned that everything that is deliberately put in the code, but not readily understood by a computer, like structures, patterns, idioms or even names (I call them “activated comments”), is there for the “How”. We structure our code not because the computer requires it; the compiling stages of our programming languages even interfold and compact it until it is a binary blob. We don’t name our variables and types because the computer would learn something from them. The first thing a compiler does is to shorten our names to unreadable “symbols”. Most of our patterns get replaced by other, more minute patterns that the compiler puts into place. We put these things in our code because they help us understand “how the story goes”. They provide us with guidance on how to make sense of the mess. The structure tells you how to approach the program.

The computer only knows about “What”. There are maybe some indications about a simple “How”-awareness in some technologies, but most of the time, the computer is deaf to human communication.

Which brings me to my description of code and comment, as an updated version of Jeff Atwood’s motto:

“The code tells you what, the structure tells you how and the comments tell you why”

By “structure”, I mean everything that gets lost during translation for the computer, but is visible for the human reader. It entails high-level things like “architecture” or “code design” and lower-end decisions like names and formatting. If you have a better word for it, let me know!

I hope that this blog post inspired you to have a thought of your own. Don’t hesitate to tell us your thoughts in the comments below.

Line endings in the repository

Git normally leaves files and their line endings untouched. However, it is often desirable to have uniform line endings in a project. Git provides support for this.

Config Variable

What some may already know is the configuration variable core.autocrlf. With this, a developer can locally specify that newly created files are checked in to Git with LF. By setting the variable to “true”, the files are converted to CRLF locally by Git on Windows and converted back to LF when they are stored in the repository. If the variable is set to “input”, the files are used locally with the same line endings as in Git, without conversion.
The problem is that this normalization only affects new files and that each developer must set the variable locally. If core.autocrlf is set to false, files can still be checked in with non-normalized line endings.
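
For reference, setting the variable is a one-liner per developer machine:

# on Windows: work with CRLF locally, store LF in the repository
git config --global core.autocrlf true

# on Linux/macOS: keep files as they are locally, normalize to LF on check-in
git config --global core.autocrlf input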

Gitattributes File

Another possibility is the .gitattributes file. The big advantage is that the file is checked in similarly to the .gitignore file and the settings therefore apply to all developers. To do this, the .gitattributes file is created in the repository and a path pattern and the text attribute are defined in it. The setting affects how the files are stored locally for the git switch, git checkout and git merge commands and how the files are stored in the repository for git add and git commit.

*.jpg          -text

The text attribute can be unset, as in the example above; then neither check-in nor check-out will do any conversion.

*              text=auto

The attribute can also be set to auto, as in the second example. In this case, line endings are converted to LF at check-in if Git recognizes the file contents as text. However, if a file is already stored with CRLF in the repository, no conversion takes place and it remains CRLF. Because of the * pattern, this setting applies to all file types here.

*.txt         text
*.vcproj      text eol=crlf
*.sh          text eol=lf

If the text attribute is set, the files are stored in the repository with LF by default. The eol attribute can additionally be used to force fixed line endings in the working tree for specific file types.

*.ps1	      text working-tree-encoding=UTF-16

Furthermore, settings such as the encoding can be made via the .gitattributes file by using the working-tree-encoding attribute. Everything else can be read in the documentation of the gitattributes file.
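
One thing worth knowing: adding or changing a .gitattributes file does not automatically convert files that are already in the repository. They can be renormalized explicitly:

# re-apply the text/eol attributes to all tracked files
git add --renormalize .
git commit -m "Normalize line endings"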

We use this possibility more and more often in our projects, sometimes just to set single file types like .sh files to LF, sometimes to normalize the whole project.

SQLite in ASP.NET 6.0: Access your database file via HTTP Endpoint

It is one of our fundamental principles to always choose the most-easy-while-capable tool for a job. For this, we try not to shower our customers with the newest, most hip technology available, but to use a technology stack we are

  • comfortable with
  • quick to provide the required minimum of customer value
  • keeping enough options open in case anything changes

One of the heavily affected aspects in that regard is the choice of data storage. There are a lot of different design paradigms one can choose from, but with the “most easy” aspect at hand, the question mostly revolves around the needs of the customer, not the wants (or “might be useful one day”) of the developer.

If your customer, for example, already has their PostgreSQL databases running in their Kubernetes cluster, it might be advisable to aim for that. If the customer does not have any such infrastructure yet, I start with the question:

Is anything more necessary than a single-file database?

For one of our ASP.NET 6.0 applications, this was answered with the choice of SQLite, due to it being well supported in the Microsoft universe including Entity Framework, which has many common use cases already covered, i.e. it lets you care more about your application logic than about database abstractions.
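
As a rough sketch of what that wiring looks like in an ASP.NET 6.0 Program.cs – the context class name is a placeholder, and the database file name matches the dataSource used in the controller below:

// requires the Microsoft.EntityFrameworkCore.Sqlite NuGet package;
// AppDbContext stands in for your own DbContext subclass
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlite("Data Source=sqlite.db"));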

(It might be said that an interesting project for .NET seems to be LiteDB, which also operates on a single database file, but at the time of this writing it seems to have gone stale in development / support, and therefore fell out of my favour soon. Sad.)

Now, in a project in which we are closely in touch with the customer and their live system, it has very often been useful to access their platform and take a snapshot of the database, be it for backup or to verify our logic. Given the technical overhead in that specific case (which required several steps of sequentially granted remote access), I thought to myself:

Why can’t I have a (sufficiently secured) HTTP endpoint that gives me this SQLite file as a File download?

The solution was a bit tricky, because either the file was not readable during that HTTP request (being open already), or the file stream failed because it was closed too early, or the encoding of the resulting file would not fit. What finally worked was:

        private readonly static System.Text.Encoding enc1252 =
            CodePagesEncodingProvider.Instance.GetEncoding(1252);

        [HttpGet("database")]
        public ActionResult GetDatabase()
        {
            var dataSource = "sqlite.db";
            if (!System.IO.File.Exists(dataSource))
            {
                return NotFound(dataSource);
            }
            db.SaveChanges();
            // Note: CloseConnection() was not required!

            using var fs = new FileStream(dataSource, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
            var reader = new StreamReader(fs, enc1252);
            var data = reader.ReadToEnd();
            var ms = new MemoryStream(enc1252.GetBytes(data));
            return new FileStreamResult(ms, "application/octet-stream");
        }

Feel free to comment on that approach, because I found it more than non-trivial to arrive there, and maybe I missed something obvious. Some definite stumbling stones were:

  • The Mime Type “application/octet-stream”, which for some reason would not work with the more adequately sounding choice “application/x-sqlite3” – I have no idea why.
  • The Encoding, which on our system was the Windows CodePages-1252 default, which needed to be specified not only in the interpretation of our bytes stream (second location), but also in the definition of the StreamReader itself (first location).
  • Please note that if your database is encoded via CP-1252, you also need the System.Text.Encoding.CodePages package (available via NuGet)
  • What looks like a missing “using” is really intentional: if the StreamReader was opened with “using var reader = …”, it had the effect of being disposed before the request was handled correctly – I ran into a FileStreamResult error: “Cannot access a closed Stream.” Keeping the StreamReader open solved that, and the internet told me that this is still not a memory leak; the StreamReader reader gets disposed when the FileStream fs is disposed (see the using in front of that), but it still feels weird.

If you have any comments on that, I’d be very glad to learn from them, but if you don’t and you just have another use case for that problem – I’m happy to help!

Simple abstractions are good abstractions

I think that a lot of accidental complexity in software is produced by not picking the simplest abstraction for the job. Let me lead with an example: Consider this code from a code generator that generates C++ code:

std::ostringstream extra_properties;
if (!attribute.unit.empty())
{
  extra_properties << fmt::format("\n      properties.set_unit(\"{0}\");", attribute.unit);
}
if (!attribute.min_value.empty())
{
  extra_properties << fmt::format("\n      properties.set_min_value(\"{0}\");", attribute.min_value);
}
if (!attribute.max_value.empty())
{
  extra_properties << fmt::format("\n      properties.set_max_value(\"{0}\");", attribute.max_value);
}

It has a lot of ugly duplication: basically everything but the method names and values. So, how do we get rid of the duplication? Just a couple of years ago, I would probably have used a function for that:

void property_snippet(std::ostringstream& str, std::string const& method_name, std::string const& value)
{
  if (value.empty())
    return;
  str << fmt::format("\n      properties.{0}(\"{1}\");", method_name, value);
}

And then turn the call site code into:

property_snippet(extra_properties, "set_unit", attribute.unit);
property_snippet(extra_properties, "set_min_value", attribute.min_value);
property_snippet(extra_properties, "set_max_value", attribute.max_value);

Back then, I would have said that this is a definite improvement, but nowadays I am not so sure anymore. The call-site is a lot more concise, but we still have about half of its code duplicated: the first half of each line. The additional function adds lots of complexity that is not necessarily offset by the gain at the call-site: the declaration with all the parameters. And the code gets separated, which is only really good if the function does a little bit more than this one does.

This variant can, however, be made simpler with lambdas that capture extra_properties instead of passing it each time. While that is a better solution, I would argue that function objects and capturing are not necessarily simple either, so this only makes second place.
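
A sketch of that lambda variant, capturing extra_properties by reference, could look like this:

auto property_snippet = [&extra_properties](std::string const& method_name, std::string const& value)
{
  if (value.empty())
    return;
  extra_properties << fmt::format("\n      properties.{0}(\"{1}\");", method_name, value);
};

property_snippet("set_unit", attribute.unit);
property_snippet("set_min_value", attribute.min_value);
property_snippet("set_max_value", attribute.max_value);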

Nowadays, my first go-to abstraction is an in-place list and a loop:

std::tuple<char const*, std::string> methods_and_values[] = {
  {"set_unit", attribute.unit},
  {"set_min_value", attribute.min_value},
  {"set_max_value", attribute.max_value},
};

for (auto [method_name, value] : methods_and_values)
{
  if (value.empty())
    continue;
  extra_properties << fmt::format("\n      properties.{0}(\"{1}\");", method_name, value);
}

For me, this has the added benefit that it clearly separates the ‘inert’ data part of the code from the ‘active’ transformation. While this example is C++, this works in almost all languages that I know of, even such arcane beasts as Xbase++.

PostgreSQL’s new MERGE command

PostgreSQL version 15 introduces a new SQL command: the MERGE command. This allows merging a table into another table. The MERGE command has existed for some time in other databases such as Oracle or SQL Server.

The principle of this command is that you have a target table in which you want to insert or remove data based on a source table under certain conditions, or you want to update existing entries with data from the source table. The source table doesn’t have to be a real table, it can just as easily be a SELECT query.

How to use it, step-by-step

The command begins with MERGE INTO, followed by the name of the target table. We call it dest here:

MERGE
  INTO dest ...

Then you specify the source table with USING, here we call it src:

MERGE
  INTO dest
  USING src
  ...

If you want to use a SELECT query as the source instead of a real table, you can do it like this:

MERGE
  INTO dest
  USING (SELECT ... FROM ...) AS src
  ...

Now you need a condition that is used to match entries from one table to entries from the other table. This is specified after ON. In this example we simply use the IDs of the two tables:

MERGE
  INTO dest
  USING src
  ON dest.id=src.id
  ...

This is followed by a case distinction that describes what should happen if the condition either applies or not. The possible actions can be: UPDATE, DELETE, INSERT, or DO NOTHING.

The two cases are specified with WHEN MATCHED THEN and WHEN NOT MATCHED THEN:

MERGE
  INTO dest
  USING src
  ON dest.id=src.id
  WHEN MATCHED THEN
    UPDATE SET ...
  WHEN NOT MATCHED THEN
    INSERT (...) VALUES (...);

If a match exists, then reasonable actions are UPDATE, DELETE, or DO NOTHING. If no match exists, then reasonable actions are INSERT or DO NOTHING.

In the WHEN cases, additional conditions can be specified with AND:

MERGE
  INTO dest
  USING src
  ON dest.id=src.id
  WHEN MATCHED AND dest.value > src.value THEN
    DELETE
  WHEN MATCHED THEN
    UPDATE SET ...
  WHEN NOT MATCHED THEN
    DO NOTHING;

A realistic example

Here’s an example demonstrating a use case that might occur in the real world:

MERGE
  INTO account a
  USING transaction t
  ON a.id=t.account_id
WHEN MATCHED THEN
  UPDATE SET balance = a.balance + t.amount
WHEN NOT MATCHED THEN
  INSERT (id, balance) VALUES (t.account_id, t.amount);

This statement processes a table of monetary transactions and applies them to their matching customer accounts by adding the amount of each transaction to the balance of the matching account. If no matching account exists it will be created and the initial balance is the amount of the first transaction.