Windows 10 quality of life features

Starting with Windows 10, Microsoft switched from big-bang releases of its operating system to so-called rolling releases: new features and improvements are released at regular intervals – once or twice a year – without changing the product name.

The great thing is that users get the improvements made by Microsoft’s engineers much sooner than in the past, when you had to wait several years for a big “service pack” to arrive or even a new major release of Windows like 2000, XP, Vista, 7, 8, 8.1 and finally 10 (I am leaving out the dark times on purpose 😉).

The bad thing is that it is harder to see which version or release you are running. Of course there is a (less visible) name for every Windows release. This version or codename sometimes gets mentioned on support pages or in blog posts because the functionality of Windows 10 can change significantly between these rolling feature updates. And sometimes an app or tool may tell you that it needs Windows 10 2004 or higher.

What version of Windows 10 am I running?

I know of two simple ways to find out which version of Windows 10 you are currently running:

  1. Running the tool winver (see the command-line example below)
  2. Opening the Settings -> System -> About page
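For example, winver can be launched from the Start menu, the Run dialog (Win+R) or a command prompt; in a classic command prompt the ver command also prints the exact build number. The output below is only an illustration, your numbers will differ:

C:\> winver
(opens a small dialog showing something like "Version 2004, OS Build 19041.xxx")

C:\> ver
Microsoft Windows [Version 10.0.19041.xxx]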

Why does it matter?

Another downside is that users are often not aware of new features added to their operating system. And Microsoft does an awful job of promoting the changes and improvements!

Of course there are announcements about the big things after upgrading your operating system to the next feature level. And Microsoft uses these for marketing its own apps and services. They shove their new Edge browser in your face at every occasion and try to trick you into creating a Microsoft account. It is absolutely not obvious how to keep using Windows without a Microsoft account like in the decades before: skip the process here, continue without one, and risk your life…

On the other hand, they really do improve their software and slowly but steadily round off the rough edges of their system. The UIs for editing environment variables, for example, are finally quite usable.

Now back to the main theme of this post: there are some hidden gems built into Windows 10 that I only learned of lately and that I think are vastly under-advertised – quite unlike Microsoft’s marketing of their big products.

Built-in screen recorder

Ok, many gamers may know this one because Windows briefly displays the shortcut Win+G when starting a game. But it is not only usable for games: you can record any window, capture application sounds and record your voice. You can easily produce your own screencasts and video tutorials using this built-in solution.

Built-in clipboard manager

How often did you wish you could copy multiple items and choose one of the last few copied elements when pasting? While third-party clipboard managers have been around for a long time and sometimes provide tons of useful features, Windows 10 has a simple one built in. Just press Win+V instead of Ctrl+V when pasting and you will get a list of the copied items to choose from.

Built-in screenshot/snipping tool

Many people may know the old way of taking screenshots using the oddly labeled PrtScr key (sometimes also PrntScrn or simply Print), opening a painting application like MS Paint and pasting the image using Ctrl+V. Well, Microsoft improved this workflow a lot by including a snipping tool that you can activate using Win+Shift+S. This tool lets you select either a rectangular or free-form region, a window or an entire screen to capture. After doing that you get a notification that allows you to make modifications to the capture and save it to disk.

On-screen emoji-keyboard

Just a little helper in these modern social media times is the on-screen emoji keyboard. Using Win+. you can activate it, browse tons of common emojis and enter them into your messages and texts 🐱‍🏍🤘.

Windows Terminal

Ok, this last one is not (yet) built in and is mostly interesting for developers and power users. Nevertheless, I think it is noteworthy that Microsoft finally built a capable terminal application with modern features like multiple tabs, full Unicode and font support, customizable backgrounds with blur and the ability to host different shells like the old and trusted command prompt CMD, the newer PowerShell and WSL. You can find it in the Microsoft Store for free.

Conclusion

While new releases of Windows 10 are more subtle than past major Windows releases, many things change, both under the hood and visible to the user. Every once in a while a feature you missed for years or installed third-party tools for may be added without you knowing. That’s another reason why talking to colleagues and friends and practices like pair programming and brown-bag meetings are so valuable for sharing knowledge and experience.

I hope there is something for you among my findings of hidden Windows gems. If you have some Windows 10 features you discovered and really like, feel free to leave them in the comments. I will gladly try them out!

Keeping in touch with your Jenkins pipeline jobs

We have been using continuous integration (CI) at the Softwareschneiderei for many years now. Our CI platform of choice is historically Jenkins, which was called Hudson back in the day.

Things have moved on since then and the integration with GitLab got a lot better with the advent of multibranch pipeline jobs. This kind of job allows you to automatically build branches and merge requests within the same job while keeping the builds separate.

Another cool feature of Jenkins is job configuration as code, defined in a Jenkinsfile and used in pipeline jobs. That way it is easy to create and maintain a job configuration alongside your project’s source code inside your repository. There is no need anymore to click through pages of web UI to configure your job. As an additional benefit you also get the complete change history of the job configuration.

I prefer using scripted instead of declarative pipelines for Jenkinsfiles because they give me more control, freedom and power. But like always, this power and flexibility comes at a price…

Sending out build notifications

In my case I wanted to always send out build notifications regardless of the job result. This is quite easy if you have plugins like the Mattermost Notification Plugin or one of the mail plugins. Since our pipeline script consists of Groovy code this seems quite straightforward: put the notification code into a try-finally block:

node {
    try {
        stage ('Checkout and build') {
            checkout scm
            // Do something to build our project
        }
        // Maybe some additional stages like testing, code-analysis, packaging and deployment
    } finally {
        stage ('Notify') {
            mattermostSend "${env.JOB_NAME} - ${currentBuild.displayName} finished with Status [${currentBuild.currentResult}] (<${env.BUILD_URL}|Open>)"
        }
    }
}

Unfortunately, this pipeline script will always return SUCCESS as the build result! Even if someone aborts the job execution or a stage in the try-block fails…

Managing build status

So the seasoned programmer probably already knows the fix: setting the build result in appropriate catch blocks:

node {
    try {
        stage ('Checkout and build') {
            checkout scm
            // Do something to build our project
        }
        // Maybe some additional stages like testing, code-analysis, packaging and deployment
    } catch (Exception e) {
        if (e in org.jenkinsci.plugins.workflow.steps.FlowInterruptedException) {
            currentBuild.result = 'ABORTED'
        } else {
            echo "Exception: ${e.class}, message: ${e.message}"
            currentBuild.result = 'FAILURE'
        }
    } finally {
        stage ('Notify') {
            mattermostSend "${env.JOB_NAME} - ${currentBuild.displayName} finished with Status [${currentBuild.currentResult}] (<${env.BUILD_URL}|Open>)"
        }
    }
}

You can control the granularity and the exceptions thrown by your build steps at will and implement exactly the status reporting you want. The available statuses are defined in hudson.model.Result, so feel free to implement your own build status management to best fit your project.

Covariant methods on C# collections

C# offers a powerful API for working with collections and especially LINQ offers lots of functional goodies to work with them. Among them is also the Concat()-method which allows you to concatenate two IEnumerables.

We recently had the use-case of concatenating two collections with elements of a common super-type:

class Animal {}
class Cat : Animal {}
class Dog : Animal {}

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  // XXX: This does not compile because type inference cannot find a common element type for Concat()!!!
  return cats.Concat(dogs);
}

The above example does not compile because Concat() requires both sequences to have the same element type and returns a combined sequence of that type. If we do not care about the specifics of the subclasses, we can build a Concatenate()-method ourselves which makes the whole thing possible, because instances of both subclasses can be put into a collection of their common parent class.

private static IEnumerable<TResult> Concatenate<TResult, TFirst, TSecond>(
  this IEnumerable<TFirst> first,
  IEnumerable<TSecond> second)
    where TFirst: TResult where TSecond : TResult
{
  IList<TResult> result = new List<TResult>();
  foreach (var f in first)
  {
    result.Add(f);
  }
  foreach (var s in second)
  {
    result.Add(s);
  }
  return result;
}

The above method is a bit clunky to call but works as intended:

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  // Works great!
  return cats.Concatenate<Animal, Cat, Dog>(dogs);
}

A variant of the above Concatenate()-method can be useful if you use a collection of the parent class to collect the instances of the subclass collections:

private static IEnumerable<TResult> Concatenate<TResult, TIn>(
  this IEnumerable<TResult> first,
  IEnumerable<TIn> second)
    where TIn : TResult
{
  IList<TResult> result = first.ToList();
  foreach (var item in second)
  {
    result.Add(item);
  }
  return result;
}

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  IEnumerable<Animal> result = new List<Animal>();
  result = result.Concatenate(cats);
  return result.Concatenate(dogs);
}

Maybe the above examples can serve as an inspiration for more utility methods that may improve working with collections in C#.
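As one example of such a utility, here is a hedged sketch of a lazier variant of the first Concatenate()-method that streams its elements with yield return instead of materializing a list up front (the name ConcatenateLazily is made up for illustration):

private static IEnumerable<TResult> ConcatenateLazily<TResult, TFirst, TSecond>(
  this IEnumerable<TFirst> first,
  IEnumerable<TSecond> second)
    where TFirst : TResult where TSecond : TResult
{
  // Elements are only pulled from the sources when the caller enumerates the result
  foreach (var f in first)
  {
    yield return f;
  }
  foreach (var s in second)
  {
    yield return s;
  }
}

It is called the same way as the eager version, e.g. cats.ConcatenateLazily<Animal, Cat, Dog>(dogs), but behaves like the deferred LINQ operators.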

Using (elastic)search with .NET Core

Many modern applications require powerful search mechanisms to become useful and make their users more productive. That is in large part due to the amount of data available to work with. Thankfully there are already powerful tools to index your data and make it searchable.

One of the most well-known state-of-the-art solutions is ElasticSearch and it has an API to be used from .NET called NEST. While the documentation is ok, I want to give a quick rundown on how to add search capabilities to your .NET Core application. Some ideas are borrowed from the great post “Using Elasticsearch with ASP.NET Core and Docker”.

Getting ElasticSearch running

The easiest way to get up and running with elasticsearch is to use their docker images and just run the container on your development machine. I like to use a docker-compose file like the following to get elasticsearch and its tooling application kibana up and running fast:

version: '3.8'
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        container_name: elastic
        environment:
            - node.name=elastic
            - cluster.initial_master_nodes=elastic
        ports:
            - "9200:9200"
            - "9300:9300"
        volumes:
            - type: bind
              source: ./esdata
              target: /usr/share/elasticsearch/data
        networks:
            - esnetwork
    kibana:
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
            - "5601:5601"
        networks:
            - esnetwork
        depends_on:
            - elasticsearch
volumes:
    esdata:
networks:
    esnetwork:
        driver: bridge

After you start it with docker-compose you can talk to the search service at http://localhost:9200/ and reach the kibana management GUI at http://localhost:5601/. In the Kibana UI especially the Dev Tools and their console are interesting for experimenting with search queries.

Accessing ElasticSearch from your .NET Core app

I find it quite elegant to write an extension method for the IServiceCollection to configure the ElasticClient and register it as a singleton in the dependency injection framework of .NET Core, like so

    public static class ElasticSearchExtension
    {
        public static void AddElasticsearch(
            this IServiceCollection services, IConfiguration configuration)
        {
            var url = configuration["elasticsearch:url"];
            var settings = new ConnectionSettings(new Uri(url))
                    .DefaultMappingFor<SearchableDevice>(deviceMapping => deviceMapping
                        .IndexName("devices")
                        .IdProperty(dev => dev.Id)
                    )
                ;
            var client = new ElasticClient(settings);
            var response = client.Indices.Create("devices", creator => creator
                .Map<SearchableDevice>(device => device
                    .AutoMap()
                )
            );
            // maybe check response to be safe...
            services.AddSingleton<IElasticClient>(client);
        }
    }

The configuration block looks like the following

  "ElasticSearch": {
    "url": "http://localhost:9200/"
  },

and allows for future extension.

Our search service has to be registered in Startup of course:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddElasticsearch(Configuration);
}

Indexing our objects

In the above example we built a SearchableDevice class whose public properties are going to be indexed by ElasticSearch. The API allows for much more fine-grained control over what and how to index, but we want to keep things simple without having to worry about excluding properties and so on. If you have set it up that way, indexing a SearchableDevice is merely one simple call:

// SearchClient is the injected IElasticClient
// mySearchableDevice is an instance of SearchableDevice
SearchClient.IndexDocument(mySearchableDevice);
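For reference, a SearchableDevice could be as simple as the following sketch; apart from the Id used as the document id in the mapping above, the properties are made up for illustration:

public class SearchableDevice
{
    // Used as the document id for the "devices" index (see IdProperty above)
    public int Id { get; set; }

    // Hypothetical example properties that AutoMap() will pick up and index
    public string Name { get; set; }
    public string IpAddress { get; set; }
    public string Description { get; set; }
}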

Searching for objects

When developing a search query I like to try it out in the Kibana Dev Tools and then transform it into a NEST call. A simple query that looks in all device properties for values starting with “needle” looks like this:

{
  "query": {
    "multi_match": {
      "type": "phrase_prefix", 
      "fields": ["*"],
      "query": "needle"
    }
  }
}

The nice thing about the Kibana Dev Tools console is that it can display and complete the possible values for fields like “type” in your multi_match query.

That search query can then be translated into a NEST call in a straightforward way and looks like this:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);

The search response contains the hits with their source objects and some metadata like the score and result count.

Bear in mind that ElasticSearch only returns the first/best 10 matches by default, so explicitly specifying the result size will often be closer to what you want.
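For example, the query from above with an explicit result size and a look at the returned data could be sketched like this (the size of 100 is an arbitrary choice):

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Size(100) // return up to 100 hits instead of the default 10
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);

// the source objects of the hits...
IEnumerable<SearchableDevice> devices = response.Documents;
// ...and some metadata, e.g. the total number of matching documents
long total = response.Total;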

Wrapping it up

Getting started with ElasticSearch in .NET Core does not require too much boilerplate and setup work if you use tools like docker and the NEST library. Making it usable and tuning the indexing and querying may require a lot of work to achieve the best results. On the other hand, smaller applications can start off with a simple search setup like the one shown above and simply evolve it when needed.

Building the right software

When we talk about software development a lot of the discussion revolves around programming languages, frameworks and the latest in technology.

While all of the above, and also the knowledge and skill of the developers, certainly matter a great deal for the success of a software project, the interaction between the involved individuals is highly undervalued in my opinion. Some weeks ago I watched a great talk connecting airplane crashes and interaction in a software development team. The golden quote for me was certainly this one:

“Building software takes technical skill, but building the right software takes human interaction and lots of it”

Nickolas Means (“How to crash an airplane”, The Lead Developer UK 2016)

I could not word it better and it matches my personal experience. Many, if not most, of the problems in software projects are about human communication, values, feelings and opinions, and not about technical issues.

In his talk Nickolas Means focuses on internal team communication and I completely agree with him. My focus as a team lead has shifted a lot from technical aspects to fostering diversity, opinions and communication within the team. I am less strict in enforcing certain rules and styles in a project. I think this leads to more freedom and better opportunities for experimentation and exploration of ways to approach a problem.

Extend it to your customer

As we work on projects in different domains with a variety of customers, we work really hard to understand our customers. Building up open, trustworthy and stable communication is key to forming a fruitful and productive collaborative partnership in a (software) project. It will help you to produce great software that actually meets the customer’s needs instead of just great software. It may also help you in situations where you mess up or technical problems plague the project.

The aspect of human interaction in software projects rightfully has its place in the agile software development manifesto:

Through this work we have come to value:

Individuals and interactions over processes and tools

The Authors of the Agile Manifesto

Almost 20 years later this is still undervalued and many software developers are still way too focused on the technical side. We are striving to steadily improve our skills on the human interaction side and think it proves fruitful every time we succeed.

I hope that more and more software developers will grasp the value of this shifted view point and that it will increase quality and value of the software solutions provided to all users.

Maybe it will also make working in this field friendlier for less tech-savvy people and allow for more of the much-needed diversity in tech.

Docker runtime breaking your container

Docker (or container technology in general) is a great tool to clearly separate the concerns of developers and operations. We use it to simplify various tasks like building projects, packaging them for different platforms and deploying our software onto the target machines like staging and production servers. All the specifics of the projects are contained and version-controlled using the Dockerfiles and compose files.

Our operations team only needs to provide some infrastructure able to build container images and run them. This works great most of the time and removes a lot of the friction between developers and operations, where in the past snowflake servers needed to be set up and maintained. Developers often had to ask for specific setups and environments because each project had its own needs. That is all gone with this great container technology. Brave new world. Except when it suddenly does not work anymore.

Help, my deployment container stopped working!

As mentioned above, we use docker to deploy our software to the target machines. These machines are often part of a corporate network protected by firewalls and only accessible using VPN. I already talked about how to use openvpn in a docker container for deployment. So the other day I was making a release of one of my long-running projects and pressed the deploy button for that project on our Jenkins continuous integration server.

But instead of letting me lean back, relax and watch the magic work, the deployment failed and the red light lit up! A look into the job output showed that the connection to the target machine was refused. A quick check from the developer machine showed no problem on the receiving side. VPN, target machine and everything else was up and running as usual.

After a quick manual deployment, performed with care and my administrator hat on, I went on an investigation journey…

What was going on?

The deployment job had not changed for several months, the container image had not changed and the rest of the infrastructure was working as expected. After more digging, debugging and narrowing down the problem I found out that openvpn did not work in the container anymore because of a strange permission denied error:

Tue May 19 15:24:14 2020 /sbin/ip addr add dev tap0 1xx.xxx.xxx.xxx/22 broadcast 1xx.xxx.xxx.xxx
Tue May 19 15:24:14 2020 /sbin/ip -6 addr add 2axx:1xxx:4:5xxx:9xx:5xxx:5xxx:4xxx/64 dev tap0
RTNETLINK answers: Permission denied
Tue May 19 15:24:14 2020 Linux ip -6 addr add failed: external program exited with error status: 2
Tue May 19 15:24:14 2020 Exiting due to fatal error

This hot trace made it easy to google and revealed the following issue on GitHub: https://github.com/dperson/openvpn-client/issues/75. The cause of all the trouble was the changed behaviour of the docker runtime. Our automatic updates had run over the weekend and actually installed a new package version of the docker runtime (see this excerpt from the apt history log):

containerd.io:amd64 (1.2.13-1, 1.2.13-2)

This subtle change broke my container! After some sacrifices to the whale gods I went on to implement the fix. Fortunately there is an easy way to get it working like before. You just have to pass the following command line switch to docker run and everything works as expected:

--sysctl net.ipv6.conf.all.disable_ipv6=0
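Put together, the docker run invocation looks roughly like this; the image name is a placeholder and the capability/device flags are only what an openvpn container typically needs, so adjust them to your own setup:

docker run --rm \
  --cap-add=NET_ADMIN \
  --device /dev/net/tun \
  --sysctl net.ipv6.conf.all.disable_ipv6=0 \
  my-deployment-image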

As nice as containers are for abstracting away hardware, operating systems and other environment details, sometimes the container runtime shines through. It is just a shame that such things happen on minor releases or mere package revision upgrades…

Updating Grails 3.3.x to 4.0.x

We have a long history of maintaining a fairly large Grails application which we took from Grails 1.0 to 4.0. We sometimes decided to skip some intermediate releases of the framework because of problems or missing incentives to upgrade. If you are interested in our past experiences, feel free to have a look at our stories:

This is the next installment of our journey to the latest and greatest version of the Grails framework. This time the changes do not seem as intimidating as going from 2.x to 3.x. There are fewer moving parts, at least from the perspective of an application developer, where almost everything stayed the same (Gradle build system, YAML configuration, Geb functional tests etc.). Under the hood there are of course some bigger changes like new major versions of GORM/Hibernate and Spring Boot and the switch to Micronaut as the parent application context.


The hurdles we faced

  • For historical reasons our application uses flush mode “auto”. This still does not work today, see https://github.com/grails/grails-core/issues/11376
  • The most work-intensive change is that Hibernate 5 requires you to perform your work in transactions. So we have dozens of places where we need to add missing @Transactional annotations, especially to make saving domain objects work. Therefore we essentially have to re-test the whole application.
  • The handling of HibernateProxies again became more opaque, which led to numerous IllegalArgumentExceptions (“object is not an instance of declaring type”). Sometimes we could move from generated hashCode()/equals() implementations to the Groovy annotation @EqualsAndHashCode (actually a good thing), whereas in other places we did manual unwrapping or switched to eager fetching to avoid these problems.

In addition we faced minor gotchas like changed configuration entries. The one that cost us some hours was the subtle change from server.contextPath to server.servlet.context-path, but nothing major or blocking.

We also had to transform many of our unit and integration tests to Spock and the new Grails Testing Support framework. It makes the tests more readable anyway and feels more fruitful than trying to debug the old-style Grails Test Mixins based tests.

Improvements

One major improvement for us in the Grails ecosystem is the good news that the shiro plugin is again officially available, maintained and cleaned up (see https://github.com/nerdErg/grails-shiro). Now we do not need to use our own poor man’s port anymore.

Open questions

Regarding the proclaimed performance improvements and reduced memory consumption we do not have final numbers or impressions yet. We will deliver results on this front in the future.

More important is an inconvenience we are still facing regarding hot code reloading. It does not work for us even using OpenJDK 8 with the old spring-loaded mechanism. The new restart style of Micronaut/Spring Boot is not really productive for us because the startup times can easily reach the minute range even on fast hardware.

Pro-Tip

My hottest advice for you is this one:

Create a fresh Grails 4 app and compare central files like application.yml and build.gradle to bring your project up to the current state of the art.

Conclusion

While this upgrade was still a lot of work and meant that many places had to be touched, it was a lot smoother than many of the previous ones. We hope that things improve further in the future, as the technology stack is now up-to-date and much more mature than in the early days…

Adding a dynamic React page to your classic grails multi-page application

We are developing and maintaining a more than 10-year-old classic multi-page application based on the Grails web framework. With the advent of HTML5 and modern browsers with faster JavaScript engines, users expect a more and more dynamic and pleasant user experience (UX) from web applications. Our application is used by hundreds of users and our customer expects a stable, familiar and feature-rich experience that continues to improve over time. Something like a complete rewrite of the UI is way out of scope time- and budget-wise.

One of the new feature requests would benefit highly from a client-side JavaScript implementation, so we looked at our options. Fortunately it is quite easy to integrate a react app with grails and the gradle build system. So we implemented the new page almost completely as a react app, while leaving all the other pages as normal server-side rendered Groovy Server Pages (GSP). The result is quite convincing and opens up a transition path to more and more dynamic client-side pages and perhaps even to a complete transformation into a single-page application (SPA) in the distant future.

Integrating a React-App into Grails build process

The Grails react-webpack profile can serve as a great starting point to integrate a react app into an existing grails project. First you create the react app for the new page in the folder src/main/webapp, using the create-react-app scripts for example. Then you need to add a $GRAILS_PROJECT/webpack.config.js to configure webpack appropriately like so:

var path = require('path');

module.exports = {
  entry: './src/main/webapp/index.js',
  output: {
    path: path.join(__dirname, 'grails-app/assets/javascripts'),
    publicPath: '/assets/',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        include: path.join(__dirname, 'src/main/webapp'),
        use: {
          loader: 'babel-loader',
          options: {
            presets: ["@babel/preset-env", "@babel/preset-react"],
            plugins: ["transform-class-properties"]
          }
        }
      },
      {
        test: /\.css$/,
        use: [
          'style-loader',
          'css-loader'
        ]
      },
      {
        test: /\.(jpe?g|png|gif|svg)$/i,
        use: {
          loader: 'url-loader?limit=10000&prefix=assets/!img'
        }
      }
    ]
  }
};

The next step is to move the package.json to the $GRAILS_PROJECT directory because we want gradle tasks to take care of building and bundling it as a grails asset. To make this convenient we add some gradle tasks employing yarn to our build.gradle:

buildscript {
    dependencies {
        ...
        classpath "com.moowork.gradle:gradle-node-plugin:1.2.0"
    }
}

...

apply plugin:"com.moowork.node"

...

node {
    version = '12.15.0'
    yarnVersion = '1.22.0'
    distBaseUrl = 'https://nodejs.org/dist'
    download = true
}

task bundle(type: YarnTask, dependsOn: 'yarn') {
    group = 'build'
    description = 'Build the client bundle'
    args = ['run', 'bundle']
}

task webpack(type: YarnTask, dependsOn: 'yarn') {
    group = 'application'
    description = 'Build the client bundle in watch mode'
    args = ['run', 'start']
}

bootRun.dependsOn(['bundle'])
assetCompile.dependsOn(['bundle'])

...

Now we have integrated our new react app with the grails build system and packaging. The webpack task allows updating the javascript bundle on the fly so that we have almost the same hot reloading support when developing as with the rest of grails.

Delivering the react app as a page

Now that we have integrated the react app in the build and packaging process of our grails application we need to deliver it when the new page is requested by the browser. This is quite simple and straightforward and can be achieved with a GSP like so:

<html>
<head>
    <meta name="layout" content="main"/>
    <title>
        <g:message code="example.header"/>
    </title>
</head>
<body>
    <div id="react-content">
    </div>
    <asset:javascript src="bundle.js"/>
</body>
</html>

Now you just have to develop the endpoints for the javascript app in the form of normal grails controllers rendering JSON instead of GSP views. This is extremely easy using groovy maps and the grails JSON converters:

import grails.converters.JSON

class DataApiController {

    def getData = {
        def responseData = [
            name: 'John',
            age: 37
        ]
        render responseData as JSON
    }
}

Conclusion

Grails and its build infrastructure are flexible enough to easily integrate SPA pages into an existing traditional web application. This allows you to deliver the modern UX and features expected by today’s users without completely rewriting your trusty and proven grails application. The transition can happen gradually and individual pages/views can be renewed when needed. That way you can continually add value for your customer while incrementally modernizing your application.

Running a for-loop in a docker container

Docker is a great tool for running services or deployments in a defined and clean environment. Operations just has to provide a host for running the containers and everything else is up to the developers. They can forge their own environment and set up all the prerequisites appropriately for their task. There is no need to beg the admins to install some tools and configure server machines to fit the needs of a certain project. The developers just define their needs in a Dockerfile.

The Dockerfile contains instructions to set up a container in a domain-specific language (DSL). This language consists of only a couple of commands and is really simple. Like every language out there, it has its own quirks though. I would like to show a solution to one quirk I encountered when trying to deploy several items to a target machine.

The task at hand

We are developing a distributed system for data acquisition, storage and real-time display for one of our clients. We deliver the different parts of the system as deb-packages for the target machines running at the customer’s site. Our customer hosts her own Debian repository using an Artifactory server. That all seems simple enough, because Artifactory tells you how to upload new artifacts using curl. So we built a simple container to perform the upload using curl. We tried to supply the required bash shell script to the CMD instruction of the Dockerfile but ran into issues with our first attempts. Here is the naive, dysfunctional Dockerfile:

FROM debian:stretch
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install dpkg curl

# Setup work dir, $PROJECT_ROOT must be mounted as a volume under /elsa
WORKDIR /packages

# Publish the deb-packages to clients artifactory
CMD for package in *.deb; do\n\
  ARCH=`dpkg --info $package | grep "Architecture" | sed "s/Architecture:\ \([[:alnum:]]*\).*/\1/g" | tr -d [:space:]`\n\
  curl -H "X-JFrog-Art-Api:${API_KEY}" -XPUT "${REPOSITORY_URL}/${package};deb.distribution=${DISTRIBUTION};deb.component=non-free;deb.architecture=$ARCH" -T ${package} \n\
  done

The command fails because the shell built-in for does not count as a stand-alone command and because the shell used to execute the script is sh by default, not bash.

The solution

After some unsuccessful attempts to set the shell to /bin/bash using docker’s SHELL instruction, we finally came up with the solution of an inline shell script in the CMD instruction of a Dockerfile:

FROM debian:stretch
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install dpkg curl

# Setup work dir, $PROJECT_ROOT must be mounted as a volume under /elsa
WORKDIR /packages

# Publish the deb-packages to clients artifactory
CMD /bin/bash -c 'for package in *.deb;\
do ARCH=`dpkg --info $package | grep "Architecture" | sed "s/Architecture:\ \([[:alnum:]]*\).*/\1/g" | tr -d [:space:]`;\
  curl -H "X-JFrog-Art-Api:${API_KEY}" -XPUT "${REPOSITORY_URL}/${package};deb.distribution=${DISTRIBUTION};deb.component=non-free;deb.architecture=$ARCH" -T ${package};\
done'

The trick here is to call bash directly and supply the shell script using the -c parameter. An alternative would have been to extract the script into its own file and call that in the CMD instruction like so:

# Publish the deb-packages to clients artifactory
# Note: use the shell form of CMD here so that the environment variables get expanded
CMD ./deploy.sh "${API_KEY}" "${REPOSITORY_URL}" "${DISTRIBUTION}"

In the above case I prefer the inline solution because the script is short and simple, there is no need for an additional external file and no need to worry about how to pass the parameters to the script.

Code duplication is not always evil

Before you start getting mad at me, first a disclaimer: I really think you should adhere to the DRY (don’t repeat yourself) principle. But in my opinion the term “code duplication” is too weak and blurry and should be rephrased.

Let me start with a real-life story from a few weeks ago that led to a fruitful discussion with some colleagues and to the claims I make below.

The story

We are developing a system using C#/.NET Core for managing network devices like computers, printers, IP cameras and so on in a complex network infrastructure. My colleague was working on a feature to sync these network devices with another system. So his idea was to populate our carefully modelled domain entities using the JSON-data from the other system and compare them with the entities in our system. As this was far from trivial we decided to do a pair-programming session.

We wrote unit tests and fixed one problem after another, refactored the code that was getting messy and happily chugged along. In the process it became more and more apparent that the type system was not helping us and that we required quite a lot of special handling like custom IEqualityComparers and the like.

The problem was that certain concepts like AddressPools that we had in our domain model were missing in the other system. Our domain handles subnets whereas the other system talks about ranges. In our system the entities are persistent and have a database id while the other system does not expose ids. And so on…

By using the same domain model for the other system we introduced friction, lost the benefits of C#’s type system and made the code harder to understand: there were several occasions where methods would take two IEnumerables of NetworkedDevices or Subnets and you needed to pay attention to which one came from our system and which from the other.

The whole situation reminded me of a blog post I read quite a while ago:

https://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction

Obviously, we were using the wrong abstraction for the entities we obtained from the other system. We found ourselves somewhere around point 6 in Sandi’s sequence of events. In our effort to reuse existing code and avoid code duplication we had gone down a costly and unpleasant path.

Illustration by example

If the code duplication is on the method level we may often simply extract and delegate, like Uncle Bob demonstrates in this article. In our story that would not have been possible. Consider the following model of Price and Discount in an e-commerce system:

public class Price {
    public final BigDecimal amount;
    public final Currency currency;

    public Price(BigDecimal amount, Currency currency) {
        this.amount = amount;
        this.currency = currency;
    }

    // more methods like add(Price)
}

public class Discount {
    public final BigDecimal amount;
    public final Currency currency;

    public Discount(BigDecimal amount, Currency currency) {
        this.amount = amount;
        this.currency = currency;
    }

    // more methods like add(Discount)
}

The initial domain entities for price and discount may be implemented in exactly the same way, but they are completely different abstractions. Depending on your domain it may or may not be ok to add two discounts. Discounts could be modelled in a relative fashion like “30 % off” using a base price and so on. Coupling them early on by using one entity for different purposes in order to avoid code duplication would be a costly error, as you will likely need to disentangle them at some later point.

Another example could be the initial model of a name. In your system persons, countries and a lot of other things could have a name entity attached which may look identical at first. As you flesh out your domain it becomes apparent that the names are really different things: person names should not be internationalized and sometimes have to obey certain rules. Country names, in contrast, may very well be translated.
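To make that concrete, here is a small sketch (in C#, the language of the system from the story; the classes and rules are made up) that keeps the two name concepts separate even though they currently look almost identical:

public sealed class PersonName
{
    public string Value { get; }

    public PersonName(string value)
    {
        // hypothetical rule that only applies to person names
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("A person name must not be empty.");
        Value = value;
    }
}

public sealed class CountryName
{
    // country names may get translations later on; person names will not
    public string Value { get; }

    public CountryName(string value)
    {
        Value = value;
    }
}

The small amount of duplication lets each type grow its own rules later without dragging the other one along.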

Modified code duplication claim

Duplicated code is the root of all evil in software design.

— Robert C. Martin

I would like to reduce the temptation of eliminating code duplication for different abstractions by modifying the well-known claim of Uncle Bob to be a bit more precise:

Duplicated code for the same abstraction is the root of all evil in software design.

If you introduce coupling between independent concepts by eliminating code duplication, you open up a new source of errors and maintenance drag. And these new problems tend to be harder to spot and to resolve than real code duplication.

Duplication allows code to evolve independently. I think it is important to add these two concepts to your thinking.