Clean deployment of a .NET Core application

Microsoft's .NET Core framework has rightfully earned its spot among cross-platform frameworks. We like to use it, for example, as a RESTful backend for our React frontends. Unless you bury your .NET Core application in a Docker container where you never need to configure or customize it, you may be bothered by its default deployment layout: all the dependencies live next to some JSON configuration files in one flat directory.

While this is fine as long as you never have to look in there to change a configuration file, you may want to clean it up and sort the files into different folders. This can be achieved by customizing your MSBuild, but it is anything but straightforward!

Our goal

  1. Put all of our dependencies into a lib directory
  2. Put all of our configuration files into a configuration directory
  3. Remove unneeded files

The above should not require any interaction but be part of the regular build process.

The journey

We need to customize the MSBuild process to achieve our goal, because the deps.json file must be rewritten to change the location of our dependencies. This is the hardest part! First we add the RoslynCodeTaskFactory as a package reference to the csproj of our project, so that we can implement build tasks using C#. The reference might look roughly like this (substitute the current version from NuGet):
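
<ItemGroup>
  <!-- version is illustrative, take the current one from NuGet -->
  <PackageReference Include="RoslynCodeTaskFactory" Version="2.0.7" PrivateAssets="all" />
</ItemGroup>

With this in place we can define two tasks that will help us rewrite the deps.json: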

<Project ToolsVersion="15.8" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="RegexReplaceFileText" TaskFactory="CodeTaskFactory" AssemblyFile="$(RoslynCodeTaskFactory)" Condition=" '$(RoslynCodeTaskFactory)' != '' ">
    <ParameterGroup>
      <InputFile ParameterType="System.String" Required="true" />
      <OutputFile ParameterType="System.String" Required="true" />
      <MatchExpression ParameterType="System.String" Required="true" />
      <ReplacementText ParameterType="System.String" Required="true" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System" />
      <Using Namespace="System.IO" />
      <Using Namespace="System.Text.RegularExpressions" />
      <Code Type="Fragment" Language="cs">
        <![CDATA[ File.WriteAllText( OutputFile, Regex.Replace(File.ReadAllText(InputFile), MatchExpression, ReplacementText) ); ]]>
      </Code>
    </Task>
  </UsingTask>

  <UsingTask TaskName="RegexTrimFileText" TaskFactory="CodeTaskFactory" AssemblyFile="$(RoslynCodeTaskFactory)" Condition=" '$(RoslynCodeTaskFactory)' != '' ">
    <ParameterGroup>
      <InputFile ParameterType="System.String" Required="true" />
      <OutputFile ParameterType="System.String" Required="true" />
      <MatchExpression ParameterType="System.String" Required="true" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System" />
      <Using Namespace="System.IO" />
      <Using Namespace="System.Text.RegularExpressions" />
      <Code Type="Fragment" Language="cs">
        <![CDATA[ File.WriteAllText( OutputFile, Regex.Replace(File.ReadAllText(InputFile), MatchExpression, "") ); ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>

We put the tasks in a file called RegexReplace.targets in the Build directory and import it into our csproj using <Import Project="Build/RegexReplace.targets" />.

Now we can add a new target, executed after the publish target, to our main project's csproj to move the assemblies around, rewrite the deps.json and remove unwanted files:

  <Target Name="PostPublishActions" AfterTargets="AfterPublish">
    <ItemGroup>
      <Libraries Include="$(PublishUrl)\*.dll" Exclude="$(PublishUrl)\MyProject.dll" />
    </ItemGroup>
    <ItemGroup>
      <Unwanted Include="$(PublishUrl)\MyProject.pdb;$(PublishUrl)\.filenesting.json" />
    </ItemGroup>
    <!-- Move relocates the files, so the originals are gone afterwards -->
    <Move SourceFiles="@(Libraries)" DestinationFolder="$(PublishUrl)/lib" />
    <Copy SourceFiles="Build\MyProject.runtimeconfig.json;Build\web.config" DestinationFiles="$(PublishUrl)\MyProject.runtimeconfig.json;$(PublishUrl)\web.config" />
    <Delete Files="@(Unwanted)" />
    <RemoveDir Directories="$(PublishUrl)\Build" />
    <RegexTrimFileText InputFile="$(PublishUrl)\MyProject.deps.json" OutputFile="$(PublishUrl)\MyProject.deps.json" MatchExpression="(?&lt;=&quot;).*[/|\\](?=.*\.dll|.*\.exe)" />
    <RegexReplaceFileText InputFile="$(PublishUrl)\MyProject.deps.json" OutputFile="$(PublishUrl)\MyProject.deps.json" MatchExpression="&quot;path&quot;: &quot;.*&quot;" ReplacementText="&quot;path&quot;: &quot;.&quot;" />
  </Target>

The result

All this work should result in a working application with a much cleaner root directory. As far as we know, the remaining files (the web.config, the main project assembly and the two JSON files) cannot easily be relocated. The resulting layout nevertheless makes it easy for administrators to find the configuration files they need to customize. For a project named MyProject, the published directory might look roughly like this:
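
MyProject.dll
MyProject.deps.json
MyProject.runtimeconfig.json
web.config
lib/
  Some.Dependency.dll
  Another.Dependency.dll
  ...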

Of course one can argue whether the result is worth the hassle, but if your customers' administrators and operations teams value it, it is worth doing.

Have unregistered classes throw with the Unity DI container

The Unity container (not to be confused with the game engine) is one of the most popular dependency injection tools for C#.
However, by default the Unity container will try to Resolve() any class it can. If you do not register a class, resolution will often just succeed anyway.
I much prefer explicitly registering classes and having resolution throw an exception when I try to resolve something I did not register.
There's a viable solution for that on Stack Overflow, but it fails to throw when resolving a class that was only registered via its interface.
Here's our fixed version:

public class UnityRegistrationTracking : UnityContainerExtension
{
  private readonly ConcurrentDictionary<NamedTypeBuildKey, bool> registrations =
    new ConcurrentDictionary<NamedTypeBuildKey, bool>();

  protected override void Initialize()
  {
    // Record every registration and validate every resolution as early as possible
    base.Context.Registering += Context_Registering;
    base.Context.Strategies.Add(
        new ValidateRegistrationStrategy(this.registrations), UnityBuildStage.Setup);
  }

  private void Context_Registering(object sender, RegisterEventArgs e)
  {
    var buildKey = new NamedTypeBuildKey(e.TypeFrom, e.Name);
    this.registrations.AddOrUpdate(buildKey, true,
      (key, oldValue) => true);
  }

  public class ValidateRegistrationStrategy : BuilderStrategy
  {
    private ConcurrentDictionary<NamedTypeBuildKey, bool> registrations;

    public ValidateRegistrationStrategy(ConcurrentDictionary<NamedTypeBuildKey, bool> registrations)
    {
      this.registrations = registrations;
    }

    public override void PreBuildUp(ref BuilderContext context)
    {
      // RegistrationType is the type the object was registered under (e.g. its
      // interface), while context.Type would be the concrete implementation type
      var buildKey = new NamedTypeBuildKey(context.RegistrationType, context.Name);
      if (!this.registrations.ContainsKey(buildKey))
      {
        throw new ResolutionFailedException(buildKey.Type, buildKey.Name,
          string.Format("Type {0} was not explicitly registered in the container.", buildKey.Type.Name));
      }
    }
  }
}

We hook into two parts of the Unity API here:

  1. The registration, which happens when you call RegisterType
  2. The resolution process, which runs when Unity tries to resolve a specific instance.

The first part happens in Context_Registering. We just store the registration in a dictionary for later. It is important to use TypeFrom as the key, since we want to refer to objects by the interfaces they are registered with, not their concrete implementations.
The second part is the ValidateRegistrationStrategy. All registered BuilderStrategy objects go into a list that is processed when an object is built. The UnityBuildStage.Setup acts as a sorting key, making sure that this strategy is executed as early as possible.
In the strategy, we check whether the requested type was previously registered, and throw an exception if it was not. It is important to use context.RegistrationType here, since context.Type would again contain the concrete type, and not the interface.
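
To see the extension in action, here is a minimal usage sketch (IService, Service and SomeOtherClass are hypothetical stand-ins for your own types):

var container = new UnityContainer();
container.AddExtension(new UnityRegistrationTracking());

container.RegisterType<IService, Service>();

// Succeeds: IService was explicitly registered
var service = container.Resolve<IService>();

// Throws ResolutionFailedException, even though Unity
// could construct SomeOtherClass on its own
var other = container.Resolve<SomeOtherClass>();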

Using WPF Toolkit's CheckComboBox with Data-Binding

Xceed’s WPF Toolkit is a popular extension to the standard components offered by Microsoft’s WPF. One fancy control that I have been using lately is the CheckComboBox, which is a ComboBox that shows a list of items with checkboxes when opened and the list of selected items when closed. It is great, for example, for selecting filter options over smaller sets.
However, it took me a little while to get it all up and running with data binding, so I am going to walk you through it. For reference, I am starting with a .NET 4.6.1 WPF App in Visual Studio 2017.

First you have to install Extended.Wpf.Toolkit, which I am doing via VS’s built-in package manager. To actually use the control, I am adding an XML namespace into my MainWindow’s XAML:

xmlns:xctk="http://schemas.xceed.com/wpf/xaml/toolkit"

Then I’m adding the control in a simple StackPanel, while already adding DataBindings:

<xctk:CheckComboBox
  ItemsSource="{Binding Path=Options}"
  DisplayMemberPath="Name"
  SelectedMemberPath="Selected"/>

This means that my control will look at a collection named “Options” in my view-model, using each element’s “Name” property for display and its “Selected” property for the checkmark. If you run the program at this point, you should be able to see an empty CheckComboBox, albeit badly laid out.

Now it’s time to create the view model. Let’s start with a small class-let to represent our items:

class Item
{
  public string Name { get; set; }
  public bool Selected { get; set; }
}

As you can see, the names match what we set for DisplayMemberPath and SelectedMemberPath in the XAML. Now for the ViewModel class:

class ViewModel
{
  public ViewModel()
  {
    var languages = new string[]
    {
      "C", "C#", "C++", "D", "Java",
      "Rust", "Python", "ES6"
    };
    
    Options = new List<Item>();
    foreach (var language in languages)
    {
      Options.Add(new Item {
          Name = language,
          Selected = true });
    }
  }
  
  public List<Item> Options { get; set; }
}
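
Note that all of this assumes the window's DataContext is set to an instance of the ViewModel. As a sketch, one way to do that is in the MainWindow constructor:

public partial class MainWindow : Window
{
  public MainWindow()
  {
    InitializeComponent();
    // Without this, the bindings in the XAML have nothing to resolve against
    DataContext = new ViewModel();
  }
}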

If you run it at this point, you should see an all-selected list of programming languages in the drop-down. But a crucial detail is still missing: the data is not observable, meaning the control will not be notified if the view-model is changed by other means. To fix that, the item collection and the Item class have to support change notification, the latter by implementing the INotifyPropertyChanged interface. That means firing a specific event, carrying the name of the changed property, whenever a property changes.

Let’s do that for the Item first:

class Item : INotifyPropertyChanged
{
  private bool _selected;
  private string _name;

  public string Name
  {
    get => _name;
    set
    {
      _name = value;
      EmitChange(nameof(Name));
    }
  }

  public bool Selected
  {
    get => _selected;
    set
    {
      _selected = value;
      EmitChange(nameof(Selected));
    }
  }

  private void EmitChange(params string[] names)
  {
    if (PropertyChanged == null)
      return;
    foreach (var name in names)
      PropertyChanged(this,
        new PropertyChangedEventArgs(name));
  }

  public event PropertyChangedEventHandler PropertyChanged;
}

That got bigger! But it’s not a lot of meat, really. For the Item list, we can just use an ObservableCollection instead of a List:

public ObservableCollection<Item> Options { get; set; }

That’s it. Two-way data binding is now set up for the item collection: you can change the view-model and have the control react to it, but also react to changes from the control by hooking into the property setters.
You could also implement INotifyPropertyChanged for the ViewModel itself, if you intend to swap in new ObservableCollections, but that is not necessary for this example.
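
As a sketch of that last point (this subscription is my illustration, not part of the original setup), the ViewModel constructor could listen for check/uncheck changes like this:

foreach (var item in Options)
{
  item.PropertyChanged += (sender, e) =>
  {
    // React only to checkbox changes, not to name changes
    if (e.PropertyName == nameof(Item.Selected))
      Console.WriteLine($"{((Item)sender).Name} is now " +
        (((Item)sender).Selected ? "checked" : "unchecked"));
  };
}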

Adaptive random generation of multiple outcomes

Games often want to use randomness to spice up the experience: it can be a lot more exciting to risk something when you are not entirely certain of the result. In my game abstractanks, the power-ups are generated randomly. However, when playing the game, it seemed like it was always generating the same power-ups in a row, which can be kind of frustrating. Tough luck, because this is just how uniform randomness behaves, as anyone who has ever played the board game Sorry! or the German classic Mensch ärgere dich nicht! can surely testify.

Let’s try that with C# code:

var random = new Random();
var outcomes = new []{'A', 'B', 'C', 'D', 'E', 'F'};
for (int i = 0; i < 200; ++i)
{
  // Next's upper bound is exclusive, so pass the full length
  var index = random.Next(outcomes.Length);
  Console.Write(outcomes[index]);
}

This will produce something like this:

ECDDBABCEEBBDDADEDECCAECECEADBBCCCDAEBEECDBCACAEAA
BDEACECBDDBAEDCEEAEAECDEEEECBCCEECEDCBAECCCBDCDDEA
CEAABDEEBDEAEBABABDEBDAACBECBBAACAEDEEBAECECECCBAB
BBAAEEDEDEEBCACDDEBBCBACADDDBAECBAEDACBEAEABBCAEEA

See all those strings of Cs and Es? Horrible! That does not feel random!

Games like Dota 2 deal with this by tweaking the distribution after each “roll”. Specifically, for things like Phantom Assassin’s Coup de Grace, the chance is increased slightly after each unsuccessful attempt, making the 4th or so attempt a guaranteed critical strike. But this technique only works nicely for one event in a stream of attempts; it does not generalize to multiple outcomes. Still, the single-event idea can be sketched in a few lines (my reconstruction with illustrative numbers, not Dota’s actual implementation):
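
var random = new Random();
var chance = 0.0;
const double baseChance = 0.25;

bool RollCritical()
{
  chance += baseChance;        // the chance grows with every attempt...
  if (random.NextDouble() >= chance)
    return false;              // no crit this time, the next attempt is more likely
  chance = 0.0;                // ...and resets after a success
  return true;                 // by the 4th attempt, 4 * 0.25 = 1.0 guarantees a crit
}
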
For my game I devised an algorithm with two properties:

  1. The same roll never appears twice in succession
  2. There are no long stretches without a specific roll
Here’s my algorithm to do that:

var random = new Random();
var outcomes = new []{'A', 'B', 'C', 'D', 'E', 'F'};
var chances = new []{1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
for (int i = 0; i < 200; ++i)
{
  // Normalize chances so they sum up to 1.0, then build their prefix sum
  var total = chances.Sum();
  var prefixSum = chances
    .Select(x => x / total)
    .Aggregate(new List<double>(), (list, x) =>
    {
      list.Add(list.LastOrDefault() + x);
      return list;
    }).ToList();

  // Roll an outcome
  var roll = random.NextDouble();
  var pick = prefixSum.FindIndex(x => x >= roll);
  Console.Write(outcomes[pick]);

  // Now adapt the distribution by removing all chance from
  // the last pick and distributing it to the N-1 others
  var increment = chances[pick] / (chances.Length - 1);
  chances = chances.Select((x, index) =>
  {
    if (index == pick)
      return 0.0;
    else
      return x + increment;
  }).ToArray();
}

And here’s what that produces:

AEDFBCAEFDBAEDFCBDECBFCFDBEACFDECBDAEFCEBACDEAFBDC
AFEBCABFDECDAFBECFADFCEBACFDCEDBFAEDCDBDAFEDBFEABD
CFABEDFBACDEAEFBECADBAFECAFBDACECFAEBDCAFCDBAEDAFA
BDFCDEACDBFEDFCBACDAFBEADFCDEFBEACBEFDCECFABDFCDBE

Much nicer for my needs, but it still looks pretty random. Also, my statistics buddy assured me that this algorithm still guarantees equal outcome probability for all items when run forever. I do not know whether this is a new technique, but I did not find anything like it when I was looking for it.
Let me know if this is of any use to you.

.NET Core for platform-independent web development

Several of our projects are based on the .NET platform. Until recently, all of them used the classic .NET Framework. With a new project we had the opportunity to give .NET Core a try. The name stands for a modernized variant of the .NET Framework, developed by the .NET Foundation and Microsoft as a platform-independent open-source project.

Not every type of project is currently suitable for .NET Core. If you want to develop a Windows desktop application (WinForms, WPF) you still have to use the classic .NET Framework. For server-based applications, however, .NET Core is a really good fit. Our application, for example, is implemented as a JSON API server with .NET Core and a React/Redux based client interface.

The Benefits

Since .NET Core is platform-independent, it runs on Linux, macOS and Windows. We no longer need a Windows machine to build the project on our CI server. Microsoft provides Docker images for building and running .NET Core projects.

ASP.NET Core applications are no longer bound to Microsoft’s IIS or IIS Express: you can also host them on Apache or Nginx servers.

With .NET Core you also have a vast choice of IDEs. Of course, you can use Visual Studio on Windows. But you also have the option to use JetBrains’ Rider (on any platform), Visual Studio for Mac or Visual Studio Code (Mac, Linux, Windows). If you don’t want to use an IDE for everything, .NET Core also has a nice command-line interface. For example, the following command sets up a new ASP.NET Core project with React and Redux:

$ dotnet new reactredux

To compile and run the project:

$ dotnet run

Entity Framework Core also has a feature I missed in the Entity Framework for the classic .NET Framework: a pure in-memory database provider, which is very useful for testing. A minimal sketch of how a test might use it (OrderContext and Order are hypothetical stand-ins for your own types):
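
// Requires the Microsoft.EntityFrameworkCore.InMemory package
var options = new DbContextOptionsBuilder<OrderContext>()
  .UseInMemoryDatabase(databaseName: "OrderTests")
  .Options;

using (var context = new OrderContext(options))
{
  context.Orders.Add(new Order { Name = "Test order" });
  context.SaveChanges();
  // The test can now assert against the same in-memory store
}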

The Downsides

When you browse the NuGet package list, be aware that not every package is compatible with .NET Core yet, but the list is growing. And, as mentioned above, you can’t develop desktop GUI applications with .NET Core.

Integrating .NET projects with Gradle

Recently I have created Gradle build scripts for several .NET projects, both C# and VB.NET projects. Projects for the .NET platform are usually built with MSBuild, which is part of the .NET Framework distribution and is itself a full-blown build automation tool: you can define build targets and their dependencies, and execute tasks to reach the build targets. I have written about the basics of MSBuild in a previous blog post.

The .NET projects I was working on were using MSBuild targets for the various build stages as well, not only for building and testing, but also for the release and deployment scripts. These scripts were called from our Jenkins CI with the MSBuild Jenkins Plugin.

Gradle plugins

However, I wasn’t very happy with MSBuild’s clunky, Ant-like, XML-based syntax, and for most of our other projects we are using Gradle nowadays. So I tried Gradle for a new .NET project, using the Gradle MSBuild and Gradle NUnit plugins. Of course, the MSBuild Gradle plugin calls MSBuild, so I don’t get rid of MSBuild completely: Visual Studio’s .csproj and .vbproj project files are essentially MSBuild scripts, and I don’t want to get rid of them. So there is one Gradle task which calls MSBuild, but everything beyond the act of compilation is automated with regular Gradle tasks, like copying files, zipping release artifacts etc.

Basic usage of the MSBuild plugin looks like this:

plugins {
  id "com.ullink.msbuild" version "2.18"
}

msbuild {
  // either a solution file
  solutionFile = 'DemoSolution.sln'
  // or a project file (.csproj or .vbproj)
  projectFile = file('src/DemoSoProject.csproj')

  targets = ['Clean', 'Rebuild']

  destinationDir = 'build/msbuild/bin'
}

The plugin offers lots of additional options, so be sure to check out the documentation on GitHub. If you want to give the MSBuild step its own task name, which is currently not directly mentioned on the GitHub page, use the task type Msbuild from the package com.ullink:

import com.ullink.Msbuild

// ...

task buildSolution(type: Msbuild, dependsOn: '...') {
  // ...
}

Since the .NET projects I’m working on use NUnit for unit testing, I’m using the NUnit Gradle plugin by the same creator as well. Again, please consult the documentation on the GitHub page for all available options. What I found necessary was setting the nunitHome option, because I don’t want the plugin to download an NUnit release from the internet, but rather use the one that is included with our project. Also, if you want a task with its own name, or multiple testing tasks, use the NUnit task type from the package com.ullink.gradle.nunit:

import com.ullink.gradle.nunit.NUnit

// ...

task test(type: NUnit, dependsOn: 'buildSolution') {
  nunitVersion = '3.8.0'
  nunitHome = "${project.projectDir}/packages/NUnit.ConsoleRunner.3.8.0/tools"
  testAssemblies = ["${project.projectDir}/MyProject.Tests/bin/Release/MyProject.Tests.dll"]
}
test.dependsOn.remove(msbuild)

With Gradle I am now able to share common build tasks, for example for our release process, with our other non-.NET projects, which use Gradle as well.

OPC-UA Performance and Bulk Reads

In a previous post on this blog I introduced some basics of OPC. Now we’ll take a look at some performance characteristics of OPC UA. Performance depends on both the OPC server and the client used, of course, but there are general tips to improve it:

  • To get maximum performance, use OPC without security

OPC message signing and encryption add overhead. If your use case allows it, turn off security for maximum performance.

  • Bulk reads increase performance

Bulk reads

A bulk read call reads multiple variables at once, which reduces communication overhead between client and server.

Here’s a code example using Eclipse Milo, an open-source OPC-UA stack implementation for the Java VM.

final String endpointUrl = "opc.tcp://localhost:53530/OPCUA/SimulationServer";
final EndpointDescription[] endpoints = UaTcpStackClient.getEndpoints(endpointUrl).get();
final OpcUaClientConfigBuilder config = new OpcUaClientConfigBuilder();
config.setEndpoint(endpoints[0]);

final OpcUaClient client = new OpcUaClient(config.build());
client.connect().get();

final List<NodeId> nodeIds = IntStream.rangeClosed(1, 50).mapToObj(i -> new NodeId(5, "Counter" + i)).collect(Collectors.toList());
final List<ReadValueId> readValueIds = nodeIds.stream().map(nodeId -> new ReadValueId(nodeId, AttributeId.Value.uid(), null, null)).collect(Collectors.toList());

// Bulk read call
final ReadResponse response = client.read(0, TimestampsToReturn.Both, readValueIds).get();
final DataValue[] results = response.getResults();
if (null != results) {
	final List<Integer> values = Arrays.stream(results).map(result -> (Integer) result.getValue().getValue()).collect(Collectors.toList());
	System.out.println(values.stream().map(String::valueOf).collect(Collectors.joining(",")));
}

client.disconnect().get();

The code performs a bulk read call on 50 integer variables (“Counter1” to “Counter50”). For performance tests you can put the bulk read call in a loop and measure the times. You should, however, connect to the server over the target network, not on localhost.

With a free (though not open-source) OPC UA simulation server by Prosys and Eclipse Milo as the client, I measured times of around 3.3 ms per bulk read of these 50 integer variables. I got similar results with the UA .NET stack by the OPC Foundation. Of course, you should do your own measurements with your target setup.

Also keep in mind that the preferred way to use OPC UA is not to constantly poll the values of all the variables. OPC UA allows you to monitor variables for changes and to get notified in case of a change, which is a more event-driven approach.