Using WPF Toolkit's CheckComboBox with Data Binding

Xceed’s WPF Toolkit is a popular extension to the standard controls offered by Microsoft’s WPF. One fancy control that I have been using lately is the CheckComboBox, which is a ComboBox that shows a list of items with checkboxes when opened and a list of the selected items when closed. It is great for selecting filtering options on smaller sets, for example.
However, it took me a little while to get it all up and running with data binding, so I am going to walk you through it. For reference, I’m starting with a .NET 4.6.1 WPF app in Visual Studio 2017.

First you have to install the Extended.Wpf.Toolkit package, which I am doing via Visual Studio’s built-in NuGet package manager.
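The equivalent Package Manager Console command, in case you prefer it over the UI, would be:

> Install-Package Extended.Wpf.Toolkit

To actually use the control, I am adding an XML namespace into my MainWindow’s XAML: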

xmlns:xctk="http://schemas.xceed.com/wpf/xaml/toolkit"

Then I’m adding the control in a simple StackPanel, while already adding DataBindings:

<xctk:CheckComboBox
  ItemsSource="{Binding Path=Options}"
  DisplayMemberPath="Name"
  SelectedMemberPath="Selected"/>

This means that my control will look at a collection named “Options” in my view-model, using its elements’ “Name” property for display and their “Selected” property for the checkmark. If you run the program at this point, you should already see an empty CheckComboBox, albeit without any proper layout.

Now it’s time to create the view model. Let’s start with a small class-let to represent our items:

class Item
{
  public string Name { get; set; }
  public bool Selected { get; set; }
}

As you can see, the names match what we set for DisplayMemberPath and SelectedMemberPath in the XAML. Now for the ViewModel class:

class ViewModel
{
  public ViewModel()
  {
    var languages = new string[]
    {
      "C", "C#", "C++", "D", "Java",
      "Rust", "Python", "ES6"
    };
    
    Options = new List<Item>();
    foreach (var language in languages)
    {
      Options.Add(new Item {
          Name = language,
          Selected = true });
    }
  }
  
  public List<Item> Options { get; set; }
}
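
For the bindings to resolve, the window’s DataContext has to point to an instance of this view model. A minimal way to wire that up, assuming it is done in the MainWindow code-behind, looks like this:

public partial class MainWindow : Window
{
  public MainWindow()
  {
    InitializeComponent();
    // the CheckComboBox bindings are resolved against this instance
    DataContext = new ViewModel();
  }
}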

If you run it at this point, you should be able to see an all-selected list of programming languages in the drop-down. But it is lacking a crucial detail: it is not observable, meaning the control will not be notified if the data in the view-model is changed by other means. To make sure that it is, the item collection needs to raise change notifications, and the Item class has to implement the INotifyPropertyChanged interface. That means firing a specific event, carrying the name of the changed property, whenever one of the properties changes.

Let’s do that for the Item first:

class Item : INotifyPropertyChanged
{
  private bool _selected;
  private string _name;

  public string Name
  {
    get => _name;
    set
    {
      _name = value;
      EmitChange(nameof(Name));
    }
  }

  public bool Selected
  {
    get => _selected;
    set
    {
      _selected = value;
      EmitChange(nameof(Selected));
    }
  }

  private void EmitChange(params string[] names)
  {
    if (PropertyChanged == null)
      return;
    foreach (var name in names)
      PropertyChanged(this,
        new PropertyChangedEventArgs(name));
  }

 public event PropertyChangedEventHandler
                PropertyChanged;
}

That got bigger! But it’s not a lot of meat really. For the Item list, we can just use ObservableCollection instead of List:

public ObservableCollection<Item> Options { get; set; }
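
Since the view model’s constructor from above still creates a List<Item>, it has to be changed to create an ObservableCollection<Item> as well (which needs a using directive for System.Collections.ObjectModel):

Options = new ObservableCollection<Item>();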

That’s it. Two-way data binding is now set up for the item collection: you can change the view-model and have the control react to it, and you can also react to changes made through the control, for example by hooking into the property setters.
You could also implement INotifyPropertyChanged for the ViewModel itself, if you intend to swap in entirely new ObservableCollections, but that is not necessary for this example.
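
As a small, hypothetical example of reacting to such changes, the view model could subscribe to the PropertyChanged events of its items after filling the Options collection and log whenever the user checks or unchecks an entry:

foreach (var item in Options)
{
  item.PropertyChanged += (sender, args) =>
  {
    if (args.PropertyName == nameof(Item.Selected))
    {
      var changed = (Item)sender;
      Console.WriteLine($"{changed.Name} is now " +
        (changed.Selected ? "checked" : "unchecked"));
    }
  };
}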

Adaptive random generation of multiple outcomes

Games often want to use randomness to spice up the experience – it can be a lot more exciting to risk something when you are not entirely certain of the result. In my game abstractanks, the power-ups are generated randomly. However, when playing the game, it seemed like it was always generating the same power-ups in a row, which can be kind of frustrating. Tough luck, because this is just how uniform randomness behaves – as anyone who has ever played the board game Sorry! or the German classic Mensch ärgere dich nicht! can surely testify.

Let’s try that with C# code:

var random = new Random();
var outcomes = new []{'A', 'B', 'C', 'D', 'E', 'F'};
for (int i = 0; i < 200; ++i)
{
  var index = random.Next(outcomes.Length);
  Console.Write(outcomes[index]);
}

This will produce something like this:

ECDDBABCEEBBDDADEDECCAECECEADBBCCCDAEBEECDBCACAEAA
BDEACECBDDBAEDCEEAEAECDEEEECBCCEECEDCBAECCCBDCDDEA
CEAABDEEBDEAEBABABDEBDAACBECBBAACAEDEEBAECECECCBAB
BBAAEEDEDEEBCACDDEBBCBACADDDBAECBAEDACBEAEABBCAEEA

See all those strings of Cs and Es? Horrible! That does not feel random!

Games like Dota 2 deal with this by tweaking the distribution after each “roll”. Specifically, for abilities like Phantom Assassin’s Coup de Grace, the chance is increased slightly after each unsuccessful attempt, making the 4th or so attempt a guaranteed critical strike.
But this technique only works nicely for a single event in a stream of attempts. It fails to make multiple outcomes look better.
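Here is a minimal sketch of that single-event idea, not Dota 2’s actual implementation; the base chance and the increment are just assumed example values:

var random = new Random();
var baseChance = 0.25;
var chance = baseChance;
for (int attempt = 0; attempt < 20; ++attempt)
{
  if (random.NextDouble() < chance)
  {
    Console.Write("X");   // "critical strike"
    chance = baseChance;  // reset the odds after a success
  }
  else
  {
    Console.Write(".");
    chance += baseChance; // raise the odds after every miss
  }
}
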
For my game I devised an algorithm with two properties:
1. The same roll never appears twice in succession
2. No long stretches without a specific roll
Here’s my algorithm to do that:

var outcomes = new []{'A', 'B', 'C', 'D', 'E', 'F'};
var chances = new []{1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
for (int i = 0; i < 200; ++i)
{
  // Normalize chances so they sum up to 1.0, then build their prefix sum
  var total = chances.Sum();
  var prefixSum = chances
    .Select(x => x / total)
    .Aggregate(new List<double>(), (list, x) =>
    {
      list.Add(list.LastOrDefault() + x);
      return list;
    }).ToList();

  // Roll an outcome
  var roll = random.NextDouble();
  var pick = prefixSum.FindIndex(x => x >= roll);
  Console.Write(outcomes[pick]);

  // Now adapt the distribution by removing all chance from
  // the last pick and distributing it to the N-1 others
  var increment = chances[pick] / (chances.Length - 1);
  chances = chances.Select((x, index) =>
  {
    if (index == pick)
      return 0.0;
    else
      return x + increment;
  }).ToArray();
}

And here’s what that produces:

AEDFBCAEFDBAEDFCBDECBFCFDBEACFDECBDAEFCEBACDEAFBDC
AFEBCABFDECDAFBECFADFCEBACFDCEDBFAEDCDBDAFEDBFEABD
CFABEDFBACDEAEFBECADBAFECAFBDACECFAEBDCAFCDBAEDAFA
BDFCDEACDBFEDFCBACDAFBEADFCDEFBEACBEFDCECFABDFCDBE

Much nicer for my needs, but it still looks pretty random. Also, my statistics buddy assured me that this algorithm still guarantees equal outcome probability for all items when run forever. I do not know whether this is a new technique, but I did not find anything like it when I was looking for it.
Let me know if this is of any use to you.

.NET Core for platform independent web development

Several of our projects are based on the .NET platform. Until recently, all of them used the classic .NET Framework. With a new project we had the opportunity to give .NET Core a try. The name stands for a modernized variant of the .NET Framework. It is developed by the .NET Foundation and Microsoft as a platform-independent open-source project.

Not every type of project is currently suitable for .NET Core. If you want to develop a Windows desktop application (WinForms, WPF) you still have to use the classic .NET Framework. However, for server based applications .NET Core is a really good fit. Our application, for example, is implemented as a JSON API server with .NET Core and a React/Redux based client interface.

The Benefits

Since .NET Core is platform independent, it runs on Linux, macOS and Windows. We no longer need a Windows machine to build the project on our CI server. Microsoft also provides Docker images for building and running .NET Core projects.

ASP.NET Core applications are no longer bound to Microsoft’s IIS or IIS Express: you can host them behind Apache or Nginx as well.

With .NET Core you also have a vast choice of IDEs. Of course, you can use Visual Studio on Windows. But you also have the option to use JetBrains’ Rider (on any platform), Visual Studio for Mac or Visual Studio Code (Mac, Linux, Windows). If you don’t want to use an IDE for everything .NET Core also has a nice command-line interface. For example, the following command sets up a new ASP.NET Core project with React and Redux:

$ dotnet new reactredux

To compile and run the project:

$ dotnet run

The Entity Framework Core also has a feature I missed in the Entity Framework for the classic .NET Framework: a pure in-memory database provider, which is very useful for testing.
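
A minimal sketch of how such a test setup could look, assuming the Microsoft.EntityFrameworkCore.InMemory package is installed (the entity and context names here are just examples):

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

class User
{
  public int Id { get; set; }
  public string Name { get; set; }
}

class AppDbContext : DbContext
{
  public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

  public DbSet<User> Users { get; set; }
}

class InMemoryDemo
{
  static void Main()
  {
    // the in-memory provider keeps all data in process memory,
    // so the test needs neither a real database nor a connection string
    var options = new DbContextOptionsBuilder<AppDbContext>()
      .UseInMemoryDatabase("TestDatabase")
      .Options;

    using (var db = new AppDbContext(options))
    {
      db.Users.Add(new User { Name = "admin" });
      db.SaveChanges();
      Console.WriteLine(db.Users.Count());  // prints 1
    }
  }
}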

The Downsides

When you browse the NuGet packages list you have to be aware that not every package is compatible with .NET Core yet, but the list is growing. And, as mentioned above, you can’t develop desktop GUI applications with .NET Core.

Integrating .NET projects with Gradle

Recently I have created Gradle build scripts for several .NET projects, both C# and VB.NET ones. Projects for the .NET platform are usually built with MSBuild, which is part of the .NET Framework distribution and is itself a full-blown build automation tool: you can define build targets and their dependencies and execute tasks to reach the build targets. I have written about the basics of MSBuild in a previous blog post.

The .NET projects I was working on were using MSBuild targets for the various build stages as well. Not only for building and testing, but also for the release and deployment scripts. These scripts were called from our Jenkins CI with the MSBuild Jenkins Plugin.

Gradle plugins

However, I wasn’t very happy with MSBuild’s clunky, Ant-like, XML-based syntax, and for most of our other projects we are using Gradle nowadays. So I tried Gradle for a new .NET project. I am using the Gradle MSBuild and Gradle NUnit plugins. Of course, the MSBuild Gradle plugin still calls MSBuild, so I don’t get rid of MSBuild completely, because Visual Studio’s .csproj and .vbproj project files are essentially MSBuild scripts, and I don’t want to get rid of them. So there is one Gradle task that calls MSBuild, but everything else beyond the act of compilation is automated with regular Gradle tasks, like copying files, zipping release artifacts etc.

Basic usage of the MSBuild plugin looks like this:

plugins {
  id "com.ullink.msbuild" version "2.18"
}

msbuild {
  // either a solution file
  solutionFile = 'DemoSolution.sln'
  // or a project file (.csproj or .vbproj)
  projectFile = file('src/DemoSoProject.csproj')

  targets = ['Clean', 'Rebuild']

  destinationDir = 'build/msbuild/bin'
}

The plugin offers lots of additional options, so be sure to check out the documentation on Github. If you want to give the MSBuild step its own task name, which is currently not directly mentioned on the Github page, use the task type Msbuild from the package com.ullink:

import com.ullink.Msbuild

// ...

task buildSolution(type: Msbuild, dependsOn: '...') {
  // ...
}

Since the .NET projects I’m working on use NUnit for unit testing, I’m using the NUnit Gradle plugin by the same creator as well. Again, please consult the documentation on the Github page for all available options. What I found necessary was setting the nunitHome option, because I don’t want the plugin to download a NUnit release from the internet, but use the one that is included with our project. Also, if you want a task with its own name or multiple testing tasks, use the NUnit task type in the package com.ullink.gradle.nunit:

import com.ullink.gradle.nunit.NUnit

// ...

task test(type: NUnit, dependsOn: 'buildSolution') {
  nunitVersion = '3.8.0'
  nunitHome = "${project.projectDir}/packages/NUnit.ConsoleRunner.3.8.0/tools"
  testAssemblies = ["${project.projectDir}/MyProject.Tests/bin/Release/MyProject.Tests.dll"]
}
test.dependsOn.remove(msbuild)

With Gradle I am now able to share common build tasks, for example for our release process, with our other non-.NET projects, which use Gradle as well.

OPC-UA Performance and Bulk Reads

In a previous post on OPC on this blog I introduced some basics of OPC. Now we’ll take a look at some performance characteristics of OPC-UA. Performance depends on both the OPC server and the client in use, of course. But there are some general tips to improve performance.

  • To get maximum performance, use OPC UA without security

OPC message signing and encryption add overhead. If your use case allows it, turn off security for maximum performance.

  • Bulk reads increase performance

Bulk reads

A bulk read call reads multiple variables at once, which reduces communication overhead between client and server.

Here’s a code example using Eclipse Milo, an open-source OPC-UA stack implementation for the Java VM.

final String endpointUrl = "opc.tcp://localhost:53530/OPCUA/SimulationServer";
final EndpointDescription[] endpoints = UaTcpStackClient.getEndpoints(endpointUrl).get();
final OpcUaClientConfigBuilder config = new OpcUaClientConfigBuilder();
config.setEndpoint(endpoints[0]);

final OpcUaClient client = new OpcUaClient(config.build());
client.connect().get();

final List<NodeId> nodeIds = IntStream.rangeClosed(1, 50).mapToObj(i -> new NodeId(5, "Counter" + i)).collect(Collectors.toList());
final List<ReadValueId> readValueIds = nodeIds.stream().map(nodeId -> new ReadValueId(nodeId, AttributeId.Value.uid(), null, null)).collect(Collectors.toList());

// Bulk read call
final ReadResponse response = client.read(0, TimestampsToReturn.Both, readValueIds).get();
final DataValue[] results = response.getResults();
if (null != results) {
	final List<Integer> values = Arrays.stream(results).map(result -> (Integer) result.getValue().getValue()).collect(Collectors.toList());
	System.out.println(values.stream().map(String::valueOf).collect(Collectors.joining(",")));
}

client.disconnect().get();

The code performs a bulk read call on 50 integer variables (“Counter1” to “Counter50”). For performance tests you can put the bulk read call in a loop and measure the times. You should, however, connect to the server over the target network, not on localhost.

With a free (however not open-source) OPC UA simulation server by Prosys and Eclipse Milo for the client I measured times around 3.3 ms per bulk read of these 50 integer variables. I got similar results with the UA.NET stack by the OPC Foundation. Of course, you should do your own measurements with your target setup.

Also keep in mind that the preferred way to use OPC UA is not to constantly poll the values of all the variables. OPC UA allows you to monitor variables for changes and to get notified in case of a change, which is a more event-driven approach.

Using PostgreSQL with Entity Framework

The most widespread O/R (object-relational) mapper for the .NET platform is the Entity Framework. It is most often used in combination with Microsoft SQL Server as the database. But the architecture of the Entity Framework allows it to be used with other databases as well. A popular and reliable open-source SQL database is PostgreSQL. This article shows how to use a PostgreSQL database with the Entity Framework.

Installing the Data Provider

First you need an Entity Framework data provider for PostgreSQL. It is called Npgsql. You can install it via NuGet. If you use Entity Framework 6 the package is called EntityFramework6.Npgsql:

> Install-Package EntityFramework6.Npgsql

If you use Entity Framework Core for the new .NET Core platform, you have to install a different package:

> Install-Package Npgsql.EntityFrameworkCore.PostgreSQL

Configuring the Data Provider

The next step is to configure the data provider and the database connection string in the App.config file of your project, for example:

<configuration>
  <!-- ... -->

  <entityFramework>
    <providers>
      <provider invariantName="Npgsql"
         type="Npgsql.NpgsqlServices, EntityFramework6.Npgsql" />
    </providers>
  </entityFramework>

  <system.data>
    <DbProviderFactories>
      <add name="Npgsql Data Provider"
           invariant="Npgsql"
           description="Data Provider for PostgreSQL"
           type="Npgsql.NpgsqlFactory, Npgsql"
           support="FF" />
    </DbProviderFactories>
  </system.data>

  <connectionStrings>
    <add name="AppDatabaseConnectionString"
         connectionString="Server=localhost;Database=postgres"
         providerName="Npgsql" />
  </connectionStrings>

</configuration>

Possible parameters in the connection string are Server, Port, Database, User Id and Password. Here’s an example connection string using all parameters:

Server=192.168.0.42;Port=5432;Database=mydatabase;User Id=postgres;Password=topsecret

The database context class

To use the configured database you create a database context class in the application code:

class AppDatabase : DbContext
{
  private readonly string schema;

  public AppDatabase(string schema)
    : base("AppDatabaseConnectionString")
  {
    this.schema = schema;
  }

  public DbSet<User> Users { get; set; }

  protected override void OnModelCreating(DbModelBuilder builder)
  {
    builder.HasDefaultSchema(this.schema);
    base.OnModelCreating(builder);
  }
}

The parameter to the super constructor call is the name of the configured connection string in App.config. In this example the method OnModelCreating is overridden to set the name of the used schema. Here the schema name is injected via constructor. For PostgreSQL the default schema is called “public”:

using (var db = new AppDatabase("public"))
{
  var admin = db.Users.First(user => user.UserName == "admin");
  // ...
}
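
The User entity class itself is not shown above; a minimal sketch, assuming an Id key and a UserName property that match the table definition further below, could look like this:

class User
{
  public long Id { get; set; }
  public string UserName { get; set; }
}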

The Entity Framework mapping of entity names and properties is case sensitive. To make the mapping work, you have to preserve the case when creating the tables by putting the table and column names in double quotes:

create table public."Users" ("Id" bigserial primary key, "UserName" text not null);

With these basics you’re now set up to use PostgreSQL in combination with the Entity Framework.

Evolution of programming languages

Programming languages evolve over time. They get new language features and their standard libraries are extended. Sounds great, doesn’t it? We all know that not going forward means going backward.

But I observe very different approaches looking at several programming ecosystems we are using.

Featuritis

Java and especially C# added more and more “me too” features release after release, turning relatively lean languages into quite complex multi-paradigm languages. They started out object-oriented and added generics, functional programming features, declarative programming (LINQ in C#) and different UI toolkits (AWT, Swing, JavaFX in Java; WinForms, WPF in C#) to the mix.

Often the new language features add their own set of quirks because they are an afterthought and not designed carefully enough.

For me, this lack of focus makes these languages less attractive than more recent approaches like Kotlin or Go.

In addition, deprecation often has no effect (see Java): 20-year-old code and style still works, which increases the burden further. While this is great from a business perspective, in that the effort to maintain compatibility is low, it does not help your code base. Different styles and old ways of doing things tend to remain forever.

Revolution

In Grails (I know, it is not a programming language, but it has its own ecosystem) we see more of a revolution. The core concept of a full-stack framework stays the same, but significant components are changed quite rapidly. We have seen many changes in technology, like Jetty to Tomcat, Ivy to Maven, Selenium RC to Geb, Gant to Gradle, and the list goes on.

This causes many, sometimes subtle, changes in behaviour that are a real pain when maintaining larger applications over many years.

Framework updates are often a time-consuming hassle but if you can afford it your code base benefits and will eventually become cleaner.

Clean(er) evolution

I really like the evolution of C++. It was relatively slow – many will argue too slow – in the past, but it has picked up pace in the last few years. The goals are clearly stated, and only features that support them make it in:

  • Make C++ a better language for systems programming and library building
  • Make C++ easier to teach and learn
  • Zero-cost abstractions
  • Better tool support

If you cannot make it zero-cost, your chances of getting your feature in are slim…

C at its core did not change much at all and remained focused on its merits. The upgrades mostly contained convenience features like line comments, additional data type definitions and multithreading.

Honest evolution – breaking backwards compatibility

In Python we have something I would call “honest evolution”. Python 3 introduced some breaking changes with the goal of a cleaner and more consistent language. Python 2 and 3 are incompatible, so the distinction in the major version number is fair. I like this approach of moving forward, as it clearly communicates what is happening and gets rid of the sins of the past.

The downside is that many systems still come with both a Python 2 and a Python 3 interpreter and accompanying libraries. Fortunately, there are some options and tools that help your code mitigate some of the incompatibilities, like the __future__ module and python-six.

At some point in the future (expected in 2020) there will only be support for Python 3. So start making the switch.