Redux architecture with WPF/C#

For me, the redux architecture has been a game changer in how I write UI programs. All the common problems surrounding observability, which is so important for good UX, are neatly solved, without signal spaghetti or having to trap the user in modal dialogs.

For the past two years, we have been working on writing a whole suite of applications in C# and WPF, and most programs in that suite now use a redux-style architecture. We had to overcome a few problems adapting the architecture and our coding style to the platform, but I think it was well worth it.

We opted to use Odonno’s ReduxSimple to organize our state. It’s a nice little library, but it alone does not enable you to write UI apps just yet.

Unidirectional UI in a stateful world

WPF, like most desktop UI toolkits, is a stateful framework. The preferred way to supply it with data is via two-way data binding and custom view-model objects. In order to make WPF suitable for unidirectional UI, you need something like a “controlled mode” for the WPF controls. In that mode, data coming from the application is just displayed and never modified without a round-trip through the application state. This directly opposes conventional data binding, which tries to hide the direction of the data flow.

To reiterate: we need WPF to call a function when the user changes a value in an input control, but not when we are updating the value from our application state. Since we are in control when we are writing to the components, we added a simple “filter” that intercepts our change event handlers in that case. After some evolution of these concepts, we now have this neatly abstracted in a couple of tool functions like this:

public UIDuplexBinder BindInput(TextBox textBox, IObservable<string> observable, Func<string, object> actionCreator)
{
  // ...
}

This updates the TextBox whenever new values are coming in on the IObservable, and when we are not changing the value via that observable, it calls the given action creator and dispatches the action to the store. We have such helper functions for most of our input controls, and similar functions for passive elements like TextBlocks and to show/hide things.
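To make the guard idea concrete, here is a minimal sketch of such a binder. The re-entrancy flag and the _store field are my illustration of the approach, not our exact production code; the Action-based Subscribe overload comes with System.Reactive, which ReduxSimple uses anyway. The real BindInput additionally returns a UIDuplexBinder so the binding can be disposed later, which I omit here:

public void BindInput(TextBox textBox, IObservable<string> observable, Func<string, object> actionCreator)
{
  var updatingFromStore = false;

  observable.Subscribe(newValue =>
  {
    // Writing to the TextBox raises TextChanged synchronously,
    // so this flag filters out exactly these programmatic changes.
    updatingFromStore = true;
    try { textBox.Text = newValue; }
    finally { updatingFromStore = false; }
  });

  textBox.TextChanged += (sender, args) =>
  {
    if (updatingFromStore)
      return; // change came from the store, no round-trip needed
    _store.Dispatch(actionCreator(textBox.Text));
  };
}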

Since this is relatively straight-forward code, we are skipping MVVM and doing this binding directly in the code-behind. When our binder functions are not sufficient, we sometimes do more complex updating in view models.

Immutable data

In a Redux-style architecture, observability comes from lightweight diffing, which in turn comes from immutable data updates in your reducers.

System.Collections.Immutable is great for updating the collections in your reducers in a non-mutating way, although its collections’ Equals implementations do not behave value-like, which is needed in this case. So in the types that use collections, we use an extension method called LazyEquals that ||s Object.ReferenceEquals and Linq.Enumerable.SequenceEqual.
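A minimal sketch of what such a LazyEquals can look like (the exact shape is an assumption, not copied from our code base):

public static class CollectionExtensions
{
  // Cheap reference check first, value-wise comparison second.
  public static bool LazyEquals<T>(this IEnumerable<T> left, IEnumerable<T> right)
  {
    return ReferenceEquals(left, right)
        || (left != null && right != null && left.SequenceEqual(right));
  }
}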

For the non-collection data, C#9’s record types and with expressions are great. Before switching to .NET 5 earlier this year, we used a utility function from Converto, a companion library of ReduxSimple, that implements a .With() via reflection and anonymous types. However, that function silently no-ops when you get a member name wrong in the anonymous type. So we had to write a lot of stupidly simple unit-tests to make sure that no typos slipped through and that our code would survive “rename” refactorings. The new with expressions offload this responsibility to the compiler, which works even better. Nothing wrong with lots of tests, of course.
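For illustration, the record-based variant looks like this; the state and action shapes are made up:

public record AppState(string Query, int SelectedIndex);
public record QueryChanged(string NewQuery);

public static AppState Reduce(AppState state, QueryChanged action) =>
    state with { Query = action.NewQuery }; // a typo in "Query" is now a compile error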

Next steps

With all this, writing Redux-style WPF programs has become a breeze. But one sore spot remains: we still have to supply custom Equals implementations whenever our State types contain a collection. Even when they do not, the generated Equals for records does not early-out via a ReferenceEquals, which can make the frequent comparisons in a Redux-style architecture needlessly expensive. This is error-prone and cumbersome, so we are currently debating whether this warrants changing C#’s defaults via something like Undefault.NET, so the generated Equals for records all do value-like comparison with ReferenceEquals early-outs. Of course, doing something like that is firmly in the danger zone, but maybe the benefits outweigh the risks in this case? It would sure eliminate lots of custom Equals implementations, for the price of a subtle, yet somewhat intuitive behavior change. What do you think?

The function that never ended

One of the unwritten laws of procedural programming is that any function you call will, at some point, end. In the presence of exceptions, this does not mean that the function will return gracefully, but it will end nonetheless.

When this does not happen, something very strange is afoot. For example, C’s exit() function ends a program right here and now. But this was a WPF application in C#, only using official libraries. Surely there was nothing like that there. And all I was doing was trying to dispose of my SignalR connection on program shutdown.

I had registered a delegate for “Exit” in my App.xaml’s <Application>. The SignalR client only implements IAsyncDisposable, so I made that shutdown function asynchronous using the async keyword. I awaited the client’s DisposeAsync, and the program just stopped right there, never reaching any of the cleanup code after that. No exception thrown either. Very weird.

Trying to step into the function with a debugger, I learned that the program exited when the SignalR client’s DisposeAsync was awaiting something itself. Just exited normally with exit code 0.

At that point, it became painfully obvious what was happening. async functions do not behave as predictably as normal functions. Whenever they are awaiting something, their “tail” is in fact posted to a dispatcher, which resumes the function at that point when the awaited Task is completed. But since I was already exiting my application, the Dispatcher was no longer executing newcomers like the remainder of my shutdown sequence.

To fix this, I reversed the order: when the user triggers an application exit, I first clean up my client and only then trigger the actual application shutdown.
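In code, the fix looks roughly like this; the handler and field names are made up for illustration:

private async void OnExitRequested(object sender, RoutedEventArgs e)
{
  // Clean up while the Dispatcher is still processing continuations...
  await connection.DisposeAsync();
  // ...and only then tear down the application.
  Application.Current.Shutdown();
}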

Co-Variant methods on C# collections

C# offers a powerful API for working with collections, and LINQ in particular offers lots of functional goodies for them. Among them is the Concat() method, which allows you to concatenate two IEnumerables.

We recently had the use-case of concatenating two collections with elements of a common super-type:

class Animal {}
class Cat : Animal {}
class Dog : Animal {}

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  // XXX: This does not compile: type inference picks Cat
  // as the element type, and dogs is no IEnumerable<Cat>!
  return cats.Concat(dogs);
}

The above call does not compile because Concat requires both sequences to share one element type and returns a combined sequence of that type: type inference settles on Cat here, and dogs is no IEnumerable<Cat>. (With an explicit type argument, cats.Concat<Animal>(dogs) does compile, since IEnumerable<T> is covariant in T, but the inferred call will not.) If we do not care about the specifics of the subclasses, we can build a Concatenate()-method ourselves that makes the whole thing possible, because instances of both subclasses can be put into a collection of their common parent class.

private static IEnumerable<TResult> Concatenate<TResult, TFirst, TSecond>(
  this IEnumerable<TFirst> first,
  IEnumerable<TSecond> second)
    where TFirst: TResult where TSecond : TResult
{
  IList<TResult> result = new List<TResult>();
  foreach (var f in first)
  {
    result.Add(f);
  }
  foreach (var s in second)
  {
    result.Add(s);
  }
  return result;
}

The above method is a bit clunky to call but works as intended:

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  // Works great!
  return cats.Concatenate<Animal, Cat, Dog>(dogs);
}
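By the way, the eager List in the implementation is not necessary; a lazy variant of the same method can use yield return. This sketch is my addition, not from the original:

private static IEnumerable<TResult> Concatenate<TResult, TFirst, TSecond>(
  this IEnumerable<TFirst> first,
  IEnumerable<TSecond> second)
    where TFirst : TResult where TSecond : TResult
{
  // Stream both sequences instead of copying them into a list.
  foreach (var f in first)
    yield return f;
  foreach (var s in second)
    yield return s;
}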

A different variant of the Concatenate()-method can be useful if you use a collection of the parent class to collect the instances from several subclass collections:

private static IEnumerable<TResult> Concatenate<TResult, TIn>(
  this IEnumerable<TResult> first,
  IEnumerable<TIn> second)
    where TIn : TResult
{
  IList<TResult> result = first.ToList();
  foreach (var item in second)
  {
    result.Add(item);
  }
  return result;
}

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  IEnumerable<Animal> result = new List<Animal>();
  result = result.Concatenate(cats);
  return result.Concatenate(dogs);
}

Maybe the above examples can serve as an inspiration for more utility methods that may improve working with collections in C#.

Using (elastic)search with .NET Core

Many modern applications require powerful search mechanisms to become useful and make their users more productive. That is in large part due to the amount of data available to work with. Thankfully there are already powerful tools to index your data and make it searchable.

One of the most well-known state-of-the-art solutions is Elasticsearch, and it has a .NET API called NEST. While the documentation is ok, I want to give a quick rundown on how to add searching capabilities to your .NET Core application. Some ideas are borrowed from the great post “Using Elasticsearch with ASP.NET Core and Docker”.

Getting ElasticSearch running

The easiest way to get up and running with Elasticsearch is to use their docker images and just run the container on your development machine. I like to use a docker compose file like the following to get Elasticsearch and its tooling application Kibana up and running fast:

version: '3.8'
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        container_name: elastic
        environment:
            - node.name=elastic
            - cluster.initial_master_nodes=elastic
        ports:
            - "9200:9200"
            - "9300:9300"
        volumes:
            - type: bind
              source: ./esdata
              target: /usr/share/elasticsearch/data
        networks:
            - esnetwork
    kibana:
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
            - "5601:5601"
        networks:
            - esnetwork
        depends_on:
            - elasticsearch
volumes:
    esdata:
networks:
    esnetwork:
        driver: bridge
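Assuming the file is saved as docker-compose.yml, with an esdata directory next to it for the bind mount, a single command brings both services up:

docker-compose up -d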

After you run it with docker-compose, you can talk to the search service on http://localhost:9200/ and to the Kibana management GUI on http://localhost:5601/. On the Kibana UI, especially the Dev Tools console is interesting for experimenting with search queries.

Access ElasticSearch from your .NET Core app

I find it quite elegant to write an extension method for the IServiceCollection to configure the ElasticClient and register it as a singleton with the dependency injection framework of .NET Core, like so:

    public static class ElasticSearchExtension
    {
        public static void AddElasticsearch(
            this IServiceCollection services, IConfiguration configuration)
        {
            var url = configuration["elasticsearch:url"];
            var settings = new ConnectionSettings(new Uri(url))
                    .DefaultMappingFor<SearchableDevice>(deviceMapping => deviceMapping
                        .IndexName("devices")
                        .IdProperty(dev => dev.Id)
                    )
                ;
            var client = new ElasticClient(settings);
            var response = client.Indices.Create("devices", creator => creator
                .Map<SearchableDevice>(device => device
                    .AutoMap()
                )
            );
            // maybe check response to be safe...
            services.AddSingleton<IElasticClient>(client);
        }
    }

The configuration block looks like the following

  "ElasticSearch": {
    "url": "http://localhost:9200/"
  },

and allows for future extension.

Our search service has to be registered in Startup of course:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddElasticsearch(Configuration);
}

Indexing our objects

In the above example we built a SearchableDevice class whose public properties are going to be indexed by Elasticsearch. The API allows for much more fine-grained control over what and how to index, but we want to keep things simple without having to worry about excluding properties and so on. If you have set it up that way, indexing a SearchableDevice is merely one simple call:

// SearchClient is the injected IElasticClient
// mySearchableDevice is an instance of SearchableDevice
SearchClient.IndexDocument(mySearchableDevice);
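For completeness, a minimal SearchableDevice could look like the following; apart from the Id property used in the mapping above, the members are just plausible assumptions:

public class SearchableDevice
{
  public int Id { get; set; }
  public string Name { get; set; }
  public string SerialNumber { get; set; }
}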

Searching for objects

When developing a search query, I like to try it in the Kibana Dev Tools and then transform it into a NEST call. A simple query that checks whether any of the device properties starts with “needle” looks like this:

{
  "query": {
    "multi_match": {
      "type": "phrase_prefix", 
      "fields": ["*"],
      "query": "needle"
    }
  }
}

The nice thing about the Kibana Dev Tools console is that it can display and complete the possible values for fields like “type” in your multi_match query.

That search query can then be translated into a NEST call in a straightforward way and looks like this:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);

The search response contains the hits with their source objects and some metadata like the score and result count.

Bear in mind that Elasticsearch only returns the first/best 10 matches by default, so explicitly specifying the result size will often be closer to what you want.
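With NEST, that is just one more call on the search descriptor, for example:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Size(100) // return up to 100 hits instead of the default 10
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);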

Wrapping it up

Getting started with Elasticsearch in .NET Core does not require too much boilerplate and setup work if you use tools like docker and the NEST library. Making it usable and tuning the indexing and querying may require a lot of work to achieve the best results. On the other hand, smaller applications can start off with a simple search setup like the one shown above and simply evolve it when the need arises.

Data-Oriented Design: Using data as interfaces

A Code Centric World

In main-stream OOP, polymorphism is achieved by virtual functions. To reuse some code, you simply need one implementation of a specific “virtual” interface. Bigger programs are composed of some functions calling other functions calling yet other functions. Virtual functions introduce flexibility here that allows parts of the call tree to be replaced, so calling functions can be reused by running on different, but homogeneous, callees. This is a very “code centric” view of a program. The data is merely used as context for functions calling each other.

Duality

Let us, for the moment, assume that all the functions and objects that such a program runs on are pure. They never have any side effects and communicate solely via parameters to and return values from the function. Now that’s not traditional OOP, and more a functional-programming way of doing things, but it is surely possible to structure (at least large parts of) traditional OOP programs that way. This premise helps in understanding how data-oriented design is in fact dual to the traditional “code centric” view of a program: instead of looking at the functions calling each other, we can also look at how the data is being transformed by each step in the program, because that is exactly what goes into, and comes out of, each function. IS-A becomes “produces/consumes compatible data”.

Cooking without functions

I am using C# in the example, because LINQ, or any nice map/reduce implementation, makes this really straightforward. But the principle applies to many languages. I have been using the technique in C++, C#, Java and even dBase.
Let’s say we have a recipe of sorts that has a few ingredients encoded in a simple class:

class Ingredient
{
  public string Name { get; set; }
  public decimal Amount { get; set; }
}

We store them in a simple List and have a nice function that can compute the percentage of each ingredient:

public static IReadOnlyList<(string, decimal)>
    Percentages(IEnumerable<Ingredient> ingredients)
{
  var sum = ingredients.Sum(x => x.Amount);
  return ingredients
    .Select(x => (x.Name, x.Amount / sum))
    .ToList();
}

Now things change, and just to make it difficult, we need a new ingredient type that is just a little more complicated:

class IngredientInfo
{
  public string Name { get; set; }
  /* other useful stuff */
}

class ComplicatedIngredient
{
  public IngredientInfo Info { get; set; }
  public decimal Amount { get; set; }
}

And we definitely want to use the old, simple one, as well. But we need our percentage function to work for recipes that have both Ingredients and also ComplicatedIngredients. Now the go-to OOP approach would be to introduce a common interface that is implemented by both classes, like this:

interface IIngredient
{
  string GetName();
  decimal GetAmount();
}

That is trivial to implement for both classes, but adds quite a bunch of boilerplate, just about doubling the size of our program. Then we just replace IReadOnlyList<Ingredient> by IReadOnlyList<IIngredient> in the Percentages function. That last bit violates the Open/Closed principle, but only because we did not use the interface right away (who thought YAGNI was a good idea?). Also, the new interface is quite the opposite of the “Tell, don’t ask” principle, but there’s no easy way around that, because the Percentages function only has meaning on a List<> of them.

Cooking with data

But what if we just use data as the interface? In this case, it so happens that we can easily turn a ComplicatedIngredient into an Ingredient for our purposes. In C#’s LINQ, a simple Select() will do nicely:

var simplified = complicated
  .Select(x => new Ingredient
   { 
     Name = x.Info.Name,
     Amount = x.Amount
   });

Now that can easily be passed into the Percentages function, without even touching it. Great!
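Mixing both kinds in one recipe then becomes a matter of normalizing first. A short sketch, where simpleOnes is some assumed IEnumerable<Ingredient> and simplified is the conversion from above:

var all = simpleOnes.Concat(simplified);
var percentages = Percentages(all);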

In this case, one object could neatly be converted into the other, which is often not the case in practice. However, there’s often a “common denominator class” that can be found in pretty much the same way as extracting a common interface would. Just look at the info you can retrieve from that imaginary interface. In this case, that was the same as the original Ingredient class.

Further thoughts

To apply this, you sometimes have to restructure your programs a little bit, which often means going wide instead of deep. For example, you might have to convert your data to a homogeneous form in a preprocessing step instead of accessing different objects homogeneously directly in your algorithms, or use postprocessing afterwards.
In languages like C++, this can even net you a huge performance win, which is often cited as the greatest thing about data-oriented design. But, first and foremost, I find that this leads to programs that are easier to understand for both machine and people. I have found myself using this data-centric form of code reuse a lot more lately.

Are you using something like this as well or are you still firmly on the override train, and why? Tell me in the comments!

Getting started with exact arithmetic and F#

In this blog post, I claimed that some exact arithmetic beyond rational numbers can be implemented on a computer. Today I want to show you how that might be done by showing you the beginning of my implementation. I chose F# for the task, since I have been waiting for an opportunity to check it out anyway. So this post is a more practical (first) follow-up to the more theoretical one linked above, with some of my F# development experiences on the side.

F# turned out to be mostly pleasant to use. The only annoying thing that happened to me along the way was some weirdness of F#, or of the otherwise very helpful IDE Rider: F# seems to need a compilation order of the source code files, and I only found out by acts of desperation that this order is supposed to be controlled by drag & drop in the solution explorer.

The code I want to (partially) explain is available on github:

https://github.com/felixwellen/ExactArithmetic

I will link to the current commit when I discuss specific sections below.

Prerequisite: Rational numbers and Polynomials

As explained in the ‘theory post’, polynomials will be the basic ingredient to cook more exact numbers from the rationals. The rationals themselves can be built from ‘BigInteger’s (source). The basic arithmetic operations follow the rules commonly taught in school (here is addition):

static member (+) (l: Rational, r: Rational) =
    Rational(l.up * r.down + r.up * l.down,
             l.down * r.down)

‘up’ and ‘down’ are ‘BigInteger’s representing the numerator and denominator of the rational number. ‘-‘, ‘*’ and ‘/’ are defined in the same style and extended to polynomials with rational coefficients (source).

There are two things important for this post that polynomials have and rationals do not: degrees and remainders. The degree of a polynomial is just the number of its coefficients minus one, unless it is the constant zero polynomial. The zero-polynomial has degree -1 in my code, but the specific value is not too important – it just needs to be smaller than all the other degrees.

Remainders are a bit more work to calculate. For two polynomials P and Q where Q is not zero, there is always a unique polynomial R with a degree smaller than that of Q such that:

P = Q * D + R

for some polynomial D (the algorithm is here).

Numberfields and examples

The ingredients are put together in the type ‘NumberField’, which is the name used in algebra, so it is precisely what is described here. Yet it is far from obvious that this is the ‘same’ thing as in my example code.

One source of confusion of this approach to exact arithmetic is that we do not know which solution of a polynomial equation we are using. In the example with the square root, the solutions only differ in the sign, but things can get more complicated. This ambiguity is also the reason that you will not find a function in my code, that approximates the elements of a numberfield by a decimal number. In order to do that, we would have to choose a particular solution first.

Now, in the form of unit tests, we can look at a very basic example of a number field: the one from the theory-post containing a solution of the equation X²=2:

let TwoAsPolynomial = Polynomial([|Rational(2,1)|])
let ModulusForSquareRootOfTwo = 
     Polynomial.Power(Polynomial.X,2) - TwoAsPolynomial
let E = NumberField(ModulusForSquareRootOfTwo)   
let TwoAsNumberFieldElement = NumberFieldElement(E, TwoAsPolynomial)

[<Fact>]
let ``the abstract solution is a solution of the given equation``() =
    let e = E.Solution in  (* e is a solution of the equation 'X^2-2=0' *)
    Assert.Equal(E.Zero, e * e - TwoAsNumberFieldElement)

There are applications of these numbers which have no obvious relation to square roots. For example, there are numberfields containing roots of unity, which would allow us to calculate with rotations in the plane by rational fractions of a full rotation. This might be the topic of a follow-up post…

Some strings are more equal before your Oracle database

When working with customer code based on ADO.NET, I was surprised by a DBConcurrencyException with a German error message.

The message just tells us that some UpdateCommand had an effect on “0” instead of the expected “1” rows of a DataTable. This happened while writing some changes to a table using an OracleDataAdapter. What really surprised me at this point was that there certainly was no other thread writing to the database during my update attempt. Even more confusing was that my method of changing DataTables and using the OracleDataAdapter to write changes had worked pretty well so far.

In this case, the exception name “DBConcurrencyException” turned out to be quite misleading. The text message was absolutely correct, though.

The explanation

The UpdateCommand is a prepared statement generated by the OracleDataAdapter. It may be used to write the changes a DataTable keeps track of to a database. To update a row, the UpdateCommand identifies the row with a WHERE-clause that matches all original values of the row and then writes the new values to it. So if we have a table with two columns, a primary id and a number, the update statement would essentially look like this:

UPDATE EXAMPLE_TABLE
  SET ROW_ID =:current_ROW_ID, 
      NUMBER_COLUMN =:current_NUMBER_COLUMN
WHERE
      ROW_ID =:old_ROW_ID 
  AND NUMBER_COLUMN =:old_NUMBER_COLUMN

In my case, the problem turned out to be caused by string-valued columns and was due to some Oracle weirdness that was already discussed on this blog (https://schneide.blog/2010/07/12/an-oracle-story-null-empty-or-what/): on writing, empty strings (more precisely: empty VARCHAR2s) are transformed to a DBNull. Note however, that the following are not equivalent:

WHERE TEXT_COLUMN = ''
WHERE TEXT_COLUMN is null

The first will just never match… (at least with Oracle 11g). So saying that null and empty strings are the same would not be an accurate description.

The WHERE-clauses of the generated UpdateCommands look more complicated for (nullable) columns of type VARCHAR2. But instead of trying to understand the generated code, I just guessed that the problem was a bug or inconsistency in the OracleDataAdapter that caused the exception. And in fact, it turned out that the problem occurred whenever I tried to write an empty string to a column that was DBNull before. Which would explain the message of the DBConcurrencyException, since the DataTable thinks there is a difference between empty strings and DBNulls, but due to the conversion there will be no difference when the corresponding row is updated. So once understood, the problem was easily fixed by transforming all empty strings to null prior to invoking the UpdateCommand.
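The normalization can be a small pass over the DataTable right before the update. This is a sketch with assumed table and adapter variables, not the original customer code:

foreach (DataRow row in table.Rows)
{
  if (row.RowState == DataRowState.Deleted)
    continue; // accessing values of deleted rows throws

  foreach (DataColumn column in table.Columns)
  {
    // Replace empty strings by DBNull, matching what Oracle will store anyway.
    if (column.DataType == typeof(string) && (row[column] as string) == string.Empty)
      row[column] = DBNull.Value;
  }
}
adapter.Update(table);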

Clean deployment of .NET Core application

Microsoft’s .NET Core framework has rightfully earned its spot among cross-platform frameworks. We like to use it, for example, as a RESTful backend for our React frontends. If you are not burying your .NET Core application in a docker container where no one ever needs to configure or customize it, you may feel agitated by its default deployment layout: all the dependencies live next to some JSON configuration files in one directory.

While this is ok if you never need to look in there for a configuration file and change something, you may like to clean it up and put the files into different folders. This can be achieved by customizing your MSBuild, but it is all but straightforward!

Our goal

  1. Put all of our dependencies into a lib directory
  2. Put all of our configuration files into a configuration directory
  3. Remove unneeded files

The above should not require any interaction but be part of the regular build process.

The journey

We need to customize the MSBuild system to achieve our goal, because the deps.json file must be rewritten to change the location of our dependencies. This is the hardest part! First we add the RoslynCodeTaskFactory as a package reference to our MSBuild in the csproj of our project. That way we can implement tasks using C#. We define two tasks that will help us in rewriting the deps.json:

<Project ToolsVersion="15.8" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="RegexReplaceFileText" TaskFactory="CodeTaskFactory" AssemblyFile="$(RoslynCodeTaskFactory)" Condition=" '$(RoslynCodeTaskFactory)' != '' ">
    <ParameterGroup>
      <InputFile ParameterType="System.String" Required="true" />
      <OutputFile ParameterType="System.String" Required="true" />
      <MatchExpression ParameterType="System.String" Required="true" />
      <ReplacementText ParameterType="System.String" Required="true" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System" />
      <Using Namespace="System.IO" />
      <Using Namespace="System.Text.RegularExpressions" />
      <Code Type="Fragment" Language="cs">
        <![CDATA[ File.WriteAllText( OutputFile, Regex.Replace(File.ReadAllText(InputFile), MatchExpression, ReplacementText) ); ]]>
      </Code>
    </Task>
  </UsingTask>

  <UsingTask TaskName="RegexTrimFileText" TaskFactory="CodeTaskFactory" AssemblyFile="$(RoslynCodeTaskFactory)" Condition=" '$(RoslynCodeTaskFactory)' != '' ">
    <ParameterGroup>
      <InputFile ParameterType="System.String" Required="true" />
      <OutputFile ParameterType="System.String" Required="true" />
      <MatchExpression ParameterType="System.String" Required="true" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System" />
      <Using Namespace="System.IO" />
      <Using Namespace="System.Text.RegularExpressions" />
      <Code Type="Fragment" Language="cs">
        <![CDATA[ File.WriteAllText( OutputFile, Regex.Replace(File.ReadAllText(InputFile), MatchExpression, "") ); ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>

We put the tasks in a file called RegexReplace.targets in the Build directory and import it in our csproj using <Import Project="Build/RegexReplace.targets" />.

Now we can just add a new target that is executed after the publish target to our main project csproj to move the assemblies around, rewrite the deps.json and remove unwanted files:

  <Target Name="PostPublishActions" AfterTargets="AfterPublish">
    <ItemGroup>
      <Libraries Include="$(PublishUrl)\*.dll" Exclude="$(PublishUrl)\MyProject.dll" />
    </ItemGroup>
    <ItemGroup>
      <Unwanted Include="$(PublishUrl)\MyProject.pdb;$(PublishUrl)\.filenesting.json" />
    </ItemGroup>
    <Move SourceFiles="@(Libraries)" DestinationFolder="$(PublishUrl)/lib" />
    <Copy SourceFiles="Build\MyProject.runtimeconfig.json;Build\web.config" DestinationFiles="$(PublishUrl)\MyProject.runtimeconfig.json;$(PublishUrl)\web.config" />
    <Delete Files="@(Libraries)" />
    <Delete Files="@(Unwanted)" />
    <RemoveDir Directories="$(PublishUrl)\Build" />
    <RegexTrimFileText InputFile="$(PublishUrl)\MyProject.deps.json" OutputFile="$(PublishUrl)\MyProject.deps.json" MatchExpression="(?&lt;=&quot;).*[/|\\](?=.*\.dll|.*\.exe)" />
    <RegexReplaceFileText InputFile="$(PublishUrl)\MyProject.deps.json" OutputFile="$(PublishUrl)\MyProject.deps.json" MatchExpression="&quot;path&quot;: &quot;.*&quot;" ReplacementText="&quot;path&quot;: &quot;.&quot;" />
  </Target>

The result

All this work should result in a working application with a clean root directory layout. As far as we know, the remaining files like the web.config, the main project assembly and the two json files cannot easily be relocated. The resulting layout is nevertheless quite clean and makes it easy for administrators to find the configuration files they need to customize.

Of course, one can argue whether the result is worth the hassle, but if your customers’ administrators and operations people value it, you should do it.

Have unregistered classes throw with the unity DI container

The unity container (not to be confused with the game engine) is one of the most popular dependency injection tools for C#.
However, by default the unity container will try to Resolve() all classes that it can. If you do not register a class, it will often just succeed anyway.
I much prefer explicitly registering classes, and having the resolution just throw an exception if I try to resolve something I did not register.
There’s a viable solution for that on stackoverflow, but it fails to throw when trying to resolve a class that was only registered via its interface.
Here’s our fixed version:

public class UnityRegistrationTracking : UnityContainerExtension
{
  private readonly ConcurrentDictionary<NamedTypeBuildKey, bool> registrations =
    new ConcurrentDictionary<NamedTypeBuildKey, bool>();

  protected override void Initialize()
  {
    base.Context.Registering += Context_Registering;
    base.Context.Strategies.Add(
        new ValidateRegistrationStrategy(this.registrations), UnityBuildStage.Setup);
  }

  private void Context_Registering(object sender, RegisterEventArgs e)
  {
    var buildKey = new NamedTypeBuildKey(e.TypeFrom, e.Name);
    this.registrations.AddOrUpdate(buildKey, true,
      (key, oldValue) => true);
  }

  public class ValidateRegistrationStrategy : BuilderStrategy
  {
    private ConcurrentDictionary<NamedTypeBuildKey, bool> registrations;

    public ValidateRegistrationStrategy(ConcurrentDictionary<NamedTypeBuildKey, bool> registrations)
    {
      this.registrations = registrations;
    }

    public override void PreBuildUp(ref BuilderContext context)
    {
        var buildKey = new NamedTypeBuildKey(context.RegistrationType, context.Name);
        if (!this.registrations.ContainsKey(buildKey))
        {
          throw new ResolutionFailedException(buildKey.Type, buildKey.Name,
            string.Format("Type {0} was not explicitly registered in the container.", buildKey.Type.Name));
        }
    }
  }
}

We hook into two parts of the unity API here:

  1. The registration, which is called when you call Unity.RegisterType
  2. The resolution process, which is called when unity tries to resolve a specific instance.

The first part happens in Context_Registering. We just store the registration in a dictionary for later. It is important to use TypeFrom as a key, since we want to refer to objects by the interfaces they are registered with, not their concrete implementations.
The second part is the ValidateRegistrationStrategy. All registered BuilderStrategy objects go in a list that is processed when an object is built. The UnityBuildStage.Setup acts as a sorting key, to make sure that this strategy is executed as early as possible.
In the strategy, we check whether the requested type was previously registered, and throw an exception if it was not. It is important to use context.RegistrationType here, since context.Type will again contain the concrete type, and not the interface.
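Using the extension is then a one-liner on the container. The service types in this sketch are made up:

var container = new UnityContainer();
container.AddNewExtension<UnityRegistrationTracking>();

container.RegisterType<IService, Service>();
var works = container.Resolve<IService>();        // fine: registered via its interface
var throws = container.Resolve<SomethingElse>();  // throws ResolutionFailedException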

Using WPF Toolkit’s CheckComboBox with Data-Binding

Xceed’s WPF Toolkit is a popular extension to the standard components offered by Microsoft’s WPF. One fancy control that I have been using lately is the CheckComboBox, which is a ComboBox that shows a list of items and checkboxes when opened and a list of the selected items when closed. For example, it is great for selecting filtering options in smaller sets.
However, it took me a little while to get it all up and running with DataBinding. I am going to walk you through it. For reference, I am starting with a .NET 4.6.1 WPF App in Visual Studio 2017.

First you have to install Extended.Wpf.Toolkit, which I am doing via VS’s built-in package manager. To actually use the control, I am adding an XML namespace into my MainWindow’s XAML:

xmlns:xctk="http://schemas.xceed.com/wpf/xaml/toolkit"

Then I’m adding the control in a simple StackPanel, while already adding DataBindings:

<xctk:CheckComboBox
  ItemsSource="{Binding Path=Options}"
  DisplayMemberPath="Name"
  SelectedMemberPath="Selected"/>

This means that my control will look at a collection named “Options” in my view-model, using its elements’ “Name” property for display and their “Selected” property for the checkmark. If you run the program at this point, you should be able to see an empty CheckComboBox, albeit badly layouted.

Now it’s time to create the view model. Let’s start with a small class-let to represent our items:

class Item
{
  public string Name { get; set; }
  public bool Selected { get; set; }
}

As you can see, the names match what we set for DisplayMemberPath and SelectedMemberPath in the XAML. Now for the ViewModel class:

class ViewModel
{
  public ViewModel()
  {
    var languages = new string[]
    {
      "C", "C#", "C++", "D", "Java",
      "Rust", "Python", "ES6"
    };
    
    Options = new List<Item>();
    foreach (var language in languages)
    {
      Options.Add(new Item {
          Name = language,
          Selected = true });
    }
  }
  
  public List<Item> Options { get; set; }
}

If you run it at this point, you should be able to see an all-selected list of programming languages in the drop-down. But it is lacking a crucial detail: it is not observable, meaning the component will not be notified if the data in the view-model is changed by other means. To fix that, the Item list and the Item itself have to implement the INotifyPropertyChanged interface. To do that, you have to fire a specific event whenever a property changes, with the name of that property in it.

Let’s do that for the Item first:

class Item : INotifyPropertyChanged
{
  private bool _selected;
  private string _name;

  public string Name
  {
      get => _name; set
      {
        _name = value;
        EmitChange(nameof(Name));
      }
  }
  public bool Selected
  {
    get => _selected; set
    {
      _selected = value;
      EmitChange(nameof(Selected));
    }
  }

  private void EmitChange(params string[] names)
  {
    if (PropertyChanged == null)
      return;
    foreach (var name in names)
      PropertyChanged(this,
        new PropertyChangedEventArgs(name));
  }

  public event PropertyChangedEventHandler PropertyChanged;
}

That got bigger! But it’s not a lot of meat really. For the Item list, we can just use ObservableCollection instead of List:

public ObservableCollection<Item> Options {get; set;}

That’s it. Two-way data binding is set up for the item collection: you can now change the view-model and have the component react to it, but also react to changes from the component by hooking into the property-set functions.
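For example, reacting to check state changes from the view-model side is now just a matter of subscribing to each item’s PropertyChanged event. A sketch that could live in the ViewModel constructor (Debug comes from System.Diagnostics):

foreach (var option in Options)
{
  option.PropertyChanged += (sender, args) =>
  {
    if (args.PropertyName == nameof(Item.Selected))
    {
      var item = (Item)sender;
      Debug.WriteLine($"{item.Name} is now {(item.Selected ? "checked" : "unchecked")}");
    }
  };
}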
Now you could also implement INotifyPropertyChanged for the ViewModel, if you intend to swap in new ObservableCollections, but that is not necessary for this example.