My favorite C++20 feature

As I evolved my programming style away from mutating long-lived “big” objects and structures and towards a more functional and data-oriented style based mainly on pure functions, I also found myself needing a lot more structs. These naturally occur as return types for functions with ‘richer’ output, if you do not want to use std::tuple or other ad-hoc types everywhere. If you see a program as a sequence of data transformations, I guess the structs are the intermediate representations encoded in the type system.

Let me first clarify what I mean by structs, as opposed to what the language says: a type that has all public data members, obeys the rule of zero, and is valid in any configuration. A typical 3D vector like struct v3 { float x{}, y{}, z{}; }; is a struct; std::vector is not.

These types are great. You can copy them around, use them with structured bindings, and they correctly propagate constness, which makes them a great fit for passing data through layers of function calls. When used as function parameters, they also help you evolve your program over time, because you can just change the single struct instead of every function call that uses this parameter combination. Or you can easily batch, or otherwise ‘delay’, calls by recording the function parameters: just throw the parameters into a container and execute the code later.

And with C++20, they got even better, because now you can use them with my favorite new feature: designated initializers, which let you use the member names at the initialization site. E.g., for a struct that represents an HTTP request:

struct http_request
{
  http_method method;
  std::string uri;
  std::vector<header_entry> headers;
};

You can now initialize it like this:

auto request = http_request{
  .method = http_method::get,
  .uri = "localhost:7634",
  .headers = { { .name = "Authorization", .value = "Bearer TOKEN" } },
};

You can even use this directly as a function argument without repeating the type name, de facto giving you named parameters for the price of a pair of extra curly braces:

run_request({
    .method = http_method::get,
    .uri = "localhost:7634",
    .headers = { { .name = "Authorization", .value = "Bearer TOKEN" } },
});

You can, of course, combine this named-parameter-style struct with other function parameters in your API, but like with lambdas, I think it is most readable as the last parameter. Hence, also like with lambdas, you probably never want more than one at each call site. I’m very happy with this new feature, and it is already making the code using my APIs a lot more readable.
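As an illustration, a call mixing an ordinary parameter with such a struct might look like this sketch (the http_client type and this run_request overload are made up for the example):

// Sketch only: http_client and this run_request overload are hypothetical.
void run_request(http_client& client, http_request const& request);

auto client = http_client{};
run_request(client, {
    .method = http_method::get,
    .uri = "localhost:7634",
});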

The boy scout rule and git in practice

There’s a dichotomy when applying the boy scout rule to programming: cleaning up code that you happen to come across ‘pollutes’ your merge/pull requests, making them harder to review and therefore less likely to be accepted.

One way to cope with this is to submit the ‘clean up’ and the feature/task-related changes separately, and merge them back into upstream in separate steps. But oftentimes, it is much easier to just fix a small problem right away instead of switching back to your main branch and doing it there. In fact, the extra friction might prevent a developer from making the improvement at all, which is exactly what I want to avoid: I want to encourage my fellow developers to make improvements.

So one thing that we do about this is to mark the changes that are unrelated (or tangentially related) to the task with their own commit and a special prefix in the commit message like:

BSR: More consistent function signatures

As you might have guessed, BSR stands for boy scout rule. This does not solve the fact that the diffs get larger than necessary, but it makes it possible to ‘filter out’ the pure refactorings. In some cases, these commits can later be cherry-picked onto the main branch before doing the review. Of course, this only works for small refactorings, but this is where the boy scout rule applies.

Composition of C# iterator methods

Iterator methods in C# are one of my favorite features of that language. I do not use them all that often, but it is nice to know they are there. If you are not sure what they are, here’s a little example:

public IEnumerable<int> Iota(int from, int count)
{
  for (int offset = 0; offset < count; ++offset)
    yield return from + offset;
}

They allow you to lazily generate any sequence directly in code. For example, I like to use them when generating a list of errors on a complex input (think compiler errors). The presence of the yield contextual keyword transforms the function body into a state machine, allowing you to pause it until you need the next value.
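To make that laziness visible, here is a small usage example:

// Calling Iota only creates the state machine; the loop body has not run yet.
var numbers = Iota(10, 3);

// Enumerating drives the state machine, yielding 10, 11 and 12 one at a time.
foreach (var number in numbers)
  Console.WriteLine(number);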

However, this makes it a little more difficult to compose such iterator methods, and, in reverse, to refactor a complex iterator method into several smaller ones. It was not obvious to me right away how to do this in a ‘this always works’ manner, so I am sharing how I do it here. Consider this slightly more complex iterator method:

public IEnumerable<int> IotaAndBack(int from, int count)
{
  for (int offset = 0; offset < count; ++offset)
    yield return from + offset;

  for (int offset = 0; offset < count; ++offset)
    yield return from + count - offset - 1;
}

Now we want to extract both loops into their own functions. My one-size-fits-all solution is this:

public IEnumerable<int> AndBack(int from, int count)
{
  for (int offset = 0; offset < count; ++offset)
    yield return from + count - offset - 1;
}

public IEnumerable<int> IotaAndBack(int from, int count)
{
  foreach (var x in Iota(from, count))
     yield return x;

  foreach (var x in AndBack(from, count))
     yield return x;
}

As you can see, a little ‘foreach harness’ is needed to compose the parts into the outer function. Of course, in a simple case like this, the LINQ version Iota(from, count).Concat(AndBack(from, count)) also works. But that only works when the outer function is sufficiently simple.

WPF Redux Sample Application

A while ago, I wrote about how we are using the redux architecture in our C# applications. I have just pushed an example showing ReduxSimple with WPF and our extensions in a .NET 5 application to our GitHub account. The example itself is just a counter with an increment and a decrement button, but it already shows the whole redux cycle.

The store setup in App.xaml.cs shows how the ReducerBuilder can be used to build a State reducer from the Reducer class via reflection.

I also added a small prime-number factorization to show how to use ‘expensive’ functions in the view part of the application using our SelectorGraph. This makes it possible to properly derive view data from the state, updating it only when one of its inputs changes. In the example, that input is the counter. So the number will only be factorized when the counter changes, while all other future state changes will not re-run the selector.

The example does not use the UIDuplexBinder yet. It allows read/write binding of WPF controls to an IObservable and an action-creator, and is hopefully pretty straight-forward to use. Please enjoy!

Chopping up big tasks

As a programmer, you have probably dealt with a task that seemed simple enough in the beginning but just kept going and going and going. I have been chewing on such a task for the better part of the last three weeks and finally closed it today. A really long time, considering I usually complete my tasks in less than a day, up to three days for especially long ones.

I know that many programmers can get lost with big tasks like this. I am certainly no exception. Analysis paralysis and decision fatigue can easily get the best of you, considering the mountain of work still ahead.

But I have a few ways to deal with such situations. Of course, your mileage may vary. But I am sure that without them, this specific issue would have taken me even longer. It boils down to one rule:

Focus on the essentials only.

This obviously relates to yak shaving: sometimes you need to do something else first before you can complete your task. This is recursive and can quickly eat up lots of time. The side work may ultimately be required, but for the moment it distracts from the original task. While you complete a side task, the main task does not advance, which leads to a feeling of being stuck, to technical problems (like merge conflicts in long-running branches) and to psychological problems (like decision fatigue).

So what can you do about this? My advice is to rigorously cut off side tasks by temporarily taking on technical debt. I annotate my code with HACK, TODO and FIXME comments to mark all the isolated spots I still need to change for the 100% version. The end feature (= user story) does not have to be completed by the end of my task, but I should be reasonably confident that the main work is done. Anything to that end will work.
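For illustration, these annotations are usually just one-liners like the following (the specifics are made up):

// HACK: always using the default configuration for now
// TODO: this only handles the happy path, add error handling
// FIXME: duplicates the parsing logic from the importer, extract it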

Some required changes immediately appear to be too extensive for such a small annotation. In that case, I will usually create a new follow-up issue in our issue-tracker and mark the code with a link to it.

After completing the main work in this way, but before I merge my code or close my original work-item/issue, I make another pass over all the HACK, TODO and FIXMEs I generated. The smaller ones I fix right away. Anything where the way to complete them is not super obvious gets converted into an issue in the issue tracker, and cross-linked from the code. This means I add a comment referencing the issue from the code and I make sure that the issue says that it is marked in the code. E.g., for this specific task, I now have 6 open follow-up issues.

After that, I usually merge the code into the main branch. If it would break something, or be misleading while the follow-up issues are still open, the feature can sometimes be disabled with a feature toggle. Alternatively, the follow-up tasks can be completed in their own branches and merged back onto the main task’s branch, which is then merged once everything is done. This hugely depends on your product cycle, of course.

Do you have any clever methods to handle bigger tasks?

Using a C++ service from C# with delegates and PInvoke

Imagine you want to use a C++ service contained in a .dll file from a C# host application. In my case, I was using a C++ service performing some hardware orchestration from a C# WPF application that provided the UI. This service pushes events back to the UI at undetermined intervals. Let’s write a small C++ service like that real quick:

#include <chrono>   // for std::chrono::seconds
#include <string>
#include <thread>

using StringAction = void(__stdcall*)(char const*);

void Report(StringAction onMessage)
{
  for (int i = 0; i < 10; ++i)
  {
    onMessage(std::to_string(i).c_str());
    std::this_thread::sleep_for(std::chrono::seconds(1));
  }
}

static std::thread thread;

extern "C"
{
  __declspec(dllexport) void __stdcall Start(StringAction onMessage)
  {
    thread = std::thread([onMessage] {Report(onMessage);});
  }

  __declspec(dllexport) void __stdcall Join()
  {
    thread.join();
  }
}

Compile & link this as a .dll that we’ll call Library.dll for now. Catchy, no?

Now we write a small helper class in C# to access our nice service:

using System;
using System.Runtime.InteropServices;

class LibraryLoader
{
  public delegate void StringAction(string message);

  [DllImport("Library.dll", CallingConvention = CallingConvention.StdCall)]
  private static extern void Start(StringAction onMessage);

  [DllImport("Library.dll", CallingConvention = CallingConvention.StdCall)]
  public static extern void Join();

  public static void StartWithAction(Action<string> action)
  {
    Start(x => action(x));
  }
}

Now we can use our service from C#:

LibraryLoader.StartWithAction(x => Console.WriteLine(x));
// Do other things while we wait for the service to do its thing...
LibraryLoader.Join();

If this does not work for you, make sure the C# application can find the C++ Library.dll, as Visual Studio does not help you with this. The easiest way to do this is to copy the .dll into the same folder as the C# application files. When you’re starting from VS 2019, that is likely something like bin\Debug\net5.0. You could also adapt the PATH environment variable to include the target directory of your Library.dll.

If you’re getting a BadImageFormatException, make sure the C# application is compiled for the same platform target as the C++ library. By default, VS builds C++ for “x86”, while it builds C# projects for “Any CPU”. You can change this to x86 in the C# project settings under Build/Platform target.

Now if this is all you’re doing, the application will probably work fine and report its mysterious number sequence flawlessly. But if you do other things, e.g. something that triggers garbage collection, like this:

LibraryLoader.StartWithAction(x => Console.WriteLine(x));
Thread.Sleep(2000);
GC.Collect();
LibraryLoader.Join();

The application will crash with a very ominous ExecutionEngineException after 2 seconds. In a more realistic environment, e.g. my WPF application, this happened seemingly at random.

Now why is this? The Action<string> we registered to print to the console gets garbage collected, because there is nothing in the managed environment keeping it alive. It exists only as a dependency of the function pointer in C++ land. When C++ wants to message something, it calls into nirvana. Not good. So let’s just store it to keep it alive:

static StringAction messageDelegate; // the strong reference prevents garbage collection
public static void StartWithAction(Action<string> action)
{
  messageDelegate = x => action(x);
  Start(messageDelegate);
}

Now the delegate is kept alive in the static variable, thereby matching the lifetime of the C++ equivalent, and the crash is gone. And there you have it, long-lasting callbacks from C++ to C#.

A very strange bug

A week ago, one of our junior programmers encountered a strange bug in his WPF application. This particular application has a main window with pages, i.e. views, that can be switched between, e.g. via the main menu. The first page, however, is the login page. And while on the login page, the main menu should be disabled, so users cannot go where they are not authorized to go.

And this worked fine. A simple boolean in the main window’s view-model was used to disable the menu when on the login page, and to enable it otherwise. We have a couple of applications that behave this way, and there were enough examples to get this to work.

Now the programmer introduced a new feature: when the application is started for the first time, there should be a configuration page right after the login page. During the configuration, the main menu should still be disabled. When the user hits the save button on the configuration page, the configuration should be stored and they should get to the dashboard with an enabled main menu.

New Feature, new Bug

Of course, this required changing the condition for when the main menu is disabled: When on either of the two pages, keep it disabled. But now the very strange bug appeared. When going to the dashboard from the configuration page, the main menu was correctly enabled, but all of its menu entries were still disabled. And this only happened when opening the main menu for the first time. When closing and opening it again, all menu entries were correctly enabled.

Now a lot of hands-on debugging ensued. The junior developer used all the tools at his disposal: web searches, debug output, consulting other senior developers. There were plenty of leads, too. Could it be a broken INotifyPropertyChanged implementation? Was ICommand.CanExecute not returning the correct value? Could we attach our own CanExecute handlers to the associated CommandBindings to at least get around the issue? Did we manually have to trigger a refresh of the enabled state?

Nothing worked, and no new information was gained. Even after fiddling around with the problem for a few days, there was no solution, no new insight to be found, not even a workaround. All our code seemed to be working alright.

From good to bad

One of my debugging mantras, that always helped me with the nastiest of bugs, is:

Work from a good, bug-free scenario to the bad, buggy scenario. Use small increments and bisection to find the step that breaks it.

In this situation, we were lucky. We had a good, working scenario in the same application: starting the application without the “first time configuration” worked nicely. So what was the difference? From the login page, the user also hits a button to change to the dashboard page.

The only difference was that the configuration was not stored in between. So we commented that out. Finally! Progress! We could not believe it: commenting out the “store the configuration” code made our menu items work. Time to dig deeper. The store-the-configuration code was using a helper dialog called TaskDialog that awaits a given Task while showing an “in progress” animation. Our industrious junior developer had thought that would be a good fit for storing the configuration data using File.WriteAllTextAsync. Further bisection revealed that it was not actually the “save” Task that was causing the problem, but our TaskDialog: even with the awaited save operation removed, the TaskDialog alone still broke our MainWindow’s main menu.

This was surprising, since the TaskDialog had been in production, seemingly working fine, for quite some time. Yet all our clues hinted at it being the culprit. In its implementation, it runs the given Task directly in its async “Loaded” event handler, and once the Task is done, it sets the DialogResult to true.
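In essence, that implementation amounts to something like this sketch (the names and details are assumed, not the actual TaskDialog code):

// Inside the TaskDialog's code-behind:
private readonly Task task;

private async void OnLoaded(object sender, RoutedEventArgs e)
{
    // If the task is already completed, this continues synchronously...
    await task;
    // ...and closes the dialog while it is still in the process of opening.
    DialogResult = true;
}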

So we hypothesized that it is probably not a good idea to close a dialog while it is still in the process of opening. The configuration-saving task was probably very fast and never yielded, so only this case showed the strange behavior, while all our previous use cases were “slow enough” and yielded at least once.

Hence we tried a small modification: we delayed the execution of our Task and the subsequent DialogResult = true; slightly, to the next “event frame”, using Application.Current.Dispatcher.InvokeAsync. And that did the trick! The main menu items were finally correctly enabled after leaving the configuration page.
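Sketched on the same assumed implementation as above, the fix looks roughly like this:

private void OnLoaded(object sender, RoutedEventArgs e)
{
    // Defer running the task and setting the DialogResult to the next
    // dispatcher frame, so the dialog is never closed while still opening.
    Application.Current.Dispatcher.InvokeAsync(async () =>
    {
        await task;
        DialogResult = true;
    });
}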

And this is how we solved this very weird bug, where the trigger did not appear to relate to the symptom at all. There is probably still a bug causing this weird behavior somewhere in WPF, but at least we are no longer triggering it with our TaskDialog. Remember: start from the good case, iterate and bisect!

Redux architecture with WPF/C#

For me, the redux architecture has been a game changer in how I write UI programs. All the common problems surrounding observability, which is so important for good UX, are neatly solved without signal spaghetti or having to trap the user in modal dialogs.

For the past two years, we have been working on writing a whole suite of applications in C# and WPF, and most programs in that suite now use a redux-style architecture. We had to overcome a few problems adapting the architecture and our coding style to the platform, but I think it was well worth it.

We opted to use Odonno’s ReduxSimple to organize our state. It’s a nice little library, but it alone does not enable you to write UI apps just yet.

Unidirectional UI in a stateful world

WPF, like most desktop UI toolkits, is a stateful framework. The preferred way to supply it with data is via two-way data binding and custom view-model objects. In order to make WPF suitable for unidirectional UI, you need something like a “controlled mode” for the WPF controls. In that mode, data coming from the application is just displayed and not modified without a round-trip through the application state. This is directly opposing conventional data-binding, which tries to hide the direction of the data-flow.

In other words: we need WPF to call a function when the user changes a value in an input control, but not when we are updating the value from our application state. Since we have control over when we are writing to the components, we added a simple “filter” that intercepts our change event handlers in that case. After some evolution of these concepts, we now have this neatly abstracted in a couple of tool functions like this:

public UIDuplexBinder BindInput(TextBox textBox, IObservable<string> observable, Func<string, object> actionCreator)
{
  // ...
}

This updates the TextBox whenever new values come in on the IObservable, and when the value is changed by anything other than that observable, it calls the given action creator and dispatches the resulting action to the store. We have such helper functions for most of our input controls, and similar functions for passive elements like TextBlocks and for showing/hiding things.
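A minimal sketch of such a binder might look like this, assuming an Rx-style Subscribe(Action<T>) extension and a store field to dispatch to (our real version also handles unsubscription and returns the UIDuplexBinder):

public void BindInput(TextBox textBox, IObservable<string> observable, Func<string, object> actionCreator)
{
    var updatingFromState = false;

    observable.Subscribe(newValue =>
    {
        if (textBox.Text == newValue)
            return;
        updatingFromState = true;  // the 'filter': suppress dispatch for programmatic writes
        textBox.Text = newValue;   // raises TextChanged synchronously
        updatingFromState = false;
    });

    textBox.TextChanged += (sender, args) =>
    {
        if (!updatingFromState)
            store.Dispatch(actionCreator(textBox.Text));  // round-trip through the state
    };
}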

Since this is relatively straightforward code, we are skipping MVVM and doing this binding directly in the code-behind. When our binder functions are not sufficient, we sometimes do more complex updating in view-models.

Immutable data

In a Redux-style architecture, observability comes from lightweight diffing, which in turn comes from immutable data updates in your reducers.

System.Collections.Immutable is great for updating the collections in your reducers in a non-mutating way. But its collections’ Equals implementations do not behave value-like, which is what is needed here. So in the types that use collections, we use an extension method called LazyEquals that combines Object.ReferenceEquals and Linq.Enumerable.SequenceEqual with a short-circuiting ||.
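Such an extension method could look roughly like this:

using System.Collections.Generic;
using System.Linq;

public static class LazyEqualsExtension
{
    // Cheap reference check first, element-wise comparison only as a fallback.
    public static bool LazyEquals<T>(this IEnumerable<T> left, IEnumerable<T> right)
        => ReferenceEquals(left, right)
           || (left != null && right != null && left.SequenceEqual(right));
}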

For the non-collection data, C# 9’s record types and with expressions are great. Before switching to .NET 5 earlier this year, we used a utility function from Converto, a companion library of ReduxSimple, that implements a .With via reflection and anonymous types. However, that function silently no-ops when you get a member name wrong in the anonymous type. So we had to write a lot of stupidly simple unit tests to make sure that no typos slipped through and that our code would survive ‘rename’ refactorings. The new with expressions offload this responsibility to the compiler, which works even better. Nothing wrong with lots of tests, of course.
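For example, with a made-up state record, a reducer update looks like this:

public record CounterState(int Counter, string UserName);

// Copies the record, replacing only the named member; a typo in
// 'Counter' would now be a compile error instead of a silent no-op.
CounterState Increment(CounterState state)
    => state with { Counter = state.Counter + 1 };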

Next steps

With all this, writing Redux style WPF programs has become a breeze. But one sore spot remains: We still have to supply custom Equals implementations whenever our State types contain a collection. Even when they do not, the generated Equals for records does not early-out via a ReferenceEquals, which can make a Redux-style architecture slower.

This is error-prone and cumbersome, so we are currently debating whether this warrants changing C#’s defaults via something like Undefault.NET, so that the generated Equals for records all do value-like comparison with ReferenceEquals early-outs. Of course, doing something like that is firmly in the danger zone, but maybe the benefits outweigh the risks in this case? It would surely eliminate lots of custom Equals implementations, for the price of a subtle, yet somewhat intuitive, behavior change. What do you think?

The function that never ended

One of the unwritten laws of procedural programming is that any function you call will, at some point, end. In the presence of exceptions, this does not mean that the function will return gracefully, but it will end nonetheless.

When this does not happen, something very strange is afoot. For example, C’s exit() function ends a program right then and there. But this was a WPF application in C#, using only official libraries, so surely there was nothing like that involved. And all I was doing was trying to dispose of my SignalR connection on program shutdown.

I had registered a delegate for “Exit” in my App.xaml’s <Application>. The SignalR client only implements IAsyncDisposable, so I made that shutdown function asynchronous using the async keyword. I awaited the client’s DisposeAsync, and the program just stopped right there, never reaching any of the cleanup code after it. No exception was thrown either. Very weird.

Trying to step into the function with a debugger, I learned that the program exited when the SignalR client’s DisposeAsync was awaiting something itself. Just exited normally with exit code 0.

At that point, it became painfully obvious what was happening: async functions do not behave as predictably as normal functions. Whenever they await something, their “tail” is in fact posted to a dispatcher, which resumes the function at that point once the awaited Task has completed. But since I was already exiting my application, the Dispatcher was no longer executing newcomers like the remainder of my shutdown sequence.

To fix this, I reversed the order: when a user triggers an application exit, I first clean up my client and only then trigger the actual application exit.
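In code, the reversed order looks roughly like this (the handler name and the connection field are assumptions for this sketch):

private async void OnExitRequested(object sender, EventArgs e)
{
    // Clean up first, while the dispatcher is still processing continuations...
    await connection.DisposeAsync();

    // ...and only then trigger the actual application exit.
    Application.Current.Shutdown();
}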

Who watches the FileSystemWatcher?

One of the ways we sometimes implement communication with legacy systems is via the file-system. The legacy system will write files about some events into a predefined directory, and the other system watches this directory for changes. In C#/.NET, the handy FileSystemWatcher class is a great tool for that.

We had a working solution using that in production for the last two years. Then it suddenly stopped working. There was no apparent change in the related code, so I suspect our upgrade from .NET Core 2.1 to .NET 5.0 triggered the change in behavior. The code looked something like this:

public void BeginWatching()
{
    var filter = NotifyFilters.LastAccess | NotifyFilters.LastWrite
        | NotifyFilters.FileName | NotifyFilters.DirectoryName;

    var watcher = new FileSystemWatcher
    {
        Path = directory,
        NotifyFilter = filter,
        Filter = "*.txt",
        EnableRaisingEvents = true,
    };
    /* Hook up some events here... */
}

And after some debugging, it turned out that none of the attached event handlers were firing anymore. The solution was to not let go of the watcher instance, and instead keep it around in the enclosing class:

this.watcher = new FileSystemWatcher

This makes sense, of course. Before, the watcher was only a local variable, so it could be collected by the garbage collector at any moment. That, in turn, shut the watcher down, so no more events were emitted.
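Sketched out in full, the fix just promotes the watcher to a member of a sufficiently long-lived object:

private FileSystemWatcher watcher;  // the field keeps the watcher reachable

public void BeginWatching()
{
    var filter = NotifyFilters.LastAccess | NotifyFilters.LastWrite
        | NotifyFilters.FileName | NotifyFilters.DirectoryName;

    this.watcher = new FileSystemWatcher
    {
        Path = directory,
        NotifyFilter = filter,
        Filter = "*.txt",
        EnableRaisingEvents = true,
    };
    /* Hook up some events here... */
}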

Interestingly, a few days later, my student asked about his FileSystemWatcher also no longer working. I immediately suspected the same problem, but when we looked at the code, he had already moved the watcher into a property of the enclosing class. It turned out that for him, the problem was just one level up: the enclosing class itself only lived in a local variable, and the contained watcher stopped working once that went out of scope.

Now the only question is: why did we never observe this before? Either something in the GC changed, or something in the implementation of the watcher changed. Can anyone enlighten the situation?