# React for the algebra enthusiast – Part 1

When I learned to use the React framework, I always had the feeling that it is designed in a very mathy way. Since some quick googling did not tell me whether this was a consideration in its design, I thought it might be worth sharing my thoughts on it. I am sure others have made the same observations, but they might help algebraists understand React faster and mathy computer scientists remember some algebra.

## Free monoids

In abstract algebra, a monoid is a set M together with a binary operation “$\cdot$” satisfying these two laws:

• There is a neutral element “$e$”, such that $\forall x \in M: x \cdot e = e \cdot x = x$
• The operation is associative, i.e. $\forall x,y,z \in M: x \cdot (y\cdot z) = (x\cdot y) \cdot z$

Here are some examples:

• Any set with exactly one element together with the unique choice of operation on it.
• The natural numbers $\mathbb{N}=\{0,1,2,\dots \}$ with addition.
• The one-based natural numbers $\mathbb{N}_1=\{1,2,3,\dots\}$ with multiplication.
• The integers $\mathbb Z$ with addition.
• For any set M, the set of maps from M to M is a monoid with composition of maps.
• For any set A, we can construct the set List(A), consisting of all finite lists of elements of A. List(A) is a monoid with concatenation of lists. We will denote lists like this: $[1,2,3,\dots]$

Monoids of the form List(A) are called free. By “of the form” I mean that the elements of the set can be renamed so that both the sets and the operations agree (in other words, the monoids are isomorphic). For example, the monoid $\mathbb{N}$ with addition and List({1}) are of the same form, as witnessed by the following renaming scheme:

$0 \mapsto []$

$1 \mapsto [1]$

$2 \mapsto [1,1]$

$3 \mapsto [1,1,1]$

$\dots$

— so addition of natural numbers and concatenation of lists are the same operation under this identification.

With the exception of $\mathbb{N}_1$, the integers and the monoid of maps on a set, all of the examples above are free monoids. There is also a nice abstract definition of “free”, but for our purpose of describing a special kind of monoid it is good enough to say that a monoid M is free if there is a set A such that M is of the form List(A).

## Action monoids

A React app (and by that I really mean a React+Redux app) has a set of actions. An action always has a type, which is usually a string, and a possibly empty list of arguments.

Let us stick to a simple app for now, where each action has just a type and nothing else. And let us further assume that actions can appear in arbitrary sequences, i.e. any action can be fired in any state. The latter simplification will keep us clear of more advanced algebra for now.

For such an app, the sequences of actions form a free monoid. Let us look at a simple example: suppose our app is a counter which starts at “0” and has an increment (I) and a decrement (D) action. Then the sequences of actions can be represented by strings like

ID, IIDID, DDD, IDI, …

which form a free monoid under juxtaposition (concatenation) of strings. In Redux terms, such a string is just a sequence of dispatched actions, as the sketch below illustrates. I have to admit that so far this is not very helpful for a practitioner – but I am pretty sure the next step, described in the following section, has at least some potential to help in a complicated situation.
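
To make the Redux connection concrete, here is a minimal sketch of such a counter reducer (just an illustration, not tied to any particular code base):

const initialState = 0;

// Each action carries only a type: "I" increments, "D" decrements.
function counter(state = initialState, action) {
  switch (action.type) {
    case "I":
      return state + 1;
    case "D":
      return state - 1;
    default:
      return state;
  }
}

Dispatching the sequence “IIDID” then just means calling this reducer with the actions one after the other; the free monoid described above is exactly the set of such dispatch sequences.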

## Quotients

Quotients of sets by an equivalence relation are a very basic tool of modern math. For a monoid, it is not clear if a quotient of its underlying set will still be a monoid with the “same” operations.

Let us look at an example, where everything goes well. In the example from above, the counter should show the same integer if we decrement and then increment (or the other way around). So we could say that the two action sequences

• ID and
• DI

really do nothing and should be considered equivalent to the empty action sequence. So let us say that any sequence of actions is equivalent to the same sequence with any occurrence of “DI” or “ID” deleted. For example, we get:

IIDIIDD $\sim$ I

With this rule, we can reduce any sequence to an equivalent one that is a sequence of Is, a sequence of Ds or empty. So the quotient monoid can be identified with the integers (in two different ways, but that’s ok) and addition corresponds to juxtaposition of action sequences.
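
As a small illustration, the quotient map can be computed by counting increments and decrements – the equivalence class of a sequence is determined by a single integer (a sketch):

// Maps an action sequence like "IIDIIDD" to the integer #I - #D
// that represents its equivalence class.
function reduceSequence(sequence) {
  let value = 0;
  for (const action of sequence) {
    if (action === "I") value += 1;
    if (action === "D") value -= 1;
  }
  return value; // reduceSequence("IIDIIDD") === 1, matching IIDIIDD ~ I
}

Two sequences are equivalent exactly when this function yields the same integer, which is another way of saying that the quotient monoid is the integers with addition.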

The point of this example, and the moral of this post, is that we can take a syntactic description (the monoid of action sequences), which is easy to derive from the source code, and pass to a quotient of the action monoid by a reasonable relation to arrive at an algebraic structure that has a lot to do with the semantics of the app.

So the question remains whether this just happens to work for one example or whether we have a general recipe.

Here is a problem in the general situation: let $x,y,z\in M$ be elements of a monoid $M$ with operation “$\cdot$”, and let $\sim$ be an equivalence relation such that $x$ is identified with $y$. Then, denoting equivalence classes with $[\_]$, it is not clear whether $[x] \cdot [z]$ should be defined to be $[x\cdot z]$ or $[y\cdot z]$.

Fortunately, problems like that disappear for free monoids like our action monoid, if the equivalence relation is constructed in a specific way. As you can read on Wikipedia, it is always okay to take the equivalence relation generated by the same kind of identifications we made above: pick some pairs of sequences which are known to “do the same thing” from a semantic point of view (like “ID” and “DI” doing the same as the empty sequence) and declare two sequences to be equivalent if one arises from the other by replacing subsequences known to be the same.

So the approach is exactly that general: it works for apps where actions have no parameters and can be fired in any order, and for equivalence relations generated by declaring finitely many pairs of action sequences to do the same thing. The “any order” part is a real restriction, but this post also has a “Part 1” in the title…

# Crashes when returning references to vector elements

Recently, I was experiencing a strange crash that I traced to a piece of C++ code looking more or less like this:

#include <vector>

template <class T>
class container
{
public:
    std::vector<T> values_;
    T default_;

    T const& get() const
    {
        if (values_.empty())
            return default_;
        return values_.front();
    }
};


This was crashing when calling get(), with a non-empty values_ member. It looks fairly innocent, and it had been running in production for a couple of years already. So what changed?

I had, in fact, never instantiated this template with T = bool before. And that was causing the crash, while still compiling without any errors. Now, if you are a little versed in the C++ standard library you might know that std::vector<bool> is a special snowflake indeed. In an effort to save space (and, I suspect, to prove the usefulness of template specializations), it is not really a “normal” container holding bool values. Instead, it holds some kind of integers and packs each pseudo-bool into one of their bits. The consequence is that accessor functions like operator[], front() and back() cannot return a reference to a bool. Instead, they return a “proxy” object that supports assignment to and from a bool.
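
A quick way to see this (a small illustration, assuming C++17) is to check the type that front() actually returns:

#include <type_traits>
#include <utility>
#include <vector>

// For std::vector<bool>, front() yields a proxy class, not a bool&.
static_assert(
    !std::is_same_v<decltype(std::declval<std::vector<bool>&>().front()), bool&>,
    "std::vector<bool>::front() does not return bool&");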

Back to the get() function: it tries to return a reference to a bool. Of course, that bool doesn’t really exist except as a temporary, and so this results in a dangling reference that causes a segmentation fault when used.

I suspect there could have been a warning about a dangling reference somewhere there. I have seen clang-tidy report things like this in particular (with a few false positives, too), but it did not show up for me. To fix it, I am now simply returning a bool by value instead of a bool const& for T = bool – a special case on my side to work around a special case in std::vector.
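
One way to express this fix (a sketch, assuming C++17) is to choose the return type depending on T:

#include <type_traits>
#include <vector>

template <class T>
class container
{
public:
    std::vector<T> values_;
    T default_;

    // Return by value for bool (where front() yields a proxy object),
    // by const reference for every other T.
    using return_type =
        std::conditional_t<std::is_same_v<T, bool>, T, T const&>;

    return_type get() const
    {
        if (values_.empty())
            return default_;
        return values_.front();
    }
};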

# Contiguous date ranges in Oracle SQL

In one of my last posts from a couple of weeks ago I wrote about querying gaps between non-contiguous date ranges in Oracle SQL. This week’s post is about contiguous date ranges.

While non-contiguous date ranges are best represented in a database table with a start_date and an end_date column, contiguous date ranges are better represented with only one date column, so that we avoid redundancy and do not have to keep the start date of a date range in sync with the end date of the previous range. In this post I will use the start date:

CREATE TABLE date_ranges (
  name       VARCHAR2(100),
  start_date DATE
);


The example content of the table is:

NAME	START_DATE
----	----------
A	05/02/2020
B	02/04/2020
C	16/04/2020
D	01/06/2020
E	21/06/2020
F	02/07/2020
G	05/08/2020


This representation means that the date range with the most recent start date does not have an end. The application using this data model can choose whether to interpret this as a date range with an open end or just as the end point for the previous range and not as a date range by itself.

While this is a nice non-redundant representation, it is less convenient for queries where we want to have both a start and an end date per row, for example in order to check whether a given date lies within a date range or not. Luckily, we can transform the ranges with a query:

SELECT
  date_ranges.*,
  LEAD(date_ranges.start_date)
    OVER (ORDER BY start_date) AS end_date
FROM date_ranges;


As in the previous post on non-contiguous date ranges, the LEAD analytic function allows you to access the following row from the current row without using a self-join. Here’s the result:

NAME	START_DATE	END_DATE
----	----------	--------
A	05/02/2020	02/04/2020
B	02/04/2020	16/04/2020
C	16/04/2020	01/06/2020
D	01/06/2020	21/06/2020
E	21/06/2020	02/07/2020
F	02/07/2020	05/08/2020
G	05/08/2020	(null)


By using a WITH clause you can use this query like a view and join it with another table, for example with the join condition that a date lies within a date range:

WITH ranges AS (
  SELECT
    date_ranges.*,
    LEAD(date_ranges.start_date) OVER (ORDER BY start_date) AS end_date
  FROM date_ranges
)
SELECT timeseries.*, ranges.name
FROM timeseries
LEFT OUTER JOIN ranges
  ON timeseries.measurement_date
     BETWEEN ranges.start_date AND ranges.end_date;


# Co-Variant methods on C# collections

C# offers a powerful API for working with collections, and LINQ in particular offers lots of functional goodies to work with them. Among them is the Concat()-method, which lets you concatenate two IEnumerables.

We recently had the use-case of concatenating two collections with elements of a common super-type:

class Animal {}
class Cat : Animal {}
class Dog : Animal {}

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
    // XXX: This does not compile because Concat() requires both
    //      sequences to have the same element type!
    return cats.Concat(dogs);
}


The above example does not compile because Concat() requires both sequences to have the same element type and returns a combined sequence of that type. If we do not care about the specifics of the subclasses, we can build a Concatenate()-method ourselves which makes the whole thing possible, because instances of both subclasses can be put into a collection of their common parent class.

private static IEnumerable<TResult> Concatenate<TResult, TFirst, TSecond>(
    this IEnumerable<TFirst> first,
    IEnumerable<TSecond> second)
    where TFirst : TResult
    where TSecond : TResult
{
    IList<TResult> result = new List<TResult>();
    foreach (var f in first)
    {
        result.Add(f);
    }
    foreach (var s in second)
    {
        result.Add(s);
    }
    return result;
}


The above method is a bit clunky to call but works as intended:

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
    // Works great!
    return cats.Concatenate<Animal, Cat, Dog>(dogs);
}


A variant of the above Concatenate()-method can be useful if you use a collection of the parent class to collect instances from subclass collections:

private static IEnumerable<TResult> Concatenate<TResult, TIn>(
    this IEnumerable<TResult> first,
    IEnumerable<TIn> second)
    where TIn : TResult
{
    IList<TResult> result = first.ToList();
    foreach (var item in second)
    {
        result.Add(item);
    }
    return result;
}

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
    IEnumerable<Animal> result = new List<Animal>();
    result = result.Concatenate(cats);
    return result.Concatenate(dogs);
}


Maybe the above examples can serve as an inspiration for more utility methods that may improve working with collections in C#.

# Make yourself comfortable

If you have to add a new feature to an existing code base, you’ve likely already experienced an uncomfortable truth: nobody has thought about your use case. Nothing in the existing code base fits your goals. This is not because everybody wanted you to fail, but because your new feature is in fact brand new to the software, and responsible software developers stop working on code as soon as their work is done (while obeying the project’s definition of done).

So you try to shoehorn your functionality into the existing code. It’s not neat, but you get it working. Are you done? In my opinion, you haven’t even started yet. Your first attempt to combine your idea of the best implementation of your functionality with the existing system will be cumbersome and painful. Use it as a learning experience about how the system behaves and throw it away once you have a working prototype. Yes, you’ve read that right. Throw it away, as in undo your commits or changes. This was the learning/exploration phase of your implementation. You’ve applied your idea to reality. It didn’t work well. Now is the time to apply reality to your idea. Commence your second attempt.

For your second attempt, you should make use of your refactoring skills on the existing code. Bend it to your anticipated and tried needs. And once the code base is ready, drop your new feature into the new code “socket”. Your work doesn’t need to be cumbersome and painful. Make yourself comfortable, then make it work.

Here is an example, based on a real case:

An existing system was in development for many years and worked with a lot of domain objects. One domain object was a price tag that looked something like this:

interface PriceTag {
    PriceCategory category();
    TaxGroup taxGroup();
    Euro nettoAmount();
    Product forProduct();
}

Well, it was a normal domain object giving back other normal domain objects. The new feature to be added was an audio module that could read price tags out loud. The team used a text-to-speech synthesizing library that takes a string and outputs an audio stream. No big deal, and pretty independent of the already existing code base.

But the code that takes a price tag and converts it into a string, aka the connection point between the unbound library code and the existing system, was ugly and undecipherable:

String priceTagToText(PriceTag price) {
    return price.forProduct().getDenotation()
        + " for only "
        + CurrencyFormatter.format(price.nettoAmount())
        + " with "
        + String.valueOf(price.taxGroup().percentage())
        + " % VAT in the "
        + price.category().getDenotation()
        + " section.";
}

This is how it looks if somebody tries to combine two building blocks that aren’t meant for each other. To test this method, you’ll have to mock deep into the domain objects.

If two building blocks don’t match naturally, maybe it is a good idea to add some lubrication code between them. This code isn’t exactly doing anything newfound, but adds a requirement seam that points towards the existing system:

interface ReadablePriceTag {
    String denotation();
    String netto();
    String vatPercentage();
    String category();
}

You can probably already see where this is heading. Just in case you cannot, I will take you through all parts of the code.

First we can write a priceTagToText() method that reads a lot nicer:

String priceTagToText(ReadablePriceTag price) {
    return price.denotation()
        + " for only "
        + price.netto()
        + " with "
        + price.vatPercentage()
        + " VAT in the "
        + price.category()
        + " section.";
}

The second and complementary part is the implementation of the ReadablePriceTag interface that is given a PriceTag object and translates the data for the new methods:

class PriceTagBasedReadablePriceTag implements ReadablePriceTag {
    private final PriceTag price;

    PriceTagBasedReadablePriceTag(PriceTag price) {
        this.price = price;
    }

    @Override
    public String denotation() {
        return this.price.forProduct().getDenotation();
    }

    @Override
    public String netto() {
        return CurrencyFormatter.format(this.price.nettoAmount());
    }

    @Override
    public String vatPercentage() {
        return String.valueOf(
            this.price.taxGroup().percentage()) + " %";
    }

    @Override
    public String category() {
        return this.price.category().getDenotation();
    }
}


Basically, you have a lot of existing code that is using PriceTag objects and some new code that wants to use ReadablePriceTag objects. The PriceTagBasedReadablePriceTag class is the connector between both worlds (at least in one direction). We can definitely argue about the name, but that’s a detail, not the main point. The main points of all this effort are two things:

1. The new code does not suffer in quality and readability from decisions made at a different time in a different context.
2. The code clearly models these contexts. If you are aware of Domain Driven Design, you probably see the “Bounded Context” border that crosses right between PriceTag and ReadablePriceTag. The PriceTagBasedReadablePriceTag class is the bridge across that border.

If you express your context borders explicitly like in this example, your code reads fine on either side of the border. There is no notion of “old and fitting” versus “new and awkward” code. It seems like additional work, and it surely is, but it pays off in the long run because you can play this game indefinitely. A code base that gets more muddied and forced over time will reach a breaking point after which any effort requires knowledge in archeology and cryptanalysis.

So, my advice boils down to one thing: Make yourself comfortable when adding new code to an existing code base.
And then, think about your type names. PriceTagBasedReadablePriceTag is most likely not the best name for it. But that’s a topic for another blog post. What would be your name for this class?

# What is it with Software Development and all the clues to manage things?

As someone who started programming a long time ago (roughly 20 years, now that I think about it), but only entered the world of real software development in recent years, I find that mastering the day-to-day challenges consists of two main topics: first, in our rapidly evolving field we never run out of new technologies to learn, and second, there is a certain underlying engineering aspect – how to do things in a certain manner – with lots of new input every year.

So after I recently shared some of these ideas with my friends (I do indeed still have a few of those), I wondered: how is it that in the modern software development world most of the information about managing things actually comes from the field itself, which then feeds its ideas about project management, quality, etc. back into the non-software subspaces of the world? (Ideas like the Agile movement, Software Craftsmanship, and the calls to do things Lean and Clean have prospered so much that you see their application or modification in several other industries – advertising, just as an example.)

I see a certain kind of brain food in this question. What sets software development apart from other current fields, so that there is such a broad discussion and so much input at its base level? After all, if you plan on becoming someone who builds houses, makes cars, or manages cities, you would not engage in such a vivid culture of “how” to do things, focusing rather on the “what”.

Of course, I might be mistaken in this view. But asking what actually sets software development apart from these other fields of producing something helps me approach everyday tasks and value better tips over worse ones.

So, what can that be?

1. Quite peculiar is the low entry threshold for being able to call yourself a “programmer”. With the lots of resources you get at relatively little cost (assuming you have a computer with a working internet connection), you have many channels by which you can learn the “what” of software development first and save the “how” for later. If you plan on building a house, there is no bazillion of books, tutorials, and videos, after all.

2. Similarly, there is the rather low cost of failure when drafting a quick hobby project. A piece of code that you write in your free time will not always tell you “hey, have you ever thought about a better kind of architecture?” – which is why bad habits can stick and even feel right. If you choose the “wrong” mindset, you do not always lose heaps of money, and neither do you automatically if you switch your strategy once in a while (you probably will, though, if you are too careless in this process).

3. Furthermore, there is the dynamic extension of how your project is going to be used (“scope creep”). One would build a skyscraper differently from a bungalow (I am not an expert, though), but with software it often feels like adding a simple feature here, extending the scope there, until you hit a point where all the interdependencies are in a complex state of conflict…

4. Then, it is a matter of transparency: if you sit in a badly designed car, it becomes rather obvious when it constantly exhausts clouds of black smoke, or when your house always smells like a freshly used toilet. Of course, a well designed piece of software will come with a great user experience, but as you can see in many commercial products, there is also quite a lot of lower-than-average-but-still-somehow-doing-what-it-should software. Probably users are more tolerant with software than with cars?

5. Also, as in most technical fields, “pure consultants” are not widely received in a positive light. Among most nerds, you do not get a lot of credibility if you talk about best practices without having gotten your hands dirty over a longer period of time. Ergo, it takes experienced software programmers to advise less experienced software programmers… though it is surely questionable whether this is a good thing.

6. After all, the requirements for someone who develops a project might be very different in each field. From my academic past in computational physics, I know that there is quite some demand for “quick & dirty” solutions. Need to add some dark matter to your model here? Well, plug this formula in and check the results. Not every user has the budget or the liberty for creating a solid program structure. If you want a new laboratory building, of course, you very much want it to be designed as well as it can be.

All in all, these observations somehow boil down to the question of whether software development is to be seen more as a set of various engineering skills, or rather as a craft, an art, or a complex program of study. It is the question of whether the “ace” in this field is the one who does complex arithmetic in their head, or the one who just gets what the customer wants. I like thinking about such peculiar modes of thought, as they help me understand what kinds of things I should learn next.

Or is there something completely else to it?

# Compiling Agda 2.6.2 on Fedora 32

Agda is a dependently typed functional programming language that I like very much. Its latest versions have some special features which are not supported by more widely known languages, for example higher inductive types. Since I want to use all the special features of Agda, I regularly compile the latest version. This is a procedure which, more often than not, comes with a few surprises, so this post is about saving you the time it took me to figure out what to do.

I like to compile Agda using the Haskell tool Stack, which can be installed with

curl -sSL https://get.haskellstack.org/ | sh

The sources of Agda have to be checked out with all submodules – otherwise there will be some weird “cabal” (another Haskell tool, more basic than Stack) errors which I was not able to understand. This can be done with:

git clone --recurse-submodules https://github.com/agda/agda.git

Now if you go to the Agda folder

cd agda

There should be files called “stack-8.8.4.yaml” and similar. Those files tell Stack how to build Agda. The number is the version of the Haskell compiler which is used to build Agda. I usually use the latest; you do not have to figure out which (if any) is installed on your system, since Stack will just download the appropriate compiler for you.

However, just running stack failed for me on Fedora 32 due to some linking problem at the end. It turned out that “libtinfo.so” was not found by the linker. “libtinfo.so.6” is available on Fedora 32, so adding a link to it fixed the problem:

sudo ln -s /usr/lib64/libtinfo.so.6 /usr/lib64/libtinfo.so

Now, you can tell Stack to get all necessary things and compile and install Agda with:

stack install --stack-yaml stack-8.8.4.yaml

This also installs a binary into “~/.local/bin” which is in PATH on Fedora 32 by default, so you should be able to call agda from the command line. Also, you can use “agda-mode” to configure emacs for agda.

# Unintentionally hidden application state

What do libraries like React and Dear ImGui or paradigms like data-oriented design and data-driven programming have in common?
One aspect is that they all rely on explicitly modeling an application’s state as data.

## What do I mean by hidden state?

Now you will probably think: of course, I do that too. What other way is there, really? Let me give you an example, from the widely used Qt library:

QMessageBox msgBox;
msgBox.setText("explicitly modeled data!");
msgBox.exec();


So this appears to be harmless, but is actually one of the most common cases where state is modeled implicitly. But we’re explicitly setting data there, you say. That is not the problem. The exec() call is the problem. It blocks until the message box is closed again, thereby implicitly modeling the “this message box is shown” state via this thread’s instruction pointer and its position on the call stack. The instruction pointer is now tied to this state: as long as it does not move out of that function, the message box is still shown.

This is usually not a big problem, but it demonstrates nicely how state can be hidden unintentionally in an application. And even a small thing like this already prevents you from doing certain things, for example easily serializing your complete UI state.
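
For contrast, here is a minimal sketch (not Qt API, just an illustration) of what explicitly modeled dialog state could look like:

#include <string>

// The "message box is shown" state lives in plain data instead of the call stack.
struct UiState {
    bool messageBoxVisible = false;
    std::string messageText;
};

// Called every frame or event-loop iteration; what is rendered is a function of the state.
void renderUi(UiState& state) {
    if (state.messageBoxVisible) {
        // draw the box with state.messageText and
        // set state.messageBoxVisible = false once the user dismisses it
    }
}

Because the state is ordinary data, it can be inspected, serialized or restored at any time – exactly what the blocking exec() call prevents.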

## Is it bad?

This cannot really be avoided while keeping some kind of conceptual separation between code and data, but it can certainly be minimized. So is it bad? Well, it is certainly good to know when you are using implicit state like that. And there is one criterion for when it should most certainly be minimized as much as possible: highly interactive programs, like user interfaces, games, machine control systems and AI agents.
These kinds of programs usually have some spooky and spontaneous interactions between different, seemingly unrelated objects, and weird transitions between states. And using the call stack and instruction pointer to model those states makes them particularly unsuitable for being interacted with.

## Alternatives?

So what can you use instead? The tried and tested alternative is always a state-machine. There are also related alternatives like behavior trees, which are actually quite similar to a “call stack in data” but much more flexible. Hybrid solutions that move only bits of the code into the realm of data are promises/futures and coroutines. Both effectively allow a “linear” function to be decoupled from its call-stack, and be treated more like data. And if their current popularity is any indicator, that is enough for many applications.

What do you think? Should hidden state like this always be avoided?

# Changing the keyboard navigation behaviour of form inputs

The default behaviour in HTML forms is that you can move the focus from one input element to the next via the tab key and submit the form via the enter key. This is also how dialogs work on most operating systems when using the native UI components. This behaviour is consistent across all browsers, and changing it messes with the user’s expectations and reduces accessibility. So I would normally advise against changing this behaviour without good reasons.

However, one of our customers wanted a different behaviour for an application developed by us. This application replaced an older application where the enter key did not submit the form, but moved the focus to the next input element. The ‘muscle memory’ effect made users accidentally submit the form by hitting the enter key, causing frustration. Since this application is not a public web site, but merely a web technology based intranet application with a small and specialized user base, changing the default behaviour is acceptable if the users want it.

So here’s how to do it. The following JavaScript function focusNextInputOnEnter takes a form element as a parameter and changes the focus behaviour on the input elements within this form.

function focusNextInputOnEnter(form) {
  var inputs = form.querySelectorAll('input, select, textarea');
  for (var i = 0; i < inputs.length; i++) {
    var input = inputs[i];
    input.addEventListener('keypress', (function(index) {
      return function(event) {
        if (!isEnter(event.which)) {
          return;
        }
        var nextIndex = index + 1;
        while (nextIndex < inputs.length) {
          var nextInput = inputs[nextIndex];
          if (nextInput.disabled) {
            nextIndex++;
            continue;
          }
          nextInput.focus();
          break;
        }
      };
    })(i));
  }

  function isEnter(keyCode) {
    return keyCode === 13;
  }
}


It works by handling the keypress events on the input elements and checking the key code for the enter key (code 13). It has an additional check so that disabled input elements are skipped.

To apply this change in behaviour to a form we have to call the function when the DOM content is loaded:

<form id="demo-form">
<input type="text">
<input type="text" disabled="disabled">
<input type="checkbox">
<select>
<option>A</option>
<option>B</option>
</select>
<textarea></textarea>
<input type="text">
<input type="text">
</form>

<script>
document.addEventListener('DOMContentLoaded', function() {
focusNextInputOnEnter(document.getElementById('demo-form'));
});
</script>


I want to reiterate my warning that you should definitely not do this for public web sites, and elsewhere only if you know that this is what your users want.

# Using (elastic)search with .NET Core

Many modern applications require powerful search mechanisms to become useful and make their users more productive. That is in large part due to the amount of data available to work with. Thankfully there are already powerful tools to index your data and make it searchable.

One of the most well-known state-of-the-art solutions is ElasticSearch, and it has an API for use from .NET called NEST. While the documentation is okay, I want to give a quick rundown on how to add search capabilities to your .NET Core application. Some ideas are borrowed from the great post “Using Elasticsearch with ASP.NET Core and Docker“.

## Getting ElasticSearch running

The easiest way to get up and running with Elasticsearch is to use their docker images and just run the container on your development machine. I like to use a docker compose file like the following to get Elasticsearch and its tooling application Kibana up and running fast:

version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elastic
    environment:
      - node.name=elastic
      - cluster.initial_master_nodes=elastic
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - type: bind
        source: ./esdata
        target: /usr/share/elasticsearch/data
    networks:
      - esnetwork
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    ports:
      - "5601:5601"
    networks:
      - esnetwork
    depends_on:
      - elasticsearch
volumes:
  esdata:
networks:
  esnetwork:
    driver: bridge


After you run it with docker compose you can talk to the search service on http://localhost:9200/ and the Kibana management GUI on http://localhost:5601/. In the Kibana UI, especially the Dev Tools and its console are interesting for experimenting with search queries.

## Access ElasticSearch from your .NET Core app

I find it quite elegant to write an extension method for the IServiceCollection to configure the ElasticClient and register it as a singleton with the dependency injection framework of .NET Core, like so:

public static class ElasticSearchExtension
{
    public static void AddElasticsearch(
        this IServiceCollection services, IConfiguration configuration)
    {
        var url = configuration["elasticsearch:url"];
        var settings = new ConnectionSettings(new Uri(url))
            .DefaultMappingFor<SearchableDevice>(deviceMapping => deviceMapping
                .IndexName("devices")
                .IdProperty(dev => dev.Id)
            );
        var client = new ElasticClient(settings);
        var response = client.Indices.Create("devices", creator => creator
            .Map<SearchableDevice>(device => device
                .AutoMap()
            )
        );
        // maybe check response to be safe...
        services.AddSingleton<IElasticClient>(client);
    }
}


The configuration block looks like the following:

  "ElasticSearch": {
"url": "http://localhost:9200/"
},


and allows for future extension.

Our search service has to be registered in Startup of course:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddElasticsearch(Configuration);
}


## Indexing our objects

In the above example we built a SearchableDevice class whose public properties are going to be indexed by ElasticSearch. The API allows for much more fine-grained control over what and how to index, but we want to keep things simple without having to worry about excluding properties and so on. If you have set it up that way, indexing a SearchableDevice is merely one simple call:

// SearchClient is the injected IElasticClient
// mySearchableDevice is an instance of SearchableDevice
SearchClient.IndexDocument(mySearchableDevice);


## Searching for objects

When developing a search query I like to try it in the Kibana Dev Tools and then transform it into a NEST call. A simple query that looks in all device properties for values starting with “needle” looks like this:

{
  "query": {
    "multi_match": {
      "type": "phrase_prefix",
      "fields": ["*"],
      "query": "needle"
    }
  }
}


The nice thing about the Kibana Dev Tools console is that it can display and complete the possible values for fields like “type” in your multi_match query.

That search query can then be translated into a NEST call in a straightforward way and looks like this:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);


The search response contains the hits with their source objects and some metadata like the score and result count.

Bear in mind that ElasticSearch only returns the first/best 10 matches by default, so specifying the result size often might be closer to what you want.
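
For example, the page size can be set directly on the search descriptor (a sketch based on the query above; the value 100 is arbitrary):

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .From(0)
    .Size(100) // return up to 100 hits instead of the default 10
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);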

## Wrapping it up

Getting started with ElasticSearch in .NET Core does not require too much boilerplate and setup work if you use tools like docker and the NEST library. Making it usable and tuning the indexing and querying may require a lot of work to achieve the best results. On the other hand, smaller applications can start off with a simple search setup like the one shown above and simply evolve it when the need arises.