Help me with the Spiderman Operator

From time to time, I encounter a silly syntax in Java that I silently dubbed the “spiderman operator” because of all the syntactic pointing that’s going on. My problem is that it’s not very readable, I don’t know an alternative syntax for it, and my programming style leads me to it more often than I am willing to ignore.

The spiderman operator looks like this:

x -> () -> x

In its raw form, it means that you have a function that takes x and returns a Supplier of x:

Function<X, Supplier<X>> rawForm = x -> () -> x;

That in itself is not very useful or mysterious, but it gets funnier once you take into account that Supplier<X> is just one possible type you can return – because in Java, as long as the signature fits, the thing sits.
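
As a quick illustration with nothing but standard library types (not part of the original example): the very same lambda also satisfies java.util.concurrent.Callable<X>, because its single abstract method happens to fit just as well.

Function<X, Supplier<X>> asSupplier = x -> () -> x;
Function<X, Callable<X>> asCallable = x -> () -> x; // same lambda, different target type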

A possible use case

Let’s define a type that is an interface with just one method:

public interface DomainValue {
    BigDecimal value();
}

In Java, the @FunctionalInterface annotation is not required for an interface to be, in fact, a functional interface. It only needs to have exactly one method without implementation. How can we still provide methods with implementations in Java interfaces? Default methods are the way:

@FunctionalInterface
public interface DomainValue {
    BigDecimal value();

    default String denotation() {
        return getClass().getSimpleName();
    }
}

Let’s say that we want to load domain values from a key-value-store with the following access method:

Optional<Double> loadEntry(String key)

If there is no entry with the given key, or the value is not suitable to be interpreted as a double, the method returns Optional.empty(). Otherwise it returns the double value wrapped in an Optional shell. We can convert it to our domain value like this:

Optional<DomainValue> myValue = 
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> () -> x);

And there it is, the spiderman operator. We convert from Double to BigDecimal and then to DomainValue by saying that we want to convert our BigDecimal to “something that can supply a BigDecimal”, which is exactly what our DomainValue can do.
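
For clarity, the last map() call is just shorthand for the anonymous-class version below – nothing new, only spelled out:

Optional<DomainValue> myValue =
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> new DomainValue() {
            @Override
            public BigDecimal value() {
                return x;
            }
        });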

A bigger use case

Right now, the DomainValue type is nothing more than a mantle around a numerical value. But we can expand our domain to have more specific types:

public interface Voltage extends DomainValue {
}
public interface Power extends DomainValue {
    @Override
    default String denotation() {
        return "Electric power";
    }
}

Boring!

public interface Current extends DomainValue {
    default Power with(Voltage voltage) {
	return () -> value().multiply(voltage.value());
    }
}

Ok, this is maybe no longer boring. We can implement a lot of domain functionality just in interfaces and then instantiate ad-hoc types:

Voltage europeanVoltage = () -> BigDecimal.valueOf(220);
Current powerSupply = () -> BigDecimal.valueOf(2);
Power usage = powerSupply.with(europeanVoltage);

Or we load the values from our key-value-store:

Optional<Voltage> maybeVoltage = 
    loadEntry("voltage")
        .map(BigDecimal::new)
        .map(x -> () -> x);

Optional<Current> maybeCurrent = 
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> () -> x);

You probably see it already: We have some duplicated code! The strange thing is, it won’t go away so easily.

The first call for help

But first I want to sanitize the code syntactically. The duplication is bad, but the spiderman operator is just unreadable.

If you have an idea how the syntax of the second map() call can be improved, please comment below! Just one request: make sure your idea compiles beforehand.

Failing to eliminate the duplication

There is nothing easier than eliminating the duplication above: The code is syntactically identical and only the string parameter is different – well, and the return type. We will see how this affects us.

What we cannot do:

<DV extends DomainValue> Optional<DV> loadFor(String entry) {
    Optional<BigDecimal> maybeValue = load(entry);
    return maybeValue.map(x -> () -> x);
}

Suddenly, the spiderman operator does not compile with the error message:

The target type of this expression must be a functional interface

I can see the problem: Subtypes of DomainValue are not required to stay compatible with the functional interface requirement (just one method without implementation).
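
To make that concrete, here is a perfectly legal (made-up) subtype that a generic DV extends DomainValue would have to accommodate, even though no lambda can ever implement it:

public interface Resistance extends DomainValue {
    BigDecimal tolerance(); // a second method without implementation – no longer a functional interface
}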

Interestingly, if we work with a wildcard for the generic, it compiles:

Optional<? extends DomainValue> loadFor(String entry) {
    Optional<BigDecimal> maybeValue = load(entry);
    return maybeValue.map(x -> () -> x);
}

The problem is that we still need to downcast to our specific subtype afterwards. But we can use this insight and move the downcast into the method:

<DV extends DomainValue> Optional<DV> loadFor(
	String entry,
	Class<DV> type
) {
	Optional<BigDecimal> maybeValue = load(entry);
	return maybeValue.map(x -> type.cast(x));
}

Which makes our code readable enough, but at the price of using reflection:

Optional<Voltage> european = loadFor("voltage", Voltage.class);
Optional<Current> powerSupply = loadFor("current", Current.class);

I’m not a fan of this solution, because downcasts are dangerous and reflection is dangerous, too. Mixing two dangerous things doesn’t neutralize the danger most of the time. This code will fail during runtime sooner or later, without any compiler warning us about it. If you don’t believe me, add a second method without implementation to the Current interface and see if the compiler warns you. Hint: This is what you will see at runtime:

java.lang.ClassCastException: Cannot cast java.math.BigDecimal to Current

But, to our surprise, it doesn’t even need a second method. The code above doesn’t work. Even if we reintroduce our spiderman operator (with an additional assignment to help the type inference), the cast won’t work:

<DV extends DomainValue> Optional<DV> loadFor(
    String entry,
    Class<DV> type
) {
    Optional<BigDecimal> maybeValue = load(entry);
    Optional<DomainValue> maybeDomainValue = maybeValue.map(x -> () -> x);
    return maybeDomainValue.map(x -> type.cast(x));
}

The ClassCastException just got a lot more mysterious:

java.lang.ClassCastException: Cannot cast Loader$$Lambda$8/0x00000008000028c0 to Current

My problem is that I am stuck. There is working code that uses the spiderman operator and produces code duplication, but there is no way around the duplication that I can think of. I can get objects for the supertype (DomainValue), but not for a specific subtype of it. If I want that, I have to accept duplication. Or am I missing something?

The second call for help

If you can think about a way to eliminate the duplication, please tell me (or us) in the comments. This problem doesn’t need to be solved for my peace of mind or the sanity of my code – the duplication is confined to a particular place.

Being used to roaming nearly without boundaries in Java syntax (25 years of thinking in Java will do that to you), I was hit hard by this particular limitation. If you can give me some ideas, I would be grateful.

Don’t just useCallback() with higher-order-functions

This is a small thing that once took me longer to debug than necessary, which is why it might be useful to some of you out there.

From time to time, we have that situation in a React application where it’s just not really avoidable that a small component has to accomplish a rather expensive computation. That’s what memoization is for, i.e. reusing the results of old computations when we know that these are still applicable.

React, in its functional approach, has three ways of memoizing things: for whole components there is React.memo(), while inside a component we have the hooks React.useMemo(), most commonly used for values or value-like objects, and React.useCallback() for functions. Because JavaScript is quite a functional language, there is a rough equivalence between the latter two – and that is what I want to look into here.

// rather trivial function – these are equal
React.useMemo(() => () => x, [x]);
React.useCallback(() => x, [x]);

// higher-order function – they are not!
React.useMemo(() => higherOrderFunction(x), [x]);
React.useCallback(higherOrderFunction(x), [x]);

There are various such higher-order functions available for developers to reuse existing logic. One such case is debouncing, i.e. when you expect state changes to sometimes come in very large batches, the most common case probably being an <input/> field whose value triggers a server request or something like that. Other common cases would be drag’n’drop interactions or window resizing.

With a useRef(), one can rather easily write such debouncing oneself (google it or ask in the comments), but there is lodash.debounce, which takes care of that with exactly such a higher-order function.

const MILLISEC = 500;

const Component = () => {
  const [value, setValue] = React.useState("");

  const handle = React.useMemo(() => debounce(event => { ... }, MILLISEC), []);

  return <input onChange={handle} value={value}/>;
};

Now I don’t want to talk about the specific case of debounce() (but one can look at the source code to guess what it is doing), this is just an example. Third-party logic is helpful for not reinventing the wheel, but you can’t be that sure about its computational costs, especially when some of your dependencies might update in the future – so this might be a good point to use memoization without actually seeing the benefit at the time of development. (*)

As Dmitri Pavlutin here states nicely for that specific case, you cannot just write useCallback(debounce(...), []) here in place of useMemo. It is rather trivial, but you need to take care: the JavaScript engine has no other option than to execute the debounce() on creation of the callback; it cannot know that this is something to be evaluated later.

Anything that is not an arrow function () => { ... } or an old-school function() { ... } will be evaluated when the corresponding line is reached. The syntax does not allow anything to be wrapped around it in order to delay that execution to the first call.

So. Debounce might not be the most expensive thing, and in general one might not even need memoization, but if you do – always remember that something has to be a function in order for any of that to work.

(*) This is not a call for premature optimization.

It cannot be stressed enough that one shouldn’t wrap every single computation into a memoization in either case. Sure, one should care about useless computations as stated above, but always know that the memo thing itself is not free. So when in doubt, think about how to quantify your specific gain, e.g. via the React DevTools Profiler, the performance API or at least logging of Date.now() timestamps.

Also, memoization should only ever be about performance. If there is any case of “my application actually behaves differently” when using useMemo / useCallback, that is a red flag – drop the thought of optimization instantly and care about your overall architecture first.

Writing Windows daemons in C++20

One little snippet I’ve found myself reusing surprisingly often is how to write a daemon program with graceful shutdown on Windows. To recap, a daemon is a program that sits and does ‘background work’ until it is explicitly shut down by the user. For my purposes, it is also a console program. Like this one:

#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>

using namespace std::chrono_literals;

int main(int argn, char** argv)
{
  while (true)
  {
    std::cout << "ping!" << std::endl;
    std::this_thread::sleep_for(100ms);
  }
  std::cout << "shutdown!" << std::endl;
  return EXIT_SUCCESS;
}

If you run this program, it will, of course, continuously print “ping!”. And you can kill it by entering ctrl+C on the console. But the shutdown will not be graceful: “shutdown!” will not be printed. It’ll just look like this:

ping!
ping!
ping!
^C

C++20 introduced std::stop_source and std::stop_token, which help to implement a graceful shutdown. We’ll use the following code:

namespace
{
static std::stop_source exit_source;
static std::atomic<bool> main_exited = false;
static bool already_registered = false;

static void atexit_handler()
{
  main_exited = true;
}

BOOL control_handler(DWORD Type)
{
  switch (Type)
  {
  case CTRL_C_EVENT:
  case CTRL_CLOSE_EVENT:
    exit_source.request_stop();

    while (!main_exited)
      Sleep(10);

    return TRUE;
    // Pass other signals to the next handler.
  default:
    return FALSE;
  }
}
} // namespace

std::stop_token register_exit_signal()
{
  if (!already_registered)
  {
    if (!SetConsoleCtrlHandler((PHANDLER_ROUTINE)control_handler, TRUE))
      throw std::runtime_error("Unable to register control handler");

    atexit(&atexit_handler);
    already_registered = true;
  }
  return exit_source.get_token();
}

You’re going to have to include both <stop_token> and <Windows.h> for this. Now we can adapt our daemon loop slightly:

int main(int argn, char** argv)
{
  auto token = register_exit_signal(); // <-- register the exit signal here
  while (!token.stop_requested()) // ... and test the current state here
  {
    std::cout << "ping!" << std::endl;
    std::this_thread::sleep_for(100ms);
  }
  std::cout << "shutdown!" << std::endl;
  return EXIT_SUCCESS;
}

Note that this requires cooperatively handling the shutdown. But now the output correctly prints “shutdown” when killed with ctrl+C.

ping!
ping!
shutdown!

There’s Linux/macOS code for this same interface too. It works by handling SIGINT/SIGTERM. But that information is somewhat easier to come by, so I’ll leave it out for brevity. Feel free to comment if you think that’d be interesting as well.

LDAP-Authentication in Wildfly (Elytron)

Authentication is never really easy to get right but it is important. So there are plenty of frameworks out there to facilitate authentication for developers.

The current installment of the authentication system in Wildfly/JEE7 is called Elytron, which makes using different authentication backends mostly a matter of configuration. This configuration, however, is quite extensive and consists of several entities due to its flexibility. Some may even say it is over-engineered…

Therefore I want to provide some kind of walkthrough of how to get authentication up and running in Wildfly Elytron using an LDAP user store as the backend.

Our aim is to configure the authentication with an LDAP backend, to implement login/logout and to secure our application endpoints using annotations.

Setup

Of course you need to install a relatively modern Wildfly JEE server; I used Wildfly 26. For your credential store and authentication backend you may set up a containerized Samba server, like I showed in a previous blog post.

Configuration of security realms, domains etc.

We have four major components we need to configure to use the Elytron security subsystem of Wildfly:

  • The security domain defines the realms to use for authentication. That way you can authenticate against several different realms.
  • The security realms define how to use the identity store and how to map groups to security roles.
  • The dir-context defines the connection to the identity store – in our case the LDAP server.
  • The application security domain associates deployments (aka applications) with a security domain.

So let us put all that together in a sample configuration:

<subsystem xmlns="urn:wildfly:elytron:15.0" final-providers="combined-providers" disallowed-providers="OracleUcrypto">
    ...
    <security-domains>
        <security-domain name="DevLdapDomain" default-realm="AuthRealm" permission-mapper="default-permission-mapper">
            <realm name="AuthRealm" role-decoder="groups-to-roles"/>
        </security-domain>
    </security-domains>
    <security-realms>
        ...
        <ldap-realm name="LdapRealm" dir-context="ldap-connection" direct-verification="true">
            <identity-mapping rdn-identifier="CN" search-base-dn="CN=Users,DC=ldap,DC=schneide,DC=dev">
                <attribute-mapping>
                    <attribute from="cn" to="Roles" filter="(member={1})" filter-base-dn="CN=Users,DC=ldap,DC=schneide,DC=dev"/>
                </attribute-mapping>
            </identity-mapping>
        </ldap-realm>
        <ldap-realm name="OtherLdapRealm" dir-context="ldap-connection" direct-verification="true">
            <identity-mapping rdn-identifier="CN" search-base-dn="CN=OtherUsers,DC=ldap,DC=schneide,DC=dev">
                <attribute-mapping>
                    <attribute from="cn" to="Roles" filter="(member={1})" filter-base-dn="CN=auth,DC=ldap,DC=schneide,DC=dev"/>
                </attribute-mapping>
            </identity-mapping>
        </ldap-realm>
        <distributed-realm name="AuthRealm" realms="LdapRealm OtherLdapRealm"/>
    </security-realms>
    <dir-contexts>
        <dir-context name="ldap-connection" url="ldap://ldap.schneide.dev:389" principal="CN=Administrator,CN=Users,DC=ldap,DC=schneide,DC=dev">
            <credential-reference clear-text="admin123!"/>
        </dir-context>
    </dir-contexts>
</subsystem>
<subsystem xmlns="urn:jboss:domain:undertow:12.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="DevLdapDomain" statistics-enabled="true">
    ...
    <application-security-domains>
        <application-security-domain name="myapp" security-domain="DevLdapDomain"/>
    </application-security-domains>
</subsystem>

In the above configuration we have two security realms using the same identity store to allow authenticating users in separate subtrees of our LDAP directory. That way we do not need to search the whole directory and authentication becomes much faster.

Note: You may not need to do something like that if all your users reside in the same subtree.

The example shows a simple, but non-trivial use case that justifies the complexity of the involved entities.

Implementing login functionality using the Framework

Logging users in, using their session and logging them out again is almost trivial once everything is set up correctly. Essentially you use HttpServletRequest.login(username, password), HttpServletRequest.getSession(), HttpServletRequest.isUserInRole(role) and HttpServletRequest.logout() to manage your authentication needs.

That way you can check for an active session and the roles of the current user when handling requests. In addition to the imperative way with isUserInRole() we can secure endpoints declaratively, as shown in the last section.
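
A minimal sketch of a login servlet using just these methods could look like the following; the servlet name, URL pattern and redirect targets are made up for illustration and are not part of the configuration above:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = {"/auth"}, name = "Login endpoint")
public class AuthServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Authenticates against the security domain associated with this deployment;
        // throws a ServletException if the credentials are rejected.
        request.login(request.getParameter("username"), request.getParameter("password"));
        request.getSession(true); // establish a session for the authenticated user

        if (request.isUserInRole("super_admin")) {
            response.sendRedirect(request.getContextPath() + "/admin");
        } else {
            response.sendRedirect(request.getContextPath() + "/");
        }
    }

    @Override
    protected void doDelete(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Ends the authenticated session again.
        request.logout();
        request.getSession().invalidate();
    }
}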

Declarative access control

In addition to fine-grained imperative access control using the methods on HttpServletRequest, we can use annotations to secure our endpoints and make sure that only authenticated users with certain roles may access an endpoint. See the following example:

@WebServlet(urlPatterns = {"/*"}, name = "MyApp endpoint")
@ServletSecurity(
    @HttpConstraint(
        transportGuarantee = ServletSecurity.TransportGuarantee.NONE,
        rolesAllowed = {"ordinary_user", "super_admin"}
    )
)
public class MyAppEndpoint extends HttpServlet {
...
}

To allow unauthenticated access you can use the value attribute instead of rolesAllowed in the HttpConstraint:

@ServletSecurity(
    @HttpConstraint(
        transportGuarantee = ServletSecurity.TransportGuarantee.NONE,
        value = ServletSecurity.EmptyRoleSemantic.PERMIT
    )
)

I hope all of the above helps to set up simple and secure authentication and authorization in Wildfly/JEE.

Improved automated instance construction in C++

In my last blog post, I wrote about how I am automatically deducing constructor parameters in my dependency injection container. The approach had a major drawback: It worked only for 2 or more parameters, since there was an ambiguity with copy- or move-constructors with exactly one parameter.

Right after I wrote that post, I actually found a solution to that problem in the Boost.DI FAQ, which explains how to do it in its C++Now 2016 slides. It restricts the type conversion operator by using SFINAE on an unused template parameter. I did not even know that was possible! It defines the templated conversion operator very similar to this:

template <class T,
  class = std::enable_if_t<!std::is_same<std::remove_cvref_t<T>, Exclude>{}>>
operator T&() const
{
  return p_->get<std::remove_cvref_t<T>>();
}

Since this is a bit more involved than the bare templated conversion operator from last time, repeating it would be bad. In the last version, I used 3 helper types, the inferred_locator, mimic and the provider_wrapper, but that can be streamlined into one class:

template <typename Exclude> struct mimic
{
  mimic(std::size_t)
  {
  }

  mimic(service_provider const& p, std::size_t)
  : p_(&p)
  {
  }

  template <class T, class = std::enable_if_t<!std::is_same<std::remove_cvref_t<T>, Exclude>{}>> operator T&() const
  {
    return p_->get<std::remove_cvref_t<T>>();
  }

  service_provider const* p_{ nullptr };
};

Note that it uses some unused extra size_t parameters, which make the parameter expansion easier in the next step. Now we can use that for the SFINAE in the recursive construction:

// Actual dependency injection..
template <class T, std::size_t Head, std::size_t... Rest> constexpr auto
make_injected_(service_provider const& p, std::index_sequence<Head, Rest...>,
    decltype(T{ mimic<T>{ Head }, mimic<T>{ Rest }... }) * = nullptr)
{
  return std::make_unique<T>(mimic<T>(p, Head), mimic<T>(p, Rest)...);
}

// Trivial no-dependency case
template <class T> constexpr auto
make_injected_(service_provider const& p, std::index_sequence<>)
{
  return std::make_unique<T>();
}

// Fallback to try with fewer parameters
template <class T, std::size_t... Rest> constexpr auto make_injected_(service_provider const& p, std::index_sequence<Rest...>)
{
  return make_injected_<T>(p, std::make_index_sequence<sizeof...(Rest) - 1>{});
}

template <class T, std::size_t Max = 16> auto
make_injected(service_provider const& p)
{
  return make_injected_<T>(p, std::make_index_sequence<Max>{});
}

Just after I found this solution, my former colleague Dirk Reinbach sent me a very neat C++20 variant to restrict the conversion operator via a concept:

template <typename T, typename U>
concept not_is_same = !std::is_same_v<std::remove_cvref_t<T>, std::remove_cvref_t<U>>;

template <typename Exclude> struct mimic
{
  /* other members... */
  template <not_is_same<Exclude> T> operator T&() const
  {
    return p_->get<std::remove_cvref_t<T>>();
  }
};

This works just as well, and is more readable, too. I have not measured, but I guess it’s probably also faster to compile, since all things SFINAE are notoriously slow.

What dependent types can do for you

In a way, this post is also about Test Driven Development and *Type* Driven Development. While the two share the same acronym, I always thought of them as different concepts. However, as I recently experienced, when the two are used in a dependently typed language, there is something like a fluid transition between them.

While I will talk about programming in the dependently typed language Agda, not much is needed to follow what is going on – I will just walk through an exercise and explain everything along the way.

The exercise I want to use is here. It talks about a submarine, its position and certain commands that change the position. Examples for commands are forward 1, down 2 and up 3. These ‘values’ can be used just like that with the following definition of the type of commands:

      data Command : Set where
        forward : Nat -> Command
        up      : Nat -> Command
        down    : Nat -> Command

Agda can be used in a very mathy way – this should really be read as saying that the type of commands is a Set and there are three constructors (highlighted green) which take a natural number as argument and produce a command. So, since application is just juxtaposition, we can make the following definitions now:

  justSomeCommand = forward 5
  anotherOne = up 1

Now the exercise text explains how these commands can be applied to the position of the submarine. Working as a software developer, I have built the habit of turning specifications like that into tests. Since I don’t know any better, I just wrote ‘tests’ in Agda using equations to translate the exercise text – I’ll explain the syntax below:

  apply (forward 5) (pos 0 0) ≡ pos 5 0
  apply (down 5) (pos 5 0) ≡ pos 5 5
  apply (forward 8) (pos 5 5) ≡ pos 13 5

Note that the triple equal sign is different from what we used above. Roughly, this is because it is the proposition that some things are equal, while the normal equal sign above was used to make definitions. The code doesn’t type check as is: we haven’t defined ‘apply’ and it is not valid Agda to just write down equations like that. Let’s fix the latter problem first, by turning them into declarations and definitions. This will actually define elements of the datatypes of equality proofs – but I’m pretty sure you can accept these changes just as boilerplate we have to add to our equations:

  example1 : apply (forward 5) (pos 0 0) ≡ pos 5 0
  example1 = refl

  example2 : apply (down 5) (pos 5 0) ≡ pos 5 5
  example2 = refl

  example3 : apply (forward 8) (pos 5 5) ≡ pos 13 5
  example3 = refl

Now, to make the examples type-check, we have to define ‘pos’ and ‘apply’. Positions can be done analogous to commands:

  data Position : Set where
    pos : Nat -> Nat -> Position

(Here, the type of ‘pos’ just tells us, that it is a function taking two natural numbers as arguments.) Now we are ready to start with ‘apply’:

  apply : Command → Position → Position
  apply = ?

So apply is a function that takes a ‘Command’ and a ‘Position’ and returns another ‘Position’. For the definition of ‘apply’ I just entered a question mark ‘?’. It is one of my favorite features of Agda that terms can be left out like this before type checking. Agda still checks everything we have given so far and will give us a lot of information about what ‘?’ could be. This is called ‘interacting with a hole’ – because, well, it is a hole in your code and the type checker is there to tell you which things might fit into it. After type checking, the hole and what Agda tells us about it will look like this:

  apply : Command → Position → Position
  apply = ?

Goal: Command -> Position -> Position

This was type-checked with a couple of imports – see my final version of the code if you want to reproduce it. The first thing Agda tells us is the type of the goal, and then there is some mumbling about constraints, with some fragments that look like they have something to do with the examples from above – the latter is actually not information about the hole, but general information about the type checking. So let’s look at it to see if the type checker has anything to say:

(The refl terms in the definitions of the examples are highlighted yellow.)

Something is yellow! This is Agda’s way of telling us that it does not have enough information to decide if everything is okay. Which makes a lot of sense, since we haven’t given a definition of ‘apply’ and these equations are about values computed with ‘apply’. So let us just continue to define ‘apply’ and see if the yellow vanishes. This is analogous to the stage in TDD where your tests don’t pass because your code does not yet compile.

We will use pattern matching on the given ‘Command’ and ‘Position’ to define ‘apply’ – the cases below were generated by Agda (I only changed variable names), and we now have a hole for each case:

  apply : Command → Position → Position
  apply (forward x) (pos h d) = {!!}
  apply (up x) (pos h d) = {!!}
  apply (down x) (pos h d) = {!!}

There are various ways in which Agda can use the information given by types to help us with filling these holes. First of all, we can just ask Agda to make the hole ‘smaller’ if there is a unique canonical way to do so. This will work here, since ‘Position’ has only one constructor. So we get new holes for arguments of the constructor ‘pos’ and can try to fill those.

Let us focus on the first case and see what happens if we enter something not in line with our tests: if we ask Agda whether ‘h + d’ fits into the hole, it will say no and tell us exactly what the problem is.

While this is essentially the same kind of feedback you would get from a unit test, there are at least two important advantages to note:

  • This is feedback from the type checker, and it is combined with other things the type checker can tell you. It means you get a lot of feedback at once when you ask Agda if something you wrote fits into a hole.
  • ‘refl’ is only a simple case of the proofs you can write in Agda. More complicated ones need some training, but you can go way beyond unit tests and ‘check’ infinitely many cases or, even better: all cases.

If you want more, just try Agda yourself! One easy way to do that is to use Ingo Blechschmidt’s Agdapad, which lets you try Agda in your browser.

Automated instance construction in C++

I’m currently mostly switching back and forth between C# and C++ projects. One of the things that I’m missing most when switching to C++ is a nice dependency-injection (DI) library. After checking out what was already available, I finally decided I wanted to try to build my own slim type-indexed variant. I quickly started by registering factories and instances in a map on std::type_index, making it possible to both have the DI retain ownership (with std::unique_ptr) or just make a type available via a bare pointer. So I was able to do things like:

// Register an instance
di.insert_unique(std::make_unique<foo_service>());
// Register a factory
di.insert_unique([] { return std::make_unique<bar_service>(); });
// Register an existing bare pointer
di.insert_bare(my_bare_thingy);

// ... and retrieve them
auto& foo = di.get<foo_service>();

One of the most powerful aspects of a DI library is the ability to transitively setup dependencies. I like constructor injection the most, so I implemented a very naive way like this:

di.insert_unique([](auto& p) { return std::make_unique<complex_service>(
  p.get<base_service1>(), p.get<base_service2>(), p.get<base_service3>());
});

This is pretty verbose and we basically have to repeat all the constructor parameter types. But it’s easy to implement. We can do a little bit better by using a templated type-conversion operator and using it to call the get:

class service_provider
{
  struct inferred_locator
  {
    service_provider const* provider;
    template <class T> operator T&() const
    {
      return provider->get<std::remove_const_t<T>>();
    }
  };
  
  inferred_locator get() const
  {
    return { .provider = this };
  }
  
  /** typed get implementations here... */
};

Now we can reduce the previous registration to:

di.insert_unique([](auto& p) { 
  return std::make_unique<complex_service>(p.get(), p.get(), p.get());
});

That is basically just spelling out the number of constructor parameters in a verbose way. We could write a small template that takes the number, creates an std::index_sequence from it and then unpacks each index into an invocation of service_provider::get. But then we would still have to update registrations when adding (or removing) a dependency to a service’s constructor. With a little more work, we can actually get this instead:

di.insert_unique<complex_service>();

This is partly inspired by Antony Polukhin’s C++ reflection talks, and combines std::index_sequence based unpacking, SFINAE and the templated type-conversion operator:

template <class T, std::size_t Head, std::size_t... Rest>
constexpr auto make_unique_impl(provider_wrapper const& p,
    std::index_sequence<Head, Rest...>,
    decltype(T{ mimic{ Head }, mimic{ Rest }... }) * = nullptr) -> std::unique_ptr<T>
{
    // This next requirement is so we do not accidentally recurse into the copy/move-ctors
    static_assert(sizeof...(Rest) + 1 > 1, "Can only deduce constructors with two or more parameters.");
    return std::make_unique<T>(p(Head), p(Rest)...);
}

template <class T, std::size_t... Rest>
constexpr auto make_unique_impl(provider_wrapper const& p, std::index_sequence<Rest...>) -> std::unique_ptr<T>
{
    // This next requirement is so we do not accidentally recurse into the copy/move-ctors
    static_assert(sizeof...(Rest) > 1, "Can only deduce constructors with two or more parameters.");
    return make_unique_impl<T>(p, std::make_index_sequence<sizeof...(Rest) - 1>{});
}

template <class T, std::size_t Max = 8> auto make_unique(service_provider const& p)
{
    return make_unique_impl<T>(provider_wrapper{ &p }, std::make_index_sequence<Max>{});
}

This uses two new types: mimic, which is only used for SFINAE, takes std::size_t on construction (for the unpacking of the std::index_sequence) and converts to anything (templated type conversion again); and the provider_wrapper, which is a simple adaptor around service_provider that takes an unused std::size_t argument (again, for unpacking). The first overload of make_unique_impl is slightly more specialized (because it has Head and Rest), so the compiler tries it first. If it works, it returns a new instance of the service we want. Otherwise, it will fail without an error due to SFINAE in the unused and defaulted third parameter. The compiler will then try the second overload, which recurses to a variant with fewer parameters. The outermost make_unique starts this recursion with 8 parameters, because that should be enough for any sane service. I stop the recursion at one constructor parameter, even though that would be a useful configuration. This is because I have not yet found a way to avoid accidentally calling the copy or move constructors. If anyone knows how to do that, I’d be very happy to hear how. My workaround right now is to explicitly register a factory in that case.

Packaging Java-Project as DEB-Packages

Providing native installation mechanisms and media for your software may be a large benefit for your customers. One way to do so is packaging for the target Linux distributions your customers are running.

Packaging for Debian/Ubuntu is relatively hard, because there are many ways and rules for how to do it. Some part of our software is written in Java and needs to be packaged as .deb packages for Ubuntu.

The official way

There is an official guide on how to package Java projects for Debian. While this may be suitable for libraries and programs that you want to publish to the official repositories, it is not a perfect fit for a custom project that you provide specifically to your customers, because it is a lot of work, does not integrate well with your delivery pipeline and requires providing packages for all of your dependencies as well.

The convenient way

Fortunately, there is a great plugin for Ant and Maven called jdeb. Essentially you include and configure the plugin in your pom.xml like all the other build-related stuff, execute the jdeb goal in your build pipeline and you are done. This results in a nice .deb package that you can push to your customers’ repositories for their convenience.

A working configuration for Maven may look like this:

<build>
    <plugins>
        <plugin>
            <artifactId>jdeb</artifactId>
            <groupId>org.vafer</groupId>
            <version>1.8</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>jdeb</goal>
                    </goals>
                    <configuration>
                        <dataSet>
                            <data>
                                <src>${project.build.directory}/${project.build.finalName}-jar-with-dependencies.jar</src>
                                <type>file</type>
                                <mapper>
                                    <type>perm</type>
                                    <prefix>/usr/share/java</prefix>
                                </mapper>
                            </data>
                            <data>
                                <type>link</type>
                                <linkName>/usr/share/java/MyProjectExecutable</linkName>
                                <linkTarget>/usr/share/java/${project.build.finalName}-jar-with-dependencies.jar</linkTarget>
                                <symlink>true</symlink>
                            </data>
                            <data>
                                <src>${project.basedir}/src/deb/MyProjectStartScript</src>
                                <type>file</type>
                                <mapper>
                                    <type>perm</type>
                                    <prefix>/usr/bin</prefix>
                                    <filemode>755</filemode>
                                </mapper>
                            </data>
                        </dataSet>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

If you are using gradle as your build tool, the ospackage-plugin may be worth a look. I have not tried it personally, but it looks promising.

Wrapping it up

Packaging your software for your customers drastically improves the experience for users and administrators. Doing it the official Debian way is not always the best or most efficient option. There are many plugins and extensions for common build systems that conveniently build native packages and may be easier for many use cases.

5 Not-so-Beginner’s React Pitfalls

React, in my opinion, has become quite a useful tool over the years. I admit I haven’t given the other major frameworks a try, but from the look of the resulting code, I would only give Svelte a real chance in the near future (in fact, you’d really have to pay me real big money to convince me about Angular).

As with many of the more useful JS libraries, React is in a state where it has not only survived quite some time (reaching v18 only a few weeks ago), but has also bred a community that harbors a lot of valuable knowledge, enabling one to effectively avoid the most common pitfalls at the beginning of the journey. There are lots of resources you can easily find online, from few-hour courses to several posts in other blogs about the most common traps.

However, in our daily work it appears that there are still some very good points to make about how not to go about React’s unopinionatedness. So these are some of our own findings that I’ve not yet seen overly emphasized – maybe they are of use to you.

1. HAVE YOUR STATES ATOMIC

It might happen that one migrates an older React component from a time when functional programming wasn’t the norm yet, or, out of whatever habit, declares something like a greedy React state as

const [state, setState] = useState({this: ..., that: ... , ..., ...});

Now your state profits a lot from immutability (think of this as “your machine then knows that its content is clear and unique at any given time”) and you do not need to care about the same-or-not-sameness of state.that when evaluating state.this. Therefore, it is usually advised to split that up into several independent states as

const [this, setThis] = useState(...);
const [that, setThat] = useState(...);
...

That is more readable and everything. However, the most useful rule for building your states is not to split everything up as small as possible, but rather to have your states atomic. By that, we mean “not needlessly large, but containing everything that might change at the same time”.

One common example is basic data fetching – say you don’t reach for react-query (which I personally like) but do e.g. a simple GET request yourself. Then you usually do not only have “data” (some response), but also at least a “pending” (has the request finished yet?) and an “error” (is this response even usable?) field. These all change at the same time, thus they belong to the same entity. That state, designed atomically:

const [query, setQuery] = useState({
    pending: false,
    data: null,
    error: null,
});

Side note: you might choose not to use the null object as an initial value here because of the known ambivalence problems with this object. For this illustration, it will suffice.

So, this query state is now atomic: not to be split further without serious consequences. If you had another, unrelated query, you would not just put it right into the same state entity; but if you had another property of this query (e.g. a separate field for the status code, …), it would belong there.

This helps in having more predictable useEffect, useMemo etc. dependency arrays. You can have an Effect depending on [query] as a whole, and this makes complete semantic sense. It would be very hard to predict its behaviour if you mashed multiple queries or whatever-state-you-can-think-of in there.

2. HAVE YOUR EFFECTS ATOMIC & TEAR THEM DOWN

Similarly, it is not super obvious (to the newcomer’s eye at least) that you can have multiple useEffect() calls. You can adhere to the Single Responsibility principle right there – the only good Effects are the ones you can grasp in a twinkling of an eye. Use one for every single thing you want to achieve; don’t lump multiple different things together in a somewhat-“constructor”-type of thinking. This keeps the dependency arrays small and controllable, and there are fewer cases of peculiar “But this CANNOT EVEN happen!!”.

Moreover, Effects have a function designed to clean them up: the teardown function. If your Effect starts any larger operation and your component then gets re-rendered for some reason before that operation is finished, you are likely to get hit by it in a state where you have already forgotten about it. You can follow this example:

// example: listening to the scroll event
useEffect(() => {
    const handler = (event) => { /* ... */ };
    document.addEventListener('scroll', handler);
    return () => document.removeEventListener('scroll', handler);
}, []);

// or you might do something later in life
useEffect(() => {
    const timeout = setTimeout(() => { /* ... */ }, 5000);
    return () => clearTimeout(timeout);
}, []);

Some asynchronous operations might not have a simple teardown operation, but you can at least tell your Promises to disregard the effect. Forgetting to do so is responsible for the very ugly

Warning: Can’t perform a React state update on an unmounted component. This is a no-op, but it indicates a memory leak in your application.

If you are responsible, you clean your Browser Console of all of these warnings. It appears if you call a setState-or-similar function at a point where the teardown actually should have happened. This pattern solves that case:

// this example uses a fetch Promise,
// but it also works for stale setTimeout handlers etc.

useEffect(() => {
    let mounted = true;
    fetch('/whatever').then(() => {
        if (mounted) {
            setState(true);
        }
    });
    return () => { mounted = false };
}, []);

// if you do not check for the value of mounted,
// the "memory leak" error can appear, if the
// fetch returns when the component updated meanwhile.

Side note: I also cannot recall a single case in which the common React linter rule “exhaustive-deps” was worth ignoring. I had several occasions in which I believed I could outsmart the stupid machine, only to end up in much larger problems down the road. Sure, things like Redux’ dispatch() might be cumbersome to always include, but I found that if I just make sure that exhaustive-deps never fires, I am happier in the long run.

3. USEEFFECT() in too DEEP Functions

Especially in the context of data fetching, it might appear tempting to put your useEffect() calls as deep (in the direction of the smallest components) as you can, even more so if you don’t have a rigid way of state management.

Now, I see the point that this appears “more modular” and flexible, but for me it has led to situations where way too many requests were sent to our backends. You trade the modularity for the unpredictability of some Effects, so the best way I came to think of it was: treat useEffect() like a bug.

I’m not saying that using it is wrong, but being wary of its appearance can help. Sometimes it is possible to do everything an Effect does completely outside React. Maybe the Effect code can instead live in your index.js (as vanilla JS or otherwise) and just be injected into your root component, e.g. as props or via other libraries. E.g. with a Redux middleware, some effects can run with a higher degree of control over your state.

Remember: modularity is not bad per se. It’s good. Don’t elevate the most particular effects to the top level of your application, but figure out where they can live well enough that you know exactly when they need to fire.

So far, there hasn’t been a case where I wished that I stuffed my useEffects further down to the virtual DOM leaves, but several, in which elevating them helped me a lot.

4. USE CUSTOM HOOKS with minimal interface

I consider it helpful, even for React beginners, to always be on the lookout for what could be its own React hook. A React hook is any function whose name begins with “use”, and most of the time these consist of some combination of internal useState, useEffect, useContext and useRef definitions.

But their merit is that they allow for much cleaner, dumber-looking components themselves – and consider: dumb components are the best!

If they are only needed once, you can have them co-located next to where they are needed, but even just the act of giving them a name of their own makes for much more understandable code.

I use custom hooks for a lot of things, e.g.

  • having a State that is persisted in the localStorage / sessionStorage
  • having a State that updates in a debounced / throttled / delayed manner
  • standardizing very basic data fetching
  • accessing the window width at any time (nice for Responsive layout)
  • creating a React ref for an element with a “clicked outside” handler
  • standardized response of messages from connected websockets

I will now spare you the code, but if you have questions about any of these, just drop a comment.

One important point, though: always keep your interface minimal. E.g. if your custom hook has an internal setState(), think hard about whether to pass that function to the outside via the hook’s return value. Even if you are the only developer on a project, treat yourself as two different people, one “framework designer” and one “framework consumer”, and as the designer, think hard about what havoc the consumer could wreak if you allow them too much.

5. Do not duplicate STATE information (especially with react-router)

This applies to every piece of state information, but it’s important to recognize that your URL route is just that: a kind of global state – one that your user can edit directly at any time, leaving the synchronization up to you.

So do not go about it by reading the URL parameters into some state that has its own setState! If you define a certain role for a state parameter in your URL, then it is your obligation to keep a uni-directional data flow:

  1. From the route, that value flows into your application in a clearly-defined manner,
  2. where you act upon it as you wish, until you need to change it
  3. Then you change the route. Then go back to 1

Of course, one might imagine that in some cases you can not guarantee that. Then maybe do your own synchronization logic, but I would highly advise you to stash that away into e.g. a custom hook, or middleware if you use Redux, so that you can test it thoroughly and it won’t break too soon.

Further note: There are situations where it is quite sensible to have two very similar states, if they have a different responsibility. These are not a bug.

E.g. if you GET a value from a server, then edit it in a controlled <input/> field, and PUT it to the server again, you do not wish to do so on every key press. Then these are not meant to be the same:

  1. the value as you currently know it from the server
  2. the value as it exists inside the <input/>

These are semantically different. They can and should be different state entities. But if you have something that is utterly dependent on one other state, then chances are you do not really need another entity.

All in all,

that turned out longer than I envisioned it to become. But I hope it is of some help to any React coders who have managed the absolute basics and are now prone to the next-level pitfalls.

The good news is that after a certain bunch of hardships, there are rarely even more surprises waiting. So, manage your state and effects responsibly, especially the asynchronous ones, and the rest are practices that apply to any software development.

Or am I misled?

Reading a conanfile.txt from a conanfile.py

I am currently working on a project that embeds another library into its own source tree via git submodules. This is currently convenient because the library’s development is very much tied to the host project, and having both in the same CMake project cuts down dramatically on iteration times. Yet that library already has its own conan dependencies in a conanfile.txt. Because I did not want to duplicate the dependency information from the library, I decided to pull those into my host project’s requirements programmatically using a conanfile.py.

Luckily, you can use conan’s own tools for that:

import os
from pathlib import Path

from conans.client.loader import ConanFileTextLoader

def load_library_conan(recipe_folder):
    text = Path(os.path.join(recipe_folder, "libary_folder", "conanfile.txt")).read_text()
    return ConanFileTextLoader(text)

You can then use that in your stage methods, e.g.:

    def config_options(self):
        for line in load_library_conan(self.recipe_folder).options.splitlines():
            (key, value) = line.split("=", 2)
            (library, option) = key.split(":", 2)
            setattr(self.options[library], option, value)

    def requirements(self):
        for x in load_library_conan(self.recipe_folder).requirements:
            self.requires(x)

I realize this is a niche application, but it helped me very much. It would be cool if conan could delegate into subfolders natively, but I did not find a better way to do this.