## Help me with the Spiderman Operator

From time to time, I encounter a silly syntax in Java that I silently dubbed the “spiderman operator” because of all the syntactical pointing that’s going on. My problem is that it’s not very readable, I don’t know an alternative syntax for it, and my programming style leads me to it more often than I am willing to ignore.

The spiderman operator looks like this:

``x -> () -> x``

In its raw form, it means that you have a function that takes x and returns a Supplier of x:

``Function<X, Supplier<X>> rawForm = x -> () -> x;``

That in itself is not very useful or mysterious, but it gets funnier once you take into account that Supplier<X> is just one possible type you can return, because in Java, as long as the signature fits, the thing sits.
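To see the “fits/sits” rule in action, here is a small self-contained sketch (the class and variable names are mine): the very same lambda text can target different functional interfaces, as long as their shapes match.

```java
import java.util.concurrent.Callable;
import java.util.function.Function;
import java.util.function.Supplier;

public class SignatureFits {
    // Identical lambda text, two different target types:
    static final Function<String, Supplier<String>> asSupplier = x -> () -> x;
    static final Function<String, Callable<String>> asCallable = x -> () -> x;

    public static void main(String[] args) throws Exception {
        System.out.println(asSupplier.apply("hello").get());  // prints: hello
        System.out.println(asCallable.apply("world").call()); // prints: world
    }
}
```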

## A possible use case

Let’s define a type that is an interface with just one method:

```
public interface DomainValue {
    BigDecimal value();
}
```

In Java, the @FunctionalInterface annotation is not required for an interface to be, in fact, a functional interface. It only needs to have exactly one method without implementation. But how can we provide methods with implementation in Java interfaces? Default methods are the way:

```
@FunctionalInterface
public interface DomainValue {
    BigDecimal value();

    default String denotation() {
        return getClass().getSimpleName();
    }
}
```

Let’s say that we want to load domain values from a key-value-store with the following access method:

``Optional<Double> loadEntry(String key)``

If there is no entry with the given key or its syntax is not suitable to be interpreted as a double, the method returns Optional.empty(). Otherwise it returns the double value wrapped in an Optional shell. We can convert it to our domain value like this:

```
Optional<DomainValue> myValue =
    loadEntry("myKey")
        .map(BigDecimal::new)
        .map(x -> () -> x);
```

And there it is, the spiderman operator. We convert from Double to BigDecimal and then to DomainValue by saying that we want to convert our BigDecimal to “something that can supply a BigDecimal”, which is exactly what our DomainValue can do.
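Here is a compilable sketch of the whole conversion; the `loadEntry` stub, the key and the stored value are made up for the demo:

```java
import java.math.BigDecimal;
import java.util.Optional;

public class SpidermanDemo {
    interface DomainValue { BigDecimal value(); }

    // Stand-in for the key-value store access method from the article;
    // key and stored value are invented for this sketch.
    static Optional<Double> loadEntry(String key) {
        return "height".equals(key) ? Optional.of(1.5d) : Optional.empty();
    }

    static Optional<DomainValue> load(String key) {
        // Double -> BigDecimal -> "something that can supply a BigDecimal"
        Optional<DomainValue> myValue = loadEntry(key)
                .map(BigDecimal::new)
                .map(x -> () -> x);
        return myValue;
    }

    public static void main(String[] args) {
        System.out.println(load("height").map(v -> v.value().toPlainString()).orElse("none"));  // prints: 1.5
        System.out.println(load("missing").map(v -> v.value().toPlainString()).orElse("none")); // prints: none
    }
}
```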

## A bigger use case

Right now, the DomainValue type is nothing more than a mantle around a numerical value. But we can expand our domain to have more specific types:

```
public interface Voltage extends DomainValue {
}
```

```
public interface Power extends DomainValue {
    @Override
    default String denotation() {
        return "Electric power";
    }
}
```

Boring!

```
public interface Current extends DomainValue {
    default Power with(Voltage voltage) {
        return () -> value().multiply(voltage.value());
    }
}
```

Ok, this is maybe no longer boring. We can implement a lot of domain functionality just in interfaces and then instantiate ad-hoc types:

```
Voltage europeanVoltage = () -> BigDecimal.valueOf(220);
Current powerSupply = () -> BigDecimal.valueOf(2);
Power usage = powerSupply.with(europeanVoltage);
```
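Putting the pieces together, a self-contained sketch of the domain types (assembled from the snippets above, with a made-up main method) looks like this:

```java
import java.math.BigDecimal;

public class DomainDemo {
    interface DomainValue {
        BigDecimal value();
        default String denotation() { return getClass().getSimpleName(); }
    }

    interface Voltage extends DomainValue { }

    interface Power extends DomainValue {
        @Override
        default String denotation() { return "Electric power"; }
    }

    interface Current extends DomainValue {
        default Power with(Voltage voltage) {
            return () -> value().multiply(voltage.value());
        }
    }

    public static void main(String[] args) {
        // Ad-hoc instances via lambdas:
        Voltage europeanVoltage = () -> BigDecimal.valueOf(220);
        Current powerSupply = () -> BigDecimal.valueOf(2);
        Power usage = powerSupply.with(europeanVoltage);
        System.out.println(usage.denotation() + ": " + usage.value()); // prints: Electric power: 440
    }
}
```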

Or we load the values from our key-value-store:

```
Optional<Voltage> maybeVoltage =
    loadEntry("voltage")
        .map(BigDecimal::new)
        .map(x -> () -> x);

Optional<Current> maybeCurrent =
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> () -> x);
```

You probably see it already: We have some duplicated code! The strange thing is, it won’t go away so easily.

## The first call for help

But first I want to sanitize the code syntactically. The duplication is bad, but the spiderman operator is just unreadable.

If you have an idea how the syntax of the second map() call can be improved, please comment below! Just one request: Make sure your idea compiles beforehand.

## Failing to eliminate the duplication

There is nothing easier than eliminating the duplication above: The code is syntactically identical and only the string parameter is different – well, and the return type. We will see how this affects us.

What we cannot do:

```
<DV extends DomainValue> Optional<DV> loadFor(String entry) {
    Optional<BigDecimal> maybeValue = loadEntry(entry).map(BigDecimal::new);
    return maybeValue.map(x -> () -> x);
}
```

Suddenly, the spiderman operator does not compile with the error message:

```
The target type of this expression must be a functional interface
```

I can see the problem: Subtypes of DomainValue are not required to stay compatible with the functional interface requirement (just one method without implementation).

Interestingly, if we work with a wildcard for the generic, it compiles:

```
Optional<? extends DomainValue> loadFor(String entry) {
    Optional<BigDecimal> maybeValue = loadEntry(entry).map(BigDecimal::new);
    return maybeValue.map(x -> () -> x);
}
```

The problem is that we still need to downcast to our specific subtype afterwards. But we can use this insight and move the downcast into the method:

```
<DV extends DomainValue> Optional<DV> loadFor(
        String entry,
        Class<DV> type
) {
    Optional<BigDecimal> maybeValue = loadEntry(entry).map(BigDecimal::new);
    return maybeValue.map(x -> type.cast(x));
}
```

Which makes our code readable enough, but at the price of using reflection:

```
Optional<Voltage> european = loadFor("voltage", Voltage.class);
```

I’m not a fan of this solution, because downcasts are dangerous and reflection is dangerous, too. Mixing two dangerous things doesn’t neutralize the danger most of the time. This code will fail at runtime sooner or later, without any compiler warning us about it. If you don’t believe me, add a second method without implementation to the Current interface and see if the compiler warns you. Hint: This is what you will see at runtime:

```
java.lang.ClassCastException: Cannot cast java.math.BigDecimal to Current
```

But, to our surprise, it doesn’t even need a second method. The code above doesn’t work. Even if we reintroduce our spiderman operator (with an additional assignment to help the type inference), the cast won’t work:

```
<DV extends DomainValue> Optional<DV> loadFor(
        String entry,
        Class<DV> type
) {
    Optional<BigDecimal> maybeValue = loadEntry(entry).map(BigDecimal::new);
    Optional<DomainValue> maybeDomainValue = maybeValue.map(x -> () -> x);
    return maybeDomainValue.map(x -> type.cast(x));
}
```

The ClassCastException just got a lot more mysterious:

```
java.lang.ClassCastException: Cannot cast Loader$$Lambda$8/0x00000008000028c0 to Current
```

My problem is that I am stuck. There is working code that uses the spiderman operator and produces code duplication, but there is no way around the duplication that I can think of. I can get objects for the supertype (DomainValue), but not for a specific subtype of it. If I want that, I have to accept duplication. Or am I missing something?
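For what it’s worth, here is one direction an answer could take, sketched under the assumption that the caller may pass the conversion function explicitly. It does not eliminate the spiderman operator, but it confines the duplication to the operator itself; the `loadEntry` stub and all names are invented for this sketch:

```java
import java.math.BigDecimal;
import java.util.Optional;
import java.util.function.Function;

public class LoaderSketch {
    interface DomainValue { BigDecimal value(); }
    interface Voltage extends DomainValue { }
    interface Current extends DomainValue { }

    // Hypothetical key-value store access; always returns 2.0 for the demo.
    static Optional<Double> loadEntry(String key) {
        return Optional.of(2.0d);
    }

    // The factory parameter carries the subtype information,
    // so no downcast and no reflection is needed.
    static <DV extends DomainValue> Optional<DV> loadFor(
            String entry, Function<BigDecimal, DV> factory) {
        return loadEntry(entry).map(BigDecimal::new).map(factory);
    }

    public static void main(String[] args) {
        // The spiderman operator survives at each call site, but the rest is shared:
        Optional<Voltage> maybeVoltage = loadFor("voltage", x -> () -> x);
        Optional<Current> maybeCurrent = loadFor("current", x -> () -> x);
        System.out.println(maybeVoltage.map(DomainValue::value).orElse(null)); // prints: 2
        System.out.println(maybeCurrent.map(DomainValue::value).orElse(null)); // prints: 2
    }
}
```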

## The second call for help

If you can think about a way to eliminate the duplication, please tell me (or us) in the comments. This problem doesn’t need to be solved for my peace of mind or the sanity of my code – the duplication is confined to a particular place.

Being used to roam nearly without boundaries in the Java syntax (25 years of thinking in Java will do that to you), this particular limitation hit hard. If you can give me some ideas, I would be grateful.

## LDAP-Authentication in Wildfly (Elytron)

Authentication is never really easy to get right but it is important. So there are plenty of frameworks out there to facilitate authentication for developers.

The current installment of the authentication system in Wildfly/JEE7 is called Elytron, which makes using different authentication backends mostly a matter of configuration. This configuration, however, is quite extensive and consists of several entities due to its flexibility. Some may even say it is over-engineered…

Therefore I want to provide a walkthrough of how to get authentication up and running in Wildfly Elytron using an LDAP user store as the backend.

Our aim is to configure the authentication with an LDAP backend, to implement login/logout and to secure our application endpoints using annotations.

## Setup

Of course you need to install a relatively modern Wildfly JEE server; I used Wildfly 26. For your credential store and authentication backend you may set up a containerized Samba server, like I showed in a previous blog post.

## Configuration of security realms, domains etc.

We have four major components to configure in order to use the Elytron security subsystem of Wildfly:

• The security domain defines the realms to use for authentication. That way you can authenticate against several different realms.
• The security realms define how to use the identity store and how to map groups to security roles.
• The dir-context defines the connection to the identity store – in our case the LDAP server.
• The application security domain associates deployments (aka applications) with a security domain.

So let us put all that together in a sample configuration:

```
<subsystem xmlns="urn:wildfly:elytron:15.0" final-providers="combined-providers" disallowed-providers="OracleUcrypto">
    ...
    <security-domains>
        <security-domain name="DevLdapDomain" default-realm="AuthRealm" permission-mapper="default-permission-mapper">
            <realm name="AuthRealm" role-decoder="groups-to-roles"/>
        </security-domain>
    </security-domains>
    <security-realms>
        ...
        <ldap-realm name="LdapRealm" dir-context="ldap-connection" direct-verification="true">
            <identity-mapping rdn-identifier="CN" search-base-dn="CN=Users,DC=ldap,DC=schneide,DC=dev">
                <attribute-mapping>
                    <attribute from="cn" to="Roles" filter="(member={1})" filter-base-dn="CN=Users,DC=ldap,DC=schneide,DC=dev"/>
                </attribute-mapping>
            </identity-mapping>
        </ldap-realm>
        <ldap-realm name="OtherLdapRealm" dir-context="ldap-connection" direct-verification="true">
            <identity-mapping rdn-identifier="CN" search-base-dn="CN=OtherUsers,DC=ldap,DC=schneide,DC=dev">
                <attribute-mapping>
                    <attribute from="cn" to="Roles" filter="(member={1})" filter-base-dn="CN=auth,DC=ldap,DC=schneide,DC=dev"/>
                </attribute-mapping>
            </identity-mapping>
        </ldap-realm>
        <distributed-realm name="AuthRealm" realms="LdapRealm OtherLdapRealm"/>
    </security-realms>
    <dir-contexts>
        <!-- url, principal and credential are placeholders; adapt them to your LDAP server -->
        <dir-context name="ldap-connection" url="ldap://ldap.schneide.dev:389" principal="CN=Administrator,CN=Users,DC=ldap,DC=schneide,DC=dev">
            <credential-reference clear-text="changeit"/>
        </dir-context>
    </dir-contexts>
</subsystem>
<subsystem xmlns="urn:jboss:domain:undertow:12.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="DevLdapDomain" statistics-enabled="true">
    ...
    <application-security-domains>
        <application-security-domain name="myapp" security-domain="DevLdapDomain"/>
    </application-security-domains>
</subsystem>
```

In the above configuration we have two security realms using the same identity store to allow authenticating users in separate subtrees of our LDAP directory. That way we do not need to search the whole directory and authentication becomes much faster.

Note: You may not need to do something like that if all your users reside in the same subtree.

The example shows a simple, but non-trivial use case that justifies the complexity of the involved entities.

## Implementing login functionality using the Framework

Logging users in, using their session and logging them out again is almost trivial once everything is set up correctly. Essentially you use `HttpServletRequest.login(username, password)`, `HttpServletRequest.getSession()`, `HttpServletRequest.isUserInRole(role)` and `HttpServletRequest.logout()` to manage your authentication needs.

That way you can check for an active session and the roles of the current user when handling requests. In addition to the imperative way with `isUserInRole()` we can secure endpoints declaratively, as shown in the last section.
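As a sketch (the servlet path, parameter names and redirect target are assumptions, and the code needs a servlet container to run), a login endpoint using these methods could look like this:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical login servlet; the container delegates login() to the
// security domain configured for the deployment (here: DevLdapDomain).
@WebServlet(urlPatterns = {"/login"}, name = "Login endpoint")
public class LoginServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String username = req.getParameter("username");
        String password = req.getParameter("password");
        try {
            req.login(username, password); // authenticate against the configured realms
            req.getSession(true);          // establish an authenticated session
            resp.sendRedirect("home");     // example target page
        } catch (ServletException e) {
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Login failed");
        }
    }
}
```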

## Declarative access control

In addition to fine grained imperative access control using the methods on HttpServletRequest we can use annotations to secure our endpoints and to make sure that only authenticated users with certain roles may access the endpoint. See the following example:

```
@WebServlet(urlPatterns = {"/*"}, name = "MyApp endpoint")
@ServletSecurity(
    @HttpConstraint(
        transportGuarantee = ServletSecurity.TransportGuarantee.NONE,
        rolesAllowed = {"myapp-user"}) // example role name
)
public class MyAppEndpoint extends HttpServlet {
    ...
}
```

To allow unauthenticated access you can use the `value` attribute instead of `rolesAllowed` in the `HttpConstraint`:

```
@ServletSecurity(
    @HttpConstraint(
        transportGuarantee = ServletSecurity.TransportGuarantee.NONE,
        value = ServletSecurity.EmptyRoleSemantic.PERMIT)
)
```

I hope all of the above helps to set up simple and secure authentication and authorization in Wildfly/JEE.

## Packaging Java-Project as DEB-Packages

Providing native installation mechanisms and media of your software to your customers may be a large benefit for them. One way to do so is packaging for the target linux distributions your customers are running.

Packaging for Debian/Ubuntu is relatively hard, because there are many ways and rules how to do it. Some part of our software is written in Java and needs to be packaged as .deb-packages for Ubuntu.

## The official way

There is an official guide on how to package Java projects for Debian. While this may be suitable for libraries and programs that you want to publish to the official repositories, it is not a perfect fit for a custom project that you provide specifically to your customers, because it is a lot of work, does not integrate well with your delivery pipeline and requires you to provide packages for all of your dependencies as well.

## The convenient way

Fortunately, there is a great plugin for Ant and Maven called jdeb. Essentially you include and configure the plugin in your `pom.xml` like all the other build-related stuff, execute the jdeb goal in your build pipeline and you are done. This results in a nice .deb-package that you can push to your customers’ repositories for their convenience.

A working configuration for Maven may look like this:

```
<build>
    <plugins>
        <plugin>
            <artifactId>jdeb</artifactId>
            <groupId>org.vafer</groupId>
            <version>1.8</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>jdeb</goal>
                    </goals>
                    <configuration>
                        <dataSet>
                            <data>
                                <src>${project.build.directory}/${project.build.finalName}-jar-with-dependencies.jar</src>
                                <type>file</type>
                                <mapper>
                                    <type>perm</type>
                                    <prefix>/usr/share/java</prefix>
                                </mapper>
                            </data>
                            <data>
                                <src>${project.basedir}/src/deb/MyProjectStartScript</src>
                                <type>file</type>
                                <mapper>
                                    <type>perm</type>
                                    <prefix>/usr/bin</prefix>
                                    <filemode>755</filemode>
                                </mapper>
                            </data>
                        </dataSet>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```

If you are using gradle as your build tool, the ospackage-plugin may be worth a look. I have not tried it personally, but it looks promising.

## Wrapping it up

Packaging your software for your customers drastically improves the experience for users and administrators. Doing it the official Debian way is not always the best or most efficient option. There are many plugins or extensions for common build systems to conveniently build native packages that may be easier for many use cases.

## Serving static resources in Javalin running as servlets

Javalin is a nice JVM-based microframework targeted at web APIs, supporting Java and Kotlin as implementation languages. Usually, it uses Jetty and runs standalone on the server or in a container.

However, those who want or need to deploy it to a servlet container/application server like Tomcat or Wildfly can do so by changing only a few lines of code and annotating at least one URL as a @WebServlet. Most of your application will continue to run unchanged.

But why do I say only “most of your application”?

Unfortunately, `Javalin-jetty` and `Javalin-standalone` do not provide complete feature parity. One important example is serving static resources, especially if you do not only want to provide an API backend service but also serve resources like a single-page application (SPA) or an OpenAPI-generated web interface.

## Serving static resources in Javalin-jetty

Serving static files is straightforward and super simple if you are using Javalin-jetty. Just configure the Javalin app using `config.addStaticFiles()` to specify some paths and file locations and you are done.

The OpenAPI plugin for Javalin uses the above mechanism to serve its web interface, too.

## Serving static resources in Javalin-standalone

Javalin-standalone, which is used for deployment to application servers, does not support serving static files, as this is a Jetty feature and standalone is built to run without Jetty. So the short answer is: you can not!

The longer answer is that you can implement a workaround by writing a servlet based on Javalin-standalone that serves files from the classpath for certain URL paths yourself. See below a sample implementation in Kotlin using Javalin-standalone to accomplish the task:

```package com.schneide.demo

import io.javalin.Javalin
import io.javalin.http.Context
import io.javalin.http.HttpCode
import java.net.URLConnection
import javax.servlet.annotation.WebServlet
import javax.servlet.http.HttpServlet
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse

private const val DEFAULT_CONTENT_TYPE = "text/plain"

@WebServlet(urlPatterns = ["/*"], name = "Static resources endpoints")
class StaticResourcesEndpoints : HttpServlet() {
private val wellknownTextContentTypes = mapOf(
"js" to "text/javascript",
"css" to "text/css"
)

private val servlet = Javalin.createStandalone()
.get("/") { context ->
serveResource(context, "/public", "index.html")
}
.get("/*") { context ->
serveResource(context, "/public")
}
.javalinServlet()!!

private fun serveResource(context: Context, prefix: String, fileName: String = "") {
val filePath = context.path().replace(context.contextPath(), prefix) + fileName
val resource = javaClass.getResourceAsStream(filePath)
if (resource == null) {
context.status(HttpCode.NOT_FOUND).result(filePath)
return
}
var mimeType = URLConnection.guessContentTypeFromName(filePath)
if (mimeType == null) {
mimeType = guessContentTypeForWellKnownTextFiles(filePath)
}
context.contentType(mimeType)
context.result(resource)
}

private fun guessContentTypeForWellKnownTextFiles(filePath: String): String {
if (filePath.indexOf(".") == -1) {
return DEFAULT_CONTENT_TYPE
}
val extension = filePath.substring(filePath.lastIndexOf('.') + 1)
return wellknownTextContentTypes.getOrDefault(extension, DEFAULT_CONTENT_TYPE)
}

override fun service(req: HttpServletRequest?, resp: HttpServletResponse?) {
servlet.service(req, resp)
}
}

```

The code performs 3 major tasks:

1. Register a Javalin-standalone app as a WebServlet for certain URLs
2. Load static files bundled in the WAR-file from defined locations
3. Guess the content-type of the files as well as possible for the response

Feel free to use and modify the code in your project if you find it useful. I will try to get this workaround into Javalin-standalone if I find the time, to improve feature parity between Javalin-jetty and Javalin-standalone. Until then I hope you find the code useful.

## JDBC’s wasNull method pitfall

Java’s `java.sql` package provides a general API for accessing data stored in relational databases. It is part of JDBC (Java Database Connectivity). The API is relatively low-level, and is often used via higher-level abstractions based on JDBC, such as query builders like jOOQ, or object–relational mappers (ORMs) like Hibernate.

If you choose to use JDBC directly you have to be aware that the API is relatively old. It was added as part of JDK 1.1 and predates later additions to the language such as generics and optionals. There are also some pitfalls to be avoided. One of these pitfalls is ResultSet’s `wasNull` method.

##### The wasNull method

The `wasNull` method reports whether the database value of the last ‘get’ call for a nullable table column was NULL or not:

```
int height = resultSet.getInt("height");
if (resultSet.wasNull()) {
    height = defaultHeight;
}
```

The `wasNull` check is necessary, because the return type of `getInt` is the primitive data type `int`, not the nullable `Integer`. This way you can find out whether the actual database value is 0 or NULL.

The problem with this API design is that the ResultSet type is very stateful. Its state does not only change with each row (by calling the `next` method), but also with each ‘get’ method call.

If any other ‘get’ method call is inserted between the original ‘get’ method call and its `wasNull` check, the code will be wrong. Here’s an example. The original code is:

```var width = rs.getInt("width");
var height = rs.getInt("height");
var size = new Size(width, rs.wasNull() ? defaultHeight : height);
```

A developer now wants to add a third dimension to the size:

```var width = rs.getInt("width");
var height = rs.getInt("height");
var depth = rs.getInt("depth");
var size = new Size(width, rs.wasNull() ? defaultHeight : height, depth);
```

It’s easy to overlook the `wasNull` call, or to wrongly assume that adding another ‘get’ method call is a safe code change. But the `wasNull` check now refers to “depth” instead of “height”, which breaks the original intention.

So my advice is to wrap the ‘get’ calls for nullable database values in their own methods that return an `Optional`:

```
Optional<Integer> getOptionalInt(ResultSet rs, String columnName) throws SQLException {
    final int value = rs.getInt(columnName);
    if (rs.wasNull()) {
        return Optional.empty();
    }
    return Optional.of(value);
}
```
```

Now the default value fallback can be safely applied with the `orElse` method:

```var width = rs.getInt("width");
var height = getOptionalInt(rs, "height").orElse(defaultHeight);
var depth = rs.getInt("depth");
var size = new Size(width, height, depth);
```

## Be precise, round twice

Recently, after implementing a new feature in a software system that outputs lots of floating point numbers, I realized that the last digits were off by one for about one in a hundred numbers. As you might suspect at this point, the culprit was floating point arithmetic. This post is about a solution that turned out to be surprisingly easy.

The code I was working on loads a couple of thousand numbers from a database, stores them all as doubles, does some calculations with them and outputs the results rounded half-up to two decimal places. The new feature I had to implement involved adding constants to those numbers. For one value, 0.315, the constant in one of my test cases was 0.80. The original output was “0.32” and I expected to see “1.12” as the new rounded result, but what I saw instead was “1.11”.

## What happened?

After the fact, nothing too surprising – I just hit decimals which do not have a finite representation as a binary floating point number. Let me explain, if you are not familiar with this phenomenon: 1/3 happens to be a fraction which does not have a finite representation as a decimal:

1/3 = 0.333333333333…

Whether a fraction has a finite representation or not depends not only on the fraction, but also on the base of your number system. And so it happens that some innocent-looking decimal like 0.8 = 4/5 has the following representation in base 2:

4/5 = 0.1100110011001100… (base 2)

So if you represent 4/5 as a double, it will turn out to be slightly off, because the nearest double is not exactly 4/5. In my example, both numbers, 0.315 and 0.8, do not have a finite binary representation, and with those representation errors, their sum turns out to be slightly less than 1.115, which yields “1.11” after rounding half-up. On a very rough count, in my case, this problem appeared for about one in a hundred numbers in the output.

## What now?

The customer decided that the problem should be fixed if it appears too often and does not take too much time to fix. When I started to think about some automated way to count the mistakes, I began to realize that I actually had all the information I needed to compute the correct output – I just had to round twice: once at, say, the fourth decimal place, and a second time at the required second decimal place:

```
new BigDecimal(0.8d + 0.315d)
    .setScale(4, RoundingMode.HALF_UP)
    .setScale(2, RoundingMode.HALF_UP)
```

Which produces the desired result “1.12”.
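The effect can be reproduced in a few lines; this is a runnable sketch of the example above:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundTwice {
    public static void main(String[] args) {
        double sum = 0.8d + 0.315d; // slightly less than 1.115 as a double

        // Rounding once at two decimal places rounds down:
        BigDecimal once = new BigDecimal(sum).setScale(2, RoundingMode.HALF_UP);
        System.out.println(once); // prints: 1.11

        // Rounding twice recovers the intended result:
        BigDecimal twice = new BigDecimal(sum)
                .setScale(4, RoundingMode.HALF_UP)
                .setScale(2, RoundingMode.HALF_UP);
        System.out.println(twice); // prints: 1.12
    }
}
```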

If doubles are used, the errors explained above can only make a difference of about $10^{-15}$, so as long as we just add a double to a number with a short decimal representation while staying in the same order of magnitude, we can reproduce the precise numbers from doubles by setting the scale (which amounts to rounding) of our double as a BigDecimal.

But of course, this can go wrong if we use numbers that do not have a short, neat decimal representation like 0.315. In my case, I was lucky. First, I knew that all the input numbers have a precision of three decimal places. There are some calculations to be done with those numbers. But: all numbers are roughly in the same order of magnitude, there is only comparing, sorting and filtering, and the only honest calculation is taking arithmetic means. And the latter only meant I had to increase the scale from 4 to 8 to never see any error again.

So, this solution might look a bit sketchy, but in the end it solves the problem within the limited time budget, since the only change happens in the output function. And it can also be a valid first step of a migration to numbers with managed precision.

## The Java Cache API and Custom Key Generators

The Java Cache API allows you to add a `@CacheResult` annotation to a method, which means that calls to the method will be cached:

```import javax.cache.annotation.CacheResult;

@CacheResult
public String exampleMethod(String a, int b) {
// ...
}
```

The cache will be looked up before the annotated method executes. If a value is found in the cache it is returned and the annotated method is never actually executed.

The cache lookup is based on the method parameters. By default a cache key is generated by a key generator that uses `Arrays.deepHashCode(Object[])` and `Arrays.deepEquals(Object[], Object[])` on the method parameters. The cache lookup based on this key is similar to a HashMap lookup.
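The behavior of the default key can be pictured with plain `Arrays` calls; this small sketch (the names are mine) shows why two calls with equal parameters hit the same cache entry:

```java
import java.util.Arrays;

public class DefaultKeyDemo {
    // Mirrors what the default key generator does with the parameter array:
    // deep equality and deep hash codes over all method parameters.
    static boolean sameKey(Object[] paramsA, Object[] paramsB) {
        return Arrays.deepEquals(paramsA, paramsB)
                && Arrays.deepHashCode(paramsA) == Arrays.deepHashCode(paramsB);
    }

    public static void main(String[] args) {
        // Two calls to exampleMethod("a", 1) produce equal keys ...
        System.out.println(sameKey(new Object[] { "a", 1 }, new Object[] { "a", 1 })); // prints: true
        // ... while different parameters produce a different key.
        System.out.println(sameKey(new Object[] { "a", 1 }, new Object[] { "b", 2 })); // prints: false
    }
}
```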

You can define and configure multiple caches in your application and reference them by name via the `cacheName` parameter of the `@CacheResult` annotation:

```
@CacheResult(cacheName = "examplecache")
public String exampleMethod(String a, int b) {
    // ...
}
```

If no cache name is given the cache name is based on the fully qualified method name and the types of its parameters, for example in this case: “my.app.Example.exampleMethod(java.lang.String,int)”. This way there will be no conflicts with other cached methods with the same set of parameters.

### Custom Key Generators

But what if you actually want to use the same cache for multiple methods without conflicts? The solution is to define and use a custom cache key generator. In the following example both methods use the same cache (“examplecache”), but also use a custom cache key generator (`MethodSpecificKeyGenerator`):

```
@CacheResult(
    cacheName = "examplecache",
    cacheKeyGenerator = MethodSpecificKeyGenerator.class)
public String exampleMethodA(String a, int b) {
    // ...
}

@CacheResult(
    cacheName = "examplecache",
    cacheKeyGenerator = MethodSpecificKeyGenerator.class)
public String exampleMethodB(String a, int b) {
    // ...
}
```

Now we have to implement the `MethodSpecificKeyGenerator`:

```
import java.lang.annotation.Annotation;
import java.util.Arrays;
import java.util.stream.Stream;

import javax.cache.annotation.CacheInvocationParameter;
import javax.cache.annotation.CacheKeyGenerator;
import javax.cache.annotation.CacheKeyInvocationContext;
import javax.cache.annotation.GeneratedCacheKey;

import org.infinispan.jcache.annotation.DefaultCacheKey;

public class MethodSpecificKeyGenerator
        implements CacheKeyGenerator {

    @Override
    public GeneratedCacheKey generateCacheKey(CacheKeyInvocationContext<? extends Annotation> context) {
        Stream<Object> methodIdentity = Stream.of(context.getMethod());
        Stream<Object> parameterValues = Arrays.stream(context.getKeyParameters())
                .map(CacheInvocationParameter::getValue);
        return new DefaultCacheKey(Stream.concat(methodIdentity, parameterValues).toArray());
    }
}
```

This key generator not only uses the parameter values of the method call but also the identity of the method to generate the key. The call to `context.getMethod()` returns a `java.lang.reflect.Method` instance for the called method, which has appropriate `hashCode()` and `equals()` implementations. Both this method object and the parameter values are passed to the DefaultCacheKey implementation, which uses deep equality on its parameters, as mentioned above.

By adding the method’s identity to the cache key we have ensured that there will be no conflicts with other methods when using the same cache.

## 24 hour time format: Difference between JodaTime and java.time

We have been using JodaTime in many projects since before Java got better date and time support with Java 8. We update projects to the newer `java.time` classes whenever we work on them, but some still use JodaTime. One of these was a utility that imports time series from CSV files. The format for the time stamps is flexible and the user can configure it with a format string like “yyyyMMdd HHmmss”. Recently a user tried to import time series with timestamps like this:

```20200101 234500
20200101 240000
20200102 001500
```

As you can see this is a 24-hour format. However, the first hour of the day is represented as the 24th hour of the previous day if the minutes and seconds are zero, and it is represented as “00” otherwise. When the user tried to import this with the “yyyyMMdd HHmmss” format the application failed with an internal exception:

```org.joda.time.IllegalFieldValueException:
Cannot parse "20200101 240000": Value 24 for
hourOfDay must be in the range [0,23]
```

Then he tried “yyyyMMdd kkmmss”, which uses the “kk” format for hours. This format allows the string “24” as hour. But now “20200101 240000” was parsed as 2020-01-01T00:00:00 and not as 2020-01-02T00:00:00, as intended.

I tried to help and find a format string that supported this mixed 24-hour format, but I did not find one, at least not for JodaTime. However, I found out that with `java.time` the import would work with the “yyyyMMdd HHmmss” format, even though the documentation for “H” simply says “hour-of-day (0-23)”, without mentioning 24.
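A small sketch illustrating the java.time behavior with the user’s timestamps (the class name is mine):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class Hour24Demo {
    public static void main(String[] args) {
        DateTimeFormatter format = DateTimeFormatter.ofPattern("yyyyMMdd HHmmss");
        // The default SMART resolver accepts hour 24 when minutes and seconds
        // are zero and resolves it to midnight of the next day:
        System.out.println(LocalDateTime.parse("20200101 234500", format)); // prints: 2020-01-01T23:45
        System.out.println(LocalDateTime.parse("20200101 240000", format)); // prints: 2020-01-02T00:00
        System.out.println(LocalDateTime.parse("20200102 001500", format)); // prints: 2020-01-02T00:15
    }
}
```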

The import tool was finally updated to `java.time` and the user was able to import the time series file.

## Think of your code as a maintenance minefield

Most of the cost, effort and time of a software project is spent on the maintenance phase, the modification of a software product after delivery. If you think about all these resources as “negative investments” or debt settlement and try to associate your spendings with specific code areas or even single lines of code, you’ll probably find that the maintenance cost per line is not equally distributed. There are lots of lines of code that outlast the test of time without any maintenance work at all, a fair amount of lines that require moderate attention and some lines that seem to require constant and excessive developer care.

If you transfer this image to another metaphor, your code presents itself like a minefield for maintenance effort: Most of the area is harmless and safe to travel. But there are some positions that will just blow up once touched. The difference is that as a software developer, you don’t tread on the minefield, but you catch the flak if something happens.

You should try to deliver your code free of maintenance mines.

## Spotting a maintenance mine

Identifying a line of code as a maintenance mine after the fact is easy. You probably already recognize the familiar code as “troublesome” because you’ve spent hours trying to understand and fix it. The commit history of your version control system can show you the “hottest” lines in your code – the areas that were modified most often. If you add tests for each new bug, you’ll find that the code is probably tested really well, with tests motivated by different bug issues. In hindsight, you can clearly distinguish low-effort code from high maintenance code.

But before delivery, all code looks the same. Or does it?

## An example of a maintenance mine

Let’s look at an example. Our system monitors critical business data and sends out alerts if certain conditions are met. One implementation of the part sending the alerts is a simple e-mail sender. The code is given here:

```
import java.io.IOException;

public class SendEmailService {

    public void sendTo(
            Person person,
            String subject,
            String body) throws IOException {
        execCmd(
            buildCmd(
                person.email(), subject, body));
    }

    private String buildCmd(String recipientMailAddress, String subject, String body) {
        return "'/usr/bin/mutt -t " + recipientMailAddress + " -u " + subject + " -m " + body + "'";
    }

    private int execCmd(String command) throws IOException {
        return Runtime.getRuntime()
                .exec(command).exitValue();
    }
}
```

This code has two interesting problems:

• The first problem is that it is written in Java, a platform-agnostic programming language, but depends on being run on a Linux (or sufficiently similar Unix-like) operating system. The system it runs on needs to supply the /usr/bin/mutt program and have the e-mail sending settings properly configured, or else every attempt to run the send command will result in an error. This implicit dependency on the configuration of the production system isn’t the best way to deal with the situation, but it’s probably a one-time pain. The problem clearly presents itself, and once the system is set up in the right way, it is gone (until somebody tampers with the settings again). And my impression is that this code separates two concerns between development and operations rather nicely: Development provides software that can send specific e-mails if operations provides a system that is capable of sending e-mails. No need to configure the system for e-mail sending and then do it again for the software on said system.
• The second problem looks like a maintenance mine. In the line where the code passes the command line to the operating system (by calling Runtime.getRuntime().exec()), a Process object is returned that is only asked for its exitValue(). (Strictly speaking, the code would need to call waitFor() first, because exitValue() throws an IllegalThreadStateException if the process hasn’t terminated yet.) The line looks straight and to the point. No need to store and handle intermediate objects if you aren’t interested in them. But perhaps you should care:

By default, the created process does not have its own terminal or console. All its standard I/O (i.e. stdin, stdout, stderr) operations will be redirected to the parent process, where they can be accessed via the streams obtained using the methods `getOutputStream()`, `getInputStream()`, and `getErrorStream()`. The parent process uses these streams to feed input to and get output from the process. Because some native platforms only provide limited buffer size for standard input and output streams, failure to promptly write the input stream or read the output stream of the process may cause the process to block, or even deadlock.

This means that the Process object’s stdout and stderr outputs are stored in buffers of unknown (and system dependent) size. If one of these buffers fills up, the execution of the command just stops, as if somebody had paused it indefinitely. So, depending on your call’s talkativeness, your alert e-mail will not be sent, your system will appear to have failed to recognize the condition and you’ll never see a stacktrace or error exit value. All other e-mails (with less chatter) will go through just fine. This is a guaranteed source of frantic telephone calls, headaches and lost trust in your system and your ability to resolve issues.

And all the problems originate from one line of code. This is a maintenance mine with a stdout fuse.

The fix for this line might lie in the use of the ProcessBuilder class or your own utility code to drain the buffers. But how would you discover the mine before you deliver it?
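
As a sketch of such a fix – the class name and the use of a trivial Unix command here are my own illustration, not part of the original system – ProcessBuilder can redirect the child’s output so the OS pipe buffers can never fill up:

```java
import java.io.IOException;

public class SafeCommandRunner {

    // Runs an external command and waits for it to finish.
    // Discarding stdout/stderr means the pipe buffers can never
    // fill up and block the child process (DISCARD requires Java 9+).
    public static int run(String... command)
            throws IOException, InterruptedException {
        ProcessBuilder builder = new ProcessBuilder(command);
        builder.redirectErrorStream(true); // merge stderr into stdout
        builder.redirectOutput(ProcessBuilder.Redirect.DISCARD);
        Process process = builder.start();
        return process.waitFor(); // safe to block: nothing can clog the buffers
    }
}
```

If you actually need the command’s output, don’t discard it – read `process.getInputStream()` concurrently instead. The important part is that something always consumes the streams.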

## Mines often lie at borders

One thing that stands out in this line of code is that it passes control to the “outside”. It acts as a transit point to the underlying operating system and therefore has a lot of baggage to check. There are no safety checks implemented, so the transit must be regarded as unsafe. If you look out for transit points in your code (like passing control to the file system, the network, a database or another external system), make sure you’ve read the instructions and requirements thoroughly. The problems of a maintenance mine aren’t apparent in your code and only manifest themselves during the interaction with the external system. And this is a situation that happens disproportionately often in production and comparably seldom during development.

So, think of your code as a maintenance minefield and be careful around its borders.

What is your minesweeper story? Drop us a comment.

## Migrating from JScience quantities to Unit API 2.0

If you’re developing software that operates a lot with physical quantities you absolutely should use a library that defines types for quantities and supports safe conversions between units of measurements. Our go-to library for this in Java was JScience. The latest version of JScience is 4.3.1, which was released in 2012.

Since then a group of developers has formed that strives towards the standardization of a units API for Java. JScience maintainer Jean-Marie Dautelle is actively involved in this effort. The group operates under the name Units of Measurement, along with their GitHub presence unitsofmeasurement.

Over the years there have been several JSRs (Java Specification Requests) by the group. The current state of affairs is JSR-385, which is the basis of this post. The Units of Measurement API 2.0, or Unit API 2.0 for short, was released in July 2019.

### JARs

While JScience is distributed as one JAR (~600 KiB), a setup of Unit API involves three JARs (~300 KiB in total):

• unit-api-2.0.jar
• indriya-2.0.jar
• uom-lib-common-2.0.jar

JScience offers a lot more functionality than just quantities and units, but that’s the part we have been using and what we are interested in.

The unit-api JAR only defines interfaces, which is the scope of JSR-385. So you need an implementation to do anything useful with it. The reference implementation is called Indriya, provided by the second JAR. The third JAR, uom-lib-common, is a utility library used by Indriya for common functionality shared with other projects under the Units of Measurement umbrella.
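
For a Maven build it should be enough to declare Indriya; the other two JARs come in as transitive dependencies. The coordinates below are my assumption for the 2.0 release – verify the exact group, artifact and version against your repository:

```xml
<dependency>
  <groupId>tech.units</groupId>
  <artifactId>indriya</artifactId>
  <version>2.0</version>
</dependency>
```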

### Using quantities

Here’s a simple use of a physical quantity with JScience, in this example Length:

```
import org.jscience.physics.amount.Amount;

import javax.measure.quantity.Length;

import static javax.measure.unit.SI.*;

// ...

final Amount<Length> d = Amount.valueOf(214, CENTI(METRE));
final double d_metre = d.doubleValue(METRE);
```

And here’s the equivalent code using Units API 2.0 and Indriya:

```
import tech.units.indriya.quantity.Quantities;

import javax.measure.Quantity;
import javax.measure.quantity.Length;

import static javax.measure.MetricPrefix.CENTI;
import static tech.units.indriya.unit.Units.METRE;

// ...

final Quantity<Length> d = Quantities.getQuantity(214, CENTI(METRE));
final double d_metre = d.to(METRE).getValue().doubleValue();
```

### Consistency

While JScience also defines aliases with alternative spellings like METER and constants for many prefixed units like CENTIMETER or MILLIMETER, Indriya encourages consistency and only allows METRE, CENTI(METRE), MILLI(METRE).

### Quantity names

Most quantities have the same names in both projects, but there are some differences:

• Amount<Duration> becomes Quantity<Time>
• Amount<Velocity> becomes Quantity<Speed>

In these cases Unit API uses the correct SI names, i.e. time and speed. Wikipedia explains the difference between speed and velocity.

### Arithmetic operations

The method names for the elementary arithmetic operations have changed:

• minus() becomes subtract()
• times() becomes multiply()

Only the method name for division is the same:

• divide() is still divide()

However, the runtime exceptions thrown on division by zero are different:

• JScience: java.lang.ArithmeticException: / by zero
• Indriya: java.lang.IllegalArgumentException: cannot initalize a rational number with divisor equal to ZERO
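
A minimal sketch of the renamed operations, assuming the Indriya setup from the earlier examples (the values are chosen only for illustration):

```java
import javax.measure.Quantity;
import javax.measure.quantity.Length;

import tech.units.indriya.quantity.Quantities;

import static tech.units.indriya.unit.Units.METRE;

public class ArithmeticExample {
    public static void main(String[] args) {
        Quantity<Length> a = Quantities.getQuantity(6, METRE);
        Quantity<Length> b = Quantities.getQuantity(2, METRE);

        Quantity<Length> sum = a.add(b);             // 6 m + 2 m = 8 m
        Quantity<Length> difference = a.subtract(b); // 6 m - 2 m = 4 m
        Quantity<Length> scaled = a.multiply(3);     // 6 m * 3 = 18 m
        Quantity<Length> halved = a.divide(2);       // 6 m / 2 = 3 m

        System.out.println(sum + ", " + difference + ", " + scaled + ", " + halved);
    }
}
```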

### Type hints

If you divide or multiply two quantities the Java type system needs a type hint, because it doesn’t know the resulting quantity. Here’s how this looks in JScience versus Unit API:

With JScience:

```
Amount<Area> a = Amount.valueOf(100, SQUARE_METRE);
Amount<Length> b = Amount.valueOf(10, METRE);
Amount<Length> c = a.divide(b)
.to(METRE);
```

With Unit API:

```
Quantity<Area> a = Quantities.getQuantity(100, SQUARE_METRE);
Quantity<Length> b = Quantities.getQuantity(10, METRE);
Quantity<Length> c = a.divide(b)
.asType(Length.class);
```

### Comparing quantities

If you want to compare quantities via compareTo(), isLessThan(), etc. you need quantities of type ComparableQuantity. The Quantities.getQuantity() factory method returns a ComparableQuantity, which is a sub-interface of Quantity.
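
A small sketch, again assuming the Indriya imports from the earlier examples:

```java
import javax.measure.quantity.Length;

import tech.units.indriya.ComparableQuantity;
import tech.units.indriya.quantity.Quantities;

import static javax.measure.MetricPrefix.CENTI;
import static tech.units.indriya.unit.Units.METRE;

public class ComparisonExample {
    public static void main(String[] args) {
        ComparableQuantity<Length> a = Quantities.getQuantity(1, METRE);
        ComparableQuantity<Length> b = Quantities.getQuantity(120, CENTI(METRE));

        // the comparison converts between units automatically
        System.out.println(a.isLessThan(b));    // 1 m < 120 cm
        System.out.println(a.compareTo(b) < 0);
    }
}
```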

### Defining custom units

Defining custom units is very similar to JScience. Here’s an example for degree (angle), which is not an SI unit:

```public static final Unit<Angle> DEGREE_ANGLE =