Leibniz would have known how to override equals

Equality is a subtle and thorny business, in programming as well as in pure mathematics, physics and philosophy. Probably every software developer has been annoyed at some point by the unexpected behaviour of some ‘equals’ method or the corresponding operators and assertions. There are lots of questions that depend on context, and answering them for some particular context might cost some pain and time – here is a list of examples:

  • What about objects that come with database ids? Do the ids have to match for the objects to be equal?
  • Are dates with time zones equal if they represent the same instant but have a different time zone?
  • What about numbers represented by functions that compute digits up to a given precision?

Leibniz’ Law

This post is about applying an idea of Leibniz that I like to the problem of finding good answers to the questions above. It is called “Leibniz’ law” and can be phrased as a definition or characterization of equality:

Two objects are equal, if and only if, they agree in all properties.

If you are not familiar with the phrase “if and only if”: it comes from mathematics and is a shorthand for saying that two things are true:

  • If two objects are equal, then they agree in all properties.
  • If two objects agree in all properties, then they are equal.

Leibniz’ law is sometimes stated using mathematical symbols, like “\forall”, but this would be beside the point of this post – what those properties are will not be defined in a formal mathematical way. If I am in doubt about equality while programming, I am concerned about properties relevant to the problem I want to solve. For example, in almost all circumstances I can imagine, a relevant property of a list would be its length, but not the place in the computer’s memory where it is stored.

But what are relevant properties in general? For me, such a property is the result of running some piece of meaningful code. And what meaningful code is depends on your judgement of how the object in question should be used. So in total, this boils down to the following:

Two instances of a type are equal, if and only if, they yield the same results in any meaningful piece of code.

Has this gotten us anywhere? My answer is yes, since the question about equality was reduced to a question about use cases of a type, which might be a starting point of defining a new type anyway.
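
To make this concrete, here is a sketch in Java (the class and its use cases are made up for this post): if the meaningful code for a timestamp type only ever asks for the instant it denotes, Leibniz’ law tells us to ignore the time zone in ‘equals’ – which also answers the second question from the list above.

import java.time.Instant;
import java.time.OffsetDateTime;

final class Timestamp {
    private final OffsetDateTime value;

    Timestamp(OffsetDateTime value) {
        this.value = value;
    }

    // The only property used by our (hypothetical) meaningful code:
    Instant instant() {
        return value.toInstant();
    }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof Timestamp)) {
            return false;
        }
        // Equal if and only if they agree in all relevant properties,
        // which here is just the instant:
        return instant().equals(((Timestamp) other).instant());
    }

    @Override
    public int hashCode() {
        return instant().hashCode();
    }
}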

Turtles all the way down

Please take a moment to note what a sneaky beast equality can be: Above I explained equality by using equality – right where I said “same results”. It is really hard to make statements about anything at all without using some notion of equality in some way. Even in programming, where you can freely define when two objects are equal, you can very well forget that you are using a system, namely your programming language, which usually already comes with an intricate notion of equality defined on the syntax you are using to define your notion of equality…

On a more practical note, that means that messed up notions of equality usually propagate if you define new kinds of objects from known ones.

Relation to Liskov’s Principle

With our above definition, we are very close to an informal interpretation of Liskov’s Substitution Principle, which we can rephrase as:

In all meaningful code for a type, an instance of a subtype has to behave the same way.

For comparison, here is the message of this post stated in the same style:

In all meaningful code for a type, two equal instances of the type should behave the same way.

Be precise, round twice

Recently, after implementing a new feature in a software system that outputs lots of floating point numbers, I realized that the last digit was off by one for about one in a hundred numbers. As you might suspect at this point, the culprit was floating point arithmetic. This post is about a solution that turned out to be surprisingly easy.

The code I was working on loads a couple of thousand numbers from a database, stores them as doubles, does some calculations with them and outputs some results rounded half-up to two decimal places. The new feature I had to implement involved adding constants to those numbers. For one value, 0.315, the constant in one of my test cases was 0.80. The original output was “0.32” and I expected to see “1.12” as the new rounded result, but what I saw instead was “1.11”.

What happened?

After the fact, nothing too surprising – I had just hit decimals which do not have a finite representation as a binary floating point number. Let me explain, in case you are not familiar with this phenomenon: 1/3 happens to be a fraction which does not have a finite representation as a decimal:

1/3=0.333333333333…

Whether a fraction has a finite representation depends not only on the fraction, but also on the base of your number system. And so it happens that an innocent looking decimal like 0.8=4/5 has the following representation in base 2:

4/5=0.1100110011001100… (base 2)

So if you represent 4/5 as a double, it will turn out to be slightly less. In my example, both numbers, 0.315 and 0.8, do not have a finite binary representation, and with those errors their sum turns out to be slightly less than 1.115, which yields “1.11” after rounding half-up. On a very rough count, this problem appeared for about one in a hundred numbers in my output.

What now?

The customer decided that the problem should be fixed if it appears too often and does not take too much time to fix. When I started to think about some automated way to count the mistakes, I began to realize that I actually had all the information I needed to compute the correct output – I just had to round twice. Once, say, at the fourth decimal place and a second time at the required second decimal place:

(new BigDecimal(0.8d + 0.315d))          // the double sum is 1.1149999…, due to binary floating point
    .setScale(4, RoundingMode.HALF_UP)   // first rounding:  1.1150
    .setScale(2, RoundingMode.HALF_UP)   // second rounding: 1.12

Which produces the desired result “1.12”.

If doubles are used, the errors explained above can only make a difference of about 10^{-15}. So as long as we just add a double to a number with a short decimal representation, while staying in the same order of magnitude, we can reproduce the precise numbers from the doubles by setting the scale (which amounts to rounding) of our double as a BigDecimal.
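
Wrapped up as a small helper, the output function might look like this (a sketch – the class and method names are mine, not from the original code):

import java.math.BigDecimal;
import java.math.RoundingMode;

class DoubleRounding {
    // Round twice: first at a scale where the floating point error is invisible,
    // then at the scale required for the output.
    static BigDecimal roundTwice(double value) {
        return new BigDecimal(value)
                .setScale(4, RoundingMode.HALF_UP)
                .setScale(2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(DoubleRounding.roundTwice(0.8d + 0.315d)); // prints 1.12
    }
}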

But of course, this can go wrong if we use numbers that do not have a short, neat decimal representation like 0.315 has. In my case, I was lucky. First, I knew that all the input numbers have a precision of three decimal places. There are some calculations to be done with those numbers. But: all numbers are roughly in the same order of magnitude, there is only comparing, sorting and filtering, and the only honest calculation is taking arithmetic means. And the latter only meant I had to increase the scale of the first rounding from 4 to 8 to never see any error again.

So, this solution might look a bit sketchy, but in the end it solves the problem within the limited time budget, since the only change happens in the output function. And it can also be a valid first step of a migration to numbers with managed precision.

Math development practices

As a mathematician who recently switched to almost full-time software development, I often compare the two fields. During the last years of my mathematics career I was in the rather unique position of doing both at once – developing software of some sort and doing research in pure mathematics. This is due to a quite new mathematical discipline called Homotopy Type Theory, which uses a different foundation than the mathematics you might have learned at a university. While it has been possible for quite some time to check formalized mathematics using computers, the usual way to do this entails a crazy amount of work if you want to use it for recent mathematics. By some lucky coincidences this was different for my area of work, and I was able to write down my math research notes in the functional language Agda and have them checked for correctness.

As a disclaimer, I should mention that what I mean by “math” in this post is very far from applied mathematics, and very little of the kind of math I talk about is implemented in computer algebra systems. So this post is about looking at pure, abstract math as if it were a software project. Of course, this comparison is a bit off from the start, since there is no compiler for the math written in articles, but it is a common belief that it should always be possible to translate correct math to a common foundation like Zermelo-Fraenkel set theory – and that is at least something we can type check with software (e.g. Isabelle).

Refactoring is not well supported in math

In mathematics, you want to refactor what you write from time to time pretty much the same way and for the same reasons as you would while developing software. The problem is, you do not have tools which tell you immediately if your change introduces bugs, like automated tests and compilers checking your types.

Most of the time, this does not cause problems. Judging from my experience with refactoring software, when a refactoring breaks something that is detected by a test or the compiler, fixing it is usually just a matter of adjusting some details. And, in fact, I would conjecture that almost all math articles contain exactly this kind of error – which is no problem at all, since the mathematicians reading those articles can fix them or won’t even notice.

As with refactoring in software development, what does matter are the rare cases where it is crucial that some easy-to-overlook details match exactly. And this is a real issue in math – sometimes a statement gets reformulated while proving it, and the changes are so subtle that you do not even realize you have to check whether what you prove still matches your original problem. Tools that help you to catch those bugs could really help math – but the math has to be formalized to have tools like that, and that is not feasible so far for most math.

Retrospectively, being able to refactor my math research was the biggest advantage of having fully formal research notes in Agda. There is no powerful IDE like the ones used in mainstream software development, just a good emacs-mode. But being able to make a change and check afterwards if things still compile was already enough to enable me to do things I would not have done in pen-and-paper math.

Not being able to refactor might also be the root cause of other problems in math. One which would be really horrible for a software project is that sometimes important articles do not contain working versions of the theorems used in some field of study, and you essentially need to find some expert in the field to tell you things like that. In a software project, that would mean you have to find someone who allegedly made the code base run some time in the past by applying lots of patches which are not in the repository – and which he hopefully is still able to find.

The point I wanted to make so far is: in some respects, this comparison looks pretty bad for math, and it becomes surprising that it works in spite of these deficiencies. So the remainder of this post is about the things on the upside that make math check out almost all the time.

Math spent person-centuries on designing its datatypes

This might be exaggerated, but it is probably not that far off. When I started studying math, one of my lecturers said “inventing good definitions is no less important than proving new results”. Today, I could not agree more: immense work went into the definitions in pure math, and they allowed me to solve problems I would be too dumb to even think about otherwise. One analogue in programming is finding the ‘correct’ datatypes, which, if achieved, can make your algorithms a lot easier. Another analogue is using good libraries.

Math certainly reaps a great benefit from its well-thought-through definitions, but I must also admit that the comparison is pretty unfair, since pure mathematicians usually take the freedom to choose nice things to reason about. But this is a point to consider when analysing why math still works, even though some of its practices should doom a software project.

I chose to speak about ‘datatypes’ instead of, say, ‘interfaces’, since I think that mathematics does not make that much use of polymorphism like I learned it in school around 2000. Instead, I think, in this respect mathematical practice is more in line with a data-oriented approach (as we saw last week here on this blog): in math, if you want your X to behave like a Y, you usually give a map that turns your X into a Y, and then you use Y.

All code is reviewed

Obviously, like everywhere in science, there is a peer-review process if you want to publish an article. But there are actually more instances of things that can be called a review of your math research. Possibly surprising to outsiders, mathematicians talk a lot about their ideas to each other, and these kinds of talks can be even closer to code reviews than the actual peer reviews are. This might also be comparable to pair programming. Also, these review processes are used to determine success in math. Or, more to the point, your math only counts if you managed to communicate your ideas successfully and convince your audience that they work.

So having the same processes in software development would mean that you have to explain your code to your customer, who would be a software developer like yourself, and he would pay you for every convincing implementation idea. While there is a lot of nonsense in that thought, please note that in a world like that, you cannot get paid for a working 300-line block code function that nobody understands. On the other hand, you could get paid for understanding the problem your software is supposed to solve, even if your code fails to compile. And in total, the interesting thing here for me is that this shift in incentives, and the emphasis on practices that force you to understand your code by communicating it to others, can save a very large project under some quite bad circumstances.

Getting started with exact arithmetic and F#

In this blog post, I claimed that some exact arithmetic beyond rational numbers can be implemented on a computer. Today I want to show you how that might be done by showing you the beginning of my implementation. I chose F# for the task, since I had been waiting for an opportunity to check it out anyway. So this post is a more practical (first) follow-up on the more theoretical one linked above, with some of my F# development experiences on the side.

F# turned out to be mostly pleasant to use. The only annoying thing that happened to me along the way was some weirdness of F#, or of the otherwise very helpful IDE Rider: F# seems to need a compilation order of the source code files, and I only found out by acts of desperation that this order is supposed to be controlled by drag & drop.

The code I want to (partially) explain is available on github:

https://github.com/felixwellen/ExactArithmetic

I will link to the current commit when I discuss specific sections below.

Prerequisite: Rational numbers and polynomials

As explained in the ‘theory post’, polynomials will be the basic ingredient to cook more exact numbers from the rationals. The rationals themselves can be built from ‘BigInteger’s (source). The basic arithmetic operations follow the rules commonly taught in school (here is addition):

static member (+) (l: Rational, r: Rational) =
    Rational(l.up * r.down + r.up * l.down,
             l.down * r.down)

‘up’ and ‘down’ are ‘BigInteger’s representing the numerator and denominator of the rational number. ‘-’, ‘*’ and ‘/’ are defined in the same style and extended to polynomials with rational coefficients (source).
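
For example, multiplication follows the same pattern – this is a sketch of mine, not copied from the repository (the spaces inside ‘( * )’ keep F# from reading the asterisk as the start of a comment):

static member ( * ) (l: Rational, r: Rational) =
    Rational(l.up * r.up,
             l.down * r.down)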

There are two things, important for this post, that polynomials have and rationals do not: degrees and remainders. The degree of a polynomial is just the number of its coefficients minus one, unless it is the constant zero. The zero-polynomial has degree -1 in my code, but that specific value is not too important – it just needs to be smaller than all the other degrees.
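
If the coefficients are stored without trailing zeros, the degree is a one-liner. A sketch, assuming a member ‘Coefficients’ holding the coefficient array:

member p.Degree =
    p.Coefficients.Length - 1  // the zero polynomial has no coefficients, so this yields -1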

Remainders are a bit more work to calculate. For two polynomials P and Q, where Q is not zero, there is always a unique polynomial R with a degree smaller than that of Q, such that

P = Q * D + R

for some polynomial D (the algorithm is here).

Numberfields and examples

The ingredients are put together in the type ‘NumberField’, which is the name used in algebra, so it is precisely what is described here. Yet it is far from obvious that this is the ‘same’ thing as in my example code.

One source of confusion with this approach to exact arithmetic is that we do not know which solution of a polynomial equation we are using. In the example with the square root, the solutions only differ in sign, but things can get more complicated. This ambiguity is also the reason that you will not find a function in my code that approximates the elements of a number field by a decimal number. In order to do that, we would have to choose a particular solution first.

Now, in the form of unit tests, we can look at a very basic example of a number field: the one from the theory post containing a solution of the equation X²=2:

let TwoAsPolynomial = Polynomial([|Rational(2,1)|])
let ModulusForSquareRootOfTwo = 
     Polynomial.Power(Polynomial.X,2) - TwoAsPolynomial
let E = NumberField(ModulusForSquareRootOfTwo)   
let TwoAsNumberFieldElement = NumberFieldElement(E, TwoAsPolynomial)

[<Fact>]
let ``the abstract solution is a solution of the given equation``() =
    let e = E.Solution in  (* e is a solution of the equation 'X^2-2=0' *)
    Assert.Equal(E.Zero, e * e - TwoAsNumberFieldElement)

There are applications of these numbers which have no obvious relation to square roots. For example, there are number fields containing roots of unity, which would allow us to calculate with rotations in the plane by rational fractions of a full rotation. This might be the topic of a follow-up post…

Some strings are more equal before your Oracle database

When working with customer code based on ADO.net, I was surprised by an error message I had not seen before.

The German message just told me that some UpdateCommand had an effect on “0” instead of the expected “1” rows of a DataTable. This happened while writing some changes to a table using an OracleDataAdapter. What really surprised me at this point was that there certainly was no other thread writing to the database during my update attempt. Even more confusing was that my method of changing DataTables and using the OracleDataAdapter to write the changes had worked pretty well so far.

In this case, the title “DBConcurrencyException” turned out to be quite misleading. The text message was absolutely correct, though.

The explanation

The UpdateCommand is a prepared statement generated by the OracleDataAdapter. It may be used to write the changes a DataTable keeps track of to a database. To update a row, the UpdateCommand identifies the row with a WHERE-clause that matches all original values of the row and then writes the updates to it. So if we have a table with two columns, a primary id and a number, the update statement would essentially look like this:

UPDATE EXAMPLE_TABLE
  SET ROW_ID = :current_ROW_ID,
      NUMBER_COLUMN = :current_NUMBER_COLUMN
WHERE
      ROW_ID = :old_ROW_ID
  AND NUMBER_COLUMN = :old_NUMBER_COLUMN

In my case, the problem turned out to be caused by string-valued columns and was due to some Oracle weirdness that was already discussed on this blog (https://schneide.blog/2010/07/12/an-oracle-story-null-empty-or-what/): on writing, empty strings (more precisely: empty VARCHAR2s) are transformed into DBNull. Note, however, that the following two conditions are not equivalent:

WHERE TEXT_COLUMN = ''
WHERE TEXT_COLUMN is null

The first one will just never match… (at least with Oracle 11g). So saying that null and empty strings are the same would not be an accurate description.

The WHERE-clauses of the generated UpdateCommands look more complicated for (nullable) columns of type VARCHAR2. But instead of trying to understand the generated code, I just guessed that the problem was a bug or inconsistency in the OracleDataAdapter that caused the exception. And in fact, it turned out that the problem occurred whenever I tried to write an empty string to a column that was DBNull before. Which would explain the message of the DBConcurrencyException, since the DataTable thinks there is a difference between empty strings and DBNulls, but due to the conversion there will be no difference when the corresponding row is updated. So once understood, the problem was easily fixed by transforming all empty strings to null prior to invoking the UpdateCommand.
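
A minimal sketch of that fix (the variable names are mine; ‘table’ is the DataTable and ‘adapter’ the OracleDataAdapter):

foreach (DataRow row in table.Rows)
{
    // Only touch rows that will be written anyway:
    if (row.RowState != DataRowState.Modified && row.RowState != DataRowState.Added)
        continue;
    foreach (DataColumn column in table.Columns)
    {
        // Oracle will store an empty VARCHAR2 as null anyway,
        // so make the DataTable agree with that before the update:
        if (column.DataType == typeof(string) && (row[column] as string) == string.Empty)
        {
            row[column] = DBNull.Value;
        }
    }
}
adapter.Update(table);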

Dragging DataGridRows in WPF

The Windows Presentation Foundation (WPF) is a framework for graphical user interfaces. It has a powerful component called DataGrid, which is pretty useful for letting the user interact with data loaded from a database.

The design is flexible, and the cells of this grid can be filled with other elements like ComboBoxes, Buttons or Images in a straightforward way.

This post is about handling Drag & Drop for the rows of a DataGrid with occasional ComboBox columns. The approach presented here will not work if all the columns are ComboBoxes, since the rows will be draggable only in the non-ComboBox columns.

The problem I had with the drag and drop solutions I found on the web was that they prevented the ComboBoxes from reacting correctly to mouse events. So the limited but quite simple solution I settled for lets the cells handle the mouse events.

According to the documentation, the first thing you have to do is:

Identify the start of a drag event

As mentioned above, among the various choices of which event of which element should be used to decide that a drag has started, the MouseMove of the text cells of my DataGrid turned out to be the best choice by far:
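<!-- Sketch (not the original markup): register a MouseMove handler on the cells;
     the handler name 'Cell_MouseMove' is made up. -->
<DataGrid.Resources>
    <Style TargetType="DataGridCell">
        <EventSetter Event="MouseMove" Handler="Cell_MouseMove"/>
    </Style>
</DataGrid.Resources>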

In the so-called code-behind, the obvious move is to say that if the user has the left button pressed while moving the mouse, she wants to start a drag:
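// Sketch, using the made-up names from the XAML above:
private void Cell_MouseMove(object sender, MouseEventArgs e)
{
    if (e.LeftButton == MouseButtonState.Pressed)
    {
        var row = RowContaining(sender);
        if (row != null)
        {
            // Put the dragged row into the drag & drop data:
            DragDrop.DoDragDrop((DependencyObject)sender, row, DragDropEffects.Move);
        }
    }
}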

And the DataRowView that is being dragged may be found with the following method:
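// Sketch: walk up the visual tree from the cell to its DataGridRow and
// return the bound DataRowView (assuming the grid is bound to a DataTable).
private DataRowView RowContaining(object sender)
{
    var element = sender as DependencyObject;
    while (element != null && !(element is DataGridRow))
    {
        element = VisualTreeHelper.GetParent(element);
    }
    return (element as DataGridRow)?.Item as DataRowView;
}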

Dropping

The “Drop” part is about as straightforward as I expected:
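<!-- Sketch: allow dropping on the grid and register a handler (the name is made up). -->
<DataGrid AllowDrop="True" Drop="DataGrid_Drop" ItemsSource="{Binding}"/>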

And the handler can extract the row we gave above from the event arguments:
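// Sketch: extract the DataRowView that was put into the drag data above.
private void DataGrid_Drop(object sender, DragEventArgs e)
{
    if (e.Data.GetDataPresent(typeof(DataRowView)))
    {
        var row = (DataRowView) e.Data.GetData(typeof(DataRowView));
        // ... move or reorder the dropped row here
    }
}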

Inductive types on the rise

One thing I really got used to when using Agda for academic projects are inductive types. And inductive types are probably what I currently miss most when using mainstream languages to solve practical problems.

This post is aimed at software developers who do not know inductive types (Agda, Coq, Idris), variants (OCaml, F#) or GADTs (Haskell). Other software developers might still be interested in the last section about higher inductive types.

What are inductive types?

I will use Agda’s syntax for the most part. Here is a simple example of an inductive type named ‘Bool’:

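-- (reconstruction of the original screenshot, from the description below)
data Bool : Set where
  True : Bool
  False : Bool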

The colons are to be read as ‘is of type’ and ‘Set’ is the type of types.  The code defines an inductive type named ‘Bool’ with the constructors ‘True’ and ‘False’. I use ‘constructor’ with a broader meaning than it has in object oriented programming.

The type ‘Bool’ will behave somewhat like the following enum in Java:

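// (sketch of the Java enum from the original screenshot)
public enum Bool {
    True,
    False
}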

The analogy with ‘enum’ works as long as the constructors have zero arguments. Here is an inductive type where one constructor has one argument:

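-- (reconstruction; the name ‘None’ for the zero-argument constructor is an assumption)
data Optional (A : Set) : Set where
  None : Optional A
  Some : A → Optional A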

For any type ‘A’, ‘Optional A’ will be a type that behaves like an immutable version of ‘Optional<A>’ in Java. So, for example, ‘Some True’ would be a value of type ‘Optional Bool’ (note that function application is written without parentheses). It is also possible to have constructors with arguments of the type being defined:

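-- (reconstruction; the names ‘Natural’ and ‘Zero’ are assumptions, ‘Successor’ is used below)
data Natural : Set where
  Zero : Natural
  Successor : Natural → Natural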

The natural numbers defined in this way will be great for verification and very bad for actual calculations since this representation is unary. For example, the number three can be defined with three constructor calls:

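-- (reconstruction, using the names from above)
three : Natural
three = Successor (Successor (Successor Zero))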

The really interesting thing to note is that this quite short inductive definition of the natural numbers actually behaves like the natural numbers from mathematics. And you can prove things about those naturals using the same induction-based proofs you learn in mathematics courses. In Agda, those proofs can be done using pattern matching, the topic of the next section.

In Agda, inductive definitions are also supported for dependent types. There are lots of interesting things that can be done using this combination of concepts. One is an inductive definition of equality for all types. This won’t give you a reasonable ‘Equals’-method for all your types, but it provides you with a consistent notion of what such a method should return.

Patterns

The great thing about inductive types is that functions may be defined by pattern matching. A simple case is the negation function on the type ‘Bool’ defined above:

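-- (reconstruction, matching the description below)
negation : Bool → Bool
negation True = False
negation False = True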

The first line declares the type of the new function ‘negation’ to be ‘Bool -> Bool’ and the lines below are a definition by pattern matching. Agda checks if the patterns cover all cases. If you want the same compile-time check in Java (prior to version 12), you would have to use a trick.

Here is an example using the types defined above, with a more complicated pattern:

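-- (reconstruction, matching the description below)
IsEven : Natural → Bool
IsEven Zero = True
IsEven (Successor Zero) = False
IsEven (Successor (Successor n)) = IsEven n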

Note that ‘IsEven’ is recursively used in the last line. The termination checker of Agda makes sure that the recursion doesn’t loop forever, and this definition passes the check, since ‘n’ is of a lower ‘height’ than the argument ‘Successor (Successor n)’. So progress is made on each recursion and the computation will stop eventually.

Those checks are important when pattern matching is used to prove things, which can be done for example in the style of the following pseudo code:

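-- (pseudo code sketch; ‘P’, ‘proof-for-zero’ and ‘induction-step’ are placeholders)
proof : (n : Natural) → P n
proof Zero = proof-for-zero
proof (Successor n) = induction-step n (proof n)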

Higher inductive types

In mathematics, new sets are often created by identifying elements of some easy-to-understand set. For example, the rational numbers can be constructed as pairs of integers ‘(a,b)’, where b is not zero, by identifying the pairs ‘(a,b)’ and ‘(c,d)’ whenever ‘c*b = a*d’.

It is now possible in some systems to construct such quotients as inductive types. Agda has a special mode called ‘cubical’ which allows inductive types to have constructors that ‘produce’ equalities in the inductive type. Here is an excerpt from the standard library for Agda’s cubical mode that defines the rational numbers inductively:

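-- (rough sketch of the excerpt; the signatures are simplified, not verbatim)
data ℚ : Type₀ where
  con : (u : ℤ) (a : ℤ) → ¬ (a ≡ 0) → ℚ
  path : ∀ u a v b p q → (u · b) ≡ (v · a) → con u a p ≡ con v b q
  trunc : (x y : ℚ) (p q : x ≡ y) → p ≡ q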

The first constructor ‘con’ tells us that we can produce a rational number from a pair of integers ‘u’ and ‘a’, provided ‘a’ is not zero. The constructor ‘path’ makes the identification explained above. The third constructor ‘trunc’ has to do with some curious weirdness that comes with having ‘inductive equalities’ – some elements of a type might be equal in different ways. ‘trunc’ uses inductive equalities again to ‘truncate’ the possibilities of how rational numbers can be equal back to the expected ‘yes’ and ‘no’.

This appearance of extra equalities between things is by no means a pathology, but a connection to a topic in pure mathematics called homotopy theory. So far, however, there are not many suggestions how the homotopy theory we have at our fingertips in Agda can help us with practical programming. If we ‘trunc’ our quotients, however, we have a pretty usable way of mimicking the mathematical style described above when defining data types.

As more and more concepts from academic languages pour into the mainstream, I have hopes that I can use at least some inductive techniques some day, saving me from some annoying bugs and hard to read constructions.

Compiling and using Agda through the Windows Linux Subsystem

The Windows Subsystem for Linux (WSL) more or less runs a Linux kernel on Windows 10. In this post, I will describe how to use WSL to compile and run Agda, a dependently typed functional programming language. Compiling Agda yourself makes sense if you want to use the latest features, of which there are quite nice ones. The approach presented here is just my preferred way of compiling and using Agda on a Linux system, with some minor adjustments.

Prerequisites for compiling

First, you need the “Ubuntu app”, which you can install following this guide. Essentially, you just have to activate WSL and install the app through the Microsoft Store, but following the guide step by step allowed me to do it without creating a Microsoft account.
After installing, your Ubuntu app will ask you to create a new account, and it will probably need some updating, which you can do by running:

sudo apt-get update

A usability hint: you can paste into the Ubuntu window by copying with CTRL-C elsewhere and right-clicking into the Ubuntu window. You have to make the Ubuntu window active before the right-click. You can copy text from the Ubuntu window by marking it and pressing CTRL-C.

There are two tools that can fetch the dependencies and compile Agda for you, “cabal” and “stack”. I prefer to use stack:

https://docs.haskellstack.org/en/stable/README/

After installing, stack asked me to append something to my PATH, which I did only for the current session:

export PATH=$PATH:/home/USER/.local/bin

Getting the sources and compiling

Git is preinstalled, so you can just get the agda sources with:

git clone https://github.com/agda/agda.git

Go into the agda folder. It will contain a couple of files with names like

stack-8.8.1.yaml

These are configuration files for stack. The numbers indicate the version of ghc (the Haskell compiler) that will be used. I always take the newest version (if everything works with it…) and make a copy:

cp stack-8.8.1.yaml stack.yaml

– since stack will just use the “stack.yaml” for configuration when run in a folder. Now:

stack setup

will download ghc binaries and install them somewhere below “HOME/.stack/”. Before building, we have to install a dependency (otherwise there will be a linker error with the current Ubuntu app):

sudo apt install libtinfo-dev

Then tell stack to build and hope for the best (this took around 5.2GB of RAM and half an hour on my system…):

stack build

On success, the last lines of the output tell you where the binaries were installed. If you are not confident with finding that location again, you should note down the path from those last lines. We will need “agda” and “agda-mode”, which are in the same folder.

Using Agda

Of course, you can use agda from the command line, but it is a lot more fun to use it from emacs (or possibly atom, which I have not checked out). Since the Ubuntu app does not come with a window system, and on the other hand our freshly built agda cannot easily be invoked from Windows programs, I found it most convenient to run emacs in the Ubuntu app but use an X server on Windows.

For the latter, you can install Xming and start it. Then install emacs in the Ubuntu app:

sudo apt install emacs

Before starting emacs, we should install the “agda-mode” for emacs. This can be done by

./stack/install/[LONG PATH FROM ABOVE]/bin/agda-mode setup

Now run emacs with the variable “DISPLAY” set to something which connects it to Xming, and “PATH” appended by the long path from above, so emacs can find agda (and agda-mode):

PATH=$PATH:~/agda/.stack-work/[LONG PATH FROM ABOVE]/bin/ DISPLAY=:0 emacs

Then everything should work and you can test the agda-mode, for example with a new file containing the following:

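-- (any small Agda file will do; this is just an example – save it as test.agda)
module test where

data Bool : Set where
  True : Bool
  False : Bool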

CTRL-C CTRL-L tells agda-mode to check and highlight the contents of the file. Here is more on using the agda-mode. Have fun!

A new star in software verification?

F* (pronounced F star) is a functional programming language with a dependent type system that supports verification. What follows is a story about my experiences with F* and a concluding opinion on why languages like F* could be useful for the working software developer.

There is a big project using F* and related tools with the ambitious goal of building a verified implementation of TLS.
So far, judging from the website, there is a verified efficient implementation of the relevant cryptographic primitives, but the implementation of the protocol itself is not yet verified. That is certainly already a great achievement, and it got me interested in F*.

Setting it up

There is an F* tutorial which comes with an online editor, which is great if you just want to play a bit with the language. You don’t have to install anything, provided you have a recent mainstream browser. The tutorial is fun to do, but after I had spent some time on it, I wanted to know what it is like to use F* off the prescribed path.

Sooner or later, you will probably want to install it on your machine, which is not a problem at all if you are on a Linux system and an emacs user. There is a binary and an emacs mode, which are also easy to install on Windows, but that combination was somehow forbiddingly slow on my (fast) machine. In the following, I will show code excerpts as they appear in the emacs mode.

Programming in F*

Let me first show you what programming in F* looks like. Here is a definition of the commonly known (higher order) function “filter”, which takes a predicate (a function to bool in this case) and a list, and returns the “sublist” of all things in the list matching the predicate:

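(* reconstruction of the screenshot, matching the description below *)
let rec filter p l =
  match l with
  | [] -> []
  | x :: xs -> if p x then x :: filter p xs else filter p xs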

The arrows and strange dots are just “->” and “::” rendered by the emacs mode. Otherwise, it is exactly the pattern matching definition that would also work in OCaml or F# (where you can write exactly the same thing).

The second argument “l” is matched against the two things a list might be: either the empty list “[]” or a list consisting of some first element “x” and a rest list “xs”. In the first case, we are done, and in the second case we decide if “x” should be in the result list and recursively call “filter” to deal with the “xs”.

The type of “filter” can be inferred by F*, but we could have declared it before the definition like this:

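(* sketch of the declaration; the emacs mode renders 'a as an alpha *)
val filter : ('a -> Tot bool) -> list 'a -> Tot (list 'a)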

The alpha is really just fancy emacs-mode rendering for “ 'a ” and stands for an arbitrary type. This is also the type F* would infer, since what we did does not require anything special from the element type of the lists.

So the declaration says that our function takes some predicate “p”, a list of things “p” may be applied to, and returns a list of the same type, just with “Tot” written in front of it. The latter tells us that filter is a total function, i.e. it terminates and will not throw an exception (I guess there is a bit more to it, but so far that explanation has worked well for me). There is also a modifier “ML” which would tell us that the function behaves like a function in OCaml – so it could throw runtime exceptions or loop forever. After introducing you to some verification features, I will tell you why this distinction is important here.

Let us first rewrite our declaration a bit:

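(* sketch of the rewritten declaration *)
val filter : p:(nat -> Tot bool) -> l:list nat -> Tot (list nat)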

I replaced the alpha with a fixed type, the natural numbers (that will simplify things…), and introduced variables “p” and “l” that we can now use to construct types. What kind of types? Well, in this case, I want to construct a replacement for “(list N)” which tells us a bit more about the result of filtering a list. One reason to have types is to make sure that the result of some construction meets a specification. Usually these kinds of specifications are quite coarse and might just tell you that the result is an integer. So far, the specification of the filter function only tells us that the result will be a list of natural numbers. In F*, we can do a lot more. The following specifies the result of filter to be the sublist of all elements of “l” that satisfy “p”:

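(* sketch of the refined declaration, in ASCII; emacs renders the logical
   symbols, and 'element' is a function defined further below *)
val filter : p:(nat -> Tot bool) -> l:list nat ->
  Tot (l':list nat{forall x. element x l' <==> (element x l /\ p x)})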

Between the curly braces, I put a formula in predicate logic which describes this specification. The “∧” is a logical “and”, the “∀” is the “forall” from math, and “element x l'” is some function which evaluates to true if x is an element of l' and false if this is not the case.

Now, one of the things that makes me happy when playing with F* is that the definition of filter I gave in the beginning checks against this new declaration as is. This is due to F* using a problem solver (an SMT solver) to prove that our specification is satisfied, and the not-so-random coincidence that, to prove the formula, one can use the same structural induction on the list as we used to recursively define “filter”.

For completeness, here is my definition of “element”:

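(* sketch of the definition of 'element' *)
let rec element (#a:eqtype) (x:a) (l:list a) : Tot bool =
  match l with
  | [] -> false
  | y :: ys -> x = y || element x ys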

The “#a:eqtype” means that “a” is some type that comes with a total equality function for terms of type “a”. The natural numbers used above are such an “a”. By the way: those natural numbers are not limited in size, like e.g. “unsigned int”. They behave a lot like “BigInteger” in some languages and are certainly not fast, but they are good for proving things.

And now I should tell you that the declaration of “filter” which so nicely specified what filter does is probably not what you should do. In fact, it is usually better to split things into definitions and “Lemma”s. So here is again the original definition of filter (with its inferred type) and a Lemma about it:

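(* sketch of the split into definition and Lemma; the proof body is my guess
   at the shape, following the structural induction mentioned above *)
let rec filter p l =
  match l with
  | [] -> []
  | x :: xs -> if p x then x :: filter p xs else filter p xs

val filter_specification : p:(nat -> Tot bool) -> l:list nat ->
  Lemma (forall x. element x (filter p l) <==> (element x l /\ p x))
let rec filter_specification p l =
  match l with
  | [] -> ()
  | x :: xs -> filter_specification p xs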

The “Lemma” is again like “Tot” above – it modifies the type. Technically, “filter_specification” is a function, but the only important thing here is whether its definition (“let rec filter_specification = […]”) type checks against the declaration (“val filter_specification […]”) – because that means the definition is a proof of the proposition encoded in the declaration. In particular, it is not important what this function returns. In fact, “Lemma” produces a type all of whose terms are equal.

Verifying Project Euler exercises

The first thing I usually do when checking out a new programming language is to solve a couple of Project Euler exercises, to get fluent in the basic constructions. So, I thought, why shouldn’t I try the same with F* and prove something about my solution code?

Problem 1 asks us to add all multiples of 3 and 5 below 1000. Usually, my mind drops the “below” and I get a wrong result. But not this time! Here is my definition of the natural numbers below 1000, which produces the correct list of numbers – specifying precisely what it is made me read the problem text very carefully:

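(* sketch in the spirit of the original; the helper, the names and the exact
   refinement are my guesses, and F* may need extra hints to verify them as written *)
let rec naturals_below (n:nat) : Tot (list nat) =
  if n = 0 then [] else (n - 1) :: naturals_below (n - 1)

val naturals_below_1000 : l:list nat{forall x. element x l <==> x < 1000}
let naturals_below_1000 = naturals_below 1000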

Now, “naturals_below_1000” is specified by its type, which turned out to work well in this case, since it is not a general purpose function I will reuse.

Now for the task at hand, we need to filter the numbers that are multiples of 3 or 5. I took the liberty of equivalently saying: numbers divisible by 3 or 5:

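(* sketch; 'divisible_by' and the list name are my inventions *)
let divisible_by (d:pos) (n:nat) : Tot bool = n % d = 0

val multiples_of_3_or_5 : l:list nat{forall x.
  element x l <==> (element x naturals_below_1000 /\ (divisible_by 3 x \/ divisible_by 5 x))}
let multiples_of_3_or_5 =
  filter (fun n -> divisible_by 3 n || divisible_by 5 n) naturals_below_1000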

I used the modulo arithmetic “%” defined in some library to get an easy definition of “divides”. Also note that I use my own definitions in proving that I defined the correct list. Both are a bit fishy if I were to claim that I proved that some term in my program meets some mathematical definition commonly used for the concepts in the problem text. So you could call that lazy or efficient, according to taste.

Now, the number which the exercise asks for can be defined:

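(* sketch; 'sum' is assumed to be a simple recursive definition *)
let rec sum (l:list nat) : Tot nat =
  match l with
  | [] -> 0
  | x :: xs -> x + sum xs

let ex1 = sum multiples_of_3_or_5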

F* can not only check if everything is correct but also evaluate expressions – in particular the expression “ex1”, which turns out to be the correct answer to the problem.

Is it of use for the working software developer?

I used Scala some years ago to develop a large prototype and, while there were some pains, it worked about as well as expected, and I’m quite sure it would have been less fun and less efficient to use Java for the same project. One important point in the decision to use Scala was the possibility to use Java libraries.
F# has the same kind of advantage, and I learned to love its type system from OCaml, which apparently was the basis for F#. So for the same reasons that made Scala work for me when constructing prototypes, I could decide to use F# in similar situations. I don’t know how realistic my hope is that F* can be used on top of F# one day, but that would be a great improvement over just having F#, which is already something I look forward to. So far, it is possible to extract F# code from an F* program.

Usually, tools like F* are thought of and presented as tools to prevent bugs effectively, i.e. to catch all of them no matter the effort. I think what we need for our daily programming tasks are tools for preventing bugs efficiently. My guess is that dependent types in a practical language like F# are such a tool. This is of course a heavily biased view, since I have a background involving some dependent type theory, which makes me blind to the effort of learning how dependent types work.

One problem with dependently typed languages is that small changes can break lots of proofs that took effort to write. This is particularly bad if you spell out the proofs in all detail, and it means refactoring can become quite expensive.

From what I have seen, the use of a problem solver during type checking reduces this problem quite a bit, so this might not be a real issue here and with other dependently typed languages that do something similar. And of course, substitute technologies that achieve similar goals, like unit tests or comments, also increase refactoring work.

So ultimately, apart from the fun, I also look forward to using F* as a tool. As I mentioned before, I already knew about dependent types and could translate what I saw when learning F*, so I can’t tell you how much effort it takes if you don’t already know those things. But what I can tell you is that learning dependent type theory was certainly among the most rewarding scientific experiences in my life, so maybe you want to do that anyway.

There are more exact numbers than you might know

There are lots of annoying problems a programmer can run into when working with the inexact floating point numbers supported by common CPUs. So the idea of using rational numbers

\frac{n}{d}

with arbitrary-size integers n,d instead of floating point numbers can be quite appealing. However, if you really try this in practice, you are likely to end up with unacceptable performance (for an example, see the post The Great Rational Explosion on this blog).
Another problem with the rationals is that they do not support lots of mathematical operations. For example, no prime has a rational square root, and numbers like \pi and e are not rational.
This post is about the problem of the missing square roots. I will sketch how the rationals can be extended by some specific real numbers. The focus is on what can be done, not on how it is done.

The point of the following example is that we can extend the rational numbers by some new numbers such that we have the square root of 2 and are still able to perform the basic arithmetic operations, i.e. addition, subtraction, multiplication and division by non-zero numbers. And of course, we still want to have the absolute exactness of the rationals.
To achieve this goal, our new numbers can be represented as tuples (a,b) of rational numbers a,b. The idea is that (a,b) stands for the real number

a+b\sqrt{2}

And now a small miracle happens: if you take two of those new numbers, the result of any of the basic arithmetic operations on them is again one of our new numbers! You can believe me or just check this for yourself now. So in total, we have extended the realm of exact calculation to real numbers which are of the form

a+b\sqrt{2}.
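
For those who want to check the miracle themselves, here are the formulas behind it (the one for division uses that a^2-2b^2 cannot be zero for rationals a,b not both zero, since \sqrt{2} is irrational):

(a+b\sqrt{2}) + (c+d\sqrt{2}) = (a+c) + (b+d)\sqrt{2}

(a+b\sqrt{2}) \cdot (c+d\sqrt{2}) = (ac+2bd) + (ad+bc)\sqrt{2}

\frac{1}{a+b\sqrt{2}} = \frac{a}{a^2-2b^2} - \frac{b}{a^2-2b^2}\sqrt{2}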

This is admittedly a bit lame, since we are still far away from calculating general square roots. But the miracle mentioned above is actually not so small and happens in way more generality: The real number \sqrt{2} that we added to the rationals is the solution of the polynomial equation

X^2=2 or equivalently X^2-2=0

and we can do the same thing with any irrational real number that is the solution of some polynomial equation

a_n X^n + a_{n-1} X^{n-1} + \dots + a_1 X^1 + a_0 = 0 or P(X)=0

(where a_n,\dots,a_0 are rational numbers).
For this general construction, the polynomial above has to be chosen such that n is as small as possible. Then the new numbers can be represented as tuples of length n. For the basic arithmetic operations, we think of a tuple (b_{n-1},\dots,b_0) as the polynomial

b_{n-1} X^{n-1} + \dots + b_1 X^1 + b_0

and define addition to be the addition of those polynomials. To multiply two tuples, take the product of the polynomials and its remainder on division by P. You can check how this recipe does the right thing in the example with P=X^2-2. To avoid confusion, let us put the polynomial that defines which extension of the rationals we are in into the number. So now, a number is a tuple

(P,b_{n-1},\dots,b_0).

In particular, there are extensions that contain a square root of an arbitrary rational. But if we look at, say, \sqrt{2} and \sqrt{3}, we will get two different extensions. So we have \sqrt{2} and \sqrt{3}, but we can’t do anything that involves both of them. Or in other words, if we have two numbers (P,b_{n-1},\dots,b_0) and (Q,c_{m-1},\dots,c_0), we don’t know how to perform basic arithmetic on them unless n=m and P=Q.

Here is an easy way to fix this: if we take a step back, we can realize that everything we did above does not need specific features of the rational numbers – the basic arithmetic operations are enough. So we can apply the construction of new numbers including some root to an extension of the rationals. For the problem mentioned above, that would give us tuples of the form

(X^2-3, (X^2-2, a, b), (X^2-2, c, d))

– now with four rational numbers a,b,c,d.
The corresponding general pattern for an extension (of some extension …) is:

(P, a_{n-1},\dots, a_0)

Where the numbers a_i are from some fixed extension and P is a polynomial with coefficients from this fixed extension. We also have to make sure that P is irreducible. That means P has at least degree 1 and there is no pair of polynomials Q,R of smaller degree (with coefficients in the extension) such that their product Q\cdot R is equal to P. If this is not the case, weird things will happen.
Note that if we keep extending our numbers on demand, we can now get square roots of everything – in fact, this even works for square roots of negative numbers. But there are some unmentioned problems if we want to mix numbers from different towers of extensions. On the other hand, there are also lots of unmentioned possibilities: these constructions are pretty general, and they can also be applied to finite fields or to some extension of the rationals which contains a transcendental element like \pi. But I will stop here, since my goal of showing some exact representations of numbers that once surprised me is reached.

If you want to learn more about the mathematical background, this pdf might help. There is also a way to join two extensions into one of the form we discussed here; the key is this nice theorem: the Primitive Element Theorem. There are lots of nice mathematical facts in this area. My former algebra working group in Karlsruhe, in particular Fabian Ruoff, kindly reminded me of some of them over lunch when I was discussing the topics of this post.
Then, there are of course implementations. Here is a class in the CGAL library for square roots.