Using PostgreSQL with Entity Framework

The most widespread O/R (object-relational) mapper for the .NET platform is the Entity Framework. It is most often used in combination with Microsoft SQL Server as the database, but the architecture of the Entity Framework allows it to be used with other databases as well. A popular and reliable open-source SQL database is PostgreSQL. This article shows how to use a PostgreSQL database with the Entity Framework.

Installing the Data Provider

First you need an Entity Framework data provider for PostgreSQL. It is called Npgsql and you can install it via NuGet. If you use Entity Framework 6, the package is called EntityFramework6.Npgsql:

> Install-Package EntityFramework6.Npgsql

If you use Entity Framework Core for the new .NET Core platform, you have to install a different package:

> Install-Package Npgsql.EntityFrameworkCore.PostgreSQL

Configuring the Data Provider

The next step is to configure the data provider and the database connection string in the App.config file of your project, for example:

<configuration>
  <!-- ... -->

  <entityFramework>
    <providers>
      <provider invariantName="Npgsql"
         type="Npgsql.NpgsqlServices, EntityFramework6.Npgsql" />
    </providers>
  </entityFramework>

  <system.data>
    <DbProviderFactories>
      <add name="Npgsql Data Provider"
           invariant="Npgsql"
           description="Data Provider for PostgreSQL"
           type="Npgsql.NpgsqlFactory, Npgsql"
           support="FF" />
    </DbProviderFactories>
  </system.data>

  <connectionStrings>
    <add name="AppDatabaseConnectionString"
         connectionString="Server=localhost;Database=postgres"
         providerName="Npgsql" />
  </connectionStrings>

</configuration>

Possible parameters in the connection string are Server, Port, Database, User Id and Password. Here’s an example connection string using all parameters:

Server=192.168.0.42;Port=5432;Database=mydatabase;User Id=postgres;Password=topsecret

The database context class

To use the configured database you create a database context class in the application code:

class AppDatabase : DbContext
{
  private readonly string schema;

  public AppDatabase(string schema)
    : base("AppDatabaseConnectionString")
  {
    this.schema = schema;
  }

  public DbSet<User> Users { get; set; }

  protected override void OnModelCreating(DbModelBuilder builder)
  {
    builder.HasDefaultSchema(this.schema);
    base.OnModelCreating(builder);
  }
}

The parameter passed to the base constructor is the name of the connection string configured in App.config. In this example the method OnModelCreating is overridden to set the name of the schema to use; the schema name is injected via the constructor. For PostgreSQL the default schema is called "public":

using (var db = new AppDatabase("public"))
{
  var admin = db.Users.First(user => user.UserName == "admin");
  // ...
}

The Entity Framework mapping of entity and property names is case sensitive. To make the mapping work, you have to preserve the case when creating the tables by putting the table and column names in double quotes:

create table public."Users" ("Id" bigserial primary key, "UserName" text not null);
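
The User entity referenced by the DbSet could look like the following minimal sketch (the property names have to match the quoted column names; treat the exact shape of the class as an assumption):

public class User
{
  public long Id { get; set; }         // maps to the "Id" bigserial column
  public string UserName { get; set; } // maps to the "UserName" text column
}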

With these basics you’re now set up to use PostgreSQL in combination with the Entity Framework.

 

Packaging kernel modules/drivers using DKMS

Hardware drivers on Linux need to match the running kernel. When the drivers you need are not part of the distribution in use, you have to build and install them yourself. While this may be OK to do once or twice, it soon becomes tedious to repeat after every kernel update.

The Dynamic Kernel Module Support (DKMS) framework can help in such a situation: the module source code is installed on the target machine and can be rebuilt and installed automatically when a new kernel is installed. While veterans may be willing to maintain their hardware drivers with DKMS by hand, end users do not care about the underlying system that keeps their hardware working. They want to manage their software updates using the tools of their distribution, and everything should work automagically.

I want to show you how to package a kernel driver as an RPM package that hides all of the complexities of DKMS from the user. This requires several steps:

  1. Preparing/patching the driver (aka kernel module) to include dkms.conf and follow the required conventions of DKMS
  2. Creating an RPM spec file to install the source and tool chain, and to integrate the module source with DKMS

While there is native support for RPM packaging in DKMS, I found the following procedure more intuitive and flexible.

Preparing the module source

You need at least a small file called dkms.conf to describe the module source to the DKMS system. It usually looks like this:

PACKAGE_NAME="menable"
PACKAGE_VERSION=3.9.18.4.0.7
BUILT_MODULE_NAME[0]="menable"
DEST_MODULE_LOCATION[0]="/extra"
AUTOINSTALL="yes"

Also make sure that the source tarball extracts into the directory /usr/src/$PACKAGE_NAME-$PACKAGE_VERSION! If you do not like /usr/src as the location for your kernel modules, you can configure a different one in /etc/dkms/framework.conf.
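
For the dkms.conf above this means the top-level directory inside the tarball is named after the package and its version. A packing command could look like this (the directory layout is an assumption):

tar cjf menable-3.9.18.4.0.7.tar.bz2 menable-3.9.18.4.0.7/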

Preparing the spec file

Since we are not building and packaging a binary but installing the source code and registering, building and installing it on the target machine, the spec file looks a bit different than usual: there is no build step. Instead we just install the source tree and potentially additional files like udev rules or documentation, and perform all DKMS work in the post-install and pre-uninstall scripts. All that means that we build a noarch RPM and depend on dkms, the kernel sources and a compiler.
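
A preamble along the following lines expresses that; the package name, version and the exact names of the build dependencies are assumptions and vary between distributions:

%define module menable

Name:           %{module}-dkms
Version:        3.9.18.4.0.7
Release:        1
Summary:        %{module} kernel module (DKMS)
License:        GPL-2.0
BuildArch:      noarch
Requires:       dkms gcc make kernel-devel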

Preparation section

Here we unpack and patch the module source, e.g.:

Source: %{module}-%{version}.tar.bz2
Patch0: menable-dkms.patch
Patch1: menable-fix-for-kernel-3-8.patch

%prep
%setup -n %{module}-%{version} -q
%patch0 -p0
%patch1 -p1

Install section

Basically we just copy the source tree to /usr/src in our build root. In this example we have to install some additional files, too.

%install
rm -rf %{buildroot}
mkdir -p %{buildroot}/usr/src/%{module}-%{version}/
cp -r * %{buildroot}/usr/src/%{module}-%{version}
mkdir -p %{buildroot}/etc/udev/rules.d/
install udev/10-siso.rules %{buildroot}/etc/udev/rules.d/
mkdir -p %{buildroot}/sbin/
install udev/men_path_id udev/men_uiq %{buildroot}/sbin/

Post-install section

In the post-install script of the RPM we add our module to the DKMS system, then build and install it:

occurrences=$(/usr/sbin/dkms status | grep "%{module}" | grep "%{version}" | wc -l)
if [ "$occurrences" -eq 0 ];
then
    /usr/sbin/dkms add -m %{module} -v %{version}
fi
/usr/sbin/dkms build -m %{module} -v %{version}
/usr/sbin/dkms install -m %{module} -v %{version}
exit 0

Pre-uninstall section

To leave the system in a clean state when the user uninstalls our package, we need to remove our module from DKMS. So we need a pre-uninstall script like this:

/usr/sbin/dkms remove -m %{module} -v %{version} --all
exit 0

Conclusion

Packaging kernel modules using DKMS and RPM is not really hard and provides huge benefits to your users. There are some little quirks like the post-install and pre-uninstall scripts, but once you have got that working, you (and your users) are rewarded with a great, fully integrated experience. You can use the full spec file of the driver in the above example as a template for your own driver packages.

Streaming images from your application to the web with GStreamer and Icecast – Part 2

In the last article we learned how to create a GStreamer pipeline that streams a test video via an Icecast server to the web. In this article we will use GStreamer’s programmable appsrc element, in order to feed the pipeline with raw image data from our application.

First we will recreate the pipeline from the last article in C source code. We use plain C, since the original GStreamer API is a GLib based C API.

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    GError *error = NULL;

    gst_init(&argc, &argv);

    GstElement *pipeline = gst_parse_launch("videotestsrc ! vp8enc ! webmmux ! shout2send ip=127.0.0.1 port=8000 password=hackme mount=/test.webm", &error);
    if (error != NULL) {
        g_printerr("Could not create pipeline: %s\n", error->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);

    g_main_loop_unref(loop);
    gst_object_unref(pipeline);

    return 0;
}

In order to compile this code the GStreamer development files must be installed on your system. On an openSUSE Linux system, for example, you have to install the package gstreamer-plugins-base-devel. Compile and run this code from the command line:

$ cc demo1.c -o demo1 $(pkg-config --cflags --libs gstreamer-1.0)
$ ./demo1

The key in this simple program is the gst_parse_launch call. It takes the same pipeline string that we built on the command line in the previous article as an argument and creates a pipeline object. The pipeline is then started by setting its state to playing.

appsrc

So far we have only recreated the same pipeline that we called via gst-launch-1.0 before in C code. Now we will replace the videotestsrc element with an appsrc element:

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

extern guchar *get_next_image(gsize *size);

const gchar *format = "GRAY8";
const guint fps = 15;
const guint width = 640;
const guint height = 480;

typedef struct {
    GstClockTime timestamp;
    guint sourceid;
    GstElement *appsrc;
} StreamContext;

static StreamContext *stream_context_new(GstElement *appsrc)
{
    StreamContext *ctx = g_new0(StreamContext, 1);
    ctx->timestamp = 0;
    ctx->sourceid = 0;
    ctx->appsrc = appsrc;
    return ctx;
}

static gboolean read_data(StreamContext *ctx)
{
    gsize size;

    guchar *pixels = get_next_image(&size);
    GstBuffer *buffer = gst_buffer_new_wrapped(pixels, size);

    GST_BUFFER_PTS(buffer) = ctx->timestamp;
    GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale_int(1, GST_SECOND, fps);
    ctx->timestamp += GST_BUFFER_DURATION(buffer);

    gst_app_src_push_buffer(GST_APP_SRC(ctx->appsrc), buffer);

    return TRUE;
}

static void enough_data(GstElement *appsrc, StreamContext *ctx)
{
    if (ctx->sourceid != 0) {
        g_source_remove(ctx->sourceid);
        ctx->sourceid = 0;
    }
}

static void need_data(GstElement *appsrc, guint unused, StreamContext *ctx)
{
    if (ctx->sourceid == 0) {
        ctx->sourceid = g_idle_add((GSourceFunc)read_data, ctx);
    }
}

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_parse_launch("appsrc name=imagesrc ! videoconvert ! vp8enc ! webmmux ! shout2send ip=127.0.0.1 port=8000 password=hackme mount=/test.webm", NULL);
    GstElement *appsrc = gst_bin_get_by_name(GST_BIN(pipeline), "imagesrc");

    gst_util_set_object_arg(G_OBJECT(appsrc), "format", "time");
    gst_app_src_set_caps(GST_APP_SRC(appsrc), gst_caps_new_simple("video/x-raw",
        "format", G_TYPE_STRING, format,
        "width", G_TYPE_INT, width,
        "height", G_TYPE_INT, height,
        "framerate", GST_TYPE_FRACTION, fps, 1, NULL));

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    StreamContext *ctx = stream_context_new(appsrc);

    g_signal_connect(appsrc, "need-data", G_CALLBACK(need_data), ctx);
    g_signal_connect(appsrc, "enough-data", G_CALLBACK(enough_data), ctx);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    g_main_loop_run(loop);

    gst_element_set_state(pipeline, GST_STATE_NULL);

    g_free(ctx);
    g_main_loop_unref(loop);
    gst_object_unref(appsrc);
    gst_object_unref(pipeline);

    return 0;
}

We assign a name ("imagesrc") to the appsrc element by setting its name attribute in the pipeline string. The element can then be retrieved via this name by calling the function gst_bin_get_by_name. Next we set properties and capabilities of the appsrc element such as the image format (in this example 8-bit greyscale), width, height and frames per second.

We then connect callback functions to the "need-data" and "enough-data" signals. The appsrc element emits the need-data signal when it wants us to feed more image frame buffers to the pipeline and the enough-data signal when it wants us to stop.

We use an idle source to schedule calls to the read_data function in the main loop. The interesting work happens in read_data: we acquire the raw pixel data of the next frame as a byte array, in this example represented by a call to a function named get_next_image. The pixel data is wrapped into a GStreamer buffer and the duration and timestamp of the buffer are set. We track the time in a self-defined context object. The buffer is then sent to the appsrc via gst_app_src_push_buffer. GStreamer will take care of freeing the buffer once it is no longer needed.
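
To compile this version you additionally need the gstreamer-app-1.0 module for the appsrc functions (gst_app_src_set_caps, gst_app_src_push_buffer) and an implementation of get_next_image, assumed here to live in a file called images.c:

$ cc demo2.c images.c -o demo2 $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-app-1.0)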

Conclusion

With little effort we created a simple C program that streams image frames from within the program itself as video to the Web by leveraging the power of GStreamer and Icecast.

Streaming images from your application to the web with GStreamer and Icecast – Part 1

Streaming existing media files such as videos to the web is a common task solved by streaming servers. But maybe you would like to encode and stream a sequence of images originating from inside your application on the fly as video to the web. This two part article series will show how to use the GStreamer media framework and the Icecast streaming server to achieve this goal.

GStreamer

GStreamer is an open source framework for setting up multimedia pipelines. The idea of such a pipeline is that it is constructed from elements, each performing a processing step on the multimedia data that flows through them. Each element can be connected to other elements (source and sink elements), forming a directed, acyclic graph structure. GStreamer pipelines are comparable to Unix pipelines for text processing. In the simplest case a pipeline is a linear sequence of elements, each element receiving data as input from its predecessor element and sending the processed output data to its successor element. Here's a GStreamer pipeline that encodes data from a video test source with the VP8 video codec, wraps ("multiplexes") it into the WebM container format and writes it to a file:

videotestsrc ! vp8enc ! webmmux ! filesink location=test.webm

In contrast to Unix pipelines the notation for GStreamer pipelines uses an exclamation mark instead of a pipe symbol. An element can be configured with attributes denoted as key=value pairs. In this case the filesink element has an attribute specifying the name of the file into which the data should be written. This pipeline can be directly executed with a command called gst-launch-1.0 that is usually part of a GStreamer installation:

gst-launch-1.0 videotestsrc ! vp8enc ! webmmux ! filesink location=test.webm

If we wanted to use a different codec and container format, for example Theora/Ogg, we would simply have to replace the two elements in the middle:

gst-launch-1.0 videotestsrc ! theoraenc ! oggmux ! filesink location=test.ogv

Icecast

If we want to stream this video to the Web instead of writing it into a file we can send it to an Icecast server. This can be done with the shout2send element:

gst-launch-1.0 videotestsrc ! vp8enc ! webmmux ! shout2send ip=127.0.0.1 port=8000 password=hackme mount=/test.webm

This example assumes that an Icecast server is running on the local machine (127.0.0.1) on port 8000. On a Linux distribution this is usually just a matter of installing the icecast package and starting the service, for example via systemd:

systemctl start icecast

Note that WebM streaming requires at least Icecast version 2.4, while Ogg/Theora streaming has been supported since version 2.2. The Icecast server can be configured in a config file, usually located under /etc/icecast.xml or /etc/icecast2/icecast.xml. Here we can set the port number or the password. We can check whether our Icecast installation is up and running by browsing to its web interface: http://127.0.0.1:8000/
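
For reference, the relevant parts of an icecast.xml roughly look like this (a trimmed excerpt; your distribution's file will contain many more settings):

<icecast>
  <!-- ... -->
  <listen-socket>
    <port>8000</port>
  </listen-socket>
  <authentication>
    <source-password>hackme</source-password>
    <admin-user>admin</admin-user>
    <admin-password>hackme</admin-password>
  </authentication>
  <!-- ... -->
</icecast>

With the Icecast server up and running, let's go back to our pipeline: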

gst-launch-1.0 videotestsrc ! vp8enc ! webmmux ! shout2send ip=127.0.0.1 port=8000 password=hackme mount=/test.webm

The mount attribute in the pipeline above specifies the path in the URL under which the stream will be available. In our case the stream will be available under http://127.0.0.1:8000/test.webm. You can open this URL in a media player such as VLC or MPlayer, or you can open it in a WebM-capable browser such as Chrome or Firefox, either directly from the URL bar or from an HTML page with a video tag:

<video src="http://127.0.0.1:8000/test.webm"></video>

If we go to the admin area of the Icecast web interface we can see a list of streaming clients connected to our mount point. We can even kick unwanted clients from the stream.

Conclusion

This part showed how to use GStreamer and Icecast to stream video from a test source to the web. In the next part we will replace the videotestsrc element with GStreamer’s programmable appsrc element, in order to feed the pipeline with raw image data from our application.

TANGO – Making equipment remotely controllable

Usually hardware vendors ship some end user application for Microsoft Windows and drivers for their hardware. Sometimes there are generic applications like coriander for FireWire cameras. While this is often enough, most of these solutions are not remotely controllable. Some of our clients use multiple devices and pieces of equipment to conduct their experiments, which must be orchestrated to achieve the desired results. This is where TANGO – an open source software (OSS) control system framework – comes into play.

Most of the time hardware can also be controlled using a standardized or proprietary protocol and/or a vendor library. TANGO makes it easy to expose the desired functionality of the hardware through a well-defined and explorable interface consisting of attributes and commands. Such an interface to hardware – or to a logical piece of equipment completely realised in software – is called a device in TANGO terms.

Devices are available over the (intra)net and can be controlled manually or using various scripting systems. Integrating your hardware as TANGO devices into the control system opens up a lot of possibilities for using and monitoring your equipment efficiently and comfortably using TANGO clients. There are a lot of bindings for TANGO devices if you do not want to program your own TANGO client in C++, Java or Python, for example LabVIEW, Matlab, IGOR Pro, Panorama and WinCC OA.

So if you need to control several pieces of hardware at once, have a look at the TANGO framework. It features

  • network transparency
  • platform-independence (Windows, Linux, Mac OS X etc.) and -interoperability
  • cross-language support (C++, Java and Python)
  • a rich set of tools and frameworks

There is a lively community around TANGO, and many drivers already exist as open source projects for different types of equipment: various cameras, a plethora of motion controllers and so on. I will provide a deeper look at the concepts, with code examples and guidelines for building TANGO devices, in future posts.

Ansible: Play it again, Sam

Recently we started using Ansible for the provisioning of some of our servers. Ansible is one of many configuration management / provisioning tools that are popular right now. Puppet and Chef are probably more widely known representatives of their kind, but what attracted us to Ansible was the fact that it’s agentless: the target machines don’t need an agent installed, all you need is remote access via SSH. Well, almost. It turns out that Python is also required on the remote machines, otherwise you’ll be limited to a very basic set of functionality (the raw module). Fortunately, most Linux distributions have Python installed by default.

With Ansible you describe the desired target configuration as a sequence of tasks in a YAML file called a playbook: package installation, copying files, enabling and starting services, etc. The playbook is semi-declarative. Each step usually describes a goal, e.g. package XY should be present. Action is only taken if necessary. On the other hand it's also very imperative: steps are executed sequentially and you can have conditionals and loops (e.g. "with_items"). You can also define handlers, which are executed once after they have been notified, for example if you want to restart the Apache web server after its configuration has changed.
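
To make this concrete, here is a small sketch of such a playbook. The modules used (yum, copy, service) are standard Ansible modules, but the package, file and host group names are assumptions:

---
- hosts: webservers
  tasks:
    - name: ensure Apache is installed
      yum: name=httpd state=present

    - name: copy the virtual host configuration
      copy: src=files/myapp.conf dest=/etc/httpd/conf.d/myapp.conf
      notify: restart apache

    - name: ensure Apache is enabled and started
      service: name=httpd state=started enabled=yes

  handlers:
    - name: restart apache
      service: name=httpd state=restarted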

Before a playbook is applied to a remote machine Ansible will query “facts” about this machine. These facts are available as variables in the playbook. You can also define your own variables.

A playbook is usually applied to a set of machines. Available machines are listed in a separate file, the inventory, where they can be grouped by roles. With one command you can configure or update all the machines of a specific role at once. You can also execute a “dry run”, which simulates a playbook run and tells you what changes would be applied.
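
An inventory and the corresponding commands might look like this; the host names, group names and the playbook file name site.yml are made-up examples:

# inventory file "hosts"
[webservers]
web01.example.com
web02.example.com

[dbservers]
db01.example.com

A dry run and the real run against that inventory:

$ ansible-playbook -i hosts site.yml --check
$ ansible-playbook -i hosts site.yml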

So far our experience with Ansible has been good. The concepts are easy to grasp. The YAML syntax takes some getting used to, but at least it's not XML. On the website the actual documentation is a bit hidden among promotion for their commercial products, but you can also visit docs.ansible.com directly.

Dart and TypeScript as JavaScript alternatives

JavaScript was designed at Netscape by Brendan Eich within a couple of weeks as a simple scripting language for the web browser. It's an interesting mixture of Self's prototype-based object model, first-class functions inspired by LISP, a C/AWK-like syntax and a misleading name imposed by marketing.

Unfortunately, the haste in which JavaScript was designed by a single person shows in many places. Lots of features are inconsistent and violate the principle of least surprise. Just skim through the JavaScript Garden to get an idea.

Another aspect casting a poor light on JavaScript is the bad design of the browser DOM API, including incompatibilities between different browser implementations.

Douglas Crockford redeemed the reputation of JavaScript somewhat by writing articles like "JavaScript: The World's Most Misunderstood Programming Language" and the (relatively thin) book "JavaScript: The Good Parts", and by discovering the JSON format. But even his book consists for the most part of advice on how to avoid the bad and the ugly parts.

However, JavaScript is ubiquitous. It is the world’s most widely deployed programming language, it’s the only programming language option available in all browsers on all platforms. The browser DOM API incompatibilities were ironed out by libraries like jQuery. And thanks to the JavaScript engine performance race started by Google some time ago with their V8 engine, there are now implementations available with decent performance – at least for a scripting language.

Some people even started to like JavaScript and are writing server-side code in it, for example the node.js community. People write office suites, emulators and 3D games in JavaScript. Atwood’s Law seems to be confirmed: “Any application that can be written in JavaScript, will eventually be written in JavaScript.”

Trans-compiling to JavaScript is a huge thing. There are countless transpilers of existing or new programming languages to JavaScript. One of these, CoffeeScript, is a syntactic sugar mixture of Ruby and Python on top of JavaScript semantics, and has gained some name recognition and adoption, at least in the Rails community.

But there are two other JavaScript alternatives, backed by large companies, which also happen to be browser manufacturers: Dart by Google and TypeScript by Microsoft. Both have recently reached version 1.0 (Dart even 1.2), and I will have a look at them in this blog post.

Large-scale application development and types

Scripting languages with dynamic type systems are neat and flexible for small and medium sized projects, but there is evidence that organizations with large code bases and large teams prefer at least some amount of static typing. For example, Google developed the Google Web Toolkit, which compiles Java to JavaScript, the Closure compiler, which adds type information and checks to JavaScript via special comments, and now Dart. Facebook recently announced their Hack language, which adds a static type system to PHP, and Microsoft develops TypeScript, a static type add-on to JavaScript.

The reasoning is that additional type information can help finding bugs earlier, improve tool support, e.g. auto-completion in IDEs and refactoring capabilities such as safe, project-wide renaming of identifiers. Types can also help VMs with performance optimization.

TypeScript

This weekend the release of TypeScript 1.0 was announced by Microsoft's language designer Anders Hejlsberg, the designer of C#, who is also known as the creator of the Turbo Pascal compiler and Delphi.

TypeScript is not a completely new language. It’s a superset of JavaScript that mainly adds optional type information to the language via Pascal-like colon notation. Every JavaScript program is also a valid TypeScript program.

The TypeScript compiler tsc takes .ts files and translates them into .js files. The output code does not change a lot and is almost the same code that you would write by hand in JavaScript, but with erased type annotations. It does not add any runtime overhead.

The type system is heavily based on type inference. The compiler tries to infer as much type information as possible by following the flow of types through the code.

TypeScript has interfaces that are very similar to interfaces in Go: A type does not have to declare which interfaces it implements. Interfaces are satisfied implicitly if a type has all the required methods and properties – in short, TypeScript has a structural type system.
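
A small, illustrative example (not taken from the TypeScript documentation) shows the colon notation and the structural typing of interfaces:

interface Named {
    name: string;
}

function greet(who: Named): string {
    return "Hello, " + who.name;
}

class Person {
    constructor(public name: string, public age: number) {}
}

// Person never declares that it implements Named, but it has a
// matching 'name' property, so it satisfies the interface structurally.
var greeting: string = greet(new Person("Alice", 42));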

Type definitions for existing APIs and libraries such as the browser DOM API, jQuery, AngularJS, Underscore.js, etc. can be added via .d.ts files.
These definition files are very similar to C header files and contain type signatures of the API’s functions. There’s a community maintained repository of .d.ts files called Definitely Typed for almost all popular JavaScript libraries.

TypeScript also enhances JavaScript with functionality that is planned for ECMAScript 6, such as classes, inheritance, modules and shorthand lambda expressions. The syntax is the same as the proposed ES6 syntax, and the generated code follows the usual JavaScript patterns.

TypeScript is an open source project under the Apache License 2.0. The project even accepts contributions and pull requests (yes, Microsoft). Microsoft has integrated TypeScript support into Visual Studio 2013, but there is support for other IDEs and editors such as JetBrains' IntelliJ IDEA or Sublime Text.

Dart

Dart is a JavaScript alternative developed by Google. Two of the main brains behind Dart are Lars Bak and Gilad Bracha. In the early 90s they worked in the Self VM group at Sun. Then they left Sun for LongView Technologies (Animorphic Systems), a company that developed Strongtalk, a statically typed variant of Smalltalk, and later the now-famous HotSpot VM for Java. Sun bought LongView Technologies and made HotSpot Java’s default VM. Bracha co-authored parts of the Java specification, and designed an object-oriented language in the tradition of Self and Smalltalk called Newspeak. At Google, Lars Bak was head developer of the V8 JavaScript engine team.

Unlike TypeScript, Dart is not a JavaScript superset but a language of its own. It's a curly-braces-and-semicolons language that aims for familiarity. The object model is very similar to Java: it has classes, inheritance, abstract classes and methods, and an @override annotation. But it also has the usual grab bag of features that "more sugar than Java but similar" languages like C#, Groovy or JetBrains' Kotlin have:

Lambdas (via the fat arrow =>), mixins, operator overloading, properties (uniform access for getters and setters), string interpolation, multi-line strings (in triple quotes), collection literals, map access via [], default values for arguments, optional arguments.

Like TypeScript, Dart allows optional type annotations. Wrong type annotations do not stop Dart programs from executing, but they produce warnings. It has a simple notion of generics, which are optional as well.

Everything in Dart is an object and every variable can be nullable. There are no visibility modifiers like public or private: identifiers starting with an underscore are private. The “truthiness” rules are simple compared to JavaScript: all values except true are false.
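
An illustrative sketch (not taken from the Dart documentation) shows the general flavour: optional type annotations, fat-arrow functions, string interpolation and underscore-private members:

class Account {
  String owner;
  num _balance = 0; // leading underscore: library-private

  Account(this.owner);

  num get balance => _balance; // getter in fat-arrow syntax

  void deposit(num amount) {
    _balance += amount;
  }

  String toString() => "Account of $owner, balance: $_balance";
}

main() {
  var account = new Account("Alice");
  account.deposit(100);
  print(account); // Account of Alice, balance: 100
}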

Dart comes with batteries included: it has a standard library offering collections, APIs for asynchronous programming (event streams, futures), a sane HTML/DOM API (removing the need for jQuery), unit testing and support for interoperating with JavaScript. A port of Angular.js to Dart exists as well and is called AngularDart.

Dart supports a CSP-like concurrency model based on isolates – independent worker threads that don't share memory and can communicate via SendPorts and ReceivePorts.

However, the Dart language is only one half of the Dart project. The other important half is the Dart VM. Dart can be compiled to JavaScript for compatibility with every browser, but it offers enhanced performance compared to JavaScript when the code is directly executed on the Dart VM.

Dart is an open source project under BSD license. Google provides an Eclipse based IDE for Dart called the “Dart Editor” and Dartium, a special build of the Chromium browser that includes the Dart VM.

Conclusion

TypeScript follows a less radical approach than Dart. It’s a typed superset of JavaScript and existing JavaScript projects can be converted to TypeScript simply by renaming the source files from *.js to *.ts. Type annotations can be added gradually. It would even be simple to switch back from TypeScript to JavaScript, because the generated JavaScript code is extremely close to the original source code.

Dart is a more ambitious project. It comes with a new VM and offers performance improvements. It will be interesting to see if Google is going to ship Chrome with the Dart VM one day.

Centralized project documentation

Project documentation is one thing developers do not like to think about, but it is necessary for others to use the software. There are several approaches to project documentation: it is either stored in the source code repository, on some kind of project web page, e.g. in a wiki, or in both places. It is often hard for different groups of people to find the documentation they need and to maintain it. I want to show an approach that stores and maintains the documentation in one place and integrates it into several other locations.

The project documentation (not the API documentation generated by tools like javadoc or Doxygen) should be version controlled and close to the source code. So a directory in the project source tree seems to be a good place. That way the developers or documenters can keep it up-to-date with the current source code version. For others it may be hard to access the docs hidden somewhere in the source tree, so we need to integrate them into other tools to make them easily accessible to all the people who need them.

Documentation format

We start with markdown as the documentation format because it is easily read and written using a normal text editor. It can be converted to HTML, PDF and other common document formats. The markdown files reside in a directory next to the source tree, named documentation for example. With pegdown there is a nice Java library that allows you to integrate markdown support into your projects.
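
Converting such a markdown file to HTML with pegdown then boils down to a few lines. The following sketch assumes a file documentation/README.md and uses pegdown's PegDownProcessor:

import java.nio.file.Files;
import java.nio.file.Paths;

import org.pegdown.PegDownProcessor;

public class DocRenderer {
    public static void main(String[] args) throws Exception {
        // read the markdown source from the documentation directory
        byte[] bytes = Files.readAllBytes(Paths.get("documentation/README.md"));
        String markdown = new String(bytes, "UTF-8");

        // convert it to HTML using pegdown
        String html = new PegDownProcessor().markdownToHtml(markdown);
        System.out.println(html);
    }
}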

Integration in your wiki

Often you want to have your project documentation available on a web page, usually a wiki. With Confluence you can directly embed markdown files from a URL in your project page using a plugin. This allows you to enrich the general project documentation in the source tree with your organisation-specific documentation. The documentation becomes more widely accessible and searchable. The link can be served by a source code browser like gitweb: http://myrepo/git/?p=MyProject.git;a=blob_plain;f=README.md;hb=HEAD and is always up-to-date.

Integration in jenkins

Jenkins has a plugin to use markdown as the description format. Combined with the project description setter plugin you can use a file from your workspace to display the job description. Short usage instructions or other notes and links can be maintained in the source tree and show up on the Jenkins job page.

Integration in GitHub or GitLab

Project hosting platforms like GitHub, or your own repository manager, e.g. GitLab, can also display markdown-formatted content from your source tree as the project description, yielding a basic project page more or less for free.

Conclusion

Using markdown as a basis for your project documentation is a very flexible approach. It stays usable without any tool support and can be integrated and used in various ways using a plethora of tools and converters. Especially if you plan to open source a project, it should contain useful documentation in such a widely understood format, distributed with the source code.

Know Your Tools: Why Mockito's when() works

Some days ago, my colleague asked how Mockito can differentiate between a method invocation outside of an expectation and one inside. If you want to know it too, read on.

The difference

Typically a mocking framework follows a Record/Replay/Verify model. In the first phase the expectations are recorded, in the second the mocked methods are called by the code under test and finally the expectations are verified. Consider an example with EasyMock straight from their documentation:

//record
mock = createMock(Collaborator.class);
mock.documentAdded("New Document");
//replay
replay(mock);
classUnderTest.addDocument("New Document", new byte[0]);
//verify
verify(mock);

Now, with Mockito the difference between the phases is not as clear as with EasyMock:

//record
LinkedList mockedList = mock(LinkedList.class);
when(mockedList.get(0)).thenReturn("first");
//replay
System.out.println(mockedList.get(0));
//verify
verify(mockedList).get(0);

The invocation of get() is evaluated before the invocations of when() or println(), so there is no way to change the phase before the call. There is also no way to tell whether the current expectation is the last one, in order to start the replay mode automatically. How does it work then? All the necessary code is contained in the following classes: MockitoCore, MockHandlerImpl, OngoingStubbing and MockingProgressImpl with its wrapper ThreadSafeMockingProgress.

Record

//record
LinkedList mockedList = mock(LinkedList.class);
when(mockedList.get(0)).thenReturn("first");

In the second line, a mock is created via the mock method. This call is delegated to MockitoCore, which initiates the creation of a proxy and the registration of MockHandlerImpl as the handler for its invocations.

The third line actually contains three steps. First the method to stub is invoked on the mock. Because MockHandlerImpl has been registered for all method calls on this proxy, it is now called. It keeps the current invocation, adds it to the list of all recorded invocations and creates the object that collects the expectations, the "OngoingStubbing". The instance of OngoingStubbing is stored in an instance of MockingProgressImpl. To keep the instance between the calls to the framework, a ThreadLocal member of the singleton ThreadSafeMockingProgress is used. Since no stubbed answer exists for this call yet, a default result is returned. The second step is the invocation of when(), which returns the instance of OngoingStubbing previously deposited by MockHandlerImpl in MockingProgressImpl. OngoingStubbing implements the method thenReturn(), which is used as a means of recording the expected result in the third step. The result and the cached invocation are then saved together, ready to be retrieved. During this process, the invocation is "consumed" and removed from the list of recorded invocations.
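
Stripped of all details, the underlying idea can be sketched like this; the class and method names are simplified and this is not Mockito's actual code:

// highly simplified sketch of the idea behind when()/thenReturn()
class OngoingStubbing {
    final Object invocation; // the recorded method call
    Object answer;           // the stubbed result, filled in by thenReturn()

    OngoingStubbing(Object invocation) { this.invocation = invocation; }

    void thenReturn(Object result) { this.answer = result; }
}

class MockingProgress {
    // one "current stubbing" per thread, shared between the mock's
    // invocation handler and the static when() method
    static final ThreadLocal<OngoingStubbing> CURRENT = new ThreadLocal<OngoingStubbing>();
}

class Stubber {
    // called from the mock proxy's invocation handler: remember the call
    static Object handleInvocation(Object invocation) {
        MockingProgress.CURRENT.set(new OngoingStubbing(invocation));
        return null; // default answer, nothing has been stubbed yet
    }

    // when() ignores its argument and just picks up the stubbing
    // that the invocation handler deposited right before
    static OngoingStubbing when(Object ignoredDefaultResult) {
        OngoingStubbing stubbing = MockingProgress.CURRENT.get();
        MockingProgress.CURRENT.remove();
        return stubbing;
    }
}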

Replay

//replay
System.out.println(mockedList.get(0));

Here the method get() is called again. Since a result for it has been defined, MockHandlerImpl returns the stubbed result to the caller. The call is recorded and stored for further use.

Verify

//verify
verify(mockedList).get(0);

Verification also consists of multiple steps. The call to verify() marks the end of stubbing and sets the verification mode. In the following call to get(), MockHandlerImpl can tell the phases apart on the basis of the verification mode that has been set and passes the recorded invocations to the verification code.

Final thoughts

The developers of Mockito achieved a lot with simple constructs like singletons and shared state. The stuff behind the syntactic sugar is sometimes even considered magic. I hope that, after reading this article, you no longer believe in magic but use your knowledge to create similarly great frameworks.

Another point: Since Mockito uses ThreadLocal as storage for its state, is it possible to confuse it by using multiple threads? What do you think?

Testing C programs using GLib

Writing programs in good old C can be quite refreshing if you use some modern utility library like GLib. It offers a comprehensive set of tools you expect from a modern programming environment like collections, logging, plugin support, thread abstractions, string and date utilities, different parsers, i18n and a lot more. One essential part, especially for agile teams, is onboard too: the unit test framework gtest.

Because of the statically compiled nature of C, testing involves a bit more work than in Java or modern scripting environments. Usually you have to perform these steps:

  1. Write a main program for running the tests. Here you initialize the framework, register the test functions and execute the tests. You may want to build different test programs for larger projects.
  2. Add the test executable to your build system, so that you can compile, link and run it automatically.
  3. Execute the gtester test runner to generate the test results and, if needed, an XML file for your continuous integration (CI) infrastructure. You may need to convert the XML output if you are using Jenkins, for example.

A basic test looks quite simple, see the code below:

#include <glib.h>
#include "computations.h"

void computationTest(void)
{
    g_assert_cmpint(1234, ==, compute(1, 1));
}

int main(int argc, char** argv)
{
    g_test_init(&argc, &argv, NULL);
    g_test_add_func("/package_name/unit", computationTest);
    return g_test_run();
}
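
Before running it, the test program has to be compiled and linked against GLib. A build system would normally take care of this; a minimal manual invocation (file names are assumptions) could look like this:

cc computation_tests.c computations.c -o build_dir/computation_tests $(pkg-config --cflags --libs glib-2.0)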

To run the test and produce the XML output you simply execute the test runner gtester like so:

gtester build_dir/computation_tests --keep-going -o=testresults.xml

GTester unfortunately produces a result file which is incompatible with Jenkins’ test result reporting. Fortunately R. Tyler Croy has put together an XSL script that you can use to convert the results using

xsltproc -o junit-testresults.xml tools/gtester.xsl testresults.xml

That way you get relatively easy-to-use unit tests working on your code and some nice CI integration for your modern C language projects.

Update:

Recent versions of gtester run the test binary multiple times if there are failing tests. To get a report of all (passing and failing) tests you may want to use my modified gtester.xsl script.