Updating from Grails 2.3 to something newer

We have been developing, running and maintaining a moderately sized Grails web application with more than 120 domain classes since 2008, starting with Grails 1.0.3. The web application is still in production, running on Grails 2.3.8. Recently we wanted Java 8 support and the usual bugfixes and improvements you get by updating the framework. Since time and budget are very limited (as always…), we decided not to move to 3.x but only to the latest 2.x version. It seemed the safer and easier option and would still open up the way to 3.x, where many things changed completely.

Trying to go to 2.5.4

The upgrade procedure is generally well documented in Grails. That allowed us to upgrade from 1.0 to 1.3, from 1.3 to 2.2 and finally from 2.2 to 2.3. We skipped 2.0 because of the many problems we faced during that upgrade. As usual, the major changes and tasks are mentioned in the upgrade guide. It started smoothly, but we eventually had to abort the upgrade because we were bitten by https://github.com/grails/grails-data-mapping/issues/581 . We did not have the time to dig fully into it and resolve the issue.

Trying to go to 2.4.5

Many of the changes and improvements, most notably a Groovy version supporting the Java 8 runtime, are already available in Grails 2.4.5. So we gave it a shot, hoping for fewer problems than with 2.5.4. We actually got our application running in less than an hour, but quite a few of our unit, integration and functional tests failed. After finding some advice in http://stackoverflow.com/questions/16532631/grails-unit-test-mock-domain-with-assigned-id we changed our unit tests to use the @Mock() mixin instead of mockDomain(), which works in 2.3 but is broken in 2.4.

When trying to fix our integration tests we saw that some of our HQL queries failed. Something was wrong when navigating and querying across multiple association levels, so we finally gave up on this upgrade, too.

Conclusion

Even though we have managed to keep our Grails application alive for many years and across several framework versions, each upgrade carries a significant risk of breakage and requires considerable effort. This time we are stuck again and will have to invest more time to bring the application up to date.

I would advise anyone already using Grails or deciding on it as the web framework of choice to start with the latest and greatest release and to budget several person-days for upgrades of medium-sized projects. The devil is in the details…

Recap of the Schneide Dev Brunch 2016-04-10

If you couldn’t attend the Schneide Dev Brunch on April 10th, 2016, here is a summary of the main topics.

Last Sunday, we held another Schneide Dev Brunch, a regular brunch on the second Sunday of every other (even) month, only that all attendees want to talk about software development and various other topics. In case you missed the recap article about the February brunch: it didn’t happen. We all took a break, but are back on track now. If you bring a software-related topic along with your food, everyone has something to share. We were quite a lot of developers this time, so we had enough to talk about. As usual, a lot of topics and chatter were exchanged. This recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken about. If you were there, you will probably find this list incomplete:

Why software development conferences?

We began with a curious question: Why are there even conferences about software development? You can read most of the content for free on the internet and even watch the talks afterwards. So why attend one for a lot of money? We discussed the topic a bit and came up with an analysis.
There are (at least) four different groups with an interest in a conference:

  • The organizer or commercial host is mostly interested in positive revenue. As long as there’s a possibility for some net gain, somebody will host a conference. The actual topic is a secondary matter for them (this might explain some of the weirder conferences out there, like the Boring Conference).
  • The developers that actually attend a conference are a small subset of all developers. They all have their own personal motives to pay money, invest time and accept inconveniences to be there in person. Some might rely on the quality filter of a conference board, some look forward to meeting their peers in an annual ritual. There might be those that learn best if somebody talk-feeds them the topic. Whatever the reason, a lot of developers enjoy participating in conferences. If it happens to be paid for by the employer and booked as work time, who wouldn’t?
  • Then there are the speakers. They have the additional burden of convincing a committee of their topic, preparing a talk of high quality and performing on stage (something that is harder than it looks). The speakers seek reputation and credible proof of expertise. Their resumes will probably profit, too.
  • And lastly, the companies that sponsor the conference, maintain a booth with big roll-ups and smiling employees and give their developers a chance to attend are in the game to represent, to recruit and to build their brand. A lot of traditional marketing effort goes into trade fairs, so why not treat the developer market like any other and be present at the developer fairs?

We can conclude that software development conferences can provide value for every associated stakeholder. As long as this sentence holds true, conferences will be held.
The question didn’t come out of the blue: one of our attendees got accepted as a speaker at the Karlsruher Entwicklertag 2016 and wanted to learn about the different expectations he needs to address. He will give his talk at the next Dev Brunch to practice the flow and to face the hardest critics. The topic: git internals. We are thrilled!

Stratagems and strategies

The next topic contained another talk, not at a conference, but in the context of a “general topics” series at a local university (the Duale Hochschule in Karlsruhe). The talk introduces the audience to the concept of the 36 stratagems and to modern strategies. We talked a bit about the concept itself and found that the list of logical fallacies is somewhat similar. We even found an application of the stratagems in local history (sorry, only a German source was found): the Bretten’s Hundle.
The talk itself is this Monday, so you’ll need to hurry if you want to attend.

Psychology of deception

As often happens during the Dev Brunch, one topic led to another, and we soon talked about morale and ethics. The concept of micro-expressions that reveal the hidden agenda of others came up, as well as the TV series “Lie to Me”, which is inspired by the work of Paul Ekman, a professor of psychology. There is even a commercial training program to improve your skill of “spotting the liar”.

Games with moral aspects

Well, we are nerds. While crime investigation is thrilling, there is the even more enthralling topic of games with psychological and moral aspects. We soon exchanged our experiences with games like “Haze” or “Spec Ops: The Line”. But it doesn’t stop at shooter games; you can have similar insights by playing “Papers, Please” (a strong favorite for one of our next Schneide game nights) or “This War Of Mine”. You can even try some multiplayer games specifically designed for social insights, like “The Ship: Murder Party”.
And if you haven’t got much time but still want to learn something about yourself, little games like “60 Seconds!” are a great start.
This topic led to some ideas for upcoming Schneide game nights in 2016.

Book review: A tour of C++

One attendee of the brunch provided a summary of the book “A Tour of C++” by Bjarne Stroustrup, which was recently updated to cover the language possibilities of C++11. In his words, the book is a rather incomplete introduction to the language, with way too many aspects described in way too short a manner. It’s more of a reading list for really grasping the concepts, so it may serve as a source of inspiration. For example, the notion of “move semantics” is explained, but discovering the consequences is left to the developer. The part about template programming is well done, and every chapter ends with a suitable list of advice in the tradition of “Effective XYZ”. So it’s not a bad book, but too short to be satisfying. It’s like a tourist’s tour around C++11, so the title holds its promise.

The left-pad incident

When we finished the “official” agenda, the topic of the recent left-pad incident came up and left us laughing. We really live in glorious times when the happiness of the (JavaScript) world depends on a few lines of code. Not that this couldn’t happen in any other ecosystem.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open to guests and future regulars. Just drop us a notice and we’ll invite you over next time.

Modern CMake with target_link_libraries

Dependency hell?

One thing that has eluded me in the past was how to efficiently manage dependencies of different components within one CMake project. I’d use the include_directories, add_definitions and add_compile_options commands in the top-level or in mid-level CMakeLists.txt files just to get the whole thing to compile. Of course, this is all heavily order-dependent – so the build system breaks as soon as you make an ever so subtle change to the directory layout. I’ve seen projects tackle this problem in various ways – for example by defining specifically named variables for each library and using those in their clients. Other projects defined “interface” files for each library that could be included by other targets. All these homegrown solutions work, but they are rather clumsy and don’t work well when integrating libraries not written in that same convention.

target_link_libraries to the rescue!

It turns out there’s actually a pretty elegant solution built into CMake, which centers around target_link_libraries. But information on this is pretty scarce on the web. The articles that initially put me on the right track were The Ultimate Guide to Modern CMake and CMake – Introduction and best practices. Of course, it’s all in the CMake documentation, but mentioned implicitly at best.

The gist is this: Using target_link_libraries to link A to an internal target B will not only add the linker flags required to link to B, but also the definitions, include paths and other settings – even transitively – if they are configured that way.

To do this, you need to use target_include_directories and target_compile_definitions with the PUBLIC or INTERFACE keywords on your targets. There’s also the PRIVATE keyword that can be used to avoid adding the settings to all dependent targets.

A simple example

Here’s a small example of a library that uses Boost in its headers and therefore wishes to have its clients setup those directories as well:

set(TARGET_NAME cool_lib)

add_library(${TARGET_NAME} STATIC 
  cool_feature.cpp cool_feature.hpp)

target_include_directories(${TARGET_NAME}
  INTERFACE ${CMAKE_CURRENT_SOURCE_DIR})

target_include_directories(${TARGET_NAME} SYSTEM
  PUBLIC ${Boost_INCLUDE_DIR})

Now here’s a program that wants to use that:

set(TARGET_NAME cool_tool)

add_executable(cool_tool main.cpp)

target_link_libraries(cool_tool
  PRIVATE cool_lib)

cool_tool can just #include "cool_feature.hpp" without knowing exactly where it is located in the source tree and without having to worry about setting up the Boost includes for itself! Pretty neat!

PRIVATE, PUBLIC and INTERFACE

Typically, you’d use the PRIVATE keyword for includes and definitions that are exclusively used in your implementation, i.e. your *.cpp and *.c files and internal headers. It’s good practice to favor PRIVATE to avoid “leaking” dependencies, so they won’t stack up in the dependent libraries and drive up your compile times. The INTERFACE keyword is a bit more curious: for example, with definitions, you can use it to define your .dll interface differently for compilation and usage. For include directories, one common usage is to add the library’s own source directory with INTERFACE if you keep your headers and source files in the same folder. The PUBLIC keyword is used when definitions and includes are relevant both for the library itself and for dependent libraries. It is pretty much the combination of PRIVATE and INTERFACE – whenever you’re tempted to put something in both of those, put it in PUBLIC instead. It is probably the most common option.

The future!

I hope that all open-source libraries switch to this style sooner rather than later so you can easily include them in your build trees. Just don’t use the old commands that add properties for all following targets, like add_definitions, include_directories etc., and use the commands with the target_ prefix instead!

All .NET assemblies for one and one for all

Sometimes you have developed a simple utility tool that doesn’t need the directory structure of a full-blown application for resources and other configuration. However, this tool might have a couple of library dependencies. On the .NET platform this usually means that you have to distribute the .dll files for the libraries along with the executable (.exe) file of the tool.

Wouldn’t it be nice to distribute your tool only as a single .exe file, so that users don’t have to drag around a lot of files when they move the tool from one location to another?

In the C++ world you would use static linking to link library dependencies into the resulting executable. For the .NET platform Microsoft provides a command-line tool called ILMerge. It can merge multiple .NET assemblies into a single assembly:

(Diagram: ILMerge merges a primary executable and several library assemblies into a single standalone assembly)

You can either download ILMerge from Microsoft as an .msi package or install it as a NuGet package from the package manager console (accessible in Visual Studio under Tools: Library Package Manager):

PM> Install-Package ilmerge

The basic command line syntax of ILMerge is:

> ilmerge /out:filename <primary assembly> [...]

The primary assembly would be the original executable of your tool. It must be listed first, followed by the library assemblies (.dll files) to merge. Here’s an example, which represents the scenario from the diagram above:

> ilmerge /out:StandaloneApplication.exe Application.exe A.dll B.dll C.dll

Keep in mind that the resulting executable still depends on the presence of the .NET framework on the system; it’s not completely independent.

Graphical user interface

There’s also a graphical user interface for ILMerge available. It’s an open-source tool by a third-party developer called ILMerge-GUI, published on Microsoft’s CodePlex project hosting platform.

(Screenshot: ILMerge-GUI)

You simply drag and drop the assemblies to merge on the designated area, choose a name for the output assembly and click the “Merge!” button.

Timestamps make horrible identifiers

If you think about using a timestamp or date as an identifier for some kind of entity, object or data record – think again. Timestamps are horribly ill-equipped to be identifiers because of their varying resolution. Here’s the story of how we came to this conclusion.

Not long ago, I struggled with a system that uses timestamps as entity identifiers. What can I say? Timestamps aren’t meant to identify anything other than a specific point in time. Don’t use them as entity identifiers, ever. If you want to know why, I invite you to read on. The blog post is written in Freytag’s dramatic structure for added effect.

Exposition

We’ve designed a system that runs on multiple instances that communicate in all sorts of ways. A central archive instance stores all data related to measurements. The whole network revolves around the notion of measurement. Measurement data is the most precious data, and all instances will either produce or consume data based on these measurements.

Most important for human operators is an instance that lets you view all existing measurement data. Let’s call it the viewer. The viewer displays an overview list of all measurements in a given context and lets the operator choose to view ever more details of any of them. To be able to provide the overview list as fast as possible, we added a cache that holds the information.

Rising action

This measurement list cache was the source of all kinds of peculiar behaviour in the system. Most, but not all, measurement data was incomplete. The list cache entries were assembled from different sources that were available at different times, so it seemed that while one part of the data got written to the cache, another part couldn’t be written for whatever reason. The operator could load detailed data for a few measurements, but the majority just produced an error message that the data couldn’t be found (despite it being present).
The most obviously broken functionality left the following trace in the log files (paraphrased):

- storing measurement at 2016-02-28T13:25:55.189+01:00 into the list cache
- measurement stored
[...]
- loading measurement at 2016-02-28T13:25:55.189+01:00 from the list cache
- error: measurement not found in list cache

So, the system is essentially telling me that it can’t load some data it just stored. As you can imagine, this may lead to some questions about the sanity of the database product underneath.

Climax

After some investigation and fruitless integration testing, it dawned on me: the problem wasn’t timing or the database. All the bugs could be explained by only one circumstance: measurements were ultimately identified by their timestamp, the moment the measurement was made. There’s also a location, a type and some other information in the identifier for each measurement, but only the timestamp changes between two measurements in the same narrow context. And the timestamp was stored in different precisions, depending on the origin of the measurement identifier. Most identifiers were created at the measurement-producing system instances (let’s call them measurers) and had millisecond precision. As soon as they got stored in the production database (but not our development database), they lost the milliseconds. And some of the most important measurement data got exported to third-party systems, using minute-based precision. So we had one measurement identifier in the system, but in three different flavors, mostly incompatible with each other.

Falling action

That’s why the log excerpt above never occurred in development, but in production: The measurement is stored in the database, the used identifier gets passed around in the software, but a query on the exact same identifier in the database yields no result because the timestamps now differ in the millisecond range. And the strange effects that sometimes, everything worked just fine? That’s when the milliseconds are zero by chance. Given that most actions in the system are scheduled and performed automatically exactly on the zero mark, the zero milliseconds case happened more often than it would in an even distribution.

Our system dealt with three types of measurement identifiers: Millisecond-precise identifiers produced by the measurers, second-precise identifiers used by the measurement list cache and minute-precise identifiers used (and sometimes fed back into the system) by the data export. These identifiers were incompatible even for the same measurement most of the time, but not always. In unit tests, the timestamps were made up and didn’t reveal the problem properly (who thinks about odd milliseconds when making up a timestamp?).

My solution was to pull this incompatibility up into the type system. Instead of one measurement identifier, there are now three: MillisecondPreciseIdentifier, SecondPreciseIdentifier and MinutePreciseIdentifier. An identifier of higher precision can be converted to an identifier of lower precision, but not the other way around. Every time a measurement identifier is created, it needs to explicitly state the precision of its timestamp. This made the compiler highlight the problematic usages clearly as type conflicts and therefore made dealing with the problem much easier.
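
Here is a minimal sketch of this idea in C++ (the class names are the ones mentioned above, but the representation and member names are made up, and the location/type parts of the real identifiers are omitted): each precision gets its own type, and conversions only exist towards lower precision.

#include <cstdint>

// Only the timestamp part of the identifier is sketched here.
class MinutePreciseIdentifier {
public:
  explicit MinutePreciseIdentifier(std::int64_t epochMillis)
    : minutes_(epochMillis / 60000) {}
private:
  std::int64_t minutes_;
};

class SecondPreciseIdentifier {
public:
  explicit SecondPreciseIdentifier(std::int64_t epochMillis)
    : seconds_(epochMillis / 1000) {}
  // converting down to a coarser precision is allowed ...
  MinutePreciseIdentifier toMinutePrecision() const {
    return MinutePreciseIdentifier(seconds_ * 1000);
  }
private:
  std::int64_t seconds_;
};

class MillisecondPreciseIdentifier {
public:
  explicit MillisecondPreciseIdentifier(std::int64_t epochMillis)
    : millis_(epochMillis) {}
  SecondPreciseIdentifier toSecondPrecision() const {
    return SecondPreciseIdentifier(millis_);
  }
private:
  std::int64_t millis_;
};
// ... but there is no conversion back up, so a second-precise identifier
// can never masquerade as a millisecond-precise one: mixing them up
// becomes a compile error instead of a missing database row.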

Revelation

Choosing a timestamp as a vital part of a (measurement) identifier was a mistake from the beginning. The greater problem was the omission of the timestamp’s precision. Timestamps behave more like floating-point numbers and less like integers, even if every timestamp can be represented by a long. As soon as I made the precision of each timestamp clear to the compiler, the bugs revealed themselves. The annoying difference between the development and production database would have been detected much sooner, too, because a millisecond-precise timestamp will now warn in the log files if its millisecond part is zero. As soon as this log entry shows up very often, it’s clear that something is wrong. The new datatypes not only serve as a clearer API contract definition tool, but also as a runtime sanity check.

If you don’t want to repeat this mistake, keep in mind that each timestamp, date or whatever time-related data type you use will inherently have a maximum precision. As soon as you mix different precisions in the same data type, you’re going to have a bad time. Explicitly state the required precision in your type system and your compiler will keep an eye on it, too.

Simple C++11 – Part III – Best friends

Now that we have the whole rigid setup of how to create a compile unit and a class declaration out of the way, we can finally start to write some code. What separates simple modern C++ code from the old ways is the degree of abstraction you can use to write your code. Previously, you had to think in memory and instructions. Now, powerful abstractions and language mechanisms help you to think in values and operations, and still get down to the bare metal of the machine when you need to. Here’s my personal set of “best friend” language and library features that helps me be as expressive as possible in lower-level application code and still leverage the raw power of C++.

std::vector<T>

With all its simplicity, it is still powerful enough to handle the greater part of all memory management issues. Better yet, it maps excellently to modern hardware and even when used naively, it is often extremely efficient. And in the rare cases when it is not, the performance can usually be easily improved by using std::vector::reserve.
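
A small sketch of the reserve idiom (the element type and count are made up for illustration): if you know the number of elements up front, a single allocation replaces repeated reallocation during growth.

#include <cstddef>
#include <vector>

std::vector<double> sampled_signal(std::size_t count) {
  std::vector<double> samples;
  samples.reserve(count); // one allocation instead of repeated growth
  for (std::size_t i = 0; i < count; ++i) {
    samples.push_back(0.5 * static_cast<double>(i)); // placeholder data
  }
  return samples;
}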

With C++11, you can now even toss it around, nest it and return huge vectors from functions without any performance problems. Also, initializer_lists make it easy to fill them with data.

std::vector<int> my_special_numbers() {
  return {4, 8, 15, 16, 23, 42};
}

Such code is no longer a subtle performance problem, but actually encouraged.

There’s no doubt that whenever you need a container, std::vector should be your first candidate.

for-each

Printing a range like that is now easy. No need to even know about the existence of iterators or use counters:

for (auto&& number : my_special_numbers()) {
  std::cout << number << std::endl;
}

std::unordered_map<K,V>

For the rare cases when a flat vector will just not suffice, this neat hash-map will make your life easier. C++11’s initializer syntax makes it a lot cleaner to fill these than before:

std::unordered_map<std::string, int>
my_icecream_ratings() {
  return {
    {"vanilla", 3},
    {"chocolate", 9},
    {"strawberry", 8},
    {"raspberry", 7},
    {"lemon", 3}
  };
}

auto

And now working with them becomes nice and easy too:

auto ratings = my_icecream_ratings();
ratings.insert({"caramel", 2});
std::cout << "Chocolate was a "
  << ratings["chocolate"];

You can even change the result type to an unordered_multimap or something similar and the code will still work.

std::shared_ptr<T>

In a perfect or, should I say, functional world, shared ownership would not be a thing. Pointers or even references would not exist. Shared ownership just makes things a lot more complex than clear ownership. But it regularly happens that, when requirements change, this or that object is no longer exclusively owned by that other object. Or the lifetime of an object cannot easily be scoped in the presence of multithreading. When this happens, std::shared_ptr will make your tasks bearable. This is as close as you usually get to completely automatic lifetime management in C++.

void save_image_in_background(
  std::shared_ptr<image const> raw_image) {
  auto thread = std::thread([raw_image]{
    raw_image->save("raw.png");
  });
 
  thread.detach();
}

I like to think of pointers as a necessary evil. Sometimes, the alternative just makes things even more confusing, and when that happens, you at least don’t want manual resource management in the way.

Of course, std::unique_ptr seems to be a powerful competitor for shared_ptr’s tasks, but in my experience, you very rarely need a single-ownership pointer in application code. Why not use a moveable type instead? unique_ptr can be useful as a helper to implement library primitives, but you should rarely encounter one in application-level code.
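
As a sketch of what I mean by a moveable type (the class and its members are invented for illustration): a value type that owns its data through members like std::vector is already cheap to move, so ownership can be handed around by value, without any pointer in the interface.

#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Owns its pixels through std::vector; moving an image_buffer transfers
// ownership just like a unique_ptr would, but without the indirection.
class image_buffer {
public:
  image_buffer(std::size_t width, std::size_t height)
    : width_(width), height_(height), pixels_(width * height) {}

  std::size_t width() const { return width_; }
  std::size_t height() const { return height_; }

private:
  std::size_t width_;
  std::size_t height_;
  std::vector<std::uint8_t> pixels_;
};

// Returned by value: the buffer is moved out, no heap-allocated wrapper needed.
image_buffer decode_png(std::string const& path);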

Less is more

Note how many fancy C++11 features did not make my list. For example, lambdas are very useful – and I even used one in my shared_ptr example. But they should be used in moderation. They allow you to define code out of place, to be executed at a later time, which makes it harder to reason about.
Likewise, things like variadic templates are great for library code, but rarely help at the application level.

This ends my small series on C++ for now. I hope I have shown how concentrating on a few simple features helps you write more maintainable and less obscure C++ code, on a level of abstraction that is not lower than most comparable languages. Do you have other methods to achieve this? Or do you even want to have this? I’d like to hear!

The JavaScript ‘console’ Object

Most JavaScript developers are familiar with these basic functions of the console object: console.log(), .info(), .warn() and .error(). These functions dump a string or an object to the JavaScript console.

However, the console object has a lot more to offer. I’ll demonstrate a selection of the additional functionality, which is less known, but can be useful for development and debugging.

Tabular data

Arrays with tabular structure can be displayed with the console.table() function:

var timeseries = [
 {timestamp: new Date('2016-04-01T00:00:00Z'), value: 42, checked: true},
 {timestamp: new Date('2016-04-01T00:15:00Z'), value: 43, checked: true},
 {timestamp: new Date('2016-04-01T00:30:00Z'), value: 43, checked: true},
 {timestamp: new Date('2016-04-01T00:45:00Z'), value: 41, checked: false},
 {timestamp: new Date('2016-04-01T01:00:00Z'), value: 40, checked: false},
 {timestamp: new Date('2016-04-01T01:15:00Z'), value: 39, checked: false}
];

console.table(timeseries);

The browser will render the data in a table view:

(Screenshot: output of console.table() rendered as a table in the JavaScript console)

This function not only works with arrays of objects, but also with arrays of arrays.

Benchmarking

Sometimes you want to benchmark certain sections of your code. You could write your own function using new Date().getTime(), but the functions console.time() and console.timeEnd() are already there:

console.time('calculation');
// code to benchmark
console.timeEnd('calculation');

The string parameter is a label to identify the benchmark. The JavaScript console output will look like this:

calculation: 21.460ms

Invocation count

The function console.count() can count how often a certain point in the code is called. Different counters are identified with string labels:

for (var i = 1; i <= 100; i++) {
  if (i % 15 == 0) {
    console.count("FizzBuzz");
  } else if (i % 3 == 0) {
    console.count("Fizz");
  } else if (i % 5 == 0) {
    console.count("Buzz");
  }
}

Here’s an excerpt of the output:

...
FizzBuzz: 6 (count-demo.js, line 3)
Fizz: 25 (count-demo.js, line 5)
Buzz: 13 (count-demo.js, line 7)
Fizz: 26 (count-demo.js, line 5)
Fizz: 27 (count-demo.js, line 5)
Buzz: 14 (count-demo.js, line 7)

Conclusion

The console object not only provides basic log output functionality, but also some lesser-known, yet useful debugging helper functions. The Console API reference describes the full feature set of the console object.

Making CherryPy Application WSGI compatible

When choosing a micro web framework, the ability to evolve it to fit your needs is key. As CherryPy is one of our choices, I want to show you how to evolve it in terms of the web server. Of course you can use the embedded CherryPy web server in development and for small sites. It is fast enough for many use cases and supports important features like SSL, so you may come a long way just using it. There are nevertheless several reasons to put your CherryPy application behind a tried and trusted native web server like Apache or nginx:

  • Consistent production environment for different application servers (e.g. for Java and Python) behind a powerful and uniform frontend
  • Many options and possibilities using Apache modules
  • Well known and understood environment for administrators
  • Separation of web-facing http server concerns and your web application
  • Improved performance and security

Making CherryPy WSGI-compatible

The good news is that CherryPy application objects are already WSGI-compliant applications. So creating a wsgi.py like the following will enable integration with Apache’s mod_wsgi:

import cherrypy

def application(environ, start_response):
    cherrypy.tree.mount(MyApp(), script_name=None, config=None)
    return cherrypy.tree(environ, start_response)

Integrating with Apache’s mod_wsgi

It is quite easy to integrate a Python WSGI application with Apache using mod_wsgi. If the module is present, you just need to add some directives telling Apache where to mount the WSGI application defined by your wsgi.py script:

WSGIScriptAlias /my_app /path/to/wsgi.py
# May be required to allow your web app to use libraries installed on the system
<Directory /usr/lib/python2.7/site-packages/ >
    Order deny,allow
    Allow from all
    Require all granted
</Directory>

After you have such a setup working properly, you can consult the mod_wsgi documentation on how to improve it with regard to threading, script reloading etc.

Configuring the WSGI-app

Many web applications need some form of configuration. Your application should not make assumptions about its install location or some directory structure. Generally speaking, an application should never assume that it can use relative path names for accessing the filesystem. Access to operating system environment variables is also dangerous, because the application may run in different contexts. An easy and safe way is to provide the configuration directory and other values as WSGI environment variables that we can specify in the mod_wsgi configuration:

WSGIScriptAlias /my_app /path/to/wsgi.py
SetEnv configuration_dir /etc/my_shiny_web_app
...

We can access the WSGI environment in Python like so:

import os

import cherrypy

def application(environ, start_response):
    configdir = environ['configuration_dir']
    cherrypy.config.update(os.path.join(configdir, 'global.conf'))

    cherrypy.tree.mount(MyApp(), config=os.path.join(configdir, 'my_app.conf'))
    return cherrypy.tree(environ, start_response)

Note: Because your web app can be mounted at locations other than “/” on the web server, your application should not hard-code absolute links and the like. They will all be dead if your app is mounted at a different location.

Modular development of complex UIs with atomic design

Creating user interfaces is traditionally an expensive development effort. Every web page, dialog or screen is hand-crafted from scratch. Developers write object-oriented, modular code throughout the whole application and use myriads of frameworks and libraries, but as soon as the UI level is reached, everything breaks down. Each view is written in isolation.
Designers have a different view of the UI. They see the interface through the lens of style guides and guidelines. The look and feel throughout the interface should be consistent and should be experienced as a whole.

Atomic design

These two worlds can be combined.
Many designers and developers see the need to design and create design systems. Brad Frost is the one who coined and described a language for structuring user interfaces: atomic design. The names take heavy cues from chemistry, but the important part is the idea of containment.
Atoms are the low-level building blocks: e.g. the widgets in native UI kits or the tags in the web world. But things like colors or typefaces are atoms, too.
Molecules are simple combinations of atoms. A search field which is comprised of a label, a text field and a button is a molecule.
Organisms are more complex UI components. Organisms can be created from atoms, molecules and other organisms. A complete form would be a perfect example.
Combining all these into a full page or window layout is called a template in atomic design. This template is the abstract definition, the blueprint of the complete screen or page.
Filling this template with content results in a page.
All this sounds pretty abstract, and the examples found on the web are very basic, so let’s dive in and identify the parts in an example UI.

Decomposing a complex UI

Here we take an example from the excellent UI concept by Lennart Ziburski: desktop neo. (If you haven’t seen this, you should take a look).

(Image: the finder view of desktop neo)

Our first decomposing task is to identify distinct parts of the user interface and give them names. These would be the organisms.

(Image: the organisms identified in the example UI)

Interlude: how to name things

As with every naming endeavor, it is hard to decide which name is appropriate. Dan Mall argues in favor of display patterns as name givers. Display patterns describe the (abstract) visual aspect and can be used with multiple content patterns. Content patterns describe the types of elements and can be rendered in multiple display patterns. Since we want to name an organism which is content-agnostic, we should take cues from the visual appearance, not the content inside it.

Decomposing further

Now we break those organisms down further. Let’s start with the card grid organism. As the name already suggests, it organizes cards in a grid or tabular layout. We have different kinds of cards. First take a look at the preview card on the left.

(Image: preview card)

The preview card consists of a thumbnail showing a preview of an item, an icon and a label. This is a simple interface element and is therefore a molecule created from the three mentioned atoms. A name for this molecule could be “image with caption”.

Interlude: testing states in the abstract

Our example touches an important and often neglected part of interfaces: you need to test for different content. Here the longer name is cut off with an ellipsis. This is a simple case. But what if the name is missing? Or contains unusual characters? And so on. Besides that, we need to indicate the current state of the interface as well. Do we have an error? Are we loading something? Interfaces have different states – five, to be exact. The good part is that we can (and should) test them on the abstract level of atoms, molecules and organisms.

A more complex organism

The cards in the right card grid are more complex examples. Every card is an organism with a title (atom) and a content part (molecule/organism).

The weather card has a simple molecule consisting of an icon and two labels.

(Image: weather card)

The schedule card, in contrast, consists of a list organism which itself includes molecules. These molecules have two labels and one or more actions (links or buttons).

(Image: schedule card)

The other parts of the interface can be decomposed as well. Charlotte Jackson describes an interesting approach to decomposing your existing interfaces: print them out, cut them to pieces and name these pieces.

Making the jump

Until now we have talked about the designer’s view of the interface, but the developer has to translate all these definitions into code and hook them up to content. The approach on the atomic design side is largely the same for web and native, but in development we have to distinguish between them.

On Rails

Let’s first take a look at the web side of things. We could use a client-side component framework like React, but here we like to keep it simple.
We just use Rails in our example, but every other web framework will work as well. We need to organize our newly defined chemistry lab in three parts: HTML (or views), CSS and JavaScript.
For CSS and JavaScript we use the include mechanism of the asset pipeline, or import if you use SASS. Each level gets a separate directory inside app/assets/stylesheets or app/assets/javascripts respectively.
We name our directories atoms, molecules and organisms. The same is true for views: a directory named molecules and one named organisms inside app/views/atomic_design. No need for atoms, since they are basic HTML tags or helpers. Atomic design’s templates become Rails’ layouts. Via calls to render we can inject content into these abstract organisms:

<%= render layout: '/atomic_design/organisms/card', locals: {title: 'weather in Berlin'} do %>
  <%= render layout: '/atomic_design/molecules/image_with_text', locals: {image_class: 'fa_sunny'} do %>
    <span class="temperature">23 °C</span><span class="condition">Sunny</span>
  <% end %>
<% end %>

Native

On the native side we also need a component and include mechanism. Usually every widget toolkit has a preferred way to create custom components or containers. If you develop for iOS, you extend the UIView class in order to create a custom UI component. These custom views would be the molecules and organisms of our design system. To combine them, you add them to other views as their subviews. The init* methods or properties can be used to fill them with content. The actual mechanism is similar for most native UI kits.

Design with benefits

Using atomic design to create a design system seems to be a lot of work at first. And it is.
We already mentioned two benefits: creating a common understanding and a better way to test things in isolation. Design systems help all project participants, not only designers and developers, to share a common language and understand each other. They help new members hit the ground running. With tools like Pattern Lab, your atomic design system can also be used as documentation.
On the testing front, the holy grail is to test things both in isolation and in integration; atomic design and its strict separation help immensely here. Often only the sunshine or ideal state is tested, and maybe a handful of error states. Thinking about all five states and the diverse structure of your content in isolation, at the level of molecules and organisms, makes this a manageable endeavor and maps a path through the jungle of our interfaces. The value which atomic design brings to the table is that your testing effort scales with the number of molecules and organisms, not with the number of pages or screens. The isolation which a design system, and in particular atomic design, creates is comparable to the advent of unit testing in the world of software development. The separation of display patterns and content patterns reminds me of the functional paradigm with its separation of data and functions.

Simple C++11 – Part II – Class declarations

In the previous part, I’ve shown my guidelines for setting up compilation units. When writing simple application code with C++11, either classes or free functions should be your main building blocks. Therefore, in this part, I will focus on what to look out for when writing class declarations.

While templates can be very useful, they do not scale well as the code base gets larger. Metaprogramming or other niche styles have their places, too, but I like to look at those as a means to create language extensions rather than principal implementation tools.

Avoid inline implementations

…especially in header files. It can be tempting to write classes solely in the header file. In fact, it has almost become a sign of quality for parts of C++ code to be header-only. But this scales badly in most cases, and evolving such a code base will result in a dramatic explosion of compile times. Always splitting classes into a declaration and a definition acts as a first-level compile firewall and dependency breaker. Users of your class no longer need to worry about changes in the implementation of the member functions of that class. Note that those changes are often indirect: a change only affects a class that is used in the implementation of your class’ member functions. By splitting the declaration and definition, users of your class do not have to be recompiled.

But why stop at the compiler? The same argument holds for programmers. If you start to split interface and implementation on this level, you automatically provide ‘reader-firewalls’ as well. By just providing a clean header file, you are giving readers sort of a manual for your class. No need to look at the implementation at all, if the interface is well-defined.
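
A minimal sketch of that split (class and file names are made up): the header is the clean ‘manual’, the implementation file holds the details, and changes to the implementation no longer force users of the header to recompile.

// greeter.hpp – declaration only, the “manual” for readers of the class
#pragma once
#include <string>

class Greeter {
public:
  explicit Greeter(std::string name);
  std::string greeting() const;
private:
  std::string name_;
};

// greeter.cpp – definitions; edits here do not ripple into including code
#include "greeter.hpp"
#include <utility>

Greeter::Greeter(std::string name) : name_(std::move(name)) {}

std::string Greeter::greeting() const {
  return "Hello, " + name_ + "!";
}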

Inline code definition is also the main reason against excessive use of templates. Yes, they grant a lot of flexibility, but you pay a hefty price which needs to be justified by an enormous reduction of complexity elsewhere. In general, templates are a bit too powerful for their own good, which is why they need extra moderation.

Always declare implicit functions

Implicitly declared functions seem comfortable, but they have a few implications that are hard to understand. First off, if an implicit function gets generated for your class, it will be generated as inline. This means that the implementation becomes a dependency of all users of your class. This can have very subtle effects such as this:

#include <string>
#include <vector>

class Entry;
class EntryGenerator;

class EntryManager {
public:
  EntryManager(EntryGenerator& generator);
  int getEntryCount() const;
  std::string getIDForEntry(int index) const;
private:
  std::vector<Entry> mData;
};

On the surface, it looks like there should be no dependency (other than the name) on Entry when including this header. But there is!
The destructor is not declared, so it will get generated – as inline. Because deletion of a vector requires the held type to be complete, any place that needs to be able to destruct an EntryManager also needs to know how to destruct an Entry, which is not intended at all. Remember there’s a total of six functions that can be implicitly generated! Because of that, there are analogous problems for copy-construction, assignment, move-construction and move-assignment.

To avoid these problems, either delete the function explicitly in the header, default it in the implementation file, or actually implement it. You rarely need to do the latter, so I advise defaulting all the ones you need and deleting the rest:

#include <string>
#include <vector>

class Entry;
class EntryGenerator;

class EntryManager {
public:
  EntryManager(EntryGenerator& generator);
  EntryManager(EntryManager const&)=delete;
  EntryManager& operator=(EntryManager const&)=delete;
  EntryManager(EntryManager&& rhs);
  EntryManager& operator=(EntryManager&& rhs);
  ~EntryManager();
  int getEntryCount() const;
  std::string getIDForEntry(int index) const;
private:
  std::vector<Entry> mData;
};

And somewhere in the implementation file:

EntryManager::EntryManager(EntryManager&& rhs) = default;
EntryManager::~EntryManager() = default;
EntryManager& EntryManager::operator=(EntryManager&& rhs) = default;

This has another nice side effect because the vector-template gets instantiated into that object file and does not “bloat” all use-sites.

Exactly one public function section and one private data section per class

…starting with the public section. This is where you address the next programmer who has to read your class. And it should be the only place they need to look.

I avoid private member functions because they cannot be tested easily and can add hidden compile-time dependencies to a project. Why should a user of your class recompile if you change an implementation detail? For small and trivial implementation helpers, the unnamed namespace in the implementation file is a much better place. If those helpers become larger or more complex, it is a better idea to implement them in a collaborating class, which can be tested and reused.
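
A small sketch of that approach (names invented for illustration): the helper lives in the unnamed namespace of the implementation file, so it never appears in the header and can change or disappear without anyone recompiling.

// statistics.cpp – implementation file
#include <vector>

namespace {
// implementation detail: not declared in any header
double sum(std::vector<double> const& values) {
  double total = 0.0;
  for (double v : values) {
    total += v;
  }
  return total;
}
}

// declared in statistics.hpp, implemented here using the local helper
double mean(std::vector<double> const& values) {
  return values.empty() ? 0.0 : sum(values) / values.size();
}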

Protected member functions split your interface into two parts, one exclusively for derived classes and one for everyone (including derived classes). This is very rarely needed, and in almost all of those cases, a separate interface will scale better (although it is slightly harder to implement).

Either an interface or an implementation

So far, I have left inheritance out of the picture and only talked about concrete classes. Inheritance is actually rarely needed; composition often suffices. But if it is needed, make sure that a class is either concrete and final (an implementation), or has a complete and minimal set of pure-virtual member functions (an interface). This will result in shallow hierarchies and easily understood interfaces. Remember that inheritance is not a tool for sharing code between the classes you implement, but a tool for the code using those classes – i.e. where the Liskov Substitution Principle holds.

Now it gets really easy to implement new classes in the hierarchy: just implement all the functions in the interface. No more questioning whether to keep the default behaviour or override it. You will also automatically tend towards a clearer separation of components – things that need to be polymorphic move to the interface, other functionality merely uses it.

This pattern is useful even when polymorphism is not needed. Such small interfaces, devoid of any implementation detail, can act as another compiler firewall. Collaborators can work with just the interface and do not have to be recompiled when the implementation changes. Also, the interface can be implemented by mock or fake objects in testing.
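
A minimal sketch of this style (names made up): a small, pure-virtual interface plus a concrete, final implementation. Collaborators and tests only ever need the interface.

#include <string>
#include <vector>

// Interface: a complete and minimal set of pure-virtual member functions.
class EntryStore {
public:
  virtual ~EntryStore() = default;
  virtual void store(std::string const& id) = 0;
  virtual bool contains(std::string const& id) const = 0;
};

// Implementation: concrete and final, so the hierarchy stays shallow.
class InMemoryEntryStore final : public EntryStore {
public:
  void store(std::string const& id) override;
  bool contains(std::string const& id) const override;
private:
  std::vector<std::string> ids_;
};

// A test double just implements the same interface, e.g. a FakeEntryStore
// that records calls – no mocking framework required.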

Conclusion

This concludes the second part of the series. I originally intended it to be about how to write a whole class, but that would have been too much to digest for one post. I am well aware that some of these guidelines can stir quite the controversy in the C++ community. For example, declaring the implicit functions seems to be in conflict with the recently popular rule of zero. Scott Meyers had similar concerns, but does not quite touch the inline aspect.

For me personally, these guidelines have helped tremendously, especially when scaling to bigger code-bases. But as before, I am curious what others are thinking about this!