Take your programming course with a grain of salt, please

If you are cursed with silly rules in your programming course, we offer you some words of encouragement: find a mentor and keep both your sanity and your programming habits intact.

Recently, we talked with one of our former interns, who now studies informatics at university. He presented some code he had written for his programming course and we did a team code review. The review itself was a lot of fun and sparked quite a few discussions. At one point, we assessed the different implementation styles of a method, changing the rather complex single return code into an early return method. Our former intern (now student) listened to the solution and stated: “I am not allowed to do that.”

There was a sudden silence as everyone tried to comprehend what that meant.

The student explained: “My course instructor prefers the single return approach over the early return style.” Well, that’s one thing; we can handle different opinions. “And”, he continued, “he announced there will be a painful deduction of points if we don’t comply with this style.” When the class tried to discuss this point, the explanation given was: “the single return style is superior because the other style is frowned upon.”
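
To make the two styles concrete, here is a minimal, made-up sketch of the same method written both ways (our own illustration, not the student’s actual code):

// single return style: one exit point, the result is tracked in a variable
String describeSingleReturn(int value)
{
  String result;
  if (value < 0) {
    result = "negative";
  } else if (value == 0) {
    result = "zero";
  } else {
    result = "positive";
  }
  return result;
}

// early return style: each case returns as soon as it is decided
String describeEarlyReturn(int value)
{
  if (value < 0) {
    return "negative";
  }
  if (value == 0) {
    return "zero";
  }
  return "positive";
}

Both variants behave identically; with more cases and deeper nesting, the single return version accumulates bookkeeping state and indentation, which is exactly what the early return refactoring removes.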

We couldn’t believe it. But, as it turns out, there are many rules like the one above in this programming course. And nearly every rule is highly debatable, if not plain wrong (in our perception).

There is no problem with the presentation of certain rules in a beginner’s programming course. Novices need clear and precise rules to learn, according to the Dreyfus Model of Skill Acquisition. The concept just doesn’t work for students who aren’t on the Novice level anymore. These students are explicitly forbidden to create more advanced solutions. They are discouraged from looking into different programming styles because it will only harm their grades.

We can think of a possible explanation for this scenario: the assignments have to be evaluated by the course instructors. It takes a lot of hard work (and time) to evaluate hundreds of totally different solutions to the same problem. If the solutions are mostly similar in style and concepts, the evaluation is a lot easier and can be done without fully understanding the code.

This is a rather poor explanation. It says: “don’t be too advanced in your field of study or you will be too troublesome to attend to”. This is essentially anesthetization by decree. But the real problem arises when you realize that there won’t be any follow-up programming courses. Nobody will ever teach you the more advanced concepts or rectify the silly rules that were meant to get you along as a beginner. After you’ve successfully mastered this course, your studies focus on the more academic topics of our field. The next opportunity to develop your programming skills in a professional setting is your first software development job.

We don’t have a practical solution to this problem. One obvious solution would be to have more instructors evaluate fewer assignment solutions in the same amount of time, enabling them to dive deeper into the code and give better personalized feedback. What this scenario lacks, at the very least, is enough capable instructors. In reality, Novice level students (in the sense of the Dreyfus Model) are often taught by Advanced Beginner level instructors (called “tutors”).

But we have a word of encouragement for all you students out there feeling dumbed down by your instructors: it’s not your fault! Take your programming course rules with a (big) grain of salt and talk to other developers. If you don’t already know anybody in the industry, try to make contact with a fellow open source developer on the web. It’s really just the advice “Find a Mentor” from the book Apprenticeship Patterns (highly recommended for aspiring software developers) applied in real life.

Because if you don’t actively unlearn all these arbitrary rules, or at least put them into perspective, you’ll start your professional developer career with the burden of some really antiquated code quirks.

Good luck and tell us your story, if you want.

The Impatient Acceptance Test

When implementing new features, it is always a good idea to test them – preferably with automated acceptance tests. Yet there is a vast number of pitfalls to avoid when doing so, in order to put the testing effort to proper use. Just last week, I encountered a new one (at least for me): the impatient test.

The new feature was basically a long-running background operation that presents its results in a table on the GUI. If the operation succeeded, date and time are displayed in the corresponding cell of the table; if not, the cell is left untouched (i.e. left empty if there was no successful run yet, or, if there was, still showing the date of the last successful run).

During the test, four tasks were to be completed by the background operation, two of which were supposed to succeed and the others to fail. So, the expected result was something like:

[Figure: the expected result table]

Having obeyed the “fail-first” guideline and having seen the test pass later on, I was quite sure the test and the feature worked as intended. Yet manual testing proved otherwise: with the exact same scenario, task 4 always succeeded.

In fact, there was a bug that caused task 4 to produce a false positive. But why did the automated test not uncover this flaw? Let’s recall what the test does:

  1. Prepare the test environment and the application
  2. Start the background operation comprised of tasks 1-4
  3. Wait
  4. Evaluate/assert the results
  5. Clean up

Some investigation unveiled that the problem was caused by the fact that the test just did not wait long enough for the background operation to finish properly, i.e. the results were evaluated before task 4 was finished. Thus, the false positive occurred just after the test had checked whether it was there:

[Figure: timeline of the test – the results are evaluated before everything is finished]

After I had spotted the source of the problem, several possible solutions came to mind, ranging from “wait longer in the test” to “explicitly display unfinished runs in the application”. The most elegant and practical of these is to have the test wait for the background operation to finish instead of waiting for a fixed period of time, even though this required some more infrastructure.
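
A minimal sketch of such a wait, assuming the test can ask the operation whether it is done (the BackgroundOperation interface and its isFinished() method are made up for this illustration; the real infrastructure will look different):

// hypothetical hook the application exposes to the test infrastructure
interface BackgroundOperation {
  boolean isFinished();
}

// poll the completion hook instead of sleeping for a fixed period;
// fail the test if the operation takes suspiciously long
static void waitForCompletion(BackgroundOperation operation, long timeoutMillis)
    throws InterruptedException {
  long deadline = System.currentTimeMillis() + timeoutMillis;
  while (!operation.isFinished()) {
    if (System.currentTimeMillis() > deadline) {
      throw new AssertionError("background operation did not finish in time");
    }
    Thread.sleep(100); // small poll interval keeps the wait responsive
  }
}

The timeout only limits how long the test may block; the assertions run as soon as the operation really is finished.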

Though acceptance testing is a great tool for developing software, this experience reminded me that flawed, or not completely correct, tests can lure you into a false sense of security if you pay too little attention. Manual testing in addition to automated testing may help to avoid these pitfalls.

Don’t mix C++ smart pointers with references

This post will teach by example that mixing smart pointers with references in C++ is not a particularly good idea.

As I did in the past, I will use this post as a means to push the following principle deeper into my head – and hopefully into yours as a reader and C++ programmer:

Do not mix smart pointers with references in your C++ programs.

Of course I knew that before I created this little helper library that was supposed to make it easier to send data asynchronously over an existing connection. Here is the situation (simplified):

class A
{
  public:
    void doStuff();
    ...

  private:
    // a private shared_ptr to B
    boost::shared_ptr<B> _bPointer;
};

class C
{
  public:
    C(B& b) : _bRef(b)
    {}

    ~C()
    {
      _bRef.resetSomeValueToDefault();
    }

  private:
    // a private reference to B which is set in the ctor
    B& _bRef;
};

void A::doStuff()
{
  createBpointerIfNotExisting();
  C myC(*_bPointer);
  myC.someMethodThatDoesSomethingWithB();
  if (someCondition) {
    // Delete this B instance.
    // A new instance will be created next time
    _bPointer.reset();
  }
}

So class A has a shared pointer to B, which is passed as a reference to an instance of class C in the method A::doStuff. Class C stores the B instance as a reference and interacts with it during its lifetime, which ends at the end of A::doStuff.

The last interaction occurs at the very end of its life – in the destructor.

The most important facts are all above – take a few moments to spot the problem yourself …

The following happens (in A::doStuff):

  • createBpointerIfNotExisting(): if no instance of B exists (i.e. _bPointer is null), a new B instance is created and held in _bPointer
  • myC, an instance of C, is created on the stack; the dereferenced B instance is given as ctor parameter and stored as the reference _bRef
  • if someCondition is true, _bPointer is reset, which means the B instance gets deleted
  • A::doStuff() ends and myC goes out of scope
  • the destructor of C is called and _bRef is accessed
  • since the B instance does not exist any more … memory corruption!!!

The most annoying thing with this kind of error is that the program crashes somewhere, but almost never where the error actually occurred. This means that you get stack traces pointing you right into some rock-solid 3rd party library that has never failed in all the time you have known and used it, or into some completely unrelated part of your code that had worked without any problems before and hasn’t been changed in years.

I even had these classes unit tested before I integrated them. But for some strange reason – maybe because everything gets reset after each test method – the bug never occurred in the tests.

So always be very cautious when you mix smart pointers with references, and when you do, make sure you have your object lifetimes completely under control! In the example above, handing the shared_ptr itself to C instead of a bare reference would have kept the B instance alive for C’s whole lifetime.

Separate Master Data and Variable Data

If you come across an accumulation of data fields, you might want to split them into master data and variable data. At the very least, this can help when dealing with storage issues.

In the design of data structures or objects, there are two different kinds of data, namely “Master Data” (German: Stammdaten) and “Variable Data” (German: Bewegungsdaten). The first kind, master data, are data fields that seldom change over time and can sometimes be used to “identify” an object. The second kind are data fields that capture the current value of some aspect of an object, but are expected to change in the future. If you can categorize your data fields in this manner, think about separating them into different objects.

Let me give an actual example. An application we develop has a central instance (the “center”) that distributes situational data to several operation desks, powered by client applications named the “clients”. Each client instance is registered in the center to enable the supervision and administration of clients. The data for each client is stored in a ClientInformation object that is mapped to a database relation. Let’s have a look at some of the data fields of ClientInformation:

  • int internalIdentifier – the database primary key for the record
  • String type – some type of the client application
  • String instanceName – the given readable denotation of the operation desk
  • String version – the currently installed version of the client application
  • Date connectionDate – the last time this client application established a connection
  • Date lastActionDate – the last time this client application issued an action command (“was active”)

We could start all kinds of (justified) discussions about primitive obsession, too much information in one place and so on, but for this blog entry, only the categorization into master data and variable data is of interest. My opinion on the example is that the first three data fields (internalIdentifier, type and instanceName) definitely belong in the master data category. The last two data fields are clearly variable data, while the version field is something in between. My gut tells me to categorize the version as master data, because it won’t change on a daily basis.

When separating the two categories of data, the ClientInformation object may turn into a reference holder object only. In this case, the ClientInformation holds two references, one to a new ClientMasterData object (holding internalIdentifier, type, instanceName and version) and another one to a new ClientVariableData object (holding connectionDate and lastActionDate).

A less radical modification would be to let the master data remain in the ClientInformation object and only extract the variable data into a new ClientConnectionData object. If a client connects, only the referenced ClientConnectionData object has to change.
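
A sketch of this less radical variant (field types simplified; the update method is made up for illustration):

import java.util.Date;

// variable data, extracted into an object of its own
class ClientConnectionData {
  Date connectionDate;  // last time the client established a connection
  Date lastActionDate;  // last time the client issued an action command
}

class ClientInformation {
  // master data: seldom changes, partly identifies the client
  int internalIdentifier;
  String type;
  String instanceName;
  String version;

  // variable data: changes with every connection or action
  ClientConnectionData connectionData = new ClientConnectionData();

  // if a client connects, only the referenced object changes
  void clientConnected(Date now) {
    connectionData.connectionDate = now;
  }
}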

If you separate your master data from the variable data, you can easily concentrate on the variable data for performance optimizations. This is where the data changes will happen and where a tuned storage strategy will pay off. The master data should be designed more carefully concerning the type information: if we really start the discussion about primitive obsession, I would first tend to the master data fields and argue that the type shouldn’t be a String but an Enum, and that the version should be a more sophisticated Version type. This could be modelled even with a slow object/relational mapper, because the data is only written/read once.

The next time you come across one of your data model objects that contains more than two data fields, have a look at how they categorize into master and variable data. Perhaps you will see a good reason to split the object.

Depth-first programmers

Depth-first programmers are always busy creating horribly complicated solutions that are somehow off the mark. Here’s why, and what to do about it.

Just as there are at least two fundamentally different approaches to searching, namely depth-first and breadth-first search, there are also different types of programmers. The depth-first programmer is a dangerous type, as he is prone to yak shaving and reinventing the wheel.

The depth-first programmer

Let me try to define the term “depth-first programmer” with a little (true) story. A novice Java programmer was to make some changes to existing code. To secure his work, he wanted to (and was supposed to) write unit tests in JUnit. He started the work and soon enough, first results could be seen. But when he started to write his tests, the progress notifications stopped. The programmer worked frantically for hours and then days to write what appeared to be some simple data-driven tests.

Finally, the novice Java programmer reported success and showed his results. He had written his tests and “had to extend JUnit a bit to do it right”. Wait, what? Well, in JUnit, test methods cannot have parameters, but the programmer’s tests needed to be parameterized. So he replaced the part of JUnit that calls the test methods by reflection with an “improved” algorithm that could also inject parameters. His implementation relied on obscure data structures that provided the actual parameter values and only really worked for his needs. The whole mess was nearly incomprehensible, badly bloated, and consumed most of the development time for the unit tests.

And it was totally unnecessary, once you learn about “Parameterized” JUnit4 tests or build light-weight data drivers instead of changing the signature of the test method itself. But this programmer dove deep into JUnit to adjust the framework itself to his needs. When asked about it, he stated that “he needed to pass the parameters somehow”. That’s right, but he chose the most expensive way to do so.
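
For comparison, here is a minimal sketch of a data-driven test using the Parameterized runner that ships with JUnit4 – no changes to the framework required (the test subject is made up):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class AdditionTest {

  @Parameters
  public static Collection<Object[]> data() {
    // each array holds the parameters for one run of the test methods
    return Arrays.asList(new Object[][] {
      { 1, 1, 2 },
      { 2, 3, 5 },
      { -1, 1, 0 },
    });
  }

  private final int a;
  private final int b;
  private final int expectedSum;

  // the runner calls this constructor once per parameter set
  public AdditionTest(int a, int b, int expectedSum) {
    this.a = a;
    this.b = b;
    this.expectedSum = expectedSum;
  }

  @Test
  public void addsCorrectly() {
    assertEquals(expectedSum, a + b);
  }
}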

He exhibited the general behaviour of a depth-first programmer: whenever you face a problem, take the first possible solution you can come up with and work on it without evaluating it against other possibilities. And continue on that path without looking back, no matter how long it takes.

Stuck in activism

The problem with this approach should be common sense. The obvious option isn’t always the best or even a good one. You have to evaluate the different possible solutions by their advantages and drawbacks. A less obvious solution might be far better in every aspect but obviousness. Another problem with this approach is the absence of internal warning signs.

Getting stuck is an internal warning sign every programmer understands. You’ve worked your way in a certain direction and suddenly, you cannot advance further. Something blocks your anticipated way of solving the problem and you cannot think of an acceptable way past it. A depth-first programmer never gets stuck this way. No matter how expensive, he will pursue the first thing that brings him closer to the target. A depth-first programmer always churns out code at full speed. Most of it isn’t needed on second thought and can be plain harmful when left in the project. The depth-first programmer will always report progress even when he needs days for a task of minutes. He is stuck in activism.

Progress without guidance

This isn’t a rant about incompetent programmers. Every good programmer knows the situation when you suddenly realize that you’re shaving a yak when all you wanted to do is to add a feature to the code base. This is your self-guidance system regaining consciousness after a period of auto-piloting in depth-first mode. Every programmer behaves depth-first sometimes.

This can be explained with the Dreyfus Model of Skill Acquisition. On the first stage, called “Novice”, you are simply not capable of proper self-evaluation. You cannot distinguish between good and not so good approaches beforehand or even afterwards. Your expertise in the narrow field of the problem at hand isn’t broad enough to recognize an error, even when you are working on that error yourself for prolonged periods.

In the Dreyfus Model, a beginner needs external guidance. Somebody with more experience has to point out errors for you and formulate alternatives as clearly and specifically as possible. Without external guidance, a beginner will become a depth-first programmer. We’ve all been there.

Be a guide

The real failure in the story above was mine. Instead of interacting with the novice Java programmer after a few hours, when I thought he should have been done, I let him “advance”. I could have avoided the resulting mess by providing guidance and a few alternative solutions for the immediate problem. I would have given an overview of the problem’s context and some hints about the general direction in which the task should be solved.

Every depth-first programmer works in a suboptimal environment. The programmer tries his best; it’s really the environment that could do better.

So, the next time you see somebody working frantically on a problem that should be rather easy to solve, lend him a hand. Be gentle and empathetic about his attempt and work with proposals, not instructions. Perhaps you’ll spare yourself a mess like an unnecessarily extended JUnit library, and the depth-first programmer the frustration of seeing his hard work of several days silently discarded.