Open Source Love Day March 2010

Our Open Source Love Day for March 2010 brought love for Grails, our cmake hudson plugin, RXTX and winp. Everything went smooth and was lots of fun.

Yesterday, we held our first Open Source Love Day (OSLD) of this year. The last OSLD took place in December 2009. We then repurposed one day each in January and February for our relocation to the new (and much bigger) office. But now we are back to regular duty and had the time to donate some work back to the Open Source ecosystem.

The Open Source Love Day

We introduced a monthly Open Source Love Day to show our appreciation to the Open Source software ecosystem and to donate back. We heavily rely on Open Source software for our projects. We would be honored if you find our contributions useful. Check out our first OSLD blog posting for details on the event itself.

Participate in our OSLD by using the features we built today:

  • Grails still has some bugs. Instead of only complaining about them, we try to fix them. There is a bug with checkboxes and nested boolean properties that bugged us in a customer project. It’s filed under GRAILS-3299 and has a proposed patch now.
  • In previous OSLDs, we produced the cmake hudson plugin. In the corresponding blog entry, comments with bug reports began to pile up. They addressed issues with hudson master/slave setups. So we implemented a hudson master/slave test environment, using VirtualBox virtual machines as slaves. This setup quickly revealed the problems, which were typical enough to deserve a complete blog entry of their own soon. Fixing them resulted in the new cmake hudson plugin version 1.2, released yesterday.
  • We are using the RXTX project for serial (RS232) communication in several projects. We are really glad the project exists, because the “official” communications API from Sun/Oracle is nothing but a mess. With RXTX, our only problem was with emulated COM ports. Emulated COM ports exist when you use a USB->Serial or Ethernet->Serial converter, which is what our customer chose to do. If you unplug the converter during operation, the corresponding COM port disappears. This causes RXTX to crash, bringing the JVM down with it. We wrote a test application and used it with every converter we own (and we own quite a lot of them!). Then we began tracing the RXTX source code (at C code level), altering it to “only” throw an IOException when the virtual COM port disappears. The corresponding patch will be proposed to the RXTX project soon; the sketch after this list shows what this buys application code.
  • Another API we use a lot is the tiny winp project, written by Kohsuke Kawaguchi, the creator of hudson. We use it to kill Windows processes in a project that runs on Windows 2000, Windows XP and Windows 7. The latest Windows version seemed incompatible with winp, even in the 32bit edition. We didn’t find the cause, but developed a workaround that will be proposed to the winp project soon.
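To illustrate what the patched behaviour means for application code, here is a minimal sketch of an RXTX read loop. The port name, baud rate and byte handling are made up for the example; the point is only that, with the patch applied, a vanished emulated COM port should surface as an IOException on the blocking read instead of tearing down the whole JVM:

```java
import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;
import java.io.IOException;
import java.io.InputStream;

public class SerialReadSketch {

    public static void main(String[] args) throws Exception {
        // "COM3" stands in for whatever port the USB->Serial converter registers.
        CommPortIdentifier identifier = CommPortIdentifier.getPortIdentifier("COM3");
        SerialPort port = (SerialPort) identifier.open("SerialReadSketch", 2000);
        port.setSerialPortParams(9600, SerialPort.DATABITS_8,
                SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);

        InputStream in = port.getInputStream();
        byte[] buffer = new byte[128];
        try {
            int read;
            while ((read = in.read(buffer)) >= 0) {
                System.out.write(buffer, 0, read); // placeholder for real processing
            }
        } catch (IOException e) {
            // With the patched RXTX, a disappearing emulated COM port ends up here
            // instead of crashing the JVM.
            System.err.println("Serial port disappeared: " + e.getMessage());
        } finally {
            port.close();
        }
    }
}
```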

What were our lessons learnt today?

  • If you face OutOfMemoryErrors on a 64bit Java6 JVM, try to switch back to a 32bit Java5 JVM. It helped us with our Grails bugfixing (during the test phase).
  • Hudson master/slave support for plugins isn’t particularly hard. You just need to be aware of the topic and replace some types like java.io.File. We made this experience twice, with our Crap4J plugin and the cmake plugin. It’s time to tell the world about it, so stay tuned! A first taste is sketched right after this list.
  • The good old error return code is an error prone coding paradigm, because all too often, users of a function/method just forget to check the returned result. This was the case with a call to WaitForSingleObject in RXTX.
  • If you don’t understand an implementation well enough to fix the cause, you might at least be able to produce a workaround. It’ll work for you and provide guidance for the original author about where the bug might hide. This is why we count our winp efforts as success, too.
  • Your project either is mavenized or it isn’t. Everything in between is half-assed.
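As a small foretaste of that upcoming article: the key idea is to route file access through hudson.FilePath instead of java.io.File, because a FilePath knows which node (master or slave) holds the file and can run code there. Here is a minimal sketch of the idea, not taken from either plugin and with invented names:

```java
import hudson.FilePath;
import hudson.remoting.VirtualChannel;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

// Counts the lines of a file that may live on the master or on a slave.
// FilePath.act() runs the callable on whichever node actually holds the file.
public class LineCounter implements FilePath.FileCallable<Integer> {

    private static final long serialVersionUID = 1L;

    public Integer invoke(File file, VirtualChannel channel) throws IOException {
        int lines = 0;
        BufferedReader reader = new BufferedReader(new FileReader(file));
        try {
            while (reader.readLine() != null) {
                lines++;
            }
        } finally {
            reader.close();
        }
        return lines;
    }
}
```

A plugin would then call something along the lines of workspace.child("report.txt").act(new LineCounter()) instead of opening a java.io.File directly, and the same code keeps working in a master/slave setup.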

This OSLD was a bit short, as we had some guests in the evening, but it was fun nevertheless. Well, to be precise, it was that special software engineer’s type of fun: the whole company was remarkably quiet most of the day, with everyone working totally focussed. We scratched our own itches, enhanced our customer projects and contributed to the open source community. A very good day!

Stay tuned if you want to know more about the specifics of the hudson plugin development or the to-be-proposed patches. We will publish them here.

Follow-up to our Dev Brunch March 2010

A follow-up to our March 2010 Dev Brunch, summarizing the talks and providing bonus material.

Yesterday we held our Dev Brunch for this month. It was the second brunch in our new office, with some attendees visiting it for the first time. The reactions were the same: “I want to move in here!”. The topics were of different kinds, from live presentations to mere questions open for discussion.

The Dev Brunch

If you want to know more about the meaning of the term “Dev Brunch” or how we implement it, have a look at the follow-up posting of the brunch in October 2009. We continued to value presence over topic. These topics were discussed today:

  • Singleton vs. Monostate – We all know that Singletons are bad for your test coverage, look bad on your dependency chart and are generally seen as “evil”. We discussed the Monostate pattern and whether it could solve some of the problems Singletons inherently bring along. Based on the article by Uncle Bob, we concluded that Monostates are tricky at best and don’t help with the abovementioned problems. A small sketch of the pattern follows after this list.
  • What is “agile” for you? – This simple question provoked a lot of thoughts. You can always obey the Agile Manifesto word by word without understanding what the deeper motives are. The answer that fitted best was: “You can name it when you see it”. We concluded that it’s easy and common practice to label any given process “agile” just to sound modern.
  • News around Yoxos – If you are using Eclipse, you’ve certainly heard about Yoxos already. During EclipseCon 2010, good news was announced. We got a sneak peek at the new Yoxos Launcher and how it will help in managing your pack of Eclipse installations. We are looking forward to becoming beta testers because we can’t wait to use it.
  • Teaser talk for “Actors in Scala” – The actor paradigm for parallel programming is a promising alternative to threads. While threads are inevitably complex even for simple tasks, actors seem to offer a more natural approach to parallelism. This talk was only the teaser for a more in-depth talk next time, with hands-on code examples.
  • Properties in Scala – This talk had lots of code examples and hands-on discussion about the properties feature of Scala. Properties are an elegant way to reduce the boilerplate code of simple objects and to stay compatible with Java frameworks that rely on Java Beans semantics. We clearly understood the advantages, but ran into some strangeness related to the shared namespace of fields and methods along the way. Scala isn’t Java, that’s for sure.
  • Introduction to Prezi – Prezi is a modern presentation tool in the tradition of the dreaded PowerPoint or Apple’s Keynote. It adds a twist to your presentation by adding two new dimensions: laying out everything on a big single canvas (no slides!) and relying heavily on zooming effects. The online editor is surprisingly usable, yet simple and lightweight. If you want to meet Prezi, check out the introduction prezis and the showcase on their homepage.
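For readers who haven’t met the Monostate pattern yet, here is a minimal Java sketch (class and field are invented for illustration): all state is static and therefore shared, while clients work with ordinary instances and ordinary methods, so the singular nature stays hidden behind a normal-looking API.

```java
// Monostate: every instance shares the same state because all fields are static.
public class Configuration {

    private static String serverUrl = "http://localhost"; // shared across all instances

    public String getServerUrl() {
        return serverUrl;
    }

    public void setServerUrl(String url) {
        serverUrl = url;
    }
}
```

Calling new Configuration().setServerUrl(...) changes what every other instance reports, which is exactly the hidden global state that hurts testability and keeps the pattern from solving the problems mentioned above.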

As usual, the topics ranged from first-hand experiences to literature research. For additional information, check out the comment sections. Comments and resources might be in German.

Retrospection of the brunch

We keep getting better at timing our talks. We nearly kept to our time limit and didn’t hurry anything. For the next brunch, we are looking forward to using our new office roof garden to brunch and talk in the springtime sun.

Gesture Touchscreens might render Paper Prototyping useless

With the advent of highly dynamic, gesture-controlled user interfaces for touchscreens, Paper Prototyping seems to lose its applicability.

Paper Prototyping is a highly effective tool to examine the usability of software, even before it is written. The basic idea of Paper Prototyping is that you perform real (software) tasks with real users, but replace everything technical with low-fi substitutes.

Adventures in Low-fi

A typical Paper Prototyping session looks like this:

  • The user gets his task description and has to perform it with the software
  • The computer screen is replaced by an arrangement of paper sheets on a desk
  • The graphical user interface is replaced by hand-drawn copies on paper
  • The computer itself is replaced by a human, mimicking the software responses to input
  • The user operates by finger-pointing or writing with a pen

Advantages of Paper Prototyping

The whole situation described above seems awkward at first glance, but is really rewarding for a project in its early stages. The customer has to provide real end user tasks and enough details of the solution to make up a prototype. The team has to produce reasonable drafts of the software GUI and come up with enough understanding of the processes and tasks involved to survive the session without major outages.

The result of a Paper Prototyping session can be used in various ways:

  • Detailed specification of the GUI
  • Use Case or User Story (acted out already)
  • Mock screenshots for the user manual
  • Data for initial acceptance tests

Classical user interface vs. gestures

This approach worked very well as long as the user only had a mouse (pointing, clicking) and a keyboard (writing) at hand. Even then, advanced features like grabbing (drag & drop) or automatic scrolling challenged the creativity of the prototypers. But most GUIs were rather dull and static. The perfect playground for Paper Prototyping.

With the advent of touchscreens, we soon realized that pointing and clicking only needs one finger out of ten. Gestures were introduced to keep all our fingertips busy and to enrich the interaction between user and GUI. We instantly understand the “zoom in” or “scroll down” gestures because they resemble natural behaviour (at least for some of us).

In the wake of gestures, the GUIs of our software get more and more dynamic. The GUI has to be minimalistic so we can control it even with stubby fingers (the new handicap of our generation, compare cell phone key pads). Detailed information has to be provided on demand and only temporarily. Everything can be manipulated. The classical approach of tabbing through a form (with carefully designed tab orders) isn’t that suitable anymore.

Gestures vs. Paper Prototyping

When using a Paper Prototype, the throughput of scribbled paper is enormous even with classical GUIs. The more dynamic a dialog is, the more of its parts need to be prepared in various states and locations (depending on the fragmentation of the paper screen). With gestures on a touchscreen, the user needs to be able to express them on the screen. Most touchscreen interfaces depend heavily on the (simulated) physical interaction between the fingers and some drawn “objects” on the interface. This is the moment when Paper Prototyping falls short of resembling the real interaction. You just can’t fiddle that fast with all the paper shreds.

No solution yet

I observed this effect when running a Paper Prototyping workshop with my students. The interfaces with classical mouse/keyboard handling performed well in the sessions. Interfaces for touchscreens (iPhone apps were the big newcomer here) just didn’t work out well, especially when downsized to palm size. We weren’t able to come up with a viable solution to make Paper Prototyping work again for touchscreens and gestures.

Any ideas out there?

Blog harvest, February 2010

Some noteworthy blog articles, harvested for February 2010. If you ever asked yourself about the personality of your web framework, you’ll find the link to the answer here.

Now that the move to the new office is nearly complete, work begins to normalize again. Here is the February blog harvest with a few more entries than usual, as the move kept me from writing on our own blog, but not from reading others. There are many fun articles this time that I found share-worthy, perhaps because they made me laugh even in harder times.

This was the more serious part of this harvesting. Let’s read some articles that share their message in a lighter way:

  • What kind of woman would your web framework be? – If you ever have to sell a new hot (web) framework to management, why not take this plausible approach? At least they could relate to what you are talking about.
  • It’s Not the Recession, You Just Suck – Ouch! That hurt. This is a wake-up call for everybody who likes to blame things on higher powers. And it reminds me to hurry up with this blog entry and get back to work.
  • I test therefore I log bugs – Ever tried to explain “programming” to your grandparents? You’ll end up either in esoterics (“teaching machines to have dreams”) or in stating the obvious. This is a story about reaching consensus on the latter.

This blog harvest closes with a video:

  • Uncle Bob on Software Craftsmanship – Much of what Bob Martin says has truth in it, but for me the last two minutes are the most explicit and rewarding. By the way, Uncle Bob looks good in the T-shirt (I always feared it would be torn, judging from the sounds when he stretches), but needs to switch his cell phone off.

Follow-up to our Dev Brunch February 2010

A follow-up to our February 2010 Dev Brunch, summarizing the talks and providing bonus material.

Today, we held our second Dev Brunch of 2010. It was the first one in the new office, with some packing cases still around. The brunch had some interesting topics, most of them small and focussed. We discussed whether the topics should be announced beforehand to avoid collisions, but decided to regard these collisions as enrichment rather than duplication.

The Dev Brunch

If you want to know more about the meaning of the term “Dev Brunch” or how we implement it, have a look at the follow-up posting of the brunch in October 2009. This time, we didn’t urge all participants to bring their own topic. Presence is more important than topic.

  • Scrum adventure book review – There are lots of books on the Scrum project management process. But the one called “Geschichten vom Scrum” (sorry, it’s a German book!) will teach you all the basics and some advanced practical topics of Scrum while telling you the fairy tale of a kingdom haunted by dragons. By following a group of common fairy tale characters in their quest to build a dragon trap the Scrum way, you’ll learn a great deal of real-world Scrum and still be entertained. You might compare this book to Tom DeMarco’s “The Deadline”, a novel about general project management.
  • What is the Google Web Toolkit? – Based on a presentation of the Karlsruhe Java User Group (JUG-KA), we skimmed through the slides to get to know the Google Web Toolkit (GWT) framework. Advanced topics were discussed in the next talk.
  • First hand experience with GWT – We talked about the sweet spots and pain points of the Google Web Toolkit, based on experiences in a real project. This was very helpful in sorting out the marketing promises from the definite advantages. While the browser no longer affects the developer, the separation of client (browser) and server will still leak through.
  • First impressions of the Lift framework – The way to go with web application development in Scala is Lift. It’s a framework borrowing the best from “Seaside, Rails, Django and Wicket” and combining it with Scala and the whole Java ecosystem. While this talk was just a teaser, it already looked promising.

As usual, the topics ranged from first-hand experiences to literature research or summaries of recently attended presentations. You can check out the comments for additional resources, but they may be in German.

Retrospection of the brunch

It’s right to allow attendance without a topic. This lowers the barrier for occasional guests, who are still valuable for their experiences and insights. This brunch was enriched by yet another topic collision, which is the perfect situation for a more in-depth discussion.

Speed up your buildbox, Part IV: Beyond the box

This is the fourth and last part of a series on how to boost your build box without much effort. This episode talks about possible measures to increase the build performance when a single box isn’t enough.

In the first three parts of our effort to speed up our buildbox, we replaced the harddisk with a RAM disk, upgraded the CPU to the top-notch model and installed plenty of fast RAM. This brought the build time down from 03:30 minutes to around 02:00 minutes. The CPU frequency was the biggest time saving factor in our case study. Two minutes is as fast as the build can get for our project without fiddling with the actual build process. It’s sufficient for our case, but it may not be for yours.

Even top speed is too slow

Let’s assume we maxed out the hardware and still have a build duration far beyond the magical ten-minute mark. What can we do now? There are two viable options at hand, provided you can exclude the possibility that your build process is really inefficient and needs optimization. In the latter case, it would be better to revise the process instead of the build infrastructure.

Two ways to speed up your build infrastructure

You can go down one or both of two general paths to speed up your build process. To understand the examples, let’s assume the build takes 20 minutes to run on your top-notch build box.

  • Add more build boxes. This is the classical “parallelize it!” approach. It won’t speed up the individual build process, but enables more builds to run at the same time. This approach won’t change anything if your team checks in seldom, which in itself is an anti-pattern to continuous integration. But if your team commits changes every ten minutes, having at least two build boxes will prevent the second committer from waiting 30 minutes for the CI results. Instead, the results will always be there after 20 minutes. You haven’t exactly sped up your build process, but reduced the maximum waiting time of your committers (the small simulation after this list plays through the numbers). For details on the implementation, see below at “Growing a build park”.
  • Chop up your build process. This is known as “staging” or “pipelining” your build. This won’t speed up the individual build process either, but delivers certain partial results of your build earlier. Let’s assume you can split your build process into four distinct stages: compile, unit test, integration test, package. Whenever a stage yields a result, the committer gets feedback immediately. In our example, this might be every 5 minutes. This has several disadvantages, as discussed for example in the article “The pipeline of doom” by Julian Simpson, but can lower the waiting time for specific aspects of your build drastically. You haven’t exactly sped up your build process, but improved the response time for partial results and therefore the average waiting time of your committers. For details on the implementation, see below at “Installing a build pipeline”.
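To play through the numbers of the first option, here is a tiny back-of-the-envelope simulation. The figures are just the ones from the example above (a commit every ten minutes, a 20-minute build, one or two build boxes); nothing of this is Hudson-specific:

```java
// Simulates the waiting time of three committers for a varying number of build boxes.
public class BuildQueueExample {

    public static void main(String[] args) {
        for (int boxes = 1; boxes <= 2; boxes++) {
            System.out.println(boxes + " build box(es):");
            long[] busyUntil = new long[boxes]; // minute at which each box becomes free
            for (int committer = 0; committer < 3; committer++) {
                long commitTime = committer * 10;
                int freeBox = 0; // pick the box that becomes free first
                for (int box = 1; box < boxes; box++) {
                    if (busyUntil[box] < busyUntil[freeBox]) {
                        freeBox = box;
                    }
                }
                long start = Math.max(commitTime, busyUntil[freeBox]);
                long finish = start + 20;
                busyUntil[freeBox] = finish;
                System.out.println("  committer " + (committer + 1)
                        + " waits " + (finish - commitTime) + " minutes");
            }
        }
    }
}
```

With one box, the waiting time grows with every further committer (20, 30, 40 minutes in this run); with two boxes, it stays at the 20 minutes the build itself takes.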

Growing a build park

If you want to reduce the initial waiting delay of a build before it gets processed, or increase the throughput of builds, the build farm pattern is the way to go. By adding slave build machines to your build master, you can distribute the workload over more shoulders. The best way to set up your infrastructure is to introduce a dedicated master box that only delegates actual builds to its slaves. The master box handles the archiving of build artifacts and deals with the web server requests, while the slaves only perform build tasks. The master box can be of average power, with increased storage size, while the slaves should be ultra-fast, without the need for big disks. Solid state disks or even RAM disks on the slaves can be sized to actual workspace sizes, as that is all that needs to be stored there.

Distributed builds with Hudson

The Hudson continuous integration server is particularly strong at setting up these master/slave scenarios. It’s ridiculously easy to set up a build slave. You basically only need to click on a link to start the slave process. If you happen to have a standard build, everything needed gets downloaded automatically. If you want your slaves to operate automatically, you can install a Windows service, provide an SSH account or write your own script. Usually, slaves are set up in a matter of minutes without hassle. A great idea is to turn powerful colleague boxes into build slaves (aka CI zombies) by booting from a USB stick. The best way to start with master/slave builds is to turn your current PC into a hudson slave right now by using the Java Web Start method.

Installing a build pipeline

If you are interested in early but incomplete feedback from your build box, staging your build will help you out. If partitioned right, you’ll receive a series of answers on specific questions from your build process. The questions may be like:

  1. Will it compile?
  2. Will it pass the unit tests?
  3. Will it function (pass the integration tests)?
  4. Will it blend?

Ok, the last question is rather unlikely to be answered by your build box. The overall build process will not be any faster, but basic safety test results are reported earlier. If you combine this approach with distributed builds, you can designate specifically tuned machines to different stages. The Hudson continuous integration server has the ability to tag a slave with different labels. You can then configure your build to only run on slaves with the desired label assigned.

Staged builds with Hudson

Staging with the Hudson continuous integration server isn’t as easy as the master/slave feature, but there are some plugins that allow for more complex setups. You might run into some functionality that’s still under development, but basic staging is possible even today. In combination with specialized slave build boxes, this approach can lower your build duration. It is a complex endeavour, though.

Conclusion

Once your single build box is maxed out but still not fast enough, you enter a different realm of continuous integration infrastructure setups. Speeding up a build process beyond the single box isn’t as easy as installing more RAM. But with a fair amount of planning, you have a fair chance to improve the situation. Note that you won’t primarily lower the build duration, but increase throughput and utilize partitioning and specialization. These are different measures and might not affect the wall clock time of your build. The combination of staging and distribution is the most powerful setup, but will result in the most complex infrastructure to maintain. Before entering this realm, be sure to apply every possible optimization to your build process, because you won’t leave that realm again soon.

What’s your story on build optimization beyond the box? Drop us a comment.

Follow-up to our Dev Brunch January 2010

A follow-up to our January 2010 Dev Brunch, summarizing the talks and providing bonus material.

Today was our first Dev Brunch for the new year 2010. We held a well-attended and very interesting session with lots of coffee. It was the last brunch in the old office, as we are currently moving to new rooms. The brunch ended with a sneak peek into the new office.

The Dev Brunch

If you want to know more about the meaning of the term “Dev Brunch” or how we realize it, have a look at the follow-up posting of the brunch in October 2009. We used notebooks throughout the sessions today.

The topics of this session were:

  • Agile done wrong – A project that was converted to agile now tends to be even more conservative after management lost faith in its developers. A rather sad first-hand story, with lots of Dilbert-style humor in it.
  • Implicits in Scala – Scala introduces a powerful feature of implicit (hence the name) type conversion that can be used to greatly simplify work with complex type systems. Or to totally disturb your understanding of it.
  • Follow-up on the local XP-Days – The XP Days Germany, hosted by andrena objects ag, are a small yet powerful conference in Karlsruhe. We got a summary of the overall style and the different presentations. Things like Pokens, Pecha Kucha (watch your pronunciation of it) and live code katas are all very promising. Most of the presentation content was interesting, too.
  • Exception safety in Java – A classical topic of (not only) C++, ported to Java. This overview presentation highlighted the basics of exception safety and some insights for Java, mostly borrowed from Alan Griffiths.
  • Preview of an Eclipse based product – We won’t go into much detail here, but we got a glimpse of an upcoming product that will greatly ease multi-site programming with Eclipse. EclipseCon 2010 in March looks promising.

The topics ranged from first-hand experiences to literature research. We will provide additional information, linked in comments on this article, partially in German.

Retrospection of the brunch

It was very entertaining to meet everyone after the long holiday season. Lots of news and chatter and stuff. The topics were interesting and thought provoking. If you weren’t there, you’ve missed something. Check out the comments for compensation.

Blog harvest, January 2010

Some noteworthy blog articles, harvested for January 2010. Don’t miss the worst gadget gallery!

Welcome to the new year. We started late to work again this year and will collect some overtime in the next weeks as we are in the process of moving to a new office. This might impact the additional blog entries, so here is a quick blog harvest. Don’t miss the anti-gadget gallery at the end – but beware, it will make you laugh out loud, so don’t read it at work (unless you are supposed to have fun at work).

  • Reasons to Use Google Collections – Recently, the Google Collections library went gold. It’s a really nice collection of… collections. If you haven’t met it yet, do it now. You might read James Sugrue’s teaser alongside.
  • How Programming Books Promote Code Smells – Finally, somebody shares my opinion on “code examples crippled for brevity”. I got disgusted by some Java books because their code examples were so bad it hurt. The situation eases a lot with the availability of “real” code in open source projects that you can read as a remedy.
  • Test-Driven Teaching! – Another problem with bad code examples is avoiding them in live examples (like in lectures). Here is an interesting article by Peter Karich presenting the idea of integrating test code into your lectures. I’ll see if my students like the idea.
  • A performance tuning story – The best stories are real stories. This story has all that’s needed: a mysterious effect at the beginning, fast-paced action in the middle and an open end. The X files weren’t as thrilling as this one.
  • Maven and Ant guys: you’ll never agree. On anything. Period. Deal with it! – What would we do if we couldn’t join an eternal flame war at surf time? You can join on one side (there are always two sides, seldom more!) or stand in between and pick on both. Lieven Doclo chose the Maven side of builds.
  • Maven Mythbusters – The most prominent Maven myths get busted by John Ferguson Smart to end the flame war mentioned above once and for all (or at least to take out some misinformation). I’ve linked to the first busted myth; you’ll have no trouble finding the other(s). Read the comments, too!
  • The Top 10 Posts of this Blog Over the Years – Stephan Schmidt’s blog “Code Monkeyism” is a regular link target in my harvests. When he harvested his own blog for the most popular entries, I couldn’t resist linking him once more. Maybe this is called meta-harvesting.

This was the article side of this harvesting. Let’s continue to have fun by sliding through the gallery and then… sliding through your repository:

  • Gizmodo’s Worst Gadget Gallery – Oh yes, ten years full of technological crap, complete with pictures and a short description. If you spend 30 seconds on each gadget, you’re in for half an hour of pure fun. My favorites are the DeathStar for personal reference (we lost 7 of 9) and the Eye-Track for its funny description.
  • Mining your source code repository – Part 1 – Yes, all your legacy repositories are still there. Discover the archeologist in yourself and perform some data mining on them. This article may be the starting point for your new career.

Booked in February

A short book preview of the upcoming O’Reilly title “97 Things Every Programmer Should Know” and our participation in it.

Ok, the title is a bit misleading – it’s a play on words(*). This entry is actually a book preview of the upcoming book “97 Things Every Programmer Should Know” from O’Reilly.

97 Things Of Wisdom

The “97 Things” series started out with “97 Things Every Software Architect Should Know” early last year. The book is essentially a collection of short articles on specific topics that should concern today’s software architect. You may qualify as a software architect if you don’t just stir up source code but are also in charge of giving the system its shape.

The articles are straight to the point and can be read within five minutes each. Don’t expect detailed textbook chapters on the topics, but they work extremely well as creative appetizers. And there are nearly a hundred appetizers from well-respected members of the software architect community in this book.

Just imagine you could meet each of the authors at a conference for five minutes and ask them for one appealing thought. This book is the best available substitute for that.

Wisdom continued

Soon after the first book, there was a second book in the series, “97 Things Every Project Manager Should Know”. I haven’t read this one yet, but it is on my must-read list for 2010.

And now, next month, there will be another book, this time for the fellow coder: “97 Things Every Programmer Should Know”. As usual, there are 97 selected articles with bits of wisdom from big community names. Kevlin Henney is the editor for this book (we featured him in our last blog harvest). You can take a sneak peek online in the 97TEPSK wiki, where the articles were fostered (and a second part is likely to emerge). But don’t forget to buy a paper copy that you can foist on your peers to inspire them, too.

Judging from the articles I’ve read so far, the book will be great. Please don’t expect detailed language specifics, lengthy code examples or fancy UML diagrams. But expect a whole bunch of great ideas that stem from the real experiences of real programmers.

One percent of a book

What’s our relation to the new book? We contributed an article to it! Even though we thereby only wrote approximately one percent of the book, it feels great and we consider ourselves honored.

The topic of our article is Extreme Feedback Devices (XFD): “Let Your Project Speak for Itself”. We have gathered quite a lot of these devices over the years and ran a few experiments, so we thought we were qualified to write about it. And there it is, the first bit of our wisdom, printed in a book.

We will, of course, continue to publish our wisdom on this blog first. If you’ve followed us over the last years, the article comes as no real surprise. But I’m sure some other articles of the book will. Go buy it!

(*) Plays on words in a language other than your native tongue are always dangerous. I hope this one worked out well.

Schneide blog heartbeat revisited

A short review of our company blogging engagement in 2009, with a description of the underlying rules.

The start of a new year is a great opportunity to look back and review the old year. This article reviews this blog, how we run it and what happened in 2009.

The first review

This blog came to life in February 2007 and was revived and retrofitted with a basic rule set in August 2008. Exactly a year ago, I wrote a first review of the changes, explaining some of the rules behind it and judging the outcome. You might want to read it in order to understand some of the following metaphors.

A year with constant pace

We haven’t changed the rules in a year. We still run this blog at a constant, sustainable pace. We still collect and foster “vegetables”, our metaphor for blog entries. Everyone in our company has a “garden” full of blog entry drafts that evolve over time and finally get published. We still don’t think that maintaining a company blog has to lead to internal competition or a blog quality assurance department.

We ran this blog for a whole year with weekly entries by just passing the blog token around. Instead of getting tired of writing yet another blog entry, we sometimes asked to publish our entry ahead of time just because it was ready and eager to meet the world. We kept the discipline, but in a flexible manner.

The results

What can you expect to happen when all you do is keep your flow (we call it “obeying the mechanics”)? A single picture tells it all:

You see the visitor statistics from the day we revived the blog. The small mound around 2008-10 was last year’s visitor maximum. We grew every month this year. We did expect the numbers to grow, but not exponentially as in the last months. We are overwhelmed by the success. Which leads to a few additional rules.

The additional rules

  • As the amount of discussion around our blog rises, we introduced the rule of “author-based commenting”. Every comment on our blog needs to get our approval (by saying something on topic; we just filter out the spam) and will eventually get an answer from us. The person responsible for both actions is the original blog entry author. This may lead to slightly longer approval delays, but adds coherence to the comment trail and the discussion tone.
  • We regularly publish our articles on aggregator sites like dzone.com. All of these sites provide their own commenting system. We tend not to answer comments on these sites. It would shatter the discussion without benefit for the ordinary visitor. If you want us to answer, feel free to copy the comment into our blog.
  • We introduced some regular “events” in our company last year. The Open Source Love Day (OSLD), the Dev Brunch and the occasional Blog Harvest are all worth writing about, but are attended by many authors. We agreed to publish these special event entries out of turn, whenever they are ready, but in a timely manner. These entries share a common icon set to distinguish them from regular entries. They are a cocktail of our combined writing skills and tend to be very specific. Regard them as “bonus tracks” on our written company album.

What to expect in the future

We are looking forward to keeping our pace in 2010. The blog will receive a facelift and better integration with our website soon. We plan to provide some improvements on finding related groups of existing articles. But we don’t want to change our ruleset or dedication.

If you happen to follow our blog, drop a comment. We really like to hear from you. By the way, in 2010 the first entry written on reader request will be published. Stay tuned!