Designing an API? Good luck!

An API Design Fest is a great opportunity to gather lasting insights into what API design is really about. And it will remind you why there are so few non-disappointing APIs out there.

If you’ve developed software to any extent, you’ve probably used dozens if not hundreds of APIs, so-called Application Programming Interfaces. In short, APIs are the visible part of a library or framework that you include in your project. In reality, that last sentence is a complete lie. Every programmer at some point gets bitten by some obscure behavioural change in a library that wasn’t announced in the interface (or at least in the project’s change log). There’s a lot more to developing and maintaining a proper API than keeping the interface signatures stable.

A book about API design

A good book to start exploring the deeper meaning of API development is “Practical API Design” by Jaroslav Tulach, founder of the NetBeans project. Its subtitle is “Confessions of a Java Framework Architect”, and the content lives up to it. There are quite a few confessions to make if you develop relevant APIs for several years. In the book, a game is outlined to effectively teach API design. It’s called the API Design Fest and sounds like a lot of fun.

The API Design Fest

An API Design Fest consists of three phases:

  • In the first phase, all teams are assigned the same task. They have to develop a rather simple library with a usable API, but are informed that it will “evolve” in a way that isn’t disclosed yet. The resulting code of this phase is kept and tagged for later inspection.
  • The second phase begins with the revelation of the additional use case for the library. Now the task is to include the new requirement into the existing API without breaking previous functionality. The resulting code is kept and tagged, too.
  • The third phase is the crucial one: The teams are provided with the results of all other teams and have to write a test that works with the implementation of the first phase, but breaks if linked to the implementation of the second phase, thus pointing out an API breach.

The team that manages to deliver an unbreakable implementation wins. Alternatively, points are assigned for every breach a team can come up with.

The event

This sounds like too much fun to pass up without trying it out. So, a few weeks ago, we held an API Design Fest at the Softwareschneiderei. The game mechanics require a prepared moderator who cannot participate and at least two teams, so there is something to break in the third phase. We tried to cram the whole event into one day of 8 hours, which proved to be quite exhausting.

In a short introduction to the fundamental principles of API design that can withstand requirement evolution, we summarized five rules to avoid the most common mistakes:

  •  No elegance: Most developers are obsessed with the concept of elegance. In API design, there is no such thing as beauty in the sense of elegance, only beauty in the sense of evolvability.
  •  API includes everything that a user depends on: Your API isn’t defined by you, it’s defined by your users. Everything they rely on is a fixed fact, whether you like it or not. Be careful about leaky abstractions.
  •  Never expose more than you need to: Design your API for specific use cases. Support those use cases, but don’t bother to support anything else. Every additional item the user can get a hold on is essentially accidental complexity and will sabotage your evolution attempts.
  •  Make exposed classes final and their constructor private: That’s right. Lock your users out of your class hierarchies and implementations. They should only use the types you explicitly grant them.
  •  Extendable types cannot be enhanced: The danger of inheritance in API design is that you suddenly have to support the whole class contract instead of “only” the interface/protocol contract. Read about the Liskov Substitution Principle if you need a hint why this is a major hindrance.
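
As a rough illustration of the “final classes, private constructors” rule, a minimal sketch might look like this (all names are invented for this post, not taken from any real API):

```java
// Hypothetical API type: final, with a private constructor. Users can obtain
// instances only through the static factory, never by subclassing or direct
// construction. (In a published API the class would be public.)
final class Temperature {
    private final double celsius;

    private Temperature(double celsius) {
        this.celsius = celsius;
    }

    // The factory is the only supported entry point; the internals behind it
    // (representation, caching, subclass selection) stay free to evolve.
    static Temperature ofCelsius(double celsius) {
        return new Temperature(celsius);
    }

    double celsius() {
        return celsius;
    }
}
```

Because users can neither subclass Temperature nor call its constructor, later versions can change everything behind ofCelsius without breaking anyone.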

The introduction closed with the motto of the day: “Good judgement comes from experience. Experience comes from bad judgement.” The API Design Fest day was dedicated to bad judgement. Then, the first phase started.

The first phase

No team had problems grasping the assignment or finding a feasible approach. But soon, eager discussions started as the teams projected the breakability of their current design. It was very interesting to listen to their reasoning.

After two hours, the first phase ended with complete implementations of the simple use cases. All teams were confident that they were prepared for the extensions to come. But as soon as the moderator revealed the additional use cases for the API, they went quiet and anxious. Nobody had seen the new requirement coming. That’s a very clever move by Jaroslav Tulach: The second assignment resembles a real-world scenario in the very best manner. It’s a nightmare change for every serious implementation of the first phase.

The second phase

But the teams accepted the new assignment and went to work, expanding their implementations to the best of their ability. The discussions revolved around possible breaches with every attempt to change the code. The burden of an already existing API was palpable even for bystanders.

After another two hours of paranoid and frantic development, all teams had a second version of their implementation and we gathered for a retrospective.

The retrospective

In this discussion, all teams laid down their arms and confessed that they had already broken their API with simple means and would accept defeat. So we called off the third phase and prolonged the discussion about our insights from the two phases. What a result: everybody was a winner that day, no losers!

Some of our insights were:

  • Users as opponents: While designing an API, you tend to think about your users as friends that you grant a wish (a valid use case). During the API Design Fest, the developers feared the other teams as “malicious” users and tried to anticipate their attack vectors. This led to the rejection of a lot of design choices simply because “they could exploit that”. To a certain degree, this attitude is probably healthy when designing a real API.
  • Enum is a dead end: Most teams used Java Enums in their implementation. Every team discovered in the second phase that Enums are a dead end with regard to design evolution. It’s probably a good choice to thin out their usage in an API context.
  • The most helpful concepts were interfaces and factories.
  • If some object needs to be handed over to the user, make it immutable.
  • Use all the modifiers! No, really. During the event, Java’s package-private (default) access modifier experienced a glorious revival. When designing an API, you need to know all your possibilities to express restrictions.
  • Forbid everything: But despite your enhanced expressiveness, it’s a safe bet to disable every use case you don’t want to support, if only to minimize the target area for other teams during an API Design Fest.
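
To illustrate the enum insight and why interfaces and factories helped the most, here is a hedged sketch (all names invented) of how an interface-plus-factory design leaves room for evolution where an enum would not:

```java
// Hypothetical sketch: instead of an enum like `enum Compression { NONE }`,
// which forces every client switch statement to change when a case is added,
// the API exposes an interface and hands out implementations via a factory.
interface Compression {
    byte[] compress(byte[] data);
}

final class Compressions {
    private Compressions() {} // no instances, factory methods only

    // Version 1 ships only this; version 2 can add gzip() etc. without
    // invalidating any client code compiled against version 1.
    static Compression identity() {
        return data -> data.clone();
    }
}
```

Clients depend only on the interface contract, so new implementations appear without any source or binary breakage.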

The result

The API Design Fest was a great way to learn about the most obvious problems and pitfalls of API design in the shortest possible manner. It was fun and exhausting, but also a great motivator to read the whole book (again). Thanks to Jaroslav Tulach for his great idea.

A touch of Grails: cache invalidation

In a recent post I introduced a caching strategy. One difficulty with caching is always when and where do I need to invalidate the cache contents.

One of the easiest things to do is to use timestamped cache keys like id-lastUpdated. But what if you cache another related domain object under the same key? One option would be to include its timestamp in the key, too. Another one is to touch the timestamped object:

class A {
  Long id
  Date lastUpdated
}

class B {
  A a
  
  def beforeUpdate = {
    a.lastUpdated = new Date()
  }
}

So you would need this touch in beforeUpdate, afterInsert and beforeDelete. This can get pretty cumbersome. What if I could just declare that I want to touch another object whenever this object changes, like this:

class A {
  Long id
  Date lastUpdated
}

class B {
  static touch = ['a']

  A a
}

For this we just need to listen to the GORM persistence events.

// Imports as of Grails 2.x GORM (package names may differ in other versions)
import org.grails.datastore.mapping.core.Datastore
import org.grails.datastore.mapping.engine.event.*

class TouchPersistenceListener extends AbstractPersistenceEventListener {

  public TouchPersistenceListener(final Datastore datastore) {
    super(datastore)
  }

  @Override
  protected void onPersistenceEvent(AbstractPersistenceEvent event) {
    touch(event.entityObject)
  }

  public void touch(def domain) {
    MetaProperty touch = domain.metaClass.hasProperty(domain, 'touch')
    if (touch != null) {
      Date now = new Date()
      def listOfPropertiesToTouch = touch.getProperty(domain)
      listOfPropertiesToTouch.each { propertyName ->
        if (domain."$propertyName" == null) {
          return // nothing to touch for this property
        }
        if (domain."$propertyName" instanceof Collection) {
          // touch every element of a 1:n or m:n association
          domain."$propertyName".findAll {it}*.lastUpdated = now
        } else {
          domain."$propertyName".lastUpdated = now
        }
      }
    }
  }

  // delete events are deliberately absent here, they are handled by decoration
  boolean supportsEventType(Class eventType) {
    return [PreInsertEvent, PostInsertEvent, PreUpdateEvent].contains(eventType)
  }
}

One caveat here is that all persistence events are fired by Hibernate during a flush. Especially the delete event caused problems, like objects that still have a session but whose lazy-loaded associations cannot be loaded. This is a known Hibernate bug that is fixed in 4.0.1, but Grails uses an older version. So we just decorate the save and delete methods of our domain classes and register our listener for the other events.

class BootStrap {
  def grailsApplication

  def init = { servletContext ->
    def ctx = grailsApplication.mainContext
    ctx.getBeansOfType(Datastore).values().each { Datastore d ->
      def touchListener = new TouchPersistenceListener(d)
      ctx.addApplicationListener touchListener
      decorateWith(touchListener)
    }
  }

  private void decorateWith(TouchPersistenceListener touchListener) {
    grailsApplication.domainClasses.each { GrailsClass clazz ->
      if (clazz.hasProperty('touch')) {
        def oldSave = clazz.metaClass.pickMethod('save', [Map] as Class[])
        def oldDelete = clazz.metaClass.pickMethod('delete', [Map] as Class[])
        clazz.metaClass.save = { params ->
          touchListener.touch(delegate)
          oldSave.invoke(delegate, [params] as Object[])
        }
        clazz.metaClass.delete = { params ->
          touchListener.touch(delegate)
          oldDelete.invoke(delegate, [params] as Object[])
        }
      }
    }
  }
}

Now we can just declare that changing this object touches another one, and it also works with 1:n or m:n associations. We don’t have to worry about where to invalidate the cache in our code: just annotate every object used in the cache with touch if it has an association to the object used in the key, or include it in the key directly. Changing those objects then invalidates the cache automatically.

Small cause – big effect (Story 2)

A short story about when Wethern’s Law of Suspended Judgement caused some trouble on the production system.

This is another story about a small misstep during programming that caused a lot of trouble in effect. The core problem has nothing to do with a specific programming language or some technical aspect, but the concept of assumptions. Let’s have a look at what an assumption really is:

assumption: The act of taking for granted, or supposing a thing without proof; a supposition; an unwarrantable claim.

A whole lot of our daily work is based on assumptions, even about technical details that we could easily prove or refute with a simple test. And that isn’t necessarily a bad thing. As long as the assumptions are valid or the results of false information are in the harmless spectrum, it saves us valuable time and resources.

A hidden risk

But what if we aren’t aware of our assumptions? What if we believe them to be fact and build functionality on them without proper validation? That’s when little fiascos happen.

In his chapter “Challenge assumptions – especially your own” in the book “97 Things Every Software Architect Should Know”, Timothy High cites the lesser-known Wethern’s Law of Suspended Judgement:

Assumption is the mother of all screw-ups.

Much too often, we only realize afterwards that the crucial fact we relied on wasn’t that trustworthy. It was another false assumption in disguise.

Distributed updates

But now it’s storytime again: Some years ago, a software company created a product with an ever-changing core. The program was used by quite a few customers and could be updated over the internet. If you want a current analogy, think of an anti-virus scanner with its relatively static scanning unit and the virus signature database that gets outdated within days. To deliver the update patches to the customer machines, a single update server was sufficient. The update patches were small and happened weekly or daily, so a dedicated machine could handle them.

To determine if it needed another patch, the client program contacted the update server and downloaded a single text file that contained the list of all available patches. Because the patches were incremental, the client program needed to determine its patch level, calculate the missing patches and download and apply them in the right order. This procedure did its job for some time, but one day, the size of the patch list file exceeded the size of a typical patch, so that the biggest part of network traffic was consumed by the initial list download.

Cutting network traffic

A short-term measure was to shorten the patch list by removing ancient versions that were surely out of use by then. But it became apparent that the patch list file would be the system’s Achilles heel sooner or later, especially if the update frequency moved up a gear, as was planned. So there was a plan to eliminate the need to download the patch list if it had not changed since the last contact. Instead of directly downloading the patch list, the client would now download a file containing only the modification date of the patch list file. If the modification date was after the last download, it would continue to download the patch list. Otherwise, it would refrain from downloading anything, because no new patches could be present.
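
The scheme described above can be sketched roughly like this (a simplified, hypothetical model with invented names and in-memory stand-ins; the real system of the story worked over dial-up connections, of course):

```java
import java.util.function.Supplier;

// Simplified sketch of the conditional-download scheme: the client first
// fetches only the patch list's modification timestamp and skips the big
// download when nothing changed since the last contact.
final class UpdateClient {
    private long lastSeenModification = Long.MIN_VALUE;

    /** Returns the patch list if it changed, or null if the cheap check says it didn't. */
    String fetchPatchListIfChanged(Supplier<Long> modificationDate, Supplier<String> patchList) {
        long remote = modificationDate.get();   // tiny download
        if (remote <= lastSeenModification) {
            return null;                        // nothing new, skip the big download
        }
        lastSeenModification = remote;
        return patchList.get();                 // full patch list download
    }
}
```

The whole scheme hinges on the assumption the story is about: that the modification date only changes when the content changes.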

Discovering the assumption

The new system was developed and put into service shortly before the constant traffic would exhaust the capabilities of the single update server. But the company admins watched in horror as the traffic didn’t abate, but continued on the previous level and even increased a bit. The number of requests to the server nearly doubled. Apparently, the clients now always downloaded the modification date file and the patch list file, regardless of their last contact. The developers must have screwed up with their implementation.

But no error was found in the code. Everything worked just fine, if – yes, if (there’s the assumption!) the modification date was correct. A quick check revealed that the date in the modification date file was in fact the modification date of the patch list file. This wasn’t the cause of the problem either. Until somebody discovered that the number in the modification date file on the update server changed every minute. And that in fact, the patch list file got rewritten very often, regardless of changes in its content. A simple cron job, running every minute, pushed the latest patch list file from the development server to the update server, changing its modification date in the process and editing the modification date file accordingly.

The assumption was that if the patch list file’s content didn’t change, the same would be true for its modification date. This held true during development and went into sabotage mode on the live system, once the cron job got involved.

Record and challenge your assumptions

The developers cannot be blamed for the error in this story. But they could have avoided it if they had adopted the habit of recording their assumptions and communicating them to the administrators in charge of the live system. The first step towards this goal is to make the process of relying on assumptions visible to oneself. If you can be sure to know about your assumptions, you can record them, test against them (to make them fact in your world) and subsequently communicate them to your users. This whole process won’t even start if you aren’t aware of your assumptions.

Scaling your web app: Cache me if you can

Invalidation and transaction aware caching using memcached with Grails as an example

One of the biggest problems with caches is how and when to invalidate the cache content. When you read outdated data from the cache, you are toast.
For example, take a list of child elements inside a parent. Normally you would cache the children under the parent’s id:

cache[parent.id] = children

But how do you know if your cache content is still valid? When one child or the list of children changes, you write the new content into the cache:

cache[parent.id] = newChildren

But when do you update the cache? If you place the update code where the list of children is modified, the cache is updated before the transaction has ended, and you break the isolation. Another option would be after the transaction has been committed, but then you have to track all changes. There is a better way: use a timestamp from the database which becomes visible to other transactions only when the transaction is committed. It should also live in the parent object, because you need this object for the cache key anyway. You could use lastUpdated or another timestamp for this, updated whenever the children collection changes. The cache key is now:

cache[parent.id + '_' + parent.lastUpdated]

Now other transactions read the parent object, get the old timestamp and therefore the old cache content until the transaction is committed. The transaction itself sees the new content. In Grails, lastUpdated is automatically updated if you change the collection, and in Rails, with belongs_to and touch, even a change in a child updates the lastUpdated of the parent – no manual invalidation needed.
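
As a minimal sketch of this key scheme (names invented, with a plain HashMap standing in for memcached):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the timestamped-key scheme. Touching the parent changes
// the key, so stale entries are simply never looked up again - no explicit
// invalidation call is needed.
final class ChildrenCache {
    private final Map<String, List<String>> cache = new HashMap<>();

    private static String key(long parentId, long lastUpdated) {
        return parentId + "_" + lastUpdated;
    }

    List<String> get(long parentId, long lastUpdated) {
        return cache.get(key(parentId, lastUpdated));
    }

    void put(long parentId, long lastUpdated, List<String> children) {
        cache.put(key(parentId, lastUpdated), children);
    }
}
```

After a child changes, the parent’s lastUpdated moves forward, the next get() with the new timestamp misses, and the entry is rebuilt; the old entry simply expires out of memcached on its own.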

Excursus: using memcached with Grails

If you want to use memcached from the JVM, there is a good library which wraps the common calls: spymemcached. To use spymemcached from Grails, you drop the jar into your lib folder and wrap it in a service:

import net.spy.memcached.AddrUtil
import net.spy.memcached.ConnectionFactoryBuilder
import net.spy.memcached.MemcachedClient
import org.springframework.beans.factory.InitializingBean

class MemcachedService implements InitializingBean {
  static final Object NULL = "NULL"
  MemcachedClient memcachedClient

  void afterPropertiesSet() {
    memcachedClient = new MemcachedClient(
      new ConnectionFactoryBuilder().setTranscoder(new CustomSerializingTranscoder()).build(),
      AddrUtil.getAddresses("localhost:11211")
    )
  }

  def connected() {
    return !memcachedClient.availableServers.isEmpty()
  }

  def get(String key) {
    return memcachedClient.get(key)
  }

  def set(String key, Object value) {
    memcachedClient.set(key, 600, value) // expire after 600 seconds
  }

  def clear() {
    memcachedClient.flush()
  }
}

Spymemcached serializes your cache content, so you need to make all your cached classes implement Serializable. Since Grails uses its own class loaders, we had problems with deserialization and used a custom serializing transcoder to get the right class loader (taken from this issue):

import java.io.*;
import net.spy.memcached.transcoders.SerializingTranscoder;

public class CustomSerializingTranscoder extends SerializingTranscoder {

  @Override
  protected Object deserialize(byte[] bytes) {
    final ClassLoader currentClassLoader = Thread.currentThread().getContextClassLoader();
    ObjectInputStream in = null;
    try {
      ByteArrayInputStream bs = new ByteArrayInputStream(bytes);
      in = new ObjectInputStream(bs) {
        @Override
        protected Class<?> resolveClass(ObjectStreamClass objectStreamClass) throws IOException, ClassNotFoundException {
          try {
            // try the current thread's context class loader first
            return currentClassLoader.loadClass(objectStreamClass.getName());
          } catch (Exception e) {
            return super.resolveClass(objectStreamClass);
          }
          }
        }
      };
      return in.readObject();
    } catch (Exception e) {
      e.printStackTrace();
      throw new RuntimeException(e);
    } finally {
      closeStream(in);
    }
  }

  private static void closeStream(Closeable c) {
    if (c != null) {
      try {
        c.close();
      } catch (IOException e) {
        e.printStackTrace();
      }
    }
  }
}

With the connected method you can check whether any memcached instances are available, which is better than calling a method and waiting for the timeout.

def connected() {
  return !memcachedClient.availableServers.isEmpty()
}

Now you can inject your Service where you need to and cache along.

Cache the outermost layer

If you use Hibernate you get database-based caching almost for free, so why bother using another cache? In one application we used Hibernate to fetch a large chunk of data from the database, and even with caches it took 100 ms. Measuring the code showed that the processing of the data (conversion for the client) took by far the biggest chunk of that time. Caching the processed data led to 2 ms for the whole request. So one takeaway here is that caching the results of (user-independent) calculations and conversions can speed up your requests even further. For static resources, you can also use HTTP caching directives.
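
The idea of caching the outermost layer can be sketched as a simple cache-aside helper (a hypothetical illustration with invented names, not the actual application code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of "cache the outermost layer": cache the already converted result
// instead of the raw database rows, so a hit skips both the query and the
// expensive conversion.
final class ConversionCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();

    V getOrCompute(K key, Function<K, V> loadAndConvert) {
        // computeIfAbsent runs the expensive load + conversion only on a miss
        return cache.computeIfAbsent(key, loadAndConvert);
    }
}
```

In the real application the key would of course be timestamped as described above, so that a change in the underlying data produces a new key instead of a stale hit.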

Small cause – big effect (Story 1)

This is a story about a small programming error that caused a company-wide chaos.

Today’s modern software development is very different from the low-level programming you had to endure in the early days of computing. Computer science invented concepts like “high-level programming languages” that abstract away many of the menial tasks necessary to perform the desired functionality. Examples of these abstractions are register allocation, memory management and control structures. But even if we step into programming at a “high level”, we build whole cities of skyscrapers (metaphorically speaking) on top of those foundation layers. And still, if we make errors in the lowest building blocks of our buildings, they will come crashing down like a tower in a game of Jenga.

This story illustrates one such error in a small and low building block that has widespread and expensive effects. And while the problem might seem perfectly avoidable in isolation, in reality, there are no silver bullets that will solve those problems completely and forever.

The story is altered in some aspects to protect the innocent, but the core message wasn’t changed. I also want to point out that our company isn’t connected to this story in any way other than being able to retell it.

The setting

Back in the days of unreliable and slow dial-up internet connections, there was a big company-wide software system that should connect all clients, desktop computers and notebooks likewise, with a central server that should publish important news and documentation updates. The company developing and using the system had numerous field workers that connected from the hotel room somewhere near the next customer, let the updates run and disconnected again. They relied on extensive internal documentation that had to be up-to-date when working offline at the customer’s site.

Because the documentation was too big to be transferred completely after each change, but not partitioned enough to use a standard tool like rsync, the company developed a custom solution written in Java. A central server kept track of every client and all updates that needed to be delivered. This was done using a Set, more specifically a HashSet for each client. Imagine a HashMap (or Dictionary) with only a key, but no payload value. The Set’s entries were the update command objects themselves. With this construct, whenever a client connected, the server could just iterate the corresponding Set and execute every object in it. After the execution had succeeded, the object was removed from the Set.

Going live

The system went live and worked instantly. All clients were served their updates and the computation power requirements for the central server were low because only few decisions needed to be made. The internet connection was the bottleneck, as expected. But it soon turned out to be too much of a bottleneck. More and more clients didn’t get all their updates. Some were provided with the most recent updates, but lacked older ones. Others only got the older ones.

The administrators asked for a bigger line and got it installed. The problems thinned out for a while, but soon returned as strong as ever. It wasn’t a problem of raw bandwidth apparently. The developers had a look at their data structure and discovered that a HashSet wouldn’t guarantee the order of traversal, so that old and new updates could easily get mixed up. But that shouldn’t be a problem because once the updates were delivered, they would be removed from the Set. And all updates had to be delivered anyways, regardless of age.

Going down

Then the central server instance stopped working with an OutOfMemoryError. The heap space of the Java virtual machine was used up by update command objects, sitting in their HashSets waiting to be executed and removed. It was clear that there were far too many update command objects to come up with a reasonable explanation. The part of the system that generated the update commands was reviewed and double-checked. No errors related to the problem at hand were found.

The next step was a review of the algorithm for iterating, executing and removing the update commands. And there, right in the update command class, the cause was found: The update command objects calculated their hashcode value based on their data fields, including the command’s execution date. Every time an update command was executed, this date field was updated to the most recent value. This caused the hashcode value of the object to change. And this had the effect that the update command object couldn’t be removed from the Set, because the HashSet implementation relies on the hashcode value to find its objects. You could ask the Set if the object was still contained and it would answer “no”, but it would still include the object in each loop over its content.

The cause

The Sets with update commands for the clients always grew in size, because once an update command object was executed, it couldn’t be removed but appeared absent. Whenever a client connected, it got served all update commands since the beginning, over and over again in semi-random order. This explained why sometimes the most recent updates were delivered, but older ones were still missing. It also explained why the bandwidth was never enough and all clients lacked updates sooner or later.

The cost of this excessive update orgy was substantial: numerous clients had leeched all the (useless) data they could get until the connection was cut, day after day, over expensive long-distance calls. Yet they lacked crucial updates, which caused additional harm and chaos. And all this damage could be tracked down to a simple programming error:

Never include mutable data in the calculation of a hash code.
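
The bug can be demonstrated in a few lines. This is a deliberately broken class mirroring the mistake from the story (names invented):

```java
import java.util.HashSet;
import java.util.Set;

// Deliberately broken: hashCode() depends on a mutable field, so the object
// gets "lost" in a HashSet once that field changes after insertion.
final class UpdateCommand {
    long executionDate; // mutable field that feeds hashCode - the mistake

    UpdateCommand(long executionDate) {
        this.executionDate = executionDate;
    }

    @Override
    public int hashCode() {
        return Long.hashCode(executionDate);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof UpdateCommand
                && ((UpdateCommand) o).executionDate == executionDate;
    }
}
```

The HashSet files the object under its hash at insertion time; after the mutation, lookups search a different bucket, so contains() and remove() fail while iteration still yields the object.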

Java Pub Quiz

Everybody loves a pub quiz. So I collected some Java trivia questions ranging from syntax to frameworks. Have fun!


  1. What is the name of the following syntax elements: (1 point each)
    <>
    ?:
  2. Is this valid Java code and why or why not?
    http://www.google.de
  3. When does a != a result in true for the same a?
  4. Which non-internal package cannot be imported?
  5. What’s the result of “Hello” == “Hello” and why?
  6. How is this piece of code construct called?
    new ArrayList() {{
      add("Hello");
    }};
    
  7. For which value does Math.abs(int) return a negative number?
  8. What does file.delete() do when the file in question cannot be deleted?
  9. What is the result of Arrays.asList(1, 2, 3).add(2) ?

Solutions

  1. diamond operator and ternary operator or elvis (in Groovy)
  2. Yes, http: is a label and // starts a comment
  3. Double.NaN
  4. The default package
  5. True. All String literals are interned.
  6. Double brace initialisation
  7. Integer.MIN_VALUE
  8. It returns false
  9. java.lang.UnsupportedOperationException
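
Several of these answers can be verified directly in code; a small check class (invented for this post) might look like this:

```java
import java.util.Arrays;

// A few of the quiz answers above, checked directly in code.
final class QuizChecks {
    static boolean nanNotEqualToItself() {
        double a = Double.NaN;
        return a != a;                          // question 3
    }

    static boolean literalsAreInterned() {
        return "Hello" == "Hello";              // question 5
    }

    static boolean absCanBeNegative() {
        return Math.abs(Integer.MIN_VALUE) < 0; // question 7
    }

    static boolean asListIsFixedSize() {
        try {
            Arrays.asList(1, 2, 3).add(2);      // question 9
            return false;
        } catch (UnsupportedOperationException e) {
            return true;
        }
    }
}
```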

Framework

Here we name three classes from a JDK package or an open source framework, can you guess which package or framework it is?

  1. Closeable, Console, Serializable
  2. Objects, Properties, Random
  3. Callable, Future, Phaser
  4. Point, Robot, Toolkit
  5. AutoCloseable, Iterable, Process
  6. Mapping, PersistentCollection, Session
  7. EqualsBuilder, Mutable, StringUtils
  8. ApplicationContext, DataBinder, JdbcTemplate
  9. Frequency, Length, Volume
  10. Minutes, Weeks, Years

Solutions

  1. java.io
  2. java.util
  3. java.util.concurrent
  4. java.awt
  5. java.lang
  6. Hibernate
  7. Apache Commons (Lang)
  8. Spring
  9. JScience
  10. Joda Time

Grails / GORM performance tuning tips

Every situation and every code base is different, but here are some pitfalls that can cost performance and some tips you can try to improve performance in Grails / GORM.

First things first: never optimize without measuring. This applies even more with Grails, as there are many layers involved when running code: the code itself, the compiler-optimized version, the Grails library stack, HotSpot, the Java VM, the operating system, the C libraries, the CPU… With this many layers and even more possibilities, you shouldn’t guess where you can improve performance.

Measuring the performance

So how do you measure code? If you have a profiler like JProfiler, you can use it to measure different aspects of your code, like CPU utilization, hotspots, JDBC query performance, Hibernate, etc. But even without a decent profiler, some custom code snippets can go a long way. Sometimes we use dedicated methods for measuring the runtime:

class Measurement {
  public static void runs(String operationName, Closure toMeasure) {
    long start = System.nanoTime()
    toMeasure.call()
    long end = System.nanoTime()
    println("Operation ${operationName} took ${(end - start) / 1E6} ms")
  }
}

or you can even show the growth in the Hibernate persistence context:

class Measurement {
  // sessionFactory needs to be provided (e.g. injected); the helper methods
  // numberOfInstancesPerClass and differenceOf are omitted here for brevity
  public static void grown(String operationName, Closure toMeasure) {
    PersistenceContext pc = sessionFactory.currentSession.persistenceContext
    Map before = numberOfInstancesPerClass(pc)
    toMeasure.call()
    Map after = numberOfInstancesPerClass(pc)
    println "The operation ${operationName} has grown the persistence context: ${differenceOf(after, before)}"
  }
}

Improving the performance

So when you have found your badly performing code, what can you do about it? Every situation and every code base is different, but here are some pitfalls that can cost performance and tips you can try to improve it:

GORM hotspots

Performance problems with GORM can be in different areas. A good rule of thumb is to reduce the number of queries hitting the database. This can be achieved by combining results with outer join, eager fetching associations or improving caching. Another hotspot can be long running operations which you can improve via creating indices on the database but first analyze the query with database specific tools like ANALYZE.
Also a typical problem can be a large persistence context. Why is this a problem? The default flush mode in Hibernate and hence GORM is auto which means before any query the persistence context is flushed. Flushing means Hibernate checks every property of every instance if it has changed. The larger the persistence context the more work to do. One option would be to clear the session periodically after a flush but this could decrease the performance because once loaded and therefore cached instances need to be reloaded from the database.
Another option is to identify the parts of your code which only need read access to the instances. Here you can use a stateless session, or in Grails the Spring annotation @Transactional(readOnly = true). It can be beneficial for performance to separate read-only and write access to the database. You could also experiment with the flush mode, but beware that this can lead to wrong query results.
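
In a Grails service, separating the read path could look like this (the service and its query are made up for illustration):

```groovy
import org.springframework.transaction.annotation.Transactional

class ReportService {

    // Read-only: Hibernate skips dirty checking and flushing
    // for everything loaded in this transaction.
    @Transactional(readOnly = true)
    List<Order> findOpenOrders() {
        Order.findAllByState('open')
    }
}
```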

The thin line: where to stop?

If you measure and improve, you get big and small improvements. The hard part is deciding which of the small ones change the code in a good or at least minimal way. It is a trade-off between performance and code design, as some performance improvements can worsen the code quality. Another cup of tea left untouched in this discussion is scalability. Whereas performance concentrates on the actual data and the current situation, scalability looks at how the system performs when the data grows. Some performance improvements can even worsen scalability. As with performance: measure, measure, measure.

An experiment on recruitment seriousness

I had the chance to witness an experiment about the seriousness of companies to attract new developers. The outcome was surprisingly accurate.

Most software development companies in our area are desperately searching for additional software developers to employ. The pressure rose until two remarkable recruiting tools were installed. At first, every tram car in town was plastered with advertisements shouting “we search developers” as the only message. These advertisements have an embarrassingly low appeal to the target audience, so the second tool was a considerable bonus for every developer recruited by recommendation. This was soon called the “headhunter’s reward” and laughed at.

Testing the prospects

The sad thing isn’t the current desperation in the local recruitment efforts, it’s the actual implementation of the whole process. Let’s imagine for a moment that a capable software developer from another town arrives at our train station, enters a tram and takes the advertisement seriously. He discovers that the company in question is nearby and decides to pay them a visit – right now, right here. What do you think will happen?

I had the fortune to talk to a developer who essentially played through the scenario outlined above with five local software development companies that are actively recruiting and advertising. His experiences differed greatly, but gave a strangely accurate hint of each potential employer’s actual company culture. And because the companies’ reactions were utterly archetypical, it’s a great story to learn from.

Meeting the company

The setting for each company was the same: The developer chose an arbitrary date and appeared at the reception – without an appointment, without previous contact, without any documents. He expressed interest in the company and generally in any developer position they had open. He was also open to spontaneous talks or even a formal job interview, though he didn’t bring along a resume. It was the perfect simulation of somebody who got instantly convinced by the tram advertisement and rushed to meet the company of his dreams.

The reactions

Before we take a look at the individual reactions, let’s agree on some acceptable level of action from the company’s side. The recruitment process of a company isn’t a single-person task. The “headhunter’s reward” tries to communicate this fact through monetary means. Ideally, the whole company staff engages in many little actions that add up to a consistent whole, telling everybody who gets in contact with any part of the company how awesome it is to work there. While this would be recruitment in perfection, it’s really the little actions that count: taking a potentially valuable new employee seriously, expressing interest and care. It might begin with offering a cool beverage on an exceptionally hot day or handing out the company’s image brochure. If you agree on the value of these little actions, you will understand my evaluation scheme for the actual reactions.

The accessible boss

One company won the “contest” by a wide margin. After the developer arrived at the reception, he was relayed to the local boss, who really had a tight schedule but offered a ten-minute coffee talk after some quick calls to shift appointments. Both the developer and the company’s boss exchanged basic information and expectations in a casual manner. The developer was provided with a great variety of company print material: the obligatory image brochure, the latest monthly company magazine and a printout of current job openings. The whole visit was over in half an hour, but left a lasting impression of the company. The most notable message was: “we really value you – you are boss level important”. And just to put things into perspective: this wasn’t the biggest company on the list!

The accessible office

Another great reaction came from a receptionist who couldn’t reach anybody in charge (the timing was generally not ideal) and decided to improvise. She “just” worked in the accounting department, but tried her best to present the software development department and explain some basic cool facts about the company. The visit included a tour through the office space and ended with generic information material about the company. The most notable message was: “We like to work here – have a look”.

The helpless reception

Two companies basically reacted the same way: The receptionist couldn’t reach anybody in charge, expressed helplessness and hoped for sympathy. Compared to the reactions above, this is a rather poor and generic approach to the recruitment effort. In one case, the receptionist even forgot basic etiquette and didn’t offer the obligatory coffee or image brochure. The most notable message was: “We only work here – and if you join, you will, too”. To put things into perspective: one of them was the biggest company on the list, probably with rigid processes, highly partitioned responsibilities and strict security rules.

The rude reception

The worst first impression was made by the company whose reception acted like a defensive position. Upon entering, the developer was greeted coldly by the two receptionists. When he explained the motivation of his visit, the first receptionist immediately zoned out while the second one answered: “We have an e-mail address for applications, please use it” and lost all interest in the guest. The most notable message was: “Go away – why do you bother us?”.

What can be learnt?

The whole experiment can be seen from two sides. If you are a developer looking for a new position in a similar job market, you’ll gain valuable insights about your future employer just by dropping by and assessing the reaction. If you are a software development company desperately looking for developers, you should regard your recruitment efforts as a whole-company project. Good recruitment is done by everybody in your company, one little action at a time. Recruitment is a boss task, but to work out positively, it has to be carried by virtually the whole staff. And a company full of happy developers will attract more happy developers through the convincing recruitment work they do in their spare time, most of the time without even being aware of it.

Your own perfection hinders you

Remember when you first started programming? A post against your (exaggerated) perfection.

Remember when you first started programming? Did you think about tests? Did you plan an architecture before coding? Did you look at your results and think “what a mess”? No, you were happy to see something, anything, done. Imperfect, but in a sense beautiful. You had a feeling of accomplishment. That little pixel that responded to you pressing keys, the little web page that just saved some data. You did something.
Now fast forward to today. You write software professionally. With tests, architecture, well thought out. You practice an agile methodology (whatever that means). Don’t get me wrong: these are all important points and can help you reach a solid implementation. But what if you need to implement a prototype? Just to try something? Fire and forget, quick and dirty. Can you do it? Do you start with writing tests? Planning the architecture? Writing a spec? And afterwards: What do you think of the result? Is it ugly? Is it not done “professionally”?
What if you start in another field of your profession? Maybe you made websites your whole career and now start with desktop or mobile apps. Or you implemented back end code and now start writing code for the front end. Do you feel insecure? Do you think you just write crap? You shouldn’t. Remember your beginnings. Yes, you have matured, you know more, you write better. But you should celebrate getting something done. Shipping something. Seeing something. That feeling of accomplishment. Don’t criticize too hard, don’t be too harsh to yourself and your code. Get something done and then improve along the way. Just like when you started. Just keep shipping.
And for you, young software engineers who are just starting out: We all went through this phase. Don’t look at others’ work and think: wow, this looks so good and my work doesn’t. Think: he also went through this phase, and now he can create this wonderful work; I will, too.
Ira Glass, the American radio host, said it best.