Arbitrary limits in IT

When I was a young teenager, a game captured my attention for months: Sid Meier's Railroad Tycoon. The essence of the game was to build railways between cities and transport as many passengers and as much freight between them as possible.

I tell this story because it was the first time I discovered a software bug and exploited it. In a way, it was the start of my "hacking" career. Don't worry, I'm a software developer now, not a hacker. I produce the bugs, I don't search for and exploit them.

The bug in Railroad Tycoon had to do with buying industry. You could not only build tracks and buy trains, you could also acquire industrial complexes like petroleum plants and factories. And while you couldn't build more tracks once your money ran out, you could accrue debt by buying industry. If you did this extensively, your debt suddenly turned into a small fortune. I didn't know about the exact mechanics back then, but it was a classic 16-bit signed integer overflow bug. If your debt exceeded 32,768 dollars, the sign turned positive. That was a lot of money in the game and you had a lot of industry, too. The English Wikipedia article seems to be a bit inaccurate.

If you are accustomed to IT, there are some numbers that you immediately recognize as problematic, like 255, 32,767, 65,535 or 2,147,483,647. If anything unusual or bad happens around these numbers, you know what's up. It's usually related to an integer overflow or (in the case of Railroad Tycoon) underflow.
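
If you want to see the wrap-around for yourself, here is a minimal Java sketch (not the game's code, of course) that shows how a 16-bit signed value flips its sign at the boundary:

public class OverflowDemo {
  public static void main(String[] args) {
    short balance = 32_767;        // Short.MAX_VALUE, the upper edge of 16-bit signed
    balance += 1;                  // wraps around
    System.out.println(balance);   // prints -32768

    short debt = -32_768;          // Short.MIN_VALUE
    debt -= 1;                     // one dollar more debt...
    System.out.println(debt);      // ...prints 32767: the debt just became a fortune
  }
}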

But then, there are problematic numbers that just seem random. If you wanted to name a table in an older Oracle database, you couldn't use more than 30 characters. Not 32 or something that could somehow be related to a technical cause, but 30. Certain text values couldn't be longer than 2000 characters. Not 2048 (or 2047 with a terminating zero character), but straight 2000. These numbers look more "usual" for a normal human, but they appear just as arbitrary to the IT professional's eye as 2048 might seem to others.

Recently, I introduced a new developer to an internal project of ours. We set up the development environment and let the program run a few times. Then, we introduced some observable changes to the code to explain the different parts. But suddenly, a console output didn’t appear. All we did was to introduce a line of code in the form of:

System.out.println(output);

And the output just didn't show up. We checked that the program executed the code beforehand and even fired up a debugger (something I'm not really fond of) to see that the output string was really filled.

We changed the line of code to:

System.out.println(output.length());

And got our result: 32,483 characters.

As you can see, the number is somewhat near the 32k danger zone. But in IT, these danger zones are really small. You can be right beside one and not notice anything, but one step further and you're in trouble. In a way, they act like minefields.

There should be nothing wrong with 32,483 characters printed on a console. Well, unless you use Eclipse and Windows. If you do, there is a new danger zone, starting at 32,000 characters. And this zone isn't small. In fact, it affects any text with more than 32,000 characters that should be printed in an Eclipse console on Windows:

https://bugs.eclipse.org/bugs/show_bug.cgi?id=23406

‘ScriptShape’ WINAPI fails with E_FAIL for OpenType fonts when the length of the string is over 32000 (0x7d00) characters

Notes: 32000 is hardcoded in ‘gdi32full!GenericEngineGetGlyphs’ Windows function.

https://bugs.eclipse.org/bugs/show_bug.cgi?id=23406#c31
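
If you want to reproduce the effect, a minimal sketch (with a hypothetical class name) looks like this – the second output may silently vanish, but only when run in an Eclipse console on Windows:

public class ConsoleLimitDemo {
  public static void main(String[] args) {
    // build a single line just above the 32,000 character threshold
    StringBuilder line = new StringBuilder();
    while (line.length() < 32_483) {
      line.append('x');
    }
    System.out.println(line.length()); // this output always shows up
    System.out.println(line);          // this one may not
  }
}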

There is nothing special about the number 32,000. My guess is that some developer at Microsoft in the nineties had to impose a limit and just thought that “32,000 characters ought to be enough for anybody”. This is a common mistake made by Microsoft and the whole IT industry.

The problem is that now, 20 or even 30 years later, this limit is still in place. Our processing power grew by a factor of 1,000 (yes, one thousand times more power), the amount of available memory even by a factor of 16,000, and we are still limited to 32,000 characters for a line of text. If the limit had grown accordingly, you could now fit up to 32,000,000 characters in that string and it would just work.

So, what is the moral of this story? IT and software development are minefields where, at any turn, you can step on a mine that was hidden 20+ years ago. But even more important: If you write code, please be aware that every limit you introduce into your solution will cause trouble in the future. Some limits can be explained by other limits, but others are just arbitrary. Make the arbitrary limits visible and maybe even configurable!
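
What could "visible and configurable" look like? A minimal Java sketch, with hypothetical names, might be as simple as this:

public class OutputLimits {
  // The limit is arbitrary, so state it loudly in one place instead of
  // burying a magic number, and allow overriding it without recompiling.
  private static final int DEFAULT_MAX_LINE_LENGTH = 32_000;

  public static int maxLineLength() {
    // can be raised at startup with -Doutput.maxLineLength=...
    return Integer.getInteger("output.maxLineLength", DEFAULT_MAX_LINE_LENGTH);
  }
}

That way, the next person who hits the limit at least finds a name for it and a knob to turn.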

Applied User Research on my own pile of synthesizer machines

Last year, I moved. Moving into a new apartment is very much akin to a major rewrite of a complex piece of software. A piece of software with a limited number of users, maybe (well, me, my girlfriend, that's it), but with an immensely sophisticated structure of requirements… despite the decade-long experience of “living somewhere” one usually has at this point.

For example, I have a certain habit of hoarding hardware synthesizers – from Soviet-era monstrosities like the Поливокс to modern machines, they have a few things in common, as they

  1. take quite some space (due to their extensive wiring) and some are heavy
  2. idle around for most of the time (I mean, I have a job)
  3. are versatile enough not to have a clear-cut function in any workflow.

These are somewhat essential. All in all, I had some unassigned space in my new home and my machines instantly colonized it. Like the original “work expands to fill the time available”, there seems to be another Parkinson's Law correlating hardware synthesizers and the space you allow them to have. So basically, they now occupied one room of their own.

Which could be the end of the story and everyone would live happily ever after. If you have, like, a googol of rooms. And considering point 2 above – this solution feels quite like a waste.

Now – also last year, I began to deep-dive into the techniques of User Experience. And basically, my problem is pretty much comparable to that of a customer with a few quite diverging requirements – one who wants his problem solved, but hasn't yet figured out his actual needs to begin with.

Ergo, I should be able to use my insights from the field of UX, or Interaction Design, directly to my advantage, by applying techniques of User Research to my own behaviour.

And the first question should always be: By which approach do we gain the largest understanding with the least effort? But the zeroth question is actually: How “wicked” is my problem? Do I have an absolutely ill-defined, ever-changing, non-testable set of challenges? Or is my problem-solving rather a technical feat, implementing the best patterns, clearest details, best documentation?

Indeed, point 3 above makes my problem rather ill-defined. On some wickedness scale, with 1 being a quick YouTube search for a how-to and 10 being a problem for which I would need to go on a week-long mountain retreat with daily meditation sessions… I would give it a 6-7.

This feels like it should be in the range of difficult, but solvable with a kind of generalized abstract thinking. I choose to opt for the technique of the Five Whys: the goal is to ask a consecutive series of ever-deeper questions behind my problem. Generally, I understand this as

  • “Five” is a general number one can aim for. There can be more or fewer, and there can be branches in the questioning.
  • “Why” is a placeholder for any qualifier that moves to a more abstract level. It can also be a „What do you want…“, „How about…“ or „What‘s wrong with…“ serving that purpose.

So, I imagine myself sitting on both sides of a table and asking:

  1. What do you want to accomplish, and why?
    • I want to have a truckload of synthesizers, fully functional, and also not to waste my space.
  2. Why do you need your space?
    • Not only do I have other stuff as well – less clutter is generally a way to improve quality of life.
  3. Why don‘t you just get rid of the machines, i.e. selling, basement, …?
    • Well. I do want to have access to spontaneous synthesizer jams. The pandemic makes it hard to get that creative input anywhere else.
  4. Why does this need to be spontaneous?
    • Because musical flow, like any creative flow, comes spontaneously. I don‘t want a great idea to slip away because of a long setup.
  5. Why would setup times have to be long?
    • Because in a strictly ordered system I need to collect stuff from various spaces, i.e. the synthesizer itself, the power plug from a box of power plugs, cables for MIDI, audio, control voltages, …
  6. Why would these have to be in their ordered boxes, far apart from the synthesizer?
    • Hm. Well, I guess they wouldn‘t have to be. That‘s just what one would do?

Now basically, this example did not happen as straightforwardly as I just made it seem, but it helped me channel my focus on a rule that is actually quite well known in the design of enjoyable user interfaces: „Group things according to their usage, not their nature“. This is a form of reducing cognitive load. UI designers sometimes make this mistake, e.g. grouping “everything that is a filter” in one section, “everything that is a configuration” in another, and so on; focussing more on the technical implementation of what these are, not on their proximity in the actual domain. This gets worse the more technical a domain is in its essence, which is the mistake I made.

Wiring hardware synthesizers together is a very technical task, but there is much more use in grouping what belongs together when one wants to use it, not when one looks at its definition.

So now I have it. I, as a user, am happy that I, as a researcher, actually asked stupid questions like “Why don‘t you sell all the garbage?” in order to channel my focus towards a system in which setting up the whole stuff is rather quick, reducing the cognitive load – or rather shifting it to the cleanup process – which is fine, because I now can trust the system 😉

Hacking one‘s wetware by sleeping slantwise

Over the turn of the year I took some weeks off, in order to work on some private projects, finish some books that I had left halfway-finished for far too long, and of course, to reflect on what concepts like New Year could actually mean. As usual, the short answer is… not much per se, but one can seize the occasion to contrive a few goals for the year. You know, not those mundane, marginal resolutions like “on February 20th I will definitely go for a run”, or ambiguous abstractions like “in 2021 I‘m finally gonna be a people person!”, but a more profound search for something new, a kind of evaluation of undiscovered instruments in the toolkit of one‘s being, the juicy stuff.

Now we‘ve seen for quite some years that the world of self-improvement likes to border on the superstitious. From particularly fine-tuned compositions of one‘s diet plan to the sheer religious belief in certain routines, there‘s no shortage of shady suggestions that draw their data from unique success stories, which makes me wonder: Even if there was a kind of biological truth to these underlying claims, how could I survive the cringing of my heart that I would experience by reading these articles?

A special downside of solutions of the „collection of very intricate details“ kind is that they aim to intervene in your life at a very incessant level. That is, you have to think about them all day for fear of breaking the pattern, i.e. you not only distract yourself from the stuff you actually want to do, but you also don‘t get any certainty of feedback: if something appears to help, what exactly is it that helps – or, if there‘s no effect to be noticed, which detail is maybe done wrong and blasts away all the other efforts. I guess I would only resort to such methods if I had the impression that something‘s seriously wrong with me, and I currently don‘t notice people telling me this more than once a week, so it‘s fine.

Then there are the kind of solutions that I consider just minor variations of the stuff that one already sort of knows. E.g. I don‘t consider “doing some physical movement once in a while is quite ok” as a real form of “bio-hack”, as that is just common knowledge. Ok, you still have to actually apply it, fair enough, but from an intellectual standpoint it’s boring.

So anyway, I‘m still convinced there ought to be some ways of modifying your lifestyle at quite a beginner-suitable level – ones that can easily be opted into, with a low requirement of risk or investment – and that‘s where I usually start listening.

A few years back, for instance, that was the discovery of intermittent fasting, which in my case happened to show nearly instantaneous effects, mostly positive (e.g. for subjective impressions of my attention and overall well-being). This is something that I‘d at least easily recommend trying out a few times, but as for me, I‘m not really inclined to implement it right now.

One can probably be a bit ambivalent about all these over-the-(online-)counter supplement prescriptions that are listed everywhere. I‘m not going to recommend the regular use of any substance here, but there‘s plenty of articles about nootropics or other cognitive enhancers; some of these articles also border on the quasi-religious realm, others appear more scientific (the usage of coffee is living in the same domain, and I like that a lot) – so I‘ll leave it to the reader to form his own opinion.

Having said that, I can now finally reveal what this post is all about, as I seem to have found another intriguing way which I‘ve been trying out for a few weeks now. I came across the claim that it is advantageous to sleep on an inclined bed. That certainly was outside my current scope; the underlying claims are about an improved flow of your glymphatic system, as well as pressure regulation. Be that as it may, it doesn‘t sound harmful and it‘s easily obtainable: you raise your bed‘s head end about 10-15cm with some suitable, stable risers, available for below 20€. I found it only weird for the first night and seemed to wake up in a better mood afterwards. Together with such nice add-ons as a wake-up light (if you have one), this certainly qualifies as a “why didn’t anyone tell me earlier?” moment for me.

So, if you have more intriguing bio-hacks that you consider definitely on the non-quacky side, I‘m interested 🙂

Windows 10 quality of life features

Starting with Windows 10, Microsoft switched from big-bang releases of its operating system to so-called rolling releases: they release new features and improvements in regular intervals – once or twice a year – without changing the product name.

The great thing is that users get the improvements made by Microsoft's engineers much sooner than in the past, when you had to wait several years for a big “service pack” to arrive or even a new major release of Windows like 2000, XP, Vista, 7, 8, 8.1 and finally 10 (I am leaving out the dark times on purpose 😉).

The bad thing is that it is harder to see what version or release you are running. Of course there is a (less visible) name for every Windows release. This version or codename sometimes gets mentioned on support pages or in blog posts because the functionality of Windows 10 can change significantly between these rolling feature updates. And sometimes you may run some app or tool that tells you it needs Windows 10 2004 or higher.

What version of Windows 10 am I running?

I know of two simple ways to find out what version of Windows 10 you are running currently:

  1. Running the tool winver
  2. Opening the Settings -> System -> About page

Why does it matter?

Another downside is that users often are not aware of new stuff added to their operating system. And Microsoft does an awful job promoting the changes and improvements!

Of course there are announcements about the big things after upgrading your operating system to the next feature level. And Microsoft uses these for marketing its own apps and services. They slap their new Edge browser in your face on every occasion and try to trick you into creating a Microsoft account. It is absolutely not obvious how to use Windows without a Microsoft account like in the decades before. Skip the process here, continue without, and risk your life…

On the other hand they really improve their software and slowly but steadily round the rough edges of their system. The UIs for environment variables are finally quite usable.

Now back to the main theme of the post: There are some hidden gems built into Windows 10 that I learned of only lately and I think are vastly underadvertised – unlike Microsoft’s marketing of their big products.

Built-in screen recorder

Ok, many gamers may know it because Windows briefly displays the shortcut Win+G when starting a game. But it is not only usable for games: you can record any window, capture application sounds and record your voice. You can easily record your own screencasts and video tutorials using this built-in solution.

Built-in clipboard manager

How often did you wish to be able to copy multiple items and choose one of the last few copied elements when pasting? While such clipboard managers have been around for a long time and sometimes provide tons of useful features, Windows 10 has a simple one built in. Just press Win+V instead of Ctrl+V to paste your clipboard entry and you will get a list of the copied items to choose from.

Built-in screenshot/snipping tool

Many people may know the old way of making screenshots using the oddly labeled PrtScr key (sometimes also PrntScrn or simply Print), opening a painting application like MS Paint and pasting the image using Ctrl+V. Well, Microsoft improved this workflow a lot by including a snipping tool that you can activate using Win+Shift+S. This tool lets you select either a rectangular or free-form region, a window or an entire screen to capture. After doing that you get a notification allowing you to make modifications to the capture and save it to disk.

On-screen emoji-keyboard

Just a little helper in these modern social media times is the on-screen emoji keyboard. Using Win+. you can activate it, browse tons of common emojis and enter them into your messages and texts 🐱‍🏍🤘.

Windows Terminal

Ok, this last but not least one is not (yet) built in and is mostly interesting for developers and power users. Nevertheless, I think it is noteworthy that Microsoft finally built a capable terminal application with modern features like multiple tabs, full Unicode and font support, customizable background with blur and the ability to host different shells like the old and trusted command prompt CMD, the newer PowerShell and WSL. You can find it in the Microsoft Store for free.

Conclusion

While releases of Windows 10 are more subtle than past new Windows versions, many things change, both under the hood and user-visible. Every once in a while something you missed for years or installed third-party tools for may be added without you knowing. That's another reason why talking to colleagues and friends and practices like pair programming and brown-bag meetings are so valuable for sharing knowledge and experience.

I hope there is something for you in my findings of hidden Windows gems. If you have some Windows 10 features you discovered and really like, feel free to leave them in the comments. I will gladly try them out!

What is it with Software Development and all the clues to manage things?

As someone who started programming a long time ago (roughly 20 years, now that I think about it), but only in recent years entered the world of real software development, I find that the mastery of day-to-day challenges consists of two main topics: first, in our rapidly evolving field we never run out of new technologies to learn, and then, there's a certain underlying engineering aspect of how to do things in a certain manner, with lots of new input every year.

So after I recently shared some such ideas with my friends (I indeed still have a few of those), I wondered: How is it that in the modern software development world, most of the information about managing things actually comes from the field itself, which then feeds its ideas about project management, quality, etc. back into the non-software subspaces of the world? (Ideas like the Agile movement, Software Craftsmanship, the calls for doing things Lean and Clean have nowadays prospered so much that you see their application or modification in several other industries. Like advertisement, just as an example.)

I see a certain kind of brain food in this question. What tells software development apart from other current fields, so that there is a broad discussion and considerable input at its base level? After all, if you plan on becoming someone who builds houses, makes cars, or manages cities, you wouldn‘t engage in such a vivid culture of „how“ to do things, rather focussing on the „what“.

Of course, I might be mistaken in this view. But by asking what actually tells software development apart from these other fields of producing something, I see a certain kind of brain food, helpful for approaching everyday tasks and valuing better tips over worse ones.

So, what can that be?

1. Quite peculiar is the low entry threshold for being able to call yourself a “programmer”. With all the resources you get at relatively little cost (assumption: you have a computer with a working internet connection), you have a lot of channels by which you can learn the „what“ of software development first and save the „how“ for later. If you plan on building a house, there isn‘t a bazillion books, tutorials, and videos, after all.

2. Similarly, there‘s the rather low cost of failure when drafting a quick hobby project. A piece of code that you write in your free time won‘t always tell you „hey man, you ever thought about some better kind of architecture?“ – which is why bad habits can stick and even feel right. If you choose the „wrong“ mindset, you don‘t always lose heaps of money, and neither do you if you switch your strategy once in a while (you probably will, though, if you are too careless in this process).

3. Furthermore, there‘s the dynamic extension of how your project is going to be used („Scope Creep“). One would build a skyscraper in a different way than a bungalow (I‘m not an expert, though), but with software, it often feels like adding a simple feature here, extending the scope there, until you hit a point where all its interdependencies are in a complex state of conflict…

4. Then, it‘s a matter of transparency: If you sit in a badly designed car, it becomes rather obvious when it always exhausts clouds of black smoke. Or when your house always smells of fresh toilet. Of course, a well designed piece of software will come with a great user experience, but as you can see in many commercial products, there also is quite some presence of lower-than-average-but-still-somehow-doing-what-it-should software. Probably users are more tolerant with software than with cars?

5. Also, as in most technical fields, it is not the case that „pure consultants“ are widely received in a positive light. With most nerds, you don‘t get a lot of credibility if you talk about best practices without having got your hands dirty over a longer period of time. Ergo, it takes some experienced software programmers to advise less experienced software programmers… but surely, it‘s questionable whether this is a good thing.

6. After all, the requirements for someone who develops a project might be very different in each field. From my academic past in computational physics, I know that there is quite some demand for „quick & dirty“ solutions. Need to add some Dark Matter to your model here? Well, plug this formula in and check the results. Not every user has the budget or liberty to create a solid structure for their program. If you want a new laboratory building, of course, you very much want it to be designed as well as it can be.

All in all, these observations somehow boil down to the question whether software development is to be seen more like a set of various engineering skills, or rather like a craft, an art, or a complex program of study. It is the question whether the “crack” in this field is the one who does complex arithmetic in his head, or the one who just gets what the customer wants. I like thinking about such peculiar modes of thought, as they help me understand what kinds of things I should learn next.

Or is there something completely else to it?

Shooting Troubles With Toys

As software grows, one of the typical challenges is keeping track of the quirks and subtleties of all the languages, third-party libraries, frameworks, IDEs / toolchains or whatnot that you, at places, need to maneuver in order not to construct everything yourself. That‘s often just a matter of familiarization, and after you have stumbled across a particular type of problem – once or a few times – it gets assimilated as a trivial thing. It sometimes goes so far as keeping certain warnings or harmless exceptions in your software – technical debt. (Alas, your customer doesn‘t usually care for the perfect product, or, let‘s say, won‘t wait that long and pay for the perfect product…)

Now, once in a while all the third-party implementations you rely on interact in an odd manner. These are the cases where you get a „Cannot update a component while rendering a different component“ in a React/Recoil application, a NoClassDefFoundError in a Java / Grails application, a general SegFault in a C / C++ program, or your database does weird stuff. You name it.

So when you encounter a problem that is new to you, time is usually a critical thing. So what do you do? Google it. Find fellow victims on Stack Overflow, GitHub, etc. – but this only goes so far, depending on how common your problem is.

Now you should always have a version control system at hand, of course. This has the huge advantage of being able to simplify your problem. Just open a new branch that no one cares about, and you can completely get rid of all the confusing mess that is your reliance on third-party content. Of course, this is a possibility one always knows about. The point here being: do it as a habit. Learn it as a habit. Even if it‘s only „deactivating this probably useless flag“ or „hardcoding this localhost into this URL in order to make progress quickly“ – you do not want to risk carrying this into production code. Just know that keeping experimental branches open for a longer time is a bad habit, too. So think of them e.g. as „in two hours, I will either merge or delete this branch, there‘s no way around it“. There‘s nothing in there that you wouldn‘t be able to reproduce if required.

And sometimes, it‘s more effective to work from the other end. Instead of going from „very close to where we want to be“, start from a place completely unpolluted by your technical debt. Start a toy project. Use exactly the dependencies that you have in your real project, and try to set up your error scenario. In our case, this method helped us understand a completely meaningless NoClassDefFoundError, because suddenly – with exactly the same JDK and Grails packages that we had in the real code – the IntelliJ IDE just felt more like telling us verbosely what the actual problem was. Which you can then see without all the clutter.

Even more, this procedure helps you with your own Rubber Ducking – after all, you want to describe to yourself a scenario where „actually I don‘t get it, I am just doing this and that and…“ – well, are you? Or is there more to it that your eyes don‘t see? Just find out.

Of course, this is just the precursor to a more test-driven approach. Toy projects aren‘t really anything else; they are just isolated environments in which you completely see what is going on, with an essential setup and a clear expectation. These are tests. Now if you already wrote them, why not think about including them in your projects as tests? Especially if you‘re kind of new to test-driven development, you can make this habit of toy boxing a guide on the road to a more test-driven way of thinking.
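
To illustrate that last thought, here is a minimal JUnit 5 sketch (with made-up names and a trivial stand-in for the code you actually isolated) of how a toy reproduction can be promoted into a regression test:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// The toy scenario, kept as a test: same dependencies as the real project,
// minimal setup, one clear expectation.
class ToyBoxTest {

  // Stand-in for the real, third-party-heavy code you isolated in the toy project.
  static String stitchGreeting(String name) {
    return "Hello, " + name + "!";
  }

  @Test
  void handlesTheMinimalInputThatUsedToFail() {
    assertEquals("Hello, toy!", stitchGreeting("toy"));
  }
}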

Or maybe, just don‘t make errors. If you ever have the option – just choose that [I guess then you have time to fix mine, too? :)]

Your most precious resource

When I was a young student, living on my own for the first time, my most scarce resource seemed to be money. Money’s too tight to mention was (and probably still is) a motto that every student could understand. So we traded our time for money and participated in experiments and underpaid student assistant jobs.

Soon after I graduated, money began to accumulate. I have a rather frugal lifestyle, so my expenses didn't suddenly surge. Instead, my perception of time and money shifted: money isn't the bottleneck (anymore), time is. Suddenly, time was much more valuable than money and I would gladly pay money if that meant some hours of additional leisure time or one less problem to tend to. It seems that time is the most precious thing there is.

The traditional economic wisdom supports this idea: “Time is money” is true, but the reverse is not: “money is time” doesn’t cut it. The richest man on earth still only has as much time available to him as anybody else.

If time is the most precious resource, the drive to automation as a time-saving effort can be understood directly. Automation also reduces learning costs if you scale horizontally by parallelizing production.

But soon after I had enough money to optimize my time, I hit another resource bottleneck. Suddenly, I had more time on hand than attention to spare. It turns out that attention is the most valuable resource you can spend. It is just entangled enough with time that it's hard to distinguish which runs out first. If you reflect a bit, it becomes obvious. The term “to pay attention” is pretty spot on.

Let me take up the thought of automation again, this time in the domain of software development, in the form of automated tests. Here, automation is not in its most profitable shape. You don’t gain much from scaling your tests horizontally. If you don’t change the code, it doesn’t matter if your tests run once or a thousand times in parallel, the result will be the same (except if you run hardware-dependent tests, but even then you probably don’t gain much after covering all hardware variations).

You also don’t gain much from scaling your tests vertically by making them run faster and faster. It sure helps to have them run continuously in the background (think of a user-modded compiler – look into Continuous Test Runners if you are interested), but after one test run per meaningful change, the profit hits a limit.

So why else is automated testing an economically sound practice? My take on it is delegated attention. You write a small piece of software (your test) that extends your attention area onto code that will probably fade from your own attention pretty soon. Automated tests provide automated attention in a sustainable manner (except for those tests that cry wolf for no good reason; those are attention sinks and should be removed from your portfolio). Because of the automation, this delegated attention never fades – even after many years, the test keeps a close eye on “its” code.

If you are a developer, you have automation and zero-cost copying (aka parallelization without upfront costs) intrinsically in your solution portfolio. Look for ways how to make money with those super-powers. Or even better, look for ways to save time. But if you want the best return on investment for your efforts, you should look for ways to expand your attention area.

Do you agree that attention is our most precious resource? What do you do to lower your attention expenses? Perhaps you have experience in the Ops/DevOps area that resonates with these thoughts? Share your opinion by commenting below or writing your own blog entry!

We will pay attention to you.

Still thinking about managing time…

Now that I‘ve actually read what I wrote a few weeks ago 😉 … I‘ve obviously had some time to reflect: about more models of managing your time, about integrating such models into your daily life, their limits, and, of course, about the underlying force, the “why” behind all that.

While trying to adjust myself to the spacious world of home office, I especially came to notice that time management itself probably wasn‘t the actual issue I was trying to improve. Sure enough, there are several antipatterns of time mismanagement that can easily lead to excessive spending, something you can improve with simple home office installations, e.g. having a clock clearly visible from your point of view – and, sensibly, one clearly visible from your coffee machine… These are about making time perceptible, especially when you‘re not the type of person with absolutely fixed times for lunch, or such mundane concepts.

But then, there‘s a certain limit to the amount of improvement you can easily gain from managing time alone. Sure, you could try to apply every single life hack you find online, but then again, the internet isn‘t very good at accounting for differences in personal psychology. What you can do is try a few recommendations at a time, shake up established habits – but you need to evaluate their effect. Not everything is pure gold. For example, my last blog post pointed to the Pomodoro technique, where one will find that there are classes of work that can easily be scheduled into 25-30 minute blocks, but there are others where this restriction leads to more complications than it solves. Another “life hack” the internet throws at you at every opportunity is having a certain super-best time to set your alarm clock to, and I would advise trying to shuffle this once in a while to find out whether there‘s some setting that is best for you. But never think that you need e.g. the same rising pattern as Elon Musk in order to finish your blog post in time… Just keep track of yourself. How would other people know your default mode?

Now overall, each day feels a bit different, and it‘s a function of your emotional state as well as some generic randomness that has no less important an effect on your productivity than a set of rules you can just adhere to. So, instead of focussing on managing your time on a given day, we could think of actually trying to manage your productivity. But then again, “productivity management” sounds so abstract that the handles we would think of are about stuff like

  • what you eat
  • how you sleep
  • how much sports you‘re willing to do
  • how much coffee you consume

and other very profound parameters of your existence. That‘s also something you can just play around with a bit until you find an obvious optimum. (Did you know that the optimum amount of breakfast beer for you is likely to be zero?… … :P)

However, if you keep on fine-tuning every single aspect for the rest of your life like a maniac, you risk losing yourself in marginal details without gaining anything.

So if you‘re still reading – we now return to figuring out what it actually is that we want to manage. And for me, the best model is thinking about managing motivation. Not the general “I guess I am better off with a job than without one” motivation, but the very real daily motivation that makes you jump from one task into the next one. The one that drives maximum output from your given time without you actually having to manage your time itself. There are always days of unsteady condition, but by trying to avoid systematic interferences with your motivation, you can manage to maximize their output, as well.

At a very general level (and as outlined above), one crucial ingredient in motivational management, for me, is the circumstance of following a self-made schedule. By which, of course, I mean that you arrange your day to cooperate with your colleagues and customers, but it has to feel as much like a voluntary choice as possible within your given circumstances.

Then, there‘s planning ahead. Sounds trivial. But you can be the type of person that plans several weeks in advance, or the one that is actually unsure about what happens next Monday – the common denominator is avoiding worry about the default course of events for a few days in a row. We all know that tasks like to fill out more time than they actually require, so you get some backlog one way or another; but if you manage to feel like your time is full of doing something worthwhile, it‘s way easier to start your day at a given moment than when you try to arrange tasks of varying importance on the fly.

One major point – which I was absolutely amazed by, once I chose to believe it – is that you can stop a task at many points, not just when it‘s finished, without losing your train of thought. So often, one fears the expected loss of concentration when realizing that a single task will not fit in a limited time box. But unless you are involuntarily interrupted, and unless you somehow give in to the illusion that the brain is somehow capable of multi-tasking*, you can e.g. shift whole subsections of a given task to the next morning in a conscious manner, and then quickly return to your old concentration.

On the other hand, there‘s the concept of Maker vs. Manager Cycles. Briefly,

  • Someone in a “Manager” mode has a lot of (mostly) smaller issues, spread over many different topics, often only loosely connected, often urgent, and sometimes without intense technical depth. The Manager will surely gain his (“/her” implied henceforth) motivation by getting a lot of different topics done in a short time, thus benefiting from a tight, low-overhead schedule. He can apply artificial limits to his time boxes and apply the Pareto rule thoroughly (“about 80% of any result usually stems from about 20% of the tasks”).
  • Someone in “Maker” mode, however – like a programmer trying to construct a new feature with clearly defined requirements, or working through a number of high-attention issues which he wisely bundled into blocks of similar type – probably has a more constrained set of tasks and will benefit from being left alone for some hours.

For a more thorough discussion, I‘ll gladly point to the essay by Paul Graham that Claudia thankfully left in our comment section last time 😉

Which brings me to my final point. I found that one of the strongest keys to daily motivation lies in the fundamental acceptance of these realities. As outlined above, there just are some different subconscious modes, and different external circumstances, that drive your productivity on a larger scale than you can manipulate. If you have already adopted a set of measures and found they did a good job for you, you better not worry if there‘s some kind of a blue day where everything seems to lead nowhere. You can lose more time by over-optimization than you could gain from super-finely-tuned efficiency. You probably already know this, but do you also embrace it?

(* in my experience, and while I sometimes find myself still trying to do this, multi-tasking is not an existing thing. If you firmly believe otherwise, be sure to drop me a note in the comments..)

Transposition as a programming technique

If you have been programming for a while, you will probably, and hopefully, agree that it is preferable to have a sequence of functions as opposed to the same number of functions nested. In other words, call-graph breadth is better than depth. Among other reasons, a “linear” set of instructions is often easier to follow, which is better for humans, and also tends to not go haywire with what memory it touches, which is better for computers.
However, deep call hierarchies occur much more than I would like. I have seen call stacks well beyond 200 functions deep. But this need not be – one can often be turned into the other by transposition. Transposition derives from the Latin transponere, which roughly means “to put across”. With matrices, it means swapping rows and columns. Similarly, we can swap call-hierarchy depth for breadth.
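
To make the distinction concrete, here is a tiny, contrived Java sketch (names made up) of the same computation written once with call-hierarchy depth and once with breadth:

public class TranspositionSketch {
  // Depth: each layer calls into the next, the data travels down the call stack.
  static int stepA(int input) { return stepB(input + 1); }
  static int stepB(int input) { return stepC(input * 2); }
  static int stepC(int input) { return input - 3; }

  // Breadth (transposed): the same transformations as a flat sequence,
  // with explicit intermediate results – control flow now matches data flow.
  static int flat(int input) {
    int a = input + 1;
    int b = a * 2;
    return b - 3;
  }

  public static void main(String[] args) {
    System.out.println(stepA(5)); // 9
    System.out.println(flat(5));  // 9
  }
}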

The example

A couple of months ago, I was tasked with programming a standing-wave display of power-line voltage curves. As you might know, the signal is roughly sine-wave shaped at about 50Hz. The signal is captured in time windows of 200ms, i.e. there’s a new packet of data 5 times a second with 10 sine cycles in it. However, the frequency value jitters just enough to make the signal drift a bit in the 200ms window, i.e. the wave moves forwards and backwards a little bit. The standing-wave feature tries to remove that drift and make it seemingly stationary in our fixed time window, so changes in amplitude become more visible.

Algorithm 1

The idea seems simple enough for just one signal:

  1. In the previous wave, search backwards to find the spot where the wave crosses from positive to negative.
  2. Take the previous wave from that point on and stitch it together with the current one, and cut that off at 200ms of data.

But there is not just one signal, there can be hundreds. And they should all be aligned to one designated “master” signal. So now we add extra steps:

  3. For all other signals, find the wave packets overlapping (in time) with our new stitched wave packet.
  4. Order them, and stitch them to a new wave packet covering exactly the same time window.

Now even in this version, finding the right packets for a time interval can be more tricky than it seems, because the values for the signals come in irregularly and can be shifted significantly. So you can just buffer the last N (5?) packets for each signal and search in there. Still, one more requirement remains. For the display of archived data, the algorithm should work on batches of waves, i.e. many seconds' worth, which made step 3 harder by extending the search space. So add:

  0. For each previous and current pair in a given time-interval:

Now the whole thing was pretty much implemented with steps 0 to 4 being functions calling into the next step, with major loops on the 0th and 3rd step. The wave data flows through these implementation layers vertically, i.e. from step 0 to step 4 and back, but the control flow of the program does not. It flows perpendicular to it, horizontally, solely controlled by the outer-most loop. It is intuitive to write it this way – after all, the control flow follows the flow of time in the data we are processing, but the code was not particularly easy, especially with the search in step 3 becoming unnecessarily complex.

Algorithm 2

Now let us try transposing this, and match the flow of data with our control flow:

  1. Gather all relevant signals for the time interval and sort their packets.
  2. Extract all the “stitching” time codes from the master signal.
  3. For all signals, traverse pairs together with the time codes and stitch accordingly.

The whole process becomes more digestible, and processing the data in stages made it obvious that sorted data makes using a “merge” type algorithm very easy.
Both algorithms use the same data, but the second makes it explicit, while the first just passes it through the call-stack in chunks.
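
To make the staged structure visible in code, here is a heavily simplified sketch with hypothetical types and helpers – it only shows the transposed control flow of Algorithm 2, not the real signal processing, and it assumes the gathering and sorting of packets (stage 1) has already happened:

import java.util.ArrayList;
import java.util.List;

class StandingWaveSketch {

  record Packet(long startTime, double[] samples) {}

  List<List<Packet>> align(List<Packet> master, List<List<Packet>> signals) {
    // Stage 2: extract all "stitching" time codes from the master signal.
    List<Long> stitchTimes = new ArrayList<>();
    for (Packet packet : master) {
      stitchTimes.add(lastPositiveToNegativeCrossing(packet));
    }

    // Stage 3: for each signal, traverse its sorted packets together with the
    // time codes, merge-style, and stitch a new packet per time code.
    List<List<Packet>> result = new ArrayList<>();
    for (List<Packet> signal : signals) {
      result.add(stitchAlong(signal, stitchTimes));
    }
    return result;
  }

  // Hypothetical helpers, elided here.
  private long lastPositiveToNegativeCrossing(Packet packet) { return packet.startTime(); }
  private List<Packet> stitchAlong(List<Packet> sortedPackets, List<Long> stitchTimes) { return sortedPackets; }
}

The intermediate results of each stage now have a name and a type, instead of living implicitly somewhere down the call stack.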

Conclusion

I have since used this idea of “transposition” a few times to clean up and simplify my designs. It seems especially helpful when trying to decouple messaging from bulk processing.
The idea of looking at the data flow and adapting the control flow to match it is central to data-oriented design. I argue that while this can be used to optimize programs, transposition is mainly a tool to make programs simpler, which can then lead to optimization. Separating processing into stages is also very similar to loop fission.
Have you used a technique like this before? Do you, perhaps, know it by another name? Let me know!

Don’t use wrapper

It’s a bad word for a piece of code, and you should feel bad for using it. Here is why:

1. It is easily phonetically confused with “rapper”

Well, this one is actually funny. Really the only redeeming quality. So if someone tells me that they “made a wrapper”, I immediately giggle a bit inside.

2. Wrapping things is a programmers job.

As programmers, we are in the business of abstractions, and a function clearly is an abstraction. A function that calls something else wraps that something else. So isn’t everything a wrapper?
Who would say that the following function is a wrapper?

template <class T>
T multiplication_wrapper(T a, T b)
{
  return a * b;
}

It does wrap the multiplication operator, does it not? Of course, the example is contrived, but many people call equally simple functions “wrapper functions”.

3. It is often a bad analogy

When you wrap something, like a present, you first need to unwrap it to actually use it. So in that case, it acts more like a kind of envelope. This is clearly not the case for what most people call wrappers. You could wrap some data in a .zip file – that would make sense! But no one uses it like that.
Another use of the word wrap implies something that goes around something else, forming a fixture of sorts. Like a wraparound baby sling. So I guess this could work for some uses, like a protection layer. Again, it is not used like that.
Finally, there’s wrapping up something, as in finishing something. Well, maybe if you’re wrapping your main function around the rest of your code, you can finish writing your program. A very monolithic approach.

There are plenty of better alternatives

In most cases, what people should rather use is either facade or adapter. Both names convey a lot more meaning than wrapper. A facade is something that wraps code to make the interface nicer. An adapter wraps an interface to integrate with some other piece of code. Both are structural design patterns. Both wrap something. But then again, that could be said for all of the structural design patterns. Or, most code. Except maybe assembler?
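
To make the difference tangible, here is a small contrived Java sketch (all class names are hypothetical) of both names in action:

// The clunky third-party thing we do not control.
class LegacyReportEngine {
  void connect() { }
  void select(int year, int month) { }
  String render() { return "report"; }
  void disconnect() { }
}

// Facade: wraps code to make the interface nicer –
// one intention-revealing call instead of a multi-step ritual.
class ReportFacade {
  private final LegacyReportEngine engine = new LegacyReportEngine();

  String monthlyReport(int year, int month) {
    engine.connect();
    engine.select(year, month);
    try {
      return engine.render();
    } finally {
      engine.disconnect();
    }
  }
}

// Adapter: wraps an interface to integrate with some other piece of code –
// here it makes the engine fit the ReportSource interface the rest of the code expects.
interface ReportSource {
  String fetch(int year, int month);
}

class EngineReportSource implements ReportSource {
  private final ReportFacade facade = new ReportFacade();

  @Override
  public String fetch(int year, int month) {
    return facade.monthlyReport(year, month);
  }
}

Both of these “wrap something”, but their names tell you why.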

So please, calling something a wrapper is not enough. You might as well just call it function/object/abstraction. Use adapter, facade, decorator, proxy, etc. The why is more important than the what.