Windows 10 quality of life features

Starting with Windows 10, Microsoft switched from big-bang releases of its operating system to so-called rolling releases: new features and improvements ship at regular intervals – once or twice a year – without changing the product name.

The great thing is that users get the improvements made by Microsoft's engineers much sooner than in the past, when you had to wait several years for a big “service pack” to arrive, or even a new major release of Windows like 2000, XP, Vista, 7, 8, 8.1 and finally 10 (I am leaving out the dark times on purpose 😉).

The bad thing is that it is harder to see which version or release you are running. Of course there is a (less visible) name for every Windows release. This version or codename sometimes gets mentioned on support pages or in blog posts, because the functionality of Windows 10 can change significantly between these rolling feature updates. And sometimes you may run some app or tool that tells you that you need Windows 10 2004 or higher.

What version of Windows 10 am I running?

I know of two simple ways to find out which version of Windows 10 you are currently running:

  1. Running the tool winver
  2. Opening the Settings -> System -> About page

Why does it matter?

Another downside is that users are often not aware of new stuff added to their operating system. And Microsoft does an awful job promoting the changes and improvements!

Of course there are announcements about the big things after upgrading your operating system to the next feature level. And Microsoft uses these for marketing its own apps and services. They slap their new Edge browser in your face on every occasion and try to trick you into creating a Microsoft account. It is absolutely not obvious how to use Windows without a Microsoft account like in the decades before. Skip the process here, continue without an account, and risk your life…

On the other hand, they really do improve their software and slowly but steadily round off the rough edges of their system. The UIs for environment variables, for example, are finally quite usable.

Now back to the main theme of this post: there are some hidden gems built into Windows 10 that I only learned of lately and that I think are vastly under-advertised – unlike Microsoft's marketing of their big products.

Built-in screen recorder

Ok, many gamers may know this one because Windows briefly displays the shortcut Win+G when starting a game. But it is not only usable for games: you can record any window, capture application sounds and record your voice. You can easily record your own screencasts and video tutorials using this built-in solution.

Built-in clipboard manager

How often did you wish you could copy multiple items and then choose one of the last few copied elements when pasting? While such clipboard managers have been around for a long time and sometimes provide tons of useful features, Windows 10 has a simple one built in. Just press Win+V instead of Ctrl+V when pasting and you will get a list of the copied items to choose from.

Built-in screenshot/snipping tool

Many people may know the old way of taking screenshots using the oddly labeled PrtScr key (sometimes also PrntScrn or simply Print), opening a painting application like MS Paint and pasting the image using Ctrl+V. Well, Microsoft improved this workflow a lot by including a snipping tool that you can activate using Win+Shift+S. This tool lets you capture a rectangular or free-form region, a window, or an entire screen. After that, you get a notification allowing you to make modifications to the capture and save it to disk.

On-screen emoji-keyboard

Just a little helper in these modern social media times is the on-screen emoji keyboard. Using Win+. you can activate it, browse tons of common emojis and enter them into your messages and texts 🐱‍🏍🤘.

Windows Terminal

Ok, this last but not least entry is not (yet) built in and mostly interesting for developers and power users. Nevertheless, I think it is noteworthy that Microsoft finally built a capable terminal application with modern features like multiple tabs, full Unicode and font support, customizable backgrounds with blur, and the ability to host different shells like the old and trusted command prompt CMD, the newer PowerShell, and WSL. You can find it in the Microsoft Store for free.

Conclusion

While releases of Windows 10 are more subtle than past new Windows releases, many things change, both under the hood and in user-visible ways. Every once in a while, something you missed for years or installed third-party tools for may be added without you knowing. That's another reason why talking to colleagues and friends and practices like pair programming and brown-bag meetings are so valuable for sharing knowledge and experience.

I hope there is something for you in my findings of hidden Windows gems. If you have some Windows 10 features you discovered and really like, feel free to leave them in the comments. I will gladly try them out!

What is it with Software Development and all the clues to manage things?

As someone who started programming a long time ago (roughly 20 years, now that I think about it), but only entered the world of real software development in recent years, I find that mastering the day-to-day challenges consists of two main topics: first, in our rapidly evolving field we never run out of new technologies to learn; and second, there is an underlying engineering aspect – how to do things in a certain manner – with lots of new input every year.

So after I recently shared some of these ideas with my friends (I indeed still have a few of those), I wondered: how is it that in the modern software development world, most of the information about managing things actually comes from the field itself, which then feeds its ideas about project management, quality, etc. back into the non-software subspaces of the world? (Ideas like the Agile movement, Software Craftsmanship, and the calls to do things Lean and Clean have prospered so much that you now see them applied or adapted in several other industries. Like advertising, just as an example.)

I see a certain kind of brain food in this question. What sets software development apart from other current fields, so that there is such broad discussion and considerable input at its base level? After all, if you plan on becoming someone who builds houses, makes cars, or manages cities, you wouldn't engage in such a vivid culture of “how” to do things, focusing rather on the “what”.

Of course, I might be mistaken in this view. But asking what actually sets software development apart from these other fields of producing something is helpful for approaching everyday tasks and for valuing better tips over worse ones.

So, what can that be?

1. Quite peculiar is the low entry threshold for being able to call yourself a “programmer”. With the wealth of resources you get at relatively little cost (assumption: you have a computer with a working internet connection), you have a lot of channels by which you can learn the “what” of software development first and save the “how” for later. If you plan on building a house, there isn't a bazillion books, tutorials, and videos, after all.

2. Similarly, there's the rather low cost of failure when drafting a quick hobby project. A piece of code that you write in your free time will rarely tell you “hey man, you ever thought about some better kind of architecture?” – which is why bad habits can stick and even feel right. If you choose the “wrong” mindset, you don't always lose heaps of money, and neither do you automatically if you switch your strategy once in a while (you probably will, though, if you are too careless in the process).

3. Furthermore, there's the dynamic extension of how your project is going to be used (“scope creep”). One would build a skyscraper in a different way than a bungalow (I'm not an expert, though), but with software, it often feels like adding a simple feature here, extending the scope there, until you hit a point where all its interdependencies are in a complex state of conflict…

4. Then, it's a matter of transparency: if you sit in a badly designed car, it becomes rather obvious when it constantly exhausts clouds of black smoke. Or when your house always smells like a freshly used toilet. Of course, a well-designed piece of software will come with a great user experience, but as you can see in many commercial products, there is also quite some presence of lower-than-average-but-still-somehow-doing-what-it-should software. Probably users are more tolerant with software than with cars?

5. Also, as in most technical fields, it is not the case that “pure consultants” are widely received in a positive light. For most nerds, you don't get a lot of credibility if you talk about best practices without having gotten your hands dirty over a longer period of time. Ergo, it takes some experienced software programmers to advise less experienced software programmers… but surely, it's questionable whether this is a good thing.

6. After all, the requirements for someone who develops a project might be very different in each field. From my academic past in computational physics, I know that there is quite some demand for “quick & dirty” solutions. Need to add some dark matter to your model here? Well, plug this formula in and check the results. Not every user has the budget or liberty to create a solid program structure. If you want a new laboratory building, of course, you very much want it to be designed as well as it can be.

All in all, these observations somehow boil down to the question of whether software development is to be seen more as a set of various engineering skills, or rather as a handicraft, an art, or a complex program of study. It is the question of whether the “crack” in this field is the one who does complex arithmetic in their head, or the one who just gets what the customer wants. I like thinking about such peculiar modes of thought, as they help me understand what kinds of things I should learn next.

Or is there something completely else to it?

Shooting Troubles With Toys

As software grows, one of the typical challenges is keeping track of the quirks and subtleties of all the languages, third-party libraries, frameworks, IDEs / toolchains or whatnot that you, at places, need to maneuver in order not to construct everything yourself. That's often just a matter of familiarization, and after you have stumbled across a particular type of problem – once or a few times – it gets assimilated as a trivial thing. It sometimes goes so far as keeping certain warnings or harmless exceptions in your software – technical debt. (Alas, your customer doesn't usually care for the perfect product, or, let's say, won't wait that long and pay for the perfect product…)

Now, once in a while all the third-party implementations you rely on interact in an odd manner. These are the cases where you get a “Cannot update a component while rendering a different component” in a React/Recoil application, a NoClassDefFoundError in a Java / Grails application, a general SegFault in a C / C++ program, or your database does weird stuff. You name it.

So when you encounter such a problem, time is a critical thing. So what do you do? Google it. Find fellow victims on Stack Overflow, GitHub, etc. – but this only goes so far, depending on how common your problem is.

Now, you should always have a version control system at hand, of course. This has the huge advantage of letting you simplify your problem: just create a new branch that no one cares about, and you can completely get rid of all the confusing mess that is your reliance on third-party content. Of course, this is a possibility one always knows about in theory. The point here is: do it as a habit. Learn it as a habit. Even if it's only “deactivating this probably useless flag” or “hardcoding this localhost into this URL in order to make progress quickly” – you do not want to risk carrying this into production code. Just know that keeping experimental branches open for a long time is a bad habit, too. So think of them e.g. as “in two hours, I will either merge or delete this branch, there's no way about it”. There's nothing in there that you wouldn't be able to reproduce if required.

And sometimes, it's more effective to work from the other end. Instead of starting from “very close to where we want to be”, start from a place completely unpolluted by your technical debt: start a toy project. Use exactly the dependencies that you have in your real project, and try to set up your error scenario. In our case, this method helped us understand a completely meaningless NoClassDefFoundError, because suddenly – with exactly the same JDK and Grails packages that we had in the real code – the IntelliJ IDE just felt more like telling us verbosely what the actual problem was. Which you can then see without all the clutter.

Even more, this procedure helps you with your own rubber ducking – after all, you want to describe a scenario to yourself along the lines of “actually I don't get it, I am just doing this and that and…” – well, are you? Or is there more to it that your eyes don't see? Just find out.

Of course, this is just the precursor to a more test-driven approach. Toy projects aren't really anything else: they are just isolated environments in which you completely see what is going on, with an essential setup and a clear expectation. These are tests. Now, if you already wrote them, why not think about including them in your projects as tests? Especially if you're kind of new to test-driven development, you can make this habit of toy boxing a guide on the road to a more test-driven way of thinking.
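
As a minimal sketch of that idea – the function and scenario below are invented for illustration, not taken from our Grails case – a toy reproduction can be kept around as a small regression test:

#include <cassert>
#include <string>

// The isolated toy scenario: one function, one clear expectation.
std::string trimTrailingSlash(std::string url)
{
  if (!url.empty() && url.back() == '/')
    url.pop_back();
  return url;
}

int main()
{
  // The "I am just doing this and that" claim, pinned down as checks:
  assert(trimTrailingSlash("http://localhost:8080/") == "http://localhost:8080");
  assert(trimTrailingSlash("") == ""); // the edge case the toy box revealed
  return 0;
}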

Or maybe, just don't make errors in the first place. If you ever have that option – just choose it [I guess then you have time to fix mine, too? :)]

Your most precious resource

When I was a young student, living on my own for the first time, my most scarce resource seemed to be money. Money’s too tight to mention was (and probably still is) a motto that every student could understand. So we traded our time for money and participated in experiments and underpaid student assistant jobs.

Soon after I graduated, money began to accumulate. I have a rather frugal lifestyle, so my expenses didn't suddenly surge. Instead, my perception of time and money shifted: money isn't the bottleneck (anymore), time is. Suddenly, time was much more valuable than money, and I would gladly pay money if that meant some hours of additional leisure time or one less problem to tend to. It seems that time is the most precious thing there is.

The traditional economic wisdom supports this idea: “Time is money” is true, but the reverse is not: “money is time” doesn’t cut it. The richest man on earth still only has as much time available to him as anybody else.

If time is the most precious resource, the drive to automation as a time-saving effort can be understood directly. Automation also reduces learning costs if you scale horizontally by parallelizing production.

But soon after I had enough money to optimize my time, I hit another resource bottleneck. Suddenly, I had more time on hand than attention to spare. It turns out that attention is the most valuable resource you can spend. It is just entangled enough with time that it's hard to distinguish which runs out first. If you reflect a bit, it becomes obvious. The term “to pay attention” is pretty spot on.

Let me take up the thought of automation again, this time in the domain of software development, in the form of automated tests. Here, automation is not in its most profitable shape. You don’t gain much from scaling your tests horizontally. If you don’t change the code, it doesn’t matter if your tests run once or a thousand times in parallel, the result will be the same (except if you run hardware-dependent tests, but even then you probably don’t gain much after covering all hardware variations).

You also don't gain much from scaling your tests vertically by making them run faster and faster. It sure helps to have them run continuously in the background (think of them as a user-modded compiler – look into Continuous Test Runners if you are interested), but after one test run per meaningful change, the profit hits a limit.

So why else is automated testing an economically sound practice? My take on it is delegated attention. You write a small piece of software (your test) that extends your attention area onto code that will probably fade from your own attention pretty soon. Automated tests provide automated attention in a sustainable manner (except for those tests that cry wolf for no good reason; those are attention sinks and should be removed from your portfolio). Because of the automation, this delegated attention never fades – even after many years, the test keeps a close eye on “its” code.
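
To make the idea concrete, here is a minimal sketch (function and values invented for illustration): the test below keeps watching this little parsing function long after its author has stopped thinking about it.

#include <cassert>
#include <cstdlib>

// Some code that will fade from its author's attention soon:
int parseTimeout(const char* value)
{
  return std::atoi(value); // yields 0 for garbage input
}

// The delegated attention: these checks never stop looking.
int main()
{
  assert(parseTimeout("30") == 30);
  assert(parseTimeout("not-a-number") == 0);
  return 0;
}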

If you are a developer, you have automation and zero-cost copying (aka parallelization without upfront costs) intrinsically in your solution portfolio. Look for ways to make money with those superpowers. Or even better, look for ways to save time. But if you want the best return on investment for your efforts, you should look for ways to expand your attention area.

Do you agree that attention is our most precious resource? What do you do to lower your attention expenses? Perhaps you have experience in the Ops/DevOps area that resonates with these thoughts? Share your opinion by commenting below or writing your own blog entry!

We will pay attention to you.

Still thinking about managing time…

Now that I've actually read what I wrote a few weeks ago 😉 … I've obviously had some time to reflect. About more models of managing your time, about integrating such models into your daily life, their limits, and, of course, about the underlying force, the “why” behind all that.

While trying to adjust myself to the spacious world of the home office, I especially came to notice that time management itself probably wasn't the actual issue I was trying to improve. Sure enough, there are several antipatterns of time mismanagement that can easily lead to excessive spending, something you can improve with simple home office installations, e.g. having a clock clearly visible from your point of view – and, sensibly, one clearly visible from your coffee machine… These are about making time perceptible, especially when you're not the type of person with absolutely fixed times for lunch, or similarly mundane concepts.

But then, there's a certain limit to the amount of improvement you can easily gain from managing time alone. Sure, you could try to apply every single life hack you find online, but then again, the internet isn't very good at accounting for differences in personal psychology. What you can do is try out a few recommendations at a time, shake up established habits, and evaluate their effect. Not everything is pure gold. For example, my last blog post pointed to the Pomodoro technique, where one will find that there are classes of work that can easily be scheduled into 25-30 minute blocks – but there are others where this restriction leads to more complications than it solves. Another “life hack” the internet throws at you at every opportunity is having a certain super-best time to set your alarm clock to, and I would advise shuffling this once in a while to find out whether there's some setting that is best for you. But never think that you need e.g. the same rising pattern as Elon Musk in order to finish your blog post in time… Just keep track of yourself. How would other people know your default mode?

Now, overall, each day feels a bit different, and it's a function of your emotional state as well as some generic randomness, which has no less important an effect on your productivity than any set of rules you could adhere to. So, instead of focusing on managing your time on a given day, we could try to actually manage productivity. But then again, “productivity management” sounds so abstract that the handles we would think of are about stuff like

  • what you eat
  • how you sleep
  • how much sport you're willing to do
  • how much coffee you consume

and other very profound parameters of your existence. That's also something you can just play around with a bit until you find an obvious optimum. (Did you know that the optimum amount of breakfast beer for you is likely to be zero?… … :P)

However, if you keep fine-tuning every single aspect for the rest of your life like a maniac, you risk losing yourself in marginal details without gaining anything.

So if you're still reading – we now return to figuring out what it actually is that we want to manage. And for me, the best model is thinking about managing motivation. Not the general “I guess I am better off with a job than without one” motivation, but the very real daily motivation that makes you jump from one task to the next. The one that drives maximum output from your given time without you actually having to manage the time itself. There are always days of unsteady condition, but by trying to avoid systematic interference with your motivation, you can maximize the output of those days as well.

At a very general level (and as outlined above), one crucial ingredient in motivational management, for me, is the circumstance of following a self-made schedule. By which, of course, I mean that you arrange your day to cooperate with your colleagues and customers, but it has to feel like as voluntary a choice as possible within your given circumstances.

Then, there's planning ahead. Sounds trivial. But you can be the type of person that plans several weeks in advance, or the one that is actually unsure about what happens next Monday – the common denominator is not having to worry about the default course of events for a few days in a row. We all know that tasks like to fill out more time than they actually require, so you'll get some backlog one way or another; but if you manage to feel like your time is full of doing something worthwhile, it's way easier to start your day at a given moment than when you try to arrange tasks of varying importance on the fly.

One major point – which I was absolutely amazed by, once I chose to believe it – is that you can stop a task at many points without losing your train of thought, not just when it's finished. So often, one fears the expected loss of concentration upon realizing that a single task will not fit in a limited time box. But unless you are involuntarily interrupted, and unless you somehow give in to the illusion that the brain is capable of multi-tasking*, you can e.g. shift whole subsections of a given task to the next morning in a conscious manner, and then quickly return to your old concentration.

On the other hand, there's the concept of Maker vs. Manager cycles. Briefly:

  • Someone in “Manager” mode has a lot of (mostly) smaller issues, spread over many different topics, often only loosely connected, often urgent, and sometimes without intense technical depth. The Manager will surely gain his (“/her” implied henceforth) motivation from getting a lot of different topics done in a short time, and thus benefits from a tight, low-overhead schedule. He can apply artificial limits to his time boxes and apply the Pareto rule thoroughly (“about 80% of any result usually stems from about 20% of the tasks”).
  • Someone in “Maker” mode, however, probably has a more constrained set of tasks – like a programmer trying to construct a new feature with clearly defined requirements, or a number of high-attention issues wisely bundled into blocks of similar type – and will benefit from being left alone for some hours.

For a more thorough discussion, I'll gladly point to the essay by Paul Graham that Claudia thankfully left in our comment section last time 😉

Which brings me to my final point. I found that one of the strongest keys to daily motivation lies in the fundamental acceptance of these realities. As outlined above, there just are different subconscious modes, and different external circumstances, that drive your productivity on a larger scale than you can manipulate. If you have already adopted a set of measures and found they did a good job for you, you had better not worry if there's some kind of blue day where everything seems to lead nowhere. You can lose more time through over-optimization than you could ever gain from super-finely-tuned efficiency. You probably already know this, but do you also embrace it?

(* In my experience, and while I sometimes find myself still trying to do it, multi-tasking is not a real thing. If you firmly believe otherwise, be sure to drop me a note in the comments.)

Transposition as a programming technique

If you have been programming for a while, you will probably, and hopefully, agree that it is preferable to have a sequence of functions as opposed to the same number of functions nested. In other words, call-graph breadth is better than depth. Among other reasons, a “linear” set of instructions is often easier to follow, which is better for humans, and also tends to not go haywire with what memory it touches, which is better for computers.
However, deep call hierarchies occur much more often than I would like. I have seen call stacks well beyond 200 functions deep. But this need not be – one can often be turned into the other by transposition. Transposition derives from the Latin transponere, which roughly means “to put across”. With matrices, it means swapping rows and columns. Similarly, we can swap call-hierarchy depth for breadth.

The example

A couple of months ago, I was tasked with programming a standing-wave display of power-line voltage curves. As you might know, the signal is roughly sine-wave shaped at about 50Hz. The signal is captured in time windows of 200ms, i.e. there’s a new packet of data 5 times a second with 10 sine cycles in it. However, the frequency value jitters just enough to make the signal drift a bit in the 200ms window, i.e. the wave moves forwards and backwards a little bit. The standing-wave feature tries to remove that drift and make it seemingly stationary in our fixed time window, so changes in amplitude become more visible.

Algorithm 1

The idea seems simple enough for just one signal:

  1. In the previous wave, search backwards to find the spot where the wave crosses from positive to negative.
  2. Take the previous wave from that point on and stitch it together with the current one, and cut that off at 200ms of data.

But there is not just one signal, there can be hundreds. And they should all be aligned to one designated “master” signal. So now we add extra steps:

  3. For all other signals, find the wave packets overlapping (in time) with our new stitched wave packet.
  4. Order them, and stitch them to a new wave packet covering exactly the same time window.

Even in this version, finding the right packets for a time interval can be trickier than it seems, because the values for the signals come in irregularly and can be shifted significantly. So you can just buffer the last N (5?) packets for each signal and search in there. Still, one more requirement remains: for the display of archived data, the algorithm should work on batches of waves, i.e. many seconds' worth, which made step 3 harder by extending the search space. So add an outermost step:

  0. For each previous and current pair in a given time interval:

Now, the whole thing was pretty much implemented with steps 0 to 4 being functions calling into the next step, with major loops on the 0th and 3rd steps. The wave data flows through these implementation layers vertically, i.e. from step 0 to step 4 and back, but the control flow of the program does not. It flows perpendicular to it, horizontally, solely controlled by the outermost loop. It is intuitive to write it this way – after all, the control flow follows the flow of time in the data we are processing – but the code was not particularly easy to follow, especially with the search in step 3 becoming unnecessarily complex.

Algorithm 2

Now let us try transposing this, and match the flow of data with our control flow:

  1. Gather all relevant signals for the time interval and sort their packets.
  2. Extract all the “stitching” time codes from the master signal.
  3. For all signals, traverse pairs together with the time codes and stitch accordingly.

The whole process becomes more digestible, and processing the data in stages made it obvious that sorted data makes using a “merge” type algorithm very easy.
Both algorithms use the same data, but the second makes it explicit, while the first just passes it through the call stack in chunks.
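
To illustrate the shape difference outside of the wave-packet domain, here is a deliberately tiny stand-in (the three steps are invented, not the original stitching code): the same batch processing written call-deep first, then transposed into flat stages.

#include <iostream>
#include <vector>

int stepA(int x) { return x + 1; }
int stepB(int x) { return x * 2; }
int stepC(int x) { return x - 3; }

// "Deep" shape: the outer loop drives each element through all steps
// at once; control flow runs perpendicular to the data flow.
std::vector<int> processDeep(const std::vector<int>& input)
{
  std::vector<int> output;
  for (int x : input)
    output.push_back(stepC(stepB(stepA(x)))); // nested calls, depth 3
  return output;
}

// Transposed shape: each stage traverses the whole batch before the
// next begins; the intermediate results become explicit.
std::vector<int> processFlat(std::vector<int> values)
{
  for (int& x : values) x = stepA(x); // stage 1 over the whole batch
  for (int& x : values) x = stepB(x); // stage 2
  for (int& x : values) x = stepC(x); // stage 3
  return values;
}

int main()
{
  const std::vector<int> data{1, 2, 3};
  for (int x : processDeep(data)) std::cout << x << ' '; // prints: 1 3 5
  std::cout << '\n';
  for (int x : processFlat(data)) std::cout << x << ' '; // prints: 1 3 5
  std::cout << '\n';
}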

Conclusion

I have since used this idea of “transposition” a few times to clean up and simplify my designs. It seems especially helpful when trying to decouple messaging from bulk processing.
The idea of looking at the data flow and adapting the control flow to match it is central to data-oriented design. I argue that while this can be used to optimize programs, transposition is mainly a tool to make programs simpler, which can then lead to optimization. Separating processing into stages is also very similar to loop fission.
Have you used a technique like this before? Do you, perhaps, know it by another name? Let me know!

Don’t use wrapper

It’s a bad word for a piece of code, and you should feel bad for using it. Here is why:

1. It is easily phonetically confused with “rapper”

Well, this one is actually funny. Really the only redeeming quality. So if someone tells me that they “made a wrapper”, I immediately giggle a bit inside.

2. Wrapping things is a programmer's job

As programmers, we are in the business of abstractions, and a function clearly is an abstraction. A function that calls something else wraps that something else. So isn’t everything a wrapper?
Who would say that the following function is a wrapper?

template <class T>
T multiplication_wrapper(T a, T b)
{
  return a * b;
}

It does wrap the multiplication operator, does it not? Of course, the example is contrived, but many people call equally simple functions “wrapper functions”.

3. It is often a bad analogy

When you wrap something, like a present, you first need to unwrap it to actually use it. So in that case, it acts more like a kind of envelope. This is clearly not the case for what most people call wrappers. You could wrap some data in a .zip file – that would make sense! But no one uses it like that.
Another use of the word wrap implies something that goes around something else, forming a fixture of sorts. Like a wraparound baby sling. So I guess this could work for some uses, like a protection layer. Again, it is not used like that.
Finally, there’s wrapping up something, as in finishing something. Well, maybe if you’re wrapping your main function around the rest of your code, you can finish writing your program. A very monolithic approach.

There are plenty of better alternatives

In most cases, what people should rather use is either facade or adapter. Both names convey a lot more meaning than wrapper. A facade is something that wraps code to make the interface nicer. An adapter wraps an interface to integrate with some other piece of code. Both are structural design patterns. Both wrap something. But then again, that could be said for all of the structural design patterns. Or, most code. Except maybe assembler?
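
As a small sketch of the difference (the printer types are made up for illustration, not taken from any real library):

#include <iostream>
#include <string>

// Imagine this is a third-party class with an awkward interface.
struct LegacyPrinter
{
  void output(const char* data, int flags) { std::cout << data << '\n'; (void)flags; }
};

// Instead of "PrinterWrapper": a facade that makes the interface nicer.
class PrinterFacade
{
public:
  void print(const std::string& text) { legacy_.output(text.c_str(), 0); }
private:
  LegacyPrinter legacy_;
};

// An interface the rest of our code already expects...
struct TextSink
{
  virtual ~TextSink() = default;
  virtual void write(const std::string& text) = 0;
};

// ...and instead of "PrinterWrapper": an adapter that makes
// LegacyPrinter fit that interface.
class PrinterAdapter : public TextSink
{
public:
  void write(const std::string& text) override { legacy_.output(text.c_str(), 0); }
private:
  LegacyPrinter legacy_;
};

int main()
{
  PrinterFacade printer;
  printer.print("nice interface");
  PrinterAdapter adapter;
  adapter.write("fits the expected interface");
}

Both of these “wrap” LegacyPrinter, but the names immediately tell you why.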

So please, calling something a wrapper is not enough. You might as well just call it a function/object/abstraction. Use adapter, facade, decorator, proxy, etc. The why is more important than the what.

The four archetypes of cloud users – part 1 of 2

You probably know people that avoid cloud services as if they are poisonous and others that jump onto every cloud bandwagon. You can categorize them in four archetypes. Let’s start with the tinfoil hat and the clipboard.

In the occupational field of accounting, the strong trend towards cloud services is noticeable. Everything needs to be digital, and with digital, they mean online, and with online, they mean in the cloud. Every expense voucher needs to be scanned and uploaded, because in many cases, it can be booked automatically. In the new era of accounting, human intervention is only needed for special cases.

I see this as a good example of how digital online services can transform the world. Every step in the process would have technically been possible for the last twenty years, but only the cloud could unify the different participants enough so that a streamlined end-to-end process is marketable to the masses. And in this marketing ecstasy, the stakeholders that profit the most (the accountants) often forget that their benefits are just a part of the whole picture. In order to assess the perceived and actual benefits of all stakeholders, you at least need to apply an archetype to each participant.

The four archetypes

In my opinion, there are four different archetypes of cloud users. Let’s have a look at them and then assess the risks and potentials when selling a digital online service to them. I’ll list the archetypes in the order from biggest risk to biggest potential.

Archetype 1: The tinfoil hat

A person that could be identified as a “tinfoil hat” doesn't need to be a conspiracy weirdo or paranoid maniac. In fact, the person probably has deep and broad knowledge about technology and examines new technologies in detail. The one distinctive feature of the tinfoil hats is that they take security, including IT security, very seriously. They don't take security for granted, don't trust asseverations and demand proof. You can't convince a tinfoil hat by saying that the data transfer is “encrypted”; you need to specify the actual encryption algorithm. Using RC4 ciphers for the SSL protocol isn't good enough for the tinfoil. You need at least proof that you understood the last sentence and took actions to mitigate the problem. Even then, the tinfoil will hesitate to give any data out of his hands and will often choose the cumbersome way in order to stay safe. “Better safe than sorry” is his everyday motto.

Tinfoil hats always search for scenarios that could compromise their data or infrastructure. They are paranoid by default and actively invest in security. “On premises” is the only way they deploy their own services, and “on premises” is how they prefer to keep their data.

Typical signs of a tinfoil hat archetype include:

  • self-hosted applications
  • physical servers
  • lack of (open) wireless network
  • physically separated networks
  • signed and encrypted e-mails

Trying to sell a cloud service to a tinfoil hat is like trying to sell a flight to an aviophobe (somebody with a fear of flying). There is always another way to get from A to B that seems safer and more controllable. If you are selling cloud services, tinfoil hats are your worst nightmare. If you can convince a tinfoil hat, your product is probably made of fairy dust and employs lots of unicorns.

Archetype 2: The clipboard

Clipboard people are wary of new technologies, but assess them in the context of usability. They demand high security, but will compromise if the potential of the new technology far exceeds the risk. Unlike the tinfoil hat, the clipboard sees his role as an enabler, but will never stop working to increase the perceived or actual safety of the product. You can appease a clipboard by giving evidence of security audits from a third party. They will trust known authorities, because it means that they can always deflect blame to these authorities in case of an incident.

Clipboards run on checklists, safety protocols and recurring audits. They don't try to avert every possibility of a security breach, but will examine each incident in detail and update their checklists. They don't care about “on premises” or “off premises” as long as the service is reachable, safe enough and reliable. If a cloud service has a higher availability than the local counterpart, the clipboard will think about a migration.

Typical signs of a clipboard archetype include:

  • Virtual Private Networks (VPN)
  • Two-Factor Authentication
  • Token-Based Authentication
  • Strong Encryption

The clipboard will listen if you pitch your cloud service and can be enticed by the new or better capabilities. But in the very next sentence, he will ask about security and be insistent until you provide proof – first-hand or by credible third parties. You can convince a clipboard if your product is designed with safety in mind. As long as the safety is state-of-the-art, you’ll close the deal.

Outlook on the second part

In the second part of this blog entry, we will look at the remaining two archetypes, namely the “combination lock” and the “smartphone”. Stay tuned.

Did you identify with one of the archetypes? What are your most important aspects of cloud services? I would love to hear from you.

A procedure to deal with big amounts of email

How can you survive the daily email flood and still keep track of your work? Here is one personal ruleset that adapts real-life habits to virtual message management.

The problem

You probably know the problem already: a day with fewer than 500 emails feels like your internet connection might be down. The amount of emails you receive can accurately predict the time of day. In my case, I'll always receive my 300th email each day right before lunch. Imagine I spent one minute on each message – then I would have done nothing but read emails so far. And by the time I return from lunch, more emails have found their way into my inbox. My job description is not “email reader”. It actually is one of the lesser prioritized activities of my job. But I keep most response times low and always know the content of my inbox. You'll seldom hear “sorry, I haven't seen your email yet” from me. How I keep the email flood in check is the topic of this blog entry. It's my personal procedure, so nothing fancy with a big name, but you might recognize some influence from well-known approaches like “Getting Things Done” by David Allen.

The disclaimer

Disclaimer: You might entirely disagree with my approach. That’s totally acceptable. But keep in mind that it works for at least one person for a long time now, even if it doesn’t fit your style. Email processing seems to be a delicate topic, please keep your comments constructive. By the way, I’d love to hear about your approach. I’m always eager to learn and improve.

The analogy

Let me start with a common analogy: your email inbox is like your mailbox. All letters you receive go through your mailbox. All emails you receive go through your inbox. That's where this analogy ends – and it was never useful to begin with. Your postman won't show up every five minutes to stuff more commercial mailings, letters, postcards and post-it notes into your mailbox (raising that little flag again that indicates the presence of mail). He also won't announce himself by ringing your doorbell (every five minutes, mind you) and proclaiming the first line of three arbitrarily chosen letters. Also, I've rarely seen mailboxes that contain hundreds if not thousands of letters, some read, some years old, in different states of decay. It's a common sight for inboxes whose owners gave up on keeping up. I've seen high stacks of unanswered correspondence, but never in the mailbox. And this brings me to my new analogy for your email inbox: your email inbox is like your desk. The stacks of decaying letters and magazines? Always on desks (and around them in extreme cases). The letters you answer directly? You bring them to your desk first. Your desk is usually clear of pending work documents, and this should be the case for your inbox, too.


The rules

My procedure to deal with the continuous flood of emails is based on three rules:

  • The inbox is the only queue of emails that needs attention. It contains only new emails (which require activity from my side) or emails that require my attention in the foreseeable future. The inbox is therefore only filled with pending work.
  • Email processing is done manually. I look at each email once and hopefully only once. There are no automatic filters that sort emails into different queues before I’ve seen them.
  • Every email interaction results in an activity or decision on my side. No email gets “left there”.

Let me explain the context of the rules in a bit more detail:

My email account has lots and lots of folders to store all emails until eternity. The folders are organized in a hierarchy, but that doesn't really matter, because every folder can hold emails. The hierarchy of folders isn't pre-planned; it emerges from the urge to group emails together. It's possible that I move specific emails from one folder to another because the hierarchy has changed. I will use automatic filters to process emails I've already read, but I will mark every email as read myself and never move unread emails around automatically. This narrows the place to look for new emails down to one: the inbox. Every other folder is only for archiving, not for processing.

The sweeps

The amount of unread emails in my inbox is the amount of work I need to do to return to the “only pending work” state. Let's say that I open my email reader and it shows 50 new messages in the inbox. Now I'll have to process and archive these 50 emails to be in the same state as before I opened it. I usually do three separate processing steps:

  • The first sweep is to filter out any spam messages by immediate tell-tale signs. This is the only automatic filter that I allow: the junk filter. To train it, I mark any remaining junk mail as spam and let the filter deal with it. I'm still not sure if the junk filter really makes things easier for me, because I need to scan through the junk regularly to “rescue” false positives (legitimate mails that were wrongfully sorted out), but the junk filter in combination with my fast spam sweep lowers the message count significantly. In our example, we now have 30 mails left.
  • The second sweep picks out every email that is for information purposes only. Usually those mails are sent by software tools like issue trackers, wikis, code review tools or others. Machines don't feel the effort of writing an email, so they'll write a lot. Most of the time, the message content is only a few lines of text. I grab each of these mails and drag it into its corresponding folder. While I'm dragging, I read (and memorize) the content of the email. The problem with this kind of information is that it's a lot of very small chunks of data for a lot of different contexts. In order to understand those messages, you have to switch your mental context in a matter of seconds. You can do it, but only if you aren't interrupted by different mental states. So ignore any email that requires more than a few seconds of focused attention; let those sit in your inbox along with the emails that require an answer. The only activity for mails included in your second sweep is “drag to folder & memorize”. Because machines write often, we now have 10 mails left in our example.
  • The third sweep attends to each remaining mail individually. Here, the three-minute rule applies: if it takes less than three minutes to reply to the email right now, then do it right now and archive the mail in a suitable folder (you might even create a new folder for it). Remember: if you've processed an email, it leaves the inbox. If it takes more than three minutes, you need to schedule an attention slot for this particular message on your todo list. This is the only case in which the email remains in the inbox, because it's a signal of pending work. In our example, 7 emails could be answered with short replies, but 3 require deep concentration or some more text for the answer.

After the three sweeps, only emails that indicate pending work remain in the inbox. They only leave the inbox after I've dealt with them. I need to schedule a timebox to work on them, but after that, they'll find themselves out of the inbox and in a folder. As soon as an email is in a folder, I forget about its existence. I need to remember the information that was in it, but not the message itself.

The effects

While dealing with each email manually sounds painfully slow at first, it becomes routine after only a short while. The three sweeps usually take less than five minutes for 50 emails, excluding the three major correspondence tasks that make their way as individual items on my work schedule. Depending on your ratio of spam to information messages to real correspondence, your results may vary.

The big advantage of dealing with each email manually is that I've seen each message with my own eyes. Every email that an automatic filter grabs and hides before you can see it should not have been sent in the first place – it simulates a pull notification scheme (you decide when to receive it by opening the folder) rather than leveraging the push notification scheme (you need to deal with it right now, not later) that email inherently is. Things like timelines, activity streams or message boards are pull-oriented presentations of presumably the same information; perhaps that's what you should replace your automatically hidden emails with.

You'll have an ever-growing archive with lots of folders for different things (think of a shelf full of document files in real life), but you'll never look into them as long as you don't desperately search for “that one mail from 3 months ago”. You'll also have a clean inbox, preferably in “blank slate” condition or at least containing only emails that require action from you. So you'll have a clear overview of your pending work (the things in your inbox) and the work already done (the things in your folders).

The epilogue

That's when you discover that part of your work can be described as “look at each email and move it to a folder”, as if you were an official in charge of virtual paper. We have virtualized our paperwork, letters, desks, shelves and document files. But the procedures to deal with them are still the same.

Your turn now

How do you process your emails? What are your rules and habits? What are your experiences with folders vs. tags? I would love to hear from you – in a comment, not an email.

The rule of additive changes

Change is in the nature of software development. Most difficult aspects of the craft revolve around dealing with change. How does one keep software extensible? How do you adapt to new business requirements?

With experience comes the intuition that some kinds of changes are more volatile than others. For example, it is often safer to add a new function or type to an application than to change an existing one.

This is because adding something new means that it is not already strongly connected to the rest of the application. Or at least that’s the assumption. You have yet to decide how the new component interacts with the rest of the application. Usually this is done by a, preferably small, incision in the innards of your software. The first change, the adding, should not break anything. If anything, the small incision should be the only dangerous aspect of the change.

This is a very important concept: adding should not break things! This is so important, I want to give it a name:

The Rule of Additive Changes

Adding something to a well-designed software system should not break existing functionality. Exceptions should be thoroughly documented and communicated.

Systems should always be designed and thought out so that the rule of additive changes holds. Failure to do so will lead to confusing surprises in the best case, and well-hidden bugs in worse cases.

The rule is nothing new, however: it’s a foundation, an axiom, to many other rules, such as the Liskov Substitution Principle:

Inheritance

Quoting from Wikipedia:

“If S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program”

This relies on subtyping being an additive change: S works at least as well as any T, so it is an extension, an addition. You should therefore design your systems in a way that the Liskov Substitution Principle, and therefore the rule of additive changes, both hold: the addition of a new type to a hierarchy cannot break anything.
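
A minimal sketch of what that looks like in code (the Logger types are invented for illustration):

#include <fstream>
#include <iostream>
#include <string>

struct Logger
{
  virtual ~Logger() = default;
  virtual void log(const std::string& message) = 0;
};

struct ConsoleLogger : Logger
{
  void log(const std::string& message) override { std::cout << message << '\n'; }
};

// Added later: usable wherever a Logger is expected, so no code
// that uses Logger needs to change – a purely additive change.
struct FileLogger : Logger
{
  explicit FileLogger(const std::string& path) : out_(path) {}
  void log(const std::string& message) override { out_ << message << '\n'; }
private:
  std::ofstream out_;
};

int main()
{
  ConsoleLogger console;
  FileLogger file("log.txt");
  Logger* loggers[] = { &console, &file };
  for (Logger* l : loggers)
    l->log("works with any Logger"); // old code, unchanged by the addition
}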

Whitelists vs. Blacklists

Blacklists will often violate the rule of additive changes. Once you add a new element to the domain, the domain behind the blacklist will change as well, while the domain behind a whitelist will be unaffected. Ultimately, both can be what you want, but usually, the more contained change will break less – and you can still change the whitelist explicitly later!

Note that systems that filter classes from a hierarchy via RTTI or, even more subtly, via ask-interfaces, are blacklists. Those systems can break easily when new types are introduced to a hierarchy. Extra care needs to be taken to make sure the rule of additive changes holds for these systems.
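
Here is a small sketch of the difference with RTTI (the shape hierarchy is made up for illustration):

#include <iostream>

struct Shape { virtual ~Shape() = default; };
struct Circle : Shape {};
struct Square : Shape {};
struct Triangle : Shape {}; // added later

// Blacklist: names what it rejects. Adding Triangle silently
// changes this function's result set – the rule is violated.
bool relevantBlacklist(const Shape& s)
{
  return dynamic_cast<const Square*>(&s) == nullptr; // "everything but squares"
}

// Whitelist: names what it accepts. Adding Triangle leaves the
// result set untouched until the list is extended explicitly.
bool relevantWhitelist(const Shape& s)
{
  return dynamic_cast<const Circle*>(&s) != nullptr;
}

int main()
{
  Triangle t;
  std::cout << relevantBlacklist(t) << '\n'; // 1 – surprise, triangles slipped in
  std::cout << relevantWhitelist(t) << '\n'; // 0 – unaffected by the addition
}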

Introspection and Reflection

Without introspection and reflection, programs cannot know when you are adding a new type or a new function. However, with introspection, they can. Any additive change can also be an incision point. Therefore, you need to be extra careful when designing systems that use introspection: They should not break existing functionality for adding something.

For example, adding a function to enable a specific new functionality is okay. A common case would be adding a function to a controller in a web framework to add a new action. This will not interfere with existing functionality, so it is fine.

On the other hand, adding a member to a controller should not disable or change functionality. Adding a special member for “filtering” or some kind of security setting falls into this category. You think you’re merely adding something, but in fact you are modifying. A system that relies on such behavior therefore violates the rule of additive changes. Decorating the member is a much better alternative, as that makes it clear that you are indeed modifying something, which might break existing functionality.

Not all languages or frameworks provide this possibility though. In that case, the only alternative is good communication and documentation!
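
To illustrate the “filtering member” case above, here is a toy dispatcher standing in for a reflective framework (names and behavior are invented): merely adding the “filter” member changes how every existing action behaves.

#include <functional>
#include <iostream>
#include <map>
#include <string>

// A toy "framework" that discovers actions by name, mimicking introspection.
struct Controller
{
  std::map<std::string, std::function<void()>> actions;
};

void dispatch(const Controller& c, const std::string& action)
{
  // The framework gives the magic name "filter" special meaning:
  auto filter = c.actions.find("filter");
  if (filter != c.actions.end())
    filter->second(); // runs before every action
  auto it = c.actions.find(action);
  if (it != c.actions.end())
    it->second();
}

int main()
{
  Controller c;
  c.actions["index"] = [] { std::cout << "index action\n"; };
  // This looks additive, but it modifies the behavior of "index":
  c.actions["filter"] = [] { std::cout << "filtering every request…\n"; };
  dispatch(c, "index");
}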

Refactoring

Many engineers implicitly assume that the rule of additive changes holds. In his book “Working Effectively With Legacy Code”, Michael Feathers proposes the sprout and wrap techniques to change legacy software. The underlying technique is the same for both: formulating a potentially breaking change as a mostly additive one, with only a small incision point. In the presence of systems that do not follow the rule of additive changes, such risk minimization does not work at all. For example, adding an additional function can break a system that relies heavily on introspection – which goes against all intuition.

Conclusion

This rule is not a new concept. It is something that many programmers have in their head already, but possibly fractured into lots of smaller guidelines. But it is one overarching concept and it needs a name to be accessible as such. For me, that makes things a lot clearer when reasoning about systems at large.