Line endings in the repository

Git normally leaves files and their line endings untouched. However, it is often desirable to have uniform line endings in a project. Git provides support for this.

Config Variable

What some may already know is the configuration variable core.autocrlf. With it, developers can specify locally that their newly created files are checked in to Git with LF line endings. By setting the variable to “true”, the files will be converted to CRLF locally by Git on Windows and converted back when saved to the repository. If the variable is set to “input”, the files are used locally with the same line endings as in Git, without conversion.
The problem is that this normalization only affects new files and that each developer must set the variable locally. If core.autocrlf is set to false, files can still be checked in with non-normalized line endings.
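
For illustration, this is how a developer would set the variable locally on Windows (drop --global to set it for a single repository only):

git config --global core.autocrlf true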

Gitattributes File

Another possibility is the .gitattributes file. The big advantage is that this file is checked in just like the .gitignore file, so the settings apply to all developers. To use it, the .gitattributes file is created in the repository and a path pattern together with the text attribute is defined in it. The setting affects how files are stored locally for the git switch, git checkout and git merge commands, and how they are stored in the repository for git add and git commit.

*.jpg          -text

The text attribute can be unset, as for the .jpg files above; then neither check-in nor check-out will do any conversion.

*              text=auto

The attribute can also be set to auto. In this case, line endings are converted to LF at check-in if Git recognizes the file content as text. However, if a file is already stored with CRLF in the repository, no conversion takes place and it remains CRLF. In the example above, the setting applies to all files.

*.txt         text
*.vcproj      text eol=crlf
*.sh          text eol=lf

If the attribute is set, the line endings are stored in the repository with LF by default. With eol, fixed line endings can also be forced for specific file types.

*.ps1	      text working-tree-encoding=UTF-16

Furthermore, settings such as the encoding can be specified in the .gitattributes file by using the working-tree-encoding attribute. Everything else can be found in the documentation of the gitattributes file.

We use this possibility more and more often in our projects, sometimes just to set single file types like .sh files to LF, sometimes to normalize the whole project.
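
When a .gitattributes file is introduced into an existing project, the files already in the working tree are not converted automatically. A minimal sketch of how to renormalize the whole project, assuming Git 2.16 or newer:

git add --renormalize .
git commit -m "Normalize line endings"

After this commit, every file matched by the text attribute is stored in the repository with LF.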

Reading a conanfile.txt from a conanfile.py

I am currently working on a project that embeds another library into its own source tree via git submodules. This is currently convenient because the library’s development is very much tied to the host project, and having them both in the same CMake project cuts down dramatically on iteration times. Yet, that library already has its own conan dependencies in a conanfile.txt. Because I did not want to duplicate the dependency information from the library, I decided to pull it into my host project’s requirements programmatically using a conanfile.py.

Luckily, you can use conan’s own tools for that:

import os
from pathlib import Path

from conans.client.loader import ConanFileTextLoader

def load_library_conan(recipe_folder):
    # Parse the embedded library's conanfile.txt with conan's own text loader
    text = Path(os.path.join(recipe_folder, "library_folder", "conanfile.txt")).read_text()
    return ConanFileTextLoader(text)

You can then use that in your stage methods, e.g.:

    def config_options(self):
        for line in load_library_conan(self.recipe_folder).options.splitlines():
            # Each line has the form "library:option=value"
            (key, value) = line.split("=", 1)
            (library, option) = key.split(":", 1)
            setattr(self.options[library], option, value)

    def requirements(self):
        for x in load_library_conan(self.recipe_folder).requirements:
            self.requires(x)
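
For context, the parsed conanfile.txt might look like this (a hypothetical example; the package names, versions and options depend on the actual library):

[requires]
boost/1.76.0
zlib/1.2.11

[options]
boost:shared=True

Judging from the code above, the loader’s requirements property yields the lines of the [requires] section, while options yields the raw text of the [options] section, which config_options then splits into library, option and value.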

I realize this is a niche application, but it helped me very much. It would be cool if conan could delegate to subfolders natively, but I did not find a better way to do this.

git-submodules in Jenkins pipeline scripts

Nowadays, the source control system git is a widespread tool and works nicely hand in hand with many IDEs and continuous integration (CI) solutions.

We use Jenkins as our CI server and have mostly migrated to the so-called pipeline scripts for job configuration. This has the benefit of storing your job configuration as code in your code repository and not in the CI server’s configuration. Thus it is easier to migrate the project to other Jenkins CI instances, and you get versioning of your configuration for free.

Configuration of a pipeline job

Such a pipeline job is easily configured in Jenkins by merely providing the repository and the location of the pipeline script, which is usually called Jenkinsfile. A simple Jenkinsfile may look like this:

node ('build&&linux') {
    try {
        env.JAVA_HOME="${tool 'Managed Java 11'}"
        stage ('Prepare Workspace') {
            sh label: 'Clean build directory', script: 'rm -rf my_project/build'
            checkout scm // This fetches the code from our repository
        }
        stage ('Build project') {
            withGradle {
                sh 'cd my_project && ./gradlew --continue war check'
            }
            junit testResults: 'my_project/build/test-results/test/TEST-*.xml'
        }
        stage ('Collect artifacts') {
            archiveArtifacts(
                artifacts: 'my_project/build/libs/*.war'
            )
        }
    } catch (Exception e) {
        if (e in org.jenkinsci.plugins.workflow.steps.FlowInterruptedException) {
            currentBuild.result = 'ABORTED'
        } else {
            echo "Exception: ${e.class}, message: ${e.message}"
            currentBuild.result = 'FAILURE'
        }
    }
}

If you are running GitLab, you get some nice features in combination with the Jenkins GitLab plugin, like the automatic creation of builds for all your branches and merge requests, if you configure the job as a multibranch pipeline.

Everything works quite well if your project resides in a single Git repository.

How to use it with git submodules

If your project uses git submodules to connect other git repositories that are not directly part of your project, the responsible line checkout scm in the Jenkinsfile does not clone or update the submodules. Unfortunately, the fix for this issue leads to a somewhat bloated checkout command, as you have to copy and mention the settings that are injected by default into the parameter object of the GitSCM class and its extensions…

The simple one-liner from above becomes something like this:

checkout scm: [
    $class: 'GitSCM',
    branches: scm.branches, // reuse the branches detected by the job
    extensions: [
        [$class: 'SubmoduleOption',
        disableSubmodules: false,
        parentCredentials: false,
        recursiveSubmodules: true, // clone and update submodules recursively
        reference: 'https://github.com/softwareschneiderei/ADS.git',
        shallow: true,
        trackingSubmodules: false]
    ],
    submoduleCfg: [],
    userRemoteConfigs: scm.userRemoteConfigs // reuse the remotes of the job
]

After these changes projects with submodules work as expected, too.

Simple build triggers with secured Jenkins CI

The Jenkins continuous integration (CI) server provides several ways to trigger builds remotely, for example from a git hook. Things are easy on an open Jenkins instance without security enabled. It gets a little more complicated if you want to protect your Jenkins build environment.

Git plugin notify commit url

For git there is the “notifyCommitUrl” you can use in combination with the Poll SCM settings:

$JENKINS_URL/git/notifyCommit?url=http://$REPO/project/myproject.git

Note two things regarding this approach:

  1. The URL of the source code repository given as a parameter must match the repository URL of the Jenkins job.
  2. You have to check the Poll SCM setting, but you do not need to provide a schedule.
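
For example, a post-receive hook on the git server could notify Jenkins like this (a minimal sketch; both hostnames are made-up examples):

#!/bin/sh
# hooks/post-receive: ask Jenkins to poll the repository for changes
curl -s "https://jenkins.example.com/git/notifyCommit?url=http://repo.example.com/project/myproject.git"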

Another drawback is its restriction to git-hosted jobs.

Jenkins remote access api

Then there is the more general and more modern Jenkins remote access API, where you may trigger builds regardless of the source code management system you use:

curl -X POST $JENKINS_URL/job/$JOB_NAME/build?token=$TOKEN

It even allows triggering parameterized builds with HTTP POST requests like:

curl -X POST $JENKINS_URL/job/$JOB_NAME/build \
--user USER:TOKEN \
--data-urlencode json='{"parameter": [{"name":"id", "value":"123"}, {"name":"verbosity", "value":"high"}]}'

Both approaches work great as long as your Jenkins instance is not secured and everyone can do everything. Such a setting may be fine in your company’s intranet but becomes a no-go in more heterogeneous environments or with a public Jenkins server.

So the way to go is securing Jenkins with user accounts and restricted access. If you do not want to supply username/password as part of the URL for HTTP BASIC auth, and do not want to create users just for your repository triggers, there is another easy option:

Using the Build Authorization Token Root Plugin!

Build authorization token root plugin

The plugin introduces a configuration setting in the Build Triggers section to define an authentication token.

It also exposes a URL you can access without being logged in, to trigger builds by just providing the token specified in the job:

$JENKINS_URL/buildByToken/build?job=$JOB_NAME&token=$TOKEN

Or for parameterized builds something like:

$JENKINS_URL/buildByToken/buildWithParameters?job=$JOB_NAME&token=$TOKEN&Type=Release

Conclusion

The token root plugin does not need HTTP POST requests but also works fine using HTTP GET. It neither requires a user account nor the awkward Poll SCM setting. In my opinion it is the simplest and most pragmatic choice for triggering builds on a secured Jenkins instance.

Recap of the Schneide Dev Brunch 2016-08-14

If you couldn’t attend the Schneide Dev Brunch on the 14th of August 2016, here is a summary of the main topics.

Two weeks ago on a sunday, we held another Schneide Dev Brunch, a regular brunch on the second sunday of every other (even) month, only that all attendees want to talk about software development and various other topics. This brunch had its first half on the sun roof of our company, but it got so sunny that we couldn’t view a presentation that one of our attendees had prepared, so we went inside. As usual, the main theme was that if you bring a software-related topic along with your food, everyone has something to share. We were quite a lot of developers this time, so we had enough stuff to talk about. As usual, a lot of topics and chatter were exchanged. This recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you probably find this list inconclusive:

Open-Space offices

There are some new office buildings in town that feature the classic open-space office plan in combination with modern features like room-wide active noise cancellation. In theory, you still see your 40 to 50 colleagues, but you don’t necessarily hear them. You don’t have walls and a door around you but are still separated by modern technology. In practice, that doesn’t work. The noise cancellation induces a faint cheeping in the background that causes headaches. The noise isn’t cancelled completely; especially those attention-grabbing one-sided telephone calls get through. Without noise cancellation, the room or hall is way too noisy and feels like working in a subway station.

We discussed how something like this can happen in 2016, with years and years of empirical experience with work settings. The simple truth: Everybody has individual preferences, there is no golden rule. The simple conclusion would be to provide everybody with their preferred work environment. Office plans like the combi office or the flexspace office try to provide exactly that.

Retrospective on the Git internal presentation

One of our attendees gave a conference talk about the internals of git, and sure enough, the first question from the audience was: If git relies exclusively on SHA-1 hashes and two hashes collide in the same repository, what happens? The first answer doesn’t impress any analytical mind based on logic: It’s so incredibly improbable for two SHA-1 hashes to collide that you might rather prepare yourself for the attack of wolves and lightning at the same time, because it’s more likely. But what if it happens regardless? Well, one man went out and explored the consequences. The sad result: It depends. It depends on which two git elements collide in which order. The consequences range from invisible warnings without action over silently progressing repository decay to immediate data self-destruction. The consequences are so bitter that we already researched the savageness of the local wolf population and keep an eye on the thunderstorm app.

Helpful and funny tools

A part of our chatter contained information about new or noteworthy tools that make software development more fun. One tool is the elastic tabstop project by Nick Gravgaard. Another, maybe less helpful but more entertaining tool is the lolcommits app that takes a mugshot – oh sorry, we call that “aided selfie” now – every time you commit code. That smug smile when you just wrote your most clever code ever? It will haunt you during a git blame session two years later while trying to find that nasty heisenbug.

Anonymous internet communication

We invested a lot of time in a topic that I will only describe in broad terms. We discussed possibilities to communicate anonymously over a compromised network. It is possible to send hidden messages from A to B using encryption and steganography, but a compromised network will still be able to determine that a communication has occurred between A and B. In order to communicate anonymously, the network must not be able to determine whether a communication between A and B has happened at all, regardless of the content.

A promising approach was presented and discussed, with lots of references to existing projects like https://github.com/cjdelisle/cjdns and https://hyperboria.net/. The usual suspects like the TOR project were examined as well, but couldn’t hold up to our requirements. At last, we wanted to know how hard it is to found a new internet service provider (ISP). It’s surprisingly simple and well-documented.

Web technology to single you out

We ended our brunch with a rather grim inspection of the possibilities to identify and track every single user on the internet. Using completely exotic means of surfing is not helpful, as explained in this xkcd comic. When using a stock browser to surf, your best practice should be not to change the initial browser window size – but just see for yourself if you think it makes a difference. Here is everything What Web Can Do Today to identify and track you. It’s so extensive, it’s really scary, but on the other hand quite useful if you happen to develop a “good” app on the web.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open for guests and future regulars. Just drop us a notice and we’ll invite you over next time.

For the gamers: Schneide Game Nights

Another ongoing series of events that we established at the Softwareschneiderei are the Schneide Game Nights, which take place on an irregular schedule. Each Schneide Game Night is a saturday night dedicated to a new or unknown computer game that is presented by a volunteer moderator. The moderator introduces the guests to the game, walks them through the initial impressions and explains the game mechanics. If suitable, the moderator plays a certain amount of time to show more advanced game concepts and gives hints and tips without spoiling too many surprises. Then it’s up to the audience to take turns trying the single-player game or to fire up the notebooks and join a multiplayer session.

We already had Game Nights for the following games:

  • Kerbal Space Program: A simulator for everyone who thinks that space travel surely isn’t rocket science.
  • Dwarf Fortress: A simulator for everyone who is in danger of growing attached to legendary ASCII socks (if that doesn’t make much sense now, let’s try: A simulator for everyone who loves to dig his own grave).
  • Minecraft: A simulator for everyone who never grew out of the LEGO phase and is still scared in the dark. Also, the floor is lava.
  • TIS-100: A simulator (sort of) for everyone who thinks programming in Assembler is fun. Might soon be an olympic discipline.
  • Faster Than Light: A roguelike for everyone who wants more space combat action than Kerbal Space Program can provide and nearly as much text as in Dwarf Fortress.
  • Don’t Starve: A brutal survival game in a cute comic style for everyone who isn’t scared in the dark and likes to hunt Gobblers.
  • Papers, Please: A brutal survival game about a bureaucratic hero in his border guard booth. Avoid if you like to follow the rules.
  • This War of Mine: A brutal survival game about civilians in a warzone, trying not to simultaneously lose their lives and humanity.
  • Crypt of the Necrodancer: A roguelike for everyone who wants to literally play the vibes, trying to defeat hordes of monsters without skipping a beat.
  • Undertale: An 8-bit adventure for everyone who fancies silly jokes and weird storytelling. You’ll feel at home if you’ve played the NES.

The Schneide Game Nights are scheduled over the same mailing list as the Dev Brunches and feature the traditional pizza break with nearly as much chatter as the brunches. The next Game Night will be about:

  • Factorio: A simulator that puts automation first. Massive automation. Like, don’t even think about doing something yourself, let the robots do it a million times for you.

If you are interested in joining, let us know.

Recap of the Schneide Dev Brunch 2016-06-12

If you couldn’t attend the Schneide Dev Brunch on the 12th of June 2016, here is a summary of the main topics.

Last sunday, we held another Schneide Dev Brunch, a regular brunch on the second sunday of every other (even) month, only that all attendees want to talk about software development and various other topics. This brunch was a little different because it had a schedule for the first half. That didn’t change much of the outcome, though. As usual, the main theme was that if you bring a software-related topic along with your food, everyone has something to share. We were quite a lot of developers this time, so we had enough stuff to talk about. As usual, a lot of topics and chatter were exchanged. This recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you probably find this list inconclusive:

The internals of git

Git is a version control system that has, in just a few years, taken over the place of nearly every previous tool. It’s the tool that every developer uses day in, day out, but nobody can explain the internals, the “plumbing” of it. Well, some can, and one of our attendees did. In preparation for a conference talk with a live demonstration, he gave the talk to us and told us everything about the fundamental basics of git. We even created our own repository from scratch, using only a text editor and some arcane commands. If you visited the Karlsruhe Entwicklertag, you could hear the gold version of the talk; we got the release candidate.

The talk introduced us to the basic building blocks of a git repository. These elements and the associated commands are called the “plumbing” of git, just like the user-oriented commands are called the “porcelain”. The metaphor was clearly conceived while staring at the wall in a bathroom. Normal people only get to see the porcelain, while the plumber handles all the pipework and machinery.

Code reviews

After the talk about git and a constructive criticism phase, we moved on to the next topic: code reviews. We are all interested in or practicing different tools, approaches and styles of code review, so we needed to get an overview. There is one company called SmartBear that has its public relations done right by publishing an ebook about code reviews (Best Kept Secrets of Code Review). The one trick that really stands out is adding preliminary comments about the code from the original author to facilitate the reviewer’s experience. It’s like a pre-review of your own code.

We talked about different practices like the “30 minutes, no less” rule (I don’t seem to find the source, have to edit it in later, sorry!) and soon came to the most delicate point: the programmer’s ego. A review isn’t always as constructive as our criticism of the talk, so sometimes an ego will get bruised or just appear to be bruised. This is the moment emotions enter the room and make everything more complicated. The best thing to keep in mind and soul is the egoless programming manifesto and, while we are at it, the egoless code review. If everything fails, your process should put a website between the author and the reviewer.

That’s when tools make their appearance. You don’t need a specific tool for code reviews, but maybe they are helpful. Some tools dictate a certain workflow while others are more lenient. We concentrated on the non-opinionated tools out there. Of course, Review Ninja is the first tool that got mentioned. Several of our regular attendees worked on it already, some are working with it. There are some first generation tools like Barkeep or Review Board. Then, there’s the old gold league like Crucible. These tools feel a bit dated and expensive. A popular newcomer is Upsource, the code review tool from JetBrains. This is just a summary, but there are a lot of tools out there. Maybe one day, a third generation tool will take this market over like git did with version control.

Oh, and you can read all kinds of aspects from reviewed code (but be sure to review the publishing date).

New university for IT professionals

In the german city of Köln (Cologne), a new type of university is being founded right now: https://code.university/ The concept includes a modern approach to teaching and learning. What’s really cool is that students work on their own projects from day one. That’s a lot like how we started our company during our studies.

Various chatter

After that, we discussed a lot of topics that won’t make it into this summary. We drifted into ethics and social problems around IT. We explored some standards like the infamous ISO 26262 for functional safety. We laughed, chatted and generally had a good time.

Economics of software development

At last, we talked about statistical analysis and economic viewpoints of software development. That’s actually a very interesting topic, if it were not largely about huge spreadsheets filled with numbers, printed on never-ending pages, referenced by endless lists of topics grouped into numerous chapters. Yes, you’ve already anticipated it: I’m talking about the books of Capers Jones. Don’t get me wrong, I really like them:

There are some others, but start with these two to get used to hard facts instead of easy tales. In the same light, you might enjoy the talk and work of Greg Wilson.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open for guests and future regulars. Just drop us a notice and we’ll invite you over next time.

The whole company under version control

One of our secrets is that we’ve put the whole company under version control. You can see every change to our business data and undo every mistake.

A minor fact about the Softwareschneiderei that always evokes surprised reactions is that everything we do is under version control. This should be no surprise for our software development work, as version control has been a best practice there for about twenty years now. If you aren’t a software developer or are unfamiliar with the concept of version control for whatever reason, here’s a short explanation of its main features:

Summary of version control

Version control systems are used to track the change history of a file or a bunch of files in a way that makes it possible to restore previous versions if needed. Each noteworthy change of a file (or a bunch of files) is stored as a commit, a new savepoint that can be restored. Each commit can be provided with a change note, a short comment that describes the changes made. This results in a timeline of noteworthy changes for each file. All the committed changes are immutable, so you get revision safety of your data at almost no cost.

Usual work style for developers

In software development, each source code file has to be “in a repository”, the repository being the central database for the version control system. The repository is accessible over the network and holds the commits for the project. One of the first lessons a developer has to learn is that source code that isn’t committed to a version control system just doesn’t exist. You have to commit early and you have to commit often. In modern development, commit cycles of a few minutes are usual and necessary. Each development step results in a commit.

What we’ve done is to adopt this work style for our whole company. Every document that we process is stored under version control. If we write you a quote or an invoice, it is stored in our company data repository. If we send you a letter, it was first committed to the repository. Every business analysis spreadsheet, all lists and inventories, everything is stored in a repository.

Examples of usage scenarios

Let me show you two examples:

We have a digital list of all the invoices we sent. It’s nothing but a spreadsheet with the most important data for each invoice. Every time we write an invoice, it is another digital document with all the necessary text and an additional line in the list of invoices. Both changes, the new invoice document and the extended list, are included in one commit with a comment that hints at the invoice number and the project number. These changes are now included in the ever-growing timeline of our company data.

We also have a liquidity analysis spreadsheet that needs to be updated often. Every time somebody makes a change to the spreadsheet, it’s a new commit with a comment what was updated. If the update was wrong for whatever reason, we can always backtrack to the spreadsheet content right before that faulty commit and try again. We don’t only have the spreadsheet, but the whole history of how it was filled out, by whom and when.

Advantages of version controlled files

Before we switched to a version controlled work style, we had network shares as the place to store all company data. This is probably the de-facto standard of how important files are handled in many organizations. Adding version control has some advantages:

  • While working with network shares, everybody works on the same file. Most programs show a warning that another user has write access to a file and only open it in read-only mode. But not every program does that, and that’s where edit collisions occur without anybody noticing. With version control, you work on a local copy of the file. You can always change the file, but you will get a “merge conflict” when another user has altered the file in the repository after your last synchronization. These merge conflicts are usually minor inconveniences with source code, but a major pain with binary file formats like spreadsheets. So you’ll know about edit collisions and you’ll try to avoid them. How do you avoid them? By planning and communicating your work better. Version control emphasizes the collaborative work setting we all live in.
  • Version controlled data is always traceable. You can pinpoint exactly who did what at which time and why (as stated in the commit comment). There is no doubt about any number in a spreadsheet or any file in your repository. This might sound like a surveillance nightmare, but it’s more of a protection against mishaps and honest errors.
  • Version control lets you review your edits. Every time you commit your work, you’ll see a list of files that you’ve changed. If there is a file that you didn’t know you’ve changed, the version control just saved your ass. You can undo the erroneous change with a simple click. If you’d worked with network shares, this change would have gone unnoticed. With version control, you have to double-check your work.
  • There are no accidental deletions with version control. Because you have every file stored in the repository, you can always undo every delete operation. With network shares, every file lives in the constant fear of the delete key. With version control, you catch your mishap in the commit step and just restore the file.

Summary of the adoption

When we switched to version control for all our company data, we just committed our network shares to the repository and started. The work style is a bit inconvenient at first, because it means additional work and needs frequent breaks for the commits, but everybody got used to it very quickly. Soon, the advantages began to outweigh the inconvenience, and now working with our company data is free of fear because we have the safety net of version control.

You want to know more about version control? Feel free to ask!

Transferring commits via Git bundles

Sometimes you want to send (e.g. by e-mail) a set of new Git commits to someone else who has the same repository at an older state, without transferring the whole repository and without sharing a common remote repository.

One feature that might come to mind is Git patches. Patches, however, don’t work when there are branches and merge commits in the commit history: git format-patch creates patches for the commits across the various branches in the order of their commit times and doesn’t create patches for merge commits.

Git bundles

The solution to the problem are Git bundles. Git bundles contain a partial excerpt of a Git repository in a single file.

This is how to create a bundle, including branches, merge commits and tags:

$ git bundle create my.bundle <base commit>..HEAD --branches --tags

<base commit> must be replaced with the last commit (i.e. a commit hash or tag) that was included in the old state of the repository.

A Git bundle can be imported into a repository via git pull:

$ git pull /path/to/my.bundle
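
Before importing, the recipient can check that the bundle is valid and that their repository contains the base commits the bundle builds upon:

$ git bundle verify /path/to/my.bundle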

Recap of the Schneide Dev Brunch 2014-08-31

If you couldn’t attend the Schneide Dev Brunch on the 31st of August 2014, here is a summary of the main topics.

Yesterday, we held another Schneide Dev Brunch, a regular brunch on a sunday, only that all attendees want to talk about software development and various other topics. If you bring a software-related topic along with your food, everyone has something to share. The brunch was well-attended this time, but the weather didn’t allow for an outside session. There were lots of topics and chatter. As always, this recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you probably find this list inconclusive:

Docker – the new (hot) kid in town

Docker is the hottest topic in software commissioning this year. It’s a lightweight virtualization technology, except that you don’t obtain full virtual machines. It’s somewhere between a full virtual machine and a simple chroot (change root). And it’s still not recommended for production use, but is already in action in this role in many organizations.
We talked about the magic of git and the UnionFS that lie beneath the surface, the ease of migration and disposal and even the relative painlessness of running it on Windows. I can earnestly say that Docker is the technology that everyone will have had a look at before the year is over. We at the Softwareschneiderei will run an internal Docker workshop in September to make sure this statement holds true for us.

Git – the genius guy with issues

The discussion changed over to Git, the distributed version control system that supports every versioning scheme you can think of but won’t help you if you entangle yourself in the tripwires of your good intentions. Especially the surrounding tooling was of interest. Our attendees had experience with SmartGit and Sourcetree, both capable of awesome dangerous stuff like partial commits and excessive branching. We discovered a lot of different work styles with Git and can agree that Git supports them all.
When we mentioned code review tools, we discovered a widespread suspicion of heavy-handed approaches like Gerrit. There seems to be an underlying motivational tendency to utilize reviews to foster a culture of command and control. On a technical level, Gerrit probably messes with your branching strategy in an unpleasant way.

Teamwork – the pathological killer

We had a long and deep discussion about teamwork, liability and conflicts. I cannot reiterate everything, but give a few pointers how the discussion went. There is a common litmus test about shared responsibility – the “hold the line” mindset. Every big problem is a problem of the whole team, not the poor guy that caused it. If your ONOZ lamp lights up and nobody cares because “they didn’t commit anything recently”, you just learned something about your team.
Conflicts are inevitable in every group of people larger than one. We talked about team dynamics and how most conflicts grow over long periods only to erupt in a sudden and painful way. We worked out that most people aren’t aware of their own behaviour and cannot act “better”, even if they were. We learned about the technique of self-distancing to gain insights about one’s own feelings and emotional drive. Two books got mentioned that may support this area: “How to Cure a Fanatic” by Amos Oz and “On Liberty” by John Stuart Mill. Just a disclaimer: the discussion was long and the books most likely don’t match the few headlines mentioned here exactly.

Code Contracts – the potential love affair

An observation of one attendee was the starting point for the next topic: (unit) tests as a means of spot-checking don’t exactly lead to the goal of full confidence in the code. The explicit declaration of invariants and subsequent verification of those invariants seem more likely to fulfil the confidence-giving role.
Turns out, another attendee just happened to be part of a discussion on “next generation verification tools”, and invariant-checking frameworks were one major topic. Especially the library Code Contracts from Microsoft showed impressive potential to really be beneficial in a day-to-day setting. Neat features like continuous verification in the IDE and automatic (smart) correction proposals make this approach really stand out. This video and this live presentation will provide more information.

While this works well in the “easy” area of VM-based languages like C#, the classical C/C++ ecosystem proves to be a tougher nut to crack. The common approach is to limit the scope of the tools to the area covered by LLVM, a widespread intermediate representation of source code.

Somehow, we came across the book title “The Economics of Software Quality” by Capers Jones, which provides a treasure of statistical evidence about what might work in software development (or not). Another relatively new and controversial book is “Agile! The Good, the Hype and the Ugly” by Bertrand Meyer. We are looking forward to discussing them in future brunches.

Visual Studio – the merchant nobody likes but everybody visits

One attendee asked about realistic alternatives to Visual Studio for C++ development. Turns out, there aren’t many, at least not free of charge. Most editors and IDEs aren’t particularly bad, but lack the “everything already in the box” effect that Visual Studio provides for Windows-/Microsoft-only development. The main favorites were Sublime Text with the clang plugin, Orwell Dev-C++ (the fork of Bloodshed Dev-C++), Eclipse CDT (if the code assist failure isn’t important), Code::Blocks and Codelite. Of course, the classics like vim or emacs (with highly personalized plugins and setup) were mentioned, too. KDevelop and XCode were mentioned as non-Windows alternatives.

Stinky Board – the nerdy doormat

One attendee experiments with input devices that might improve the interaction with computers. The Stinky Board is a foot-controlled device with four switches that act like additional keys. In comparison to other foot switches, it’s very sturdy. The main use case for our attendee is keys that you need to keep pressed for their effect, like “sprint” or “track enemy” in computer games. In a work scenario, there are fewer of these situations. The additional buttons may serve for actions that are needed relatively infrequently, but regularly – like “run project”.

This presentation produced a lot of new suggestions, like the Bragi smart headphones, which include sensors for head gestures. Imagine shaking your head for “undo change” or nodding for “run tests” – while listening to your fanciest tunes (you might want to refrain from headbanging then). A very interesting attempt to combine mouse, keyboard and joystick is the “King’s Assembly”, a weird two-piece device that’s just too cool not to mention. We are looking forward to hearing more about it.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The high number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open for guests and future regulars. Just drop us a notice and we’ll invite you over next time.

Follow-up to our Dev Brunch August 2010

A follow-up to our August 2010 Dev Brunch, summarizing the talks and providing bonus material.

Last Sunday, we held our Dev Brunch for August 2010. We had to meet early in August, as there will be a lot of holiday absences in the coming weeks. The setting was more classical again, with a real brunch on a late sunday morning. We had a lot more registrations than actual attendees, but it was said this was caused by a proper birthday party the night before. Due to rainy weather, we stayed inside and discussed the topics listed below.

The Dev Brunch

If you want to know more about the meaning of the term “Dev Brunch” or how we implement it, have a look at the follow-up posting of the brunch in October 2009. We continue to allow presence over topics. Our topics for the brunch were:

  • Clean Code Developer Initiative – The Clean Code Developer movement uses colored wristbands to successively focus on different aspects of the principles and practices of a professional software developer. Despite the name, it’s a german group with german web sites. But everybody who has read Uncle Bob’s “Clean Code” knows what the curriculum is about. The talk gave a general summary of the initiative and some firsthand experiences with following the rules. If you have read the book or are interested in profound software development, give it a try.
  • Non-bare repositories in git – The distributed version control system git differentiates between “bare” and “non-bare” repositories. If you are a local developer, you’ll use the non-bare type. When two developers with similar non-bare repositories (e.g. of the same project) meet, they can’t easily share commits or patches with the “push” command. This is a consequence of “push” not being the exact opposite of the “fetch” command. If you try to synchronize two non-bare git repositories with push commands, you’ll most likely fail. The only safe approach is to introduce an intermediate bare repository (a sketch follows below) or a branch in one of the repositories that only gets used by external users. Even the repository owner has to push to this branch then. We discussed the setup and its consequences, which are small in a broader use case and sad for ad-hoc workgroups.
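
A minimal sketch of the intermediate-repository approach mentioned above (the path /shared/project.git is a made-up example):

$ git init --bare /shared/project.git   # the intermediate bare repository
$ git remote add shared /shared/project.git
$ git push shared master                # developer A publishes commits
$ git pull shared master                # developer B fetches and merges them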

Retrospection of the brunch

The group of attendees was small and a bit hung over. This led to a brunch that lacked technical topics a bit but emphasized social and cultural topics that didn’t make it on the list above. A great brunch just before the holiday season.