Adaptive random generation of multiple outcomes

Games often want to use randomness to spice up the experience – it can be a lot more exciting to risk something when you are not entirely certain of the result. In my game abstractanks, the power-ups are generated randomly. However, when playing the game, it seemed like it was always generating the same power-ups in a row, which can be kind of frustrating. Tough luck, because this is just how uniform randomness behaves – as anyone who has ever played the board game Sorry! or the German classic Mensch ärgere dich nicht! can surely testify.

Let’s try that with C# code:

var random = new Random();
var outcomes = new []{'A', 'B', 'C', 'D', 'E', 'F'};
for (int i = 0; i < 200; ++i)
{
  // Next(maxValue) excludes maxValue, so this picks indices 0..5
  var index = random.Next(outcomes.Length);
  Console.Write(outcomes[index]);
}

This will produce something like this:

ECDDBABCEEBBDDADEDECCAECECEADBBCCCDAEBEECDBCACAEAA
BDEACECBDDBAEDCEEAEAECDEEEECBCCEECEDCBAECCCBDCDDEA
CEAABDEEBDEAEBABABDEBDAACBECBBAACAEDEEBAECECECCBAB
BBAAEEDEDEEBCACDDEBBCBACADDDBAECBAEDACBEAEABBCAEEA

See all those strings of Cs and Es? Horrible! That does not feel random!

Games like Dota 2 deal with this by tweaking the distribution after each “roll”. Specifically, for abilities like Phantom Assassin’s Coup de Grace, the chance is increased slightly after each unsuccessful attempt, making roughly the fourth attempt a guaranteed critical strike.
But this technique only works nicely for a single event in a stream of attempts. It does not help with making multiple outcomes look better.
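
Just for illustration, here is a minimal sketch of that kind of single-event pseudo-random distribution, written in C++. The concrete step size and class name are made up for the example, not Dota 2’s actual values:

#include <random>

// Sketch of a pseudo-random distribution for a single event:
// the chance grows by a fixed step after every miss and resets on a hit.
class PseudoRandomEvent
{
public:
  explicit PseudoRandomEvent(double step) : step_(step), chance_(step) {}

  bool roll(std::mt19937& engine)
  {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    if (uniform(engine) < chance_)
    {
      chance_ = step_;  // hit: reset to the base chance
      return true;
    }
    chance_ += step_;   // miss: the next attempt becomes more likely
    return false;
  }

private:
  double step_;
  double chance_;
};
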
For my game, I devised an algorithm with two properties:
1. The same roll never appears twice in succession
2. No long stretches without a specific roll
Here’s my algorithm to do that:

var random = new Random();
var outcomes = new []{'A', 'B', 'C', 'D', 'E', 'F'};
var chances = new []{1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
for (int i = 0; i < 200; ++i)
{
  // Normalize chances so they sum up to 1.0, then build their prefix sum
  var total = chances.Sum();
  var prefixSum = chances
    .Select(x => x / total)
    .Aggregate(new List<double>(), (list, x) =>
    {
      list.Add(list.LastOrDefault() + x);
      return list;
    }).ToList();

  // Roll an outcome
  var roll = random.NextDouble();
  var pick = prefixSum.FindIndex(x => x >= roll);
  Console.Write(outcomes[pick]);

  // Now adapt the distribution by removing all chance from
  // the last pick and distributing it to the N-1 others
  var increment = chances[pick] / (chances.Length - 1);
  chances = chances.Select((x, index) =>
  {
    if (index == pick)
      return 0.0;
    else
      return x + increment;
  }).ToArray();
}

And here’s what that produces:

AEDFBCAEFDBAEDFCBDECBFCFDBEACFDECBDAEFCEBACDEAFBDC
AFEBCABFDECDAFBECFADFCEBACFDCEDBFAEDCDBDAFEDBFEABD
CFABEDFBACDEAEFBECADBAFECAFBDACECFAEBDCAFCDBAEDAFA
BDFCDEACDBFEDFCBACDAFBEADFCDEFBEACBEFDCECFABDFCDBE

Much nicer for my needs, yet it still looks pretty random. Also, my statistics buddy assured me that this algorithm still guarantees equal outcome probability for all items when run forever. I do not know whether this is a new technique, but I did not find anything like it when I was looking for one.
Let me know if this is of any use to you.

A programmer’s report on the Global Game Jam

Last weekend was this year’s Global Game Jam. A game jam is a type of event where the participants – the “jammers” – create a game in a fixed, usually quite short, amount of time, with some additional restrictions like a theme. Usually, this means computer games, although you can also make board or pen-and-paper games.

A worldwide event

As the name implies, the Global Game Jam is a worldwide event. Most bigger cities have “hubs”, i.e. jam locations, where the jammers gather at the beginning of the weekend in their local time zone. Then, a few motivating keynote videos are shown before the theme is announced. Last year’s theme was “Transmission”, while this year’s theme was “What Home means to you”. A game using that theme is to be created within the next 48 hours.

The pitch

People get a few minutes to brainstorm about the theme and maybe come up with an idea that they want to implement. There is a kind of small stage area, where people with those ideas pitch them to the others in hopes of getting a team assembled. Last year, a guy even made a short PowerPoint presentation to pitch his idea.
While you can make a game on your own, that is pretty hard. You typically need at least programming, graphics, audio, music, project-management and game-design expertise on your team. It is rare to find that in a single person.

Implementation

So once you have a nice idea, or have joined a team with one, the implementation work starts.

Last year, my team quickly abandoned the idea of using Unity as the engine for our terraforming game, since no one on the team had any experience with it. We switched to Love2D, a neat little framework based on Lua. It is very lightweight, and Lua is an elegant language. Given that our team had no dedicated graphics people, I am pretty proud of the little game that we finished: Terraformer

This year, I joined another team and we picked the Godot engine to create our game. Godot comes with its own programming language called GDScript. The syntax looks like it borrows heavily from both Python and JavaScript. It is, however, quite pleasant to work with, except that most higher-order functional features that are so common in other languages seem to be missing. But it does have coroutines, so there’s that.
Either way, working with Godot was mostly pretty pleasant, except for its interaction with git. Just opening specific parts of the game in the Godot editor would change files, and sometimes we would get huge nonsensical changes and merge conflicts when someone on another platform (e.g. on Mac versus Windows) pushed something.
We ended up making this little game: Home God
I’m particularly proud of the character movement controls. I think they turned out quite fluid.

Not just for game programmers…

The Global Game Jam is an awesome experience for non-game programmers looking to improve their craft. In fact, most participants in my hub were hobbyists at most, from what I could tell. The tight time budget and the objective of just getting it to work will give many programmers a totally different perspective on how to do things. Things that steal your time will hurt a lot.

Don’t use wrapper

It’s a bad word for a piece of code, and you should feel bad for using it. Here is why:

1. It is easily phonetically confused with “rapper”

Well, this one is actually funny. Really the only redeeming quality. So if someone tells me that they “made a wrapper”, I immediately giggle a bit inside.

2. Wrapping things is a programmer’s job.

As programmers, we are in the business of abstractions, and a function clearly is an abstraction. A function that calls something else wraps that something else. So isn’t everything a wrapper?
Who would say that the following function is a wrapper?

template <class T>
T multiplication_wrapper(T a, T b)
{
  return a * b;
}

It does wrap the multiplication operator, does it not? Of course, the example is contrived, but many people call equally simple functions “wrapper functions”.

3. It is often a bad analogy

When you wrap something, like a present, you first need to unwrap it to actually use it. So in that case, it acts more like a kind of envelope. This is clearly not the case for what most people call wrappers. You could wrap some data in a .zip file – that would make sense! But no one uses it like that.
Another use of the word wrap implies something that goes around something else, forming a fixture of sorts. Like a wraparound baby sling. So I guess this could work for some uses, like a protection layer. Again, it is not used like that.
Finally, there’s wrapping up something, as in finishing something. Well, maybe if you’re wrapping your main function around the rest of your code, you can finish writing your program. A very monolithic approach.

There are plenty of better alternatives

In most cases, what people should rather use is either facade or adapter. Both names convey a lot more meaning than wrapper. A facade is something that wraps code to make the interface nicer. An adapter wraps an interface to integrate with some other piece of code. Both are structural design patterns. Both wrap something. But then again, that could be said for all of the structural design patterns. Or, most code. Except maybe assembler?
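
To make the distinction concrete, here is a small, hypothetical example – all names are made up for illustration. It certainly “wraps” something, but the more specific name tells you why it exists:

#include <iostream>
#include <string>

// An existing logging class whose interface does not fit the rest of the code.
struct LegacyLogger {
  void write_line(char const* text) { std::cout << text << '\n'; }
};

// The interface the rest of the code base is written against.
struct Logger {
  virtual ~Logger() = default;
  virtual void log(std::string const& message) = 0;
};

// An adapter: it wraps LegacyLogger, but the name says what it is for.
class LegacyLoggerAdapter : public Logger {
public:
  explicit LegacyLoggerAdapter(LegacyLogger& legacy) : legacy_(legacy) {}
  void log(std::string const& message) override {
    legacy_.write_line(message.c_str());
  }
private:
  LegacyLogger& legacy_;
};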

So please, calling something a wrapper is not enough. You might as well just call it function/object/abstraction. Use adapter, facade, decorator, proxy etc. The why is more important than the what.

How to teach C++

In the closing keynote of this year’s Meeting C++, Nicolai Josuttis remarked how hard it can be to teach C++ with its ever-expanding complexity. His example was teaching rookies about initialization in C++, i.e. whether to use assignment =, parens () or curly braces {}. He also asked for more application-level programmers to participate.

Well, I am an application programmer, and I also have experience with teaching C++. Last year I held a C++ introductory course for experienced C programmers who mostly had never used C++ before. From my experience, I can completely agree with what Nico had to say about teaching C++. The language and its subtlety can be truly overwhelming.

Most of the complexity in C++ boils down to tuning your code for optimal performance. We cannot just leave that out, can we? After all, the sole reason to use C++ is performance, right?

My approach

Performance is one key reason for using C++, no doubt about it. There are a few more, but let us not get distracted. What is even more important is the potential to optimize for performance. But you can do that later! It is actually quite crazy what performance crimes you can get away with in C++ and still have something pretty fast overall, especially with move semantics and improved RVO.

Given that, I picked a simple subset to start with:

  • Pass by value only; do not use pointers or references.
  • Structure your programs around simple data-only structs and functions transforming them.
  • Make good use of the data structures and algorithms from std.

Believe me, you too can write pretty useful programs this way. This is not too far from a data-oriented style anyway.
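
As a rough illustration of that subset, here is a small sketch; the domain and all names are made up for the example:

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// A simple data-only struct...
struct Player {
  std::string name;
  int score;
};

// ...and a free function transforming such data, everything passed by value.
std::vector<Player> sortedByScore(std::vector<Player> players) {
  std::sort(players.begin(), players.end(),
            [](Player lhs, Player rhs) { return lhs.score > rhs.score; });
  return players;
}

int main() {
  std::vector<Player> players{{"Ada", 30}, {"Grace", 70}, {"Linus", 50}};
  for (Player player : sortedByScore(players))
    std::cout << player.name << ": " << player.score << '\n';
}
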
So, yes, we can actually leave out the performance specific parts for quite some time, while still making sure our programs can be optimized eventually.

And from there?

You can gradually start introducing references and move semantics when measurement shows that copying affects performance. This way, you can introduce tooling like a profiler and see the effects of passing things around by reference in a language where everything, by default, is passed by value. These things will start to make sense in context.
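
For example (with made-up function names), the change such a measurement might motivate can be as small as this:

#include <string>
#include <vector>

// The "pass everything by value" version copies the whole vector on each call.
int countLongNamesByValue(std::vector<std::string> names)
{
  int count = 0;
  for (std::string name : names)
    if (name.size() > 10)
      ++count;
  return count;
}

// Once the profiler shows those copies, passing by const reference
// avoids them without changing any call sites.
int countLongNamesByReference(std::vector<std::string> const& names)
{
  int count = 0;
  for (std::string const& name : names)
    if (name.size() > 10)
      ++count;
  return count;
}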

Arguably, the better way into move semantics is via types that prevent you from copying, but allow moving. Most resources with side-effects are like this: file handles, locks, threads. You will need to introduce RAII for this to make sense, e.g. writing ctors and dtors.
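
A minimal sketch of such a move-only RAII type, here managing a C-style FILE handle (the class itself is made up for the example):

#include <cstdio>
#include <utility>

// A move-only RAII type that owns a C-style FILE handle.
class File {
public:
  explicit File(char const* path) : handle_(std::fopen(path, "r")) {}
  ~File() { if (handle_) std::fclose(handle_); }

  // Copying is forbidden...
  File(File const&) = delete;
  File& operator=(File const&) = delete;

  // ...but ownership can be moved.
  File(File&& other) noexcept : handle_(std::exchange(other.handle_, nullptr)) {}
  File& operator=(File&& other) noexcept {
    std::swap(handle_, other.handle_);
    return *this;
  }

  bool isOpen() const { return handle_ != nullptr; }

private:
  std::FILE* handle_;
};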

But you still do not need to write templates, use virtual-function polymorphism, or even pointers. But when you get to those, you will have a good context to use them.

Have you taught C++ and have some experience to share? I would like to hear about it!

3 rules for projects under version control

Checking out a project under version control should be easy and repeatable. Here are a few tips on how to achieve that:

1. Self-containment

You should not need a specifically configured machine to start working on a project. Ideally, you clone the project and get started. Many things can get in the way of that.
Maybe your project needs a specific operating system, installed dependencies, special hardware or a database setup to run. I personally draw the line at this:
You get a manual with the code that helps you set up your development environment once, and this should be as automated and easy as possible. You should always be able to run your project without peripherals, i.e. specific hardware that needs to be plugged in, or databases that need to be installed.
To achieve this for hardware, you can often fake it via polymorphic interfaces and dependency injection, much like mocking it for testing. The same can be done with databases – or you can use in-memory databases as a fallback.
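
A minimal sketch of what faking hardware behind a polymorphic interface can look like; the sensor and all names are made up for the example:

#include <iostream>

// The interface the rest of the code depends on.
struct TemperatureSensor {
  virtual ~TemperatureSensor() = default;
  virtual double readCelsius() = 0;
};

// Talks to the real device; only usable on a fully equipped machine.
struct HardwareTemperatureSensor : TemperatureSensor {
  double readCelsius() override {
    // ... real device access would go here ...
    return 0.0;
  }
};

// A fake used when the hardware is not plugged in.
struct FakeTemperatureSensor : TemperatureSensor {
  double readCelsius() override { return 21.5; }
};

// The application code receives the sensor via dependency injection.
void reportTemperature(TemperatureSensor& sensor) {
  std::cout << "Temperature: " << sensor.readCelsius() << " C\n";
}

int main() {
  FakeTemperatureSensor fake;
  reportTemperature(fake);
}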

2. Separate build-artifacts

Building the project into an executable form should be clearly separated from the source-controlled files. For example, the build should never ever modify a file that is under version control. Ideally, the “build” directory is completely independent from the source – enabling a true out-of-source build. However, it is often a good middle ground to allow building in a few dedicated directories in your source repository – but these need to be in .gitignore.

3. Separate runtime data

In the same way, running your project should not touch any source-controlled files. Ideally, the project can be run out-of-source. This is trivial for small programs that do not have data, but once some data needs to be managed by the source control system, it gets a little more tricky for the executables to find that data. For data that needs to be changed by the program (we call these “stores”), it is advisable to maintain templates in the VCS or in code, and copy them to the runtime directory during the build process. For data that is not changed by running the program, such as images, videos, translation tables etc., you can copy them as well, or make sure the program finds them in the source repository.

Following these guidelines will make it easier to work with version control, especially when multiple people are involved.

luabind deboostified tips and tricks

luabind deboostified is a fork of the luabind project that helps with exposing C++ APIs to Lua. As the name implies, it replaces the boost dependency with modern C++, which makes it a lot more pleasant to work with.

Here are a few tips and tricks I learned while working with it. Some tricks might be applicable to the original luabind – I do not know.

1. Splitting module registration

You can split the registration code for different classes. I usually add a register function per class, like this:

struct A {
  void doSomething();
  static luabind::scope registerWithLua();
};

struct B {
  void goodStuff();
  static luabind::scope registerWithLua();
};

You can then combine their registration code into a single module on the Lua side:

void registerAll(lua_State* L) {
  luabind::module(L)[
    A::registerWithLua(),
    B::registerWithLua()];
}

The implementation of a registration function looks like this:

luabind::scope A::registerWithLua()
{
  return luabind::class_<A>("A")
    .def("doSomething", &A::doSomething);
}

2. Multiple policies and multiple return values

Unlike C++, Lua has real multiple return values. You can use that by utilizing the return value policies that luabind offers. Let’s say you want to write this in Lua:

local x, y = a.getPosition()

The C++ side could look like this:

void getPosition(A const& a, float& x, float& y);

The deboostified fork needs its policies supplied in a type list. Let’s use a small helper meta-function to build that:

template <typename... T>
using joined = 
  typename luabind::meta::join<T...>::type;

Once you have that, you can expose it like this:

luabind::def("getPosition", &getPosition,
              joined<
                luabind::pure_out_value<2>,
                luabind::pure_out_value<3>
              >());

3. Specialized data structures using luabind::object

Using the converters in luabind is not the only way to make Lua values from C++. Almost everything you can do in Lua itself, you can do with luabind::object. Here is a somewhat contrived example:

luabind::object repeat(luabind::object what,
                       int count) {
  // Create a new table object
  auto result = luabind::newtable(
    what.interpreter());
  // Fill it as an array [1..N]
  for (int i = 1; i <= count; ++i)
    result[i] = what;
  return result;
}

This function can then be exported via luabind::def and used just like any other function. This is just the tip of the iceberg, though. For example, you can also write functions that, at runtime, behave differently when a number is passed in than when a table is passed in. You can find out the Lua type with luabind::type(myObject).
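
A sketch of such a type dispatch could look roughly like this; note that luabind::iterator and luabind::object_cast are assumed here to work as in classic luabind, so double-check against your version:

// Sketch: accept either a single number or a table of numbers and sum them up.
double sumOf(luabind::object input)
{
  if (luabind::type(input) == LUA_TNUMBER)
    return luabind::object_cast<double>(input);

  double sum = 0.0;
  if (luabind::type(input) == LUA_TTABLE)
    for (luabind::iterator it(input), end; it != end; ++it)
      sum += luabind::object_cast<double>(*it);
  return sum;
}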

Of course, as soon as you want to create new objects to return to Lua, you need the lua_State pointer in that function. Using the interpreter from a passed-in luabind::object is one way, but I have yet to find another pleasant way to do this. It is probably possible to use the policies to do this, and have them pass that in as a special parameter, but for now I am using some complicated machinery to bind lambda functions that capture the Lua interpreter.

That’s it for now.

Keep in mind that these are not thoroughly researched best-practices, but patterns I have used to solve actual problems. There might be better solutions out there – if you know any, please let me know. Hope this helped!

Decoding non-utf8 server responses using the Fetch API

The new JavaScript Fetch API is a really nice addition to the web platform and my preferred, in fact the only bearable, way to do server requests.
The Promise-based API is a lot nicer than older, purely callback-based, approaches.

The usual approach to get a text response from a server using the Fetch API looks like this:

let request = fetch(url)
  .then(response => response.text())
  .then(handleText);

But this has one subtle problem:

I was building a client application that reads weather data from a small embedded device. We did not have direct access to changing the functionality of that device, but we could upload static pages to it, and use its existing HTML API to query the amount of registered rainfall and lightning strikes.

Using the Fetch API, I quickly got the data and extracted it from the HTML, but some of the identifiers contained screwed-up characters that looked like decoding problems. So I checked whether the HTTP Content-Type was set correctly in the response. To my surprise, it was correctly set as Content-Type: text/html; charset=iso-8859-1.

So why did my JavaScript application not get that? After some digging, it turned out that Response’s text() function always decodes the payload as UTF-8. The mismatch between that and the response’s charset explained the problem!

Obviously, I had to do the decoding myself. The solution I picked was to use the TextDecoder class. It can decode an ArrayBuffer with a given encoding. Luckily, that is easy to get from the response:

let request = fetch(url)
  .then(response => response.arrayBuffer())
  .then(buffer => {
    let decoder = new TextDecoder("iso-8859-1");
    let text = decoder.decode(buffer);
    handleText(text);
  });

Since I only had to support that single encoding, this worked well for me. Keep in mind that TextDecoder is still experimental technology. However, we had a specific browser as a target, and it works there. Lucky us!