Bit-fiddling is possible in Java

We have a service interface for Modbus devices that we can use remotely from Java. Modbus supports only very basic data types like single bits and 16-bit words. Our service interface provides the contents of a 16-bit input or holding register as a Java integer.

Often one 16-bit register is not enough to solve a problem in your domain, like representing a temperature as floating point number. A common solution is to combine two 16-bit registers and interpret their contents as a floating point number.

So the question is how to combine the bits of the two int values and convert them to a float in Java. There are at least two possibilities I want to show you.

The “old-fashioned” way involves bit-shifting and bit-wise operators, which you can actually use in Java despite the major flaw regarding primitive types: there are no unsigned data types; even byte is signed!

public static float floatFrom(int mostSignificant16Bits, int leastSignificant16Bits) {
    // mask the lower half so sign extension of the int cannot corrupt the upper 16 bits
    int bits = (mostSignificant16Bits << 16) | (leastSignificant16Bits & 0xFFFF);
    return Float.intBitsToFloat(bits);
}

A seemingly more modern way is to use java.nio.ByteBuffer:

public static float nioFloatFrom(int mostSignificant16Bits, int leastSignificant16Bits) {
    final ByteBuffer buf = ByteBuffer.allocate(4)
            .putShort((short) mostSignificant16Bits)
            .putShort((short) leastSignificant16Bits);
    buf.rewind(); // rewind to the beginning of the buffer, so it can be read!
    return buf.getFloat();
}

The second method seems superior when working with many values because you can fill the buffer conveniently in one pass and then extract the float values by subsequent calls to getFloat().
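
As a rough sketch (assuming the raw register values arrive most significant word first, as in the examples above, and that java.nio.ByteBuffer is imported), decoding a whole batch could look like this:

public static float[] floatsFrom(int[] registers) {
    // one pass to fill the buffer with all 16-bit register values (even count expected)
    final ByteBuffer buf = ByteBuffer.allocate(registers.length * 2);
    for (int register : registers) {
        buf.putShort((short) register); // only the low 16 bits of each register are relevant
    }
    buf.rewind(); // back to the beginning so the buffer can be read
    // one pass to extract the floats, two registers per value
    final float[] values = new float[registers.length / 2];
    for (int i = 0; i < values.length; i++) {
        values[i] = buf.getFloat();
    }
    return values;
}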

Conclusion

Even if it is not one of Java’s strengths, you can actually work at the bit and byte level and perform low-level tasks. Do not let the lack of unsigned data types scare you; signedness is irrelevant for the bits in memory.
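
If you ever need the unsigned numeric value of a register, masking is enough to recover it:

short registerValue = (short) 0xBEEF; // -16657 when read as a signed short
int unsigned = registerValue & 0xFFFF; // masking restores the unsigned value 48879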

Centralized project documentation

Project documentation is one of the things developers do not like to think about, but it is necessary for others to use the software. There are several approaches to project documentation: it is either stored in the source code repository and/or on some kind of project web page, e.g. in a wiki. It is often hard for different groups of people to find the documentation they need and to maintain it. I want to show an approach that stores and maintains the documentation in one place and integrates it into several other locations.

The project documentation (not the API documentation generated by tools like javadoc or Doxygen) should be version controlled and close to the source code. So a directory in the project source tree seems to be a good place. That way the developers or documenters can keep it up-to-date with the current source code version. For others it may be hard to access the docs hidden somewhere in the source tree, so we need to integrate them into other tools to make them easily accessible to all the people who need them.

Documentation format

We start with markdown as the documentation format because it is easily read and written using a normal text editor. It can be converted to HTML, PDF and other common document formats. The markdown files reside in a directory next to the source tree, named documentation for example. With pegdown there is a nice Java library that allows you to integrate markdown support into your projects.
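
A minimal sketch of such an integration could look like this (the file name and class are made up for illustration; pegdown’s PegDownProcessor does the conversion):

import org.pegdown.PegDownProcessor;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DocumentationRenderer {
    public static void main(String[] args) throws Exception {
        // read a markdown file from the documentation directory next to the source tree
        byte[] raw = Files.readAllBytes(Paths.get("documentation/README.md"));
        String markdown = new String(raw, "UTF-8");
        // convert it to HTML for embedding in a web page or report
        String html = new PegDownProcessor().markdownToHtml(markdown);
        System.out.println(html);
    }
}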

Integration in your wiki

Often you want to have your project documentation available on a web page, usually a wiki. With Confluence you can directly embed markdown files from a URL into your project page using a plugin. This allows you to enrich the general project documentation in the source tree with your organisation-specific documentation. The documentation becomes more widely accessible and searchable. The link can be served by a source code browser like gitweb: http://myrepo/git/?p=MyProject.git;a=blob_plain;f=README.md;hb=HEAD and is always up-to-date.

Integration in Jenkins

Jenkins has a plugin to use markdown as its description format. Combined with the Project Description Setter plugin you can use a file from your workspace as the job description. Short usage instructions or other notes and links can be maintained in the source tree and show up on the Jenkins job page.

Integration in GitHub or GitLab

Project hosting platforms like GitHub or your own repository manager, e.g. GitLab, can also display markdown-formatted content from your source tree as the project description, yielding a basic project page more or less for free.

Conclusion

Using markdown as the basis for your project documentation is a very flexible approach. It stays usable without any tool support and can be integrated and used in various ways using a plethora of tools and converters. Especially if you plan to open source a project, it should contain useful documentation in such a widely understood format, distributed with your source code.

CMake as a project model

One thing I actually like about Maven is its attempt to create a project model, conventions and a basic project structure. It allows easy integration with IDEs, build servers and so on. For projects in C/C++ I find CMake an attractive way to specify the project model.

Despite its name CMake is not really a build system but more of a cross-platform project description with an integrated build system generator. Cross-platform means not only operating system (OS) here but development environment in general. CMake thus can generate project files for MS Visual Studio 20xx, Eclipse CDT, Code::Blocks, KDevelop and the like. QtCreator can even import a CMake-based project natively which brings you up to speed in almost no time.

CMake allows you to organise your project in modules and manage your external dependencies. It does not impose rules as strict as Maven’s and lacks its own artifact repository infrastructure, but you can use the CMake package definitions or pkg-config information that many projects provide.

Packaging and deployment with CPack allows you to deliver releases of your project the way your users would expect them: depending on the target platform you can package your software as a Windows installer executable (using NSIS), in several formats for Mac OS X (like PackageMaker or Bundle) and in the popular DEB and RPM package formats for Linux distributions.

Conclusion

CMake is more than just another build system. It is a flexible tool to define your project and integrate it with your favorite development tools. It can support the whole project lifecycle from initial creation and normal development to deployment and delivery of your software to the user.

Integrating googletest in CMake-based projects and Jenkins

In my – admittedly limited – perception, unit testing in C++ projects does not seem as widespread as it is in Java or in dynamic languages like Ruby or Python. Therefore I would like to show how easy it can be to integrate unit testing into a CMake-based project and a continuous integration (CI) server. I will briefly cover why we picked googletest, how to add unit testing to the build process and how to publish the results.

Why we chose googletest

There are a plethora of unit testing frameworks for C++ making it difficult to choose the right one for your needs. Here are our reasons for googletest:

  • Easy publishing of results because of the JUnit-compatible XML output. Many other frameworks need either a Jenkins plugin or an XSLT script to make that work.
  • Moderate compiler requirements and cross-platform support. This rules out xUnit++ and to a certain degree boost.test because they need quite modern compilers.
  • Easy to use and integrate. Since our projects use CMake as a build system googletest really shines here. CppUnit fails because of its verbose syntax and manual test registration.
  • No external dependencies. It is recommended to put googletest into your source tree and build it together with your project. This kind of self-containment is really what we love. With many of the other frameworks it is not as easy, CxxTest even requiring a Perl interpreter.

Integrating googletest into a CMake project

  1. Putting googletest into your source tree
  2. Adding googletest to your toplevel CMakeLists.txt to build it as part of your project:
    add_subdirectory(gtest-1.7.0)
  3. Adding the directory with your (future) tests to your toplevel CMakeLists.txt:
    add_subdirectory(test)
  4. Creating a CMakeLists.txt for the test executables:
    include_directories(${gtest_SOURCE_DIR}/include)
    set(test_sources
    # files containing the actual tests
    )
    add_executable(sample_tests ${test_sources})
    target_link_libraries(sample_tests gtest_main)
    
  5. Implementing the actual tests like so (see the googletest samples for more examples):
    #include "gtest/gtest.h"
    
    TEST(SampleTest, AssertionTrue) {
        ASSERT_EQ(1, 1);
    }
    

Integrating test execution and result publishing in Jenkins

  1. Additional build step with shell execution containing something like:
    cd build_dir && test/sample_tests --gtest_output="xml:testresults.xml"
  2. Activate “Publish JUnit test results” post-build action.

Conclusion

The setup of a unit testing environment for a C++ project is easier than many developers think. Using CMake, googletest and Jenkins makes it very similar to unit testing in Java projects.

Prioritizing: order of tasks

This is an entry that extends the post on microprojects by two additional prioritizing strategies.

Strategy: Cover your ass

This is the strategy suggested by the previous post. After preparing a list of milestones and their estimates, you pick the most problematic open milestone and work on it. A list of tasks ordered by this strategy helps you to “fail fast”: in less than half of the estimated time you will know whether you will succeed or bust the budget – even when little is known about the concrete implementation or the estimates are off by some amount.

Strategy: Most value first

In a lot of projects, this is the strategy used by customers. All features not absolutely necessary to achieve the goal are cut or declared optional. Look at minesweeper: you can play it without the highscore, the timer, the modifiable field size or probably even without the random component (i.e. make 99 fields), but not without the mines. After you have determined that your budget is too small, you know what the customer can live without; if you have the option to cut features, then this is probably the strategy for you.

Strategy: Most painful, when omitted

This is the strategy best applied before the pain is real. In contrast to the other two strategies, it contains hard-to-quantify criteria like:

  • Quality
  • Security
  • Performance

The cost to implement them is non-linear and not directly visible. The temptation to use the time and money for more profitable features instead is big. They can be prioritized by:

  • probability of occurrence
  • damage in the case of occurrence
  • implementation cost
  • growth of the above factors with time

This is a lot of work for a single task – most likely you will set up project-wide guidelines and default scenarios that will be reviewed by recurring audits.

How to use partial mocks in real life

Partial mocks are an advanced feature of modern mocking libraries like Mockito. They retain the original code of a class, only stubbing the methods you specify. If you build your system largely from scratch, you most likely will not need them. Sometimes, however, there is no easy way around them when working with dependencies not designed for testability. Let us look at an example:

/**
 * Evil dependency we cannot change
 */
public final class CarvedInStone {

    public CarvedInStone() {
        // may do unwanted things
    }

    public int thisHasSideEffects(int i) {
        return 31337;
    }

    // many more methods
}

public class ClassUnderTest {

    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = new CarvedInStone().thisHasSideEffects(42);
        // more interesting code
        return intermediateResult * 1337;
    }
}

We want to test the computeSomethingInteresting() method of our ClassUnderTest. Unfortunately we cannot replace CarvedInStone, because it is final and does not implement an interface containing the methods of interest. With a small refactoring and partial mocks we can still test almost the complete class:

public class ClassUnderTest {
    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = intermediateResultsFromCarvedInStone(42);
        // more interesting code
        return intermediateResult * 1337;
    }

    protected int intermediateResultsFromCarvedInStone(int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

We refactored our dependency into a protected method that we can stub out with partial mocking, so the class can be tested like this:

public class ClassUnderTestTest {
    @Test
    public void interestingComputation() throws Exception {
        ClassUnderTest cut = spy(new ClassUnderTest());
        doReturn(1234).when(cut).intermediateResultsFromCarvedInStone(42);
        assertEquals(1649858, cut.computeSomethingInteresting());
    }
}

Caveat: Do not use the usual when-thenReturn style:

when(cut.intermediateResultsFromCarvedInStone(42)).thenReturn(1234);

with partial mocks because the real method will get called once!

So the only untested code is a simple delegation. Measures like this refactoring and partial mocking generally serve as a first step, not as the destination.

Where to go from here

To go the whole way we would encapsulate all unmockable dependencies into wrapper objects providing the functionality we need here and inject them into our ClassUnderTest. Then we can replace our wrapper(s) easily using regular mocking.
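
Such a wrapper and its injection could look roughly like this (CarvedInStoneWrapper is a made-up name for illustration):

// Thin, hypothetical wrapper around the unmockable dependency
public class CarvedInStoneWrapper {
    public int intermediateResults(int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

public class ClassUnderTest {
    private final CarvedInStoneWrapper carvedInStone;

    public ClassUnderTest(CarvedInStoneWrapper carvedInStone) {
        this.carvedInStone = carvedInStone;
    }

    public int computeSomethingInteresting() {
        int intermediateResult = carvedInStone.intermediateResults(42);
        return intermediateResult * 1337;
    }
}

In the test a regular mock of CarvedInStoneWrapper is injected, so no partial mocking is needed anymore.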

Doing all this can be a lot of work and/or risk, depending on the situation, so the depicted process serves as a low-risk intermediate step for getting as much important code under test as possible.

Note that the wrappers themselves stay largely untestable like our protected delegating method.

Object Calisthenics: Change the way you think

Some time ago I spoke with my colleague about skill sharpening and training the brain to come up with new solutions. He proposed a two hour session at the weekend implementing a small game using object calisthenics.

Rules

The rules are described in The ThoughtWorks Anthology book. Here is the list for quick reference.

  1. Use only one level of indentation per method.
  2. Don’t use the else keyword.
  3. Wrap all primitives and strings.
  4. Use only one dot per line.
  5. Don’t abbreviate.
  6. Keep all entities small.
  7. Don’t use any classes with more than two instance variables.
  8. Use first-class collections.
  9. Don’t use any getters/setters/properties.

Most of the rules seemed simple enough. Rules 2 and 5 are standard at the Softwareschneiderei, 1, 4, 6 and 8 are stricter versions of common sense, and 3 is tedious object wrapping. The rules I was anxious about were 7 and 9. To increase the learning effect, I added an extra rule to the list that is critical in real-life programming:

  10. Write tests for your code.

It doesn’t matter whether you write tests first, tests after or even test-driven. Only then is the code “value added”.

Experiences

The game was minesweeper. It contains a nice mix of algorithms, data structures and UI. I concentrated my efforts on the algorithmic part. My first step was to analyse and create the needed data structures.

  • The smallest unit is the cell.
  • A cell can be either hidden or revealed, have a mine or be empty.
  • The game field contains such cells in rows and columns.
  • The position of a cell in a field is defined by its coordinate that contains the x and y position.

To associate anything with coordinates, the coordinates had to be comparable to each other. Rule 9 forbids the exposure of internal state, so the Coordinate class got its own equals() and hashCode(). Only the creator of a coordinate had knowledge about the number of dimensions and the values of the positions. Even the tests had no access to the inner state and tested only those two methods.
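
A coordinate class built under these constraints might look like the following sketch (not the original code from the session; rule 3 would additionally demand wrapping the two int positions):

public class Coordinate {
    private final int x;
    private final int y;

    public Coordinate(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // no getters: equality is the only behaviour the outside world gets to see
    @Override
    public boolean equals(Object other) {
        if (!(other instanceof Coordinate)) {
            return false;
        }
        Coordinate that = (Coordinate) other;
        return x == that.x && y == that.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }
}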

Since the revealed flag concept and a mine flag concept had similar properties, I decided not to track cells but to track their flags. Through this architectural decision, I had a field with two flag containers, one for revealed cells and one for cells with mines. An additional benefit was that it was enough to put only the coordinate into the container to mark a cell as a mine.

The next step was to link the parts together and add some behaviour: setting a mine, revealing a cell and also obtaining the number of mines. Setting a mine and marking a cell as revealed are simple tasks with the containers. Testing that the revealed cell contained the mine was trickier. To achieve that, the reveal method got an additional parameter, a closure with a hasMine parameter.

public void reveal(final Coordinate coordinate, final CellContainerVisitor revealedCellsVisitor) {
    revealedCells.mark(coordinate);
    visit(coordinate, revealedCellsVisitor);
}

private void visit(final Coordinate coordinate, final CellContainerVisitor revealedCellsVisitor) {
    revealedCellsVisitor.visit(coordinate, hasMineAt(coordinate));
}

@Test
public void containsMines() {
    final CellContainer target = new CellContainer();
    target.placeMineAt(someCoordinate());

    final List<Coordinate> mineCells = new ArrayList<Coordinate>();
    target.reveal(someCoordinate(), (coordinate, hasMine) -> {
        if (hasMine.equals(new HasMine(true))) {
           mineCells.add(coordinate);
        }
    });

    assertThat(mineCells, hasSize(1));
    assertThat(mineCells, contains(someCoordinate()));
}

The next game rule consumed the rest of the session: calculating the number of mines in the neighborhood. The main obstacle was to compute the coordinate of the neighbour. To do this it is necessary to add an offset to a position in a coordinate without exposing its internal structure. In the end I reverted to using more closures.
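
One possible shape for such a closure-based neighbourhood is sketched below (hypothetical names, not the code from the session; the primitive offsets would still collide with rule 3):

// Hypothetical single-method callback for handing out neighbour coordinates
public interface CoordinateVisitor {
    void visit(Coordinate coordinate);
}

// Inside Coordinate: the coordinate enumerates its neighbours itself,
// so x and y never have to leave the object.
public void forEachNeighbour(final CoordinateVisitor visitor) {
    for (int offsetX = -1; offsetX <= 1; offsetX++) {
        visitColumnOfNeighbours(offsetX, visitor);
    }
}

private void visitColumnOfNeighbours(final int offsetX, final CoordinateVisitor visitor) {
    for (int offsetY = -1; offsetY <= 1; offsetY++) {
        visitIfNotSelf(offsetX, offsetY, visitor);
    }
}

private void visitIfNotSelf(final int offsetX, final int offsetY, final CoordinateVisitor visitor) {
    if (offsetX != 0 || offsetY != 0) {
        visitor.visit(new Coordinate(x + offsetX, y + offsetY));
    }
}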

Conclusion

To achieve my goal I had to reverse the order in which I normally develop business logic: rule 9 seems to support a top-down approach, because the interfaces of domain objects are nearly completely dominated by the way they are used by their containers.

Most of the time in this two-hour session was spent staring at the screen and thinking about how to write readable code and readable tests without exposing internal details of the objects. Time well spent.

Guide to better Unit Tests: Focused Tests

Every now and then we stumble over unit tests with a lot of setup and numerous checked aspects. These tests easily become a maintenance nightmare. While J.B. Rainsberger advocates getting rid of integration tests in his somewhat lengthy but very insightful talk at Agile 2009, he gives some advice I would like to use as a guide to better unit tests. His goal is basic correctness, achieved by means of what he aptly calls focused tests. Focused tests test exactly one interesting behaviour.

The proposed way to write these focused tests is to look at three different topics for each unit under test:

  1. Interactions (Do I ask my collaborators the right questions?)
  2. Do I handle all answers correctly?
  3. Do I answer questions correctly?

Conventional unit testing emphasizes the third topic, which works fine for leaf classes that do not need collaborators. Usually, your programming world is not as simple, so you need mocking and stubbing to check all these aspects without turning your unit test into a large integration test that is slow to run and potentially difficult to maintain.

I will try to show you the approach using a simple and admittedly a bit contrived example. Hopefully it illustrates Rainsberger’s technique well enough. Assume the IllustrationController below is our unit under test:

public class IllustrationController {
    private final PermissionService permissionService;
    private final IllustrationAction action;

    public IllustrationController(PermissionService permissionService, IllustrationAction action) {
        this.permissionService = permissionService;
        this.action = action;
    }

    /**
     * @return true, if the action was executed, false otherwise
     */
    public boolean performIfAllowed(Role r) {
        if (!permissionService.allowed(r)) {
            return false;
        }
        this.action.execute();
        return true;
    }
}

It has two collaborators: PermissionService and IllustrationAction. The first thing to check is:

Do I ask my collaborators the right questions?

In this case this is quite simple to answer, as we only have a few cases: do we pass the right role to the PermissionService? This results in tests like the one below:

@Test
public void asksForPermissionWithCorrectRole() throws Exception {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);

    IllustrationController ic = new IllustrationController(ps, action);
    ic.performIfAllowed(Role.User);
    // this question needs a test in PermissionService
    verify(ps, atLeastOnce()).allowed(Role.User);
    ic.performIfAllowed(Role.Admin);
    // this question needs a test in PermissionService
    verify(ps, atLeastOnce()).allowed(Role.Admin);
}

Do I handle all answers correctly?

In our example only the PermissionService provides two different answers, so we can easily test that:

@Test
public void interactsWithActionBecausePermitted() {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    // there has to be a case when PermissionService returns true, so write a test for it!
    when(ps.allowed(any(Role.class))).thenReturn(true);

    IllustrationController ic = new IllustrationController(ps, action);
    ic.performIfAllowed(Role.Admin);

    verify(ps, atLeastOnce()).allowed(any(Role.class));
    verify(action, times(1)).execute();
}

@Test
public void noActionInteractionBecauseForbidden() {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    // there has to be a case when PermissionService returns false, so write a test for it!
    when(ps.allowed(any(Role.class))).thenReturn(false);

    IllustrationController ic = new IllustrationController(ps, action);
    ic.performIfAllowed(Role.User);

    verify(ps, atLeastOnce()).allowed(any(Role.class));
    verify(action, never()).execute();
}

Note here that not only return values are answers, but exceptions too. If our action may throw exceptions on execution that we can handle, we have to test that as well!
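
For example, with Mockito an exception answer can be stubbed just like a return value; the following sketch (not part of the original example) documents that such an exception currently propagates out of performIfAllowed():

@Test
public void propagatesExceptionFromAction() throws Exception {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    when(ps.allowed(any(Role.class))).thenReturn(true);
    // the collaborator answers with an exception instead of a return value
    doThrow(new IllegalStateException("action failed")).when(action).execute();

    IllustrationController ic = new IllustrationController(ps, action);
    try {
        ic.performIfAllowed(Role.Admin);
        fail("Expected the exception from the action to propagate.");
    } catch (IllegalStateException expected) {
        // the controller does not handle this answer (yet), so it reaches the caller
    }
}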

Do I answer questions correctly?

Our controller answers the question whether the operation was performed or not by returning a boolean from its performIfAllowed() method, so let’s check that:

@Test
public void handlesForbiddenExecution() throws Exception {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    when(ps.allowed(any(Role.class))).thenReturn(false);

    IllustrationController ic = new IllustrationController(ps, action);
    assertFalse("Perform returned success even though it was forbidden.", ic.performIfAllowed(Role.User));
}

@Test
public void handlesSuccessfulExecution() throws Exception {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    when(ps.allowed(any(Role.class))).thenReturn(true);

    IllustrationController ic = new IllustrationController(ps, action);
    assertTrue("Perform returned failure even though it was allowed.", ic.performIfAllowed(Role.Admin));
}

Conclusion

What we are doing here is essentially splitting different aspects of interesting behaviour into their own tests. The first two questions define the contract between our unit under test and its collaborators. For every question we ask, and therefore stub using our mocking framework, there has to be a test that verifies that this question is answered the way we expect. If we handle all the answers correctly, our interaction is deemed to be correct, too. And finally, if our class implements its class contract correctly by answering the third question, our clients also know what to expect and can rely on us.

Because each test focuses on only one aspect, it tends to be simple and should only break if that aspect changes. In many cases this kind of test can make your integration tests obsolete, as Rainsberger states. I think there are cases in modern frameworks like Grails where you do not want to mock all the framework magic because it is too easy to make wrong assumptions about the behaviour of the framework. So in my opinion integration tests provide some additional value there, because the behaviour of the platform stays part of the tests without being tested explicitly.

Build remote controllable applications – expose APIs!

With the advent of mobile computing and (almost) always-available network access, we should think about the way we build our applications. We often think in terms of client and server applications. In my opinion we should expand on this and start building “schizophrenic” applications that are clients and servers at the same time, even if they run on the same machine with only one available client at first. Let me elaborate on this:

Exposing an API and thus acting as a “server” is important for many applications to make integration into other software systems possible. It is obvious in the web where you often integrate widgets or services onto your page, for example like-buttons, maps, avatars and so on. It lets others use your software in their programs via scripting and possibly other means. All that broadens the use cases for your application. It becomes more valuable because there are more possibilities for your clients or others to use it. Often your application acts as a client to different services providing value of its own. So it serves two purposes and provides double or even multiplied value!

I would like to make some more specific examples:

  • Many hardware vendors provide the user program for their hardware as a Windows application. There is no (easy) way to remote control the hardware or to use it on different platforms. Sometimes you get a driver library and can build a service around it, but that usually takes a lot of work.
  • Applications provide only one interface, usually a platform specific GUI and you are stuck with it. Would it not be nice to see specialized views for your mobile device of choice? In the future it is perhaps your brand new smart watch where you would like to see the status.

If your application is built from the ground up with other software as a client in mind, you (or others) can add new ways of interacting with your application and the value it provides. This comes with a side effect that should not be underestimated: exposing an API helps your design!

Added benefits

You do not only open your application to a plethora of use cases, you will also build better software. Thinking about the boundaries of the system, designing an API, using it in different scenarios and with different front ends will make your system better structured and much more clearly separated into modules. When you create the possibility for different clients of your system, you remove most of the danger of mixing UI code with business logic. If your clients have to use a defined API, which is not necessarily the same for all clients, they have to depend on the specified behaviour exposed as a service. It does not matter if the API is Java, CORBA, REST/JSON, SOAP or whatnot; its pure existence will have you define boundaries for your system. Your application will become part of one or more other systems, forcing you to put thought into modularisation and separation at a larger scale than classes or packages. All this will help you with the design, and overall your application will handle more use cases than you might have imagined and will be prepared for changes in computing unforeseen and yet to come.

Adding new frontends or APIs and exchanging different parts of the system become comparatively easy in contrast to many conventionally built monolithic applications.

Know Your Tools: Why Mockitos when() works

Some days ago, my colleague asked how Mockito can differentiate between a method invocation outside of an expectation and one inside. If you want to know it too, read on.

The difference

Typically a mocking framework follows a Record/Replay/Verify model. In the first phase the expectations are recorded, in the second the mocked methods are called by the code under test and finally the expectations are verified. Consider an example with EasyMock straight from their documentation:

//record
mock = createMock(Collaborator.class);
mock.documentAdded("New Document");
//replay
replay(mock);
classUnderTest.addDocument("New Document", new byte[0]);
//verify
verify(mock);

Now, with Mockito the difference between the phases is not as clear as with EasyMock:

//record
LinkedList mockedList = mock(LinkedList.class);
when(mockedList.get(0)).thenReturn("first");
//replay
System.out.println(mockedList.get(0));
//verify
verify(mockedList).get(0);

The invocation of get() is evaluated before the invocations of when() or println(), so there is no way to change the phase before the call. There is also no way to tell whether the current expectation is the last one in order to start the replay mode automatically. How does it work then? All the necessary code is contained in the following classes: MockitoCore, MockHandlerImpl, OngoingStubbing and MockingProgressImpl with its wrapper ThreadSafeMockingProgress.

Record

//record
LinkedList mockedList = mock(LinkedList.class);
when(mockedList.get(0)).thenReturn("first");

In the second line, a mock is created via the mock() method. This call is delegated to MockitoCore, which initiates the creation of a proxy and the registration of MockHandlerImpl as the handler for its invocations.

The third line actually contains three steps. First, the method to stub is invoked on the mock. Because MockHandlerImpl has been registered for all method calls on this proxy, it is now called. It keeps the current invocation, adds it to the list of all recorded invocations and creates the object that collects the expectations, the “OngoingStubbing”. The instance of OngoingStubbing is stored in an instance of MockingProgressImpl. To keep this instance between the calls to the framework, a ThreadLocal member of the singleton ThreadSafeMockingProgress is used. Since no stubbed answer exists for this invocation yet, a default result is returned. The second step is the invocation of when(), which returns the instance of OngoingStubbing previously deposited by MockHandlerImpl in MockingProgressImpl. OngoingStubbing implements the method thenReturn(), which is used to record the expected result in the third step. The result and the cached invocation are then saved together, ready to be retrieved. During this process, the invocation is “consumed” and removed from the list of recorded invocations.
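
A deliberately simplified sketch of this trick (not Mockito’s real code, just the shape of it) could look like this: the invocation handler parks the last mocked call in a ThreadLocal and when() merely picks it up again.

public class TinyMocking {

    public static class OngoingStubbing {
        private final String invokedMethod;
        private Object answer;

        OngoingStubbing(String invokedMethod) {
            this.invokedMethod = invokedMethod;
        }

        public void thenReturn(Object answer) {
            // in Mockito the answer is stored together with the consumed invocation
            this.answer = answer;
        }
    }

    // shared state between the mocked call and the when() call that follows it
    private static final ThreadLocal<OngoingStubbing> LAST_CALL = new ThreadLocal<OngoingStubbing>();

    // stands in for MockHandlerImpl: called by the proxy for every mocked method invocation
    static Object handleInvocation(String invokedMethod) {
        LAST_CALL.set(new OngoingStubbing(invokedMethod));
        return null; // default answer, because nothing has been stubbed yet
    }

    // stands in for Mockito's when(): its argument is only the default result of the
    // call evaluated just before; the real information sits in the ThreadLocal
    public static OngoingStubbing when(Object ignoredDefaultResult) {
        return LAST_CALL.get();
    }
}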

Replay

//replay
System.out.println(mockedList.get(0));

In line five the method get() is called again. Since the result for it has been defined, MockHandlerImpl returns the retrieved result to the caller. The call is recorded and stored for further use.

Verify

//verify
verify(mockedList).get(0);

Verification also consists of multiple steps. The call to verify() marks the end of stubbing and sets the verification mode. In the following call to get(), MockHandlerImpl can tell the phases apart on the basis of the verification mode that has been set and passes the recorded invocations on to the verification code.

Final thoughts

The developers of Mockito achieved a lot with simple constructs like singletons and shared state. The stuff behind the syntactic sugar is sometimes even considered magic. I hope that, after reading this article, you no longer believe in magic but use your knowledge to create similarly great frameworks.

Another point: Since Mockito uses ThreadLocal as storage for its state, is it possible to confuse it by using multiple threads? What do you think?