How we distribute our backups geographically

If your data security assessments cover not only a single point of failure but also a single area of failure, here is a simple and effective process to distribute your backups.

We are a software development company, so all of our most valuable assets are constantly endangered by hardware failure. We regularly do risk assessments in regard to data security and over the years have created a fine-tuned system of duplication and doubled duplication to prevent data loss. Those assessments aren’t really complicated: you basically sit down, relax and think about your deepest fears on a certain topic. Then you write them down and act to avoid or circumvent them. Here’s an example of some results:

  • No data transfer over unsecured internet connections
  • No single point of failure
  • No single area of failure

The last result is of particular interest today: We want to prevent data loss in case of an “area-based disaster”, like a whole-building fire or a meteorite impact. To be clear on the meteorite scenario: it is both highly improbable and highly dangerous. If the meteorite happens to be just a bit bigger than average, we won’t worry about backups anymore, because we all live within a certain radius of our company. Yes, worst-case scenarios are always morbid.

Stages of data-loss prevention

We have several measures in place to prevent data loss. Technologies like RAID drives and processes like daily backups and several copies of those backups make sure that we always have at least one copy of all important data, even in the most drastic locally confined disaster. But to adhere to the first rule that no data transfer may happen over unsecured internet connections, and to make sure that an internet connection isn’t a single point of failure that may compromise data security, we had to come up with a way to distribute our backups physically without much effort.

The backup export disks

Our system relies on three facts:

  • Small and resilient hard drives with high capacity are affordable
  • Every home of our employees can be a unique backup storage location
  • If we take turns, the effort is low for everybody, but high enough to be effective

So we bought a “backup export disk” for every employee. It’s a 2.5″ USB-powered hard drive with enough storage capacity to keep our most important data. All export disks are registered with the backup distribution system, which can, upon connection, provide them with the most current backup. There is also a little “backup export token” that gets passed from employee to employee in a predetermined order. The token is just a piece of cardboard that says “tag, you are it!”.

Our backup export process

So what do you have to do when you find the “backup export token” on your desk? Just five easy steps:

  • Bring your backup export disk the next day (this is the hardest part: remembering to bag the disk at home)
  • Plug it into the backup distribution system (a specific computer, normally switched off, with a USB cable) and switch it on
  • Wait for the system to do its job. This will take a while, but you’ll get an e-mail on completion, so just wait for the e-mail to arrive
  • Unplug the backup export disk and take it back home (store it in a dry and safe place)
  • Forward the backup export token to the next employee in line

That’s all there is to the visible part of the process. Some more things happen behind the scenes, but the process mostly relies on the effect of repetition by several operators.
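For illustration, here is a minimal sketch of what such a backup distribution system might do behind the scenes when a disk is connected: recognize a registered export disk, copy the current (encrypted) backup onto it, report the free space and notify the owner. All class names, paths and the notification stub are invented for this example; the real system is not shown here.

import java.io.IOException;
import java.nio.file.*;

// Hypothetical sketch of the backup distribution loop; names and paths are invented.
public class BackupExporter {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path mountRoot = Paths.get("/media");                         // where export disks appear
        Path currentBackup = Paths.get("/backups/current.img.gpg");   // the encrypted backup image

        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            mountRoot.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
            while (true) {
                WatchKey key = watcher.take();                        // blocks until a disk is mounted
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path disk = mountRoot.resolve((Path) event.context());
                    if (!isRegisteredExportDisk(disk)) {
                        continue;                                     // ignore unknown drives
                    }
                    Files.copy(currentBackup, disk.resolve(currentBackup.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                    reportFreeSpace(disk);                            // health figures for the admins
                    notifyOwner(disk);                                // completion e-mail in the real system
                }
                key.reset();
            }
        }
    }

    private static boolean isRegisteredExportDisk(Path disk) {
        // stand-in for the registry lookup of known export disks
        return Files.exists(disk.resolve(".export-disk-id"));
    }

    private static void reportFreeSpace(Path disk) throws IOException {
        System.out.println("Free space on " + disk + ": " + Files.getFileStore(disk).getUsableSpace() + " bytes");
    }

    private static void notifyOwner(Path disk) {
        // a log line stands in for the e-mail notification
        System.out.println("Backup export to " + disk + " completed.");
    }
}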

Simple and effective

This process ensures that our backup gets “exported” at least thrice a week to different locations. All in all, we store our backup in at least five locations with a maximum age of two weeks. The system can scale up (or down) without limitation, so it won’t change even if we double or triple the location count or the export frequency. And no individual disk can be compromised because the data on it is secured by strong encryption, so there is no need to restrict physical access to it at the storage locations (like using a safe) or to fret if a disk gets lost.

Decentralized, but supervised

Every time a backup export disk is connected to the backup distribution system, the disk’s health figures and remaining space are reported to the administrators. Using this information, we can also reconstruct the distribution history and fetch the most current disk in an emergency. If a disk shows its age, it gets replaced by a new one without much effort. We only need to tell the backup distribution system about it and associate it with an employee so that the e-mail is sent to the right person.

Conclusion

By assigning the core mechanics of keeping the backups distributed to our employees and automating the rest, we have reached a level of data security that even protects against area-effect scenarios.

TANGO – Making equipment remotely controllable

Usually hardware vendors ship some end-user application for Microsoft Windows and drivers for their hardware. Sometimes there are generic applications like coriander for FireWire cameras. While this is often enough, most of these solutions are not remotely controllable. Some of our clients use multiple devices and pieces of equipment to conduct their experiments, which must be orchestrated to achieve the desired results. This is where TANGO – an open source software (OSS) control system framework – comes into play.

Most of the time, hardware can also be controlled using a standardized or proprietary protocol and/or a vendor library. TANGO makes it easy to expose the desired functionality of the hardware through a well-defined and explorable interface consisting of attributes and commands. Such an interface to hardware – or to a logical piece of equipment completely realised in software – is called a device in TANGO terms.

Devices are available over the (intra)net and can be controlled manually or through various scripting systems. Integrating your hardware as TANGO devices into the control system opens up a lot of possibilities for using and monitoring your equipment efficiently and comfortably with TANGO clients. There are a lot of bindings for TANGO devices if you do not want to program your own TANGO client in C++, Java or Python, for example LabVIEW, Matlab, IGOR Pro, Panorama and WinCC OA.
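To give a first impression of what controlling such a device looks like from a client’s point of view, here is a minimal sketch using the TANGO Java client API (fr.esrf.TangoApi). It assumes a running TANGO installation with the TangoTest device “sys/tg_test/1”; the device and attribute names are just the usual test defaults, not anything from our projects.

import fr.esrf.Tango.DevFailed;
import fr.esrf.TangoApi.DeviceAttribute;
import fr.esrf.TangoApi.DeviceData;
import fr.esrf.TangoApi.DeviceProxy;

// Minimal TANGO client sketch: read an attribute and execute a command on a device.
public class TangoClientSketch {
    public static void main(String[] args) {
        try {
            DeviceProxy device = new DeviceProxy("sys/tg_test/1");

            // read an attribute exposed by the device
            DeviceAttribute attribute = device.read_attribute("double_scalar");
            System.out.println("double_scalar = " + attribute.extractDouble());

            // execute a command on the device; "Status" is available on every TANGO device
            DeviceData result = device.command_inout("Status");
            System.out.println("Status: " + result.extractString());
        } catch (DevFailed e) {
            // all TANGO errors arrive as DevFailed exceptions
            e.printStackTrace();
        }
    }
}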

So if you need to control several pieces of hardware at once, have a look at the TANGO framework. It features

  • network transparency
  • platform-independence (Windows, Linux, Mac OS X etc.) and -interoperability
  • cross-language support (C++, Java and Python)
  • a rich set of tools and frameworks

There is a lively community around TANGO, and many drivers already exist as open source projects for different types of equipment: various cameras, a plethora of motion controllers and so on. I will provide a deeper look at the concepts, with code examples and guidelines for building TANGO devices, in future posts.

Snowflakes are a bad sign

Snowflake servers are brittle and expensive. Treating hardware like cattle instead of pets is one strategy to overcome the snowflake syndrome. Here are some strategies to foster this mindset.

First, allow me a bad joke: If you enter your server room and find real snowflakes, it might be a sign that your air conditioning is over-ambitious. But even in a normally air-conditioned server room, you will probably see some snowflakes, just in the metaphorical sense.

Snowflake servers

Snowflakes are servers with a unique layout. I cannot say it better than Martin Fowler did two years ago in his bliki posting SnowflakeServer, but I’m trying to add some insights and more current tools. The term probably originates in the motto that everybody is a “precious unique snowflake”. This holds true for humans and animals, but not for machines.

Let’s examine how a snowflake is born. Imagine that in the beginning, all servers are the same: standard hardware, a default operating system and nothing more. You pick one server to host a special application and adjust the hardware accordingly. Now you already have a hardware snowflake. That’s not the worst thing, but you had better document your rationale behind the adjustment in an accessible way, perhaps on a wiki page specifically for that server. Because sooner or later, that machine will fail (or become hopelessly obsolete) and will need to be replaced with adequate hardware. Without your documentation, you’ll have to remember why the old machine had that specific layout, and whether it was sufficient. I’ve seen the “ancient server” anti-pattern much too often: a dusty machine, buzzing like an asthmatic pensioner in the last corner of the server room, with nobody allowed near it. Because there are no spare parts (VESA local bus isn’t supported anymore), if one part fails, the whole system is doomed, operating system and software included. Entire organizations rely on the readiness for duty of one hardware assembly, and almost always a crude one.

Servers as cattle

The ancient server scenario is more likely to happen when you treat your servers like pets. This is the crucial mental switch you’ll have to make: servers are cattle, not pets. They have numbers, not names. They can be monitored, upgraded and fostered, but at the end of the day, they serve a clearly defined business case and deserve no emotional investment from their owner. If a pet gets hurt, you take it to the veterinarian and cure it. If cattle get sick, you call the veterinarian to make sure it’s not contagious and then replace the affected individuals, because curing them would be more expensive. Pets live as long as they can, cattle have a date of expiry. And our cattle (servers) really aren’t sentient, so stop treating them like pets.

Strategies to run a ranch

Our current answer to make the transition from pet zoo to cattle ranch without significantly increasing the amount of metal in our server room can be boiled down to three strategies:

  • Virtualize the logical machines. Instead of working on “real metal machines”, more and more of our services run inside virtual machines. This allows for a clearer separation of concerns (one duty per machine) and keeps the emotional commitment towards the machine low. Currently, we use VirtualBox and Docker for this task. Both are easy to set up and fulfill their task well.
  • Remove the names from real metal machines. We now actually number our real machines. Giving clever names to virtual machines is still possible, but not necessary: they are probably only accessed using DNS aliases that specify their use, like “projectX-database” or “projectY-webserver”. We even choose the computer cases for our machines accordingly, to separate the pets (unique cases) from the cattle (uniform cases).
  • Specify the machine. The virtualized hardware must be described and explained (e.g. why this particular machine needs twice the normal RAM ration). Currently, we use Vagrant to specify the hardware and operating system of our virtual machines. The specifications are stored in a version-controlled repository, so there is a place where most of our server infrastructure is described in a deployable fashion. Even more, all necessary third-party software products are specified, too. Imagine a todo list of what to install and prepare, like the one you’ve handed over to your admin in the past, but automatically executable (see the sketch after this list). We currently use Ansible for our configuration management because it has very low requirements for the target platform itself and a low learning curve.
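To make the “automatically executable todo list” idea a bit more tangible, here is a minimal, hypothetical Ansible playbook in that spirit. The host group, package and file names are invented for this illustration and are not taken from our actual repository.

# playbook.yml - hypothetical specification of a "projectX-database" machine
- hosts: projectX-database
  become: yes
  tasks:
    - name: install the database server
      apt:
        name: postgresql
        state: present

    - name: deploy the tuned database configuration
      copy:
        src: files/postgresql.conf
        dest: /etc/postgresql/postgresql.conf
      notify: restart postgresql

  handlers:
    - name: restart postgresql
      service:
        name: postgresql
        state: restarted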

Applying these three strategies, every (logical) machine in our server room should be reproducible. They are still individuals, specifically tailored for their jobs, but completely specified and virtualized. The real metal machines only run the bare minimum of software necessary to host the logical machines. None of the machines invites emotional attachment: they are tools for their job.

Data is snow

One important insight is that persistent data will turn your machine into a snowflake over time (we use the term as a verb: “data will snowflake your machine”). You will become emotionally and financially attached to this data – otherwise, there would be no need to persist it in the first place. We don’t have a panacea here yet. You probably want to use a database and a sophisticated backup strategy. Just make sure that the presence of precious data doesn’t obscure your stance towards the machine. You want to keep the data and still be able to throw the machine away.

Don’t stop at machines

We are software developers, so we cannot deny that the concept of snowflaking is very helpful for our own projects, too. Every dependency that we can bring with us during deployment (called “self-containment” or “batteries included” in our slang) is one less way of “snowflaking” the target machine. Every piece of infrastructure (real, virtualized or purely conceptual) we implicitly rely on (like valid certificates, SSH keys or passwords and database locations) will snowflake the target machine and should be treated accordingly: documented, specified and automated. If you hot-fix a production server, that is definitely a huge snowflaking action that needs to be at least carefully documented. You can’t avoid snowflaking completely, but strive to minimize the manual part of it and then sanitize the automated part.

Snowflaking is a concept

We’ve found the term “snowflaking” very useful to convey the necessity and value of documenting, specifying and automating everything that doesn’t happen on a developer machine (and even there, the build process is fully automated). Snowflaked environments tend to be expensive in maintenance and brittle in operation. The effort to mitigate the effects of snowflaking pays off very soon and is highly reusable. But even more powerful is the change in mindset as soon as the concept of “snowflaking” is understood. It’s a short term for a broad range of strategies and values/beliefs. It’s a powerful and scalable concept.

We’d love to hear your experiences

You’ve probably experimented with various tools and concepts to manage your servers, too. What were your experiences and insights? Add a comment below; we are looking forward to your input.

A small story about outsourcing

A true story about why producing more cost-effectively isn’t always cheaper in the end. And a story about a process that wasn’t tailored around human needs.

Let me tell you a story about human labor and automation, cost efficiency and the results of local optimization. The story itself is true, but nearly all details have been changed to protect the innocent.

An opportunity

Once, there was a company that produced sensor equipment with a large portion of electronic circuitry. The whole device was manufactured at the company’s main factory and admired for its outstanding rigidity. Then, one day, the opportunity offered itself to outsource the assembly of the electronics to a country in the Asian region. The company boss immediately recognized the business value in this change: The same parts would be produced by the company, shipped to the Asian contract manufacturer, assembled and promptly returned. Then, the company’s engineers would provide the firmware and software for the final product. By outsourcing the most generic step in the production line, the production costs could be lowered significantly.

A detail

There is one little detail that needs to be told: The sensors relied on some very specific and fragile parts only the company itself could produce. These parts were especially sensitive to the atmosphere they were assembled in. A very important aspect of the production process was a special-purpose machine that could assemble the parts while sustaining the necessary gas mixture and pressure. Upon closer inspection, one could say that the essence of this product’s secret ingredient wasn’t the parts themselves, but the specifically tailored production process.

The special-purpose machine had to be transferred to the contract manufacturer in Asia; otherwise, the sensors could not be assembled. This was a minor inconvenience compared to the large profits that could be realized once the outsourcing was completed.

A success

The machine was transported to the contractor, installed and tested. A special crew from the contractor’s staff was trained to operate the machine properly and within the necessary conditions. After a while, the production line began its work. The first sensors assembled offshore returned home. They all worked as intended. The local engineers couldn’t tell the difference except by looking at the serial number. The company management was pleased; profitability had increased.

A failure

Everything went well for a while. Then, the local engineers noticed a slightly higher number of faulty sensors. Not long after, the quality assurance department reported decreasing performance numbers for the devices. The rigidity of the device, its unique selling point, slowly deteriorated. The company management was worried and established a task force to identify the root cause of this change for the worse.

A mystery

The task force inspected the reported problems and couldn’t make much sense of the numbers. It wasn’t a problem of whole faulty batches (which would indicate incidents like transport damage), but not one of individual faulty pieces either. Instead, they found that if a piece was faulty, the next few pieces from that series were also faulty. Then, there were long intervals of perfectly good pieces until another group of clearly faulty pieces occurred. Something had to be going wrong during the assembly process at the contractor.

A revelation

When the task force arrived at the contractor’s factory and inspected the special-purpose machine, they found that the atmosphere regulator was damaged. This automatic part of the machine takes care of the mixture and pressure of the gas in the machine during operation and keeps it in the necessary range by applying or draining specific gases. The contractor hadn’t bothered to replace the rather expensive part when cheap human labor was readily available. They had hired a worker to perform the atmosphere regulation manually. A lowly paid worker had to watch the pressure numbers and supply more or less gas, just as needed. This was nearly as good as the automatic regulation and still good enough to produce quality devices.

An explanation

But the contractor only hired one worker per shift. This worker had to go to the toilet sometimes during the work day. When he was away from the machine, it went along unregulated, soon becoming misadjusted to the point of only producing junk. Once the worker returned, he would balance the numbers and bring the machine back into an OK state. This situation occurred periodically, but not often enough to taint whole batches. Only during his absences were the series of faulty devices produced.

A conclusion

I don’t want to add much moral to this story. Perhaps one thing should be considered when recapitulating: Both the company and the contractor “optimized” their costs locally by making cost-efficient decisions that turned out to be expensive in the long run. The company chose cheap outsourced assembly over expensive but controlled local production for arguably the most delicate step in the whole production process. The contractor chose the low ongoing cost of cheap human labor over a high one-time investment in automation. Both decisions are comprehensible on their own, but they led to a situation that would never have occurred in the original setting.

SSD? Don’t think! Just Buy!

SSDs make everything blazingly fast – even Grails + IDEA development

My personal experience with SSDs began with an Intel X25-M that I built into a Lenovo Thinkpad R61. It replaced a Seagate 160 GB 5400 rpm drive which, in combination with Windows Vista … well, let’s just say, it wasn’t that fast.

The SSD changed everything. It was not just faster, it was downright awesome! As if I had a completely new computer.

With that in mind I thought about my desktop PC. It’s a little more than two years old: a Windows XP box, Intel Core2Duo 2.7 GHz, 4 GB RAM, with a not-so-slow Samsung HDD. I use it mainly for programming, which most of the time means Grails programming under IntelliJ IDEA.

And let me tell you, the Grails + IDEA combination can get dog slow at times. The start-up time of IDEA alone gives you time to skim over the first three pages of Hacker News and read the latest XKCD.

So the plan was to put an extra SSD into the Windows box and put only programming related stuff on it. This would save me the potential hassle of moving my whole system but would still give me development speed-up.

I had to be a little careful because the standard locations for IDEA’s so-called “system path” and “config path” are in the user’s home directory. (By the way, these settings can be changed in the file “idea.properties”, which resides in “IDEA_INSTALLATION_DIR\bin”, e.g. c:\Program Files\JetBrains\IntelliJ IDEA 9.0.4\bin.)
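For illustration, the relevant entries in idea.properties might look like the following; the drive letter and directory names are made up for this example, so adapt them to wherever your SSD is mounted.

# idea.properties - point IDEA's caches and configuration to the SSD (paths are examples)
idea.system.path=d:/idea/system
idea.config.path=d:/idea/config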

I think you already guessed the result. Three words: fast, faster, SSD. It’s just amazing! IDEA start-up is so fast now, I barely have time for a quick look at the newest headlines on InfoQ.

The next step is of course to put the whole system on SSD but that will probably have to wait until we upgrade the whole company to Win7. Can’t wait… 🙂

About PrintStream and Exceptions

Several of our projects deal with sensor hardware of different types, often connected via the good old™ serial port. That is fine most of the time, because most protocols are simple and RXTX provides a nice cross-platform library for most of your serial port needs. But many new computers do not feature the old RS232 serial ports anymore, or other constraints prevent the use of a plain RS232 serial port. This is where serial converters like the Advantech ADAM 4570 (serial-to-Ethernet) or USB-to-serial converters come into play. Usually this works fine.

Now one of our customers had a test system using an unreliable converter with sensor hardware. The hardware problems uncovered a robustness issue in our software, which crashed the JVM when the virtual serial port of the converter disappeared and our app tried to write to it. Despite the faulty hardware, our software had to be robust because it manages many more devices than just that one serial-attached sensor. Looking at the problem, we discovered that the crash occurred somewhere in the native part of RXTX. So we decided to scratch our own itch (and the customer’s) and set out to fix the issue in RXTX at an Open Source Love Day (OSLD). We fixed the problem and submitted the patch to the bug tracker of the RXTX project. Our sample program now worked flawlessly and threw an IOException when the serial port failed in some way.

Happy to have fixed the problem, we incorporated the patched RXTX into our production software, but it still crashed and no IOException appeared anywhere in the logs. After another bug-hunting session we spotted the subtle difference between the sample and the production program: the sample wrote to the OutputStream directly, while production used a PrintStream. PrintStream silently swallows all exceptions, which proved fatal in our use case with the unreliable stream carrier. So the final fix was essentially replacing our PrintStream code

RXTXPort port = new RXTXPort("COM6");
// the PrintStream hides any IOException thrown by the port's OutputStream
PrintStream p = new PrintStream(port.getOutputStream(), true, "ISO-8859-1");
p.print("command");

with using OutputStream directly:

RXTXPort port = new RXTXPort("COM6");
OutputStream o = port.getOutputStream();
// write() now throws an IOException if the virtual serial port has vanished
o.write("command".getBytes("ISO-8859-1"));

Conclusion

Be careful when using PrintStream with unreliable stream carriers: it swallows exceptions! That may hide problems you want to know about. Often PrintStream’s behaviour will not be a problem, but in certain cases like the one depicted above it causes a lot of headaches.
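If you cannot get rid of the PrintStream, its error state can at least be queried explicitly via checkError(), which flushes the stream and reports whether a write has failed. Here is a minimal sketch of that behaviour; the failing OutputStream merely simulates a vanished stream carrier.

import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;

public class PrintStreamErrorCheck {
    public static void main(String[] args) throws IOException {
        OutputStream failing = new OutputStream() {
            @Override
            public void write(int b) throws IOException {
                throw new IOException("stream carrier gone"); // simulate the vanished serial port
            }
        };

        PrintStream p = new PrintStream(failing, true, "ISO-8859-1");
        p.print("command");      // no exception reaches the caller...
        if (p.checkError()) {    // ...but checkError() flushes and reveals the failure
            System.err.println("write failed - time to raise an alarm");
        }
    }
}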

SSD and (One)-touch Backup solution

As explained a while ago we (developers) get an annual creativity budget. This time I decided to improve my notebook working experience and reliability by introducing two new items:

  1. A fast SSD replacing the conventional, relatively slow 2.5″ hard disk
  2. A one-touch backup solution which in fact is a no-touch solution

The SSD is an X25-M from Intel with 160 GB and the backup solution is a Seagate Replica with 500 GB of disk space. Although there are recurring problems with the firmware and toolbox software, the Intel SSD seemed to be the best choice price-, performance- and reliability-wise. To be on the safe side data-wise, we paired it with the backup solution. Let me first explain the migration, which went really smoothly and was the first stress test for the backup system. The steps were the following:

  1. Back up the existing system with the Replica, which does not require any user interaction after the client backup software has been installed automatically
  2. Replace the original hard disk with the SSD
  3. Reboot the system with the recovery CD of the Replica solution and restore the backed-up system
  4. Reboot the recovered system from the SSD

The whole process went really smoothly and only took some hours of data copying. There were no hiccups whatsoever. After booting from the SSD my system was exactly like before, so the Replica had already proved that it really works, even in the worst case of a complete drive loss.

The performance of the whole system is noticeably better, especially at system and application startup, as you would expect.

Conclusion

The backup solution is so damn easy to use that I would recommend it to everybody running Windows and caring about the data on their system. To keep your backup up to date, just plug the external hard drive into a free USB port and continue working. You don’t have to do any configuration or deal with other hassles which often doom any effort to deploy a working backup solution. This is even more true for private users who do not have the knowledge to fiddle with system details. So go for a “one-touch backup” if you do not have a working solution in use already!

A modern SSD can really improve your working experience, especially on notebooks where hard disk performance is far worse than in a workstation environment. So older hardware can get a new lease of life and make your work easier and more productive.

Speed up your buildbox, Part III: Memory

This is the third part of a series on how to boost your build box without much effort. This episode talks about the effects of faster and more RAM.

In the first and second part of our effort to speed up our buildbox, we replaced the hard disk with a RAM disk and swapped in a bigger CPU. This brought the build time down from 03:30 minutes to 02:00 minutes.

Boosting the memory

When we began the journey, we wanted to undercut the 02:00 minutes threshold. The last component that directly impacts the performance of our box was the memory. We started out with 4 GB of DDR2-800 modules. To get a feeling for the effects, we upgraded to 4 GB of DDR2-1066 first and then added another 4 GB, resulting in 8 GB of RAM. We expected the performance gain to be small, but noticeable. The RAM disk, for example, is directly affected by memory speed.

As much, but faster

The first upgrade brought the first surprise: Upgrading from DDR2-800 to DDR2-1066 modules didn’t change anything. It’s not that the mainboard or CPU doesn’t support the faster RAM; the slower modules just seem to have been fast enough already, regardless of the data bus clock rate. Our build process still took 02:00 minutes, reproducibly and without exception.

Filling all the banks

The mainboard can hold up to 16 GB of RAM, but our budget only allowed for 8 GB of DDR2-1066 RAM. We installed it and ran the same 32-bit Ubuntu Linux as before. The build process took 02:00 minutes, which we had come to expect by now.

Changing to 64bit

We changed the boot hard disk, installed a 64-bit Ubuntu Linux and ran the build again. Still 02:00 minutes. The switch to 64 bit wasn’t a big deal with Java, but some of the included native libraries complained about the change. Recompiling them solved the issue.

Finally reaching the target

As a last measure, we increased the maximum memory of the build JVM to the biggest value it would accept. This was -Xmx2600m, a surplus of 600 MB over the original setting. This sped up the build process by five seconds; it now took 01:55 minutes.

Conclusion and perspective

We’ve reached our anticipated target of less than two minutes build time. We exceeded our original budget of 500 EUR, but some of the parts we bought ultimately weren’t used in the build box, but elsewhere. The two parts that made the whole difference were the CPU and some more memory to spend on the RAM disk.

If you want to speed up your single build box, aim for the CPU/RAM combo and try to install a RAM disk to perform all the work on.

This leads me to the perspective of the next part of the series: Even if you’ve plugged in the most expensive CPU and enormous amounts of RAM to speed up your buildbox, you still aren’t done. You should invest some time to look into distributed builds. Hudson, our continuous integration server, provides nearly instant “build slave” support. With this feature, you can set up a whole build farm to further increase your build throughput.

Stay tuned for “Part IV: Beyond the box”

Speed up your buildbox, Part II: Processor

This is the second part of a series on how to boost your build box without much effort. This episode talks about the effects of different processors.

In the first part of our effort to speed up our buildbox, we replaced the spindle hard disk with a Solid State Disk (SSD) and finally a RAM disk. This brought the build time down from 03:30 minutes to 02:50 minutes.

The Central Performance Unit

The next step on our journey to a faster buildbox was to replace the processor. Our initial processor was an Intel Core2 Duo E6750 with 2.67 GHz. To our pleasure, the processor socket, namely the LGA775 socket, is extremely versatile in supporting different processors. We had no problems plugging in faster dual or even quad core processors, apart from having to upgrade the BIOS.

Taking the 3 GHz mark

The next processor to try out was an Intel Core2 Duo E8500 with 3.17 GHz operating frequency. The L2 cache went up from 4 MB to 6 MB.

The build time went down immediately from 02:50 minutes to 02:20 minutes. That’s nearly 20 percent less build time. And it’s perfectly linear with the CPU speed increase (also nearly 20 percent).

As a result: Investing in CPU clock power seems to pay off. The higher the frequency, the lower the build time.

Doubling the cores

Fortunately, the LGA775 socket supports quad core processors, too. We plugged in a Core2 Quad Q9550 with 2.8 GHz and ran the build again.

The result was astonishing: Despite the lower clock frequency, the build time dropped from 02:20 minutes to 02:00 minutes. We can’t really explain this one with basic math like the frequency scaling that accounted for the dual core result.

If your build is perfectly multithreaded (something javac isn’t), you’ll notice an even bigger speedup.

To sum it up: you can’t have enough GHz or processor cores when running a build.

Reviewing the result

We replaced the hard disk with RAM and upgraded the processor to meet the current performance threshold. This brought us from a starting build time of 03:30 minutes to 02:00 minutes now. The CPU is the major player in this game, so upgrade it first.

Outlook on the third part

But what about the RAM? We really wanted to know what happens when we replace the RAM with more and faster modules. Read more about this experiment in the third part of the series, coming soon.

Honey, I shrunk the build box

Meet the world’s smallest Hudson server, operating with even less power than your energy-saving light bulb.

We are currently posting an ongoing series on how to make your (Hudson) build box faster. This article talks about making it smaller.

Making a build box as small as possible isn’t the most common requirement today, and it wasn’t for us either. But when I privately bought a fit-PC2, we couldn’t resist trying it out as a Hudson server.

The fit-PC2

This is a computer that really fits everywhere: in your car, on the back of your monitor or, as in our case, on a beer mat. It’s a fully equipped PC with the specification of a standard netbook (Atom 1.6 GHz CPU, 1 GB RAM, 160 GB HDD) and the dimensions of a 5-port Ethernet switch. The most astonishing fact about it is that it uses standard-size 2.5″ notebook hard disks. For more information about the computer, look around the CompuLab website; they do not exaggerate.

Operating the fit-PC2

The fit-PC2 is a normal computer in every respect. We run ours with Ubuntu Linux and official Java packages from Sun. As the case is fanless, it accumulates some heat, but never over 60° Celsius (140° Fahrenheit). We measured an average temperature of 45 °C on the case surface while building a large project. The Gnome desktop feels snappy, application load delays are sufficiently small and customizing the software setup is as easy as it gets with Ubuntu.

Setting up Hudson

Installing the Hudson continuous integration server on a Debian-based Linux system is a matter of three commands. See Kohsuke Kawaguchi’s blog entry on that topic for details. After the automatic installation procedure, Hudson already runs on port 8080 of the machine. Setting up the project’s job and initiating the first build was a matter of a few minutes. Hudson reacts swiftly to website clicks.

The world’s smallest Hudson server

This is the smallest Hudson instance we’ve heard of up to now. It runs in a case measuring 11.5 x 10.0 x 2.6 centimeters. The power consumption is around 8 watts when building (including the consumption of the measuring device itself), and it would be even lower once we replace the mechanical hard disk with a solid state disk (SSD).

Given the performance specifications above, you should not expect a speed miracle. The fit-PC2 finished the project’s build within 09:50 minutes, which is dangerously near the ten-minute mark for acceptable continuous feedback. So this box will not go into regular duty, but will return home to me (remember, I bought it privately).

Conclusion

The whole purpose of this experiment was to get used to a new era of microcomputers. They are palm-sized and frugal enough to nearly run on batteries, but fully equipped with standard components and powerful enough to perform regular tasks. The fit-PC2 is a strong representative of these devices.

Show off your Hudson server

Well, to be honest, another purpose of this experiment was to show off our Hudson skills: we operate Hudson instances ranging from heterogeneous slave farms to this single 300-cubic-centimetre box. We would like to hear about your Hudson instance. You may add a comment and/or share a link with your story. Maybe the Hudson wiki is the ultimate place to gather all the stories.


P.S. This blog entry’s title is an adaptation of my childhood favorite movie.