Scala: Easier to read (and write), harder to understand?

There has been a lively discussion about Scala’s complexity going on for some weeks now on the web, even with a response from Martin Odersky. I want to throw my 2¢ into the discussion, together with some hopefully new aspects.
Scala’s liberal syntax rules and compiler magic like type inference and implicit conversions allow for nicely written APIs and DSLs that almost read like prose. Take a look at APIs like ScalaTest and imagine the Java/JUnit equivalent:

@Test def demonstrateScalaTest() {
  val sb = new StringBuilder("Scala")
  sb.append(" is cool!")
  sb.toString should be ("Scala is cool!")
  evaluating { "concise".charAt(-1) } should produce [StringIndexOutOfBoundsException]
}

There are really nice features that reduce day-to-day programming tasks to keywords or one-liners. Here are some examples:

// singletons have their own keyword (object), static does not exist!
object MySingleton {
  def printMessage {
    println("I am the only one")
  }
}

// lazy initialization/evaluation
lazy val complexResult = computeForHours()

// bean-style data container with two scala properties and one java-bean property with getter+setter
class Data(val readOnly: String, var readWrite: Int, @BeanProperty var javaProperty: String)

// tuples as return values or quick data transfer objects (DTO) for methods yielding multiple data objects
def correctCoords(x: Double, y: Double) = (x + 12, y * 0.5)
val (correctedX, correctedY) = correctCoords(0.37, 34.2)
println("corrected: " + correctedX + ", " + correctedY)

On the other hand, there are so many features built in that it can be really hard to understand the code if you are not a Scala programmer with some experience. I like the differentiation between application and library code that Martin Odersky himself makes in Programming in Scala. The Scala frameworks I have tried so far (Lift, ScalaTest and scala-swing) make your life very easy as long as you just use them. It is really a breeze and much more fun than using most APIs in Java, for example. But when something goes wrong or you really want or have to understand what is going on, you can have a hard time. This is true at least for a Scala beginner, and sometimes perhaps for a pro, too.

Final Thoughts
In my opinion Scala is a very nice language that successfully combines clean object-oriented programming with functional features. You can migrate from a pure OO style to a nice hybrid “Scala style”, much like many programmers did when they first used Java in a mostly procedural style, with classes serving only as namespaces for their static methods. I am quite sure that a Scala code style and best practices still have to evolve. Programmers will need time to dive into the language and use it to their benefit. I hope Scala prospers and gains attention in the industry, because I personally think it is a nice step forward compared to Java (which turns more and more into a mess where you need profound knowledge to fight your problems).

Regarding the complexity, which certainly exists in Scala, I only want to raise some questions which may be answered sometime in the future:

  • Maybe the tooling is just not there (yet)?
  • Maybe you sometimes just don’t have to understand everything that’s happening underneath?
  • Maybe Scala makes debugging much rarer, but harder when something does not work out?
  • Maybe the features and power of Scala are worth learning?
  • Maybe certain features will just be banned by teams, as sometimes happens in Java teams (think of switch-case, the ?: operator or autoboxing)?

== isn’t equals, or is it?

Beware of the subtle differences between == and equals in Java and Groovy.

== and equals behave differently in Java (and Groovy). You all know the subtle difference when it comes to comparing strings: equals is recommended in Java, while == works in Groovy, too. So you could think that using equals is always the better option… think again!
Take a look at the following Groovy code:

  String test = "Test"
  assertTrue("Test" == test) // true!
  assertTrue("Test" == "${test}") // true!
  assertTrue("Test".equals("${test}")) // false?!

The same happens with numbers:

  assertTrue(1L == 1) // true!
  assertTrue(1L.equals(1)) // false?!

A look at the API description of equals shows the underlying cause (taken from the Integer class):

Compares this object to the specified object. The result is true if and only if the argument is not null and is an Integer object that contains the same int value as this object.

equals follows the contract or best practice (see Effective Java) that the compared objects must be of the same class. So comparing different types via equals always results in false. You can convert both arguments to the same type beforehand to ensure that you get the expected behavior: a comparison of values. So the next time you compare two values, think about their types or convert both values to the same type first.
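
To make the last point concrete, here is a small sketch in the style of the assertions above; the exact conversion calls are just one possible choice.

  String test = "Test"
  assertTrue("Test".equals("${test}".toString())) // GString converted to String: true
  assertTrue(1L.equals(1 as Long))                // Integer converted to Long: true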

Java Swing Layouting done right

In praise of the most developer-friendly Java Swing layout manager to date: DesignGridLayout.

Layout managers were a huge benefit for Java Swing. They enabled software developers to program layout rather than to “drag and drop” it with some proprietary GUI builder. That’s nothing against a good GUI builder, but against the “source code” that gets generated as a result of using it. But after some time of playing and working with the layout managers provided by Swing itself, we concluded that they weren’t up to the task. Since then, we have constantly been on the lookout for new and better ways to tackle the layouting task.

A history of layout managers

Let’s retrace our path through the different layout managers we used:

  • GridBagLayout – the most versatile layout manager included in the Java Swing core classes. It’s capable of handling virtually every layouting task, but the price is a huge amount of constraint setup code. Since the code bloats with even modest complexity in the dialog, it’s not maintainable once written. The advantages over GUI builders aren’t really there.
  • StringGridBagLayout – has the same power as GridBagLayout, but with much more concise constraint definitions. It uses a string-based domain-specific language that you have to learn. After a while, you begin to feel the clumsiness of inserting variables into the constraints.
  • TableLayout – was a new approach to layouting that applies a global grid to your panel. You define the grid by specifying row and column constraints. If you need special cell constraints afterwards, you can alter them, but it gets bloated again.
  • StringTableLayout – provided a string-based domain-specific language on top of TableLayout. It had some nice additional features, but lacked versatility with dynamic GUIs.
  • FormLayout – was a great relief and a good companion for many full-sized layouting tasks. By concentrating on one problem domain (form-based layouts), it plays out some advantages over general-purpose layout managers. This layout is still in use here.
  • MigLayout – the bigger brother of all these layouts. MigLayout comes with several pages of cheat sheets, and you’re soon lost without them. It combines the approaches of all the layout managers listed (and many more) and blends them into a massively powerful and versatile product. If you learn this layout manager thoroughly, you’ll never have to look elsewhere. But the learning curve is steep, and the complexity of your code scales with the complexity of the GUI (which isn’t a drawback).

All these layout managers added value to our GUIs and are still in use today, albeit seldom.

Keep it simple

Most of the time, your dialogs aren’t those super-fancy, highly dynamic full-page layouts every UI designer dreams about. If they are, pick one of the layout managers from the list and wade through the constraint setup. But let’s say you want to lay out a rather plain dialog with some widgets, and you want to do it quickly without sacrificing the looks. Here is a developer-friendly solution for this task: use the DesignGridLayout manager.

Slick and easy layouts

The one thing that differentiates DesignGridLayout from almost every other layout manager is that you use the layout manager instance itself (in a fluent interface style) to arrange the constraints of your grid. You do not add your widgets to the panel and hope for the layout manager to catch up with the layout; you add them to the layout manager (and hope for it to fill them into your panel, which it does nicely). Here is a little example of the API usage:

JPanel content = new JPanel();
DesignGridLayout layout = new DesignGridLayout(content);
JTextArea history = new JTextArea();
history.setRows(5);
JTextField message = new JTextField();
JButton sendNow = new JButton("Send");
layout.row().grid(new JLabel("History:")).add(new JScrollPane(history));
layout.row().grid(new JLabel("Message:")).add(message, 2).add(sendNow);
content.setLayout(layout);

If you are interested in the possibilities of the layout manager, you should read the usage introduction page of DesignGridLayout.

Developer-friendly approach

One big advantage of the fluent API compared with string-based constraint definitions is the support of the compiler and the type system. You can’t spell anything wrong, and the code completion feature of your IDE guides you to the right method and parameter order. The other advantage is that you don’t need to mess with pixel sizes for spacing and the like. It’s handled by the layout manager in the most comfortable manner.

And because an article about a layout manager isn’t of any worth without a picture, here’s one:

This is a frame with the panel we constructed in the example code above.

Follow-up to our Dev Brunch August 2010

A follow-up to our August 2010 Dev Brunch, summarizing the talks and providing bonus material.

Last Sunday, we held our Dev Brunch for August 2010. We had to meet early in August, as there will be a lot of holiday absences in the coming weeks. The setting was more classical again, with a real brunch on a late Sunday morning. We had a lot more registrations than actual attendees, but word has it this was caused by a proper birthday party the night before. Due to rainy weather, we stayed inside and discussed the topics listed below.

The Dev Brunch

If you want to know more about the meaning of the term “Dev Brunch” or how we implement it, have a look at the follow-up posting of the brunch in October 2009. We continue to allow presence over topics. Our topics for the brunch were:

  • Clean Code Developer Initiative – The Clean Code Developer movement uses colored wristbands to successively focus on different aspects of the principles and practices of a professional software developer. Despite the English name, it’s a German group with German web sites. But everybody who has read Uncle Bob’s “Clean Code” knows what the curriculum is about. The talk gave a general summary of the initiative and some firsthand experiences with following the rules. If you have read the book or are interested in profound software development, give it a try.
  • Non-bare repositories in git – The distributed version control system git differentiates between “bare” and “non-bare” repositories. As a local developer, you’ll use the non-bare type. When two developers with similar non-bare repositories (e.g. of the same project) meet, they can’t easily share commits or patches with the “push” command. This is a consequence of “push” not being the exact opposite of the “fetch” command. If you try to synchronize two non-bare git repositories with push commands, you’ll most likely fail. The only safe approach is to introduce an intermediate bare repository or a branch in one of the repositories that only gets used by external users. Even the repository owner has to push to this branch then. We discussed the setup and its consequences, which are small in a broader use case but unfortunate for ad-hoc workgroups.

Retrospection of the brunch

The group of attendees was small and a bit hung over. This led to a brunch that somewhat lacked technical topics but emphasized social and cultural topics that didn’t make it onto the list above. A great brunch just before the holiday season.

Grails: Migrating enum mapping from 1.0 to 1.2 or newer

We have a long-running web application built on Grails. It started with Grails 1.0.3, and Grails has moved on to 1.3.3 in the meantime. Due to time constraints and lack of resources we were not able to update to each new major version. Now, some years later, the time has finally come for us to benefit from all the new features, bugfixes and improvements of the platform. There were quite a few changes in behaviour, and one of the biggest is the change in how enums are mapped to the database.

In Grails 1.0.x and 1.1.x, Java enums were mapped as int values in the database. Starting with Grails 1.2, they are mapped as varchars containing the enum name. Now you have the problem of migrating your existing data over to the new mapping style of the framework. One solution is to use autobase or Liquibase migrations to port the enum values to the new mapping style.
Suppose we have the enum Coolness:

public enum Coolness {
    COOL, UNCOOL;
}
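
For context, assume a hypothetical domain class Foo that uses this enum; by default Grails maps it to the table foo with a column bar, which are the names used in the migrations below.

class Foo {
    Coolness bar
}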

The SQL for migrating it on the PostgreSQL DBMS is as simple as this:

alter table foo alter column bar type varchar
    using case when bar = 0 then 'COOL'
            when bar = 1 then 'UNCOOL'
            end

and as an autobase migration it becomes something like this:

changeSet(id: 'change foo bar from int to varchar', author: 'me') {
    preConditions(onFail: "CONTINUE") {
        dbms(type: "postgresql")
    }
    sql("""alter table foo alter column bar type varchar
            using case when bar = 0 then 'COOL'
                    when bar = 1 then 'UNCOOL'
                    end""")
}

We use the precondition to skip the non-persistent in-memory databases we use in development and to apply the change set only to persistent test or production databases.

For Oracle, and maybe other database systems which do not support altering the column type with a using clause, it may look like this:

alter table foo add bar_new varchar(255);
update foo set bar_new =
    (case when bar = 0 then 'COOL'
            when bar = 1 then 'UNCOOL'
      end);
alter table foo drop column bar;
alter table foo rename column bar_new to bar;

I hope this helps when you have to perform similar migrations some time.

Stay tuned for other changes in API and behaviour between the different Grails framework versions.

Get the basics right

Nowadays with all the fancy stuff around, with features over features, bells and whistles it is even more important to get the basics right.

Nowadays with all the fancy stuff around, with features over features, bells and whistles it is even more important to get the basics right. But what are the basics?
If you apply for a job, the first basic would be to read the job posting carefully. Many companies require you to use a special keyword or cite the reference in a certain way. This is an easy way to keep the email from ending up in the spam folder, and it also shows who really read the job posting. But many get this wrong. Why? For me, that’s one of the basics. Another basics breaker is many or highly visible typos. Once in a while we get unusual and fancy-looking applications with typos in the job title or in headlines. Hmmm… why bother with time-consuming layouts and colors and then have typos all over the place?
This trend can be seen in many places. We have a new and modern door opener. The buttons are in white and pastel colors, which ruins the contrast. When the light is dim, I cannot tell the difference between the button for opening the door and the one for switching on the light in the hallway. Fancy-looking but useless.
The IT business is also good at breaking the basics. In the last few weeks, some of the major IDEs and frameworks released versions with regressions in one of the most basic places: version control. Why didn’t they catch these before release?
Why are features nowadays more important than the basics?

Open Source Love Day July 2010

Our Open Source Love Day for July 2010 brought love for Hudson (especially the CMake and Crap4j plugins), RXTX and JUnit.

This Friday, we held our Open Source Love Day for July 2010. We began with several internal meetings and discussions (like the Homepage Committee meeting) and dived right into our work afterwards. Everybody had a little backlog of issues that we wanted to get done on this day. Nearly everybody succeeded (well, the author had a minor delay – read about it below). The day went by at a very fast pace, but it felt right.

The Open Source Love Day

We introduced a monthly Open Source Love Day (OSLD) to show our appreciation to the Open Source software ecosystem and to donate back. We heavily rely on Open Source software for our projects. We would be honored if you find our contributions useful. Check out our first OSLD blog posting for details on the event itself.

On this OSLD, we accomplished the following tasks:

  • There are really cool new features in the latest JUnit versions, and Rules are one of them. What hurt our aesthetic sense was that the field that holds the Rule instance has to be public (a small sketch of such a Rule field follows after this list). Checkstyle was on our side, so we tweaked JUnit to allow all kinds of visibility. You can read about the change needed here: http://github.com/KentBeck/junit/issues#issue/31. The fix is almost trivial and will hopefully be incorporated in the next versions of JUnit, so we do not publish our altered version.
  • We constantly receive requests and remarks about our CMake plugin for Hudson. This led to a new version of the plugin fixing two issues with matrix builds and custom build types. Head over to the plugin homepage and grab the new version 1.6. In detail, the issues were:
    • The plugin can be used with matrix builds now
    • Custom build types can be defined now
  • RXTX is our choice for serial port communication with Java. We fixed some issues during the last few OSLDs, with one issue left for today: when you flush your stream while using a special type of USB-to-RS232 converter, you get an exception. The corresponding issue is #102 in the RXTX issue tracker. We proposed a patch that fixes the problem.
  • Another Hudson plugin of ours is the Crap4j reporter. It had lacked some love for months and finally broke when used with the latest Hudson versions. Fixing the problem was a lot harder than we thought, basically because the plugin needed adjustments to recent API changes and we couldn’t figure out exactly which adjustments were necessary. You might have a look at the developer mailing list thread for this question. Finally, we got it resolved (on Sunday, with a sudden stroke of insight) and a new version 0.8 is published.
  • We use an internal time tracking tool for our projects. This tool isn’t specifically open source yet, but continues to grow in terms of features and usability. The work invested in this tool helps us to continue with the OSLD, so it’s beneficial work nonetheless.
  • During the last OSLD, we had plans for a new Hudson plugin and even produced a prototype. This time, we looked around the Hudson plugin zoo (it’s getting a bit difficult to keep track of all of them) for inspiration and found a wonderful piece of art: the Groovy Postbuild Plugin. Using this plugin with a small Groovy script served our needs exactly. No need for a full-blown plugin when you can scratch your itch with a simple script. Thanks to Serban Iordache for his great work!
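
To illustrate the JUnit Rules issue mentioned in the first item, here is a minimal sketch of a test with a Rule field (TemporaryFolder is just one of the stock rules shipped with JUnit); stock JUnit insists on the public modifier, which is exactly what our Checkstyle configuration complains about:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class TemporaryFolderTest {

  // JUnit requires this field to be public - the reason for our patched version
  @Rule
  public TemporaryFolder folder = new TemporaryFolder();

  @Test
  public void createsFileInTemporaryFolder() throws Exception {
    folder.newFile("example.txt");
  }
}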

What were our lessons learnt today?

  • If you need to set up a fresh workspace for an open source project, consider preparing it the night before, or the download delay will kill your precious work time. There is nothing more frustrating than staring at a “downloading…” progress bar while being eager to start programming.
  • Always look around to see what others have done before. We wanted to build a full Hudson plugin from scratch when all we needed was a little Groovy script placed inside another plugin. Sweet!
  • Do not hesitate to privately fix open source issues that won’t get done in time for you. Just make sure to have a management process in place to track those changes and be able to re-apply them to future versions. More important though, be able to tell exactly when NOT to re-apply them because the original project has fixed the issue.

Retrospective of the OSLD

The OSLD went smoothly and was productive. We tend to work on backlogs instead of searching for random issues now, but that’s just a sign that our approach has matured and that we depend on the OSLD to get work done.

Follow-up to our Dev Brunch July 2010

A follow-up to our July 2010 Dev Brunch, summarizing the talks and providing bonus material.

Last Saturday, we held our Dev Brunch for July 2010. The setting of this brunch was unusual, as we didn’t brunch, but cooked spaghetti (to be exact: had spaghetti cooked while we ranted about different workplaces). We also didn’t start in the late morning, but in the early afternoon. Later on, a LAN computer game party was held in our office, limiting our time-frame a bit. Due to rainy weather, we stayed inside and discussed the topics listed below.

The Dev Brunch

If you want to know more about the meaning of the term “Dev Brunch” or how we implement it, have a look at the follow-up posting of the brunch in October 2009. We continue to allow presence over topics. Our topics for the brunch were:

  • Your own Java ResourceBundle implementation – Since Java 6, there is the possibility to plug your own ResourceBundle formats into the generic API using ResourceBundle.Control. We discussed several possible use cases and had an example case mocked up in source code (a minimal sketch follows after this list). The API enables you to do what was impossible beforehand, but it isn’t as polished as it could be. Worth a closer look if you want to combine ResourceBundle with your i18n database, for example.
  • Thoughts on “Team Rooms” – Lately, there was a very good blog entry about team rooms and how they are advocated by Martin Fowler. The article is titled “The rise of the cattle office” and has some valid points. But nearly every attendee of the brunch likes working in a team room. We had a great discussion that can’t be summarized in a single sentence, but one piece of advice: Mr. Fowler, please put up a nicer teaser image in your bliki!
  • Retrospective of the Java Forum Stuttgart 2010 – The Java Forum Stuttgart 2010 is a local conference dedicated to Java. It has grown into a 1k+ developer meeting for southwest Germany. You cannot avoid meeting former colleagues and chatting non-stop during the breaks. The presentations are mostly very professional and worthwhile. We learnt a bit about long-term serialization issues (put a version in your XML namespace!), better JUnit (Rules are cool!), some Dependency Injection myths (though this presentation could have been snappier) and got introduced to Apache Hadoop (Map/Reduce at its best). Embedded Java is still the hell we remembered it to be. But the best presentation of the day was clearly Dr. Simon Wiest talking about Hudson and advanced techniques to speed up your build.
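
To give an impression of the ResourceBundle.Control topic from the first item, here is a minimal sketch (class and file names are made up) that teaches ResourceBundle to load XML property files:

import java.io.IOException;
import java.io.InputStream;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;
import java.util.Locale;
import java.util.Properties;
import java.util.ResourceBundle;

public class XmlBundleControl extends ResourceBundle.Control {

  @Override
  public List<String> getFormats(String baseName) {
    return Collections.singletonList("xml");
  }

  @Override
  public ResourceBundle newBundle(String baseName, Locale locale, String format,
      ClassLoader loader, boolean reload)
      throws IllegalAccessException, InstantiationException, IOException {
    String resourceName = toResourceName(toBundleName(baseName, locale), "xml");
    InputStream stream = loader.getResourceAsStream(resourceName);
    if (stream == null) {
      return null; // let ResourceBundle try the next candidate
    }
    final Properties properties = new Properties();
    try {
      properties.loadFromXML(stream);
    } finally {
      stream.close();
    }
    // expose the loaded properties through the standard ResourceBundle interface
    return new ResourceBundle() {
      protected Object handleGetObject(String key) {
        return properties.getProperty(key);
      }

      public Enumeration<String> getKeys() {
        return Collections.enumeration(properties.stringPropertyNames());
      }
    };
  }
}

A client would then call ResourceBundle.getBundle("messages", new XmlBundleControl()) and look up keys as usual.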

Retrospection of the brunch

The group of attendees was small again, with several first-time guests. This helped the digression factor a lot; we talked about all kinds of topics that didn’t make it onto the list above. The time and setup were a bit unusual, but the brunch itself was fun and insightful as always.

Non-trivial Custom Data in QActions

If you want to implement dynamic context menus with non-trivial custom data in your QActions, the Qt4 documentation is not very helpful. This article describes some solutions to this task.

Sometimes I get very frustrated with the online Qt4 documentation. Sure, the API docs are massive but for many parts they provide only very basic information. Unfortunately, many Qt books, too, often stop exactly at the point where it gets interesting.

One example of this is context menus. The API docs just show you how menus in general are created and how they are connected to the application: basically, all menus are instances of QMenu which are filled with instances of QAction. QActions are used as representations of any kind of action that can be triggered from the GUI.

The standard method to connect QActions to the GUI controlling code is to use one of their signals, e.g. triggered(). This signal can be connected to a slot of your own class where you can then execute the corresponding action. This works fine as long as you have a limited set of actions that you all know at coding time. For example, a menu in your tool bar which contains actions Undo/Redo/Cut/Copy/Paste can be created very easily.

But there are use cases where you do not know in advance how many actions there will be in your menus. For example, in an application that provides a GUI for composing a complex data structure, you may want to provide the user with assisting context menus for adding new data parts, depending on what parts already exist. Suddenly, you have to connect many actions to one slot and then you somehow have to know which QAction the user actually clicked.

Btw, let’s all recall the Command Pattern for a moment… ok, now on to some solutions.

Method 0 – QAction::setData: The QAction class provides the method setData(), which can be used to store custom data in a QAction instance using QVariant as a data wrapper. If you then use QMenu’s triggered signal, which gives you a pointer to the QAction that was clicked, you can extract your data from the QAction. I find this a little bit ugly since you have to wrap your data into QVariant, which can get messy if you want to provide more than one data element.
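
A minimal sketch of this approach might look like the following; the slot name onMenuTriggered and the payload string are made up for illustration:

// building the menu: store the payload directly in the action
QAction* addPartAction = menu->addAction("Add data part");
addPartAction->setData(QVariant(QString("partTypeFoo")));

// QMenu::triggered(QAction*) tells us which action was actually clicked
connect(menu, SIGNAL(triggered(QAction*)),
        this, SLOT(onMenuTriggered(QAction*)));

// the slot unwraps the payload again
void MyWidget::onMenuTriggered(QAction* action)
{
  QString partType = action->data().toString();
  // decide what to do based on partType
}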

Method 1 – Enhancing QAction::triggered(): By sub-classing QAction you can provide your own triggered() signal which you can enhance with all parameters you need in your slot.

class MyAction : public QAction
{
  Q_OBJECT
  public:
    MyAction(QString someActionInfo, QObject* parent)
      : QAction(parent), someActionInfo_(someActionInfo)
    {
      connect(this, SIGNAL(triggered()),
              this, SLOT(onTriggered()));
    }
  signals:
    void triggered(QString someActionInfo);
  private slots:
    void onTriggered() {
      emit triggered(someActionInfo_);
    }
  private:
    QString someActionInfo_;
};

This is nice and easy but limited to what data types can be transported via signal/slot parameters.

Method 2 – QSignalMapper: From the Qt4 docs on QSignalMapper:

This class collects a set of parameterless signals, and re-emits them with integer, string or widget parameters corresponding to the object that sent the signal.

… which is basically the same as we did in method 1.
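
For completeness, a small sketch of the QSignalMapper variant (again, onMenuTriggered and the mapped string are made up):

QSignalMapper* mapper = new QSignalMapper(this);

// map the parameterless triggered() signal to a string of our choice
QAction* addPartAction = menu->addAction("Add data part");
connect(addPartAction, SIGNAL(triggered()), mapper, SLOT(map()));
mapper->setMapping(addPartAction, QString("partTypeFoo"));

// the mapper re-emits the signal with the mapped string attached
connect(mapper, SIGNAL(mapped(QString)),
        this, SLOT(onMenuTriggered(QString)));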

Method 3 – Separate domain-specific action classes: When the context menu is created, you add QActions to the menu using QMenu’s addAction methods. Then you create instances of separate Command-like classes (as in the Command Pattern) and connect them to the QAction’s triggered() signal:

// Command-like custom action class. No GUI related stuff here!
class MySpecialAction : public QObject
{
  Q_OBJECT
  public:
    MySpecialAction(QObject* parent, <all necessary parameters to execute>);

  public slots:
    void execute();
  ...
};

// create context menu
QAction* specialAction =
  menu->addAction("Special Action Nr. 1");
MySpecialAction* mySpecialAction =
  new MySpecialAction(specialAction, ...);
connect(specialAction, SIGNAL(triggered()),
        mySpecialAction, SLOT(execute()));

As you can see, QAction specialAction is parent of mySpecialAction, thereby taking ownership of mySpecialAction. This is my preferred approach because it is the most flexible in terms of what custom data can be stored in the command. Furthermore, the part that contains the execution code – MySpecialAction – has nothing at all to do with GUI stuff and can easily be used in other parts of the system, e.g. non-GUI system interfaces.

How have you solved this problem?

Are programming books overrated?

A little insight gathered through feedback from an internship. Software development books are somewhat overrated as they can’t teach practice well.

In the last few weeks, we hosted an internship for a student who has just finished academic high school (“Gymnasium”) and is looking forward to taking up studies in computer science. He wanted to get in touch with the practical aspects of the career he is about to choose. The programming courses in school merely covered the basics of a programming language (Java) and some UML.

We prepared the student for the internship by feeding him several books we thought were appropriate for his level of knowledge. The books were a beginner’s book about Java (Head First Java), an introduction to unit testing (Pragmatic Unit Testing) and a foundation on clean code programming (Refactoring). Our student read them thoroughly and could make references to the chapters during pair programming sessions.

Retrospective on the books

But one piece of feedback we got from him was that the books alone were nearly useless in his case. Without the tutorial-style pair programming sessions and several short lectures, he couldn’t have grasped the deeper meaning of the book chapters he read (he suffered from the “blank slate blockade” several times). This came as a bit of a surprise to us, as the student was very clever and really into it. It wasn’t the student, it was the books.

But you can’t blame it on “Refactoring”, for example, as this book is an all-time classic filled with really important knowledge. It has to be the medium itself: books are not the ideal way to learn about programming and software development.

Books are part of the academics

There is an old question in our profession. It revolves around whether we are more like engineers or artists, craftsmen or scientists. At the core of this question is an uncertainty about the right model of education. Artists and craftsmen prefer more practical training, with apprentice/master relationships and personal knowledge transfer. For engineers and scientists, literature and more standardized lectures are better suited. Academic knowledge is transferred during debate, not during exercises.

The duality of our profession

Projecting the feedback of our student onto this question, there seems to be a duality in our profession: both (or all four, if you want) approaches are needed to form a whole. You can’t learn just the theory and expect to excel on the job. But practical experience alone will not suffice to keep up with the pace of our profession. Good books are like afterburners here: you’ll be hurled forward by every page.

Conclusion

If it’s really true that we need to learn our profession both ways at once, pair programming (in the tour guide or backseat driver style) is an essential part of our qualification. And our current university curriculum fails to deliver this part. Students nowadays can team up to program together on an assignment, but that’s not learning from a master (unless one member of the team has distinctly more experience than everybody else and is able to transfer it). So I vote to bring more craftsmanship into academic education, as the books alone won’t cut it.

Your opinion?

What’s your opinion on this topic? Drop us a line about your thoughts.