Running the Atom Editor Behind a Firewall

Granted, this is a minor topic, much less sophisticated than most of my blog’s posts. But it took me a couple of hours to find out how to run the Atom editor behind a firewall, so it may be worth a short article.

If you’re running Atom behind a firewall, you won’t be able to install plugins or updates until you configure the proxy settings. Basically, all you have to do is set two user-defined variables: http-proxy and https-proxy. However, it’s not that obvious where to configure these variables.

The easiest way to find or create the configuration file is to open the settings dialog (“File” –> “Settings”). At the bottom of the left-hand side, there’s a button called “Open config folder”. Clicking it opens a new project (.atom). That’s the settings folder in your user profile. The root folder should contain a file called .apmrc. If it doesn’t, create it.

Next you add these lines to the file (replacing username, password, proxyserver and the port number with the settings you use in your internet browser):
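The exact values depend on your network, but going by the description above, a minimal .apmrc looks roughly like this (username, password, proxyserver and the port are placeholders for your own settings):

```
http-proxy = http://username:password@proxyserver:port
https-proxy = http://username:password@proxyserver:port
```

If your proxy doesn’t require authentication, drop the username:password@ part.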


Don’t add a variable called proxy, and don’t write the variables in capital letters. Sometimes people suggest these things, but Atom ignores such variables.

Note that you have to prefix the proxy server name with http://. If you omit it, you’ll get a “parse exception”. In my case I had to use http:// for both the http and the https protocol – but that may be a peculiarity of my company’s network.


The Only One Dependency You Need in JavaEE 7?

Adam Bien is a charismatic evangelist of JavaEE 7. He’s got something to say, and he always makes me think. But that doesn’t mean I always agree with him. In his latest blog post, he advocates using a single, simple dependency to use JavaEE 7. All you have to do is install a JavaEE 7 server, add this dependency to your application’s pom.xml, and you’re good to go.

You really are. And it’s tempting:
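If I remember Adam’s post correctly, the dependency in question is the plain Java EE 7 API, marked as provided so that the application server supplies the implementation at runtime – roughly this:

```xml
<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>7.0</version>
    <scope>provided</scope>
</dependency>
```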


As cool as Adam’s recipe sounds… Well, Adam, I strongly disagree. In a way, you’re absolutely right. All you ever need to use JavaEE 7 now is that single, simple dependency you mention in your post. But what about the future?

The two alternatives

Before I continue, let’s summarize the two alternatives quickly. Adam Bien’s suggestion works if you’re running your application on a JavaEE7 application server. That’s a pleasant experience. It’s made for JavaEE7, so it’s unlikely that you run into trouble like configuration mistakes or missing libraries. The drawback is that it’s hard to update anything. I know of at least one application server that makes updating almost impossible.

The alternative is to put every JavaEE dependency you need into your *.war file and deploy it on a Tomcat or Jetty. This approach means you’re responsible for configuring the libraries yourself. However, you can update them without having to care about the application server.
Continue reading

CDI: Lazy Injection at Runtime or How to Obtain Every Matching Implementation

These days I’ve discovered a nice feature of CDI. What do you make of this code?

@Inject @Any
Instance<IValidator> validators;

Instance indicates that a single object is injected, but that’s not quite the case. In my case the program uses the Instance like so:

for (IValidator validator : validators) {
    // every implementation of IValidator known to the CDI container shows up here
}
Obviously, @Inject Instance<IValidator> does something completely different than @Inject IValidator.
Continue reading

Nomin – Mapping Java Objects Without the Pain

Many best practices in the Java world involve mapping Java objects to other Java objects. More often than not, this is downright stupid. Both objects are more or less identical, but you have to write a lot of code to map object A to object B. Plus, you have to write the backward mapping, too. It goes without saying that this is a very error-prone task. Not to mention it’s boring as hell, which makes it even more error-prone.
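To illustrate the kind of boilerplate I mean, here’s a hypothetical entity/DTO pair (the names are made up) along with the two hand-written mappings between them:

```java
// Two nearly identical classes, as they typically appear in layered designs.
class PersonEntity {
    String firstName;
    String lastName;
}

class PersonDTO {
    String firstName;
    String lastName;
}

class PersonMapper {
    // Forward mapping: entity -> DTO. Every field has to be copied by hand.
    static PersonDTO toDTO(PersonEntity entity) {
        PersonDTO dto = new PersonDTO();
        dto.firstName = entity.firstName;
        dto.lastName = entity.lastName;
        return dto;
    }

    // Backward mapping: DTO -> entity. The same code again, only reversed.
    static PersonEntity toEntity(PersonDTO dto) {
        PersonEntity entity = new PersonEntity();
        entity.firstName = dto.firstName;
        entity.lastName = dto.lastName;
        return entity;
    }
}
```

Multiply this by every attribute of every mapped class, and you see why forgetting a single line is so easy.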

This is why I start to roll my eyes every time someone asks me to map objects. Granted, the underlying design pattern has its virtues. Decoupling the front-end code from the back-end code gives you a lot of flexibility. But do you need this flexibility? Even if you do, is it worth the pain?

Mapping objects automatically

Nomin to the rescue. It’s one of those little tools that come in handy. It allows you to write very compact Groovy scripts to map objects. Even better, if the attribute names of the objects are identical, Nomin is able to figure out how to map the objects without your help. You don’t have to write a script at all. All you have to do is invoke Nomin and have it map the objects for you:

NominMapper nomin = new Nomin();
EntityA entityA = nomin.map(entityB, EntityA.class);

Continue reading

What’s New in BootsFaces 0.8.1?

The open source branch of BootsFaces is still young. Only 14 months have passed since the first release on Halloween 2014. But it’s already an impressive success story. In November, we’ve seen more than 1,000 downloads from Maven Central and Bintray. Ed Burns mentioned BootsFaces in one of his talks at JavaOne. JSFCentral, among others, has asked us to write articles about BootsFaces. Obviously, BootsFaces has stirred a lot of attention, and many projects use it for their daily work.

Since the Halloween 2014 release, we’ve published five releases. That’s roughly a release every three months. The latest release took quite a bit longer to finish. But it’s loaded with a host of new features, so it has surely been worth the wait. Personally, I call it the AJAX release, because that’s my big ticket. But there’s more in store for you. Other big tickets are the advanced search expressions inspired by PrimeFaces and the theme support. Plus, BootsFaces 0.8.1 has six new components. Seven, if you count the experimental <b:dataTable />. That’s not a finished component yet. However, I felt it was already useful enough to include with BootsFaces, even if it still requires polishing and “sugaring”.

But first things first.

Download coordinates

BootsFaces is available in two different flavors. There’s the regular version at Maven Central, and there’s a highly optimized version at GitHub. The optimized version is 50 KB smaller, requires Java 7 or higher, and should be a bit faster. The version hosted at Maven Central is targeted at a broader audience. It only requires Java 6. Alternatively, you can check out the repository from GitHub and build BootsFaces from source.

Add these lines to your project’s pom.xml:
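Going by the coordinates of the Gradle line (net.bootsfaces:bootsfaces:0.8.1), the Maven dependency should read:

```xml
<dependency>
    <groupId>net.bootsfaces</groupId>
    <artifactId>bootsfaces</artifactId>
    <version>0.8.1</version>
</dependency>
```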


Add this line to your project’s .gradle build file:

runtime 'net.bootsfaces:bootsfaces:0.8.1'

The BootsFaces project comes with both a Gradle build file and a Maven build file. The Maven pom.xml is the easy way to get started and should suffice for most purposes. Only if you want to tweak and optimize BootsFaces do you need the Gradle build. In particular, the Maven build doesn’t generate the CSS and JS files itself, but relies on the output of the Gradle build. By the way, that’s the reason why we keep the generated files in GitHub.

In any case, the URL of the repository is

Continue reading

A Comprehensive Guide to JSF AJAX

These days I analyzed the AJAX implementation of Mojarra. My goal was to learn enough about it to implement an improved version of the original AJAX implementation in our BootsFaces library. Along the way I learned that I had bitten off quite a chunk to chew. The JSF AJAX specification has many, many options, most of which I wasn’t even aware of. Time to write an exhaustive guide. Apart from the JSF specification, we’ll also have a look at its PrimeFaces counterpart and – of course – what BootsFaces 0.8.0 will bring to you.

Source code of the examples

You can find the source code of the examples on GitHub. There’s a second project on GitHub covering the BootsFaces AJAX examples.
Continue reading

JSF vs. PrimeFaces vs. BootsFaces Search Expressions

It’s a bit inconvenient and error-prone to define an ID for each input field, each label and each message of a JSF view. You can make your life easier using advanced search expressions. Used wisely, advanced search expressions enable you to move input fields on the screen or between JSF views without having to update zillions of ids. In fact, search expressions are possibly the most compelling reason to use PrimeFaces or – since version 0.8.0 – BootsFaces. Their search expression engines go far beyond the JSF standard.

Why are there different search expression engines?

Traditionally, JSF relies heavily on ids. Standard JSF 2.x adds a few generic search expressions that allow you to get rid of the ids in many cases. The PrimeFaces team – and Thomas Andraschko in particular – took the idea to another level. They implemented a variety of new search expressions such as @next and @previous. Unfortunately, these search expressions can only be used with PrimeFaces widgets. There’s an open ticket offering to implement the PrimeFaces search expressions in the Mojarra framework. Last time I looked, the ticket was still dormant, laid aside due to performance considerations.
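For illustration, here’s a hedged sketch of what relative search expressions buy you (PrimeFaces syntax; the bean name is made up):

```xhtml
<h:outputLabel for="@next" value="E-mail" />
<p:inputText id="email" value="#{userBean.email}" />
<p:message for="@previous" />
```

If the input field moves to another place, the label and the message simply follow it. No ids to update.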
Continue reading

Single-Page Applications With BootsFaces

Single-page applications have become tremendously popular in the last couple of years. They are fast, they are responsive, and they can save network bandwidth. The first SPAs that caught the attention of the community were based on JavaScript. So nowadays everybody seems to believe you can’t write an SPA with JSF.

But you can. It’s not even difficult. And it pays: the application becomes much more responsive. This article shows two approaches to do it with BootsFaces. I’m sure you can do the same with other JSF frameworks like PrimeFaces, too, but I haven’t tested it yet.

Currently, there are at least two approaches. You can exploit BootsFaces AJAX to do your navigation, or you can add AngularJS to the equation. The latter approach requires you to learn both Angular and JSF, but it may be useful because it allows you to use client-side AngularJS components.
Continue reading

About Owls, Nightingales and Clean Code

Sigh! Today I’ve been wading knee-deep through all-too-clean code. Code that might have been written in a textbook. Code following the style I taught at university – yeah, once I was young and naive enough to believe the textbooks without asking. Code that follows all the best practices. Code following the rules of the Clean Code Initiative.

Code that’s illegible.


Yeah, illegible. I love clean code – but believe it or not, code that’s all too clean is every bit as illegible as spaghetti code.
Continue reading

Does Super-Fast Storage Impact Programming?

Now, that’s a disruptive change. Actually, it developed in the open, but still, I didn’t really think about it until today. We all have learned to love SSDs. My old company PCs used to take a couple of minutes to boot, sometimes even a quarter of an hour, before I could start my daily work. If you’ve got an SSD, that’s typically done in a minute, and if you’ve got a Mac, it’s a matter of seconds. The extreme case is tablets and smartphones: most of the time they run in stand-by, so they are just there when you need them.

Let’s think this to the end. Imagine that storing or retrieving data on your hard disk doesn’t take any perceptible time. Imagine that external storage of mass data isn’t your system performance’s bottleneck. What does this mean to programmers? What does it mean to hardware designers?

Actually, that doesn’t seem to be a utopia. Today I learned that it’s possible to buy super-fast external storage plugged into your system’s PCIe bus. In other words: very close to your CPU. Flash memory – i.e. SSDs – is only the start. It’s fast, but NVDIMMs are way faster. In a nutshell, these are traditional RAM modules backed by flash memory, so they don’t lose data after switching the system off. If I’ve got it right, NVDIMMs are future tech, while PCIe flash memory is already there. It costs a lot, but it doesn’t seem to be an exotic technology. So let’s assume NVDIMMs were already here. Just for the fun of it.

Now for the exciting question: How does such super-fast memory affect us?

The idea is new to me, so I may have missed a lot of details, but it seems obvious to me that this is a game-changer.

For example, consider hardware design. It’s built on a hierarchy. Fast memory is scarce, but there’s an abundance of cheap and slow memory. So hardware designers started to invent caches. Later, they added a cache to cache the first level cache, and nowadays even third-level caches are mainstream technology. This continues at the hard disk level. Many hard disks employ caches themselves.

If mass storage were as fast as the CPU, all these optimizations would be superfluous. Actually, they would be a waste of energy, possibly even of performance.

Software design would be affected, too. Software designers know that external storage is slow, so they try hard to keep all the data they need in memory – or, if you’re really hungry for performance, in the CPU cache. A lot of thought went into finding ways to keep your program and your data small. How to avoid database accesses. And so on. Mind you: would you implement your programs the way you do today if storing data in the database were just as fast as storing it in a temporary hash table?

My bet is that affordable NVDIMMs require us to redesign both our hardware and our software on all levels. For example, currently the programming API differs vastly if you write data to memory or if you write data to the hard disk. I suspect that’s one of the things that’d change with affordable NVDIMMs.

And what about database design? Technologies like SAP HANA indicate that databases running on RAM require – or allow for – a new programming model. SQL is optimized for relational databases running on slow memory. Databases that are as fast as the CPU may need a different optimization. For instance, many recent RAM-based databases are column-based, not row-based.

Did I make you curious? If so, I’d like to point you to the article I read this evening. As it seems, four guys had the opportunity to explore PCIe SSDs for four years and wrote an article about it. Highly recommended!

Oh, and one of the affected branches is the book industry. Almost every IT text book needs to be rewritten with the advent of super-fast external mass storage. :)

Running CDI on a Simple Tomcat

I’m big into CDI, but I’m not fond of application servers at all. So I was bound to find out how to activate the CDI magic on a simple servlet container like Tomcat. That shouldn’t be too difficult. Mind you, it’s easy to run Weld (which happens to be the reference implementation of CDI) without a servlet container. However, in the past, getting Weld to run on Tomcat proved to be a tad difficult, due to mediocre documentation.

Luckily, that has changed.

Jo Desmet (at least I suppose he’s the man behind the Musings in Java blog) has published a nice walk-through to get CDI and JSF on Tomcat 8 up and running. I followed his step-by-step tutorial and managed to activate CDI on my Tomcat 8 in a couple of minutes. Highly recommended!

However, no tutorial is so good it can’t be improved, so let me add one or two notes on it:

  • For some reason, declaring JTA and JSF as a provided API plus a runtime implementation didn’t work for me. In theory, it should work, but I ended up adding both as compile-time dependencies.
  • Jo didn’t say precisely where to put the context.xml and the beans.xml. I chose to put both into the src/main/webapp folder. To be pedantic, the context.xml went into the src/main/webapp/META-INF folder, and the beans.xml file went into the src/main/webapp/WEB-INF folder. (From what I remember from earlier experiments, both files should also work when put into the classpath – but I didn’t take the time to check this, too.)
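For reference, here’s what a minimal CDI 1.1 beans.xml looks like. Whether you want discovery mode all or annotated depends on your project; all is the setting I’d start with:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
</beans>
```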

Cutting a long story short, you don’t need a full-blown application server just to use CDI!

Getting Started With AngularJS 2.0: Forms (Part I)

Let’s continue our journey into the universe of Angular2 by exploring forms. Actually, there’s something odd: many articles and tutorials on Angular2 spend a lot of time explaining that the old two-way binding of AngularJS 1.x has been replaced by something different, something superior.

Putting it in a nutshell

Forget about that. You can learn about the subtleties of Angular2 later. From a beginner’s point of view, good old two-way binding is still there. It has even become simpler.

Putting it in a nutshell, you simply use the new, slightly controversial syntax <input [(ng-model)]="someProperty" /> to bind an input field directly to a property called someProperty of your controller. Initially, the input field is populated with the value of someProperty, and when the user starts to type in the input field, every keystroke immediately changes the variable someProperty.
Continue reading

Newsflash: How to Develop Efficiently In TypeScript

In one or two of my recent posts, I claimed that the TypeScript compiler is extremely fast, and that Google Chrome caches your TypeScript code too aggressively.

Both things have changed since then. The bad news is that the editor I use, Atom, has started to compile the entire project each and every time I edit a file. I don’t know whether this is a configuration error of mine, a bug in Atom, or even a peculiarity of the current version of TypeScript. I’m positive the error is on my side, but as long as I haven’t found the reason, I reluctantly have to revoke my claim that TypeScript compiles in virtually no time.

Now for the good news. The Angular2 team recommends to use “live-server”, and it works just great. Install it via

npm i typescript live-server --save-dev

Next, open the file package.json, find the scripts section and replace it with

  "scripts": {
    "tsc": "tsc -p src -w",
    "start": "live-server --open=src"
  }

Now you can start your project by typing npm start at the command line. This starts the server, opens the browser, loads the index.html of your project in the browser and – that’s the really cool part – reloads your application each time you edit a file.

Read the full story at the Angular2 quickstart tutorial.

Newsflash: How to Migrate an AngularJS 1.x Application to Angular 2

Do you remember how disturbing the initial announcement of Angular2 was? As far as I remember, it was at ngEurope, roughly 12 months ago. At the time, the Angular team said there wasn’t a migration plan. I guess they should have added the word “yet”. Most developers and project managers understood there wouldn’t be a migration plan at all. Many of them saw the future of their AngularJS projects in tatters.

Well, that was premature. It’s far from impossible to migrate from AngularJS 1.x to Angular2. Quite the contrary: there’s even a project dedicated to making the migration as smooth and easy as possible. Or, as Pascal Precht and Misko Hevery put it, boring.

Today I’ve stumbled upon an in-depth analysis of how to use the project, which is called – guess what – ngUpgrade. Read the full story written by Pascal Precht. It’s a great read, and I’m sure it’ll be useful to many Angular 1.x project teams. Highly recommended!

By the way, that’s not the only great news for those of us working for major companies: According to Brian Green’s announcement at the AngularConnect 2015 conference in London, Angular2 is going to support Internet Explorer 9 and 10 (at 53’40”).

Newsflash: Angular2 Release Date Projection

Nobody knows when Angular2 is going to be released. Chances are this includes the Angular team (although they don’t tell us). So people have started to look into the crystal ball. One indicator is that the API feels mature. Granted, it still breaks more often than I’d like, but the basic design decisions seem to have been made. Another indicator of the progress is the documentation site. The developer guide is still incomplete, but it’s growing by the day.

Another indicator is the progress on the open tickets. Currently, 2.2% of the open tickets are closed each day. Judging by this measure, the release date of Angular2 isn’t far away. Actually, the Angular 2 Beta Burndown Chart indicated Angular2 was going to be released in 2015.
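Just for fun, here’s the arithmetic behind such a projection – assuming (and that’s my assumption, not necessarily the chart’s model) a constant pace relative to the original backlog:

```java
// Back-of-the-envelope burndown projection: if a constant 2.2% of the
// original backlog is closed per day, the backlog is empty after
// 1 / 0.022 days, i.e. roughly 45 days.
class BurndownProjection {
    static double daysUntilEmpty(double fractionClosedPerDay) {
        return 1.0 / fractionClosedPerDay;
    }

    public static void main(String[] args) {
        System.out.printf("~%.0f days%n", daysUntilEmpty(0.022));
    }
}
```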

Personally, I think that’s nonsense – there’s more to releasing software than just closing tickets on the bug tracker – but it’s a good idea. Plus, it’s fun, and it looks great.

There’s only one thing that’s odd: the burndown chart isn’t an Angular application. Let alone an Angular2 application. :)

Update Nov 9, 2015:
My article and my tweet about it started a funny discussion, which led to a full-blown Angular2 implementation of the Angular2 Release Date Projection tool. Sometimes it’s fun to be a catalyst :).

Update Nov 17, 2015:
In the meantime, the release projection has been migrated to Angular2. It’s really fun to be a catalyst!

Getting Started with AngularJS 2.0: Your First Application

Since I started my AngularJS 2.0 series, AngularJS has evolved considerably. Version numbers went up from 2.0.0-alpha.36 to 2.0.0-alpha.45. Plus, they’ve added quite a lot of content to the documentation pages. So I wasn’t surprised it took me two evenings to update my AngularJS 2.0 chess demo to the current version. However, this turned out not to be Angular’s fault: the library responsible for Angular’s module system, System.js, introduced one or two breaking changes. The good news is that the API of Angular 2 is pretty stable – stable enough to continue with a couple of hands-on articles.

Today, I’d like to make you familiar with the basic concepts of Angular 2. Let’s write your first application together. Or rather – well, writing a simple demo application is something virtually every tutorial does. Let’s tackle Angular 2 from a different angle. I’ve uploaded a slightly simplified version of my Angular 2 chess program to GitHub. So you can start with a real-world application. I won’t explain every feature of the chess program – that’d fill an entire book, not a blog entry. This article simply gives you a head start to grasp the look and feel of an Angular 2 application, and to play with it.

Curious? Clone the repository at and open the subfolder /Blog01. That’s a full-blown, running Angular 2 application. Let’s examine it step by step.
Continue reading

Why Do Unit Tests Sometimes Do More Harm than Good?

I always wondered how to write truly efficient and useful unit tests. So I’m glad my co-worker Thomas Papendieck offered to write a guest article, sharing his expertise with you. Thomas, it’s your stage!

Unit tests are cool!

Unit tests secure existing behavior in the code base and support the programmer during changes by detecting regressions.

Almost every programmer knows this statement.

…but are they really?

On the other hand, almost every programmer has made the opposite experience: unit tests get in the way of the programmer when she wants to introduce changes to the production code. After the change, a whole bunch of unit tests break, and it takes a big effort to make them pass or even compile again. I remember projects where fixing the tests took up to ten times longer than the actual change.

Why does that happen? Are unit tests merely hyped by ivory-tower screwballs propagating academic ideals? Lucky dreamers who have never been responsible for a real-life application?

The staggering answer is: No.

Continue reading

Newsflash: Angular 2 Survey Results

Granted, I can hardly call the news of this newsflash “new”: it was published on Sept 1, 2015. But it’s very interesting nonetheless. The AngularJS team’s blog has the results of a survey asking developers what they expect of Angular 2. Each survey result is spiced with an in-depth analysis, many of which provide additional information about Angular 2. Highly recommended.

Read the survey results at

How to Count Java Objects in Memory

There are all these wonderful tools for profiling Java programs: JVisualVM, JProfiler, Mission Control and the Flight Recorder, jhat, just to name a few. Yet every once in a while, you can’t use any of them, for one reason or another. Where do we go from here?

How come there’s no profiler?

The situation is not quite as exotic as you may think. When I was recently asked to write programs in the middleware layer, the lack of tools was one of the reasons why I declined. More precisely, I was asked to write Java programs running in the webMethods ESB. In theory, that’s a great idea, but the lack of tools makes programming a pain. No debugger. No profiler.

In theory, you can debug such a program using remote debugging, but that requires many preconditions to be fulfilled. The administrator has to start the program with additional parameters, and they have to open the ports in the firewall. Sometimes that’s possible in the development and test stages, but usually it’s completely out of the question in production.
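The additional parameters in question are the standard JDWP agent settings. On Java 5 and above, the JVM option looks like this (port 8000 is just an example):

```
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000
```

After that, you can attach your IDE’s remote debugger to port 8000 – provided the firewall lets you.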

However, being a consultant, sometimes I’m asked to help our customer nonetheless. Continue reading


My previous article focused on what the MV* paradigms are. But it didn’t answer the question of whether using one of the MV* patterns is worth the pain. One thing is for sure: none of the MV* paradigms comes for free. It depends on your project whether using an MV* pattern is a wise investment or a waste of time and money.

Dissecting a chess application

For instance, consider my AngularJS 2.0 chess program (see the source code at GitHub, or play it online). It doesn’t follow any particular architectural pattern. Or rather, it doesn’t follow one of the established MVW patterns. To begin with, there’s no model. I simply didn’t need it. The program’s data is stored in the controller component of the program. I don’t know all the subtleties of the MV* theory, but I guess it’s OK to say the chess demo stores its data in the viewmodel layer.

Actually, the chess demo doesn’t consist of many layers. There’s the HTML code, there’s the chess engine and there’s some glue code.

My previous post (and the majority of tutorials) claims that AngularJS favors the MVVM pattern. Let’s stick to this fiction for a moment. It’s easy to identify the view layer: that’s the HTML pages and the CSS stylesheets. The glue code is the viewmodel. I already said that there’s no need for a model layer.
Continue reading