Category Archives: developer productivity

Blisk – a Browser to Make Web Programming Easier

Mobile first! Seriously? Do you really optimize your application for mobile usage?

Actually, you can’t. The current state of the art makes it difficult to take the “mobile first!” approach seriously. Mind you: that would require you to develop your application on the phone instead of in your desktop browser. Until today, I’ve seen such an approach only with iOS programming and with UWP/Xamarin: the application is developed on the desktop, but run and tested on the phone or the tablet.

The other day a friend of mine, Dario D’Urzo, showed me another option for web developers. What started as a small JavaScript plugin two years ago has evolved into a full-blown browser in 2016. More precisely, it’s a browser based on Chromium which adds a number of tools dedicated to web designers and web programmers.

Targeting multiple devices

When you open a URL in Blisk you’ll see the web page twice. On the right-hand side, there’s the desktop view you expect. On the left-hand side, you see the same page in a simulated smartphone. As you can see in the screenshot, both views are remarkably different. It’s great to be able to see both views at a glance. This enables you to truly adopt the “mobile first!” slogan. Even better: if you scroll one of the windows, the other window scrolls simultaneously. Thus you can compare both the look and the feel of the two displays.

Blisk supports a number of devices out of the box. It doesn’t cover every smartphone and every tablet on the market, but the selection seems to cover almost every use case.

Multiple inspect views

Seeing multiple devices at a glance is just a start. You can also open device-specific developer tools and keep them open simultaneously. That, in turn, allows you to find differences between devices a lot faster than with the traditional single window.

Limitations

Being a browser based on Chromium, Blisk almost certainly has an annoying limitation: it doesn’t simulate the device completely. It only modifies the screen estate and replaces the mouse pointer with a large finger touch point. What it does not do is simulate the rendering engine. If there’s an incompatibility because – say – Safari implements a feature differently than Chrome does, you’ll still miss it with Blisk. You still need to test your application on the real device.

Automatic reload

That’s another feature I’m told is very useful. You can configure Blisk to watch certain files or folders. When you edit and save a file, Blisk automatically reloads the page. I suspect that’s one of those features that are useful to some and annoying to others, so it’s probably good news that you can switch it off. However, if your application supports reloading, I imagine it’s a real productivity booster.

Code analysis

Currently, that’s only a promise. The module providing static code analysis is still under development. I’m looking forward to it. Integrating code analysis into your browser (i.e. your runtime environment) might make you aware of problems much earlier than with traditional approaches such as SonarQube.

Screen shots

Another feature that’s still under development. However, I doubt it’s a killer feature for me. Tools like Greenshot integrate so nicely into my PC that I’ve never considered taking screenshots awkward. So I’m curious – maybe the developers have a clever idea I missed?

Integration with other tools

That’s something offering more potential to boost productivity. Blisk plans to provide integration with third-party tools, such as bug trackers and project management tools. Unfortunately, this is another module that’s still under development.

Licence

Blisk has a long licence agreement, but basically it’s free both for commercial use and for personal use as long as you don’t modify it.

Wrapping it up

As long as you are aware of the limitations, Blisk is a great tool for every web designer and web developer targeting multiple devices. At the time of writing, many interesting features are still missing. However, what’s already there looks interesting enough to adopt Blisk for development.


Dig deeper

Blisk project page
Blisk license agreement

How to Use Advanced Search Expressions with Mojarra, MyFaces or OmniFaces

I consider the advanced search expressions of PrimeFaces and BootsFaces tremendously useful. In particular, @next and @previous make it a lot easier to move input fields around on the screen or from one screen to another during the prototype phase of your project. The only problem with the search expressions is that they only work with PrimeFaces and BootsFaces components. If you also use base components like <h:inputText />, you’re back to using ids and the four search expressions defined by the JSF standard (@form, @this, @all and @none).

JSF 2.3 may change that. It’s not settled yet, but there’s an ongoing discussion about including a similar search expression framework in JSF 2.3.

But what about now?

That’s a bright outlook. Problem is, most of us don’t live in the future. Quite the contrary: from a technical point of view, most of us developers live in the past. I know of many projects currently being developed with JavaEE 6, a technology presented seven years ago. In other words: even if Oracle accepts our proposal for JSF 2.3, that doesn’t help many developers before 2020. So the challenge is to find a way to add the search expressions to JSF 2.0, 2.1, and 2.2.

How to do it

Cutting a long story short: I’ve found such a solution. It’s not exactly elegant, but it does the trick. The nice thing about my solution is that you don’t need to add the BootsFaces library to your project just to use the search expressions. You only need a single package of BootsFaces. Most likely I’m going to publish this package on Maven Central after running some more tests.

Actually, it’s fairly simple to use the search expressions in non-BootsFaces components. Every attribute of a JSF component supports EL expressions. This includes the attributes for, render and execute. So I’ve defined a JSF bean allowing you to resolve the advanced search expressions:

<h:message for=
 "#{searchExpressionResolverBean.resolve(component, '@previous')}" />

Of course, that’s a far cry from what we really want:

<h:message for="@previous" />

Luckily, there’s a second option we can use to simplify things. JSF allows us to register a “facelets decorator”. This class is called when the XHTML file is loaded and parsed. The decorator can inspect each tag of the view and replace it with another tag. That’s how the HTML5-friendly markup of JSF 2.2 has been implemented: JSF 2.2 recognizes an HTML input tag and replaces it with a JSF h:inputText if it carries a JSF attribute. We can use the same trick to replace the string "@previous" with "#{searchExpressionResolverBean.resolve(component, '@previous')}". In other words, our decorator translates the search expressions into something core JSF can understand at load time.
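Just to illustrate the idea, here’s a hypothetical, heavily simplified sketch of such a decorator. The real class is net.bootsfaces.expressions.decorator.SearchExpressionsTagDecorator; in particular, rebuilding the tag requires your own TagAttributes implementation, which I’ve left out.

import javax.faces.view.facelets.Tag;
import javax.faces.view.facelets.TagAttribute;
import javax.faces.view.facelets.TagDecorator;

public class SearchExpressionsDecoratorSketch implements TagDecorator {

    @Override
    public Tag decorate(Tag tag) {
        // naive check: rewrite every "for" attribute starting with "@"
        // (the real decorator is smarter and also handles render and execute)
        TagAttribute forAttribute = tag.getAttributes().get("for");
        if (forAttribute != null && forAttribute.getValue() != null
                && forAttribute.getValue().startsWith("@")) {
            String el = "#{searchExpressionResolverBean.resolve(component, '"
                    + forAttribute.getValue() + "')}";
            return replaceAttribute(tag, "for", el);
        }
        return null; // null tells the facelets compiler to keep the original tag
    }

    private Tag replaceAttribute(Tag tag, String name, String value) {
        // omitted: building the new Tag needs a TagAttributes implementation
        // (the JSF API doesn't ship one) – see the BootsFaces sources for a complete version
        throw new UnsupportedOperationException("see BootsFaces for the full implementation");
    }
}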

How to activate the search expressions

Unless you’re already using BootsFaces, copy the folder https://github.com/TheCoder4eu/BootsFaces-OSP/tree/master/src/main/java/net/bootsfaces/expressions to your project. This folder contains the entire search expression resolver library, and it also contains the decorator in a subfolder.

If you want to benefit from the simplified syntax of the decorator, you have to activate it by adding a few lines to your project’s web.xml. If you don’t use BootsFaces, that’s:

    <context-param>
      <param-name>javax.faces.FACELETS_DECORATORS</param-name>
      <param-value>
          net.bootsfaces.expressions.decorator.SearchExpressionsTagDecorator
      </param-value>
    </context-param>

If you use BootsFaces, chances are you also want to benefit from the full power of the relaxed HTML5-like coding style of BootsFaces. If so, you have to register a different, more powerful decorator:

    <context-param>
      <param-name>javax.faces.FACELETS_DECORATORS</param-name>
      <param-value>
          net.bootsfaces.decorator.BootsFacesTagDecorator
      </param-value>
    </context-param>

Limits of the decorator

I encountered a surprising limitation of the decorator when I tried to use it with f:ajax. Both the render and the execute attributes are translated by the decorator, but the translations are ignored – at least by Mojarra (I haven’t tried MyFaces yet). That’s fairly annoying, because it forces you to use my first, verbose proposal. You can abbreviate the long name of the JSF bean SearchExpressionResolverBean by deriving a bean with a shorter name, but there’s no way to reach the simple syntax we really want.
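In practice, that means writing the EL call by hand inside f:ajax. A hypothetical example using the resolver bean shown above (whether the expressions resolve correctly may depend on when your JSF implementation evaluates them):

<h:commandButton value="Save">
    <f:ajax execute="#{searchExpressionResolverBean.resolve(component, '@form')}"
            render="#{searchExpressionResolverBean.resolve(component, '@next')}" />
</h:commandButton>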

Still, being able to use advanced search expressions with frameworks like Mojarra, MyFaces, RichFaces, or ButterFaces that haven’t been designed to be extended that way is nice.

Benefits for users of BootsFaces and PrimeFaces

Every once in a while, even users of huge libraries like PrimeFaces (157 components) and BootsFaces (68 components) need a component that’s not part of the library. A popular way to build such components is to build them from scratch on top of the base components. Another popular way is to add a library like OmniFaces, which isn’t aware of the search expressions either, but does contain components like <o:commandScript /> that may benefit from them.

By the way, if you prefer PrimeFaces over BootsFaces, you can also use their search expression framework. I didn’t prepare code to use it, but it’s fairly simple. The source code is at https://github.com/primefaces/primefaces/tree/master/src/main/java/org/primefaces/expression. The key class is the SearchExpressionFacade. There’s no facelets decorator, but I’m sure you’ll be able to migrate the BootsFaces facelets decorator to PrimeFaces within a few hours.
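For what it’s worth, resolving an expression programmatically with PrimeFaces seems to boil down to a single static call. Treat this as an unverified sketch based on my reading of the sources – double-check the method signature against your PrimeFaces version:

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import org.primefaces.expression.SearchExpressionFacade;

public class PrimeFacesExpressionExample {
    // Resolves "@parent" relative to the given component and returns its client id.
    // Assumption: resolveClientId(FacesContext, UIComponent, String) exists in your version.
    public String resolveParentId(UIComponent source) {
        FacesContext context = FacesContext.getCurrentInstance();
        return SearchExpressionFacade.resolveClientId(context, source, "@parent");
    }
}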

Benefits for the rest of the JSF world

Well, that’s fairly obvious: you couldn’t use the advanced search expressions before. Now you can. More likely than not, you don’t see the use case yet. But that shouldn’t stop you. Just give @next, @previous, @parent, @after, and @styleClass(.) a try. Before you know it, you’ll be addicted. Using fixed ids may be the most efficient way with respect to CPU usage, but at the same time, it’s very inflexible and limiting if your JSF views change frequently. The advanced search expressions of PrimeFaces and BootsFaces make moving fields from one panel to another or from one view to another a breeze.

Wrapping it up

JSF is very flexible, so it’s possible to add search expressions to JSF using a couple of tricks. However, the f:ajax tag of JSF seems to ignore the changes made by the facelets decorator, so the result isn’t as nice as intended. Still, it’s nice to be able to use the advanced search expressions in JSF 2.2 or below.

Let’s Make JavaScript Development Simple Again!

Recently, I’ve grown increasingly uneasy about the current state of JavaScript development. I’m a full-time Java developer who tries to get familiar with JavaScript. Over the years, JavaScript has become a really nifty programming language. There’s hardly anything you can’t do with JavaScript. It’s one of the most exciting areas of development in 2016.

Only… each time I start to read a tutorial about JavaScript, I feel stupid. I know I’m not stupid – at least when it comes to programming – so there must be a different explanation. I freely admit I’m lazy. That’s a valid explanation for why I can’t cope with most tutorials.
Continue reading

Newsflash: How to Reduce Tomcat Startup Time

Modern application servers start very fast, but sometimes you still need more performance, especially if you’ve got a huge and complex application. Gavin Pickin describes an interesting option on his blog. For some reason, he suggests adding an option that’s already the standard configuration of Tomcat 8 (at least on my machine), but it might be interesting nonetheless.

Thing is, Tomcat scans every jar file it finds at startup time. This is necessary for frameworks like Servlet 3.0, CDI and JSF, which enable you to add some magic to your application by simply annotating a class. However, many jar files don’t contain such annotations, so it’s a waste of time to have them scanned. Adding such a library to Tomcat’s skip list gives your application a boost – especially if you skip not one, but many libraries.
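For the record, on Tomcat 8 the skip list lives in conf/catalina.properties. A sketch of what an entry might look like – the jar name patterns are just examples, add the libraries your application actually ships:

    # conf/catalina.properties (Tomcat 8): add your libraries to the existing jarsToSkip entry.
    # Jars matching these patterns are excluded from annotation scanning at startup.
    tomcat.util.scan.StandardJarScanFilter.jarsToSkip=\
        commons-*.jar,\
        log4j*.jar,\
        guava*.jar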

Read the full story at Gavin Pickin’s blog.

Adding Type Inference to Java: Good or Evil?

My previous post raved about the simplicity type inference is going to bring to Java. I love the idea. Java is such a verbose, ceremonious language. Little wonder so many developers prefer dynamically typed languages, which seem to be so much simpler to use until you write a huge enterprise application. Java’s proposed type inference is strongly and statically typed, so it’s going to make life simpler without introducing problems. JEP 286 is good news, indeed!

Type inference obfuscates types

A fine mess it is, Dave Brosius answered. Consider this code. It’s wrong, but can you spot the error?

public boolean foo(Set<Long> ids) {
    val myId = getId();
    return ids.contains(myId);
}

In fact, I couldn’t until Dave helped me, providing the solution: getId() returns a String, so the contains method will never find the id.

Well. On the one hand Dave is right: the val keyword obfuscates the fact that we’re talking about a String, so it’s a potential source of confusion. You’ll spot the bug immediately with the Java 8 version of the code:

public boolean foo(Set<Long> ids) {
    String myId = getId();
    return ids.contains(myId);
}

This source code clearly reveals that we’re comparing Strings with numbers, which is unlikely to work (at least in the Java language). Readability matters!

… but …

On the other hand, the example is all wrong. It’s deliberately misleading. I, for one, was an easy victim because I associated the ID of the code snippet with a database primary key, and my primary keys are always numeric. I prefer to have them generated by a database sequence.

Several years ago many developers adopted the UUID as an alternative synthetic primary key. From this perspective, I shouldn’t have been surprised by the fact that getId() returns a String. But then, why don’t we call the method getUUID()? Readability matters!

Another interesting aspect is that JEP 286 has been carefully designed to minimize readability problems. In my eyes it’s too conservative, but Dave sought and found a weak spot of the Java API that doesn’t work well with type inference. Even though Set is a generic type, the contains() method doesn’t care about the type. It simply accepts an arbitrary Object as its parameter, allowing us to search for a String even though it should be clear from the context that there are no Strings in the set.

My personal summary: granted, you have a point, Dave. Type inference might obfuscate the type, and that’s a bad thing. But if used carefully, this shouldn’t be too much of a problem. I can’t remember anyone complaining about type inference with Scala, Kotlin or Ceylon. Well, maybe I’ve heard people complain about Scala’s type inference. Scala seems to attract very clever guys who love to write their programs in a very, say, academic way. Scala’s type inference is very sophisticated. Sometimes this leads to very sophisticated Scala programs, and that’s one of the key reasons why Scala never gained traction. The Java enhancement proposal is much more conservative – probably precisely because of the Scala experience.

By the way, as Phil Webb pointed out, using the val keyword isn’t the real source of the problem. Using the “inline” refactoring causes the same obfuscation problem:

public boolean foo(Set<Long> ids) {
    return ids.contains(getId());
}

IDEs to the rescue

When I published this article on reddit, the first comment pointed out that sooner or later IDEs will be able to display the type of a local variable, even if it is declared as a var or val. That doesn’t solve every readability problem, because most likely the type is only displayed in a tooltip. In other words: most of the time it is invisible. But even that tooltip will certainly help a lot. In fact, the tooltip is an excellent example of my point: developers aren’t bothered with technical details all the time, but they can easily look them up when they need them.

Another example of obfuscation

Mark Struberg contributed another potential example of trouble introduced by var and val:

val myValue = a().b().c().d();

Which type does myValue have? What happens when the return type of d() changes? Plus, in real life code, the chain of calls may not be quite as simple. It may be hidden among hundreds of lines, each looking deceptively innocent.

Granted, that’s the sort of thing that is going to happen, and it’ll be the source of countless overtime hours. But the real question is: which is worse? Actually, we could even run an experiment to settle the question. Does type inference increase the number of extra hours, or does it decrease them?

Lessons learned from other languages

Most modern programming languages make use of type inference. There’s a lot of discussion about whether static or dynamic typing is more useful, but so far, I’ve heard few complaints about type inference in strongly and statically typed languages like Scala, C#, Ceylon or Kotlin. I’m sceptical about dynamic typing, but the current JEP 286 is limited to strong and static typing, so judging from the experience of other language communities, I suppose adding limited type inference to Java is going to be a success story.

Mind you: the examples of Dave and Mark have been carefully designed to expose the weaknesses of the JEP. That’s good, because they made us aware of a problem we might have missed otherwise. But then, why don’t we rely on the common sense of programmers? Usually they know how much abstraction they and their teammates can cope with. Adding type inference to Java doesn’t forbid us to use explicit types. So let’s give it a try, and use techniques like code review or pair programming to make sure our code doesn’t get too sophisticated!

Benefits of type inference

During our discussion on Twitter, Phil Webb exclaimed he’s sometimes afraid of extracting a term into a local variable because of the scary type declaration. Well, this rings a bell with me. I often use this particular refactoring to make code more readable. Extracting a term into a local variable allows us to give it an expressive name, and modern JVMs are clever enough to inline the variable again automatically. But if this local variable is drowned out by a verbose type declaration, little is won. In this particular case, using val or var might improve readability a lot. More often than not, we aren’t interested in the type of the variable (that’s a technical issue), but we are interested in the role the variable plays (which is a business issue).
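Here’s a quick illustration of the point – a minimal sketch where lines is assumed to be a List<String>:

// With the explicit type, the declaration drowns out the expressive name:
Map<Integer, List<String>> linesByLength =
        lines.stream().collect(Collectors.groupingBy(String::length));

// With the proposed inference, the business name is what catches the eye:
val linesByLength = lines.stream().collect(Collectors.groupingBy(String::length));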

That’s the general reason why type inference may make the Java language more useful. It gives you an option to hide technical stuff when it’s irrelevant to the reader. As they say, code that’s not there is code you don’t have to understand. It goes without saying that there are corner cases. Sometimes code is so clever and compact that it’s difficult to decipher. It’s like learning Latin: at school, we used to ponder half an hour just to decipher a single sentence. Obviously, that’s the wrong approach. But if used wisely, type inference may add to the readability of the source code.

Like I said: Readability matters!

JEP 286 addresses Type Inference in a very conservative way

Given that Brian Goetz (or whoever is the author of this particular line) calls type inference a “non-controversial feature”, there’s quite a lot of controversy about the topic. But if you’re afraid of type inference, read the JEP. It’s a fast read – maybe five to ten minutes – and it’s going to put your mind at ease. The authors have limited the scope of type inference considerably.

Target typing vs. left-hand-side type inference

In part, that’s due to target typing. The adoption of lambda expressions brought target typing to the Java ecosystem. This is also known as right-hand-side type inference, because it adds type inference to the right-hand side of assignments. JEP 286 adds type inference to the left-hand side of the assignment operator. Obviously, it’s difficult to have both at the same time. This might turn out to be a problem for Java programmers.
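A small example may make the difference clearer. The first declaration is Java 8 target typing; the second is a sketch of the syntax proposed by JEP 286:

// Target typing: the Comparator<String> on the left tells the compiler
// how to interpret the lambda on the right.
Comparator<String> byLength = (a, b) -> Integer.compare(a.length(), b.length());

// JEP 286: the initializer on the right determines the type of the
// variable on the left (inferred as ArrayList<String>).
var names = new ArrayList<String>();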

It’s your responsibility!

But then, with power comes responsibility. Type inference doesn’t mean you are forced to convert every type declaration to a var or val declaration. It’s up to you. If you think var is useful, use it. Otherwise, use the traditional approach, which may be as verbose as

Map<Date, Map<Integer, Double>> map 
     = new HashMap<Date, Map<Integer, Double>>();

In my eyes, this is a scary example: you don’t learn anything about the business value the variable adds to your program. You only know it’s a map, and reading the type declaration, you learn twice that it’s a map of maps. It’s probably better to use val and an expressive identifier:

val interestRateTableByDate 
     = new HashMap<Date, Map<Integer, Double>>();

It’s still a difficult line, but if you’re working in the finance industry, you might at least guess that this is a collection of interest rate tables (which differ depending on how much money you invest). Interest rates are volatile, so you need another dimension – the date – to determine the interest rate.

Granted, you can also do this without val, but then the variable name sort of drowns in technical stuff:

Map<Date, Map<Integer, Double>> interestRateTableByDate 
     = new HashMap<Date, Map<Integer, Double>>();

Dig deeper

JEP286 (adding local variable type inference to Java)
Binary builds for JEP 286 (use at your own risk – I checked neither for viruses, nor did I try the files myself!)
My previous article on the proposal to add local variable type inference to Java
Lukas Eder on local variable type inference
Pros and cons of JEP 286 by Roy van Rijn


Java May Adopt (Really Useful) Type Inference at Last

I’m all for simplicity – and that’s why I’m complaining so much about Java and many Java libraries. Java is such a ceremonious language, and it attracts so many people who love ceremonies and design their libraries to be used in a very elaborate way. More often than not, this results in a lot of boring keystrokes. That’s a pain to write, but it’s also painful to read. As you may or may not know, I love complaining, but I also love to do something about the pain points. Just think of BootsFaces and AngularFaces, my attempts to simplify JSF programming. However, there’s one domain I have very little influence on: core Java. And that’s a pity, because that’s a domain that influences everybody. Every Java programmer, that is.

Can you imagine my joy when I read there’s an official proposal to introduce type inference to the Java language? Even better, the author is Brian Goetz, and the proposal calls type inference a “non controversial feature”. Better still, there’s also a survey allowing you to influence the development.

Update March 13, 2016: As Brian Goetz mentions in his comment below, my wording is not entirely correct. Java has had some type inference since Java 5. The diamond operator of Java 7 is type inference on generic types, and Java 8 introduced a fairly sophisticated and useful kind of type inference called target typing. However, adding local variable type inference to the language takes the idea to a whole new level, at least from the programmer’s point of view. I believe it can be used much more often than target typing and the diamond operator.

What is type inference?

BeyondJava.net seems to attract people who work with UIs, so chances are you’ve already worked with JavaScript. In JavaScript, you don’t declare the type of a variable, but you simply declare it by writing

var x = 6;

That’s a nice and simple approach to dealing with variables. You don’t have to declare x an integer. Mind you, the number “6” is an integer, so the type of the variable can easily be inferred from the context. In a nutshell, that’s what type inference is: don’t force your programmers to jot down trivia that can be deduced just as well from the context.

Actually, I’ve mentioned JavaScript, so if you’re pedantic, my example is an example of what type inference is not. JavaScript doesn’t have a static type concept at all. Modern JavaScript VMs employ type information internally in order to generate efficient code, but that means they have to go to remarkable lengths to support the flexibility of the JavaScript language. JavaScript allows you to change the type of every variable over time: what starts as an integer may end up as a string.

It goes without saying that’s not the Java way. The Java way is to choose a type and to stick to it. It’s either an integer, or it’s a string. It can’t be both. That’s what type inference means: you derive the type from the context. After that, the type is immutable. Don’t confuse type inference with dynamic typing. Both approaches look similar from the outside, and they pursue similar goals, but statically typed type inference is much stricter. Does that sound like a disadvantage? It’s not. For one, static type systems help you discover many mistakes at compile time. If you’ve ever worked on an enterprise-scale program – say 50,000 lines of code or more – you’ll probably love this feature. Second, the compiler can generate much more efficient code if it knows the type for sure. The resulting program is smaller and faster.
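Translated to the proposed Java syntax, the example looks deceptively similar to the JavaScript version, but the inferred type is fixed once and for all (a sketch of the JEP 286 semantics):

var x = 6;       // x is inferred as int – once and for all
// x = "six";    // would be a compile-time error: the type can't change later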

Isn’t it already part of the language?

I’ve written one or two articles about type inference in the Java language. In fact, Java 8 has an interesting approach to type inference based on target typing.

Well. Target typing simplifies the Java language a lot, especially with lambda expressions. So it’s definitely something to get excited about. But – like the diamond operator – it infers the type of the expression on the right-hand side of the assignment.

Thing is, the right hand side of the assignment usually is the side defining the type of the variable.

Java’s target typing tries to do it the other way round. This approach works surprisingly well, but it’s a far cry from the simplicity languages like Scala offer. To make the Java language much simpler, we have to infer the type on the left-hand side of the assignment from the expression on the right-hand side of the “=” character.

The scope of the current JEP

Java is not Scala, so it’s not surprising the scope of JEP 286 is much more limited than the Scala approach. But that’s not a bad thing: as much as I love the Scala language, I’ve seen Scala programs that are downright scary, so it pays to introduce Scala features to Java slowly and carefully. The current JEP only covers local variables with initializers. It does not cover methods. Nor does it cover object attributes or class variables. That sounds like a sensible choice to me. In particular, method-level type inference is often kind of scary. Everything’s fine as long as the method consists of one or two lines. Thing is, type inference doesn’t say anything about the size of the method. It still has to work if the method is 6,000 lines long. (Before you ask: I’ve already seen a method consisting of more than 6,000 lines.) That’s what Scala does, but I daresay programmers find it hard to infer the type of such a method from the context. JEP 286 avoids this kind of problem by limiting the scope to local variables. In fact, JEP 286 is very restrictive. It forbids type inference in most corner cases.
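To make that scope concrete, here’s a sketch of what the proposal covers and what it deliberately leaves out:

// Covered: local variables with an initializer
var list = new ArrayList<String>();     // inferred as ArrayList<String>

// Not covered – these stay explicit:
// var count;                           // no initializer, nothing to infer from
// private var cache = new HashMap();   // fields and class variables
// public var findCustomer(long id) {}  // method return types and parameters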

In part, that’s a result of target typing. Java already goes to great lengths to support type inference on the right-hand side of the assignment (aka target typing). Adding type inference on the left-hand side is ambitious. If I’m not mistaken, Scala does a good job of supporting both approaches, but the Java way is slightly different. It’s not about showing off. Neither is it about showing what can be done with today’s technology. Instead, it’s about supporting big enterprises. That, in turn, makes the Java language a tad conservative. Backward compatibility is a virtue. It doesn’t pay to add a feature tentatively – you’ll never be able to remove it again. So every change to the Java language has to be thought through thoroughly.

Still, the effect of the type inference proposal is impressive. According to JEP 286, the vast majority of local variables can make use of type inference. No less than 83.5% of the local variables could have their type inferred out of the box. With some precautions, the number might rise above 90%. Ignoring variables initialized with null, up to 99% of the local variables in the JDK’s source code have a type that can be inferred from the context.

Wrapping it up

I’d almost abandoned all hope for progress of the Java language, but obviously, that was premature. Chances are one of the key features that make languages like Scala attractive to me is going to be added to Java. That’s bad news for Scala: such a beautiful language, but it’s never going to take off! On the other hand, it’s good news for the Java community – in other words, good news for the vast majority of programmers using the JVM. And I guess innovative languages like Kotlin, Groovy, Scala, Ceylon or C# had to lead the way and prove the value of the concept before the feature could be introduced to the Java language. After all, we all want the Java language to remain the rock-solid foundation of our industry, don’t we?


Dig deeper and vote!

A survey allowing you to influence the development
official proposal to introduce type inference
Binary builds for JEP 286 (use at your own risk – I checked neither for viruses, nor did I try the files myself!)
My follow-up article: is type inference good or evil?
Lukas Eder on local variable type inference
Pros and cons of JEP 286 by Roy van Rijn


The Art of Efficient Programming in the Agile Ages

Heck, this article is going to make me sound like my own grandfather. Everything was better in the old days! But I’m still convinced this article tells a story, and it seems important to me. At least, it’s a thought I’ve been nurturing for years. By the way, this article does not claim everything was better in the early days of programming. Can you imagine programming without the help of the internet? You had to buy actual books to learn more about programming, and those books were hard to get hold of. Editors didn’t have autocompletion. Monitors used to flicker, causing a lot of eye strain and headaches. There wasn’t even such a thing as StackOverflow. No, programming was a lot worse in almost every way. I’m happy to have arrived in the agile ages.

But still: Did you ever think about what gets lost in agile projects?

Simple. Agile projects are about producing software efficiently. But they are not about writing efficient software. Writing efficient software costs time and money, and that’s precisely what agile programming tries to avoid.
Continue reading

The Only One Dependency You Need in JavaEE 7?

Adam Bien is a charismatic evangelist of JavaEE 7. He’s got something to say, and he always makes me think. But that doesn’t mean I always agree with him. In his latest blog post, he advocates using a single, simple dependency for JavaEE 7. All you have to do is install a JavaEE 7 server, add this dependency to your application’s pom.xml, and you’re good to go.

You really are. And it’s tempting:

<dependency>
	<groupId>javax</groupId>
	<artifactId>javaee-api</artifactId>
	<version>7.0</version>
	<scope>provided</scope>
</dependency>

As cool as Adam’s recipe sounds… Well, Adam, I strongly disagree. In a way, you’re absolutely right. All you ever need to use JavaEE7 now is that single, simple dependency you mention in your post. But what about the future?

The two alternatives

Before I continue, let’s summarize the two alternatives quickly. Adam Bien’s suggestion works if you’re running your application on a JavaEE7 application server. That’s a pleasant experience. The server is made for JavaEE7, so it’s unlikely that you’ll run into trouble like configuration mistakes or missing libraries. The drawback is that it’s hard to update anything. I know of at least one application server that makes updating almost impossible.

The alternative is to put every JavaEE dependency you need into your *.war file and deploy it on a Tomcat or Jetty. This approach means you’re responsible for configuring the libraries yourself. However, you can update them without having to care about the application server.
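Just to make the comparison concrete: the alternative looks roughly like this. It’s a hypothetical excerpt with example version numbers, bundling Mojarra (JSF) and Weld (CDI) into the *.war instead of relying on the server:

<!-- JSF implementation (Mojarra), bundled with the application -->
<dependency>
	<groupId>org.glassfish</groupId>
	<artifactId>javax.faces</artifactId>
	<version>2.2.13</version>
</dependency>
<!-- CDI implementation (Weld) for servlet containers like Tomcat or Jetty -->
<dependency>
	<groupId>org.jboss.weld.servlet</groupId>
	<artifactId>weld-servlet</artifactId>
	<version>2.3.4.Final</version>
</dependency>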
Continue reading

Nomin – Mapping Java Objects Without the Pain

Many best practices in the Java world involve mapping Java objects to other Java objects. More often than not, this is downright stupid. Both objects are more or less identical, but you have to write a lot of code to map object A to object B. Plus, you have to write the backward mapping, too. It goes without saying that this is a very error-prone task. Not to mention it’s boring as hell, making the task even more error-prone.

This is why I start to roll my eyes every time someone asks me to map objects. Granted, the underlying design pattern has its virtues. Decoupling the front-end code from the back-end code gives you a lot of flexibility. But do you need this flexibility? Even if you do, is it worth the pain?

Mapping objects automatically

Nomin to the rescue. It’s one of those little tools that come in handy. It allows you to write very compact Groovy scripts to map objects. Even better, if the attribute names of the objects are identical, Nomin is able to figure out how to map the objects without your help. You don’t have to write a script at all. All you have to do is invoke Nomin and have it map the objects for you:

NominMapper nomin = new Nomin();
EntityA entityA = nomin.map(entityB, EntityA.class);

Continue reading

About Owls, Nightingales and Clean Code

Sigh! Today I’ve been wading knee-deep through all-too-clean code. Code that might have been written for a textbook. Code following the style I taught at university – yeah, once I was young and naive enough to believe the textbooks without questioning them. Code that follows all the best practices. Code following the rules of the Clean Code Initiative.

Code that’s illegible.

Illegible?

Yeah, illegible. I love clean code – but believe it or not, code that’s all too clean is every bit as illegible as spaghetti code.
Continue reading

Eclipse Code Recommenders

Roughly a year ago, I attended a talk given by the very charismatic Marcel Bruch. Why, he asked, don’t we use the collective brains of the Java developer community? The Eclipse plugin written by his team does just that: Eclipse Code Recommenders observes what developers are doing in order to provide better code completions.

Eclipse autocompletion drives me crazy!

The idea is simple. Sooner or later, probably every developer goes nuts because Eclipse autocompletion suggests the same odd completions over and over again. For instance, some AWT classes are frequently at the top of the list. When I joined the Java world – that’s a long time ago, when Java 1.2 was new – AWT was already legacy, having been superseded by Swing. So why does it appear in the list at all? Let alone at the top?
Continue reading