BootsFaces 0.8.6 has been Released

This weekend, we’ve published a new version of BootsFaces. Basically, BootsFaces 0.8.6 is not a big deal. It fixes half a dozen bugs. The most annoying bug we’ve fixed was a compatibility problem with Internet Explorer.

The other important change is a breaking change correcting another breaking change introduced with BootsFaces 0.8.5. Starting with BootsFaces 0.8.6, the process attribute of AJAX requests is now 100% compatible with its PrimeFaces counterpart. We didn’t feel comfortable with introducing a second breaking change within merely two weeks, but at the end of the day, compatibility with PrimeFaces was more important. We want you to be able to use PrimeFaces along with BootsFaces, and that means we should align our APIs as much as possible.
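Assuming the attribute really mirrors its PrimeFaces counterpart, a partial submit might now look like this sketch. The bean, the IDs and the attribute values are made up for illustration:

```xhtml
<!-- Hypothetical form: 'process' is assumed to accept PrimeFaces-style
     search expressions such as @form, @this, @all, or component IDs. -->
<b:form id="customerForm">
    <b:inputText id="name" value="#{customerBean.name}" />
    <b:commandButton value="Save"
                     action="#{customerBean.save}"
                     process="@form"
                     update="@form" />
</b:form>
```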

Another error we’ve fixed was basically caused by our documentation. Until 0.8.1, you could use <b:image /> with both the src attribute and the name attribute. BootsFaces 0.8.5 added the standard JSF resource library, which changed the meaning of name from “ignored” to “points to a file within the resources folder”. This, in turn, broke the application of at least one developer. Since BootsFaces 0.8.6, the src attribute has priority. Plus, we’ve corrected the documentation: it no longer claims the name attribute is necessary.

There are a few more bugfixes, as you can see in our release notes. If you’ve already adopted BootsFaces 0.8.5, we recommend updating to 0.8.6. If you haven’t, we recommend updating to 0.8.6, too, because the new version contains a lot of improvements and bug fixes. In any case, we don’t expect any migration effort, with the possible exception of the process attribute.

Like always, we’d like to thank everybody who’s reported a bug or a feature request on our bug tracker or on StackOverflow.com. You’ve contributed to the success of BootsFaces, making it an even bigger success story than it already is!

Having said that, there’s only one word left to say:

Enjoy!

When Your Framework Gets in Your Way

These days, my framework of choice got spectacularly in my way. Don’t get me wrong: D3.js is a good framework. It was the right tool for the job. But it slowed my progress nonetheless.

You know, I wanted to write a particularly fancy presentation. Everybody knows PowerPoint slides by heart, and even a nifty presentation framework like reveal.js has become boring. But there was this nice little framework I always wanted to play with. And there’s this nice interactive sunburst diagram. What about converting it into an interactive presentation? And while we’re at it, why not translate the code to TypeScript?

This turned out to be a stupid idea. Luckily I’m not in a hurry, so I don’t mind, and the result is impressive indeed. But with a tight deadline, it would have been a disaster.

It’s not the first time I experienced this. Even the best framework comes at a price.
Continue reading

What’s New in BootsFaces 0.8.5?

What started as a small bugfix release ended as a full-blown feature release. If the sheer number of commits is an indicator, the new release is awesome: the previous release counted 599 commits. In the meantime, 240 commits went into the 0.8.5 version, give or take a few. Needless to say, this amounts to a lot of added functionality: 11 new components, countless improvements and – of course – bugfixes. Plus, we’ve migrated the relaxed HTML-like markup style from AngularFaces to BootsFaces.

Continue reading

Let’s Make JavaScript Development Simple Again!

Recently, I’ve grown increasingly uneasy about the current state of JavaScript development. I’m a full-time Java developer who tries to get familiar with JavaScript. Over the years, JavaScript has become a really nifty programming language. There’s hardly anything you can’t do with JavaScript. It’s one of the most exciting areas of development in 2016.

Only… each time I start to read a tutorial about JavaScript, I feel stupid. I know I’m not stupid – at least when it comes to programming – so the explanation must lie elsewhere. I freely admit I’m lazy. That’s a valid explanation of why I can’t cope with most tutorials.
Continue reading

NewsFlash: Lukas Eder’s JAX Talk And Turing Complete SQL

Did you know SQL is Turing complete? Obviously, it’s not. It’s a purely declarative language. It’s designed with a single purpose in mind: dealing with relational databases.

Surprisingly, modern SQL is Turing complete, indeed. This stunning claim has been proven by David Fetter. He’s also written an example that really blew me away: it’s possible to write a SELECT statement drawing a Mandelbrot set.

As I’ve mentioned before, SQL is a purely declarative language. So are HTML and XML. But there’s no way HTML could be Turing complete, so this insight came as a surprise to me. The key to Turing completeness is the combination of recursive common table expressions and window functions offered by most modern SQL databases. Recursion is one of the loopholes allowing a purely declarative language like SQL to sneak into the realm of procedural languages. That’s the same trick that makes PROLOG a Turing complete language.

I became aware of the phenomenon when I attended Lukas Eder’s great JAX talk. Today, he’s published a transcript of his talk, including the slides. Highly recommended!

Dig deeper

10 SQL Tricks That You Didn’t Think Were Possible by Lukas Eder

SQL Is Now Turing Complete
drawing a Mandelbrot set with PostgreSQL 8.4

TypeScript and ES2016 Decorators vs. Java Annotations

Consider this TypeScript snippet. It’s a very simple Angular2 component. It looks almost like a Java class, doesn’t it?

@Component({
    selector: 'my-app',
    template: '<h1>My First Angular 2 App</h1>'
})
export class AppComponent { }

In particular, the @Component looks like a Java annotation. In fact, it plays the same role as a Java annotation. It tells the Angular2 framework that this class is not an ordinary POJO, but something special: it’s a component, one of the building blocks of Angular.

Striking similarity

That’s exactly the same thing I already knew from the Java world. @Named converts a POJO into a CDI bean. Sometimes Java annotations even have the same fancy curly braces syntax:

@ResourceDependencies({ 
  @ResourceDependency(library = "bsf", name = "js/datatables.min.js", target = "body"),
  @ResourceDependency(library = "bsf", name = "css/datatables.min.css", target = "head")
})
public class DataTable extends UIData {}

So why do the JavaScript and TypeScript communities insist on calling their annotations “decorators”?

Decorators decorate!

The answer is simple: decorators are called decorators because that’s what they are. You can use them like annotations, but in reality, they are function calls. The Angular2 framework contains a function called Component() that returns another function. The @ of @Component tells TypeScript and EcmaScript 2016 to call the Component() function and to apply the function it returns to the class (or function) following the decorator. Usually decorators are used to add functionality to a function.

AOP

Does this ring a bell? Exactly – that’s more or less the same as aspect oriented programming. In fact, AOP is one of the classical use cases of Java annotations. Just think of @Transactional. This annotation surrounds a method with the glue code needed to execute it in a transaction. In other words, it adds some code before executing the method (opening a transaction) and it adds code after the method (committing the transaction, or rolling it back).
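The idea can be sketched in plain Java with a JDK dynamic proxy. Mind you, this is not how Spring or Java EE actually implement @Transactional – the service interface and the logging are made up for illustration; only java.lang.reflect.Proxy is real:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class TransactionProxyDemo {

    // Hypothetical service interface -- the name is made up for illustration.
    interface AccountService {
        void transfer(String from, String to, long amount);
    }

    // Wraps an AccountService in glue code that runs before and after each
    // call -- the essence of what an AOP annotation like @Transactional does.
    static AccountService transactional(AccountService target, StringBuilder log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.append("begin;");                    // open the transaction
            try {
                Object result = method.invoke(target, args);
                log.append("commit;");               // commit on success
                return result;
            } catch (Exception e) {
                log.append("rollback;");             // roll back on failure
                throw e;
            }
        };
        return (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[] { AccountService.class },
                handler);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        AccountService service =
                transactional((from, to, amount) -> log.append("transfer;"), log);
        service.transfer("alice", "bob", 100L);
        System.out.println(log);   // begin;transfer;commit;
    }
}
```

The transaction glue code wraps the business method without touching its source – exactly the before/after pattern described above.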

Oops!

Making the @ symbol an alternative way to call functions is a very elegant way to implement both static annotations and AOP. Plus, it should be possible to pass variables as additional parameters. In Java, annotations only support constants that can be resolved at compile time, and every once in a while, that’s a nuisance.

In hindsight, it’s hard to understand why the Java team added simple annotations instead of full-blown decorators to Java 5. I suppose they didn’t want to add a complex multi-purpose tool. Annotations can be used for exactly one purpose: they resemble a comment, and they rely on a framework to be useful. This framework can be the editor, the compiler or the runtime environment. Only a few years later the JavaScript world showed how useful decorators are.

Can decorators be added to Java?

Sure. Java 8 introduced default methods. So we could define a default method in an annotation type and execute it when the annotated method is called. We’d have to consider a couple of corner cases, though. Annotations can be applied to classes, static fields, class attributes and method parameters. We’d have to define useful semantics for each of these corner cases. But that’s not a big deal.

Wrapping it up

However, I don’t think this will ever happen. Annotations have become such an important part of the Java world that it’s unlikely anybody will take the risk. So, for now, we can only marvel at the way the language designers took the syntax of the annotation and converted it into something even more useful. By the way, the flexibility of the JavaScript language allows for some tricks Java developers can only dream of. Only – Java applications tend to be a lot bigger than JavaScript applications, so these dreams would quickly turn into a nightmare. Nonetheless, the article I’ve linked in the “dig deeper” section is very interesting, and it’s full of surprises – at least for a programmer like me spending most of their day in the Java ecosystem.

Dig deeper

Exploring EcmaScript 2016 decorators in detail

Newsflash: Concurrency Explained with Starbucks

It’s hard to get concurrency right! Especially for programmers who try to program it using a low-level language like, say, Java 5. In no time, you’ll run into all kinds of problems like deadlocks, race conditions and synchronization issues, just to name a few. That’s why I recommend using a language like Scala if you need to leverage the power of all your CPU’s cores.

Funny thing is that it’s surprisingly easy to explain concurrency in simple words. This article maps concurrency to a real-world example: serving coffee to customers is a good example of how to use multithreading to improve performance. Read the full story at particular.net. The article even explains advanced topics like out-of-order execution and speculative execution. Highly recommended!
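The coffee-shop metaphor translates almost literally into Java: baristas become threads, customers become tasks. A minimal sketch – the coffee-shop names are, of course, made up:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CoffeeShop {

    // Each barista is a thread in the pool; customers queue up as tasks.
    // Returns the number of customers that were served.
    static int serveCustomers(int customers, int baristas) throws InterruptedException {
        ExecutorService counter = Executors.newFixedThreadPool(baristas);
        AtomicInteger served = new AtomicInteger();
        for (int i = 0; i < customers; i++) {
            counter.submit(() -> {
                // "Brewing" takes a moment; with several baristas,
                // several coffees are brewed in parallel.
                try {
                    Thread.sleep(50);
                } catch (InterruptedException ignored) { }
                served.incrementAndGet();
            });
        }
        counter.shutdown();
        counter.awaitTermination(10, TimeUnit.SECONDS);
        return served.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(serveCustomers(4, 2) + " customers served");
    }
}
```

With two baristas, the four coffees are brewed in roughly half the wall-clock time a single barista would need – and there are no deadlocks, because the tasks share no mutable state beyond the atomic counter.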

Newsflash: How to Reduce Tomcat Startup Time

Modern application servers start very fast, but sometimes you still need more performance, especially if you’ve got a huge and complex application. Gavin Pickin describes an interesting option on his blog. For some reason, he suggests adding an option that’s already the standard configuration of Tomcat 8 (at least on my machine), but it might be interesting nonetheless.

Thing is, Tomcat scans every jar file it finds at startup time. This is necessary for frameworks like Servlet 3.0, CDI and JSF, which enable you to add some magic to your application by simply annotating a class. However, many jar files don’t contain such an annotation, so it’s a waste of time to scan them. Adding such a library to Tomcat’s skip list gives your application a boost, especially if you skip not one, but many libraries.
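In Tomcat 8, that skip list lives in conf/catalina.properties. A sketch of what such an entry might look like – the jar names below are examples, not a recommendation:

```properties
# conf/catalina.properties (Tomcat 8)
# Jars matching these patterns are never scanned for Servlet 3.0
# annotations, web fragments or TLDs at startup.
tomcat.util.scan.StandardJarScanFilter.jarsToSkip=\
        mysql-connector-java-*.jar,\
        guava-*.jar,\
        commons-*.jar
```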

Read the full story at Gavin Pickin’s blog.

AngularBeans: A Fresh New Take on AngularJS and JavaEE

I’m proud to have convinced Bessem Hmidi to present his AngularBeans framework at BeyondJava.net. AngularBeans is a fresh, new approach to integrating AngularJS with a JavaEE backend and has attracted some attention in the JavaEE world recently.
Bessem works at “Business & Decision” as a Technical Architect. He is also founder and leader of the “Esprit Tunisian Java User Group” and was part of the Research & Development program at ESPRIT (a Tunisian school of engineering). With over nine years of experience with Java-related technologies, especially Java EE, he is also a professional course instructor, an educator and an international speaker. Plus, he’s the founder of the Tunis conference JUG Day.

Introduction

The Java ecosystem doesn’t lack web frameworks. There’s a framework for every taste. No matter whether you prefer an action-based, component-driven or request-driven approach, there’s a framework for everyone.
But a closer look at what has happened in web development over the last few years forces us to accept the evidence: it’s the era of single-page applications, of client-side rendering and of the real-time web. Users have become much more demanding. They don’t put up with a simple black-and-white terminal screen any more. They’re familiar with highly responsive, interactive user interfaces which are both more fun and more efficient to use.
Those needs are the source of countless modern client-side JavaScript frameworks such as Ember.js, react.js, meteor.js and of course AngularJS. They all consider the server a “pure” service provider, only responsible for processing and providing data, as opposed to the classic “dynamic views generator” server. Many – maybe even most – developers consider server-side rendering a thing of the past. From a technical point of view, server-side rendering was inevitable in earlier times, but nowadays browser technology has evolved to a point that allows us to shift both rendering and business logic to the client side1.
Continue reading

  1. Footnote: there are exceptions. Client-side rendering is a bad idea when you’re concerned about your mobile device’s battery life

Newsflash: The Wrong Abstraction

I’ve just read an interesting article by Sandi Metz. I consider it important (and provocative) enough to dedicate a newsflash to Sandi’s article. Basically, my article is a commented link. Mind you, how often did you do stupid things during your working hours just because you valued existing code too highly?

That’s a common trap I’ve often watched other people (and myself, of course) fall into. Like Sandi says,

Existing code exerts a powerful influence. Its very presence argues that it is both correct and necessary.

Existing code tricks you into believing it’s good code

Every once in a while you come across complicated code, and you say to yourself: “Wow! Such sophisticated code! It has to be the essence of the wisdom of generations of programmers!” More often than not, it’s just clumsy code that generations of programmers contributed to without knowing what they did. It’s really hard to detect such a situation, because sometimes the code is complicated because it has to be. You never know.

Don’t be overly respectful!

However, sometimes it’s good not to be afraid of redundant code. Adding the wrong abstraction is worse than suffering from a few duplicated lines of code. My team often tried to unify objects that are identical in the eyes of the stakeholders. Consider the simple example of a mailing address. It’s simple, and it’s even standardized by your postal office (and probably the ISO and DIN committees, but I didn’t check). Thing is, when we tried to implement the unified address in my previous company, we started to hear “yes, it’s identical, except for…”.

That’s a surprisingly common example. Trying to merge concepts that are almost identical – differing only by a small margin – may be the first step to unhappiness.

You can refactor in two directions: back and forth

I’d rather you developed an instinct for when to introduce abstractions and – more importantly – when to unravel abstractions that have already been introduced in your code base. Refactoring isn’t a one-way road. Most of the time you go forward, but there’s a good reason why you can also apply refactorings the other way round. In fact, the notion of “forward” and “backward” is misleading. We’ve been taught to believe that reducing code duplication is good, which is why we instinctively attribute a direction to refactorings, but that’s a subjective attribution which may be right in most situations and wrong in others.

That said, I recommend you read Sandi’s article. It’s a fast-paced read that’s clearly worth your attention.

The State of Polyglot Programming in 2016: Mission Impossible?

Once upon a time, they used to say every programmer has to learn a new programming language each year, just to stay in shape. When I mention this to Java programmers, they usually start laughing. What used to be true in the 70s, maybe even in the 80s and the early 90s, has become utterly impossible in 2016. Modern programming ecosystems like JavaEE have reached such a level of complexity that it has become challenging to master a single programming language, let alone a second.

Learning the core language is not the problem

Thing is, it doesn’t suffice to learn the language. That’s easy. Once you’ve learned an object oriented programming language, you’ll quickly be fluent in any other object oriented language. Only shifting to another programming paradigm may give you a hard time. I, for one, have to admit I never managed to wrap my head around PROLOG. It’s not difficult, but it’s so different from what I’m usually doing. My rich experience, which is so useful at other times, prevents me from being a successful PROLOG programmer. But the vast majority of modern languages follow the procedural paradigm, spiced with object oriented programming or functional programming (and in some cases, both). So there’s not too much of a challenge from the language side per se.
Continue reading

Adding Type Inference to Java: Good or Evil?

My previous post raved about the simplicity type inference is going to bring to Java. I love the idea. Java is such a verbose, ceremonious language. Little wonder so many developers prefer dynamically typed languages, which seem to be so much simpler to use until you write a huge enterprise application. Java’s proposed type inference is strongly and statically typed, so it’s going to make life simpler without introducing problems. JEP 286 is good news, indeed!

Type inference obfuscates types

A fine mess it is, Dave Brosius answered. Consider this code. It’s wrong, but can you spot the error?

public boolean foo(Set<Long> ids) {
    val myId = getId();
    return ids.contains(myId);
}

In fact, I couldn’t until Dave helped me, providing the solution: getId() returns a String, so the contains method will never find the id.

Well. On the one hand Dave is right: the val keyword obfuscates the fact that we’re talking about a String, so it’s a potential source of confusion. You’ll spot the bug immediately with the Java 8 version of the code:

public boolean foo(Set<Long> ids) {
    String myId = getId();
    return ids.contains(myId);
}

This source code clearly reveals that we’re comparing Strings with numbers, which is unlikely to work (at least in the Java language). Readability matters!

… but …

On the other hand, the example is all wrong. It’s deliberately misleading. I, for one, was an easy victim because I associated the ID of the code snippet with a database primary key, and my primary keys are always numeric. I prefer to have them generated by a sequence of the database.

Several years ago many developers adopted the UUID as an alternative synthetic primary key. From this perspective, I shouldn’t have been surprised by the fact that getId() returns a String. But then, why don’t we call the method getUUID()? Readability matters!

Another interesting aspect is that JEP 286 has been carefully designed to minimize readability problems. In my eyes it’s too conservative, but Dave sought and found a weak spot of the Java API that doesn’t work well with type inference. Even though Set is a generic type, the contains() method doesn’t care about the type. It simply accepts an arbitrary Object as its parameter, allowing us to search for a String, even though it should be clear from the context that there are no Strings in the set.
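The weak spot is easy to demonstrate with plain Java 8 – no type inference involved. A lookup with the wrong type compiles happily and simply returns false:

```java
import java.util.HashSet;
import java.util.Set;

public class ContainsPitfall {
    public static void main(String[] args) {
        Set<Long> ids = new HashSet<>();
        ids.add(42L);

        // contains(Object) accepts any argument type, so this lookup
        // with a String compiles -- and silently returns false.
        System.out.println(ids.contains("42"));   // false
        System.out.println(ids.contains(42L));    // true
    }
}
```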

My personal summary: Granted, you have a point, Dave. Type inference might obfuscate the type, and that’s a bad thing. But if used carefully, this shouldn’t be too much of a problem. I can’t remember anyone complaining about type inference with Scala, Kotlin or Ceylon. Well, maybe I’ve heard people complain about Scala type inference. Scala seems to attract very clever guys, who love to write their programs in a very, say, academic way. Scala’s type inference is very sophisticated. Sometimes this leads to very sophisticated Scala programs, and that’s one of the key reasons why Scala never gained more traction. The Java enhancement proposal is much more conservative. Probably that’s exactly because of the Scala experience.

By the way, as Phil Webb pointed out, using the val keyword isn’t the real source of the problem. Using the “inline” refactoring causes the same obfuscation problem:

public boolean foo(Set<Long> ids) {
    return ids.contains(getId());
}

IDEs to the rescue

When I published this article on reddit, the first comment pointed out that sooner or later IDEs will be able to display the type of a local variable, even if it is defined as a var or val. That doesn’t solve every readability problem, because most likely the type of the variable is displayed in a tooltip only. In other words: most of the time it is invisible. But certainly even that tooltip will help a lot. In fact, the tooltip is an excellent example of my idea: developers aren’t bothered with technical details all the time, but they can easily look them up when they need them.

Another example of obfuscation

Mark Struberg contributed another potential example of trouble introduced by var and val:

val myValue = a().b().c().d();

Which type does myValue have? What happens when the return type of d() changes? Plus, in real life code, the chain of calls may not be quite as simple. It may be hidden among hundreds of lines, each looking deceptively innocent.

Granted, that’s the sort of thing that is going to happen, and it’ll be the source of countless overtime hours. But the real question is: which is worse? Actually, we could even run an experiment to decide the question. Does type inference increase the number of extra hours, or does it reduce them?

Lessons learned from other languages

Most modern programming languages make use of type inference. There’s a lot of discussion about whether static or dynamic typing is useful, but so far, I’ve heard few complaints about type inference in strongly and statically typed languages like Scala, C#, Ceylon or Kotlin. I’m sceptical about dynamic typing, but the current JEP 286 is limited to strong and static typing, so judging from the experience of the communities of other programming languages, I suppose adding limited type inference to Java is going to be a success story.

Mind you: the examples of Dave and Mark have been carefully designed to expose the weaknesses of the JEP. That’s good, because they made us aware of a problem we might have missed otherwise. But then, why don’t we rely on the common sense of the programmers? Usually they know how much abstraction they and their workmates can cope with. Adding type inference to Java doesn’t forbid us to use explicit types. So let’s give it a try, and use techniques like code review or pair programming to make sure our code doesn’t get too sophisticated!

Benefits of type inference

During our discussion on Twitter, Phil Webb remarked that he’s sometimes afraid of extracting a term into a local variable because of the scary type declaration. Well, this rings a bell with me. I often use this particular refactoring to make the code more readable. Extracting a term to a local variable allows us to assign an expressive variable name to the term, and modern JVMs are clever enough to inline the variable automatically again. But if this local variable is drowned by a verbose type declaration, little is won. In this particular example, using val or var might improve readability a lot. More often than not, we aren’t interested in the type of the variable (that’s a technical issue), but we are interested in the role the variable plays (which is a business issue).

That’s the general pattern why type inference may make the Java language more useful. It gives you an option to hide technical stuff when it’s irrelevant to the reader. As they say, code that’s not there is code you don’t have to understand. It goes without saying that there are corner cases. Sometimes code is so clever and so compact that it’s difficult to decipher. It’s like learning Latin. At school, we used to ponder for half an hour just to decipher a single sentence. Obviously, that’s the wrong approach. But if used wisely, type inference may add to the readability of the source code.

Like I said: Readability matters!

JEP 286 addresses Type Inference in a very conservative way

Given that Brian Goetz (or whoever is the author of this particular line) calls type inference a “non-controversial feature”, there’s quite a lot of controversy about the topic. But if you’re afraid of type inference, read the JEP. It’s a fast read – maybe five to ten minutes – and it’s going to put your mind at ease. The authors have limited the scope of type inference considerably.

Target typing vs. left-hand-side type inference

In part, that’s due to target typing. The adoption of Lambda Expressions brought target typing to the Java ecosystem. This is also known as right-hand-side type inference, because it adds type inference to the right hand side of assignments. JEP 286 adds type inference to the left hand side of the assignment operator. Obviously, it’s difficult to have both. This might turn out to be a problem for Java programmers.

It’s your responsibility!

But then, with power comes responsibility. Type inference doesn’t mean you are forced to convert every type declaration to a var or val declaration. It’s up to you. If you think var is useful, use it. Otherwise, use the traditional approach, which may be as verbose as

Map<Date, Map<Integer, Double>> map 
     = new HashMap<Date, Map<Integer, Double>>();

In my eyes, this is a scary example: you don’t learn anything about the business value the variable adds to your program. You only know it’s a map, and reading the type declaration, you learn twice that it’s a map of maps. Probably it’s better to use val and an expressive identifier:

val interestRateTableByDate 
     = new HashMap<Date, Map<Integer, Double>>();

It’s still a difficult line, but if you’re working in the finance industry, you might at least guess that this is a collection of tables of interest rates (which differ depending on how much money you invest). Interest rates are volatile, so you need to add another dimension – the date – to determine the interest rate.

Granted, you can also do this without val, but then the variable name sort of drowns in technical stuff:

Map<Date, Map<Integer, Double>> interestRateTableByDate 
     = new HashMap<Date, Map<Integer, Double>>();

Dig deeper

JEP286 (adding local variable type inference to Java)
Binary builds for JEP 286 (use at your own risk – I neither checked for viruses, nor did I try the files myself!)
My previous article on the proposal to add local variable type inference to Java
Lukas Eder on local variable type inference
Pros and cons of JEP 286 by Roy van Rijn


Java May Adopt (Really Useful) Type Inference at Last

I’m all for simplicity – and that’s why I’m complaining so much about Java and many Java libraries. Java is such a ceremonious language, and it attracts so many people who love ceremonies and design their libraries to be used in a very elaborate way. More often than not, this results in a lot of boring key strokes. That’s a pain to write, but it’s also painful to read. As you may or may not know, I love complaining, but I also love to do something about the pain points. Just think of BootsFaces and AngularFaces, my attempts to simplify JSF programming. However, there’s one domain I have very little influence on: core Java. And that’s a pity, because that’s a domain that influences everybody. Every Java programmer, that is.

Can you imagine my joy when I read there’s an official proposal to introduce type inference to the Java language? Even better, the author is Brian Goetz, and the proposal calls type inference a “non controversial feature”. Better still, there’s also a survey allowing you to influence the development.

Update March 13, 2016: As Brian Goetz mentions in his comment below, my wording is not entirely correct. Java has had some type inference since Java 5. The diamond operator is type inference on generic types, and Java 8 introduced a fairly sophisticated and useful type inference called target typing. However, adding local variable type inference to the language takes the idea to a whole new level, at least from the programmer’s point of view. I believe it can be used much more often than target typing and the diamond operator.

What is type inference?

BeyondJava.net seems to attract people who work with UIs, so chances are you’ve already worked with JavaScript. In JavaScript, you don’t declare the type of a variable; you simply declare the variable by writing

var x = 6;

That’s a nice and simple approach to deal with variables. You don’t have to declare it an integer. Mind you, the number “6” is an integer, so you can easily infer the type of the variable from the context. In a nutshell, that’s what type inference is: don’t force your programmers to jot down trivia you can deduce just as well from the context.

Actually, I’ve mentioned JavaScript, so if you’re pedantic, my example is an example of what type inference is not. JavaScript doesn’t have a static type concept at all. Modern JavaScript VMs employ a type concept in order to generate efficient code, but that means they have to go to remarkable lengths to support the flexibility of the JavaScript language. The type system of JavaScript allows you to change the type of every variable over time. What starts as an integer may end up as a string.

It goes without saying that’s not the Java way. The Java way is to choose a type, and to stick to it. It’s either an integer, or it’s a string. It can’t be both. That’s what type inference means: you derive the type from context. After that, the type is immutable. Don’t confuse type inference with dynamic typing. Both approaches look similar from the outside, and they pursue similar goals, but statically typed type inference is much more strict. Does that sound like a disadvantage? It’s not. For one, static type systems help you to discover many mistakes at compile time. If you’ve ever worked on an enterprise-scale program – say 50.000 lines of code or up – you’ll probably love this feature. Second, the compiler can generate much more efficient code if it knows the type for sure. The resulting program is smaller and faster.

Isn’t it already part of the language?

I’ve written one or two articles about type inference in the Java language. In fact, Java 8 has an interesting approach to type inference based on target typing.

Well. Target typing simplifies the Java language a lot, especially with Lambda expressions. So it’s definitely something to get excited about. But – like the diamond operator – it applies type inference to the right hand side of the assignment.

Thing is, the right hand side of the assignment usually is the side defining the type of the variable.

Java tries to do it the other way round. This approach works surprisingly well, but it’s a far cry from the simplicity languages like Scala offer. To make the Java language much simpler, we have to infer the type on the left hand side of the assignment from the expression on the right hand side of the “=” character.
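The two directions can be contrasted in a short snippet. Note that JEP 286 was eventually delivered in Java 10, where the keyword is var:

```java
import java.util.ArrayList;
import java.util.List;

public class InferenceDirections {
    public static void main(String[] args) {
        // Right-hand-side inference (the diamond operator, since Java 7):
        // the type argument of the ArrayList is inferred from the
        // declared type on the left.
        List<String> names = new ArrayList<>();

        // Left-hand-side inference (JEP 286, shipped as 'var' in Java 10):
        // the type of the variable is inferred from the initializer on
        // the right -- 'words' is an ArrayList<String>.
        var words = new ArrayList<String>();

        names.add("Duke");
        words.add("Java");
        System.out.println(names.get(0) + " " + words.get(0));   // Duke Java
    }
}
```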

The scope of the current JEP

Java is not Scala, so it’s not surprising the scope of JEP 286 is much more limited than the Scala approach. But that’s not a bad thing: as much as I love the Scala language, I’ve seen Scala programs that are downright scary, so it pays to introduce Scala features to Java slowly and carefully. The current JEP only covers local variables with initializers. It does not cover methods. Nor does it cover object attributes or class variables. That sounds like a sensible choice to me. In particular, method-level type inference often is kind of scary. Everything’s fine as long as the method consists of one or two lines. Thing is, type inference doesn’t say anything about the size of the method. It still has to work if the method is 6.000 lines long. (Before you ask: I’ve already seen a method consisting of more than 6.000 lines.) That’s what Scala does, but I daresay programmers find it hard to infer the type of such a method from the context. JEP 286 avoids this kind of problem by limiting the scope to local variables. In fact, JEP 286 is very restrictive. It forbids type inference in most corner cases.

In part, that's a result of target typing. Java already goes to great lengths to support type inference on the right hand side of the assignment (aka target typing). Adding type inference on the left hand side is ambitious. If I'm not mistaken, Scala does a good job of supporting both approaches, but the Java way is slightly different. It's not about showing off. Neither is it about showing what can be done with today's technology. Instead, it's about supporting big enterprises. That, in turn, makes the Java language a tad conservative. Backward compatibility is a virtue. It doesn't pay off to tentatively add a feature: you'll never be able to remove it again. So every change to the Java language has to be thoroughly thought through.

Still, the effect of the type inference proposal is impressive. According to JEP 286, the vast majority of local variables can make use of type inference. The type of no less than 83.5% of the variables could be inferred out of the box. With some precautions, the number might rise above 90%. Ignoring the variables initialized with null, up to 99% of the variables of the JDK's source code have a type that can be inferred from the context.

Wrapping it up

I'd almost abandoned all hope for progress of the Java language, but obviously, that was premature. Chances are one of the key features that make languages like Scala attractive to me is going to be added to Java. That's bad news for Scala: such a beautiful language, but it's never going to take off! On the other hand, it's good news for the Java community. In other words, it's good news for the vast majority of programmers using the JVM. And I guess innovative languages like Kotlin, Groovy, Scala, Ceylon or C# had to lead the way and prove the value of the concept before the feature could be introduced to the Java language. After all, we all want the Java language to remain the rock-solid foundation of our industry, don't we?


Dig deeper and vote!

A survey allowing you to influence the development
The official proposal to introduce type inference
Binary builds for JEP 286 (use at your own risk: I neither checked the files for viruses nor tried them myself!)
My follow-up article: is type inference good or evil?
Lukas Eder on local variable type inference
Pros and cons of JEP 286 by Roy van Rijn


Type-safe Navigation in JSF

What to make of this? In general, I eagerly embrace everything offering type safety. However, in this particular case, I'm not entirely convinced. Probably I won't use this feature in my projects any time soon. On the other hand, this stuff has so many options that it may easily pay off in larger projects, especially if some of the JSF views of your application require authorization.

I’m talking about the “type-safe navigation” offered by the JSF module of Deltaspike. Basically, that’s a Java file describing every JSF page of your application.

Back to the future!

In a way, it’s funny that Deltaspike, which aims to improve JavaEE in general and JSF in particular, adds such a feature. One of the key advantages of JSF 2 was to get rid of the navigation rules file of JSF 1.x. Deltaspike brings it back, albeit in a different way.
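To give you a feel for the concept, here's a plain-Java sketch of the idea (this is not Deltaspike's actual API): every view is represented by a constant, so a typo in a navigation target becomes a compile error instead of a broken link at runtime.

```java
// Hypothetical example: each JSF view of the application is a
// constant carrying its view id, so navigation targets are checked
// by the compiler.
public enum Pages {
    LOGIN("/pages/login.xhtml"),
    OVERVIEW("/pages/overview.xhtml");

    private final String viewId;

    Pages(String viewId) {
        this.viewId = viewId;
    }

    public String viewId() {
        return viewId;
    }
}
```

An action method would then return Pages.OVERVIEW.viewId() instead of a hand-written string.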

JUnit 5

When I heard about JUnit 5 this morning, I was baffled. How can you improve something that’s already perfect?

Java 8 and Lambda expressions

Apparently, you can. Actually, it's fairly easy to improve JUnit 4. The most obvious step is to migrate to Java 8. JUnit 5 fully embraces the new features of Java 8. In particular, you can now use Lambda expressions in assertions. As a consequence, JUnit 5 requires Java 8. It doesn't support older versions of the Java language. Given the mature status of JUnit 4, this isn't even a bold step. I suppose the conservative developers who insist on using a Java version older than Java 8 can cope with JUnit 4.
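To see why Lambda expressions matter for assertions, consider lazy failure messages: JUnit 5's assertTrue accepts a Supplier&lt;String&gt;, so a possibly expensive message is only built when the test actually fails. Here's a tiny re-implementation of the idea (not JUnit's actual code):

```java
import java.util.function.Supplier;

public class LazyAssertDemo {
    // Sketch of JUnit 5's lambda-based assertion: the failure message
    // is computed lazily, only on the failing path.
    static void assertTrue(boolean condition, Supplier<String> message) {
        if (!condition) {
            throw new AssertionError(message.get());
        }
    }

    public static void main(String[] args) {
        // Passes, so the message lambda is never evaluated.
        assertTrue(2 + 2 == 4, () -> "math is broken");
        System.out.println("ok");
    }
}
```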

The Art of Efficient Programming in the Agile Ages

Heck, this article is going to make me sound like my own grandfather. Everything was better in the old days! But I'm still convinced this article tells a story, and it seems important to me. At least, it's a thought I've been nurturing for years. By the way, this article does not tell you everything was better in the early days of programming. Can you imagine programming without the help of the internet? You had to buy actual books to learn more about programming. Those books were hard to get hold of. Editors didn't have autocompletion. Monitors used to flicker, causing a lot of eye strain and headaches. There wasn't even such a thing as StackOverflow. No, programming was a lot worse in almost every way. I'm happy to have arrived in the agile ages.

But still: Did you ever think about what gets lost in agile projects?

Simple. Agile projects are about producing software efficiently. But they are not about writing efficient software. Writing efficient software costs time and money, and that’s precisely what agile programming tries to avoid.

Running the Atom Editor Behind a Firewall

Granted, this is a minor topic, much less sophisticated than most of my blog's posts. But it took me a couple of hours to find out how to run the Atom editor behind a firewall, so it may be worth a short article.

If you're running Atom behind a firewall, you won't be able to install plugins or updates until you configure the proxy settings. Basically, all you have to do is set two user-defined variables: http-proxy and https-proxy. However, it's not that obvious where to configure these variables.

The easiest way to find or create the configuration file is to open the settings dialog ("File" -> "Settings"). At the bottom of the left-hand side, there's a button called "Open config folder". Clicking it opens a new project (.atom). That's the settings folder in your user profile. The root folder should contain a file called .apmrc. If it doesn't, create it.

Next you add these lines to the file (replacing username, password, proxyserver and the port number with the settings you use in your internet browser):

http-proxy=http://username:password@proxyserver:8088
https-proxy=http://username:password@proxyserver:8088
strict-ssl=false

Don't add a variable called proxy, and don't write the variable names in capital letters. Both are sometimes suggested, but Atom ignores such variables.

Note that you have to prefix the proxy server name with http://. If you omit it, you'll get a "parse exception". In my case, I had to use http:// for both the http and the https protocol, but that may be a peculiarity of my company's network.

Enjoy!

The Only One Dependency You Need in JavaEE 7?

Adam Bien is a charismatic evangelist of JavaEE 7. He's got something to say, and he always makes me think. But that doesn't mean I always agree with him. In his latest blog post, he advocates using a single, simple dependency to use JavaEE 7. All you have to do is install a JavaEE 7 server, add this dependency to your application's pom.xml, and you're good to go.

You really are. And it’s tempting:

<dependency>
	<groupId>javax</groupId>
	<artifactId>javaee-api</artifactId>
	<version>7.0</version>
	<scope>provided</scope>
</dependency>

As cool as Adam's recipe sounds… Well, Adam, I strongly disagree. In a way, you're absolutely right. All you ever need to use JavaEE 7 today is that single, simple dependency you mention in your post. But what about the future?

The two alternatives

Before I continue, let’s summarize the two alternatives quickly. Adam Bien’s suggestion works if you’re running your application on a JavaEE7 application server. That’s a pleasant experience. It’s made for JavaEE7, so it’s unlikely that you run into trouble like configuration mistakes or missing libraries. The drawback is that it’s hard to update anything. I know of at least one application server that makes updating almost impossible.

The alternative is to put every JavaEE dependency you need into your *.war file and deploy it on a Tomcat or Jetty. This approach means you’re responsible for configuring the libraries yourself. However, you can update them without having to care about the application server.
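For the second approach, your pom.xml lists the individual implementations instead of the all-in-one API. A sketch of what that might look like (illustrative only; which artifacts you need depends on which parts of JavaEE 7 your application uses, and the version number is just an example). For CDI on a plain Tomcat, for instance, you might bundle Weld:

```xml
<dependency>
	<groupId>org.jboss.weld.servlet</groupId>
	<artifactId>weld-servlet</artifactId>
	<version>2.3.2.Final</version>
</dependency>
```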

CDI: Lazy Injection at Runtime or How to Obtain Every Matching Implementation

These days I’ve discovered a nice feature of CDI. What do you make of this code?

@Inject @Any
Instance<IValidator> validators;

The type Instance suggests that a single object is injected, but that's not quite the case. In my case, the program uses the Instance like so:

for (IValidator validator : validators) {
    validator.validate();
}

Obviously, @Inject Instance<IValidator> does something completely different than @Inject IValidator.
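Here's a plain-Java model of what the container does behind the scenes: the injected Instance&lt;IValidator&gt; behaves like an Iterable over every bean implementing the interface. The two validator classes below are hypothetical examples of my own making, not part of any real project.

```java
import java.util.List;

// Plain-Java sketch: without a CDI container, we simulate the
// Instance<IValidator> by a list of all known implementations.
interface IValidator {
    String validate();
}

class NotEmptyValidator implements IValidator {
    public String validate() { return "not-empty: ok"; }
}

class MaxLengthValidator implements IValidator {
    public String validate() { return "max-length: ok"; }
}

public class ValidatorDemo {
    public static void main(String[] args) {
        List<IValidator> validators =
                List.of(new NotEmptyValidator(), new MaxLengthValidator());
        // The same loop as above: every matching implementation runs.
        for (IValidator validator : validators) {
            System.out.println(validator.validate());
        }
    }
}
```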