Category Archives: Concepts of programming languages

The Dark Path?

It always pays to listen to Uncle Bob Martin. He’s clearly someone who’s got something to say. But that doesn’t mean I always agree. Today, I don’t. And I find the reasons for our disagreement interesting.

I’ve just read Robert Martin’s post “The Dark Path”. He picks three innovative traits of Kotlin and Swift: strict null checks, strict exception handling, and making classes final (in Java parlance) or closed (in Kotlin lingo) by default. It’s an interesting post I recommend reading. Uncle Bob briefly explains these features, goes on to complain that languages become too complex because they try to plug every leak of their predecessors, and closes with an ardent call to write tests. As usual, he argues very convincingly. It’s hard to disagree after reading the article. Yet I do.
Continue reading

The Rise and Fall of Scala

There seems to be a consensus in the Java community that the Scala programming language is already on the decline. I consider this a pity because I always enjoyed programming in Scala. Actually, I even started this blog to support alternative JVM languages like Scala. But the market has decided differently. Moshe Kranc has written an excellent analysis about the rise and fall of Scala. I found Moshe’s article interesting enough to summarize it and to add a couple of thoughts of mine.

Before I start

However, I’ve also read some of the 169 comments on Moshe’s article, so I’d like to add a few words. First of all, I’m really fond of the Scala language. I’m not trying to write an anti-Scala article. Quite the contrary. I’ve been following the fate of Groovy and Scala for at least ten years now, and I’m still puzzled why the industry doesn’t adopt either of these languages. In many respects, both languages are clearly superior to Java.
Continue reading

HTML5: Optional HTML Tags

The other day Marco Rinck tweeted something that’s very confusing, almost disturbing. Google’s HTML style guide suggests omitting optional HTML tags. Bearing in mind that the page rank of your website is influenced (among other things) by whether your page has a clear layout and good coding quality, following these guidelines is almost mandatory. Granted, I didn’t check whether this particular style guide is linked to the results shown in the webmaster tools, but you get the idea: if Google publishes a style guide on HTML and CSS, it’s going to have an impact on the market. So let’s have a close look at it.

What are optional HTML tags?

At first glance, the idea of declaring certain HTML tags “optional” and suggesting that we omit them makes a lot of sense. In a nutshell, the idea is to make the code more human-readable. More to the point, the suggestion is to write HTML code the way the average human would write it if they didn’t happen to be a developer. Developers have learned to love clear structures, but that’s not the way most people think. A particularly fascinating property of the human mind is its ability to recognize patterns. Even better, it’s able to infer patterns where there are none.

Basically, that’s the idea of optional HTML tags: you don’t have to write an HTML tag if you can easily infer it from the context.

For instance, there’s no point in writing the <html> tag. The document is shown in the browser, so we already know it’s an HTML document. Similarly, there’s no point in wrapping the <head> tag around the <title> tag, or the <body> tag around the <form> or <p> tag. Both forms and paragraphs only make sense in the body of an HTML document, so it’s easy to infer the <body> tag.

Cool! But…

Being a human being – and a particularly lazy one – I find that Google’s style guide fills my heart with joy. Mind you: having to end each and every paragraph with a </p> tag before starting a new paragraph is really silly. Actually, it’s the number one mistake I make when I add a new page to our BootsFaces showcase.

On the other hand, the new style guide is exactly the opposite of what web designers have been taught over the last couple of years. I’m not entirely sure about the version numbers, but as far as I remember, starting with HTML4, XHTML was promoted as the new way to go. Former versions of HTML – or rather, the browser rendering engines – tended to be very sloppy. Or, to put it positively, they mirrored the pattern inference capabilities of the human brain. The (then) new standard was to get rid of inference. The idea was to define everything in a clear, concise manner.

What about parsers?

I don’t know why this clean coding style was propagated, but my theory (hey, that’s pattern inference at its best!) is that the idea was to make HTML machine-readable. Clean XHTML code can be parsed by an XML parser. In fact, that’s what allowed me to write BabbageFaces. JSF usually generates very clean XML code which can be read and analyzed by an XML parser. That’s what I did to reduce the size of the HTML code sent to the browser.

However, I consider BabbageFaces a nice but failed project, basically because hardly any JSF page fulfills the strict requirements of XML. The average JSF page renders code that can almost, but not quite, be read by an XML parser. Almost always, optional HTML tags are used. Browser vendors always knew that HTML pages are written by human beings, so they introduced pattern inference from day one. So browsers have no trouble displaying the HTML code generated by JSF pages. The XML parser of BabbageFaces quickly ran into trouble with the same pages. Which is sort of remarkable, given that the HTML code generated by JSF frameworks is usually a lot cleaner than HTML code written by humans.

We need new parsers!

Far be it from me to resist the new guidelines. Quite the contrary. In recent years, compiler writers have learned a lot. The semicolon that’s required at the end of each Java statement was originally introduced to make it easier to implement parsers analyzing the source code. It took compiler writers a couple of years to recognize that the semicolon can be inferred from context.

There are many programs out there analyzing HTML code. I don’t know whether they cope with tag inference or not. Thing is, HTML4 made it official that they don’t have to, while HTML5 officially makes tag inference part of the HTML language. As a consequence, you can’t simply use an XML parser to parse HTML pages. You need a full-blown HTML parser. A short investigation showed that Jsoup may be such a parser, but please don’t take my word for it: I neither tried it myself nor compared it with other parsers.
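
Just to give you an idea, here’s an untested sketch of what parsing a page with omitted optional tags might look like with Jsoup (assuming the jsoup library is on the classpath; the class name is mine, and remember I haven’t verified this myself):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class OptionalTagsDemo {
    public static void main(String[] args) {
        // No <html>, <head> or <body> – perfectly legal HTML5
        String html = "<title>Hello</title><p>First paragraph<p>Second paragraph";
        Document doc = Jsoup.parse(html);
        System.out.println(doc.title());            // "Hello" – the <head> was inferred
        System.out.println(doc.select("p").size()); // 2 – both paragraphs were recognized
        System.out.println(doc.body() != null);     // true – the <body> was inferred, too
    }
}

An XML parser would reject the very same string as malformed, which is precisely the difference that broke BabbageFaces.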

Wrapping it up

Actually, the Google HTML and CSS style guide gives the best summary itself:

This approach may require a grace period to be established as a wider guideline as it’s significantly different from what web developers are typically taught.

As a human being, I welcome the new standard because it makes both reading and writing HTML pages easier. As someone who sometimes writes programs to analyze HTML pages, I’m not so sure. At the very least, the idea of simply using an XML parser to analyze an HTML page is doomed.


Dig deeper

HTML5 specification of optional tags
Google’s HTML styleguide on optional HTML tags

Newsflash: Lukas Eder’s JAX Talk And Turing Complete SQL

Did you know SQL is Turing complete? Obviously, it can’t be. It’s a purely declarative language. It’s designed with a single purpose in mind: dealing with relational databases.

Surprisingly, modern SQL is Turing complete, indeed. This stunning claim has been proven by David Fetter. He’s also written an example that really blew my hat off. It’s possible to write a SELECT statement drawing a Mandelbrot set.

As I’ve mentioned before, SQL is a purely declarative language. So are HTML and XML. But there’s no way HTML could be Turing complete, so this insight came as a surprise to me. The key to Turing completeness is the window functions offered by most modern SQL databases. These, in turn, allow for recursion, and recursion is one of the loopholes allowing a purely declarative language like SQL to sneak into the realm of procedural languages. That’s the same trick that makes PROLOG a Turing complete language.
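
To give you a very small taste of recursion in SQL: the snippet below uses a recursive common table expression to count from 1 to 10 – nothing close to Fetter’s Mandelbrot set, but it shows a declarative language looping. The JDBC wrapper is only a sketch; the connection URL and credentials are placeholders, and you need a database that supports recursive queries (PostgreSQL does) plus the matching JDBC driver on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RecursiveSqlDemo {
    public static void main(String[] args) throws Exception {
        // A recursive common table expression: the query refers to itself
        String sql = "WITH RECURSIVE t(n) AS ("
                   + "  SELECT 1"
                   + "  UNION ALL"
                   + "  SELECT n + 1 FROM t WHERE n < 10"
                   + ") SELECT n FROM t";
        try (Connection con = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/demo", "demo", "demo");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getInt("n")); // prints 1 to 10
            }
        }
    }
}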

I became aware of the phenomenon when I attended Lukas Eder’s great JAX talk. Today, he’s published a transcript of his talk, including the slides. Highly recommended!

Dig deeper

10 SQL Tricks That You Didn’t Think Were Possible by Lukas Eder

SQL Is Now Turing Complete
drawing a Mandelbrot set with PostgreSQL 8.4

TypeScript and ES2016 Decorators vs. Java Annotations

Consider this TypeScript snippet. It’s a very simple Angular2 component. It looks almost like a Java class, doesn’t it?

@Component({
    selector: 'my-app',
    template: '<h1>My First Angular 2 App</h1>'
})
export class AppComponent { }

In particular, the @Component looks like a Java annotation. In fact, it plays the same role as a Java annotation. It tells the Angular2 framework that this class is not an ordinary POJO, but something special. It’s a component, one of the building blocks of Angular.

Striking similarity

That’s exactly the same thing I already knew from the Java world. @Named converts a POJO into a CDI bean. Sometimes Java annotations even have the same fancy curly braces syntax:

@ResourceDependencies({ 
  @ResourceDependency(library = "bsf", name = "js/datatables.min.js", target = "body"),
  @ResourceDependency(library = "bsf", name = "css/datatables.min.css", target = "head")
})
public class DataTable extends UIData {}

So why do the JavaScript and TypeScript communities insist on calling their annotations “decorators”?

Decorators decorate!

The answer is simple: decorators are called decorators because that’s what they are. You can use them like annotations. But in reality, they are function calls. The Angular2 framework contains a function called Component() that takes the configuration object as a parameter and returns another function. The @ of @Component tells TypeScript and EcmaScript 2016 to call Component() and to apply the function it returns to the class following the decorator. Usually decorators are used to add functionality to a function or a class.

AOP

Does this ring a bell? Exactly – that’s more or less the same as aspect-oriented programming. In fact, AOP is one of the classical use cases of Java annotations. Just think of @Transactional. This annotation surrounds a method with the glue code needed to execute it in a transaction. In other words, it adds some code before executing the method (opening a transaction) and it adds code after the method (committing the transaction, or rolling it back).
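
Just to make the idea tangible, here’s a minimal sketch of that before-and-after glue code, written with a plain JDK dynamic proxy. It’s not how Spring or CDI actually implement @Transactional (they use far more sophisticated proxies or bytecode weaving), and the interface and class names are made up for illustration:

import java.lang.reflect.Proxy;

public class TransactionalDemo {
    interface AccountService {
        void transfer(String from, String to, long amount);
    }

    @SuppressWarnings("unchecked")
    static <T> T transactional(Class<T> type, T target) {
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[] { type },
            (proxy, method, args) -> {
                System.out.println("BEGIN transaction");      // glue code before the method
                try {
                    Object result = method.invoke(target, args);
                    System.out.println("COMMIT");              // glue code after the method
                    return result;
                } catch (Exception e) {
                    System.out.println("ROLLBACK");            // glue code in case of failure
                    throw e;
                }
            });
    }

    public static void main(String[] args) {
        AccountService service = transactional(AccountService.class,
                (from, to, amount) -> System.out.println("transferring " + amount));
        service.transfer("Alice", "Bob", 100);
    }
}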

Oops!

Making the @ symbol an alternative way to call functions is a very elegant way to implement both static annotations and AOP. Plus, it should be possible to pass variables as additional parameters. In Java, annotations only support constants that can be resolved at compile time, and every once in a while, that’s a nuisance.

In hindsight, it’s hard to understand why the Java team added simple annotations instead of full-blown decorators to Java 5. I suppose they didn’t want to add a complex multi-purpose tool. Annotations can be used for exactly one purpose. They resemble a comment. They rely on a framework to be useful. This framework can be the editor, the compiler or the runtime environment. Only a few years later, the JavaScript world showed how useful decorators are.
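
The “relies on a framework” part is easy to demonstrate. In the sketch below (the annotation and class names are made up, echoing the Angular example above), the annotation itself does nothing at all; it takes a few lines of reflection – the “framework” – to turn it into behavior:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Component {
        String selector();
    }

    @Component(selector = "my-app")
    static class AppComponent { }

    public static void main(String[] args) {
        // Without this code, @Component is nothing but a glorified comment
        Component meta = AppComponent.class.getAnnotation(Component.class);
        System.out.println("registering component " + meta.selector());
    }
}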

Can decorators be added to Java?

Sure. Java 8 introduced default methods. So we could define a default method in an annotation class and execute it when the annotated method is called. We’d have to consider a couple of corner cases, though. Annotations can be applied to classes, static fields, class attributes and method parameters. We’d have to define useful semantics for each of these corner cases. But that’s not a big deal.

Wrapping it up

However, I don’t think this will ever happen. Annotations have become such an important part of the Java world that it’s unlikely anybody will take the risk. So, for now, we can only marvel at the way the language designers took the syntax of the annotation and converted it into something even more useful. By the way, the flexibility of the JavaScript language allows for some tricks Java developers can only dream of. Only – Java applications tend to be a lot bigger than JavaScript applications, so these dreams would quickly turn into a nightmare. Nonetheless, the article I’ve linked in the “dig deeper” section is very interesting, and it’s full of surprises – at least for a programmer like me spending most of their day in the Java ecosystem.

Dig deeper

Exploring EcmaScript 2016 decorators in detail

The State of Polyglot Programming in 2016: Mission Impossible?

Once upon a time, they used to say every programmer has to learn a new programming language each year, just to stay in shape. When I mention this to Java programmers, they usually start laughing. What used to be true in the 70s, maybe even in the 80s and the early 90s, has become utterly impossible in 2016. Modern programming ecosystems like JavaEE have reached such a level of complexity that it has become challenging to master a single programming language, let alone a second.

Learning the core language is not the problem

Thing is, it doesn’t suffice to learn the language. That’s easy. Once you’ve learned an object oriented programming language, you’ll quickly be fluent in any other object oriented language. Only shifting to another programming paradigm may give you a hard time. I, for one, have to admit I never managed to wrap my head around PROLOG. It’s not difficult, but it’s so different from what I’m usually doing. My rich experience, which is so useful at other times, prevents me from being a successful PROLOG programmer. But the vast majority of modern languages follow the procedural paradigm, spiced with object oriented programming or functional programming (and in some cases, both). So there’s not too much of a challenge from the language side per se.
Continue reading

Adding Type Inference to Java: Good or Evil?

My previous post raved about the simplicity type inference is going to bring to Java. I love the idea. Java is such a verbose, ceremonious language. Little wonder so many developers prefer dynamically typed languages, which seem to be so much simpler to use until you write a huge enterprise application. Java’s proposed type inference is strongly and statically typed, so it’s going to make life simpler without introducing problems. JEP 286 is good news, indeed!

Type inference obfuscates types

A fine mess it is, Dave Brosius answered. Consider this code. It’s wrong, but can you spot the error?

public boolean foo(Set<Long> ids) {
    val myId = getId();
    return ids.contains(myId);
}

In fact, I couldn’t until Dave helped me, providing the solution: getId() returns a String, so the contains method will never find the id.

Well. On the one hand Dave is right: the val keyword obfuscates the fact that we’re talking about a String, so it’s a potential source of confusion. You’ll spot the bug immediately with the Java 8 version of the code:

public boolean foo(Set<Long> ids) {
    String myId = getId();
    return ids.contains(myId);
}

This source code clearly reveals that we’re comparing Strings with numbers, which is unlikely to work (at least in the Java language). Readability matters!

… but …

On the other hand, the example is all wrong. It’s deliberately misleading. I, for one, was an easy victim because I associated the ID of the code snippet with a database primary key, and my primary keys are always numeric. I prefer to have them generated by a database sequence.

Several years ago, many developers adopted the UUID as an alternative synthetic primary key. From this perspective, I shouldn’t have been surprised by the fact that getId() returns a String. But then, why don’t we call the method getUUID()? Readability matters!

Another interesting aspect is that JEP 286 has been carefully designed to minimize readability problems. In my eyes it’s too conservative, but Dave sought and found a weak spot of the Java API that doesn’t work well with type inference. Even though Set is a generic type, the contains() method doesn’t care about the type. It simply accepts an arbitrary Object as its parameter, allowing us to search for a String even though it should be clear from the context that there are no Strings in the set.

My personal summary: granted, you have a point, Dave. Type inference might obfuscate the type, and that’s a bad thing. But if used carefully, this shouldn’t be too much of a problem. I can’t remember anyone complaining about type inference in Scala, Kotlin or Ceylon. Well, maybe I’ve heard people complain about Scala type inference. Scala seems to attract very clever guys who love to write their programs in a very, say, academic way. Scala’s type inference is very sophisticated. Sometimes this leads to very sophisticated Scala programs, and that’s one of the key reasons why Scala never gained traction. The Java enhancement proposal is much more conservative, probably precisely because of the Scala experience.

By the way, as Phil Webb pointed out, using the val keyword isn’t the real source of the problem. Using the “inline” refactoring causes the same obfuscation problem:

public boolean foo(Set<Long> ids) {
    return ids.contains(getId());
}

IDEs to the rescue

When I published this article on reddit, the first comment pointed out that sooner or later IDEs will be able to display the type of a local variable, even if it is defined as a var or val. That doesn’t solve every readability problem, because most likely the type of the variable is displayed in a tooltip only. In other words: most of the time it is invisible. But even that tooltip will certainly help a lot. In fact, the tooltip is an excellent example of my idea: developers aren’t bothered with technical details all the time, but they can easily look them up when they need them.

Another example of obfuscation

Mark Struberg contributed another potential example of trouble introduced by var and val:

val myValue = a().b().c().d();

Which type does myValue have? What happens when the return type of d() changes? Plus, in real life code, the chain of calls may not be quite as simple. It may be hidden among hundreds of lines, each looking deceptively innocent.

Granted, that’s the sort of thing that’s going to happen, and it’ll be the source of countless overtime hours. But the real question is: which is worse? Actually, we could even run an experiment to settle the question. Does type inference increase the number of extra hours, or does it reduce them?

Lessons learned from other languages

Most modern programming languages make use of type inference. There’s a lot of discussion about whether static or dynamic typing is useful, but so far, I’ve heard few complaints about type inference in strongly and statically typed languages like Scala, C#, Ceylon or Kotlin. I’m sceptical about dynamic typing, but the current JEP 286 is limited to strong and static typing, so judging from the experience of the communities of other programming languages, I suppose adding limited type inference to Java is going to be a success story.

Mind you: the examples of Dave and Mark have been carefully designed to expose the weaknesses of the JEP. That’s good, because they made us aware of a problem we might have missed otherwise. But then, why don’t we rely on the common sense of programmers? Usually they know how much abstraction they and their workmates can cope with. Adding type inference to Java doesn’t forbid us to use explicit types. So let’s give it a try, and use techniques like code review or pair programming to make sure our code doesn’t get too sophisticated!

Benefits of type inference

During our discussion on Twitter, Phil Webb remarked that he’s sometimes afraid of extracting a term into a local variable because of the scary type declaration. Well, this rings a bell with me. I often use this particular refactoring to make the code more readable. Extracting a term into a local variable allows us to assign an expressive variable name to the term, and modern JVMs are clever enough to inline the variable automatically again. But if this local variable is drowned out by a verbose type declaration, little is won. In this particular example, using val or var might improve readability a lot. More often than not, we aren’t interested in the type of the variable (that’s a technical issue), but we are interested in the role the variable plays (which is a business issue).

That’s the general pattern why type inference may make the Java language more useful. It gives you an option to hide technical stuff when it’s irrelevant to the reader. As they say, code that’s not there is code you don’t have to understand. It goes without saying that there are corner cases. Sometimes code is so clever and so compact that it’s difficult to decipher. It’s like learning Latin. At school, we used to ponder half an hour just to decipher a single sentence. Obviously, that’s the wrong approach. But if used wisely, type inference may add to the readability of the source code.

Like I said: Readability matters!

JEP 286 addresses Type Inference in a very conservative way

Considering that Brian Goetz (or whoever is the author of this particular line) calls type inference a “non-controversial feature”, there’s quite a lot of controversy about the topic. But if you’re afraid of type inference, read the JEP. It’s a fast read – maybe five to ten minutes – and it’s going to put your mind at ease. The authors have limited the scope of type inference considerably.

Target typing vs. left-hand-side type inference

In part, that’s due to target typing. The adoption of lambda expressions brought target typing to the Java ecosystem. This is also known as right-hand-side type inference, because it applies type inference to the right hand side of assignments. JEP 286 adds type inference to the left hand side of the assignment operator. Obviously, it’s difficult to have both. This might turn out to be a problem for Java programmers.

It’s your responsibility!

But then, with power comes responsibility. Type inference doesn’t mean you are forced to convert every type declaration to a var or val declaration. It’s up to you. If you think var is useful, use it. Otherwise, use the traditional approach, which may be as verbose as

Map<Date, Map<Integer, Double>> map 
     = new HashMap<Date, Map<Integer, Double>>();

In my eyes, this is a scary example: you don’t learn anything about the business value the variable adds to your program. You only know it’s a map, and reading the type declaration, you learn twice that it’s a map of maps. It’s probably better to use val and an expressive identifier:

val interestRateTableByDate 
     = new HashMap<Date, Map<Integer, Double>>();

It’s still a difficult line, but if you’re working in the finance industry, you might at least guess that this is a collection of tables of interest rates (which differ depending on how much money you invest). Interest rates are volatile, so you need to add another dimension – the date – to determine the interest rate.

Granted, you can also do this without val, but then the variable name sort of drowns in technical stuff:

Map<Date, Map<Integer, Double>> interestRateTableByDate 
     = new HashMap<Date, Map<Integer, Double>>();

Dig deeper

JEP286 (adding local variable type inference to Java)
Binary builds for JEP 286 (use at your own risk – I neither checked them for viruses nor tried the files myself!)
My previous article on the proposal to add local variable type inference to Java
Lukas Eder on local variable type inference
Pros and cons of JEP 286 by Roy van Rijn


Java May Adopt (Really Useful) Type Inference at Last

I’m all for simplicity – and that’s why I’m complaining so much about Java and many Java libraries. Java is such a ceremonious language, and it attracts so many people who love ceremonies and design their libraries to be used in a very elaborate way. More often than not, this results in a lot of boring keystrokes. That’s a pain to write, but it’s also painful to read. As you may or may not know, I love complaining, but I also love to do something about the pain points. Just think of BootsFaces and AngularFaces, my attempts to simplify JSF programming. However, there’s one domain I have very little influence on: core Java. And that’s a pity, because that’s a domain that influences everybody. Every Java programmer, that is.

Can you imagine my joy when I read there’s an official proposal to introduce type inference to the Java language? Even better, the author is Brian Goetz, and the proposal calls type inference a “non controversial feature”. Better still, there’s also a survey allowing you to influence the development.

Update March 13, 2016: As Brian Goetz mentions in his comment below, my wording is not entirely correct. Java has already had some type inference since Java 5. The diamond operator is type inference on generic types, and Java 8 introduced a fairly sophisticated and useful type inference called target typing. However, adding local variable type inference to the language takes the idea to a whole new level, at least from the programmer’s point of view. I believe it can be used much more often than target typing and the diamond operator.

What is type inference?

BeyondJava.net seems to attract people who work with UIs, so chances are you’ve already worked with JavaScript. In JavaScript, you don’t declare the type of a variable, but you simply declare it by writing

var x = 6;

That’s a nice and simple approach to dealing with variables. You don’t have to declare it an integer. Mind you, the number “6” is an integer, so you can easily infer the type of the variable from the context. In a nutshell, that’s what type inference is: don’t force your programmers to jot down trivia you can deduce just as well from the context.

Actually, I’ve mentioned JavaScript, so if you’re pedantic, my example is an example of what type inference is not. JavaScript doesn’t have a type concept at all. Modern JavaScript VMs employ a type concept in order to generate efficient code, but that means they have to go to remarkable lengths to support the flexibility of the JavaScript language. The type system of JavaScript allows you to change the type of every variable over time. What starts as an integer may end up as a string.

It goes without saying that’s not the Java way. The Java way is to choose a type and to stick to it. It’s either an integer, or it’s a string. It can’t be both. That’s what type inference means: you derive the type from the context. After that, the type is immutable. Don’t confuse type inference with dynamic typing. Both approaches look similar from the outside, and they pursue similar goals, but statically typed type inference is much more strict. Does that sound like a disadvantage? It’s not. For one, static type systems help you to discover many mistakes at compile time. If you’ve ever worked on an enterprise-scale program – say 50,000 lines of code or more – you’ll probably love this feature. Second, the compiler can generate much more efficient code if it knows the type for sure. The resulting program is smaller and faster.
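
Here’s a tiny sketch of the difference, using the proposed syntax (which may still change). The type is inferred once, from the initializer, and never changes afterwards:

public class TypeInferenceIsStatic {
    public static void main(String[] args) {
        var x = 6;        // the compiler infers int, once and for all
        x = 7;            // fine – still an int
        // x = "seven";   // compile-time error: incompatible types – unlike JavaScript
        System.out.println(x);
    }
}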

Isn’t it already part of the language?

I’ve written one or two articles about type inference in the Java language. In fact, Java 8 has an interesting approach to type inference based on target typing.

Well. Target typing simplifies the Java language a lot, especially with lambda expressions. So it’s definitely something to get excited about. But – like the diamond operator – it applies type inference to the right hand side of the assignment.

Thing is, the right hand side of the assignment usually is the side defining the type of the variable.

Java tries to do it the other way round. This approach works surprisingly well, but it’s a far cry from the simplicity languages like Scala offer. To make the Java language much simpler, we have to infer the type on the left hand side of the assignment from the expression on the right hand side of the “=” character.

The scope of the current JEP

Java is not Scala, so it’s not surprising that the scope of JEP 286 is much more limited than the Scala approach. But that’s not a bad thing: as much as I love the Scala language, I’ve seen Scala programs that are downright scary, so it pays to introduce Scala features to Java slowly and carefully. The current JEP only covers local variables with initializers. It does not cover methods. Nor does it cover object attributes or class variables. That sounds like a sensible choice to me. In particular, method-level type inference often is kind of scary. Everything’s fine as long as the method consists of one or two lines. Thing is, type inference doesn’t say anything about the size of the method. It still has to work if the method is 6,000 lines long. (Before you ask: I’ve already seen a method consisting of more than 6,000 lines.) That’s what Scala does, but I daresay programmers find it hard to infer the type of such a method from the context. JEP 286 avoids this kind of problem by limiting the scope to local variables. In fact, JEP 286 is very restrictive. It forbids type inference in most corner cases.
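
To make the scope tangible, here’s a hedged sketch of what the proposal does and does not cover, based on my reading of the JEP (the final feature may well differ in detail, and the class name is mine). Only local variables with an initializer – plus loop variables – get type inference:

import java.util.ArrayList;

public class Jep286Scope {
    // var name = "no";              // not covered: fields keep their explicit types

    public static void main(String[] args) {
        var list = new ArrayList<String>();   // inferred: ArrayList<String>
        var count = list.size();              // inferred: int
        // var x;                     // rejected: no initializer, nothing to infer from
        // var y = null;              // rejected: null carries no type information
        for (var i = 0; i < count; i++) {     // loop indexes are covered, too
            System.out.println(list.get(i));
        }
    }
}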

In part, that’s a result of target typing. Java already goes to great lengths to support type inference on the right hand side of the assignment (aka target typing). Adding type inference on the left hand side is ambitious. If I’m not mistaken, Scala does a good job of supporting both approaches, but the Java way is slightly different. It’s not about showing off. Neither is it about showing what can be done with today’s technology. Instead, it’s about supporting big enterprises. That, in turn, makes the Java language a tad conservative. Backward compatibility is a virtue. It doesn’t pay off to tentatively add a feature. You’ll never be able to remove it again. So every change to the Java language has to be thought through thoroughly.

Still, the effect of the type inference proposal is impressive. According to JEP 286, the vast majority of local variables can make use of type inference. No less than 83.5% of the variables could infer the type out of the box. With some precautions, the number might rise above 90%. Ignoring the variables initialized with null, up to 99% of the local variables in the JDK’s source code have a type that can be inferred from the context.

Wrapping it up

I had almost abandoned all hope for progress of the Java language, but obviously, that was premature. Chances are one of the key features that make languages like Scala attractive to me is going to be added to Java. That’s bad news for Scala: such a beautiful language, but it’s never going to take off! But on the other hand, it’s good news for the Java community. In other words, it’s good news for the vast majority of programmers using the JVM. And I guess innovative languages like Kotlin, Groovy, Scala, Ceylon or C# had to lead the way and prove the value of the concept before the feature could be introduced to the Java language. After all, we all want the Java language to remain the rock-solid foundation of our industry, don’t we?


Dig deeper and vote!

a Survey allowing you to influence the development
official proposal to introduce type inference
Binary builds for JEP 286 (use at your own risk – I neither checked them for viruses nor tried the files myself!)
My follow-up article: is type inference good or evil?
Lukas Eder on local variable type inference
Pros and cons of JEP 286 by Roy van Rijn


The Art of Efficient Programming in the Agile Ages

Heck, this article is going to make me sound like my own grandfather. Everything was better in the old days! But I’m still convinced this article tells a story, and it seems important to me. At least, it’s a thought I’ve been nurturing for years. By the way, this article does not tell you everything was better in the early days of programming. Can you imagine programming without the help of the internet? You had to buy actual books to learn more about programming. Those books were hard to get hold of. Editors didn’t have autocompletion. Monitors used to flicker, causing a lot of eye strain and headaches. There wasn’t even such a thing as StackOverflow. No, programming was a lot worse in almost every way. I’m happy to have arrived in the agile ages.

But still: Did you ever think about what gets lost in agile projects?

Simple. Agile projects are about producing software efficiently. But they are not about writing efficient software. Writing efficient software costs time and money, and that’s precisely what agile programming tries to avoid.
Continue reading

Newsflash: Are Java 8 Lambdas Closures?

Bruce Eckel has published some interesting thoughts about closures and lambda expressions in Java 8. He claims that Java’s lambdas are essentially closures because the original definition of closures stems from functional programming languages, and pure functional languages don’t have mutable variables. Hence, Java’s restriction that lambdas can only access effectively final variables of the surrounding scope is not a real restriction, he claims. Plus, it can be circumvented by encapsulating the interesting variable in an object. Basically, that’s the same trick Java programmers use to implement call-by-reference parameters.
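
Here’s a quick sketch of that workaround (my own example, not Bruce Eckel’s): the local variable itself stays effectively final, but because it only holds a reference to a mutable wrapper, the lambda can change the wrapped value anyway:

import java.util.Arrays;
import java.util.concurrent.atomic.AtomicInteger;

public class ClosureWorkaround {
    public static void main(String[] args) {
        AtomicInteger sum = new AtomicInteger();                 // the reference is effectively final
        Arrays.asList(1, 2, 3).forEach(n -> sum.addAndGet(n));   // ...but the lambda mutates the wrapped value
        System.out.println(sum.get());                           // prints 6
    }
}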

I can’t say I really agree with Bruce Eckel. In my eyes, closures are a lot more useful than lambda expressions because they can access variables of the surrounding scope, while lambdas can only access constant values of the outer scope. But it’s an interesting article nonetheless. Read the full story on Bruce Eckel’s blog.

Getting Started with TypeScript

The other day I showed a TypeScript program to our architect. He doesn’t like JavaScript, but when he saw my TypeScript program, he was pleasantly surprised. TypeScript looks pretty familiar to Java programmers, making it a good language to get started with client-side programming. Plus, the core feature of TypeScript is types, making development much more fun. I know fans of dynamic typing disagree, but wait until you’ve seen the autocompletion and refactoring features of your editor before you judge. Be that as it may: TypeScript comes with powerful type inference, too, so most of the time you can use it as if it were a dynamically typed language and still benefit from types. If you’re still sceptical: hard-core JavaScript programmers may be relieved to learn that types are optional.

After a couple of weeks I’d say that TypeScript is the language of choice for me. In earlier times, I advocated Dart, which is an even nicer language. Unfortunately, Dart suffers from needing a virtual machine of its own and from the lack of interoperability with existing JavaScript code. It hasn’t made a big impact on the market yet, so I prefer a language that compiles natively to JavaScript.

So, I’ve decided to write a tiny tutorial on TypeScript. It’s not an exhaustive step-by-step tutorial. I can’t beat the official TypeScript manual, so I won’t even try. Instead, I’ll give you a short tour de force from a Java programmer’s perspective.
Continue reading

Has OO Done More Harm Than Good?

Ten years ago, this question would have been heresy. Even today, it’s a perfect way to start a lively discussion, as Eberhard Wolff did today on Twitter. Object oriented programming still has a lot of defenders (including me), but recently the critics have been gathering, too (also including me). So what is it that makes people skeptical about object oriented programming?
Continue reading

Continuous Query Language – Processing Data in Real Time

The other day I learned about Odysseus, a framework to process data in real time. Truth be told, I know little about the topic, but I consider it interesting enough to share it with you. Most of you are familiar with relational databases, SQL, O-R mappers, transactions, the ACID principle and things like that. Working with streaming data is similar – and at the same time, it requires a major shift of mind. The more you look into it, the bigger the differences.
Continue reading

Sparkling Services

Web services and REST services have come a long way in Java. Nowadays, it’s very easy to create a web service. Basically, all it takes is adding an annotation to a method. All you need is a servlet container, such as Tomcat. But then, configuring and running a servlet container isn’t really simple. Chances are you think it’s simple, but that’s because most Java programmers have been using application servers for years. They already carry everything they need in their toolbox. But from a newbie’s point of view, things look a little different.

Spark is a nice alternative making it really easy to write a REST service:

import static spark.Spark.*;

public class HelloWorld {
    public static void main(String[] args) {
        get("/hello", (req, res) -> "Hello World");
    }
}

That’s all you need to implement a web server that serves a single REST service. Start the program, open your browser and enter the URL to test your REST service:

http://localhost:4567/hello

Notice that this is a main method. You don’t have to deploy a war file. Instead, Spark follows the idea popularized by Spring Boot: it embeds Jetty in a jar file. When I tested the code, Spark took virtually no time to start. That’s the nice thing about embedding Jetty: it has a really small footprint. I started to use an embedded Jetty for my web applications years ago. It’s a really fast alternative if you don’t need a full-blown application server.
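
If you want to take it one step further, routes with path parameters are just as concise. Here’s a hedged sketch from memory of the Spark API (the route and class name are mine, so double-check against the Spark documentation):

import static spark.Spark.*;

public class HelloName {
    public static void main(String[] args) {
        // http://localhost:4567/hello/world answers with "Hello world"
        get("/hello/:name", (req, res) -> "Hello " + req.params(":name"));
    }
}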
Continue reading

Non-Object-Oriented Java

Hardly ever have I seen code reuse in business-related code. That’s a puzzling observation, given that Java programmers are expected to write object-oriented programs for many reasons, code reuse being an important one. My puzzlement grew even more when I started to analyze my former company’s SAP programs. They aren’t object oriented. Quite the contrary, the coding conventions discourage using ABAP objects. Does this make the ABAP code any worse?

I don’t think so. The average ABAP program is focused on business code. Plus, ABAP programmers usually prefer customizing over programming. SAP delivers a framework covering most of the technical issues, so there’s no point in creating your own class hierarchy of business classes. It’s better to take the data structures and objects SAP gives you and to customize them. More often than not, the result violates requirements concerning cognitive ergonomics, usability and performance, but that’s not today’s topic. SAP’s ABAP framework allows for efficient programming, and it’s perfectly possible to write huge programs without resorting to object orientation. Dropping OO is even a sensible choice.
Continue reading

Type Erasure Revisited

You can’t beat type erasure, they say. Information about generic Java data types is lost during compilation, and there’s no way to get it back. While that’s absolutely true, Mahmoud Ben Hassine and I found a tiny loophole which allowed us to improve Mahmoud’s jPopulator framework tremendously. Under certain circumstances, it’s possible to find out about the type which is supposed to be erased. So jPopulator is able to populate not only simple beans, but also complex generic data types.

Mahmoud agreed to write an article about the ups and downs of our investigation. You’ll see it wasn’t exactly easy-going. Along the way, you’ll learn about the loophole allowing us to beat type erasure.
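
In case you’re curious before reading the full article, here’s a small sketch of one well-known loophole of this kind (whether it’s exactly the trick jPopulator relies on is explained in Mahmoud’s article; the class name is mine): the generic type of a field survives compilation in the class file and can be read back via reflection:

import java.lang.reflect.Field;
import java.lang.reflect.ParameterizedType;
import java.util.List;

public class ErasureLoophole {
    private List<String> names;

    public static void main(String[] args) throws Exception {
        Field field = ErasureLoophole.class.getDeclaredField("names");
        ParameterizedType type = (ParameterizedType) field.getGenericType();
        // Prints "class java.lang.String" – the "erased" type parameter is still there
        System.out.println(type.getActualTypeArguments()[0]);
    }
}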
Continue reading

How Static or Dynamic Typing Affects Your Coding Style

Erik Osheim has written an excellent article comparing dynamic and static typing. More precisely, he compares Python to Scala. What makes his article interesting is that he focuses on the consequences of the type systems. The article you’re currently reading is a (not so) short summary of Erik’s article, plus a few thoughts of mine.
Continue reading

Should You Avoid or Embrace “Static”?

More often than not, the keyword static confuses Java programmers. As a consequence, Ken Fogel asks his students never to use static in Java unless explicitly told to do so. While that’s a good hint for starters, it’s only part of the story.

Funny thing is, I recommend using static as often as possible. There’s a twist: never use static variables, but always use static methods.
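
Before you read on, here’s a minimal illustration of the rule of thumb (my own example): static state is shared and mutable, while a static method that works only on its parameters carries no hidden state at all:

public class StaticDemo {
    // Avoid: every caller shares (and can corrupt) this single counter
    public static int counter = 0;

    // Fine: no hidden state – the result depends only on the parameters,
    // which also makes the method trivial to test
    public static int add(int a, int b) {
        return a + b;
    }
}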
Continue reading

Creating Annotations is Fun! Too Much Fun?

Once again Lukas Eder has written an article I mostly agree with – but not completely. Lukas claims annotations have become an antipattern.

The nice thing about annotations is that you can define your own. It’s easy, and it’s extraordinarily popular among framework designers. So we see annotations in the Bean Validation API, in JPA, in CDI, in JSF, in EJBs, in web services and REST, and much more.

Annotations – a programming language of their own

While I absolutely love these annotations, there’s a problem: they are starting to become a programming language of their own. What do you think about something like this?

@NotNull
@Size(min=6,max=10)
@Pattern(regexp = "[a-zA-Z]+")
@Inject @UserID
@Column(name="userID")
private String userID;

@Valid
@ManagedProperty("#{customer.account}")
@OneToOne
private Account account;

Continue reading

Newsflash: Java to Scala Converter

JavaToScala.com may help you to learn Scala. It takes an arbitrary Java class and converts it to Scala.

Naturally, this approach emphasizes the similarities between the two languages. The few examples I tried were converted into solid Scala classes, but nothing out of the ordinary. The converter won’t introduce a case class or a trait, even where one would fit. However, emphasizing the similarities is not a bad thing: many Java programmers are scared away from Scala after seeing advanced code.

The core of the converter is another project available on GitHub: Scalagen is a Maven plugin converting Java source code to Scala. The project home page states that the converter isn’t perfect and may introduce errors for certain Java constructs, so I take it the same applies to JavaToScala.com. But still, it’s an interesting way to get familiar with Scala.