
There are at least three exciting things about GraalVM: the AOT compiler (aka "Substrate VM", recently rebranded as "native image"), the all-new JIT compiler, and a new approach to polyglot programming. Polyglot programming opens up new opportunities, so we expect to see many new uses of it in the future, although we believe it will always remain a bit exotic. Even so, Graal's approach to polyglot programming is fascinating from the perspective of a language designer.

Which languages does GraalVM support?

GraalVM runs an incredible number of programming languages, some of them completely unexpected:

The left-hand side is what we'd expect from every JVM. GraalVM runs Java, and since Java compiles to bytecode, it's no surprise that Graal also runs every other language that compiles to Java bytecode: Groovy, Scala, Kotlin, Ceylon, Clojure, and JRuby, to name just a few.


The green box in the middle shows another implementation of Ruby. It's called "TruffleRuby" because it has been implemented with the Truffle framework. According to the blogosphere, TruffleRuby is the fastest implementation of Ruby you can get. For some reason, it's a bit difficult to find current benchmarks, so we went back to a 2015 article. In the meantime, TruffleRuby has probably improved a lot. But so have JRuby and CRuby, so we've got no idea whether the 2015 figures are still valid. Be that as it may, the 2015 figures show a juicy factor-five performance boost. The article also mentions a problem of the TruffleRuby implementation: it's a bit slow to get up to speed. Or it was a bit slow in 2015. We're no Ruby experts, but we tried to run a benchmark:

truffleruby 20.1.0, like ruby 2.6.5, GraalVM CE Native [x86_64-darwin]
1. calculation took 658 ms
2. calculation took 550 ms
3. calculation took 456 ms
4. calculation took 707 ms
5. calculation took 469 ms
6. calculation took 236 ms
...
18. calculation took 364 ms
19. calculation took 629 ms
...
31. calculation took 200 ms
...
69. calculation took 890 ms
70. calculation took 266 ms
...
100. calculation took 489 ms
Average duration: 428 ms
Average cold start: 471 ms
Fastest run: 200 ms (#31)
Slowest run: 890 ms (#69)

TruffleRuby performance oscillates wildly in our demo. On average, it's twice as fast as Ruby 2.7.0. But that's only the average: sometimes the algorithm runs four times faster, and sometimes it's even 25% slower.

Side remark: JRuby without Truffle

Ruby is interesting for a second reason, too. JRuby may be a JVM language, but compiling Ruby code to bytecode on OpenJDK results in a major performance penalty, as Charles Nutter reports. The JVM verifies the bytecode, parses it, and does many of the things an interpreter does, too. According to Charles, this extra effort adds up. So it's faster to write an interpreter that parses the human-readable source code. Why? Among other things, because the interpreter itself runs hot quickly, so it's heavily optimized by the JIT compiler. We met a similar concept when we explained Truffle in depth.
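To illustrate the idea, here's our own toy sketch (not actual Truffle or JRuby code): an AST interpreter is just a handful of small Java methods that are called constantly, so they get hot quickly and the JVM's JIT compiler optimizes them aggressively.

```java
// Our own toy sketch of an AST interpreter, not actual Truffle or JRuby code.
// The point: the execute() methods are tiny and called constantly, so they
// run hot almost immediately and the JIT compiler optimizes them aggressively.
public class TinyInterpreter {

    interface Node {
        int execute();
    }

    // A literal value: the simplest possible AST node.
    static Node lit(int value) {
        return () -> value;
    }

    // Addition: evaluates both children, then adds the results.
    static Node add(Node left, Node right) {
        return () -> left.execute() + right.execute();
    }

    public static void main(String[] args) {
        // The AST for the expression (1 + 2) + 39.
        Node program = add(add(lit(1), lit(2)), lit(39));
        System.out.println(program.execute()); // prints 42
    }
}
```

Truffle's real node classes carry much more machinery (specializations, partial evaluation), but the principle of interpreting an AST with small, hot methods is the same.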

GraalVM could remove the performance penalty of compiling Ruby to bytecode. We're curious to see what happens next. However, the JRuby team is excited about compiling JRuby to native machine code using the Substrate VM. That's one of the key motivations behind re-activating the Ruby-to-bytecode AOT compiler.

Other languages GraalVM supports natively

Two other popular languages Truffle supports are R and Python. R, in particular, is remarkable because it's reported to run 30 times faster than the reference implementation. We hope to hear more about R running on GraalVM in the future.

In any case, we've never heard about running R on the JVM before. In a way, the two programming languages formed two opposing (or at least separate) camps. On one side, there were the data scientists using R. On the other, there were Java programmers doing business stuff. Most of the time, the two camps work on different tasks, so that's not a big deal. But every once in a while, a business application could benefit from a statistical calculation or visualization. Before GraalVM, that was difficult: the natural separation between data scientists and business developers was deepened by technology. Enter Truffle. It provides a bridge between the two camps.

That's just great. Mind you: how many useful features are never implemented because they are considered difficult?


And then there's JavaScript. The GraalVM implementation of JavaScript is much more than just the language, although that alone would be impressive enough. Truffle's JavaScript performance is more or less on par with Google's V8 engine. That's awesome in light of the considerable optimization effort the V8 team has put into their JIT compiler. Maybe it's not even true, and we fell for smart marketing, as both our experiments and an open issue on GitHub indicate. In any case, matching the performance of V8 for any workload is a long-term goal of the development team. That's a bold goal.

As challenging as the task may be, just implementing a good compiler gets you nowhere in the JavaScript world. In 2020, JavaScript is useless without the ecosystem provided by Node.js and npm. So a large part of the GraalVM/JavaScript project does just that: it implements a decent Node.js environment.

Node.js reality check

Truffle ships with a complete Node.js environment. The blog you're reading is an Angular app, so running the Angular CLI on Truffle seemed like a good test case.

To replace the standard Node.js implementation with the Truffle implementation, we've added the GraalVM path to the PATH environment variable. We've also set JAVA_HOME, just to be sure:

export PATH=/Library/Java/JavaVirtualMachines/graalvm-ce-java11-$PATH
export JAVA_HOME=/Library/Java/JavaVirtualMachines/graalvm-ce-java11-

To test the new settings, we've checked the version numbers of Java, Node.js, and GraalVM's lli tool:

$ java --version
openjdk 11.0.5 2019-10-15
OpenJDK Runtime Environment GraalVM CE (build 11.0.5+10-jvmci-19.3-b06)
OpenJDK 64-Bit Server VM GraalVM CE (build 11.0.5+10-jvmci-19.3-b06, mixed mode, sharing)
$ node --version
v12.10.0
$ lli --version
LLVM (GraalVM CE Native

Next, we've compiled and started the blog using the Angular CLI. That didn't work with GraalVM 20.1.0. It just reported an error message indicating there's something wrong with the foreign language support of the Node.js engine bundled with GraalVM:

$ ng s -o
Unknown error: TypeError [ERR_NO_ICU]: "fatal" option is not supported on Node.js compiled without ICU

Some time ago, we ran the same test with GraalVM 19.3.0. At the time, things looked a lot better:

$ ng s -o
chunk {main} main.js, main.js.map (main) 547 kB [initial] [rendered]
chunk {polyfills} polyfills.js, polyfills.js.map (polyfills) 330 kB [initial] [rendered]
chunk {runtime} runtime.js, runtime.js.map (runtime) 6.15 kB [entry] [rendered]
chunk {scripts} scripts.js, scripts.js.map (scripts) 9.46 kB [entry] [rendered]
chunk {styles} styles.js, styles.js.map (styles) 265 kB [initial] [rendered]
chunk {vendor} vendor.js, vendor.js.map (vendor) 10.7 MB [initial] [rendered]
Date: 2020-02-02T13:14:29.051Z - Hash: ... - Time: 754889ms

The good news: the Angular CLI works. It compiles the code, opens the browser, and even Hot Module Reloading works.

It's just a bit slow. It took 754 seconds, more than twelve minutes, as opposed to 20 seconds using standard Node.js. Hot module reloading takes 34 seconds.

It's interesting to watch the compilation. The Angular compiler reports progress while it runs. During the first minute, nothing happened; the early progress reports only trickled in after what felt like an eternity. Over time, the Angular compiler picked up speed. That's when the JIT compiler of GraalVM kicked in.

To our surprise, you can even observe this with hot module reloading: the more often you trigger it, the faster it gets. Peak performance seems to be 12 seconds, three times as fast as the first build. That's almost acceptable, given that this blog recompiles a bit slowly even with standard Node.js. Still, it's a factor-10 performance penalty.

Number crunching with GraalVM

JavaScript is known for many things, but efficient number crunching isn't one of them. The JVM, however, is pretty good at number crunching. So let's calculate prime numbers. We've repeated the algorithm several times to watch the warm-up phase. You can find the source code in our GitHub repository.

Standard node.js v12.14.1
1. calculation took 161 milliseconds
2. calculation took 93 milliseconds
3. calculation took 93 milliseconds
4. calculation took 91 milliseconds
5. calculation took 113 milliseconds
10. calculation took 90 milliseconds
50. calculation took 90 milliseconds

GraalVM JavaScript v12.10.0
1. calculation took 2216 milliseconds
2. calculation took 1826 milliseconds
3. calculation took 1810 milliseconds
4. calculation took 167 milliseconds
5. calculation took 191 milliseconds
10. calculation took 165 milliseconds
50. calculation took 169 milliseconds

As you can see, both the V8 engine of Node.js and Truffle have a warm-up phase. Starting with the third repetition, standard Node.js 10.x runs the test twice as fast as on the first run. Node.js 12.14.1 is quicker still, both in peak performance and during warm-up: the second repetition already runs at almost full speed.

Graal's JavaScript engine speeds up much more, but it starts more than ten times slower and takes a bit longer to pick up speed. It reaches peak performance at the fourth repetition. That peak is roughly 20% below the peak performance of Node.js 10.x, and half the performance of Node.js 12.14.1.

Comparing Truffle JavaScript performance to Java performance

We also translated the algorithm to Java and reran the test, both on AdoptOpenJDK 13 and on GraalVM:
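The actual code in our repository may differ in detail; as an illustration, a trial-division prime counter in the same spirit could look like this (class and method names are ours):

```java
// Illustrative trial-division prime benchmark (not necessarily the exact
// algorithm from the repository). Deliberately CPU-bound, and repeated many
// times so the JIT compiler's warm-up phase becomes visible.
public class PrimeBenchmark {

    // Counts the primes below limit by trial division.
    static int countPrimes(int limit) {
        int count = 0;
        for (int candidate = 2; candidate < limit; candidate++) {
            boolean isPrime = true;
            for (int divisor = 2; (long) divisor * divisor <= candidate; divisor++) {
                if (candidate % divisor == 0) {
                    isPrime = false;
                    break;
                }
            }
            if (isPrime) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        for (int run = 1; run <= 50; run++) {
            long start = System.nanoTime();
            int primes = countPrimes(100_000);
            long millis = (System.nanoTime() - start) / 1_000_000;
            System.out.println(run + ". calculation took " + millis
                    + " milliseconds (" + primes + " primes)");
        }
    }
}
```

The same loop, translated one-to-one, serves as the JavaScript version, which keeps the comparison between the runtimes fair.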

openjdk 13.0.1 2019-10-15
OpenJDK Runtime Environment AdoptOpenJDK (build 13.0.1+9)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 13.0.1+9, mixed mode, sharing)
Java version: 13.0.1
1. calculation took 47 milliseconds
2. calculation took 47 milliseconds
3. calculation took 28 milliseconds
4. calculation took 66 milliseconds
5. calculation took 39 milliseconds
10. calculation took 24 milliseconds
50. calculation took 21 milliseconds

GraalVM
openjdk 11.0.5 2019-10-15
OpenJDK Runtime Environment GraalVM CE (build 11.0.5+10-jvmci-19.3-b06)
OpenJDK 64-Bit Server VM GraalVM CE (build 11.0.5+10-jvmci-19.3-b06, mixed mode, sharing)
Java version: 11.0.5
1. calculation took 123 milliseconds
2. calculation took 57 milliseconds
3. calculation took 38 milliseconds
4. calculation took 64 milliseconds
5. calculation took 54 milliseconds
10. calculation took 29 milliseconds
50. calculation took 26 milliseconds

This time, GraalVM almost matches the performance of AdoptOpenJDK 13, although it starts slightly slower. Both initial and peak performance are far better than with JavaScript. So even if people say JavaScript is blazing fast, that doesn't apply to number crunching yet.

Running Truffle on Windows

If you're using Windows, the latest release, GraalVM 20.1.0, is good news for you: it finally includes Node.js. In earlier versions, Node.js support was restricted to Linux and macOS.

Unlikely contenders: C, C++, Lua, and Fortran

GraalVM doesn't end there. There's a large class of programming languages with an LLVM compiler, and Truffle (more precisely, the Sulong project) can run LLVM bitcode. In other words, Truffle runs languages as different as C, C++, C#, D, Lua, and Fortran. Plus Objective-C, Ada, Haskell, Ruby (again!), ActionScript, Delphi, Julia, Common Lisp, Rust, and Swift.

We've even seen a paper about using WebAssembly with GraalVM. Who knows, maybe we'll see GraalVM in the browser in the future?

Let's return to the hard facts. LLVM is short for "low-level virtual machine". So it's similar to Java's bytecode, the IL code of .NET, or the P-code of good old Pascal. As far as we can see, LLVM bitcode is much more hardware-oriented: it strongly resembles a RISC instruction set. So the vast range of languages compiling to LLVM doesn't come as a surprise. LLVM doesn't make many assumptions about the language it runs. Java bytecode is pickier; it only supports a few data types, for example. LLVM is more flexible.
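As a rough sketch of the workflow (assuming you have clang installed and a GraalVM installation with the LLVM runtime on your PATH; hello.c stands in for your own source file):

```shell
# Compile the C source to LLVM bitcode; clang does the heavy lifting.
clang -c -O1 -emit-llvm -o hello.bc hello.c

# Run the bitcode on GraalVM's Sulong engine via the lli launcher.
lli hello.bc
```

From Truffle's point of view, the bitcode is just another guest language.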

Compiling, say, C++ code to LLVM bitcode and having Truffle run it sounds like a tedious process. It's a far cry from efficient software development. So why should you be excited about it?

Once again, this question has many answers.

First of all, it's exciting to open the doors to languages that used to be clearly outside the Java universe. Inviting an Ada programmer to participate in our Java-centric party... well, that's a fascinating option in itself.

Second, LLVM allows us to run many proven algorithms that just happen to be written in the wrong language. Years ago, I tried to call a credit scoring algorithm written in C from Java, so I had to dive into technologies like JNI. It was a painful process. Chances are Truffle and Sulong take the sting out of it. If we'd been able to compile the C code to LLVM bitcode and run it via Truffle, we'd have been better off.

Wrapping it up

GraalVM promises to be a high-performance polyglot JVM. As we've seen, there's a lot of marketing involved, but much of the claim is true. Under certain circumstances, GraalVM runs Java faster than OpenJDK. It runs other languages, too: Ruby, JavaScript, R, and C++. We've tested Ruby and JavaScript, and GraalVM runs both without problems. Most of the time, it even runs Ruby faster than Ruby 2.7.0 does. In the case of JavaScript, we've seen mixed results. On the plus side, GraalVM ships with a complete Node.js environment, and we've managed to run the Angular CLI on it. However, it was a bit slow, roughly 10 to 20 times slower than native Node.js 12.14.1. But that's not always true: after the warm-up phase, our small number-crunching test runs at half the speed of its Node.js counterpart.

We're sure the teams are going to optimize GraalVM further in the future. As of today, performance is not the reason to adopt GraalVM for JavaScript development. It may be much more interesting to Ruby programmers.

At first glance, that's slightly disappointing. But that's unfair. The exciting bit about GraalVM is that it's a runtime running a large variety of programming languages. Plus, you can run these languages side by side in the same application. The next part of this series shows you how to do that.

About the co-author

Karine Vardanyan is working on her master's degree at the Technical University of Darmstadt, Germany. Until recently, she worked at OPITZ CONSULTING, where she met Stephan. She's interested in artificial intelligence, chatbots, and all things Java.

Dig deeper

Essay about adding a JIT compiler to Ruby (don't forget to look at the beautiful images!)

JavaScript performance on GraalVM (according to Oracle, so take it with a grain of salt)

Our performance tests on GitHub