
Performance considerations in the eighties

Time and again, I'm surprised by the performance of modern CPUs. Actually, the speed boost that astonished me most took place a decade ago, give or take a few years. Before that, floating-point operations were a lot slower than integer operations. When I learned my first assembly language, the 6502 processor was state of the art. This processor was so simple that you had to write a program just to implement integer multiplication; multiplying floating-point numbers required an even longer program. I've forgotten the actual numbers, but as a rule of thumb, multiplying two integers was ten times slower than adding them, and multiplying floating-point numbers was another order of magnitude slower.
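To give you a flavor of what such a routine looked like, here's a minimal sketch of the classic shift-and-add algorithm, written in Java for readability (the class and method names are mine; on a 6502 this would have been hand-coded assembly working on 8-bit registers):

```java
// Shift-and-add multiplication - the kind of routine CPUs like the
// 6502 forced you to write yourself, sketched in Java for readability.
public class ShiftAddMultiply {

    static int multiply(int a, int b) {
        int result = 0;
        while (b != 0) {
            if ((b & 1) != 0) {   // lowest bit of b set?
                result += a;      // ...then add the shifted a
            }
            a <<= 1;              // shift a left (a * 2)
            b >>>= 1;             // shift b right: examine the next bit
        }
        return result;            // correct modulo 2^32, matching Java's int semantics
    }

    public static void main(String[] args) {
        System.out.println(multiply(6, 7)); // prints 42
    }
}
```

One addition and two shifts per bit of the operand: you can see why this was roughly ten times slower than a single addition.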

... compared to 2017

Nowadays, x86 CPUs can multiply four floating-point numbers in a single CPU cycle (under optimal conditions). Given that the fastest possible operation takes one CPU cycle, it boils down to saying that multiplying floating-point numbers is every bit as fast as adding integers. The complexity of the algorithm hasn't gone away, so this is absolutely stunning. Not only have hardware designers managed to implement the complex algorithm in hardware, they also managed to implement it in a fashion resembling parallel programming. I suppose the final algorithm isn't that complicated; actually, I've got an idea what it looks like. But it took hardware designers years to get there, so it obviously wasn't low-hanging fruit.
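In Java, you don't even have to write SIMD code by hand to benefit. A simple loop like the sketch below is a candidate for HotSpot's auto-vectorization, which can compile it down to packed SIMD multiplies on x86. Treat this as an illustration, not a guarantee: whether vectorization actually kicks in depends on the JVM version, flags, and the CPU.

```java
// A loop shape HotSpot's JIT can auto-vectorize into SIMD multiplies
// (e.g. mulps on x86). Whether it does depends on JVM version and CPU.
public class VectorizableMultiply {

    static void multiplyAll(float[] a, float[] b, float[] out) {
        for (int i = 0; i < out.length; i++) {
            out[i] = a[i] * b[i];   // independent iterations: SIMD-friendly
        }
    }

    public static void main(String[] args) {
        float[] a = new float[1024];
        float[] b = new float[1024];
        float[] out = new float[1024];
        java.util.Arrays.fill(a, 1.5f);
        java.util.Arrays.fill(b, 2.0f);
        multiplyAll(a, b, out);
        System.out.println(out[0]); // 3.0
    }
}
```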

What about Java?

As Grey Panther shows in his article, the effect also shows up in Java programs. Under certain circumstances, integer operations can be slower than floating-point operations. I don't think that holds true for individual operations, because in that case the latency induced by the CPU pipeline plays a major role. But when you do a lot of floating-point operations, the JIT compiler and its optimizations kick in, allowing for efficient CPU use. The net result is that it may pay to prefer floats over integers.
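If you want to get a feel for this yourself, a proper measurement calls for a harness like JMH; the naive sketch below (loop sizes and names are my own choices, not Grey Panther's) merely illustrates the idea and is easily fooled by warmup effects and dead-code elimination.

```java
// Naive micro-benchmark comparing int and float multiplication.
// Treat the numbers with suspicion: for serious measurements use JMH,
// which handles warmup and dead-code elimination properly.
public class IntVsFloatMultiply {

    static int intRun(int n) {
        int acc = 1;
        for (int i = 1; i <= n; i++) {
            acc *= i;               // integer multiply (wraps around, which is fine here)
        }
        return acc;                 // returned so the JIT can't drop the loop
    }

    static float floatRun(int n) {
        float acc = 1f;
        for (int i = 1; i <= n; i++) {
            acc *= i;               // floating-point multiply (overflows to Infinity, also fine)
        }
        return acc;
    }

    public static void main(String[] args) {
        int n = 100_000_000;
        intRun(n);                  // warm up so the JIT compiles both methods
        floatRun(n);

        long t0 = System.nanoTime();
        int intResult = intRun(n);
        long t1 = System.nanoTime();
        float floatResult = floatRun(n);
        long t2 = System.nanoTime();

        System.out.printf("int:   %d ms (result %d)%n", (t1 - t0) / 1_000_000, intResult);
        System.out.printf("float: %d ms (result %f)%n", (t2 - t1) / 1_000_000, floatResult);
    }
}
```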

Wrapping it up

What's more important, however, is that every data type is blazingly fast. You can choose the data type that suits your problem. In earlier times, performance considerations often dictated the choice of data type. Luckily, that's a thing of the past.


Dig deeper

Benchmarking the cost of primitive operations on the JVM

Intel® 64 and IA-32 Architectures Optimization Reference Manual
