# Java vs JavaScript: Speed of Math

*By royvanrijn On July 15, 2012 · 33 Comments · In Java Programming, Javascript, Math*

Last week I’ve created a ray marcher 3d engine which renders the Mandelbulb. And I’ve translated it into pure Javascript a couple of days later. After the translation I decided I should optimize the code a little for speed, so I made some speed improvements in the Javascript code. The main optimization was using an array for the vector3d instead of a class/function.

Rendering the Mandelbulb on a 400×400 canvas now took just 1850 ms in JavaScript (Chrome, V8), which is very fast! It was even faster than my Java implementation (running on Java 1.6.0_33 with -server, which was faster than Java 7). But the Java code didn't have some of the speed optimizations, so I re-translated the JavaScript code back to Java. It produced the following numbers (lower is better performance):

What happened here? The output is the same, so why is Java so much slower than JavaScript? I would have expected the opposite…

I fired up the profiler to see what was making the Java code so slow, and it turned out the method it spent the most time in was Math.pow(). Other slow methods were Math.acos(), cos(), sin(), etc. It turns out that the standard Math library isn't very fast, but there is an alternative: Apache Commons has implemented a faster Math library, FastMath, as part of commons-math. Let's see what changing Math.* to FastMath.* does to the performance:

This is already much better, but the method causing the most delay is still FastMath.pow(). Why is JavaScript so much faster? The method is built to raise a double to an arbitrary double power, not just integer values. But I'm only doing integer powers (7 and 8, to be precise), so I decided to implement my own method:

```java
// Only valid for integer exponents >= 1; no edge-case handling.
private double fasterPow(double d, int exp) {
    double r = d;
    for (int i = 1; i < exp; i++) {
        r *= d;
    }
    return r;
}
```

**Warning: This isn’t the same as Math.pow/FastMath.pow!**

The speed with this new method is much better and seems comparable with JavaScript. Maybe this is an optimization the V8 engine does by default? Who knows.

The slowest method in the program now is FastMath.acos(). From high school I know that acos(x) can also be calculated as atan(sqrt(1-x*x)/x) (for positive x). So I created my own version of acos. When I benchmarked the three variants (Math.acos(), FastMath.acos() and FastMath.atan(FastMath.sqrt(1-x*x)/x)), the result was again surprising:

The custom acos() function is a bit faster than FastMath.acos() and a lot faster than Math.acos(). Using this function in the Mandelbulb renderer gives the following numbers:
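As an aside, the identity-based acos can be sketched like this in Java (a sketch with a hypothetical name, not my exact code; note that the atan identity as written only holds for x > 0, so a general version needs a correction for negative inputs):

```java
// Sketch: acos via the identity acos(x) = atan(sqrt(1 - x*x) / x).
// The identity holds as written only for x > 0; for x < 0 the result
// needs a +PI correction, and x == 0 maps to PI/2.
class FastTrig {
    static double fastAcos(double x) {
        if (x == 0.0) {
            return Math.PI / 2;
        }
        double r = Math.atan(Math.sqrt(1.0 - x * x) / x);
        return x > 0.0 ? r : r + Math.PI;
    }
}
```

If, as in the Mandelbulb code, you know the input is always positive, the branch and the correction can be dropped entirely.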

So it turns out that with a bit of tweaking we can get the Java version faster than JavaScript, but I would never have imagined Java would be slower in the first place. The Chrome V8 team did an amazing job improving the speed of their JavaScript VM. Mozilla isn't far behind: they get roughly 2200 ms in the benchmark, which is also faster than Java's Math and FastMath! It seems that V8's math implementation has some optimizations that Java could really use. The tricks used above don't make any difference in the JavaScript version.

**Edit 1:** Is JavaScript faster than Java?

Well, surprisingly, in this case it is. With the code 100% the same, using arrays as vectors and Math.*, the code actually runs faster in my browser!

**Edit 2:** People have been asking me: what could have been done to make it faster in Java? And why is it slow in the first place?

Well, the answer is twofold:

1) The Math libraries in Java are written for doubles. Having a pow() method work on arbitrary doubles is much harder than working with just integer exponents. One way to optimize this would be to overload the methods with int variants, which would allow much greater speed. I think Java should add Math.pow(float, int), Math.pow(int, int), etc.
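A sketch of what such an int-exponent variant could look like (hypothetical; this overload is not part of the JDK):

```java
// Hypothetical int-exponent overload for pow: with the exponent known
// to be a non-negative integer, plain repeated multiplication suffices.
class IntPow {
    static double pow(double base, int exp) {
        double r = 1.0;
        for (int i = 0; i < exp; i++) {
            r *= base;
        }
        return r;
    }
}
```

Because the exponent type alone tells the JIT it is dealing with a small integer, the whole loop can be unrolled for constant exponents, something the double-exponent version can never do.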

2) All the Math libraries have to work in *all* situations: with negative numbers, small numbers, large numbers, zero, NaN, etc. They tend to have a lot of checks to cope with all those scenarios. But most of the time you know more about the numbers you put in. For example, my fasterPow method only works for integer exponents of 1 or more. Maybe you know the exponent will always be even? All of this means the implementation can be specialized and improved. The problem is, this *can't* easily be done in a generic (math) library.

### Comments

@Andrey: I've tested it, but on small exponents (let's say 8, which is what I need) it isn't faster on my laptop; it is actually 100 times slower (?!):

This results in:

Can you verify this?

Thanks but I may have been mistaken =).

If you look at the algorithm, your code is indeed more optimized for your needs, so it will be faster than Math.pow(). But my point is that the JVM optimizes "hot" code (branches or methods). For instance, using the JVM option -server with the default parameters, a method is compiled into native code by the JVM after 10k invocations.

So in your case, you may see that for the first 9,999 invocations the Java implementation takes ~5.2 seconds. This behavior is normal, since it is warmup time. But after the 10,000th invocation it may take less than 2 seconds, once the JVM has optimized it.

You can use the flag -XX:+PrintCompilation if you want the JVM to print a message when it compiles a code block. Until you see no more compilation messages, you cannot measure the duration of your code precisely. See this page for a complete listing of JVM options: http://jvm-options.tech.xebia.fr
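A minimal sketch of measuring that warmup effect (a hypothetical harness; the compilation threshold and the timings vary per JVM and per flag):

```java
// Sketch: time the same work before and after JIT warmup.
class WarmupDemo {
    // A small hot method the JIT will eventually compile.
    static double work() {
        double sum = 0;
        for (int i = 1; i <= 1_000; i++) {
            sum += Math.pow(i % 10, 2);
        }
        return sum;
    }

    public static void main(String[] args) {
        long cold = System.nanoTime();
        work();
        cold = System.nanoTime() - cold;

        // Invoke it many times so the JVM compiles the hot method.
        for (int i = 0; i < 20_000; i++) {
            work();
        }

        long warm = System.nanoTime();
        work();
        warm = System.nanoTime() - warm;

        System.out.println("cold: " + cold + " ns, warm: " + warm + " ns");
    }
}
```

Running this with -XX:+PrintCompilation shows the compilation messages appearing during the warmup loop; only the timing taken after they stop is representative.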

Using atan in JS, compared to acos (http://jsperf.com/acos-vs-atan-flavour): in Chrome, the difference is massively in favour of atan (acos 13.3M ops/sec vs. atan 24M ops/sec), but Chrome has interesting behaviour for trigonometric functions. In Firefox, the difference is marginal, borderline non-existent (11.8 vs. 11). In Opera, there is a big difference in favour of acos (7.5 vs. 6), and in IE9 there is a big difference in favour of atan (8.6 vs. 14.2).

So whether it speeds things up significantly, not at all, or even negatively depends on the browser =)

It’s no secret that Java’s trig functions are slow. The biggest reason for this is that Java favors cross platform compatibility over performance. It would be really really nice if Oracle would add a JVM flag indicating whether or not performance was desired over strict compatibility for all floating point operations.
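Java does already expose the strict side of this trade-off: java.lang.StrictMath is the bit-for-bit reproducible, fdlibm-based implementation, while java.lang.Math is allowed to use faster platform intrinsics within a small ulp tolerance. What is missing is a switch in the other direction, trading accuracy for even more speed. A small illustration:

```java
// Math.sin may use a fast platform intrinsic; StrictMath.sin must follow
// the fdlibm algorithms and give identical results on every platform.
class StrictDemo {
    public static void main(String[] args) {
        double fast = Math.sin(1.0);
        double strict = StrictMath.sin(1.0);
        // The two are allowed to differ by a couple of ulps at most.
        System.out.println(fast + " vs " + strict);
    }
}
```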

Roy,

Yeah, it sucks on small powers, but it is much faster with higher ones.

Try this one, it is 10-20% faster (3 multiplications instead of 7):

```java
private static double casePow(double d, int exp) {
    double d2, d4;
    switch (exp) {
        case 0:  return 1;
        case 1:  return d;
        case 2:  return d * d;
        case 3:  return d * d * d;
        case 4:  d2 = d * d; return d2 * d2;
        case 5:  d2 = d * d; return d2 * d2 * d;
        case 6:  d2 = d * d; return d2 * d2 * d * d;
        case 7:  d2 = d * d; d4 = d2 * d2; return d4 * d2 * d;
        case 8:  d2 = d * d; d4 = d2 * d2; return d4 * d4;
        case 9:  d2 = d * d; d4 = d2 * d2; return d4 * d4 * d;
        default: return 1; // use a different method here
    }
}
```

For performance testing, use Caliper: http://code.google.com/p/caliper/

Regarding the implementation itself, switch+case is slow. Use if/else instead. See implementation here: http://jafama.svn.sourceforge.net/viewvc/jafama/src/odk/lang/FastMath.java?view=markup

Look for the method named powFast.

Just for your information, there is a MUCH better way to implement exponentiation with integer exponents, commonly called fast exponentiation. I'm curious how much your code would improve if you substituted it in. I've written the code below (making some assumptions about Java behaving like C; I'm not a Java guy). This is a logarithmic-time algorithm as opposed to linear, although it only works for integer powers.
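The commenter's code did not survive here, but the algorithm being described is the standard square-and-multiply (binary exponentiation); a Java sketch:

```java
// Square-and-multiply: computes d^exp in O(log exp) multiplications
// by walking the bits of the exponent. Valid for exp >= 0.
class BinPow {
    static double binPow(double d, int exp) {
        double result = 1.0;
        double base = d;
        int e = exp;
        while (e > 0) {
            if ((e & 1) == 1) {
                result *= base; // this bit of the exponent is set
            }
            base *= base;       // square the base for the next bit
            e >>= 1;
        }
        return result;
    }
}
```

For exponents as small as 7 or 8 the win over a plain loop is modest (and the branch can even cost more, as the benchmark discussion above suggests); the logarithmic advantage only shows for large exponents.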

One point that wasn't addressed: did you actually make sure that these different approaches produce the same answer? I didn't see this discussed, but perhaps I missed it or it was otherwise implied. If you don't get the same answers, it's hard to make such comparisons.

Liked the article, thank you. It inspired me to tinker a bit with making a fast pow in JavaScript (for integer exponents), and it seems *nearly* as fast as Math.pow on my browsers :-)

In case it's vaguely helpful, relevant or of interest in terms of algorithm:

```javascript
function fpow3(x, exp) {
    if (exp == 0) return 1;
    if (exp == 1) return x;
    var r = 1, v = x, e = exp;
    while (1) {
        if (e & 1) r = r * v;
        e = e >> 1;
        if (!e) return r; // return as soon as possible
        v = v * v; // each exponent bit doubles the multiplier: x, x*x, (x*x)*(x*x), etc.
    }
}
```

You should try replacing your Math.pow() calls directly inline with x*x*x*x*x*x*x if the exponents are only 7 or 8… I bet that will again double the performance without all the function-call and loop overhead…

Not sure if you’re reading comments on this old article but anyway, here goes.

I just happened to come across this blog, and also across another one which states a possible cause for the difference in speed:

And there are additional restrictions on when numbers can be considered integers. V8 has a faster version of Math.pow because the specification it implements allows for a faster version.

Regards,

Friso

I do read comments to old posts; and yes I’ve also come across that point! They are allowed to take some shortcuts in the math libraries that Java cannot do. This could very well explain the speed difference, but even so it is still impressive!

On the speed of Java vs JavaScript: http://developer-blog.cloudbees.com/2013/12/about-paypal-node-vs-java-fight.html