First, let me state that I understand Groovy (and scripting languages in general) are going to be slower than a more native counterpart. However, I thought that in Groovy's case things would level out once the Groovy scripts were compiled to bytecode. This doesn't seem to be the case.
I created a Groovy script and a Java class to read a text file containing 10,000 lines of text that were all the same length. I then executed each 20 times.

I created the following Groovy script:
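(The original script isn't reproduced here; a minimal Groovy sketch consistent with the description might look like the following. The file contents and timing harness are assumptions, not the original code.)

```groovy
// Hypothetical reconstruction: generate the 10,000-line test file
// described above (all lines the same length), then time how long
// it takes to read it back line by line.
def file = File.createTempFile('lines', '.txt')
file.withWriter { w -> 10000.times { w.writeLine('abcdefghijklmnopqrstuvwxyz') } }

def start = System.currentTimeMillis()
def count = 0
file.eachLine { line -> count++ }   // loop body kept trivial; we measure read cost
println "$count lines in ${System.currentTimeMillis() - start} ms"
file.delete()
```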
Running this as a Groovy script, I averaged about 791 ms. I compiled the script and ran it as bytecode and averaged about 604 ms.
I then created the following Java class.
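(That class isn't shown either; a comparable Java sketch, assuming the same generate-then-read setup with a BufferedReader, would be roughly:)

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;

// Hypothetical reconstruction of the benchmark class; the file
// contents and timing approach are assumptions, not the original code.
public class ReadLines {
    public static void main(String[] args) throws IOException {
        // Create a 10,000-line test file, all lines the same length.
        Path file = Files.createTempFile("lines", ".txt");
        Files.write(file, Collections.nCopies(10000, "abcdefghijklmnopqrstuvwxyz"));

        long start = System.currentTimeMillis();
        int count = 0;
        try (BufferedReader reader = new BufferedReader(new FileReader(file.toFile()))) {
            while (reader.readLine() != null) {
                count++; // count lines so the read loop does observable work
            }
        }
        System.out.println(count + " lines in " + (System.currentTimeMillis() - start) + " ms");
        Files.delete(file);
    }
}
```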
Running this averaged about 336 ms. So in review:
groovy script:   791 ms
groovy bytecode: 604 ms
java:            336 ms
I have to say that I was more surprised that the difference between the Groovy script and compiled Groovy wasn't larger. But I was still disappointed by the difference between compiled Groovy and Java; the compiled Groovy code ran nearly twice as slow. I realize benchmarks like this don't mean a whole lot, but I did expect a different outcome.
Hm, well, a factor of two or so doesn't sound bad, really. There's a lot of discussion on the groovy-dev mailing list about strategies to improve performance - it seems to be a primary focus right now. Groovy call stacks tend to be rather, um, baroque, with all their mechanisms for dynamic overrides and whatnot, even with compiled bytecode. My understanding is that 1.6 is supposed to have some significant improvements in that area, and 2.0 hopes to simplify call stacks considerably further.
Good thinking, Liz; those are some interesting results.
I'm a little surprised by this as well. Last week I asked Jeff Brown about Groovy performance. One interesting thing he noted was that because Groovy isn't typesafe, there are fewer checks on variables, so it's faster in some respects - although overall Groovy is slower.
That said, a factor of 2 is usually my breakpoint for performance. By that I mean, if between technologies A and B, the former is easier to code or maintain, or has better support, etc., and comes at a cost of a factor of 2 or less, I'll typically pick A. Usually the cost of additional hardware over the next 2 years is outweighed by the reduced cost of code, and after 2 years the hardware only gets faster. Of course, it depends a lot on your particular situation.