[kaffe] Java benchmarks on different architectures
Riccardo
zuse at libero.it
Thu Jul 8 03:15:03 PDT 2004
Hello,
I am pleased to announce that the paper on Java benchmarking I have been
working on over the last few months has been released [1].
I just want to state a few things so that the results are NOT
misinterpreted. Benchmarking is extremely tricky, and many variables
influence it.
- the goal was to benchmark different processors and evolutions of a
processor. So although a Java to C comparison is done, that was not the
final goal; the benchmarks would have been done differently if it had
been.
- the benchmarks are quite small and simple. Maybe too simple to
reveal some problems.
- C code optimization can affect execution speed on some computers
more than on others, as can compiler versions. The assumptions I made
are clearly stated (namely GCC everywhere, except on Apple where the
compiler is GCC-based, and no CPU- or architecture-specific
optimizations were enabled).
- many things can be improved. Not all machines were benchmarked with
all available tests due to time constraints. Where possible, both the
Sun and Kaffe [2] VMs were used. On one computer a first run with
SableVM [3] was attempted too. GCJ was not used (again due to time
limitations).
- different VMs behave differently on different CPUs. So be careful
when making architecture comparisons. While, for example, the different
results on the Solaris/SPARC computers are very interesting (same OS,
same Java SDK, same binaries), comparing even the same VM version
ported to another platform (say, IRIX/MIPS) may measure the quality of
the port of the VM more than the capabilities of the CPU itself. Of
course the data is still meaningful if you are looking for a platform
where Java speed is important and you already know which VM you will
use.
- if you want to run the benchmarks on your own, be careful about the
above remarks on optimizations. Also be careful about the data size.
I used an average size that could suit almost all the machines I tested
on, but on fast machines the startup overhead and timing granularity may
cause too large an error. Feel free to increase the data sizes and data
cycles in the tests (see the small sketch after this list), but beware
that comparing your results with mine is then no longer possible.
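For illustration, here is a minimal timing sketch of what "increasing
the data sizes and cycles" means in practice. It is not taken from the
paper's actual harness; runKernel(), DATA_SIZE, CYCLES and the class
name are hypothetical placeholders. The point is simply that a longer
total run makes the VM startup cost and the millisecond granularity of
the clock a smaller fraction of the measurement.

// Hypothetical sketch only -- not from the benchmark sources.
// runKernel(), DATA_SIZE and CYCLES stand in for whatever a given test uses.
public class TimingSketch {

    // Placeholder benchmark kernel working on a data set of the given size.
    static long runKernel(int dataSize) {
        long acc = 0;
        for (int i = 0; i < dataSize; i++) {
            acc += i * 31L;
        }
        return acc;
    }

    public static void main(String[] args) {
        final int DATA_SIZE = 100000; // increase on fast machines
        final int CYCLES = 100;       // increase until the run lasts well beyond the clock granularity

        long start = System.currentTimeMillis();
        long sink = 0;                // keep a result so the work is not optimized away
        for (int c = 0; c < CYCLES; c++) {
            sink += runKernel(DATA_SIZE);
        }
        long elapsed = System.currentTimeMillis() - start;

        System.out.println("total ms: " + elapsed
                + ", avg ms/cycle: " + ((double) elapsed / CYCLES)
                + " (sink=" + sink + ")");
    }
}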
If you want to discuss some choices or results, or if you want to
write new benchmarks or adapt mine to other purposes, feel free to
contact me.
- Riccardo
---
[1] Currently, the location of the results is:
http://homepage.mac.com/riccardo_mottola/kaffe-devel/benchmarks/index.html
[2] http://www.kaffe.org
[3] http://sablevm.org/news.html