Hi, let me make a few random comments about this speed discussion.

First, let me point out what's needed: we need to complete the gcj integration in order to precompile Kaffe's class libraries. Now that both gcj and libgcj have been released, this has become even more pressing. Anybody who wants to help with this would be very welcome. Because Kaffe and libgcj share a historic lineage, we hope that this can indeed be accomplished with only a few changes to gcj. It is not a trivial job, however.

Second, about Kaffe's JIT: it is waiting to be replaced by a better implementation. Any implementation that is as portable can replace it; it just needs to be written. A lot of people come up with good ideas on this mailing list, and that's great. However, it won't matter if these ideas don't turn into implementations.

Now, about Constantin's little test. I assume he is talking about the iute.java program at http://rufus.w3.org/tools/Kaffe/messages/3796.html

This program is no indicator whatsoever of the quality of the JIT compiler. This is not to say that the results are uninfluenced by how good or bad the JIT compiler is, but it is not a JIT benchmark. What it really measures is the combined speed of the string and hashtable implementations, the allocator, and the garbage collector. For example, gcj is about as fast on this test as Sun's JIT compiler. Does that mean that the quality of the code generated by gcj is no better than what a JIT compiler would generate? I would hope not; it probably means that Cygnus's string/hashtable/run-time is a bit slower than Sun's. But really, we simply don't know what it means.

Next, keep in mind that all performance results in Java are heavily influenced by the frequency and duration of garbage collection. For Kaffe, if you configure with "--with-timing", the VM will do timing and print some statistics at the end that tell you how much time was spent in gc. By tweaking the initial and maximum heap size (-ms and -mx), you can see variations on the order of several seconds in the time Kaffe takes. (The first sketch in the P.S. below shows the kind of allocation-heavy program where this effect is easy to observe.)

Writing pure JIT benchmarks is hard. I assume what you'd have to do is avoid calling any methods in the Java run-time. In those cases where this is unavoidable (for the new operator, for instance), you'd have to calibrate and deduct the difference. (The second sketch in the P.S. below gives a rough idea.) To my knowledge, however, nobody has focused on writing such benchmarks, and for good reason: they would be of little use, precisely because the performance of real applications depends on the combination of the run-time, the run-time libraries, and the JIT compiler.

About the java.lang.StackOverflowError in the interpreter (intrp) version: increase the stack size using "-ss", like "-ss 64k". I haven't had a chance to look at what became the final 1.0b4, but apparently a proposed change to increase the default stack size for the interpreter didn't make it in.

- Godmar
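
P.S. To illustrate what an iute-style test actually measures, and how heap size shows up in the numbers, here is a hypothetical allocation-heavy micro-test (the class name and constants are my own invention, not Constantin's code). It spends most of its time in String, Hashtable, the allocator, and the garbage collector rather than in the code the JIT generated for this class; running it with different -ms/-mx settings should move the reported time noticeably.

    import java.util.Hashtable;

    public class AllocStress {
        public static void main(String[] args) {
            long start = System.currentTimeMillis();
            Hashtable table = new Hashtable();
            for (int i = 0; i < 100000; i++) {
                String key = "key-" + i;          // string concatenation -> allocation
                table.put(key, new Integer(i));   // hashtable insert -> more allocation
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("elapsed: " + elapsed + " ms, entries: " + table.size());
        }
    }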
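
The second sketch is a rough idea of what a "pure" JIT test could look like: nothing but arithmetic on locals inside the timed loop, so the measurement is dominated by the generated code rather than by the run-time libraries. Again, this is just an illustration of the idea, not an existing benchmark.

    public class JitLoop {
        public static void main(String[] args) {
            long start = System.currentTimeMillis();
            int x = 1;
            for (int i = 0; i < 10000000; i++) {
                x = x * 31 + i;   // pure integer arithmetic, no allocation, no calls
            }
            long elapsed = System.currentTimeMillis() - start;
            // print x so the loop's result is actually used
            System.out.println("x = " + x + ", elapsed: " + elapsed + " ms");
        }
    }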