First gcc-compiled Java program runs!
Michael Thomas
mike at mtcc.com
Sat Jun 14 12:49:03 PDT 1997
Per Bothner writes:
> The following little Java program compiles under "cc1java"
> (the Gcc front-end for Java that reads a .class file and
> emits assembly code). It runs correctly, using Kaffe (a
> modified pre-0.9.1 snapshot) as the run-time engine.
> It took 16 seconds to execute. In comparison, the
> same program compiled by Kaffe's JIT takes 26 seconds,
> and Sun's JDK 1.1 takes 88 seconds to run the same program.
> In all cases, this is under Solaris 2.5.
Well, I took my own advice, and here are the results
on what I have available: a 150 MHz Pentium
running Linux 2.0.26:
Platform      Time (sec)   % of Native C   % of Kaffe
------------------------------------------------------
JDK 1.0.2        305             4%            10%
Kaffe 0.84        31            41%           100%
Toba 1.0b6        21            61%           148%
Native C          13           100%           238%
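(The percentage columns are just ratios of run times: "% of Native C"
is the native C time divided by that platform's time, and "% of Kaffe"
is Kaffe's time divided by that platform's time. For JDK 1.0.2, for
example, that's 13/305, or about 4%, and 31/305, or about 10%.)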
Given Per's numbers on his Sun:
Platform      Time (sec)   % of Native C   % of Kaffe
------------------------------------------------------
JDK 1.1           88            --             29%
Kaffe             26            --            100%
cc1java           16            --            162%
That suggests his numbers are roughly in line
with Toba's, though perhaps somewhat faster. It's
way too early to make firm comparisons, but it
certainly looks promising. 70-80% of native C
speed would be fantastic, and probably sufficient
for most situations.
I do have a few questions, though:
1) How awful is Sun's javac? Is the bytecode it
   generates well optimized?
2) You mention that you are reading a .class
   file rather than a .java file. Does that
   mean you're doing the equivalent of Toba,
   but omitting the cc1 step?
3) Do you plan on actually writing a Java
   parser, which would give the optimizer
   enough info to do its job, or does the
   bytecode already carry enough info to feed
   into the optimizer? (I've noticed that it is
   broken into basic blocks, but I don't know
   whether it has enough info for the gcc
   optimizer to do anything useful with it.)
4) Has there been any thought to doing an Elisp-
   like solution, i.e. keep Kaffe as the VM, but
   also keep persistent compiled classes
   around for speed, JIT'ing the rest? I wrote
   a dopey little application the other day,
   did some timing on it, and found that
   about 1/3 of its time was being spent in the
   JIT. For short-running applications, that's a
   very unhappy situation. I looked around a
   bit in the JIT code, and it doesn't seem like
   it would be impossible to strong-arm the
   JIT into dumping its code right before it links
   it. Later executions would then only need
   to do the dynamic linking, which is a lot
   cheaper than code generation. (A rough sketch
   of what I mean is below.) It seems especially
   ludicrous to be JIT'ing the classes.zip stuff
   each time you run something.
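To make question 4 a bit more concrete, here is a minimal sketch of
the control flow I have in mind. None of it is Kaffe code: the dump
file path and the jit_translate/relink/load_dumped/dump_code functions
are all made-up stand-ins, and a real implementation would obviously
have to deal with relocation and with invalidating stale dumps.

/*
 * Conceptual sketch only -- none of this is Kaffe code.  The idea:
 * before JIT'ing a class, look for a previously dumped native-code
 * image on disk; if one is there, load it and just re-link it,
 * otherwise JIT as usual and dump the result for the next run.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Made-up stand-ins for the real translator, linker, and loader. */
static char *jit_translate(const char *class_name)
{
    printf("JIT'ing %s (expensive code generation)\n", class_name);
    return strdup("fake native code");   /* pretend this is machine code */
}

static void relink(char *code)
{
    (void)code;
    printf("dynamic relink (cheap)\n");  /* patch call sites, fix addresses */
}

static char *load_dumped(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return NULL;                     /* no dump yet: cache miss */
    char buf[256];
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';
    return strdup(buf);                  /* stand-in for mapping real code */
}

static void dump_code(const char *path, const char *code)
{
    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return;                          /* can't dump; no harm done */
    fwrite(code, 1, strlen(code), f);
    fclose(f);
}

/* Get executable code for a class, reusing a persistent dump if possible. */
static char *get_native_code(const char *class_name)
{
    char path[512];
    snprintf(path, sizeof path, "/tmp/kaffe-dump-%s.jit", class_name);

    char *code = load_dumped(path);
    if (code == NULL) {                  /* first run: JIT, then dump */
        code = jit_translate(class_name);
        dump_code(path, code);
    }
    relink(code);                        /* always needed, but much cheaper */
    return code;
}

int main(void)
{
    /* Second and later runs skip jit_translate() entirely. */
    free(get_native_code("HelloWorld"));
    return 0;
}

The hard part, of course, is making the dumped code relocatable enough
(and noticing when the .class file changes) that the relink step really
is that cheap; the control flow above is the easy bit.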
--
Michael Thomas (mike at mtcc.com http://www.mtcc.com/~mike/)
"I dunno, that's an awful lot of money."
Beavis