First gcc-compiled Java program runs!
Per Bothner
bothner at cygnus.com
Sat Jun 14 14:06:21 PDT 1997
> 70-80% of Native C would be fantastic, and probably sufficient for most
> situations.
But once you use objects seriously, that Java overhead will be more
difficult to compile away than in the trivial integer benchmark I posted.
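To make the contrast concrete, here is a contrived sketch of my own (not from the benchmark in the thread): an integer loop next to an equivalent loop that allocates an object per iteration. The extra allocation, indirection, and garbage it creates is the kind of overhead that is much harder for a compiler to remove than plain integer arithmetic.

```java
// Hypothetical micro-benchmark sketch, not the one posted to the list.
public class Overhead {
    // Trivial integer loop: easy for a compiler to optimize well.
    static int sumInts(int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    // Same computation, but every iteration allocates a fresh object.
    // The allocations and method calls are the "object overhead" that
    // is hard to compile away.
    static int sumBoxed(int n) {
        Integer s = new Integer(0);
        for (int i = 0; i < n; i++) s = new Integer(s.intValue() + i);
        return s.intValue();
    }

    public static void main(String[] args) {
        System.out.println(sumInts(1000));   // prints 499500
        System.out.println(sumBoxed(1000));  // prints 499500, much more slowly
    }
}
```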
> 1) How awful is Sun's javac? Is the bytecode it
> generates optimized very well?
My impression is that the code is OK, but not seriously optimized.
The main thing -O gives you (as far as I know) is inlining of
some final methods.
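As a hypothetical illustration (the class and method names are mine, not taken from anything in this thread): a final method is a safe inlining candidate because no subclass can override it, so the compiler can substitute the method body at the call site.

```java
// Sketch of a final method that javac -O could inline at the call site.
public class Circle {
    private final double radius;

    public Circle(double radius) {
        this.radius = radius;
    }

    // final: the call target can never be overridden, so a compiler
    // may replace calls to area() with the expression below.
    public final double area() {
        return Math.PI * radius * radius;
    }

    public static void main(String[] args) {
        Circle c = new Circle(2.0);
        // After inlining this is effectively Math.PI * 2.0 * 2.0.
        System.out.println(c.area());
    }
}
```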
>
> 2) You mention that you are reading a .class
> file, rather than a .java file. Does that
> mean you're doing the equivalent of toba,
> but omitting the cc1 step?
Basically.
> 3) Do you plan on actually writing a java
> parser which would give you the optimizer
> enough info to do its job, or does the
> bytecode have enough info to feed it into the
> optimizer already?
Yes and yes. In other words: we do plan on writing an integrated
Java parser and semantic analyzer, but that is primarily for
compilation speed. I haven't yet hit on any real optimization
problem caused by the bytecode (which is at a very high level as it is).
> 4) Has there been any thought to doing an elisp
> like solution, ie keep kaffe as the VM, but
> also keeping persistant compiled classes
> around for speed, JIT'ing the rest?
A little thought, but nothing serious yet.
> It seems
> especially ludicrous to be JIT'ing the
> classes.zip stuff each time you run something.
Of course. The plan is to compile classes.zip using
cc1java ahead of time. Where that is supported, it will be
the recommended (but not the only) way to install Kaffe.
--Per Bothner
Cygnus Solutions bothner at cygnus.com http://www.cygnus.com/~bothner