[kaffe] Slow byte to char conversion
Dalibor Topic
kaffe@rufus.w3.org
Wed, 16 Aug 2000 07:17:11 -0700 (PDT)
Hi,
I wrote a simple program to display a Java charmap
(something like Encode.java in the developers directory).
For every value a byte can take, it creates a byte array
of size 1 and constructs a String with the corresponding
Unicode char using the encoding in question.
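A minimal sketch of such a program (the class name, the decode helper,
and the default encoding name "ISO-8859-2" are my own illustrative
choices, not taken from Encode.java):

```java
import java.io.UnsupportedEncodingException;

public class Charmap {
    // Return the Unicode character that the given encoding assigns to
    // the unsigned byte value b, by converting a one-element byte array.
    static char decode(int b, String encoding)
            throws UnsupportedEncodingException {
        return new String(new byte[] { (byte) b }, encoding).charAt(0);
    }

    public static void main(String[] args)
            throws UnsupportedEncodingException {
        String encoding = args.length > 0 ? args[0] : "ISO-8859-2";
        // Print the full charmap: one line per possible byte value.
        for (int b = 0; b < 256; b++) {
            System.out.println(b + " -> U+"
                + Integer.toHexString(decode(b, encoding)));
        }
    }
}
```

Each iteration goes through the converter for the chosen encoding, so
running this 256-entry loop exercises the byte-to-char conversion path
repeatedly and makes converter overhead visible.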
When displaying a serialized converter such as 8859_2,
performance is very poor: comparing current kaffe from
CVS (running on SuSE Linux 6.4 with jit3) against IBM's
JRE 1.3 running in interpreted mode, kaffe is about 10
times slower.
While I consider the idea of using serialized,
hashtable-based encoders a great one, it is very
inefficient for ISO-8859-X and similar byte-to-char
encodings. These encodings use most of the 256 possible
byte values to encode characters, so I tried using an
array instead and achieved running times comparable to
JRE 1.3.
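A sketch of the array-based alternative I mean (class and method names
are hypothetical, and the table below is just an identity placeholder
rather than a real charset table): a 256-entry char[] maps each
unsigned byte value directly to its Unicode character, so a lookup is
a single array index instead of a Hashtable probe with boxing.

```java
public class ArrayConverter {
    // One slot per possible unsigned byte value.
    private final char[] map = new char[256];

    public ArrayConverter() {
        // Identity mapping as a placeholder; a real converter would
        // fill this table from the charset definition (e.g. 8859_2).
        for (int i = 0; i < 256; i++) {
            map[i] = (char) i;
        }
    }

    // Convert flen bytes starting at from[fpos] into to[tpos...],
    // returning the number of chars written.
    public int convert(byte[] from, int fpos, int flen,
                       char[] to, int tpos) {
        for (int i = 0; i < flen; i++) {
            // Mask to treat the byte as unsigned before indexing.
            to[tpos + i] = map[from[fpos + i] & 0xFF];
        }
        return flen;
    }
}
```

Since ISO-8859-X encodings define a character for most byte values,
the table is densely used and wastes little memory compared to the
hashtable, while the per-byte cost drops to an index operation.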
Why was hashtable-based conversion chosen over the
alternatives (switch-based lookup, array-based lookup)?
Dali
=====
"Success means never having to wear a suit"