My guess is because of something very interesting I recently found in some parsing I was doing:
Looping through an array is FASTER in a while-loop with a termination condition of boolean (true), where the ArrayIndexOutOfBoundsException is handled, than it is in a for-loop that has to test the value of the counter on each iteration through the loop.
It sounds strange, but it is true.
e.g.
int[] a = new int[10000];
int i = 0;
while (true) {
    try {
        System.out.println(a[i]);
        i++;
    }
    catch (ArrayIndexOutOfBoundsException e) {
        // refill a or break
        i = 0;
    }
}
is faster than:
int[] a = new int[10000];
for (int i = 0; i < 10000; i++) {
    System.out.println(a[i]);
}
I never would have expected it either, but in this situation, the overhead from the try-catch is less expensive than testing (i < 10000) on every pass.
Weird, huh?
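To make the comparison concrete, here is a minimal timing sketch of the two loop styles (class and method names are my own, and I sum the elements instead of calling println so I/O doesn't dominate the measurement). Note that naive timing like this is very sensitive to the JVM and JIT warmup, so the numbers should be taken as illustrative, not conclusive:

```java
// Hypothetical micro-benchmark sketch; LoopBench and its method names are
// invented for illustration. Results vary widely across JVMs, and a JIT that
// hoists or eliminates bounds checks can make the plain for-loop just as fast.
public class LoopBench {

    // Style 1: while (true) loop terminated by catching the bounds exception.
    static long sumWithException(int[] a) {
        long sum = 0;
        int i = 0;
        try {
            while (true) {
                sum += a[i];
                i++;
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            // fell off the end of the array: done
        }
        return sum;
    }

    // Style 2: ordinary for-loop that tests the counter on every iteration.
    static long sumWithBoundsCheck(int[] a) {
        long sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] a = new int[10000];
        for (int i = 0; i < a.length; i++) a[i] = i;

        long t0 = System.nanoTime();
        long s1 = sumWithException(a);
        long t1 = System.nanoTime();
        long s2 = sumWithBoundsCheck(a);
        long t2 = System.nanoTime();

        System.out.println("exception loop:    sum=" + s1 + " in " + (t1 - t0) + " ns");
        System.out.println("bounds-check loop: sum=" + s2 + " in " + (t2 - t1) + " ns");
    }
}
```

Whichever style wins on a given VM, both loops must of course compute the same sum; the timing difference is the only thing in question.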
BradO
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 15, 2001 11:12 AM
To: [EMAIL PROTECTED]
Subject: a performance issue
Hi all,
In many classes, to detect the need for array resizing, we try to access a
specific position and do the resizing when an
ArrayIndexOutOfBoundsException is caught (or a NullPointerException for
multidimensional arrays).
But my understanding is that throwing and catching an exception is much more
expensive than reading the array size directly via arrayVal.length. So there
would be a big performance hit if we resize arrays frequently. (In fact, we
do resize StringPool very often.)
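The two resizing styles under discussion can be sketched with a tiny growable array (the class and method names below are hypothetical, not taken from the actual StringPool code):

```java
// Hypothetical sketch of the two resizing styles; GrowableIntArray and its
// method names are invented for illustration.
public class GrowableIntArray {
    private int[] data = new int[4];
    private int size = 0;

    // Style 1: check the length up front, as the question suggests.
    // No exception machinery is involved on the grow path.
    public void addWithLengthCheck(int v) {
        if (size == data.length) {
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = v;
    }

    // Style 2: store optimistically and resize only when the bounds
    // check fails, paying the throw-catch cost on each resize.
    public void addWithCatch(int v) {
        try {
            data[size] = v;
        } catch (ArrayIndexOutOfBoundsException e) {
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
            data[size] = v;
        }
        size++;
    }

    public int get(int i) { return data[i]; }
    public int size() { return size; }
}
```

Style 2 keeps the common (no-resize) path free of an explicit length test, at the price of constructing and catching an exception on every resize; which one wins depends on how often resizes actually happen.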
So could anyone explain to me why it's designed this way?
Thanks,
Sandy Gao
Software Developer, IBM Canada
(1-416) 448-3255
[EMAIL PROTECTED]
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
