Sandy Gao writes:

>1. When the array is big enough, no exception will be thrown. Then whether
>the following (a) is faster than (b)?
>a. fOffset[chunk][index] == 0;
>b. if (fOffset.length <= chunk || fOffset[chunk] == null ||
>fOffset[chunk].length <= index)
>(a) looks simpler, but doesn't it need to check the bounds to see whether
>an exception should be thrown? And won't such check be similar to what's
>in (b)? Let's ASSUME (a) is faster for now, and use "DIFF" to denote the
>time difference between (a) and (b).

But the checks in (b) are always performed implicitly by the JVM on the
array access in (a), regardless of whether the exception is actually
thrown. So there is no time difference to exploit: you cannot avoid doing
(b), you can only avoid doing (b) twice. Performing the checks in (b)
explicitly is more expensive than letting the JVM do them for you and
simply providing the code that handles the "exceptional" case, i.e. when
the checks fail.
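To make the two styles concrete, here is a minimal sketch (the class and
field names are hypothetical, chosen to echo the fOffset example above).
Style (a) relies on the bounds and null checks the JVM must perform on the
array access anyway; style (b) repeats those same comparisons in Java code
before an access that the JVM then checks again.

```java
// Hypothetical sketch contrasting the two styles discussed above.
public class OffsetTable {
    private final int[][] fOffset;

    public OffsetTable(int chunks, int chunkSize) {
        fOffset = new int[chunks][];
        fOffset[0] = new int[chunkSize]; // only the first chunk is allocated
    }

    public void put(int chunk, int index, int value) {
        fOffset[chunk][index] = value;
    }

    // Style (a): rely on the JVM's implicit checks; handle the rare
    // failure in a catch block. Missing entries are treated as zero.
    public boolean isZeroTryCatch(int chunk, int index) {
        try {
            return fOffset[chunk][index] == 0;
        } catch (ArrayIndexOutOfBoundsException | NullPointerException ex) {
            return true;
        }
    }

    // Style (b): duplicate the checks explicitly. Note the JVM still
    // performs its own checks on the final array access.
    public boolean isZeroExplicit(int chunk, int index) {
        if (fOffset.length <= chunk || fOffset[chunk] == null
                || fOffset[chunk].length <= index) {
            return true;
        }
        return fOffset[chunk][index] == 0;
    }
}
```

Both methods return the same answers; the difference is only where the
checks happen, which is exactly why (b) buys nothing on the common path.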

Here is another couple of examples from Xerces1...

    private int loadNextByte() throws Exception {
        fCurrentOffset++;
        if (USE_TRY_CATCH_FOR_LOAD_NEXT_BYTE) {
            fCurrentIndex++;
            try {
                fMostRecentByte = fMostRecentData[fCurrentIndex] & 0xFF;
                return fMostRecentByte;
            } catch (ArrayIndexOutOfBoundsException ex) {
                return slowLoadNextByte();
            }
        } else {
            if (++fCurrentIndex == UTF8DataChunk.CHUNK_SIZE)
                return slowLoadNextByte();
            else
                return (fMostRecentByte = fMostRecentData[fCurrentIndex] & 0xFF);
        }
    }

(and yes, USE_TRY_CATCH_FOR_LOAD_NEXT_BYTE is set to true)
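The same fast-path/slow-path idea can be shown as a self-contained sketch
(the class and field names here are hypothetical, not Xerces code): the
common case is a bare array access, and reaching the end of a chunk is
detected by the JVM's own bounds check rather than by an explicit
comparison on every call.

```java
// Hypothetical sketch of the fast-path/slow-path pattern used by
// loadNextByte(): the chunk boundary surfaces as the JVM's bounds check.
public class ChunkReader {
    private final byte[][] fChunks;
    private int fChunk = 0;
    private int fIndex = -1;

    public ChunkReader(byte[][] chunks) {
        fChunks = chunks;
    }

    public int nextByte() {
        fIndex++;
        try {
            return fChunks[fChunk][fIndex] & 0xFF; // fast path
        } catch (ArrayIndexOutOfBoundsException ex) {
            return slowNextByte(); // rare: advance to the next chunk
        }
    }

    private int slowNextByte() {
        fChunk++;
        fIndex = 0;
        if (fChunk >= fChunks.length) {
            return -1; // end of data
        }
        return fChunks[fChunk][fIndex] & 0xFF;
    }
}
```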

and

    public byte byteAt(int offset) throws IOException {
        int chunk = offset >> CHUNK_SHIFT;
        int index = offset & CHUNK_MASK;
        try {
            return fData[chunk][index];
        } catch (NullPointerException ex) {
            // ignore -- let fill create new chunk
        } catch (ArrayIndexOutOfBoundsException e) {
            // current chunk array is not big enough; resize
            byte newdata[][] = new byte[fData.length * 2][];
            System.arraycopy(fData, 0, newdata, 0, fData.length);
            fData = newdata;
        }
        if (index == 0) {
            fill();
            return fData[chunk][index];
        }
        return 0;
    }

Yes, it is important to keep the number of resizes down, but that is
because of the expense of the data copies, not the Exception processing.
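The doubling in byteAt() is what keeps those copy costs under control: it
bounds the total number of elements ever copied to less than the final
capacity, whereas growing by a fixed step copies a quadratic amount. A
small sketch (hypothetical names) makes the arithmetic concrete:

```java
// Hypothetical sketch: counting elements copied while growing a table,
// comparing doubling (as in byteAt's resize) against fixed-step growth.
public class GrowthCost {
    // Total elements copied growing from initialCapacity until the table
    // can hold 'needed' entries, doubling each time.
    public static long copiesWithDoubling(int initialCapacity, int needed) {
        long copied = 0;
        int capacity = initialCapacity;
        while (capacity < needed) {
            copied += capacity; // System.arraycopy copies the old table
            capacity *= 2;
        }
        return copied;
    }

    // The same growth performed by a fixed increment instead.
    public static long copiesWithIncrement(int initialCapacity, int needed,
                                           int step) {
        long copied = 0;
        int capacity = initialCapacity;
        while (capacity < needed) {
            copied += capacity;
            capacity += step;
        }
        return copied;
    }
}
```

Growing from 1 to 1024 by doubling copies 1 + 2 + ... + 512 = 1023
elements in total, while growing one slot at a time copies over half a
million, so the resizes, not the catch blocks, are where the time goes.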

Regards,
Glenn
