From: Joe Huber <[EMAIL PROTECTED]>
Date: Sat, 17 Feb 2007 08:18:24 -0800

At 3:12 PM -0800 2/16/07, Mel Patrick wrote:
> I really liked the idea of the word/long read functions based on the
> processor, and I think I will implement those in the future.

But the issue is NOT which processor is running; the issue is the
endianness of the DATA that needs to be read or written.

Hard-coding the endianness to the processor means that files and
network streams would not be interoperable across machines that use a
different processor.

Personally, I think RB's approach of having the LittleEndian flag on
memory blocks and binary streams is exactly the right one. It
handles everything in the simplest manner possible.
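The same principle (tag the data with a byte order, not the host) shows up in other languages too. As a rough analogue to RB's LittleEndian flag, not its actual API, Python's struct module marks the byte order explicitly in the format string:

```python
import struct

# Pack a 32-bit integer with an explicit byte order, independent of
# the host CPU -- the "<" / ">" prefix plays the role of the
# LittleEndian flag on a memory block or binary stream.
value = 0x01020304
little = struct.pack("<I", value)  # b'\x04\x03\x02\x01'
big    = struct.pack(">I", value)  # b'\x01\x02\x03\x04'

# Unpacking uses the same explicit markers, so the code follows
# the data's endianness, never the processor's.
assert struct.unpack("<I", little)[0] == value
assert struct.unpack(">I", big)[0] == value
```

Either way, the byte order is a property of the bytes being exchanged, and the code says so in one place.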

Yes, you need to think through whether data needs to be interoperable
across platforms. If it's an existing file or stream format, then
obviously you need to accommodate its byte order regardless of
platform. If you're creating a new format that needs to be
interoperable across platforms, then just use LittleEndian, since
that's what all modern Macs and Windows machines use.
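For the existing-format case, a small sketch: PNG, for instance, stores its integers big-endian by specification, so a reader honours the file's order no matter what CPU it runs on. (The offsets below come from the PNG spec; the helper name is made up for illustration.)

```python
import struct

def png_dimensions(data: bytes) -> tuple[int, int]:
    # A PNG file starts with an 8-byte signature, then the IHDR
    # chunk: 4-byte length, 4-byte type, then width and height as
    # big-endian 32-bit integers at offsets 16 and 20.
    width, height = struct.unpack_from(">II", data, 16)
    return width, height
```

The ">" in the format string is dictated by the file format, not by whichever machine happens to be reading it.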

No.

Use BigEndian because it's SANE and logical :(

It's not called network order for nothing, you know.
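That convention is baked right into the sockets API: htonl/ntohl convert between host order and big-endian network order (they're no-ops on a big-endian machine), and the round trip always comes back to the original value:

```python
import socket

# "Network order" is big-endian by definition; htonl converts a
# 32-bit value from host order to network order, ntohl converts back.
n = socket.htonl(0x01020304)

# On a little-endian host, n is the byte-swapped value; on a
# big-endian host, htonl is the identity. Either way the round
# trip restores the original.
assert socket.ntohl(n) == 0x01020304
```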

I'll bet you that Apple's CoreEndian can byte-swap 100MB of data in 1 tick :)

Why not give it a go, and see how fast it is yourself?
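As a rough stand-in for such a test (this uses Python's stdlib array.byteswap rather than Apple's CoreEndian, so the numbers are only indicative):

```python
import array
import time

# Byte-swap 100 MB of 32-bit integers in place and time it.
# This is NOT CoreEndian; it's just a quick way to see that bulk
# byte-swapping is cheap on modern hardware.
buf = array.array("I", bytes(100 * 1024 * 1024))  # 100 MB of zeros
start = time.perf_counter()
buf.byteswap()
elapsed = time.perf_counter() - start
print(f"swapped {buf.itemsize * len(buf)} bytes in {elapsed:.3f} s")
```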