On Thursday, 26 April 2018 01:27:59 PDT Jason H wrote:
Nah. Big endian is the way to go. Just because one company made little
endian prevalent isn't a good enough reason. Network byte order is big
endian, and when looking at hex dumps, the bytes appear in the order you
expect to see them... which is why I think big endian was chosen as the
default. If you're sending data with QDataStream, network byte order is
the right order. The bits keep getting more significant from right to
left, rather than zigzagging right to left and then back over to the
right for the next byte.
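
A minimal sketch of that QDataStream point, assuming only Qt Core: the
stream defaults to big-endian (network byte order), so a hex dump of the
buffer reads in the same order you wrote the value.

#include <QByteArray>
#include <QDataStream>
#include <QDebug>
#include <QIODevice>

int main()
{
    QByteArray buf;
    QDataStream out(&buf, QIODevice::WriteOnly);
    // QDataStream::byteOrder() is QDataStream::BigEndian unless changed,
    // i.e. network byte order.
    out << quint32(0x11223344);
    qDebug() << buf.toHex();   // "11223344" -- most significant byte first
    return 0;
}

Switching the stream to little-endian is a single
out.setByteOrder(QDataStream::LittleEndian) call, which is exactly why
mixing the two on the wire is such an easy bug to ship.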
You can be as sarcastic as you want while viewing the world from the throw-away x86 hobby-chip grain of sand on the beach. And no, one little company, especially if you mean INTEL, didn't make little-endian popular.

https://en.wikipedia.org/wiki/Endianness

When Intel developed the 8008 <https://en.wikipedia.org/wiki/Intel_8008> microprocessor for Datapoint, they used little-endian for compatibility. However, as Intel was unable to deliver the 8008 in time, Datapoint used a medium-scale integration <https://en.wikipedia.org/wiki/Medium_scale_integration> equivalent, …

http://www.columbia.edu/cu/computinghistory/pdp10.html

The Digital Equipment Corporation PDP-10 (1964-1983) is one of the most influential computers in history in more ways than can be listed here. It was the foundation of the DECsystem-10 and the DECSYSTEM-20 and ran a variety of operating systems including TOPS-10, ITS, WAITS, TYMCOM-X, TENEX, and TOPS-20. It was the first widely used timesharing system. It was the basis of the ARPANET (now Internet). It was the platform upon which many of today's popular applications were first developed including EMACS, TeX, ISPELL (the first spell-checker), and Kermit.

The big iron of the day, the DEC-10, the DEC-20, and the IBM mainframes, all used big-endian to achieve massive throughput. When DEC started making the smaller PDP-11 machines, they went to a mixed-endian format (32-bit values stored as two little-endian 16-bit words, high word first) and dramatically reduced performance. While the 11/44 was a massive seller and still exists under the hoods of many other devices today, it was sloooow compared to the big boxes.
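
For anyone who has never stared at one of these dumps, a quick sketch of
how the same 32-bit value lands in memory under the three schemes just
mentioned:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    const std::uint32_t v = 0xAABBCCDD;

    std::uint8_t host[4];
    std::memcpy(host, &v, sizeof host);   // whatever this machine does

    std::printf("host order : %02X %02X %02X %02X\n",
                host[0], host[1], host[2], host[3]);
    std::printf("big-endian : AA BB CC DD  (most significant byte first)\n");
    std::printf("little     : DD CC BB AA\n");
    std::printf("PDP mixed  : BB AA DD CC  (little-endian words, high word first)\n");
    return 0;
}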

VAX hardware went little-endian to be more compatible with all of the knock-off midrange computers and the wanna-be dual-floppy computers. It suffered profusely in I/O throughput as a result.

We won't discuss what happened when the Alpha chip was being "second sourced" at INTEL.

For whatever reason, the highest-throughput machines have always been, and continue to be, big-endian. Don't confuse "throughput" with some rigged count of numerical calculations. Throughput is the ability to read-calculate-store complete transactions.

https://www.symmetrymagazine.org/article/april-2014/ten-things-you-might-not-know-about-particle-accelerators


       There are more than 30,000 accelerators in operation around the
       world.

Tiny wanna-be computers get built into the sensors placed around these things, and they transfer the data back to big-endian machines for serious throughput. The simple act of gathering sensor data and packetizing it can easily exceed the throw-away chip's capability, so adding more workload to it cannot really be considered.
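
A minimal sketch of that packetizing step, with hypothetical field names
and the usual POSIX htonl/htons: on a little-endian sensor chip every
multi-byte field costs a swap before it goes on the wire, while on a
big-endian collector the same calls cost nothing.

#include <arpa/inet.h>   // htonl, htons
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical reading; a real wire format would be project-specific.
struct SensorReading {
    std::uint32_t timestamp_us;
    std::uint16_t sensor_id;
    std::uint16_t value;
};

// Serialize one reading into network byte order (big-endian).
// Returns the number of bytes written.
std::size_t packetize(const SensorReading &in, std::uint8_t out[8])
{
    const std::uint32_t ts  = htonl(in.timestamp_us); // swaps on little-endian hosts
    const std::uint16_t id  = htons(in.sensor_id);
    const std::uint16_t val = htons(in.value);
    std::memcpy(out,     &ts,  sizeof ts);
    std::memcpy(out + 4, &id,  sizeof id);
    std::memcpy(out + 6, &val, sizeof val);
    return 8;
}

int main()
{
    const SensorReading r{123456u, 7u, 512u};
    std::uint8_t wire[8];
    packetize(r, wire);   // wire[] now reads naturally in a hex dump
    return 0;
}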




--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog
http://lesedi.us/
http://onedollarcontentstore.com

