On 6/24/2010 4:59 PM, Guido van Rossum wrote:

> But I wouldn't go so far as to claim that interpreting the protocols
> as text is wrong. After all we're talking exclusively about protocols
> that are designed intentionally to be directly "human readable"

I agree that the claim "':' is just a byte" is a bit shortsighted.

If the designers of the protocols had intended to use uninterpreted bytes as protocol markers, they could have (and, I suspect, would have) used otherwise unused control codes, of which there are several. Then there would have been no need for escape mechanisms to put characters like ':', '<', and '>' into content text.
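
To illustrate (a quick sketch of my own, not taken from any particular
protocol spec): because the protocols reserve printable characters such
as ':', '<' and '>' as markers, those same characters have to be escaped
when they occur in content, e.g. percent-encoding in URLs and entity
references in XML/HTML.

    # Printable marker characters must be escaped inside content text.
    from urllib.parse import quote
    from xml.sax.saxutils import escape

    print(quote("key:value", safe=""))   # key%3Avalue  (':' percent-encoded)
    print(escape("a < b > c"))           # a &lt; b &gt; c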

I am very sure that the reason for specifying *ascii* byte values was to be crystal clear about which *character* was meant, and to *exclude* use on the internet of the main incompatible competitor encoding -- IBM's EBCDIC -- which IBM used in all of *its* networks. Until the IBM PC came out in the early 1980s (and IBM originally saw that as a minor sideline and something of a toy), there was a battle over byte encodings between IBM and everyone else.
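
A small sketch of that point (again my own example): the byte value that
represents ':' depends on the encoding. In ASCII it is 0x3A, while in
IBM's EBCDIC (code page 500, available as Python's 'cp500' codec) it is
0x7A, so a spec that says "the byte 0x3A" is really saying "the ASCII
character ':'".

    # The same character maps to different byte values in ASCII vs EBCDIC.
    ascii_colon = ":".encode("ascii")
    ebcdic_colon = ":".encode("cp500")

    print(hex(ascii_colon[0]))    # 0x3a
    print(hex(ebcdic_colon[0]))   # 0x7a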

--
Terry Jan Reedy
