Nick Coghlan wrote:
> Arbitrary binary data and ASCII compatible binary data are *different things* and the only argument in favour of modelling them with a single type is because Python 2 did it that way.

I would say that ASCII-compatible binary data is a
*subset* of arbitrary binary data. As such, a type
designed for arbitrary binary data is a perfectly good
way of representing ASCII-compatible binary data.
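
For instance, Python 3's bytes type already works this
way (a throwaway snippet of my own, not from Nick's post):

    # ASCII-oriented methods work on data that happens
    # to be ASCII compatible...
    header = b"Content-Type: text/plain"
    print(header.split(b": "))   # [b'Content-Type', b'text/plain']

    # ...while the very same type holds arbitrary bytes.
    blob = bytes([0x00, 0xff, 0x7f])
    print(blob.hex())            # 00ff7f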

What are you saying -- that there should be one type
for ASCII-compatible binary data, and another type
for all binary data *except* when it's ASCII compatible?

That makes no sense to me.

> The Python 3 text model was built on the notion of "no implicit encoding and decoding"

This is nonsense. There are plenty of implicit
encoding and decoding operations in Python 3.

When you open a text file, it gets an encoding. After
that, anything you write to it is implicitly encoded
using that encoding. There's even a default encoding
when you open the file, so you don't even have to be
explicit about that.
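
To make that concrete (a minimal sketch; the exact
default encoding is platform-dependent):

    with open("greeting.txt", "w") as f:   # no encoding argument:
        f.write("caf\u00e9")               # a default is chosen, and
                                           # the str is encoded
                                           # implicitly on write

    with open("greeting.txt", "rb") as f:  # reading the raw bytes
        print(f.read())                    # back shows the encoded
                                           # form, e.g. b'caf\xc3\xa9'
                                           # under UTF-8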

It's more correct to say that it was built on the
notion of using separate types for encoded and
decoded data, so that it's *possible* to keep track
of the difference. It doesn't mean that there can't
be conversions between the two types that are
implicit to one degree or another.
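
The fully explicit form of the same conversion, for
contrast (again just a sketch):

    text = "caf\u00e9"             # str: decoded text
    data = text.encode("utf-8")    # bytes: explicitly encoded data
    assert data == b"caf\xc3\xa9"
    assert data.decode("utf-8") == text

    # The two types keep the distinction; whether a given
    # conversion is spelled out or happens behind an API
    # (as with the text file above) is a separate question.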

--
Greg