On 12/11/2008 10:07, Tim Chase wrote:
>> You haven't explained why you think that you *need* a list of all
>> encodings that exist at a point in time. What are you going to do with
>> the list?
>
> Just because I ran into this recently: the Dilbert.com site returns a
> bogus Content-Type header with
>
>   Content-Type: text/html; charset=utf-8lias
>
> To handle this, I had to use Python's list of known encodings to
> determine whether I could even decode the page (by passing the name to
> a byte string's .decode() method).
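[For reference, the charset parameter can be pulled out of such a header with the stdlib's email machinery before any decode is attempted. This is only a sketch; the function name and the "latin-1" fallback are my own choices, not anything from the site's response.]

```python
from email.message import Message

def charset_from_content_type(header_value, default="latin-1"):
    """Pull the charset parameter out of a Content-Type header value."""
    # Parse the header the way the email package does; this handles
    # quoting and parameter syntax so we don't have to split by hand.
    msg = Message()
    msg["Content-Type"] = header_value
    charset = msg.get_content_charset()  # None if there is no charset param
    return charset or default

# The bogus Dilbert.com header yields the unusable name "utf-8lias":
print(charset_from_content_type("text/html; charset=utf-8lias"))
```

Note this only extracts the *claimed* charset; whether Python actually knows a codec by that name is a separate check.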

You haven't said why you think you need a list of known encodings!

I would have thought that just trying it on some dummy data would tell you very quickly whether the alleged encoding is supported by the Python version you are using.

E.g.

| >>> alleged_encoding = "utf-8lias"
| >>> "any old ascii".decode(alleged_encoding)
| Traceback (most recent call last):
|  File "<stdin>", line 1, in <module>
| LookupError: unknown encoding: utf-8lias
| >>>



--
http://mail.python.org/mailman/listinfo/python-list
