On Sep 28, 7:12 pm, Lie <[EMAIL PROTECTED]> wrote:
> On Sep 28, 3:35 pm, est <[EMAIL PROTECTED]> wrote:
>
> > > Because that's how ASCII is defined.  ASCII is a 7-bit code.
>
> > Then why can't python use another default encoding internally
> > range(256)?
>
> > > Python refuses to guess and tries the lowest common denominator -- ASCII 
> > > -- instead.
>
> > That's the problem. ASCII is INCOMPLETE!
>
> What do you propose? Use MBCS and smack every Linux computer? Use KOI8
> and drive non-Russians to suicide? Use GB and shoot every non-Chinese
> user? Use latin-1 and make email servers scream?
>
> > If Python chose another default encoding that handles range(256),
> > 80% of Python's unicode encoding problems would be gone.
>
> > It's not HARD to process unicode; it's just that Python and the
> > Python community refuse to fix it.
>
> Python's unicode support is already correct. Only your brainwaves
> haven't been tuned to it yet.
>
> > > stop dreaming of a magic solution
>
> > It's not 'magic', it's a BUG. Just try printing bytes 0x7F to 0xFF
> > to the console -- what's wrong with that?
>
> > > Isn't that more or less the same as telling the OP to use unicode() 
> > > instead of str()?
>
> > Sockets can handle str() only. If you pass a unicode object to a
> > socket, Python automatically converts it via str() and that raises
> > an error.

Have you ever programmed with CJK characters before?
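For reference, latin-1 is the one codec that covers range(256) byte-for-byte, as the earlier post asks for -- but it cannot represent CJK text at all, which is one reason Python refuses to guess a default (a sketch, not a recommendation):

```python
# latin-1 maps every byte 0-255 to the Unicode code point with the
# same number, so it round-trips arbitrary byte strings without error.
data = bytes(bytearray(range(256)))   # all 256 possible byte values
decoded = data.decode("latin-1")      # never raises
assert decoded.encode("latin-1") == data

# But CJK text has no latin-1 representation:
try:
    u"\u4e2d\u6587".encode("latin-1")   # "Chinese" in Chinese characters
except UnicodeEncodeError:
    pass  # latin-1 "solves" range(256) only for Western European text
```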
--
http://mail.python.org/mailman/listinfo/python-list
