In article <[EMAIL PROTECTED]>, Peter Dyballa <[EMAIL PROTECTED]> writes:

> Am 04.04.2005 um 03:19 schrieb Kenichi Handa:

>>  The above script outputs raw bytes 0..255, which is not
>>  valid utf-8 data as expected in the *shell* buffer.  So, Emacs
>>  decodes them as raw-byte characters (i.e. characters
>>  belonging to charsets eight-bit-control and
>>  eight-bit-graphic).
>> 
>>  If you want to get iso-8859-13 characters in *shell* buffer,
>>  you must change the process coding systems of the buffer to
>>  iso-latin-7 by C-x RET p iso-latin-7 RET iso-latin-7 RET.
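
By the way, the rough Lisp counterpart of that key sequence (just a
sketch; it assumes the shell process is live in the current buffer)
is:

  (set-process-coding-system (get-buffer-process (current-buffer))
                             'iso-latin-7 'iso-latin-7)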

> My script indeed produces only raw output, which, to become valid 
> UTF-8, would need an introductory C2 or C3 character. The question is: 
> why were these raw characters converted into valid UTF-8

They were not.  Why do you think "those raw characters were
converted into valid UTF-8"?

You wrote:

> In shell I only saw octal representation. 

So, those raw characters were NOT recognized as valid UTF-8,
and thus not converted into normal Emacs characters.
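
You can see the difference with something like this (a rough
illustration only; the exact display may differ):

  ;; Decoding a single stray byte 0xE4:
  (decode-coding-string "\344" 'utf-8)       ; invalid UTF-8 => a raw
                                             ; eight-bit character
  (decode-coding-string "\344" 'iso-latin-7) ; => a real Latin-7 character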

> and why was this not converted into ISO 8859-13 upon
> inserting into the file buffer?

As long as you are copying from one multibyte Emacs buffer to
another multibyte buffer, no character is converted upon
insertion.
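
For instance (just a sketch):

  ;; A raw-byte character inserted into another multibyte buffer is
  ;; still the same character afterwards.
  (let ((raw (decode-coding-string "\344" 'utf-8)))  ; one raw eight-bit char
    (with-temp-buffer
      (set-buffer-multibyte t)        ; make sure the target is multibyte
      (insert raw)
      (string= raw (buffer-string)))) ; => t, nothing was converted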

> Or at least when saving the file?

Because they are raw-byte characters: Emacs writes them to the file
as the original single bytes, regardless of the coding system used
for saving.
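
If you want real ISO 8859-13 characters in the file, the bytes have
to be re-decoded before saving.  A sketch (the function name is only
an example):

  (defun my-redecode-as-latin-7 (beg end)
    "Re-decode raw bytes in the region BEG..END as iso-8859-13 text.
  Only a sketch; it assumes the region contains nothing but raw-byte
  characters."
    (interactive "r")
    (encode-coding-region beg end 'binary)        ; back to plain bytes
    (decode-coding-region beg end 'iso-latin-7))  ; re-decode as Latin-7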

> My usual selection-coding-system seems to be
> compound-text-with-extensions, open to accept anything.

Why is selection-coding-system relevant to the current
problem?

---
Ken'ichi HANDA
[EMAIL PROTECTED]


