If you want to write up a patch to recognize a UTF8 BOM and ignore it, go
ahead. You can just modify the Tokenizer class to recognize and discard a
BOM appearing at the beginning of the input.
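For illustration, a minimal sketch of that approach (an addition here, not protoc's actual Tokenizer code; StripUtf8Bom is a hypothetical helper): check for the three BOM bytes 0xEF 0xBB 0xBF at the very start of the text and drop them before parsing.

    #include <cstdio>
    #include <string>

    // Hypothetical helper: discards a leading UTF-8 BOM (0xEF 0xBB 0xBF)
    // before the text reaches the parser.
    std::string StripUtf8Bom(const std::string& input) {
      static const char kBom[] = "\xEF\xBB\xBF";
      if (input.size() >= 3 && input.compare(0, 3, kBom, 3) == 0) {
        return input.substr(3);  // drop the BOM, keep everything after it
      }
      return input;              // no BOM; return the text unchanged
    }

    int main() {
      std::string with_bom = "\xEF\xBB\xBFmessage Foo {}";
      std::printf("%s\n", StripUtf8Bom(with_bom).c_str());  // prints: message Foo {}
      return 0;
    }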
On Thu, Jul 2, 2009 at 1:31 PM, Marc Gravell wrote:
OK... is there any way it /could/ silently ignore the BOM? ;-p
I can try to advise the caller to use files without BOMs, but since protoc
reads UTF8 anyway, it seems reasonable to accept a BOM?
Marc
protoc actually expects its input to be UTF-8 (though non-ASCII characters
are only allowed in default values for string fields). It just doesn't like
the BOM.
On Thu, Jul 2, 2009 at 12:44 PM, Marc Gravell wrote:
My bad... it isn't the line endings - it is the UTF8 BOM; when I
switched one it switched the other.
(which is annoying; encoding is much trickier than just cr/lf!)
Marc
Protoc treats \r as plain whitespace, so it should have no problem with
Windows line endings. I just tested this and sure enough, protoc works fine
with .proto files that use Windows-style line endings.
Mac pre-OSX line endings (\r with no \n) won't work if the file contains any
comments.
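For what it's worth, a guess at why the comment case fails (an assumption, not something stated above): if a // comment is consumed by scanning for '\n', a file with bare-\r line endings never terminates the comment, so everything after the first comment gets swallowed. A rough sketch of that failure mode (SkipLineComment is hypothetical, not protoc code):

    #include <cstddef>
    #include <string>

    // Advances past a "//" comment by scanning for '\n'.  With bare-\r line
    // endings there is no '\n' to find, so this runs to the end of the text.
    std::size_t SkipLineComment(const std::string& text, std::size_t pos) {
      while (pos < text.size() && text[pos] != '\n') {
        ++pos;  // a bare '\r' does not end the loop
      }
      return pos;
    }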
What ki
I'm using protoc as the raw .proto parser for protobuf-net (I then
process the compiled binary for code-generation); at the moment, it is
very sensitive about line endings - if it isn't LF, it won't work.
This creates a bit of a nag for Windows users, as you have to go out
of your way to get the right line endings.
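For reference, the kind of workaround a caller could apply on the Windows side in the meantime, as an illustrative sketch only (NormalizeLineEndings is a hypothetical helper, not part of protoc or protobuf-net): convert CRLF and bare CR to LF in memory before handing the text to protoc.

    #include <cstddef>
    #include <string>

    std::string NormalizeLineEndings(const std::string& input) {
      std::string out;
      out.reserve(input.size());
      for (std::size_t i = 0; i < input.size(); ++i) {
        if (input[i] == '\r') {
          out.push_back('\n');                 // CR and CRLF both become LF
          if (i + 1 < input.size() && input[i + 1] == '\n') {
            ++i;                               // skip the LF half of a CRLF pair
          }
        } else {
          out.push_back(input[i]);
        }
      }
      return out;
    }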