[protobuf] ProtoBuf 3 C# Options

2018-05-04 Thread Jeremy Swigart
Anyone know why generated C# classes don't include the named constants for 
options? (MessageOptions, FieldOptions, etc.)

They still work, but you have to hard-code the number value directly, which 
is less than readable and isn't on par with the named constants generated 
for the C++ files.

-- 
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to protobuf+unsubscr...@googlegroups.com.
To post to this group, send email to protobuf@googlegroups.com.
Visit this group at https://groups.google.com/group/protobuf.
For more options, visit https://groups.google.com/d/optout.


Re: [protobuf] Re: Compatibility Issue + Max value for the indices/field numbers + are high field number slower?

2016-06-20 Thread Jeremy Ong
> Separate embedded messages would involve switches for the code generation
(official build vs modded build) but could be doable, and maybe it is even
a bit cleaner.
> The CRC thing would have meant a uniform solution, but maybe namespacing
the modded messages isn't the worst idea.

I'd be really surprised if a CRC ended up being more performant than a
single branch to check whether the mod message exists, if I'm understanding
your situation correctly.
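For what it's worth, the hashing idea being weighed in this thread can be sketched at the byte level. This is only an illustrative sketch, not anything protobuf provides — the `path_to_field_number` helper and its collision handling are hypothetical. It folds a 32-bit FNV-1a hash into the valid field-number range 1 to 2^29 - 1, stepping around the block 19000-19999 that protobuf reserves for itself:

```python
def fnv1a_32(data: bytes) -> int:
    """Plain 32-bit FNV-1a hash."""
    h = 0x811C9DC5  # FNV-1a 32-bit offset basis
    for byte in data:
        h ^= byte
        h = (h * 0x01000193) & 0xFFFFFFFF  # FNV prime, wrapped to 32 bits
    return h

MAX_FIELD_NUMBER = (1 << 29) - 1  # protobuf field numbers are 1 .. 2^29 - 1

def path_to_field_number(path: str) -> int:
    """Hypothetical helper: map a tree path to a legal field number."""
    n = fnv1a_32(path.encode("utf-8")) % MAX_FIELD_NUMBER + 1  # 1 .. 2^29 - 1
    if 19000 <= n <= 19999:  # this block is reserved by protobuf itself
        n += 1000
    return n
```

Note that collisions between different paths remain possible even with a full 29-bit space, which is part of why a single branch (or separate sub-messages) is the cheaper design.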

On Mon, Jun 20, 2016 at 3:36 PM, a_teammate  wrote:

> On Monday, June 20, 2016 at 20:50:17 UTC+2, Jeremy Ong wrote:
>
>> https://developers.google.com/protocol-buffers/docs/encoding#structure
>>
>> Protobuf messages are associative arrays of key-value pairs, where the
>> key is a union of the field number encoded as a varint and the value wire
>> type (the union operator being a left shift of the field number by 3 bits).
>> Because the field number is variable width, its theoretical size is
>> unbounded, but limits are likely implementation dependent: some programming
>> languages support arbitrarily large numbers, and other implementations might
>> use fixed-width types to represent the field for convenience.
>>
>
> Ah! Thank you for pointing that out; that helps me understand the structure
> of protobuf much better. And especially thanks for the link!
>
> On Monday, June 20, 2016 at 21:47:39 UTC+2, Feng Xiao wrote:
>>
>>
>> On Sun, Jun 19, 2016 at 7:56 AM, a_teammate  wrote:
>>>
>>> The problem, however, is that we don't only want forward/backward
>>> compatibility between server and client, but also sideways:
>>> e.g. person A introduced message index 2 and so did person B, both
>>> meaning totally different things, but it should be recognized and
>>> ignored (or maybe even accepted?! if we find a way to do this)
>>>
>> If person A already added a field with field number 2, how can person B
>> add another one with the same field number? Do you have multiple copies of
>> the .proto files and they are not synced?
>>
>
> Well, we're an open-source multiplayer game and we highly encourage modding,
> so it would be cool if a modded client meeting a modded server could work
> together (could be doable since our scripting uses a similar/the same API;
> whether that's smart security-wise is another question, of course).
> The protobuf code gets generated here from code reflection, so people
> don't need to deal with syncing themselves. That's where I meant the
> CRCing would come into play;
> well, I assume I wasn't quite clear about that initially.
>
>  On Monday, June 20, 2016 at 20:50:17 UTC+2, Jeremy Ong wrote:
>
>> If you are trying to prevent collisions between two people modifying the
>> key space, I recommend making separate embedded messages so there is no
>> chance of collision. CRC-ing field numbers is just too heavyweight for
>> what you're trying to do, in my opinion.
>>
>
> Separate embedded messages would involve switches for the code generation
> (official build vs modded build) but could be doable, and maybe it is even
> a bit cleaner.
> The CRC thing would have meant a uniform solution, but maybe namespacing
> the modded messages isn't the worst idea.
>
> Another alternative would be to sync the metadata initially (so the modded
> server deals with the input according to the client's description of its own
> protocol, not the server's assumption); well, we've got the choice :)
>
> On Monday, June 20, 2016 at 20:50:17 UTC+2, Jeremy Ong wrote:
>
>> Regarding performance, varint encoding/decoding time is O(n) in the byte
>> length of the result. Whether this is important depends on your application
>> of course, but you're really better off understanding how the encoding
>> works so you can do a quick back of the envelope guess to see if it
>> matters, followed by actually benchmarking if performance is really that
>> important to you.
>>
>>
> Yeah, I see; well, benchmarking will come into play sooner or later, that's
> for sure!
>
> On Monday, June 20, 2016 at 21:47:39 UTC+2, Feng Xiao wrote:
>>
>>
>>
>> On Sun, Jun 19, 2016 at 7:56 AM, a_teammate  wrote:
>>
>>> 1) what is the maximum value of the protobuf field numbers?
>>>
>> The range of valid field numbers is 1 to 2^29 - 1:
>> https://developers.google.com/protocol-buffers/docs/proto#assigning-tags
>>
>> Some field numbers in this range are reserved so you will need to account
>> for those as well.
>>
>
> Ah, nice! So if our benchmarks suggest a negligible performance impact,
> hashing could be doable, since that area see




-- 
Jeremy Ong
PlexChat CTO
650.400.6453



Re: [protobuf] Compatibility Issue + Max value for the indices/field numbers + are high field number slower?

2016-06-20 Thread Jeremy Ong
https://developers.google.com/protocol-buffers/docs/encoding#structure

Protobuf messages are associative arrays of key-value pairs, where the key
is a union of the field number encoded as a varint and the value wire type
(the union operator being a left shift of the field number by 3 bits). Because
the field number is variable width, its theoretical size is unbounded, but
limits are likely implementation dependent: some programming languages support
arbitrarily large numbers, and other implementations might use fixed-width
types to represent the field for convenience.

If you are trying to prevent collisions between two people modifying the
key space, I recommend making separate embedded messages so there is no
chance of collision. CRC-ing field numbers is just too heavyweight for
what you're trying to do, in my opinion.

Regarding performance, varint encoding/decoding time is O(n) in the byte
length of the result. Whether this is important depends on your application
of course, but you're really better off understanding how the encoding
works so you can do a quick back of the envelope guess to see if it
matters, followed by actually benchmarking if performance is really that
important to you.
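As a concrete sketch of the key structure described above (field number shifted left by 3 bits, OR'd with the wire type, then varint-encoded), here is a minimal pure-Python tag encoder. It also makes the performance point visible: the tag's byte length grows with the field number.

```python
def encode_varint(n: int) -> bytes:
    """Base-128 varint: 7 payload bits per byte, MSB = continuation flag."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # more bytes follow
        else:
            out.append(b)
            return bytes(out)

def encode_tag(field_number: int, wire_type: int) -> bytes:
    """Key = (field_number << 3) | wire_type, then varint-encoded."""
    return encode_varint((field_number << 3) | wire_type)

# Tag size grows with the field number: fields 1..15 fit in one tag byte,
# 16..2047 need two, and the maximum field number (2^29 - 1) needs five.
assert len(encode_tag(1, 0)) == 1
assert len(encode_tag(16, 0)) == 2
assert len(encode_tag((1 << 29) - 1, 0)) == 5
```

So the worst case costs four extra bytes per field occurrence versus a small field number — easy to estimate on the back of an envelope before benchmarking.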

On Sun, Jun 19, 2016 at 7:56 AM, a_teammate  wrote:

> Hey there,
>
> This might be a stupid question, but I haven't found anything definite in the
> docs/specs about this:
>
> Our main goal is actually to keep compatibility while syncing a tree.
>
> The protocol is actually just one giant oneof containing all possible
> paths for the tree:
>
> message TreeNodeChanged {
>   oneof key {
>     sometype path_to_node1 = 1;
>     sometype path_to_node2 = 2;
>     ...
>   }
> }
>
>
> and that's already working.
> The problem, however, is that we don't only want forward/backward
> compatibility between server and client, but also sideways:
> e.g. person A introduced message index 2 and so did person B, both
> meaning totally different things, but it should be recognized and
> ignored (or maybe even accepted?! if we find a way to do this)
>
>
> So the idea of my mate was to make the field number a hash of the
> path_to_node!
> Candidates are e.g. a 32-bit FNV hash or maybe an adapted CRC32. Either would
> mean field numbers in a pretty high range.
>
> Maybe we're going the totally wrong way here but following this path leads
> to the following issues:
>
>
> 1) what is the maximum value of the protobuf field numbers?
>
> In the proto language specification
> <https://developers.google.com/protocol-buffers/docs/reference/proto3-spec#fields>
> it simply says it's of type "intLit", and intLit is:
>
> intLit = decimalLit | octalLit | hexLit
>> decimalLit = ( "1" … "9" ) { decimalDigit }
>> octalLit   = "0" { octalDigit }
>> hexLit = "0" ( "x" | "X" ) hexDigit { hexDigit }
>>
>>
> So this means only decimal and hexadecimal values are actually allowed,
> doesn't it?
> Then however given:
>
> decimalDigit = "0" … "9"
> hexDigit = "0" … "9" | "A" … "F" | "a" … "f"
>
>  That means it has different limits for hex and decimal notation; is that correct?
>
> I mean:
>
> the max value for decimalLit is one billion minus one, "999 999 999",
> according to this spec, which fits fine in a 32-bit integer (using only
> 30 bits)
>
> but for base 16 its allowed length is 16 digits! which would be awesome, because
> that would mean an allowed integer size of 64 bits.
>
> So which one is true? Both?
>
> which leads to issue 2:
>
> 2) are there issues with high field numbers?
>
> And are they even tested at all?
>
> I've read elsewhere
> <https://developers.google.com/protocol-buffers/docs/proto#customoptions>
> that *"we have used field numbers in the range 50000-99999. This range is
> reserved for internal use within individual organizations"*,
> which would suggest that even values above 50,000 are uncommon ..
>
> Furthermore, some people mentioned high values would suffer from being
> less performant, but: how relevant is that? Only because the index
> number consumes slightly more memory?
>
>
> Well: maybe we're asking totally the wrong questions here and there's a much
> simpler mechanism, already introduced or invented, to make protobuf
> messages version-independent; if yes, we would be happy to hear it!
>
> Thanks in advance and for reading all this stuff :)
>

Re: [protobuf] sanitizer/asan_interface.h: No such file or directory when building with -fsanitize=address

2016-06-01 Thread Jeremy Ong
Those debug sanitizers are part of the LLVM suite of tools.
http://compiler-rt.llvm.org/


On Wed, Jun 1, 2016 at 12:06 PM, Benjamin Sapp  wrote:

> Hi, I ran the following:
>
> $ git clone https://github.com/google/protobuf.git
> $ cd protobuf
> $ bazel build --copt -fsanitize=address --linkopt -fsanitize=address --copt
> -DADDRESS_SANITIZER=1 --compilation_mode=fastbuild  --verbose_failures --
> curses=no :protobuf
> INFO: Loading...
> INFO: Found 1 target...
> INFO: Building...
> ERROR: /tmp/protobuf/BUILD:71:1: C++ compilation of rule
> '//:protobuf_lite' failed: namespace-sandbox failed: error executing
> command
>   (cd /home/bensapp/.cache/bazel/_bazel_bensapp/
> 9d77888f7e298040819668f1b7f626a8/protobuf && \
>   exec env - \
> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
>   /home/bensapp/.cache/bazel/_bazel_bensapp/
> 9d77888f7e298040819668f1b7f626a8/protobuf/_bin/namespace-sandbox @/home/
> bensapp/.cache/bazel/_bazel_bensapp/9d77888f7e298040819668f1b7f626a8/
> protobuf/bazel-sandbox/7e30b972-a4b9-427a-add1-c307b1088902-0.params -- /
> usr/bin/gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -
> Wall -Wunused-but-set-parameter -Wno-free-nonheap-object 
> -fno-omit-frame-pointer
> '-fsanitize=address' '-DADDRESS_SANITIZER=1' '-std=c++0x' -iquote . -iquote
> bazel-out/local_linux-fastbuild/genfiles -iquote external/bazel_tools -iquote
> bazel-out/local_linux-fastbuild/genfiles/external/bazel_tools -isystem
> src -isystem bazel-out/local_linux-fastbuild/genfiles/src -isystem
> external/bazel_tools/tools/cpp/gcc3 -DHAVE_PTHREAD -Wall -Wwrite-strings -
> Woverloaded-virtual -Wno-sign-compare '-Wno-error=unused-function' -no-
> canonical-prefixes -fno-canonical-system-headers -Wno-builtin-macro-redefined
> '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"'
> '-D__TIME__="redacted"'
> '-frandom-seed=bazel-out/local_linux-fastbuild/bin/_objs/protobuf_lite/src/google/protobuf/arena.pic.o'
> -MD -MF bazel-out/local_linux-fastbuild/bin/_objs/protobuf_lite/src/google
> /protobuf/arena.pic.d -fPIC -c src/google/protobuf/arena.cc -o bazel-out/
> local_linux-fastbuild/bin/_objs/protobuf_lite/src/google/protobuf/arena.
> pic.o).
> src/google/protobuf/arena.cc:35:38: fatal error: sanitizer/asan_interface.h: No such file or directory
>  #include <sanitizer/asan_interface.h>
>   ^
> compilation terminated.
> INFO: Building complete.
> Target //:protobuf failed to build
> INFO: Elapsed time: 0.525s, Critical Path: 0.34s
>
> The only place I have sanitizer/asan_interface.h on my machine is
> /usr/lib/llvm-3.6/lib/clang/3.6.0/include/sanitizer/asan_interface.h
>
> but I'm using (and would like to keep using) gcc.
>
> Any advice, or is there a proper place to file a bug?
>



-- 
Jeremy Ong
PlexChat CTO
650.400.6453



Re: [protobuf] can protobuf3 be used with protobuf2?

2016-05-19 Thread Jeremy Ong
The handling of unknown fields is precisely where the difference lies, and
sticking to proto2 is indeed the plan. However, because you have many
clients forced to stick with proto2, you inevitably bifurcate the users,
and implementations are now forced to support two standards. Proto3 was
supposedly designed to *simplify* the implementation of the protocol, but
this is moot when the old implementation is necessary anyway. Honestly, it
feels like a repeat of the Python 2/Python 3 situation (although admittedly
perhaps not as serious).

As for JSON fields not being keyed to field number, that was a choice made
with the encoding and has nothing to do with JSON itself. Do those JSON
blobs also lose forward compatibility when the schema changes?

Thanks for the discussion,
J

On Thu, May 19, 2016 at 7:01 PM, Tim Kientzle  wrote:

>
> > On May 18, 2016, at 10:01 PM, Jeremy Ong  wrote:
> >
> > Why does adding JSON require dropping unknown fields? So long as fields
> are keyed to field number, I don't see why the JSON encoding requires
> special treatment with respect to the binary one.
>
> JSON fields aren’t keyed to field number.  They’re keyed to field name.
>
> Even apart from field naming, JSON and protobuf wire formats don’t
> correspond 1:1, so you can’t even correctly translate the primitive values
> without the schema.
>
> > However, proto3 breaks compatibility with the underlying data
> (proto2 encoded), which is where I find myself in disagreement.
>
> What do you think is different?  Having decoded (by hand) a fair bit of
> proto2 and proto3 data, they look exactly the same to me.
>
> As I mentioned before, if preserving unknown fields is essential for you,
> you should stick with proto2.  It’s still around and will be for a long
> time.


> Cheers,
>
> Tim
>
>


-- 
Jeremy Ong
PlexChat CTO
650.400.6453



Re: [protobuf] can protobuf3 be used with protobuf2?

2016-05-18 Thread Jeremy Ong
Why does adding JSON require dropping unknown fields? So long as fields are
keyed to field number, I don't see why the JSON encoding requires special
treatment with respect to the binary one.

I can understand how transitioning between major versions may require
breaks in compatibility. However, proto3 breaks compatibility with
the underlying data (proto2 encoded), which is where I find myself in
disagreement. Why not preserve data compatibility so that, over time, proto2
users can migrate? Unknown field handling (or the lack thereof) is honestly the
one I find most egregious.

On Wed, May 18, 2016 at 9:51 PM, Tim Kientzle  wrote:

> After studying proto3 pretty carefully, I’ve come around quite a bit on
> these changes:
>
> I believe adding JSON requires dropping unknown fields.  You simply cannot
> preserve unknown fields and properly support multiple encodings.
>
> I’m less sure about replacing extension support with Any.  Extensions have
> some ugly problems, but I feel the current spec for Any also has some real
> drawbacks.
>
> Removing field presence is a subtle issue, but I’m starting to suspect it
> was actually a very good change.  It reduces the generated code and the
> workaround of using a single-element oneof is cleaner than it might sound.
> In essence, a single-element oneof is just a way to explicitly declare that
> you want to track presence for that field.  And oneof is supported by
> proto2 now, so you can use that technique there as well.
>
> Finally, remember that proto2 is not going away:   If proto2 assumptions
> are deeply baked into your systems, you can keep using it.  protoc will
> continue to support it for a very long time.
>
> Cheers,
>
> Tim
>
>
>
> > On May 18, 2016, at 1:33 PM, Jeremy Ong  wrote:
> >
> > Big fan of 4, 5, 6, and 7. Huge un-fan of 2 and 3. I am mixed on 1
> because I love the removal of required fields, hate the removal of field
> presence. All the changes I dislike are significant losses in functionality
> and break compatibility with existing users of proto2 and I'd be interested
> to understand why "ease of implementation" is good justification for this
> break in compatibility and what I perceive to be a loss in functionality.
> >
> > On Wed, May 18, 2016 at 11:18 AM, 'Feng Xiao' via Protocol Buffers <
> protobuf@googlegroups.com> wrote:
> >
> >
> > On Wed, May 18, 2016 at 9:27 AM, Artem Kazakov 
> wrote:
> > +1
> > Yes, a checklist would be extremely helpful.
> >
> >
> > On Friday, April 29, 2016 at 5:04:56 PM UTC-4, Kostiantyn Shchepanovskyi
> wrote:
> > It would be nice to have a migration guide (checklist) somewhere, like:
> >
> > 1. All fields should be optional.
> > 2. Do not use custom default values.
> > 3. All enums should have first element with tag = 0.
> > 4. Do not use extension for anything except custom options.
> >
> > Something else?
> > In the 3.0.0-alpha-1 release notes there is a list of the main proto3 changes:
> > The following are the main new features in language version 3:
> >
> >   1. Removal of field presence logic for primitive value fields,
> removal of required fields, and removal of default values. This makes
> proto3 significantly easier to implement with open struct representations,
> as in languages like Android Java, Objective C, or Go.
> >   2. Removal of unknown fields.
> >   3. Removal of extensions, which are instead replaced by a new
> standard type called Any.
> >   4. Fix semantics for unknown enum values.
> >   5. Addition of maps.
> >   6. Addition of a small set of standard types for representation of
> time, dynamic data, etc.
> >   7. A well-defined encoding in JSON as an alternative to binary
> proto encoding.
> >
> >
> >
> >
> > On Friday, April 29, 2016 at 1:18:12 AM UTC+3, Feng Xiao wrote:
> >
> >
> > On Tue, Apr 26, 2016 at 7:04 PM, Bo Gao  wrote:
> > suppose the server side is updating to protobuf3, but the client side still
> uses protobuf2; can they communicate well?
> > Yes, as long as you only use proto3 features, they are wire compatible.
> >
> >

Re: [protobuf] can protobuf3 be used with protobuf2?

2016-05-18 Thread Jeremy Ong
Big fan of 4, 5, 6, and 7. Huge un-fan of 2 and 3. I am mixed on 1 because
I love the removal of required fields, hate the removal of field presence.
All the changes I dislike are significant losses in functionality and break
compatibility with existing users of proto2 and I'd be interested to
understand why "ease of implementation" is good justification for this
break in compatibility and what I perceive to be a loss in functionality.

On Wed, May 18, 2016 at 11:18 AM, 'Feng Xiao' via Protocol Buffers <
protobuf@googlegroups.com> wrote:

>
>
> On Wed, May 18, 2016 at 9:27 AM, Artem Kazakov  wrote:
>
>> +1
>> Yes, a checklist would be extremely helpful.
>>
>>
>> On Friday, April 29, 2016 at 5:04:56 PM UTC-4, Kostiantyn Shchepanovskyi
>> wrote:
>>>
>>> It would be nice to have a migration guide (checklist) somewhere, like:
>>>
>>> 1. All fields should be optional.
>>> 2. Do not use custom default values.
>>> 3. All enums should have first element with tag = 0.
>>> 4. Do not use extension for anything except custom options.
>>>
>>> Something else?
>>>
>>> In the 3.0.0-alpha-1 release notes
> <https://github.com/google/protobuf/releases/tag/v3.0.0-alpha-1> there is
> a list of the main proto3 changes:
>
> The following are the main new features in language version 3:
>
>1. Removal of field presence logic for primitive value fields, removal
>of required fields, and removal of default values. This makes proto3
>significantly easier to implement with open struct representations, as in
>languages like Android Java, Objective C, or Go.
>2. Removal of unknown fields.
>3. Removal of extensions, which are instead replaced by a new standard
>type called Any.
>4. Fix semantics for unknown enum values.
>5. Addition of maps.
>6. Addition of a small set of standard types for representation of
>time, dynamic data, etc.
>7. A well-defined encoding in JSON as an alternative to binary proto
>encoding.
>
>
>
>
>
>>
>>> On Friday, April 29, 2016 at 1:18:12 AM UTC+3, Feng Xiao wrote:
>>>>
>>>>
>>>>
>>>> On Tue, Apr 26, 2016 at 7:04 PM, Bo Gao  wrote:
>>>>
>>>>> suppose the server side is updating to protobuf3, but the client side still
>>>>> uses protobuf2; can they communicate well?
>>>>>
>>>> Yes, as long as you only use proto3 features, they are wire compatible.
>>>>
>>>>



-- 
Jeremy Ong
PlexChat CTO



Re: [protobuf] Re: Forced serialization/deserialization of unknown fields in proto3 messages

2016-03-18 Thread Jeremy Ong
Neither is appropriate in my use case, unfortunately. I want to be
able to tag any message with data in a field range set aside within the
organization. The point is that I don't want to add fields to the
existing hundreds and hundreds of message types we have already.

For the time being, I have switched everything back to proto2, which,
despite some inconvenience, I believe is more feature-complete and
superior to proto3 at this time.

On Fri, Mar 18, 2016 at 4:20 PM, 'Feng Xiao' via Protocol Buffers
 wrote:
>
>
> On Tuesday, March 15, 2016 at 12:32:07 PM UTC-7, Jeremy Ong wrote:
>>
>> Hi google pb,
>>
>> I was wondering if an interface exists for specifying that I do not want
>> the proto3 serialization or deserialization to discard unknown fields. My
>> understanding was that this change was made from proto2 to proto3, and is a
>> pretty severe restriction if there are no ways around it. The motivating
>> example in my case is to potentially decorate a message with fields in the
>> extension ranges that are not part of the message body. The meaning is
>> purely semantic and I do not want the data therein to be contained in the
>> protobuf format itself. If unknown fields are not an option, are there other
>> options or suggestions to handle this?
>
> Some alternatives to consider:
> 1. use a bytes field to store these data, and decode it manually if it's a
> proto.
> 2. use an google.protobuf.Any field for it if the data is still a protobuf
> message.
>
>>
>>
>> Best,
>> Jeremy
>



Re: [protobuf] Re: Forced serialization/deserialization of unknown fields in proto3 messages

2016-03-18 Thread Jeremy Ong
Thanks for the suggestion, but this would mean that every message would need
to be packed in this structure, including nested messages. It's simply too
heavy a hack for what I believe was a simple and useful feature that was
part of proto2 and removed with no explanation or way to disable it.
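For context, the concatenation approach under discussion relies on protobuf's merge-on-concatenation property: appending extra serialized fields to an already-serialized message parses as a single message containing both. A byte-level sketch of that property (field numbers 1 and 999 are arbitrary here, and the toy parser only handles varint / wire-type-0 fields):

```python
def write_varint(n: int) -> bytes:
    """Base-128 varint encoding."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # continuation bit set
        else:
            out.append(b)
            return bytes(out)

def field(number: int, value: int) -> bytes:
    """One wire-type-0 (varint) field: tag byte(s) followed by the value."""
    return write_varint((number << 3) | 0) + write_varint(value)

def read_varint(buf: bytes, i: int):
    result = shift = 0
    while True:
        b = buf[i]
        i += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, i
        shift += 7

def parse(buf: bytes) -> dict:
    """Toy parser: assumes every field in the buffer is wire type 0."""
    fields, i = {}, 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        value, i = read_varint(buf, i)
        fields[key >> 3] = value  # field number = key >> 3
    return fields

# The "normal" message and the decoration, serialized separately...
base = field(1, 150)
extra = field(999, 42)
# ...concatenate into bytes that parse as one message with both fields.
assert parse(base + extra) == {1: 150, 999: 42}
```

This is exactly the byte-string trick: no schema change, but as noted it only works at the serialized-bytes level and costs extra copies.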

On Fri, Mar 18, 2016 at 7:06 PM, Steven Parkes 
wrote:

> You can do something like this by serializing the normal message and the
> added fields separately and then concatenating the byte strings.
>
> Not helpful if you need to do this above the byte string level. And may
> have other disadvantages, e.g., extra copies ...
>
> On Fri, Mar 18, 2016 at 6:51 PM, Jeremy Ong  wrote:
>
>> Neither is appropriate in my use case, unfortunately. I want to be
>> able to tag any message with data in a field range set aside within the
>> organization. The point is that I don't want to add fields to the
>> existing hundreds and hundreds of message types we have already.
>>
>> For the time being, I have switched everything back to proto2, which
>> despite some inconvenience I believe is more feature complete and
>> superior to proto3 at this time.
>>
>> On Fri, Mar 18, 2016 at 4:20 PM, 'Feng Xiao' via Protocol Buffers
>>  wrote:
>> >
>> >
>> > On Tuesday, March 15, 2016 at 12:32:07 PM UTC-7, Jeremy Ong wrote:
>> >>
>> >> Hi google pb,
>> >>
>> >> I was wondering if an interface exists for specifying that I do not
>> want
>> >> the proto3 serialization or deserialization to discard unknown fields.
>> My
>> >> understanding was that this change was made from proto2 to proto3, and
>> is a
>> >> pretty severe restriction if there are no ways around it. The
>> motivating
>> >> example in my case is to potentially decorate a message with fields in
>> the
>> >> extension ranges that are not part of the message body. The meaning is
>> >> purely semantic and I do not want the data therein to be contained in
>> the
>> >> protobuf format itself. If unknown fields are not an option, are there
>> other
>> >> options or suggestions to handle this?
>> >
>> > Some alternatives to consider:
>> > 1. use a bytes field to store these data, and decode it manually if
>> it's a
>> > proto.
>> > 2. use an google.protobuf.Any field for it if the data is still a
>> protobuf
>> > message.
>> >
>> >>
>> >>
>> >> Best,
>> >> Jeremy
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> Groups
>> > "Protocol Buffers" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> an
>> > email to protobuf+unsubscr...@googlegroups.com.
>> > To post to this group, send email to protobuf@googlegroups.com.
>> > Visit this group at https://groups.google.com/group/protobuf.
>> > For more options, visit https://groups.google.com/d/optout.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Protocol Buffers" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to protobuf+unsubscr...@googlegroups.com.
>> To post to this group, send email to protobuf@googlegroups.com.
>> Visit this group at https://groups.google.com/group/protobuf.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



-- 
Jeremy Ong
PlexChat CTO
301.648.8260



[protobuf] Forced serialization/deserialization of unknown fields in proto3 messages

2016-03-15 Thread Jeremy Ong
Hi google pb,

I was wondering if an interface exists for specifying that I do not want 
the proto3 serialization or deserialization to discard unknown fields. My 
understanding was that this change was made from proto2 to proto3, and is a 
pretty severe restriction if there are no ways around it. The motivating 
example in my case is to potentially decorate a message with fields in the 
extension ranges that are not part of the message body. The meaning is 
purely semantic and I do not want the data therein to be contained in the 
protobuf format itself. If unknown fields are not an option, are there 
other options or suggestions to handle this?

Best,
Jeremy



Re: [protobuf] Re: Protobuf Buffers v3.0.0-alpha-1

2015-02-06 Thread Jeremy Swigart
I don't understand. If a message is a simple struct, then the generated wrapper 
code would populate it with the default as defined by the proto it was compiled 
with, wouldn't it? Are you suggesting that the implementation on different 
platforms would lack the wrapper objects generated by protobuf? As long as you 
have those, you have the default value. This rationale doesn't make sense. 



[protobuf] Add field option on imported message

2015-02-06 Thread Jeremy Swigart
Is it possible to add a field option to selected fields within an imported 
message?  I.e., without modifying the original proto file, can you add custom 
options to its fields? 

Thanks



Re: [protobuf] Re: Protobuf Buffers v3.0.0-alpha-1

2015-01-14 Thread Jeremy Swigart
That sounds like a poor design decision, and one easily re-added without 
breaking anything. If a field doesn't have an explicit default, you use 0 or 
whatever, thereby not breaking anyone not using them; but if an explicit 
default is provided, that is used instead. I am using that feature as well. 



[protobuf] Protobuf Buffers v3.0.0-alpha-1

2014-12-12 Thread Jeremy Swigart
Does the arena allocator also get used by messages allocated as children of the 
root message? 



[protobuf] Passing messages without a compiled proto

2014-12-10 Thread Jeremy Swigart
Is it possible, given a proto file, but without compiling it (protoc), to use 
the proto file directly to be able to load data? In other words, parsing the 
proto file at run time and generating a reflection interface or something such 
that a tool may read the messages with just the proto file, without having to 
generate the code with protoc? 



[protobuf] Re: Dynamically determine the type of a message

2013-02-25 Thread Jeremy Swigart
What about something like this

message UnknownMessage
{
    enum MessageType
    {
        MESSAGE_TYPE_A = 1;
        MESSAGE_TYPE_B = 2;
        MESSAGE_TYPE_C = 3;
    }

    required MessageType msgType = 1;
    required bytes msgPayload = 2;
}


On Wednesday, February 10, 2010 6:02:35 PM UTC-5, fokenrute wrote:
>
> Hi, I'm developing a C++ application and I use Protocol Buffers for
> network communications.
> Somewhere in my app, I receive messages which can be of different
> types, and I'm searching for a means to dynamically determine the type
> of these messages (which are stored in a buffer). I read something
> about the reflection interface, but I don't know how to use it to do
> what I want.
> Thanks in advance for your replies.
>
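The envelope pattern above can be sketched in plain Python; the parser registry and tuple results are hypothetical stand-ins for protoc-generated message classes and their ParseFromString calls, not real protobuf API.

```python
# Sketch of dispatching on the msgType tag of the UnknownMessage envelope.
# The lambdas below are hypothetical stand-ins for generated parsers.
MESSAGE_TYPE_A, MESSAGE_TYPE_B, MESSAGE_TYPE_C = 1, 2, 3

PARSERS = {
    MESSAGE_TYPE_A: lambda payload: ("A", payload.decode()),
    MESSAGE_TYPE_B: lambda payload: ("B", payload.decode()),
    MESSAGE_TYPE_C: lambda payload: ("C", payload.decode()),
}

def dispatch(msg_type, msg_payload):
    """Route the opaque msgPayload bytes to the parser named by msgType."""
    try:
        parse = PARSERS[msg_type]
    except KeyError:
        raise ValueError("unknown message type %d" % msg_type)
    return parse(msg_payload)
```

With real generated code, each registry entry would construct the matching message type and parse the payload bytes into it instead of decoding text.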





Re: [protobuf] Static allocation

2012-07-20 Thread Jeremy Swigart
I know the answer is that it doesn't support this, but here's what I'm 
wanting to set up to give a clearer view.

Suppose you have a set of nested messages that represent the application 
state of an entire application, in this case a game and its entities and 
various other stuff. Here's a trimmed down example.

message Game {
  required string name = 1;
  // other info
  message Entity {
    required int32 uid = 1;
    // other stuff, position, orientation, etc
  }
  repeated Entity entities = 2;
}

Basically what I want to do here, to avoid having to write a bunch of 
external code to track last transmitted data set, etc, is to have a 
persistent instance of this Game message. At first it will be empty, and 
during the update each frame, or possibly more infrequently, the game will 
iterate its own internal lists and compare the newest data that's in the 
real game objects with the data cached in these messages. The idea here is 
that these messages represent the latest state of the world that has been 
sent to the network client. When the real data differs from the cached data 
in the message, I update that data and serialize that to a network packet 
to send, and I'm left with the last sent network state in the message 
hierarchy. If the state is small enough, I may even create an empty Game 
message on the stack during this process that represents the network 
packet, and as I'm looping through checking dirty data, build an updated 
game state message with only the changed data since the last network 
transmission.
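
The compare-and-update loop described above can be sketched with plain dicts standing in for the cached Entity messages; the field names are invented for illustration.

```python
# Sketch: compare live entity state against the cached last-sent state,
# update the cache in place, and return only the fields that changed.
def diff_and_update(cached, live):
    changed = {}
    for field, value in live.items():
        if cached.get(field) != value:
            cached[field] = value   # cache now reflects last-sent state
            changed[field] = value
    return changed
```

The dict of changed fields is what would be serialized into the network packet; an unchanged entity produces an empty diff, so nothing is sent for it.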

With all this in mind, there is the issue of frequent iteration of a large 
nested message structure, and the cache misses that come with that. Not so 
bad on x86 architecture, but pretty bad on current gaming consoles. The 
other issue is with frequently hammering the dynamic memory allocation. 
This part can be alleviated a lot with how it's used, like not creating 
temporary messages on the stack to fill in. To avoid this, I can just 
update the cached messages in place, and then send those messages 
individually, instead of building up a full new Game message that contains 
all the diffs of the entire hierarchy. I would kinda prefer to build up the 
full snapshots of the changed state, as that would greatly simplify a 
number of aspects of the application. I can maintain persistent Game 
message allocations simply for building the diff state, but I was hoping 
there was an option to utilize the stack more for temporary messages. One 
goal for this type of application (a remote debugger), especially on certain 
platforms like the consoles, is for a minimal additional performance and 
memory footprint, because they are often already running on the upper end 
of their capabilities.

For these reasons it would be pretty useful if there was an option for the 
compiler to generate statically nested variables, and max length string 
buffer according to proto markup. I'll press on and depending on whether or 
not it turns out to badly affect performance, maybe do some compiler 
modifications myself.

Thanks for the help.


On Thursday, July 19, 2012 8:22:29 AM UTC-5, Evan Jones wrote:
>
> On Jul 18, 2012, at 16:14 , Jeremy wrote: 
> > I understand, but if one wants to keep a large persistent message 
> allocated and walk over it frequently, there is a price to pay on cache 
> misses that can be significant. 
>
> I guess you are wishing that the memory layout was completely contiguous? 
> Eg. if you have three string fields, that their memory would be laid out 
> one field after another? Chances are good that with most dynamic memory 
> allocators, if you allocate this specific sized message at one time, the 
> fields will *likely* be contiguous or close to it, but obviously there are 
> no guarantees. I would personally be surprised if these cache misses would 
> be an important performance difference, but as normal there is only one way 
> to tell: measure it. 
>
> If you want something like this in protobuf though, you would need to 
> change a *lot* of the internals. This would not be a simple change. I 
> suggest trying to re-use a message, and seeing if the performance is 
> acceptable or not. If not, you'll need to find some other serialization 
> solution. Good luck, 
>
> Evan 
>
> -- 
> http://evanjones.ca/ 
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/protobuf/-/3FyoW8V5NooJ.
To post to this group, send email to protobuf@googlegroups.com.
To unsubscribe from this group, send email to 
protobuf+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/protobuf?hl=en.



Re: [protobuf] Static allocation

2012-07-18 Thread Jeremy
I understand, but if one wants to keep a large persistent message allocated
and walk over it frequently, there is a price to pay on cache misses that
can be significant.

In my situation I am maintaining a large persistent instance of a large
message type that I used as a cached data set to compare runtime data with,
and if it has changed, I want to re-send the parent message with only the
changed bits across the network. This frequent operation can benefit from
locality of reference if there was an option to generate the code
statically, so that every lookup into the message doesn't have to involve a
cache miss.


On Tue, Jul 17, 2012 at 1:30 PM, Evan Jones  wrote:

> On Jul 17, 2012, at 2:33 , Jeremy Swigart wrote:
> > Is there a way to tell the proto compiler to generate message
> definitions for which the message fields are statically defined rather than
> each individual field allocated with dynamic memory? Obviously the repeated
> fields couldn't be fully statically allocated (unless you could provide the
> compiler with a max size), but it would be preferable to have the option to
> create messages with minimal dynamic memory impact. Is this possible in the
> current library?
>
> I'll assume you are talking C++. In this case, if you re-use a single
> message, it will re-use the dynamically allocated memory. This means that
> after the "maximal" message(s) have been parsed, it will no longer allocate
> memory. This is approximately equivalent to what you want. See Optimization
> Tips in:
>
> https://developers.google.com/protocol-buffers/docs/cpptutorial
>
> Hope that helps,
>
> Evan
>
> --
> http://evanjones.ca/
>
>




[protobuf] Static allocation

2012-07-17 Thread Jeremy Swigart
Is there a way to tell the proto compiler to generate message definitions 
for which the message fields are statically defined rather than each 
individual field allocated with dynamic memory? Obviously the repeated 
fields couldn't be fully statically allocated (unless you could provide the 
compiler with a max size), but it would be preferable to have the option to 
create messages with minimal dynamic memory impact. Is this possible in the 
current library?

-- 
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/protobuf/-/Hgcdsv8WS6gJ.
To post to this group, send email to protobuf@googlegroups.com.
To unsubscribe from this group, send email to 
protobuf+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/protobuf?hl=en.



Re: [protobuf] incompatible type changes philosophy

2012-05-10 Thread Jeremy Stribling



On 05/10/2012 07:52 AM, Evan Jones wrote:

On May 9, 2012, at 15:26 , Jeremy Stribling wrote:

* There are two nodes, 1 and 2, running version A of the software.
* They exchange messages containing protobuf P, which contains a string field F.
* We write a new version B of the software, which changes field F to an integer 
as an optimization.
* We upgrade node 1, but not node 2.
* If node 1 sends a protobuf P to node 2, I want node 2 to be able to access 
field F as a string, even though the wire format sent by node 1 was an integer.


I think you can achieve your goals by building a layer on top of the existing protocol 
buffer parsing, possibly in combination with some custom options, a protoc plugin, and 
maybe a small tweak to the existing C++ code generator. You do the breaking change by 
effectively "renaming" the field, then using a protoc plugin to make it 
invisible to the application. To make this concrete, your Version A looks like:

message P {
optional string F = 1;
}


Then Version B looks like the following:

message P {
optional string old_F = 1 [(custom_upgrade_option) = "some_upgrade_code"];
optional int32 F = 2;
}


With this structure, Version B can always parse a Version A message. Senders will always 
ensure there is only one version in the message, so the only thing you are 
"losing" here is a field number, which isn't a huge deal. However, you now 
want to automatically convert old_F to F. This can be done without changing the guts of 
the parser by writing a protoc plugin that generates a member function based on the 
custom option:

void UpgradeToLatest() {
  if (has_old_F()) {
    set_F(some_upgrade_code(get_old_F()));
    clear_old_F();
  }
}


You then need to make sure that Version B of the software calls this everywhere it is 
needed. Maybe this argues that what is needed is a "post-processing" insertion 
point in ::MergePartialFromCodedStream? Then your protoc plugin could insert this call 
after a protocol buffer message is successfully parsed, so the application would only 
ever have to deal with the integer version.


Yep, I think something like that could work.  Thanks, I'll have to 
explore how best to add a post-processing insertion point there, if we 
decide to go that route.





In the other direction, I don't understand how the downgrading can possibly be 
done at the receiver, since it doesn't know how to do the downgrade (unless you 
are thinking about mobile code?). So in your example, Node 1 must create a 
Version A protocol buffer message when sending to Node 2. This means you need 
*some* sort of handshaking between Node 1 and Node 2, to indicate supported 
versions.

This is the reason I proposed adding some other member function that takes a 
"target_version", so the sender knows what to emit. If sending the same message 
to multiple recipients, you'll need to send the lowest version in the group. Based on the 
above, your plugin could emit:

void DowngradeToVersion(int target_version) {
  if (target_version < 0xB && has_F()) {
    set_old_F(some_downgrade_code(get_F()));
    clear_F();
  }
}


There are many other ways you could do this, but it seems to me that this 
proposal is a way to do it without complicating the base protocol buffers 
library with application-specific details.


Downgrading at the sender is not an option, because the "sender" might 
be writing something to persistent storage that can be read by any 
version of the program -- there might be no direct connection over which 
to relay versions.  It is possible to do the downgrading at the receiver 
by having two separate processes, likely connected over a local socket 
-- one that holds the main logic of your program, and one which is 
responsible only for translation.  Then, as part of your upgrade, you 
can first upgrade the translation program separately on all nodes, so 
they know how to downgrade from newer versions of the data.  This 
upgrade would be easy, and completely non-disruptive to the main logic 
process.  After all translation programs in the system have been 
upgraded, you can start the (possibly long) process of upgrading the 
other processes, one by one, without worrying much about the effect they 
have on the non-upgraded nodes.  As long as there's a stable interface 
between the two processes that can withstand restarts at either end, 
this should be possible.  This is what's described in Sameer's thesis.


So the challenge I'm pondering is how to plug in calls to such a program 
from somewhere in the protobuf processing path, for only the case where 
the incoming message's version is not natively supported by the 
program.  Perhaps, as you suggest, a post-processing insertion point in 
MergePartialFromCodedStream is the right way to go.  I'll report back if 
I make any progress on this.
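
The post-parse upgrade hook discussed in this thread can be modeled without protoc; dicts stand in for generated messages, and some_upgrade_code is a hypothetical string-to-int converter.

```python
# Sketch of UpgradeToLatest(): if the deprecated field is set, convert it
# into the new field and clear it, so application code only sees "F".
def some_upgrade_code(old_value):
    # Hypothetical converter for the string -> int field migration.
    return int(old_value)

def upgrade_to_latest(msg):
    if "old_F" in msg:  # has_old_F()
        # set_F(...) followed by clear_old_F()
        msg["F"] = some_upgrade_code(msg.pop("old_F"))
    return msg
```

Called right after a successful parse (the proposed post-processing insertion point), this leaves the application dealing only with the integer form.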

Re: [protobuf] incompatible type changes philosophy

2012-05-09 Thread Jeremy Stribling

On 05/09/2012 04:41 AM, Evan Jones wrote:

On May 8, 2012, at 21:26 , Jeremy Stribling wrote:

Thanks for the response.  As you say, this solution is painful because you 
can't enable the optimization until the old version of the program is 
completely deprecated.  This is somewhat simple in the case that you yourself 
are deploying the software, but when you're shipping software to customers (as 
we are) and have to support many old versions, it will take a very long time 
(possibly years) before you can enable the optimization.  Also, it breaks the 
downgrade path.  Once you enable the optimization, you can never downgrade back 
to a version that did not know about the new field.

I think I now understand your problem. You want to add some additional stuff to your .proto file to 
indicate the incompatible change, then have the application code not need to know about it? Eg. you 
want to write the application code that only accesses "new_my_data" and never needs to 
check for "deprecated_my_data", but in fact the underlying protocol buffer supports both 
fields, or something like that.


Hey Evan, thanks for the response.  That is one way to look at it.  
Ideally, the application code would only access my_data(), and it would 
magically appear as the new type in the new version of the app and the 
old type in the old version of the app.  But renaming the field for the 
new version is fine too.  The important points are twofold: 1) the data 
would only appear once on the wire and in storage, and translated if 
necessary by the receiver to the expected format, and 2) that this 
translation could work on the downgrade path as well, so that old 
applications could be able to interpret data written by new 
applications, even if the format of the fields has changed.  Sameer 
Ajmani's ECOOP paper and thesis work discusses these types of scenarios 
(http://pmg.csail.mit.edu/~ajmani/papers/ecoop06-upgrades.pdf).



It seems to me like this is starts to end up in the territory of "too high level for the 
protocol buffer library itself" since I can't imagine this working without handshaking like 
Oliver talked about (e.g. "I understand everything up to version X"). My personal 
experience has been more like what Daniel describes: you keep both versions of the field, and your 
code has if statements to check for both. I believe this can be made to work, even in your 
scenario, but it does require ugly code in your application to handle it. My impression is that you 
are trying to avoid that.


I'm trying to avoid keeping both versions of the data in the wire format, 
since in this scenario the whole reason for the change was 
optimization.  I don't care if the new version of the protobuf has two 
separate fields; there just needs to be a way for the old version to 
still get at its old data.  Involving the application in some way is 
totally reasonable and expected; I am just hoping to find a way to add a 
translator into the deserialization code, so that it can be upgraded 
independently on old instances of the program, to be able to interpret 
the new version of the protobuf while still running the old version of 
the application code.  Here's a specific example:


* There are two nodes, 1 and 2, running version A of the software.
* They exchange messages containing protobuf P, which contains a string 
field F.
* We write a new version B of the software, which changes field F to an 
integer as an optimization.

* We upgrade node 1, but not node 2.
* If node 1 sends a protobuf P to node 2, I want node 2 to be able to 
access field F as a string, even though the wire format sent by node 1 
was an integer.





Random brainstorming that may not be helpful in any way:

I'm curious about how you end up choosing to solve this, but I think you are going to need to use some 
combination of custom field options (to specify the change in a way that protoc can parse?), and then hacks 
in the C++ code generator  to call your custom upgrade / downgrade code. I think this can work somewhat 
seamlessly in the "reading older messages" case (eg. you just add code that says "if we see 
the old field, upgrade it to the new field"). However, this can't work in the "writing a newer 
message for an older receiver" case without making the Serialize* code aware of the version it should be 
*writing*. I think this is going to be pretty application specific?


I think doing it on the deserialize is better, because then we can put 
the burden of translation on the receiver, and the sender can merrily 
send the same serialized message to multiple receivers (tagged with its 
own version) without having to keep track of the version capabilities of 
each receiver.  This is especially important, as Oliver pointed out, 
when the data is not transferred over a live connection but through the 
persistent state.  It will definitely be app-specific.

Re: [protobuf] incompatible type changes philosophy

2012-05-08 Thread Jeremy Stribling



On 05/08/2012 06:04 PM, Daniel Wright wrote:
On Tue, May 8, 2012 at 4:42 PM, Jeremy Stribling wrote:


I'm working on a project to upgrade- and downgrade-proof a distributed
system that uses protobufs to communicate data between instances
of a C
++ program.  I'm trying to cover all possible cases for data schema
changes between versions of my programs, and I was hoping to get some
insight from the community on what the best practice is for the
following tricky scenario.

To reduce serialization time and protobuf message size, the format of 
a field in a message is changed between incompatible types.  For
example, a string field gets changed to an int, or perhaps a field
gets changed from one message type to another.  Because this is being
done as an optimization, it makes no sense to keep both versions of
the data around, so I think whether we change the field ID is not
relevant -- we only ever want to have one version of the field in any
particular protobuf.


Even though you don't keep both versions of the data around, you 
should keep both fields around, and have the code be able to read from 
whichever is set during the transition.  You can rename the old one 
(say put "deprecated" in the name) so that people know that it's old, 
but don't actually remove it from the .proto file until no old 
instances of the proto remain.  To put it more concretely, say you have


  optional string my_data = 1;

Now you come up with a way to encode it as an int64 instead.  You'd 
change the .proto to:


  optional string deprecated_my_data = 1;
  optional int64 my_data = 2;

- At this point, you write the data to "deprecated_my_data" and not 
"my_data", but when you read, you check has_my_data() and 
has_deprecated_my_data() and read from whichever one is present.  It 
might help to add wrapper functions for reading and writing during the 
transition if the field is accessed in many places.


- once all instances of the program have been re-compiled so they all 
know about the new int64 field, you can start writing to my_data and 
not deprecated_my_data.


- once all of the instances of the program have been recompiled again, 
you can remove the code that reads deprecated_my_data, and delete the 
field.


This is kind of painful, but it's much cleaner than adding a version 
number.  It also only ever writes the data to one field, so there's no 
bloat during the transition.




Thanks for the response.  As you say, this solution is painful because 
you can't enable the optimization until the old version of the program 
is completely deprecated.  This is somewhat simple in the case that you 
yourself are deploying the software, but when you're shipping software 
to customers (as we are) and have to support many old versions, it will 
take a very long time (possibly years) before you can enable the 
optimization.  Also, it breaks the downgrade path.  Once you enable the 
optimization, you can never downgrade back to a version that did not 
know about the new field.
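
Daniel's transition scheme quoted above can be sketched on the read side with plain dicts standing in for the generated message; decode_int64 is a hypothetical converter from the old string encoding.

```python
# Sketch of a transition-time read wrapper: prefer the new int64 field,
# fall back to the deprecated string field if that's what the writer set.
def decode_int64(encoded_string):
    # Hypothetical decoder for the old string representation.
    return int(encoded_string)

def read_my_data(msg):
    if "my_data" in msg:                 # has_my_data()
        return msg["my_data"]
    if "deprecated_my_data" in msg:      # has_deprecated_my_data()
        return decode_int64(msg["deprecated_my_data"])
    return None
```

Writers set exactly one of the two fields, so the data never appears twice on the wire during the transition.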





[protobuf] incompatible type changes philosophy

2012-05-08 Thread Jeremy Stribling
I'm working on a project to upgrade- and downgrade-proof a distributed
system that uses protobufs to communicate data between instances of a C
++ program.  I'm trying to cover all possible cases for data schema
changes between versions of my programs, and I was hoping to get some
insight from the community on what the best practice is for the
following tricky scenario.

To reduce serialization time and protobuf message size, the format of
a field in a message is changed between incompatible types.  For
example, a string field gets changed to an int, or perhaps a field
gets changed from one message type to another.  Because this is being
done as an optimization, it makes no sense to keep both versions of
the data around, so I think whether we change the field ID is not
relevant -- we only ever want to have one version of the field in any
particular protobuf.

Of course, this makes communicating between versions of the program
very difficult, and I think it requires there to be some kind of
translator code to transform the field from one format to the other.
Ideally, this transformation would be invisible to the rest of the
program.  One ugly thought I had was to have a version field in every
message, and then in the autogenerated C++ serialize code, maybe in
MergePartialCodedFromStream, I could insert a call to an external
translator program that would transform the input bytes into something
that could be decoded by the version of the message expected by this
instance of the program.  I don't think there's an insertion point
defined for this part of the code, so I'd have to write my own script
to do it.  The external translator program could be upgraded
independently of the main program, so older versions would know how to
interpret the fields of the newer versions.

I'm wondering if anyone has experience with a scenario like this, and
if there's a more elegant way to solve it.  If not, what do folks
think of this business of an external translator program?  Foolish
nonsense?  Worthy of a proper insertion point?

Thanks,

Jeremy




[protobuf] Re: 'Delta' Protocol Buffers

2009-11-13 Thread Jeremy Leader

I think he's saying that he wants support for sparse repeated fields.

He wants to diff 2 messages (of the same type) producing an output protobuf 
containing only the fields that differ between the 2 inputs.  For a repeated 
field, if the first 100 instances of the field are the same in both input 
messages, and only the 101st instance of the field differs, he wants the output 
message to contain a 101st instance of the repeated field, without having to 
contain the preceding 100 instances.

I suppose you could define some meta-format that specified that a repeated 
field 
in the inputs would result in a repeated field in the output containing a 
nested 
message with an index field and a difference field.
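
The meta-format sketched above (sparse repeated fields as index/value pairs) can be illustrated with plain lists standing in for repeated fields; deletions and truncation are ignored to keep the sketch short.

```python
# Diff two repeated fields into sparse (index, value) entries so the
# unchanged leading instances are not re-sent.
def sparse_diff(old, new):
    delta = []
    for i, value in enumerate(new):
        if i >= len(old) or old[i] != value:
            delta.append((i, value))
    return delta

def apply_sparse_diff(old, delta):
    result = list(old)
    for i, value in delta:
        if i < len(result):
            result[i] = value
        else:
            result.append(value)   # entry appended past the old length
    return result
```

So if only the 101st instance differs, the delta carries a single (100, value) entry instead of all 101 instances.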

-- 
Jeremy Leader
jlea...@oversee.net

Kenton Varda wrote:
> What do you mean by "represented with their index"?
> 
> I don't understand the problem.  Why do repeated fields pose a challenge 
> for diffing?
> 
> On Fri, Nov 13, 2009 at 1:11 AM, Paddy W wrote:
> 
> 
> I am working on an application that compares two protocol buffers of
> the same type for differences and generates a new 'delta' protocol
> buffer with only the differences. This works fine for top level fields
> but I can not see a way of dealing with repeating fields or nested
> messages in a way that avoids including all of the repeating field
> entries/nested message fields in the delta. Is it the case that
> repeated fields are positional, in that they must all be present, as
> opposed to being represented with their index? Is there any way of
> approaching this issue or will the idea of 'delta' messages only work
> with 'flat' buffer definitions. Any advice would be appreciated.





Re: Why I get inconsistent values with serialization ?

2009-10-15 Thread Jeremy Leader

It might help in debugging this to notice that 9999 == 0x270F, and 30064781071 
== 0x70000270F.  I suspect the corruption is happening to the data in the 
object, and not to the data in serialized form, because while those numbers 
only differ in a few high-order bits, their varint encodings differ in many 
places.

-- 
Jeremy Leader
jlea...@oversee.net

The_Glu wrote:
> Hello,
> 
> I use the following prototype :
> 
> message Find {
>required uint64 tag = 1;
>required Common.Hash peerID = 2;
>required string pattern = 3;
> }
> 
> Now:
> 
>Protos::Core::Find findProto;
>findProto.set_tag(9999);
> 
> (findProto.tag() == 9999) is true.
> 
> findProto.DebugString() return
> 
>   tag: 9999
>   peerID {
> hash: "323655354"
>   }
>   pattern: "coucou"
>   )
> 
> Great isn't it ?
> 
> But now, if I serialize and unserialize the prototype
> 
>   std::string output;
>   findProto.SerializeToString(&output);
> 
>   Protos::Core::Find findMessage;
>   findMessage.ParseFromString(output);
> 
> What do I get ?
> 
>   tag: 30064781071
>   peerID {
> hash: "323655354"
>   }
>   pattern: "coucou"
>   )
> 
> Why my tag changed oO ? It's not a random value, and it's works for
> small numbers.
> 
> I tried to change the type of tag to uint32, int32, etc. always the
> same problem.
> 
> Thanks for your help,
> 




Re: Decoders for Network Analyzer (Wireshark/Ethereal)

2009-06-19 Thread Jeremy Leader

http://code.google.com/p/protobuf-wireshark/

I haven't tried it myself yet, but I believe it can generically dump any 
protobuf message (with no field names, and some ambiguity about types 
that share wire encodings), or if you give it a .proto file it can dump 
field names and accurate data types.
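A schema-less dump of this kind boils down to walking the key varints; here is a rough sketch that recovers field numbers and wire types only (handling the two most common wire types; all names are invented for illustration):

```python
def walk_fields(buf):
    """Decode a protobuf buffer without a schema: each key varint is
    (field_number << 3) | wire_type; names and exact types are unrecoverable."""
    fields, pos = [], 0
    while pos < len(buf):
        # Decode the key varint.
        key, shift = 0, 0
        while True:
            b = buf[pos]; pos += 1
            key |= (b & 0x7F) << shift
            shift += 7
            if not b & 0x80:
                break
        field_number, wire_type = key >> 3, key & 7
        if wire_type == 0:    # varint (could be int32, sint32, enum, bool...)
            value, shift = 0, 0
            while True:
                b = buf[pos]; pos += 1
                value |= (b & 0x7F) << shift
                shift += 7
                if not b & 0x80:
                    break
        elif wire_type == 2:  # length-delimited (string, bytes, sub-message)
            length = buf[pos]; pos += 1  # sketch assumes a one-byte length
            value = buf[pos:pos + length]; pos += length
        else:
            raise NotImplementedError("sketch handles wire types 0 and 2 only")
        fields.append((field_number, wire_type, value))
    return fields
```

Note the ambiguity the post mentions: a wire-type-0 value of 150 could equally be an int32, an enum, or a bool-adjacent field; only a .proto file disambiguates.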

-- 
Jeremy Leader
jlea...@oversee.net

Jon M wrote:
> Hello,
> 
> I am evaluating using Protocol Buffers for objects that are shared
> across a distributed system. One key thing I would like is a way to
> look at these objects "on the wire" by sniffing the IP traffic in my
> network. Traditionally, we have written custom decoders for sniffers
> such as Wireshark/Ethereal to decode packets of our custom protcols
> into something human-friendly. Does anyone know if there is some sort
> of decoders for the Protocol Buffer encoding scheme? What have others
> done to view network traffic encoded using Protocol Buffers?
> 
> Thanks,
> Jon





Re: Problem compiling after checking out from svn

2009-04-07 Thread Jeremy Leader

I'm not an autotools expert, but I suspect you're using an inappropriate 
version (too old? or too new?) of autoconf and friends.  On my system, I 
see:

% rpm -q --whatprovides /usr/bin/autoreconf /usr/bin/automake
autoconf-2.59-12
automake-1.9.6-2.1

-- 
Jeremy Leader
jlea...@oversee.net


Wink Saville wrote:
> I checked out the sources via svn and ran ./autogen.sh but it failed:
> 
> w...@savu:/usr/local/google/users/wink/svn-clients/protobuf/protobuf-read-only
> $ ./autogen.sh
> + autoreconf -f -i -Wall,no-obsolete
> configure.ac: 14: `automake requires 
> `AM_CONFIG_HEADER', not `AC_CONFIG_HEADER'
> automake: configure.ac: installing `./install-sh'
> automake: configure.ac: installing `./mkinstalldirs'
> automake: configure.ac: installing `./missing'
> configure.ac: 14: required file `./[config.h].in' 
> not found
> automake: src/Makefile.am: not supported: source file 
> `google/protobuf/stubs/common.cc' is in subdirectory
> automake: src/Makefile.am: not supported: source file 
> `google/protobuf/stubs/hash.cc' is in subdirectory
> automake: src/Makefile.am: not supported: source file 
> `google/protobuf/stubs/hash.h' is in subdirectory
> .
> automake: src/Makefile.am: not supported: source file 
> `google/protobuf/unittest_custom_options.pb.h' is in subdirectory
> automake: src/Makefile.am: not supported: source file 
> `google/protobuf/compiler/cpp/cpp_test_bad_identifiers.pb.cc' is in subdirectory
> automake: src/Makefile.am: not supported: source file 
> `google/protobuf/compiler/cpp/cpp_test_bad_identifiers.pb.h' is in 
> subdirectory
> src/Makefile.am:15: invalid variable `nobase_dist_proto_DATA'
> src/Makefile.am:26: invalid variable `nobase_include_HEADERS'
> src/Makefile.am:259: invalid unused variable name: 
> `nodist_protobuf_test_SOURCES'
> src/Makefile.am:10: invalid unused variable name: `AM_LDFLAGS'
> autoreconf: automake failed with exit status: 1
> 
> Suggestions?
> 
> -- Wink





Re: same message, different serialization

2009-03-16 Thread Jeremy Leader

I believe you can concatenate two (serialized) messages together, 
yielding a message with all the fields from both messages.

So if all your fields are optional, you could create one instance of 
your message with just the fields common to both clients, and another 
instance with just the fields you want to send to client 2.  Serialize 
both instances, send instance 1 to client 1, and send the concatenation 
of instance 1 and instance 2 to client 2.

That way, there's no double serialization, and each client only gets the 
fields you want to send to them.
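The trick can be illustrated with hand-encoded wire bytes (field numbers and values below are made up for illustration; a real program would serialize generated message classes and concatenate the resulting strings):

```python
def varint(n):
    """Minimal base-128 varint encoder, enough to build wire bytes by hand."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | 0x80 if n else b)
        if not n:
            return bytes(out)

def field(num, wire, payload):
    """Prefix a payload with its key varint: (field_number << 3) | wire_type."""
    return varint((num << 3) | wire) + payload

common = field(1, 0, varint(42))            # hypothetical id field, both clients
extra = field(2, 2, varint(5) + b"large")   # hypothetical large field, client 2

to_client_1 = common
to_client_2 = common + extra  # plain byte concatenation, no re-serialization
```

This works because parsing a concatenation of two serialized messages is defined to behave like merging them: repeated fields accumulate, and for singular fields the later value wins.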

-- 
Jeremy Leader
jlea...@oversee.net


edan wrote:
> I have kind of a funny requirement that I was hoping protobuf might 
> support in an elegant fashion:
> I have a message that I want to sent to 2 different clients.  They want 
> mostly the same stuff, but there are a few fields that can be fairly 
> large, that I only want to send to one of the clients, and spare the 
> work of passing on the wire and deserializing for the other client that 
> doesn't want them.
> Is there a way to define the fields in the .proto or change 
> serialization such that I don't serialize those fields for the client 
> that isn't interested in them?  I know I can clear the fields myself 
> after serializing for the first client, and then serialize again, but 
> this has disadvantages of double-serialization (which I could live with) 
> but also requires going through the message (some of the fields are on 
> repeated sub-messages) using iterators and clearing fields, so it's a 
> little messy.  I was hoping there is a prettier way?  Any help is 
> appreciated.
> --edan





Re: speed - python implementation

2008-10-28 Thread Jeremy Leader

Jeremy Leader wrote:
> Might it be possible to use the XS wrappers generated by protobuf-perlxs 
> from Python?

Aaah, not enough caffeine yet.  I somehow confused XS (Perl-specific) 
with SWIG (supports Perl, Python, and many others).  Never mind!

-- 
Jeremy Leader
[EMAIL PROTECTED]




Re: speed - python implementation

2008-10-28 Thread Jeremy Leader

Might it be possible to use the XS wrappers generated by protobuf-perlxs 
from Python?

-- 
Jeremy Leader
[EMAIL PROTECTED]

andres wrote:
> Hi,
> 
> I would like to use protocol buffers in my python code but currently
> the serialization and parsing methods are too slow compared to
> cPickle. I've read several posts stating that this is because the
> python implementation has not been optimized for speed yet. Are there
> plans to improve the performance of proto buffers in python? Does
> anybody know of a C++ extension/wrapper module which lets you access C+
> + compiled protocol buffers directly from python code?
> 
> Thanks,
> Andres





Re: Data structures using protocol buffers

2008-10-22 Thread Jeremy Leader

Protocol Buffers are a serialization format, rather than general-purpose 
data structures.  To do computations, you'd probably want to build some 
auxiliary data structures, which you populate when you deserialize the 
protobuf data.  You could have node objects that resemble your original 
.proto file, where nodes have references to their neighbors, and you'd 
probably need a map from node id to node object reference, which you'd 
use during deserialization.
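Such auxiliary structures might look like the sketch below; the `Node` class and the tuple layout are hypothetical stand-ins for whatever the deserialized protobuf data actually provides:

```python
class Node:
    """In-memory node with direct object references, unlike the wire form."""
    def __init__(self, node_id, weight):
        self.id = node_id
        self.weight = weight
        self.neighbors = []  # filled in once all nodes exist

def build_graph(records):
    """records: iterable of (id, weight, [neighbor_ids]) tuples,
    e.g. pulled out of a deserialized UndirectedGraph message.
    Returns a map from node id to Node, with neighbor ids resolved
    to actual Node references."""
    by_id = {node_id: Node(node_id, weight) for node_id, weight, _ in records}
    for node_id, _, neighbor_ids in records:
        by_id[node_id].neighbors = [by_id[n] for n in neighbor_ids]
    return by_id

graph = build_graph([("a", 1.0, ["b"]), ("b", 2.0, ["a"])])
```

With the id-to-node map built once at deserialization time, computations like the neighbor-averaging loop earlier in the thread become straightforward pointer chasing.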

-- 
Jeremy Leader
[EMAIL PROTECTED]

GDR wrote:
> my bad. The code snippet should be as follows:
> 
> for(UndirectedGraphNode node : UndirectedGraph.getNodesList() ) {
>double sum = 0;
>int count = 0;
>for(UndirectedGraphNodeReference neighbor :
> node.getNeighborsList() ) {
>  sum += 
>  count++;
>}
>node.setWeight(sum/count);
> 
> }
> 
> - graph.proto -
> 
> package graph;
> 
> option java_package = "graph";
> option java_outer_classname = "UndirectedGraphType";
> option optimize_for = CODE_SIZE;
> 
> message UndirectedGraphNodeReference {
>required string id = 1;
> }
> 
> message UndirectedGraphNode {
>required string id = 1;
>required double weight = 2;
>repeated UndirectedGraphNodeReference neighbors = 3;
> }
> 
> message UndirectedGraph {
>repeated UndirectedGraphNode nodes = 1;
> }
> 
> On Oct 22, 2:36 pm, GDR <[EMAIL PROTECTED]> wrote:
>> That does solve the duplicate information problem but it makes updates
>> to node attributes (like weight) difficult. Let's say, I want to
>> assign the weight of each node to the average of its neighbors.
>>
>> for(UndirectedGraphNode node : UndirectedGraph.getNodesList() ) {
>>double sum = 0;
>>int count = 0;
>>for(UndirectedGraphNodeReference neighbor :
>> node.getNeighborsList() ) {
>>  sum += 
>>  count++;
>>}
>>node.setWeight(sum/count);
>>
>> }
>>
>> - graph.proto -
>>
>> package graph;
>>
>> option java_package = "graph";
>> option java_outer_classname = "UndirectedGraphType";
>> option optimize_for = CODE_SIZE;
>>
>> message UndirectedGraphNodeReference {
>>required string id = 1;
>>required double weight = 2;
>>
>> }
>>
>> message UndirectedGraphNode {
>>required string id = 1;
>>repeated UndirectedGraphNodeReference neighbors = 2;
>>
>> }
>>
>> message UndirectedGraph {
>>repeated UndirectedGraphNode nodes = 1;
>>
>> }
>>
>> On Oct 22, 2:14 pm, Jeremy Leader <[EMAIL PROTECTED]> wrote:
>>
>>> I was assuming all the properties of a node (weight, label, color,
>>> whatever) would be in UndirectedGraphNode; UndirectedGraphNodeReference
>>> would only have the id and nothing else.
>>> --
>>> Jeremy Leader
>>> [EMAIL PROTECTED]
>>> GDR wrote:
>>>> Thanks Jeremy. That worked!
>>>> But we now have information about the same node being replicated. For
>>>> instance, let's say we have a field 'weight' attached to each node as
>>>> shown below. This setup will replicate the weight information of a
>>>> node as many times as its degree. If the weight of a node changes, I
>>>> will have update all it's occurrences in the PB. Any way I can avoid
>>>> it?
>>>> package graph;
>>>> option java_package = "graph";
>>>> option java_outer_classname = "UndirectedGraphType";
>>>> option optimize_for = CODE_SIZE;
>>>> message UndirectedGraphNodeReference {
>>>>required string id = 1;
>>>>required double weight = 2;
>>>> }
>>>> message UndirectedGraphNode {
>>>>required string id = 1;
>>>>repeated UndirectedGraphNodeReference neighbors = 2;
>>>> }
>>>> message UndirectedGraph {
>>>>repeated UndirectedGraphNode nodes = 1;
>>>> }
>>>> On Oct 21, 6:37 pm, Jeremy Leader <[EMAIL PROTECTED]> wrote:
>>>>> Keep in mind that protobufs describe serialized data, and there's no
>>>>> concept of an object reference like Java uses.  In your example, if A
>>>>> and B are neighbors, then in your proto, the data representing A
>>>>> contains the data representing B, and the data representing B contains
>>>>> the data representing A!
>>>>> One way around this is to implement your own form of references, perhaps using the node ids.

Re: Data structures using protocol buffers

2008-10-22 Thread Jeremy Leader

I was assuming all the properties of a node (weight, label, color, 
whatever) would be in UndirectedGraphNode; UndirectedGraphNodeReference 
would only have the id and nothing else.

-- 
Jeremy Leader
[EMAIL PROTECTED]

GDR wrote:
> Thanks Jeremy. That worked!
> But we now have information about the same node being replicated. For
> instance, let's say we have a field 'weight' attached to each node as
> shown below. This setup will replicate the weight information of a
> node as many times as its degree. If the weight of a node changes, I
> will have to update all its occurrences in the PB. Any way I can avoid
> it?
> 
> package graph;
> 
> option java_package = "graph";
> option java_outer_classname = "UndirectedGraphType";
> option optimize_for = CODE_SIZE;
> 
> message UndirectedGraphNodeReference {
>required string id = 1;
>required double weight = 2;
> }
> 
> message UndirectedGraphNode {
>required string id = 1;
>repeated UndirectedGraphNodeReference neighbors = 2;
> }
> 
> message UndirectedGraph {
>repeated UndirectedGraphNode nodes = 1;
> }
> 
> On Oct 21, 6:37 pm, Jeremy Leader <[EMAIL PROTECTED]> wrote:
>> Keep in mind that protobufs describe serialized data, and there's no
>> concept of an object reference like Java uses.  In your example, if A
>> and B are neighbors, then in your proto, the data representing A
>> contains the data representing B, and the data representing B contains
>> the data representing A!
>>
>> One way around this is to implement your own form of references, perhaps
>> using the node ids like this:
>>
>> package graph;
>>
>> option java_package = "graph";
>> option java_outer_classname = "UndirectedGraph";
>> option optimize_for = CODE_SIZE;
>>
>> message UndirectedGraphNodeReference {
>>required string id = 1;
>>
>> }
>>
>> message UndirectedGraphNode {
>>required string id = 1;
>>repeated UndirectedGraphNodeReference neighbors = 2;
>>
>> }
>>
>> message UndirectedGraph {
>>repeated UndirectedGraphNode nodes = 1;
>>
>> }
>>
>> --
>> Jeremy Leader
>> [EMAIL PROTECTED]
>>
>> GDR wrote:
>>> Hi,
>>> I'm wondering how would one go about implementing self-referential
>>> data structures?  As an exercise, I tried to implement a PB version of
>>> the adjacency list representation of a graph. I'm having a hard time
>>> getting it work. Any suggestions?
>>> Thanks!
>>> --- graph.proto ---
>>> package graph;
>>> option java_package = "graph";
>>> option java_outer_classname = "UndirectedGraph";
>>> option optimize_for = CODE_SIZE;
>>> message UndirectedGraphNode {
>>>   required string id = 0;
>>>   repeated UndirectedGraphNode neighbors;
>>> }
>>> message UndirectedGraph {
>>>   repeated UndirectedGraphNode nodes;
>>> }
> 




Re: Data structures using protocol buffers

2008-10-21 Thread Jeremy Leader

Keep in mind that protobufs describe serialized data, and there's no 
concept of an object reference like Java uses.  In your example, if A 
and B are neighbors, then in your proto, the data representing A 
contains the data representing B, and the data representing B contains 
the data representing A!

One way around this is to implement your own form of references, perhaps 
using the node ids like this:

package graph;

option java_package = "graph";
option java_outer_classname = "UndirectedGraph";
option optimize_for = CODE_SIZE;

message UndirectedGraphNodeReference {
   required string id = 1;
}

message UndirectedGraphNode {
   required string id = 1;
   repeated UndirectedGraphNodeReference neighbors = 2;
}

message UndirectedGraph {
   repeated UndirectedGraphNode nodes = 1;
}

-- 
Jeremy Leader
[EMAIL PROTECTED]

GDR wrote:
> Hi,
> 
> I'm wondering how one would go about implementing self-referential
> data structures?  As an exercise, I tried to implement a PB version of
> the adjacency list representation of a graph. I'm having a hard time
> getting it to work. Any suggestions?
> 
> Thanks!
> 
> --- graph.proto ---
> 
> package graph;
> 
> option java_package = "graph";
> option java_outer_classname = "UndirectedGraph";
> option optimize_for = CODE_SIZE;
> 
> message UndirectedGraphNode {
>   required string id = 0;
>   repeated UndirectedGraphNode neighbors;
> }
> 
> message UndirectedGraph {
>   repeated UndirectedGraphNode nodes;
> }
> 
> 




Re: 2.0.2 release is up

2008-10-13 Thread Jeremy Leader

edan wrote:
> In any case, using gcc 4.1.2, "make" and "make check" (any reason you
> didn't use the more standard "make test"?) succeeded for me, so I
> guess I will have to just wait to update to protobuf-2.0.2 until I can
> move myself to the newer gcc.

For what it's worth, "make check" is standard in projects built using 
Gnu automake (see 
http://www.gnu.org/software/automake/manual/html_node/Tests.html).

-- 
Jeremy Leader
[EMAIL PROTECTED]
