RE: Performance: Sending a message with ~150k items, approx 3.3mb, can I do better than 100ms?

2009-07-13 Thread Alex Black

Kenton: I made a mistake with these numbers. Please ignore them; I'll revisit
tomorrow.

Thanks.




Re: Performance: Sending a message with ~150k items, approx 3.3mb, can I do better than 100ms?

2009-07-13 Thread Alex Black

OK, I took I/O out of the picture by serializing each message into a
pre-allocated buffer, and this time I did a more thorough measurement.

Benchmark 1: Complete scenario
- average time 262ms (100 runs)

Benchmark 2: Same as #1, but no I/O
- average time 250ms (100 runs)

Benchmark 3: Same as #2, but with serialization commented out
- average time 251ms (100 runs)

Benchmark 4: Same as #3, but with message composition commented out too
(no protobuf calls)
- average time 185ms (100 runs)

So from this I conclude:
- My initial numbers were wrong
- My timings vary too much from run to run to get really accurate averages
- I/O takes about 10ms
- Serialization takes ~0ms
- Message composition and setting of fields takes ~66ms

My message composition happens in a loop; the body of the loop looks like this:

uuid_t relatedVertexId;

myProto::IdConfidence* neighborIdConfidence =
    pNodeWithNeighbors->add_neighbors();

// Set the vertex id
neighborIdConfidence->set_id((const void*) relatedVertexId, 16);
// Set the confidence
neighborIdConfidence->set_confidence(confidence);

currentBatchSize++;

if (currentBatchSize == BatchSize)
{
    // Flush out this batch
    //stream << getNeighborsResponse;
    getNeighborsResponse.Clear();
    currentBatchSize = 0;
}
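
A minimal harness for timing composition alone might look like the sketch
below (no I/O, no serialization; the generated header name and counts are
placeholders, not from my real code). Reusing one response object matters,
since Clear() keeps memory allocated for repeated fields around for reuse
on later passes:

#include <sys/time.h>
#include <cstdio>
#include "myproto.pb.h"  // hypothetical name for the generated header

// Wall-clock time in milliseconds.
static double NowMs() {
  timeval tv;
  gettimeofday(&tv, NULL);
  return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

int main() {
  myProto::GetNeighborsResponse response;  // reused across runs
  unsigned char id[16] = {0};              // stand-in for a uuid_t
  const int kNeighbors = 150000;           // item count from this thread
  const int kRuns = 100;

  double start = NowMs();
  for (int run = 0; run < kRuns; ++run) {
    myProto::NodeWithNeighbors* node = response.add_nodeswithneighbors();
    // nodeId is left unset: this times composition only, so the
    // required-field check never runs.
    for (int i = 0; i < kNeighbors; ++i) {
      myProto::IdConfidence* nc = node->add_neighbors();
      nc->set_id(id, 16);
      nc->set_confidence(0.5f);
    }
    response.Clear();  // keeps allocated sub-messages for reuse
  }
  printf("average compose time: %.2f ms\n", (NowMs() - start) / kRuns);
  return 0;
}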




Re: Performance: Sending a message with ~150k items, approx 3.3mb, can I do better than 100ms?

2009-07-13 Thread Kenton Varda
Oh, I didn't even know you were including composition in there.  My
benchmarks are only for serialization of already-composed messages.
But this still doesn't tell us how much time is spent on network I/O vs.
protobuf serialization.  My guess is that once you factor that out, your
performance is pretty close to the benchmarks.




Re: Performance: Sending a message with ~150k items, approx 3.3mb, can I do better than 100ms?

2009-07-13 Thread Alex Black

If I comment out the actual serialization and sending of the message
(so I am just composing messages, and clearing them each batch) then
the 100ms drops to about 50ms.




Re: Performance: Sending a message with ~150k items, approx 3.3mb, can I do better than 100ms?

2009-07-13 Thread Kenton Varda
Speed varies a lot depending on the precise content.  My benchmarks
generally show serialization performance somewhere between 100 MB/s and 1
GB/s, whereas you're seeing 33 MB/s, but my benchmarks do not include any
kind of I/O.  Maybe you could separate the serialization step from the I/O
(by serializing to one huge in-memory buffer) so that you can measure the
times separately?
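A sketch of that separation, using only standard generated-message calls
(the message type is from this thread; the helper name is made up):

#include <string>
#include "myproto.pb.h"  // hypothetical name for the generated header

// Serialize into a preallocated in-memory buffer so the cost of
// serialization can be timed with no I/O in the picture.
void SerializeToBuffer(const myProto::GetNeighborsResponse& message,
                       std::string* buffer) {
  buffer->resize(message.ByteSize());
  if (!buffer->empty()) {
    // Same encoding work as serializing to a stream, minus the writes.
    message.SerializeToArray(&(*buffer)[0], message.ByteSize());
  }
}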



Performance: Sending a message with ~150k items, approx 3.3mb, can I do better than 100ms?

2009-07-13 Thread Alex Black

I'm sending a message with ~150k repeated items in it; the total size is
about 3.3MB, and it's taking me about 100ms to serialize it and send it
out.

Can I expect to do any better than this? What could I look into to
improve this?
- I have "option optimize_for = SPEED;" set in my proto file
- I'm compiling with -O3
- I'm sending my message in batches of 1000
- I'm using C++, on Ubuntu, x64
- I'm testing all on one machine (i.e., client and server are on one
machine)

My message looks like:

message NodeWithNeighbors
{
    required Id nodeId = 1;
    repeated IdConfidence neighbors = 2;
}

message GetNeighborsResponse
{
    repeated NodeWithNeighbors nodesWithNeighbors = 1;
}

message IdConfidence
{
    required bytes id = 1;
    required float confidence = 2;
}

Where "bytes id" is used to send 16byte IDs (uuids).

I'm writing each message (batch) out like this:

CodedOutputStream codedOutputStream(&m_ProtoBufStream);

// Write out the size of the message.
codedOutputStream.WriteVarint32(message.ByteSize());
// Ask the message to serialize itself to our stream adapter, which
// ultimately calls Write on us, which in turn calls Write on our
// composed stream.
message.SerializeWithCachedSizes(&codedOutputStream);

In my stream implementation I buffer in 16KB chunks and call send on the
socket once I have 16KB.
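
The matching read side of this framing might look like the sketch below
(a hypothetical helper, not necessarily my exact code); the
PushLimit/PopLimit pair keeps the parser from reading past the message
boundary:

#include <google/protobuf/io/coded_stream.h>
#include "myproto.pb.h"  // hypothetical name for the generated header

using google::protobuf::io::CodedInputStream;

// Read one size-prefixed message from an existing CodedInputStream.
// Returns false on end of stream or parse failure.
bool ReadDelimited(CodedInputStream* input,
                   myProto::GetNeighborsResponse* message) {
  google::protobuf::uint32 size;
  if (!input->ReadVarint32(&size)) return false;   // the varint prefix
  CodedInputStream::Limit limit = input->PushLimit(size);
  bool ok = message->ParseFromCodedStream(input)   // stops at the limit
            && input->ConsumedEntireMessage();
  input->PopLimit(limit);
  return ok;
}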

Thanks!

- Alex






Re: Compiling on AIX 5.3 using xlC 3.55 compiler

2009-07-13 Thread Kenton Varda
And yes, I'd love a patch.  :)

Re: Compiling on AIX 5.3 using xlC 3.55 compiler

2009-07-13 Thread Kenton Varda
google/protobuf/stubs/hash.h already contains some hacks for hash_map.  To
support unordered_map, all we'd have to do is add another hack there which
defines hash_map to be a subclass of unordered_map.  Subclassing effectively
functions as a template typedef here.
I would rather not replace the identifier "hash_map" with "unordered_map" in
the actual code until the Google style guide rules on the issue.  I suspect
that Google code will go on using hash_map with a similar hack because
updating our entire code base is just not worth the effort.
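
Concretely, the hack might look something like this sketch inside hash.h
(illustrative only: the TR1 include path and template parameters are
assumptions, and hash<> here means the functor hash.h already defines):

#include <tr1/unordered_map>

namespace google {
namespace protobuf {

// Subclassing stands in for the template typedef C++98 lacks:
// existing code keeps saying hash_map<K, V>, but gets an
// unordered_map underneath.
template <typename Key, typename Data,
          typename HashFcn = hash<Key>,
          typename EqualKey = std::equal_to<Key> >
class hash_map
    : public std::tr1::unordered_map<Key, Data, HashFcn, EqualKey> {
};

}  // namespace protobuf
}  // namespace google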


Re: Compiling on AIX 5.3 using xlC 3.55 compiler

2009-07-13 Thread Monty Taylor


I was actually just working on making an update to the m4 to detect
unordered_map in Drizzle. (We swiped the hash_map detection macro)

unordered_map is the name under which it's apparently going to land in
C++0x, and the name it already has in gcc 4.3 and 4.4. gcc still ships
hash_map as well, but it throws a deprecation warning.

Might not be a terrible idea to go ahead and shift to unordered_map and
then put in a mapping/typedef for hash_map if something doesn't have u_m?

(Kenton - would you be interested in a patch doing that?)


Re: Compiling on AIX 5.3 using xlC 3.55 compiler

2009-07-13 Thread vikram

I have found out that newer xlC versions (8.x onward) support
hash_map-like functionality, but under the different name unordered_map.
Is there any way to use this container without modifying much of the
code? hash_map is used in many places in the code, so it would need to be
replaced with unordered_map for the xlC compiler on AIX. Please provide
some ideas. I was trying to do a template typedef, but it seems I cannot
carry all the typenames through while doing that.

Vikram

On Jul 1, 12:00 pm, Kenton Varda  wrote:
> Well, it looks like all of these are stuck in the same place -- in the same
> call to hash_map::find().  This would seem to indicate that your STL
> implementation is broken.  It's also possible that the infinite loop is
> actually in protobuf code, and the only reason we see it always breaking in
> the same find() call is because that's the most expensive part of the loop.
>  You could test this by breaking under gdb again, and then repeatedly typing
> "finish" to make it run to completion of the current function call.  If it
> eventually gets back to protobuf code, then the problem is there, otherwise
> it's in the STL code.  (Actually, I should have told you to do this
> originally, rather than the "collect multiple stack traces" idea...)
>
> On Tue, Jun 30, 2009 at 7:36 PM, vikram  wrote:
>
> > Hey Kenton,
>
> >        This is compilation without STL implementation . I am assuming
> > that if hash_map does not exist , google protocol buffer emulates
> > hash_map.   I am pasting 3-4 instances of stack where protoc is in
> > infinite loop
>
> > #0  0xd1cfdc60 in
>
> > _Node::_Right__Q2_3std5_TreeXTQ2_3std12_Tmap_traitsXTQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__TQ3_6google8protobuf25FieldDescriptorProto_TypeTQ3_6google8protobuf4hashXTQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc___TQ2_3std9allocatorXTQ2_3std4pairXTCQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__TQ3_6google8protobuf25FieldDescriptorProto_Type__SP0__FPQ3_3std9_Tree_nodXTQ2_3std12_Tmap_traitsXTQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__TQ3_6google8protobuf25FieldDescriptorProto_TypeTQ3_6google8protobuf4hashXTQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc___TQ2_3std9allocatorXTQ2_3std4pairXTCQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__TQ3_6google8protobuf25FieldDescriptorProto_Type__SP0
> > (_P=0xf04ca4e0) at /usr/vacpp/include/xtree:154
> > #1  0xd1d1bbdc in
>
> > _Lbound__Q2_3std5_TreeXTQ2_3std12_Tmap_traitsXTQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__TQ3_6google8protobuf25FieldDescriptorProto_TypeTQ3_6google8protobuf4hashXTQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc___TQ2_3std9allocatorXTQ2_3std4pairXTCQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__TQ3_6google8protobuf25FieldDescriptorProto_Type__SP0__CFRCQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__
> > (this=0xf04ca4e0, _...@0x2ffc) at /usr/vacpp/include/xtree.t:377
> > #2  0xd1d22878 in
>
> > lower_bound__Q2_3std5_TreeXTQ2_3std12_Tmap_traitsXTQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__TQ3_6google8protobuf25FieldDescriptorProto_TypeTQ3_6google8protobuf4hashXTQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc___TQ2_3std9allocatorXTQ2_3std4pairXTCQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__TQ3_6google8protobuf25FieldDescriptorProto_Type__SP0__CFRCQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__
> > (this=0xf04ca4e0, __classretu...@0x2ff21d70, _...@0x2ffc) at /usr/
> > vacpp/include/xtree:377
> > #3  0xd1d28f34 in
>
> > find__Q2_3std5_TreeXTQ2_3std12_Tmap_traitsXTQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__TQ3_6google8protobuf25FieldDescriptorProto_TypeTQ3_6google8protobuf4hashXTQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc___TQ2_3std9allocatorXTQ2_3std4pairXTCQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__TQ3_6google8protobuf25FieldDescriptorProto_Type__SP0__CFRCQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__
> > (this=0xf04ca4e0, __classretu...@0x2ff21dd0, _...@0x2ffc) at /usr/
> > vacpp/include/xtree:365
> > #4  0xd1d2fd34 in
>
> > ParseType__Q4_6google8protobuf8compiler6ParserFPQ3_6google8protobuf25FieldDescriptorProto_TypePQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__
> > (this=0x2ff22278,
> >    type=0x2ff21e24, type_name=0x2ff21e28) at google/protobuf/compiler/
> > parser.cc:1000
> > #5  0xd1d31438 in
>
> > ParseMessageField__Q4_6google8protobuf8compiler6ParserFPQ3_6google8protobuf20FieldDescriptorProtoPQ3_6google8protobuf16RepeatedPtrFieldXTQ3_6google8protobuf15DescriptorProto_
> > (
> >     this=0x2ff22278, field=0x2000f918, messages=0x2000f808) at google/
> > protob

Re: DescriptorPool in python

2009-07-13 Thread Kenton Varda
I don't think that's currently implemented in Python, unfortunately.




Re: Protobuf "Lite"

2009-07-13 Thread Kenton Varda
I've pretty much finished the refactoring.  Ideally I'd like to get it into
SVN this week, but realistically it will probably happen next week or the
week after since I will be out of town from the 16th to the 22nd.  An
official release will hopefully follow a week or two after that.  Stay tuned
to this group for announcements...




DescriptorPool in python

2009-07-13 Thread Ferenc Szalai

Hi

Is there any plan to implement the DescriptorPool class in Python?
In particular, I'm missing an easy way to transform a FileDescriptorProto
into a Descriptor in Python.

--
Regards,
Ferenc
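
For comparison, in C++ the step being asked for is DescriptorPool::BuildFile;
roughly this (a sketch, and the helper name is mine):

#include <string>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/descriptor.pb.h>

using google::protobuf::Descriptor;
using google::protobuf::DescriptorPool;
using google::protobuf::FileDescriptorProto;

// Build descriptors from a FileDescriptorProto -- the operation that
// has no Python counterpart yet.
const Descriptor* BuildMessageType(DescriptorPool* pool,
                                   const FileDescriptorProto& file_proto,
                                   const std::string& message_name) {
  const google::protobuf::FileDescriptor* file = pool->BuildFile(file_proto);
  if (file == NULL) return NULL;  // file_proto was invalid
  return file->FindMessageTypeByName(message_name);
}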



Protobuf "Lite"

2009-07-13 Thread kkw

Hi,

I'm new to this group, so first of all I'd like to say "Hi" to all of you.

I've read the previous posts about the size of the protobuf binary, and
I've seen that a "lite" version is under development. Do you have any
idea of its release date (even a rough estimate)?

  Thank you for the answer.

Regards,
  Michal