Re: Performance: Sending a message with ~150k items, approx 3.3mb, can I do better than 100ms?

2009-07-14 Thread Alex Black

ok, I took I/O out of the picture by serializing each message into a
pre-allocated buffer, and this time I did a more thorough measurement.

Benchmark 1: Complete scenario
- average time 262ms (100 runs)

Benchmark 2: Same as # 1 but no IO
- average time 250ms (100 runs)

Benchmark 3: Same as 2 but with serialization commented out
- average time 251ms (100 runs)

Benchmark 4: Same as 3 but with message composition commented out too
(no protobuf calls)
- average time 185 ms (100 runs)

So from this I conclude:
- My initial #s were wrong
- My timings vary too much for each run to really get accurate
averages
- IO takes about 10ms
- Serialization takes ~0ms
- Message composition and setting of fields takes ~66ms

My message composition is in a loop, the part in the loop looks like:

        uuid_t relatedVertexId;

        myProto::IdConfidence* neighborIdConfidence =
            pNodeWithNeighbors->add_neighbors();

        // Set the vertex id
        neighborIdConfidence->set_id((const void*) relatedVertexId, 16);
        // set the confidence
        neighborIdConfidence->set_confidence( confidence );

        currentBatchSize++;

        if ( currentBatchSize == BatchSize )
        {
            // Flush out this batch
            //stream << getNeighborsResponse;
            getNeighborsResponse.Clear();
            currentBatchSize = 0;
        }

On Jul 14, 1:27 am, Kenton Varda ken...@google.com wrote:
 Oh, I didn't even know you were including composition in there.  My
 benchmarks are only for serialization of already-composed messages.
 But this still doesn't tell us how much time is spent on network I/O vs.
 protobuf serialization.  My guess is that once you factor that out, your
 performance is pretty close to the benchmarks.

 On Mon, Jul 13, 2009 at 10:11 PM, Alex Black a...@alexblack.ca wrote:

  If I comment out the actual serialization and sending of the message
  (so I am just composing messages, and clearing them each batch) then
  the 100ms drops to about 50ms.

  On Jul 14, 12:36 am, Alex Black a...@alexblack.ca wrote:
   I'm sending a message with ~150k repeated items in it; total size is
   about 3.3 MB, and it's taking me about 100ms to serialize it and send
   it out.

   Can I expect to do any better than this? What could I look into to
   improve this?
   - I have option optimize_for = SPEED; set in my proto file
   - I'm compiling with -O3
   - I'm sending my message in batches of 1000
   - I'm using C++, on ubuntu, x64
   - I'm testing all on one machine (e.g. client and server are on one
   machine)

   My message looks like:

   message NodeWithNeighbors
   {
           required Id nodeId = 1;
           repeated IdConfidence neighbors = 2;

   }

   message GetNeighborsResponse
   {
           repeated NodeWithNeighbors nodesWithNeighbors = 1;

   }

   message IdConfidence
   {
           required bytes id = 1;
           required float confidence = 2;

   }

   Where bytes id is used to send 16-byte IDs (UUIDs).

   I'm writing each message (batch) out like this:

           CodedOutputStream codedOutputStream(m_ProtoBufStream);

           // Write out the size of the message
           codedOutputStream.WriteVarint32(message.ByteSize());
           // Ask the message to serialize itself to our stream adapter,
           // which ultimately calls Write on us,
           // which we then call Write on our composed stream
           message.SerializeWithCachedSizes(&codedOutputStream);

   In my stream implementation I'm buffering every 16kb, and calling send
   on the socket once I have 16kb.
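
   For reference, here is a minimal standalone sketch of that length-prefixed
   framing (not from the thread). It assumes the generated type
   myProto::GetNeighborsResponse and a header named myProto.pb.h, and it
   writes one batch into a std::string instead of the socket-backed stream
   adapter; WriteDelimitedBatch is a hypothetical helper name.

    #include <string>
    #include <google/protobuf/io/coded_stream.h>
    #include <google/protobuf/io/zero_copy_stream_impl_lite.h>
    #include "myProto.pb.h"  // assumed name of the generated header

    // Append one length-prefixed batch to *out: a varint32 size, then the body.
    void WriteDelimitedBatch(const myProto::GetNeighborsResponse& message,
                             std::string* out) {
      google::protobuf::io::StringOutputStream raw(out);
      google::protobuf::io::CodedOutputStream coded(&raw);
      // ByteSize() also caches the per-field sizes that
      // SerializeWithCachedSizes() reuses below.
      coded.WriteVarint32(message.ByteSize());
      message.SerializeWithCachedSizes(&coded);
    }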

   Thanks!

   - Alex



RE: Performance: Sending a message with ~150k items, approx 3.3mb, can I do better than 100ms?

2009-07-14 Thread Alex Black

Kenton: I made a mistake with these numbers - pls ignore them - I'll revisit 
tomorrow.

Thx.

-Original Message-
From: protobuf@googlegroups.com [mailto:proto...@googlegroups.com] On Behalf Of 
Alex Black
Sent: Tuesday, July 14, 2009 2:05 AM
To: Protocol Buffers
Subject: Re: Performance: Sending a message with ~150k items, approx 3.3mb, can 
I do better than 100ms?


ok, I took I/O out of the picture by serializing each message into a 
pre-allocated buffer, and this time I did a more through measurement.

Benchmark 1: Complete scenario
- average time 262ms (100 runs)

Benchmark 2: Same as # 1 but no IO
- average time 250ms (100 runs)

Benchmark 3: Same as 2 but with serialization commented out
- average time 251ms (100 runs)

Benchmark 4: Same as 3 but with message composition commented out too (no 
protobuf calls)
- average time 185 ms (100 runs)

So from this I conclude:
- My initial #s were wrong
- My timings vary too much for each run to really get accurate averages
- IO takes about 10ms
- Serialization takes ~0ms
- Message composition and setting of fields takes ~66ms

My message composition is in a loop, the part in the loop looks like:

        uuid_t relatedVertexId;

        myProto::IdConfidence* neighborIdConfidence =
            pNodeWithNeighbors->add_neighbors();

        // Set the vertex id
        neighborIdConfidence->set_id((const void*) relatedVertexId, 16);
        // set the confidence
        neighborIdConfidence->set_confidence( confidence );

        currentBatchSize++;

        if ( currentBatchSize == BatchSize )
        {
            // Flush out this batch
            //stream << getNeighborsResponse;
            getNeighborsResponse.Clear();
            currentBatchSize = 0;
        }

On Jul 14, 1:27 am, Kenton Varda ken...@google.com wrote:
 Oh, I didn't even know you were including composition in there.  My 
 benchmarks are only for serialization of already-composed messages.
 But this still doesn't tell us how much time is spent on network I/O vs.
 protobuf serialization.  My guess is that once you factor that out, 
 your performance is pretty close to the benchmarks.

 On Mon, Jul 13, 2009 at 10:11 PM, Alex Black a...@alexblack.ca wrote:

  If I comment out the actual serialization and sending of the message 
  (so I am just composing messages, and clearing them each batch) then 
  the 100ms drops to about 50ms.

  On Jul 14, 12:36 am, Alex Black a...@alexblack.ca wrote:
   I'm sending a message with about ~150k repeated items in it, total 
   size is about 3.3mb, and its taking me about 100ms to serialize it 
   and send it out.

   Can I expect to do any better than this? What could I look into to 
   improve this?
   - I have option optimize_for = SPEED; set in my proto file
   - I'm compiling with -O3
   - I'm sending my message in batches of 1000
   - I'm using C++, on ubuntu, x64
   - I'm testing all on one machine (e.g. client and server are on 
   one
   machine)

   My message looks like:

   message NodeWithNeighbors
   {
           required Id nodeId = 1;
           repeated IdConfidence neighbors = 2;

   }

   message GetNeighborsResponse
   {
           repeated NodeWithNeighbors nodesWithNeighbors = 1;

   }

   message IdConfidence
   {
           required bytes id = 1;
           required float confidence = 2;

   }

   Where bytes id is used to send 16byte IDs (uuids).

   I'm writing each message (batch) out like this:

           CodedOutputStream codedOutputStream(m_ProtoBufStream);

           // Write out the size of the message
           codedOutputStream.WriteVarint32(message.ByteSize());
           // Ask the message to serialize itself to our stream adapter,
           // which ultimately calls Write on us,
           // which we then call Write on our composed stream
           message.SerializeWithCachedSizes(&codedOutputStream);

   In my stream implementation I'm buffering every 16kb, and calling 
   send on the socket once i have 16kb.

   Thanks!

   - Alex





Re: Performance: Sending a message with ~150k items, approx 3.3mb, can I do better than 100ms?

2009-07-14 Thread Kenton Varda
OK.  If your message composition (or parsing, on the receiving end) takes a
lot of time, you might look into how much of that is due to memory
allocation.  Usually this is a pretty significant fraction.  Two good ways
to improve that:
1) If your app builds many messages over time and most of them have roughly
the same shape (i.e. which fields are set, the size of repeated fields,
etc. are usually similar), then you should clear and reuse the same message
object rather than allocate a new one each time.  This way it will reuse the
same memory, avoiding allocation (see the sketch after point 2 below).

2) Use tcmalloc:
  http://google-perftools.googlecode.com
It is often faster than your system's malloc, particularly for
multi-threaded C++ apps.  All C++ servers at Google use this.
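
To make point 1 concrete, here is a minimal sketch of the reuse pattern (not
from the thread). It assumes the generated types and the myProto namespace
from Alex's snippets, a header named myProto.pb.h, and placeholder batch
contents; SendAllBatches is a hypothetical function name.

    #include <string>
    #include "myProto.pb.h"  // assumed name of the generated header

    // Reuse one response object for every batch instead of allocating a new
    // one each time; Clear() keeps already-allocated sub-objects around, so
    // later batches mostly avoid malloc.
    void SendAllBatches() {
      myProto::GetNeighborsResponse batch;      // long-lived message object
      std::string wire;                         // stand-in for the socket stream
      for (int b = 0; b < 100; ++b) {           // e.g. 100 batches
        for (int i = 0; i < 1000; ++i) {        // batch size used in the thread
          myProto::NodeWithNeighbors* node = batch.add_nodeswithneighbors();
          myProto::IdConfidence* edge = node->add_neighbors();
          edge->set_id(std::string(16, '\0'));  // placeholder 16-byte id
          edge->set_confidence(1.0f);
          // (real code would also fill in the required nodeId field)
        }
        wire.clear();
        batch.SerializePartialToString(&wire);  // "Partial": the placeholder
                                                // data leaves nodeId unset
        // ... write `wire` to the socket here ...
        batch.Clear();                          // memory is reused next pass
      }
    }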

On Mon, Jul 13, 2009 at 11:50 PM, Alex Black a...@alexblack.ca wrote:


 Kenton: I made a mistake with these numbers - pls ignore them - I'll
 revisit tomorrow.

 Thx.

 -Original Message-
 From: protobuf@googlegroups.com [mailto:proto...@googlegroups.com] On
 Behalf Of Alex Black
 Sent: Tuesday, July 14, 2009 2:05 AM
 To: Protocol Buffers
 Subject: Re: Performance: Sending a message with ~150k items, approx 3.3mb,
 can I do better than 100ms?


 ok, I took I/O out of the picture by serializing each message into a
 pre-allocated buffer, and this time I did a more through measurement.

 Benchmark 1: Complete scenario
 - average time 262ms (100 runs)

 Benchmark 2: Same as # 1 but no IO
 - average time 250ms (100 runs)

 Benchmark 3: Same as 2 but with serialization commented out
 - average time 251ms (100 runs)

 Benchmark 4: Same as 3 but with message composition commented out too (no
 protobuf calls)
 - average time 185 ms (100 runs)

 So from this I conclude:
 - My initial #s were wrong
 - My timings vary too much for each run to really get accurate averages
 - IO takes about 10ms
 - Serialization takes ~0ms
 - Message composition and setting of fields takes ~66ms

 My message composition is in a loop, the part in the loop looks like:

         uuid_t relatedVertexId;

         myProto::IdConfidence* neighborIdConfidence =
             pNodeWithNeighbors->add_neighbors();

         // Set the vertex id
         neighborIdConfidence->set_id((const void*) relatedVertexId, 16);
         // set the confidence
         neighborIdConfidence->set_confidence( confidence );

         currentBatchSize++;

         if ( currentBatchSize == BatchSize )
         {
             // Flush out this batch
             //stream << getNeighborsResponse;
             getNeighborsResponse.Clear();
             currentBatchSize = 0;
         }

 On Jul 14, 1:27 am, Kenton Varda ken...@google.com wrote:
  Oh, I didn't even know you were including composition in there.  My
  benchmarks are only for serialization of already-composed messages.
  But this still doesn't tell us how much time is spent on network I/O vs.
  protobuf serialization.  My guess is that once you factor that out,
  your performance is pretty close to the benchmarks.
 
  On Mon, Jul 13, 2009 at 10:11 PM, Alex Black a...@alexblack.ca wrote:
 
   If I comment out the actual serialization and sending of the message
   (so I am just composing messages, and clearing them each batch) then
   the 100ms drops to about 50ms.
 
   On Jul 14, 12:36 am, Alex Black a...@alexblack.ca wrote:
I'm sending a message with about ~150k repeated items in it, total
size is about 3.3mb, and its taking me about 100ms to serialize it
and send it out.
 
Can I expect to do any better than this? What could I look into to
improve this?
- I have option optimize_for = SPEED; set in my proto file
- I'm compiling with -O3
- I'm sending my message in batches of 1000
- I'm using C++, on ubuntu, x64
- I'm testing all on one machine (e.g. client and server are on
one
machine)
 
My message looks like:
 
message NodeWithNeighbors
{
required Id nodeId = 1;
repeated IdConfidence neighbors = 2;
 
}
 
message GetNeighborsResponse
{
repeated NodeWithNeighbors nodesWithNeighbors = 1;
 
}
 
message IdConfidence
{
required bytes id = 1;
required float confidence = 2;
 
}
 
Where bytes id is used to send 16byte IDs (uuids).
 
I'm writing each message (batch) out like this:
 
            CodedOutputStream codedOutputStream(m_ProtoBufStream);

            // Write out the size of the message
            codedOutputStream.WriteVarint32(message.ByteSize());
            // Ask the message to serialize itself to our stream adapter,
            // which ultimately calls Write on us,
            // which we then call Write on our composed stream
            message.SerializeWithCachedSizes(&codedOutputStream);

Re: Protobuf Lite

2009-07-14 Thread Michal

Thanks a lot!

On Jul 14, 1:18 am, Kenton Varda ken...@google.com wrote:
 I've pretty much finished the refactoring.  Ideally I'd like to get it into
 SVN this week, but realistically it will probably happen next week or the
 week after since I will be out of town from the 16th to the 22nd.  An
 official release will hopefully follow a week or two after that.  Stay tuned
 to this group for announcements...

 On Mon, Jul 13, 2009 at 3:32 AM, kkw mpomaran...@gmail.com wrote:

  Hi,

   I'm new on this group so at the very beginning I'd like to say Hi
  to all of you.

   I've read the previous posts about the size of the protobuf binary and
  I've seen the information that a lite version is under development. Do
  you have any idea of its release date (even a rough estimate)?

   Thank you for the answer.

  Regards,
   Michal



RE: Performance: Sending a message with ~150k items, approx 3.3mb, can I do better than 100ms?

2009-07-14 Thread Alex Black
Thanks for those tips.  I am using tcmalloc, and I'm re-using the message
for each batch, e.g. I fill it up with, say, 500 items, send it out, clear
it, and re-use it.
 
Here are my hopefully accurate timings, each done 100 times, averaged:
 
1. Baseline (just loops through the data on the server) no protobuf:
191ms
2. Compose messages, serialize them, no I/O or deserialization: 213ms
3. Same as #2 but with IO to a dumb Java client: 265ms
4. Same as #3 but add Java protobuf deserialization: 323ms
 
So from this it looks like:
- composing and serializing the messages takes 22ms
- sending the data over sockets takes 52ms
- deserializing the data in java with protobuf takes 58ms
 
The amount of data being sent is: 3,959,368 bytes in 158,045 messages
(composed in batches of 1000).
 
- Alex
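
On the receiving side the client has to undo the same framing: read a varint
length, then parse exactly that many bytes. For comparison with the Java
numbers above, here is a minimal C++ sketch of that read step (not from the
thread; ReadDelimitedBatch is a hypothetical helper, and the generated type
and header names are assumed as before).

    #include <google/protobuf/io/coded_stream.h>
    #include <google/protobuf/io/zero_copy_stream.h>
    #include "myProto.pb.h"  // assumed name of the generated header

    // Read one length-prefixed batch; returns false at end of stream or on error.
    bool ReadDelimitedBatch(google::protobuf::io::ZeroCopyInputStream* raw,
                            myProto::GetNeighborsResponse* message) {
      google::protobuf::io::CodedInputStream coded(raw);
      google::protobuf::uint32 size;
      if (!coded.ReadVarint32(&size)) return false;        // clean end of stream
      // Limit parsing to this batch so the parser stops at the batch boundary.
      google::protobuf::io::CodedInputStream::Limit limit = coded.PushLimit(size);
      message->Clear();
      if (!message->MergeFromCodedStream(&coded)) return false;
      if (!coded.ConsumedEntireMessage()) return false;    // size/body mismatch
      coded.PopLimit(limit);
      return true;
    }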



From: Kenton Varda [mailto:ken...@google.com] 
Sent: Tuesday, July 14, 2009 3:26 AM
To: Alex Black
Cc: Protocol Buffers
Subject: Re: Performance: Sending a message with ~150k items, approx
3.3mb, can I do better than 100ms?


OK.  If your message composition (or parsing, on the receiving end)
takes a lot of time, you might look into how much of that is due to
memory allocation.  Usually this is a pretty significant fraction.  Two
good ways to improve that: 

1) If your app builds many messages over time and most of them have
roughly the same shape (i.e. which fields are set, the size of
repeated fields, etc. are usually similar), then you should clear and
reuse the same message object rather than allocate a new one each time.
This way it will reuse the same memory, avoiding allocation.

2) Use tcmalloc:
  http://google-perftools.googlecode.com
It is often faster than your system's malloc, particularly for
multi-threaded C++ apps.  All C++ servers at Google use this.

On Mon, Jul 13, 2009 at 11:50 PM, Alex Black a...@alexblack.ca wrote:



Kenton: I made a mistake with these numbers - pls ignore them -
I'll revisit tomorrow.

Thx.


-Original Message-
From: protobuf@googlegroups.com
[mailto:proto...@googlegroups.com] On Behalf Of Alex Black
Sent: Tuesday, July 14, 2009 2:05 AM
To: Protocol Buffers
Subject: Re: Performance: Sending a message with ~150k items,
approx 3.3mb, can I do better than 100ms?


ok, I took I/O out of the picture by serializing each message
into a pre-allocated buffer, and this time I did a more through
measurement.

Benchmark 1: Complete scenario
- average time 262ms (100 runs)

Benchmark 2: Same as # 1 but no IO
- average time 250ms (100 runs)

Benchmark 3: Same as 2 but with serialization commented out
- average time 251ms (100 runs)

Benchmark 4: Same as 3 but with message composition commented
out too (no protobuf calls)
- average time 185 ms (100 runs)

So from this I conclude:
- My initial #s were wrong
- My timings vary too much for each run to really get accurate
averages
- IO takes about 10ms
- Serialization takes ~0ms
- Message composition and setting of fields takes ~66ms

My message composition is in a loop, the part in the loop looks
like:

    uuid_t relatedVertexId;

    myProto::IdConfidence* neighborIdConfidence =
        pNodeWithNeighbors->add_neighbors();

    // Set the vertex id
    neighborIdConfidence->set_id((const void*) relatedVertexId, 16);
    // set the confidence
    neighborIdConfidence->set_confidence( confidence );

    currentBatchSize++;

    if ( currentBatchSize == BatchSize )
    {
        // Flush out this batch
        //stream << getNeighborsResponse;
        getNeighborsResponse.Clear();
        currentBatchSize = 0;
    }

On Jul 14, 1:27 am, Kenton Varda ken...@google.com wrote:
 Oh, I didn't even know you were including composition in
there.  My
 benchmarks are only for serialization of already-composed
messages.
 But this still doesn't tell us how much time is spent on
network I/O vs.
 protobuf serialization.  My guess is that once you factor that
out,
 your performance is pretty close to the benchmarks.

 On Mon, Jul 13, 2009 at 10:11 PM, Alex Black
a...@alexblack.ca wrote:

  If I comment out the actual serialization and sending of the
message
  (so I am just composing messages, and clearing them each
batch) then
  the 100ms drops to about 

com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner cannot be resolved to a type

2009-07-14 Thread Mike

I am getting the error

com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner
cannot be resolved to a type

when Eclipse compiles my generated Java code. I wrote a plugin that
generates the protocol buffer messages from the ecore file of an EMF
Model diagram. It creates a new project, puts the .proto files in a
folder, and then runs the protoc compiler against the .proto files to
create the Java code. The Java files are put in another folder of the
same project under a package. I put the jar file
protobuf-java-2.0.0beta.jar in the Java Build Path libraries of the
project I placed the generated Java files in. That removes all errors
related to protocol buffer code except the one I listed. I have searched
the Internet but have not seen this error anywhere. Has anyone come
across this before?

Thanks,
Mike







Re: Cannot resolve InternalDescriptorAssigner?

2009-07-14 Thread Kenton Varda
It looks like your protobuf library and protocol compiler binary are from
different releases.  This won't work.  Please make sure both are upgraded to
2.1.0, the latest release.

On Tue, Jul 14, 2009 at 8:52 AM, Michael Stapleton 
mike.staple...@echostar.com wrote:


 Hi All,
 I am using release protobuf-java-2.0.0beta.jar, and when I generate the Java
 code from my protobuf messages I get the error:

 com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner
 cannot be resolved to a type

 I get this in every Java file generated. I am using the Eclipse IDE and
 generating the code from Eclipse using a thread to execute the protoc
 compiler. I have searched the Internet and there doesn't seem to be anyone
 else with this error. Do I need a newer Java protobuf version, and if so,
 where is it documented? Any help would be appreciated.

 Thanks,
 Mike

 





Re: com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner cannot be resolved to a type

2009-07-14 Thread Kenton Varda
Answered in the other thread -- you need to use matching protoc and protobuf
library versions.

On Tue, Jul 14, 2009 at 2:57 PM, Mike mike.staple...@echostar.com wrote:


 I am getting the error

 com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner
 cannot be resolved to a type

 when Eclipse compiles my generated Java code? I wrote a plugin that
 generates the protocol buffer messages from the ecore file of an EMF
 Model diagram. It creates a new project, puts the .proto files in a
 folder and then runs protoc java compiler against the .proto files to
 create the java code. The java files are put in another folder of the
 same project under a package file. I put the jar file protobuf-
 java-2.0.0beta.jar in the Java Build Path libraries of the project I
 placed the generated java files in. It removes all errors related to
 protocol buffer code except the one I listed. I have searched the
 Internet but have not seen this error anywhere? Anyone come across
 this before?

 Thanks,
 Mike

 





Re: Compiling on AIX 5.3 using xlC 3.55 compiler

2009-07-14 Thread vikram

Kenton & Monty,

I added a hack as follows in hash.h

// File changed .

#if defined(HAVE_HASH_MAP) && defined(HAVE_HASH_SET)
#include HASH_MAP_H
#include HASH_SET_H
#elif defined(__xlC__)
#define MISSING_HASH
#include <unordered_map>
#include <unordered_set>
#else
#define MISSING_HASH
#include <map>
#include <set>
#endif

namespace google {
namespace protobuf {
#if defined(MISSING_HASH) && defined(__xlC__)

// @TODO
// Inherit hash_map from unordered_map
template <typename Key>
struct hash : public std::tr1::hash<Key> {
};

template <typename Key>
struct hash<const Key*> {
  inline size_t operator()(const Key* key) const {
    return reinterpret_cast<size_t>(key);
  }
};

template <typename Key, typename Data,
          typename HashFcn = hash<Key>,
          typename EqualKey = std::equal_to<Key> >
class hash_map : public std::tr1::unordered_map<Key, Data, HashFcn,
                                                EqualKey> {
};

template <typename Key,
          typename HashFcn = hash<Key>,
          typename EqualKey = std::equal_to<Key> >
class hash_set : public std::tr1::unordered_set<
    Key, HashFcn, EqualKey> {
};
#elif defined(MISSING_HASH)

File continues as it is

Stack trace

pthread_kill(??, ??) at 0xd01246b4
_p_raise(??) at 0xd0124124
raise.raise(??) at 0xd0375b28
abort() at 0xd03d3e78
google::protobuf::internal::LogMessage::Finish()(this = 0x2ff21e40),
line 171 in common.cc
google::protobuf::internal::LogFinisher::operator=
(google::protobuf::internal::LogMessage&)(this = 0x2ff21e38, other =
(...)), line 176 in common.cc
protobuf_AssignDesc_google_2fprotobuf_2fdescriptor_2eproto()(), line
82 in descriptor.pb.cc
pthread_once(??, ??) at 0xd0115e78
common.GoogleOnceInit(pthread_once_t*,void(*)())(0xf04a9d00,
0xf04b15a0), line 114 in once.h
protobuf_AssignDescriptorsOnce()(), line 408 in descriptor.pb.cc
google::protobuf::FileOptions::descriptor()(), line 3862 in
descriptor.pb.cc
google::protobuf::FileOptions::GetDescriptor() const(this =
0x2000e248), line 4190 in descriptor.pb.cc
google::protobuf::compiler::Parser::ParseOptionAssignment
(google::protobuf::Message*)(this = 0x2ff223b8, options = 0x2000e248),
line 659 in parser.cc
google::protobuf::compiler::Parser::ParseOption
(google::protobuf::Message*)(this = 0x2ff223b8, options = 0x2000e248),
line 1081 in parser.cc
google::protobuf::compiler::Parser::ParseTopLevelStatement
(google::protobuf::FileDescriptorProto*)(this = 0x2ff223b8, file =
0x2ff22460), line 375 in parser.cc
google::protobuf::compiler::Parser::Parse
(google::protobuf::io::Tokenizer*,google::protobuf::FileDescriptorProto*)
(this = 0x2ff223b8, input = 0x2ff22368, file = 0x2ff22460), line 321
in parser.cc
google::protobuf::compiler::SourceTreeDescriptorDatabase::FindFileByName
(const
std::basic_string<char,std::char_traits<char>,std::allocator<char> >&,
google::protobuf::FileDescriptorProto*)(this = 0x2ff22688, filename
= (...), output = 0x2ff22460), line 145 in importer.cc
TryFindFileInFallbackDatabase(const
std::basic_string<char,std::char_traits<char>,std::allocator<char> >&)
const(0x2ff226ac, 0x2000b9d8), line 1230 in descriptor.cc
NFS write error on host esfs3-lnx.actuate.com: 28.
File: userid=1104, groupid=1000
FindFileByName(const
std::basic_string<char,std::char_traits<char>,std::allocator<char> >&)
const(0x2ff226ac, 0x2000b9d8), line 875 in descriptor.cc
google::protobuf::compiler::Importer::Import(const
std::basic_string<char,std::char_traits<char>,std::allocator<char> >&)
(this = 0x2ff22688, filename = (...)), line 194 in importer.cc

Protoc compiler aborted

./protoc google/protobuf/unittest.proto google/protobuf/
unittest_empty.proto google/protobuf/unittest_import.proto google/
protobuf/unittest_mset.proto google/protobuf/
unittest_optimize_for.proto google/protobuf/
unittest_embed_optimize_for.proto google/protobuf/
unittest_custom_options.proto google/protobuf/compiler/cpp/
cpp_test_bad_identifiers.proto -I. --cpp_out=.
libprotobuf ERROR google/protobuf/descriptor.cc:2215] Invalid proto
descriptor for file google/protobuf/descriptor.proto:
libprotobuf ERROR google/protobuf/descriptor.cc:2218]
google.protobuf.FileDescriptorSet.file:
.google.protobuf.FileDescriptorProto is not defined.
libprotobuf ERROR google/protobuf/descriptor.cc:2218]
google.protobuf.FileDescriptorProto.message_type:
.google.protobuf.DescriptorProto is not defined.
libprotobuf ERROR google/protobuf/descriptor.cc:2218]
google.protobuf.FileDescriptorProto.extension:
.google.protobuf.FieldDescriptorProto is not defined.
libprotobuf ERROR google/protobuf/descriptor.cc:2218]
google.protobuf.FileDescriptorProto.options:
.google.protobuf.FileOptions is not defined.
libprotobuf ERROR google/protobuf/descriptor.cc:2218]
google.protobuf.DescriptorProto.field:
.google.protobuf.FieldDescriptorProto is not defined.
libprotobuf ERROR google/protobuf/descriptor.cc:2218]
google.protobuf.DescriptorProto.extension:
.google.protobuf.FieldDescriptorProto is not defined.
libprotobuf ERROR google/protobuf/descriptor.cc:2218]
google.protobuf.DescriptorProto.nested_type:
.google.protobuf.DescriptorProto is not 

Re: Compiling on AIX 5.3 using xlC 3.55 compiler

2009-07-14 Thread Kenton Varda
It looks like your implementation of hash_map is not working correctly --
all lookups are failing.  You might try writing a little test for hash_map
itself that would be easier to debug.
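
For what it's worth, here is a minimal sketch of such a standalone check (not
from the thread), assuming the patched header sits at its usual
google/protobuf/stubs/hash.h location in the source tree.

    #include <cassert>
    #include <string>
    #include <google/protobuf/stubs/hash.h>  // the patched header

    int main() {
      google::protobuf::hash_map<std::string, int> files;
      files["google/protobuf/descriptor.proto"] = 1;
      files["google/protobuf/unittest.proto"] = 2;
      // If these lookups fail, the hash/equal functors are broken
      // independently of protoc, which narrows the bug down.
      assert(files.find("google/protobuf/descriptor.proto") != files.end());
      assert(files.count("google/protobuf/unittest.proto") == 1);
      assert(files.find("no/such/file.proto") == files.end());
      return 0;
    }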

On Tue, Jul 14, 2009 at 6:27 PM, vikram patilvik...@gmail.com wrote:


 Kenton & Monty,

 I added a hack as follows in hash.h

 // File changed .

 #if defined(HAVE_HASH_MAP) && defined(HAVE_HASH_SET)
 #include HASH_MAP_H
 #include HASH_SET_H
 #elif defined(__xlC__)
 #define MISSING_HASH
 #include <unordered_map>
 #include <unordered_set>
 #else
 #define MISSING_HASH
 #include <map>
 #include <set>
 #endif

 namespace google {
 namespace protobuf {
 #if defined(MISSING_HASH) && defined(__xlC__)

 // @TODO
 // Inherit hash_map from unordered_map
 template <typename Key>
 struct hash : public std::tr1::hash<Key> {
 };

 template <typename Key>
 struct hash<const Key*> {
   inline size_t operator()(const Key* key) const {
     return reinterpret_cast<size_t>(key);
   }
 };

 template <typename Key, typename Data,
           typename HashFcn = hash<Key>,
           typename EqualKey = std::equal_to<Key> >
 class hash_map : public std::tr1::unordered_map<Key, Data, HashFcn,
                                                 EqualKey> {
 };

 template <typename Key,
           typename HashFcn = hash<Key>,
           typename EqualKey = std::equal_to<Key> >
 class hash_set : public std::tr1::unordered_set<
     Key, HashFcn, EqualKey> {
 };
 #elif defined(MISSING_HASH)

 File continues as it is

 Stack trace

 pthread_kill(??, ??) at 0xd01246b4
 _p_raise(??) at 0xd0124124
 raise.raise(??) at 0xd0375b28
 abort() at 0xd03d3e78
 google::protobuf::internal::LogMessage::Finish()(this = 0x2ff21e40),
 line 171 in common.cc
 google::protobuf::internal::LogFinisher::operator=
 (google::protobuf::internal::LogMessage&)(this = 0x2ff21e38, other =
 (...)), line 176 in common.cc
 protobuf_AssignDesc_google_2fprotobuf_2fdescriptor_2eproto()(), line
 82 in descriptor.pb.cc
 pthread_once(??, ??) at 0xd0115e78
 common.GoogleOnceInit(pthread_once_t*,void(*)())(0xf04a9d00,
 0xf04b15a0), line 114 in once.h
 protobuf_AssignDescriptorsOnce()(), line 408 in descriptor.pb.cc
 google::protobuf::FileOptions::descriptor()(), line 3862 in
 descriptor.pb.cc
 google::protobuf::FileOptions::GetDescriptor() const(this =
 0x2000e248), line 4190 in descriptor.pb.cc
 google::protobuf::compiler::Parser::ParseOptionAssignment
 (google::protobuf::Message*)(this = 0x2ff223b8, options = 0x2000e248),
 line 659 in parser.cc
 google::protobuf::compiler::Parser::ParseOption
 (google::protobuf::Message*)(this = 0x2ff223b8, options = 0x2000e248),
 line 1081 in parser.cc
 google::protobuf::compiler::Parser::ParseTopLevelStatement
 (google::protobuf::FileDescriptorProto*)(this = 0x2ff223b8, file =
 0x2ff22460), line 375 in parser.cc
 google::protobuf::compiler::Parser::Parse
 (google::protobuf::io::Tokenizer*,google::protobuf::FileDescriptorProto*)
 (this = 0x2ff223b8, input = 0x2ff22368, file = 0x2ff22460), line 321
 in parser.cc
 google::protobuf::compiler::SourceTreeDescriptorDatabase::FindFileByName
 (const
 std::basic_string<char,std::char_traits<char>,std::allocator<char> >&,
 google::protobuf::FileDescriptorProto*)(this = 0x2ff22688, filename
 = (...), output = 0x2ff22460), line 145 in importer.cc
 TryFindFileInFallbackDatabase(const
 std::basic_string<char,std::char_traits<char>,std::allocator<char> >&)
 const(0x2ff226ac, 0x2000b9d8), line 1230 in descriptor.cc
 NFS write error on host esfs3-lnx.actuate.com: 28.
 File: userid=1104, groupid=1000
 FindFileByName(const
 std::basic_string<char,std::char_traits<char>,std::allocator<char> >&)
 const(0x2ff226ac, 0x2000b9d8), line 875 in descriptor.cc
 google::protobuf::compiler::Importer::Import(const
 std::basic_string<char,std::char_traits<char>,std::allocator<char> >&)
 (this = 0x2ff22688, filename = (...)), line 194 in importer.cc

 Protoc compiler aborted

 ./protoc google/protobuf/unittest.proto google/protobuf/
 unittest_empty.proto google/protobuf/unittest_import.proto google/
 protobuf/unittest_mset.proto google/protobuf/
 unittest_optimize_for.proto google/protobuf/
 unittest_embed_optimize_for.proto google/protobuf/
 unittest_custom_options.proto google/protobuf/compiler/cpp/
 cpp_test_bad_identifiers.proto -I. --cpp_out=.
 libprotobuf ERROR google/protobuf/descriptor.cc:2215] Invalid proto
 descriptor for file google/protobuf/descriptor.proto:
 libprotobuf ERROR google/protobuf/descriptor.cc:2218]
 google.protobuf.FileDescriptorSet.file:
 .google.protobuf.FileDescriptorProto is not defined.
 libprotobuf ERROR google/protobuf/descriptor.cc:2218]
 google.protobuf.FileDescriptorProto.message_type:
 .google.protobuf.DescriptorProto is not defined.
 libprotobuf ERROR google/protobuf/descriptor.cc:2218]
 google.protobuf.FileDescriptorProto.extension:
 .google.protobuf.FieldDescriptorProto is not defined.
 libprotobuf ERROR google/protobuf/descriptor.cc:2218]
 google.protobuf.FileDescriptorProto.options:
 .google.protobuf.FileOptions is not defined.
 libprotobuf ERROR google/protobuf/descriptor.cc:2218]