Thanks Kenton. I will try to debug it and will let you know. Has anyone
successfully compiled protocol buffers on AIX? I've seen a couple of posts but
never saw a reply reporting success.

Thanks & Regards,
Vikram

On Tue, Jan 19, 2010 at 7:56 PM, Kenton Varda <ken...@google.com> wrote:

> It just looks up the type name in a hash_map:
>
> http://code.google.com/p/protobuf/source/browse/trunk/src/google/protobuf/compiler/parser.cc#1007
>
>
> kTypeNames is initialized here:
>
> http://code.google.com/p/protobuf/source/browse/trunk/src/google/protobuf/compiler/parser.cc#61
>
>
> On Tue, Jan 19, 2010 at 7:53 PM, vikram patil <patilvik...@gmail.com> wrote:
>
>> Hmm,
>>
>> Could you please point me to the code that is responsible for recognizing
>> built-in types? I will try to debug further. My independent tests that
>> exercise unordered_map work fine, but when I try to use it with protocol
>> buffers it fails.
>>
>>
>> Thanks & Regards,
>> Vikram
>>
>>
>> On Tue, Jan 19, 2010 at 7:48 PM, Kenton Varda <ken...@google.com> wrote:
>>
>>> Wait, I misread your error report.  It looks like the errors are coming
>>> from protoc.  However, the errors are very odd -- it appears that protoc is
>>> failing to recognize built-in types like "string" and "int32".  This could
>>> happen if the hash_map/unordered_map implementation is broken and not
>>> properly matching string keys.
>>>
>>>
>>> On Tue, Jan 19, 2010 at 6:53 PM, Kenton Varda <ken...@google.com> wrote:
>>>
>>>> This sounds like another problem with your compiler -- it can't find
>>>> std::string.
>>>>
>>>> Note that in common.h we use "using namespace std;" to import all of std
>>>> into the google::protobuf namespace.  This is not good practice but we
>>>> didn't think it was worth the effort to "fix" it.
>>>>
>>>> On Tue, Jan 19, 2010 at 6:42 PM, vikram <patilvik...@gmail.com> wrote:
>>>>
>>>>> Thanks Kenton. I configured it using the following configure string:
>>>>>
>>>>> ./configure CC="/compiler/xlcpp/usr/vac/bin/xlc_r" \
>>>>>   CXX="/compiler/xlcpp/usr/vacpp/bin/xlC_r" \
>>>>>   CXXFLAGS="-g -qlanglvl=extended -D__IBMCPP_TR1__ -qidirfirst -I/compiler/xlcpp/usr/vacpp/include" \
>>>>>   CFLAGS="-g -qlanglvl=extc99"
>>>>>
>>>>> Configure detects unordered_map correctly and uses it, but when I
>>>>> tried it with a simple proto file I got the following error:
>>>>> bash-3.00$ ./lt-protoc -I. test.proto --cpp_out=.
>>>>> test.proto:4:12: "string" is not defined.
>>>>> test.proto:5:12: "int32" is not defined.
>>>>> test.proto:6:12: "int32" is not defined.
>>>>>
>>>>> test.proto
>>>>>
>>>>> package tutorial;
>>>>>
>>>>> message SearchRequest {
>>>>>  required string query = 1;
>>>>>  optional int32 page_number = 2;
>>>>>  optional int32 result_per_page = 3;
>>>>> }
>>>>>
>>>>>
>>>>>
>>>>> It seems like descriptor.cc holds the kTypeToName map, which
>>>>> identifies the basic datatypes supported by protocol buffers, but the
>>>>> compiled compiler could not figure that out. When I debugged, the
>>>>> function using this array was never called.
>>>>>
>>>>> Function from descriptor.cc:
>>>>> void FieldDescriptor::DebugString(int depth, string *contents) const {
>>>>>
>>>>> Please provide some ideas on this.
>>>>>
>>>>> Thanks & Regards,
>>>>> Vikram
>>>>> On Jan 13, 2:17 pm, Kenton Varda <ken...@google.com> wrote:
>>>>> > stl_hash.m4 should automatically look in whatever directory your
>>>>> > compiler uses.  If for some reason your compiler does not
>>>>> > automatically look in the directory you want, then you should add the
>>>>> > proper CXXFLAGS to make it look there, e.g.:
>>>>> >
>>>>> >   ./configure CXXFLAGS=-I/XYZ/vacpp/include
>>>>> >
>>>>> > (-I is GCC's flag for this; your compiler may be different.)
>>>>> >
>>>>> > On Wed, Jan 13, 2010 at 12:20 PM, vikram <patilvik...@gmail.com>
>>>>> wrote:
>>>>> > > Hello Guys,
>>>>> >
>>>>> > >     I see that google protocol buffers now supports unordered_map
>>>>> > > with the new modification in hash.h, but I am confused about where
>>>>> > > exactly stl_hash.m4 looks for unordered_map by default. Can we make
>>>>> > > it look in a different directory? The xlc compiler on AIX is
>>>>> > > installed under XYZ/vacpp/include, which is different from the
>>>>> > > default /usr/include directory.
>>>>> >
>>>>> > > I tried to run m4 with stl_hash.m4 as input and XYZ/vacpp/include
>>>>> > > as the include directory, but it failed, saying "end quote is not
>>>>> > > provided". Is there any way I can make stl_hash.m4 look in a
>>>>> > > different include directory than /usr/include?
>>>>> >
>>>>> > > Thanks & Regards,
>>>>> > > Vikram
>>>>> >
>>>>> > > --
>>>>> > > You received this message because you are subscribed to the Google
>>>>> Groups
>>>>> > > "Protocol Buffers" group.
>>>>> > > To post to this group, send email to proto...@googlegroups.com.
>>>>> > > To unsubscribe from this group, send email to
>>>>> > > protobuf+unsubscr...@googlegroups.com.
>>>>> > > For more options, visit this group at
>>>>> > > http://groups.google.com/group/protobuf?hl=en.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>