Hi,

The version of netty is netty-3.6.6.Final.jar, which comes from hbase's path. All 
occurrences of protobuf.jar are 2.5.0.

There are some older versions of netty around, in hadoop and flume-ng, but none 
of those are in the classpath.
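For anyone else wanting to verify this, a quick way to see which netty/protobuf jars a client JVM actually picks up is to filter its classpath. This is just a minimal sketch; `ClasspathCheck` and `findJars` are hypothetical names, not part of HBase:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: filter a classpath string for netty/protobuf jars,
// to confirm which versions the client JVM would actually load.
public class ClasspathCheck {
    public static List<String> findJars(String classpath) {
        List<String> hits = new ArrayList<>();
        // Classpath entries are separated by ':' on Unix, ';' on Windows
        for (String entry : classpath.split(File.pathSeparator)) {
            String name = entry.toLowerCase();
            if (name.contains("netty") || name.contains("protobuf")) {
                hits.add(entry);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // Inspect the running JVM's own classpath
        for (String jar : findJars(System.getProperty("java.class.path"))) {
            System.out.println(jar);
        }
    }
}
```

Running this inside the client process (rather than eyeballing directories) catches the case where an older jar is on disk but not actually on the classpath.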

-Ian 

On Wednesday 01 October 2014 07:28:05 Ted Yu wrote:
> Can you check the version of netty that is in the classpath of your client ?
> 
> I wonder if it uses protobuf version other than 2.5.0 which is used by
> hbase.
> 
> Cheers
> 
> On Wed, Oct 1, 2014 at 4:37 AM, Ian Brooks <i.bro...@sensewhere.com> wrote:
> 
> > Hi,
> >
> > I have a Java client that connects to hbase and reads and writes data to
> > hbase. Every now and then, I'm seeing the following stack trace in the
> > application log, and I'm not sure why it is coming up.
> >
> > org.apache.hadoop.hbase.client.ClusterStatusListener - ERROR - Unexpected
> > exception, continuing.
> > com.google.protobuf.InvalidProtocolBufferException: Protocol message tag
> > had invalid wire type.
> >         at
> > com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:99)
> >         at
> > com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:498)
> >         at
> > com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
> >         at
> > org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7554)
> >         at
> > org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7512)
> >         at
> > org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7689)
> >         at
> > org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7684)
> >         at
> > com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:141)
> >         at
> > com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:176)
> >         at
> > com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:182)
> >         at
> > com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
> >         at
> > org.jboss.netty.handler.codec.protobuf.ProtobufDecoder.decode(ProtobufDecoder.java:122)
> >         at
> > org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
> >         at
> > org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
> >         at
> > org.jboss.netty.channel.socket.oio.OioDatagramWorker.process(OioDatagramWorker.java:52)
> >         at
> > org.jboss.netty.channel.socket.oio.AbstractOioWorker.run(AbstractOioWorker.java:73)
> >         at
> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> > I'm running hbase-0.98.3-hadoop2
> >
> > -Ian
> >