[ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135036#comment-14135036
 ] 

Andrey Stepachev commented on HBASE-11987:
------------------------------------------

Checked that on state migration everything should work as expected: we will read 
the state, rewrite it as enabled, and then update it with whatever we find in ZK. 
Sounds like that should work.
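
For illustration only, a minimal sketch of that ordering (read persisted state, 
default everything to enabled, then overlay whatever ZK says) is below; the class, 
method, and map names are made up for the example and are not the actual HBase 
types or the patch itself:

{code:java}
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch only: TableState, migrate() and the two input maps are
 * invented names, not the HBase classes touched by HBASE-11987.
 */
public final class StateMigrationSketch {

  enum TableState { ENABLED, DISABLED, ENABLING, DISABLING }

  static Map<String, TableState> migrate(Map<String, TableState> statesFromFs,
                                          Map<String, TableState> statesFromZk) {
    Map<String, TableState> merged = new HashMap<>();
    // 1. Read whatever table state is already persisted on the filesystem.
    merged.putAll(statesFromFs);
    // 2. Rewrite every table as ENABLED first, the safe default for old layouts.
    merged.replaceAll((table, state) -> TableState.ENABLED);
    // 3. Then overlay the states still recorded in ZooKeeper, which win for
    //    clusters that have not migrated yet.
    merged.putAll(statesFromZk);
    return merged;
  }
}
{code}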

> Make zk-less table states backward compatible.
> ----------------------------------------------
>
>                 Key: HBASE-11987
>                 URL: https://issues.apache.org/jira/browse/HBASE-11987
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.0.0
>            Reporter: Andrey Stepachev
>            Assignee: Andrey Stepachev
>         Attachments: 11987v2.txt, HBASE-11987.patch
>
>
> The changed protobuf format is not handled properly, so on startup on top of old 
> hbase files protobuf raises an exception:
> Thanks to [~stack] for finding that.
> {noformat}
> 2014-09-15 16:28:12,387 FATAL [ActiveMasterManager] master.HMaster: Failed to become active master
> java.io.IOException: content=20546
>   at org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:599)
>   at org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:804)
>   at org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:771)
>   at org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:749)
>   at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:460)
>   at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:147)
>   at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:127)
>   at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:488)
>   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:155)
>   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1244)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: com.google.protobuf.InvalidProtocolBufferException: While parsing a protocol message, the input ended unexpectedly in the middle of a field.  This could mean either than the input has been truncated or that an embedded message misreported its own length.
>   at org.apache.hadoop.hbase.TableDescriptor.parseFrom(TableDescriptor.java:120)
>   at org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:597)
>   ... 10 more
> Caused by: com.google.protobuf.InvalidProtocolBufferException: While parsing a protocol message, the input ended unexpectedly in the middle of a field.  This could mean either than the input has been truncated or that an embedded message misreported its own length.
>   at com.google.protobuf.InvalidProtocolBufferException.truncatedMessage(InvalidProtocolBufferException.java:70)
>   at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:728)
>   at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at com.google.protobuf.CodedInputStream.readRawLittleEndian64(CodedInputStream.java:488)
>   at com.google.protobuf.CodedInputStream.readFixed64(CodedInputStream.java:203)
>   at com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:481)
>   at com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName.<init>(HBaseProtos.java:215)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName.<init>(HBaseProtos.java:173)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName$1.parsePartialFrom(HBaseProtos.java:261)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName$1.parsePartialFrom(HBaseProtos.java:256)
>   at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema.<init>(HBaseProtos.java:852)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema.<init>(HBaseProtos.java:799)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema$1.parsePartialFrom(HBaseProtos.java:923)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema$1.parsePartialFrom(HBaseProtos.java:918)
>   at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor.<init>(HBaseProtos.java:3396)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor.<init>(HBaseProtos.java:3343)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor$1.parsePartialFrom(HBaseProtos.java:3445)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor$1.parsePartialFrom(HBaseProtos.java:3440)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor$Builder.mergeFrom(HBaseProtos.java:3800)
>   at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor$Builder.mergeFrom(HBaseProtos.java:3672)
>   at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
>   at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
>   at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:170)
> {noformat}
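
As context for what "backward compatible" means for the on-disk descriptors, here 
is a rough sketch of a try-the-new-format-then-fall-back read path. ParseFn, 
parseWithFallback and both parser arguments are illustrative stand-ins and do not 
come from the attached patch or the HBase code base:

{code:java}
import java.io.IOException;

/**
 * Illustrative only: a generic "try the new format, fall back to the legacy
 * format" read helper, mirroring the idea of tolerating descriptor files
 * written before the protobuf format change.
 */
public final class CompatReadSketch {

  @FunctionalInterface
  interface ParseFn<T> {
    T parse(byte[] content) throws IOException;
  }

  static <T> T parseWithFallback(byte[] content,
                                 ParseFn<T> newFormat,
                                 ParseFn<T> legacyFormat) throws IOException {
    try {
      // Files written with the new format carry the table state inline.
      return newFormat.parse(content);
    } catch (IOException failedNewFormat) {
      // Files written before the format change should still parse with the
      // legacy schema, so a failure here is treated as "old file on disk".
      return legacyFormat.parse(content);
    }
  }
}
{code}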



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
