[jira] [Commented] (HBASE-7767) Get rid of ZKTable, and table enable/disable state in ZK

2014-09-16 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135014#comment-14135014
 ] 

Andrey Stepachev commented on HBASE-7767:
-

Surprisingly, at some point the pb message was changed and that wasn't handled properly. 
Fixed in https://issues.apache.org/jira/browse/HBASE-11987

 Get rid of ZKTable, and table enable/disable state in ZK 
 -

 Key: HBASE-7767
 URL: https://issues.apache.org/jira/browse/HBASE-7767
 Project: HBase
  Issue Type: Sub-task
  Components: Zookeeper
Affects Versions: 2.0.0
Reporter: Enis Soztutar
Assignee: Andrey Stepachev
 Fix For: 2.0.0

 Attachments: 
 0001-HBASE-7767-Get-rid-of-ZKTable-and-table-enable-disab.patch, 
 0001-rebase.patch, 7767v2.txt, HBASE-7767.patch, HBASE-7767.patch, 
 HBASE-7767.patch, HBASE-7767.patch, HBASE-7767.patch, HBASE-7767.patch, 
 HBASE-7767.patch, HBASE-7767.patch, HBASE-7767.patch, HBASE-7767.patch, 
 HBASE-7767.patch, HBASE-7767.patch, HBASE-7767.patch


 As discussed table state in zookeeper for enable/disable state breaks our 
 zookeeper contract. It is also very intrusive, used from the client side, 
 master and region servers. We should get rid of it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11987) Make zk-less table states backward compatible.

2014-09-16 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-11987:
-
Description: 
The changed protobuf format was not handled properly, so on startup on top of old 
hbase files protobuf raises an exception:
Thanks to [~stack] for finding that.
{noformat}
90 2014-09-15 16:28:12,387 FATAL [ActiveMasterManager] master.HMaster: Failed 
to become active master
 91 java.io.IOException: content=20546
 92   at 
org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:599)
 93   at 
org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:804)
 94   at 
org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:771)
 95   at 
org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:749)
 96   at 
org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:460)
 97   at 
org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:147)
 98   at 
org.apache.hadoop.hbase.master.MasterFileSystem.init(MasterFileSystem.java:127)
 99   at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:488)
100   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:155)
101   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1244)
102   at java.lang.Thread.run(Thread.java:744)
103 Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: 
com.google.protobuf.InvalidProtocolBufferException: While parsing a protocol 
message, the input ended unexpectedly in the middle of a field.  This could 
mean either than the input has been truncated or that an embedded message 
misreported its own length.
104   at 
org.apache.hadoop.hbase.TableDescriptor.parseFrom(TableDescriptor.java:120)
105   at 
org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:597)
106   ... 10 more
107 Caused by: com.google.protobuf.InvalidProtocolBufferException: While 
parsing a protocol message, the input ended unexpectedly in the middle of a 
field.  This could mean either than the input has been truncated or that an 
embedded message misreported its own length.
108   at 
com.google.protobuf.InvalidProtocolBufferException.truncatedMessage(InvalidProtocolBufferException.java:70)
109   at 
com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:728)
110   at 
com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
111   at 
com.google.protobuf.CodedInputStream.readRawLittleEndian64(CodedInputStream.java:488)
112   at 
com.google.protobuf.CodedInputStream.readFixed64(CodedInputStream.java:203)
113   at 
com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:481)
114   at 
com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
115   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName.init(HBaseProtos.java:215)
116   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName.init(HBaseProtos.java:173)
117   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName$1.parsePartialFrom(HBaseProtos.java:261)
118   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName$1.parsePartialFrom(HBaseProtos.java:256)
119   at 
com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
120   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema.init(HBaseProtos.java:852)
121   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema.init(HBaseProtos.java:799)
122   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema$1.parsePartialFrom(HBaseProtos.java:923)
123   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema$1.parsePartialFrom(HBaseProtos.java:918)
124   at 
com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
125   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor.init(HBaseProtos.java:3396)
126   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor.init(HBaseProtos.java:3343)
127   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor$1.parsePartialFrom(HBaseProtos.java:3445)
128   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor$1.parsePartialFrom(HBaseProtos.java:3440)
129   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor$Builder.mergeFrom(HBaseProtos.java:3800)
130   at 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableDescriptor$Builder.mergeFrom(HBaseProtos.java:3672)
131   at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
132   at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
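The failure quoted above is the classic backward-compatibility trap: a reader for the new serialized format rejects files written in the old one. The usual remedy, and roughly the shape of the fix discussed in this thread, is to try the new parser and fall back to the legacy format, migrating the record on the way. A minimal sketch (the class, field names, and string formats here are invented for illustration; this is not HBase's actual TableDescriptor API):

```java
// Hedged sketch, not HBase's real API: parse a file in a new format,
// falling back to the legacy format when the new parser rejects the bytes.
import java.nio.charset.StandardCharsets;

public class CompatParse {

    // Stand-in for the table descriptor; field names are invented.
    static final class TableState {
        final String name;
        final String state;
        TableState(String name, String state) { this.name = name; this.state = state; }
    }

    // Hypothetical new format: "name|state". Old format: bare "name".
    static TableState parseNew(byte[] content) {
        String s = new String(content, StandardCharsets.UTF_8);
        int sep = s.indexOf('|');
        if (sep < 0) {
            // Analogous to InvalidProtocolBufferException on old files.
            throw new IllegalArgumentException("not in new format");
        }
        return new TableState(s.substring(0, sep), s.substring(sep + 1));
    }

    static TableState parseWithFallback(byte[] content) {
        try {
            return parseNew(content);
        } catch (IllegalArgumentException e) {
            // Legacy file carries no state field; migrate it as ENABLED,
            // matching the default discussed in the comments below.
            return new TableState(new String(content, StandardCharsets.UTF_8), "ENABLED");
        }
    }

    public static void main(String[] args) {
        System.out.println(parseWithFallback("t1".getBytes(StandardCharsets.UTF_8)).state);          // ENABLED
        System.out.println(parseWithFallback("t1|DISABLED".getBytes(StandardCharsets.UTF_8)).state); // DISABLED
    }
}
```

The key design choice is that the fallback path must also pick a sane default for fields the old format never recorded, since callers of the new API will expect them to be present.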

[jira] [Updated] (HBASE-11644) External MOB compaction tools

2014-09-16 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-11644:
-
Attachment: HBASE-11644-Sep-16.diff

Uploaded the latest patch (Sep-16); added a necessary comment to the code.

 External MOB compaction tools
 -

 Key: HBASE-11644
 URL: https://issues.apache.org/jira/browse/HBASE-11644
 Project: HBase
  Issue Type: Sub-task
  Components: Compaction, master
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339

 Attachments: HBASE-11644-Sep-15.diff, HBASE-11644-Sep-16.diff, 
 HBASE-11644-Sep-16.diff, HBASE-11644.diff


 From the design doc,  mob files are not involved in the normal HBase 
 compaction process.  This means deleted mobs would still take up space and 
 that we never really merge mob files that accrue over time.   Currently, MOBs 
 depend on two external tools:
 1) A TTL cleaner that removes mobs that have passed their TTL or exceeded 
 minVersions.
 2) A 'sweep tool' cleaner that removes mobs that have had their references 
 deleted and merges small files into larger ones.  
 Today the tools are triggered by admins.  The longer-term goal would be to 
 integrate them into hbase such that by default mobs are cleaned.  The tools 
 will be preserved, however, so that advanced admins can disable automatic 
 cleanups and manually trigger these compaction-like operations.  #1 would 
 likely be a chore in the master while #2 requires some design work to 
 integrate into hbase.
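The TTL-cleaner rule in #1 can be sketched as a simple predicate: a mob cell is removable once it is older than the family TTL, provided at least minVersions newer cells survive. A toy model (the method name, timestamp layout, and parameters are illustrative, not the actual tool's API):

```java
// Hedged sketch of the TTL-cleaner rule: expire a cell only when it is
// past its TTL AND enough newer versions remain to satisfy minVersions.
import java.util.List;

public class MobTtlCheck {

    /**
     * @param timestamps cell timestamps for one row/column, newest first (ms)
     * @param index      position of the cell being checked
     * @param ttlMs      family time-to-live in milliseconds
     * @param minVersions versions to keep regardless of TTL
     * @param nowMs      current time in milliseconds
     */
    static boolean isExpired(List<Long> timestamps, int index,
                             long ttlMs, int minVersions, long nowMs) {
        boolean pastTtl = nowMs - timestamps.get(index) > ttlMs;
        // Newest-first ordering means `index` cells are newer than this one.
        boolean enoughNewerKept = index >= minVersions;
        return pastTtl && enoughNewerKept;
    }

    public static void main(String[] args) {
        List<Long> ts = List.of(9_000L, 5_000L, 1_000L); // newest first
        long now = 10_000L, ttl = 3_000L;
        // Oldest cell: past TTL, two newer versions exist -> removable.
        System.out.println(isExpired(ts, 2, ttl, 1, now)); // true
        // Newest cell: still within TTL -> kept.
        System.out.println(isExpired(ts, 0, ttl, 1, now)); // false
    }
}
```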





[jira] [Commented] (HBASE-11987) Make zk-less table states backward compatible.

2014-09-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135032#comment-14135032
 ] 

stack commented on HBASE-11987:
---

Your patch is better than the one I was hacking up here.  Let me adjust a 
little... There is some repeated code, and let me warn that the table is 
migrated in ENABLED state.  One sec.


 Make zk-less table states backward compatible.
 --

 Key: HBASE-11987
 URL: https://issues.apache.org/jira/browse/HBASE-11987
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Attachments: HBASE-11987.patch



[jira] [Updated] (HBASE-11987) Make zk-less table states backward compatible.

2014-09-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11987:
--
Attachment: 11987v2.txt

Your patch, [~octo47], with a minor change to the log message and the repeated 
code made into a method.  Thanks for the test too.

 Make zk-less table states backward compatible.
 --

 Key: HBASE-11987
 URL: https://issues.apache.org/jira/browse/HBASE-11987
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Attachments: 11987v2.txt, HBASE-11987.patch



[jira] [Commented] (HBASE-11987) Make zk-less table states backward compatible.

2014-09-16 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135036#comment-14135036
 ] 

Andrey Stepachev commented on HBASE-11987:
--

Checked that on state migration all should work as expected: we will read the 
state, rewrite it in ENABLED state, and then update it with whatever we find in 
zk.  Sounds like that should work.
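The migration order described in that comment can be sketched in a few lines: every descriptor read from the filesystem is first rewritten with a default of ENABLED, and any state still recorded in ZooKeeper then overrides that default. A hypothetical sketch (method and map names are invented, not HBase's actual migration code):

```java
// Hedged sketch of the migration order: FS rewrite defaults to ENABLED,
// then ZooKeeper state, where present, wins.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StateMigration {

    static Map<String, String> migrate(Iterable<String> tables,
                                       Map<String, String> zkStates) {
        Map<String, String> fsStates = new HashMap<>();
        for (String table : tables) {
            fsStates.put(table, "ENABLED");   // step 1: rewrite with default
        }
        zkStates.forEach(fsStates::put);      // step 2: ZK state overrides
        return fsStates;
    }

    public static void main(String[] args) {
        Map<String, String> out = migrate(
            List.of("t1", "t2"),
            Map.of("t2", "DISABLED"));        // only t2 has state in ZK
        System.out.println(out.get("t1"));    // ENABLED
        System.out.println(out.get("t2"));    // DISABLED
    }
}
```

The ordering matters: applying the ZK states last is what keeps a table that was disabled before the upgrade from briefly appearing enabled in the new store.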

 Make zk-less table states backward compatible.
 --

 Key: HBASE-11987
 URL: https://issues.apache.org/jira/browse/HBASE-11987
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Attachments: 11987v2.txt, HBASE-11987.patch



[jira] [Commented] (HBASE-11987) Make zk-less table states backward compatible.

2014-09-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135038#comment-14135038
 ] 

stack commented on HBASE-11987:
---

I tried it here locally w/ an /hbase that had the old format.  It works for me, 
+1.  Let me just commit.

 Make zk-less table states backward compatible.
 --

 Key: HBASE-11987
 URL: https://issues.apache.org/jira/browse/HBASE-11987
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Attachments: 11987v2.txt, HBASE-11987.patch



[jira] [Commented] (HBASE-11644) External MOB compaction tools

2014-09-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135037#comment-14135037
 ] 

Hadoop QA commented on HBASE-11644:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12668985/HBASE-11644-Sep-16.diff
  against trunk revision .
  ATTACHMENT ID: 12668985

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 16 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10917//console

This message is automatically generated.

 External MOB compaction tools
 -

 Key: HBASE-11644
 URL: https://issues.apache.org/jira/browse/HBASE-11644
 Project: HBase
  Issue Type: Sub-task
  Components: Compaction, master
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339

 Attachments: HBASE-11644-Sep-15.diff, HBASE-11644-Sep-16.diff, 
 HBASE-11644-Sep-16.diff, HBASE-11644.diff


 From the design doc,  mob files are not involved in the normal HBase 
 compaction process.  This means deleted mobs would still take up space and 
 that we never really merge mob files that accrue over time.   Currently, MOBs 
 depend on two external tools:
 1) A TTL cleaner that removes mobs that have passed their TTL or exceeded 
 minVersions.
 2) A 'sweep tool' cleaner that removes mobs that have had their references 
 deleted and merges small files into larger ones.  
 Today the tools are triggered by admins.  The longer term goal would be to 
 integrate them into hbase such that by default mobs are cleaned.  The tools 
 will be preserved however so that advanced admins can disable automatic 
 cleanups and manually trigger these compaction like operaitons.  #1 would 
 likely be a chore in the master while #2 requires some design work to 
 integrate into hbase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11987) Make zk-less table states backward compatible.

2014-09-16 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135039#comment-14135039
 ] 

Andrey Stepachev commented on HBASE-11987:
--

Great. Thanks!

 Make zk-less table states backward compatible.
 --

 Key: HBASE-11987
 URL: https://issues.apache.org/jira/browse/HBASE-11987
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Attachments: 11987v2.txt, HBASE-11987.patch


 The changed protobuf format was not handled properly, so on startup on top of old 
 hbase files protobuf raises an exception:
 Thanks to [~stack] for finding that.

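The usual fix pattern for this kind of on-disk incompatibility is a guarded fallback: try the new serialization first and retry with the legacy one when parsing fails. A self-contained sketch of the idea, with toy formats standing in for the real protobuf records (illustrative only, not the HBASE-11987 code):

```java
// Sketch of a backward-compatible reader: attempt the new on-disk format
// first and fall back to the legacy format when parsing fails.
// The formats here are toy stand-ins (a "v2:" prefix), not real protobuf.
public class CompatReader {

  // Parse the new format; reject anything that is not a v2 record.
  static String parseNew(byte[] raw) {
    String s = new String(raw, java.nio.charset.StandardCharsets.UTF_8);
    if (!s.startsWith("v2:")) {
      throw new IllegalArgumentException("not a v2 record");
    }
    return s.substring(3);
  }

  // Legacy format: the raw bytes are the payload.
  static String parseOld(byte[] raw) {
    return new String(raw, java.nio.charset.StandardCharsets.UTF_8);
  }

  // Try the new format, fall back to the old one on failure.
  static String read(byte[] raw) {
    try {
      return parseNew(raw);
    } catch (IllegalArgumentException e) {
      return parseOld(raw);
    }
  }
}
```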
[jira] [Commented] (HBASE-11987) Make zk-less table states backward compatible.

2014-09-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135042#comment-14135042
 ] 

Hadoop QA commented on HBASE-11987:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668982/HBASE-11987.patch
  against trunk revision .
  ATTACHMENT ID: 12668982

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestClassFinder

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10915//console

This message is automatically generated.

 Make zk-less table states backward compatible.
 --

 Key: HBASE-11987
 URL: https://issues.apache.org/jira/browse/HBASE-11987
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Attachments: 11987v2.txt, HBASE-11987.patch


 The changed protobuf format was not handled properly, so on startup on top of old 
 hbase files protobuf raises an exception:
 Thanks to [~stack] for finding that.

[jira] [Commented] (HBASE-11974) When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException

2014-09-16 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135041#comment-14135041
 ] 

Andrey Stepachev commented on HBASE-11974:
--

[~enis] yeah, you are right, and that creates additional load on the master (with 
HBASE-7767 the master will serve isEnabled requests). The client should do that 
check and maybe cache the result.
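Such a client-side cache could be as simple as a time-bounded map in front of the master lookup, so repeated isEnabled checks do not each turn into a master RPC. A hedged sketch with a hypothetical helper class (not HBase client code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;

// Sketch: cache the master's isEnabled answer per table for a short TTL so
// repeated checks do not each become a master RPC. Illustrative only; this
// is not the actual HBase client code.
public class TableStateCache {

  private static class Entry {
    final boolean enabled;
    final long fetchedAtMs;
    Entry(boolean enabled, long fetchedAtMs) {
      this.enabled = enabled;
      this.fetchedAtMs = fetchedAtMs;
    }
  }

  private final Map<String, Entry> cache = new ConcurrentHashMap<>();
  private final Predicate<String> masterLookup; // stand-in for the master RPC
  private final long ttlMs;

  TableStateCache(Predicate<String> masterLookup, long ttlMs) {
    this.masterLookup = masterLookup;
    this.ttlMs = ttlMs;
  }

  // Serve from cache while fresh; re-ask the "master" once the entry expires.
  boolean isEnabled(String table, long nowMs) {
    Entry e = cache.get(table);
    if (e == null || nowMs - e.fetchedAtMs > ttlMs) {
      e = new Entry(masterLookup.test(table), nowMs);
      cache.put(table, e);
    }
    return e.enabled;
  }
}
```

The trade-off is staleness: a table disabled on the master may still look enabled to the client until the TTL expires.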

 When a disabled table is scanned, NotServingRegionException is thrown instead 
 of TableNotEnabledException
 -

 Key: HBASE-11974
 URL: https://issues.apache.org/jira/browse/HBASE-11974
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 11974-test.patch, 11974-v1.txt, 11974-v2.txt, 
 11974-v3.txt, 11974-v4.txt


 When a disabled table is scanned, TableNotEnabledException should be thrown.
 However, currently NotServingRegionException is thrown.
 Thanks to Romil Choksi who discovered this problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11987) Make zk-less table states backward compatible.

2014-09-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11987:
--
   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks [~octo47].  Let me make sure we backport this if we 
backport the rest.  Thanks.

 Make zk-less table states backward compatible.
 --

 Key: HBASE-11987
 URL: https://issues.apache.org/jira/browse/HBASE-11987
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Fix For: 2.0.0

 Attachments: 11987v2.txt, HBASE-11987.patch


 The changed protobuf format was not handled properly, so on startup on top of old 
 hbase files protobuf raises an exception:
 Thanks to [~stack] for finding that.

[jira] [Commented] (HBASE-11978) Backport 'HBASE-7767 Get rid of ZKTable, and table enable/disable state in ZK' to 1.0

2014-09-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135052#comment-14135052
 ] 

stack commented on HBASE-11978:
---

Will need HBASE-11987 if we backport.

 Backport 'HBASE-7767 Get rid of ZKTable, and table enable/disable state in 
 ZK' to 1.0
 -

 Key: HBASE-11978
 URL: https://issues.apache.org/jira/browse/HBASE-11978
 Project: HBase
  Issue Type: Task
  Components: Zookeeper
Reporter: stack
Assignee: stack
 Fix For: 0.99.1


 Let's try and backport this nice cleanup.  It removes enabled/disabled state 
 from zk.   It automigrates so it SHOULD not be an issue.  Let me test though 
 first.  Marking against 0.99.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-11988) AC/VC system table create on postStartMaster fails too often in test

2014-09-16 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-11988:
--

 Summary: AC/VC system table create on postStartMaster fails too 
often in test
 Key: HBASE-11988
 URL: https://issues.apache.org/jira/browse/HBASE-11988
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John


See for example
{noformat}
2014-09-16 04:02:08,833 ERROR [ActiveMasterManager] master.HMaster(633): 
Coprocessor postStartMaster() hook failed
java.io.IOException: Table Namespace Manager not ready yet, try again later
at 
org.apache.hadoop.hbase.master.HMaster.checkNamespaceManagerReady(HMaster.java:1669)
at 
org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:1852)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1096)
at 
org.apache.hadoop.hbase.security.access.AccessControlLists.init(AccessControlLists.java:143)
at 
org.apache.hadoop.hbase.security.access.AccessController.postStartMaster(AccessController.java:1059)
at 
org.apache.hadoop.hbase.master.MasterCoprocessorHost$58.call(MasterCoprocessorHost.java:692)
at 
org.apache.hadoop.hbase.master.MasterCoprocessorHost.execOperation(MasterCoprocessorHost.java:861)
at 
org.apache.hadoop.hbase.master.MasterCoprocessorHost.postStartMaster(MasterCoprocessorHost.java:688)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:631)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:155)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1244)
at java.lang.Thread.run(Thread.java:744)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11974) When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException

2014-09-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135055#comment-14135055
 ] 

Enis Soztutar commented on HBASE-11974:
---

bq. I haven't found a clean way to do the check in ConnectionManager layer.
We cannot do it the way the v4 patch does. Doing a ZK RPC for every scanner open is 
very costly. 

 When a disabled table is scanned, NotServingRegionException is thrown instead 
 of TableNotEnabledException
 -

 Key: HBASE-11974
 URL: https://issues.apache.org/jira/browse/HBASE-11974
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 11974-test.patch, 11974-v1.txt, 11974-v2.txt, 
 11974-v3.txt, 11974-v4.txt


 When a disabled table is scanned, TableNotEnabledException should be thrown.
 However, currently NotServingRegionException is thrown.
 Thanks to Romil Choksi who discovered this problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11988) AC/VC system table create on postStartMaster fails too often in test

2014-09-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135057#comment-14135057
 ] 

Anoop Sam John commented on HBASE-11988:


{noformat}
2014-09-15 20:33:29,003 DEBUG 
[PostOpenDeployTasks:f364c75f82c175a3b5c2c7d7cf810ce3] 
master.AssignmentManager(2676): Got transition OPENED for 
{f364c75f82c175a3b5c2c7d7cf810ce3 state=PENDING_OPEN, ts=1410813208967, 
server=asf901.gq1.ygridcore.net,44530,1410813206686} from 
asf901.gq1.ygridcore.net,44530,1410813206686
2014-09-15 20:33:29,003 INFO  
[PostOpenDeployTasks:f364c75f82c175a3b5c2c7d7cf810ce3] 
master.RegionStates(949): Transition {f364c75f82c175a3b5c2c7d7cf810ce3 
state=PENDING_OPEN, ts=1410813208967, 
server=asf901.gq1.ygridcore.net,44530,1410813206686} to 
{f364c75f82c175a3b5c2c7d7cf810ce3 state=OPEN, ts=1410813209003, 
server=asf901.gq1.ygridcore.net,44530,1410813206686}
2014-09-15 20:33:29,003 INFO  
[PostOpenDeployTasks:f364c75f82c175a3b5c2c7d7cf810ce3] 
master.RegionStateStore(207): Updating row 
hbase:namespace,,1410813208374.f364c75f82c175a3b5c2c7d7cf810ce3. with 
state=OPEN, openSeqNum=2, server=asf901.gq1.ygridcore.net,44530,1410813206686
2014-09-15 20:33:29,006 INFO  
[PostOpenDeployTasks:f364c75f82c175a3b5c2c7d7cf810ce3] 
master.RegionStates(381): Onlined f364c75f82c175a3b5c2c7d7cf810ce3 on 
asf901.gq1.ygridcore.net,44530,1410813206686
2014-09-15 20:33:29,007 DEBUG 
[PostOpenDeployTasks:f364c75f82c175a3b5c2c7d7cf810ce3] 
regionserver.HRegionServer(1735): Finished post open deploy task for 
hbase:namespace,,1410813208374.f364c75f82c175a3b5c2c7d7cf810ce3.
2014-09-15 20:33:29,007 DEBUG [RS_OPEN_REGION-asf901:44530-0] 
handler.OpenRegionHandler(122): Opened 
hbase:namespace,,1410813208374.f364c75f82c175a3b5c2c7d7cf810ce3. on 
asf901.gq1.ygridcore.net,44530,1410813206686
2014-09-15 20:33:29,009 INFO  [ActiveMasterManager] master.HMaster(621): Master 
has completed initialization
2014-09-15 20:33:29,025 ERROR [ActiveMasterManager] master.HMaster(633): 
Coprocessor postStartMaster() hook failed
java.io.IOException: Table Namespace Manager not ready yet, try again later
at 
org.apache.hadoop.hbase.master.HMaster.checkNamespaceManagerReady(HMaster.java:1669)
at 
org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:1852)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1096)
at 
org.apache.hadoop.hbase.security.visibility.VisibilityController.postStartMaster(VisibilityController.java:186)
at 
org.apache.hadoop.hbase.master.MasterCoprocessorHost$58.call(MasterCoprocessorHost.java:692)
at 
org.apache.hadoop.hbase.master.MasterCoprocessorHost.execOperation(MasterCoprocessorHost.java:861)
at 
org.apache.hadoop.hbase.master.MasterCoprocessorHost.postStartMaster(MasterCoprocessorHost.java:688)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:631)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:155)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1244)
at java.lang.Thread.run(Thread.java:744)
{noformat}
So the ns table creation was over and the table available before the VC tried 
creating the labels table. 
{code}
void checkNamespaceManagerReady() throws IOException {
  checkInitialized();
  if (tableNamespaceManager == null ||
      !tableNamespaceManager.isTableAvailableAndInitialized()) {
    throw new IOException("Table Namespace Manager not ready yet, try again later");
  }
}
{code}
The check tests whether the ns table is available and *active*.  The active-state 
check will fail: as we can see in the log, the state in the master changed only 
after this exception was logged
{noformat}
2014-09-15 20:33:29,410 DEBUG [MASTER_TABLE_OPERATIONS-asf901:44530-0] 
util.FSTableDescriptors(658): Deleted table descriptor at 
hdfs://localhost:34588/user/jenkins/hbase/data/hbase/namespace/.tabledesc/.tableinfo.01
2014-09-15 20:33:29,410 INFO  [MASTER_TABLE_OPERATIONS-asf901:44530-0] 
util.FSTableDescriptors(624): Updated 
tableinfo=hdfs://localhost:34588/user/jenkins/hbase/data/hbase/namespace/.tabledesc/.tableinfo.02
2014-09-15 20:33:29,412 DEBUG [MASTER_TABLE_OPERATIONS-asf901:44530-0] 
master.TableStateManager(197): Table hbase:namespace written descriptor for 
state ENABLED
2014-09-15 20:33:29,412 DEBUG [MASTER_TABLE_OPERATIONS-asf901:44530-0] 
master.TableStateManager(199): Table hbase:namespace updated state to ENABLED
{noformat}
Now my question is this: during master init we create the ns table if it is not 
there, and wait in a while loop until the ns table is available (region opened) 
(see TableNamespaceManager#start()).  Can this loop check for the ns table's 
active state also? 
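The loop described above could wait on both conditions under the same bound. A minimal, deterministic sketch with stand-in suppliers (not the real TableNamespaceManager#start() code):

```java
import java.util.function.BooleanSupplier;

// Sketch: extend the "wait until the ns table is available" loop so it also
// waits for the table state to become ENABLED. The suppliers stand in for
// the real availability/state checks; a bounded attempt count replaces the
// wall-clock timeout to keep the example deterministic.
public class NamespaceWait {

  // Poll both conditions up to maxAttempts times. The real loop would sleep
  // between retries and bound by a timeout instead of an attempt count.
  static boolean waitForReady(BooleanSupplier available, BooleanSupplier enabled,
      int maxAttempts) {
    for (int i = 0; i < maxAttempts; i++) {
      if (available.getAsBoolean() && enabled.getAsBoolean()) {
        return true;
      }
    }
    return false;
  }
}
```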

 AC/VC system table create on postStartMaster fails too often in test
 

 Key: HBASE-11988
   

[jira] [Updated] (HBASE-11986) Document MOB in Ref Guide

2014-09-16 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11986:

Attachment: HBASE-11986.patch

Documented MOB in a new chapter, made a new section in the Schema chapter about 
cell size, added link from cell size troubleshooting area, added MOB cache 
properties to hbase-default.xml.

 Document MOB in Ref Guide
 -

 Key: HBASE-11986
 URL: https://issues.apache.org/jira/browse/HBASE-11986
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Fix For: hbase-11339

 Attachments: HBASE-11986.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11986) Document MOB in Ref Guide

2014-09-16 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11986:

Status: Patch Available  (was: Open)

 Document MOB in Ref Guide
 -

 Key: HBASE-11986
 URL: https://issues.apache.org/jira/browse/HBASE-11986
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Fix For: hbase-11339

 Attachments: HBASE-11986.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11988) AC/VC system table create on postStartMaster fails too often in test

2014-09-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135066#comment-14135066
 ] 

Anoop Sam John commented on HBASE-11988:


Any recent change on the AM side that changes the order or the way the data 
structures (states) are updated in HMaster memory?  We didn't see these 
failures until last week, I think. Any ideas?

 AC/VC system table create on postStartMaster fails too often in test
 

 Key: HBASE-11988
 URL: https://issues.apache.org/jira/browse/HBASE-11988
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11367) Pluggable replication endpoint

2014-09-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135069#comment-14135069
 ] 

Enis Soztutar commented on HBASE-11367:
---

bq. I need couple of days before I start on that. BTW Anoop has given comments 
on HBASE-11920. I will update that patch and can we commit that to master and 
branch-1?
Sorry, I was not able to get to it yet. I have to read through the parent jira. 
Will do that tomorrow. 


 Pluggable replication endpoint
 --

 Key: HBASE-11367
 URL: https://issues.apache.org/jira/browse/HBASE-11367
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Blocker
 Fix For: 0.99.0, 2.0.0

 Attachments: 0001-11367.patch, hbase-11367_v1.patch, 
 hbase-11367_v2.patch, hbase-11367_v3.patch, hbase-11367_v4.patch, 
 hbase-11367_v4.patch, hbase-11367_v5.patch


 We need a pluggable endpoint for replication for more flexibility. See parent 
 jira for more context. 
 ReplicationSource tails the logs for each peer. This jira introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster. 
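The per-peer plugin shape described above can be sketched in plain Java; the interface and registry below are illustrative stand-ins, not the real ReplicationEndpoint API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: one endpoint instance per replication peer, looked up
// by peer id, receiving batches of WAL edits to ship. A custom endpoint
// could index the edits, forward them to another system, etc.
public class EndpointSketch {
    interface ReplicationEndpoint {
        boolean replicate(List<String> walEdits);   // ship a batch; true on success
    }

    static final Map<String, ReplicationEndpoint> PEERS = new HashMap<>();

    static boolean shipTo(String peerId, List<String> edits) {
        ReplicationEndpoint ep = PEERS.get(peerId);
        return ep != null && ep.replicate(edits);
    }

    public static void main(String[] args) {
        PEERS.put("peer1", edits -> {
            System.out.println("shipped " + edits.size());
            return true;
        });
        System.out.println(shipTo("peer1", List.of("edit-a", "edit-b")));
    }
}
```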





[jira] [Updated] (HBASE-11367) Pluggable replication endpoint

2014-09-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-11367:
--
Attachment: hbase-11367_0.98.patch

Attaching a backported 0.98 patch which should apply on top of 0.98.4 or so. 
This can be used as a base for a backport jira if we want to do it. 

 Pluggable replication endpoint
 --

 Key: HBASE-11367
 URL: https://issues.apache.org/jira/browse/HBASE-11367
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Blocker
 Fix For: 0.99.0, 2.0.0

 Attachments: 0001-11367.patch, hbase-11367_0.98.patch, 
 hbase-11367_v1.patch, hbase-11367_v2.patch, hbase-11367_v3.patch, 
 hbase-11367_v4.patch, hbase-11367_v4.patch, hbase-11367_v5.patch


 We need a pluggable endpoint for replication for more flexibility. See parent 
 jira for more context. 
 ReplicationSource tails the logs for each peer. This jira introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster. 





[jira] [Updated] (HBASE-11974) When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException

2014-09-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11974:
---
Attachment: 11974-v5.txt

 When a disabled table is scanned, NotServingRegionException is thrown instead 
 of TableNotEnabledException
 -

 Key: HBASE-11974
 URL: https://issues.apache.org/jira/browse/HBASE-11974
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 11974-test.patch, 11974-v1.txt, 11974-v2.txt, 
 11974-v3.txt, 11974-v4.txt, 11974-v5.txt


 When a disabled table is scanned, TableNotEnabledException should be thrown.
 However, currently NotServingRegionException is thrown.
 Thanks to Romil Choksi who discovered this problem.
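The intended behavior can be sketched as a fail-fast state check before any region lookup; the types below are self-contained stand-ins, not the real HBase client classes:

```java
// Illustrative sketch: check table state first so a scan of a disabled table
// surfaces TableNotEnabledException rather than letting the region lookup
// fail later with NotServingRegionException.
public class ScanGate {
    enum TableState { ENABLED, DISABLED }

    static class TableNotEnabledException extends RuntimeException {
        TableNotEnabledException(String table) { super(table); }
    }

    static String scan(String table, TableState state) {
        if (state == TableState.DISABLED) {
            // Fail fast with the precise cause.
            throw new TableNotEnabledException(table);
        }
        return "rows-from-" + table;
    }

    public static void main(String[] args) {
        System.out.println(scan("t1", TableState.ENABLED));
        try {
            scan("t2", TableState.DISABLED);
        } catch (TableNotEnabledException e) {
            System.out.println("TableNotEnabledException: " + e.getMessage());
        }
    }
}
```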





[jira] [Commented] (HBASE-11987) Make zk-less table states backward compatible.

2014-09-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135080#comment-14135080
 ] 

Hadoop QA commented on HBASE-11987:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668987/11987v2.txt
  against trunk revision .
  ATTACHMENT ID: 12668987

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint
  org.apache.hadoop.hbase.master.handler.TestCreateTableHandler
  org.apache.hadoop.hbase.rest.TestScannersWithLabels

 {color:red}-1 core zombie tests{color}.  There are 3 zombie test(s):   
at 
org.apache.hadoop.hbase.TestAcidGuarantees.testGetAtomicity(TestAcidGuarantees.java:332)
at 
org.apache.camel.test.junit4.CamelTestSupport.doStopCamelContext(CamelTestSupport.java:450)
at 
org.apache.camel.test.junit4.CamelTestSupport.tearDown(CamelTestSupport.java:351)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10916//console

This message is automatically generated.

 Make zk-less table states backward compatible.
 --

 Key: HBASE-11987
 URL: https://issues.apache.org/jira/browse/HBASE-11987
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Fix For: 2.0.0

 Attachments: 11987v2.txt, HBASE-11987.patch


 The changed protobuf format is not handled properly, so on startup on top of 
 old hbase files protobuf raises an exception:
 Thanks to [~stack] for finding that.
 {noformat}
 90 2014-09-15 16:28:12,387 FATAL [ActiveMasterManager] master.HMaster: Failed 
 to become active master
  91 java.io.IOException: content=20546
  92   at 
 org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:599)
  93   at 
 org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:804)
  94   at 
 org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:771)
  95   at 
 org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:749)
  96   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:460)
  97   at 
 

[jira] [Commented] (HBASE-11986) Document MOB in Ref Guide

2014-09-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135084#comment-14135084
 ] 

Hadoop QA commented on HBASE-11986:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668989/HBASE-11986.patch
  against trunk revision .
  ATTACHMENT ID: 12668989

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestClassFinder

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.camel.test.junit4.CamelTestSupport.doStopCamelContext(CamelTestSupport.java:450)
at 
org.apache.camel.test.junit4.CamelTestSupport.tearDown(CamelTestSupport.java:351)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10918//console

This message is automatically generated.

 Document MOB in Ref Guide
 -

 Key: HBASE-11986
 URL: https://issues.apache.org/jira/browse/HBASE-11986
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Fix For: hbase-11339

 Attachments: HBASE-11986.patch








[jira] [Commented] (HBASE-11984) TestClassFinder failing on occasion

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135087#comment-14135087
 ] 

Hudson commented on HBASE-11984:


FAILURE: Integrated in HBase-TRUNK #5510 (See 
[https://builds.apache.org/job/HBase-TRUNK/5510/])
HBASE-11984 TestClassFinder failing on occasion -- Add DEBUGGING (stack: rev 
cc6fe16e592e1c4115b0c98db3d63ddd2d5a118b)
* hbase-common/src/test/java/org/apache/hadoop/hbase/TestClassFinder.java


 TestClassFinder failing on occasion
 ---

 Key: HBASE-11984
 URL: https://issues.apache.org/jira/browse/HBASE-11984
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 2.0.0

 Attachments: 0001-More-debug.patch


 Failed like this:
 {code}
 Tests run: 11, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 1.913 sec 
 <<< FAILURE! - in org.apache.hadoop.hbase.TestClassFinder
 testClassFinderFiltersByClassInDirs(org.apache.hadoop.hbase.TestClassFinder)  
 Time elapsed: 0.028 sec <<< FAILURE!
 java.lang.AssertionError: expected:<-1> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByClassInDirs(TestClassFinder.java:259)
 testClassFinderCanFindClassesInDirs(org.apache.hadoop.hbase.TestClassFinder)  
 Time elapsed: 0.017 sec <<< FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.hadoop.hbase.TestClassFinder.testClassFinderCanFindClassesInDirs(TestClassFinder.java:223)
 testClassFinderFiltersByNameInDirs(org.apache.hadoop.hbase.TestClassFinder)  
 Time elapsed: 0.018 sec <<< FAILURE!
 java.lang.AssertionError: expected:<-1> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByNameInDirs(TestClassFinder.java:242)
 {code}
 ... in precommit 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10912/console





[jira] [Commented] (HBASE-11874) Support Cell to be passed to StoreFile.Writer rather than KeyValue

2014-09-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135110#comment-14135110
 ] 

Anoop Sam John commented on HBASE-11874:


Thanks Stack.
I have to add some UTs as suggested by Ram.
Any more comments, Ram? If none, I plan to commit this tomorrow with the 
addition of the suggested UTs.

 Support Cell to be passed to StoreFile.Writer rather than KeyValue
 --

 Key: HBASE-11874
 URL: https://issues.apache.org/jira/browse/HBASE-11874
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-11874.patch, HBASE-11874_V2.patch, 
 HBASE-11874_V3.patch, HBASE-11874_V3.patch, HBASE-11874_V4.patch, 
 HBASE-11874_V5.patch


 This is in the write path and touches StoreFile.Writer, HFileWriter, 
 HFileDataBlockEncoder and the different DataBlockEncoder impls.
 We will have to avoid KV#getBuffer() and KV#getKeyOffset/Length() calls.





[jira] [Commented] (HBASE-11873) Hbase Version CLI enhancement

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135113#comment-14135113
 ] 

Hudson commented on HBASE-11873:


FAILURE: Integrated in HBase-1.0 #187 (See 
[https://builds.apache.org/job/HBase-1.0/187/])
HBASE-11873 Hbase Version CLI enhancement (Ashish Singhi) (stack: rev 
8b4da86dcba97d0415f6429251bef5915e31e3c6)
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RSStatusTmpl.jamon
* hbase-common/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java
* hbase-common/src/saveVersion.sh


 Hbase Version CLI enhancement
 -

 Key: HBASE-11873
 URL: https://issues.apache.org/jira/browse/HBASE-11873
 Project: HBase
  Issue Type: Improvement
  Components: build
Reporter: Guo Ruijing
Priority: Minor
  Labels: beginner
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-11873-1.patch, HBASE-11873-2.patch, 
 HBASE-11873-3.patch, HBASE-11873.patch


 Hbase Version CLI enhancements:
 1) include source code checksum.
 2) change Subversion to Source code repository
 Existing implementation:
 [hadoop@localhost p4_wspaces]$ hbase version
 2014-09-01 03:29:40,773 INFO  [main] util.VersionInfo: HBase 0.98.1-hadoop2
 2014-09-01 03:29:40,773 INFO  [main] util.VersionInfo: Subversion ...
 2014-09-01 03:29:40,773 INFO  [main] util.VersionInfo: Compiled by ...
 Expected implementation:
 [hadoop@localhost p4_wspaces]$ hbase version
 2014-09-01 03:29:40,773 INFO  [main] util.VersionInfo: HBase 0.98.1-hadoop2
 2014-09-01 03:29:40,773 INFO  [main] util.VersionInfo: Source code repository 
 ...change Subversion to Source code repository
 2014-09-01 03:29:40,773 INFO  [main] util.VersionInfo: Compiled by ...
 2014-09-01 03:29:40,773 INFO  [main] util.VersionInfo: From source with 
 checksum eb1b9e8d63c302bed1168a7122d70   include source code checksum





[jira] [Commented] (HBASE-7847) Use zookeeper multi to clear znodes

2014-09-16 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135115#comment-14135115
 ] 

Rakesh R commented on HBASE-7847:
-

ping [~stack]

 Use zookeeper multi to clear znodes
 ---

 Key: HBASE-7847
 URL: https://issues.apache.org/jira/browse/HBASE-7847
 Project: HBase
  Issue Type: Sub-task
Reporter: Ted Yu
Assignee: Rakesh R
 Attachments: 7847-v1.txt, 7847_v6.patch, 7847_v6.patch, 
 HBASE-7847.patch, HBASE-7847.patch, HBASE-7847.patch, HBASE-7847_v4.patch, 
 HBASE-7847_v5.patch, HBASE-7847_v6.patch


 In ZKProcedureUtil, clearChildZNodes() and clearZNodes(String procedureName) 
 should utilize zookeeper multi so that they're atomic
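The all-or-nothing delete can be sketched as gathering every delete into one batch, children before the parent (a znode cannot be deleted while it has children), and submitting the batch as a single multi call. The paths and the helper below are illustrative, not the actual ZKProcedureUtil code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of batching deletes for an atomic ZooKeeper multi: either every
// znode in the batch is deleted or none of them is.
public class MultiDeleteSketch {
    static List<String> buildDeletes(String parent, List<String> children) {
        List<String> ops = new ArrayList<>();
        for (String child : children) {
            ops.add(parent + "/" + child);   // children first
        }
        ops.add(parent);                      // parent last
        return ops;                           // submit as one zk.multi(...) batch
    }

    public static void main(String[] args) {
        System.out.println(buildDeletes("/hbase/procedure/snap",
                List.of("member1", "member2")));
    }
}
```

The real implementation would map each path to `Op.delete(path, version)` and pass the list to `ZooKeeper.multi`.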





[jira] [Commented] (HBASE-11791) Update docs on visibility tags and ACLs, transparent encryption, secure bulk upload

2014-09-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135128#comment-14135128
 ] 

Anoop Sam John commented on HBASE-11791:


You marked some of the comments from me as not clear. I tried answering. Any 
further Qs, Misty? I think we are good to go (once the latest comments from Sean 
are addressed). Thanks for the excellent work Misty!

 Update docs on visibility tags and ACLs, transparent encryption, secure bulk 
 upload
 ---

 Key: HBASE-11791
 URL: https://issues.apache.org/jira/browse/HBASE-11791
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Attachments: HBASE-11791-v1.patch, HBASE-11791-v10.patch, 
 HBASE-11791-v11.patch, HBASE-11791-v12.patch, HBASE-11791-v2.patch, 
 HBASE-11791-v3.patch, HBASE-11791-v4.patch, HBASE-11791-v5.patch, 
 HBASE-11791-v6.patch, HBASE-11791-v7.patch, HBASE-11791-v9.patch, HBase 
 Security Features Operators Guide - HBaseCon 2014 - v5.pptx, 
 LDAPScanLabelGenerator.png


 Do a pass on the ACL and tag docs and make sure they are up to date and 
 accurate, expand to cover HBASE-10885, HBASE-11001, HBASE-11002, HBASE-11434





[jira] [Commented] (HBASE-11982) Bootstraping hbase:meta table creates a WAL file in region dir

2014-09-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135133#comment-14135133
 ] 

Hadoop QA commented on HBASE-11982:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668962/hbase-11982_v1.patch
  against trunk revision .
  ATTACHMENT ID: 12668962

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.access.TestAccessController
  org.apache.hadoop.hbase.master.TestMasterFailover
  org.apache.hadoop.hbase.replication.TestPerTableCFReplication
  org.apache.hadoop.hbase.TestZooKeeper
  
org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles
  org.apache.hadoop.hbase.client.TestReplicaWithCluster
  
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
  
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint
  org.apache.hadoop.hbase.regionserver.TestRegionFavoredNodes
  
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDistributedLogReplay
  org.apache.hadoop.hbase.client.TestMultiParallel
  org.apache.hadoop.hbase.master.TestRegionPlacement

 {color:red}-1 core zombie tests{color}.  There are 3 zombie test(s):   
at 
org.apache.hadoop.hbase.client.TestReplicasClient.testSmallScanWithReplicas(TestReplicasClient.java:537)
at 
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testCompactionRecordDoesntBlockRolling(TestLogRolling.java:628)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:288)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:262)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportWithTargetName(TestExportSnapshot.java:220)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10911//console

This message is automatically generated.

 Bootstraping hbase:meta table creates a WAL file in region dir
 --

 Key: HBASE-11982
 URL: 

[jira] [Commented] (HBASE-11825) Create Connection and ConnectionManager

2014-09-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135137#comment-14135137
 ] 

Enis Soztutar commented on HBASE-11825:
---

Waiting for the unit test run before committing this. 

 Create Connection and ConnectionManager
 ---

 Key: HBASE-11825
 URL: https://issues.apache.org/jira/browse/HBASE-11825
 Project: HBase
  Issue Type: Improvement
Reporter: Carter
Assignee: Solomon Duskis
Priority: Critical
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE_11825.patch, HBASE_11825_v1.patch


 This is further cleanup of the HBase interface for 1.0 after implementing the 
 new Table and Admin interfaces.  Following Enis's guidelines in HBASE-10602, 
 this JIRA will generate a new ConnectionManager to replace HCM and Connection 
 to replace HConnection.
 For more detail, this JIRA intends to implement this portion:
 {code}
 interface Connection extends Closeable{
   Table getTable(), and rest of HConnection methods 
   getAdmin()
   // no deprecated methods (cache related etc)
 }
 @Deprecated
 interface HConnection extends Connection {
   @Deprecated
   HTableInterface getTable()
   // users are encouraged to use Connection
 }
 class ConnectionManager {
   createConnection(Configuration) // not sure whether we want a static 
 factory method to create connections or a ctor
 }
 @Deprecated
 class HCM extends ConnectionManager {
   // users are encouraged to use ConnectionManager
 }
 {code}
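The deprecation bridge sketched above can be illustrated with self-contained stand-in types (not the real HBase classes): the new name carries the API, and the old name extends it purely for source compatibility.

```java
// Hypothetical stand-ins for the Connection/HConnection split: old callers
// keep compiling against the deprecated name, new callers use the new one.
public class BridgeDemo {
    interface Connection extends AutoCloseable {
        String getTable(String name);
        @Override default void close() {}
    }

    @Deprecated
    interface HConnection extends Connection {
        // No new members: exists only so legacy code keeps compiling.
    }

    static Connection createConnection() {
        return name -> "table:" + name;   // stand-in for the real factory
    }

    public static void main(String[] args) {
        try (Connection conn = createConnection()) {
            System.out.println(conn.getTable("t1"));
        }
    }
}
```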





[jira] [Commented] (HBASE-11988) AC/VC system table create on postStartMaster fails too often in test

2014-09-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135140#comment-14135140
 ] 

Anoop Sam John commented on HBASE-11988:


HMaster#createTable()
{code}
String namespace = hTableDescriptor.getTableName().getNamespaceAsString();
getNamespaceDescriptor(namespace); // ensure namespace exists
{code}
We can even avoid this ns check when the table ns is system or default. Those 
are built-in namespaces and no one can remove them either.
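That suggestion can be sketched as a guard on the built-in namespaces; the names below are illustrative, not HMaster code:

```java
import java.util.Set;

// Sketch: treat the built-in namespaces as always present so creating a
// system/default table does not depend on the namespace manager being ready.
public class NsCheck {
    static final Set<String> BUILT_IN = Set.of("hbase", "default");

    static boolean needsNamespaceLookup(String ns) {
        // "hbase" (system) and "default" always exist and cannot be removed,
        // so only user namespaces need the ensure-exists round trip.
        return !BUILT_IN.contains(ns);
    }

    public static void main(String[] args) {
        System.out.println(needsNamespaceLookup("hbase"));
        System.out.println(needsNamespaceLookup("myns"));
    }
}
```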

 AC/VC system table create on postStartMaster fails too often in test
 

 Key: HBASE-11988
 URL: https://issues.apache.org/jira/browse/HBASE-11988
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John

 See for example
 {noformat}
 2014-09-16 04:02:08,833 ERROR [ActiveMasterManager] master.HMaster(633): 
 Coprocessor postStartMaster() hook failed
 java.io.IOException: Table Namespace Manager not ready yet, try again later
   at 
 org.apache.hadoop.hbase.master.HMaster.checkNamespaceManagerReady(HMaster.java:1669)
   at 
 org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:1852)
   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1096)
   at 
 org.apache.hadoop.hbase.security.access.AccessControlLists.init(AccessControlLists.java:143)
   at 
 org.apache.hadoop.hbase.security.access.AccessController.postStartMaster(AccessController.java:1059)
   at 
 org.apache.hadoop.hbase.master.MasterCoprocessorHost$58.call(MasterCoprocessorHost.java:692)
   at 
 org.apache.hadoop.hbase.master.MasterCoprocessorHost.execOperation(MasterCoprocessorHost.java:861)
   at 
 org.apache.hadoop.hbase.master.MasterCoprocessorHost.postStartMaster(MasterCoprocessorHost.java:688)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:631)
   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:155)
   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1244)
   at java.lang.Thread.run(Thread.java:744)
 {noformat}





[jira] [Created] (HBASE-11989) IntegrationTestLoadAndVerify cannot be configured anymore on distributed mode

2014-09-16 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-11989:
-

 Summary: IntegrationTestLoadAndVerify cannot be configured anymore 
on distributed mode
 Key: HBASE-11989
 URL: https://issues.apache.org/jira/browse/HBASE-11989
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 0.98.7, 0.99.1


It seems that ITLoadAndVerify now ignores the most important parameters for 
running it in distributed mode: 

{code}
hbase org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify 
-Dloadmapper.backrefs=50 -Dloadmapper.map.tasks=30 
-Dloadmapper.num_to_write=1000 -Dverify.reduce.tasks=30 
-Dverify.scannercaching=1 loadAndVerify
{code}
would still launch a job which writes 2000 keys and runs with 2 mappers. 

Likely cause: HBASE-11253. 
{code}
diff --git 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java
 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java
index 390b894..a1da601 100644
--- 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java
+++ 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java
@@ -123,10 +123,10 @@ public class IntegrationTestLoadAndVerify  extends 
IntegrationTestBase  {
 util = getTestingUtil(getConf());
 util.initializeCluster(3);
 this.setConf(util.getConfiguration());
+getConf().setLong(NUM_TO_WRITE_KEY, NUM_TO_WRITE_DEFAULT / 100);
+getConf().setInt(NUM_MAP_TASKS_KEY, NUM_MAP_TASKS_DEFAULT / 100);
+getConf().setInt(NUM_REDUCE_TASKS_KEY, NUM_REDUCE_TASKS_DEFAULT / 10);
 if (!util.isDistributedCluster()) {
-  getConf().setLong(NUM_TO_WRITE_KEY, NUM_TO_WRITE_DEFAULT / 100);
-  getConf().setInt(NUM_MAP_TASKS_KEY, NUM_MAP_TASKS_DEFAULT / 100);
-  getConf().setInt(NUM_REDUCE_TASKS_KEY, NUM_REDUCE_TASKS_DEFAULT / 10);
   util.startMiniMapReduceCluster();
 }
   }
{code}
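The regression can be modeled with a plain map standing in for the Hadoop Configuration: setting test defaults unconditionally (as in the diff) clobbers the -D values a user passed on the command line, while applying them only in mini-cluster mode preserves the overrides. A minimal sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Model of the fix: defaults apply only when not on a distributed cluster,
// so user-supplied -D overrides survive. Keys mirror the ones in the diff;
// the map is a stand-in for the Hadoop Configuration object.
public class DefaultsDemo {
    static void applyDefaults(Map<String, Integer> conf, boolean distributed) {
        if (!distributed) {
            conf.put("loadmapper.map.tasks", 2);       // mini-cluster defaults
            conf.put("loadmapper.num_to_write", 1000);
        }
        // In distributed mode, user-supplied values are left untouched.
    }

    public static void main(String[] args) {
        Map<String, Integer> conf = new HashMap<>();
        conf.put("loadmapper.map.tasks", 30);          // user -D override
        applyDefaults(conf, true);
        System.out.println(conf.get("loadmapper.map.tasks"));
    }
}
```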






[jira] [Commented] (HBASE-11986) Document MOB in Ref Guide

2014-09-16 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135144#comment-14135144
 ] 

Jingcheng Du commented on HBASE-11986:
--

Thanks a lot Misty!

 Document MOB in Ref Guide
 -

 Key: HBASE-11986
 URL: https://issues.apache.org/jira/browse/HBASE-11986
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Fix For: hbase-11339

 Attachments: HBASE-11986.patch








[jira] [Commented] (HBASE-11974) When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException

2014-09-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135152#comment-14135152
 ] 

Hadoop QA commented on HBASE-11974:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668993/11974-v5.txt
  against trunk revision .
  ATTACHMENT ID: 12668993

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.rest.TestScannersWithLabels
  
org.apache.hadoop.hbase.security.access.TestScanEarlyTermination

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10919//console

This message is automatically generated.

 When a disabled table is scanned, NotServingRegionException is thrown instead 
 of TableNotEnabledException
 -

 Key: HBASE-11974
 URL: https://issues.apache.org/jira/browse/HBASE-11974
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 11974-test.patch, 11974-v1.txt, 11974-v2.txt, 
 11974-v3.txt, 11974-v4.txt, 11974-v5.txt


 When a disabled table is scanned, TableNotEnabledException should be thrown.
 However, currently NotServingRegionException is thrown.
 Thanks to Romil Choksi who discovered this problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Niels Basjes (JIRA)
Niels Basjes created HBASE-11990:


 Summary: Make setting the start and stop row for a specific prefix 
easier
 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes


If you want to set up a scan from your application for a specific row 
prefix, this is actually quite hard.
As described in several places you can set the startRow to the prefix; yet the 
stopRow should be set to the prefix '+1'.
If the prefix is ASCII text put into a byte[] then this is easy, because you can 
simply increment the last byte of the array. 
But if your application uses real binary row ids you may run into the scenario 
that your prefix is something like { 0x12, 0x23, 0xFF, 0xFF }. Then the 
increment should be { 0x12, 0x24, 0x00, 0x00 }.

I have prepared a proposed patch that makes setting these values correctly a 
lot easier.
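The increment-with-carry described above can be sketched in plain Java. This is an illustrative helper under assumed names (it is not the attached patch, and the real patch may handle edge cases differently): walk from the right, skip 0xFF bytes, increment the first byte that can still be incremented, and zero out everything after it.

```java
import java.util.Arrays;

public class PrefixStopRow {
    /**
     * Returns the smallest row key strictly greater than every key that
     * starts with the given prefix, or null when the prefix is all 0xFF
     * bytes (in which case the scan must run to the end of the table).
     */
    static byte[] stopRowForPrefix(byte[] prefix) {
        // Walk backwards past trailing 0xFF bytes to find the first byte
        // (from the right) that can still be incremented without overflow.
        for (int i = prefix.length - 1; i >= 0; i--) {
            if ((prefix[i] & 0xFF) != 0xFF) {
                byte[] stop = Arrays.copyOf(prefix, prefix.length);
                stop[i]++;
                // Zero out everything after the incremented byte,
                // e.g. { 0x12, 0x23, 0xFF, 0xFF } -> { 0x12, 0x24, 0x00, 0x00 }.
                for (int j = i + 1; j < stop.length; j++) {
                    stop[j] = 0x00;
                }
                return stop;
            }
        }
        return null; // prefix was entirely 0xFF: no finite upper bound exists
    }
}
```

For an ASCII prefix this degenerates to the simple "increment the last byte" case from the description.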



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11987) Make zk-less table states backward compatible.

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135186#comment-14135186
 ] 

Hudson commented on HBASE-11987:


FAILURE: Integrated in HBase-TRUNK #5511 (See 
[https://builds.apache.org/job/HBase-TRUNK/5511/])
HBASE-11987 Make zk-less table states backward compatible (Andrey Stepachev) 
(stack: rev 43a8dea347fb5c9a7082eaaa23a2d379cdaba9fe)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSTableDescriptors.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java


 Make zk-less table states backward compatible.
 --

 Key: HBASE-11987
 URL: https://issues.apache.org/jira/browse/HBASE-11987
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Fix For: 2.0.0

 Attachments: 11987v2.txt, HBASE-11987.patch


 A changed protobuf format was not handled properly, so on startup on top of old 
 hbase files protobuf raises an exception:
 Thanks to [~stack] for finding that.
 {noformat}
 90 2014-09-15 16:28:12,387 FATAL [ActiveMasterManager] master.HMaster: Failed 
 to become active master
  91 java.io.IOException: content=20546
  92   at 
 org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:599)
  93   at 
 org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:804)
  94   at 
 org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:771)
  95   at 
 org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:749)
  96   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:460)
  97   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:147)
  98   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.init(MasterFileSystem.java:127)
  99   at 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:488)
 100   at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:155)
 101   at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1244)
 102   at java.lang.Thread.run(Thread.java:744)
 103 Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: 
 com.google.protobuf.InvalidProtocolBufferException: While parsing a protocol 
 message, the input ended unexpectedly in the middle of a field.  This could 
 mean either than the input has been truncated or that an embedded message 
 misreported its own length.
 104   at 
 org.apache.hadoop.hbase.TableDescriptor.parseFrom(TableDescriptor.java:120)
 105   at 
 org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:597)
 106   ... 10 more
 107 Caused by: com.google.protobuf.InvalidProtocolBufferException: While 
 parsing a protocol message, the input ended unexpectedly in the middle of a 
 field.  This could mean either than the input has been truncated or that an 
 embedded message misreported its own length.
 108   at 
 com.google.protobuf.InvalidProtocolBufferException.truncatedMessage(InvalidProtocolBufferException.java:70)
 109   at 
 com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:728)
 110   at 
 com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
 111   at 
 com.google.protobuf.CodedInputStream.readRawLittleEndian64(CodedInputStream.java:488)
 112   at 
 com.google.protobuf.CodedInputStream.readFixed64(CodedInputStream.java:203)
 113   at 
 com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:481)
 114   at 
 com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
 115   at 
 org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName.init(HBaseProtos.java:215)
 116   at 
 org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName.init(HBaseProtos.java:173)
 117   at 
 org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName$1.parsePartialFrom(HBaseProtos.java:261)
 118   at 
 org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableName$1.parsePartialFrom(HBaseProtos.java:256)
 119   at 
 com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
 120   at 
 org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema.init(HBaseProtos.java:852)
 121   at 
 org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema.init(HBaseProtos.java:799)
 122   at 
 org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema$1.parsePartialFrom(HBaseProtos.java:923)
 123   at 
 org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$TableSchema$1.parsePartialFrom(HBaseProtos.java:918)
 124   at 
 

[jira] [Commented] (HBASE-11989) IntegrationTestLoadAndVerify cannot be configured anymore on distributed mode

2014-09-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135191#comment-14135191
 ] 

Anoop Sam John commented on HBASE-11989:


My bad!
It was a mistake to include this change in this patch; it went in accidentally. 
Not at all related to HBASE-11253. Thanks for noticing.

 IntegrationTestLoadAndVerify cannot be configured anymore on distributed mode
 -

 Key: HBASE-11989
 URL: https://issues.apache.org/jira/browse/HBASE-11989
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 0.98.7, 0.99.1


 It seems that ITLoadAndVerify now ignores the most important parameters for 
 running it in distributed mode: 
 {code}
 hbase org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify 
 -Dloadmapper.backrefs=50 -Dloadmapper.map.tasks=30 
 -Dloadmapper.num_to_write=1000 -Dverify.reduce.tasks=30 
 -Dverify.scannercaching=1 loadAndVerify
 {code}
 would still launch a job which writes 2000 keys, and runs with 2 mappers. 
 Likely cause: HBASE-11253. 
 {code}
 diff --git 
 hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java
  
 hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java
 index 390b894..a1da601 100644
 --- 
 hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java
 +++ 
 hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java
 @@ -123,10 +123,10 @@ public class IntegrationTestLoadAndVerify  extends 
 IntegrationTestBase  {
  util = getTestingUtil(getConf());
  util.initializeCluster(3);
  this.setConf(util.getConfiguration());
 +getConf().setLong(NUM_TO_WRITE_KEY, NUM_TO_WRITE_DEFAULT / 100);
 +getConf().setInt(NUM_MAP_TASKS_KEY, NUM_MAP_TASKS_DEFAULT / 100);
 +getConf().setInt(NUM_REDUCE_TASKS_KEY, NUM_REDUCE_TASKS_DEFAULT / 10);
  if (!util.isDistributedCluster()) {
 -  getConf().setLong(NUM_TO_WRITE_KEY, NUM_TO_WRITE_DEFAULT / 100);
 -  getConf().setInt(NUM_MAP_TASKS_KEY, NUM_MAP_TASKS_DEFAULT / 100);
 -  getConf().setInt(NUM_REDUCE_TASKS_KEY, NUM_REDUCE_TASKS_DEFAULT / 10);
util.startMiniMapReduceCluster();
  }
}
 {code}
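The root cause is ordering: user-supplied -D values are already applied to the Configuration before the setup code runs, so an unconditional set() there always clobbers them. A minimal stand-in (a plain Java map instead of Hadoop's Configuration; the default value and key names are assumptions) shows the difference between the two orderings:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfOrdering {
    // Assumed default: 200_000 / 100 = 2000, matching the "writes 2000 keys"
    // symptom reported above.
    static final long NUM_TO_WRITE_DEFAULT = 200_000;

    // Buggy variant: the shrunk default is applied unconditionally, after
    // the user's -D value is already in the map, so it always wins.
    static long buggySetup(Map<String, Long> conf, boolean distributed) {
        conf.put("loadmapper.num_to_write", NUM_TO_WRITE_DEFAULT / 100);
        return conf.get("loadmapper.num_to_write");
    }

    // Fixed variant: shrink the workload only for the local mini cluster,
    // leaving user-supplied values intact in distributed mode.
    static long fixedSetup(Map<String, Long> conf, boolean distributed) {
        if (!distributed) {
            conf.put("loadmapper.num_to_write", NUM_TO_WRITE_DEFAULT / 100);
        }
        return conf.get("loadmapper.num_to_write");
    }

    // Simulates a command line carrying -Dloadmapper.num_to_write=1000.
    static Map<String, Long> userConf() {
        Map<String, Long> conf = new HashMap<>();
        conf.put("loadmapper.num_to_write", 1000L);
        return conf;
    }
}
```

With the buggy ordering the user's 1000 is silently replaced by the 2000-key default even in distributed mode; with the conditional version the -D value survives.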



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-11990:
-
Attachment: HBASE-11990-20140916.patch

Proposed patch.
This also fixes the same documentation issue from HBASE-9035 in the javadoc of 
the setStopRow method.

 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix 'ASCII' put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like { 0x12, 0x23, 0xFF, 0xFF }. Then the 
 increment should be { 0x12, 0x24, 0x00, 0x00 }.
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-11990:
-
Description: 
If you want to set a scan from your application to scan for a specific row 
prefix this is actually quite hard.
As described in several places you can set the startRow to the prefix; yet the 
stopRow should be set to the prefix '+1'
If the prefix 'ASCII' put into a byte[] then this is easy because you can 
simply increment the last byte of the array. 
But if your application uses real binary rowids you may run into the scenario 
that your prefix is something like 
{code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
0x12, 0x24, 0x00, 0x00 }{code}

I have prepared a proposed patch that makes setting these values correctly a 
lot easier.

  was:
If you want to set a scan from your application to scan for a specific row 
prefix this is actually quite hard.
As described in several places you can set the startRow to the prefix; yet the 
stopRow should be set to the prefix '+1'
If the prefix 'ASCII' put into a byte[] then this is easy because you can 
simply increment the last byte of the array. 
But if your application uses real binary rowids you may run into the scenario 
that your prefix is something like { 0x12, 0x23, 0xFF, 0xFF }. Then the 
increment should be { 0x12, 0x24, 0x00, 0x00 }.

I have prepared a proposed patch that makes setting these values correctly a 
lot easier.


 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix 'ASCII' put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Niels Basjes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135207#comment-14135207
 ] 

Niels Basjes commented on HBASE-11990:
--

In case you plan to accept this patch, a question: is it desired/required for me 
to update the HBase book as well?

 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix 'ASCII' put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-11879) Change TableInputFormatBase to take interface arguments

2014-09-16 Thread Solomon Duskis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Solomon Duskis reassigned HBASE-11879:
--

Assignee: Solomon Duskis  (was: Carter)

 Change TableInputFormatBase to take interface arguments
 ---

 Key: HBASE-11879
 URL: https://issues.apache.org/jira/browse/HBASE-11879
 Project: HBase
  Issue Type: Improvement
Reporter: Carter
Assignee: Solomon Duskis
 Fix For: 0.99.1


 As part of the ongoing interface abstraction work, I'm now investigating 
 {{TableInputFormatBase}}, which has two methods that break encapsulation:
 {code}
 protected HTable getHTable();
 protected void setHTable(HTable table);
 {code}
 While these are protected methods, the base class is @InterfaceAudience.Public 
 and abstract, meaning that it supports extension by user code.
 I propose deprecating these two methods and replacing them with these four, 
 once the Table interface is merged:
 {code}
 protected Table getTable();
 protected void setTable(Table table);
 protected RegionLocator getRegionLocator();
 protected void setRegionLocator(RegionLocator regionLocator);
 {code}
 Since users will frequently call {{setTable}} and {{setRegionLocator}} 
 together, it probably also makes sense to add the following convenience 
 method:
 {code}
 protected void setTableAndRegionLocator(Table table, RegionLocator 
 regionLocator);
 {code}
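One way to keep the deprecated setters working during the transition is to have them delegate to the new interface-typed ones. The sketch below uses local stand-in types (Table, RegionLocator, HTable) rather than the real HBase classes, and is concrete where the real class is abstract, so only the delegation pattern is meant literally:

```java
// Stand-ins for the real HBase types; only their shape matters here.
interface Table {}
interface RegionLocator {}
// Assumed for illustration: the concrete HTable can serve both roles.
class HTable implements Table, RegionLocator {}

class TableInputFormatBaseSketch {
    private Table table;
    private RegionLocator regionLocator;

    protected Table getTable() { return table; }
    protected void setTable(Table table) { this.table = table; }

    protected RegionLocator getRegionLocator() { return regionLocator; }
    protected void setRegionLocator(RegionLocator regionLocator) {
        this.regionLocator = regionLocator;
    }

    /** @deprecated use {@link #setTable} and {@link #setRegionLocator}. */
    @Deprecated
    protected void setHTable(HTable htable) {
        // The old single setter populates both new fields, so existing
        // subclasses keep working unchanged.
        setTable(htable);
        setRegionLocator(htable);
    }

    /** Convenience for the common case of setting both together. */
    protected void setTableAndRegionLocator(Table table, RegionLocator locator) {
        setTable(table);
        setRegionLocator(locator);
    }
}
```

The delegation keeps a single source of truth (the two new fields) while the deprecated method survives as a thin adapter.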



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135654#comment-14135654
 ] 

stack commented on HBASE-11990:
---

Use case and patch look good to me.  I'd add an example to the javadoc -- the 
description on this issue would do -- to the new methods to make it more clear 
what the behavior is.

 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix 'ASCII' put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11961) Document region state transitions

2014-09-16 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11961:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Integrated into the master branch. Thanks.

 Document region state transitions
 -

 Key: HBASE-11961
 URL: https://issues.apache.org/jira/browse/HBASE-11961
 Project: HBase
  Issue Type: Task
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-11961-misty.patch, hbase-11961.patch, 
 region_states.png


 Document the region state transitions in the refguide.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Niels Basjes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135670#comment-14135670
 ] 

Niels Basjes commented on HBASE-11990:
--

Should I add a part to the book also?

 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix 'ASCII' put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11738) Document improvements to LoadTestTool and PerformanceEvaluation

2014-09-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135673#comment-14135673
 ] 

Nick Dimiduk commented on HBASE-11738:
--

{noformat}
<code>rg.apache.hadoop.hbase.util.LoadTestTool</code> utility, which is used for
{noformat}

Should be org.apache...

{noformat}
 testing. The <command>hbase ltt</command> command was introduced in HBase 0.98.</para>
{noformat}

This command was introduced at the same time as {{hbase pe}}: 0.98.4?

Otherwise this summary looks good. +1

 Document improvements to LoadTestTool and PerformanceEvaluation
 ---

 Key: HBASE-11738
 URL: https://issues.apache.org/jira/browse/HBASE-11738
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Minor
 Fix For: 0.98.7, 0.99.1

 Attachments: HBABASE-11738.patch, HBASE-11738.patch, HBASE-11738.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11976) Server startcode is not checked for bulk region assignment

2014-09-16 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11976:

   Resolution: Fixed
Fix Version/s: 0.99.1
   2.0.0
   Status: Resolved  (was: Patch Available)

Integrated into branch 1 and master. Thanks.

 Server startcode is not checked for bulk region assignment
 --

 Key: HBASE-11976
 URL: https://issues.apache.org/jira/browse/HBASE-11976
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 2.0.0, 0.99.1

 Attachments: hbase-11976.patch


 I got the following failure yesterday. It looks like, when the open region 
 request was sent, the server start code was not checked.
  
 {noformat}
 2014-09-14 19:36:45,565 ERROR 
 [B.defaultRpcServer.handler=24,queue=0,port=20020] 
 master.AssignmentManager: Failed to transition region from 
 {2706f577540a7d1b53b5a8f66178fbf2 state=PENDING_OPEN, ts=1410748604803, 
 server=a2428.halxg.cloudera.com,20020,1410746518223} on OPENED by 
 a2428.halxg.cloudera.com,20020,1410748599408: 
 2706f577540a7d1b53b5a8f66178fbf2 is not opening on 
 a2428.halxg.cloudera.com,20020,1410748599408
 ABORTING region server a2428.halxg.cloudera.com,20020,1410748599408: 
 Exception running postOpenDeployTasks; region=2706f577540a7d1b53b5a8f66178fbf2
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-11991) Region states may be out of sync

2014-09-16 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-11991:
---

 Summary: Region states may be out of sync
 Key: HBASE-11991
 URL: https://issues.apache.org/jira/browse/HBASE-11991
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang


Region states could be out of sync under a rare scenario. The regions hosted by 
a server could be wrong.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135706#comment-14135706
 ] 

stack commented on HBASE-11990:
---

bq. Should I add a part to the book also?

IMO, no.  Too low-level. Pimp up the javadoc a little and it is good to go I'd 
say.

 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix 'ASCII' put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-11920) Add CP hooks for ReplicationEndPoint

2014-09-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reassigned HBASE-11920:
--

Assignee: ramkrishna.s.vasudevan

 Add CP hooks for ReplicationEndPoint
 

 Key: HBASE-11920
 URL: https://issues.apache.org/jira/browse/HBASE-11920
 Project: HBase
  Issue Type: Sub-task
  Components: Coprocessors, Replication
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-11920.patch, HBASE-11920_1.patch, 
 HBASE-11920_2.patch


 If we want to create internal replication endpoints other than the one 
created through configuration, we may need new hooks. This is something like an 
 internal scanner that we create during compaction so that the actual 
 compaction scanner can be used as a delegator.
 [~enis]
 If I can give a patch by tomorrow will it be possible to include in the RC?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11920) Add CP hooks for ReplicationEndPoint

2014-09-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11920:
---
Status: Open  (was: Patch Available)

 Add CP hooks for ReplicationEndPoint
 

 Key: HBASE-11920
 URL: https://issues.apache.org/jira/browse/HBASE-11920
 Project: HBase
  Issue Type: Sub-task
  Components: Coprocessors, Replication
Reporter: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-11920.patch, HBASE-11920_1.patch, 
 HBASE-11920_2.patch


 If we want to create internal replication endpoints other than the one 
 created through configuration, we may need new hooks. This is something like an 
 internal scanner that we create during compaction so that the actual 
 compaction scanner can be used as a delegator.
 [~enis]
 If I can give a patch by tomorrow will it be possible to include in the RC?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11920) Add CP hooks for ReplicationEndPoint

2014-09-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11920:
---
Attachment: HBASE-11920_2.patch

Updated patch addressing comments.

 Add CP hooks for ReplicationEndPoint
 

 Key: HBASE-11920
 URL: https://issues.apache.org/jira/browse/HBASE-11920
 Project: HBase
  Issue Type: Sub-task
  Components: Coprocessors, Replication
Reporter: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-11920.patch, HBASE-11920_1.patch, 
 HBASE-11920_2.patch


 If we want to create internal replication endpoints other than the one 
 created through configuration, we may need new hooks. This is something like an 
 internal scanner that we create during compaction so that the actual 
 compaction scanner can be used as a delegator.
 [~enis]
 If I can give a patch by tomorrow will it be possible to include in the RC?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-09-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11992:
---
Attachment: hbase-11367_0.98.patch

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Attachments: hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-09-16 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-11992:
--

 Summary: Backport HBASE-11367 (Pluggable replication endpoint) to 
0.98
 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Attachments: hbase-11367_0.98.patch

ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is run 
in the same RS process and instantiated per replication peer per region server. 
Implementations of this interface handle the actual shipping of WAL edits to 
the remote cluster.

This issue is for backporting HBASE-11367 to 0.98.
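As a rough sketch of the per-peer endpoint shape described above — using local stand-in types rather than the real org.apache.hadoop.hbase.replication interfaces, so every name here is an assumption — an implementation receives batches of WAL edits and reports whether they were durably shipped:

```java
import java.util.List;

// Stand-in for a WAL edit; the real type carries cells and metadata.
interface WalEdit {}

// Stand-in modelled loosely on the pluggable endpoint contract: one
// instance per replication peer per region server.
interface ReplicationEndpointSketch {
    /** Ship a batch of WAL edits; return true once durably delivered. */
    boolean replicate(List<WalEdit> edits);
}

/** A trivially "successful" endpoint that only counts shipped edits. */
class CountingEndpoint implements ReplicationEndpointSketch {
    private long shipped;

    @Override
    public boolean replicate(List<WalEdit> edits) {
        // A real endpoint would connect to the remote cluster and retry
        // on failure; here we only record the batch size.
        shipped += edits.size();
        return true;
    }

    public long getShipped() { return shipped; }
}
```

Returning false would signal the source to retry the batch, which is what makes the endpoint, rather than the tailer, responsible for delivery semantics.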







[jira] [Commented] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-09-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135733#comment-14135733
 ] 

Andrew Purtell commented on HBASE-11992:


Attached 0.98 patch from HBASE-11367

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Attachments: hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.





[jira] [Commented] (HBASE-11367) Pluggable replication endpoint

2014-09-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135735#comment-14135735
 ] 

Andrew Purtell commented on HBASE-11367:


I opened HBASE-11992 for backport discussion

 Pluggable replication endpoint
 --

 Key: HBASE-11367
 URL: https://issues.apache.org/jira/browse/HBASE-11367
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Blocker
 Fix For: 0.99.0, 2.0.0

 Attachments: 0001-11367.patch, hbase-11367_0.98.patch, 
 hbase-11367_v1.patch, hbase-11367_v2.patch, hbase-11367_v3.patch, 
 hbase-11367_v4.patch, hbase-11367_v4.patch, hbase-11367_v5.patch


 We need a pluggable endpoint for replication for more flexibility. See parent 
 jira for more context. 
 ReplicationSource tails the logs for each peer. This jira introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster. 





[jira] [Commented] (HBASE-11874) Support Cell to be passed to StoreFile.Writer rather than KeyValue

2014-09-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135734#comment-14135734
 ] 

ramkrishna.s.vasudevan commented on HBASE-11874:


Just seeing this.  Sorry for being late.  I will complete this review by 
tomorrow.  Overall it's fine; I will go over it once more.

 Support Cell to be passed to StoreFile.Writer rather than KeyValue
 --

 Key: HBASE-11874
 URL: https://issues.apache.org/jira/browse/HBASE-11874
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-11874.patch, HBASE-11874_V2.patch, 
 HBASE-11874_V3.patch, HBASE-11874_V3.patch, HBASE-11874_V4.patch, 
 HBASE-11874_V5.patch


 This is the in write path and touches StoreFile.Writer,  HFileWriter , 
 HFileDataBlockEncoder and different DataBlockEncoder impl.
 We will have to avoid KV#getBuffer() KV#getKeyOffset/Length() calls.





[jira] [Commented] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-09-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135737#comment-14135737
 ] 

ramkrishna.s.vasudevan commented on HBASE-11992:


Thanks [~apurtell] for raising this.  I will continue on this in a couple of 
days, or probably the beginning of next week.

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
 Attachments: hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.





[jira] [Commented] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135748#comment-14135748
 ] 

Jean-Marc Spaggiari commented on HBASE-11990:
-

Thanks for patch [~nielsbasjes].

{code}
+   * Note: In order to make stopRow inclusive add 1
+   * (or use {@link #setStopRowInclusive(byte[])} to do it for you)
{code}

This is not clear to me. Can we say something like 'increment the last byte' 
instead of 'add 1'? Or spell out what exactly to add one to?

Also, on the test, can you add cases where rows are empty?

Everything else looks good to me. 

 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix is ASCII text put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.





[jira] [Commented] (HBASE-11974) When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException

2014-09-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135750#comment-14135750
 ] 

Ted Yu commented on HBASE-11974:


TestScanEarlyTermination failed in trunk build #5511.
TestScannersWithLabels passes locally with patch.

 When a disabled table is scanned, NotServingRegionException is thrown instead 
 of TableNotEnabledException
 -

 Key: HBASE-11974
 URL: https://issues.apache.org/jira/browse/HBASE-11974
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 11974-test.patch, 11974-v1.txt, 11974-v2.txt, 
 11974-v3.txt, 11974-v4.txt, 11974-v5.txt


 When a disabled table is scanned, TableNotEnabledException should be thrown.
 However, currently NotServingRegionException is thrown.
 Thanks to Romil Choksi who discovered this problem.





[jira] [Updated] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-09-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11992:
---
Assignee: ramkrishna.s.vasudevan

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Attachments: hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.





[jira] [Commented] (HBASE-11974) When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException

2014-09-16 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135761#comment-14135761
 ] 

Elliott Clark commented on HBASE-11974:
---

Being able to have master down and still serve requests is very important. I'd 
rather have a misleading exception than add a single point of failure into the 
critical path.

 When a disabled table is scanned, NotServingRegionException is thrown instead 
 of TableNotEnabledException
 -

 Key: HBASE-11974
 URL: https://issues.apache.org/jira/browse/HBASE-11974
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 11974-test.patch, 11974-v1.txt, 11974-v2.txt, 
 11974-v3.txt, 11974-v4.txt, 11974-v5.txt


 When a disabled table is scanned, TableNotEnabledException should be thrown.
 However, currently NotServingRegionException is thrown.
 Thanks to Romil Choksi who discovered this problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11974) When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException

2014-09-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135776#comment-14135776
 ] 

Ted Yu commented on HBASE-11974:


In patch v5, isTableDisabled() is only called when scanner open encounters 
NotServingRegionException.
[~eclark]:
Can you clarify whether your comment is about patch v5 or an earlier patch?
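The shape of the v5 fallback described above — consult the master's table state only after a scanner open hits NotServingRegionException, so the common path never touches the master — can be sketched with stand-in types (ScanFallback, Admin, and the exception classes here are illustrative names, not from the actual patch):

```java
public class ScanFallback {
    static class NotServingRegionException extends RuntimeException {}
    static class TableNotEnabledException extends RuntimeException {}

    // Stand-in for the master round trip; only invoked on the error path.
    interface Admin { boolean isTableDisabled(String table); }

    static String openScanner(String table, boolean regionOnline, Admin admin) {
        try {
            if (!regionOnline) throw new NotServingRegionException();
            return "scanner"; // common path: the master is never consulted
        } catch (NotServingRegionException e) {
            // Only now pay the cost of asking the master for table state.
            if (admin.isTableDisabled(table)) throw new TableNotEnabledException();
            throw e;
        }
    }

    public static void main(String[] args) {
        System.out.println(openScanner("t", true, t -> false)); // prints: scanner
        try {
            openScanner("t", false, t -> true);
        } catch (TableNotEnabledException e) {
            System.out.println("disabled"); // disabled table surfaced correctly
        }
    }
}
```

A scan of an online table never queries the master, which is the property Elliott's objection is about; the extra RPC happens only on the already-exceptional path.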

 When a disabled table is scanned, NotServingRegionException is thrown instead 
 of TableNotEnabledException
 -

 Key: HBASE-11974
 URL: https://issues.apache.org/jira/browse/HBASE-11974
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 11974-test.patch, 11974-v1.txt, 11974-v2.txt, 
 11974-v3.txt, 11974-v4.txt, 11974-v5.txt


 When a disabled table is scanned, TableNotEnabledException should be thrown.
 However, currently NotServingRegionException is thrown.
 Thanks to Romil Choksi who discovered this problem.





[jira] [Commented] (HBASE-11974) When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException

2014-09-16 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135787#comment-14135787
 ] 

Elliott Clark commented on HBASE-11974:
---

That means that a scanner will hang if the master is down.
Additionally, on heavy region movement the master will get pounded when it's 
already under stress.

 When a disabled table is scanned, NotServingRegionException is thrown instead 
 of TableNotEnabledException
 -

 Key: HBASE-11974
 URL: https://issues.apache.org/jira/browse/HBASE-11974
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 11974-test.patch, 11974-v1.txt, 11974-v2.txt, 
 11974-v3.txt, 11974-v4.txt, 11974-v5.txt


 When a disabled table is scanned, TableNotEnabledException should be thrown.
 However, currently NotServingRegionException is thrown.
 Thanks to Romil Choksi who discovered this problem.





[jira] [Commented] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Andreas Neumann (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135852#comment-14135852
 ] 

Andreas Neumann commented on HBASE-11990:
-

Actually, for something like { 0x12, 0x23, 0xFF, 0xFF }, the correct stop key 
is not { 0x12, 0x24, 0x00, 0x00 } but { 0x12, 0x24 }. If you use { 0x12, 0x24, 
0x00, 0x00 }, then that would include { 0x12, 0x24, 0x00 } and { 0x12, 0x24 }, 
both of which obviously do not have the prefix in question. 
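The rule this thread converges on — increment the last byte that is not 0xFF and truncate everything after it — can be sketched with plain java.util, outside the HBase API (PrefixStopRow and stopRowForPrefix are illustrative names, not from the attached patch):

```java
import java.util.Arrays;

public class PrefixStopRow {
    // Exclusive stop row for a prefix scan: increment the last byte that is
    // not 0xFF and drop everything after it. An all-0xFF prefix has no upper
    // bound, so return an empty stop row (scan to the end of the table).
    static byte[] stopRowForPrefix(byte[] prefix) {
        for (int i = prefix.length - 1; i >= 0; i--) {
            if (prefix[i] != (byte) 0xFF) {
                byte[] stop = Arrays.copyOf(prefix, i + 1);
                stop[i]++;
                return stop;
            }
        }
        return new byte[0];
    }

    public static void main(String[] args) {
        // { 0x12, 0x23, 0xFF, 0xFF } -> { 0x12, 0x24 }, as noted above.
        System.out.println(Arrays.toString(
            stopRowForPrefix(new byte[] { 0x12, 0x23, (byte) 0xFF, (byte) 0xFF })));
        // prints: [18, 36]
    }
}
```

Truncating (rather than padding with 0x00) is what keeps shorter rows such as { 0x12, 0x24 } out of the scan range.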

 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix is ASCII text put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.





[jira] [Assigned] (HBASE-11983) HRegion constructors should not create HLog

2014-09-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-11983:
---

Assignee: Sean Busbey

 HRegion constructors should not create HLog 
 

 Key: HBASE-11983
 URL: https://issues.apache.org/jira/browse/HBASE-11983
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Enis Soztutar
Assignee: Sean Busbey

 We should get rid of HRegion creating its own HLog. It should ALWAYS get the 
 log from outside. 
 I think this was added for unit tests, but we should refrain from such 
 practice in the future (adding UT constructors always leads to weird and 
 critical bugs down the road). See recent: HBASE-11982, HBASE-11654. 
 Get rid of weird things like ignoreHLog:
 {code}
   /**
* @param ignoreHLog - true to skip generate new hlog if it is null, mostly 
 for createTable
*/
   public static HRegion createHRegion(final HRegionInfo info, final Path 
 rootDir,
   final Configuration conf,
   final HTableDescriptor hTableDescriptor,
   final HLog hlog,
   final boolean initialize, final boolean 
 ignoreHLog)
 {code}
 We can unify all the createXX and newXX methods and separate creating a 
 region in the file system vs opening a region. 





[jira] [Commented] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135870#comment-14135870
 ] 

Jean-Marc Spaggiari commented on HBASE-11990:
-

Should it not even be { 0x12, 0x23, 0xFF, 0xFF, 0xFF } ? 

What if you put { 0x12, 0x23, 0xFF, 0xFF } but you have { 0x12, 0x23, 0xFF, 
0xFF, 0xFF, 0xFF, 0xFF, 0xFF } and don't want it? If you change that to 
{ 0x12, 0x24 } then you will get it, which is not what you want. { 0x12, 0x23, 
0xFF, 0xFF, 0xFF } will not include it.

I'm thinking out loud here. There might be some corner cases that I miss.



 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix is ASCII text put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.





[jira] [Commented] (HBASE-11976) Server startcode is not checked for bulk region assignment

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135873#comment-14135873
 ] 

Hudson commented on HBASE-11976:


FAILURE: Integrated in HBase-TRUNK #5512 (See 
[https://builds.apache.org/job/HBase-TRUNK/5512/])
HBASE-11976 Server startcode is not checked for bulk region assignment (jxiang: 
rev cc873713c17738cecc2264c236586af3a7b91c1f)
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java


 Server startcode is not checked for bulk region assignment
 --

 Key: HBASE-11976
 URL: https://issues.apache.org/jira/browse/HBASE-11976
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 2.0.0, 0.99.1

 Attachments: hbase-11976.patch


 I got the following failure yesterday. It looks like sending open region 
 request, the region server failed to check the server start code.
  
 {noformat}
 2014-09-14 19:36:45,565 ERROR 
 [B.defaultRpcServer.handler=24,queue=0,port=20020] 
 master.AssignmentManager: Failed to transition region from 
 {2706f577540a7d1b53b5a8f66178fbf2 state=PENDING_OPEN, ts=1410748604803, 
 server=a2428.halxg.cloudera.com,20020,1410746518223} on OPENED by 
 a2428.halxg.cloudera.com,20020,1410748599408: 
 2706f577540a7d1b53b5a8f66178fbf2 is not opening on 
 a2428.halxg.cloudera.com,20020,1410748599408
 ABORTING region server a2428.halxg.cloudera.com,20020,1410748599408: 
 Exception running postOpenDeployTasks; region=2706f577540a7d1b53b5a8f66178fbf2
 {noformat}





[jira] [Commented] (HBASE-11961) Document region state transitions

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135872#comment-14135872
 ] 

Hudson commented on HBASE-11961:


FAILURE: Integrated in HBase-TRUNK #5512 (See 
[https://builds.apache.org/job/HBase-TRUNK/5512/])
HBASE-11961 Document region state transitions (Jimmy Xiang and Misty 
Stanley-Jones) (jxiang: rev 4ad3fe1f2e3859d7aa1cbe06e4e94ae3b164a17d)
* src/main/site/resources/images/region_states.png
* src/main/docbkx/book.xml


 Document region state transitions
 -

 Key: HBASE-11961
 URL: https://issues.apache.org/jira/browse/HBASE-11961
 Project: HBase
  Issue Type: Task
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-11961-misty.patch, hbase-11961.patch, 
 region_states.png


 Document the region state transitions in the refguide.





[jira] [Comment Edited] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135870#comment-14135870
 ] 

Jean-Marc Spaggiari edited comment on HBASE-11990 at 9/16/14 6:16 PM:
--

Should it not even be { 0x12, 0x23, 0xFF, 0xFF, 0xFF } ? 

What if you put { 0x12, 0x23, 0xFF, 0xFF } but you have { 0x12, 0x23, 0xFF, 
0xFF, 0xFF, 0xFF, 0xFF, 0xFF } and don't want it? If you change that to 
{ 0x12, 0x24 } then you will get it, which is not what you want. { 0x12, 0x23, 
0xFF, 0xFF, 0xFF } will not include it.

I'm thinking out loud here. There might be some corner cases that I miss.

Edit: Forget about that. I forgot it was prefix, was just thinking about stop 
key...


was (Author: jmspaggi):
Should it not even be { 0x12, 0x23, 0xFF, 0xFF, 0xFF } ? 

What if you put { 0x12, 0x23, 0xFF, 0xFF } but you have { 0x12, 0x23, 0xFF, 
0xFF, 0xFF, 0xFF, 0xFF, 0xFF } and don't want it? If you change that to 
{ 0x12, 0x24 } then you will get it, which is not what you want. { 0x12, 0x23, 
0xFF, 0xFF, 0xFF } will not include it.

I'm thinking out loud here. There might be some corner cases that I miss.



 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix is ASCII text put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.





[jira] [Comment Edited] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135870#comment-14135870
 ] 

Jean-Marc Spaggiari edited comment on HBASE-11990 at 9/16/14 6:16 PM:
--

Should it not even be { 0x12, 0x23, 0xFF, 0xFF, 0xFF } ? 

What if you put { 0x12, 0x23, 0xFF, 0xFF } but you have { 0x12, 0x23, 0xFF, 
0xFF, 0xFF, 0xFF, 0xFF, 0xFF } and don't want it? If you change that to 
{ 0x12, 0x24 } then you will get it, which is not what you want. { 0x12, 0x23, 
0xFF, 0xFF, 0xFF } will not include it.

I'm thinking out loud here. There might be some corner cases that I miss.

Edit: Forget about that. I forgot it was prefix, was just thinking about stop 
key... So Andreas comment is valid for me.


was (Author: jmspaggi):
Should it not even be { 0x12, 0x23, 0xFF, 0xFF, 0xFF } ? 

What if you put { 0x12, 0x23, 0xFF, 0xFF } but you have { 0x12, 0x23, 0xFF, 
0xFF, 0xFF, 0xFF, 0xFF, 0xFF } and don't want it? If you change that to 
{ 0x12, 0x24 } then you will get it, which is not what you want. { 0x12, 0x23, 
0xFF, 0xFF, 0xFF } will not include it.

I'm thinking out loud here. There might be some corner cases that I miss.

Edit: Forget about that. I forgot it was prefix, was just thinking about stop 
key...

 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix; yet 
 the stopRow should be set to the prefix '+1'
 If the prefix is ASCII text put into a byte[] then this is easy because you can 
 simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.





[jira] [Commented] (HBASE-11984) TestClassFinder failing on occasion

2014-09-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135881#comment-14135881
 ] 

stack commented on HBASE-11984:
---

Debugging shows that the 'target' classes are not always present:

{code}
java.lang.AssertionError: Classes in org.apache.hadoop.hbase
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByClassInDirs(TestClassFinder.java:255)
{code}

... removed by a concurrent hbase test run?

We should not rely on data outside of test dirs.   Let me add a fix.

 TestClassFinder failing on occasion
 ---

 Key: HBASE-11984
 URL: https://issues.apache.org/jira/browse/HBASE-11984
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 2.0.0

 Attachments: 0001-More-debug.patch


 Failed like this:
 {code}
 Tests run: 11, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 1.913 sec 
  <<< FAILURE! - in org.apache.hadoop.hbase.TestClassFinder
 testClassFinderFiltersByClassInDirs(org.apache.hadoop.hbase.TestClassFinder)  
 Time elapsed: 0.028 sec  <<< FAILURE!
 java.lang.AssertionError: expected:<-1> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByClassInDirs(TestClassFinder.java:259)
 testClassFinderCanFindClassesInDirs(org.apache.hadoop.hbase.TestClassFinder)  
 Time elapsed: 0.017 sec  <<< FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.hadoop.hbase.TestClassFinder.testClassFinderCanFindClassesInDirs(TestClassFinder.java:223)
 testClassFinderFiltersByNameInDirs(org.apache.hadoop.hbase.TestClassFinder)  
 Time elapsed: 0.018 sec  <<< FAILURE!
 java.lang.AssertionError: expected:<-1> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByNameInDirs(TestClassFinder.java:242)
 {code}
 ... in precommit 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10912/console





[jira] [Updated] (HBASE-11984) TestClassFinder failing on occasion

2014-09-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11984:
--
Attachment: 11984v2.txt

The idea is that for the three tests that look for class files in dirs, we now 
create the classes for it to find rather than expect them to be present under 
target dir.

The changes are kinda ugly because the class naming and package naming in this 
test class is a little cryptic (I'm sure it made 'sense' at one time).  I tried 
not to disturb it too much.  Made stuff local to the test... i.e. named classes 
for the method name and added package suffixes for the package name too.

Going to just commit.  It's a test fixup.  Will leave the issue open in case we 
see other failures out of here.
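The approach described above — create the classes the finder should discover instead of relying on whatever happens to be under target/ — can be illustrated with the standard javax.tools compiler API; this is only a sketch of the technique, not the actual patch:

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class GenClass {
    // Compile a tiny class into a private temp dir, then load it from there,
    // the way a directory-scanning class finder would. No shared state with
    // concurrent test runs, so nothing can delete it out from under us.
    static String compileAndLoad() {
        try {
            Path dir = Files.createTempDirectory("classfinder");
            Path src = dir.resolve("Hello.java");
            Files.write(src, "public class Hello {}".getBytes("UTF-8"));
            JavaCompiler jc = ToolProvider.getSystemJavaCompiler();
            jc.run(null, null, null, "-d", dir.toString(), src.toString());
            try (URLClassLoader cl =
                     new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
                return Class.forName("Hello", false, cl).getName();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(compileAndLoad()); // prints "Hello" on a JDK
    }
}
```

Note ToolProvider.getSystemJavaCompiler() returns null on a JRE-only install, so this only works when tests run on a full JDK (which a build environment provides anyway).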

 TestClassFinder failing on occasion
 ---

 Key: HBASE-11984
 URL: https://issues.apache.org/jira/browse/HBASE-11984
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 2.0.0

 Attachments: 0001-More-debug.patch, 11984v2.txt


 Failed like this:
 {code}
 Tests run: 11, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 1.913 sec 
  FAILURE! - in org.apache.hadoop.hbase.TestClassFinder
 testClassFinderFiltersByClassInDirs(org.apache.hadoop.hbase.TestClassFinder)  
 Time elapsed: 0.028 sec   FAILURE!
 java.lang.AssertionError: expected:-1 but was:0
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByClassInDirs(TestClassFinder.java:259)
 testClassFinderCanFindClassesInDirs(org.apache.hadoop.hbase.TestClassFinder)  
 Time elapsed: 0.017 sec   FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.hadoop.hbase.TestClassFinder.testClassFinderCanFindClassesInDirs(TestClassFinder.java:223)
 testClassFinderFiltersByNameInDirs(org.apache.hadoop.hbase.TestClassFinder)  
 Time elapsed: 0.018 sec   FAILURE!
 java.lang.AssertionError: expected:-1 but was:0
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByNameInDirs(TestClassFinder.java:242)
 {code}
 ... in precommit 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10912/console





[jira] [Commented] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-09-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135889#comment-14135889
 ] 

Andrew Purtell commented on HBASE-11992:


Preliminary comments on the patch, or things that should be in it.

In hbase-client, the following classes don't have interface stability 
annotations but should all be marked Private: ReplicationFactory, 
ReplicationPeersZKImpl, ReplicationQueueInfo, ReplicationQueuesClient,  
ReplicationQueuesClientZKImpl , ReplicationQueuesZKImpl, 
ReplicationStateZKBase, ReplicationTrackerZKImpl. I don't think users should 
have expected these to be anything other than internal implementation. Changes 
to these should be fine. 

We don't have known users referencing these types. I checked Phoenix. 

Remove this TODO? We're not going to do this in 0.98, right?

{code}
@@ -80,6 +86,8 @@ public class ReplicationAdmin implements Closeable {
   .toString(HConstants.REPLICATION_SCOPE_GLOBAL);
 
   private final HConnection connection;
+  // TODO: replication should be managed by master. All the classes except 
ReplicationAdmin should
+  // be moved to hbase-server. Resolve it in HBASE-11392.
   private final ReplicationQueuesClient replicationQueuesClient;
   private final ReplicationPeers replicationPeers;
{code}

Synchronize methods that call reconnect() in the new class 
HBaseReplicationEndpoint:

{code}
diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HBaseReplicationEndpoint.java hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HBaseReplicationEndpoint.java
new file mode 100644
index 000..4b9a28f
--- /dev/null
+++ hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HBaseReplicationEndpoint.java
[...]
+  @Override
+  public UUID getPeerUUID() {
+    UUID peerUUID = null;
+    try {
+      peerUUID = ZKClusterId.getUUIDForCluster(zkw);
+    } catch (KeeperException ke) {
+      reconnect(ke);
+    }
+    return peerUUID;
+  }
[...]
+  public List<ServerName> getRegionServers() {
+    try {
+      setRegionServers(fetchSlavesAddresses(this.getZkw()));
+    } catch (KeeperException ke) {
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Fetch slaves addresses failed", ke);
+      }
+      reconnect(ke);
+    }
+    return regionServers;
+  }
[...]
{code}
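The synchronization being asked for above can be sketched as follows. This is a minimal stand-alone illustration, not the actual HBaseReplicationEndpoint code: the class name, lookupClusterId(), and the Object stand-in for the ZooKeeper watcher are all hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

// Hypothetical stand-in for HBaseReplicationEndpoint: every method that can
// trigger reconnect() is synchronized, so concurrent ZK failures cannot race
// to replace the shared watcher.
public class Endpoint {
    private Object zkw = new Object(); // stand-in for the ZooKeeper watcher
    private int reconnects = 0;

    public synchronized UUID getPeerUUID() {
        try {
            // stand-in for ZKClusterId.getUUIDForCluster(zkw)
            return lookupClusterId();
        } catch (Exception ke) {
            reconnect(ke);
            return null;
        }
    }

    private synchronized void reconnect(Exception cause) {
        // Close the old watcher and open a fresh session. Synchronized (and
        // reentrant from the synchronized callers above) so overlapping
        // failures produce one orderly reconnect at a time.
        zkw = new Object();
        reconnects++;
    }

    private UUID lookupClusterId() throws Exception {
        return UUID.nameUUIDFromBytes("peer-cluster".getBytes(StandardCharsets.UTF_8));
    }

    public int getReconnectCount() {
        return reconnects;
    }
}
```

Since both the public accessors and reconnect() lock on the endpoint instance, a failure observed by one caller cannot interleave with another caller's use of the watcher.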

The additional state in the ReplicationPeer znode pbuf introduces 
mixed-version-compatibility issues:

{code}
diff --git hbase-protocol/src/main/protobuf/ZooKeeper.proto 
hbase-protocol/src/main/protobuf/ZooKeeper.proto
index 37816da..598385c 100644
--- hbase-protocol/src/main/protobuf/ZooKeeper.proto
+++ hbase-protocol/src/main/protobuf/ZooKeeper.proto
@@ -119,6 +119,9 @@ message ReplicationPeer {
   // clusterkey is the concatenation of the slave cluster's
  // hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent
   required string clusterkey = 1;
+  optional string replicationEndpointImpl = 2;
+  repeated BytesBytesPair data = 3;
+  repeated NameStringPair configuration = 4;
 }
{code}

Here, will this be enough for a mixed version deployment where add_peer might 
be executed on an older version and won't write the new fields of 
ReplicationState into the ZK node for the peer? Should we be checking a site 
file setting also? 

{code}
@@ -372,9 +381,32 @@ public class ReplicationSourceManager implements ReplicationListener {
      LOG.warn("Passed replication source implementation throws errors, " +
          "defaulting to ReplicationSource", e);
      src = new ReplicationSource();
+    }

+    ReplicationEndpoint replicationEndpoint = null;
+    try {
+      String replicationEndpointImpl = peerConfig.getReplicationEndpointImpl();
+      if (replicationEndpointImpl == null) {
+        // Default to HBase inter-cluster replication endpoint
+        replicationEndpointImpl = HBaseInterClusterReplicationEndpoint.class.getName();
+      }
+      @SuppressWarnings("rawtypes")
+      Class c = Class.forName(replicationEndpointImpl);
+      replicationEndpoint = (ReplicationEndpoint) c.newInstance();
+    } catch (Exception e) {
+      LOG.warn("Passed replication endpoint implementation throws errors", e);
+      throw new IOException(e);
    }
{code}
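The loading pattern in the snippet above can be illustrated with a self-contained sketch. The class and method names here are hypothetical, not HBase API: fall back to a default endpoint class when the peer config names none, then instantiate it reflectively, wrapping any reflection failure.

```java
// Illustrative sketch of reflective, pluggable endpoint loading with a
// built-in default (hypothetical names, not the actual HBase classes).
public class EndpointLoader {
    public interface ReplicationEndpoint { }

    public static class DefaultEndpoint implements ReplicationEndpoint { }

    public static ReplicationEndpoint load(String implClassName) {
        try {
            if (implClassName == null) {
                // Default to the built-in endpoint, mirroring the
                // HBaseInterClusterReplicationEndpoint fallback in the patch.
                implClassName = DefaultEndpoint.class.getName();
            }
            Class<?> c = Class.forName(implClassName);
            return (ReplicationEndpoint) c.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            // The patch wraps such failures and aborts source setup.
            throw new RuntimeException(e);
        }
    }
}
```

Note that in a mixed-version deployment the weak point is exactly this Class.forName call: an older server that never runs it silently ignores the configured implementation.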

Likewise, what happens when you add a peer with a custom replication endpoint 
impl setting but the work gets picked up by an older server? We can't ask a 
user or admin to disable replication while performing a rolling upgrade of 
either the source or destination clusters so I think it's necessary to document 
that customized replication endpoints are not supported in mixed version 
deployments, the fleet must be up to date before customized peer settings can 
be put in place. If we could ask older servers that cannot honor the new 
customization settings to stop replicating and let their work be transferred to 
the newer servers, that would also work, but we do not have this capability. 

[jira] [Commented] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-09-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135891#comment-14135891
 ] 

Andrew Purtell commented on HBASE-11992:


Ping [~lhofhansl], [~jesse_yates]

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Attachments: hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.





[jira] [Updated] (HBASE-11984) TestClassFinder failing on occasion

2014-09-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11984:
--
Attachment: 11984.branch-1.txt

What I applied to branch-1

 TestClassFinder failing on occasion
 ---

 Key: HBASE-11984
 URL: https://issues.apache.org/jira/browse/HBASE-11984
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 2.0.0

 Attachments: 0001-More-debug.patch, 11984.branch-1.txt, 11984v2.txt


 Failed like this:
 {code}
 Tests run: 11, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 1.913 sec
 <<< FAILURE! - in org.apache.hadoop.hbase.TestClassFinder
 testClassFinderFiltersByClassInDirs(org.apache.hadoop.hbase.TestClassFinder)
 Time elapsed: 0.028 sec  <<< FAILURE!
 java.lang.AssertionError: expected:<-1> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByClassInDirs(TestClassFinder.java:259)
 testClassFinderCanFindClassesInDirs(org.apache.hadoop.hbase.TestClassFinder)
 Time elapsed: 0.017 sec  <<< FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at org.apache.hadoop.hbase.TestClassFinder.testClassFinderCanFindClassesInDirs(TestClassFinder.java:223)
 testClassFinderFiltersByNameInDirs(org.apache.hadoop.hbase.TestClassFinder)
 Time elapsed: 0.018 sec  <<< FAILURE!
 java.lang.AssertionError: expected:<-1> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByNameInDirs(TestClassFinder.java:242)
 {code}
 ... in precommit 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10912/console





[jira] [Commented] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-09-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135909#comment-14135909
 ] 

Andrew Purtell commented on HBASE-11992:


I also wonder about major service migration. In order to stage in customized 
replication endpoints do we require remove_peer then add_peer, instead of an 
online migration and update of existing replication sources? The former implies 
service downtime or some edits that should have been replicated will be missed 
between remove and (re)add, the latter does not. 

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Attachments: hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.





[jira] [Comment Edited] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-09-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135909#comment-14135909
 ] 

Andrew Purtell edited comment on HBASE-11992 at 9/16/14 6:33 PM:
-

I also wonder about service migration when upgrading from 0.98 to 1.0, which is 
supposed to be wire compatible and should have the same mixed-version 
deployment tolerance as 0.98. In order to stage in customized replication 
endpoints do we require remove_peer then add_peer, instead of an online 
migration and update of existing replication sources? The former implies 
service downtime or some edits that should have been replicated will be missed 
between remove and (re)add, the latter does not. 


was (Author: apurtell):
I also wonder about major service migration. In order to stage in customized 
replication endpoints do we require remove_peer then add_peer, instead of an 
online migration and update of existing replication sources? The former implies 
service downtime or some edits that should have been replicated will be missed 
between remove and (re)add, the latter does not. 

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Attachments: hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.





[jira] [Updated] (HBASE-11984) TestClassFinder failing on occasion

2014-09-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11984:
--
Fix Version/s: 0.99.1
   0.98.7

 TestClassFinder failing on occasion
 ---

 Key: HBASE-11984
 URL: https://issues.apache.org/jira/browse/HBASE-11984
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 0001-More-debug.patch, 11984.branch-1.txt, 11984v2.txt


 Failed like this:
 {code}
 Tests run: 11, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 1.913 sec
 <<< FAILURE! - in org.apache.hadoop.hbase.TestClassFinder
 testClassFinderFiltersByClassInDirs(org.apache.hadoop.hbase.TestClassFinder)
 Time elapsed: 0.028 sec  <<< FAILURE!
 java.lang.AssertionError: expected:<-1> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByClassInDirs(TestClassFinder.java:259)
 testClassFinderCanFindClassesInDirs(org.apache.hadoop.hbase.TestClassFinder)
 Time elapsed: 0.017 sec  <<< FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at org.apache.hadoop.hbase.TestClassFinder.testClassFinderCanFindClassesInDirs(TestClassFinder.java:223)
 testClassFinderFiltersByNameInDirs(org.apache.hadoop.hbase.TestClassFinder)
 Time elapsed: 0.018 sec  <<< FAILURE!
 java.lang.AssertionError: expected:<-1> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByNameInDirs(TestClassFinder.java:242)
 {code}
 ... in precommit 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10912/console





[jira] [Resolved] (HBASE-11984) TestClassFinder failing on occasion

2014-09-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-11984.
---
Resolution: Fixed

Pushed to master, branch-1 and 0.98. Let me just resolve. Will reopen if the 
issue recurs.

 TestClassFinder failing on occasion
 ---

 Key: HBASE-11984
 URL: https://issues.apache.org/jira/browse/HBASE-11984
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 0001-More-debug.patch, 11984.branch-1.txt, 11984v2.txt


 Failed like this:
 {code}
 Tests run: 11, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 1.913 sec
 <<< FAILURE! - in org.apache.hadoop.hbase.TestClassFinder
 testClassFinderFiltersByClassInDirs(org.apache.hadoop.hbase.TestClassFinder)
 Time elapsed: 0.028 sec  <<< FAILURE!
 java.lang.AssertionError: expected:<-1> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByClassInDirs(TestClassFinder.java:259)
 testClassFinderCanFindClassesInDirs(org.apache.hadoop.hbase.TestClassFinder)
 Time elapsed: 0.017 sec  <<< FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at org.apache.hadoop.hbase.TestClassFinder.testClassFinderCanFindClassesInDirs(TestClassFinder.java:223)
 testClassFinderFiltersByNameInDirs(org.apache.hadoop.hbase.TestClassFinder)
 Time elapsed: 0.018 sec  <<< FAILURE!
 java.lang.AssertionError: expected:<-1> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at org.apache.hadoop.hbase.TestClassFinder.testClassFinderFiltersByNameInDirs(TestClassFinder.java:242)
 {code}
 ... in precommit 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10912/console





[jira] [Comment Edited] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-09-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135909#comment-14135909
 ] 

Andrew Purtell edited comment on HBASE-11992 at 9/16/14 6:34 PM:
-

I also wonder about service migration when upgrading from 0.98 to 1.0, which is 
supposed to be wire compatible and should have the same mixed-version 
deployment tolerance as 0.98. In order to stage in customized replication 
endpoints do we require remove_peer then add_peer, instead of an online 
migration and update of existing replication sources? The former implies 
service downtime or some edits that should have been replicated will be missed 
between remove and (re)add, the latter does not. 

Of course, if we add this feature in 0.98 the upgrade concerns to 1.0 are moot, 
but we then need to deal with these intra-0.98 minor release variations.


was (Author: apurtell):
I also wonder about service migration when upgrading from 0.98 to 1.0, which is 
supposed to be wire compatible and should have the same mixed-version 
deployment tolerance as 0.98. In order to stage in customized replication 
endpoints do we require remove_peer then add_peer, instead of an online 
migration and update of existing replication sources? The former implies 
service downtime or some edits that should have been replicated will be missed 
between remove and (re)add, the latter does not. 

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Attachments: hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.





[jira] [Commented] (HBASE-11990) Make setting the start and stop row for a specific prefix easier

2014-09-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135922#comment-14135922
 ] 

Enis Soztutar commented on HBASE-11990:
---

I think this use case is valid, but an inclusive stop key and an inclusive 
stop key prefix are quite different. 
For a given inclusive stop key {X,Y,Z}:

Inclusive stop key: 
bytes {X,Y,Z} will be included, {X,Y,Z,W} will not be included

Inclusive stop key prefix:
bytes {X,Y,Z} will be included, {X,Y,Z,W} will be included
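The two semantics can be expressed as two predicates. This is a sketch with hypothetical helper names (not HBase API), using unsigned byte comparison as HBase row-key ordering requires:

```java
import java.util.Arrays;

public class StopKeySemantics {
    // Unsigned lexicographic comparison, as used for HBase row keys.
    // (A signed byte comparison would order 0xFF before 0x00.)
    static int compareUnsigned(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Inclusive stop key: row passes only if row <= stop.
    static boolean withinInclusiveStopKey(byte[] row, byte[] stop) {
        return compareUnsigned(row, stop) <= 0;
    }

    // Inclusive stop key prefix: row passes if row <= stop OR row starts
    // with stop, so {X,Y,Z,W} is still included for stop {X,Y,Z}.
    static boolean withinInclusiveStopPrefix(byte[] row, byte[] stop) {
        if (compareUnsigned(row, stop) <= 0) return true;
        return row.length >= stop.length
            && Arrays.equals(Arrays.copyOf(row, stop.length), stop);
    }
}
```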

 Make setting the start and stop row for a specific prefix easier
 

 Key: HBASE-11990
 URL: https://issues.apache.org/jira/browse/HBASE-11990
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Niels Basjes
 Attachments: HBASE-11990-20140916.patch


 If you want to set a scan from your application to scan for a specific row 
 prefix this is actually quite hard.
 As described in several places you can set the startRow to the prefix, yet 
 the stopRow should be set to the prefix "plus one".
 If the prefix is ASCII text put into a byte[] then this is easy, because you 
 can simply increment the last byte of the array. 
 But if your application uses real binary rowids you may run into the scenario 
 that your prefix is something like 
 {code}{ 0x12, 0x23, 0xFF, 0xFF }{code} Then the increment should be {code}{ 
 0x12, 0x24, 0x00, 0x00 }{code}
 I have prepared a proposed patch that makes setting these values correctly a 
 lot easier.
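 The carry logic described above can be sketched as follows. This illustrates the technique, not the actual HBASE-11990 patch; the class and method names are hypothetical.

```java
public class PrefixScanUtil {
    // Computes the exclusive stop row for scanning all rows that start with
    // 'prefix': the rightmost byte below 0xFF is incremented and every byte
    // after it carries over to 0x00, e.g. {0x12,0x23,0xFF,0xFF} becomes
    // {0x12,0x24,0x00,0x00}. Returns null when the prefix is all 0xFF bytes,
    // meaning the scan must run to the end of the table.
    public static byte[] calculateStopRow(byte[] prefix) {
        int offset = prefix.length;
        while (offset > 0 && prefix[offset - 1] == (byte) 0xFF) {
            offset--; // this byte cannot be incremented; carry into the next one
        }
        if (offset == 0) {
            return null; // all 0xFF: no exclusive upper bound exists
        }
        byte[] stopRow = new byte[prefix.length]; // trailing bytes default to 0x00
        System.arraycopy(prefix, 0, stopRow, 0, offset);
        stopRow[offset - 1]++;
        return stopRow;
    }
}
```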





[jira] [Commented] (HBASE-11976) Server startcode is not checked for bulk region assignment

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135950#comment-14135950
 ] 

Hudson commented on HBASE-11976:


FAILURE: Integrated in HBase-1.0 #188 (See 
[https://builds.apache.org/job/HBase-1.0/188/])
HBASE-11976 Server startcode is not checked for bulk region assignment (jxiang: 
rev 95bc9a337e420f185ef088d00dfcf15846348094)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java


 Server startcode is not checked for bulk region assignment
 --

 Key: HBASE-11976
 URL: https://issues.apache.org/jira/browse/HBASE-11976
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 2.0.0, 0.99.1

 Attachments: hbase-11976.patch


 I got the following failure yesterday. It looks like, when sending the open 
 region request, the server start code was not checked.
  
 {noformat}
 2014-09-14 19:36:45,565 ERROR 
 [B.defaultRpcServer.handler=24,queue=0,port=20020] 
 master.AssignmentManager: Failed to transition region from 
 {2706f577540a7d1b53b5a8f66178fbf2 state=PENDING_OPEN, ts=1410748604803, 
 server=a2428.halxg.cloudera.com,20020,1410746518223} on OPENED by 
 a2428.halxg.cloudera.com,20020,1410748599408: 
 2706f577540a7d1b53b5a8f66178fbf2 is not opening on 
 a2428.halxg.cloudera.com,20020,1410748599408
 ABORTING region server a2428.halxg.cloudera.com,20020,1410748599408: 
 Exception running postOpenDeployTasks; region=2706f577540a7d1b53b5a8f66178fbf2
 {noformat}





[jira] [Resolved] (HBASE-11825) Create Connection and ConnectionManager

2014-09-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-11825.
---
  Resolution: Fixed
Hadoop Flags: Reviewed

Pushed this to master and branch-1. Thanks Solomon, Carter. 

 Create Connection and ConnectionManager
 ---

 Key: HBASE-11825
 URL: https://issues.apache.org/jira/browse/HBASE-11825
 Project: HBase
  Issue Type: Improvement
Reporter: Carter
Assignee: Solomon Duskis
Priority: Critical
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE_11825.patch, HBASE_11825_v1.patch


 This is further cleanup of the HBase interface for 1.0 after implementing the 
 new Table and Admin interfaces.  Following Enis's guidelines in HBASE-10602, 
 this JIRA will generate a new ConnectionManager to replace HCM and Connection 
 to replace HConnection.
 For more detail, this JIRA intends to implement this portion:
 {code}
 interface Connection extends Closeable{
   Table getTable(), and rest of HConnection methods 
   getAdmin()
   // no deprecated methods (cache related etc)
 }
 @Deprecated
 interface HConnection extends Connection {
   @Deprecated
   HTableInterface getTable()
   // users are encouraged to use Connection
 }
 class ConnectionManager {
   createConnection(Configuration) // not sure whether we want a static 
 factory method to create connections or a ctor
 }
 @Deprecated
 class HCM extends ConnectionManager {
   // users are encouraged to use ConnectionManager
 }
 {code}





[jira] [Updated] (HBASE-11993) Expose the set of tables available in TableStateManager

2014-09-16 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-11993:

Attachment: HBASE-11993-v0.patch

 Expose the set of tables available in TableStateManager
 ---

 Key: HBASE-11993
 URL: https://issues.apache.org/jira/browse/HBASE-11993
 Project: HBase
  Issue Type: Improvement
  Components: master
Affects Versions: 2.0.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-11993-v0.patch


 TableStateManager has the full set of TableNames already in memory,
 we should expose the set of table names and use it instead of going to query 
 the fs descriptors.
 (Is there any reason why we don't have the descriptors in-memory too? saving 
 memory with tons of tables? do we even support tons of tables?)





[jira] [Created] (HBASE-11993) Expose the set of tables available in TableStateManager

2014-09-16 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-11993:
---

 Summary: Expose the set of tables available in TableStateManager
 Key: HBASE-11993
 URL: https://issues.apache.org/jira/browse/HBASE-11993
 Project: HBase
  Issue Type: Improvement
  Components: master
Affects Versions: 2.0.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 2.0.0
 Attachments: HBASE-11993-v0.patch

TableStateManager has the full set of TableNames already in memory,
we should expose the set of table names and use it instead of going to query 
the fs descriptors.

(Is there any reason why we don't have the descriptors in-memory too? saving 
memory with tons of tables? do we even support tons of tables?)





[jira] [Updated] (HBASE-11993) Expose the set of tables available in TableStateManager

2014-09-16 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-11993:

Status: Patch Available  (was: Open)

 Expose the set of tables available in TableStateManager
 ---

 Key: HBASE-11993
 URL: https://issues.apache.org/jira/browse/HBASE-11993
 Project: HBase
  Issue Type: Improvement
  Components: master
Affects Versions: 2.0.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-11993-v0.patch


 TableStateManager has the full set of TableNames already in memory,
 we should expose the set of table names and use it instead of going to query 
 the fs descriptors.
 (Is there any reason why we don't have the descriptors in-memory too? saving 
 memory with tons of tables? do we even support tons of tables?)





[jira] [Commented] (HBASE-11985) Document sizing rules of thumb

2014-09-16 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135971#comment-14135971
 ] 

Jonathan Hsieh commented on HBASE-11985:



Pending MOB work is targeted for the 100 KB - 10 MB range. TBD whether it makes 
sense for 10 MB - 64 MB (a few such cells would be OK, but not a lot).

A large number of *column families* probably means you are doing it wrong.



 Document sizing rules of thumb
 --

 Key: HBASE-11985
 URL: https://issues.apache.org/jira/browse/HBASE-11985
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones

 I'm looking for tuning/sizing rules of thumb to put in the Ref Guide.
 Info I have gleaned so far:
 A reasonable region size is between 10 GB and 50 GB.
 A reasonable maximum cell size is 1 MB to 10 MB. If your cells are larger 
 than 10 MB, consider storing the cell contents in HDFS and storing a 
 reference to the location in HBase. Pending MOB work for 10 MB - 64 MB window.
 When you size your regions and cells, keep in mind that a region cannot split 
 across a row. If your row size is too large, or your region size is too 
 small, you can end up with a single row per region, which is not a good 
 pattern. It is also possible that one big column causes splits while other 
 columns are tiny, and this may not be great.
 A large # of columns probably means you are doing it wrong.
 Column names need to be short because they get stored for every value 
 (barring encoding). They don't need to be self-documenting as in an RDBMS.





[jira] [Created] (HBASE-11994) PutCombiner floods the M/R log with repeated log messages.

2014-09-16 Thread Aditya Kishore (JIRA)
Aditya Kishore created HBASE-11994:
--

 Summary: PutCombiner floods the M/R log with repeated log messages.
 Key: HBASE-11994
 URL: https://issues.apache.org/jira/browse/HBASE-11994
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.6
Reporter: Aditya Kishore
Assignee: Aditya Kishore


{{o.a.h.h.mapreduce.PutCombiner}} logs an info message for each row that it 
receives in its reduce() function, flooding the M/R tasks log.

{noformat}
...
2014-09-12 12:38:02,907 INFO [MapRSpillThread] 
org.apache.hadoop.hbase.mapreduce.PutCombiner: Combined 1 Put(s) into 1.
2014-09-12 12:38:02,908 INFO [MapRSpillThread] 
org.apache.hadoop.hbase.mapreduce.PutCombiner: Combined 1 Put(s) into 1.
...{repeated hundreds of thousands of times}
{noformat}






[jira] [Commented] (HBASE-11906) Meta data loss with distributed log replay

2014-09-16 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135983#comment-14135983
 ] 

Jeffrey Zhong commented on HBASE-11906:
---

Thanks for the feedback!

{quote}
Should we try to delete the znode only if DLR is enabled, and internalFlush is 
indeed invoked?
{quote}
Yes, I can add the check for when DLR is enabled. As for whether internalFlush 
is indeed invoked, the current patch should already cover this because it's 
called after the flush during region close. If the flush fails, the delete will 
be skipped.

For [~saint@gmail.com]
{quote}
On the first patch, is this intentional:
+ zkw.sync(nodePath);
{quote}
This was intentional, but I don't think I need it. I can remove it upon 
check-in. Thanks.

[~saint@gmail.com] I think we can do the zk-to-filesystem migration later 
in one go. If you still like the v1 patch, I can check the v1 patch in. Thanks. 


 Meta data loss with distributed log replay
 --

 Key: HBASE-11906
 URL: https://issues.apache.org/jira/browse/HBASE-11906
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0, 2.0.0
Reporter: Jimmy Xiang
Assignee: Jeffrey Zhong
 Attachments: HBASE-11906.patch, debugging.patch, 
 hbase-11906-v2.patch, meta-data-loss-2.log, meta-data-loss-with-dlr.log


 In the attached log, you can see, before log replaying, the region is open on 
 e1205:
 {noformat}
 A3. 2014-09-05 16:38:46,705 INFO  
 [B.defaultRpcServer.handler=5,queue=2,port=20020] master.RegionStateStore: 
 Updating row 
 IntegrationTestBigLinkedList,\x90Jy\x04\xA7\x90Jp,1409959495482.cbb0d736ebfabcf4a07e5a7b395fcdf7.
  with 
 state=OPENopenSeqNum=40118237server=e1205.halxg.cloudera.com,20020,1409960280431
 {noformat}
 After the log replay, we got from meta the region is open on e1209
 {noformat}
 A4. 2014-09-05 16:41:12,257 INFO  [ActiveMasterManager] 
 master.AssignmentManager: Loading from meta: 
 {cbb0d736ebfabcf4a07e5a7b395fcdf7 state=OPEN, ts=1409960472257, 
 server=e1209.halxg.cloudera.com,20020,1409959391651}
 {noformat}
 The replayed edits show the log does have the edit expected:
 {noformat}
 2014-09-05 16:41:11,862 INFO  
 [B.defaultRpcServer.handler=18,queue=0,port=20020] 
 regionserver.RSRpcServices: Meta replay edit 
 type=PUT,mutation={totalColumns:4,families:{info:[{timestamp:1409960326705,tag:[3:\\x00\\x00\\x00\\x00\\x02bad],value:e1205.halxg.cloudera.com:20020,qualifier:server,vlen:30},{timestamp:1409960326705,tag:[3:\\x00\\x00\\x00\\x00\\x02bad],value:\\x00\\x00\\x01HH.\\x81o,qualifier:serverstartcode,vlen:8},{timestamp:1409960326705,tag:[3:\\x00\\x00\\x00\\x00\\x02bad],value:\\x00\\x00\\x00\\x00\\x02d'\\xDD,qualifier:seqnumDuringOpen,vlen:8},{timestamp:1409960326706,tag:[3:\\x00\\x00\\x00\\x00\\x02bad],value:OPEN,qualifier:state,vlen:4}]},row:IntegrationTestBigLinkedList,\\x90Jy\\x04\\xA7\\x90Jp,1409959495482.cbb0d736ebfabcf4a07e5a7b395fcdf7.}
 {noformat}
 Why did we pick up a wrong value with an older timestamp?
 {noformat}
 2014-09-05 16:41:11,063 INFO  
 [B.defaultRpcServer.handler=9,queue=0,port=20020] regionserver.RSRpcServices: 
 Meta replay edit 
 type=PUT,mutation={totalColumns:4,families:{info:[{timestamp:1409959994634,tag:[3:\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x99],value:e1209.halxg.cloudera.com:20020,qualifier:server,vlen:30},{timestamp:1409959994634,tag:[3:\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x99],value:\\x00\\x00\\x01HH
  
 \\xF1\\xA3,qualifier:serverstartcode,vlen:8},{timestamp:1409959994634,tag:[3:\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x99],value:\\x00\\x00\\x00\\x00\\x00\\x01\\xB7\\xAB,qualifier:seqnumDuringOpen,vlen:8},{timestamp:1409959994634,tag:[3:\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x99],value:OPEN,qualifier:state,vlen:4}]},row:IntegrationTestBigLinkedList,\\x90Jy\\x04\\xA7\\x90Jp,1409959495482.cbb0d736ebfabcf4a07e5a7b395fcdf7.}
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11994) PutCombiner floods the M/R log with repeated log messages.

2014-09-16 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135986#comment-14135986
 ] 

Aditya Kishore commented on HBASE-11994:


Also, the variable {{curSize}} is not reset to zero after flushing.

 PutCombiner floods the M/R log with repeated log messages.
 --

 Key: HBASE-11994
 URL: https://issues.apache.org/jira/browse/HBASE-11994
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.6
Reporter: Aditya Kishore
Assignee: Aditya Kishore

 {{o.a.h.h.mapreduce.PutCombiner}} logs an info message for each row that it 
 receives in its reduce() function, flooding the M/R task's log.
 {noformat}
 ...
 2014-09-12 12:38:02,907 INFO [MapRSpillThread] 
 org.apache.hadoop.hbase.mapreduce.PutCombiner: Combined 1 Put(s) into 1.
 2014-09-12 12:38:02,908 INFO [MapRSpillThread] 
 org.apache.hadoop.hbase.mapreduce.PutCombiner: Combined 1 Put(s) into 1.
 ...{repeated 100s of thousands of times.}
 {noformat}





[jira] [Commented] (HBASE-11993) Expose the set of tables available in TableStateManager

2014-09-16 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14135996#comment-14135996
 ] 

Andrey Stepachev commented on HBASE-11993:
--

Maybe it is better to merge TableStateManager and FSTableDescriptors. It seems 
that there are two caches being maintained that need to be kept in sync?

 Expose the set of tables available in TableStateManager
 ---

 Key: HBASE-11993
 URL: https://issues.apache.org/jira/browse/HBASE-11993
 Project: HBase
  Issue Type: Improvement
  Components: master
Affects Versions: 2.0.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-11993-v0.patch


 TableStateManager has the full set of TableNames already in memory,
 we should expose the set of table names and use it instead of going to query 
 the fs descriptors.
 (Is there any reason why we don't have the descriptors in-memory too? saving 
 memory with tons of tables? do we even support tons of tables?)





[jira] [Updated] (HBASE-11994) PutCombiner floods the M/R log with repeated log messages.

2014-09-16 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-11994:
---
Attachment: HBASE-11994-PutCombiner-floods-the-M-R-log-with-repe.patch

This simple patch changes the log level to DEBUG and resets {{curSize}} to 0 on 
flush.
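The curSize bug can be sketched with a self-contained toy buffer (the class and field names below are illustrative stand-ins, not the actual HBase PutCombiner code): once the size threshold is crossed and the buffer is flushed, the running size counter must go back to zero, otherwise every subsequent add would spuriously trigger another flush.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of a size-bounded combiner buffer illustrating the fix:
// emit the "Combined N Put(s)" message at DEBUG (not INFO), and reset
// curSize after each flush. Not the real o.a.h.h.mapreduce.PutCombiner.
class PutBufferSketch {
    static final long FLUSH_THRESHOLD = 100; // hypothetical byte threshold
    long curSize = 0;
    int flushCount = 0;
    List<String> buffer = new ArrayList<>();

    void add(String put, long size) {
        buffer.add(put);
        curSize += size;
        if (curSize >= FLUSH_THRESHOLD) {
            flush();
        }
    }

    void flush() {
        // In the patch this logging moves from INFO to DEBUG level.
        buffer.clear();
        flushCount++;
        curSize = 0; // the fix: without this, every later add() flushes again
    }
}
```

With the reset in place, a small put added after a flush no longer triggers an immediate second flush.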

 PutCombiner floods the M/R log with repeated log messages.
 --

 Key: HBASE-11994
 URL: https://issues.apache.org/jira/browse/HBASE-11994
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.6
Reporter: Aditya Kishore
Assignee: Aditya Kishore
 Attachments: 
 HBASE-11994-PutCombiner-floods-the-M-R-log-with-repe.patch


 {{o.a.h.h.mapreduce.PutCombiner}} logs an info message for each row that it 
 receives in its reduce() function, flooding the M/R task's log.
 {noformat}
 ...
 2014-09-12 12:38:02,907 INFO [MapRSpillThread] 
 org.apache.hadoop.hbase.mapreduce.PutCombiner: Combined 1 Put(s) into 1.
 2014-09-12 12:38:02,908 INFO [MapRSpillThread] 
 org.apache.hadoop.hbase.mapreduce.PutCombiner: Combined 1 Put(s) into 1.
 ...{repeated 100s of thousands of times.}
 {noformat}





[jira] [Updated] (HBASE-11994) PutCombiner floods the M/R log with repeated log messages.

2014-09-16 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-11994:
---
Status: Patch Available  (was: Open)

 PutCombiner floods the M/R log with repeated log messages.
 --

 Key: HBASE-11994
 URL: https://issues.apache.org/jira/browse/HBASE-11994
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.6
Reporter: Aditya Kishore
Assignee: Aditya Kishore
 Attachments: 
 HBASE-11994-PutCombiner-floods-the-M-R-log-with-repe.patch


 {{o.a.h.h.mapreduce.PutCombiner}} logs an info message for each row that it 
 receives in its reduce() function, flooding the M/R task's log.
 {noformat}
 ...
 2014-09-12 12:38:02,907 INFO [MapRSpillThread] 
 org.apache.hadoop.hbase.mapreduce.PutCombiner: Combined 1 Put(s) into 1.
 2014-09-12 12:38:02,908 INFO [MapRSpillThread] 
 org.apache.hadoop.hbase.mapreduce.PutCombiner: Combined 1 Put(s) into 1.
 ...{repeated 100s of thousands of times.}
 {noformat}





[jira] [Commented] (HBASE-11974) When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException

2014-09-16 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136003#comment-14136003
 ] 

Andrey Stepachev commented on HBASE-11974:
--

But if the scanner fails with NSRE, it needs to retry the meta lookup anyway, 
right? And with the current master, meta is always collocated; without the 
master it would not be available anyway.
The table availability method is relatively lightweight. I haven't measured it, 
but it should be very fast.
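The idea under discussion can be expressed as exception translation on the failure path only: when a scan fails with NotServingRegionException, consult table state and rethrow as TableNotEnabledException if the table is disabled. The classes below are self-contained toy stand-ins, not the real org.apache.hadoop.hbase exception types, and the table-state flag stands in for the actual availability lookup.

```java
// Toy sketch: translate NSRE into TableNotEnabledException only when the
// table turns out to be disabled. The extra availability check (an RPC in
// the real client) happens solely on the failure path.
class NotServingRegionException extends RuntimeException {}
class TableNotEnabledException extends RuntimeException {}

class ScanRetrySketch {
    static RuntimeException translate(RuntimeException cause, boolean tableEnabled) {
        if (cause instanceof NotServingRegionException && !tableEnabled) {
            return new TableNotEnabledException();
        }
        // Any other failure, or an enabled table, propagates unchanged.
        return cause;
    }
}
```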

 When a disabled table is scanned, NotServingRegionException is thrown instead 
 of TableNotEnabledException
 -

 Key: HBASE-11974
 URL: https://issues.apache.org/jira/browse/HBASE-11974
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 11974-test.patch, 11974-v1.txt, 11974-v2.txt, 
 11974-v3.txt, 11974-v4.txt, 11974-v5.txt


 When a disabled table is scanned, TableNotEnabledException should be thrown.
 However, currently NotServingRegionException is thrown.
 Thanks to Romil Choksi who discovered this problem.





[jira] [Updated] (HBASE-11976) Server startcode is not checked for bulk region assignment

2014-09-16 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11976:

Fix Version/s: 0.98.6.1

 Server startcode is not checked for bulk region assignment
 --

 Key: HBASE-11976
 URL: https://issues.apache.org/jira/browse/HBASE-11976
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 2.0.0, 0.99.1, 0.98.6.1

 Attachments: hbase-11976.patch


 I got the following failure yesterday. It looks like, when the open region 
 request was sent, the region server failed to check the server start code.
  
 {noformat}
 2014-09-14 19:36:45,565 ERROR 
 [B.defaultRpcServer.handler=24,queue=0,port=20020] 
 master.AssignmentManager: Failed to transition region from 
 {2706f577540a7d1b53b5a8f66178fbf2 state=PENDING_OPEN, ts=1410748604803, 
 server=a2428.halxg.cloudera.com,20020,1410746518223} on OPENED by 
 a2428.halxg.cloudera.com,20020,1410748599408: 
 2706f577540a7d1b53b5a8f66178fbf2 is not opening on 
 a2428.halxg.cloudera.com,20020,1410748599408
 ABORTING region server a2428.halxg.cloudera.com,20020,1410748599408: 
 Exception running postOpenDeployTasks; region=2706f577540a7d1b53b5a8f66178fbf2
 {noformat}
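In the log above the host and port match but the startcodes differ (1410746518223 vs 1410748599408), i.e. the server restarted between assignment and the OPENED report. A minimal sketch of the missing check, assuming the standard "host,port,startcode" server-name encoding (the class name is illustrative, not HBase's ServerName):

```java
// Sketch: an OPENED transition should only be accepted when the reporting
// server's startcode matches the one the region was assigned to, not just
// the host and port.
class ServerNameSketch {
    final String host;
    final int port;
    final long startcode;

    ServerNameSketch(String name) {
        // HBase server names are encoded as "host,port,startcode"
        String[] parts = name.split(",");
        host = parts[0];
        port = Integer.parseInt(parts[1]);
        startcode = Long.parseLong(parts[2]);
    }

    boolean sameServerInstance(ServerNameSketch other) {
        return host.equals(other.host) && port == other.port
                && startcode == other.startcode; // startcode must match too
    }
}
```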





[jira] [Commented] (HBASE-11974) When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException

2014-09-16 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136021#comment-14136021
 ] 

Elliott Clark commented on HBASE-11974:
---

bq.And with current master meta is always collocated.

That's configurable and probably off for 1.0. 
bq.Table availability method is relatively lightweight. Not measured it, but it 
should be very fast.
Either way, adding another RPC call just for cleanliness of the exception name 
is not a good trade-off, IMO.

 When a disabled table is scanned, NotServingRegionException is thrown instead 
 of TableNotEnabledException
 -

 Key: HBASE-11974
 URL: https://issues.apache.org/jira/browse/HBASE-11974
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 11974-test.patch, 11974-v1.txt, 11974-v2.txt, 
 11974-v3.txt, 11974-v4.txt, 11974-v5.txt


 When a disabled table is scanned, TableNotEnabledException should be thrown.
 However, currently NotServingRegionException is thrown.
 Thanks to Romil Choksi who discovered this problem.





[jira] [Commented] (HBASE-11825) Create Connection and ConnectionManager

2014-09-16 Thread Solomon Duskis (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136024#comment-14136024
 ] 

Solomon Duskis commented on HBASE-11825:


That's awesome.  Glad it all worked out, and very excited that my first patch 
is now part of master.

I'll create another issue for the cleanup work that will move towards replacing 
HConnection with Connection and HConnectionManager with ConnectionFactory.  
I'll aim for the low-hanging fruit first.

 Create Connection and ConnectionManager
 ---

 Key: HBASE-11825
 URL: https://issues.apache.org/jira/browse/HBASE-11825
 Project: HBase
  Issue Type: Improvement
Reporter: Carter
Assignee: Solomon Duskis
Priority: Critical
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE_11825.patch, HBASE_11825_v1.patch


 This is further cleanup of the HBase interface for 1.0 after implementing the 
 new Table and Admin interfaces.  Following Enis's guidelines in HBASE-10602, 
 this JIRA will generate a new ConnectionManager to replace HCM and Connection 
 to replace HConnection.
 For more detail, this JIRA intends to implement this portion:
 {code}
 interface Connection extends Closeable{
   Table getTable(), and rest of HConnection methods 
   getAdmin()
   // no deprecated methods (cache related etc)
 }
 @Deprecated
 interface HConnection extends Connection {
   @Deprecated
   HTableInterface getTable()
   // users are encouraged to use Connection
 }
 class ConnectionManager {
   createConnection(Configuration) // not sure whether we want a static 
 factory method to create connections or a ctor
 }
 @Deprecated
 class HCM extends ConnectionManager {
   // users are encouraged to use ConnectionManager
 }
 {code}
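The call pattern this proposal enables can be illustrated with a self-contained toy model (minimal package-private stand-ins, not the real HBase client classes or signatures): a factory hands out a Connection, the Connection hands out Tables, and the Connection is what gets closed.

```java
import java.io.Closeable;

// Toy model of the proposed API shape: factory -> Connection -> Table.
interface Table {
    String getName();
}

interface Connection extends Closeable {
    Table getTable(String name);
}

class ConnectionManagerSketch {
    static Connection createConnection() {
        return new Connection() {
            public Table getTable(String name) {
                return () -> name; // Table has one method, so a lambda suffices
            }
            public void close() {
                // real implementation would release pooled resources here
            }
        };
    }
}
```

Whether createConnection is a static factory or a constructor (the open question above) does not change this usage shape.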





[jira] [Commented] (HBASE-11825) Create Connection and ConnectionManager

2014-09-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136028#comment-14136028
 ] 

Enis Soztutar commented on HBASE-11825:
---

bq. That's awesome. Glad it all worked out, and very excited that my first 
patch is now part of master.
Yep. It will come in the 1.0 release, which is hopefully soon.
bq. I'll create another issue relating to clean up work that will work towards 
HConnection with Connection and HConnectionManager with ConnectionFactory. I'll 
aim for the low hanging fruit first.
Sounds like the right approach. 
Thanks again. 

 Create Connection and ConnectionManager
 ---

 Key: HBASE-11825
 URL: https://issues.apache.org/jira/browse/HBASE-11825
 Project: HBase
  Issue Type: Improvement
Reporter: Carter
Assignee: Solomon Duskis
Priority: Critical
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE_11825.patch, HBASE_11825_v1.patch


 This is further cleanup of the HBase interface for 1.0 after implementing the 
 new Table and Admin interfaces.  Following Enis's guidelines in HBASE-10602, 
 this JIRA will generate a new ConnectionManager to replace HCM and Connection 
 to replace HConnection.
 For more detail, this JIRA intends to implement this portion:
 {code}
 interface Connection extends Closeable{
   Table getTable(), and rest of HConnection methods 
   getAdmin()
   // no deprecated methods (cache related etc)
 }
 @Deprecated
 interface HConnection extends Connection {
   @Deprecated
   HTableInterface getTable()
   // users are encouraged to use Connection
 }
 class ConnectionManager {
   createConnection(Configuration) // not sure whether we want a static 
 factory method to create connections or a ctor
 }
 @Deprecated
 class HCM extends ConnectionManager {
   // users are encouraged to use ConnectionManager
 }
 {code}




