[jira] [Commented] (HBASE-9272) A simple parallel, unordered scanner

2013-08-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745816#comment-13745816
 ] 

Lars Hofhansl commented on HBASE-9272:
--

Yeah, that was the one where I convinced Stack that we do not need this. :)

 A simple parallel, unordered scanner
 

 Key: HBASE-9272
 URL: https://issues.apache.org/jira/browse/HBASE-9272
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Priority: Minor
 Attachments: ParallelClientScanner.java


 The contract of ClientScanner is to return rows in sort order. That limits 
 the order in which regions can be scanned.
 I propose a simple ParallelScanner that does not have this requirement and 
 queries regions in parallel, returning whatever gets returned first.
 This is generally useful for scans that filter a lot of data on the server, 
 or in cases where the client can react very quickly to the returned data.
 I have a simple prototype (it doesn't do error handling right, and might be a 
 bit heavy on the synchronization side - it uses a BlockingQueue to hand data 
 between the client using the scanner and the threads doing the scanning; it 
 also could potentially starve some scanners long enough to time out at the 
 server).
 On the plus side, it's only about 130 lines of code. :)
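 A minimal sketch of the idea described above (not the attached 
 ParallelClientScanner.java): one worker thread per key range, each handing 
 Results to the consumer through a shared BlockingQueue. It assumes the 
 0.94-era client API and that the caller supplies the per-region [start, stop) 
 ranges; the error-handling and scanner-timeout caveats above still apply.
 {code}
 import java.util.List;
 import java.util.concurrent.*;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.*;
 import org.apache.hadoop.hbase.util.Bytes;

 public class ParallelScannerSketch {
   private static final Result DONE = new Result(); // per-worker poison pill

   /** Scans each range in its own thread, handing rows to the caller unordered. */
   public static void scanUnordered(final Configuration conf, final String table,
       List<byte[][]> ranges, int queueSize) throws Exception {
     final BlockingQueue<Result> queue = new ArrayBlockingQueue<Result>(queueSize);
     ExecutorService pool = Executors.newFixedThreadPool(ranges.size());
     for (final byte[][] range : ranges) {
       pool.submit(new Runnable() {
         public void run() {
           try {
             HTable htable = new HTable(conf, table);
             ResultScanner rs = htable.getScanner(new Scan(range[0], range[1]));
             for (Result r = rs.next(); r != null; r = rs.next()) {
               queue.put(r); // may block: this is the "heavy synchronization" part
             }
             rs.close();
             htable.close();
             queue.put(DONE);
           } catch (Exception e) {
             e.printStackTrace(); // prototype: no real error handling
           }
         }
       });
     }
     // Consumer sees rows in whatever order the regions return them.
     int finished = 0;
     while (finished < ranges.size()) {
       Result r = queue.take();
       if (r == DONE) { finished++; continue; }
       System.out.println(Bytes.toStringBinary(r.getRow()));
     }
     pool.shutdown();
   }
 }
 {code}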

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 from 2.4.1)

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745826#comment-13745826
 ] 

Hudson commented on HBASE-8165:
---

SUCCESS: Integrated in HBase-TRUNK #4420 (See 
[https://builds.apache.org/job/HBase-TRUNK/4420/])
HBASE-8165 Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf 
to 2.5 from 2.4.1) (stack: rev 1516084)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AccessControlProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AggregateProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AuthenticationProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/CellProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterIdProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterStatusProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ComparatorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ErrorHandlingProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FSProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FilterProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HFileProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/LoadBalancerProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MapReduceProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterAdminProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterMonitorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutation.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutationProcessorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RPCProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RegionServerStatusProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RowProcessorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/SecureBulkLoadProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/Tracing.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/MasterAdmin.proto
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellSetMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ColumnSchemaMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/StorageClusterStatusMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableInfoMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableListMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableSchemaMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/VersionMessage.java
* 

[jira] [Commented] (HBASE-9281) user_permission command encounters NullPointerException

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745827#comment-13745827
 ] 

Hudson commented on HBASE-9281:
---

SUCCESS: Integrated in HBase-TRUNK #4420 (See 
[https://builds.apache.org/job/HBase-TRUNK/4420/])
HBASE-9281 user_permission command encounters NullPointerException (Ted Yu) 
(tedyu: rev 1516074)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java


 user_permission command encounters NullPointerException
 ---

 Key: HBASE-9281
 URL: https://issues.apache.org/jira/browse/HBASE-9281
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9281-v1.txt


 As user hbase, user_permission command gave:
 {code}
 java.io.IOException: java.io.IOException
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
   at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.security.access.AccessControlLists.getUserTablePermissions(AccessControlLists.java:484)
   at 
 org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:1341)
   at 
 org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.getUserPermissions(AccessControlProtos.java:9949)
   at 
 org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10107)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5121)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3211)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26851)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2147)
   ... 1 more
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
   at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:235)
   at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1304)
   at 
 org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:87)
   at 
 org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:84)
   at 
 org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
   at 
 org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:98)
   at 
 org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:90)
   at 
 org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:67)
   at 
 org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$BlockingStub.getUserPermissions(AccessControlProtos.java:10304)
   at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.getUserPermissions(ProtobufUtil.java:1974)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
 ...
 Caused by: 
 org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
 java.io.IOException
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
   at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.security.access.AccessControlLists.getUserTablePermissions(AccessControlLists.java:484)
   at 
 org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:1341)
   at 
 org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.getUserPermissions(AccessControlProtos.java:9949)
   at 
 org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10107)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5121)
   at 
 

[jira] [Updated] (HBASE-9261) Add cp hooks after {start|close}RegionOperation in batchMutate

2013-08-21 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-9261:
--

Status: Patch Available  (was: Open)

 Add cp hooks after {start|close}RegionOperation in batchMutate
 --

 Key: HBASE-9261
 URL: https://issues.apache.org/jira/browse/HBASE-9261
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9261.patch


 These hooks help with checking resources (blocking memstore size) and taking 
 the necessary locks on the index region while performing a batch of mutations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9261) Add cp hooks after {start|close}RegionOperation in batchMutate

2013-08-21 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-9261:
--

Attachment: HBASE-9261.patch

Patch for trunk.

 Add cp hooks after {start|close}RegionOperation in batchMutate
 --

 Key: HBASE-9261
 URL: https://issues.apache.org/jira/browse/HBASE-9261
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9261.patch


 These hooks help with checking resources (blocking memstore size) and taking 
 the necessary locks on the index region while performing a batch of mutations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9286) ageOfLastShippedOp replication metric doesn't update if the slave regionserver is stalled

2013-08-21 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745871#comment-13745871
 ] 

Alex Newman commented on HBASE-9286:


or does it make more sense to focus on 0.95.0 for now?

 ageOfLastShippedOp replication metric doesn't update if the slave 
 regionserver is stalled
 -

 Key: HBASE-9286
 URL: https://issues.apache.org/jira/browse/HBASE-9286
 Project: HBase
  Issue Type: Bug
Reporter: Alex Newman
Assignee: Alex Newman
 Attachments: 
 0001-HBASE-9286.-ageOfLastShippedOp-replication-metric-do.patch


 In the replication manager:
   HRegionInterface rrs = getRS();
   rrs.replicateLogEntries(Arrays.copyOf(this.entriesArray, currentNbEntries));

   this.metrics.setAgeOfLastShippedOp(
       this.entriesArray[currentNbEntries-1].getKey().getWriteTime());
   break;
 This makes sense, but is wrong. The problem is that rrs.replicateLogEntries 
 will block for a very long time if the slave server is suspended or 
 unavailable but not down.
 However, this is easy to fix. We just need to call refreshAgeOfLastShippedOp() 
 on a regular basis, in a different thread. I've attached a patch which fixes 
 this for cdh4. I can make one for trunk and the like as well if you need me 
 to, but it's a small change.
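 A minimal sketch of the suggested fix, assuming a metrics object that exposes 
 the refreshAgeOfLastShippedOp() method mentioned above (the class and 
 interface names here are illustrative, not the actual patch):
 {code}
 import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;

 class AgeOfLastShippedOpRefresher {
   /** Stand-in for the real replication metrics object; only what the sketch needs. */
   interface ReplicationMetrics {
     void refreshAgeOfLastShippedOp();
   }

   private final ScheduledExecutorService scheduler =
       Executors.newSingleThreadScheduledExecutor();

   /** Refresh the metric every periodMs, even while replicateLogEntries() is blocked. */
   void start(final ReplicationMetrics metrics, long periodMs) {
     scheduler.scheduleAtFixedRate(new Runnable() {
       public void run() {
         metrics.refreshAgeOfLastShippedOp();
       }
     }, periodMs, periodMs, TimeUnit.MILLISECONDS);
   }

   void stop() {
     scheduler.shutdownNow();
   }
 }
 {code}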

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-9287:
--

 Summary: TestCatalogTracker depends on the execution order
 Key: HBASE-9287
 URL: https://issues.apache.org/jira/browse/HBASE-9287
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.94.11
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.94.12
 Attachments: HBASE-9287-v0.patch

Some CatalogTracker tests don't delete the ROOT location.
For example, if testNoTimeoutWaitForRoot() runs before 
testInterruptWaitOnMetaAndRoot() you get:
{code}
junit.framework.AssertionFailedError: Expected: null but was: 
example.org,1234,1377038834244
at junit.framework.Assert.fail(Assert.java:50)
at junit.framework.Assert.assertTrue(Assert.java:20)
at junit.framework.Assert.assertNull(Assert.java:237)
at junit.framework.Assert.assertNull(Assert.java:230)
at 
org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
{code}
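A minimal sketch of one way to make the tests order-independent, assuming the 
test has some helper to clear the published ROOT location (the helper below is 
hypothetical, not the actual TestCatalogTracker code):
{code}
import org.junit.After;

public class TestCatalogTrackerCleanupSketch {
  @After
  public void cleanupRootLocation() throws Exception {
    // Clear whatever ROOT location a previous test may have left behind, so
    // tests like testInterruptWaitOnMetaAndRoot() start from a clean state
    // regardless of execution order.
    deleteRootLocationIfPresent();
  }

  // Hypothetical helper: e.g. remove the ROOT server znode from the test ZK.
  private void deleteRootLocationIfPresent() throws Exception {
  }
}
{code}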

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9287:
---

Attachment: HBASE-7360-v0.patch

 TestCatalogTracker depends on the execution order
 -

 Key: HBASE-9287
 URL: https://issues.apache.org/jira/browse/HBASE-9287
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.94.11
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.94.12

 Attachments: HBASE-9287-v0.patch


 Some CatalogTracker tests don't delete the ROOT location.
 For example, if testNoTimeoutWaitForRoot() runs before 
 testInterruptWaitOnMetaAndRoot() you get:
 {code}
 junit.framework.AssertionFailedError: Expected: null but was: 
 example.org,1234,1377038834244
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertNull(Assert.java:237)
   at junit.framework.Assert.assertNull(Assert.java:230)
   at 
 org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9287:
---

Attachment: (was: HBASE-7360-v0.patch)

 TestCatalogTracker depends on the execution order
 -

 Key: HBASE-9287
 URL: https://issues.apache.org/jira/browse/HBASE-9287
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.94.11
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.94.12

 Attachments: HBASE-9287-v0.patch


 Some CatalogTracker tests don't delete the ROOT location.
 For example, if testNoTimeoutWaitForRoot() runs before 
 testInterruptWaitOnMetaAndRoot() you get:
 {code}
 junit.framework.AssertionFailedError: Expected: null but was: 
 example.org,1234,1377038834244
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertNull(Assert.java:237)
   at junit.framework.Assert.assertNull(Assert.java:230)
   at 
 org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9287:
---

Attachment: HBASE-9287-v0.patch

 TestCatalogTracker depends on the execution order
 -

 Key: HBASE-9287
 URL: https://issues.apache.org/jira/browse/HBASE-9287
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.94.11
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.94.12

 Attachments: HBASE-9287-v0.patch


 Some CatalogTracker tests don't delete the ROOT location.
 For example, if testNoTimeoutWaitForRoot() runs before 
 testInterruptWaitOnMetaAndRoot() you get:
 {code}
 junit.framework.AssertionFailedError: Expected: null but was: 
 example.org,1234,1377038834244
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertNull(Assert.java:237)
   at junit.framework.Assert.assertNull(Assert.java:230)
   at 
 org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 from 2.4.1)

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745880#comment-13745880
 ] 

Hudson commented on HBASE-8165:
---

SUCCESS: Integrated in hbase-0.95 #480 (See 
[https://builds.apache.org/job/hbase-0.95/480/])
HBASE-8165 Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf 
to 2.5 from 2.4.1) (stack: rev 1516086)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/branches/0.95/hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AccessControlProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AggregateProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AuthenticationProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/CellProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterIdProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterStatusProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ComparatorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ErrorHandlingProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FSProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FilterProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HFileProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/LoadBalancerProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MapReduceProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterAdminProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterMonitorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutation.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutationProcessorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RPCProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RegionServerStatusProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RowProcessorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/SecureBulkLoadProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/Tracing.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java
* /hbase/branches/0.95/hbase-protocol/src/main/protobuf/MasterAdmin.proto
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellSetMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ColumnSchemaMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/StorageClusterStatusMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableInfoMessage.java
* 

[jira] [Commented] (HBASE-9193) Make what Chaos monkey actions run configurable per test.

2013-08-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745960#comment-13745960
 ] 

Enis Soztutar commented on HBASE-9193:
--

You can already have a CM that has 0 policies, thus having no threads, no? 
Anyway, let's keep this patch since it is already in. We can revisit this in 
the new patch / review cycle around this area. 

 Make what Chaos monkey actions run configurable per test.
 -

 Key: HBASE-9193
 URL: https://issues.apache.org/jira/browse/HBASE-9193
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.0, 0.95.1
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 0.98.0, 0.95.2

 Attachments: HBASE-9193-0.patch, HBASE-9193-1.patch


 Would be awesome to have every IT test derive from the same base class so 
 that all of them allow setting which ChaosMonkey.Actions to run.
 Something like:
 {code}
 hbase org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify 
 -Dverify.reduce.tasks=12 -DchaosMonkeyActionSet=SlowDeterministic 
 loadAndVerify
 {code}
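 As a rough illustration of how a shared base class might consume that 
 -DchaosMonkeyActionSet property (the class and default value below are 
 assumptions, not the actual HBase test code):
 {code}
 public class ChaosMonkeyActionSetSketch {
   /** Reads the action-set name, defaulting to a slow-deterministic style policy. */
   public static String chosenActionSet() {
     return System.getProperty("chaosMonkeyActionSet", "SlowDeterministic");
   }

   public static void main(String[] args) {
     System.out.println("Running chaos monkey action set: " + chosenActionSet());
   }
 }
 {code}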

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9108) LoadTestTool need to have a way to ignore keys which were failed during write.

2013-08-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745971#comment-13745971
 ] 

Enis Soztutar commented on HBASE-9108:
--

I am not sure ignoring exceptions is the right way to do the test. Agreed that 
on CM actions which cause extended unavailability, LoadTestTool will fail the 
write. But shouldn't that be an actual error, indicating that either the CM is 
acting very aggressively, or the cluster does not recover from errors quickly 
enough, causing poor MTTR? 

 LoadTestTool need to have a way to ignore keys which were failed during 
 write. 
 ---

 Key: HBASE-9108
 URL: https://issues.apache.org/jira/browse/HBASE-9108
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.95.0, 0.95.1, 0.94.9, 0.94.10
Reporter: gautam
Assignee: gautam
Priority: Critical
 Attachments: 9108.patch._trunk.5, 9108.patch._trunk.6, 
 HBASE-9108.patch._trunk.2, HBASE-9108.patch._trunk.3, 
 HBASE-9108.patch._trunk.4, HBASE-9108.patch._trunk.7, 
 HBASE-9108.patch._trunk.8

   Original Estimate: 48h
  Remaining Estimate: 48h

 While running the chaosmonkey integration tests, we found that writes 
 sometimes fail when cluster components are restarted/stopped/killed, etc.
 The data key being put via LoadTestTool is added to the failed key set, and 
 at the end of the test this failed key set is checked for any entries to 
 assert failures.
 While doing fail-over testing, it is expected that some of the keys may go 
 unwritten. The point here is to validate that whatever gets into HBase on 
 an unstable cluster really goes in, and hence reads should be 100% for 
 whatever keys went in successfully.
 Currently LoadTestTool has strict checks that validate every key was 
 written. If any key is not written, it fails.
 I want to loosen this constraint by allowing users to pass in a set of 
 exceptions they expect when doing put/write operations against HBase. If one 
 of these expected exceptions is thrown while writing a key, the failed 
 key is ignored, and hence won't be considered again for subsequent 
 writes or reads.
 This can be passed to the load test tool as the CSV list parameter 
 -allowed_write_exceptions, or it can be passed through hbase-site.xml by 
 setting a value for test.ignore.exceptions.during.write.
 Here is the usage:
 -allowed_write_exceptions 
 java.io.EOFException,org.apache.hadoop.hbase.NotServingRegionException,org.apache.hadoop.hbase.client.NoServerForRegionException,org.apache.hadoop.hbase.ipc.ServerNotRunningYetException
 Hence the existing integration tests can also make use of this change by 
 passing it as a property in hbase-site.xml.
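 A minimal sketch of the described filtering, using the flag and property names 
 from this description (the helper class itself is illustrative, not the actual 
 LoadTestTool code):
 {code}
 import java.util.Arrays;
 import java.util.HashSet;
 import java.util.Set;

 public class AllowedWriteExceptions {
   private final Set<String> allowed = new HashSet<String>();

   /** csv: the -allowed_write_exceptions / test.ignore.exceptions.during.write value. */
   public AllowedWriteExceptions(String csv) {
     if (csv != null && !csv.isEmpty()) {
       allowed.addAll(Arrays.asList(csv.split(",")));
     }
   }

   /** True if this write failure should be ignored and the key dropped from verification. */
   public boolean shouldIgnore(Throwable writeFailure) {
     for (Throwable t = writeFailure; t != null; t = t.getCause()) {
       if (allowed.contains(t.getClass().getName())) {
         return true;
       }
     }
     return false;
   }
 }
 {code}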

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9277) REST should use listTableNames to list tables

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745980#comment-13745980
 ] 

Hudson commented on HBASE-9277:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/259/])
HBASE-9277. REST should use listTableNames to list tables (apurtell: rev 
1516046)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/RootResource.java


 REST should use listTableNames to list tables
 -

 Key: HBASE-9277
 URL: https://issues.apache.org/jira/browse/HBASE-9277
 Project: HBase
  Issue Type: Sub-task
  Components: REST
Affects Versions: 0.98.0, 0.94.12, 0.96.0, 0.95.3
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.0, 0.94.12, 0.96.0, 0.95.3

 Attachments: 9277-0.94.patch, 9277.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9281) user_permission command encounters NullPointerException

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745978#comment-13745978
 ] 

Hudson commented on HBASE-9281:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/259/])
HBASE-9281 user_permission command encounters NullPointerException (Ted Yu) 
(tedyu: rev 1516073)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java


 user_permission command encounters NullPointerException
 ---

 Key: HBASE-9281
 URL: https://issues.apache.org/jira/browse/HBASE-9281
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9281-v1.txt


 As user hbase, user_permission command gave:
 {code}
 java.io.IOException: java.io.IOException
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
   at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.security.access.AccessControlLists.getUserTablePermissions(AccessControlLists.java:484)
   at 
 org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:1341)
   at 
 org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.getUserPermissions(AccessControlProtos.java:9949)
   at 
 org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10107)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5121)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3211)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26851)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2147)
   ... 1 more
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
   at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:235)
   at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1304)
   at 
 org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:87)
   at 
 org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:84)
   at 
 org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
   at 
 org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:98)
   at 
 org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:90)
   at 
 org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:67)
   at 
 org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$BlockingStub.getUserPermissions(AccessControlProtos.java:10304)
   at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.getUserPermissions(ProtobufUtil.java:1974)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
 ...
 Caused by: 
 org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
 java.io.IOException
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
   at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.security.access.AccessControlLists.getUserTablePermissions(AccessControlLists.java:484)
   at 
 org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:1341)
   at 
 org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.getUserPermissions(AccessControlProtos.java:9949)
   at 
 org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10107)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5121)
   at 

[jira] [Commented] (HBASE-9273) Consolidate isSystemTable checking

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745981#comment-13745981
 ] 

Hudson commented on HBASE-9273:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/259/])
HBASE-9273 Consolidate isSystemTable checking (jxiang: rev 1516041)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/model/TableRegionModel.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


 Consolidate isSystemTable checking
 --

 Key: HBASE-9273
 URL: https://issues.apache.org/jira/browse/HBASE-9273
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.98.0, 0.96.0

 Attachments: trunk-9273.patch


 We have TableName#isSystemTable and HTableDescriptor#isSystemTable.  We 
 should get rid of one to avoid confusion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 from 2.4.1)

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745976#comment-13745976
 ] 

Hudson commented on HBASE-8165:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/259/])
HBASE-8165 Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf 
to 2.5 from 2.4.1) (stack: rev 1516086)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/branches/0.95/hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AccessControlProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AggregateProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AuthenticationProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/CellProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterIdProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterStatusProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ComparatorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ErrorHandlingProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FSProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FilterProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HFileProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/LoadBalancerProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MapReduceProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterAdminProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterMonitorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutation.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutationProcessorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RPCProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RegionServerStatusProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RowProcessorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/SecureBulkLoadProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/Tracing.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java
* /hbase/branches/0.95/hbase-protocol/src/main/protobuf/MasterAdmin.proto
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellSetMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ColumnSchemaMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/StorageClusterStatusMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableInfoMessage.java
* 

[jira] [Commented] (HBASE-9279) Thrift should use listTableNames to list tables

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745977#comment-13745977
 ] 

Hudson commented on HBASE-9279:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/259/])
HBASE-9279. Thrift should use listTableNames to list tables (apurtell: rev 
1516050)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java


 Thrift should use listTableNames to list tables
 ---

 Key: HBASE-9279
 URL: https://issues.apache.org/jira/browse/HBASE-9279
 Project: HBase
  Issue Type: Sub-task
  Components: Thrift
Affects Versions: 0.98.0, 0.94.11, 0.96.0, 0.95.3
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.0, 0.94.12, 0.96.0, 0.95.3

 Attachments: 9279-0.94.patch, 9279.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9276) List tables API should filter with isSystemTable

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745979#comment-13745979
 ] 

Hudson commented on HBASE-9276:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/259/])
HBASE-9276 List tables API should filter with isSystemTable (stack: rev 1516044)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


 List tables API should filter with isSystemTable
 

 Key: HBASE-9276
 URL: https://issues.apache.org/jira/browse/HBASE-9276
 Project: HBase
  Issue Type: Sub-task
  Components: shell
Affects Versions: 0.98.0, 0.96.0, 0.95.3
Reporter: Andrew Purtell
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.0

 Attachments: 
 0001-HBASE-9276-List-tables-API-should-filter-with-isSyst.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9249) Add cp hook before setting PONR in split

2013-08-21 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-9249:
--

Attachment: HBASE-9249.patch

Patch for trunk

 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9249.patch


 This hook helps to perform the split on a user region and the corresponding 
 index region such that either both are split or neither is.
 With this hook, the split for the user and index regions proceeds as follows:
 user region
 ===
 1) Create the splitting znode for the user region split
 2) Close the parent user region
 3) Split the user region store files
 4) Instantiate the child regions of the user region
 Through the new hook we can drive the index region transitions as below:
 index region
 ===
 5) Create the splitting znode for the index region split
 6) Close the parent index region
 7) Split the store files of the index region
 8) Instantiate the child regions of the index region
 If any of 5, 6, 7, or 8 fails, roll back those steps and return null; on a 
 null return, throw an exception to roll back 1, 2, 3, and 4.
 9) Set the PONR
 10) Do a batch put of the offline and split entries for the user and index regions
 user region
 ===
 11) Open the daughters of the user region and transition the znode to split.
 We can open the index region daughters and transition the znode to split 
 through the postSplit hook, which is already present.
 index region
 ===
 12) Open the daughters of the index region and transition the znode to split.
 Even if the region server crashes, at the end either both the user and index 
 regions are split or neither is.
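 A hypothetical sketch of the shape such a hook could take (this is not the 
 attached HBASE-9249.patch or the real coprocessor API; the interface and 
 method names below are made up for illustration):
 {code}
 import java.io.IOException;
 import java.util.Collections;
 import java.util.List;

 /** Hypothetical "before PONR" hook, invoked between steps 4 and 9 above. */
 interface SplitBeforePonrObserver {
   /** Returning null asks the framework to roll back steps 1-4. */
   List<Object> preSplitBeforePonr(byte[] splitRow) throws IOException;
 }

 /** An index coprocessor would mirror steps 5-8 for the index region here. */
 class IndexRegionSplitObserver implements SplitBeforePonrObserver {
   @Override
   public List<Object> preSplitBeforePonr(byte[] splitRow) throws IOException {
     try {
       // 5) create the splitting znode for the index region
       // 6) close the parent index region
       // 7) split the index region's store files
       // 8) instantiate the index daughter regions
       return metaEntriesForUserAndIndexDaughters(); // used by step 10
     } catch (Exception e) {
       rollbackIndexSplit();  // undo 5-8
       return null;           // caller rolls back 1-4 and aborts before the PONR
     }
   }

   private List<Object> metaEntriesForUserAndIndexDaughters() {
     return Collections.emptyList();
   }

   private void rollbackIndexSplit() {
   }
 }
 {code}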

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9249) Add cp hook before setting PONR in split

2013-08-21 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-9249:
--

Status: Patch Available  (was: Open)

 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9249.patch


 This hook helps to perform the split on a user region and the corresponding 
 index region such that either both are split or neither is.
 With this hook, the split for the user and index regions proceeds as follows:
 user region
 ===
 1) Create the splitting znode for the user region split
 2) Close the parent user region
 3) Split the user region store files
 4) Instantiate the child regions of the user region
 Through the new hook we can drive the index region transitions as below:
 index region
 ===
 5) Create the splitting znode for the index region split
 6) Close the parent index region
 7) Split the store files of the index region
 8) Instantiate the child regions of the index region
 If any of 5, 6, 7, or 8 fails, roll back those steps and return null; on a 
 null return, throw an exception to roll back 1, 2, 3, and 4.
 9) Set the PONR
 10) Do a batch put of the offline and split entries for the user and index regions
 user region
 ===
 11) Open the daughters of the user region and transition the znode to split.
 We can open the index region daughters and transition the znode to split 
 through the postSplit hook, which is already present.
 index region
 ===
 12) Open the daughters of the index region and transition the znode to split.
 Even if the region server crashes, at the end either both the user and index 
 regions are split or neither is.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9108) LoadTestTool need to have a way to ignore keys which were failed during write.

2013-08-21 Thread gautam (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746028#comment-13746028
 ] 

gautam commented on HBASE-9108:
---

By adding the option to ignore exceptions while writing, we are ensuring that 
whatever keys were written successfully during CM actions can be read.
We are ensuring a 100% read guarantee.
Currently LoadTestTool is pretty rigid and looks for a 100% write as well as a 
100% read guarantee. There are use cases where client applications, instead 
of attempting the write again or going into an infinite loop, might want to 
handle it differently (like storing the failed keys in a list to retry once 
the batch write is done, etc.). This will handle that use case.
Again, we are providing that as a configuration. Tomorrow, when you want to run 
the same set of tests to get a 100% write guarantee as well, over, say, a 
stronger and better HBase version, you just need to remove the configuration.


 LoadTestTool need to have a way to ignore keys which were failed during 
 write. 
 ---

 Key: HBASE-9108
 URL: https://issues.apache.org/jira/browse/HBASE-9108
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.95.0, 0.95.1, 0.94.9, 0.94.10
Reporter: gautam
Assignee: gautam
Priority: Critical
 Attachments: 9108.patch._trunk.5, 9108.patch._trunk.6, 
 HBASE-9108.patch._trunk.2, HBASE-9108.patch._trunk.3, 
 HBASE-9108.patch._trunk.4, HBASE-9108.patch._trunk.7, 
 HBASE-9108.patch._trunk.8

   Original Estimate: 48h
  Remaining Estimate: 48h

 While running the chaosmonkey integration tests, we found that writes 
 sometimes fail when cluster components are restarted/stopped/killed, etc.
 The data key being put via LoadTestTool is added to the failed key set, and 
 at the end of the test this failed key set is checked for any entries to 
 assert failures.
 While doing fail-over testing, it is expected that some of the keys may go 
 unwritten. The point here is to validate that whatever gets into HBase on 
 an unstable cluster really goes in, and hence reads should be 100% for 
 whatever keys went in successfully.
 Currently LoadTestTool has strict checks that validate every key was 
 written. If any key is not written, it fails.
 I want to loosen this constraint by allowing users to pass in a set of 
 exceptions they expect when doing put/write operations against HBase. If one 
 of these expected exceptions is thrown while writing a key, the failed 
 key is ignored, and hence won't be considered again for subsequent 
 writes or reads.
 This can be passed to the load test tool as the CSV list parameter 
 -allowed_write_exceptions, or it can be passed through hbase-site.xml by 
 setting a value for test.ignore.exceptions.during.write.
 Here is the usage:
 -allowed_write_exceptions 
 java.io.EOFException,org.apache.hadoop.hbase.NotServingRegionException,org.apache.hadoop.hbase.client.NoServerForRegionException,org.apache.hadoop.hbase.ipc.ServerNotRunningYetException
 Hence the existing integration tests can also make use of this change by 
 passing it as a property in hbase-site.xml.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9287:
---

Attachment: HBASE-9287-trunk-v0.patch

 TestCatalogTracker depends on the execution order
 -

 Key: HBASE-9287
 URL: https://issues.apache.org/jira/browse/HBASE-9287
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.94.11
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.94.12

 Attachments: HBASE-9287-trunk-v0.patch, HBASE-9287-v0.patch


 Some CatalogTracker tests don't delete the ROOT location.
 For example, if testNoTimeoutWaitForRoot() runs before 
 testInterruptWaitOnMetaAndRoot() you get:
 {code}
 junit.framework.AssertionFailedError: Expected: null but was: 
 example.org,1234,1377038834244
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertNull(Assert.java:237)
   at junit.framework.Assert.assertNull(Assert.java:230)
   at 
 org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9287:
---

Affects Version/s: 0.98.0
   0.95.2
Fix Version/s: 0.96.0
   0.98.0

 TestCatalogTracker depends on the execution order
 -

 Key: HBASE-9287
 URL: https://issues.apache.org/jira/browse/HBASE-9287
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.98.0, 0.95.2, 0.94.11
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: HBASE-9287-trunk-v0.patch, HBASE-9287-v0.patch


 Some CatalogTracker tests don't delete the ROOT location.
 For example, if testNoTimeoutWaitForRoot() runs before 
 testInterruptWaitOnMetaAndRoot() you get:
 {code}
 junit.framework.AssertionFailedError: Expected: null but was: 
 example.org,1234,1377038834244
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertNull(Assert.java:237)
   at junit.framework.Assert.assertNull(Assert.java:230)
   at 
 org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9287:
---

Status: Patch Available  (was: Open)

 TestCatalogTracker depends on the execution order
 -

 Key: HBASE-9287
 URL: https://issues.apache.org/jira/browse/HBASE-9287
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.94.11, 0.95.2, 0.98.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: HBASE-9287-trunk-v0.patch, HBASE-9287-v0.patch


 Some CatalogTracker tests don't delete the ROOT location.
 For example, if testNoTimeoutWaitForRoot() runs before 
 testInterruptWaitOnMetaAndRoot() you get
 {code}
 junit.framework.AssertionFailedError: Expected: null but was: 
 example.org,1234,1377038834244
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertNull(Assert.java:237)
   at junit.framework.Assert.assertNull(Assert.java:230)
   at 
 org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split

2013-08-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746042#comment-13746042
 ] 

ramkrishna.s.vasudevan commented on HBASE-9249:
---

These changes are enough, Rajesh.  For doing step 10, do we need to use the 
information from SplitInfo and add it to Meta?


 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9249.patch


 This hook helps to perform split on user region and corresponding index 
 region such that both will be split or none.
 With this hook split for user and index region as follows
 user region
 ===
 1) Create splitting znode for user region split
 2) Close parent user region
 3) split user region storefiles
 4) instantiate child regions of user region
 Through the new hook we can call index region transitions as below
 index region
 ===
 5) Create splitting znode for index region split
 6) Close parent index region
 7) Split storefiles of index region
 8) instantiate child regions of the index region
 If any failures in 5,6,7,8 rollback the steps and return null, on null return 
 throw exception to rollback for 1,2,3,4
 9) set PONR
 10) do batch put of offline and split entries for user and index regions
 user region
 ===
 11) open daughters of user regions and transition znode to split.
 We can open index region daughters and transition znode to split through 
 postSplit hook which is already present.
 index region
 
 12) open daughters of index regions and transition znode to split.
 Even if the region server crashes, at the end either both user and index regions 
 will be split or neither will be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7639) Enable online schema update by default

2013-08-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746068#comment-13746068
 ] 

Enis Soztutar commented on HBASE-7639:
--

0.94 would benefit from it, but I doubt that we can do the backport so late in 
the 0.94 cycle. See the discussions at HBASE-7965. 

 Enable online schema update by default 
 ---

 Key: HBASE-7639
 URL: https://issues.apache.org/jira/browse/HBASE-7639
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Enis Soztutar
Assignee: Elliott Clark
 Fix For: 0.98.0, 0.95.2

 Attachments: HBASE-7639-0.patch


 After we get HBASE-7305 and HBASE-7546, things will become stable enough to 
 enable online schema update by default. 
 {code}
   <property>
     <name>hbase.online.schema.update.enable</name>
     <value>false</value>
     <description>
     Set true to enable online schema changes.  This is an experimental feature.
     There are known issues modifying table schemas at the same time a region
     split is happening so your table needs to be quiescent or else you have to
     be running with splits disabled.
     </description>
   </property>
 {code}
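
 For reference, the same flag can also be flipped programmatically; a minimal sketch, 
 with the property name taken from the snippet above:
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;

 public class EnableOnlineSchemaUpdate {
   public static void main(String[] args) {
     // Load the usual hbase-*.xml resources, then enable online schema changes.
     Configuration conf = HBaseConfiguration.create();
     conf.setBoolean("hbase.online.schema.update.enable", true);
     System.out.println(conf.getBoolean("hbase.online.schema.update.enable", false));
   }
 }
 {code}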

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9108) LoadTestTool need to have a way to ignore keys which were failed during write.

2013-08-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746078#comment-13746078
 ] 

Enis Soztutar commented on HBASE-9108:
--

There are two concerns about this approach as I see it. First, naming a set of 
exceptions is very brittle. With new exceptions, or refactored ones, keeping 
that in sync will become a burden on the test maintainer. Second, 100% write 
guarantee is what we want from this test. The retry logic inside HBase already 
does what you mention (storing failed keys and retrying).

bq. when you want to run the same set of tests to get 100% write guarantee as 
well, over say a stronger & better hbase version, you just need to remove the 
configuration
I think that we already have a stronger & better hbase version. We have seen 
CM actions which cause several minutes of downtimes on our test setup, thus 
causing LoadTestTool to fail, but I think the right way to approach this is to 
configure the test env for better MTTR, and limit the chaos caused by CM, 
together with adjusting the retry / timeouts accordingly. 

 LoadTestTool need to have a way to ignore keys which were failed during 
 write. 
 ---

 Key: HBASE-9108
 URL: https://issues.apache.org/jira/browse/HBASE-9108
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.95.0, 0.95.1, 0.94.9, 0.94.10
Reporter: gautam
Assignee: gautam
Priority: Critical
 Attachments: 9108.patch._trunk.5, 9108.patch._trunk.6, 
 HBASE-9108.patch._trunk.2, HBASE-9108.patch._trunk.3, 
 HBASE-9108.patch._trunk.4, HBASE-9108.patch._trunk.7, 
 HBASE-9108.patch._trunk.8

   Original Estimate: 48h
  Remaining Estimate: 48h

 While running the chaosmonkey integration tests, it is found that write 
 sometimes fails when the cluster components are restarted/stopped/killed etc..
 The data key which was being put, using the LoadTestTool, is added to the 
 failed key set, and at the end of the test, this failed key set is checked 
 for any entries to assert failures.
 While doing fail-over testing, it is expected that some of the keys may go 
 un-written. The point here is to validate that whatever gets into hbase for 
 an unstable cluster really goes in, and hence read should be 100% for 
 whatever keys went in successfully.
 Currently LoadTestTool has strict checks to validate whether every key was written 
 or not. In case any key is not written, it fails.
 I wanted to loosen this constraint by allowing users to pass in a set of 
 exceptions they expect when doing put/write operations over hbase. If one of 
 these expected exceptions is thrown while writing a key to hbase, the failed 
 key would be ignored, and hence won't even be considered again for subsequent 
 writes as well as reads.
 This can be passed to the load test tool as csv list parameter 
 -allowed_write_exceptions, or it can be passed through hbase-site.xml by 
 writing a value for test.ignore.exceptions.during.write
 Here is the usage:
 -allowed_write_exceptions 
 java.io.EOFException,org.apache.hadoop.hbase.NotServingRegionException,org.apache.hadoop.hbase.client.NoServerForRegionException,org.apache.hadoop.hbase.ipc.ServerNotRunningYetException
 Hence, by doing this the existing integration tests can also make use of this 
 change by passing it as property in hbase-site.xml, as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6581) Build with hadoop.profile=3.0

2013-08-21 Thread Eric Charles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Charles updated HBASE-6581:


Attachment: HBASE-6581-20130821.patch

Attached a new patch which works with current trunk.

Beyond the poms, we now also have to deal with protobuf 2.5.0 (used by 
hadoop; hbase uses 2.4.1, and communication between mixed protobuf versions is not 
possible) and also the change of the FSDataOutputStream#sync method (now #hsync).

1. To solve the protobuf issue, I propose to remove the generated classes from 
hbase-protocol/src/main/java and let the generation occur at build-time with 
hadoop-maven-plugin. This implies that the correct protoc version must be 
installed on the build machine (2.4.1 for pre-h3 to ensure backwards 
compatibility, 2.5.0 for h3).

2. I have added a hbase-hadoop3-compat module that reuses the 
hbase-hadoop2-compat to avoid duplicating all the metrics classes. To allow the 
build, I have quickly implemented the call to (h)sync via introspection 
(IMHO subject to review/optimization - another option would be to duplicate the 
compat classes, or to create extra modules only for that specific case).
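
A minimal sketch of what such a reflection-based shim could look like; the class and 
method wrapper here are assumptions, not the attached patch:
{code}
import java.io.OutputStream;
import java.lang.reflect.Method;

public final class SyncCompat {
  private SyncCompat() {}

  /** Calls hsync() when the stream provides it, otherwise falls back to the older sync(). */
  public static void syncOrHsync(OutputStream out) throws Exception {
    Method m;
    try {
      m = out.getClass().getMethod("hsync");
    } catch (NoSuchMethodException e) {
      m = out.getClass().getMethod("sync");
    }
    m.invoke(out);
  }
}
{code}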

The patch allows me to build/assemble with the various profiles. The hadoop3 
profile assembly works well for me on top of a hadoop trunk build.

There are a few test failures, but I also had some failures on my env even with 
a vanilla trunk (will further look at these).


 Build with hadoop.profile=3.0
 -

 Key: HBASE-6581
 URL: https://issues.apache.org/jira/browse/HBASE-6581
 Project: HBase
  Issue Type: Bug
Reporter: Eric Charles
 Attachments: HBASE-6581-1.patch, HBASE-6581-20130821.patch, 
 HBASE-6581-2.patch, HBASE-6581.diff, HBASE-6581.diff


 Building trunk with hadoop.profile=3.0 gives exceptions (see [1]) due to 
 change in the hadoop maven modules naming (and also usage of 3.0-SNAPSHOT 
 instead of 3.0.0-SNAPSHOT in hbase-common).
 I can provide a patch that would move most of hadoop dependencies in their 
 respective profiles and will define the correct hadoop deps in the 3.0 
 profile.
 Please tell me if that's ok to go this way.
 Thx, Eric
 [1]
 $ mvn clean install -Dhadoop.profile=3.0
 [INFO] Scanning for projects...
 [ERROR] The build could not read 3 projects - [Help 1]
 [ERROR]   
 [ERROR]   The project org.apache.hbase:hbase-server:0.95-SNAPSHOT 
 (/d/hbase.svn/hbase-server/pom.xml) has 3 errors
 [ERROR] 'dependencies.dependency.version' for 
 org.apache.hadoop:hadoop-common:jar is missing. @ line 655, column 21
 [ERROR] 'dependencies.dependency.version' for 
 org.apache.hadoop:hadoop-annotations:jar is missing. @ line 659, column 21
 [ERROR] 'dependencies.dependency.version' for 
 org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 663, column 21
 [ERROR]   
 [ERROR]   The project org.apache.hbase:hbase-common:0.95-SNAPSHOT 
 (/d/hbase.svn/hbase-common/pom.xml) has 3 errors
 [ERROR] 'dependencies.dependency.version' for 
 org.apache.hadoop:hadoop-common:jar is missing. @ line 170, column 21
 [ERROR] 'dependencies.dependency.version' for 
 org.apache.hadoop:hadoop-annotations:jar is missing. @ line 174, column 21
 [ERROR] 'dependencies.dependency.version' for 
 org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 178, column 21
 [ERROR]   
 [ERROR]   The project org.apache.hbase:hbase-it:0.95-SNAPSHOT 
 (/d/hbase.svn/hbase-it/pom.xml) has 3 errors
 [ERROR] 'dependencies.dependency.version' for 
 org.apache.hadoop:hadoop-common:jar is missing. @ line 220, column 18
 [ERROR] 'dependencies.dependency.version' for 
 org.apache.hadoop:hadoop-annotations:jar is missing. @ line 224, column 21
 [ERROR] 'dependencies.dependency.version' for 
 org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 228, column 21
 [ERROR] 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9108) LoadTestTool need to have a way to ignore keys which were failed during write.

2013-08-21 Thread gautam (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746084#comment-13746084
 ] 

gautam commented on HBASE-9108:
---

bq. The retry logic inside HBase already does what you mention (storing failed 
keys and retrying).
But sometimes the retry logic fails with 
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException and hence 
fails the test, as the key becomes a failed key; you then need to tune your env, 
which sometimes is a small and the only cluster setup. Or, as you said, you need 
to fine-tune CM, which you would then need to vary for different cluster setups 
to get a better MTTR.
Some other times you observe the key has failed to write because of:
java.io.EOFException,
org.apache.hadoop.hbase.NotServingRegionException,
org.apache.hadoop.hbase.client.NoServerForRegionException,
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException,
org.apache.hadoop.hbase.ipc.HBaseClient$FailedServerException

A tester might want to skip the retry attempts here, skip the key and proceed, 
and he can configure the exceptions he wants to skip on write by passing them in 
as configuration. Since this won't be available by default in the hbase 
configuration xmls, this is a known risk he will take.
And sorry, I didn't mean to say that; I agree we already have a stronger & 
better hbase version. My intent was that for future version upgrades a tester 
might want to go for 100% read+write, as he might have moved to a better & 
bigger cluster setup with a better MTTR.
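
A minimal sketch of the kind of exception allow-list being discussed; the class name, 
constructor, and CSV handling are illustrative assumptions, not the attached patch:
{code}
import java.util.HashSet;
import java.util.Set;

// Resolves a CSV of exception class names once and decides whether a write
// failure should be skipped instead of being counted as a failed key.
public class AllowedWriteExceptions {
  private final Set<Class<?>> allowed = new HashSet<Class<?>>();

  public AllowedWriteExceptions(String csv) throws ClassNotFoundException {
    for (String name : csv.split(",")) {
      if (!name.trim().isEmpty()) {
        allowed.add(Class.forName(name.trim()));
      }
    }
  }

  /** True if the thrown exception matches one of the configured classes. */
  public boolean shouldIgnore(Throwable t) {
    for (Class<?> c : allowed) {
      if (c.isInstance(t)) {
        return true;
      }
    }
    return false;
  }
}
{code}
A load tester would then drop such a key from both the write bookkeeping and the 
later read verification whenever shouldIgnore(...) returns true.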




 LoadTestTool need to have a way to ignore keys which were failed during 
 write. 
 ---

 Key: HBASE-9108
 URL: https://issues.apache.org/jira/browse/HBASE-9108
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.95.0, 0.95.1, 0.94.9, 0.94.10
Reporter: gautam
Assignee: gautam
Priority: Critical
 Attachments: 9108.patch._trunk.5, 9108.patch._trunk.6, 
 HBASE-9108.patch._trunk.2, HBASE-9108.patch._trunk.3, 
 HBASE-9108.patch._trunk.4, HBASE-9108.patch._trunk.7, 
 HBASE-9108.patch._trunk.8

   Original Estimate: 48h
  Remaining Estimate: 48h

 While running the chaosmonkey integration tests, it is found that write 
 sometimes fails when the cluster components are restarted/stopped/killed etc..
 The data key which was being put, using the LoadTestTool, is added to the 
 failed key set, and at the end of the test, this failed key set is checked 
 for any entries to assert failures.
 While doing fail-over testing, it is expected that some of the keys may go 
 un-written. The point here is to validate that whatever gets into hbase for 
 an unstable cluster really goes in, and hence read should be 100% for 
 whatever keys went in successfully.
 Currently LoadTestTool has strict checks to validate whether every key was written 
 or not. In case any key is not written, it fails.
 I wanted to loosen this constraint by allowing users to pass in a set of 
 exceptions they expect when doing put/write operations over hbase. If one of 
 these expected exceptions is thrown while writing a key to hbase, the failed 
 key would be ignored, and hence won't even be considered again for subsequent 
 writes as well as reads.
 This can be passed to the load test tool as csv list parameter 
 -allowed_write_exceptions, or it can be passed through hbase-site.xml by 
 writing a value for test.ignore.exceptions.during.write
 Here is the usage:
 -allowed_write_exceptions 
 java.io.EOFException,org.apache.hadoop.hbase.NotServingRegionException,org.apache.hadoop.hbase.client.NoServerForRegionException,org.apache.hadoop.hbase.ipc.ServerNotRunningYetException
 Hence, by doing this the existing integration tests can also make use of this 
 change by passing it as property in hbase-site.xml, as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8462) Custom timestamps should not be allowed to be negative

2013-08-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746090#comment-13746090
 ] 

Enis Soztutar commented on HBASE-8462:
--

bq. Let's not do this in 0.94.
Agreed. 

 Custom timestamps should not be allowed to be negative
 --

 Key: HBASE-8462
 URL: https://issues.apache.org/jira/browse/HBASE-8462
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-8462_v1.patch, hbase-8462_v2.patch


 Client supplied timestamps should not be allowed to be negative, otherwise 
 unpredictable results will follow. Especially, since we are encoding the ts 
 using Bytes.Bytes(long), negative timestamps are sorted after positive ones. 
 Plus, the new PB messages define ts' as uint64. 
 Credit goes to Huned Lokhandwala for reporting this.
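
 A quick way to see the sorting problem, assuming the timestamp is encoded with the 
 usual big-endian Bytes.toBytes(long):
 {code}
 import org.apache.hadoop.hbase.util.Bytes;

 public class NegativeTsSortDemo {
   public static void main(String[] args) {
     byte[] positive = Bytes.toBytes(1L);   // a small positive timestamp
     byte[] negative = Bytes.toBytes(-1L);  // a negative timestamp
     // Prints true: the negative timestamp compares greater, i.e. sorts after the positive one.
     System.out.println(Bytes.compareTo(negative, positive) > 0);
   }
 }
 {code}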

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8462) Custom timestamps should not be allowed to be negative

2013-08-21 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-8462:
-

Attachment: hbase-8462_v3.patch

Rebased, addressed Ted's comments. Let's get this in. Stack +1'ed it already, 
so I will commit, unless there are objections, once hadoopqa gives the ok. 

 Custom timestamps should not be allowed to be negative
 --

 Key: HBASE-8462
 URL: https://issues.apache.org/jira/browse/HBASE-8462
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-8462_v1.patch, hbase-8462_v2.patch, 
 hbase-8462_v3.patch


 Client supplied timestamps should not be allowed to be negative, otherwise 
 unpredictable results will follow. Especially, since we are encoding the ts 
 using Bytes.Bytes(long), negative timestamps are sorted after positive ones. 
 Plus, the new PB messages define ts' as uint64. 
 Credit goes to Huned Lokhandwala for reporting this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8462) Custom timestamps should not be allowed to be negative

2013-08-21 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-8462:
-

Release Note: Timestamps in Mutations (Put/Delete, etc.) are not allowed to 
be negative; an IllegalArgumentException is thrown from the client side. Note that 
negative timestamps are not sorted correctly and will cause inconsistencies 
when accessing the values. 
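
A minimal sketch of the kind of client-side guard the note describes; the real check 
lives inside the client Mutation classes and its exact message may differ:
{code}
public final class TimestampCheck {
  private TimestampCheck() {}

  /** Rejects negative custom timestamps before they are sent to the server. */
  public static long checkTimestamp(long ts) {
    if (ts < 0) {
      throw new IllegalArgumentException("Timestamp cannot be negative. ts=" + ts);
    }
    return ts;
  }

  public static void main(String[] args) {
    System.out.println(checkTimestamp(1234L));  // prints 1234
    checkTimestamp(-1L);                        // throws IllegalArgumentException
  }
}
{code}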

 Custom timestamps should not be allowed to be negative
 --

 Key: HBASE-8462
 URL: https://issues.apache.org/jira/browse/HBASE-8462
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-8462_v1.patch, hbase-8462_v2.patch, 
 hbase-8462_v3.patch


 Client supplied timestamps should not be allowed to be negative, otherwise 
 unpredictable results will follow. Especially, since we are encoding the ts 
 using Bytes.Bytes(long), negative timestamps are sorted after positive ones. 
 Plus, the new PB messages define ts' as uint64. 
 Credit goes to Huned Lokhandwala for reporting this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split

2013-08-21 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746113#comment-13746113
 ] 

rajeshbabu commented on HBASE-9249:
---

Yes Ram.
 


 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9249.patch


 This hook helps to perform split on user region and corresponding index 
 region such that both will be split or none.
 With this hook split for user and index region as follows
 user region
 ===
 1) Create splitting znode for user region split
 2) Close parent user region
 3) split user region storefiles
 4) instantiate child regions of user region
 Through the new hook we can call index region transitions as below
 index region
 ===
 5) Create splitting znode for index region split
 6) Close parent index region
 7) Split storefiles of index region
 8) instantiate child regions of the index region
 If any failures in 5,6,7,8 rollback the steps and return null, on null return 
 throw exception to rollback for 1,2,3,4
 9) set PONR
 10) do batch put of offline and split entries for user and index regions
 user region
 ===
 11) open daughters of user regions and transition znode to split.
 We can open index region daughters and transition znode to split through 
 postSplit hook which is already present.
 index region
 
 12) open daughters of index regions and transition znode to split.
 Even if the region server crashes, at the end either both user and index regions 
 will be split or neither will be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8462) Custom timestamps should not be allowed to be negative

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746114#comment-13746114
 ] 

stack commented on HBASE-8462:
--

+1 again for 0.95 and trunk.  (Pity the checks couldn't all go into a shared 
method like the setTimestamp one)

 Custom timestamps should not be allowed to be negative
 --

 Key: HBASE-8462
 URL: https://issues.apache.org/jira/browse/HBASE-8462
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-8462_v1.patch, hbase-8462_v2.patch, 
 hbase-8462_v3.patch


 Client supplied timestamps should not be allowed to be negative, otherwise 
 unpredictable results will follow. Especially, since we are encoding the ts 
 using Bytes.Bytes(long), negative timestamps are sorted after positive ones. 
 Plus, the new PB messages define ts' as uint64. 
 Credit goes to Huned Lokhandwala for reporting this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9249) Add cp hook before setting PONR in split

2013-08-21 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-9249:
--

Description: 
This hook helps to perform split on user region and corresponding index region 
such that both will be split or none.
With this hook split for user and index region as follows

user region
===
1) Create splitting znode for user region split
2) Close parent user region
3) split user region storefiles
4) instantiate child regions of user region

Through the new hook we can call index region transitions as below

index region
===
5) Create splitting znode for index region split
6) Close parent index region
7) Split storefiles of index region
8) instantiate child regions of the index region

If any failures in 5,6,7,8 rollback the steps and return null, on null return 
throw exception to rollback for 1,2,3,4

9) set PONR
10) do batch put of offline and split entries for user and index regions
index region

11) open daughters of index regions and transition znode to split. This step we 
will do through 

user region
===
12) open daughters of user regions and transition znode to split.

We can open index region daughters and transition znode to split through 
postSplit hook which is already present.

Even if the region server crashes, at the end either both user and index regions will 
be split or neither will be.


  was:
This hook helps to perform split on user region and corresponding index region 
such that both will be split or none.
With this hook split for user and index region as follows

user region
===
1) Create splitting znode for user region split
2) Close parent user region
3) split user region storefiles
4) instantiate child regions of user region

Through the new hook we can call index region transitions as below

index region
===
5) Create splitting znode for index region split
6) Close parent index region
7) Split storefiles of index region
8) instantiate child regions of the index region

If any failures in 5,6,7,8 rollback the steps and return null, on null return 
throw exception to rollback for 1,2,3,4

9) set PONR
10) do batch put of offline and split entries for user and index regions
user region
===
11) open daughters of user regions and transition znode to split.

We can open index region daughters and transition znode to split through 
postSplit hook which is already present.

index region

12) open daughters of index regions and transition znode to split.

Even if the region server crashes, at the end either both user and index regions will 
be split or neither will be.



 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9249.patch


 This hook helps to perform split on user region and corresponding index 
 region such that both will be split or none.
 With this hook split for user and index region as follows
 user region
 ===
 1) Create splitting znode for user region split
 2) Close parent user region
 3) split user region storefiles
 4) instantiate child regions of user region
 Through the new hook we can call index region transitions as below
 index region
 ===
 5) Create splitting znode for index region split
 6) Close parent index region
 7) Split storefiles of index region
 8) instantiate child regions of the index region
 If any failures in 5,6,7,8 rollback the steps and return null, on null return 
 throw exception to rollback for 1,2,3,4
 9) set PONR
 10) do batch put of offline and split entries for user and index regions
 index region
 
 11) open daughters of index regions and transition znode to split. This step 
 we will do through 
 user region
 ===
 12) open daughters of user regions and transition znode to split.
 We can open index region daughters and transition znode to split through 
 postSplit hook which is already present.
 Even if the region server crashes, at the end either both user and index regions 
 will be split or neither will be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9249) Add cp hook before setting PONR in split

2013-08-21 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-9249:
--

Description: 
This hook helps to perform split on user region and corresponding index region 
such that both will be split or none.
With this hook split for user and index region as follows

user region
===
1) Create splitting znode for user region split
2) Close parent user region
3) split user region storefiles
4) instantiate child regions of user region

Through the new hook we can call index region transitions as below

index region
===
5) Create splitting znode for index region split
6) Close parent index region
7) Split storefiles of index region
8) instantiate child regions of the index region

If any failures in 5,6,7,8 rollback the steps and return null, on null return 
throw exception to rollback for 1,2,3,4

9) set PONR
10) do batch put of offline and split entries for user and index regions
index region

11) open daughters of index regions and transition znode to split. This step we 
will do through preSplitAfterPONR hook. Opening index regions before opening 
user regions helps to avoid put failures if there is a colocation mismatch (this 
can happen if user regions opening completed but index regions opening is in 
progress)

user region
===
12) open daughters of user regions and transition znode to split.

Even if the region server crashes, at the end either both user and index regions will 
be split or neither will be.


  was:
This hook helps to perform split on user region and corresponding index region 
such that both will be split or none.
With this hook split for user and index region as follows

user region
===
1) Create splitting znode for user region split
2) Close parent user region
3) split user region storefiles
4) instantiate child regions of user region

Through the new hook we can call index region transitions as below

index region
===
5) Create splitting znode for index region split
6) Close parent index region
7) Split storefiles of index region
8) instantiate child regions of the index region

If any failures in 5,6,7,8 rollback the steps and return null, on null return 
throw exception to rollback for 1,2,3,4

9) set PONR
10) do batch put of offline and split entries for user and index regions
index region

11) open daughters of index regions and transition znode to split. This step we 
will do through 

user region
===
12) open daughters of user regions and transition znode to split.

We can open index region daughters and transition znode to split through 
postSplit hook which is already present.

Even if the region server crashes, at the end either both user and index regions will 
be split or neither will be.



 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9249.patch


 This hook helps to perform split on user region and corresponding index 
 region such that both will be split or none.
 With this hook split for user and index region as follows
 user region
 ===
 1) Create splitting znode for user region split
 2) Close parent user region
 3) split user region storefiles
 4) instantiate child regions of user region
 Through the new hook we can call index region transitions as below
 index region
 ===
 5) Create splitting znode for index region split
 6) Close parent index region
 7) Split storefiles of index region
 8) instantiate child regions of the index region
 If any failures in 5,6,7,8 rollback the steps and return null, on null return 
 throw exception to rollback for 1,2,3,4
 9) set PONR
 10) do batch put of offline and split entries for user and index regions
 index region
 
 11) open daughters of index regions and transition znode to split. This step 
 we will do through preSplitAfterPONR hook. Opening index regions before 
 opening user regions helps to avoid put failures if there is a colocation 
 mismatch (this can happen if user regions opening completed but index regions 
 opening is in progress)
 user region
 ===
 12) open daughters of user regions and transition znode to split.
 Even if the region server crashes, at the end either both user and index regions 
 will be split or neither will be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HBASE-9268) Client doesn't recover from a stalled region server

2013-08-21 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon reassigned HBASE-9268:
--

Assignee: Nicolas Liochon

 Client doesn't recover from a stalled region server
 ---

 Key: HBASE-9268
 URL: https://issues.apache.org/jira/browse/HBASE-9268
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Jean-Daniel Cryans
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.95.3

 Attachments: 9268-hack.patch


 Got this testing the 0.95.2 RC.
 I killed -STOP a region server and let it stay like that while running PE. 
 The clients didn't find the new region locations and in the jstack were stuck 
 doing RPC. Eventually I killed -CONT and the client printed these:
 bq. Exception in thread TestClient-6 java.lang.RuntimeException: 
 org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
 128 actions: IOException: 90 times, SocketTimeoutException: 38 times,

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9268) Client doesn't recover from a stalled region server

2013-08-21 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9268:
---

Status: Patch Available  (was: Open)

 Client doesn't recover from a stalled region server
 ---

 Key: HBASE-9268
 URL: https://issues.apache.org/jira/browse/HBASE-9268
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Jean-Daniel Cryans
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.95.3

 Attachments: 9268-hack.patch


 Got this testing the 0.95.2 RC.
 I killed -STOP a region server and let it stay like that while running PE. 
 The clients didn't find the new region locations and in the jstack were stuck 
 doing RPC. Eventually I killed -CONT and the client printed these:
 bq. Exception in thread TestClient-6 java.lang.RuntimeException: 
 org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
 128 actions: IOException: 90 times, SocketTimeoutException: 38 times,

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9268) Client doesn't recover from a stalled region server

2013-08-21 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9268:
---

Attachment: 9268-hack.patch

 Client doesn't recover from a stalled region server
 ---

 Key: HBASE-9268
 URL: https://issues.apache.org/jira/browse/HBASE-9268
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Jean-Daniel Cryans
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.95.3

 Attachments: 9268-hack.patch


 Got this testing the 0.95.2 RC.
 I killed -STOP a region server and let it stay like that while running PE. 
 The clients didn't find the new region locations and in the jstack were stuck 
 doing RPC. Eventually I killed -CONT and the client printed these:
 bq. Exception in thread TestClient-6 java.lang.RuntimeException: 
 org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
 128 actions: IOException: 90 times, SocketTimeoutException: 38 times,

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: (was: HBASE-8930.patch)

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch


 1- Fill row with some columns
 2- Get row with some columns less than universe - Use filter to print kvs
 3- Filter prints not requested columns
 Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints the KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |... 
 |  |  P   |   P  |  |  |  P   |   P  |  |  |  P   |   P  |  |...
 |  |  R   |   R  |   R  |  |  R   |   R  |   R  |  |  |  |  |...
 |  |  E   |   E  |  |  |  E   |   E  |  |  |  {color:red}E{color}   |  |  |...
 |  |  X   |   X  |  |  |  X   |   X  |  |  |  |  |  |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "myZK");
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION =//
 Put put = new Put(row);
 put.add(cf, col1, Bytes.toBytes((short) 1));
 put.add(cf, col2, Bytes.toBytes((short) 1));
 put.add(cf, col3, Bytes.toBytes((short) 3));
 put.add(cf, col4, Bytes.toBytes((short) 3));
 put.add(cf, col5, Bytes.toBytes((short) 3));
 put.add(cf, col6, Bytes.toBytes((short) 3));
 hTable.put(put);
 put = new Put(row);
 put.add(cf, col1v, Bytes.toBytes((short) 10));
 put.add(cf, col2v, Bytes.toBytes((short) 10));
 put.add(cf, col3v, Bytes.toBytes((short) 10));
 put.add(cf, col4v, Bytes.toBytes((short) 10));
 put.add(cf, col5v, Bytes.toBytes((short) 10));
 put.add(cf, col6v, Bytes.toBytes((short) 10));
 hTable.put(put);
 hTable.flushCommits();
 //==READING=//
 Filter allwaysNextColFilter = new AllwaysNextColFilter();
 Get get = new Get(row);
 get.addColumn(cf, col1); //5581
 get.addColumn(cf, col1v); //5584
 get.addColumn(cf, col1g); //5586
 get.addColumn(cf, col2); //5591
 get.addColumn(cf, col2v); //5594
 get.addColumn(cf, col2g); //5596
 
 get.setFilter(allwaysNextColFilter);
 get.setMaxVersions(1);
 System.out.println(get);
 Scan scan = new Scan(get);
 

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930.patch

Reattaching the same patch to try to get the Hadoop QA run

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch


 1- Fill row with some columns
 2- Get row with some columns less than universe - Use filter to print kvs
 3- Filter prints not requested columns
 Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints the KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |... 
 |  |  P   |   P  |  |  |  P   |   P  |  |  |  P   |   P  |  |...
 |  |  R   |   R  |   R  |  |  R   |   R  |   R  |  |  |  |  |...
 |  |  E   |   E  |  |  |  E   |   E  |  |  |  {color:red}E{color}   |  |  |...
 |  |  X   |   X  |  |  |  X   |   X  |  |  |  |  |  |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "myZK");
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION =//
 Put put = new Put(row);
 put.add(cf, col1, Bytes.toBytes((short) 1));
 put.add(cf, col2, Bytes.toBytes((short) 1));
 put.add(cf, col3, Bytes.toBytes((short) 3));
 put.add(cf, col4, Bytes.toBytes((short) 3));
 put.add(cf, col5, Bytes.toBytes((short) 3));
 put.add(cf, col6, Bytes.toBytes((short) 3));
 hTable.put(put);
 put = new Put(row);
 put.add(cf, col1v, Bytes.toBytes((short) 10));
 put.add(cf, col2v, Bytes.toBytes((short) 10));
 put.add(cf, col3v, Bytes.toBytes((short) 10));
 put.add(cf, col4v, Bytes.toBytes((short) 10));
 put.add(cf, col5v, Bytes.toBytes((short) 10));
 put.add(cf, col6v, Bytes.toBytes((short) 10));
 hTable.put(put);
 hTable.flushCommits();
 //==READING=//
 Filter allwaysNextColFilter = new AllwaysNextColFilter();
 Get get = new Get(row);
 get.addColumn(cf, col1); //5581
 get.addColumn(cf, col1v); //5584
 get.addColumn(cf, col1g); //5586
 get.addColumn(cf, col2); //5591
 get.addColumn(cf, col2v); //5594
 get.addColumn(cf, col2g); //5596
 
 get.setFilter(allwaysNextColFilter);
 get.setMaxVersions(1);
 

[jira] [Commented] (HBASE-9268) Client doesn't recover from a stalled region server

2013-08-21 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746169#comment-13746169
 ] 

Nicolas Liochon commented on HBASE-9268:


Hum. Different points:
- 38 is about the number of puts that have failed with a SocketTimeout. As it's 
a multi put, it's likely to be a single message. It does not mean that the 
client retried 38 times.
- we do a socket#setSoTimeout, but this is only for reads, not for writes.
- it's not possible to do a write timeout in java w/o using the nio API.
- HDFS added SocketOutputStream back in HADOOP-2346, but HBase does not use it.
- The API to use is NetUtils.getOutputStream(socket, timeout); tested, it works 
(see the sketch after this comment).
- We can use it, but the API does not allow changing the timeout on the fly 
as we do.
- I'm not sure of the time needed by ZooKeeper to decide that the server was 
dead. The tests were strange.

So, synthesis is:
- Looking at the code, I don't think it's a new issue. JD, what do you think?
- It seems we can fix or improve things here. I will give it a try.
- I need to double check the zookeeper stuff.
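
A minimal sketch of the NetUtils approach mentioned above; host, port and timeout 
are placeholders:
{code}
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import org.apache.hadoop.net.NetUtils;

public class WriteTimeoutSketch {
  public static void main(String[] args) throws Exception {
    int timeoutMs = 60000;                          // placeholder operation timeout
    Socket socket = new Socket();
    socket.setSoTimeout(timeoutMs);                 // covers reads only
    socket.connect(new InetSocketAddress("rs.example.org", 60020), timeoutMs);
    // Writes through this stream fail after timeoutMs instead of blocking forever.
    OutputStream out = NetUtils.getOutputStream(socket, timeoutMs);
    out.write(42);
    out.flush();
    socket.close();
  }
}
{code}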

 Client doesn't recover from a stalled region server
 ---

 Key: HBASE-9268
 URL: https://issues.apache.org/jira/browse/HBASE-9268
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Jean-Daniel Cryans
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.95.3

 Attachments: 9268-hack.patch


 Got this testing the 0.95.2 RC.
 I killed -STOP a region server and let it stay like that while running PE. 
 The clients didn't find the new region locations and in the jstack were stuck 
 doing RPC. Eventually I killed -CONT and the client printed these:
 bq. Exception in thread TestClient-6 java.lang.RuntimeException: 
 org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
 128 actions: IOException: 90 times, SocketTimeoutException: 38 times,

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9268) Client doesn't recover from a stalled region server

2013-08-21 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746493#comment-13746493
 ] 

Nicolas Liochon commented on HBASE-9268:


btw, I'm interested to know if you have the same issue when you activate 
HBASE-7590 (it should work well).


 Client doesn't recover from a stalled region server
 ---

 Key: HBASE-9268
 URL: https://issues.apache.org/jira/browse/HBASE-9268
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Jean-Daniel Cryans
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.95.3

 Attachments: 9268-hack.patch


 Got this testing the 0.95.2 RC.
 I killed -STOP a region server and let it stay like that while running PE. 
 The clients didn't find the new region locations and in the jstack were stuck 
 doing RPC. Eventually I killed -CONT and the client printed these:
 bq. Exception in thread TestClient-6 java.lang.RuntimeException: 
 org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
 128 actions: IOException: 90 times, SocketTimeoutException: 38 times,

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split

2013-08-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746501#comment-13746501
 ] 

ramkrishna.s.vasudevan commented on HBASE-9249:
---

bq. Yes Ram
Does this mean the patch is enough, or are you saying Yes to the second question 
:) ?

 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9249.patch


 This hook helps to perform split on user region and corresponding index 
 region such that both will be split or none.
 With this hook split for user and index region as follows
 user region
 ===
 1) Create splitting znode for user region split
 2) Close parent user region
 3) split user region storefiles
 4) instantiate child regions of user region
 Through the new hook we can call index region transitions as below
 index region
 ===
 5) Create splitting znode for index region split
 6) Close parent index region
 7) Split storefiles of index region
 8) instantiate child regions of the index region
 If any failures in 5,6,7,8 rollback the steps and return null, on null return 
 throw exception to rollback for 1,2,3,4
 9) set PONR
 10) do batch put of offline and split entries for user and index regions
 index region
 
 11) open daughters of index regions and transition znode to split. This step 
 we will do through preSplitAfterPONR hook. Opening index regions before 
 opening user regions helps to avoid put failures if there is a colocation 
 mismatch (this can happen if user regions opening completed but index regions 
 opening is in progress)
 user region
 ===
 12) open daughters of user regions and transition znode to split.
 Even if the region server crashes, at the end either both user and index regions 
 will be split or neither will be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split

2013-08-21 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746516#comment-13746516
 ] 

rajeshbabu commented on HBASE-9249:
---

bq. For doing step 10 do we need to use the information from SplitInfo and add 
it to Meta?
We should make use of splitInfo to get the index regions which need to be offlined 
in META. For the secondary index we still need to make some changes.


 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9249.patch


 This hook helps to perform split on user region and corresponding index 
 region such that both will be split or none.
 With this hook split for user and index region as follows
 user region
 ===
 1) Create splitting znode for user region split
 2) Close parent user region
 3) split user region storefiles
 4) instantiate child regions of user region
 Through the new hook we can call index region transitions as below
 index region
 ===
 5) Create splitting znode for index region split
 6) Close parent index region
 7) Split storefiles of index region
 8) instantiate child regions of the index region
 If any failures in 5,6,7,8 rollback the steps and return null, on null return 
 throw exception to rollback for 1,2,3,4
 9) set PONR
 10) do batch put of offline and split entries for user and index regions
 index region
 
 11) open daughters of index regions and transition znode to split. This step 
 we will do through preSplitAfterPONR hook. Opening index regions before 
 opening user regions helps to avoid put failures if there is a colocation 
 mismatch (this can happen if user regions opening completed but index regions 
 opening is in progress)
 user region
 ===
 12) open daughters of user regions and transition znode to split.
 Even if the region server crashes, at the end either both user and index regions 
 will be split or neither will be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split

2013-08-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746522#comment-13746522
 ] 

ramkrishna.s.vasudevan commented on HBASE-9249:
---

Add a Javadoc on SplitInfo.  Patch looks good to me.

 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
 Attachments: HBASE-9249.patch


 This hook helps to perform split on user region and corresponding index 
 region such that both will be split or none.
 With this hook split for user and index region as follows
 user region
 ===
 1) Create splitting znode for user region split
 2) Close parent user region
 3) split user region storefiles
 4) instantiate child regions of user region
 Through the new hook we can call index region transitions as below
 index region
 ===
 5) Create splitting znode for index region split
 6) Close parent index region
 7) Split storefiles of index region
 8) instantiate child regions of the index region
 If any failures in 5,6,7,8 rollback the steps and return null, on null return 
 throw exception to rollback for 1,2,3,4
 9) set PONR
 10) do batch put of offline and split entries for user and index regions
 index region
 
 11) open daughters of index regions and transition znode to split. This step 
 we will do through preSplitAfterPONR hook. Opening index regions before 
 opening user regions helps to avoid put failures if there is a colocation 
 mismatch (this can happen if user regions opening completed but index regions 
 opening is in progress)
 user region
 ===
 12) open daughters of user regions and transition znode to split.
 Even if the region server crashes, at the end either both user and index regions 
 will be split or neither will be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9241) Add cp hook before initialize variable set to true in master intialization

2013-08-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746523#comment-13746523
 ] 

ramkrishna.s.vasudevan commented on HBASE-9241:
---

Patch looks good.  On getting an error from the hook, do we now just log the 
error?  Should we give a provision for taking some other corrective action, 
like stopping the master from starting - something like that?
Can do it in a follow-up JIRA if needed.
+1

 Add cp hook before initialize variable set to true in master intialization
 --

 Key: HBASE-9241
 URL: https://issues.apache.org/jira/browse/HBASE-9241
 Project: HBase
  Issue Type: Sub-task
  Components: master
Reporter: rajeshbabu
Assignee: rajeshbabu
 Attachments: HBASE-9241.patch


 This hook helps in the following cases:
 1) When we are creating an indexed table, there is a chance that the master can 
 go down after successful creation of the user table but before index table 
 creation has started.
 This hook helps to find such cases and create the missing index table.
 2) If in any case there are mismatches in the colocation of user and index regions, 
 we can run the balancer (a rough sketch of such a hook follows below).
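For reference, a minimal sketch of a MasterObserver using such a hook might look like the following. The hook name preMasterInitialization and its signature are assumptions here (the actual API comes from the attached patch), and the helper methods are purely illustrative.

{code}
// Hypothetical sketch: hook name, signature and helpers are assumptions, not the patch.
import java.io.IOException;
import org.apache.hadoop.hbase.coprocessor.BaseMasterObserver;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;

public class IndexTableRepairObserver extends BaseMasterObserver {

  // Runs before the master flips its "initialized" flag, so recovery work such as
  // creating a missing index table or fixing mis-colocated regions can finish
  // before clients see the master as fully up.
  @Override
  public void preMasterInitialization(ObserverContext<MasterCoprocessorEnvironment> ctx)
      throws IOException {
    createMissingIndexTablesIfAny(ctx.getEnvironment());
    fixIndexColocationIfNeeded(ctx.getEnvironment());
  }

  private void createMissingIndexTablesIfAny(MasterCoprocessorEnvironment env)
      throws IOException { /* case 1 from the description */ }

  private void fixIndexColocationIfNeeded(MasterCoprocessorEnvironment env)
      throws IOException { /* case 2: trigger the balancer */ }
}
{code}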

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7639) Enable online schema update by default

2013-08-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746541#comment-13746541
 ] 

Lars Hofhansl commented on HBASE-7639:
--

Yeah, I think this ship has sailed for 0.94.

 Enable online schema update by default 
 ---

 Key: HBASE-7639
 URL: https://issues.apache.org/jira/browse/HBASE-7639
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Enis Soztutar
Assignee: Elliott Clark
 Fix For: 0.98.0, 0.95.2

 Attachments: HBASE-7639-0.patch


 After we get HBASE-7305 and HBASE-7546, things will become stable enough for 
 online schema update to be enabled by default. 
 {code}
   <property>
     <name>hbase.online.schema.update.enable</name>
     <value>false</value>
     <description>
     Set true to enable online schema changes.  This is an experimental feature.
     There are known issues modifying table schemas at the same time a region
     split is happening so your table needs to be quiescent or else you have to
     be running with splits disabled.
     </description>
   </property>
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 from 2.4.1)

2013-08-21 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746543#comment-13746543
 ] 

Francis Liu commented on HBASE-8165:


Any thoughts on upgrading protobuf in 0.94 so it plays nice with hadoop-2.2? 
Maybe just upgrade it in the 2.0 profile?

 Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 
 from 2.4.1)
 ---

 Key: HBASE-8165
 URL: https://issues.apache.org/jira/browse/HBASE-8165
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.0

 Attachments: 8165_minus_generated.txt, 8165_trunk7.txt, 
 8165_trunkv6.txt, 8165.txt, 8165v2.txt, 8165v3.txt, 8165v4.txt, 8165v5.txt, 
 8165v8.txt, 8615v9.txt, HBASE-8165-rebased.patch


 Update to new 2.5 pb.  Some speedups and a new PARSER idiom that bypasses 
 making a builder.
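For context, the PARSER idiom mentioned above looks roughly like this once the classes are regenerated with protobuf 2.5 (as this issue does); ClientProtos.Get is just a stand-in, and any generated message type exposes the same static field.

{code}
import com.google.protobuf.InvalidProtocolBufferException;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;

public class ParserIdiomExample {
  // 'bytes' is assumed to be a serialized ClientProtos.Get message.
  static ClientProtos.Get parse(byte[] bytes) throws InvalidProtocolBufferException {
    // Old idiom: go through a Builder and build().
    ClientProtos.Get viaBuilder = ClientProtos.Get.newBuilder().mergeFrom(bytes).build();
    // New 2.5 idiom: the static PARSER field bypasses the Builder allocation.
    ClientProtos.Get viaParser = ClientProtos.Get.PARSER.parseFrom(bytes);
    return viaParser;
  }
}
{code}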

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9267) StochasticLoadBalancer goes over its processing time limit

2013-08-21 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746553#comment-13746553
 ] 

Jean-Daniel Cryans commented on HBASE-9267:
---

Yesterday I was playing more with v2 and v3 and although I don't see the 
sublist issue anymore, the time it takes to balance always goes up until it 
reaches 60 seconds. The more I think about it the less I like it... there's no 
way to kill the balancer while it's running (AFAIK) and it blocks a couple of 
other things like HBCK. We could discuss this on dev@ or another jira though.

So I'm still +1 on the patch.

 StochasticLoadBalancer goes over its processing time limit
 --

 Key: HBASE-9267
 URL: https://issues.apache.org/jira/browse/HBASE-9267
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Jean-Daniel Cryans
Assignee: Elliott Clark
 Fix For: 0.98.0, 0.95.3

 Attachments: HBASE-9267-0.patch, HBASE-9267-1.patch, 
 HBASE-9267-2.patch, HBASE-9267-3.patch, HBASE-9267-4.patch


 I'm trying out 0.95.2; I left it running over the weekend (8 RS, average load 
 between 12 and 3 regions) and right now the balancer runs for 12 mins:
 bq. 2013-08-19 21:54:45,534 DEBUG 
 [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
 org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
 find a better load balance plan.  Tried 0 different configurations in 
 777309ms, and did not find anything with a computed cost less than 
 36.32576937689094
 It seems it slowly crept up there, yesterday it was doing:
 bq. 2013-08-18 20:53:17,232 DEBUG 
 [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
 org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
 find a better load balance plan.  Tried 0 different configurations in 
 257374ms, and did not find anything with a computed cost less than 
 36.3251082542424
 And originally it was doing 1 minute.
 In the jstack I see a 1000 of these and jstack doesn't want to show me the 
 whole thing:
 bq.  at java.util.SubList$1.nextIndex(AbstractList.java:713)
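As background on the java.util.SubList frame above: List.subList returns a view over its parent list, so building sublists of sublists produces a nested chain whose iterator calls delegate through every layer. The snippet below is a generic illustration of that pitfall, not the balancer's actual code.

{code}
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

public class NestedSubListPitfall {
  public static void main(String[] args) {
    // LinkedList is not RandomAccess, so AbstractList.subList returns java.util.SubList views.
    List<Integer> list = new LinkedList<Integer>();
    for (int i = 0; i < 1000; i++) {
      list.add(i);
    }
    // Re-wrapping the list in a fresh subList view on every pass nests the views 1000 deep.
    for (int i = 0; i < 1000; i++) {
      list = list.subList(0, list.size());
    }
    long start = System.nanoTime();
    long sum = 0;
    ListIterator<Integer> it = list.listIterator();   // frames: java.util.SubList$1.nextIndex(...)
    while (it.hasNext()) {                            // hasNext() -> nextIndex() delegates through
      sum += it.next();                               // every nested layer on every call
    }
    System.out.println("sum=" + sum + " in "
        + (System.nanoTime() - start) / 1000000L + " ms");
  }
}
{code}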

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 from 2.4.1)

2013-08-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746555#comment-13746555
 ] 

Lars Hofhansl commented on HBASE-8165:
--

Where are we using protobufs in 0.94? I see it in the pom, but I never had to 
run protoc to build 0.94.

 Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 
 from 2.4.1)
 ---

 Key: HBASE-8165
 URL: https://issues.apache.org/jira/browse/HBASE-8165
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.0

 Attachments: 8165_minus_generated.txt, 8165_trunk7.txt, 
 8165_trunkv6.txt, 8165.txt, 8165v2.txt, 8165v3.txt, 8165v4.txt, 8165v5.txt, 
 8165v8.txt, 8615v9.txt, HBASE-8165-rebased.patch


 Update to new 2.5 pb.  Some speedups and a new PARSER idiom that bypasses 
 making a builder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7462) TestDrainingServer is an integration test. It should be a unit test instead

2013-08-21 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-7462:
---

Status: Patch Available  (was: In Progress)

 TestDrainingServer is an integration test. It should be a unit test instead
 ---

 Key: HBASE-7462
 URL: https://issues.apache.org/jira/browse/HBASE-7462
 Project: HBase
  Issue Type: Wish
  Components: test
Affects Versions: 0.95.2
Reporter: Nicolas Liochon
Assignee: Gustavo Anatoly
Priority: Trivial
  Labels: noob
 Attachments: HBASE-7462-v2.patch


 TestDrainingServer tests the feature that lets you declare that a regionserver 
 should not get new regions.
 As it is written today, it's an integration test: it starts and stops a cluster.
 The test would be more efficient if it just checked that the 
 AssignmentManager does not use the drained region server, whatever the 
 circumstances (bulk assign or not, for example).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9108) LoadTestTool needs to have a way to ignore keys which failed during write.

2013-08-21 Thread gautam (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746556#comment-13746556
 ] 

gautam commented on HBASE-9108:
---

Enis, do you think this can be committed now?

 LoadTestTool needs to have a way to ignore keys which failed during 
 write. 
 ---

 Key: HBASE-9108
 URL: https://issues.apache.org/jira/browse/HBASE-9108
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.95.0, 0.95.1, 0.94.9, 0.94.10
Reporter: gautam
Assignee: gautam
Priority: Critical
 Attachments: 9108.patch._trunk.5, 9108.patch._trunk.6, 
 HBASE-9108.patch._trunk.2, HBASE-9108.patch._trunk.3, 
 HBASE-9108.patch._trunk.4, HBASE-9108.patch._trunk.7, 
 HBASE-9108.patch._trunk.8

   Original Estimate: 48h
  Remaining Estimate: 48h

 While running the chaosmonkey integration tests, it was found that writes 
 sometimes fail when the cluster components are restarted/stopped/killed, etc.
 The data key which was being put, using the LoadTestTool, is added to the 
 failed key set, and at the end of the test, this failed key set is checked 
 for any entries to assert failures.
 While doing fail-over testing, it is expected that some of the keys may go 
 un-written. The point here is to validate that whatever gets into hbase for 
 an unstable cluster really goes in, and hence read should be 100% for 
 whatever keys went in successfully.
 Currently LoadTestTool has strict checks to validate that every key was written; 
 if any key is not written, it fails.
 I wanted to loosen this constraint by allowing users to pass in a set of 
 exceptions they expect when doing put/write operations over hbase. If one of 
 these expected exceptions is thrown while writing a key to hbase, the failed 
 key would be ignored, and hence won't be considered again for subsequent 
 writes or reads.
 This can be passed to the load test tool as the csv list parameter 
 -allowed_write_exceptions, or it can be passed through hbase-site.xml by 
 setting a value for test.ignore.exceptions.during.write
 Here is the usage:
 -allowed_write_exceptions 
 java.io.EOFException,org.apache.hadoop.hbase.NotServingRegionException,org.apache.hadoop.hbase.client.NoServerForRegionException,org.apache.hadoop.hbase.ipc.ServerNotRunningYetException
 Hence, the existing integration tests can also make use of this change by 
 passing it as a property in hbase-site.xml (a configuration sketch follows below).
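For completeness, a small sketch of supplying the same list programmatically through a Configuration, using the property name quoted above; the exception list is just the example set from this description.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class AllowedWriteExceptionsExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Same effect as setting this property in hbase-site.xml; the value is the
    // comma-separated exception list given in the description above.
    conf.set("test.ignore.exceptions.during.write",
        "java.io.EOFException,"
        + "org.apache.hadoop.hbase.NotServingRegionException,"
        + "org.apache.hadoop.hbase.client.NoServerForRegionException,"
        + "org.apache.hadoop.hbase.ipc.ServerNotRunningYetException");
    System.out.println(conf.get("test.ignore.exceptions.during.write"));
  }
}
{code}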

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 from 2.4.1)

2013-08-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746559#comment-13746559
 ] 

Elliott Clark commented on HBASE-8165:
--

Protobuf is there for the REST server (maybe other things too) in 0.94.

 Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 
 from 2.4.1)
 ---

 Key: HBASE-8165
 URL: https://issues.apache.org/jira/browse/HBASE-8165
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.0

 Attachments: 8165_minus_generated.txt, 8165_trunk7.txt, 
 8165_trunkv6.txt, 8165.txt, 8165v2.txt, 8165v3.txt, 8165v4.txt, 8165v5.txt, 
 8165v8.txt, 8615v9.txt, HBASE-8165-rebased.patch


 Update to new 2.5 pb.  Some speedups and a new PARSER idiom that bypasses 
 making a builder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9282) Minor logging cleanup; shorten logs, remove redundant info

2013-08-21 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746574#comment-13746574
 ] 

Jean-Daniel Cryans commented on HBASE-9282:
---

+1

 Minor logging cleanup; shorten logs, remove redundant info
 --

 Key: HBASE-9282
 URL: https://issues.apache.org/jira/browse/HBASE-9282
 Project: HBase
  Issue Type: Task
  Components: Usability
Reporter: stack
Assignee: stack
 Fix For: 0.96.0

 Attachments: 9282.txt


 Minor log cleanup; trying to get it so hbase logs can be read on a laptop 
 screen w/o having to scroll right.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9267) StochasticLoadBalancer goes over its processing time limit

2013-08-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746587#comment-13746587
 ] 

Elliott Clark commented on HBASE-9267:
--

v3 didn't really work at all because Java's dumb.  I'm not seeing what you're 
seeing at all.

{code}
true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0010 seconds

true

  
0 row(s) in 0.0010 seconds

true

  
0 row(s) in 0.0010 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0010 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0020 seconds

true

  
0 row(s) in 0.0010 seconds

true

  
0 row(s) in 
{code}

[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746586#comment-13746586
 ] 

Lars Hofhansl commented on HBASE-7709:
--

The 0.94 patch looks good. Bit large, but then again this is a bad bug to have 
(when it hits you, you'll have useless load on your cluster forever, throwing your 
versions off, etc.).
Nice refactoring of the replication test.

Few nits:
* PREFIX_CLUSTER_KEY in WALEdit could just be '_', right? No need to store that 
longer prefix everywhere.
* Similarly maybe make PREFIX_CONSUMED_CLUSTER_IDS in Mutation just _cs.id
* The comment for scopes in WALEdit could be a bit more explicit that we're 
overloading scopes with the cluster id for backwards compatibility.

+1 otherwise (assuming the full 0.94 test suite passes)

Looking at trunk patch now.

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch


  We just discovered the following scenario:
 # Cluster A and B are set up in master/master replication
 # By accident we had Cluster C replicate to Cluster A.
 Now all edits originating from C will be bouncing between A and B. Forever!
 The reason is that when the edits come in from C the cluster ID is already set 
 and won't be reset.
 We have a couple of options here:
 # Optionally only support master/master (not cycles of more than two 
 clusters). In that case we can always reset the cluster ID in the 
 ReplicationSource. That means that now cycles > 2 will have the data cycle 
 forever. This is the only option that requires no changes in the HLog format.
 # Instead of a single cluster id per edit, maintain an (unordered) set of 
 cluster ids that have seen this edit. Then in ReplicationSource we drop any 
 edit that the sink has seen already. This is the cleanest approach (sketched 
 below), but it might need a lot of data stored per edit if there are many 
 clusters involved.
 # Maintain a configurable counter of the maximum cycle size we want to 
 support. Could default to 10 (even maybe even just). Store a hop-count in the 
 WAL and the ReplicationSource increases that hop-count on each hop. If we're 
 over the max, just drop the edit.
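As a rough illustration of option 2 above (a sketch, not the attached patch): each WAL entry would carry the set of cluster UUIDs that have already consumed it, and the replication source would skip entries the target peer has already seen. The WalEntry type and method names below are illustrative only.

{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

// Hypothetical sketch of option 2; types and names are illustrative only.
public class ClusterIdDedup {

  static class WalEntry {
    // Clusters that have already seen this edit.
    final Set<UUID> consumedClusterIds = new HashSet<UUID>();
  }

  /** In ReplicationSource terms: drop edits the target peer cluster has already seen. */
  static List<WalEntry> filterForPeer(List<WalEntry> entries, UUID peerClusterId) {
    List<WalEntry> toShip = new ArrayList<WalEntry>();
    for (WalEntry e : entries) {
      if (!e.consumedClusterIds.contains(peerClusterId)) {
        toShip.add(e);   // peer has not seen it yet; replicate it
      }
      // else: the edit already passed through the peer; skipping it breaks the loop
    }
    return toShip;
  }
}
{code}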

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9267) StochasticLoadBalancer goes over its processing time limit

2013-08-21 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746601#comment-13746601
 ] 

Jean-Daniel Cryans commented on HBASE-9267:
---

Oh you're right, I deployed v3 and got that but then I saw another issue 
somewhere else and my ADHD kicked in so I chased that one instead. What I 
described was v2. Do you think v4 would be any different? My jstacking was 
showing that the time is just spent computing costs.

 StochasticLoadBalancer goes over its processing time limit
 --

 Key: HBASE-9267
 URL: https://issues.apache.org/jira/browse/HBASE-9267
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Jean-Daniel Cryans
Assignee: Elliott Clark
 Fix For: 0.98.0, 0.95.3

 Attachments: HBASE-9267-0.patch, HBASE-9267-1.patch, 
 HBASE-9267-2.patch, HBASE-9267-3.patch, HBASE-9267-4.patch


 I'm trying out 0.95.2; I left it running over the weekend (8 RS, average load 
 between 12 and 3 regions) and right now the balancer runs for 12 mins:
 bq. 2013-08-19 21:54:45,534 DEBUG 
 [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
 org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
 find a better load balance plan.  Tried 0 different configurations in 
 777309ms, and did not find anything with a computed cost less than 
 36.32576937689094
 It seems it slowly crept up there, yesterday it was doing:
 bq. 2013-08-18 20:53:17,232 DEBUG 
 [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
 org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
 find a better load balance plan.  Tried 0 different configurations in 
 257374ms, and did not find anything with a computed cost less than 
 36.3251082542424
 And originally it was doing 1 minute.
 In the jstack I see a 1000 of these and jstack doesn't want to show me the 
 whole thing:
 bq.  at java.util.SubList$1.nextIndex(AbstractList.java:713)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9284) user_permission.rb uses wrong argument types for ProtobufUtil#getUserPermissions() call

2013-08-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9284:
--

Description: 
In security.rb, line 187:
{code}
perms = 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getUserPermissions(
  protocol, table_name.to_java_bytes)
{code}
the call results in the following exception:
{code}
ERROR: no method 'getUserPermissions' for arguments 
(org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
 on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
Backtrace: /usr/lib/hbase/bin/../lib/ruby/hbase/security.rb:147:in 
`user_permission'
   
/usr/lib/hbase/bin/../lib/ruby/shell/commands/user_permission.rb:39:in `command'
   org/jruby/RubyKernel.java:2109:in `send'
   /usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:34:in `command_safe'
   /usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:87:in 
`translate_hbase_exceptions'
   /usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:34:in `command_safe'
   /usr/lib/hbase/bin/../lib/ruby/shell.rb:123:in `internal_command'
   /usr/lib/hbase/bin/../lib/ruby/shell.rb:115:in `command'
   (eval):2:in `user_permission'
   (hbase):1:in `evaluate'
   org/jruby/RubyKernel.java:1112:in `eval'
{code}
The two argument method expects TableName for the second parameter.

  was:
In user_permission.rb, line 187:
{code}
perms = 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getUserPermissions(
  protocol, table_name.to_java_bytes)
{code}
the call results in the following exception:
{code}
ERROR: no method 'getUserPermissions' for arguments 
(org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
 on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
Backtrace: /usr/lib/hbase/bin/../lib/ruby/hbase/security.rb:147:in 
`user_permission'
   
/usr/lib/hbase/bin/../lib/ruby/shell/commands/user_permission.rb:39:in `command'
   org/jruby/RubyKernel.java:2109:in `send'
   /usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:34:in `command_safe'
   /usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:87:in 
`translate_hbase_exceptions'
   /usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:34:in `command_safe'
   /usr/lib/hbase/bin/../lib/ruby/shell.rb:123:in `internal_command'
   /usr/lib/hbase/bin/../lib/ruby/shell.rb:115:in `command'
   (eval):2:in `user_permission'
   (hbase):1:in `evaluate'
   org/jruby/RubyKernel.java:1112:in `eval'
{code}
The two argument method expects TableName for the second parameter.


 user_permission.rb uses wrong argument types for 
 ProtobufUtil#getUserPermissions() call
 ---

 Key: HBASE-9284
 URL: https://issues.apache.org/jira/browse/HBASE-9284
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Ted Yu

 In security.rb, line 187:
 {code}
 perms = 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.getUserPermissions(
   protocol, table_name.to_java_bytes)
 {code}
 the call results in the following exception:
 {code}
 ERROR: no method 'getUserPermissions' for arguments 
 (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
  on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
 Backtrace: /usr/lib/hbase/bin/../lib/ruby/hbase/security.rb:147:in 
 `user_permission'

 /usr/lib/hbase/bin/../lib/ruby/shell/commands/user_permission.rb:39:in 
 `command'
org/jruby/RubyKernel.java:2109:in `send'
/usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:34:in 
 `command_safe'
/usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:87:in 
 `translate_hbase_exceptions'
/usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:34:in 
 `command_safe'
/usr/lib/hbase/bin/../lib/ruby/shell.rb:123:in `internal_command'
/usr/lib/hbase/bin/../lib/ruby/shell.rb:115:in `command'
(eval):2:in `user_permission'
(hbase):1:in `evaluate'
org/jruby/RubyKernel.java:1112:in `eval'
 {code}
 The two argument method expects TableName for the second parameter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9274) After HBASE-8408 applied, temporary test files are being left in /tmp/hbase-user

2013-08-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9274:
--

Attachment: hbase-9274.patch

- Convert TestKeepDeletes, TestMinVersions, TestScanner from deprecated 
HBaseTestCase to JUnit4 HBaseTestingUtility
- Force region creation into HBU#CreateNewHRegion
- Simplifying HBaseTestCase
- Some moves from HBaseTestCase -> HBaseTestingUtility
- Tracking down testtable detritus
- Reuse createLocalHRegion so that proper tmp dirs are used.
- Create new #createLocalHTU / Convert tests to use 
HBaseTestingUtility.createLocalHTU
- Lessened scope of MockRegionServerServices


 After HBASE-8408 applied, temporary test files are being left in 
 /tmp/hbase-user
 --

 Key: HBASE-9274
 URL: https://issues.apache.org/jira/browse/HBASE-9274
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.95.3

 Attachments: hbase-9274.patch


 Some of our jenkins CI machines have been failing out with /tmp/hbase-user
 This can be shown by executing the following command before and after the 
 namespaces patch.
 {code}
 # several tests are dropping stuff in the archive dir, just pick one
 mvn clean test -Dtest=TestEncodedSeekers
 find /tmp/hbase-jon/hbase/
 {code}
 /tmp/hbase-jon after test run before patch applied
 {code}
 $ find /tmp/hbase-jon/
 /tmp/hbase-jon/
 /tmp/hbase-jon/local
 /tmp/hbase-jon/local/jars
 {code}
 after namespaces patch applied
 {code}
 /tmp/hbase-jon/
 /tmp/hbase-jon/local
 /tmp/hbase-jon/local/jars
 /tmp/hbase-jon/hbase
 /tmp/hbase-jon/hbase/.archive
 /tmp/hbase-jon/hbase/.archive/.data
 /tmp/hbase-jon/hbase/.archive/.data/default
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/8e76a87806b94483851158366f7d5c17
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/494c07dbf08940749696bb0f9278401e
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/.8e76a87806b94483851158366f7d5c1
 7.crc 
 
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746611#comment-13746611
 ] 

Lars Hofhansl commented on HBASE-7709:
--

In trunk:
* should repeated UUID clusters = 8 be optional in WAL.proto? Otherwise we can't 
read old log entries. But maybe that's not a problem...?
* in Import:
{code}
+clusters = new HashSet<UUID>();
+clusters.add(ZKClusterId.getUUIDForCluster(zkw));
{code}
Can be written as {{clusters = 
Collections.singleton(ZKClusterId.getUUIDForCluster(zkw))}}
* Is this right?
{code}
+  for(UUID clusterId : key.getClusters()) {
 uuidBuilder.setLeastSigBits(clusterId.getLeastSignificantBits());
 uuidBuilder.setMostSigBits(clusterId.getMostSignificantBits());
+keyBuilder.addClusters(uuidBuilder.build());
{code}
addClusters expects a Set.
* Where is HlogKey.PREFIX_CLUSTER_KEY used? Just to read old versions of 
WALEdits? Need to discuss if that is necessary. [~stack]? This has to do with 
upgrading WALEdits from pre 0.95.

Otherwise looks great.

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch


  We just discovered the following scenario:
 # Cluster A and B are set up in master/master replication
 # By accident we had Cluster C replicate to Cluster A.
 Now all edits originating from C will be bouncing between A and B. Forever!
 The reason is that when the edits come in from C the cluster ID is already set 
 and won't be reset.
 We have a couple of options here:
 # Optionally only support master/master (not cycles of more than two 
 clusters). In that case we can always reset the cluster ID in the 
 ReplicationSource. That means that now cycles > 2 will have the data cycle 
 forever. This is the only option that requires no changes in the HLog format.
 # Instead of a single cluster id per edit, maintain an (unordered) set of 
 cluster ids that have seen this edit. Then in ReplicationSource we drop any 
 edit that the sink has seen already. This is the cleanest approach, but it 
 might need a lot of data stored per edit if there are many clusters involved.
 # Maintain a configurable counter of the maximum cycle size we want to 
 support. Could default to 10 (even maybe even just). Store a hop-count in the 
 WAL and the ReplicationSource increases that hop-count on each hop. If we're 
 over the max, just drop the edit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9274) After HBASE-8408 applied, temporary test files are being left in /tmp/hbase-user

2013-08-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9274:
--

Status: Patch Available  (was: Open)

The patch submitted makes it so that tests no longer drop detritus into 
/tmp/hbase-user.  The root cause is that instances of a default 
HBaseConfiguration with hbase.rootdir set to /tmp/hbase-user sneak into 
different parts of test code via mocked or wrapped RegionServerServices and 
FileSystems, and the namespaces changes cause the hfile archiver to 
generate dirs by consulting hbase.rootdir (instead of using a relative path 
like before).  The patch makes it good style to instantiate hregions and 
other objects from the HBaseTestingUtility helper methods (a brief sketch 
follows below).
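A minimal sketch of that style, assuming the createLocalHTU() helper named in the patch notes above (exact signatures may differ from the final patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class LocalHtuExample {
  public static void main(String[] args) {
    // Assumption: createLocalHTU() is the helper referenced in the patch notes; it wires
    // the configuration so temp data lands under the test data dir, not /tmp/hbase-user.
    HBaseTestingUtility htu = HBaseTestingUtility.createLocalHTU();
    Configuration conf = htu.getConfiguration();
    Path testDir = htu.getDataTestDir();
    System.out.println("test data dir: " + testDir
        + ", hbase.rootdir: " + conf.get("hbase.rootdir"));
  }
}
{code}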

 After HBASE-8408 applied, temporary test files are being left in 
 /tmp/hbase-user
 --

 Key: HBASE-9274
 URL: https://issues.apache.org/jira/browse/HBASE-9274
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.95.3

 Attachments: hbase-9274.patch


 Some of our jenkins CI machines have been failing out with /tmp/hbase-user
 This can be shown by executing the following command before and after the 
 namespaces patch.
 {code}
 # several tests are dropping stuff in the archive dir, just pick one
 mvn clean test -Dtest=TestEncodedSeekers
 find /tmp/hbase-jon/hbase/
 {code}
 /tmp/hbase-jon after test run before patch applied
 {code}
 $ find /tmp/hbase-jon/
 /tmp/hbase-jon/
 /tmp/hbase-jon/local
 /tmp/hbase-jon/local/jars
 {code}
 after namespaces patch applied
 {code}
 /tmp/hbase-jon/
 /tmp/hbase-jon/local
 /tmp/hbase-jon/local/jars
 /tmp/hbase-jon/hbase
 /tmp/hbase-jon/hbase/.archive
 /tmp/hbase-jon/hbase/.archive/.data
 /tmp/hbase-jon/hbase/.archive/.data/default
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/8e76a87806b94483851158366f7d5c17
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/494c07dbf08940749696bb0f9278401e
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/.8e76a87806b94483851158366f7d5c1
 7.crc 
 
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9245) Remove dead or deprecated code from hbase 0.96

2013-08-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9245:
--

Summary: Remove dead or deprecated code from hbase 0.96  (was: Remove dead 
code from hbase 0.96)

 Remove dead or deprecated code from hbase 0.96
 --

 Key: HBASE-9245
 URL: https://issues.apache.org/jira/browse/HBASE-9245
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh

 This is an umbrella issue that will cover the removal or refactoring of 
 dangling dead code and cruft.  Some can make it into 0.96, some may have to 
 wait for 0.98.  The great culling of code will be grouped into patches that 
 are logically related.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9284) user_permission.rb uses wrong argument types for ProtobufUtil#getUserPermissions() call

2013-08-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746618#comment-13746618
 ] 

Ted Yu commented on HBASE-9284:
---

The intention of the ProtobufUtil.getUserPermissions() call mentioned above is 
to obtain permissions for the Namespace specified by the table_name argument.

A new method can be added to ProtobufUtil for such a query.

 user_permission.rb uses wrong argument types for 
 ProtobufUtil#getUserPermissions() call
 ---

 Key: HBASE-9284
 URL: https://issues.apache.org/jira/browse/HBASE-9284
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Ted Yu

 In security.rb, line 187:
 {code}
 perms = 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.getUserPermissions(
   protocol, table_name.to_java_bytes)
 {code}
 the call results in the following exception:
 {code}
 ERROR: no method 'getUserPermissions' for arguments 
 (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
  on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
 Backtrace: /usr/lib/hbase/bin/../lib/ruby/hbase/security.rb:147:in 
 `user_permission'

 /usr/lib/hbase/bin/../lib/ruby/shell/commands/user_permission.rb:39:in 
 `command'
org/jruby/RubyKernel.java:2109:in `send'
/usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:34:in 
 `command_safe'
/usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:87:in 
 `translate_hbase_exceptions'
/usr/lib/hbase/bin/../lib/ruby/shell/commands.rb:34:in 
 `command_safe'
/usr/lib/hbase/bin/../lib/ruby/shell.rb:123:in `internal_command'
/usr/lib/hbase/bin/../lib/ruby/shell.rb:115:in `command'
(eval):2:in `user_permission'
(hbase):1:in `evaluate'
org/jruby/RubyKernel.java:1112:in `eval'
 {code}
 The two argument method expects TableName for the second parameter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746611#comment-13746611
 ] 

Lars Hofhansl edited comment on HBASE-7709 at 8/21/13 5:38 PM:
---

In trunk:
* should repeated UUID clusters = 8 be optional in WAL.proto? Otherwise we 
can't read old log entries. But maybe that's not a problem...?
* in Import:
{code}
+clusters = new HashSet<UUID>();
+clusters.add(ZKClusterId.getUUIDForCluster(zkw));
{code}
Can be written as {{clusters = 
Collections.singleton(ZKClusterId.getUUIDForCluster(zkw))}}
* Is this right?
{code}
+  for(UUID clusterId : key.getClusters()) {
 uuidBuilder.setLeastSigBits(clusterId.getLeastSignificantBits());
 uuidBuilder.setMostSigBits(clusterId.getMostSignificantBits());
+keyBuilder.addClusters(uuidBuilder.build());
{code}
addClusters expects a Set.
* Where is HlogKey.PREFIX_CLUSTER_KEY used? Just to read old versions of 
WALEdits? Need to discuss if that is necessary. [~stack]? This has to do with 
upgrading WALEdits from pre 0.95.

Otherwise looks great.

  was (Author: lhofhansl):
In trunk:
* should repeated UUID clusters = 8 in WAL.proto? Otherwise we can't read old 
log entries. But maybe that's not a problem...?
* in Import:
{code}
+clusters = new HashSet<UUID>();
+clusters.add(ZKClusterId.getUUIDForCluster(zkw));
{code}
Can be written as {{clusters = 
Collections.singleton(ZKClusterId.getUUIDForCluster(zkw))}}
* Is this right?
{code}
+  for(UUID clusterId : key.getClusters()) {
 uuidBuilder.setLeastSigBits(clusterId.getLeastSignificantBits());
 uuidBuilder.setMostSigBits(clusterId.getMostSignificantBits());
+keyBuilder.addClusters(uuidBuilder.build());
{code}
addClusters expects a Set.
* Where is HlogKey.PREFIX_CLUSTER_KEY used? Just to read old versions of 
WALEdits? Need to discuss if that is necessary. [~stack]? This has to do with 
upgrading WALEdits from pre 0.95.

Otherwise looks great.
  
 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch


  We just discovered the following scenario:
 # Cluster A and B are set up in master/master replication
 # By accident we had Cluster C replicate to Cluster A.
 Now all edits originating from C will be bouncing between A and B. Forever!
 The reason is that when the edits come in from C the cluster ID is already set 
 and won't be reset.
 We have a couple of options here:
 # Optionally only support master/master (not cycles of more than two 
 clusters). In that case we can always reset the cluster ID in the 
 ReplicationSource. That means that now cycles > 2 will have the data cycle 
 forever. This is the only option that requires no changes in the HLog format.
 # Instead of a single cluster id per edit, maintain an (unordered) set of 
 cluster ids that have seen this edit. Then in ReplicationSource we drop any 
 edit that the sink has seen already. This is the cleanest approach, but it 
 might need a lot of data stored per edit if there are many clusters involved.
 # Maintain a configurable counter of the maximum cycle size we want to 
 support. Could default to 10 (even maybe even just). Store a hop-count in the 
 WAL and the ReplicationSource increases that hop-count on each hop. If we're 
 over the max, just drop the edit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9288) Eliminate HBaseTestCase and convert to HBaseTestingUtility

2013-08-21 Thread Jonathan Hsieh (JIRA)
Jonathan Hsieh created HBASE-9288:
-

 Summary: Eliminate HBaseTestCase and convert to HBaseTestingUtility
 Key: HBASE-9288
 URL: https://issues.apache.org/jira/browse/HBASE-9288
 Project: HBase
  Issue Type: Sub-task
Reporter: Jonathan Hsieh


HBaseTestCase has been deprecated for several releases now and should be 
removed/converted.  Some examples of this can be seen in HBASE-9274.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9288) Eliminate HBaseTestCase and convert to HBaseTestingUtility

2013-08-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9288:
--

Priority: Minor  (was: Major)

 Eliminate HBaseTestCase and convert to HBaseTestingUtility
 --

 Key: HBASE-9288
 URL: https://issues.apache.org/jira/browse/HBASE-9288
 Project: HBase
  Issue Type: Sub-task
Reporter: Jonathan Hsieh
Priority: Minor

 HBaseTestCase has been deprecated for several releases now and should be 
 removed/converted.  Some examples of this can be seen in HBASE-9274.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: HBASE-7709-rev3.patch

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, 
 HBASE-7709-rev3.patch


  We just discovered the following scenario:
 # Cluster A and B are set up in master/master replication
 # By accident we had Cluster C replicate to Cluster A.
 Now all edits originating from C will be bouncing between A and B. Forever!
 The reason is that when the edits come in from C the cluster ID is already set 
 and won't be reset.
 We have a couple of options here:
 # Optionally only support master/master (not cycles of more than two 
 clusters). In that case we can always reset the cluster ID in the 
 ReplicationSource. That means that now cycles > 2 will have the data cycle 
 forever. This is the only option that requires no changes in the HLog format.
 # Instead of a single cluster id per edit, maintain an (unordered) set of 
 cluster ids that have seen this edit. Then in ReplicationSource we drop any 
 edit that the sink has seen already. This is the cleanest approach, but it 
 might need a lot of data stored per edit if there are many clusters involved.
 # Maintain a configurable counter of the maximum cycle size we want to 
 support. Could default to 10 (even maybe even just). Store a hop-count in the 
 WAL and the ReplicationSource increases that hop-count on each hop. If we're 
 over the max, just drop the edit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9274) After HBASE-8408 applied, temporary test files are being left in /tmp/hbase-user

2013-08-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9274:
--

Attachment: hbase-9274.v2.patch

missed a few imports on v1.

 After HBASE-8408 applied, temporary test files are being left in 
 /tmp/hbase-user
 --

 Key: HBASE-9274
 URL: https://issues.apache.org/jira/browse/HBASE-9274
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.95.3

 Attachments: hbase-9274.patch, hbase-9274.v2.patch


 Some of our jenkins CI machines have been failing out with /tmp/hbase-user
 This can be shown by executing the following command before and after the 
 namespaces patch.
 {code}
 # several tests are dropping stuff in the archive dir, just pick one
 mvn clean test -Dtest=TestEncodedSeekers
 find /tmp/hbase-jon/hbase/
 {code}
 /tmp/hbase-jon after test run before patch applied
 {code}
 $ find /tmp/hbase-jon/
 /tmp/hbase-jon/
 /tmp/hbase-jon/local
 /tmp/hbase-jon/local/jars
 {code}
 after namespaces patch applied
 {code}
 /tmp/hbase-jon/
 /tmp/hbase-jon/local
 /tmp/hbase-jon/local/jars
 /tmp/hbase-jon/hbase
 /tmp/hbase-jon/hbase/.archive
 /tmp/hbase-jon/hbase/.archive/.data
 /tmp/hbase-jon/hbase/.archive/.data/default
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/8e76a87806b94483851158366f7d5c17
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/494c07dbf08940749696bb0f9278401e
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/.8e76a87806b94483851158366f7d5c1
 7.crc 
 
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-9288) Eliminate HBaseTestCase and convert to HBaseTestingUtility

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-9288.
--

Resolution: Duplicate

Dup of HBASE-4625 (you filed it then too [~j...@cloudera.com] -- smile)

 Eliminate HBaseTestCase and convert to HBaseTestingUtility
 --

 Key: HBASE-9288
 URL: https://issues.apache.org/jira/browse/HBASE-9288
 Project: HBase
  Issue Type: Sub-task
Reporter: Jonathan Hsieh
Priority: Minor

 HBaseTestCase has been deprecated for several releases now and should be 
 removed/converted.  Some examples of this can be seen in HBASE-9274.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9274) After HBASE-8408 applied, temporary test files are being left in /tmp/hbase-user

2013-08-21 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746640#comment-13746640
 ] 

Jonathan Hsieh commented on HBASE-9274:
---

a 0.95 version only has a few trivial tweaks.  Will post after hadoopqa has a go.

 After HBASE-8408 applied, temporary test files are being left in 
 /tmp/hbase-user
 --

 Key: HBASE-9274
 URL: https://issues.apache.org/jira/browse/HBASE-9274
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.95.3

 Attachments: hbase-9274.patch, hbase-9274.v2.patch


 Some of our jenkins CI machines have been failing out with /tmp/hbase-user
 This can be shown by executing the following command before and after the 
 namespaces patch.
 {code}
 # several tests are dropping stuff in the archive dir, just pick one
 mvn clean test -Dtest=TestEncodedSeekers
 find /tmp/hbase-jon/hbase/
 {code}
 /tmp/hbase-jon after test run before patch applied
 {code}
 $ find /tmp/hbase-jon/
 /tmp/hbase-jon/
 /tmp/hbase-jon/local
 /tmp/hbase-jon/local/jars
 {code}
 after namespaces patch applied
 {code}
 /tmp/hbase-jon/
 /tmp/hbase-jon/local
 /tmp/hbase-jon/local/jars
 /tmp/hbase-jon/hbase
 /tmp/hbase-jon/hbase/.archive
 /tmp/hbase-jon/hbase/.archive/.data
 /tmp/hbase-jon/hbase/.archive/.data/default
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/8e76a87806b94483851158366f7d5c17
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/494c07dbf08940749696bb0f9278401e
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/.8e76a87806b94483851158366f7d5c1
 7.crc 
 
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: (was: HBASE-7709-rev3.patch)

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch


  We just discovered the following scenario:
 # Cluster A and B are set up in master/master replication
 # By accident we had Cluster C replicate to Cluster A.
 Now all edits originating from C will be bouncing between A and B. Forever!
 The reason is that when the edits come in from C the cluster ID is already set 
 and won't be reset.
 We have a couple of options here:
 # Optionally only support master/master (not cycles of more than two 
 clusters). In that case we can always reset the cluster ID in the 
 ReplicationSource. That means that now cycles > 2 will have the data cycle 
 forever. This is the only option that requires no changes in the HLog format.
 # Instead of a single cluster id per edit, maintain an (unordered) set of 
 cluster ids that have seen this edit. Then in ReplicationSource we drop any 
 edit that the sink has seen already. This is the cleanest approach, but it 
 might need a lot of data stored per edit if there are many clusters involved.
 # Maintain a configurable counter of the maximum cycle size we want to 
 support. Could default to 10 (even maybe even just). Store a hop-count in the 
 WAL and the ReplicationSource increases that hop-count on each hop. If we're 
 over the max, just drop the edit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746662#comment-13746662
 ] 

Ted Yu commented on HBASE-9285:
---

The reason for the permission check failure was that there is no znode persisted 
under /hbase/acl which stores permission information for the table.

Investigating ...

 User who created table cannot scan the same table due to Insufficient 
 permissions
 -

 Key: HBASE-9285
 URL: https://issues.apache.org/jira/browse/HBASE-9285
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Ted Yu

 User hrt_qa has been given 'C' permission.
 {code}
 create 'te', {NAME => 'f1', VERSIONS => 5}
 ...
 hbase(main):003:0> list
 TABLE
 hbase:acl
 hbase:namespace
 te
 6 row(s) in 0.0570 seconds
 hbase(main):004:0> scan 'te'
 ROW  COLUMN+CELL
 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
 matching token found
 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
 SASL GSSAPI client. Server's Kerberos principal name is 
 hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
 token of size 582 from initSASLContext.
 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
 input token of size 0 for processing by initSASLContext
 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
 token of size 0 from initSASLContext.
 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
 input token of size 53 for processing by initSASLContext
 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
 token of size 53 from initSASLContext.
 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
 context established. Negotiated QoP: auth
 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
 exception, tries=0, retries=7, retryTime=-14ms
 org.apache.hadoop.hbase.security.AccessDeniedException: 
 org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
 permissions for user 'hrt_qa' for scanner open on table te
   at 
 org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
   at 
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
 ...
 Caused by: 
 org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
 permissions for user 'hrt_qa' for scanner open on table te
   at 
 org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
   at 
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
 {code}
 Here was related entries in hbase:acl table:
 {code}
 hbase(main):001:0> scan 'hbase:acl'
 ROW  COLUMN+CELL
  hbase:acl   column=l:hrt_qa, 
 timestamp=1377045996685, value=C
  te  column=l:hrt_qa, 
 timestamp=1377051648649, value=RWXCA
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: HBASE-7709-rev3.patch

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, 
 HBASE-7709-rev3.patch


  We just discovered the following scenario:
 # Cluster A and B are set up in master/master replication
 # By accident we had Cluster C replicate to Cluster A.
 Now all edits originating from C will be bouncing between A and B. Forever!
 The reason is that when the edits come in from C the cluster ID is already set 
 and won't be reset.
 We have a couple of options here:
 # Optionally only support master/master (not cycles of more than two 
 clusters). In that case we can always reset the cluster ID in the 
 ReplicationSource. That means that now cycles > 2 will have the data cycle 
 forever. This is the only option that requires no changes in the HLog format.
 # Instead of a single cluster id per edit, maintain an (unordered) set of 
 cluster ids that have seen this edit. Then in ReplicationSource we drop any 
 edit that the sink has seen already. This is the cleanest approach, but it 
 might need a lot of data stored per edit if there are many clusters involved.
 # Maintain a configurable counter of the maximum cycle size we want to 
 support. Could default to 10 (or maybe even less). Store a hop-count in the 
 WAL and the ReplicationSource increases that hop-count on each hop. If we're 
 over the max, just drop the edit. (A minimal sketch of this hop-count check 
 follows below.)
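
A minimal sketch of the hop-count check from option 3, under the assumption that a hop count is carried with each WAL edit; the class, field, and default below are hypothetical and are not part of the attached patches:
{code}
/**
 * Hypothetical hop-count guard illustrating option 3 above. Neither the
 * counter nor the max-hops setting exist in HBase; they only show where
 * such a check could sit in a ReplicationSource-like shipping loop.
 */
public class HopCountGuard {
  private final int maxHops;

  public HopCountGuard(int maxHops) {
    this.maxHops = maxHops;              // e.g. 10, per the description
  }

  /** Returns true if the edit should still be shipped to the sink. */
  public boolean shouldReplicate(int hopCountInEdit) {
    return hopCountInEdit < maxHops;
  }

  /** Value to write back into the WAL entry before shipping it on. */
  public int nextHopCount(int hopCountInEdit) {
    return hopCountInEdit + 1;
  }
}
{code}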

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9210) hbase shell -d doesn't print out exception stack trace

2013-08-21 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-9210:
-

Attachment: hbase-9210-v1.patch

Added handling for the case where the error object doesn't have a cause method. Thanks. 

 hbase shell  -d doesn't print out exception stack trace
 -

 Key: HBASE-9210
 URL: https://issues.apache.org/jira/browse/HBASE-9210
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.95.2
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: hbase-9210.patch, hbase-9210-v1.patch


 When starting the shell with -d specified, the following line doesn't print 
 anything because debug isn't set when the shell is constructed.
 {code}
 "Backtrace: #{e.backtrace.join("\n   ")}" if debug
 {code}
 In addition, the existing code prints the outermost exception while we 
 normally need the root cause exception.
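
The root-cause behaviour being asked for is essentially "walk getCause() until it runs out"; here is that idea as a small, self-contained Java helper (a sketch only, not the shell patch itself, which is Ruby):
{code}
/** Minimal sketch: unwrap an exception chain down to its root cause. */
public final class RootCause {
  private RootCause() {}

  public static Throwable of(Throwable t) {
    Throwable cur = t;
    // Follow getCause() until it returns null or points back at itself.
    while (cur.getCause() != null && cur.getCause() != cur) {
      cur = cur.getCause();
    }
    return cur;
  }

  public static void main(String[] args) {
    Throwable wrapped = new RuntimeException("outer",
        new IllegalStateException("root"));
    System.out.println(RootCause.of(wrapped).getMessage());  // prints "root"
  }
}
{code}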

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4625) Convert @deprecated HBaseTestCase tests to JUnit4 style tests

2013-08-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-4625:
--

Summary: Convert @deprecated HBaseTestCase tests to JUnit4 style tests  (was: 
Convert @deprecated HBaseTestCase tests in 0.90 into JUnit4 style tests in 0.92 
and TRUNK)

 Convert @deprecated HBaseTestCase tests to JUnit4 style tests
 ---

 Key: HBASE-4625
 URL: https://issues.apache.org/jira/browse/HBASE-4625
 Project: HBase
  Issue Type: Sub-task
Reporter: Jonathan Hsieh
Priority: Minor
  Labels: noob

 This class has 47 references, so it is being separated out into a separate subtask.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4625) Convert @deprecated HBaseTestCase tests in 0.90 into JUnit4 style tests in 0.92 and TRUNK

2013-08-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-4625:
--

Parent Issue: HBASE-9245  (was: HBASE-4436)

 Convert @deprecated HBaseTestCase tests in 0.90 into JUnit4 style tests in 
 0.92 and TRUNK
 -

 Key: HBASE-4625
 URL: https://issues.apache.org/jira/browse/HBASE-4625
 Project: HBase
  Issue Type: Sub-task
Reporter: Jonathan Hsieh
  Labels: noob

 This class has 47 references, so it is being separated out into a separate subtask.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4625) Convert @deprecated HBaseTestCase tests in 0.90 into JUnit4 style tests in 0.92 and TRUNK

2013-08-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-4625:
--

Priority: Minor  (was: Major)

 Convert @deprecated HBaseTestCase tests in 0.90 into JUnit4 style tests in 
 0.92 and TRUNK
 -

 Key: HBASE-4625
 URL: https://issues.apache.org/jira/browse/HBASE-4625
 Project: HBase
  Issue Type: Sub-task
Reporter: Jonathan Hsieh
Priority: Minor
  Labels: noob

 This class has 47 references, so it is being separated out into a separate subtask.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9288) Eliminate HBaseTestCase and convert to HBaseTestingUtility

2013-08-21 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746696#comment-13746696
 ] 

Jonathan Hsieh commented on HBASE-9288:
---

Thanks.  I've reparented HBASE-4625.

 Eliminate HBaseTestCase and convert to HBaseTestingUtility
 --

 Key: HBASE-9288
 URL: https://issues.apache.org/jira/browse/HBASE-9288
 Project: HBase
  Issue Type: Sub-task
Reporter: Jonathan Hsieh
Priority: Minor

 HBaseTestCase has been deprecated for several releases now and should be 
 removed/converted.  Some examples of this can be seen in HBASE-9274.
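
For anyone picking up one of these conversions, a rough skeleton of the target shape (JUnit4 plus HBaseTestingUtility instead of the deprecated HBaseTestCase); the table and family names are placeholders:
{code}
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class TestSomethingConverted {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    TEST_UTIL.startMiniCluster();        // replaces HBaseTestCase's setUp()
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    TEST_UTIL.shutdownMiniCluster();     // replaces HBaseTestCase's tearDown()
  }

  @Test
  public void testSomething() throws Exception {
    HTable table = TEST_UTIL.createTable(Bytes.toBytes("testtable"),
        Bytes.toBytes("fam"));
    // ... exercise the code under test against 'table' ...
    table.close();
  }
}
{code}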

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-21 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-8960:
-

Attachment: hbase-8960-fix-disallowWritesInRecovering.patch

Reworked the test case disallowWritesInRecovering to make it stable. I 
integrated the patch into 0.95 and trunk as well. Thanks.

 TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
 --

 Key: HBASE-8960
 URL: https://issues.apache.org/jira/browse/HBASE-8960
 Project: HBase
  Issue Type: Task
  Components: test
Reporter: Jimmy Xiang
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.96.0

 Attachments: hbase-8690-v4.patch, hbase-8960-addendum-2.patch, 
 hbase-8960-addendum.patch, hbase-8960-fix-disallowWritesInRecovering.patch, 
 hbase-8960.patch


 http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
 {noformat}
 java.lang.AssertionError: expected:<1000> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746703#comment-13746703
 ] 

stack commented on HBASE-8960:
--

[~jeffreyz] Thanks for working on this.

 TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
 --

 Key: HBASE-8960
 URL: https://issues.apache.org/jira/browse/HBASE-8960
 Project: HBase
  Issue Type: Task
  Components: test
Reporter: Jimmy Xiang
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.96.0

 Attachments: hbase-8690-v4.patch, hbase-8960-addendum-2.patch, 
 hbase-8960-addendum.patch, hbase-8960-fix-disallowWritesInRecovering.patch, 
 hbase-8960.patch


 http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
 {noformat}
 java.lang.AssertionError: expected:<1000> but was:<0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9210) hbase shell -d doesn't print out exception stack trace

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746706#comment-13746706
 ] 

stack commented on HBASE-9210:
--

Patch looks good.  +1 if it works for you for 0.95 and trunk.

 hbase shell  -d doesn't print out exception stack trace
 -

 Key: HBASE-9210
 URL: https://issues.apache.org/jira/browse/HBASE-9210
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.95.2
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: hbase-9210.patch, hbase-9210-v1.patch


 When starting the shell with -d specified, the following line doesn't print 
 anything because debug isn't set when the shell is constructed.
 {code}
 "Backtrace: #{e.backtrace.join("\n   ")}" if debug
 {code}
 In addition, the existing code prints the outermost exception while we 
 normally need the root cause exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9289) hbase-assembly pom should use project.parent.basedir

2013-08-21 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-9289:
--

 Summary: hbase-assembly pom should use project.parent.basedir
 Key: HBASE-9289
 URL: https://issues.apache.org/jira/browse/HBASE-9289
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.95.2
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.96.0
 Attachments: trunk-9289.patch

Currently, we have
{noformat}
<outputFile>${project.build.directory}/../../target/cached_classpath.txt</outputFile>
{noformat}

It is more robust to use ${project.parent.basedir} instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9289) hbase-assembly pom should use project.parent.basedir

2013-08-21 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-9289:
---

Status: Patch Available  (was: Open)

 hbase-assembly pom should use project.parent.basedir
 

 Key: HBASE-9289
 URL: https://issues.apache.org/jira/browse/HBASE-9289
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.95.2
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.96.0

 Attachments: trunk-9289.patch


 Currently, we have
 {noformat}
 <outputFile>${project.build.directory}/../../target/cached_classpath.txt</outputFile>
 {noformat}
 It is more robust to use ${project.parent.basedir} instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9289) hbase-assembly pom should use project.parent.basedir

2013-08-21 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-9289:
---

Attachment: trunk-9289.patch

 hbase-assembly pom should use project.parent.basedir
 

 Key: HBASE-9289
 URL: https://issues.apache.org/jira/browse/HBASE-9289
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.95.2
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.96.0

 Attachments: trunk-9289.patch


 Currently, we have
 {noformat}
 <outputFile>${project.build.directory}/../../target/cached_classpath.txt</outputFile>
 {noformat}
 It is more robust to use ${project.parent.basedir} instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9289) hbase-assembly pom should use project.parent.basedir

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746722#comment-13746722
 ] 

stack commented on HBASE-9289:
--

+1

 hbase-assembly pom should use project.parent.basedir
 

 Key: HBASE-9289
 URL: https://issues.apache.org/jira/browse/HBASE-9289
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.95.2
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.96.0

 Attachments: trunk-9289.patch


 Currently, we have
 {noformat}
 <outputFile>${project.build.directory}/../../target/cached_classpath.txt</outputFile>
 {noformat}
 It is more robust to use ${project.parent.basedir} instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9272) A simple parallel, unordered scanner

2013-08-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9272:
-

Assignee: Lars Hofhansl

 A simple parallel, unordered scanner
 

 Key: HBASE-9272
 URL: https://issues.apache.org/jira/browse/HBASE-9272
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Attachments: ParallelClientScanner.java


 The contract of ClientScanner is to return rows in sort order. That limits 
 the order in which region can be scanned.
 I propose a simple ParallelScanner that does not have this requirement and 
 queries regions in parallel, return whatever gets returned first.
 This is generally useful for scans that filter a lot of data on the server, 
 or in cases where the client can very quickly react to the returned data.
 I have a simple prototype (doesn't do error handling right, and might be a 
 bit heavy on the synchronization side - it used a BlockingQueue to hand data 
 between the client using the scanner and the threads doing the scanning, it 
 also could potentially starve some scanners long enugh to time out at the 
 server).
 On the plus side, it's only a 130 lines of code. :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9272) A simple parallel, unordered scanner

2013-08-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746728#comment-13746728
 ] 

Lars Hofhansl commented on HBASE-9272:
--

Compared to HBASE-1935 this is *much* simpler since it still uses ClientScanner 
under the hood (which I have since factored into its own class).
Using ClientScanner also has the benefit that this is resistant to concurrent 
splits.

Simple perf test with 30m rows, 1 col, 100 byte values, split into 16 regions 
on a cluster with 16 region servers.
The performance speedup (scan latency here) is proportional to the number of 
threads used when most data is filtered at the server (as expected) - of course 
the cluster was not otherwise busy.
Even when *all* rows are returned I see the following (scanner caching: 100; 
buffer size in ParallelScanner was 1000 and 1, which made no difference 
perf-wise):
* Running the ParallelScanner with 50 threads: 40.8s.
* Running the ParallelScanner with 20 threads: 40.3s.
* Running the ParallelScanner with 16 threads: 40.3s.
* Running the ParallelScanner with 10 threads: 59.6s.
* Running the ParallelScanner with 1 thread: 316s.
* Running the standard scanner: 309s.

So there is a 1.5% synchronization/context-switching overhead.
It looks like this general approach is viable, and by using ClientScanner it is 
also ridiculously simple.
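
For readers following along, here is a stripped-down sketch of the approach measured above: one plain ClientScanner-backed scan per key range, all feeding a bounded BlockingQueue in arrival order. This is not the attached ParallelClientScanner.java; the range splitting, error propagation, and shutdown are deliberately simplified.
{code}
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

/** Unordered parallel scan: results are delivered in whatever order they arrive. */
public class UnorderedParallelScan {

  public interface ResultHandler {
    void handle(Result r) throws Exception;
  }

  public static void run(final Configuration conf, final String tableName,
      List<Scan> rangeScans, int threads, ResultHandler handler) throws Exception {
    final BlockingQueue<Result> queue = new ArrayBlockingQueue<Result>(1000);
    final AtomicInteger remaining = new AtomicInteger(rangeScans.size());
    ExecutorService pool = Executors.newFixedThreadPool(threads);

    for (final Scan scan : rangeScans) {       // one worker per key range
      pool.execute(new Runnable() {
        @Override
        public void run() {
          try {
            HTable table = new HTable(conf, tableName);
            try {
              ResultScanner rs = table.getScanner(scan);  // plain ClientScanner
              for (Result r : rs) {
                queue.put(r);                  // hand off to the consumer
              }
              rs.close();
            } finally {
              table.close();
            }
          } catch (Exception e) {
            e.printStackTrace();               // real code must propagate errors
          } finally {
            remaining.decrementAndGet();
          }
        }
      });
    }

    // Consume until every worker has finished and the queue has drained.
    while (remaining.get() > 0 || !queue.isEmpty()) {
      Result r = queue.poll(100, TimeUnit.MILLISECONDS);
      if (r != null) {
        handler.handle(r);
      }
    }
    pool.shutdown();
  }
}
{code}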


 A simple parallel, unordered scanner
 

 Key: HBASE-9272
 URL: https://issues.apache.org/jira/browse/HBASE-9272
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Priority: Minor
 Attachments: ParallelClientScanner.java


 The contract of ClientScanner is to return rows in sort order. That limits 
 the order in which region can be scanned.
 I propose a simple ParallelScanner that does not have this requirement and 
 queries regions in parallel, return whatever gets returned first.
 This is generally useful for scans that filter a lot of data on the server, 
 or in cases where the client can very quickly react to the returned data.
 I have a simple prototype (doesn't do error handling right, and might be a 
 bit heavy on the synchronization side - it used a BlockingQueue to hand data 
 between the client using the scanner and the threads doing the scanning, it 
 also could potentially starve some scanners long enugh to time out at the 
 server).
 On the plus side, it's only a 130 lines of code. :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9272) A simple parallel, unordered scanner

2013-08-21 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746731#comment-13746731
 ] 

Jean-Marc Spaggiari commented on HBASE-9272:


Is there a way to make that optional, so that people who don't want the 1.5% 
overhead can avoid it? For example, when they know they will need all the data, 
or something like that?

 A simple parallel, unordered scanner
 

 Key: HBASE-9272
 URL: https://issues.apache.org/jira/browse/HBASE-9272
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Attachments: ParallelClientScanner.java


 The contract of ClientScanner is to return rows in sort order. That limits 
 the order in which region can be scanned.
 I propose a simple ParallelScanner that does not have this requirement and 
 queries regions in parallel, return whatever gets returned first.
 This is generally useful for scans that filter a lot of data on the server, 
 or in cases where the client can very quickly react to the returned data.
 I have a simple prototype (doesn't do error handling right, and might be a 
 bit heavy on the synchronization side - it used a BlockingQueue to hand data 
 between the client using the scanner and the threads doing the scanning, it 
 also could potentially starve some scanners long enugh to time out at the 
 server).
 On the plus side, it's only a 130 lines of code. :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9290) Add logging in IntegrationTestBigLinkedList Verify reduce phase

2013-08-21 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-9290:


 Summary: Add logging in IntegrationTestBigLinkedList Verify reduce 
phase
 Key: HBASE-9290
 URL: https://issues.apache.org/jira/browse/HBASE-9290
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Elliott Clark


In order to debug mangled references it would be very helpful to have the rows 
printed to the log.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9291) Enable client to setAttribute that is sent once to each region server

2013-08-21 Thread James Taylor (JIRA)
James Taylor created HBASE-9291:
---

 Summary: Enable client to setAttribute that is sent once to each 
region server
 Key: HBASE-9291
 URL: https://issues.apache.org/jira/browse/HBASE-9291
 Project: HBase
  Issue Type: New Feature
  Components: IPC/RPC
Reporter: James Taylor


Currently, Scan and Mutation allow the client to set their own attributes that 
get passed through the RPC layer and are accessible from a coprocessor. This is 
very handy, but breaks down if the amount of information is large, since this 
information ends up being sent again and again to every region. Clients can 
work around this with an endpoint pre and post coprocessor invocation that:
1) sends the information and caches it on the region server in the pre 
invocation
2) invokes the Scan or sends the batch of Mutations, and then
3) removes it in the post invocation.
In this case, the client is forced to identify all region servers (ideally, all 
region servers that will be involved in the Scan/Mutation), make extra RPC 
calls, manage the caching of the information on the region server, age-out the 
information (in case the client dies before step (3) that clears the cached 
information), and must deal with the possibility of a split occurring while 
this operation is in-progress.

Instead, it'd be much better if an attribute could be identified as a region 
server attribute in OperationWithAttributes and the HBase RPC layer would take 
care of doing the above.

The use cases where the above is necessary in Phoenix include:
1) Hash joins, where the results of the smaller side of a join scan are 
packaged up and sent to each region server, and
2) Secondary indexing, where the metadata of knowing a) which column 
family/column qualifier pairs and b) which part of the row key contributes to 
which indexes are sent to each region server that will process a batched put.
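
For context, this is what the existing per-operation attribute mechanism looks like today: the client sets an attribute on the Scan and a RegionObserver reads it back. The attribute name and observer class below are made up for illustration; the point of the request above is that a large payload set this way is re-sent with every region's scan RPC.
{code}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

public class JoinCacheObserver extends BaseRegionObserver {
  // Hypothetical attribute name, used only for this example.
  public static final String JOIN_CACHE_ATTR = "example.joinCache";

  @Override
  public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
      Scan scan, RegionScanner s) throws IOException {
    byte[] payload = scan.getAttribute(JOIN_CACHE_ATTR);
    if (payload != null) {
      // Today this payload travels with every scan RPC to every region;
      // the request above is to ship it once per region server instead.
    }
    return s;
  }

  /** Client side: attach the (potentially large) payload to the Scan. */
  public static Scan attach(Scan scan, byte[] payload) {
    scan.setAttribute(JOIN_CACHE_ATTR, payload);
    return scan;
  }
}
{code}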



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9282) Minor logging cleanup; shorten logs, remove redundant info

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9282:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.95 branch and to trunk.  Thanks for review.

 Minor logging cleanup; shorten logs, remove redundant info
 --

 Key: HBASE-9282
 URL: https://issues.apache.org/jira/browse/HBASE-9282
 Project: HBase
  Issue Type: Task
  Components: Usability
Reporter: stack
Assignee: stack
 Fix For: 0.96.0

 Attachments: 9282.txt


 Minor log cleanup; trying to get it so hbase logs can be read on a laptop 
 screen w/o having to scroll right.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9291) Enable client to setAttribute that is sent once to each region server

2013-08-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated HBASE-9291:


Tags: Phoenix

 Enable client to setAttribute that is sent once to each region server
 -

 Key: HBASE-9291
 URL: https://issues.apache.org/jira/browse/HBASE-9291
 Project: HBase
  Issue Type: New Feature
  Components: IPC/RPC
Reporter: James Taylor

 Currently, Scan and Mutation allow the client to set their own attributes that 
 get passed through the RPC layer and are accessible from a coprocessor. This 
 is very handy, but breaks down if the amount of information is large, since 
 this information ends up being sent again and again to every region. Clients 
 can work around this with an endpoint pre and post coprocessor invocation 
 that:
 1) sends the information and caches it on the region server in the pre 
 invocation
 2) invokes the Scan or sends the batch of Mutations, and then
 3) removes it in the post invocation.
 In this case, the client is forced to identify all region servers (ideally, 
 all region servers that will be involved in the Scan/Mutation), make extra 
 RPC calls, manage the caching of the information on the region server, 
 age-out the information (in case the client dies before step (3) that clears 
 the cached information), and must deal with the possibility of a split 
 occurring while this operation is in-progress.
 Instead, it'd be much better if an attribute could be identified as a region 
 server attribute in OperationWithAttributes and the HBase RPC layer would 
 take care of doing the above.
 The use cases where the above is necessary in Phoenix include:
 1) Hash joins, where the results of the smaller side of a join scan are 
 packaged up and sent to each region server, and
 2) Secondary indexing, where the metadata of knowing a) which column 
 family/column qualifier pairs and b) which part of the row key contributes to 
 which indexes are sent to each region server that will process a batched put.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 from 2.4.1)

2013-08-21 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746749#comment-13746749
 ] 

Francis Liu commented on HBASE-8165:


Snapshots use protobuf, don't they?

 Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 
 from 2.4.1)
 ---

 Key: HBASE-8165
 URL: https://issues.apache.org/jira/browse/HBASE-8165
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.0

 Attachments: 8165_minus_generated.txt, 8165_trunk7.txt, 
 8165_trunkv6.txt, 8165.txt, 8165v2.txt, 8165v3.txt, 8165v4.txt, 8165v5.txt, 
 8165v8.txt, 8615v9.txt, HBASE-8165-rebased.patch


 Update to new 2.5 pb.  Some speedups and a new PARSER idiom that bypasses 
 making a builder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746752#comment-13746752
 ] 

stack commented on HBASE-9287:
--

+1

Go for it [~mbertozzi]


 TestCatalogTracker depends on the execution order
 -

 Key: HBASE-9287
 URL: https://issues.apache.org/jira/browse/HBASE-9287
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.98.0, 0.95.2, 0.94.11
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: HBASE-9287-trunk-v0.patch, HBASE-9287-v0.patch


 Some CatalogTracker tests don't delete the ROOT location.
 For example, if testNoTimeoutWaitForRoot() runs before 
 testInterruptWaitOnMetaAndRoot() you get:
 {code}
 junit.framework.AssertionFailedError: Expected: <null> but was: example.org,1234,1377038834244
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at junit.framework.Assert.assertNull(Assert.java:237)
   at junit.framework.Assert.assertNull(Assert.java:230)
   at 
 org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 from 2.4.1)

2013-08-21 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746754#comment-13746754
 ] 

Francis Liu commented on HBASE-8165:


I see two proto files:

ErrorHandling.proto - Used by ForeignException class which seems to be only 
used by snapshot classes
hbase.proto - contains SnapshotDescription


 Move to Hadoop 2.1.0-beta from 2.0.x-alpha (WAS: Update our protobuf to 2.5 
 from 2.4.1)
 ---

 Key: HBASE-8165
 URL: https://issues.apache.org/jira/browse/HBASE-8165
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.0

 Attachments: 8165_minus_generated.txt, 8165_trunk7.txt, 
 8165_trunkv6.txt, 8165.txt, 8165v2.txt, 8165v3.txt, 8165v4.txt, 8165v5.txt, 
 8165v8.txt, 8615v9.txt, HBASE-8165-rebased.patch


 Update to new 2.5 pb.  Some speedups and a new PARSER idiom that bypasses 
 making a builder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: 0.95-trunk-rev2.patch

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, HBASE-7709.patch, HBASE-7709-rev1.patch, 
 HBASE-7709-rev2.patch, HBASE-7709-rev3.patch


  We just discovered the following scenario:
 # Cluster A and B are set up in master/master replication
 # By accident we had Cluster C replicate to Cluster A.
 Now all edits originating from C will be bouncing between A and B. Forever!
 The reason is that when the edits come in from C the cluster ID is already set 
 and won't be reset.
 We have a couple of options here:
 # Optionally only support master/master (not cycles of more than two 
 clusters). In that case we can always reset the cluster ID in the 
 ReplicationSource. That means that now cycles > 2 will have the data cycle 
 forever. This is the only option that requires no changes in the HLog format.
 # Instead of a single cluster id per edit, maintain an (unordered) set of 
 cluster ids that have seen this edit. Then in ReplicationSource we drop any 
 edit that the sink has seen already. This is the cleanest approach, but it 
 might need a lot of data stored per edit if there are many clusters involved.
 # Maintain a configurable counter of the maximum cycle size we want to 
 support. Could default to 10 (or maybe even less). Store a hop-count in the 
 WAL and the ReplicationSource increases that hop-count on each hop. If we're 
 over the max, just drop the edit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9274) After HBASE-8408 applied, temporary test files are being left in /tmp/hbase-user

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746764#comment-13746764
 ] 

stack commented on HBASE-9274:
--

Should we shut down the HTU constructor?  Or at least head in that direction?

-  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  private static final HBaseTestingUtility UTIL = 
HBaseTestingUtility.createLocalHTU();

+1 on commit to trunk and 0.95.



 After HBASE-8408 applied, temporary test files are being left in 
 /tmp/hbase-user
 --

 Key: HBASE-9274
 URL: https://issues.apache.org/jira/browse/HBASE-9274
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.95.3

 Attachments: hbase-9274.patch, hbase-9274.v2.patch


 Some of our jenkins CI machines have been failing out with /tmp/hbase-user
 This can be shown by executing the following command before and after the 
 namespaces patch.
 {code}
 # several tests are dropping stuff in the archive dir, just pick one
 mvn clean test -Dtest=TestEncodedSeekers
 find /tmp/hbase-jon/hbase/
 {code}
 /tmp/hbase-jon after test run before patch applied
 {code}
 $ find /tmp/hbase-jon/
 /tmp/hbase-jon/
 /tmp/hbase-jon/local
 /tmp/hbase-jon/local/jars
 {code}
 after namespaces patch applied
 {code}
 /tmp/hbase-jon/
 /tmp/hbase-jon/local
 /tmp/hbase-jon/local/jars
 /tmp/hbase-jon/hbase
 /tmp/hbase-jon/hbase/.archive
 /tmp/hbase-jon/hbase/.archive/.data
 /tmp/hbase-jon/hbase/.archive/.data/default
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/8e76a87806b94483851158366f7d5c17
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/494c07dbf08940749696bb0f9278401e
 /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/.8e76a87806b94483851158366f7d5c17.crc
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9241) Add cp hook before initialize variable set to true in master initialization

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746766#comment-13746766
 ] 

stack commented on HBASE-9241:
--

Is the new method being interjected at the right location?

 status.markComplete("Initialization successful");
 LOG.info("Master has completed initialization");
+if (this.cpHost != null) {
+  try {
+this.cpHost.preMasterInitialization();

The above says initialization completed and then we call 
preMasterInitialization?  I'd think this method would be called before the 
master initialization, given its name.

 Add cp hook before initialize variable set to true in master initialization
 --

 Key: HBASE-9241
 URL: https://issues.apache.org/jira/browse/HBASE-9241
 Project: HBase
  Issue Type: Sub-task
  Components: master
Reporter: rajeshbabu
Assignee: rajeshbabu
 Attachments: HBASE-9241.patch


 This hook helps in the following cases:
 1) When we are creating an indexed table, there is a chance that the master 
 can go down after successful creation of the user table but before index 
 table creation has started. This hook helps to detect such cases and create 
 the missing index table.
 2) If there are mismatches in the colocation of user and index regions, we 
 can run the balancer. (A sketch of how a coprocessor might use this hook is 
 shown below.)
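
A hedged sketch of how a master coprocessor might use the proposed hook, assuming it lands on MasterObserver with a signature like the one below; the hook does not exist yet as of this discussion, and the repair logic is only a placeholder:
{code}
import java.io.IOException;

import org.apache.hadoop.hbase.coprocessor.BaseMasterObserver;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;

public class IndexRepairObserver extends BaseMasterObserver {

  /**
   * Assumed signature of the proposed hook; the name follows the patch
   * under discussion but is not part of any released MasterObserver yet.
   */
  public void preMasterInitialization(
      ObserverContext<MasterCoprocessorEnvironment> ctx) throws IOException {
    // Placeholder: check that every indexed user table also has its index
    // table, recreate a missing index table, or trigger the balancer if
    // user and index regions are no longer colocated.
  }
}
{code}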

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

