[jira] [Commented] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956168#comment-13956168
 ] 

ramkrishna.s.vasudevan commented on HBASE-10531:


The test failures 
{code}
org.apache.hadoop.hbase.master.TestMasterFailover.testSimpleMasterFailover
org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait.testBatchPut
{code}
did not occur locally in my test runs, nor in the HadoopQA run.  The 
subsequent runs do not show these failures either. JFYI.

> Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
> 
>
> Key: HBASE-10531
> URL: https://issues.apache.org/jira/browse/HBASE-10531
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.99.0
>
> Attachments: HBASE-10531.patch, HBASE-10531_1.patch, 
> HBASE-10531_12.patch, HBASE-10531_13.patch, HBASE-10531_13.patch, 
> HBASE-10531_2.patch, HBASE-10531_3.patch, HBASE-10531_4.patch, 
> HBASE-10531_5.patch, HBASE-10531_6.patch, HBASE-10531_7.patch, 
> HBASE-10531_8.patch, HBASE-10531_9.patch
>
>
> Currently the byte[] key passed to HFileScanner.seekTo and 
> HFileScanner.reseekTo is a combination of row, cf, qualifier, type and ts, and 
> the caller forms it using kv.getBuffer, which is actually deprecated.  
> So see how this can be achieved once kv.getBuffer is removed.
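A minimal sketch of the caller pattern described above, plus the kind of Cell-based call this issue points toward (the Cell-accepting overload is an assumption for illustration, not taken from the attached patches):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;

public class SeekToSketch {
  // Deprecated pattern described above: slice the serialized key out of the KeyValue's
  // backing array and hand the raw byte[] to the scanner.
  static int seekOld(HFileScanner scanner, KeyValue kv) throws IOException {
    return scanner.seekTo(kv.getBuffer(), kv.getKeyOffset(), kv.getKeyLength());
  }

  // Possible direction once getBuffer is gone: pass the cell itself so the caller never
  // has to materialize a flat key byte[] (assumes a Cell-accepting seekTo overload).
  static int seekNew(HFileScanner scanner, KeyValue kv) throws IOException {
    return scanner.seekTo(kv);
  }
}
{code}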



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956165#comment-13956165
 ] 

Hudson commented on HBASE-10848:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #242 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/242/])
HBASE-10848 Filter SingleColumnValueFilter combined with NullComparator does 
not work (Fabien) (tedyu: rev 1583510)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/NullComparator.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java


> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: Fabien Le Gallo
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, 
> TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that do not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator (see the usage sketch at the end of this item).
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
> at 
> org.apache.hadoop.hbas
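A minimal, illustrative sketch of the filter combination described in this report (family and qualifier names here are placeholders, not taken from the report); it only shows the wiring of SingleColumnValueFilter with NullComparator, not the committed fix:

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.NullComparator;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class NullComparatorScanSketch {
  /** Builds a Scan that keeps only rows where placeholder column cf:q has a non-empty value. */
  public static Scan buildScan() {
    SingleColumnValueFilter filter = new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("q"),                // placeholder family/qualifier
        CompareFilter.CompareOp.NOT_EQUAL, new NullComparator());
    filter.setFilterIfMissing(true);  // drop rows that do not carry the column at all
    Scan scan = new Scan();
    scan.setFilter(filter);
    return scan;
  }
}
{code}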

[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956164#comment-13956164
 ] 

Hudson commented on HBASE-10848:


SUCCESS: Integrated in HBase-0.98 #258 (See 
[https://builds.apache.org/job/HBase-0.98/258/])
HBASE-10848 Filter SingleColumnValueFilter combined with NullComparator does 
not work (Fabien) (tedyu: rev 1583510)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/NullComparator.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java


> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: Fabien Le Gallo
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, 
> TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that do not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator.
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientP

[jira] [Commented] (HBASE-10881) Support reverse scan in thrift2

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956159#comment-13956159
 ] 

Hadoop QA commented on HBASE-10881:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12637993/HBASE-10881-trunk-v1.diff
  against trunk revision .
  ATTACHMENT ID: 12637993

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  private static final org.apache.thrift.protocol.TField 
REVERSED_FIELD_DESC = new org.apache.thrift.protocol.TField("reversed", 
org.apache.thrift.protocol.TType.BOOL, (short)11);
+  private _Fields optionals[] = 
{_Fields.START_ROW,_Fields.STOP_ROW,_Fields.COLUMNS,_Fields.CACHING,_Fields.MAX_VERSIONS,_Fields.TIME_RANGE,_Fields.FILTER_STRING,_Fields.BATCH_SIZE,_Fields.ATTRIBUTES,_Fields.AUTHORIZATIONS,_Fields.REVERSED};
+tmpMap.put(_Fields.REVERSED, new 
org.apache.thrift.meta_data.FieldMetaData("reversed", 
org.apache.thrift.TFieldRequirementType.OPTIONAL, 
+  org.apache.thrift.protocol.TList _list117 = new 
org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, 
iprot.readI32());
+  org.apache.thrift.protocol.TMap _map120 = new 
org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, 
org.apache.thrift.protocol.TType.STRING, iprot.readI32());

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestTableMapReduceBase.testMultiRegionTable(TestTableMapReduceBase.java:96)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9152//console

This message is automatically generated.

> Support reverse scan in thrift2
> ---
>
> Key: HBASE-10881
> URL: https://issues.apache.org/jira/browse/HBASE-10881
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Attachments: HBASE-10881-trunk-v1.diff
>
>
> Support reverse scan in thrift2.
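For context, a hypothetical client-side sketch of the new option, inferred from the generated field shown in the QA output above (field id 11, "reversed", bool); the setter name assumes standard Thrift Java codegen and is not confirmed from the patch:

{code}
import org.apache.hadoop.hbase.thrift2.generated.TScan;

public class ReverseScanSketch {
  public static TScan buildReverseScan() {
    TScan scan = new TScan();
    // Walk rows in descending order; accessor name assumed from normal Thrift codegen
    // for the optional bool field "reversed" added by this patch.
    scan.setReversed(true);
    return scan;
  }
}
{code}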



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956155#comment-13956155
 ] 

Mikhail Antonov commented on HBASE-10866:
-

[~lhofhansl], thanks for the feedback!

Yeah, I totally understand that this abstraction effort needs to be finished 
everywhere (and also that it's quite a bit of work, affecting many 
places in the codebase). The piecemeal approach would allow easier 
review and feedback on patches to ensure they're in line with the goal (as 
[~stack] and [~cos] noted), and better work structuring and parallelization.

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, 
> HBASE-10866.patch, HBaseConsensus.pdf
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out ZooKeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to work 
> on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956153#comment-13956153
 ] 

Hudson commented on HBASE-10867:


FAILURE: Integrated in HBase-TRUNK #5053 (See 
[https://builds.apache.org/job/HBase-TRUNK/5053/])
HBASE-10867 TestRegionPlacement#testRegionPlacement occasionally fails (tedyu: 
rev 1583515)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java


> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10867-v1.txt, 10867-v2.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10.
> So when 10 was used in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get 
> ArrayIndexOutOfBoundsException.
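A minimal sketch of the bound the fix has to respect (illustrative only, not the attached patch): the kill index must stay below the number of region servers started by the mini cluster.

{code}
import java.util.Random;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.regionserver.HRegionServer;

public class KillIndexSketch {
  private static final int SLAVES = 10;  // matches startMiniCluster(SLAVES) above

  static HRegionServer pickServerToKill(HBaseTestingUtility testUtil) {
    Random random = new Random(System.currentTimeMillis());
    int killIndex = random.nextInt(SLAVES);  // valid indexes are 0..SLAVES-1, never 10
    return testUtil.getHBaseCluster().getRegionServer(killIndex);
  }
}
{code}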



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956152#comment-13956152
 ] 

Hudson commented on HBASE-10848:


FAILURE: Integrated in HBase-TRUNK #5053 (See 
[https://builds.apache.org/job/HBase-TRUNK/5053/])
HBASE-10848 Filter SingleColumnValueFilter combined with NullComparator does 
not work (Fabien) (tedyu: rev 1583511)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/NullComparator.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java


> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: Fabien Le Gallo
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, 
> TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that do not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator.
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$

[jira] [Commented] (HBASE-10855) Enable hfilev3 by default

2014-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956150#comment-13956150
 ] 

stack commented on HBASE-10855:
---

Let me look at it, [~ram_krish].  I have a little rig here so I can dig in.  Will bug 
you fellows if I can't figure it out.  It seems like a good test, but a little fragile 
anyways.

> Enable hfilev3 by default
> -
>
> Key: HBASE-10855
> URL: https://issues.apache.org/jira/browse/HBASE-10855
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt
>
>
> Distributed log replay needs this.  Should be on by default in 1.0/0.99.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10855) Enable hfilev3 by default

2014-03-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956147#comment-13956147
 ] 

ramkrishna.s.vasudevan commented on HBASE-10855:


For me too it passes locally after changing the version to V3.
But just from looking at the test case, one guess would be that because we use V3, once we 
flush we would at least write the tags length (of type short).
Here we only make two puts for two different families.  The size of the HFile may be 
bigger by 2 bytes now. Could that be the reason here?
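A minimal sketch of that size argument (the layout shown is an assumption for illustration, not measured from the failing test):

{code}
import org.apache.hadoop.hbase.util.Bytes;

public class CellSizeSketch {
  /**
   * Rough serialized size of one cell: <key length:int><value length:int><key><value>,
   * plus, when tags are written (HFile v3), an extra <tags length:short><tags> suffix.
   */
  static int cellSize(int keyLength, int valueLength, int tagsLength, boolean withTags) {
    int size = 2 * Bytes.SIZEOF_INT + keyLength + valueLength;
    if (withTags) {
      size += Bytes.SIZEOF_SHORT + tagsLength;  // 2 extra bytes per cell even with no tags
    }
    return size;
  }
}
{code}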


> Enable hfilev3 by default
> -
>
> Key: HBASE-10855
> URL: https://issues.apache.org/jira/browse/HBASE-10855
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt
>
>
> Distributed log replay needs this.  Should be on by default in 1.0/0.99.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956143#comment-13956143
 ] 

Lars Hofhansl commented on HBASE-10866:
---

Hi [~mantonov], 
* Nice writeup and separation of the uses of ZK.
* I agree we can treat permanent and transient shared state the same. In both cases 
it is state shared between servers. When the abstraction is done we can even 
have an implementation that stores that state in an HBase table.
* Let's do this piecemeal (as you suggest). But note that if we do not finish 
this everywhere we're worse off than before - more classes that do the same 
thing and have to be modified together.
* o.a.h.h.regionserver.consensus and o.a.h.h.master.consensus seem fine to me.


> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, 
> HBASE-10866.patch, HBaseConsensus.pdf
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out ZooKeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to work 
> on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10855) Enable hfilev3 by default

2014-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956136#comment-13956136
 ] 

stack commented on HBASE-10855:
---

So the TestRegionPlacement failure is another issue, HBASE-10867.  The failures in 
TestHRegion and in TestHRegionBusyWait are actually in the same place, in 
testgetHDFSBlocksDistribution.  This test was added way back by HBASE-4114 to 
get more metrics on block locality.  I can get it to fail on occasion locally.  
Let me take a look.  [~ram_krish] and/or [~anoop.hbase], any idea why locality 
stats would be different when v3 is enabled?  Thanks.

> Enable hfilev3 by default
> -
>
> Key: HBASE-10855
> URL: https://issues.apache.org/jira/browse/HBASE-10855
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt
>
>
> Distributed log replay needs this.  Should be on by default in 1.0/0.99.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956135#comment-13956135
 ] 

Mikhail Antonov commented on HBASE-10866:
-

Regarding essential state being kept in ZK - I guess you're talking about 
HBASE-7767 and such? I see the following possible approaches (looking from the 
position of abstracting ZK usage):

 - while doing the refactoring, include this permanent shared state as part of the 
first use case (transient shared state), as described in the writeup. That is 
conceptually wrong, but would just be a simple refactoring of the ZK usage.
 - create another hbase meta table to keep this information. That is probably the 
right approach, but would be a much bigger change?

On the codebase structure - this is how I see it:

 - both the master and the region server will need access to consensus operations. In 
the log splitting example, on the master side there's SplitLogManager, which 
orchestrates the tasks to replay WAL files by creating znodes.
 - I'm thinking that at least for now, to make the refactoring process more 
straightforward and manageable, it may be better to keep the consensus parts of 
both sides (master and RS) separated in packages like 
o.a.h.h.regionserver.consensus (.impl for implementations) and 
o.a.h.h.master.consensus (.impl for implementations), and maybe after a series 
of patches, reconcile the common parts into o.a.h.h.consensus. Reasons: the consensus 
API-related parts on the master and regionserver sides can be worked on 
independently in many cases, and while the refactoring is in progress we can 
refactor the region part first, then the master part. Also, in many cases the logic 
which needs to be abstracted on the region side and the master side is specific to 
this type of cluster node.

What do you think? Maybe I'm missing something on the packaging conventions 
for HBase though.





> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, 
> HBASE-10866.patch, HBaseConsensus.pdf
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out ZooKeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to work 
> on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10855) Enable hfilev3 by default

2014-03-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956116#comment-13956116
 ] 

Anoop Sam John commented on HBASE-10855:


The test failures seem unrelated. They pass locally.
+1

> Enable hfilev3 by default
> -
>
> Key: HBASE-10855
> URL: https://issues.apache.org/jira/browse/HBASE-10855
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt
>
>
> Distributed log replay needs this.  Should be on by default in 1.0/0.99.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10879) user_permission shell command on namespace doesn't work

2014-03-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956114#comment-13956114
 ] 

ramkrishna.s.vasudevan commented on HBASE-10879:


+1 on patch.

> user_permission shell command on namespace doesn't work
> ---
>
> Key: HBASE-10879
> URL: https://issues.apache.org/jira/browse/HBASE-10879
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10879-v1.txt, 10879-v2.txt
>
>
> Currently the user_permission command on a namespace, e.g.
> {code}
> user_permission '@ns'
> {code}
> would result in the following exception:
> {code}
> Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no 
> method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.  
> AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
>  on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> ERROR: no method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.
>proxies.ArrayJavaProxy) on 
> Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission'
>/usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in 
> `command'
>org/jruby/RubyKernel.java:2109:in `send'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:91:in 
> `translate_hbase_exceptions'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command'
>/usr/lib/hbase/lib/ruby/shell.rb:119:in `command'
>(eval):2:in `user_permission'
>(hbase):1:in `evaluate'
>org/jruby/RubyKernel.java:1112:in `eval'
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956109#comment-13956109
 ] 

stack commented on HBASE-10867:
---

[~ted_yu] Why not apply this to 0.98, 0.96?

> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10867-v1.txt, 10867-v2.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10.
> So when 10 was used in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get 
> ArrayIndexOutOfBoundsException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10879) user_permission shell command on namespace doesn't work

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956106#comment-13956106
 ] 

Hadoop QA commented on HBASE-10879:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637990/10879-v2.txt
  against trunk revision .
  ATTACHMENT ID: 12637990

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestMasterNoCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9151//console

This message is automatically generated.

> user_permission shell command on namespace doesn't work
> ---
>
> Key: HBASE-10879
> URL: https://issues.apache.org/jira/browse/HBASE-10879
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10879-v1.txt, 10879-v2.txt
>
>
> Currently the user_permission command on a namespace, e.g.
> {code}
> user_permission '@ns'
> {code}
> would result in the following exception:
> {code}
> Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no 
> method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.  
> AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
>  on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> ERROR: no method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.
>proxies.ArrayJavaProxy) on 
> Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission'
>/usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in 
> `command'
>org/jruby/RubyKernel.java:2109:in `send'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:91:in 
> `translate_hbase_exceptions'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command'
>/usr/lib/hbase/lib/ruby/shell.rb:119:in `command'
>(eval):2:in `user_permission'
>(hbase):1:in `evaluate'
>

[jira] [Commented] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956097#comment-13956097
 ] 

Hadoop QA commented on HBASE-10866:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637986/HBASE-10866.patch
  against trunk revision .
  ATTACHMENT ID: 12637986

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9150//console

This message is automatically generated.

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, 
> HBASE-10866.patch, HBaseConsensus.pdf
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out ZooKeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to work 
> on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956094#comment-13956094
 ] 

stack commented on HBASE-10867:
---

Hmm... Looks like Ted already committed.  This is what I'd remove:

{code}
Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java
===
--- 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java
  (revision 1583526)
+++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java
  (working copy)
@@ -289,8 +289,6 @@

   private void killRandomServerAndVerifyAssignment()
   throws IOException, InterruptedException, KeeperException {
-ClusterStatus oldStatus = TEST_UTIL.getHBaseCluster().getClusterStatus();
-ServerName servers[] = oldStatus.getServers().toArray(new ServerName[10]);
 ServerName serverToKill = null;
 int killIndex = 0;
 Random random = new Random(System.currentTimeMillis());
{code}

... but this test does not behave well for me locally.  I am afraid to touch it.

> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10867-v1.txt, 10867-v2.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10.
> So when 10 was used in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get 
> ArrayIndexOutOfBoundsException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10855) Enable hfilev3 by default

2014-03-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956093#comment-13956093
 ] 

ramkrishna.s.vasudevan commented on HBASE-10855:


+1. LGTM.

> Enable hfilev3 by default
> -
>
> Key: HBASE-10855
> URL: https://issues.apache.org/jira/browse/HBASE-10855
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt
>
>
> Distributed log replay needs this.  Should be on by default in 1.0/0.99.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10851) Wait for regionservers to join the cluster

2014-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956092#comment-13956092
 ] 

stack commented on HBASE-10851:
---

The above seems reasonable, but how do you know whether the backup masters are carrying 
regions or not?

{code}
+  if (backupMasters != null) {
+// Exclude all backup masters
+count -= backupMasters.size();
+  }
{code}

By default they carry regions?  What if they are configured not to carry 
regions?

> Wait for regionservers to join the cluster
> --
>
> Key: HBASE-10851
> URL: https://issues.apache.org/jira/browse/HBASE-10851
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Critical
> Attachments: hbase-10851.patch
>
>
> With HBASE-10569, if regionservers are started a while after the master, all 
> regions will be assigned to the master.  That may not be what users expect.
> A work-around is to always start regionservers before masters.
> I was wondering if the master can wait a little for other regionservers to 
> join.
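For context, a hedged sketch of existing related machinery (whether the attached patch builds on it is not shown here): the hbase.master.wait.on.regionservers.* settings already let the master wait for region servers to check in at startup.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MasterWaitSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Wait for at least this many region servers to check in before the master
    // finishes startup (and thus before it starts assigning regions).
    conf.setInt("hbase.master.wait.on.regionservers.mintostart", 3);
    // Upper bound on how long to keep waiting, in milliseconds.
    conf.setLong("hbase.master.wait.on.regionservers.timeout", 30000L);
    return conf;
  }
}
{code}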



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-10878) Operator | for visibility label doesn't work

2014-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-10878.


Resolution: Not a Problem

> Operator | for visibility label doesn't work
> 
>
> Key: HBASE-10878
> URL: https://issues.apache.org/jira/browse/HBASE-10878
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> I used setup similar to that from HBASE-10863, with fix for HBASE-10863 :
> {code}
> hbase(main):003:0> scan 'hbase:labels'
> ROW  COLUMN+CELL
>  \x00\x00\x00\x01column=f:\x00, 
> timestamp=1395944796030, value=system
>  \x00\x00\x00\x01column=f:hbase, 
> timestamp=1395944796030, value=
>  \x00\x00\x00\x02column=f:\x00, 
> timestamp=1395951045442, value=TOP_SECRET
>  \x00\x00\x00\x02column=f:hrt_qa, 
> timestamp=1395951229682, value=
>  \x00\x00\x00\x02column=f:hrt_qa1, 
> timestamp=1395951270297, value=
>  \x00\x00\x00\x02column=f:mapred, 
> timestamp=1395958442326, value=
>  \x00\x00\x00\x03column=f:\x00, 
> timestamp=1395952069731, value=TOP_TOP_SECRET
>  \x00\x00\x00\x03column=f:mapred, 
> timestamp=1395956032141, value=
>  \x00\x00\x00\x04column=f:\x00, 
> timestamp=1395971516605, value=A
>  \x00\x00\x00\x04column=f:oozie, 
> timestamp=1395971647859, value=
>  \x00\x00\x00\x05column=f:\x00, 
> timestamp=1395971520327, value=B
> 5 row(s) in 0.0580 seconds
> {code}
> I did the following as user oozie using hbase shell:
> {code}
> hbase(main):001:0> scan 'tb', { AUTHORIZATIONS => ['A']}
> ROW  COLUMN+CELL
>  row column=f1:q, 
> timestamp=1395971660859, value=v1
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
>  row3column=f1:q, 
> timestamp=1396067477702, value=v3
> 3 row(s) in 0.2050 seconds
> hbase(main):002:0> scan 'tb', { AUTHORIZATIONS => ['A|B']}
> ROW  COLUMN+CELL
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
> 1 row(s) in 0.0150 seconds
> hbase(main):003:0> scan 'tb', { AUTHORIZATIONS => ['B|A']}
> ROW  COLUMN+CELL
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
> 1 row(s) in 0.0260 seconds
> {code}
> Rows 'row' and 'row3' were inserted with label 'A'.
> Row 'row2' was inserted without label.
> Row 'row1' was inserted with label 'B'.
> I would expect row1 to also be returned.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10878) Operator | for visibility label doesn't work

2014-03-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956082#comment-13956082
 ] 

ramkrishna.s.vasudevan commented on HBASE-10878:


Scans have to be done with individual labels, and that would ideally be the right 
behaviour because we are specifying which labels that user can see.  It is with 
Puts that we can say under which labels a cell is visible to a user, and so 
there the use of expressions makes sense.
Also, this is in line with Accumulo's behaviour. JFYI.
{code}
// user possesses both admin and system level access
Authorizations auths = new Authorizations("admin", "system");

Scanner s = connector.createScanner("table", auths);
{code}
as in Accumulo's docs.
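On the HBase side, a minimal illustrative sketch of the same split, reusing the label names from this issue (table, row and column names are placeholders): the expression goes on the Put as cell visibility, while the Scan carries plain authorization labels.

{code}
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.security.visibility.Authorizations;
import org.apache.hadoop.hbase.security.visibility.CellVisibility;
import org.apache.hadoop.hbase.util.Bytes;

public class VisibilityLabelSketch {
  public static void main(String[] args) throws Exception {
    // Write path: the visibility expression says this cell is visible under A or B.
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("f1"), Bytes.toBytes("q"), Bytes.toBytes("v1"));
    put.setCellVisibility(new CellVisibility("A|B"));

    // Read path: the scan lists the individual labels the user scans with.
    Scan scan = new Scan();
    scan.setAuthorizations(new Authorizations("A", "B"));
  }
}
{code}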

> Operator | for visibility label doesn't work
> 
>
> Key: HBASE-10878
> URL: https://issues.apache.org/jira/browse/HBASE-10878
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> I used setup similar to that from HBASE-10863, with fix for HBASE-10863 :
> {code}
> hbase(main):003:0> scan 'hbase:labels'
> ROW  COLUMN+CELL
>  \x00\x00\x00\x01column=f:\x00, 
> timestamp=1395944796030, value=system
>  \x00\x00\x00\x01column=f:hbase, 
> timestamp=1395944796030, value=
>  \x00\x00\x00\x02column=f:\x00, 
> timestamp=1395951045442, value=TOP_SECRET
>  \x00\x00\x00\x02column=f:hrt_qa, 
> timestamp=1395951229682, value=
>  \x00\x00\x00\x02column=f:hrt_qa1, 
> timestamp=1395951270297, value=
>  \x00\x00\x00\x02column=f:mapred, 
> timestamp=1395958442326, value=
>  \x00\x00\x00\x03column=f:\x00, 
> timestamp=1395952069731, value=TOP_TOP_SECRET
>  \x00\x00\x00\x03column=f:mapred, 
> timestamp=1395956032141, value=
>  \x00\x00\x00\x04column=f:\x00, 
> timestamp=1395971516605, value=A
>  \x00\x00\x00\x04column=f:oozie, 
> timestamp=1395971647859, value=
>  \x00\x00\x00\x05column=f:\x00, 
> timestamp=1395971520327, value=B
> 5 row(s) in 0.0580 seconds
> {code}
> I did the following as user oozie using hbase shell:
> {code}
> hbase(main):001:0> scan 'tb', { AUTHORIZATIONS => ['A']}
> ROW  COLUMN+CELL
>  row column=f1:q, 
> timestamp=1395971660859, value=v1
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
>  row3column=f1:q, 
> timestamp=1396067477702, value=v3
> 3 row(s) in 0.2050 seconds
> hbase(main):002:0> scan 'tb', { AUTHORIZATIONS => ['A|B']}
> ROW  COLUMN+CELL
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
> 1 row(s) in 0.0150 seconds
> hbase(main):003:0> scan 'tb', { AUTHORIZATIONS => ['B|A']}
> ROW  COLUMN+CELL
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
> 1 row(s) in 0.0260 seconds
> {code}
> Rows 'row' and 'row3' were inserted with label 'A'.
> Row 'row2' was inserted without label.
> Row 'row1' was inserted with label 'B'.
> I would expect row1 to also be returned.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956081#comment-13956081
 ] 

stack commented on HBASE-10867:
---

Just ran into this: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
 over in HBASE-10855

Replacing servers.length with SLAVES should work... If we do that though, then 
there is a bunch of stuff to remove.  Let me apply a patch that does what this 
patch does AND the cleanup.

> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10867-v1.txt, 10867-v2.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10.
> So when 10 was used in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get 
> ArrayIndexOutOfBoundsException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9345) Add support for specifying filters in scan

2014-03-31 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-9345:
-

Attachment: HBASE-9345_trunk.patch

> Add support for specifying filters in scan
> --
>
> Key: HBASE-9345
> URL: https://issues.apache.org/jira/browse/HBASE-9345
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Attachments: HBASE-9345_trunk.patch
>
>
> In the implementation of stateless scanner from HBase-9343, the support for 
> specifying filters is missing. This JIRA aims to implement support for filter 
> specification.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9345) Add support for specifying filters in scan

2014-03-31 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-9345:
-

Status: Patch Available  (was: Open)

> Add support for specifying filters in scan
> --
>
> Key: HBASE-9345
> URL: https://issues.apache.org/jira/browse/HBASE-9345
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Attachments: HBASE-9345_trunk.patch
>
>
> In the implementation of stateless scanner from HBase-9343, the support for 
> specifying filters is missing. This JIRA aims to implement support for filter 
> specification.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10881) Support reverse scan in thrift2

2014-03-31 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-10881:


Affects Version/s: 0.99.0
   Status: Patch Available  (was: Open)

> Support reverse scan in thrift2
> ---
>
> Key: HBASE-10881
> URL: https://issues.apache.org/jira/browse/HBASE-10881
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Attachments: HBASE-10881-trunk-v1.diff
>
>
> Support reverse scan in thrift2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10881) Support reverse scan in thrift2

2014-03-31 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-10881:


Attachment: HBASE-10881-trunk-v1.diff

Patch for trunk

> Support reverse scan in thrift2
> ---
>
> Key: HBASE-10881
> URL: https://issues.apache.org/jira/browse/HBASE-10881
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Attachments: HBASE-10881-trunk-v1.diff
>
>
> Support reverse scan in thrift2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956067#comment-13956067
 ] 

stack commented on HBASE-10866:
---

Writeup looks great.  It should have date, author and issue reference attached in 
case someone trips over it in the wild, but this is just a nit.  There is actually a 
fourth unfortunate use of zk that we'd rather not recall, but since you are 
making a list, I might as well let you know of it: we persist state into zk for 
a few cases; replication state and whether a table is disabled, to name two (these 
we need to undo).  The writeup is helpful ([~jxiang] -- you'd be interested).  
Regarding, say, RegionServerConsensus, where would such an interface live in the 
code base?  Is it only consensus among regionservers?  Would the Master need 
package access?  Thank you.



> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, 
> HBASE-10866.patch, HBaseConsensus.pdf
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to work 
> on other handlers in regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10879) user_permission shell command on namespace doesn't work

2014-03-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956063#comment-13956063
 ] 

Anoop Sam John commented on HBASE-10879:


V2  LGTM
I think we need a patch for 0.96 also.

> user_permission shell command on namespace doesn't work
> ---
>
> Key: HBASE-10879
> URL: https://issues.apache.org/jira/browse/HBASE-10879
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10879-v1.txt, 10879-v2.txt
>
>
> Currently user_permission command on namespace, e.g.
> {code}
> user_permission '@ns'
> {code}
> would result in the following exception:
> {code}
> Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no 
> method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.  
> AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
>  on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> ERROR: no method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.
>proxies.ArrayJavaProxy) on 
> Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission'
>/usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in 
> `command'
>org/jruby/RubyKernel.java:2109:in `send'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:91:in 
> `translate_hbase_exceptions'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command'
>/usr/lib/hbase/lib/ruby/shell.rb:119:in `command'
>(eval):2:in `user_permission'
>(hbase):1:in `evaluate'
>org/jruby/RubyKernel.java:1112:in `eval'
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10850) Unexpected behavior when using filter SingleColumnValueFilter

2014-03-31 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10850:
---

Attachment: TestWithMiniCluster.java

Attached is the same test class (but changed to use a mini cluster).

> Unexpected behavior when using filter SingleColumnValueFilter
> -
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: haosdent
>Priority: Critical
> Attachments: HBASE-10850-96.patch, HBASE-10850.patch, 
> HBASE-10850_V2.patch, HBaseSingleColumnValueFilterTest.java, 
> TestWithMiniCluster.java
>
>
> When using the filter SingleColumnValueFilter, and depending on the columns 
> specified in the scan (filtering column always specified), the results can be 
> different.
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending on how I specify the list of columns to add in the scan, the result 
> is different. Yet, all examples below should always return only the first row 
> (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("foo"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("bar"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> {code}
> This is a regression as it was working properly on HBase 0.92.
> You will find attached the unit tests reproducing the issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10850) Unexpected behavior when using filter SingleColumnValueFilter

2014-03-31 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10850:
---

Status: Open  (was: Patch Available)

> Unexpected behavior when using filter SingleColumnValueFilter
> -
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: haosdent
>Priority: Critical
> Attachments: HBASE-10850-96.patch, HBASE-10850.patch, 
> HBASE-10850_V2.patch, HBaseSingleColumnValueFilterTest.java
>
>
> When using the filter SingleColumnValueFilter, and depending on the columns 
> specified in the scan (filtering column always specified), the results can be 
> different.
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending on how I specify the list of columns to add in the scan, the result 
> is different. Yet, all examples below should always return only the first row 
> (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("foo"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("bar"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> {code}
> This is a regression as it was working properly on HBase 0.92.
> You will find attached the unit tests reproducing the issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956054#comment-13956054
 ] 

Hadoop QA commented on HBASE-10830:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637974/HBASE-10830.01.patch
  against trunk revision .
  ATTACHMENT ID: 12637974

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9149//console

This message is automatically generated.

> Integration test MR jobs attempt to load htrace jars from the wrong location
> 
>
> Key: HBASE-10830
> URL: https://issues.apache.org/jira/browse/HBASE-10830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10830.00.patch, HBASE-10830.01.patch
>
>
> The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
> htrace JAR from the local Maven cache but get confused and use a HDFS URI.
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec <<< 
> FAILURE!
> testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
>   Time elapsed: 0.488 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
>   

[jira] [Updated] (HBASE-10879) user_permission shell command on namespace doesn't work

2014-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10879:
---

Attachment: 10879-v2.txt

> user_permission shell command on namespace doesn't work
> ---
>
> Key: HBASE-10879
> URL: https://issues.apache.org/jira/browse/HBASE-10879
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10879-v1.txt, 10879-v2.txt
>
>
> Currently user_permission command on namespace, e.g.
> {code}
> user_permission '@ns'
> {code}
> would result in the following exception:
> {code}
> Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no 
> method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.  
> AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
>  on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> ERROR: no method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.
>proxies.ArrayJavaProxy) on 
> Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission'
>/usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in 
> `command'
>org/jruby/RubyKernel.java:2109:in `send'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:91:in 
> `translate_hbase_exceptions'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command'
>/usr/lib/hbase/lib/ruby/shell.rb:119:in `command'
>(eval):2:in `user_permission'
>(hbase):1:in `evaluate'
>org/jruby/RubyKernel.java:1112:in `eval'
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-10880) install hbase on hadoop NN,how to configure the value hbase hbase.rootdir?

2014-03-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-10880.
---

Resolution: Invalid

Please ask this kind of question on the mailing list, not in an issue.  
Thanks.  You don't seem to have followed the instructions here: 
http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html
You have hadoop1 and hadoop2 when it looks like it should be cluster1 and 
cluster2.  Try rereading the instructions.

> install hbase on hadoop NN,how to configure the value hbase hbase.rootdir?
> --
>
> Key: HBASE-10880
> URL: https://issues.apache.org/jira/browse/HBASE-10880
> Project: HBase
>  Issue Type: New Feature
>  Components: hadoop2
> Environment: 1.red hat
> 2.hadoop-2.3.0(NN)
> 3.hadoop1(namenode active),hadoop2(namenode standby),hadoop3 
> (datanode),hadoop4(datanode)
> 4.hbase-0.98.0-hadoop2-bin.tar.gz
>Reporter: bobsoft
>
> operating system: linux (red hat 5.4 i386)
> hadoop version:hadoop-2.3  (NN:two namenode)
> hbase version:hbase-0.98.0-hadoop2-bin
> hadoop-2.3 (NN)
> core-site.xml:
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/tmp</value>
>     <description>A base for other temporary directories.</description>
>   </property>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://cluster1</value>
>   </property>
> </configuration>
> hdfs-site.xml:
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>2</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>cluster1</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.cluster1</name>
>     <value>hadoop1,hadoop2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.cluster1.hadoop1</name>
>     <value>hadoop1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.cluster1.hadoop2</name>
>     <value>hadoop2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.cluster1.hadoop1</name>
>     <value>hadoop1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.cluster1.hadoop2</name>
>     <value>hadoop2:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.shared.edits.dir</name>
>     <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/cluster1</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.cluster1</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
>   <property>
>     <name>dfs.ha.fencing.methods</name>
>     <value>sshfence</value>
>   </property>
>   <property>
>     <name>dfs.ha.fencing.ssh.private-key-files</name>
>     <value>/root/.ssh/id_rsa</value>
>   </property>
>   <property>
>     <name>dfs.journalnode.edits.dir</name>
>     <value>/hadoop/tmp/journal</value>
>   </property>
>   <property>
>     <name>dfs.webhdfs.enabled</name>
>     <value>true</value>
>   </property>
> </configuration>
> 
> After testing, hadoop NN configuration is successful.
> http://images.cnitblog.com/i/48682/201403/281811569696016.jpg
> http://images.cnitblog.com/i/48682/201403/281812107976891.jpg
> http://images.cnitblog.com/i/48682/201403/281812270168836.jpg
> How should the value of hbase.rootdir be configured?
> I have tried "hdfs://hadoop1:9000/hbase", "hdfs://hadoop2:9000/hbase" and 
> "hdfs://cluster1", but HBase failed to start. Can you help me?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10878) Operator | for visibility label doesn't work

2014-03-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956049#comment-13956049
 ] 

Anoop Sam John commented on HBASE-10878:


When the user has auth for both A and B, you can pass both in AUTHORIZATIONS.
A cell with label A|B will get included in the result in this case as well.
So for scan AUTHORIZATIONS one cannot pass label expressions (as one can in the 
case of a Put).
Scan AUTHORIZATIONS can specify which label auths this scan is associated 
with, but not any expressions.
Am I making it clear, Ted?

> Operator | for visibility label doesn't work
> 
>
> Key: HBASE-10878
> URL: https://issues.apache.org/jira/browse/HBASE-10878
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> I used a setup similar to that from HBASE-10863, with the fix for HBASE-10863:
> {code}
> hbase(main):003:0> scan 'hbase:labels'
> ROW  COLUMN+CELL
>  \x00\x00\x00\x01column=f:\x00, 
> timestamp=1395944796030, value=system
>  \x00\x00\x00\x01column=f:hbase, 
> timestamp=1395944796030, value=
>  \x00\x00\x00\x02column=f:\x00, 
> timestamp=1395951045442, value=TOP_SECRET
>  \x00\x00\x00\x02column=f:hrt_qa, 
> timestamp=1395951229682, value=
>  \x00\x00\x00\x02column=f:hrt_qa1, 
> timestamp=1395951270297, value=
>  \x00\x00\x00\x02column=f:mapred, 
> timestamp=1395958442326, value=
>  \x00\x00\x00\x03column=f:\x00, 
> timestamp=1395952069731, value=TOP_TOP_SECRET
>  \x00\x00\x00\x03column=f:mapred, 
> timestamp=1395956032141, value=
>  \x00\x00\x00\x04column=f:\x00, 
> timestamp=1395971516605, value=A
>  \x00\x00\x00\x04column=f:oozie, 
> timestamp=1395971647859, value=
>  \x00\x00\x00\x05column=f:\x00, 
> timestamp=1395971520327, value=B
> 5 row(s) in 0.0580 seconds
> {code}
> I did the following as user oozie using hbase shell:
> {code}
> hbase(main):001:0> scan 'tb', { AUTHORIZATIONS => ['A']}
> ROW  COLUMN+CELL
>  row column=f1:q, 
> timestamp=1395971660859, value=v1
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
>  row3column=f1:q, 
> timestamp=1396067477702, value=v3
> 3 row(s) in 0.2050 seconds
> hbase(main):002:0> scan 'tb', { AUTHORIZATIONS => ['A|B']}
> ROW  COLUMN+CELL
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
> 1 row(s) in 0.0150 seconds
> hbase(main):003:0> scan 'tb', { AUTHORIZATIONS => ['B|A']}
> ROW  COLUMN+CELL
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
> 1 row(s) in 0.0260 seconds
> {code}
> Rows 'row' and 'row3' were inserted with label 'A'.
> Row 'row2' was inserted without label.
> Row 'row1' was inserted with label 'B'.
> I would expect row1 to also be returned.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10879) user_permission shell command on namespace doesn't work

2014-03-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956047#comment-13956047
 ] 

Anoop Sam John commented on HBASE-10879:


nit
{code}
+   * @param t namespace name
+   * @throws ServiceException
+   */
+  public static List<UserPermission> getUserPermissions(
+  AccessControlService.BlockingInterface protocol,
+  byte[] namespace) throws ServiceException {
{code}
Pls correct the javadoc
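A sketch of what the corrected javadoc might look like (parameter names are taken from 
the snippet above; the summary line is a guess, and this is not the committed change):
{code}
+  /**
+   * Gets the user permissions defined on a namespace.
+   * @param protocol the AccessControlService protocol to use
+   * @param namespace name of the namespace
+   * @return the user permissions defined on the namespace
+   * @throws ServiceException
+   */
+  public static List<UserPermission> getUserPermissions(
+      AccessControlService.BlockingInterface protocol,
+      byte[] namespace) throws ServiceException {
{code}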

> user_permission shell command on namespace doesn't work
> ---
>
> Key: HBASE-10879
> URL: https://issues.apache.org/jira/browse/HBASE-10879
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10879-v1.txt
>
>
> Currently user_permission command on namespace, e.g.
> {code}
> user_permission '@ns'
> {code}
> would result in the following exception:
> {code}
> Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no 
> method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.  
> AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
>  on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> ERROR: no method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.
>proxies.ArrayJavaProxy) on 
> Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission'
>/usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in 
> `command'
>org/jruby/RubyKernel.java:2109:in `send'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:91:in 
> `translate_hbase_exceptions'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command'
>/usr/lib/hbase/lib/ruby/shell.rb:119:in `command'
>(eval):2:in `user_permission'
>(hbase):1:in `evaluate'
>org/jruby/RubyKernel.java:1112:in `eval'
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10882) Bulkload process hangs on regions randomly and finally throws RegionTooBusyException

2014-03-31 Thread Victor Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor Xu updated HBASE-10882:
--

Attachment: jstack_5105.log

Add jstack output file of the hanging region.

> Bulkload process hangs on regions randomly and finally throws 
> RegionTooBusyException
> 
>
> Key: HBASE-10882
> URL: https://issues.apache.org/jira/browse/HBASE-10882
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.94.10
> Environment: rhel 5.6, jdk1.7.0_45, hadoop-2.2.0-cdh5.0.0
>Reporter: Victor Xu
> Attachments: jstack_5105.log
>
>
> I came across the problem in the early morning several days ago. It happened 
> when I used the hadoop completebulkload command to bulk load some hdfs files into 
> an hbase table. Several regions hung, and after retrying three times they all 
> threw RegionTooBusyExceptions. Fortunately, I caught one of the exceptional 
> region’s HRegionServer process’s jstack info just in time.
> I found that the bulkload process was waiting for a write lock:
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
> The lock id is 0x0004054ecbf0.
> In the meantime, many other Get/Scan operations were also waiting for the 
> same lock id. And, of course, they were waiting for the read lock:
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
> The most ridiculous thing is NO ONE OWNED THE LOCK! I searched the jstack 
> output carefully, but could not find any thread that claimed to own the lock.
> When I restarted the bulk load process, it failed at different regions but with 
> the same RegionTooBusyExceptions. 
> I guess maybe the region was doing some compactions at that time and owned 
> the lock, but I couldn’t find compaction info in the hbase logs.
> Finally, after several days’ hard work, the only temporary solution to this 
> problem I found was TRIGGERING A MAJOR COMPACTION BEFORE THE BULKLOAD.
> So which process owned the lock? Has anyone come across the same problem 
> before?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10878) Operator | for visibility label doesn't work

2014-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956043#comment-13956043
 ] 

Ted Yu commented on HBASE-10878:


bq. pass either one of them alone

That would be inconvenient, right?
Suppose there are more than two labels; the user would need to issue multiple queries 
and combine the results.

> Operator | for visibility label doesn't work
> 
>
> Key: HBASE-10878
> URL: https://issues.apache.org/jira/browse/HBASE-10878
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> I used a setup similar to that from HBASE-10863, with the fix for HBASE-10863:
> {code}
> hbase(main):003:0> scan 'hbase:labels'
> ROW  COLUMN+CELL
>  \x00\x00\x00\x01column=f:\x00, 
> timestamp=1395944796030, value=system
>  \x00\x00\x00\x01column=f:hbase, 
> timestamp=1395944796030, value=
>  \x00\x00\x00\x02column=f:\x00, 
> timestamp=1395951045442, value=TOP_SECRET
>  \x00\x00\x00\x02column=f:hrt_qa, 
> timestamp=1395951229682, value=
>  \x00\x00\x00\x02column=f:hrt_qa1, 
> timestamp=1395951270297, value=
>  \x00\x00\x00\x02column=f:mapred, 
> timestamp=1395958442326, value=
>  \x00\x00\x00\x03column=f:\x00, 
> timestamp=1395952069731, value=TOP_TOP_SECRET
>  \x00\x00\x00\x03column=f:mapred, 
> timestamp=1395956032141, value=
>  \x00\x00\x00\x04column=f:\x00, 
> timestamp=1395971516605, value=A
>  \x00\x00\x00\x04column=f:oozie, 
> timestamp=1395971647859, value=
>  \x00\x00\x00\x05column=f:\x00, 
> timestamp=1395971520327, value=B
> 5 row(s) in 0.0580 seconds
> {code}
> I did the following as user oozie using hbase shell:
> {code}
> hbase(main):001:0> scan 'tb', { AUTHORIZATIONS => ['A']}
> ROW  COLUMN+CELL
>  row column=f1:q, 
> timestamp=1395971660859, value=v1
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
>  row3column=f1:q, 
> timestamp=1396067477702, value=v3
> 3 row(s) in 0.2050 seconds
> hbase(main):002:0> scan 'tb', { AUTHORIZATIONS => ['A|B']}
> ROW  COLUMN+CELL
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
> 1 row(s) in 0.0150 seconds
> hbase(main):003:0> scan 'tb', { AUTHORIZATIONS => ['B|A']}
> ROW  COLUMN+CELL
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
> 1 row(s) in 0.0260 seconds
> {code}
> Rows 'row' and 'row3' were inserted with label 'A'.
> Row 'row2' was inserted without label.
> Row 'row1' was inserted with label 'B'.
> I would expect row1 to also be returned.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10882) Bulkload process hangs on regions randomly and finally throws RegionTooBusyException

2014-03-31 Thread Victor Xu (JIRA)
Victor Xu created HBASE-10882:
-

 Summary: Bulkload process hangs on regions randomly and finally 
throws RegionTooBusyException
 Key: HBASE-10882
 URL: https://issues.apache.org/jira/browse/HBASE-10882
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.10
 Environment: rhel 5.6, jdk1.7.0_45, hadoop-2.2.0-cdh5.0.0
Reporter: Victor Xu


I came across the problem in the early morning several days ago. It happened 
when I used the hadoop completebulkload command to bulk load some hdfs files into 
an hbase table. Several regions hung, and after retrying three times they all threw 
RegionTooBusyExceptions. Fortunately, I caught one of the exceptional region’s 
HRegionServer process’s jstack info just in time.
I found that the bulkload process was waiting for a write lock:
at 
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
The lock id is 0x0004054ecbf0.
In the meantime, many other Get/Scan operations were also waiting for the same 
lock id. And, of course, they were waiting for the read lock:
at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
The most ridiculous thing is NO ONE OWNED THE LOCK! I searched the jstack 
output carefully, but could not find any thread that claimed to own the lock.
When I restarted the bulk load process, it failed at different regions but with 
the same RegionTooBusyExceptions. 
I guess maybe the region was doing some compactions at that time and owned the 
lock, but I couldn’t find compaction info in the hbase logs.
Finally, after several days’ hard work, the only temporary solution to this 
problem I found was TRIGGERING A MAJOR COMPACTION BEFORE THE BULKLOAD.
So which process owned the lock? Has anyone come across the same problem before?
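
For context, the behaviour described above is consistent with the generic bounded-wait 
write-lock pattern sketched below (this is only an illustration of the pattern, not 
HBase's actual region-lock code): the writer gives up after a timeout and surfaces a 
"too busy" style error, while readers contend for the same ReentrantReadWriteLock.
{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class BusyLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  /** Writer side: wait a bounded time for the write lock, then give up with an error. */
  public void bulkLoadLike() throws Exception {
    if (!lock.writeLock().tryLock(60, TimeUnit.SECONDS)) {
      throw new Exception("region too busy: could not acquire write lock");
    }
    try {
      // ... apply the bulk-loaded files ...
    } finally {
      lock.writeLock().unlock();
    }
  }

  /** Reader side: Gets/Scans take the read lock for the duration of the read. */
  public void getOrScanLike() throws InterruptedException {
    lock.readLock().lockInterruptibly();
    try {
      // ... read ...
    } finally {
      lock.readLock().unlock();
    }
  }
}
{code}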



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10878) Operator | for visibility label doesn't work

2014-03-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956040#comment-13956040
 ] 

Anoop Sam John commented on HBASE-10878:


scan 'tb', { AUTHORIZATIONS => ['A|B']}
You cannot pass a label expression in AUTHORIZATIONS.  When the user doing the 
scan has authorizations for labels A and B and wants to use both of them, 
pass AUTHORIZATIONS => ['A', 'B'].
When the intent is A | B, passing either one of them alone in AUTHORIZATIONS is enough.

> Operator | for visibility label doesn't work
> 
>
> Key: HBASE-10878
> URL: https://issues.apache.org/jira/browse/HBASE-10878
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> I used a setup similar to that from HBASE-10863, with the fix for HBASE-10863:
> {code}
> hbase(main):003:0> scan 'hbase:labels'
> ROW  COLUMN+CELL
>  \x00\x00\x00\x01column=f:\x00, 
> timestamp=1395944796030, value=system
>  \x00\x00\x00\x01column=f:hbase, 
> timestamp=1395944796030, value=
>  \x00\x00\x00\x02column=f:\x00, 
> timestamp=1395951045442, value=TOP_SECRET
>  \x00\x00\x00\x02column=f:hrt_qa, 
> timestamp=1395951229682, value=
>  \x00\x00\x00\x02column=f:hrt_qa1, 
> timestamp=1395951270297, value=
>  \x00\x00\x00\x02column=f:mapred, 
> timestamp=1395958442326, value=
>  \x00\x00\x00\x03column=f:\x00, 
> timestamp=1395952069731, value=TOP_TOP_SECRET
>  \x00\x00\x00\x03column=f:mapred, 
> timestamp=1395956032141, value=
>  \x00\x00\x00\x04column=f:\x00, 
> timestamp=1395971516605, value=A
>  \x00\x00\x00\x04column=f:oozie, 
> timestamp=1395971647859, value=
>  \x00\x00\x00\x05column=f:\x00, 
> timestamp=1395971520327, value=B
> 5 row(s) in 0.0580 seconds
> {code}
> I did the following as user oozie using hbase shell:
> {code}
> hbase(main):001:0> scan 'tb', { AUTHORIZATIONS => ['A']}
> ROW  COLUMN+CELL
>  row column=f1:q, 
> timestamp=1395971660859, value=v1
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
>  row3column=f1:q, 
> timestamp=1396067477702, value=v3
> 3 row(s) in 0.2050 seconds
> hbase(main):002:0> scan 'tb', { AUTHORIZATIONS => ['A|B']}
> ROW  COLUMN+CELL
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
> 1 row(s) in 0.0150 seconds
> hbase(main):003:0> scan 'tb', { AUTHORIZATIONS => ['B|A']}
> ROW  COLUMN+CELL
>  row2column=f1:q, 
> timestamp=1395972271343, value=v2
> 1 row(s) in 0.0260 seconds
> {code}
> Rows 'row' and 'row3' were inserted with label 'A'.
> Row 'row2' was inserted without label.
> Row 'row1' was inserted with label 'B'.
> I would expect row1 to also be returned.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Attachment: HBASE-10866.patch

updated patch

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, 
> HBASE-10866.patch, HBaseConsensus.pdf
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to work 
> on other handlers in regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Status: Patch Available  (was: Open)

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, 
> HBASE-10866.patch, HBaseConsensus.pdf
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to work 
> on other handlers in regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Status: Open  (was: Patch Available)

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, 
> HBASE-10866.patch, HBaseConsensus.pdf
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to work 
> on other handlers in regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10880) install hbase on hadoop NN,how to configure the value hbase hbase.rootdir?

2014-03-31 Thread bobsoft (JIRA)
bobsoft created HBASE-10880:
---

 Summary: install hbase on hadoop NN,how to configure the value 
hbase hbase.rootdir?
 Key: HBASE-10880
 URL: https://issues.apache.org/jira/browse/HBASE-10880
 Project: HBase
  Issue Type: New Feature
  Components: hadoop2
 Environment: 1.red hat
2.hadoop-2.3.0(NN)
3.hadoop1(namenode active),hadoop2(namenode standby),hadoop3 
(datanode),hadoop4(datanode)
4.hbase-0.98.0-hadoop2-bin.tar.gz
Reporter: bobsoft


operating system: linux (red hat 5.4 i386)
hadoop version:hadoop-2.3  (NN:two namenode)
hbase version:hbase-0.98.0-hadoop2-bin

hadoop-2.3 (NN)
core-site.xml:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://cluster1</value>
  </property>
</configuration>

hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>cluster1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.cluster1</name>
    <value>hadoop1,hadoop2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster1.hadoop1</name>
    <value>hadoop1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster1.hadoop2</name>
    <value>hadoop2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster1.hadoop1</name>
    <value>hadoop1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster1.hadoop2</name>
    <value>hadoop2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/cluster1</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.cluster1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/hadoop/tmp/journal</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>


After testing, hadoop NN configuration is successful.
http://images.cnitblog.com/i/48682/201403/281811569696016.jpg
http://images.cnitblog.com/i/48682/201403/281812107976891.jpg
http://images.cnitblog.com/i/48682/201403/281812270168836.jpg


How should the value of hbase.rootdir be configured?
I have tried "hdfs://hadoop1:9000/hbase", "hdfs://hadoop2:9000/hbase" and 
"hdfs://cluster1", but HBase failed to start. Can you help me?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10881) Support reverse scan in thrift2

2014-03-31 Thread Liu Shaohui (JIRA)
Liu Shaohui created HBASE-10881:
---

 Summary: Support reverse scan in thrift2
 Key: HBASE-10881
 URL: https://issues.apache.org/jira/browse/HBASE-10881
 Project: HBase
  Issue Type: New Feature
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor


Support reverse scan in thrift2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10867:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10867-v1.txt, 10867-v2.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10.
> So when 10 was used in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get 
> ArrayIndexOutOfBoundsException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10867:
---

Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed

> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10867-v1.txt, 10867-v2.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10.
> So when 10 was used in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get 
> ArrayIndexOutOfBoundsException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-31 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956006#comment-13956006
 ] 

Liu Shaohui commented on HBASE-10867:
-

LGTM. Thanks, [~yuzhih...@gmail.com]

> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10867-v1.txt, 10867-v2.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10.
> So when 10 was used in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get 
> ArrayIndexOutOfBoundsException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10848:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: Fabien Le Gallo
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, 
> TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that do not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator.
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstru

[jira] [Commented] (HBASE-10851) Wait for regionservers to join the cluster

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956005#comment-13956005
 ] 

Hadoop QA commented on HBASE-10851:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637954/hbase-10851.patch
  against trunk revision .
  ATTACHMENT ID: 12637954

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestSplitLogWorker

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnDatanodeDeath(TestLogRolling.java:368)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9147//console

This message is automatically generated.

> Wait for regionservers to join the cluster
> --
>
> Key: HBASE-10851
> URL: https://issues.apache.org/jira/browse/HBASE-10851
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Critical
> Attachments: hbase-10851.patch
>
>
> With HBASE-10569, if regionservers are started a while after the master, all 
> regions will be assigned to the master.  That may not be what users expect.
> A work-around is to always start regionservers before masters.
> I was wondering if the master can wait a little for other regionservers to 
> join.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956004#comment-13956004
 ] 

stack commented on HBASE-10830:
---

More +1'ing from me.  It is great to be able to run this in standalone mode.

> Integration test MR jobs attempt to load htrace jars from the wrong location
> 
>
> Key: HBASE-10830
> URL: https://issues.apache.org/jira/browse/HBASE-10830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10830.00.patch, HBASE-10830.01.patch
>
>
> The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
> htrace JAR from the local Maven cache but get confused and use a HDFS URI.
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec <<< 
> FAILURE!
> testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
>   Time elapsed: 0.488 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
> at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270)
> at 
> org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-10848:
--

Assignee: Fabien Le Gallo

> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: Fabien Le Gallo
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, 
> TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that does not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator.
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.n

[jira] [Updated] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10848:
---

Fix Version/s: 0.98.2
   0.99.0

> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, 
> TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that does not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator.
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.

[jira] [Commented] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956000#comment-13956000
 ] 

Hudson commented on HBASE-10847:


FAILURE: Integrated in HBase-0.94-JDK7 #97 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/97/])
HBASE-10847 0.94: drop non-secure builds, make security the default. (larsh: 
rev 1583480)
* /hbase/branches/0.94/pom.xml
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithSecureRpcEngine.java


> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847-v2.txt, 10847-v3.txt, 10847-v4.txt, 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-03-31 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955995#comment-13955995
 ] 

chunhui shen commented on HBASE-10848:
--

lgtm
+1 on v4
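
For anyone following along, the scan setup under discussion looks roughly like 
the sketch below. This is purely an illustrative client-side example (the table 
handle, family and qualifier names are made up); it is not code taken from the 
attached patch or tests.

{code}
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.NullComparator;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class NullComparatorExample {
  // Keep only rows whose "cf:qual" column is present (value is not null);
  // setFilterIfMissing(true) drops rows that lack the column entirely.
  public static void scanRowsHavingColumn(HTableInterface table) throws Exception {
    SingleColumnValueFilter filter = new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("qual"),
        CompareFilter.CompareOp.NOT_EQUAL, new NullComparator());
    filter.setFilterIfMissing(true);
    Scan scan = new Scan();
    scan.setFilter(filter);
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        // process r
      }
    } finally {
      scanner.close();
    }
  }
}
{code}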

> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, 
> TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that does not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator.
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

[jira] [Updated] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-31 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10830:
-

Attachment: HBASE-10830.01.patch

Adding similar logic for IntegrationTestBigLinkedList and 
IntegrationTestLoadAndVerify.
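
As general background (not necessarily what the attached patches do), MapReduce 
jobs driven from HBase code usually ship the jars their tasks need through 
TableMapReduceUtil, which pulls them from the submitter's local classpath and 
stages them with the job instead of resolving them as paths on the cluster 
filesystem. A minimal, hedged sketch with a made-up job name:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class DependencyJarsExample {
  public static void submitExampleJob() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "example-job");  // job name is made up
    // ... configure mapper/reducer, input and output here ...
    TableMapReduceUtil.addDependencyJars(job);       // ship HBase and its deps with the job
    job.waitForCompletion(true);
  }
}
{code}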

> Integration test MR jobs attempt to load htrace jars from the wrong location
> 
>
> Key: HBASE-10830
> URL: https://issues.apache.org/jira/browse/HBASE-10830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10830.00.patch, HBASE-10830.01.patch
>
>
> The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
> htrace JAR from the local Maven cache but get confused and use a HDFS URI.
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec <<< 
> FAILURE!
> testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
>   Time elapsed: 0.488 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
> at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270)
> at 
> org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955991#comment-13955991
 ] 

Hudson commented on HBASE-10847:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #62 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/62/])
HBASE-10847 0.94: drop non-secure builds, make security the default. (larsh: 
rev 1583480)
* /hbase/branches/0.94/pom.xml
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithSecureRpcEngine.java


> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847-v2.txt, 10847-v3.txt, 10847-v4.txt, 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-31 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10830:
-

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

> Integration test MR jobs attempt to load htrace jars from the wrong location
> 
>
> Key: HBASE-10830
> URL: https://issues.apache.org/jira/browse/HBASE-10830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10830.00.patch
>
>
> The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
> htrace JAR from the local Maven cache but get confused and use a HDFS URI.
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec <<< 
> FAILURE!
> testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
>   Time elapsed: 0.488 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
> at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270)
> at 
> org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955977#comment-13955977
 ] 

stack commented on HBASE-10830:
---

Much better.  This patch would seem to address the issue.  +1 

I'm letting it run.  Not done yet.  It is past ITI.  Got one failure, but it 
was an OOME; will look into it:

{code}
---
 T E S T S
---
Running org.apache.hadoop.hbase.IntegrationTestIngest
2014-03-31 15:54:52.108 java[80514:1903] Unable to load realm info from 
SCDynamicStore
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 789.865 sec
Running org.apache.hadoop.hbase.IntegrationTestIngestStripeCompactions
2014-03-31 16:08:03.158 java[81159:1903] Unable to load realm info from 
SCDynamicStore





Running org.apache.hadoop.hbase.IntegrationTestIngestWithACL
2014-03-31 16:38:02.676 java[81968:1903] Unable to load realm info from 
SCDynamicStore
Running org.apache.hadoop.hbase.IntegrationTestIngestWithEncryption
2014-03-31 17:08:03.228 java[82305:1903] Unable to load realm info from 
SCDynamicStore
Running org.apache.hadoop.hbase.IntegrationTestIngestWithTags
2014-03-31 17:38:03.880 java[82680:1703] Unable to load realm info from 
SCDynamicStore
org.apache.maven.surefire.booter.SurefireBooterForkException: Error occurred in 
starting fork, check output in log
at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:238)
at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.access$000(ForkStarter.java:64)
at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter$ParallelFork.call(ForkStarter.java:303)
at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter$ParallelFork.call(ForkStarter.java:285)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
No results for java.util.concurrent.FutureTask@182b1195
Running org.apache.hadoop.hbase.IntegrationTestIngestWithVisibilityLabels
2014-03-31 17:48:59.800 java[82869:1903] Unable to load realm info from 
SCDynamicStore
...
{code}

Yeah, I think you've addressed the 'issue', [~ndimiduk].

> Integration test MR jobs attempt to load htrace jars from the wrong location
> 
>
> Key: HBASE-10830
> URL: https://issues.apache.org/jira/browse/HBASE-10830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10830.00.patch
>
>
> The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
> htrace JAR from the local Maven cache but get confused and use a HDFS URI.
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec <<< 
> FAILURE!
> testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
>   Time elapsed: 0.488 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
>  

[jira] [Commented] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-31 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955968#comment-13955968
 ] 

Lars Hofhansl commented on HBASE-10847:
---

The -security build failed with a known flaky test (need to look into that).


> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847-v2.txt, 10847-v3.txt, 10847-v4.txt, 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955965#comment-13955965
 ] 

Hudson commented on HBASE-10847:


SUCCESS: Integrated in HBase-0.94 #1335 (See 
[https://builds.apache.org/job/HBase-0.94/1335/])
HBASE-10847 0.94: drop non-secure builds, make security the default. (larsh: 
rev 1583480)
* /hbase/branches/0.94/pom.xml
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithSecureRpcEngine.java


> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847-v2.txt, 10847-v3.txt, 10847-v4.txt, 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-5222) Stopping replication via the "stop_replication" command in hbase shell on a slave cluster isn't acknowledged in the replication sink

2014-03-31 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-5222.
---

Resolution: Invalid

The kill switch was completely removed, closing.

> Stopping replication via the "stop_replication" command in hbase shell on a 
> slave cluster isn't acknowledged in the replication sink
> 
>
> Key: HBASE-5222
> URL: https://issues.apache.org/jira/browse/HBASE-5222
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, shell
>Affects Versions: 0.90.4
>Reporter: Josh Wymer
>
> After running "stop_replication" in the hbase shell on our slave cluster we 
> saw replication continue for weeks. Turns out that the replication sink is 
> missing a check to get the replication state and therefore continued to write.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10879) user_permission shell command on namespace doesn't work

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955948#comment-13955948
 ] 

Hadoop QA commented on HBASE-10879:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637940/10879-v1.txt
  against trunk revision .
  ATTACHMENT ID: 12637940

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146//console

This message is automatically generated.

> user_permission shell command on namespace doesn't work
> ---
>
> Key: HBASE-10879
> URL: https://issues.apache.org/jira/browse/HBASE-10879
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10879-v1.txt
>
>
> Currently user_permission command on namespace, e.g.
> {code}
> user_permission '@ns'
> {code}
> would result in the following exception:
> {code}
> Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no 
> method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.  
> AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
>  on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> ERROR: no method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.
>proxies.ArrayJavaProxy) on 
> Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission'
>/usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in 
> `command'
>org/jruby/RubyKernel.java:2109:in `send'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:91:in 
> `translate_hbase_exceptions'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command'
>/usr/lib/hbase/lib/ruby/shell.rb:119:in `command'
>(eval):2:in `user_permission'
>(hbase):1:in `evaluate'
>org/jruby/RubyKernel.java:1112:in `eval'
> {code}



--
This message was sent by

[jira] [Updated] (HBASE-7118) org.apache.hadoop.hbase.replication.TestReplicationPeer failed with testResetZooKeeperSession unit test

2014-03-31 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-7118:
--

Resolution: Invalid
Status: Resolved  (was: Patch Available)

Stale jira, closing.

> org.apache.hadoop.hbase.replication.TestReplicationPeer failed with 
> testResetZooKeeperSession unit test
> ---
>
> Key: HBASE-7118
> URL: https://issues.apache.org/jira/browse/HBASE-7118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0, 0.94.0
> Environment: RHEL 5.3, open JDK 1.6
>Reporter: Li Ping Zhang
>Assignee: Li Ping Zhang
>  Labels: patch
> Attachments: HBASE-7118-0.94.0.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> org.apache.hadoop.hbase.replication.TestReplicationPeer
> Running org.apache.hadoop.hbase.replication.TestReplicationPeer
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 25.89 sec <<< 
> FAILURE!
>   --- stable failures, new for hbase 0.92.0, need to be fixed firstly.
>   
>  
> target/surefire-reports/org.apache.hadoop.hbase.replication.TestReplicationPeer.txt
>  output:
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 28.245 sec 
> <<< FAILURE!
> testResetZooKeeperSession(org.apache.hadoop.hbase.replication.TestReplicationPeer)
>   Time elapsed: 25.247 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: ReplicationPeer ZooKeeper session was 
> not properly expired.
> at junit.framework.Assert.fail(Assert.java:50)
> at 
> org.apache.hadoop.hbase.replication.TestReplicationPeer.testResetZooKeeperSession(TestReplicationPeer.java:73)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
> at java.lang.reflect.Method.invoke(Method.java:611)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
> 
> target/surefire-reports/org.apache.hadoop.hbase.replication.TestReplicationPeer-output.txt
>  content:
>
> 2012-03-25 20:52:42,979 INFO  [main] zookeeper.MiniZooKeeperCluster(174): 
> Started MiniZK Cluster and connect 1 ZK server on client port: 21818
> 2012-03-25 20:52:43,023 DEBUG [main] zookeeper.ZKUtil(96): connection to 
> cluster: clusterId opening connection to ZooKeeper with ensemble 
> (localhost:21818)
> 2012-03-25 20:52:43,082 INFO  [main] zookeeper.RecoverableZooKeeper(89): The 
> identifier of this process is 4...@svltest116.svl.ibm.com
> 2012-03-25 20:52:43,166 DEBUG [main-EventThread] 
> zookeeper.ZooKeeperWatcher(257): connection to cluster: clusterId Received 
> ZooKeeper Event, type=None, state=SyncConnected, path=null
> 2012-03-25 20:52:43,175 INFO  [Thread-9] replication.TestReplicationPeer(53): 
> Expiring ReplicationPeer ZooKeeper session.
> 2012-03-25 20:52:43,196 DEBUG [main-EventThread] 
> zookeeper.ZooKeeperWatcher(334): connection to cluster: 
> clusterId-0x1364d226a3d connected
> 2012-03-25 20:52:43,308 INFO  [Thread-9] hbase.HBaseTestingUtility(1234): ZK 
> Closed Session 0x1364d226a3d; sleeping=25000
> 2012-03-25 20:53:08,323 INFO  [Thread-9] replication.TestReplicationPeer(57): 
> Attempting to use expired ReplicationPeer ZooKeeper session.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-5211) org.apache.hadoop.hbase.replication.TestMultiSlaveReplication#testMultiSlaveReplication is flakey

2014-03-31 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-5211.
---

Resolution: Cannot Reproduce

Stale jira, closing.

> org.apache.hadoop.hbase.replication.TestMultiSlaveReplication#testMultiSlaveReplication
>  is flakey
> -
>
> Key: HBASE-5211
> URL: https://issues.apache.org/jira/browse/HBASE-5211
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
> Attachments: log2.txt, trunk.txt
>
>
> I can't seem to get this test to pass consistently on my laptop. Also my 
> hudson occasionally tripps up on it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955921#comment-13955921
 ] 

Hudson commented on HBASE-10847:


FAILURE: Integrated in HBase-0.94-security #452 (See 
[https://builds.apache.org/job/HBase-0.94-security/452/])
HBASE-10847 0.94: drop non-secure builds, make security the default. (larsh: 
rev 1583480)
* /hbase/branches/0.94/pom.xml
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithSecureRpcEngine.java


> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847-v2.txt, 10847-v3.txt, 10847-v4.txt, 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Attachment: HBaseConsensus.pdf

[~stack] - attached is a short write-up on consensus for HBase - I would 
appreciate review and feedback.
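
To make the direction in the description concrete, an abstraction of this kind 
might look roughly like the interface below. This is a purely hypothetical 
sketch (the names are invented for illustration); it is not the API from the 
attached patch or write-up.

{code}
import java.io.IOException;

// Hypothetical coordination facade a log-split handler could depend on,
// so that the ZooKeeper-specific details stay behind one implementation.
public interface SplitTaskCoordination {
  /** Report progress so the coordinator does not expire the task. */
  boolean reportProgress(String taskName);

  /** Mark the task as finished (successfully or not) for this worker. */
  void endTask(String taskName, boolean success) throws IOException;
}
{code}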

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, 
> HBaseConsensus.pdf
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related 
> APIs and details, which are now spread throughout the codebase (mostly leaked 
> through ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out ZooKeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning 
> to work on other handlers in the regionserver, and perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10855) Enable hfilev3 by default

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955911#comment-13955911
 ] 

Hadoop QA commented on HBASE-10855:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637931/10855.txt
  against trunk revision .
  ATTACHMENT ID: 12637931

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestHRegion
  org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestTableMapReduceBase.testMultiRegionTable(TestTableMapReduceBase.java:96)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145//console

This message is automatically generated.

> Enable hfilev3 by default
> -
>
> Key: HBASE-10855
> URL: https://issues.apache.org/jira/browse/HBASE-10855
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt
>
>
> Distributed log replay needs this.  Should be on by default in 1.0/0.99.
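
For anyone who wants to try the v3 format before the default changes, it can be 
enabled explicitly through the hfile.format.version setting (shown 
programmatically below; the same key can be put in hbase-site.xml). A minimal 
sketch, assuming the property name matches your release:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HFileV3ConfigExample {
  public static Configuration hfileV3Config() {
    Configuration conf = HBaseConfiguration.create();
    // Opt in to HFile v3 (what this issue would make the default); v2 is the older default.
    conf.setInt("hfile.format.version", 3);
    return conf;
  }
}
{code}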



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10851) Wait for regionservers to join the cluster

2014-03-31 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955897#comment-13955897
 ] 

Jimmy Xiang commented on HBASE-10851:
-

[~stack], the 4.5-second pause is still there. I looked into assigning meta as 
soon as possible; it works in some cases, but not in others. For example, if 
meta is assigned to a normal region server and is still in transition while the 
master fails over, we need to wait for that server to check in. It would get 
complicated if we special-cased such corner cases.

I attached a patch that excludes the backup masters when counting region 
servers. The minimum number of region servers to wait for is changed from 1 to 
2 so that the active master is included. For a standalone server, the minimum 
is set to 1.
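
For operators hitting this before the patch lands, the relevant knobs are the 
master's wait-on-regionservers settings. A hedged sketch (property names taken 
from the master startup path; double-check them against your release), raising 
the minimum so the master does not finish initialization alone:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MasterWaitConfigExample {
  public static Configuration masterWaitConfig() {
    Configuration conf = HBaseConfiguration.create();
    // Wait for at least 2 servers to check in before the master finishes startup.
    conf.setInt("hbase.master.wait.on.regionservers.mintostart", 2);
    // Upper bound on how long to wait for them (milliseconds).
    conf.setInt("hbase.master.wait.on.regionservers.timeout", 30000);
    return conf;
  }
}
{code}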

> Wait for regionservers to join the cluster
> --
>
> Key: HBASE-10851
> URL: https://issues.apache.org/jira/browse/HBASE-10851
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Critical
> Attachments: hbase-10851.patch
>
>
> With HBASE-10569, if regionservers are started a while after the master, all 
> regions will be assigned to the master.  That may not be what users expect.
> A work-around is to always start regionservers before masters.
> I was wondering if the master can wait a little for other regionservers to 
> join.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10877) HBase non-retriable exception list should be expanded

2014-03-31 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955889#comment-13955889
 ] 

Nick Dimiduk commented on HBASE-10877:
--

See the comments on HBASE-10432. We erred on the side of caution there, but 
IllegalAccessError is a subclass of LinkageError, so it should be handled 
correctly.
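
To spell out the classification being discussed: linkage problems can never be 
fixed by retrying, so they belong on the non-retriable side. A minimal, purely 
illustrative check (not the actual client code):

{code}
import org.apache.hadoop.hbase.DoNotRetryIOException;

public class RetriableCheckExample {
  // Illustrative only: decide whether an error is worth retrying.
  public static boolean isRetriable(Throwable t) {
    if (t instanceof DoNotRetryIOException) return false;
    // LinkageError covers IllegalAccessError, NoClassDefFoundError, etc.
    if (t instanceof LinkageError) return false;
    // Be conservative about other Errors as well.
    if (t instanceof Error) return false;
    return true;
  }
}
{code}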

> HBase non-retriable exception list should be expanded
> -
>
> Key: HBASE-10877
> URL: https://issues.apache.org/jira/browse/HBASE-10877
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Minor
>
> Example where retries do not make sense:
> {noformat}
> 2014-03-31 20:54:27,765 WARN [InputInitializer [Map 1] #0] 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: 
> Encountered problems when prefetch hbase:meta table: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=35, exceptions:
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:18 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:20 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:24 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:34 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:55 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:26 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:26 UTC 2014, 

[jira] [Updated] (HBASE-10851) Wait for regionservers to join the cluster

2014-03-31 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10851:


Status: Patch Available  (was: Open)

> Wait for regionservers to join the cluster
> --
>
> Key: HBASE-10851
> URL: https://issues.apache.org/jira/browse/HBASE-10851
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Critical
> Attachments: hbase-10851.patch
>
>
> With HBASE-10569, if regionservers are started a while after the master, all 
> regions will be assigned to the master.  That may not be what users expect.
> A work-around is to always start regionservers before masters.
> I was wondering if the master can wait a little for other regionservers to 
> join.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10851) Wait for regionservers to join the cluster

2014-03-31 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10851:


Attachment: hbase-10851.patch

> Wait for regionservers to join the cluster
> --
>
> Key: HBASE-10851
> URL: https://issues.apache.org/jira/browse/HBASE-10851
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Critical
> Attachments: hbase-10851.patch
>
>
> With HBASE-10569, if regionservers are started a while after the master, all 
> regions will be assigned to the master.  That may not be what users expect.
> A work-around is to always start regionservers before masters.
> I was wondering if the master can wait a little for other regionservers to 
> join.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10091) Exposing HBase DataTypes to non-Java interfaces

2014-03-31 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955877#comment-13955877
 ] 

Nick Dimiduk commented on HBASE-10091:
--

I haven't worked through a prototype yet, so I don't know exactly. The DSL we 
have for exposing filters is parsed once, in Java (using 
[ParseFilter|https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/ParseFilter.html]),
 by the shell or Thrift service (I guess the REST service doesn't support this 
yet). The user would provide the type mapping as a configuration string and let 
whatever is interacting with the HTable handle sending the provided data 
literals to the correct DataType instances.

One example consumer is the Hive metastore. A table is defined in the metastore 
with a column mapping, similar to today, that maps each metastore table column 
to an HBase table column. In addition to the column mapping, a type 
specification is also provided; this would be an Expression in the DSL we're 
discussing. The StorageHandler would be responsible for honoring this 
additional component of the mapping. Exactly how we ensure the metastore type 
can be converted to/from the HBase {{DataType}} is still an open question. I 
hope to learn from Phoenix on this, hence I deferred that work to HBASE-8863.

More concretely, I imagine this DSL is relatively simple. A complete type 
definition might be as simple as {{package.class\[/ORDER\]}}. We'll need to add 
any necessary API to {{DataType}} to support constructing from the parser. 
There may also be some built-in named definitions, "raw" or "ordered-bytes", 
where we ship an existing known mapping between Java type and HBase DataType 
implementation. This would be a convenience for consumers of HTable; I don't 
know how this would play into a metastore implementation.

The only place where potential overlap with Avro/Protobuf comes in is with 
[Struct|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/types/Struct.html].
 I'm not convinced this is very complicated either; just a sequence of types 
with syntax for specifying an optional element. There's no concept of "schema 
versioning" in {{Struct}}; there's no room for it in a place where encoded 
ordering is the primary concern.
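
For concreteness, a sketch of what such a DSL would be shorthand for today. 
This is the Java-only composition via {{org.apache.hadoop.hbase.types}}; the 
DSL string in the comment is hypothetical syntax, and the ByteRange helper 
class name may differ by branch.
{code}
import org.apache.hadoop.hbase.types.OrderedInt32;
import org.apache.hadoop.hbase.types.OrderedString;
import org.apache.hadoop.hbase.types.Struct;
import org.apache.hadoop.hbase.types.StructBuilder;
import org.apache.hadoop.hbase.util.PositionedByteRange;
import org.apache.hadoop.hbase.util.SimplePositionedByteRange;

public class StructDslSketch {
  public static void main(String[] args) {
    // Only Java callers can compose DataTypes like this today; a DSL string
    // such as "OrderedInt32,OrderedString" (hypothetical) would let the shell,
    // REST or Thrift gateways build the same Struct from configuration.
    Struct rowKey = new StructBuilder()
        .add(OrderedInt32.ASCENDING)
        .add(OrderedString.ASCENDING)
        .toStruct();
    PositionedByteRange buf = new SimplePositionedByteRange(32);
    rowKey.encode(buf, new Object[] { 42, "user-42" });
  }
}
{code}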

> Exposing HBase DataTypes to non-Java interfaces
> ---
>
> Key: HBASE-10091
> URL: https://issues.apache.org/jira/browse/HBASE-10091
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Nick Dimiduk
>
> Access to the DataType implementations introduced in HBASE-8693 is currently 
> limited to consumers of the Java API. It is not easy to specify a data type 
> in non-Java environments, such as the HBase shell, REST or Thrift Gateways, 
> command-line arguments to our utility MapReduce jobs, or in integration 
> points such as a (hypothetical extension to) Hive's HBaseStorageHandler. See 
> examples where this limitation impedes in HBASE-8593 and HBASE-10071.
> I propose the implementation of a type definition DSL, similar to the 
> language defined for Filters in HBASE-4176. By implementing this in core 
> HBase, it can be reused in all of the situations described previously. The 
> parser for this DSL must support arbitrary type extensions, just as the 
> Filter parser allows for new Filter types to be registered at runtime.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10879) user_permission shell command on namespace doesn't work

2014-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10879:
---

Attachment: 10879-v1.txt

With the attached patch:
{code}
hbase(main):004:0> grant 'oozie', 'RW', '@ns'
0 row(s) in 0.1930 seconds

hbase(main):005:0> user_permission '@ns'
User Table,Family,Qualifier:Permission
 oozie   ,,: [Permission: 
actions=READ,WRITE]
1 row(s) in 0.0990 seconds
{code}

> user_permission shell command on namespace doesn't work
> ---
>
> Key: HBASE-10879
> URL: https://issues.apache.org/jira/browse/HBASE-10879
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10879-v1.txt
>
>
> Currently user_permission command on namespace, e.g.
> {code}
> user_permission '@ns'
> {code}
> would result in the following exception:
> {code}
> Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no 
> method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.  
> AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
>  on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> ERROR: no method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.
>proxies.ArrayJavaProxy) on 
> Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission'
>/usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in 
> `command'
>org/jruby/RubyKernel.java:2109:in `send'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:91:in 
> `translate_hbase_exceptions'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command'
>/usr/lib/hbase/lib/ruby/shell.rb:119:in `command'
>(eval):2:in `user_permission'
>(hbase):1:in `evaluate'
>org/jruby/RubyKernel.java:1112:in `eval'
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10879) user_permission shell command on namespace doesn't work

2014-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10879:
---

Status: Patch Available  (was: Open)

> user_permission shell command on namespace doesn't work
> ---
>
> Key: HBASE-10879
> URL: https://issues.apache.org/jira/browse/HBASE-10879
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10879-v1.txt
>
>
> Currently user_permission command on namespace, e.g.
> {code}
> user_permission '@ns'
> {code}
> would result in the following exception:
> {code}
> Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no 
> method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.  
> AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
>  on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> ERROR: no method 'getUserPermissions' for arguments 
> (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.
>proxies.ArrayJavaProxy) on 
> Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
> Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission'
>/usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in 
> `command'
>org/jruby/RubyKernel.java:2109:in `send'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:91:in 
> `translate_hbase_exceptions'
>/usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
>/usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command'
>/usr/lib/hbase/lib/ruby/shell.rb:119:in `command'
>(eval):2:in `user_permission'
>(hbase):1:in `evaluate'
>org/jruby/RubyKernel.java:1112:in `eval'
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10879) user_permission shell command on namespace doesn't work

2014-03-31 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10879:
--

 Summary: user_permission shell command on namespace doesn't work
 Key: HBASE-10879
 URL: https://issues.apache.org/jira/browse/HBASE-10879
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


Currently user_permission command on namespace, e.g.
{code}
user_permission '@ns'
{code}
would result in the following exception:
{code}
Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no 
method 'getUserPermissions' for arguments 
(org.apache.hadoop.hbase.protobuf.generated.  
AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy)
 on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil

ERROR: no method 'getUserPermissions' for arguments 
(org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.
   proxies.ArrayJavaProxy) on 
Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil
Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission'
   /usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in 
`command'
   org/jruby/RubyKernel.java:2109:in `send'
   /usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
   /usr/lib/hbase/lib/ruby/shell/commands.rb:91:in 
`translate_hbase_exceptions'
   /usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe'
   /usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command'
   /usr/lib/hbase/lib/ruby/shell.rb:119:in `command'
   (eval):2:in `user_permission'
   (hbase):1:in `evaluate'
   org/jruby/RubyKernel.java:1112:in `eval'
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-31 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955829#comment-13955829
 ] 

Nick Dimiduk commented on HBASE-10830:
--

[~apurtell], [~stack] See if this patch moves things along for you in your 
environments.

> Integration test MR jobs attempt to load htrace jars from the wrong location
> 
>
> Key: HBASE-10830
> URL: https://issues.apache.org/jira/browse/HBASE-10830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10830.00.patch
>
>
> The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
> htrace JAR from the local Maven cache but get confused and use a HDFS URI.
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec <<< 
> FAILURE!
> testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
>   Time elapsed: 0.488 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
> at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270)
> at 
> org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-31 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-10847.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to 0.94. I also changed the -security build to use the -security-test 
profile.
Watching the builds now.

> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847-v2.txt, 10847-v3.txt, 10847-v4.txt, 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-31 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10830:
-

Attachment: HBASE-10830.00.patch

Adding a miniMRCluster when configured for non-distributed mode helps. Past 
that, testRunFromOutputCommitter hangs.

> Integration test MR jobs attempt to load htrace jars from the wrong location
> 
>
> Key: HBASE-10830
> URL: https://issues.apache.org/jira/browse/HBASE-10830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10830.00.patch
>
>
> The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
> htrace JAR from the local Maven cache but get confused and use a HDFS URI.
> {noformat}
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec <<< 
> FAILURE!
> testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
>   Time elapsed: 0.488 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
> at 
> org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
> at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270)
> at 
> org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-31 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10847:
--

Attachment: 10847-v4.txt

What I am going to commit.

Just fixes some typos and forces TestFromClientSide to run with 
WritableRpcEngine (which is not public, hence the string path to the class), 
since we already have TestFromClientSideWithSecureRpcEngine, which will run 
with SecureRpcEngine unconditionally.
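
For reference, a minimal sketch of what pinning the engine looks like (this is 
not the attached patch, and it assumes the 0.94-era {{hbase.rpc.engine}} key):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcEngineSetupSketch {
  // Sketch only: pin the RPC engine by class name, since the test cannot
  // reference the non-public WritableRpcEngine class directly.
  public static Configuration nonSecureConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.rpc.engine", "org.apache.hadoop.hbase.ipc.WritableRpcEngine");
    return conf;
  }
}
{code}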



> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847-v2.txt, 10847-v3.txt, 10847-v4.txt, 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10314) Add Chaos Monkey that doesn't touch the master

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955818#comment-13955818
 ] 

Hadoop QA commented on HBASE-10314:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637913/HBASE-10314-0.patch
  against trunk revision .
  ATTACHMENT ID: 12637913

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9143//console

This message is automatically generated.

> Add Chaos Monkey that doesn't touch the master
> --
>
> Key: HBASE-10314
> URL: https://issues.apache.org/jira/browse/HBASE-10314
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-10314-0.patch, HBASE-10314-0.patch, 
> HBASE-10314-0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10878) Operator | for visibility label doesn't work

2014-03-31 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10878:
--

 Summary: Operator | for visibility label doesn't work
 Key: HBASE-10878
 URL: https://issues.apache.org/jira/browse/HBASE-10878
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


I used a setup similar to that from HBASE-10863, with the fix for HBASE-10863:
{code}
hbase(main):003:0> scan 'hbase:labels'
ROW  COLUMN+CELL
 \x00\x00\x00\x01column=f:\x00, 
timestamp=1395944796030, value=system
 \x00\x00\x00\x01column=f:hbase, 
timestamp=1395944796030, value=
 \x00\x00\x00\x02column=f:\x00, 
timestamp=1395951045442, value=TOP_SECRET
 \x00\x00\x00\x02column=f:hrt_qa, 
timestamp=1395951229682, value=
 \x00\x00\x00\x02column=f:hrt_qa1, 
timestamp=1395951270297, value=
 \x00\x00\x00\x02column=f:mapred, 
timestamp=1395958442326, value=
 \x00\x00\x00\x03column=f:\x00, 
timestamp=1395952069731, value=TOP_TOP_SECRET
 \x00\x00\x00\x03column=f:mapred, 
timestamp=1395956032141, value=
 \x00\x00\x00\x04column=f:\x00, 
timestamp=1395971516605, value=A
 \x00\x00\x00\x04column=f:oozie, 
timestamp=1395971647859, value=
 \x00\x00\x00\x05column=f:\x00, 
timestamp=1395971520327, value=B
5 row(s) in 0.0580 seconds
{code}
I did the following as user oozie using hbase shell:
{code}
hbase(main):001:0> scan 'tb', { AUTHORIZATIONS => ['A']}
ROW  COLUMN+CELL
 row column=f1:q, 
timestamp=1395971660859, value=v1
 row2column=f1:q, 
timestamp=1395972271343, value=v2
 row3column=f1:q, 
timestamp=1396067477702, value=v3
3 row(s) in 0.2050 seconds

hbase(main):002:0> scan 'tb', { AUTHORIZATIONS => ['A|B']}
ROW  COLUMN+CELL
 row2column=f1:q, 
timestamp=1395972271343, value=v2
1 row(s) in 0.0150 seconds

hbase(main):003:0> scan 'tb', { AUTHORIZATIONS => ['B|A']}
ROW  COLUMN+CELL
 row2column=f1:q, 
timestamp=1395972271343, value=v2
1 row(s) in 0.0260 seconds
{code}
Rows 'row' and 'row3' were inserted with label 'A'.
Row 'row2' was inserted without a label.
Row 'row1' was inserted with label 'B'.

I would expect 'row1' to also be returned.
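
For context, a presumed setup (not taken from the report; the values and exact 
commands are assumptions for illustration) of how such rows typically get their 
labels and how the scanning user gets authorizations:
{code}
hbase(main):001:0> put 'tb', 'row',  'f1:q', 'v1', {VISIBILITY => 'A'}
hbase(main):002:0> put 'tb', 'row1', 'f1:q', 'v4', {VISIBILITY => 'B'}
hbase(main):003:0> put 'tb', 'row2', 'f1:q', 'v2'
hbase(main):004:0> put 'tb', 'row3', 'f1:q', 'v3', {VISIBILITY => 'A'}
hbase(main):005:0> set_auths 'oozie', ['A']
{code}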



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955780#comment-13955780
 ] 

Hadoop QA commented on HBASE-10866:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637916/HBASE-10866.patch
  against trunk revision .
  ATTACHMENT ID: 12637916

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9144//console

This message is automatically generated.

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related 
> API and details, which are now spread throughout the codebase (mostly leaked 
> through ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is a first version of the patch for initial review (then I'm planning to 
> work on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10855) Enable hfilev3 by default

2014-03-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10855:
--

Attachment: 10855.txt

Failures seem super arbitrary and likely unrelated.  Any +1 out there?
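
For anyone following along: enabling hfilev3 amounts to flipping the format 
version (the patch presumably changes the default in hbase-default.xml); the 
site-level equivalent would be:
{code}
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
{code}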

> Enable hfilev3 by default
> -
>
> Key: HBASE-10855
> URL: https://issues.apache.org/jira/browse/HBASE-10855
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt
>
>
> Distributed log replay needs this.  Should be on by default in 1.0/0.99.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10855) Enable hfilev3 by default

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955774#comment-13955774
 ] 

Hadoop QA commented on HBASE-10855:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637907/10855.txt
  against trunk revision .
  ATTACHMENT ID: 12637907

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestRegionPlacement

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestTableMapReduceBase.testMultiRegionTable(TestTableMapReduceBase.java:96)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9142//console

This message is automatically generated.

> Enable hfilev3 by default
> -
>
> Key: HBASE-10855
> URL: https://issues.apache.org/jira/browse/HBASE-10855
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 10855.txt, 10855.txt, 10855.txt
>
>
> Distributed log replay needs this.  Should be on by default in 1.0/0.99.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10877) HBase non-retriable exception list should be expanded

2014-03-31 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-10877:
-

Summary: HBase non-retriable exception list should be expanded  (was: HBase 
non-retrieable exception list should be expanded)

> HBase non-retriable exception list should be expanded
> -
>
> Key: HBASE-10877
> URL: https://issues.apache.org/jira/browse/HBASE-10877
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Minor
>
> Example where retries do not make sense:
> {noformat}
> 2014-03-31 20:54:27,765 WARN [InputInitializer [Map 1] #0] 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: 
> Encountered problems when prefetch hbase:meta table: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=35, exceptions:
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:18 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:20 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:24 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:34 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:55 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:26 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:26 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.Il

[jira] [Created] (HBASE-10877) HBase non-retrieable exception list should be expanded

2014-03-31 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HBASE-10877:


 Summary: HBase non-retrieable exception list should be expanded
 Key: HBASE-10877
 URL: https://issues.apache.org/jira/browse/HBASE-10877
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Priority: Minor


Example where retries do not make sense:
{noformat}
2014-03-31 20:54:27,765 WARN [InputInitializer [Map 1] #0] 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: 
Encountered problems when prefetch hbase:meta table: 
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
attempts=35, exceptions:
Mon Mar 31 20:45:17 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString 
cannot access its superclass com.google.protobuf.LiteralByteString
Mon Mar 31 20:45:17 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:45:17 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:45:18 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:45:20 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:45:24 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:45:34 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:45:45 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:45:55 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:46:05 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:46:25 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:46:45 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:47:05 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:47:25 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:47:45 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:48:05 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:48:25 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:48:46 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:49:06 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:49:26 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:49:46 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:50:06 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:50:26 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:50:46 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:51:06 UTC 2014, 
org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Mon Mar 31 20:51:26 UTC 2014, 
org.apache.hadoop.hbase.client.RpcR

[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Status: Patch Available  (was: Open)

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related 
> API and details, which are now spread throughout the codebase (mostly leaked 
> through ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is a first version of the patch for initial review (then I'm planning to 
> work on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Status: Open  (was: Patch Available)

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related 
> API and details, which are now spread throughout the codebase (mostly leaked 
> through ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is a first version of the patch for initial review (then I'm planning to 
> work on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10771) Primitive type put/get APIs in ByteRange

2014-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955728#comment-13955728
 ] 

stack commented on HBASE-10771:
---

Thanks [~ndimiduk]

And I like how you put it.  My comments are of the same class as yours (to be 
clear).

We have ByteBuffers coming in the front door currently.  We give the socket a 
nioBB to read into.  Keeping the position in another object, a BR for instance, 
seems fine (so we avoid #1 and #2 in @apurtell's list above).

Reading in from HDFS, we allocate a BB and read into it.

Looking at ByteBuf, it has range checking (checkIndex), but getBytes and 
checkIndex are subclassable, so perhaps it could be 'turned off' if we wanted 
it to be.  ByteBuf has nice features like allocation from pools, but if we are 
talking tight loops (no range check) and reuse, then neither nioBB nor nettyBB 
will do.
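
To make the "position kept outside the buffer" point concrete, a toy 
illustration (not HBase's ByteRange API): the backing bytes can be shared and 
reused while each reader tracks its own offset, with no per-access 
checkIndex-style test in the hot loop.
{code}
import java.nio.ByteBuffer;

// Toy sketch only. A real implementation would handle direct buffers,
// remaining-length checks at the call site, and reuse/reset.
final class PositionedView {
  private final byte[] bytes;  // backing array, e.g. filled by a socket/HDFS read
  private int position;

  PositionedView(ByteBuffer buf) {
    this.bytes = buf.array();  // assumes a heap buffer for this sketch
    this.position = 0;
  }

  long getLong() {             // no range check in the tight loop
    long v = 0;
    for (int i = 0; i < 8; i++) {
      v = (v << 8) | (bytes[position++] & 0xff);
    }
    return v;
  }
}
{code}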




> Primitive type put/get APIs in ByteRange 
> -
>
> Key: HBASE-10771
> URL: https://issues.apache.org/jira/browse/HBASE-10771
> Project: HBase
>  Issue Type: Improvement
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.99.0
>
> Attachments: HBASE-10771.patch, HBASE-10771_V2.patch
>
>
> While doing HBASE-10713 I came across the need to write int/long (and read 
> also) from a ByteRange.  CellBlocks are backed by ByteRange. So we can add 
> such APIs.
> Also, as per HBASE-10750 we return a ByteRange from MSLAB, and the discussion 
> under HBASE-10191 suggests we can have BR-backed HFileBlocks etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Attachment: HBASE-10866.patch

no-prefixed patch, corrected formatting

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related 
> API and details, which are now spread throughout the codebase (mostly leaked 
> through ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is a first version of the patch for initial review (then I'm planning to 
> work on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-31 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Attachment: HBASE-10866.patch

updated patch, revised interfaces naming

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related 
> API and details, which are now spread throughout the codebase (mostly leaked 
> through ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is a first version of the patch for initial review (then I'm planning to 
> work on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10314) Add Chaos Monkey that doesn't touch the master

2014-03-31 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-10314:
--

Attachment: HBASE-10314-0.patch

Re-attaching to see if this patch is still good.

> Add Chaos Monkey that doesn't touch the master
> --
>
> Key: HBASE-10314
> URL: https://issues.apache.org/jira/browse/HBASE-10314
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-10314-0.patch, HBASE-10314-0.patch, 
> HBASE-10314-0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10855) Enable hfilev3 by default

2014-03-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10855:
--

Attachment: 10855.txt

Retry.

> Enable hfilev3 by default
> -
>
> Key: HBASE-10855
> URL: https://issues.apache.org/jira/browse/HBASE-10855
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 10855.txt, 10855.txt, 10855.txt
>
>
> Distributed log replay needs this.  Should be on by default in 1.0/0.99.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10815) Master regionserver should be rolling-upgradable

2014-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955607#comment-13955607
 ] 

Hudson commented on HBASE-10815:


SUCCESS: Integrated in HBase-TRUNK #5052 (See 
[https://builds.apache.org/job/HBase-TRUNK/5052/])
HBASE-10815 Master regionserver should be rolling-upgradable (jxiang: rev 
1583373)
* /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java


> Master regionserver should be rolling-upgradable
> 
>
> Key: HBASE-10815
> URL: https://issues.apache.org/jira/browse/HBASE-10815
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, Region Assignment
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0
>
> Attachments: hbase-10815.patch, hbase-10815_v2.patch
>
>
> In HBASE-10569, two things could affect a rolling upgrade from a 0.96+ 
> release:
> * The master doesn't have its own info server anymore. It shares the same 
> info server with the regionserver. We can have a setting so that we can start 
> two info servers, one for the master on the original port and one for the 
> regionserver.
> * The backup master is a regionserver now, so it could hold regions. This 
> could affect some deployments. We can have a setting so that we can prevent 
> the backup master from serving any regions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955577#comment-13955577
 ] 

Hadoop QA commented on HBASE-10867:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637872/10867-v2.txt
  against trunk revision .
  ATTACHMENT ID: 12637872

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.replication.TestReplicationSyncUpTool

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9141//console

This message is automatically generated.

> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10867-v1.txt, 10867-v2.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10, so the valid region server indices are 0 through 9.
> So when 10 was used as killIndex in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we got the 
> ArrayIndexOutOfBoundsException above.
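A minimal sketch of the assumed fix (not necessarily what the attached patches do): draw the kill index from the slave count so it always stays within 0..SLAVES-1.
{code}
// Sketch of an assumed fix, not the attached patch: bound the random kill index
// by the number of slaves so index 10 can never be asked of a 10-slave cluster.
import java.util.Random;

public class KillIndexSketch {
  public static void main(String[] args) {
    final int SLAVES = 10;
    int killIndex = new Random().nextInt(SLAVES); // always 0..SLAVES-1
    System.out.println("killing region server at index " + killIndex);
  }
}
{code}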



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-8073) HFileOutputFormat support for offline operation

2014-03-31 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955565#comment-13955565
 ] 

Nick Dimiduk commented on HBASE-8073:
-

bq. We could also expose an additional API for 
HFileOutputFormat.configureIncrementalLoad() so that we give the user/caller the 
flexibility to supply split points or other info, so that HFileOutputFormat does 
not have to figure this out internally.

Yes, this is a good step. The partitions file could be passed to the 
TotalOrderPartitioner (TOP) directly, and the same file could be parsed to count 
the number of reducers.
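A rough sketch of that step, assuming Hadoop 2's SequenceFile API and a pre-built partitions file; the helper below is illustrative and not existing HFileOutputFormat code:
{code}
// Illustrative only: point TotalOrderPartitioner at a pre-built partitions file
// and size the reduce phase from it (reducers = split points + 1).
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;
import org.apache.hadoop.util.ReflectionUtils;

public class OfflinePartitionSetup {
  // Count the keys in the partitions SequenceFile.
  static int countSplitPoints(Configuration conf, Path partitions) throws IOException {
    SequenceFile.Reader reader =
        new SequenceFile.Reader(conf, SequenceFile.Reader.file(partitions));
    try {
      Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
      int count = 0;
      while (reader.next(key)) {
        count++;
      }
      return count;
    } finally {
      reader.close();
    }
  }

  public static void configure(Job job, Path partitionsFile) throws IOException {
    job.setPartitionerClass(TotalOrderPartitioner.class);
    TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), partitionsFile);
    job.setNumReduceTasks(countSplitPoints(job.getConfiguration(), partitionsFile) + 1);
  }
}
{code}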

> HFileOutputFormat support for offline operation
> ---
>
> Key: HBASE-8073
> URL: https://issues.apache.org/jira/browse/HBASE-8073
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce
>Reporter: Nick Dimiduk
>
> When using HFileOutputFormat to generate HFiles, it inspects the region 
> topology of the target table. The split points from that table are used to 
> guide the TotalOrderPartitioner. If the target table does not exist, it is 
> first created. This imposes an unnecessary dependence on an online HBase and 
> an existing table.
> If the table exists, it can be used. However, the job can be smarter. For 
> example, if there's far more data going into the HFiles than the table 
> currently contains, the table regions aren't very useful for data split 
> points. Instead, the input data can be sampled to produce split points more 
> meaningful to the dataset. LoadIncrementalHFiles is already capable of 
> handling divergence between HFile boundaries and table regions, so this 
> should not pose any additional burden at load time.
> The proper method of sampling the data likely requires a custom input format 
> and an additional map-reduce job to perform the sampling. See a relevant 
> implementation: 
> https://github.com/alexholmes/hadoop-book/blob/master/src/main/java/com/manning/hip/ch4/sampler/ReservoirSamplerInputFormat.java



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-8523) No support for double In Increment api of Hbase

2014-03-31 Thread Kapil Malik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kapil Malik updated HBASE-8523:
---

   Labels: patch  (was: )
Affects Version/s: (was: 0.90.4)
   0.94.19
   Status: Patch Available  (was: Open)

Added and implemented two new APIs in HTableInterface:
double incrementColumnValue(byte[] row, byte[] family, byte[] qualifier,
  double amount) throws IOException;
and
double incrementColumnValue(byte[] row, byte[] family, byte[] qualifier,
  double amount, boolean writeToWAL) throws IOException;
This affects the following classes:
HTableInterface (and all implementations)
HRegionInterface (and the HRegionServer implementation)
HRegion

The semantics are exactly the same as for the "long" counterparts.

Please consider.
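A hedged usage sketch, assuming the proposed overloads are applied; they are not part of released HBase APIs:
{code}
// Usage sketch only -- relies on the double incrementColumnValue overload
// proposed in this patch; it does not exist in released HBase.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class DoubleIncrementExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "metrics"); // assumed table name
    // Proposed API: atomically add a double, mirroring the long-based counter.
    double newValue = table.incrementColumnValue(
        Bytes.toBytes("row1"), Bytes.toBytes("cf"), Bytes.toBytes("score"), 1.5d);
    System.out.println("new value = " + newValue);
    table.close();
  }
}
{code}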




> No support for double In Increment api of Hbase
> ---
>
> Key: HBASE-8523
> URL: https://issues.apache.org/jira/browse/HBASE-8523
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 0.94.19
>Reporter: vikram s
>  Labels: patch
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-8073) HFileOutputFormat support for offline operation

2014-03-31 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1394#comment-1394
 ] 

Jerry He commented on HBASE-8073:
-

I happened to be working on enabling security on our Hadoop/HBase cluster 
recently.  Good point on the FS permissions.

We could also expose an additional API for 
HFileOutputFormat.configureIncrementalLoad() so that we give the user/caller the 
flexibility to supply split points or other info, so that HFileOutputFormat does 
not have to figure this out internally.
For example, the user/caller would sample their own data and pass the resulting 
split points to HFileOutputFormat.configureIncrementalLoad().
The user could even provide a trivial input, e.g. "these are my intended split 
points: {0, 10, 20, ...}".
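A caller-side sketch of that idea; the configureIncrementalLoad overload referenced at the end is hypothetical and only stands in for the API being proposed:
{code}
// Illustration of caller-supplied split points; the commented-out call at the
// bottom is a hypothetical overload, not an existing HFileOutputFormat method.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;

public class CallerSuppliedSplits {
  public static List<ImmutableBytesWritable> trivialSplitPoints() {
    List<ImmutableBytesWritable> splits = new ArrayList<ImmutableBytesWritable>();
    for (long boundary : new long[] { 0L, 10L, 20L }) {
      splits.add(new ImmutableBytesWritable(Bytes.toBytes(boundary)));
    }
    return splits;
  }
  // Hypothetical call implied by the proposal, not an existing method:
  // HFileOutputFormat.configureIncrementalLoad(job, table, trivialSplitPoints());
}
{code}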

> HFileOutputFormat support for offline operation
> ---
>
> Key: HBASE-8073
> URL: https://issues.apache.org/jira/browse/HBASE-8073
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce
>Reporter: Nick Dimiduk
>
> When using HFileOutputFormat to generate HFiles, it inspects the region 
> topology of the target table. The split points from that table are used to 
> guide the TotalOrderPartitioner. If the target table does not exist, it is 
> first created. This imposes an unnecessary dependence on an online HBase and 
> an existing table.
> If the table exists, it can be used. However, the job can be smarter. For 
> example, if there's far more data going into the HFiles than the table 
> currently contains, the table regions aren't very useful for data split 
> points. Instead, the input data can be sampled to produce split points more 
> meaningful to the dataset. LoadIncrementalHFiles is already capable of 
> handling divergence between HFile boundaries and table regions, so this 
> should not pose any additional burden at load time.
> The proper method of sampling the data likely requires a custom input format 
> and an additional map-reduce job to perform the sampling. See a relevant 
> implementation: 
> https://github.com/alexholmes/hadoop-book/blob/master/src/main/java/com/manning/hip/ch4/sampler/ReservoirSamplerInputFormat.java



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10876) Remove Avro Connector

2014-03-31 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-10876:
-

 Summary: Remove Avro Connector
 Key: HBASE-10876
 URL: https://issues.apache.org/jira/browse/HBASE-10876
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.89-fb
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 0.89-fb


Follow trunk and remove the Avro connector.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

