[jira] [Commented] (HBASE-10289) Avoid random port usage by default JMX Server. Create Custom JMX server

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038537#comment-14038537
 ] 

Hudson commented on HBASE-10289:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #330 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/330/])
HBASE-10289 Avoid random port usage by default JMX Server. Create Custom JMX 
server (Qiang Tian) (apurtell: rev 8ac95e73aeb76bf6bc0679122c772ce093be2955)
* hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestJMXListener.java
* conf/hbase-env.sh


> Avoid random port usage by default JMX Server. Create Custom JMX server
> ---
>
> Key: HBASE-10289
> URL: https://issues.apache.org/jira/browse/HBASE-10289
> Project: HBase
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: Qiang Tian
>Priority: Minor
>  Labels: stack
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-10289-v4.patch, HBASE-10289.patch, 
> HBASE-10289_1.patch, HBASE-10289_2.patch, HBASE-10289_3.patch, 
> HBase10289-master.patch, hbase10289-0.98.patch, 
> hbase10289-doc_update-master.patch, hbase10289-master-v1.patch, 
> hbase10289-master-v2.patch
>
>
> If we enable the JMX MBean server for HMaster or a Region server through VM 
> arguments, the process will use one random port which we cannot configure.
> This can be a problem if that random port is configured for some other 
> service.
> This issue can be avoided by supporting a custom JMX Server whose ports can 
> be configured. If no ports are configured, it will continue to behave the 
> same way as now.
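The fixed-port approach the description proposes can be sketched with plain JMX, independent of HBase. This is an illustrative assumption, not HBase's actual JMXListener implementation: the class name, port, and service-URL layout below are made up for the sketch. The key idea is that creating the RMI registry and the connector server yourself pins both ports, instead of letting the default JMX agent pick a random one for the RMI server objects.

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class FixedPortJmxServer {

    // Start a JMX connector server bound to a port we control.
    public static JMXConnectorServer start(int port) throws Exception {
        // Create the RMI registry on the fixed port ourselves, so no
        // randomly chosen port is involved.
        LocateRegistry.createRegistry(port);
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi://localhost:" + port
            + "/jndi/rmi://localhost:" + port + "/jmxrmi");
        JMXConnectorServer server =
            JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        JMXConnectorServer server = start(10101); // 10101 is an arbitrary example port
        System.out.println("JMX connector active: " + server.isActive());
        server.stop();
    }
}
```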



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-19 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038521#comment-14038521
 ] 

Anoop Sam John commented on HBASE-11380:


ClientSideRegionScanner seems to be doing HRegion#startRegionOperation (as you 
said, it is used for snapshot reads). It is worth checking whether there is a 
failure in closing ClientSideRegionScanner, since that is the only place where 
we do closeRegionOperation.

Also, checking the code, I can see that in the below places we do 
region.startRegionOperation(Operation) but I am not seeing it getting closed:
RSRpcServices
* compactRegion
* mergeRegions
* splitRegion
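The pairing discipline under discussion (every startRegionOperation matched by a closeRegionOperation, typically in a finally block) can be sketched with a plain ReentrantReadWriteLock. This is a simplified illustration of the pattern, not HBase's actual HRegion code; the class and method names mirror the thread's terminology only:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RegionOperationGuard {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Analogous to HRegion.startRegionOperation(): take the shared read lock.
    public void startRegionOperation() {
        lock.readLock().lock();
    }

    // Analogous to HRegion.closeRegionOperation(): release the read lock.
    public void closeRegionOperation() {
        lock.readLock().unlock();
    }

    public int readHoldCount() {
        return lock.getReadHoldCount();
    }

    public static void main(String[] args) {
        RegionOperationGuard guard = new RegionOperationGuard();
        guard.startRegionOperation();
        try {
            // ... the actual region operation (compact, merge, split, ...) ...
        } finally {
            // Without this finally, every call path that throws or returns
            // early leaks one read-lock hold.
            guard.closeRegionOperation();
        }
        System.out.println(guard.readHoldCount()); // prints 0
    }
}
```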

> HRegion lock object is not being released properly, leading to snapshot 
> failure
> ---
>
> Key: HBASE-11380
> URL: https://issues.apache.org/jira/browse/HBASE-11380
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.3
>Reporter: Craig Condit
> Attachments: 11380-v1.txt
>
>
> Background:
> We are attempting to create ~ 750 table snapshots on a nightly basis for use 
> in MR jobs. The jobs are run in batches, with a maximum of around 20 jobs 
> running simultaneously.
> We have started to see the following in our region server logs (after < 1 day 
> uptime):
> {noformat}
> java.lang.Error: Maximum lock count exceeded
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:531)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:491)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5904)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5798)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5761)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:4891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4838)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRow(HRegion.java:4829)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.mutateRows(HRegionServer.java:4390)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3362)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> Not sure of the cause, but the result is that snapshots cannot be created. We 
> see this in our client logs:
> {noformat}
> Exception in thread "main" 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=test-snapshot-20140619143753294 table=test type=FLUSH } had an error.  
> Procedure test-snapshot-20140619143753294 { 
> waiting=[p3plpadata038.internal,60020,1403140682587, 
> p3plpadata056.internal,60020,1403140865123, 
> p3plpadata072.internal,60020,1403141022569] 
> done=[p3plpadata023.internal,60020,1403140552227, 
> p3plpadata009.internal,60020,1403140487826] }
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2907)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.u

[jira] [Commented] (HBASE-6192) Document ACL matrix in the book

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038478#comment-14038478
 ] 

Hadoop QA commented on HBASE-6192:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651605/HBASE-6192-3.patch
  against trunk revision .
  ATTACHMENT ID: 12651605

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+HBase. Before using the table, read through the information about 
how to interpret it.
+  For the most part, permissions work in an expected way, with the 
following caveats:
+  CheckAndPut and CheckAndDelete 
operations will fail if the user does not have both
+  Increment and Append operations do 
not require Read access.
+  
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java,
+
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java.

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9802//console

This message is automatically generated.

> Document ACL matrix in the book
> ---
>
> Key: HBASE-6192
> URL: https://issues.apache.org/jira/browse/HBASE-6192
> Project: HBase
>  Issue Type: Task
>  Components: documentation, security
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
>  Labels: documentation, security
> Fix For: 0.99.0
>
> Attachments: HBASE-6192-2.patch, HBASE-6192-3.patch, 
> HBASE-6192-rebased.patch, HBASE-6192.patch, HBase Security-ACL Matrix.pdf, 
> HBase Security-ACL Matrix.pdf, HBase Security-ACL Matrix.pdf, HBase 
> Security-ACL Matrix.xls, HBase Security-ACL Matrix.xls, HBase Security-ACL 
> Matrix.xls
>
>
> We have an excellent matrix at 
> https://issues.apache.org/jira/secure/attachment/12531252/Security-ACL%20Matrix.pdf
>  for ACL. Once the changes are done, we can adapt that and put it in the 
> book, also add some more documentation about the new authorization features. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-4931) CopyTable instructions could be improved.

2014-06-19 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-4931:
---

Status: Patch Available  (was: Open)

> CopyTable instructions could be improved.
> -
>
> Key: HBASE-4931
> URL: https://issues.apache.org/jira/browse/HBASE-4931
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, mapreduce
>Affects Versions: 0.92.0, 0.90.4
>Reporter: Jonathan Hsieh
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-4931.patch
>
>
> The book and the usage instructions could be improved to include more 
> details and caveats, and to better explain usage.
> One example in particular could be updated to refer to 
> ReplicationRegionInterface and ReplicationRegionServer in their current 
> locations (o.a.h.h.client.replication and o.a.h.h.replication.regionserver), 
> and to better explain why one would use particular arguments.
> {code}
> $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable
> --rs.class=org.apache.hadoop.hbase.ipc.ReplicationRegionInterface
> --rs.impl=org.apache.hadoop.hbase.regionserver.replication.ReplicationRegionServer
> --starttime=1265875194289 --endtime=1265878794289
> --peer.adr=server1,server2,server3:2181:/hbase TestTable
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-4931) CopyTable instructions could be improved.

2014-06-19 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-4931:
---

Attachment: HBASE-4931.patch

The usage info in the command was better than the docs, so I just pasted it in. 
I also added some info to the usage message. I don't know how to address the 
comment about the actionable error message, and I don't have enough info about 
what the docs should give warnings about (see the next comment).

> CopyTable instructions could be improved.
> -
>
> Key: HBASE-4931
> URL: https://issues.apache.org/jira/browse/HBASE-4931
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, mapreduce
>Affects Versions: 0.90.4, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-4931.patch
>
>
> The book and the usage instructions could be improved to include more 
> details and caveats, and to better explain usage.
> One example in particular could be updated to refer to 
> ReplicationRegionInterface and ReplicationRegionServer in their current 
> locations (o.a.h.h.client.replication and o.a.h.h.replication.regionserver), 
> and to better explain why one would use particular arguments.
> {code}
> $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable
> --rs.class=org.apache.hadoop.hbase.ipc.ReplicationRegionInterface
> --rs.impl=org.apache.hadoop.hbase.regionserver.replication.ReplicationRegionServer
> --starttime=1265875194289 --endtime=1265878794289
> --peer.adr=server1,server2,server3:2181:/hbase TestTable
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038420#comment-14038420
 ] 

Ted Yu commented on HBASE-11380:


I went through methods in HRegion which call startRegionOperation().
The one in processRowsWithLocks() is the only place where accompanying 
closeRegionOperation() should be added.

In the region server, there may be other places where the lock is not released 
properly.
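The "Maximum lock count exceeded" error in the stack trace is what unmatched lock acquisitions eventually produce: ReentrantReadWriteLock caps the total shared (read) hold count at 65535, so leaked read locks accumulate until acquisition throws java.lang.Error. A minimal standalone demonstration of the accumulation, not HBase code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockLeakDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        // Each acquisition without a matching unlock raises the shared hold
        // count; ReentrantReadWriteLock caps the total at 65535, after which
        // acquiring throws java.lang.Error("Maximum lock count exceeded").
        for (int i = 0; i < 5; i++) {
            lock.readLock().lock(); // simulated leak: no unlock
        }
        System.out.println(lock.getReadHoldCount()); // prints 5
        // Balance the locks so the demo itself does not leak.
        for (int i = 0; i < 5; i++) {
            lock.readLock().unlock();
        }
    }
}
```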

> HRegion lock object is not being released properly, leading to snapshot 
> failure
> ---
>
> Key: HBASE-11380
> URL: https://issues.apache.org/jira/browse/HBASE-11380
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.3
>Reporter: Craig Condit
> Attachments: 11380-v1.txt
>
>
> Background:
> We are attempting to create ~ 750 table snapshots on a nightly basis for use 
> in MR jobs. The jobs are run in batches, with a maximum of around 20 jobs 
> running simultaneously.
> We have started to see the following in our region server logs (after < 1 day 
> uptime):
> {noformat}
> java.lang.Error: Maximum lock count exceeded
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:531)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:491)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5904)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5798)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5761)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:4891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4838)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRow(HRegion.java:4829)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.mutateRows(HRegionServer.java:4390)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3362)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> Not sure of the cause, but the result is that snapshots cannot be created. We 
> see this in our client logs:
> {noformat}
> Exception in thread "main" 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=test-snapshot-20140619143753294 table=test type=FLUSH } had an error.  
> Procedure test-snapshot-20140619143753294 { 
> waiting=[p3plpadata038.internal,60020,1403140682587, 
> p3plpadata056.internal,60020,1403140865123, 
> p3plpadata072.internal,60020,1403141022569] 
> done=[p3plpadata023.internal,60020,1403140552227, 
> p3plpadata009.internal,60020,1403140487826] }
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2907)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Threa

[jira] [Assigned] (HBASE-4931) CopyTable instructions could be improved.

2014-06-19 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones reassigned HBASE-4931:
--

Assignee: Misty Stanley-Jones

> CopyTable instructions could be improved.
> -
>
> Key: HBASE-4931
> URL: https://issues.apache.org/jira/browse/HBASE-4931
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, mapreduce
>Affects Versions: 0.90.4, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Misty Stanley-Jones
>
> The book and the usage instructions could be improved to include more 
> details and caveats, and to better explain usage.
> One example in particular could be updated to refer to 
> ReplicationRegionInterface and ReplicationRegionServer in their current 
> locations (o.a.h.h.client.replication and o.a.h.h.replication.regionserver), 
> and to better explain why one would use particular arguments.
> {code}
> $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable
> --rs.class=org.apache.hadoop.hbase.ipc.ReplicationRegionInterface
> --rs.impl=org.apache.hadoop.hbase.regionserver.replication.ReplicationRegionServer
> --starttime=1265875194289 --endtime=1265878794289
> --peer.adr=server1,server2,server3:2181:/hbase TestTable
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-6192) Document ACL matrix in the book

2014-06-19 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-6192:
---

Attachment: HBASE-6192-3.patch

Integrated feedback from [~mbertozzi] and [~apurtell]. Let me know what further 
improvements can be made.

> Document ACL matrix in the book
> ---
>
> Key: HBASE-6192
> URL: https://issues.apache.org/jira/browse/HBASE-6192
> Project: HBase
>  Issue Type: Task
>  Components: documentation, security
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
>  Labels: documentation, security
> Fix For: 0.99.0
>
> Attachments: HBASE-6192-2.patch, HBASE-6192-3.patch, 
> HBASE-6192-rebased.patch, HBASE-6192.patch, HBase Security-ACL Matrix.pdf, 
> HBase Security-ACL Matrix.pdf, HBase Security-ACL Matrix.pdf, HBase 
> Security-ACL Matrix.xls, HBase Security-ACL Matrix.xls, HBase Security-ACL 
> Matrix.xls
>
>
> We have an excellent matrix at 
> https://issues.apache.org/jira/secure/attachment/12531252/Security-ACL%20Matrix.pdf
>  for ACL. Once the changes are done, we can adapt that and put it in the 
> book, also add some more documentation about the new authorization features. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11363) Access checks in preCompact and preCompactSelection are out of sync

2014-06-19 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038322#comment-14038322
 ] 

Anoop Sam John commented on HBASE-11363:


Would it be better to do the AC check in the preCompactScannerOpen hook, Andy?

The preCompact() hook is called after the scanners are open and the heap is 
created, but preCompactScannerOpen is called first. What do you say?

+1 for the removal of preCompactSelection() code.

> Access checks in preCompact and preCompactSelection are out of sync
> ---
>
> Key: HBASE-11363
> URL: https://issues.apache.org/jira/browse/HBASE-11363
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.3
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11363.patch
>
>
> As discussed on HBASE-6192, it looks like someone cut and pasted the access 
> check from preCompact into preCompactSelection at one time and, later, 
> another change was made that relaxed permissions for compaction requests from 
> ADMIN to ADMIN|CREATE.
> We do not need an access check in preCompactSelection since a request to 
> compact is already mediated by preCompact.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-11363) Access checks in preCompact and preCompactSelection are out of sync

2014-06-19 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038322#comment-14038322
 ] 

Anoop Sam John edited comment on HBASE-11363 at 6/20/14 3:27 AM:
-

Would it be better to do the AC check in the preCompactScannerOpen hook, Andy?

The preCompact() hook is called after the scanners are open and the heap is 
created, but preCompactScannerOpen is called first. What do you say?

+1 for the removal of preCompactSelection() code.


was (Author: anoop.hbase):
It would be better to do the AC check in preCompactScannerOpen hook Andy?

preCompact() hook is called after the scanner are open and heap is created. But 
preCompactScannerOpen is called 1st. What do you say?

+1 for the removal of preCompactSelection() code.

> Access checks in preCompact and preCompactSelection are out of sync
> ---
>
> Key: HBASE-11363
> URL: https://issues.apache.org/jira/browse/HBASE-11363
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.3
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11363.patch
>
>
> As discussed on HBASE-6192, it looks like someone cut and pasted the access 
> check from preCompact into preCompactSelection at one time and, later, 
> another change was made that relaxed permissions for compaction requests from 
> ADMIN to ADMIN|CREATE.
> We do not need an access check in preCompactSelection since a request to 
> compact is already mediated by preCompact.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11362) Minor improvements to LoadTestTool and PerformanceEvaluation

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038315#comment-14038315
 ] 

Hudson commented on HBASE-11362:


SUCCESS: Integrated in HBase-0.98 #348 (See 
[https://builds.apache.org/job/HBase-0.98/348/])
HBASE-11362 Minor improvements to LoadTestTool and PerformanceEvaluation 
(Vandana Ayyalasomayajula) (apurtell: rev 
0cce7d16a4b6995388d7e5cb15846e5e8ee93c6e)
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java


> Minor improvements to LoadTestTool and PerformanceEvaluation
> 
>
> Key: HBASE-11362
> URL: https://issues.apache.org/jira/browse/HBASE-11362
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11362_1.patch, HBASE-11362_2.patch
>
>
> The current LoadTestTool can be improved to accept additional options like
> number of splits and deferred log flush. Similarly performance evaluation can 
> be extended to accept bloom filters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10289) Avoid random port usage by default JMX Server. Create Custom JMX server

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038314#comment-14038314
 ] 

Hudson commented on HBASE-10289:


SUCCESS: Integrated in HBase-0.98 #348 (See 
[https://builds.apache.org/job/HBase-0.98/348/])
HBASE-10289 Avoid random port usage by default JMX Server. Create Custom JMX 
server (Qiang Tian) (apurtell: rev 8ac95e73aeb76bf6bc0679122c772ce093be2955)
* conf/hbase-env.sh
* hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestJMXListener.java


> Avoid random port usage by default JMX Server. Create Custom JMX server
> ---
>
> Key: HBASE-10289
> URL: https://issues.apache.org/jira/browse/HBASE-10289
> Project: HBase
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: Qiang Tian
>Priority: Minor
>  Labels: stack
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-10289-v4.patch, HBASE-10289.patch, 
> HBASE-10289_1.patch, HBASE-10289_2.patch, HBASE-10289_3.patch, 
> HBase10289-master.patch, hbase10289-0.98.patch, 
> hbase10289-doc_update-master.patch, hbase10289-master-v1.patch, 
> hbase10289-master-v2.patch
>
>
> If we enable the JMX MBean server for HMaster or a Region server through VM 
> arguments, the process will use one random port which we cannot configure.
> This can be a problem if that random port is configured for some other 
> service.
> This issue can be avoided by supporting a custom JMX Server whose ports can 
> be configured. If no ports are configured, it will continue to behave the 
> same way as now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11362) Minor improvements to LoadTestTool and PerformanceEvaluation

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038313#comment-14038313
 ] 

Hudson commented on HBASE-11362:


FAILURE: Integrated in HBase-TRUNK #5222 (See 
[https://builds.apache.org/job/HBase-TRUNK/5222/])
HBASE-11362 Minor improvements to LoadTestTool and PerformanceEvaluation 
(Vandana Ayyalasomayajula) (apurtell: rev 
890618bb4265d24f6c02de61ff107e481fd6)
* hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java


> Minor improvements to LoadTestTool and PerformanceEvaluation
> 
>
> Key: HBASE-11362
> URL: https://issues.apache.org/jira/browse/HBASE-11362
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11362_1.patch, HBASE-11362_2.patch
>
>
> The current LoadTestTool can be improved to accept additional options like
> number of splits and deferred log flush. Similarly performance evaluation can 
> be extended to accept bloom filters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11362) Minor improvements to LoadTestTool and PerformanceEvaluation

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038294#comment-14038294
 ] 

Hudson commented on HBASE-11362:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #329 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/329/])
HBASE-11362 Minor improvements to LoadTestTool and PerformanceEvaluation 
(Vandana Ayyalasomayajula) (apurtell: rev 
0cce7d16a4b6995388d7e5cb15846e5e8ee93c6e)
* hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java


> Minor improvements to LoadTestTool and PerformanceEvaluation
> 
>
> Key: HBASE-11362
> URL: https://issues.apache.org/jira/browse/HBASE-11362
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11362_1.patch, HBASE-11362_2.patch
>
>
> The current LoadTestTool can be improved to accept additional options like
> number of splits and deferred log flush. Similarly performance evaluation can 
> be extended to accept bloom filters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038283#comment-14038283
 ] 

Andrew Purtell commented on HBASE-11118:


No, none are PB compatible. They can't be, if we are to meet the desired goal 
of eliminating serde and copying costs. As a longer-term project, absolutely. 
Making a note here. 

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 8.bytestringer.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-8-0.98.patch.gz, HBASE-8-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatic way to fix this when using fat 
> jars.





[jira] [Commented] (HBASE-11382) Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put wrt timestamp)

2014-06-19 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038277#comment-14038277
 ] 

Anoop Sam John commented on HBASE-11382:


Thanks for taking a look at this area.

{code}
+delete.setTimestamp(ts);
+delete.deleteColumn(FAMILY, QUALIFIER);
+Assert.assertEquals(ts, delete.getTimeStamp());
{code}
It is not worth asserting only the ts read back from the Delete object; that 
is just a setter/getter check. The issue was that when the Delete object is 
constructed with a ts, the KVs added to it (every call like deleteColumn adds 
KV objects to the familyMap inside Delete) were not honoring this ts. So you 
need to assert the TS in those KVs.

The new file also needs a license header.

As long as it is not spinning up a mini cluster, adding a new test class in 
this area is fine.
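A self-contained sketch of the assertion pattern described above, using a simplified stand-in rather than the real HBase client API (MockDelete, its Cell type, and the method names are illustrative only): the meaningful check is the timestamp carried by each cell added after construction, not just the mutation's top-level getter.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for an HBase Delete mutation: cells added via
// deleteColumn() must inherit the mutation's timestamp (the HBASE-10964 fix).
class MockDelete {
    static final class Cell {
        final String family, qualifier;
        final long ts;
        Cell(String f, String q, long ts) {
            this.family = f; this.qualifier = q; this.ts = ts;
        }
    }

    private final long ts;
    private final List<Cell> cells = new ArrayList<>();

    MockDelete(long ts) { this.ts = ts; }
    long getTimeStamp() { return ts; }

    // The behavior under test: each added cell honors the mutation timestamp.
    void deleteColumn(String family, String qualifier) {
        cells.add(new Cell(family, qualifier, ts));
    }

    List<Cell> cells() { return cells; }
}

public class DeleteTsCheck {
    public static void main(String[] args) {
        long ts = 1234L;
        MockDelete delete = new MockDelete(ts);
        delete.deleteColumn("fam", "qual");
        // Asserting only getTimeStamp() would pass even with the bug present;
        // the meaningful assertion is the timestamp on each added cell.
        for (MockDelete.Cell c : delete.cells()) {
            if (c.ts != ts) throw new AssertionError("cell ts not honored");
        }
        System.out.println(delete.cells().get(0).ts); // prints 1234
    }
}
```

The same shape applies to the real test: construct the Delete with a ts, call deleteColumn, then walk the familyMap and assert each KV's timestamp.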

> Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put 
> wrt timestamp)
> ---
>
> Key: HBASE-11382
> URL: https://issues.apache.org/jira/browse/HBASE-11382
> Project: HBase
>  Issue Type: Bug
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Attachments: HBASE-11382.patch
>
>
> Adding a small unit test verifying that a Delete mutation honors the 
> timestamp of the Delete object.





[jira] [Commented] (HBASE-11363) Access checks in preCompact and preCompactSelection are out of sync

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038275#comment-14038275
 ] 

Hadoop QA commented on HBASE-11363:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651568/HBASE-11363.patch
  against trunk revision .
  ATTACHMENT ID: 12651568

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9801//console

This message is automatically generated.

> Access checks in preCompact and preCompactSelection are out of sync
> ---
>
> Key: HBASE-11363
> URL: https://issues.apache.org/jira/browse/HBASE-11363
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.3
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11363.patch
>
>
> As discussed on HBASE-6192, it looks like someone cut and pasted the access 
> check from preCompact into preCompactSelection at one time and, later, 
> another change was made that relaxed permissions for compaction requests from 
> ADMIN to ADMIN|CREATE.
> We do not need an access check in preCompactSelection since a request to 
> compact is already mediated by preCompact.





[jira] [Commented] (HBASE-11375) Validate compile-protobuf profile in test-patch.sh

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038279#comment-14038279
 ] 

Hudson commented on HBASE-11375:


SUCCESS: Integrated in HBase-TRUNK #5221 (See 
[https://builds.apache.org/job/HBase-TRUNK/5221/])
HBASE-11375 Validate compile-protobuf profile in test-patch.sh (tedyu: rev 
8c8d9d50085ddd31379c2b7319162ab472502f14)
* dev-support/test-patch.sh


> Validate compile-protobuf profile in test-patch.sh
> --
>
> Key: HBASE-11375
> URL: https://issues.apache.org/jira/browse/HBASE-11375
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 11375-v1.txt, 11375-v2.txt
>
>
> compile-protobuf profile sometimes doesn't compile - latest issue being 
> HBASE-11373
> test-patch.sh should validate that compile-protobuf profile compiles. This 
> would discover such issues sooner.





[jira] [Commented] (HBASE-9345) Add support for specifying filters in scan

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038278#comment-14038278
 ] 

Hudson commented on HBASE-9345:
---

SUCCESS: Integrated in HBase-TRUNK #5221 (See 
[https://builds.apache.org/job/HBase-TRUNK/5221/])
HBASE-9345 Add support for specifying filters in scan (Virag Kothari) (stack: 
rev b5db1432806ad49a04ea7866856e7321ea252adb)
* hbase-server/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/rest/RESTServlet.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/rest/Constants.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java


> Add support for specifying filters in scan
> --
>
> Key: HBASE-9345
> URL: https://issues.apache.org/jira/browse/HBASE-9345
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Virag Kothari
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-9345_trunk.patch, HBASE-9345_trunk.patch, 
> HBASE_9345_trunk.patch
>
>
> In the implementation of stateless scanner from HBase-9343, the support for 
> specifying filters is missing. This JIRA aims to implement support for filter 
> specification.





[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString"

2014-06-19 Thread ryan rawson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038252#comment-14038252
 ] 

ryan rawson commented on HBASE-11118:
-

Does it make sense to have a non-protobuf-dependent RPC framework?  At this
point, isn't protobuf not just 'turn structs into bytes' but also deeply
entangled in HBase?  Seems like a longer-term project.







[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString"

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038246#comment-14038246
 ] 

stack commented on HBASE-11118:
---

[~apurtell] None are pb compat on quick read?  (It's not mentioned in the 
matrix...)






[jira] [Commented] (HBASE-11344) Hide row keys and such from the web UIs

2014-06-19 Thread Jieshan Bean (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038214#comment-14038214
 ] 

Jieshan Bean commented on HBASE-11344:
--

+1 on this idea. We are suffering from the same security problem.

> Hide row keys and such from the web UIs
> ---
>
> Key: HBASE-11344
> URL: https://issues.apache.org/jira/browse/HBASE-11344
> Project: HBase
>  Issue Type: Improvement
>Reporter: Devaraj Das
> Fix For: 0.99.0
>
>
> The table details on the master UI lists the start row keys of the regions. 
> The row keys might have sensitive data. We should hide them based on whether 
> or not the accessing user has the required authorization to view the table. 
> To start with, we could make the display of row keys and such based on a 
> configuration being true or false. If it is false, such potentially sensitive 
> data is never displayed on the web UI.





[jira] [Commented] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-19 Thread Craig Condit (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038202#comment-14038202
 ] 

Craig Condit commented on HBASE-11380:
--

We see this on many tables, but only after creating lots of snapshots. So I 
suspect this code may not be the original cause, but is being affected (since 
somewhere else we have already taken the lock 32K times).

We are using only the AccessController coprocessor.
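For context on the lock-count ceiling mentioned here: the "Maximum lock count exceeded" error in the stack trace comes from the JDK itself. ReentrantReadWriteLock packs the shared (read) hold count into 16 bits of the AQS state, so a lock can be read-acquired at most 65535 times before java.lang.Error is thrown. A self-contained, JDK-only sketch (not HBase code) that reproduces the limit:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockLimitDemo {
    // Counts how many reentrant read acquisitions succeed on a single
    // ReentrantReadWriteLock before the JDK throws java.lang.Error.
    static int maxReadHolds() {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        int acquired = 0;
        try {
            while (true) {
                rwl.readLock().lock();
                acquired++;
            }
        } catch (Error e) {
            // "Maximum lock count exceeded": the shared hold count occupies
            // 16 bits of the AQS state, so the cap is (1 << 16) - 1 = 65535.
        }
        return acquired;
    }

    public static void main(String[] args) {
        System.out.println(maxReadHolds()); // prints 65535
    }
}
```

So any code path that acquires the region read lock without a matching release will eventually hit this ceiling, which matches the leak-until-failure behavior described in this issue.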

> HRegion lock object is not being released properly, leading to snapshot 
> failure
> ---
>
> Key: HBASE-11380
> URL: https://issues.apache.org/jira/browse/HBASE-11380
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.3
>Reporter: Craig Condit
> Attachments: 11380-v1.txt
>
>
> Background:
> We are attempting to create ~ 750 table snapshots on a nightly basis for use 
> in MR jobs. The jobs are run in batches, with a maximum of around 20 jobs 
> running simultaneously.
> We have started to see the following in our region server logs (after < 1 day 
> uptime):
> {noformat}
> java.lang.Error: Maximum lock count exceeded
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:531)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:491)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5904)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5798)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5761)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:4891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4838)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRow(HRegion.java:4829)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.mutateRows(HRegionServer.java:4390)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3362)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> Not sure of the cause, but the result is that snapshots cannot be created. We 
> see this in our client logs:
> {noformat}
> Exception in thread "main" 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=test-snapshot-20140619143753294 table=test type=FLUSH } had an error.  
> Procedure test-snapshot-20140619143753294 { 
> waiting=[p3plpadata038.internal,60020,1403140682587, 
> p3plpadata056.internal,60020,1403140865123, 
> p3plpadata072.internal,60020,1403141022569] 
> done=[p3plpadata023.internal,60020,1403140552227, 
> p3plpadata009.internal,60020,1403140487826] }
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2907)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lan

[jira] [Commented] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-19 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038199#comment-14038199
 ] 

Enis Soztutar commented on HBASE-11380:
---

From what I remember, HRegion#processRowsWithLocks will only be called for 
operations that have a RowProcessor, which we only use for meta edits 
(MultiRowMutationProcessor). 
We should do the patch, but that might not be the root cause. [~ccondit] do you 
see this on a regular table? Are you using any coprocessors? 


[jira] [Updated] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11380:
---

Attachment: 11380-v1.txt

Patch v1 encloses the processor.preProcess() call in a try / catch block.
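A minimal, self-contained model of the pattern that patch applies (illustrative Java only, not the actual HRegion code; the names processRowsWithLocks and preProcess merely mirror the methods in the stack trace): wrap the callback that can throw so the read lock acquired at the start of the operation is released on failure instead of leaking a hold each time.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified model of the fix: if the row-processor callback throws after
// the region lock was acquired, the lock must be released before the
// exception propagates, or read-lock holds leak until the 65535 cap is hit.
public class LockLeakFix {
    static final ReentrantReadWriteLock regionLock = new ReentrantReadWriteLock();

    static void preProcess() {
        throw new RuntimeException("simulated coprocessor failure");
    }

    static void processRowsWithLocks() {
        regionLock.readLock().lock();            // startRegionOperation()
        try {
            preProcess();
            // ... mutate rows ...
        } catch (RuntimeException e) {
            regionLock.readLock().unlock();      // the fix: release on failure
            throw e;
        }
        regionLock.readLock().unlock();          // closeRegionOperation()
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            try { processRowsWithLocks(); } catch (RuntimeException ignored) { }
        }
        // With the release-on-failure fix, no read holds leak across calls.
        System.out.println(regionLock.getReadLockCount()); // prints 0
    }
}
```

Without the catch block, each failing call would leave getReadLockCount() one higher, which is the leak pattern this issue reports.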


[jira] [Commented] (HBASE-11375) Validate compile-protobuf profile in test-patch.sh

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038173#comment-14038173
 ] 

Hadoop QA commented on HBASE-11375:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651521/11375-v2.txt
  against trunk revision .
  ATTACHMENT ID: 12651521

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  echo "$MVN clean install -DskipTests -Pcompile-protobuf -X 
-D${PROJECT_NAME}PatchProcess > $PATCH_DIR/patchProtocErrors.txt 2>&1"
+  $MVN clean install -DskipTests -Pcompile-protobuf -X 
-D${PROJECT_NAME}PatchProcess  > $PATCH_DIR/patchProtocErrors.txt 2>&1
+{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings."

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestMultiParallel

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9800//console

This message is automatically generated.

> Validate compile-protobuf profile in test-patch.sh
> --
>
> Key: HBASE-11375
> URL: https://issues.apache.org/jira/browse/HBASE-11375
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 11375-v1.txt, 11375-v2.txt
>
>
> The compile-protobuf profile sometimes doesn't compile - the latest issue being 
> HBASE-11373.
> test-patch.sh should validate that the compile-protobuf profile compiles. This 
> would discover such issues sooner.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString"

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038172#comment-14038172
 ] 

Andrew Purtell commented on HBASE-11118:


For future reference, there are three post-protobuf options for consideration 
(at least): 
https://kentonv.github.io/capnproto/news/2014-06-17-capnproto-flatbuffers-sbe.html
 

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 11118.bytestringer.txt, 
> 11118.suggested.undoing.optimization.on.clientside.txt, 
> 11118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatic way to fix this when using fat 
> jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11363) Access checks in preCompact and preCompactSelection are out of sync

2014-06-19 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11363:
---

Status: Patch Available  (was: Open)

> Access checks in preCompact and preCompactSelection are out of sync
> ---
>
> Key: HBASE-11363
> URL: https://issues.apache.org/jira/browse/HBASE-11363
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.3
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11363.patch
>
>
> As discussed on HBASE-6192, it looks like someone cut and pasted the access 
> check from preCompact into preCompactSelection at one time and, later, 
> another change was made that relaxed permissions for compaction requests from 
> ADMIN to ADMIN|CREATE.
> We do not need an access check in preCompactSelection since a request to 
> compact is already mediated by preCompact.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11363) Access checks in preCompact and preCompactSelection are out of sync

2014-06-19 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11363:
---

Attachment: HBASE-11363.patch

Attached patch. AC tests pass locally.

> Access checks in preCompact and preCompactSelection are out of sync
> ---
>
> Key: HBASE-11363
> URL: https://issues.apache.org/jira/browse/HBASE-11363
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.3
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11363.patch
>
>
> As discussed on HBASE-6192, it looks like someone cut and pasted the access 
> check from preCompact into preCompactSelection at one time and, later, 
> another change was made that relaxed permissions for compaction requests from 
> ADMIN to ADMIN|CREATE.
> We do not need an access check in preCompactSelection since a request to 
> compact is already mediated by preCompact.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10289) Avoid random port usage by default JMX Server. Create Custome JMX server

2014-06-19 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10289:
---

   Resolution: Fixed
Fix Version/s: 0.98.4
   Status: Resolved  (was: Patch Available)

Applied 'hbase10289-0.98.patch' and pushed, new unit test passes locally.

> Avoid random port usage by default JMX Server. Create Custome JMX server
> 
>
> Key: HBASE-10289
> URL: https://issues.apache.org/jira/browse/HBASE-10289
> Project: HBase
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: Qiang Tian
>Priority: Minor
>  Labels: stack
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-10289-v4.patch, HBASE-10289.patch, 
> HBASE-10289_1.patch, HBASE-10289_2.patch, HBASE-10289_3.patch, 
> HBase10289-master.patch, hbase10289-0.98.patch, 
> hbase10289-doc_update-master.patch, hbase10289-master-v1.patch, 
> hbase10289-master-v2.patch
>
>
> If we enable the JMX MBean server for HMaster or a region server through VM 
> arguments, the process will use one random port which we cannot configure.
> This can be a problem if that random port is configured for some other 
> service.
> This issue can be avoided by supporting a custom JMX server.
> The ports can be configured. If no ports are configured, it will 
> continue to work the same way as now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
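The fix above replaces the default JMX agent (which opens a second, random RMI port) with a listener bound to fixed ports. A minimal sketch of that general technique — pinning both the RMI registry port and the RMI connector port — is shown below; the class name and port numbers are illustrative, not the committed HBase implementation:

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class FixedPortJmxServer {

    // Start a JMX connector server bound to fixed RMI registry and connector
    // ports, so firewall rules can target known port numbers instead of the
    // extra random port the default -Dcom.sun.management.jmxremote agent opens.
    public static JMXConnectorServer start(int registryPort, int connectorPort)
            throws Exception {
        LocateRegistry.createRegistry(registryPort);
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Embedding both ports in the service URL is what prevents the
        // second port from being chosen at random.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi://localhost:" + connectorPort
            + "/jndi/rmi://localhost:" + registryPort + "/jmxrmi");
        JMXConnectorServer server = JMXConnectorServerFactory
            .newJMXConnectorServer(url, new HashMap<String, Object>(), mbs);
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        JMXConnectorServer server = start(10101, 10102);
        System.out.println(server.isActive());
        server.stop();
    }
}
```

With only fixed ports in play, both can be opened in a firewall, which is the operational problem the random port caused.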


[jira] [Resolved] (HBASE-11362) Minor improvements to LoadTestTool and PerformanceEvaluation

2014-06-19 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-11362.


   Resolution: Fixed
Fix Version/s: 0.98.4
   0.99.0
 Hadoop Flags: Reviewed

Committed to trunk and 0.98

> Minor improvements to LoadTestTool and PerformanceEvaluation
> 
>
> Key: HBASE-11362
> URL: https://issues.apache.org/jira/browse/HBASE-11362
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11362_1.patch, HBASE-11362_2.patch
>
>
> The current LoadTestTool can be improved to accept additional options like
> number of splits and deferred log flush. Similarly, PerformanceEvaluation can 
> be extended to accept bloom filters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11348) Make frequency and sleep times of chaos monkeys configurable

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038152#comment-14038152
 ] 

Hudson commented on HBASE-11348:


SUCCESS: Integrated in HBase-TRUNK #5220 (See 
[https://builds.apache.org/job/HBase-TRUNK/5220/])
HBASE-11348 Make frequency and sleep times of chaos monkeys configurable 
(Vandan Ayyalasomayajula) (stack: rev 5764df2974e68efd69d581478618cffe1395e547)
* src/main/docbkx/developer.xml
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/factories/UnbalanceMonkeyFactory.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/MoveRegionsOfTableAction.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/factories/MonkeyFactory.java
* hbase-it/src/test/java/org/apache/hadoop/hbase/mttr/IntegrationTestMTTR.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/factories/MonkeyConstants.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/factories/SlowDeterministicMonkeyFactory.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/UnbalanceKillAndRebalanceAction.java
* hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestBase.java


> Make frequency and sleep times of  chaos monkeys configurable 
> --
>
> Key: HBASE-11348
> URL: https://issues.apache.org/jira/browse/HBASE-11348
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.3
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
>  Labels: integration-tests
> Fix For: 0.99.0
>
> Attachments: HBASE-11348_1.patch, HBASE-11348_2.patch
>
>
> Currently the chaos monkeys used in the integration tests run with a fixed 
> configuration. It would be useful to have the frequency and sleep times be 
> configurable. That would help control the chaos the monkeys are intended 
> to create.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
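The usual pattern for making such parameters tunable — look up a property, fall back to the previously hard-coded default — can be sketched as follows. The property keys and default values here are illustrative, not the constants the committed patch defines:

```java
import java.util.Properties;

public class MonkeyConfig {

    // Read a chaos-monkey parameter from supplied properties, falling back
    // to a default, so action frequency and sleep times are tunable per run
    // without code changes.
    static long getLong(Properties props, String key, long defaultValue) {
        String v = props.getProperty(key);
        return (v == null) ? defaultValue : Long.parseLong(v);
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("sdm.action1.period", "30000"); // operator override
        // Overridden key uses the supplied value; missing key uses the default.
        System.out.println(getLong(p, "sdm.action1.period", 60000));   // 30000
        System.out.println(getLong(p, "move.regions.sleep.time", 800)); // 800
    }
}
```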


[jira] [Commented] (HBASE-10289) Avoid random port usage by default JMX Server. Create Custome JMX server

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038153#comment-14038153
 ] 

Hudson commented on HBASE-10289:


SUCCESS: Integrated in HBase-TRUNK #5220 (See 
[https://builds.apache.org/job/HBase-TRUNK/5220/])
HBASE-10289 Avoid random port usage by default JMX Server. Create Custome JMX 
server (Qiang Tian).  DOC ADDENDUM (stack: rev 
b16e36a5b2650b2ce2c7686c6653c77074481115)
* src/main/docbkx/configuration.xml


> Avoid random port usage by default JMX Server. Create Custome JMX server
> 
>
> Key: HBASE-10289
> URL: https://issues.apache.org/jira/browse/HBASE-10289
> Project: HBase
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: Qiang Tian
>Priority: Minor
>  Labels: stack
> Fix For: 0.99.0
>
> Attachments: HBASE-10289-v4.patch, HBASE-10289.patch, 
> HBASE-10289_1.patch, HBASE-10289_2.patch, HBASE-10289_3.patch, 
> HBase10289-master.patch, hbase10289-0.98.patch, 
> hbase10289-doc_update-master.patch, hbase10289-master-v1.patch, 
> hbase10289-master-v2.patch
>
>
> If we enable the JMX MBean server for HMaster or a region server through VM 
> arguments, the process will use one random port which we cannot configure.
> This can be a problem if that random port is configured for some other 
> service.
> This issue can be avoided by supporting a custom JMX server.
> The ports can be configured. If no ports are configured, it will 
> continue to work the same way as now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11338) Expand documentation on bloom filters

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038151#comment-14038151
 ] 

Hudson commented on HBASE-11338:


SUCCESS: Integrated in HBase-TRUNK #5220 (See 
[https://builds.apache.org/job/HBase-TRUNK/5220/])
HBASE-11338 Expand documentation on bloom filters (Misty Stanley-Jones) (stack: 
rev 9829bb9c24ec483b0ae013e1a62c2c946f81ca8b)
* src/main/docbkx/performance.xml


> Expand documentation on bloom filters
> -
>
> Key: HBASE-11338
> URL: https://issues.apache.org/jira/browse/HBASE-11338
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 0.99.0
>
> Attachments: HBASE-11338-1.patch, HBASE-11338-2.patch, 
> HBASE-11338.patch
>
>
> Ref Guide  could use more info on bloom filters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-11265) [89-fb] Remove shaded references to com.google

2014-06-19 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HBASE-11265.
---

Resolution: Fixed

> [89-fb] Remove shaded references to com.google
> --
>
> Key: HBASE-11265
> URL: https://issues.apache.org/jira/browse/HBASE-11265
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.89-fb
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 0.89-fb
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-11187) [89-fb] Limit the number of client threads per regionserver

2014-06-19 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HBASE-11187.
---

Resolution: Fixed

> [89-fb] Limit the number of client threads per regionserver
> ---
>
> Key: HBASE-11187
> URL: https://issues.apache.org/jira/browse/HBASE-11187
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.89-fb
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 0.89-fb
>
>
> In the client each HTable can create one or more threads per region server.  
> When there are lots of HTables and a region server is slow this can result in 
> an explosion of blocked threads.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11383) [0.89-fb] Add jitter to HLog rolling period

2014-06-19 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-11383:
-

 Summary: [0.89-fb] Add jitter to HLog rolling period
 Key: HBASE-11383
 URL: https://issues.apache.org/jira/browse/HBASE-11383
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Elliott Clark


Just in case things go wrong all at once, make sure not to DDoS the NameNode: 
add jitter to the log-rolling period so it is not hit by every server at once.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
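The standard way to add such jitter is to spread a fixed period by a uniform random offset, so many region servers do not roll their logs (and hit the NameNode) at the same instant. A sketch under that assumption — the method name and jitter ratio are illustrative, not the 0.89-fb patch itself:

```java
import java.util.concurrent.ThreadLocalRandom;

public class JitteredPeriod {

    // Spread a fixed period by +/- (jitterRatio * period). Each caller gets
    // a slightly different period, de-synchronizing periodic work across
    // servers that all started at roughly the same time.
    static long jitteredPeriodMs(long periodMs, double jitterRatio) {
        long maxJitter = (long) (periodMs * jitterRatio);
        // Uniform offset in [-maxJitter, +maxJitter]
        long offset = ThreadLocalRandom.current().nextLong(-maxJitter, maxJitter + 1);
        return periodMs + offset;
    }

    public static void main(String[] args) {
        long base = 3_600_000L; // e.g. a one-hour roll period
        long p = jitteredPeriodMs(base, 0.1);
        // Result always lands within 10% of the base period.
        System.out.println(p >= 3_240_000L && p <= 3_960_000L); // true
    }
}
```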


[jira] [Resolved] (HBASE-11258) [0.89-fb] Pull in Integration Tests from open source 0.96/trunk

2014-06-19 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HBASE-11258.
---

Resolution: Fixed

> [0.89-fb] Pull in Integration Tests from open source 0.96/trunk
> ---
>
> Key: HBASE-11258
> URL: https://issues.apache.org/jira/browse/HBASE-11258
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.89-fb
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 0.89-fb
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038137#comment-14038137
 ] 

Andrew Purtell commented on HBASE-11380:


bq. Could this lead to unreleased locks ?

You tell us, Ted. (smile)

> HRegion lock object is not being released properly, leading to snapshot 
> failure
> ---
>
> Key: HBASE-11380
> URL: https://issues.apache.org/jira/browse/HBASE-11380
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.3
>Reporter: Craig Condit
>
> Background:
> We are attempting to create ~ 750 table snapshots on a nightly basis for use 
> in MR jobs. The jobs are run in batches, with a maximum of around 20 jobs 
> running simultaneously.
> We have started to see the following in our region server logs (after < 1 day 
> uptime):
> {noformat}
> java.lang.Error: Maximum lock count exceeded
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:531)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:491)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5904)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5798)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5761)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:4891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4838)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRow(HRegion.java:4829)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.mutateRows(HRegionServer.java:4390)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3362)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> Not sure of the cause, but the result is that snapshots cannot be created. We 
> see this in our client logs:
> {noformat}
> Exception in thread "main" 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=test-snapshot-20140619143753294 table=test type=FLUSH } had an error.  
> Procedure test-snapshot-20140619143753294 { 
> waiting=[p3plpadata038.internal,60020,1403140682587, 
> p3plpadata056.internal,60020,1403140865123, 
> p3plpadata072.internal,60020,1403141022569] 
> done=[p3plpadata023.internal,60020,1403140552227, 
> p3plpadata009.internal,60020,1403140487826] }
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2907)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: 
> org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable via 
> p3plpadata060.internal,60020,1403140935958:org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowa

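The java.lang.Error in the stack trace above comes from ReentrantReadWriteLock, which caps the read-lock hold count at 65535; a read lock acquired via tryLock() but not released on some code path accumulates holds until the cap is hit. A minimal illustration of the leak pattern versus the safe pattern — the method names are illustrative, not HRegion's actual code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockLeakDemo {

    // ReentrantReadWriteLock allows at most 65535 read holds in total;
    // one leaked hold per operation eventually throws
    // "java.lang.Error: Maximum lock count exceeded".
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Buggy pattern: acquires the read lock but never releases it.
    static boolean leakyOperation() {
        if (lock.readLock().tryLock()) {
            // ... do work, but forget lock.readLock().unlock()
            return true;
        }
        return false;
    }

    // Correct pattern: release in a finally block on every path.
    static boolean safeOperation() {
        if (!lock.readLock().tryLock()) {
            return false;
        }
        try {
            return true; // ... do work
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            leakyOperation();
        }
        // Each leaked acquisition raised this thread's hold count permanently.
        System.out.println(lock.getReadHoldCount()); // 10
    }
}
```

With ~750 snapshots a night, even a rarely taken leaky path can exhaust the 65535-hold budget within a day, which matches the reported uptime before failure.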
[jira] [Commented] (HBASE-11381) Isolate and rerun unit tests that fail following a code commit

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038135#comment-14038135
 ] 

Andrew Purtell commented on HBASE-11381:


Seems a good idea to kick off another build upon failure, with some upper bound 
on retries.

The Naginator plugin page says:
{quote}
If the build fails, it will be rescheduled to run again in five minutes. For 
each consecutive unsuccessful build, the waiting time is extended by five 
minutes. The waiting time is capped at one hour, so after ~12 failing builds in 
a row, a new build will only be rescheduled once per hour.
{quote}
We don't want quite that. On private infrastructure a steady stream of nagging 
build-failure reports is the desired outcome (smile), but ASF Jenkins is a 
shared resource. 


> Isolate and rerun unit tests that fail following a code commit
> --
>
> Key: HBASE-11381
> URL: https://issues.apache.org/jira/browse/HBASE-11381
> Project: HBase
>  Issue Type: Task
>Reporter: Dima Spivak
>
> It's not uncommon to see that, after changes are committed to a branch, a set 
> of unit tests will begin to fail and Hudson will add a comment to the 
> relevant JIRAs reporting on the unsuccessful build. Unfortunately, these test 
> failures are not always indicative of regressions; sometimes, the problem is 
> with infrastructure and simply rerunning the tests can make them go green 
> again.
> I propose modifying the Jenkins job that is triggered by a code commit to 
> address this problem. In particular, the job could use the reports generated 
> by the Maven Surefire plugin to generate a list of tests to rerun. These 
> tests can be set to rerun any number of times and a threshold ratio of 
> passes-to-fails can be the deciding factor in whether a real bug has been 
> introduced following a change in HBase.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11375) Validate compile-protobuf profile in test-patch.sh

2014-06-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11375:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Validate compile-protobuf profile in test-patch.sh
> --
>
> Key: HBASE-11375
> URL: https://issues.apache.org/jira/browse/HBASE-11375
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 11375-v1.txt, 11375-v2.txt
>
>
> The compile-protobuf profile sometimes doesn't compile - the latest issue being 
> HBASE-11373.
> test-patch.sh should validate that the compile-protobuf profile compiles. This 
> would discover such issues sooner.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11375) Validate compile-protobuf profile in test-patch.sh

2014-06-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11375:
---

Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed

Thanks for the review, Andrew.

> Validate compile-protobuf profile in test-patch.sh
> --
>
> Key: HBASE-11375
> URL: https://issues.apache.org/jira/browse/HBASE-11375
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 11375-v1.txt, 11375-v2.txt
>
>
> The compile-protobuf profile sometimes doesn't compile - the latest issue being 
> HBASE-11373.
> test-patch.sh should validate that the compile-protobuf profile compiles. This 
> would discover such issues sooner.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11382) Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put wrt timestamp)

2014-06-19 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-11382:


Attachment: HBASE-11382.patch

> Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put 
> wrt timestamp)
> ---
>
> Key: HBASE-11382
> URL: https://issues.apache.org/jira/browse/HBASE-11382
> Project: HBase
>  Issue Type: Bug
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Attachments: HBASE-11382.patch
>
>
> Adding a small unit test verifying that a delete mutation honors the 
> timestamp of the Delete object.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11382) Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put wrt timestamp)

2014-06-19 Thread Srikanth Srungarapu (JIRA)
Srikanth Srungarapu created HBASE-11382:
---

 Summary: Adding unit test for HBASE-10964 (Delete mutation is not 
consistent with Put wrt timestamp)
 Key: HBASE-11382
 URL: https://issues.apache.org/jira/browse/HBASE-11382
 Project: HBase
  Issue Type: Bug
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu


Adding a small unit test verifying that a delete mutation honors the 
timestamp of the Delete object.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11382) Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put wrt timestamp)

2014-06-19 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-11382:


Priority: Minor  (was: Major)

> Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put 
> wrt timestamp)
> ---
>
> Key: HBASE-11382
> URL: https://issues.apache.org/jira/browse/HBASE-11382
> Project: HBase
>  Issue Type: Bug
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
>
> Adding a small unit test verifying that a delete mutation honors the 
> timestamp of the Delete object.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11381) Isolate and rerun unit tests that fail following a code commit

2014-06-19 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038095#comment-14038095
 ] 

Dima Spivak commented on HBASE-11381:
-

To provide a few more concrete details on the logistics of this, the Jenkins 
[Naginator Plugin|https://wiki.jenkins-ci.org/display/JENKINS/Naginator+Plugin] 
seems well-suited for this task. Note that the newly triggered builds could 
specifically be set to use the previous build's artifacts so that the exact 
same bits are retested.

> Isolate and rerun unit tests that fail following a code commit
> --
>
> Key: HBASE-11381
> URL: https://issues.apache.org/jira/browse/HBASE-11381
> Project: HBase
>  Issue Type: Task
>Reporter: Dima Spivak
>
> It's not uncommon to see that, after changes are committed to a branch, a set 
> of unit tests will begin to fail and Hudson will add a comment to the 
> relevant JIRAs reporting on the unsuccessful build. Unfortunately, these test 
> failures are not always indicative of regressions; sometimes, the problem is 
> with infrastructure and simply rerunning the tests can make them go green 
> again.
> I propose modifying the Jenkins job that is triggered by a code commit to 
> address this problem. In particular, the job could use the reports generated 
> by the Maven Surefire plugin to generate a list of tests to rerun. These 
> tests can be set to rerun any number of times and a threshold ratio of 
> passes-to-fails can be the deciding factor in whether a real bug has been 
> introduced following a change in HBase.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11381) Isolate and rerun unit tests that fail following a code commit

2014-06-19 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-11381:
---

 Summary: Isolate and rerun unit tests that fail following a code 
commit
 Key: HBASE-11381
 URL: https://issues.apache.org/jira/browse/HBASE-11381
 Project: HBase
  Issue Type: Task
Reporter: Dima Spivak


It's not uncommon to see that, after changes are committed to a branch, a set 
of unit tests will begin to fail and Hudson will add a comment to the relevant 
JIRAs reporting on the unsuccessful build. Unfortunately, these test failures 
are not always indicative of regressions; sometimes, the problem is with 
infrastructure and simply rerunning the tests can make them go green again.

I propose modifying the Jenkins job that is triggered by a code commit to 
address this problem. In particular, the job could use the reports generated by 
the Maven Surefire plugin to generate a list of tests to rerun. These tests can 
be set to rerun any number of times and a threshold ratio of passes-to-fails 
can be the deciding factor in whether a real bug has been introduced following 
a change in HBase.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
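The pass-to-fail threshold decision described above can be sketched as a small helper: rerun each failed test N times, then declare a regression only when the pass ratio across reruns falls below a threshold. The class name and the 0.8 threshold are illustrative choices, not part of the proposal:

```java
public class FlakyTestJudge {

    // Decide whether a failing test is a real regression: after rerunning it
    // totalReruns times, treat it as a regression only if the fraction of
    // passing reruns is below passThreshold.
    static boolean isRegression(int passes, int totalReruns, double passThreshold) {
        if (totalReruns == 0) {
            return true; // no rerun evidence: keep the original failure
        }
        return ((double) passes / totalReruns) < passThreshold;
    }

    public static void main(String[] args) {
        // 9 of 10 reruns passed: likely an infrastructure flake.
        System.out.println(isRegression(9, 10, 0.8)); // false
        // 2 of 10 reruns passed: likely a real regression.
        System.out.println(isRegression(2, 10, 0.8)); // true
    }
}
```

Rerunning the previous build's artifacts (as suggested in the Naginator comment above) keeps the reruns testing exactly the bits that failed.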


[jira] [Commented] (HBASE-9345) Add support for specifying filters in scan

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038093#comment-14038093
 ] 

Hadoop QA commented on HBASE-9345:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12651505/HBASE_9345_trunk.patch
  against trunk revision .
  ATTACHMENT ID: 12651505

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9798//console

This message is automatically generated.

> Add support for specifying filters in scan
> --
>
> Key: HBASE-9345
> URL: https://issues.apache.org/jira/browse/HBASE-9345
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Virag Kothari
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-9345_trunk.patch, HBASE-9345_trunk.patch, 
> HBASE_9345_trunk.patch
>
>
> In the implementation of stateless scanner from HBase-9343, the support for 
> specifying filters is missing. This JIRA aims to implement support for filter 
> specification.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038087#comment-14038087
 ] 

Ted Yu commented on HBASE-11380:


Looking at HRegion#processRowsWithLocks(), I see the following call, which is 
outside the try / finally block:
{code}
// 1. Run pre-process hook
processor.preProcess(this, walEdit);
{code}
Could this lead to unreleased locks?
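The concern above can be sketched in isolation. This is a toy illustration (not HBase code; class and method names are made up) of why a hook invoked between lock acquisition and the try/finally that releases the lock leaks a hold count whenever the hook throws:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Simplified sketch of the lock-leak pattern under discussion: a shared lock
 * is acquired, then a hook runs OUTSIDE the try/finally that releases it.
 * If the hook throws, the lock is never released and the per-thread read
 * hold count grows on every failed call.
 */
public class LockLeakSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    /** Leaky variant: the hook is invoked before entering try/finally. */
    public void processLeaky(Runnable preProcessHook) {
        lock.readLock().lock();          // startRegionOperation analogue
        preProcessHook.run();            // if this throws, the read lock leaks
        try {
            // ... actual row processing ...
        } finally {
            lock.readLock().unlock();    // never reached on hook failure
        }
    }

    /** Safe variant: everything after acquisition sits inside try/finally. */
    public void processSafe(Runnable preProcessHook) {
        lock.readLock().lock();
        try {
            preProcessHook.run();
            // ... actual row processing ...
        } finally {
            lock.readLock().unlock();
        }
    }

    /** Read hold count of the calling thread, for observing the leak. */
    public int readHoldCount() {
        return lock.getReadHoldCount();
    }
}
```

Repeated leaks of this kind would eventually trip the `Maximum lock count exceeded` error quoted below, since ReentrantReadWriteLock caps the total hold count.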

> HRegion lock object is not being released properly, leading to snapshot 
> failure
> ---
>
> Key: HBASE-11380
> URL: https://issues.apache.org/jira/browse/HBASE-11380
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.3
>Reporter: Craig Condit
>
> Background:
> We are attempting to create ~ 750 table snapshots on a nightly basis for use 
> in MR jobs. The jobs are run in batches, with a maximum of around 20 jobs 
> running simultaneously.
> We have started to see the following in our region server logs (after < 1 day 
> uptime):
> {noformat}
> java.lang.Error: Maximum lock count exceeded
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:531)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:491)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5904)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5798)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5761)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:4891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4838)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRow(HRegion.java:4829)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.mutateRows(HRegionServer.java:4390)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3362)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> Not sure of the cause, but the result is that snapshots cannot be created. We 
> see this in our client logs:
> {noformat}
> Exception in thread "main" 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=test-snapshot-20140619143753294 table=test type=FLUSH } had an error.  
> Procedure test-snapshot-20140619143753294 { 
> waiting=[p3plpadata038.internal,60020,1403140682587, 
> p3plpadata056.internal,60020,1403140865123, 
> p3plpadata072.internal,60020,1403141022569] 
> done=[p3plpadata023.internal,60020,1403140552227, 
> p3plpadata009.internal,60020,1403140487826] }
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2907)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: 
> org.apache.hadoop.hbase.errorhandling.Fo

[jira] [Updated] (HBASE-9345) Add support for specifying filters in scan

2014-06-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9345:
-

   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to master (It passed tests here below).  Thanks for the patch Virag.

> Add support for specifying filters in scan
> --
>
> Key: HBASE-9345
> URL: https://issues.apache.org/jira/browse/HBASE-9345
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Virag Kothari
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-9345_trunk.patch, HBASE-9345_trunk.patch, 
> HBASE_9345_trunk.patch
>
>
> In the implementation of stateless scanner from HBase-9343, the support for 
> specifying filters is missing. This JIRA aims to implement support for filter 
> specification.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6192) Document ACL matrix in the book

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038083#comment-14038083
 ] 

Andrew Purtell commented on HBASE-6192:
---

bq. Can you grant at the RegionServer or master level or any others?

It's probably more useful to describe permissions as fitting into levels in the 
data model, as opposed to which particular daemon might be involved in decision 
making. The hierarchy is global -> namespace -> table -> cf -> cq -> cell. We 
start checking whether the user has the necessary permission bit at the top of 
the hierarchy and walk down until we find a grant. So a bit granted at table 
level dominates any grants made at the cf, cf+cq, or cell level; the user can 
do what that bit implies at any location in the table. Likewise, a bit granted 
at global scope dominates all: the user is always allowed to take that action 
everywhere.

Mostly, permissions for global administrative and schema operations are checked 
in the master, while permissions for queries and mutations are checked at the 
region level (since coprocessors can be installed on a per-table basis). We 
also do one check for the ADMIN capability at the RegionServer level: whether 
the user is allowed to issue a stop request. Some admin actions like flush, 
compact, and split requests are also checked at the region level, because 
clients can issue those directly to the regionservers.
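The top-down walk described above can be sketched as follows. This is a toy model, not the AccessController API; the class, scope strings, and method names are all illustrative:

```java
import java.util.EnumSet;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy sketch of the hierarchical check described above: walk the scopes from
 * broadest to narrowest (global -> namespace -> table -> cf -> cq) and return
 * as soon as a grant carrying the required bit is found, so a bit granted at
 * a broader scope dominates everything beneath it.
 */
public class AclWalkSketch {
    public enum Perm { READ, WRITE, CREATE, ADMIN }

    // scope key -> permission bits granted (single implicit user, for brevity)
    private final Map<String, EnumSet<Perm>> grants = new HashMap<>();

    public void grant(String scope, Perm p) {
        grants.computeIfAbsent(scope, k -> EnumSet.noneOf(Perm.class)).add(p);
    }

    /** scopes ordered broadest-first, e.g. ["global", "ns", "ns:table", "ns:table/cf"] */
    public boolean allowed(List<String> scopesBroadestFirst, Perm needed) {
        for (String scope : scopesBroadestFirst) {
            EnumSet<Perm> bits = grants.get(scope);
            if (bits != null && bits.contains(needed)) {
                return true;   // a grant at a broader level dominates
            }
        }
        return false;          // no grant found anywhere on the walk
    }
}
```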

> Document ACL matrix in the book
> ---
>
> Key: HBASE-6192
> URL: https://issues.apache.org/jira/browse/HBASE-6192
> Project: HBase
>  Issue Type: Task
>  Components: documentation, security
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
>  Labels: documentaion, security
> Fix For: 0.99.0
>
> Attachments: HBASE-6192-2.patch, HBASE-6192-rebased.patch, 
> HBASE-6192.patch, HBase Security-ACL Matrix.pdf, HBase Security-ACL 
> Matrix.pdf, HBase Security-ACL Matrix.pdf, HBase Security-ACL Matrix.xls, 
> HBase Security-ACL Matrix.xls, HBase Security-ACL Matrix.xls
>
>
> We have an excellent matrix at 
> https://issues.apache.org/jira/secure/attachment/12531252/Security-ACL%20Matrix.pdf
>  for ACL. Once the changes are done, we can adapt that and put it in the 
> book, also add some more documentation about the new authorization features. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-19 Thread Craig Condit (JIRA)
Craig Condit created HBASE-11380:


 Summary: HRegion lock object is not being released properly, 
leading to snapshot failure
 Key: HBASE-11380
 URL: https://issues.apache.org/jira/browse/HBASE-11380
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.3
Reporter: Craig Condit


Background:

We are attempting to create ~ 750 table snapshots on a nightly basis for use in 
MR jobs. The jobs are run in batches, with a maximum of around 20 jobs running 
simultaneously.

We have started to see the following in our region server logs (after < 1 day 
uptime):

{noformat}
java.lang.Error: Maximum lock count exceeded
at 
java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:531)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:491)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5904)
at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5891)
at 
org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5798)
at 
org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5761)
at 
org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:4891)
at 
org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4856)
at 
org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4838)
at 
org.apache.hadoop.hbase.regionserver.HRegion.mutateRow(HRegion.java:4829)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.mutateRows(HRegionServer.java:4390)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3362)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
at java.lang.Thread.run(Thread.java:744)
{noformat}

Not sure of the cause, but the result is that snapshots cannot be created. We 
see this in our client logs:

{noformat}
Exception in thread "main" 
org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
ss=test-snapshot-20140619143753294 table=test type=FLUSH } had an error.  
Procedure test-snapshot-20140619143753294 { 
waiting=[p3plpadata038.internal,60020,1403140682587, 
p3plpadata056.internal,60020,1403140865123, 
p3plpadata072.internal,60020,1403141022569] 
done=[p3plpadata023.internal,60020,1403140552227, 
p3plpadata009.internal,60020,1403140487826] }
at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
at 
org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2907)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at 
org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: 
org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable via 
p3plpadata060.internal,60020,1403140935958:org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable:
 
at 
org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)
at 
org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:320)
at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:332)
... 10 more
Caused by: 
org.apache.hadoo

[jira] [Commented] (HBASE-11378) TableMapReduceUtil overwrites user supplied options for multiple tables/scanners job

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038059#comment-14038059
 ] 

Hudson commented on HBASE-11378:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #328 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/328/])
HBASE-11378 TableMapReduceUtil overwrites user supplied options for multiple 
tables/scanners job (jxiang: rev 23cd02a21cae7b98b12178cd04fb2e88aa56b4a2)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java


> TableMapReduceUtil overwrites user supplied options for multiple 
> tables/scanners job
> ---
>
> Key: HBASE-11378
> URL: https://issues.apache.org/jira/browse/HBASE-11378
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0, 0.98.4
>
> Attachments: hbase-11378.patch
>
>
> In TableMapReduceUtil#initTableMapperJob, we have
> HBaseConfiguration.addHbaseResources(job.getConfiguration());
> It should use merge instead. Otherwise, user supplied options are overwritten.
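The overwrite-vs-merge distinction in the report can be illustrated with plain maps (deliberately not Hadoop's Configuration class, so the sketch stays self-contained; the key name is just an example):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustration of the bug pattern described above: blindly loading default
 * resources into a configuration that already carries user-supplied options
 * clobbers those options, whereas a merge applies a default only where the
 * user set nothing.
 */
public class MergeVsOverwrite {
    /** Overwrite semantics: defaults win, user-supplied values are lost. */
    static Map<String, String> addResources(Map<String, String> conf,
                                            Map<String, String> defaults) {
        conf.putAll(defaults);              // defaults stomp user values
        return conf;
    }

    /** Merge semantics: a default is applied only where nothing is set yet. */
    static Map<String, String> merge(Map<String, String> conf,
                                     Map<String, String> defaults) {
        defaults.forEach(conf::putIfAbsent); // user values take precedence
        return conf;
    }
}
```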



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11375) Validate compile-protobuf profile in test-patch.sh

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038056#comment-14038056
 ] 

Andrew Purtell commented on HBASE-11375:


+1

> Validate compile-protobuf profile in test-patch.sh
> --
>
> Key: HBASE-11375
> URL: https://issues.apache.org/jira/browse/HBASE-11375
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 11375-v1.txt, 11375-v2.txt
>
>
> compile-protobuf profile sometimes doesn't compile - latest issue being 
> HBASE-11373
> test-patch.sh should validate that compile-protobuf profile compiles. This 
> would discover such issue sooner.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6192) Document ACL matrix in the book

2014-06-19 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038053#comment-14038053
 ] 

Misty Stanley-Jones commented on HBASE-6192:


While you are in the answering-stuff mode, do I have the scopes right? You can 
grant at global, table, cf, cq, or namespace. Can you grant at the RegionServer 
or master level or any others? For some of these it seemed like it would be 
useful to grant, for instance, admin at the regionserver level, or admin on a 
particular region, or something like that. Is that possible? We don't seem to 
test for those scenarios.

> Document ACL matrix in the book
> ---
>
> Key: HBASE-6192
> URL: https://issues.apache.org/jira/browse/HBASE-6192
> Project: HBase
>  Issue Type: Task
>  Components: documentation, security
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
>  Labels: documentaion, security
> Fix For: 0.99.0
>
> Attachments: HBASE-6192-2.patch, HBASE-6192-rebased.patch, 
> HBASE-6192.patch, HBase Security-ACL Matrix.pdf, HBase Security-ACL 
> Matrix.pdf, HBase Security-ACL Matrix.pdf, HBase Security-ACL Matrix.xls, 
> HBase Security-ACL Matrix.xls, HBase Security-ACL Matrix.xls
>
>
> We have an excellent matrix at 
> https://issues.apache.org/jira/secure/attachment/12531252/Security-ACL%20Matrix.pdf
>  for ACL. Once the changes are done, we can adapt that and put it in the 
> book, also add some more documentation about the new authorization features. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6192) Document ACL matrix in the book

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038044#comment-14038044
 ] 

Andrew Purtell commented on HBASE-6192:
---

WRITE doesn't imply READ as a rule.

There are some implied permissions involving meta regions though. Every client 
must have READ access to the META table, or clients can't work. So this is a 
special case. We always allow reads on meta regions. In the same way, CREATE 
and ADMIN are granted WRITE permission on meta regions, so the table operations 
they are allowed to perform can complete, even if technically the bits can be 
granted separately in any possible combination.

Also of interest, checkAndX operations won't be useful (will fail) if the user 
doesn't have READ+WRITE permissions.

One area that is a little weird is that you can increment or append without 
having READ permission. 

> Document ACL matrix in the book
> ---
>
> Key: HBASE-6192
> URL: https://issues.apache.org/jira/browse/HBASE-6192
> Project: HBase
>  Issue Type: Task
>  Components: documentation, security
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
>  Labels: documentaion, security
> Fix For: 0.99.0
>
> Attachments: HBASE-6192-2.patch, HBASE-6192-rebased.patch, 
> HBASE-6192.patch, HBase Security-ACL Matrix.pdf, HBase Security-ACL 
> Matrix.pdf, HBase Security-ACL Matrix.pdf, HBase Security-ACL Matrix.xls, 
> HBase Security-ACL Matrix.xls, HBase Security-ACL Matrix.xls
>
>
> We have an excellent matrix at 
> https://issues.apache.org/jira/secure/attachment/12531252/Security-ACL%20Matrix.pdf
>  for ACL. Once the changes are done, we can adapt that and put it in the 
> book, also add some more documentation about the new authorization features. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11355) a couple of callQueue related improvements

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038011#comment-14038011
 ] 

stack commented on HBASE-11355:
---

Looks great.   Will give closer review soon.  Testing...

> a couple of callQueue related improvements
> --
>
> Key: HBASE-11355
> URL: https://issues.apache.org/jira/browse/HBASE-11355
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC, Performance
>Affects Versions: 0.99.0, 0.94.20
>Reporter: Liang Xie
>Assignee: Matteo Bertozzi
> Attachments: HBASE-11355-v0.patch
>
>
> In one of my in-memory read-only tests (100% get requests), one of the top 
> scalability bottlenecks was the single callQueue. Tentatively sharding this 
> callQueue by the RPC handler count showed a big throughput improvement in a 
> YCSB read-only scenario: the original get() QPS was around 60k, and after 
> this change and other hotspot tuning I got 220k get() QPS on the same single 
> region server.
> Another thing we can do is separate the queue into a read call queue and a 
> write call queue. We have done this in our internal branch; it helps in some 
> outages by keeping all-read or all-write request storms from exhausting every 
> handler thread.
> One more thing is changing the current blocking behavior once the callQueue 
> is full. A full callQueue almost always means the backend processing is slow, 
> so failing fast here is more reasonable if we are using HBase as a 
> low-latency processing system. See "callQueue.put(call)"



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9345) Add support for specifying filters in scan

2014-06-19 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038006#comment-14038006
 ] 

Virag Kothari commented on HBASE-9345:
--

Thanks for taking a look. Yup, the patch works for me.

> Add support for specifying filters in scan
> --
>
> Key: HBASE-9345
> URL: https://issues.apache.org/jira/browse/HBASE-9345
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Virag Kothari
>Priority: Minor
> Attachments: HBASE-9345_trunk.patch, HBASE-9345_trunk.patch, 
> HBASE_9345_trunk.patch
>
>
> In the implementation of stateless scanner from HBase-9343, the support for 
> specifying filters is missing. This JIRA aims to implement support for filter 
> specification.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-19 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038002#comment-14038002
 ] 

churro morales commented on HBASE-11360:


Hi Lars, 

I uploaded a patch and would love to hear what you think. I took your idea of 
not caching the tmp directory, but we still cache the snapshots. Instead we 
send batches of files to the cleaner, so we don't have to read from HDFS as 
often.

This would also make HBASE-11322 a non-issue, as the tmp directory is no 
longer cached.

Thanks

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.94.22
>
> Attachments: HBASE-11360-0.94.patch
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and of the "running" snapshots located in the 
> /hbase/.hbase-snapshot/.tmp/ directory.
> We ran an ExportSnapshot job which took around 7 minutes between creating the 
> snapshot directory and copying all the files.
> Thus the modified time of the /hbase/.hbase-snapshot/.tmp directory was 7 
> minutes earlier than the modified time of the snapshot's subdirectory beneath 
> it.
> The cache refresh therefore runs without picking up all the files, yet thinks 
> it is up to date, since the modified time of the .tmp directory never changes.
> This is a bug: when the export job starts, the cache never contains the files 
> for the "running" snapshot, and the snapshot will fail.
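The filesystem behavior underlying this report can be demonstrated directly. This is a standalone sketch (not HBase code; the directory names are made up): a directory's mtime changes only when entries are added or removed directly inside it, not when files are written into one of its subdirectories, so a cache keyed on the .tmp directory's mtime never notices files copied into an already-created snapshot subdirectory:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

/**
 * Shows that writing a file inside a subdirectory does not touch the parent
 * directory's modification time -- the blind spot of an mtime-based cache
 * refresh check on the parent.
 */
public class DirMtimeSketch {
    /** True iff writing a file inside sub/ left the parent's mtime untouched. */
    public static boolean parentMtimeUnchanged() {
        try {
            Path parent = Files.createTempDirectory("hbase-snapshot-tmp");
            Path sub = Files.createDirectory(parent.resolve("running-snapshot"));
            FileTime before = Files.getLastModifiedTime(parent);
            // Copy files into the subdirectory, as a long ExportSnapshot would.
            Files.write(sub.resolve("region-manifest"), new byte[] {1, 2, 3});
            return before.equals(Files.getLastModifiedTime(parent));
        } catch (IOException e) {
            return false;
        }
    }
}
```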



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-19 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-11360:
---

Attachment: HBASE-11360-0.94.patch

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.94.22
>
> Attachments: HBASE-11360-0.94.patch
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and those "running" snapshots which is located 
> in the /hbase/.hbase-snapshot/.tmp/ directory
> We ran a ExportSnapshot job which takes around 7 minutes between creating the 
> directory and copying all the files. 
> Thus the modified time for the 
> /hbase/.hbase-snapshot/.tmp directory was 7 minutes earlier than the modified 
> time of the
> /hbase/.hbase-snapshot/.tmp/ directory
> Thus the cache refresh happens and doesn't pick up all the files but thinks 
> its up to date as the modified time of the .tmp directory never changes.
> This is a bug as when the export job starts the cache never contains the 
> files for the "running" snapshot and will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6192) Document ACL matrix in the book

2014-06-19 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037998#comment-14037998
 ] 

Misty Stanley-Jones commented on HBASE-6192:


OK, so can you create a table if you have Create but not Write? And Write 
definitely doesn't imply Read? Because [~jdcryans] thought it did, as well.

> Document ACL matrix in the book
> ---
>
> Key: HBASE-6192
> URL: https://issues.apache.org/jira/browse/HBASE-6192
> Project: HBase
>  Issue Type: Task
>  Components: documentation, security
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
>  Labels: documentaion, security
> Fix For: 0.99.0
>
> Attachments: HBASE-6192-2.patch, HBASE-6192-rebased.patch, 
> HBASE-6192.patch, HBase Security-ACL Matrix.pdf, HBase Security-ACL 
> Matrix.pdf, HBase Security-ACL Matrix.pdf, HBase Security-ACL Matrix.xls, 
> HBase Security-ACL Matrix.xls, HBase Security-ACL Matrix.xls
>
>
> We have an excellent matrix at 
> https://issues.apache.org/jira/secure/attachment/12531252/Security-ACL%20Matrix.pdf
>  for ACL. Once the changes are done, we can adapt that and put it in the 
> book, also add some more documentation about the new authorization features. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11375) Validate compile-protobuf profile in test-patch.sh

2014-06-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11375:
---

Attachment: 11375-v2.txt

> Validate compile-protobuf profile in test-patch.sh
> --
>
> Key: HBASE-11375
> URL: https://issues.apache.org/jira/browse/HBASE-11375
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 11375-v1.txt, 11375-v2.txt
>
>
> compile-protobuf profile sometimes doesn't compile - latest issue being 
> HBASE-11373
> test-patch.sh should validate that compile-protobuf profile compiles. This 
> would discover such issue sooner.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11338) Expand documentation on bloom filters

2014-06-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11338:
--

Attachment: HBASE-11338-2.patch

What I committed.

> Expand documentation on bloom filters
> -
>
> Key: HBASE-11338
> URL: https://issues.apache.org/jira/browse/HBASE-11338
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 0.99.0
>
> Attachments: HBASE-11338-1.patch, HBASE-11338-2.patch, 
> HBASE-11338.patch
>
>
> Ref Guide  could use more info on bloom filters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11338) Expand documentation on bloom filters

2014-06-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11338:
--

   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to master (minus the change to 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java).
  Thanks Misty.

> Expand documentation on bloom filters
> -
>
> Key: HBASE-11338
> URL: https://issues.apache.org/jira/browse/HBASE-11338
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 0.99.0
>
> Attachments: HBASE-11338-1.patch, HBASE-11338-2.patch, 
> HBASE-11338.patch
>
>
> Ref Guide  could use more info on bloom filters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11323) BucketCache all the time!

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037985#comment-14037985
 ] 

stack commented on HBASE-11323:
---

Two votes for #1 over in HBASE-11364 [BlockCache] Add a flag to cache data 
blocks in L1 if multi-tier cache

> BucketCache all the time!
> -
>
> Key: HBASE-11323
> URL: https://issues.apache.org/jira/browse/HBASE-11323
> Project: HBase
>  Issue Type: Sub-task
>  Components: io
>Reporter: stack
> Fix For: 0.99.0
>
> Attachments: ReportBlockCache.pdf
>
>
> One way to realize the parent issue is to just enable bucket cache all the 
> time; i.e. always have offheap enabled.  Would have to do some work to make 
> it drop-dead simple on initial setup (I think it's doable).
> So, the upside would be the offheap upsides (less GC, less likely to go away 
> and never come back because of a full GC when the heap is large, etc.).
> The downside is higher latency.  In Nick's BlockCache 101 there is little to 
> no difference between onheap and offheap.  In a basic comparison doing scans 
> and gets -- details to follow -- I have BucketCache delivering about 20% 
> fewer ops than LRUBC when everything is in cache, and maybe 10% fewer ops 
> when falling out of cache.  I can't tell a difference in means, and the 95th 
> and 99th percentiles are roughly the same (more stable with BucketCache).  
> The GC profile is much better with BucketCache -- way less.  BucketCache 
> uses about 7% more user CPU.
> More detail on the comparison to follow.
> I think the numbers disagree enough that we should probably do what 
> [~lhofhansl] suggests: allow you to have a table sit in LRUBC, something the 
> current bucket cache layout does not do.





[jira] [Comment Edited] (HBASE-6192) Document ACL matrix in the book

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037984#comment-14037984
 ] 

Andrew Purtell edited comment on HBASE-6192 at 6/19/14 10:07 PM:
-

ADMIN should be a strict superset of CREATE. If we've set up somewhere where 
CREATE can do something ADMIN cannot, that would be a bug.


was (Author: apurtell):
ADMIN does not imply CREATE, but ADMIN should be a strict superset of CREATE by 
design. If we've set up somewhere where CREATE can do something ADMIN cannot, 
that would be a bug.

> Document ACL matrix in the book
> ---
>
> Key: HBASE-6192
> URL: https://issues.apache.org/jira/browse/HBASE-6192
> Project: HBase
>  Issue Type: Task
>  Components: documentation, security
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
>  Labels: documentaion, security
> Fix For: 0.99.0
>
> Attachments: HBASE-6192-2.patch, HBASE-6192-rebased.patch, 
> HBASE-6192.patch, HBase Security-ACL Matrix.pdf, HBase Security-ACL 
> Matrix.pdf, HBase Security-ACL Matrix.pdf, HBase Security-ACL Matrix.xls, 
> HBase Security-ACL Matrix.xls, HBase Security-ACL Matrix.xls
>
>
> We have an excellent matrix at 
> https://issues.apache.org/jira/secure/attachment/12531252/Security-ACL%20Matrix.pdf
>  for ACL. Once the changes are done, we can adapt that and put it in the 
> book, also add some more documentation about the new authorization features. 





[jira] [Commented] (HBASE-6192) Document ACL matrix in the book

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037984#comment-14037984
 ] 

Andrew Purtell commented on HBASE-6192:
---

ADMIN does not imply CREATE, but ADMIN should be a strict superset of CREATE by 
design. If we've set up somewhere where CREATE can do something ADMIN cannot, 
that would be a bug.

> Document ACL matrix in the book
> ---
>
> Key: HBASE-6192
> URL: https://issues.apache.org/jira/browse/HBASE-6192
> Project: HBase
>  Issue Type: Task
>  Components: documentation, security
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
>  Labels: documentaion, security
> Fix For: 0.99.0
>
> Attachments: HBASE-6192-2.patch, HBASE-6192-rebased.patch, 
> HBASE-6192.patch, HBase Security-ACL Matrix.pdf, HBase Security-ACL 
> Matrix.pdf, HBase Security-ACL Matrix.pdf, HBase Security-ACL Matrix.xls, 
> HBase Security-ACL Matrix.xls, HBase Security-ACL Matrix.xls
>
>
> We have an excellent matrix at 
> https://issues.apache.org/jira/secure/attachment/12531252/Security-ACL%20Matrix.pdf
>  for ACL. Once the changes are done, we can adapt that and put it in the 
> book, also add some more documentation about the new authorization features. 





[jira] [Commented] (HBASE-6192) Document ACL matrix in the book

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037983#comment-14037983
 ] 

Andrew Purtell commented on HBASE-6192:
---

WRITE does not imply READ. 

You might grant WRITE to a logger process but not grant READ since it is not 
supposed to be accessing the stored data, only transmitting it for ingest.

> Document ACL matrix in the book
> ---
>
> Key: HBASE-6192
> URL: https://issues.apache.org/jira/browse/HBASE-6192
> Project: HBase
>  Issue Type: Task
>  Components: documentation, security
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
>  Labels: documentaion, security
> Fix For: 0.99.0
>
> Attachments: HBASE-6192-2.patch, HBASE-6192-rebased.patch, 
> HBASE-6192.patch, HBase Security-ACL Matrix.pdf, HBase Security-ACL 
> Matrix.pdf, HBase Security-ACL Matrix.pdf, HBase Security-ACL Matrix.xls, 
> HBase Security-ACL Matrix.xls, HBase Security-ACL Matrix.xls
>
>
> We have an excellent matrix at 
> https://issues.apache.org/jira/secure/attachment/12531252/Security-ACL%20Matrix.pdf
>  for ACL. Once the changes are done, we can adapt that and put it in the 
> book, also add some more documentation about the new authorization features. 





[jira] [Commented] (HBASE-11375) Validate compile-protobuf profile in test-patch.sh

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037977#comment-14037977
 ] 

Andrew Purtell commented on HBASE-11375:


Ah, right, well 'install' is probably still less expensive than 'package'

> Validate compile-protobuf profile in test-patch.sh
> --
>
> Key: HBASE-11375
> URL: https://issues.apache.org/jira/browse/HBASE-11375
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 11375-v1.txt
>
>
> compile-protobuf profile sometimes doesn't compile - latest issue being 
> HBASE-11373
> test-patch.sh should validate that compile-protobuf profile compiles. This 
> would discover such issue sooner.





[jira] [Commented] (HBASE-11364) [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache

2014-06-19 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037970#comment-14037970
 ] 

Andrew Purtell commented on HBASE-11364:


I vote for option #1 but with CACHE_DATA_IN_L1 defaulting to false if bucket 
cache is configured. 

> [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache
> --
>
> Key: HBASE-11364
> URL: https://issues.apache.org/jira/browse/HBASE-11364
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 11364.txt
>
>
> This is a prerequisite for HBASE-11323 BucketCache on all the time.  It 
> addresses a @lars hofhansl ask that we be able to ask that for some column 
> families, even their data blocks get cached up in the LruBlockCache L1 tier 
> in a multi-tier deploy as happens when doing BucketCache (CombinedBlockCache) 
> setups.





[jira] [Commented] (HBASE-11348) Make frequency and sleep times of chaos monkeys configurable

2014-06-19 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037963#comment-14037963
 ] 

Vandana Ayyalasomayajula commented on HBASE-11348:
--

[~stack] and [~enis] Thanks for the prompt reviews and commit. 

> Make frequency and sleep times of  chaos monkeys configurable 
> --
>
> Key: HBASE-11348
> URL: https://issues.apache.org/jira/browse/HBASE-11348
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.3
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
>  Labels: integration-tests
> Fix For: 0.99.0
>
> Attachments: HBASE-11348_1.patch, HBASE-11348_2.patch
>
>
> Currently the chaos monkeys used in the integration tests run with a fixed 
> configuration. It would be useful to have the frequency and sleep times be 
> configurable. That would help control the chaos the monkeys are intended 
> to create.





[jira] [Commented] (HBASE-9345) Add support for specifying filters in scan

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037959#comment-14037959
 ] 

stack commented on HBASE-9345:
--

Patch lgtm.  Waiting on hadoopqa.  It works for you [~virag]?

> Add support for specifying filters in scan
> --
>
> Key: HBASE-9345
> URL: https://issues.apache.org/jira/browse/HBASE-9345
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Virag Kothari
>Priority: Minor
> Attachments: HBASE-9345_trunk.patch, HBASE-9345_trunk.patch, 
> HBASE_9345_trunk.patch
>
>
> In the implementation of stateless scanner from HBase-9343, the support for 
> specifying filters is missing. This JIRA aims to implement support for filter 
> specification.





[jira] [Updated] (HBASE-11338) Expand documentation on bloom filters

2014-06-19 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11338:


Attachment: HBASE-11338-1.patch

Implemented feedback. Thanks [~stack]

> Expand documentation on bloom filters
> -
>
> Key: HBASE-11338
> URL: https://issues.apache.org/jira/browse/HBASE-11338
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11338-1.patch, HBASE-11338.patch
>
>
> Ref Guide  could use more info on bloom filters.





[jira] [Resolved] (HBASE-11348) Make frequency and sleep times of chaos monkeys configurable

2014-06-19 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-11348.
---

   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed

Committed to master.  Thank you for the patch and doc Vandana.

> Make frequency and sleep times of  chaos monkeys configurable 
> --
>
> Key: HBASE-11348
> URL: https://issues.apache.org/jira/browse/HBASE-11348
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.3
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
>  Labels: integration-tests
> Fix For: 0.99.0
>
> Attachments: HBASE-11348_1.patch, HBASE-11348_2.patch
>
>
> Currently the chaos monkeys used in the integration tests run with a fixed 
> configuration. It would be useful to have the frequency and sleep times be 
> configurable. That would help control the chaos the monkeys are intended 
> to create.





[jira] [Updated] (HBASE-9345) Add support for specifying filters in scan

2014-06-19 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-9345:
-

Attachment: HBASE_9345_trunk.patch

> Add support for specifying filters in scan
> --
>
> Key: HBASE-9345
> URL: https://issues.apache.org/jira/browse/HBASE-9345
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Virag Kothari
>Priority: Minor
> Attachments: HBASE-9345_trunk.patch, HBASE-9345_trunk.patch, 
> HBASE_9345_trunk.patch
>
>
> In the implementation of stateless scanner from HBase-9343, the support for 
> specifying filters is missing. This JIRA aims to implement support for filter 
> specification.





[jira] [Commented] (HBASE-11378) TableMapReduceUtil overwrites user supplied options for multiple tables/scaners job

2014-06-19 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037896#comment-14037896
 ] 

Nick Dimiduk commented on HBASE-11378:
--

Nice one Jimmy!

> TableMapReduceUtil overwrites user supplied options for multiple 
> tables/scaners job
> ---
>
> Key: HBASE-11378
> URL: https://issues.apache.org/jira/browse/HBASE-11378
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0, 0.98.4
>
> Attachments: hbase-11378.patch
>
>
> In TableMapReduceUtil#initTableMapperJob, we have
> HBaseConfiguration.addHbaseResources(job.getConfiguration());
> It should use merge instead. Otherwise, user supplied options are overwritten.
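The overwrite-versus-merge distinction described above can be sketched with a plain java.util.Map standing in for the job configuration. This is an illustration of the semantics only, not the actual Hadoop/HBase Configuration API; the key and values are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigMergeSketch {
    // addHbaseResources-style: defaults are applied last, clobbering
    // anything the user already set on the job configuration
    static void overwrite(Map<String, String> jobConf, Map<String, String> defaults) {
        jobConf.putAll(defaults);
    }

    // merge-style: defaults only fill keys the user did not set, so
    // user-supplied options survive
    static void merge(Map<String, String> jobConf, Map<String, String> defaults) {
        defaults.forEach(jobConf::putIfAbsent);
    }

    public static void main(String[] args) {
        Map<String, String> defaults = new HashMap<>();
        defaults.put("hbase.client.scanner.caching", "100"); // hypothetical default

        Map<String, String> jobConf = new HashMap<>();
        jobConf.put("hbase.client.scanner.caching", "500"); // user-supplied option

        merge(jobConf, defaults);
        System.out.println(jobConf.get("hbase.client.scanner.caching")); // prints "500"
    }
}
```

With overwrite() in place of merge(), the printed value would be the default "100", which is the shape of the bug reported here.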





[jira] [Commented] (HBASE-11339) HBase MOB

2014-06-19 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037889#comment-14037889
 ] 

Nick Dimiduk commented on HBASE-11339:
--

bq. Couldn't we just do a combination of per-cf compaction and per-cf flushes

+1. This strikes me as very well aligned with the design intention of column 
families.

> HBase MOB
> -
>
> Key: HBASE-11339
> URL: https://issues.apache.org/jira/browse/HBASE-11339
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver, Scanners
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBase LOB Design.pdf
>
>
>   It's quite useful to save medium binary data such as images and documents 
> into Apache HBase. Unfortunately, directly saving binary MOB (medium 
> object) data to HBase leads to poor performance because of frequent splits and 
> compactions.
>   In this design, the MOB data are stored in a more efficient way, which 
> keeps high write/read performance and guarantees data consistency in 
> Apache HBase.





[jira] [Commented] (HBASE-11339) HBase MOB

2014-06-19 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037870#comment-14037870
 ] 

Jonathan Hsieh commented on HBASE-11339:


Let's do one more strawman and try to disqualify it for the MOB case.

Why not just improve/use the existing column family functionality and use a cf 
for lob/mob fields? Couldn't we just do a combination of per-cf compaction and 
per-cf flushes (not sure whether all or some of those features are in already) and 
get good performance while avoiding write amplification penalties?

> HBase MOB
> -
>
> Key: HBASE-11339
> URL: https://issues.apache.org/jira/browse/HBASE-11339
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver, Scanners
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBase LOB Design.pdf
>
>
>   It's quite useful to save medium binary data such as images and documents 
> into Apache HBase. Unfortunately, directly saving binary MOB (medium 
> object) data to HBase leads to poor performance because of frequent splits and 
> compactions.
>   In this design, the MOB data are stored in a more efficient way, which 
> keeps high write/read performance and guarantees data consistency in 
> Apache HBase.





[jira] [Commented] (HBASE-11339) HBase MOB

2014-06-19 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037864#comment-14037864
 ] 

Jonathan Hsieh commented on HBASE-11339:


Thanks for following up with good questions! 

You haven't called it out directly but your questions are leading towards 
trouble spots in a loblog design.  One has to do with atomicity and the other 
has to do with reading recent values.  I think the latter effectively 
disqualifies the loblog idea.  Here's a writeup.

bq. In this way, we save the Lob files as SequenceFiles, and save the offset 
and file name back into the Put before putting the KV into the MemStore, right?

Essentially yes.  They aren't necessarily sequence files -- they would be 
synced to complete writing the lob, just like the current hlog files do with 
edits. 

bq. 1. If so, we don't use the MemStore to save the Lob data, right? Then how 
to read the Lob data that are not sync yet(which are still in the writer 
buffer)?

If the loblog write and locator write into the hlog both succeed, we'd use the 
same design/mechanism you currently have to read lobs that aren't present in 
the memstore since they were flushed.  

The difference is that the loblogs are still being written. In HDFS you can 
read files that are currently being written; however, you aren't guaranteed to 
read to the most recent end of the file, since we have no built-in tail in HDFS 
yet.   Hm, so we have a problem getting the latest data.

So for the lob log design to be correct, it would need work on hdfs to provide 
guarantees or a tail operation.  While not out of the question, that would be a 
ways out from now and disqualifies the lob log for the short term.

bq. 2. We need add a preSync and preAppend to the HLog so that we could sync 
the Lob files before the HLogs are sync.

Can you explain why you need preSync and preAppend? 

I think this is getting at a problem where we are trying to essentially sync 
writes to two logs atomically. Could we just not issue the locator put until 
the lob has been synced?  (a lob that is just around won't hurt anything, but a 
bad locator would).  Both the lob and the locator would have the same 
ts/mvcc/seqno.

In the PDF's design, this shouldn't be a problem because it would use the 
normal write path for atomicity guarantees.  Currently hbase guarantees 
atomicity of CF's at flush time, and by having all cf:c's added to the hlog and 
memstore atomically.  

bq. In order to get the correct offset, we have to synchronize the prePut in 
the coprocessor, or we could use different Lob files for each thread?

Why not just write+sync the lob and then write the locator put?  For lobs we'd 
use the same mechanism to sync (one loblog for all threads, queued using the 
disruptor work).  


> HBase MOB
> -
>
> Key: HBASE-11339
> URL: https://issues.apache.org/jira/browse/HBASE-11339
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver, Scanners
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBase LOB Design.pdf
>
>
>   It's quite useful to save medium binary data such as images and documents 
> into Apache HBase. Unfortunately, directly saving binary MOB (medium 
> object) data to HBase leads to poor performance because of frequent splits and 
> compactions.
>   In this design, the MOB data are stored in a more efficient way, which 
> keeps high write/read performance and guarantees data consistency in 
> Apache HBase.





[jira] [Commented] (HBASE-11378) TableMapReduceUtil overwrites user supplied options for multiple tables/scaners job

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037859#comment-14037859
 ] 

Hudson commented on HBASE-11378:


SUCCESS: Integrated in HBase-0.98 #347 (See 
[https://builds.apache.org/job/HBase-0.98/347/])
HBASE-11378 TableMapReduceUtil overwrites user supplied options for multiple 
tables/scaners job (jxiang: rev 23cd02a21cae7b98b12178cd04fb2e88aa56b4a2)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java


> TableMapReduceUtil overwrites user supplied options for multiple 
> tables/scaners job
> ---
>
> Key: HBASE-11378
> URL: https://issues.apache.org/jira/browse/HBASE-11378
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0, 0.98.4
>
> Attachments: hbase-11378.patch
>
>
> In TableMapReduceUtil#initTableMapperJob, we have
> HBaseConfiguration.addHbaseResources(job.getConfiguration());
> It should use merge instead. Otherwise, user supplied options are overwritten.





[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-06-19 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10933:
--

Fix Version/s: (was: 0.94.21)
   0.94.22

> hbck -fixHdfsOrphans is not working properly it throws null pointer exception
> -
>
> Key: HBASE-10933
> URL: https://issues.apache.org/jira/browse/HBASE-10933
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.94.16, 0.98.2
>Reporter: Deepak Sharma
>Assignee: Kashif J S
>Priority: Critical
> Fix For: 0.99.0, 0.94.22
>
> Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-0.94-v2.patch, 
> HBASE-10933-trunk-v1.patch, HBASE-10933-trunk-v2.patch, TestResults-0.94.txt, 
> TestResults-trunk.txt
>
>
> If the regioninfo file does not exist in an HBase region, then when we run hbck 
> repair or hbck -fixHdfsOrphans,
> hbck is not able to resolve the problem and throws a NullPointerException:
> {code}
> 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
> (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
> dir: 
> hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
>   at 
> org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
>   at 
> org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
>   at 
> org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
>   at 
> org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
>   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
>   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
>   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
>   at 
> com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
>   at 
> com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
>   at junit.framework.TestCase.runBare(TestCase.java:132)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}
> The problem occurs because, in the HBaseFsck class, 
> {code}
>  private void adoptHdfsOrphan(HbckInfo hi)
> {code}
> we initialize tableInfo using the SortedMap tablesInfo 
> object
> {code}
> TableInfo tableInfo = tablesInfo.get(tableName);
> {code}
> but  in private SortedMap loadHdfsRegionInfos()
> {code}
>  for (HbckInfo hbi: hbckInfos) {
>   if (hbi.getHdfsHRI() == null) {
> // was an orphan
> continue;
>   }
> {code}
> there is a check: if a region is an orphan, its table cannot be added to the 
> SortedMap tablesInfo,
> so when we use it later we get a NullPointerException.
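A minimal model of that failure path (names are simplified and hypothetical, not the actual HBaseFsck code): tables whose regions were all orphans never make it into tablesInfo, so the lookup returns null, and the obvious shape of a fix is a null guard before dereferencing:

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class OrphanAdoptSketch {
    // stand-in for adoptHdfsOrphan(): look up the table's info and
    // guard against the table having been skipped as all-orphan
    static String adopt(SortedMap<String, Object> tablesInfo, String tableName) {
        Object tableInfo = tablesInfo.get(tableName);
        if (tableInfo == null) {
            // without this check, dereferencing tableInfo throws the
            // NullPointerException shown in the stack trace above
            return "no TableInfo for " + tableName;
        }
        return "adopting orphan into " + tableName;
    }

    public static void main(String[] args) {
        // tablesInfo was built skipping orphan regions, so it is empty here
        SortedMap<String, Object> tablesInfo = new TreeMap<>();
        System.out.println(adopt(tablesInfo, "TestHdfsOrphans1")); // prints "no TableInfo for TestHdfsOrphans1"
    }
}
```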





[jira] [Updated] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-19 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-11360:
--

Fix Version/s: 0.94.22

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.94.22
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and of the "running" snapshots, which are located 
> in the /hbase/.hbase-snapshot/.tmp/ directory.
> We ran an ExportSnapshot job which takes around 7 minutes between creating the 
> directory and copying all the files. 
> Thus the modified time for the 
> /hbase/.hbase-snapshot/.tmp directory was 7 minutes earlier than the modified 
> time of the
> /hbase/.hbase-snapshot/.tmp/ directory
> Thus the cache refresh happens and doesn't pick up all the files, but thinks 
> it is up to date because the modified time of the .tmp directory never changes.
> This is a bug: when the export job starts, the cache never contains the 
> files for the "running" snapshot, and the snapshot will fail.





[jira] [Updated] (HBASE-10924) [region_mover]: Adjust region_mover script to retry unloading a server a configurable number of times in case of region splits/merges

2014-06-19 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10924:
--

Fix Version/s: (was: 0.94.21)
   0.94.22

Pushing one more time.

> [region_mover]: Adjust region_mover script to retry unloading a server a 
> configurable number of times in case of region splits/merges
> -
>
> Key: HBASE-10924
> URL: https://issues.apache.org/jira/browse/HBASE-10924
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.94.15
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>  Labels: region_mover, rolling_upgrade
> Fix For: 0.94.22
>
> Attachments: HBASE-10924-0.94-v2.patch, HBASE-10924-0.94-v3.patch
>
>
> Observed behavior:
> In about 5% of cases, my rolling upgrade tests fail because of stuck regions 
> during a region server unload. My theory is that this occurs when region 
> assignment information changes between the time the region list is generated, 
> and the time when the region is to be moved.
> An example of such a region information change is a split or merge.
> Example:
> Regionserver A has 100 regions (#0-#99). The balancer is turned off and the 
> regionmover script is called to unload this regionserver. The regionmover 
> script will generate the list of 100 regions to be moved and then proceed 
> down that list, moving the regions off in series. However, there is a region, 
> #84, that has split into two daughter regions while regions 0-83 were moved. 
> The script will be stuck trying to move #84, timeout, and then the failure 
> will bubble up (attempt 1 failed).
> Proposed solution:
> This specific failure mode should be caught and the region_mover script 
> should now attempt to move off all the regions. Now, it will have 16+1 (due 
> to split) regions to move. There is a good chance that it will be able to 
> move all 17 off without issues. However, should it encounter this same issue 
> (attempt 2 failed), it will retry again. This process will continue until the 
> maximum number of unload retry attempts has been reached.
> This is not foolproof, but let's say for the sake of argument that 5% of 
> unload attempts hit this issue, then with a retry count of 3, it will reduce 
> the unload failure probability from 0.05 to 0.000125 (0.05^3).
> Next steps:
> I am looking for feedback on this approach. If it seems like a sensible 
> approach, I will create a strawman patch and test it.
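The failure-probability arithmetic above can be checked directly; assuming independent unload attempts that each fail with probability p, all n attempts fail with probability p^n:

```java
public class RetryMath {
    // probability that every one of `attempts` independent unload
    // attempts hits the stuck-region failure
    static double allAttemptsFail(double p, int attempts) {
        return Math.pow(p, attempts);
    }

    public static void main(String[] args) {
        // p = 0.05 per attempt, 3 attempts: roughly 1.25e-4 (0.000125)
        System.out.println(allAttemptsFail(0.05, 3));
    }
}
```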





[jira] [Updated] (HBASE-11322) SnapshotHFileCleaner makes the wrong check for lastModified time thus causing too many cache refreshes

2014-06-19 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-11322:
--

Fix Version/s: (was: 0.94.21)
   0.94.22

Let's push this to 0.94.22 as we're still discussing.

> SnapshotHFileCleaner makes the wrong check for lastModified time thus causing 
> too many cache refreshes
> --
>
> Key: HBASE-11322
> URL: https://issues.apache.org/jira/browse/HBASE-11322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
>Assignee: churro morales
>Priority: Critical
> Fix For: 0.94.22
>
> Attachments: 11322.94.txt, HBASE-11322.patch
>
>
> The SnapshotHFileCleaner asks the SnapshotFileCache whether a particular HFile 
> is part of a snapshot.
> If the HFile is not in the cache, we then refresh the cache and check again.
> But the cache refresh checks whether anything has been modified since the 
> last refresh, and this logic is incorrect in certain scenarios.
> The last modified time is done via this operation:
> {code}
> this.lastModifiedTime = Math.min(dirStatus.getModificationTime(),
>  tempStatus.getModificationTime());
> {code}
> and the check to see if the snapshot directories have been modified:
> {code}
> // if the snapshot directory wasn't modified since we last check, we are done
> if (dirStatus.getModificationTime() <= lastModifiedTime &&
> tempStatus.getModificationTime() <= lastModifiedTime) {
>   return;
> }
> {code}
> Suppose the following happens:
> dirStatus modified 6-1-2014
> tempStatus modified 6-2-2014
> lastModifiedTime = 6-1-2014
> Provided these two directories don't get modified again, all subsequent checks 
> won't exit early as they should.
> In our cluster, this was a huge performance hit.  The cleaner chain fell 
> behind, thus almost filling up dfs and our namenode heap.
> It's a simple fix: instead of Math.min we use Math.max for lastModified; I 
> believe that will be correct.
> I'll apply a patch for you guys.
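The min-versus-max behavior can be sketched with plain longs standing in for the FileStatus modification times (this models the quoted check only, not the actual SnapshotFileCache class):

```java
public class SnapshotCacheCheck {
    long lastModifiedTime;

    // refresh() as patched: remember the NEWEST modification time seen
    void refresh(long dirTime, long tempTime) {
        this.lastModifiedTime = Math.max(dirTime, tempTime);
    }

    // same comparison as the quoted snippet: up to date means nothing
    // changed since the last refresh, so the caller can exit early
    boolean upToDate(long dirTime, long tempTime) {
        return dirTime <= lastModifiedTime && tempTime <= lastModifiedTime;
    }

    public static void main(String[] args) {
        SnapshotCacheCheck cache = new SnapshotCacheCheck();
        cache.refresh(601L, 602L); // dirStatus "6-1", tempStatus "6-2"
        // With Math.max the unchanged pair is correctly reported up to
        // date. With the original Math.min(601, 602) = 601, tempTime
        // 602 > 601 would fail the check on every call and force a
        // needless cache refresh each time, as described in the report.
        System.out.println(cache.upToDate(601L, 602L)); // prints "true"
    }
}
```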





[jira] [Commented] (HBASE-11348) Make frequency and sleep times of chaos monkeys configurable

2014-06-19 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037826#comment-14037826
 ] 

Vandana Ayyalasomayajula commented on HBASE-11348:
--

For the book, will the following description of the change be okay?
{quote}
After the changes from HBASE-11348, the chaos monkeys used to run integration 
tests can be configured for each run. The user can create a Java properties 
file to configure the chaos monkeys. The properties file needs to be on the 
HBase classpath. The various properties that can be configured, and their 
default values, can be found in the 
org.apache.hadoop.hbase.chaos.factories.MonkeyConstants class. 
If any chaos monkey configuration is missing from the properties file, the 
default values are assumed.
Example:
bin/hbase org.apache.hadoop.hbase.IntegrationTestIngest -m slowDeterministic 
-monkeyProps monkey.properties
The above command will start the integration tests and chaos monkey. 
Contents of monkey.properties:
sdm.action1.period=12
sdm.action2.period=4
move.regions.sleep.time=8
move.regions.max.time=100
batch.restart.rs.ratio=0.4f
{quote}
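The fall-back-to-defaults behavior described above can be sketched with plain java.util.Properties; the key names mirror the example, but the default values below are made up for illustration and are not MonkeyConstants' real ones:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Sketch of "missing keys fall back to defaults": user-supplied monkey
// properties override only the keys they name.
public class Main {
  // Builds the effective properties: user overrides layered over defaults.
  static Properties merged() {
    // Defaults, standing in for MonkeyConstants (values are illustrative).
    Properties defaults = new Properties();
    defaults.setProperty("sdm.action1.period", "60000");
    defaults.setProperty("move.regions.sleep.time", "20000");

    // User-supplied monkey.properties contents.
    Properties monkeyProps = new Properties(defaults);
    try {
      monkeyProps.load(new StringReader("sdm.action1.period=12\n"));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    return monkeyProps;
  }

  public static void main(String[] args) {
    Properties p = merged();
    System.out.println(p.getProperty("sdm.action1.period"));      // 12 (overridden)
    System.out.println(p.getProperty("move.regions.sleep.time")); // 20000 (default)
  }
}
```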

> Make frequency and sleep times of  chaos monkeys configurable 
> --
>
> Key: HBASE-11348
> URL: https://issues.apache.org/jira/browse/HBASE-11348
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.3
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
>  Labels: integration-tests
> Attachments: HBASE-11348_1.patch, HBASE-11348_2.patch
>
>
> Currently the chaos monkeys used in the integration tests run with a fixed 
> configuration. It would be useful to have the frequency and sleep times be 
> configurable. That would help control the chaos the monkeys are intended 
> to create.





[jira] [Commented] (HBASE-11331) [blockcache] lazy block decompression

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037824#comment-14037824
 ] 

stack commented on HBASE-11331:
---

How feasible would it be to keep count of how many times a block has been 
decompressed and, if over a configurable threshold, instead shove the 
decompressed block back into the block cache in place of the compressed one? 
We already count whether a block has been accessed more than once. Could we 
leverage this fact?
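The threshold idea floated above could be sketched like this; everything here is hypothetical (names, threshold, and the no-op "codec" are illustrative), not HBase's actual cache code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of threshold-based promotion: after a block has been
// decompressed N times, cache the decompressed bytes in place of the
// compressed ones so later reads skip decompression.
public class Main {
  static final int PROMOTE_THRESHOLD = 3;

  final Map<String, byte[]> cache = new HashMap<>();            // blockKey -> cached bytes
  final Map<String, Integer> decompressCount = new HashMap<>(); // decompressions so far
  final Map<String, Boolean> promoted = new HashMap<>();        // cached decompressed?

  byte[] read(String key, byte[] compressed) {
    if (Boolean.TRUE.equals(promoted.get(key))) {
      return cache.get(key);                      // already promoted: no decompression
    }
    int n = decompressCount.merge(key, 1, Integer::sum);
    byte[] plain = decompress(compressed);        // pay the decompression cost this time
    if (n >= PROMOTE_THRESHOLD) {
      cache.put(key, plain);                      // promote: store decompressed form
      promoted.put(key, true);
    } else {
      cache.put(key, compressed);                 // keep the compressed form for now
    }
    return plain;
  }

  // Stand-in for a real codec; here "decompression" is just a copy.
  static byte[] decompress(byte[] in) {
    return in.clone();
  }

  public static void main(String[] args) {
    Main c = new Main();
    byte[] block = {1, 2, 3};
    for (int i = 0; i < 4; i++) c.read("b0", block);
    System.out.println(c.promoted.get("b0")); // true after the 3rd decompression
  }
}
```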

bq. This is related to but less invasive than HBASE-8894.

Would a better characterization be that this is a core piece of HBASE-8894, only 
done more in line w/ how the hbase master branch works now (HBASE-8894 interjects 
special-case handling of its L2 cache when reading blocks from HDFS; this 
makes do without that special interjection)?

> [blockcache] lazy block decompression
> -
>
> Key: HBASE-11331
> URL: https://issues.apache.org/jira/browse/HBASE-11331
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: HBASE-11331.00.patch
>
>
> Maintaining data in its compressed form in the block cache will greatly 
> increase our effective blockcache size and should show a meaningful improvement 
> in cache hit rates in well-designed applications. The idea here is to lazily 
> decompress/decrypt blocks when they're consumed, rather than as soon as 
> they're pulled off of disk.
> This is related to but less invasive than HBASE-8894.





[jira] [Commented] (HBASE-11378) TableMapReduceUtil overwrites user supplied options for multiple tables/scaners job

2014-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037820#comment-14037820
 ] 

Hudson commented on HBASE-11378:


SUCCESS: Integrated in HBase-TRUNK #5219 (See 
[https://builds.apache.org/job/HBase-TRUNK/5219/])
HBASE-11378 TableMapReduceUtil overwrites user supplied options for multiple 
tables/scaners job (jxiang: rev 45bc13d87a08dd56c01a20e3b574f85882500810)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java


> TableMapReduceUtil overwrites user supplied options for multiple 
> tables/scaners job
> ---
>
> Key: HBASE-11378
> URL: https://issues.apache.org/jira/browse/HBASE-11378
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0, 0.98.4
>
> Attachments: hbase-11378.patch
>
>
> In TableMapReduceUtil#initTableMapperJob, we have
> HBaseConfiguration.addHbaseResources(job.getConfiguration());
> It should use merge instead. Otherwise, user-supplied options are overwritten.
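The overwrite-vs-merge distinction can be illustrated with plain maps standing in for Hadoop Configuration objects (the key and values below are illustrative; this is an analogy, not the actual patch):

```java
import java.util.HashMap;
import java.util.Map;

// Why re-adding default resources ON TOP of a job conf clobbers user settings,
// while merging defaults UNDER the user conf preserves them.
public class Main {
  // Analogous to re-adding hbase resources into the job conf: defaults win.
  static Map<String, String> clobbering(Map<String, String> userConf,
                                        Map<String, String> defaults) {
    Map<String, String> out = new HashMap<>(userConf);
    out.putAll(defaults);          // defaults overwrite user-supplied options
    return out;
  }

  // Analogous to a merge that layers user options over defaults: user wins.
  static Map<String, String> merging(Map<String, String> userConf,
                                     Map<String, String> defaults) {
    Map<String, String> out = new HashMap<>(defaults);
    out.putAll(userConf);          // defaults only fill in missing keys
    return out;
  }

  public static void main(String[] args) {
    Map<String, String> defaults = new HashMap<>();
    defaults.put("hbase.client.scanner.caching", "100");
    Map<String, String> user = new HashMap<>();
    user.put("hbase.client.scanner.caching", "500"); // user-supplied override

    System.out.println(clobbering(user, defaults).get("hbase.client.scanner.caching")); // 100
    System.out.println(merging(user, defaults).get("hbase.client.scanner.caching"));    // 500
  }
}
```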





[jira] [Commented] (HBASE-11364) [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037811#comment-14037811
 ] 

stack commented on HBASE-11364:
---

bq. We can also make the config option default to on going forward. That'd be 
almost identical to your option #2, only that it could be disabled via a config. 
But maybe we do not want more configs?

No more configs!

Maybe go #1 for a release then in 1.1, enable bucket cache as default.  Thanks 
[~lhofhansl]

> [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache
> --
>
> Key: HBASE-11364
> URL: https://issues.apache.org/jira/browse/HBASE-11364
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 11364.txt
>
>
> This is a prerequisite for HBASE-11323 BucketCache on all the time.  It 
> addresses a @lars hofhansl ask: that for some column families, even their 
> data blocks get cached in the LruBlockCache L1 tier of a multi-tier deploy, 
> as happens in BucketCache (CombinedBlockCache) setups.





[jira] [Commented] (HBASE-11348) Make frequency and sleep times of chaos monkeys configurable

2014-06-19 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037807#comment-14037807
 ] 

Vandana Ayyalasomayajula commented on HBASE-11348:
--

[~stack] yes, the options 
{code}
 Options: -h,--help Show usage
 -m,--monkey  Which chaos monkey to run
 -monkeyProps  The properties file for specifying chaos monkey 
properties.
-ncc Option to not clean up the cluster at the end.
{code} 

show up when any user executes the command bin/hbase 
 --help or -h.
I will shortly update the JIRA with a one-paragraph description of the change.

> Make frequency and sleep times of  chaos monkeys configurable 
> --
>
> Key: HBASE-11348
> URL: https://issues.apache.org/jira/browse/HBASE-11348
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.3
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
>  Labels: integration-tests
> Attachments: HBASE-11348_1.patch, HBASE-11348_2.patch
>
>
> Currently the chaos monkeys used in the integration tests run with a fixed 
> configuration. It would be useful to have the frequency and sleep times be 
> configurable. That would help control the chaos the monkeys are intended 
> to create.





[jira] [Updated] (HBASE-11348) Make frequency and sleep times of chaos monkeys configurable

2014-06-19 Thread Vandana Ayyalasomayajula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vandana Ayyalasomayajula updated HBASE-11348:
-

Attachment: HBASE-11348_2.patch

Rebased patch for trunk.

> Make frequency and sleep times of  chaos monkeys configurable 
> --
>
> Key: HBASE-11348
> URL: https://issues.apache.org/jira/browse/HBASE-11348
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.3
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
>  Labels: integration-tests
> Attachments: HBASE-11348_1.patch, HBASE-11348_2.patch
>
>
> Currently the chaos monkeys used in the integration tests run with a fixed 
> configuration. It would be useful to have the frequency and sleep times be 
> configurable. That would help control the chaos the monkeys are intended 
> to create.





[jira] [Commented] (HBASE-11364) [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache

2014-06-19 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037773#comment-14037773
 ] 

Lars Hofhansl commented on HBASE-11364:
---

Yeah option #1.
I.e.:
* Have config in hbase-site.xml to enable the bucketcache (default off)
* when bucket cache is enabled via config it is the default (not schema changes 
required)
* folks can selectively pull tables into L1 only via a schema change

We can also make the config option default to on going forward. That'd be almost 
identical to your option #2, only that it could be disabled via a config. But 
maybe we do not want more configs?

#3 will be incredibly hard to get right for all cases and lead to double 
caching and potentially even more GC as we churn blocks through L2 to L1 and 
back.
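In config terms, option #1 might look like the following hbase-site.xml fragment; this is a sketch (hbase.bucketcache.ioengine and hbase.bucketcache.size are the relevant knobs, but the values here are illustrative), and pulling a table's data blocks into L1 would then be the per-family schema change this issue adds:

```xml
<!-- Sketch: enable the off-heap BucketCache explicitly (option #1). -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <!-- Bucket cache size; illustrative value. -->
  <name>hbase.bucketcache.size</name>
  <value>4196</value>
</property>
```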


> [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache
> --
>
> Key: HBASE-11364
> URL: https://issues.apache.org/jira/browse/HBASE-11364
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 11364.txt
>
>
> This is a prerequisite for HBASE-11323 BucketCache on all the time.  It 
> addresses a @lars hofhansl ask: that for some column families, even their 
> data blocks get cached in the LruBlockCache L1 tier of a multi-tier deploy, 
> as happens in BucketCache (CombinedBlockCache) setups.





[jira] [Commented] (HBASE-11348) Make frequency and sleep times of chaos monkeys configurable

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037770#comment-14037770
 ] 

stack commented on HBASE-11348:
---

Do these options show as usage when I type in -h or --help for IT test?

{code}
 * Options: -h,--help Show usage
 *  -m,--monkey  Which chaos monkey to run
 *  -monkeyProps  The properties file for specifying chaos monkey 
properties.
 *  -ncc Option to not clean up the cluster at the end.
{code}

... or are they only in the class comment?  i.e. do they show when I do:

 bin/hbase   -h|--help

?  Maybe there is no help for these tests?

For the book, just write a paragraph on how you'd use the new properties file 
and I'll shove it in on commit, or maybe I can amend your release note.

> Make frequency and sleep times of  chaos monkeys configurable 
> --
>
> Key: HBASE-11348
> URL: https://issues.apache.org/jira/browse/HBASE-11348
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.3
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
>  Labels: integration-tests
> Attachments: HBASE-11348_1.patch
>
>
> Currently the chaos monkeys used in the integration tests run with a fixed 
> configuration. It would be useful to have the frequency and sleep times be 
> configurable. That would help control the chaos the monkeys are intended 
> to create.





[jira] [Commented] (HBASE-11285) Expand coprocs info in Ref Guide

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037765#comment-14037765
 ] 

stack commented on HBASE-11285:
---

Maybe the mighty [~ghelmling] will come by and review this latest?

> Expand coprocs info in Ref Guide
> 
>
> Key: HBASE-11285
> URL: https://issues.apache.org/jira/browse/HBASE-11285
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, documentation
>Affects Versions: 0.98.3
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11285-1.patch, HBASE-11285.patch
>
>






[jira] [Commented] (HBASE-11364) [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037761#comment-14037761
 ] 

stack commented on HBASE-11364:
---

[~lhofhansl] So you are for option #1 (unless option #3 shows well in testing I 
suppose).

> [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache
> --
>
> Key: HBASE-11364
> URL: https://issues.apache.org/jira/browse/HBASE-11364
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 11364.txt
>
>
> This is a prerequisite for HBASE-11323 BucketCache on all the time.  It 
> addresses a @lars hofhansl ask: that for some column families, even their 
> data blocks get cached in the LruBlockCache L1 tier of a multi-tier deploy, 
> as happens in BucketCache (CombinedBlockCache) setups.





[jira] [Commented] (HBASE-11364) [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache

2014-06-19 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037753#comment-14037753
 ] 

Lars Hofhansl commented on HBASE-11364:
---

I was saying the way you have it is good :)

> [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache
> --
>
> Key: HBASE-11364
> URL: https://issues.apache.org/jira/browse/HBASE-11364
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 11364.txt
>
>
> This is a prerequisite for HBASE-11323 BucketCache on all the time.  It 
> addresses a @lars hofhansl ask: that for some column families, even their 
> data blocks get cached in the LruBlockCache L1 tier of a multi-tier deploy, 
> as happens in BucketCache (CombinedBlockCache) setups.





[jira] [Commented] (HBASE-11323) BucketCache all the time!

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037749#comment-14037749
 ] 

stack commented on HBASE-11323:
---

Options are shaping up as follows (copied from HBASE-11364):

1. Do NOT enable offheap by default. Just talk it up as the way to go 
underlining it will make pure in-memory access slower (but you can make it so 
some of your tables are pegged in memory if you want because of the flag here). 
Upside: No surprise. Downside: Folks don't read manuals nor change defaults.
2. Enable offheap BucketCache using CombinedBlockCache. When folks upgrade, 
latency to user-level DATA blocks will go up. Upsides: less GC, more cached. 
Downside: those who notice added latency might get upset. Changing schema will 
require an alter table.
3. Enable offheap BucketCache but in additive mode where we just add in an L2 
under the L1 LruBlockCache. Upside: Additive. Downside: Could make for more GC.
Of the above, maybe 1. is the way to go? 2. may surprise in that perf and GC 
get better all of a sudden (this would be ok) but others may be surprised that 
their latencies have gone up for some key tables. 3. may actually make GC worse 
(at least that is the case with SlabCache, which is similar, and the L1/L2 
layout hasn't gotten a good review to date, going by HBASE-8894).
Will test 3.

> BucketCache all the time!
> -
>
> Key: HBASE-11323
> URL: https://issues.apache.org/jira/browse/HBASE-11323
> Project: HBase
>  Issue Type: Sub-task
>  Components: io
>Reporter: stack
> Fix For: 0.99.0
>
> Attachments: ReportBlockCache.pdf
>
>
> One way to realize the parent issue is to just enable bucket cache all the 
> time; i.e. always have offheap enabled.  Would have to do some work to make 
> it drop-dead simple on initial setup (I think it's doable).
> So, upside would be the offheap upsides (less GC, less likely to go away and 
> never come back because of full GC when heap is large, etc.).
> Downside is higher latency.   In Nick's BlockCache 101 there is little to no 
> difference between onheap and offheap.  In a basic compare doing scans and 
> gets -- details to follow -- I have BucketCache deploy about 20% fewer ops 
> than LRUBC when all in cache and maybe 10% fewer ops when falling out of cache. 
>   I can't tell a difference in means, and 95th and 99th are roughly the same (more 
> stable with BucketCache).  GC profile is much better with BucketCache -- way 
> less.  BucketCache uses about 7% more user CPU.
> More detail on comparison to follow.
> I think the numbers disagree enough we should probably do the [~lhofhansl] 
> suggestion, that we allow you to have a table sit in LRUBC, something the 
> current bucket cache layout does not do.





[jira] [Commented] (HBASE-11364) [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037747#comment-14037747
 ] 

stack commented on HBASE-11364:
---

Moving the last comment to the more appropriate issue, HBASE-11323 bucketcache 
on all the time.

> [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache
> --
>
> Key: HBASE-11364
> URL: https://issues.apache.org/jira/browse/HBASE-11364
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 11364.txt
>
>
> This is a prerequisite for HBASE-11323 BucketCache on all the time.  It 
> addresses a @lars hofhansl ask: that for some column families, even their 
> data blocks get cached in the LruBlockCache L1 tier of a multi-tier deploy, 
> as happens in BucketCache (CombinedBlockCache) setups.





[jira] [Commented] (HBASE-11364) [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037745#comment-14037745
 ] 

stack commented on HBASE-11364:
---

[~lhofhansl]

bq. I can see this both ways. Since off-heap caching is a separate option, your 
way is better: once the off-heap cache is enabled, data goes there by default.

Come again Lars.  I don't follow.  Options are:

1. Do NOT enable offheap by default.  Just talk it up as the way to go 
underlining it will make pure in-memory access slower (but you can make it so 
some of your tables are pegged in memory if you want because of the flag here). 
 Upside: No surprise.  Downside: Folks don't read manuals nor change defaults.
2. Enable offheap BucketCache using CombinedBlockCache. When folks upgrade, 
latency to user-level DATA blocks will go up.  Upsides: less GC, more cached.  
Downside: those who notice added latency might get upset.  Changing schema will 
require an alter table.
3. Enable offheap BucketCache but in additive mode where we just add in an L2 
under the L1 LruBlockCache.  Upside: Additive.  Downside: Could make for more 
GC.

Of the above, maybe 1. is the way to go?  2. may surprise in that perf and GC 
get better all of a sudden (this would be ok) but others may be surprised that 
their latencies have gone up for some key tables.  3. may actually make GC 
worse (at least that is the case with SlabCache, which is similar, and the 
L1/L2 layout hasn't gotten a good review to date, going by HBASE-8894).

Will test 3.



> [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache
> --
>
> Key: HBASE-11364
> URL: https://issues.apache.org/jira/browse/HBASE-11364
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 11364.txt
>
>
> This is a prerequisite for HBASE-11323 BucketCache on all the time.  It 
> addresses a @lars hofhansl ask: that for some column families, even their 
> data blocks get cached in the LruBlockCache L1 tier of a multi-tier deploy, 
> as happens in BucketCache (CombinedBlockCache) setups.





[jira] [Commented] (HBASE-11338) Expand documentation on bloom filters

2014-06-19 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037722#comment-14037722
 ] 

stack commented on HBASE-11338:
---

Very nice.

Blooms only work for Gets, not Scans (you say both in the doc).

Blooms are enabled since 0.96.0 (see HBASE-8450 
https://github.com/apache/hbase/commit/b5146ebf6e3a1beaaebb42853862ad6de9d4b2cf)

Otherwise, the doc is excellent.  If you want me to just fix the above on 
commit, just say.  No problem.
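Why blooms help Gets but not Scans follows from what a bloom filter can answer: a point lookup can ask "might this exact key be in this store file?" and skip the file on a negative, while a scan covers a key range and has no single key to probe. A toy row-key bloom filter (not HBase's implementation; sizes and hashing are illustrative) makes this concrete:

```java
import java.util.BitSet;

// Toy row-key Bloom filter: membership test for exact keys only. A Get probes
// mightContain(rowKey) per store file and skips files that answer false; a
// Scan has no single key to probe, so the filter cannot help it.
public class Main {
  private final BitSet bits = new BitSet(1024);

  void add(String rowKey) {
    for (int h : hashes(rowKey)) bits.set(h);
  }

  boolean mightContain(String rowKey) {
    for (int h : hashes(rowKey)) {
      if (!bits.get(h)) return false; // definitely absent -> a Get can skip this file
    }
    return true; // possibly present -> the file must be read
  }

  // Two cheap derived hash positions; illustrative, not HBase's hashing.
  private static int[] hashes(String key) {
    int h1 = key.hashCode();
    int h2 = h1 * 31 + key.length();
    return new int[] { Math.floorMod(h1, 1024), Math.floorMod(h2, 1024) };
  }

  public static void main(String[] args) {
    Main bloom = new Main();
    bloom.add("row-0001");
    System.out.println(bloom.mightContain("row-0001")); // true: no false negatives
    // An absent key such as "row-9999" will very likely (though not certainly)
    // be reported absent, letting a Get skip this file entirely.
  }
}
```

The guarantee that matters is no false negatives: an added key always answers true, so skipping a file on a false answer is always safe.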

> Expand documentation on bloom filters
> -
>
> Key: HBASE-11338
> URL: https://issues.apache.org/jira/browse/HBASE-11338
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11338.patch
>
>
> Ref Guide  could use more info on bloom filters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11378) TableMapReduceUtil overwrites user supplied options for multiple tables/scaners job

2014-06-19 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11378:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Integrated into trunk and 0.98. Thanks. The zombie test could be 
TestHRegionBusyWait since I didn't see it in the test report. However, this 
test should not be affected.  It seems to be fine locally.

> TableMapReduceUtil overwrites user supplied options for multiple 
> tables/scaners job
> ---
>
> Key: HBASE-11378
> URL: https://issues.apache.org/jira/browse/HBASE-11378
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0, 0.98.4
>
> Attachments: hbase-11378.patch
>
>
> In TableMapReduceUtil#initTableMapperJob, we have
> HBaseConfiguration.addHbaseResources(job.getConfiguration());
> It should use merge instead. Otherwise, user-supplied options are overwritten.




