[jira] [Commented] (HBASE-14599) Modify site config to use protocol-relative URLs for CSS/JS

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956143#comment-14956143
 ] 

Hudson commented on HBASE-14599:


SUCCESS: Integrated in HBase-TRUNK #6905 (See 
[https://builds.apache.org/job/HBase-TRUNK/6905/])
HBASE-14599 Modify site config to use protocol-relative URLs for CSS/JS 
(mstanleyjones: rev 73f4c550860158a8b54377cbc1dff0a8bdae)
* src/main/site/site.xml


> Modify site config to use protocol-relative URLs for CSS/JS
> ---
>
> Key: HBASE-14599
> URL: https://issues.apache.org/jira/browse/HBASE-14599
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-14599.patch
>
>
> Parts of the website are served over https, and a directive is needed in the 
> site configuration to make sure that the CSS and Javascript are not served 
> over http in such cases, which causes some browsers to block the CSS and 
> Javascript. Reflow has a configuration for this.
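For context, a protocol-relative URL drops the scheme (e.g. {{//builds.apache.org/style.css}}) so an asset loads over whatever protocol served the page. The Reflow skin exposes a flag for this in site.xml; the fragment below is an illustrative sketch of that configuration (element nesting abbreviated), not the exact patch:

```xml
<project>
  <custom>
    <reflowSkin>
      <!-- Emit //host/path URLs for CSS/JS instead of http://host/path,
           so https-served pages never pull assets over plain http. -->
      <protocolRelativeURLs>true</protocolRelativeURLs>
    </reflowSkin>
  </custom>
</project>
```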



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14355) Scan different TimeRange for each column family

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956160#comment-14956160
 ] 

Hadoop QA commented on HBASE-14355:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12766413/HBASE-14355.patch
  against master branch at commit 874437cc5859b09480e783651004613cabc7510e.
  ATTACHMENT ID: 12766413

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1758 checkstyle errors (more than the master's current 1753 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  "ualifier\030\002 
\003(\014\"\226\003\n\003Get\022\013\n\003row\030\001 \002(\014\022 \n\006c" +
+  "\030\002 \001(\004\022\024\n\014more_results\030\003 
\001(\010\022\013\n\003ttl\030\004 \001(" +
+  new java.lang.String[] { "Row", "Column", "Attribute", "Filter", 
"TimeRange", "MaxVersions", "CacheBlocks", "StoreLimit", "StoreOffset", 
"ExistenceOnly", "Consistency", "CfTimeRange", });
+  new java.lang.String[] { "Column", "Attribute", "StartRow", 
"StopRow", "Filter", "TimeRange", "MaxVersions", "CacheBlocks", "BatchSize", 
"MaxResultSize", "StoreLimit", "StoreOffset", "LoadColumnFamiliesOnDemand", 
"Small", "Reversed", "Consistency", "Caching", "CfTimeRange", });
+  "\001 \002(\t\022\013\n\003url\030\002 
\002(\t\022\020\n\010revision\030\003 \002(\t\022\014\n\004" +
+  
"S\020\004\022\013\n\007MINUTES\020\005\022\t\n\005HOURS\020\006\022\010\n\004DAYS\020\007B>\n"
 +

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade.testDatanodeRollingUpgradeWithRollback(TestDataNodeRollingUpgrade.java:272)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15992//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15992//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15992//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15992//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15992//console

This message is automatically generated.

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte[] family, long minTime, long maxTime), 
> then store it in a Map<byte[], TimeRange>.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.

[jira] [Commented] (HBASE-14596) TestCellACLs failing... on 1.2 builds

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956169#comment-14956169
 ] 

Hudson commented on HBASE-14596:


FAILURE: Integrated in HBase-1.3 #260 (See 
[https://builds.apache.org/job/HBase-1.3/260/])
HBASE-14596 TestCellACLs failing... on 1.2 builds; FIX (stack: rev 
778a595a5289a71b7c562e93bedf7d7583688b99)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java


> TestCellACLs failing... on 1.2 builds
> 
>
> Key: HBASE-14596
> URL: https://issues.apache.org/jira/browse/HBASE-14596
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14596.branch-1.patch, 14596.debug.txt, 
> 14596.master.patch, 14596.txt
>
>
> Caught this in 1.7 builds:
> {code}
> "PriorityRpcServer.handler=4,queue=0,port=42214" daemon prio=10 
> tid=0x7f08d5786000 nid=0x2eed in Object.wait() [0x7f08918d5000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1659)
>   - locked <0x0007580e5f98> (a java.util.concurrent.atomic.AtomicLong)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1688)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
>   - locked <0x0007580f36f8> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.addUserPermission(AccessControlLists.java:176)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2175)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2172)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:425)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:205)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.grant(AccessController.java:2172)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7650)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1878)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32590)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2120)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> "PriorityRpcServer.handler=3,queue=1,port=42214" daemon prio=10 
> tid=0x7f08d5784000 nid=0x2eec in Object.wait() [0x7f08919d7000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1248)
>   - locked <0x0007cb61ecd8> (a org.apache.hadoop.hbase.ipc.Call)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> 

[jira] [Commented] (HBASE-14600) Make #testWalRollOnLowReplication looser still

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956168#comment-14956168
 ] 

Hudson commented on HBASE-14600:


FAILURE: Integrated in HBase-1.3 #260 (See 
[https://builds.apache.org/job/HBase-1.3/260/])
HBASE-14600 Make #testWalRollOnLowReplication looser still (stack: rev 
db880599bf432c09b9f806ca6de4e42bdb5b6be2)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestWALProcedureStoreOnHDFS.java


> Make #testWalRollOnLowReplication looser still
> --
>
> Key: HBASE-14600
> URL: https://issues.apache.org/jira/browse/HBASE-14600
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14600.txt
>
>
> The parent upped timeouts on testWalRollOnLowReplication. It still fails on 
> occasion. Chatting w/ [~mbertozzi], he suggested that if we've made progress 
> in the test, we return the test as completed successfully if we get a 
> RuntimeException out of the sync call (because the DN is slow to recover).





[jira] [Updated] (HBASE-14598) ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations

2015-10-13 Thread Ian Friedman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Friedman updated HBASE-14598:
-
Attachment: hbase-14598-v1.patch

> ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations
> --
>
> Key: HBASE-14598
> URL: https://issues.apache.org/jira/browse/HBASE-14598
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12
>Reporter: Ian Friedman
>Assignee: Ian Friedman
> Attachments: 14598.txt, hbase-14598-v1.patch
>
>
> We noticed that, when returning a Scan against a region containing particularly 
> large (wide) rows, it is possible for 
> ByteBufferOutputStream.checkSizeAndGrow() to attempt to create a new 
> ByteBuffer larger than the JVM allows, which then throws an OutOfMemoryError. 
> The code currently caps it at Integer.MAX_VALUE, which is actually larger than 
> the JVM allows. This led to us dealing with cascading region server death as 
> the RegionServer hosting the region died, opened on a new server, the client 
> retried the scan, and the new RS died as well. 
> I believe ByteBufferOutputStream should not try to create ByteBuffers that 
> large and instead throw an exception back up if it needs to grow any bigger. 
> The limit should probably be something like Integer.MAX_VALUE-8, as that is 
> what ArrayList uses. ref: 
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/util/ArrayList.java#221
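The proposed cap can be sketched in isolation. This is a minimal illustration of the growth policy, not the actual ByteBufferOutputStream code; the MAX_ARRAY_SIZE constant mirrors ArrayList's Integer.MAX_VALUE - 8:

```java
class SafeGrowSketch {
    // Largest array size most JVMs will actually allocate; mirrors java.util.ArrayList.
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    // Returns a new capacity of at least minCapacity, doubling the current one,
    // but refusing to exceed MAX_ARRAY_SIZE rather than risking OutOfMemoryError.
    static int grownCapacity(int current, int minCapacity) {
        if (minCapacity > MAX_ARRAY_SIZE) {
            throw new IllegalStateException(
                "Buffer would exceed maximum array size: " + minCapacity);
        }
        long doubled = (long) current * 2;          // long math avoids int overflow
        long grown = Math.max(doubled, minCapacity);
        return (int) Math.min(grown, MAX_ARRAY_SIZE);
    }

    public static void main(String[] args) {
        System.out.println(grownCapacity(1024, 1500));   // doubles to 2048
        // Near the limit, growth saturates at MAX_ARRAY_SIZE instead of overflowing.
        System.out.println(grownCapacity(Integer.MAX_VALUE / 2, Integer.MAX_VALUE / 2 + 1));
    }
}
```

Doing the doubling in long arithmetic, then clamping, is what keeps the computation from wrapping negative before the cap is applied.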





[jira] [Commented] (HBASE-14564) Fix and reenable TestHFileOutputFormat2

2015-10-13 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956230#comment-14956230
 ] 

Heng Chen commented on HBASE-14564:
---

{quote}
Do we have another test suite that does increment load? Could go there?
{quote}

I found one test case, {{TestLoadIncrementalHFiles}}, that does incremental load, 
but it does not seem to use MapReduce.  Should we test incremental load with MR in 
this test case or create a new one?



> Fix and reenable TestHFileOutputFormat2
> ---
>
> Key: HBASE-14564
> URL: https://issues.apache.org/jira/browse/HBASE-14564
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>
> Was disabled as part of the zombie stomping session over in HBASE-14420. Test 
> needs a rewrite and/or being split up. Scope of the test needs to be shrunk 
> and made more targeted. Currently it does everything.





[jira] [Commented] (HBASE-14591) Region with reference hfile may split after a forced split in IncreasingToUpperBoundRegionSplitPolicy

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956264#comment-14956264
 ] 

Hudson commented on HBASE-14591:


FAILURE: Integrated in HBase-1.3 #261 (See 
[https://builds.apache.org/job/HBase-1.3/261/])
HBASE-14591 Region with reference hfile may split after a forced split 
(liushaohui: rev 1a163b7ab71f7e1b5958102bb4e059e9e68be729)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.java


> Region with reference hfile may split after a forced split in 
> IncreasingToUpperBoundRegionSplitPolicy
> -
>
> Key: HBASE-14591
> URL: https://issues.apache.org/jira/browse/HBASE-14591
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.15
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: HBASE-14591-v1.patch
>
>
> In the IncreasingToUpperBoundRegionSplitPolicy, a region with a store holding 
> an hfile reference may split after a forced split. This breaks many design 
> assumptions.
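The natural fix direction is a guard in the split policy: never split while any store still holds reference files. A self-contained sketch of that guard, using a stand-in Store type rather than HBase's actual classes:

```java
import java.util.List;

class SplitGuardSketch {
    // Stand-in for an HBase store: tracks whether it still holds
    // reference files pointing back at a parent region's hfiles.
    record Store(boolean hasReferences) {}

    // A region must not split again until every store has compacted away
    // its reference files; splitting a reference hfile would break the
    // parent/daughter file accounting that region splits rely on.
    static boolean canSplit(List<Store> stores) {
        return stores.stream().noneMatch(Store::hasReferences);
    }

    public static void main(String[] args) {
        // One store still carries references, so the region may not split yet.
        System.out.println(canSplit(List.of(new Store(false), new Store(true))));
    }
}
```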





[jira] [Commented] (HBASE-14283) Reverse scan doesn’t work with HFile inline index/bloom blocks

2015-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956263#comment-14956263
 ] 

Anoop Sam John commented on HBASE-14283:


No, it is for backward scan only.  [~benlau], please confirm if I am wrong.

> Reverse scan doesn’t work with HFile inline index/bloom blocks
> --
>
> Key: HBASE-14283
> URL: https://issues.apache.org/jira/browse/HBASE-14283
> Project: HBase
>  Issue Type: Bug
>Reporter: Ben Lau
>Assignee: Ben Lau
> Attachments: HBASE-14283-0.98.patch, HBASE-14283-branch-1.0.patch, 
> HBASE-14283-branch-1.1.patch, HBASE-14283-branch-1.2.patch, 
> HBASE-14283-branch-1.patch, HBASE-14283-master.patch, HBASE-14283-v2.patch, 
> HBASE-14283.patch, hfile-seek-before.patch
>
>
> Reverse scans do not work if an HFile contains inline bloom blocks or leaf 
> level index blocks.  The reason is because the seekBefore() call calculates 
> the previous data block’s size by assuming data blocks are contiguous which 
> is not the case in HFile V2 and beyond.
> Attached is a first cut patch (targeting 
> bcef28eefaf192b0ad48c8011f98b8e944340da5 on trunk) which includes:
> (1) a unit test which exposes the bug and demonstrates failures for both 
> inline bloom blocks and inline index blocks
> (2) a proposed fix for inline index blocks that does not require a new HFile 
> version change, but is only performant for 1 and 2-level indexes and not 3+.  
> 3+ requires an HFile format update for optimal performance.
> This patch does not fix the bloom filter blocks bug.  But the fix should be 
> similar to the case of inline index blocks.  The reason I haven’t made the 
> change yet is I want to confirm that you guys would be fine with me revising 
> the HFile.Reader interface.
> Specifically, these 2 functions (getGeneralBloomFilterMetadata and 
> getDeleteBloomFilterMetadata) need to return the BloomFilter.  Right now the 
> HFileReader class doesn’t have a reference to the bloom filters (and hence 
> their indices) and only constructs the IO streams and hence has no way to 
> know where the bloom blocks are in the HFile.  It seems that the HFile.Reader 
> bloom method comments state that they “know nothing about how that metadata 
> is structured” but I do not know if that is a requirement of the abstraction 
> (why?) or just an incidental current property. 
> We would like to do 3 things with community approval:
> (1) Update the HFile.Reader interface and implementation to contain and 
> return BloomFilters directly rather than unstructured IO streams
> (2) Merge the fixes for index blocks and bloom blocks into open source
> (3) Create a new Jira ticket for open source HBase to add a ‘prevBlockSize’ 
> field in the block header in the next HFile version, so that seekBefore() 
> calls can not only be correct but performant in all cases.
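The contiguity assumption can be illustrated with offsets alone: if data blocks were back-to-back, the previous data block's on-disk size would simply be the gap between consecutive data-block offsets. An inline bloom or leaf-index block between them inflates that gap, so a reader trusting it mis-sizes the previous block. A toy model with invented offsets:

```java
class SeekBeforeSketch {
    // Naive previous-block size under the "data blocks are contiguous"
    // assumption: the byte gap between two consecutive data-block offsets.
    static long assumedPrevBlockSize(long prevDataBlockOffset, long curDataBlockOffset) {
        return curDataBlockOffset - prevDataBlockOffset;
    }

    public static void main(String[] args) {
        long prevOffset = 0;          // previous data block starts here
        long prevActualSize = 4096;   // its real on-disk size
        long inlineBlockSize = 512;   // an inline leaf-index block sits in between
        long curOffset = prevActualSize + inlineBlockSize;

        // The naive estimate includes the inline block: 4608 bytes, not 4096,
        // so a seekBefore() using it would read past the real block boundary.
        System.out.println(assumedPrevBlockSize(prevOffset, curOffset));
    }
}
```

A prevBlockSize field in the block header, as proposed, would let the reader know the true size without any contiguity assumption.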





[jira] [Commented] (HBASE-14505) Reenable tests disabled by HBASE-14430 in TestHttpServerLifecycle

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956303#comment-14956303
 ] 

Hadoop QA commented on HBASE-14505:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12766451/HBASE-14505.2.patch
  against master branch at commit 0e41dc18c0fb6a1533eff549f7836c0c9f5d4685.
  ATTACHMENT ID: 12766451

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.namespace.TestNamespaceAuditor.testRegionOperations(TestNamespaceAuditor.java:469)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15994//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15994//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15994//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15994//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15994//console

This message is automatically generated.

> Reenable tests disabled by HBASE-14430 in TestHttpServerLifecycle
> -
>
> Key: HBASE-14505
> URL: https://issues.apache.org/jira/browse/HBASE-14505
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: stack
>  Labels: beginner
> Attachments: HBASE-14505.1.patch, HBASE-14505.2.patch
>
>
> Probably needs newer version of jetty or some cryptic JVM version.
> See HBASE-14430 for litany on how hard this is to reproduce, not only in 
> hbase, but back up in the Jetty where the issue was also reported.





[jira] [Commented] (HBASE-14588) Stop accessing test resources from within src folder

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956128#comment-14956128
 ] 

Hudson commented on HBASE-14588:


FAILURE: Integrated in HBase-0.98 #1154 (See 
[https://builds.apache.org/job/HBase-0.98/1154/])
HBASE-14588 Stop accessing test resources from within src folder (Andrew 
Wang) (stack: rev 80b40e3daf6414c946bf4ea7acae61159ea32775)
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestReference.java
* 
hbase-server/src/test/data/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* 
hbase-server/src/test/resources/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* pom.xml


> Stop accessing test resources from within src folder
> 
>
> Key: HBASE-14588
> URL: https://issues.apache.org/jira/browse/HBASE-14588
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: hbase-14588.001.patch, hbase-14588.001.patch, 
> hbase-14588.002.patch, hbase-14588.branch-1.1.001.patch
>
>
> A few tests in hbase-server reach into the src/test/data folder to get test 
> resources, which is naughty since tests are supposed to only operate within 
> the target/ folder. It's better to put these into src/test/resources and let 
> them be automatically copied into target/ via the resources plugin, like 
> other test resources.





[jira] [Commented] (HBASE-14256) Flush task message may be confusing when region is recovered

2015-10-13 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956137#comment-14956137
 ] 

Gabor Liptak commented on HBASE-14256:
--

The compile failure seems unrelated.

> Flush task message may be confusing when region is recovered
> 
>
> Key: HBASE-14256
> URL: https://issues.apache.org/jira/browse/HBASE-14256
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Lars George
>Assignee: Gabor Liptak
>  Labels: beginner
> Attachments: HBASE-14256.1.patch
>
>
> In {{HRegion.setRecovering()}} we have this code:
> {code}
> // force a flush only if region replication is set up for this region. 
> Otherwise no need.
>   boolean forceFlush = getTableDesc().getRegionReplication() > 1;
>   // force a flush first
>   MonitoredTask status = TaskMonitor.get().createStatus(
> "Flushing region " + this + " because recovery is finished");
>   try {
> if (forceFlush) {
>   internalFlushcache(status);
> }
> {code}
> So we only optionally force flush after a recovery of a region, but the 
> message always is set to "Flushing...", which might be confusing. We should 
> change the message based on {{forceFlush}}.
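One way to make the task message track {{forceFlush}}, sketched outside the real HRegion code (the method name and message strings here are illustrative, not the committed fix):

```java
class RecoveryStatusSketch {
    // Pick a task-monitor message that matches what will actually happen:
    // only regions with region replication enabled get force-flushed on recovery.
    static String recoveryStatusMessage(String regionName, boolean forceFlush) {
        return forceFlush
            ? "Flushing region " + regionName + " because recovery is finished"
            : "Not flushing region " + regionName
                + " on recovery; region replication is not enabled";
    }

    public static void main(String[] args) {
        System.out.println(recoveryStatusMessage("r1", true));
        System.out.println(recoveryStatusMessage("r1", false));
    }
}
```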





[jira] [Closed] (HBASE-14591) Region with reference hfile may split after a forced split in IncreasingToUpperBoundRegionSplitPolicy

2015-10-13 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui closed HBASE-14591.
---

> Region with reference hfile may split after a forced split in 
> IncreasingToUpperBoundRegionSplitPolicy
> -
>
> Key: HBASE-14591
> URL: https://issues.apache.org/jira/browse/HBASE-14591
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.15
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: HBASE-14591-v1.patch
>
>
> In the IncreasingToUpperBoundRegionSplitPolicy, a region with a store holding 
> an hfile reference may split after a forced split. This breaks many design 
> assumptions.





[jira] [Commented] (HBASE-14355) Scan different TimeRange for each column family

2015-10-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956198#comment-14956198
 ] 

Ted Yu commented on HBASE-14355:


{code}
private Map<byte[], TimeRange> cftr = new TreeMap<byte[], TimeRange>(Bytes.BYTES_COMPARATOR);
{code}
Name the variable colFamTimeRangeMap.
{code}
import org.apache.hadoop.hbase.KeyValueUtil;
import org.apache.hadoop.hbase.*;
{code}
Can you keep individual imports ?

Can you add more tests ?

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte[] family, long minTime, long maxTime), 
> then store it in a Map<byte[], TimeRange>.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 
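The per-family map with table-level fallback described above can be sketched as follows. TimeRange is a minimal stand-in for HBase's class, and the lexicographic comparator plays the role of Bytes.BYTES_COMPARATOR:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

class CfTimeRangeSketch {
    // Minimal stand-in for HBase's TimeRange.
    record TimeRange(long min, long max) {}

    // byte[] has identity-based equals/hashCode, so a comparator-driven
    // TreeMap is needed for content-based lookup of family keys.
    static final Comparator<byte[]> BYTES = (a, b) -> Arrays.compare(a, b);

    final Map<byte[], TimeRange> colFamTimeRangeMap = new TreeMap<>(BYTES);
    final TimeRange tableTimeRange;

    CfTimeRangeSketch(TimeRange tableTimeRange) {
        this.tableTimeRange = tableTimeRange;
    }

    void setTimeRange(byte[] family, long min, long max) {
        colFamTimeRangeMap.put(family, new TimeRange(min, max));
    }

    // Per-family range if one was set, otherwise fall back to the table level.
    TimeRange effectiveTimeRange(byte[] family) {
        return colFamTimeRangeMap.getOrDefault(family, tableTimeRange);
    }

    public static void main(String[] args) {
        CfTimeRangeSketch scan = new CfTimeRangeSketch(new TimeRange(0L, Long.MAX_VALUE));
        scan.setTimeRange(new byte[]{'a'}, 100L, 200L);
        System.out.println(scan.effectiveTimeRange(new byte[]{'a'}));  // per-family range
        System.out.println(scan.effectiveTimeRange(new byte[]{'b'}));  // table-level fallback
    }
}
```

The fallback in effectiveTimeRange is exactly the behavior the proposal asks of the server side: honor a family-specific range when present, otherwise behave like today's table-level scan.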





[jira] [Updated] (HBASE-14598) ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations

2015-10-13 Thread Ian Friedman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Friedman updated HBASE-14598:
-
Status: Patch Available  (was: Open)

> ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations
> --
>
> Key: HBASE-14598
> URL: https://issues.apache.org/jira/browse/HBASE-14598
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12
>Reporter: Ian Friedman
>Assignee: Ian Friedman
> Attachments: 14598.txt
>
>
> We noticed that, when returning a Scan against a region containing particularly 
> large (wide) rows, it is possible for 
> ByteBufferOutputStream.checkSizeAndGrow() to attempt to create a new 
> ByteBuffer larger than the JVM allows, which then throws an OutOfMemoryError. 
> The code currently caps it at Integer.MAX_VALUE, which is actually larger than 
> the JVM allows. This led to us dealing with cascading region server death as 
> the RegionServer hosting the region died, opened on a new server, the client 
> retried the scan, and the new RS died as well. 
> I believe ByteBufferOutputStream should not try to create ByteBuffers that 
> large and instead throw an exception back up if it needs to grow any bigger. 
> The limit should probably be something like Integer.MAX_VALUE-8, as that is 
> what ArrayList uses. ref: 
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/util/ArrayList.java#221





[jira] [Updated] (HBASE-14602) Convert HBasePoweredBy Wiki page to a hbase.apache.org page

2015-10-13 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-14602:

Summary: Convert HBasePoweredBy Wiki page to a hbase.apache.org page  (was: 
Converted HBasePoweredBy Wiki page to a hbase.apache.org page)

> Convert HBasePoweredBy Wiki page to a hbase.apache.org page
> ---
>
> Key: HBASE-14602
> URL: https://issues.apache.org/jira/browse/HBASE-14602
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
>
> Leave all the info as it is, add a disclaimer about the accuracy of the info 
> and info on how to get yourself added/updated (email hbase-dev or file a JIRA 
> -- we want the hurdle to be much lower than it has been). Redirect the wiki 
> page to the new site.





[jira] [Created] (HBASE-14602) Converted HBasePoweredBy Wiki page to a hbase.apache.org page

2015-10-13 Thread Misty Stanley-Jones (JIRA)
Misty Stanley-Jones created HBASE-14602:
---

 Summary: Converted HBasePoweredBy Wiki page to a hbase.apache.org 
page
 Key: HBASE-14602
 URL: https://issues.apache.org/jira/browse/HBASE-14602
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.0.0
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Fix For: 2.0.0


Leave all the info as it is, add a disclaimer about the accuracy of the info 
and info on how to get yourself added/updated (email hbase-dev or file a JIRA 
-- we want the hurdle to be much lower than it has been). Redirect the wiki 
page to the new site.





[jira] [Updated] (HBASE-14602) Convert HBasePoweredBy Wiki page to a hbase.apache.org page

2015-10-13 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-14602:

Description: 
https://wiki.apache.org/hadoop/Hbase/PoweredBy

Leave all the info as it is, add a disclaimer about the accuracy of the info 
and info on how to get yourself added/updated (email hbase-dev or file a JIRA 
-- we want the hurdle to be much lower than it has been). Redirect the wiki 
page to the new site.

  was:Leave all the info as it is, add a disclaimer about the accuracy of the 
info and info on how to get yourself added/updated (email hbase-dev or file a 
JIRA -- we want the hurdle to be much lower than it has been). Redirect the 
wiki page to the new site.


> Convert HBasePoweredBy Wiki page to a hbase.apache.org page
> ---
>
> Key: HBASE-14602
> URL: https://issues.apache.org/jira/browse/HBASE-14602
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
>
> https://wiki.apache.org/hadoop/Hbase/PoweredBy
> Leave all the info as it is, add a disclaimer about the accuracy of the info 
> and info on how to get yourself added/updated (email hbase-dev or file a JIRA 
> -- we want the hurdle to be much lower than it has been). Redirect the wiki 
> page to the new site.





[jira] [Commented] (HBASE-14521) Unify the semantic of hbase.client.retries.number

2015-10-13 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956248#comment-14956248
 ] 

Yu Li commented on HBASE-14521:
---

{quote}
I found a typo that I will fix on commit
{quote}
Yes please, thanks for the careful review [~nkeywal]!

> Unify the semantic of hbase.client.retries.number
> -
>
> Key: HBASE-14521
> URL: https://issues.apache.org/jira/browse/HBASE-14521
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14, 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14521.patch, HBASE-14521_v2.patch, 
> HBASE-14521_v3.patch
>
>
> From the name of the _hbase.client.retries.number_ property, it should be the 
> maximum number of *retries*; that is, if we set the property to 1, there should 
> be 2 attempts in total. However, there are two different semantics in use in 
> the current code base.
> For example, in ConnectionImplementation#locateRegionInMeta:
> {code}
> int localNumRetries = (retry ? numTries : 1);
> for (int tries = 0; true; tries++) {
>   if (tries >= localNumRetries) {
> throw new NoServerForRegionException("Unable to find region for "
> + Bytes.toStringBinary(row) + " in " + tableName +
> " after " + numTries + " tries.");
>   }
> {code}
> here the configured number is regarded as the maximum number of *tries*.
> While in RpcRetryingCallerImpl#callWithRetries:
> {code}
> for (int tries = 0;; tries++) {
>   long expectedSleep;
>   try {
> callable.prepare(tries != 0); // if called with false, check table 
> status on ZK
> interceptor.intercept(context.prepare(callable, tries));
> return callable.call(getRemainingTime(callTimeout));
>   } catch (PreemptiveFastFailException e) {
> throw e;
>   } catch (Throwable t) {
> ...
> if (tries >= retries - 1) {
>   throw new RetriesExhaustedException(tries, exceptions);
> }
> {code}
> here it's regarded as exactly a *REtry* count (a call is tried first 
> unconditionally, then a check decides whether to retry or whether the maximum 
> retry number is exceeded).
> This inconsistency will cause misunderstanding in usage. For example, one of 
> our customers set the property to zero expecting a single call, but instead 
> received NoServerForRegionException.
> We should unify the semantic of the property, and I suggest keeping the 
> original one: retries rather than total tries.
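To make the off-by-one concrete, here is a minimal stand-alone model of the two interpretations described above (the method names and loop bodies are illustrative, not the real client code):

```java
public class RetrySemantics {
    // "tries" semantic (locateRegionInMeta style): the configured value caps
    // the total number of attempts, so 0 means no attempt at all.
    static int attemptsAsTries(int configured) {
        int attempts = 0;
        for (int tries = 0; tries < configured; tries++) {
            attempts++; // each loop iteration is one call
        }
        return attempts;
    }

    // "retries" semantic (RpcRetryingCallerImpl style): one unconditional
    // attempt, then up to `configured` retries, i.e. configured + 1 attempts.
    static int attemptsAsRetries(int configured) {
        int attempts = 0;
        for (int tries = 0; ; tries++) {
            attempts++; // the call is made before any retry check
            if (tries >= configured) {
                break; // retries exhausted
            }
        }
        return attempts;
    }

    public static void main(String[] args) {
        // The customer scenario: property set to 0, expecting a single call.
        System.out.println(attemptsAsTries(0));   // prints 0: fails without ever trying
        System.out.println(attemptsAsRetries(0)); // prints 1: one real attempt
    }
}
```

With the property at 0, one code path throws NoServerForRegionException without making any call at all, while the other still makes one attempt, which is exactly the surprise reported above.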





[jira] [Commented] (HBASE-14493) Upgrade the jamon-runtime dependency

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956265#comment-14956265
 ] 

Hudson commented on HBASE-14493:


FAILURE: Integrated in HBase-1.3 #261 (See 
[https://builds.apache.org/job/HBase-1.3/261/])
HBASE-14493 Upgrade the jamon-runtime dependency (apurtell: rev 
1960cb94dbe535658debf5e6fdc58c14a057605a)
* pom.xml
* hbase-resource-bundle/src/main/resources/supplemental-models.xml


> Upgrade the jamon-runtime dependency
> 
>
> Key: HBASE-14493
> URL: https://issues.apache.org/jira/browse/HBASE-14493
> Project: HBase
>  Issue Type: Task
>Affects Versions: 1.1.1
>Reporter: Newton Alex
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14493-0.98.patch, HBASE-14493-branch-1.patch, 
> HBASE-14493.patch, HBASE-14493.patch
>
>
> The current version of HBase uses jamon-runtime under MPL 1.1, which has legal 
> restrictions. Newer versions of jamon-runtime appear to be MPL 2.0. HBase 
> should upgrade to a version of jamon with a safer license.
> 2.4.0 is MPL 1.1 : 
> http://grepcode.com/snapshot/repo1.maven.org/maven2/org.jamon/jamon-runtime/2.4.0
> 2.4.1 is MPL 2.0 : 
> http://grepcode.com/snapshot/repo1.maven.org/maven2/org.jamon/jamon-runtime/2.4.1
> Here’s a comparison of the equivalent sections of the respective licenses 
> dealing w/ Termination:
> MPL 1.1 - Section 8 (Termination) Subsection 2:
> 8.2. If You initiate litigation by asserting a patent infringement claim 
> (excluding declaratory judgment actions) against Initial Developer or a 
> Contributor (the Initial Developer or Contributor against whom You file such 
> action is referred to as "Participant") alleging that:
> such Participant's Contributor Version directly or indirectly infringes any 
> patent, then any and all rights granted by such Participant to You under 
> Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice from 
> Participant terminate prospectively, unless if within 60 days after receipt 
> of notice You either: (i) agree in writing to pay Participant a mutually 
> agreeable reasonable royalty for Your past and future use of Modifications 
> made by such Participant, or (ii) withdraw Your litigation claim with respect 
> to the Contributor Version against such Participant. If within 60 days of 
> notice, a reasonable royalty and payment arrangement are not mutually agreed 
> upon in writing by the parties or the litigation claim is not withdrawn, the 
> rights granted by Participant to You under Sections 2.1 and/or 2.2 
> automatically terminate at the expiration of the 60 day notice period 
> specified above.
> any software, hardware, or device, other than such Participant's Contributor 
> Version, directly or indirectly infringes any patent, then any rights granted 
> to You by such Participant under Sections 2.1(b) and 2.2(b) are revoked 
> effective as of the date You first made, used, sold, distributed, or had 
> made, Modifications made by that Participant.
> MPL 2.0 - Section 5 (Termination) Subsection 2:
> 5.2. If You initiate litigation against any entity by asserting a patent 
> infringement claim (excluding declaratory judgment actions, counter-claims, 
> and cross-claims) alleging that a Contributor Version directly or indirectly 
> infringes any patent, then the rights granted to You by any and all 
> Contributors for the Covered Software under Section 2.1 of this License shall 
> terminate.





[jira] [Commented] (HBASE-14602) Convert HBasePoweredBy Wiki page to a hbase.apache.org page

2015-10-13 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956262#comment-14956262
 ] 

Misty Stanley-Jones commented on HBASE-14602:
-

Testing: Built it locally and it looked fine.

> Convert HBasePoweredBy Wiki page to a hbase.apache.org page
> ---
>
> Key: HBASE-14602
> URL: https://issues.apache.org/jira/browse/HBASE-14602
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-14602.patch
>
>
> https://wiki.apache.org/hadoop/Hbase/PoweredBy
> Leave all the info as it is, add a disclaimer about the accuracy of the info 
> and info on how to get yourself added/updated (email hbase-dev or file a JIRA 
> -- we want the hurdle to be much lower than it has been). Redirect the wiki 
> page to the new site.





[jira] [Commented] (HBASE-14591) Region with reference hfile may split after a forced split in IncreasingToUpperBoundRegionSplitPolicy

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956292#comment-14956292
 ] 

Hudson commented on HBASE-14591:


FAILURE: Integrated in HBase-1.3-IT #234 (See 
[https://builds.apache.org/job/HBase-1.3-IT/234/])
HBASE-14591 Region with reference hfile may split after a forced split 
(liushaohui: rev 1a163b7ab71f7e1b5958102bb4e059e9e68be729)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.java


> Region with reference hfile may split after a forced split in 
> IncreasingToUpperBoundRegionSplitPolicy
> -
>
> Key: HBASE-14591
> URL: https://issues.apache.org/jira/browse/HBASE-14591
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.15
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: HBASE-14591-v1.patch
>
>
> In the IncreasingToUpperBoundRegionSplitPolicy, a region with a store 
> containing an hfile reference may split after a forced split. This breaks 
> many design assumptions.





[jira] [Updated] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner

2015-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13082:
---
Attachment: HBASE-13082_2_WIP.patch

For HadoopQA.

> Coarsen StoreScanner locks to RegionScanner
> ---
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: ramkrishna.s.vasudevan
> Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082_1_WIP.patch, 
> HBASE-13082_2_WIP.patch, gc.png, gc.png, gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized
> * Implementors of coprocessors need to be diligent in following the locking 
> contract. For example, Phoenix does not lock RegionScanner.nextRaw() as 
> required in the documentation (not picking on Phoenix; this one is my fault, 
> as I told them it's OK)
> * possible starving of flushes and compaction with heavy read load. 
> RegionScanner operations would keep getting the locks and the 
> flushes/compactions would not be able to finalize the set of files.
> I'll have a patch soon.
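The structure described above can be sketched as follows. This is an illustrative model only, with simplified stand-in class names rather than the real HBase types:

```java
import java.util.concurrent.locks.ReentrantLock;

// Coarsened locking: the region-level scanner owns the only lock, and the
// store-level scanner runs lock-free, relying on the contract that every
// caller already holds the region lock.
public class CoarseLockSketch {
    static class StoreScannerSketch {
        private int pos = 0;

        // No synchronization or memory fences here: the caller must hold
        // the region-level lock.
        String next() {
            return "cell-" + pos++;
        }
    }

    static class RegionScannerSketch {
        private final ReentrantLock lock = new ReentrantLock();
        private final StoreScannerSketch store = new StoreScannerSketch();

        // Every public entry point takes the coarse lock exactly once, so
        // the inner scanner needs none of its own. This is the contract
        // coprocessor authors must respect, e.g. around nextRaw().
        String next() {
            lock.lock();
            try {
                return store.next();
            } finally {
                lock.unlock();
            }
        }
    }

    public static void main(String[] args) {
        RegionScannerSketch scanner = new RegionScannerSketch();
        System.out.println(scanner.next()); // prints cell-0
        System.out.println(scanner.next()); // prints cell-1
    }
}
```

The drawbacks listed in the description follow directly from this shape: any caller that bypasses the region-level lock breaks the inner scanner's assumptions, and long-held region locks can starve flushes and compactions.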





[jira] [Updated] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner

2015-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13082:
---
Status: Patch Available  (was: Open)

> Coarsen StoreScanner locks to RegionScanner
> ---
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: ramkrishna.s.vasudevan
> Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082_1_WIP.patch, 
> HBASE-13082_2_WIP.patch, gc.png, gc.png, gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized
> * Implementors of coprocessors need to be diligent in following the locking 
> contract. For example, Phoenix does not lock RegionScanner.nextRaw() as 
> required in the documentation (not picking on Phoenix; this one is my fault, 
> as I told them it's OK)
> * possible starving of flushes and compaction with heavy read load. 
> RegionScanner operations would keep getting the locks and the 
> flushes/compactions would not be able to finalize the set of files.
> I'll have a patch soon.





[jira] [Commented] (HBASE-14493) Upgrade the jamon-runtime dependency

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956293#comment-14956293
 ] 

Hudson commented on HBASE-14493:


FAILURE: Integrated in HBase-1.3-IT #234 (See 
[https://builds.apache.org/job/HBase-1.3-IT/234/])
HBASE-14493 Upgrade the jamon-runtime dependency (apurtell: rev 
1960cb94dbe535658debf5e6fdc58c14a057605a)
* hbase-resource-bundle/src/main/resources/supplemental-models.xml
* pom.xml


> Upgrade the jamon-runtime dependency
> 
>
> Key: HBASE-14493
> URL: https://issues.apache.org/jira/browse/HBASE-14493
> Project: HBase
>  Issue Type: Task
>Affects Versions: 1.1.1
>Reporter: Newton Alex
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14493-0.98.patch, HBASE-14493-branch-1.patch, 
> HBASE-14493.patch, HBASE-14493.patch
>
>
> The current version of HBase uses jamon-runtime under MPL 1.1, which has legal 
> restrictions. Newer versions of jamon-runtime appear to be MPL 2.0. HBase 
> should upgrade to a version of jamon with a safer license.
> 2.4.0 is MPL 1.1 : 
> http://grepcode.com/snapshot/repo1.maven.org/maven2/org.jamon/jamon-runtime/2.4.0
> 2.4.1 is MPL 2.0 : 
> http://grepcode.com/snapshot/repo1.maven.org/maven2/org.jamon/jamon-runtime/2.4.1
> Here’s a comparison of the equivalent sections of the respective licenses 
> dealing w/ Termination:
> MPL 1.1 - Section 8 (Termination) Subsection 2:
> 8.2. If You initiate litigation by asserting a patent infringement claim 
> (excluding declaratory judgment actions) against Initial Developer or a 
> Contributor (the Initial Developer or Contributor against whom You file such 
> action is referred to as "Participant") alleging that:
> such Participant's Contributor Version directly or indirectly infringes any 
> patent, then any and all rights granted by such Participant to You under 
> Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice from 
> Participant terminate prospectively, unless if within 60 days after receipt 
> of notice You either: (i) agree in writing to pay Participant a mutually 
> agreeable reasonable royalty for Your past and future use of Modifications 
> made by such Participant, or (ii) withdraw Your litigation claim with respect 
> to the Contributor Version against such Participant. If within 60 days of 
> notice, a reasonable royalty and payment arrangement are not mutually agreed 
> upon in writing by the parties or the litigation claim is not withdrawn, the 
> rights granted by Participant to You under Sections 2.1 and/or 2.2 
> automatically terminate at the expiration of the 60 day notice period 
> specified above.
> any software, hardware, or device, other than such Participant's Contributor 
> Version, directly or indirectly infringes any patent, then any rights granted 
> to You by such Participant under Sections 2.1(b) and 2.2(b) are revoked 
> effective as of the date You first made, used, sold, distributed, or had 
> made, Modifications made by that Participant.
> MPL 2.0 - Section 5 (Termination) Subsection 2:
> 5.2. If You initiate litigation against any entity by asserting a patent 
> infringement claim (excluding declaratory judgment actions, counter-claims, 
> and cross-claims) alleging that a Contributor Version directly or indirectly 
> infringes any patent, then the rights granted to You by any and all 
> Contributors for the Covered Software under Section 2.1 of this License shall 
> terminate.





[jira] [Commented] (HBASE-14286) Fix spelling error in "safetyBumper" parameter in WALSplitter

2015-10-13 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956133#comment-14956133
 ] 

Gabor Liptak commented on HBASE-14286:
--

[~larsgeorge] Would you like some other updates for this? Thanks


> Fix spelling error in "safetyBumper" parameter in WALSplitter
> -
>
> Key: HBASE-14286
> URL: https://issues.apache.org/jira/browse/HBASE-14286
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Lars George
>Assignee: Gabor Liptak
>Priority: Trivial
> Attachments: HBASE-14286.1.patch
>
>
> In {{WALSplitter}} we have this code:
> {code}
>   public static long writeRegionSequenceIdFile(final FileSystem fs, final 
> Path regiondir,
>   long newSeqId, long saftyBumper) throws IOException {
> {code}
> We should fix the parameter name to be {{safetyBumper}}. Same for the JavaDoc 
> above the method.





[jira] [Commented] (HBASE-14586) Use a maven profile to run Jacoco analysis

2015-10-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956135#comment-14956135
 ] 

Duo Zhang commented on HBASE-14586:
---

+1

> Use a maven profile to run Jacoco analysis
> --
>
> Key: HBASE-14586
> URL: https://issues.apache.org/jira/browse/HBASE-14586
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hbase-14586.001.patch, hbase-14586.001.patch, 
> hbase-14586.002.patch
>
>
> The pom.xml has a line like this for the Surefire argLine, which has an extra 
> ${argLine} reference. Recommend changes like this:
> {noformat}
> -${hbase-surefire.argLine} ${argLine}
> +${hbase-surefire.argLine}
> {noformat}





[jira] [Commented] (HBASE-14600) Make #testWalRollOnLowReplication looser still

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956142#comment-14956142
 ] 

Hudson commented on HBASE-14600:


SUCCESS: Integrated in HBase-TRUNK #6905 (See 
[https://builds.apache.org/job/HBase-TRUNK/6905/])
HBASE-14600 Make #testWalRollOnLowReplication looser still (stack: rev 
1458798eb593358fe5415596b2958f2f7e451ea5)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestWALProcedureStoreOnHDFS.java


> Make #testWalRollOnLowReplication looser still
> --
>
> Key: HBASE-14600
> URL: https://issues.apache.org/jira/browse/HBASE-14600
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14600.txt
>
>
> The parent upped timeouts on testWalRollOnLowReplication. It still fails on 
> occasion. Chatting w/ [~mbertozzi], he suggested that if we've made progress 
> in the test, return the test as completed successfully if we get a 
> RuntimeException out of the sync call (because the DN is slow to recover).





[jira] [Commented] (HBASE-14491) ReplicationSource#countDistinctRowKeys code logic is not correct

2015-10-13 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956151#comment-14956151
 ] 

Ashish Singhi commented on HBASE-14491:
---

Thanks for the explanation, [~enis].
bq. WALEdit sharing the same row key, but they are still distinct mutations 
that has to be counted separately
I feel a better method name would have been countDistinctMutations rather than 
countDistinctRowKeys.
Anyways I will close this.

> ReplicationSource#countDistinctRowKeys code logic is not correct
> 
>
> Key: HBASE-14491
> URL: https://issues.apache.org/jira/browse/HBASE-14491
> Project: HBase
>  Issue Type: Bug
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
>
> {code}
>   Cell lastCell = cells.get(0);
>   for (int i = 0; i < edit.size(); i++) {
> if (!CellUtil.matchingRow(cells.get(i), lastCell)) {
>   distinctRowKeys++;
> }
>   }
> {code}
> The above logic for finding the distinct row keys in the list needs to be 
> corrected.
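The flaw is visible in the snippet: lastCell is initialized to cells.get(0) and never advanced, so every cell is compared against the first one rather than its predecessor. A minimal corrected sketch, using plain String row keys as a stand-in for Cell and CellUtil.matchingRow so it runs without HBase on the classpath:

```java
import java.util.Arrays;
import java.util.List;

public class DistinctRowKeys {
    // Each "cell" here is just its row key. The original bug: lastCell was
    // never updated, so every cell was compared against cells.get(0).
    static int countDistinctRowKeys(List<String> cells) {
        if (cells.isEmpty()) {
            return 0;
        }
        int distinctRowKeys = 1;         // the first row key is always distinct
        String lastRow = cells.get(0);
        for (int i = 1; i < cells.size(); i++) {
            if (!cells.get(i).equals(lastRow)) {
                distinctRowKeys++;
                lastRow = cells.get(i);  // advance, unlike the buggy version
            }
        }
        return distinctRowKeys;
    }

    public static void main(String[] args) {
        // Counts changes of row key between consecutive cells.
        System.out.println(countDistinctRowKeys(Arrays.asList("a", "a", "b", "a"))); // prints 3
    }
}
```

Note this counts runs of consecutive equal row keys, which matches the consecutive-comparison approach of the original code once lastCell is advanced correctly.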





[jira] [Updated] (HBASE-14591) Region with reference hfile may split after a forced split in IncreasingToUpperBoundRegionSplitPolicy

2015-10-13 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-14591:

   Resolution: Fixed
Fix Version/s: 0.98.16
   1.1.3
   1.0.3
   1.3.0
   1.2.0
   Status: Resolved  (was: Patch Available)

> Region with reference hfile may split after a forced split in 
> IncreasingToUpperBoundRegionSplitPolicy
> -
>
> Key: HBASE-14591
> URL: https://issues.apache.org/jira/browse/HBASE-14591
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.15
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: HBASE-14591-v1.patch
>
>
> In the IncreasingToUpperBoundRegionSplitPolicy, a region with a store 
> containing an hfile reference may split after a forced split. This breaks 
> many design assumptions.





[jira] [Commented] (HBASE-14511) StoreFile.Writer Meta Plugin

2015-10-13 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956187#comment-14956187
 ] 

Jingcheng Du commented on HBASE-14511:
--

Hi [~vrodionov], do you want to take a look at the latest patch? Thanks!

> StoreFile.Writer Meta Plugin
> 
>
> Key: HBASE-14511
> URL: https://issues.apache.org/jira/browse/HBASE-14511
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14511-v3.patch, HBASE-14511.v1.patch, 
> HBASE-14511.v2.patch
>
>
> During my work on new compaction policies (HBASE-14468, HBASE-14477) I had 
> to modify the existing StoreFile.Writer code to add additional meta-info 
> required by these new policies. I think that it should be done by means of a 
> new Plugin framework, because this seems to be a general capability/feature. 
> As a future enhancement this can become a part of a more general 
> StoreFileWriter/Reader plugin architecture. But I need only Meta section of a 
> store file.
> This could be used, for example, to collect rowkeys distribution information 
> during hfile creation. This info can be used later to find the optimal region 
> split key or to create optimal set of sub-regions for M/R jobs or other jobs 
> which can operate on a sub-region level.





[jira] [Commented] (HBASE-14535) Unit test for rpc connection concurrency / deadlock testing

2015-10-13 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956188#comment-14956188
 ] 

Enis Soztutar commented on HBASE-14535:
---

I have converted the test to be an IT instead in the hopes that we can get it 
in and running at least. It has already been useful in analyzing the patches 
for the parent.

I've also made some changes and added a big payload to the RPCs (2MB) to 
simulate the write + close deadlock better. However, now I am running into 
timeouts in master code. I was testing with branch-1.1, which was ok, but it is 
failing in master now. I have to inspect the thread dump to see what is 
happening.  

> Unit test for rpc connection concurrency / deadlock testing 
> 
>
> Key: HBASE-14535
> URL: https://issues.apache.org/jira/browse/HBASE-14535
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: hbase-14535_v1.patch, hbase-14535_v2.patch
>
>
> As per parent jira and recent jiras  HBASE-14449 + HBASE-14241 and 
> HBASE-14313, we seem to be lacking testing of rpc connection concurrency 
> issues in a UT env. 
>  





[jira] [Commented] (HBASE-14596) TestCellACLs failing... on1.2 builds

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956252#comment-14956252
 ] 

Hudson commented on HBASE-14596:


SUCCESS: Integrated in HBase-1.2 #250 (See 
[https://builds.apache.org/job/HBase-1.2/250/])
HBASE-14596 TestCellACLs failing... on1.2 builds; FIX (stack: rev 
1390b2a94b09b1b5d6f91b814053aca74af8a519)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java


> TestCellACLs failing... on1.2 builds
> 
>
> Key: HBASE-14596
> URL: https://issues.apache.org/jira/browse/HBASE-14596
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14596.branch-1.patch, 14596.debug.txt, 
> 14596.master.patch, 14596.txt
>
>
> Caught this in 1.7 builds:
> {code}
> "PriorityRpcServer.handler=4,queue=0,port=42214" daemon prio=10 
> tid=0x7f08d5786000 nid=0x2eed in Object.wait() [0x7f08918d5000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1659)
>   - locked <0x0007580e5f98> (a java.util.concurrent.atomic.AtomicLong)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1688)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
>   - locked <0x0007580f36f8> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.addUserPermission(AccessControlLists.java:176)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2175)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2172)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:425)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:205)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.grant(AccessController.java:2172)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7650)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1878)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32590)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2120)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> "PriorityRpcServer.handler=3,queue=1,port=42214" daemon prio=10 
> tid=0x7f08d5784000 nid=0x2eec in Object.wait() [0x7f08919d7000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1248)
>   - locked <0x0007cb61ecd8> (a org.apache.hadoop.hbase.ipc.Call)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> 

[jira] [Commented] (HBASE-14493) Upgrade the jamon-runtime dependency

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956251#comment-14956251
 ] 

Hudson commented on HBASE-14493:


SUCCESS: Integrated in HBase-1.2 #250 (See 
[https://builds.apache.org/job/HBase-1.2/250/])
HBASE-14493 Upgrade the jamon-runtime dependency (apurtell: rev 
e4626c0e94c79085d71ba42a4aa0c920170b8119)
* hbase-resource-bundle/src/main/resources/supplemental-models.xml
* pom.xml


> Upgrade the jamon-runtime dependency
> 
>
> Key: HBASE-14493
> URL: https://issues.apache.org/jira/browse/HBASE-14493
> Project: HBase
>  Issue Type: Task
>Affects Versions: 1.1.1
>Reporter: Newton Alex
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14493-0.98.patch, HBASE-14493-branch-1.patch, 
> HBASE-14493.patch, HBASE-14493.patch
>
>
> The current version of HBase uses jamon-runtime under MPL 1.1, which has 
> legal restrictions. Newer versions of jamon-runtime appear to be MPL 2.0. 
> HBase should upgrade to a version of jamon with a safer license.
> 2.4.0 is MPL 1.1 : 
> http://grepcode.com/snapshot/repo1.maven.org/maven2/org.jamon/jamon-runtime/2.4.0
> 2.4.1 is MPL 2.0 : 
> http://grepcode.com/snapshot/repo1.maven.org/maven2/org.jamon/jamon-runtime/2.4.1
> Here’s a comparison of the equivalent sections of the respective licenses 
> dealing w/ Termination:
> MPL 1.1 - Section 8 (Termination) Subsection 2:
> 8.2. If You initiate litigation by asserting a patent infringement claim 
> (excluding declaratory judgment actions) against Initial Developer or a 
> Contributor (the Initial Developer or Contributor against whom You file such 
> action is referred to as "Participant") alleging that:
> such Participant's Contributor Version directly or indirectly infringes any 
> patent, then any and all rights granted by such Participant to You under 
> Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice from 
> Participant terminate prospectively, unless if within 60 days after receipt 
> of notice You either: (i) agree in writing to pay Participant a mutually 
> agreeable reasonable royalty for Your past and future use of Modifications 
> made by such Participant, or (ii) withdraw Your litigation claim with respect 
> to the Contributor Version against such Participant. If within 60 days of 
> notice, a reasonable royalty and payment arrangement are not mutually agreed 
> upon in writing by the parties or the litigation claim is not withdrawn, the 
> rights granted by Participant to You under Sections 2.1 and/or 2.2 
> automatically terminate at the expiration of the 60 day notice period 
> specified above.
> any software, hardware, or device, other than such Participant's Contributor 
> Version, directly or indirectly infringes any patent, then any rights granted 
> to You by such Participant under Sections 2.1(b) and 2.2(b) are revoked 
> effective as of the date You first made, used, sold, distributed, or had 
> made, Modifications made by that Participant.
> MPL 2.0 - Section 5 (Termination) Subsection 2:
> 5.2. If You initiate litigation against any entity by asserting a patent 
> infringement claim (excluding declaratory judgment actions, counter-claims, 
> and cross-claims) alleging that a Contributor Version directly or indirectly 
> infringes any patent, then the rights granted to You by any and all 
> Contributors for the Covered Software under Section 2.1 of this License shall 
> terminate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14283) Reverse scan doesn’t work with HFile inline index/bloom blocks

2015-10-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956254#comment-14956254
 ] 

Lars Hofhansl commented on HBASE-14283:
---

Am I understanding correctly that we already incur two reads now, even when 
we're scanning forward? If so, that seems unfortunate.

> Reverse scan doesn’t work with HFile inline index/bloom blocks
> --
>
> Key: HBASE-14283
> URL: https://issues.apache.org/jira/browse/HBASE-14283
> Project: HBase
>  Issue Type: Bug
>Reporter: Ben Lau
>Assignee: Ben Lau
> Attachments: HBASE-14283-0.98.patch, HBASE-14283-branch-1.0.patch, 
> HBASE-14283-branch-1.1.patch, HBASE-14283-branch-1.2.patch, 
> HBASE-14283-branch-1.patch, HBASE-14283-master.patch, HBASE-14283-v2.patch, 
> HBASE-14283.patch, hfile-seek-before.patch
>
>
> Reverse scans do not work if an HFile contains inline bloom blocks or leaf 
> level index blocks.  The reason is that the seekBefore() call calculates 
> the previous data block’s size by assuming data blocks are contiguous, which 
> is not the case in HFile V2 and beyond.
> Attached is a first cut patch (targeting 
> bcef28eefaf192b0ad48c8011f98b8e944340da5 on trunk) which includes:
> (1) a unit test which exposes the bug and demonstrates failures for both 
> inline bloom blocks and inline index blocks
> (2) a proposed fix for inline index blocks that does not require a new HFile 
> version change, but is only performant for 1 and 2-level indexes and not 3+.  
> 3+ requires an HFile format update for optimal performance.
> This patch does not fix the bloom filter blocks bug.  But the fix should be 
> similar to the case of inline index blocks.  The reason I haven’t made the 
> change yet is I want to confirm that you guys would be fine with me revising 
> the HFile.Reader interface.
> Specifically, these 2 functions (getGeneralBloomFilterMetadata and 
> getDeleteBloomFilterMetadata) need to return the BloomFilter.  Right now the 
> HFileReader class doesn’t have a reference to the bloom filters (and hence 
> their indices) and only constructs the IO streams and hence has no way to 
> know where the bloom blocks are in the HFile.  It seems that the HFile.Reader 
> bloom method comments state that they “know nothing about how that metadata 
> is structured” but I do not know if that is a requirement of the abstraction 
> (why?) or just an incidental current property. 
> We would like to do 3 things with community approval:
> (1) Update the HFile.Reader interface and implementation to contain and 
> return BloomFilters directly rather than unstructured IO streams
> (2) Merge the fixes for index blocks and bloom blocks into open source
> (3) Create a new Jira ticket for open source HBase to add a ‘prevBlockSize’ 
> field in the block header in the next HFile version, so that seekBefore() 
> calls can not only be correct but performant in all cases.
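The offset-arithmetic failure described above can be illustrated outside HBase. The sketch below is hypothetical (a simplified {offset, size} block layout, not the HFile API): the contiguity assumption computes the previous data block's size as the offset delta, which over-counts as soon as an inline bloom or leaf-index block sits between two data blocks.

```java
// A minimal sketch (not HBase code) of why the seekBefore() shortcut breaks:
// "previous data block size = current offset - previous offset" over-reads
// once an inline bloom/index block is interleaved between data blocks.
public class SeekBeforeSketch {

    // The flawed assumption: data blocks are contiguous, so the previous
    // data block's size is just the offset delta.
    static long assumedPrevDataBlockSize(long prevDataOffset, long curDataOffset) {
        return curDataOffset - prevDataOffset;
    }

    // What the on-disk layout actually says: look the block up by offset.
    // Each layout row is {offset, size}; a hypothetical simplified index.
    static long actualBlockSize(long[][] layout, long offset) {
        for (long[] block : layout) {
            if (block[0] == offset) {
                return block[1];
            }
        }
        throw new IllegalArgumentException("no block at offset " + offset);
    }

    public static void main(String[] args) {
        // data block @0 (size 100), inline bloom @100 (size 40), data @140.
        long[][] layout = { {0, 100}, {100, 40}, {140, 100} };
        System.out.println("assumed: " + assumedPrevDataBlockSize(0, 140)); // 140 (wrong)
        System.out.println("actual:  " + actualBlockSize(layout, 0));       // 100
    }
}
```

This is also why the proposed `prevBlockSize` header field fixes the problem cheaply: the writer knows the true previous size at write time, so the reader no longer has to infer it from offsets.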



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14283) Reverse scan doesn’t work with HFile inline index/bloom blocks

2015-10-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956254#comment-14956254
 ] 

Lars Hofhansl edited comment on HBASE-14283 at 10/14/15 4:45 AM:
-

Am I understanding correctly that we always incur two reads now, even when 
we're scanning forward? If so, that seems unfortunate.


was (Author: lhofhansl):
Am I understanding correctly that we already incur two reads now, even when 
we're scanning forward? If so, that seems unfortunate.

> Reverse scan doesn’t work with HFile inline index/bloom blocks
> --
>
> Key: HBASE-14283
> URL: https://issues.apache.org/jira/browse/HBASE-14283
> Project: HBase
>  Issue Type: Bug
>Reporter: Ben Lau
>Assignee: Ben Lau
> Attachments: HBASE-14283-0.98.patch, HBASE-14283-branch-1.0.patch, 
> HBASE-14283-branch-1.1.patch, HBASE-14283-branch-1.2.patch, 
> HBASE-14283-branch-1.patch, HBASE-14283-master.patch, HBASE-14283-v2.patch, 
> HBASE-14283.patch, hfile-seek-before.patch
>
>
> Reverse scans do not work if an HFile contains inline bloom blocks or leaf 
> level index blocks.  The reason is that the seekBefore() call calculates 
> the previous data block’s size by assuming data blocks are contiguous, which 
> is not the case in HFile V2 and beyond.
> Attached is a first cut patch (targeting 
> bcef28eefaf192b0ad48c8011f98b8e944340da5 on trunk) which includes:
> (1) a unit test which exposes the bug and demonstrates failures for both 
> inline bloom blocks and inline index blocks
> (2) a proposed fix for inline index blocks that does not require a new HFile 
> version change, but is only performant for 1 and 2-level indexes and not 3+.  
> 3+ requires an HFile format update for optimal performance.
> This patch does not fix the bloom filter blocks bug.  But the fix should be 
> similar to the case of inline index blocks.  The reason I haven’t made the 
> change yet is I want to confirm that you guys would be fine with me revising 
> the HFile.Reader interface.
> Specifically, these 2 functions (getGeneralBloomFilterMetadata and 
> getDeleteBloomFilterMetadata) need to return the BloomFilter.  Right now the 
> HFileReader class doesn’t have a reference to the bloom filters (and hence 
> their indices) and only constructs the IO streams and hence has no way to 
> know where the bloom blocks are in the HFile.  It seems that the HFile.Reader 
> bloom method comments state that they “know nothing about how that metadata 
> is structured” but I do not know if that is a requirement of the abstraction 
> (why?) or just an incidental current property. 
> We would like to do 3 things with community approval:
> (1) Update the HFile.Reader interface and implementation to contain and 
> return BloomFilters directly rather than unstructured IO streams
> (2) Merge the fixes for index blocks and bloom blocks into open source
> (3) Create a new Jira ticket for open source HBase to add a ‘prevBlockSize’ 
> field in the block header in the next HFile version, so that seekBefore() 
> calls can not only be correct but performant in all cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14221) Reduce the number of time row comparison is done in a Scan

2015-10-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956272#comment-14956272
 ] 

ramkrishna.s.vasudevan commented on HBASE-14221:


+1 on trying it out.  Need to explore that loser tree more.  Just started 
reading some docs for this topic called Tournament Trees. 
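For context, a loser/tournament tree is an alternative to the binary-heap k-way merge that HBase's KeyValueHeap effectively performs. The baseline can be sketched as below (plain Java, not HBase code): a PriorityQueue pays roughly 2·log(k) comparisons per emitted cell (one sift on poll, one on offer), while a loser tree replays a single leaf-to-root path of about log(k) comparisons.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class KWayMergeSketch {

    // Merge k sorted sources, analogous to merging per-store scanners.
    // Heap entries are {value, sourceIndex}; a tournament/loser tree would
    // replace the heap and halve the comparisons per emitted element.
    static List<Integer> merge(List<Iterator<Integer>> sources) {
        PriorityQueue<int[]> heap =
            new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[0]));
        for (int i = 0; i < sources.size(); i++) {
            if (sources.get(i).hasNext()) {
                heap.add(new int[] { sources.get(i).next(), i });
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();          // sift-down: ~log(k) compares
            out.add(top[0]);
            Iterator<Integer> src = sources.get(top[1]);
            if (src.hasNext()) {
                heap.add(new int[] { src.next(), top[1] }); // sift-up again
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Iterator<Integer>> srcs = Arrays.asList(
            Arrays.asList(1, 4, 7).iterator(),
            Arrays.asList(2, 5, 8).iterator(),
            Arrays.asList(3, 6, 9).iterator());
        System.out.println(merge(srcs)); // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```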

> Reduce the number of time row comparison is done in a Scan
> --
>
> Key: HBASE-14221
> URL: https://issues.apache.org/jira/browse/HBASE-14221
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: 14221-0.98-takeALook.txt, HBASE-14221.patch, 
> HBASE-14221_1.patch, HBASE-14221_1.patch, HBASE-14221_6.patch, 
> withmatchingRowspatch.png, withoutmatchingRowspatch.png
>
>
> When we tried to do some profiling with the PE tool, we found this.
> Currently we do row comparisons in 3 places in a simple Scan case.
> 1) ScanQueryMatcher
> {code}
>int ret = this.rowComparator.compareRows(curCell, cell);
> if (!this.isReversed) {
>   if (ret <= -1) {
> return MatchCode.DONE;
>   } else if (ret >= 1) {
> // could optimize this, if necessary?
> // Could also be called SEEK_TO_CURRENT_ROW, but this
> // should be rare/never happens.
> return MatchCode.SEEK_NEXT_ROW;
>   }
> } else {
>   if (ret <= -1) {
> return MatchCode.SEEK_NEXT_ROW;
>   } else if (ret >= 1) {
> return MatchCode.DONE;
>   }
> }
> {code}
> 2) In StoreScanner next() while starting to scan the row
> {code}
> if (!scannerContext.hasAnyLimit(LimitScope.BETWEEN_CELLS) || 
> matcher.curCell == null ||
> isNewRow || !CellUtil.matchingRow(peeked, matcher.curCell)) {
>   this.countPerRow = 0;
>   matcher.setToNewRow(peeked);
> }
> {code}
> Particularly to see if we are in a new row.
> 3) In HRegion
> {code}
>   scannerContext.setKeepProgress(true);
>   heap.next(results, scannerContext);
>   scannerContext.setKeepProgress(tmpKeepProgress);
>   nextKv = heap.peek();
> moreCellsInRow = moreCellsInRow(nextKv, currentRowCell);
> {code}
> Here again there are cases where we need to be careful for a multi-CF case. I 
> was trying to solve this for the multi-CF case, but it has a lot of cases to 
> solve. But at least for the single-CF case I think these comparisons can be 
> reduced.
> So for a single-CF case, the SQM is able to find if we have crossed a row 
> using the code pasted above. That comparison is definitely needed.
> Now in the case of a single CF, the HRegion is going to have only one element 
> in the heap, and so the 3rd comparison can surely be avoided if 
> StoreScanner.next() was over due to MatchCode.DONE caused by the SQM.
> Coming to the 2nd compareRows that we do in StoreScanner.next() - even that 
> can be avoided if we know that the previous next() call was over due to a new 
> row. Doing all this I found that the compareRows in the profiler, which was 
> 19%, got reduced to 13%. Initially we can solve the single-CF case, which can 
> then be extended to multi-CF cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14591) Region with reference hfile may split after a forced split in IncreasingToUpperBoundRegionSplitPolicy

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956212#comment-14956212
 ] 

Hudson commented on HBASE-14591:


FAILURE: Integrated in HBase-1.0 #1080 (See 
[https://builds.apache.org/job/HBase-1.0/1080/])
HBASE-14591 Region with reference hfile may split after a forced split 
(liushaohui: rev 71b8cb9bd36c993e9479dea4c1d4843482a3d5e1)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.java


> Region with reference hfile may split after a forced split in 
> IncreasingToUpperBoundRegionSplitPolicy
> -
>
> Key: HBASE-14591
> URL: https://issues.apache.org/jira/browse/HBASE-14591
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.15
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: HBASE-14591-v1.patch
>
>
> In the IncreasingToUpperBoundRegionSplitPolicy, a region with a store holding 
> an hfile reference may split after a forced split. This will break many 
> design assumptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14541) TestHFileOutputFormat.testMRIncrementalLoadWithSplit failed on internal rig; gave up after trying 10 times

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956213#comment-14956213
 ] 

Hadoop QA commented on HBASE-14541:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12766430/HBASE-14541-v0.patch
  against master branch at commit 4754e583f9e33ab0573e957ab69126edb95d6a1c.
  ATTACHMENT ID: 12766430

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15993//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15993//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15993//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15993//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15993//console

This message is automatically generated.

> TestHFileOutputFormat.testMRIncrementalLoadWithSplit failed on internal rig; 
> gave up after trying 10 times
> --
>
> Key: HBASE-14541
> URL: https://issues.apache.org/jira/browse/HBASE-14541
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Matteo Bertozzi
> Attachments: HBASE-14541-test.patch, HBASE-14541-v0.patch, 
> HBASE-14541-v0.patch
>
>
> This one seems worth a dig. We seem to be making progress, but here is what we 
> are trying to load, which seems weird:
> {code}
> 2015-10-01 17:19:41,322 INFO  [main] mapreduce.LoadIncrementalHFiles(360): Split occured while grouping HFiles, retry attempt 10 with 4 files remaining to group or split
> 2015-10-01 17:19:41,323 ERROR [main] mapreduce.LoadIncrementalHFiles(402): 
> -
> Bulk load aborted with some files not yet loaded:
> -
>   hdfs://localhost:39540/user/jenkins/test-data/720ae36a-2495-456b-ba68-19e260685a35/testLocalMRIncrementalLoad/info-B/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/ce11cbe2490d444d8958264004286aff.bottom
>   hdfs://localhost:39540/user/jenkins/test-data/720ae36a-2495-456b-ba68-19e260685a35/testLocalMRIncrementalLoad/info-B/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/ce11cbe2490d444d8958264004286aff.top
>   hdfs://localhost:39540/user/jenkins/test-data/720ae36a-2495-456b-ba68-19e260685a35/testLocalMRIncrementalLoad/info-A/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/30c58eeb23a6464da21117e6e1bc565c.bottom
>   hdfs://localhost:39540/user/jenkins/test-data/720ae36a-2495-456b-ba68-19e260685a35/testLocalMRIncrementalLoad/info-A/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/30c58eeb23a6464da21117e6e1bc565c.top
> {code}
> What's that about?
> Making note here. Will keep an eye on this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14355) Scan different TimeRange for each column family

2015-10-13 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956219#comment-14956219
 ] 

churro morales commented on HBASE-14355:


Sure, I'll fix the individual imports.  I was thinking of adding another test 
for store files without any KVs.  Other than that, what do you think we should test?  

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14355.patch
>
>
> At present the Scan API supports only a table-level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family- and table-level updates.  One proposal 
> would be to add Scan.setTimeRange(byte[] family, long minTime, long maxTime), 
> then store it in a Map keyed by family.  When executing the scan, if a 
> family has a specified TimeRange, then use it; otherwise fall back to using 
> the table-level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently know which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use, 
> which also appears to be a workaround for not having the family available). 
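The proposed fallback lookup can be sketched minimally as below. This is a hypothetical helper, not the actual patch API, and it uses String family keys and {min, max} long arrays for brevity; real HBase byte[] family keys would need a TreeMap with a byte-array comparator (byte[] has identity equals/hashCode, so it cannot key a HashMap).

```java
import java.util.HashMap;
import java.util.Map;

public class PerFamilyTimeRangeSketch {

    // Hypothetical resolver mirroring the proposal: use the per-family time
    // range when one was set, otherwise fall back to the table-level range.
    // A time range is represented as {minTime, maxTime}.
    static long[] resolveTimeRange(Map<String, long[]> perFamily,
                                   long[] tableLevel, String family) {
        long[] tr = perFamily.get(family);
        return tr != null ? tr : tableLevel;
    }

    public static void main(String[] args) {
        long[] tableLevel = {0L, Long.MAX_VALUE};
        Map<String, long[]> perFamily = new HashMap<>();
        perFamily.put("info-A", new long[]{1000L, 2000L});
        // info-A has its own range; info-B falls back to the table level.
        System.out.println(resolveTimeRange(perFamily, tableLevel, "info-A")[1]); // 2000
        System.out.println(resolveTimeRange(perFamily, tableLevel, "info-B")[0]); // 0
    }
}
```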



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14570) Cleanup hanging TestHBaseFsck

2015-10-13 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956220#comment-14956220
 ] 

Heng Chen commented on HBASE-14570:
---

Hi, [~stack] and [~eclark]. After analyzing the stack log posted above, I 
don't think TestHBaseFsck was hanging.

We can see that when the test failed, the jstack for TestHBaseFsck was
{code}
"Time-limited test" daemon prio=10 tid=0x7f5830aec800 nid=0x6206 waiting on 
condition [0x7f55fdada000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0007f294f240> (a 
java.util.concurrent.FutureTask)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:425)
at java.util.concurrent.FutureTask.get(FutureTask.java:187)
at 
java.util.concurrent.AbstractExecutorService.invokeAll(AbstractExecutorService.java:243)
at 
org.apache.hadoop.hbase.util.HBaseFsck.checkRegionConsistencyConcurrently(HBaseFsck.java:1876)
at 
org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixConsistency(HBaseFsck.java:1834)
at 
org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:675)
at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:697)
at 
org.apache.hadoop.hbase.util.hbck.HbckTestingUtil.doFsck(HbckTestingUtil.java:71)
at 
org.apache.hadoop.hbase.util.hbck.HbckTestingUtil.doFsck(HbckTestingUtil.java:43)
at 
org.apache.hadoop.hbase.util.hbck.HbckTestingUtil.doFsck(HbckTestingUtil.java:38)
at 
org.apache.hadoop.hbase.util.TestHBaseFsck.testRegionShouldNotBeDeployed(TestHBaseFsck.java:1632)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.lang.Thread.run(Thread.java:744)
{code}

The related code is 
{code}
private void checkRegionConsistencyConcurrently(
    final List<CheckRegionConsistencyWorkItem> workItems)
throws IOException, KeeperException, InterruptedException {
  if (workItems.isEmpty()) {
    return;  // nothing to check
  }

  List<Future<Void>> workFutures = executor.invokeAll(workItems);
  for (Future<Void> f : workFutures) {
    try {
      f.get(); // blocking here
    } catch (ExecutionException e1) {
{code}
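The parked frame makes sense given invokeAll()'s contract: it returns only once every submitted task has finished, so the caller sits in FutureTask.awaitDone while workers run. A self-contained sketch (plain JDK; the task bodies are hypothetical stand-ins for the fsck work items) reproduces the same wait pattern:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllSketch {

    // invokeAll() blocks until every task is done, so the calling thread
    // parks in FutureTask.awaitDone while the workers run. That matches the
    // WAITING frame in the jstack above: a worker still busy (simulated here
    // with sleep, like closeRegionSilentlyAndWait), not a deadlock.
    static List<String> runAll() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Callable<String>> work = Arrays.<Callable<String>>asList(
            () -> { Thread.sleep(100); return "slow"; }, // still-sleeping worker
            () -> "fast");
        List<Future<String>> futures = pool.invokeAll(work); // blocks ~100 ms
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            results.add(f.get()); // tasks already done; returns immediately
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAll()); // [slow, fast]
    }
}
```

So a thread dump taken mid-run shows exactly the two stacks in this comment: the caller parked in invokeAll and the worker in TIMED_WAITING (sleeping).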

And the jstack for the CheckRegionConsistencyWorkItem thread is 
{code}
"pool-104-thread-1" prio=10 tid=0x7f57e8026800 nid=0x65ea waiting on 
condition [0x7f560aeae000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hbase.master.ServerManager.closeRegionSilentlyAndWait(ServerManager.java:863)
at 
org.apache.hadoop.hbase.util.HBaseFsckRepair.closeRegionSilentlyAndWait(HBaseFsckRepair.java:156)
at 
org.apache.hadoop.hbase.util.HBaseFsckRepair.fixMultiAssignment(HBaseFsckRepair.java:74)
at 
org.apache.hadoop.hbase.util.HBaseFsck.checkRegionConsistency(HBaseFsck.java:2387)
at org.apache.hadoop.hbase.util.HBaseFsck.access$900(HBaseFsck.java:192)
at 
org.apache.hadoop.hbase.util.HBaseFsck$CheckRegionConsistencyWorkItem.call(HBaseFsck.java:1907)
- locked <0x0007f294ef10> (a 
org.apache.hadoop.hbase.util.HBaseFsck$CheckRegionConsistencyWorkItem)
at 
org.apache.hadoop.hbase.util.HBaseFsck$CheckRegionConsistencyWorkItem.call(HBaseFsck.java:1895)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 

[jira] [Commented] (HBASE-14493) Upgrade the jamon-runtime dependency

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956283#comment-14956283
 ] 

Hudson commented on HBASE-14493:


SUCCESS: Integrated in HBase-1.2-IT #208 (See 
[https://builds.apache.org/job/HBase-1.2-IT/208/])
HBASE-14493 Upgrade the jamon-runtime dependency (apurtell: rev 
e4626c0e94c79085d71ba42a4aa0c920170b8119)
* pom.xml
* hbase-resource-bundle/src/main/resources/supplemental-models.xml


> Upgrade the jamon-runtime dependency
> 
>
> Key: HBASE-14493
> URL: https://issues.apache.org/jira/browse/HBASE-14493
> Project: HBase
>  Issue Type: Task
>Affects Versions: 1.1.1
>Reporter: Newton Alex
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14493-0.98.patch, HBASE-14493-branch-1.patch, 
> HBASE-14493.patch, HBASE-14493.patch
>
>
> The current version of HBase uses jamon-runtime under MPL 1.1, which has 
> legal restrictions. Newer versions of jamon-runtime appear to be MPL 2.0. 
> HBase should upgrade to a version of jamon with a safer license.
> 2.4.0 is MPL 1.1 : 
> http://grepcode.com/snapshot/repo1.maven.org/maven2/org.jamon/jamon-runtime/2.4.0
> 2.4.1 is MPL 2.0 : 
> http://grepcode.com/snapshot/repo1.maven.org/maven2/org.jamon/jamon-runtime/2.4.1
> Here’s a comparison of the equivalent sections of the respective licenses 
> dealing w/ Termination:
> MPL 1.1 - Section 8 (Termination) Subsection 2:
> 8.2. If You initiate litigation by asserting a patent infringement claim 
> (excluding declaratory judgment actions) against Initial Developer or a 
> Contributor (the Initial Developer or Contributor against whom You file such 
> action is referred to as "Participant") alleging that:
> such Participant's Contributor Version directly or indirectly infringes any 
> patent, then any and all rights granted by such Participant to You under 
> Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice from 
> Participant terminate prospectively, unless if within 60 days after receipt 
> of notice You either: (i) agree in writing to pay Participant a mutually 
> agreeable reasonable royalty for Your past and future use of Modifications 
> made by such Participant, or (ii) withdraw Your litigation claim with respect 
> to the Contributor Version against such Participant. If within 60 days of 
> notice, a reasonable royalty and payment arrangement are not mutually agreed 
> upon in writing by the parties or the litigation claim is not withdrawn, the 
> rights granted by Participant to You under Sections 2.1 and/or 2.2 
> automatically terminate at the expiration of the 60 day notice period 
> specified above.
> any software, hardware, or device, other than such Participant's Contributor 
> Version, directly or indirectly infringes any patent, then any rights granted 
> to You by such Participant under Sections 2.1(b) and 2.2(b) are revoked 
> effective as of the date You first made, used, sold, distributed, or had 
> made, Modifications made by that Participant.
> MPL 2.0 - Section 5 (Termination) Subsection 2:
> 5.2. If You initiate litigation against any entity by asserting a patent 
> infringement claim (excluding declaratory judgment actions, counter-claims, 
> and cross-claims) alleging that a Contributor Version directly or indirectly 
> infringes any patent, then the rights granted to You by any and all 
> Contributors for the Covered Software under Section 2.1 of this License shall 
> terminate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14591) Region with reference hfile may split after a forced split in IncreasingToUpperBoundRegionSplitPolicy

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956282#comment-14956282
 ] 

Hudson commented on HBASE-14591:


SUCCESS: Integrated in HBase-1.2-IT #208 (See 
[https://builds.apache.org/job/HBase-1.2-IT/208/])
HBASE-14591 Region with reference hfile may split after a forced split 
(liushaohui: rev c25657f26b991e60ad8ccaf9c2c2182edb83eb98)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java


> Region with reference hfile may split after a forced split in 
> IncreasingToUpperBoundRegionSplitPolicy
> -
>
> Key: HBASE-14591
> URL: https://issues.apache.org/jira/browse/HBASE-14591
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.15
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: HBASE-14591-v1.patch
>
>
> In the IncreasingToUpperBoundRegionSplitPolicy, a region with a store holding 
> an hfile reference may split after a forced split. This will break many 
> design assumptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14591) Region with reference hfile may split after a forced split in IncreasingToUpperBoundRegionSplitPolicy

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956302#comment-14956302
 ] 

Hudson commented on HBASE-14591:


FAILURE: Integrated in HBase-1.1 #704 (See 
[https://builds.apache.org/job/HBase-1.1/704/])
HBASE-14591 Region with reference hfile may split after a forced split 
(liushaohui: rev 8ad3bf046bb7e63bd6884c9fb3365e4c2b75c28a)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java


> Region with reference hfile may split after a forced split in 
> IncreasingToUpperBoundRegionSplitPolicy
> -
>
> Key: HBASE-14591
> URL: https://issues.apache.org/jira/browse/HBASE-14591
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.15
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: HBASE-14591-v1.patch
>
>
> In the IncreasingToUpperBoundRegionSplitPolicy, a region with a store holding 
> an hfile reference may split after a forced split. This will break many 
> design assumptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14491) ReplicationSource#countDistinctRowKeys code logic is not correct

2015-10-13 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi resolved HBASE-14491.
---
Resolution: Fixed
  Assignee: (was: Ashish Singhi)

Fixed by Enis as part of HBASE-14501.

> ReplicationSource#countDistinctRowKeys code logic is not correct
> 
>
> Key: HBASE-14491
> URL: https://issues.apache.org/jira/browse/HBASE-14491
> Project: HBase
>  Issue Type: Bug
>Reporter: Ashish Singhi
>Priority: Minor
>
> {code}
>   Cell lastCell = cells.get(0);
>   for (int i = 0; i < edit.size(); i++) {
> if (!CellUtil.matchingRow(cells.get(i), lastCell)) {
>   distinctRowKeys++;
> }
>   }
> {code}
> The above logic for finding the distinct row keys in the list needs to be 
> corrected.
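The flaw in the quoted snippet is that `lastCell` is never advanced inside the loop (and the count starts at zero), so every cell is compared against the first cell only. One plausible correction, sketched with plain strings in place of HBase `Cell`s (the helper below is illustrative, not the actual ReplicationSource code):

```java
import java.util.Arrays;
import java.util.List;

public class DistinctRows {
    // Counts runs of distinct consecutive row keys. Unlike the reported
    // buggy logic, the comparison baseline advances with each new row,
    // and the first row is counted.
    static int countDistinctRowKeys(List<String> rowKeys) {
        if (rowKeys.isEmpty()) {
            return 0;
        }
        int distinct = 1;               // the first row always counts
        String lastRow = rowKeys.get(0);
        for (String row : rowKeys) {
            if (!row.equals(lastRow)) {
                distinct++;
                lastRow = row;          // advance the baseline
            }
        }
        return distinct;
    }

    public static void main(String[] args) {
        // rows a,a,b,a -> three runs of distinct row keys
        System.out.println(countDistinctRowKeys(Arrays.asList("a", "a", "b", "a"))); // prints 3
    }
}
```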





[jira] [Updated] (HBASE-14602) Convert HBasePoweredBy Wiki page to a hbase.apache.org page

2015-10-13 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-14602:

Status: Patch Available  (was: Open)

> Convert HBasePoweredBy Wiki page to a hbase.apache.org page
> ---
>
> Key: HBASE-14602
> URL: https://issues.apache.org/jira/browse/HBASE-14602
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-14602.patch
>
>
> https://wiki.apache.org/hadoop/Hbase/PoweredBy
> Leave all the info as it is, add a disclaimer about the accuracy of the info 
> and info on how to get yourself added/updated (email hbase-dev or file a JIRA 
> -- we want the hurdle to be much lower than it has been). Redirect the wiki 
> page to the new site.





[jira] [Updated] (HBASE-14602) Convert HBasePoweredBy Wiki page to a hbase.apache.org page

2015-10-13 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-14602:

Attachment: HBASE-14602.patch

> Convert HBasePoweredBy Wiki page to a hbase.apache.org page
> ---
>
> Key: HBASE-14602
> URL: https://issues.apache.org/jira/browse/HBASE-14602
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-14602.patch
>
>
> https://wiki.apache.org/hadoop/Hbase/PoweredBy
> Leave all the info as it is, add a disclaimer about the accuracy of the info 
> and info on how to get yourself added/updated (email hbase-dev or file a JIRA 
> -- we want the hurdle to be much lower than it has been). Redirect the wiki 
> page to the new site.





[jira] [Updated] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner

2015-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13082:
---
Status: Open  (was: Patch Available)

> Coarsen StoreScanner locks to RegionScanner
> ---
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: ramkrishna.s.vasudevan
> Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082_1_WIP.patch, gc.png, gc.png, 
> gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left of.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized
> * Implementors of coprocessors need to be diligent in following the locking 
> contract. For example, Phoenix does not lock RegionScanner.nextRaw() as 
> required in the documentation (not picking on Phoenix, this one is my fault 
> as I told them it's OK)
> * possible starvation of flushes and compactions under heavy read load. 
> RegionScanner operations would keep getting the locks and the 
> flushes/compactions would not be able to finalize the set of files.
> I'll have a patch soon.
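The proposed scheme can be sketched in isolation: the outer scanner takes one lock per call, and the inner scanner does no synchronization of its own, relying on the contract that the caller holds the lock. The class names echo the issue, but this is a toy model, not HBase's actual scanner code:

```java
import java.util.concurrent.locks.ReentrantLock;

public class CoarsenedLocking {
    // Outer scanner: owns the single lock per next() call, so the inner
    // scanner needs no locks or memory fences of its own.
    static class RegionScanner {
        final ReentrantLock lock = new ReentrantLock();
        final StoreScanner inner = new StoreScanner(this);

        int next() {
            lock.lock();
            try {
                return inner.nextRaw();
            } finally {
                lock.unlock();
            }
        }
    }

    // Inner scanner: unsynchronized. Correctness depends on the caller
    // holding the owner's lock -- the contract coprocessors must follow.
    static class StoreScanner {
        final RegionScanner owner;
        int pos;

        StoreScanner(RegionScanner owner) {
            this.owner = owner;
        }

        int nextRaw() {
            assert owner.lock.isHeldByCurrentThread();
            return pos++;
        }
    }

    public static void main(String[] args) {
        RegionScanner rs = new RegionScanner();
        System.out.println(rs.next()); // prints 0
    }
}
```

This also makes the starvation drawback visible: a flush or compaction waiting on the same lock only runs between `next()` calls.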





[jira] [Commented] (HBASE-14283) Reverse scan doesn’t work with HFile inline index/bloom blocks

2015-10-13 Thread Ben Lau (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956228#comment-14956228
 ] 

Ben Lau commented on HBASE-14283:
-

So anything I should change in the patches?  How many +1's are needed?  Does 
someone else need to +1?  

> Reverse scan doesn’t work with HFile inline index/bloom blocks
> --
>
> Key: HBASE-14283
> URL: https://issues.apache.org/jira/browse/HBASE-14283
> Project: HBase
>  Issue Type: Bug
>Reporter: Ben Lau
>Assignee: Ben Lau
> Attachments: HBASE-14283-0.98.patch, HBASE-14283-branch-1.0.patch, 
> HBASE-14283-branch-1.1.patch, HBASE-14283-branch-1.2.patch, 
> HBASE-14283-branch-1.patch, HBASE-14283-master.patch, HBASE-14283-v2.patch, 
> HBASE-14283.patch, hfile-seek-before.patch
>
>
> Reverse scans do not work if an HFile contains inline bloom blocks or leaf 
> level index blocks. The reason is that the seekBefore() call calculates 
> the previous data block’s size by assuming data blocks are contiguous, which 
> is not the case in HFile V2 and beyond.
> Attached is a first cut patch (targeting 
> bcef28eefaf192b0ad48c8011f98b8e944340da5 on trunk) which includes:
> (1) a unit test which exposes the bug and demonstrates failures for both 
> inline bloom blocks and inline index blocks
> (2) a proposed fix for inline index blocks that does not require a new HFile 
> version change, but is only performant for 1 and 2-level indexes and not 3+.  
> 3+ requires an HFile format update for optimal performance.
> This patch does not fix the bloom filter blocks bug.  But the fix should be 
> similar to the case of inline index blocks.  The reason I haven’t made the 
> change yet is I want to confirm that you guys would be fine with me revising 
> the HFile.Reader interface.
> Specifically, these 2 functions (getGeneralBloomFilterMetadata and 
> getDeleteBloomFilterMetadata) need to return the BloomFilter.  Right now the 
> HFileReader class doesn’t have a reference to the bloom filters (and hence 
> their indices) and only constructs the IO streams and hence has no way to 
> know where the bloom blocks are in the HFile.  It seems that the HFile.Reader 
> bloom method comments state that they “know nothing about how that metadata 
> is structured” but I do not know if that is a requirement of the abstraction 
> (why?) or just an incidental current property. 
> We would like to do 3 things with community approval:
> (1) Update the HFile.Reader interface and implementation to contain and 
> return BloomFilters directly rather than unstructured IO streams
> (2) Merge the fixes for index blocks and bloom blocks into open source
> (3) Create a new Jira ticket for open source HBase to add a ‘prevBlockSize’ 
> field in the block header in the next HFile version, so that seekBefore() 
> calls can not only be correct but performant in all cases.
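Why contiguity matters: with inline index/bloom blocks interleaved among data blocks, "the previous block ends where the current one starts" no longer holds, so the previous data block must come from a lookup structure (or, as proposed, a `prevBlockSize` field in the block header). A minimal sketch under that assumption, with a plain map standing in for the HFile block index:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class SeekBefore {
    // Index of data blocks only: start offset -> block length. Inline
    // index/bloom blocks occupy file ranges that are absent from this map,
    // so offset arithmetic alone cannot find the previous data block.
    static long previousDataBlockOffset(NavigableMap<Long, Long> dataBlocks,
                                        long curOffset) {
        Long prev = dataBlocks.lowerKey(curOffset);
        if (prev == null) {
            throw new IllegalStateException("no data block before " + curOffset);
        }
        return prev;
    }

    public static void main(String[] args) {
        NavigableMap<Long, Long> idx = new TreeMap<>();
        idx.put(0L, 100L);   // data block at offset 0, length 100
        // an inline bloom block occupies [100, 150): not in the data index
        idx.put(150L, 100L); // next data block starts at 150, not 100
        System.out.println(previousDataBlockOffset(idx, 150L)); // prints 0
    }
}
```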





[jira] [Commented] (HBASE-14283) Reverse scan doesn’t work with HFile inline index/bloom blocks

2015-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956240#comment-14956240
 ] 

Anoop Sam John commented on HBASE-14283:


Attaching the same patch once more for a clean QA run. Then we can commit it.
Ted, can you please commit it after QA?

> Reverse scan doesn’t work with HFile inline index/bloom blocks
> --
>
> Key: HBASE-14283
> URL: https://issues.apache.org/jira/browse/HBASE-14283
> Project: HBase
>  Issue Type: Bug
>Reporter: Ben Lau
>Assignee: Ben Lau
> Attachments: HBASE-14283-0.98.patch, HBASE-14283-branch-1.0.patch, 
> HBASE-14283-branch-1.1.patch, HBASE-14283-branch-1.2.patch, 
> HBASE-14283-branch-1.patch, HBASE-14283-master.patch, HBASE-14283-v2.patch, 
> HBASE-14283.patch, hfile-seek-before.patch
>
>
> Reverse scans do not work if an HFile contains inline bloom blocks or leaf 
> level index blocks. The reason is that the seekBefore() call calculates 
> the previous data block’s size by assuming data blocks are contiguous, which 
> is not the case in HFile V2 and beyond.
> Attached is a first cut patch (targeting 
> bcef28eefaf192b0ad48c8011f98b8e944340da5 on trunk) which includes:
> (1) a unit test which exposes the bug and demonstrates failures for both 
> inline bloom blocks and inline index blocks
> (2) a proposed fix for inline index blocks that does not require a new HFile 
> version change, but is only performant for 1 and 2-level indexes and not 3+.  
> 3+ requires an HFile format update for optimal performance.
> This patch does not fix the bloom filter blocks bug.  But the fix should be 
> similar to the case of inline index blocks.  The reason I haven’t made the 
> change yet is I want to confirm that you guys would be fine with me revising 
> the HFile.Reader interface.
> Specifically, these 2 functions (getGeneralBloomFilterMetadata and 
> getDeleteBloomFilterMetadata) need to return the BloomFilter.  Right now the 
> HFileReader class doesn’t have a reference to the bloom filters (and hence 
> their indices) and only constructs the IO streams and hence has no way to 
> know where the bloom blocks are in the HFile.  It seems that the HFile.Reader 
> bloom method comments state that they “know nothing about how that metadata 
> is structured” but I do not know if that is a requirement of the abstraction 
> (why?) or just an incidental current property. 
> We would like to do 3 things with community approval:
> (1) Update the HFile.Reader interface and implementation to contain and 
> return BloomFilters directly rather than unstructured IO streams
> (2) Merge the fixes for index blocks and bloom blocks into open source
> (3) Create a new Jira ticket for open source HBase to add a ‘prevBlockSize’ 
> field in the block header in the next HFile version, so that seekBefore() 
> calls can not only be correct but performant in all cases.





[jira] [Commented] (HBASE-14602) Convert HBasePoweredBy Wiki page to a hbase.apache.org page

2015-10-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956267#comment-14956267
 ] 

stack commented on HBASE-14602:
---

+1

Did you set wiki to point to this new page? Thanks [~misty]

> Convert HBasePoweredBy Wiki page to a hbase.apache.org page
> ---
>
> Key: HBASE-14602
> URL: https://issues.apache.org/jira/browse/HBASE-14602
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-14602.patch
>
>
> https://wiki.apache.org/hadoop/Hbase/PoweredBy
> Leave all the info as it is, add a disclaimer about the accuracy of the info 
> and info on how to get yourself added/updated (email hbase-dev or file a JIRA 
> -- we want the hurdle to be much lower than it has been). Redirect the wiki 
> page to the new site.





[jira] [Commented] (HBASE-14596) TestCellACLs failing... on1.2 builds

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956285#comment-14956285
 ] 

Hudson commented on HBASE-14596:


SUCCESS: Integrated in HBase-TRUNK #6906 (See 
[https://builds.apache.org/job/HBase-TRUNK/6906/])
HBASE-14596 TestCellACLs failing... on1.2 builds; FIX (stack: rev 
e6a271a4fb8696d8dc4e07ece9c2b5fc86f775f9)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java


> TestCellACLs failing... on1.2 builds
> 
>
> Key: HBASE-14596
> URL: https://issues.apache.org/jira/browse/HBASE-14596
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14596.branch-1.patch, 14596.debug.txt, 
> 14596.master.patch, 14596.txt
>
>
> Caught this in 1.7 builds:
> {code}
> "PriorityRpcServer.handler=4,queue=0,port=42214" daemon prio=10 
> tid=0x7f08d5786000 nid=0x2eed in Object.wait() [0x7f08918d5000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1659)
>   - locked <0x0007580e5f98> (a java.util.concurrent.atomic.AtomicLong)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1688)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
>   - locked <0x0007580f36f8> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.addUserPermission(AccessControlLists.java:176)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2175)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2172)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:425)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:205)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.grant(AccessController.java:2172)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7650)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1878)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32590)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2120)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> "PriorityRpcServer.handler=3,queue=1,port=42214" daemon prio=10 
> tid=0x7f08d5784000 nid=0x2eec in Object.wait() [0x7f08919d7000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1248)
>   - locked <0x0007cb61ecd8> (a org.apache.hadoop.hbase.ipc.Call)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> 

[jira] [Commented] (HBASE-14493) Upgrade the jamon-runtime dependency

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956284#comment-14956284
 ] 

Hudson commented on HBASE-14493:


SUCCESS: Integrated in HBase-TRUNK #6906 (See 
[https://builds.apache.org/job/HBase-TRUNK/6906/])
HBASE-14493 Upgrade the jamon-runtime dependency (apurtell: rev 
4754e583f9e33ab0573e957ab69126edb95d6a1c)
* pom.xml
* hbase-resource-bundle/src/main/resources/supplemental-models.xml


> Upgrade the jamon-runtime dependency
> 
>
> Key: HBASE-14493
> URL: https://issues.apache.org/jira/browse/HBASE-14493
> Project: HBase
>  Issue Type: Task
>Affects Versions: 1.1.1
>Reporter: Newton Alex
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14493-0.98.patch, HBASE-14493-branch-1.patch, 
> HBASE-14493.patch, HBASE-14493.patch
>
>
> The current version of HBase uses jamon-runtime under MPL 1.1, which has legal 
> restrictions. Newer versions of jamon-runtime appear to be MPL 2.0. HBase 
> should upgrade to a version of jamon with a safer license.
> 2.4.0 is MPL 1.1 : 
> http://grepcode.com/snapshot/repo1.maven.org/maven2/org.jamon/jamon-runtime/2.4.0
> 2.4.1 is MPL 2.0 : 
> http://grepcode.com/snapshot/repo1.maven.org/maven2/org.jamon/jamon-runtime/2.4.1
> Here’s a comparison of the equivalent sections of the respective licenses 
> dealing w/ Termination:
> MPL 1.1 - Section 8 (Termination) Subsection 2:
> 8.2. If You initiate litigation by asserting a patent infringement claim 
> (excluding declatory judgment actions) against Initial Developer or a 
> Contributor (the Initial Developer or Contributor against whom You file such 
> action is referred to as "Participant") alleging that:
> such Participant's Contributor Version directly or indirectly infringes any 
> patent, then any and all rights granted by such Participant to You under 
> Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice from 
> Participant terminate prospectively, unless if within 60 days after receipt 
> of notice You either: (i) agree in writing to pay Participant a mutually 
> agreeable reasonable royalty for Your past and future use of Modifications 
> made by such Participant, or (ii) withdraw Your litigation claim with respect 
> to the Contributor Version against such Participant. If within 60 days of 
> notice, a reasonable royalty and payment arrangement are not mutually agreed 
> upon in writing by the parties or the litigation claim is not withdrawn, the 
> rights granted by Participant to You under Sections 2.1 and/or 2.2 
> automatically terminate at the expiration of the 60 day notice period 
> specified above.
> any software, hardware, or device, other than such Participant's Contributor 
> Version, directly or indirectly infringes any patent, then any rights granted 
> to You by such Participant under Sections 2.1(b) and 2.2(b) are revoked 
> effective as of the date You first made, used, sold, distributed, or had 
> made, Modifications made by that Participant.
> MPL 2.0 - Section 5 (Termination) Subsection 2:
> 5.2. If You initiate litigation against any entity by asserting a patent 
> infringement claim (excluding declaratory judgment actions, counter-claims, 
> and cross-claims) alleging that a Contributor Version directly or indirectly 
> infringes any patent, then the rights granted to You by any and all 
> Contributors for the Covered Software under Section 2.1 of this License shall 
> terminate.





[jira] [Commented] (HBASE-14602) Convert HBasePoweredBy Wiki page to a hbase.apache.org page

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956305#comment-14956305
 ] 

Hadoop QA commented on HBASE-14602:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12766469/HBASE-14602.patch
  against master branch at commit e5580c247c06d8c708b92e96a5622853ec06a77d.
  ATTACHMENT ID: 12766469

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15996//console

This message is automatically generated.

> Convert HBasePoweredBy Wiki page to a hbase.apache.org page
> ---
>
> Key: HBASE-14602
> URL: https://issues.apache.org/jira/browse/HBASE-14602
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-14602.patch
>
>
> https://wiki.apache.org/hadoop/Hbase/PoweredBy
> Leave all the info as it is, add a disclaimer about the accuracy of the info 
> and info on how to get yourself added/updated (email hbase-dev or file a JIRA 
> -- we want the hurdle to be much lower than it has been). Redirect the wiki 
> page to the new site.





[jira] [Commented] (HBASE-14282) Remove metrics2

2015-10-13 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956136#comment-14956136
 ] 

Gabor Liptak commented on HBASE-14282:
--

What is the preferred way to move forward here?

> Remove metrics2
> ---
>
> Key: HBASE-14282
> URL: https://issues.apache.org/jira/browse/HBASE-14282
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0
>
>
> Metrics2 has a whole bunch of race conditions and weird edges because of all 
> of the caching that metrics2 does.
> Additionally, tying ourselves to something so integral to Hadoop, which has 
> many versions we support, has been a maintenance nightmare. Removing it would 
> allow us to completely get rid of the compat modules.
> Rip it out.





[jira] [Commented] (HBASE-14598) ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations

2015-10-13 Thread Ian Friedman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956205#comment-14956205
 ] 

Ian Friedman commented on HBASE-14598:
--

Patch attached as hbase-14598-v1.patch

Highlights of the change: 
- store buf.position() + extra in a long to avoid potential overflow if either 
is too big
- check if needed capacity is greater than our max size 
- make sure doubling the array size doesn't exceed max size
- throw buffer overflow exception if it is, and catch it in IPCUtil rethrowing 
as DoNotRetryIOException so the client gives up
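The listed changes can be modeled in isolation. The constant below mirrors the `Integer.MAX_VALUE - 8` cap the issue borrows from ArrayList; the method name and signature are illustrative, not the actual ByteBufferOutputStream API:

```java
import java.nio.BufferOverflowException;

public class CappedGrowth {
    // Cap modeled on ArrayList's MAX_ARRAY_SIZE: JVMs refuse array
    // allocations too close to Integer.MAX_VALUE.
    static final int MAX_SIZE = Integer.MAX_VALUE - 8;

    // Computes a new capacity, doing the arithmetic in a long so that
    // position + extra cannot overflow int, and refusing to exceed MAX_SIZE.
    static int newCapacity(int curCapacity, int position, int extra) {
        long needed = (long) position + extra;   // long avoids int overflow
        if (needed > MAX_SIZE) {
            throw new BufferOverflowException(); // caller should give up
        }
        long doubled = (long) curCapacity * 2;   // doubling also capped
        return (int) Math.min(Math.max(doubled, needed), MAX_SIZE);
    }

    public static void main(String[] args) {
        System.out.println(newCapacity(1024, 1024, 100)); // prints 2048
    }
}
```

On overflow the exception propagates to the caller, matching the described rethrow in IPCUtil as DoNotRetryIOException so the client stops retrying.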

> ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations
> --
>
> Key: HBASE-14598
> URL: https://issues.apache.org/jira/browse/HBASE-14598
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12
>Reporter: Ian Friedman
>Assignee: Ian Friedman
> Attachments: 14598.txt, hbase-14598-v1.patch
>
>
> We noticed that, when returning a Scan against a region containing particularly 
> large (wide) rows, it is possible during 
> ByteBufferOutputStream.checkSizeAndGrow() to attempt to create a new 
> ByteBuffer larger than the JVM allows, which then throws an OutOfMemoryError. 
> The code currently caps it at Integer.MAX_VALUE, which is actually larger than 
> the JVM allows. This led to cascading region server death: the RegionServer 
> hosting the region died, the region opened on a new server, the client 
> retried the scan, and the new RS died as well.
> I believe ByteBufferOutputStream should not try to create ByteBuffers that 
> large and instead throw an exception back up if it needs to grow any bigger. 
> The limit should probably be something like Integer.MAX_VALUE-8, as that is 
> what ArrayList uses. ref: 
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/util/ArrayList.java#221





[jira] [Commented] (HBASE-14598) ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations

2015-10-13 Thread Ian Friedman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956206#comment-14956206
 ] 

Ian Friedman commented on HBASE-14598:
--

Thanks [~stack]!

> ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations
> --
>
> Key: HBASE-14598
> URL: https://issues.apache.org/jira/browse/HBASE-14598
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12
>Reporter: Ian Friedman
>Assignee: Ian Friedman
> Attachments: 14598.txt, hbase-14598-v1.patch
>
>
> We noticed that, when returning a Scan against a region containing particularly 
> large (wide) rows, it is possible during 
> ByteBufferOutputStream.checkSizeAndGrow() to attempt to create a new 
> ByteBuffer larger than the JVM allows, which then throws an OutOfMemoryError. 
> The code currently caps it at Integer.MAX_VALUE, which is actually larger than 
> the JVM allows. This led to cascading region server death: the RegionServer 
> hosting the region died, the region opened on a new server, the client 
> retried the scan, and the new RS died as well.
> I believe ByteBufferOutputStream should not try to create ByteBuffers that 
> large and instead throw an exception back up if it needs to grow any bigger. 
> The limit should probably be something like Integer.MAX_VALUE-8, as that is 
> what ArrayList uses. ref: 
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/util/ArrayList.java#221





[jira] [Commented] (HBASE-14598) ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations

2015-10-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956204#comment-14956204
 ] 

stack commented on HBASE-14598:
---

+1

Lets see what patch build says.

> ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations
> --
>
> Key: HBASE-14598
> URL: https://issues.apache.org/jira/browse/HBASE-14598
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12
>Reporter: Ian Friedman
>Assignee: Ian Friedman
> Attachments: 14598.txt, hbase-14598-v1.patch
>
>
> We noticed that, when returning a Scan against a region containing particularly 
> large (wide) rows, it is possible during 
> ByteBufferOutputStream.checkSizeAndGrow() to attempt to create a new 
> ByteBuffer larger than the JVM allows, which then throws an OutOfMemoryError. 
> The code currently caps it at Integer.MAX_VALUE, which is actually larger than 
> the JVM allows. This led to cascading region server death: the RegionServer 
> hosting the region died, the region opened on a new server, the client 
> retried the scan, and the new RS died as well.
> I believe ByteBufferOutputStream should not try to create ByteBuffers that 
> large and instead throw an exception back up if it needs to grow any bigger. 
> The limit should probably be something like Integer.MAX_VALUE-8, as that is 
> what ArrayList uses. ref: 
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/util/ArrayList.java#221





[jira] [Comment Edited] (HBASE-14221) Reduce the number of time row comparison is done in a Scan

2015-10-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956224#comment-14956224
 ] 

Lars Hofhansl edited comment on HBASE-14221 at 10/14/15 4:18 AM:
-

I think [~mcorgan]'s KeyValueScannerHeap is worth exploring still (see later on 
that jira). It beats PriorityQueue in every test, and since it is our 
implementation we can further tweak it down the road. Matt's MIA unfortunately, 
but I plan to test some more with it. 

(And I have some awesome database guys sitting less than 30 feet from me, and 
they came up with a strikingly similar scanner approach for their LSM-based 
database)



was (Author: lhofhansl):
I think [~mcorgan] KeyValueScannerHeap is worth exploring still (see later on 
that jira). It's beats PriorityQueue in every test, and since it is our 
implementation we can further tweak it down the road. Matt's MIA unfortunately, 
but I plan to test some more with it. 

(And I have some awesome database guys sitting less than 30 feet form me, and 
they come up with a striking similar scanner approach for their LSM based 
database)


> Reduce the number of time row comparison is done in a Scan
> --
>
> Key: HBASE-14221
> URL: https://issues.apache.org/jira/browse/HBASE-14221
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: 14221-0.98-takeALook.txt, HBASE-14221.patch, 
> HBASE-14221_1.patch, HBASE-14221_1.patch, HBASE-14221_6.patch, 
> withmatchingRowspatch.png, withoutmatchingRowspatch.png
>
>
> When we tried some profiling with the PE tool, we found this.
> Currently we do row comparisons in 3 places in a simple Scan case.
> 1) ScanQueryMatcher
> {code}
>int ret = this.rowComparator.compareRows(curCell, cell);
> if (!this.isReversed) {
>   if (ret <= -1) {
> return MatchCode.DONE;
>   } else if (ret >= 1) {
> // could optimize this, if necessary?
> // Could also be called SEEK_TO_CURRENT_ROW, but this
> // should be rare/never happens.
> return MatchCode.SEEK_NEXT_ROW;
>   }
> } else {
>   if (ret <= -1) {
> return MatchCode.SEEK_NEXT_ROW;
>   } else if (ret >= 1) {
> return MatchCode.DONE;
>   }
> }
> {code}
> 2) In StoreScanner next() while starting to scan the row
> {code}
> if (!scannerContext.hasAnyLimit(LimitScope.BETWEEN_CELLS) || 
> matcher.curCell == null ||
> isNewRow || !CellUtil.matchingRow(peeked, matcher.curCell)) {
>   this.countPerRow = 0;
>   matcher.setToNewRow(peeked);
> }
> {code}
> Particularly to see if we are in a new row.
> 3) In HRegion
> {code}
>   scannerContext.setKeepProgress(true);
>   heap.next(results, scannerContext);
>   scannerContext.setKeepProgress(tmpKeepProgress);
>   nextKv = heap.peek();
> moreCellsInRow = moreCellsInRow(nextKv, currentRowCell);
> {code}
> Here again there are cases where we need to be careful for a MultiCF case. I 
> was trying to solve this for the MultiCF case but it has a lot of cases to 
> solve. But at least for a single CF case I think these comparisons can be 
> reduced.
> So for a single CF case in the SQM we are able to find if we have crossed a 
> row using the code pasted above in SQM. That comparison is definitely needed.
> Now in case of a single CF the HRegion is going to have only one element in 
> the heap and so the 3rd comparison can surely be avoided if the 
> StoreScanner.next() was over due to MatchCode.DONE caused by SQM.
> Coming to the 2nd compareRows that we do in StoreScanner.next() - even that 
> can be avoided if we know that the previous next() call ended due to a new 
> row. Doing all this, I found that compareRows, which was 19% in the profiler, 
> got reduced to 13%. Initially we can solve the single CF case, which can then 
> be extended to MultiCF cases.
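As a toy model of the bookkeeping this proposes (plain strings for cells, not the real ScanQueryMatcher/StoreScanner): each cell's row is compared exactly once and the verdict drives the row reset, instead of repeating the comparison in a second place. The counter makes the savings visible:

```java
public class SingleCfScan {
    static int comparisons = 0;

    // Matcher-style check: returns true when 'cellRow' starts a new row.
    static boolean isNewRow(String curRow, String cellRow) {
        comparisons++;
        return !cellRow.equals(curRow);
    }

    // Counts rows with ONE row comparison per cell: the matcher's verdict
    // is reused to reset the current row, rather than comparing again in
    // a separate "is this a new row" check.
    static int countRows(String[] cellRows) {
        int rows = 0;
        String curRow = null;
        for (String row : cellRows) {
            if (curRow == null || isNewRow(curRow, row)) {
                rows++;        // new row: reset state using the same verdict
                curRow = row;
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        String[] cells = {"a", "a", "b", "b", "b", "c"};
        System.out.println(countRows(cells)); // prints 3
        // 5 comparisons for 6 cells (the first cell needs none),
        // instead of comparing each cell in two places.
        System.out.println(comparisons);      // prints 5
    }
}
```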





[jira] [Commented] (HBASE-14221) Reduce the number of time row comparison is done in a Scan

2015-10-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956224#comment-14956224
 ] 

Lars Hofhansl commented on HBASE-14221:
---

I think [~mcorgan]'s KeyValueScannerHeap is still worth exploring (see later on 
that jira). It beats PriorityQueue in every test, and since it is our 
implementation we can further tweak it down the road. Matt's MIA unfortunately, 
but I plan to test some more with it. 

(And I have some awesome database guys sitting less than 30 feet from me, and 
they came up with a strikingly similar scanner approach for their LSM-based 
database.)


> Reduce the number of time row comparison is done in a Scan
> --
>
> Key: HBASE-14221
> URL: https://issues.apache.org/jira/browse/HBASE-14221
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: 14221-0.98-takeALook.txt, HBASE-14221.patch, 
> HBASE-14221_1.patch, HBASE-14221_1.patch, HBASE-14221_6.patch, 
> withmatchingRowspatch.png, withoutmatchingRowspatch.png
>
>
> When we tried to do some profiling with the PE tool, we found this.
> Currently we do row comparisons in 3 places in a simple Scan case.
> 1) ScanQueryMatcher
> {code}
>int ret = this.rowComparator.compareRows(curCell, cell);
> if (!this.isReversed) {
>   if (ret <= -1) {
> return MatchCode.DONE;
>   } else if (ret >= 1) {
> // could optimize this, if necessary?
> // Could also be called SEEK_TO_CURRENT_ROW, but this
> // should be rare/never happens.
> return MatchCode.SEEK_NEXT_ROW;
>   }
> } else {
>   if (ret <= -1) {
> return MatchCode.SEEK_NEXT_ROW;
>   } else if (ret >= 1) {
> return MatchCode.DONE;
>   }
> }
> {code}
> 2) In StoreScanner next() while starting to scan the row
> {code}
> if (!scannerContext.hasAnyLimit(LimitScope.BETWEEN_CELLS) || 
> matcher.curCell == null ||
> isNewRow || !CellUtil.matchingRow(peeked, matcher.curCell)) {
>   this.countPerRow = 0;
>   matcher.setToNewRow(peeked);
> }
> {code}
> Particularly to see if we are in a new row.
> 3) In HRegion
> {code}
>   scannerContext.setKeepProgress(true);
>   heap.next(results, scannerContext);
>   scannerContext.setKeepProgress(tmpKeepProgress);
>   nextKv = heap.peek();
> moreCellsInRow = moreCellsInRow(nextKv, currentRowCell);
> {code}
> Here again there are cases where we need to be careful in the MultiCF case.  I was 
> trying to solve this for the MultiCF case, but it has a lot of cases to 
> solve. But at least for a single CF case I think these comparisons can be 
> reduced.
> So for a single CF case in the SQM we are able to find if we have crossed a 
> row using the code pasted above in SQM. That comparison is definitely needed.
> Now in case of a single CF the HRegion is going to have only one element in 
> the heap and so the 3rd comparison can surely be avoided if the 
> StoreScanner.next() was over due to MatchCode.DONE caused by SQM.
> Coming to the 2nd compareRows that we do in StoreScanner.next() - even that 
> can be avoided if we know that the previous next() call was over due to a new 
> row. Doing all this I found that compareRows, which was at 19% in the 
> profiler, got reduced to 13%. Initially we can solve the single CF case, which can 
> be extended to MultiCF cases.





[jira] [Commented] (HBASE-14602) Convert HBasePoweredBy Wiki page to a hbase.apache.org page

2015-10-13 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956278#comment-14956278
 ] 

Misty Stanley-Jones commented on HBASE-14602:
-

Will do after the page is live.

> Convert HBasePoweredBy Wiki page to a hbase.apache.org page
> ---
>
> Key: HBASE-14602
> URL: https://issues.apache.org/jira/browse/HBASE-14602
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-14602.patch
>
>
> https://wiki.apache.org/hadoop/Hbase/PoweredBy
> Leave all the info as it is, add a disclaimer about the accuracy of the info 
> and info on how to get yourself added/updated (email hbase-dev or file a JIRA 
> -- we want the hurdle to be much lower than it has been). Redirect the wiki 
> page to the new site.





[jira] [Commented] (HBASE-14501) NPE in replication with TDE

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954601#comment-14954601
 ] 

Hudson commented on HBASE-14501:


FAILURE: Integrated in HBase-1.2 #244 (See 
[https://builds.apache.org/job/HBase-1.2/244/])
HBASE-14501 NPE in replication with TDE (enis: rev 
b7b18e09f4f12b117d37b929127e8f52927b9d98)
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java


> NPE in replication with TDE
> ---
>
> Key: HBASE-14501
> URL: https://issues.apache.org/jira/browse/HBASE-14501
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: hbase-14501_v1.patch
>
>
> We are seeing a NPE when replication (or in this case async wal replay for 
> region replicas) is run on top of an HDFS cluster with TDE configured.
> This is the stack trace:
> {code}
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.CellUtil.matchingRow(CellUtil.java:370)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.countDistinctRowKeys(ReplicationSource.java:649)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.readAllEntriesToReplicateOrNextFile(ReplicationSource.java:450)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:346)
> {code}
> This stack trace can only happen if WALEdit.getCells() returns an array 
> containing null entries. I believe this happens because 
> {{KeyValueCodec.parseCell()}} uses {{KeyValueUtil.iscreate()}}, which returns 
> null in case of EOF at the beginning. However, the contract for 
> {{Decoder.parseCell()}} is not clear on whether returning null is acceptable or not. 
> The other Decoders (CompressedKvDecoder, CellCodec, etc.) do not return null 
> while KeyValueCodec does. 
> BaseDecoder has this code: 
> {code}
>   public boolean advance() throws IOException {
> if (!this.hasNext) return this.hasNext;
> if (this.in.available() == 0) {
>   this.hasNext = false;
>   return this.hasNext;
> }
> try {
>   this.current = parseCell();
> } catch (IOException ioEx) {
>   rethrowEofException(ioEx);
> }
> return this.hasNext;
>   }
> {code}
> which is not correct, since its use of {{InputStream.available()}} does not follow the 
> javadoc: 
> (https://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#available()).
>  DFSInputStream implements {{available()}} as the remaining bytes to read 
> from the stream, so we do not see the issue there. 
> {{CryptoInputStream.available()}} does a similar thing, but there we do see the issue. 
> So two questions: 
>  - What should be the interface for {{Decoder.parseCell()}}? Can it return null? 
>  - How to properly fix {{BaseDecoder.advance()}} to not rely on the {{available()}} 
> call. 
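One common way to detect end-of-stream without {{available()}} is a probing one-byte read that is pushed back on success. This is only an illustrative sketch under that general pattern (hypothetical class, not the fix that was committed):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.PushbackInputStream;

// Sketch of an advance() that detects end-of-stream with a probing read
// instead of InputStream.available(), whose javadoc only promises an
// estimate of bytes readable without blocking.
public class ProbeAdvance {
    private final PushbackInputStream in;
    private int current = -1;

    ProbeAdvance(java.io.InputStream raw) {
        this.in = new PushbackInputStream(raw, 1);
    }

    // Returns false exactly at true end of stream, regardless of available().
    boolean advance() throws IOException {
        int b = in.read();    // probing read: -1 only at real EOF
        if (b == -1) {
            return false;
        }
        in.unread(b);         // push the byte back for the real parser
        current = in.read();  // stand-in for parseCell()
        return true;
    }

    int current() { return current; }

    public static void main(String[] args) throws IOException {
        ProbeAdvance p = new ProbeAdvance(new ByteArrayInputStream(new byte[] {1, 2}));
        int n = 0;
        while (p.advance()) n++;
        System.out.println(n); // prints 2
    }
}
```

Unlike the {{available() == 0}} check, this behaves the same for DFSInputStream and CryptoInputStream, because it asks the stream for actual data rather than an estimate.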





[jira] [Commented] (HBASE-14501) NPE in replication with TDE

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954638#comment-14954638
 ] 

Hudson commented on HBASE-14501:


SUCCESS: Integrated in HBase-0.98 #1153 (See 
[https://builds.apache.org/job/HBase-0.98/1153/])
HBASE-14501 NPE in replication with TDE (enis: rev 
c6608e652a118b1c5dc683e5e5b9694724e1d8f5)
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java


> NPE in replication with TDE
> ---
>
> Key: HBASE-14501
> URL: https://issues.apache.org/jira/browse/HBASE-14501
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: hbase-14501_v1.patch
>
>
> We are seeing a NPE when replication (or in this case async wal replay for 
> region replicas) is run on top of an HDFS cluster with TDE configured.
> This is the stack trace:
> {code}
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.CellUtil.matchingRow(CellUtil.java:370)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.countDistinctRowKeys(ReplicationSource.java:649)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.readAllEntriesToReplicateOrNextFile(ReplicationSource.java:450)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:346)
> {code}
> This stack trace can only happen if WALEdit.getCells() returns an array 
> containing null entries. I believe this happens because 
> {{KeyValueCodec.parseCell()}} uses {{KeyValueUtil.iscreate()}}, which returns 
> null in case of EOF at the beginning. However, the contract for 
> {{Decoder.parseCell()}} is not clear on whether returning null is acceptable or not. 
> The other Decoders (CompressedKvDecoder, CellCodec, etc.) do not return null 
> while KeyValueCodec does. 
> BaseDecoder has this code: 
> {code}
>   public boolean advance() throws IOException {
> if (!this.hasNext) return this.hasNext;
> if (this.in.available() == 0) {
>   this.hasNext = false;
>   return this.hasNext;
> }
> try {
>   this.current = parseCell();
> } catch (IOException ioEx) {
>   rethrowEofException(ioEx);
> }
> return this.hasNext;
>   }
> {code}
> which is not correct, since its use of {{InputStream.available()}} does not follow the 
> javadoc: 
> (https://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#available()).
>  DFSInputStream implements {{available()}} as the remaining bytes to read 
> from the stream, so we do not see the issue there. 
> {{CryptoInputStream.available()}} does a similar thing, but there we do see the issue. 
> So two questions: 
>  - What should be the interface for {{Decoder.parseCell()}}? Can it return null? 
>  - How to properly fix {{BaseDecoder.advance()}} to not rely on the {{available()}} 
> call. 





[jira] [Commented] (HBASE-14596) TestCellACLs failing... on 1.2 builds

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954666#comment-14954666
 ] 

Hudson commented on HBASE-14596:


FAILURE: Integrated in HBase-1.3-IT #230 (See 
[https://builds.apache.org/job/HBase-1.3-IT/230/])
HBASE-14596 TestCellACLs failing... on1.2 builds; TUNEUP (stack: rev 
80a51d227e38f5276e8720fc7f2d32ed009064c2)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java


> TestCellACLs failing... on 1.2 builds
> 
>
> Key: HBASE-14596
> URL: https://issues.apache.org/jira/browse/HBASE-14596
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
> Attachments: 14596.txt
>
>
> Caught this in 1.7 builds:
> {code}
> "PriorityRpcServer.handler=4,queue=0,port=42214" daemon prio=10 
> tid=0x7f08d5786000 nid=0x2eed in Object.wait() [0x7f08918d5000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1659)
>   - locked <0x0007580e5f98> (a java.util.concurrent.atomic.AtomicLong)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1688)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
>   - locked <0x0007580f36f8> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.addUserPermission(AccessControlLists.java:176)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2175)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2172)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:425)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:205)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.grant(AccessController.java:2172)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7650)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1878)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32590)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2120)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> "PriorityRpcServer.handler=3,queue=1,port=42214" daemon prio=10 
> tid=0x7f08d5784000 nid=0x2eec in Object.wait() [0x7f08919d7000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1248)
>   - locked <0x0007cb61ecd8> (a org.apache.hadoop.hbase.ipc.Call)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> 

[jira] [Commented] (HBASE-14268) Improve KeyLocker

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954667#comment-14954667
 ] 

Hudson commented on HBASE-14268:


FAILURE: Integrated in HBase-1.3-IT #230 (See 
[https://builds.apache.org/job/HBase-1.3-IT/230/])
HBASE-14268 Improve KeyLocker (Hiroshi Ikeda) (stack: rev 
98f1387f867835118cadf8b2bbc1e937a16de250)
* hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestKeyLocker.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/KeyLocker.java


> Improve KeyLocker
> -
>
> Key: HBASE-14268
> URL: https://issues.apache.org/jira/browse/HBASE-14268
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14268-V5.patch, HBASE-14268-V2.patch, 
> HBASE-14268-V3.patch, HBASE-14268-V4.patch, HBASE-14268-V5.patch, 
> HBASE-14268-V5.patch, HBASE-14268-V6.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268.patch, KeyLockerIncrKeysPerformance.java, 
> KeyLockerPerformance.java, ReferenceTestApp.java
>
>
> 1. The implementation of {{KeyLocker}} uses atomic variables inside a 
> synchronized block, which doesn't make sense. Moreover, the logic inside the 
> synchronized block is non-trivial, which hurts performance in heavily 
> multi-threaded environments.
> 2. {{KeyLocker}} hands out an instance of {{ReentrantLock}} which is already 
> locked, but that doesn't follow the contract of {{ReentrantLock}}, because you 
> are not allowed to freely invoke lock/unlock methods under that contract. 
> That introduces a potential risk: whenever you see a variable of type 
> {{ReentrantLock}}, you should pay attention to where the contained instance 
> came from.
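One way to avoid both problems - the synchronized block and the leaked pre-locked lock - is a callback-style per-key lock. This is an illustrative sketch (hypothetical class, not the KeyLocker implementation), and unlike KeyLocker it never evicts idle locks:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Sketch: per-key locking without a global synchronized block and without
// handing a pre-locked ReentrantLock to callers. The lock/unlock pairing
// stays inside withLock(), so the ReentrantLock contract cannot be violated
// by the caller.
public class KeyLockSketch<K> {
    private final ConcurrentHashMap<K, ReentrantLock> locks = new ConcurrentHashMap<>();

    public <V> V withLock(K key, Supplier<V> body) {
        // computeIfAbsent is atomic per key: concurrent callers get the same lock.
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            return body.get();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        KeyLockSketch<String> locker = new KeyLockSketch<>();
        int v = locker.withLock("row1", () -> 41 + 1);
        System.out.println(v); // prints 42
    }
}
```

The trade-off is that the map grows with the number of distinct keys; KeyLocker addresses that with weak references, which is exactly the part this sketch omits.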





[jira] [Commented] (HBASE-14588) Stop accessing test resources from within src folder

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954668#comment-14954668
 ] 

Hudson commented on HBASE-14588:


FAILURE: Integrated in HBase-1.3-IT #230 (See 
[https://builds.apache.org/job/HBase-1.3-IT/230/])
HBASE-14588 Stop accessing test resources from within src folder (Andrew 
(stack: rev 6f282810526f88706111d21f4ec65960ce499b44)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRecoveredEdits.java
* pom.xml
* 
hbase-server/src/test/resources/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* 
hbase-server/src/test/data/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* hbase-server/src/test/resources/0016310
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestReference.java
* hbase-server/src/test/data/0016310


> Stop accessing test resources from within src folder
> 
>
> Key: HBASE-14588
> URL: https://issues.apache.org/jira/browse/HBASE-14588
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14588.001.patch, hbase-14588.001.patch, 
> hbase-14588.002.patch
>
>
> A few tests in hbase-server reach into the src/test/data folder to get test 
> resources, which is naughty since tests are supposed to only operate within 
> the target/ folder. It's better to put these into src/test/resources and let 
> them be automatically copied into target/ via the resources plugin, like 
> other test resources.





[jira] [Commented] (HBASE-14268) Improve KeyLocker

2015-10-13 Thread Y. SREENIVASULU REDDY (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954684#comment-14954684
 ] 

Y. SREENIVASULU REDDY commented on HBASE-14268:
---

Compilation is failing on the master branch:

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project hbase-common: Compilation failure: Compilation failure:
hbase/hbase/hbase-common/src/main/java/org/apache/hadoop/hbase/util/KeyLocker.java:[51,17]
 cannot find symbol
[ERROR] symbol:   class WeakObjectPool
{code}

> Improve KeyLocker
> -
>
> Key: HBASE-14268
> URL: https://issues.apache.org/jira/browse/HBASE-14268
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14268-V5.patch, HBASE-14268-V2.patch, 
> HBASE-14268-V3.patch, HBASE-14268-V4.patch, HBASE-14268-V5.patch, 
> HBASE-14268-V5.patch, HBASE-14268-V6.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268.patch, KeyLockerIncrKeysPerformance.java, 
> KeyLockerPerformance.java, ReferenceTestApp.java
>
>
> 1. The implementation of {{KeyLocker}} uses atomic variables inside a 
> synchronized block, which doesn't make sense. Moreover, the logic inside the 
> synchronized block is non-trivial, which hurts performance in heavily 
> multi-threaded environments.
> 2. {{KeyLocker}} hands out an instance of {{ReentrantLock}} which is already 
> locked, but that doesn't follow the contract of {{ReentrantLock}}, because you 
> are not allowed to freely invoke lock/unlock methods under that contract. 
> That introduces a potential risk: whenever you see a variable of type 
> {{ReentrantLock}}, you should pay attention to where the contained instance 
> came from.





[jira] [Commented] (HBASE-14579) Users authenticated with KERBEROS are recorded as being authenticated with SIMPLE

2015-10-13 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954636#comment-14954636
 ] 

Nicolas Liochon commented on HBASE-14579:
-

Thanks Stack, yes, it would be great. I could change the script, but I can't 
easily test it right now.

> Users authenticated with KERBEROS are recorded as being authenticated with 
> SIMPLE
> -
>
> Key: HBASE-14579
> URL: https://issues.apache.org/jira/browse/HBASE-14579
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 1.2.0, 0.98.15
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: hbase-14579.patch
>
>
> That's the HBase version of HADOOP-10683.
> We see:
> ??hbase.Server (RpcServer.java:saslReadAndProcess(1446)) - Auth successful 
> for securedUser/localh...@example.com (auth:SIMPLE)??
> while we would like to see:
> ??hbase.Server (RpcServer.java:saslReadAndProcess(1446)) - Auth successful 
> for securedUser/localh...@example.com (auth:KERBEROS)??
> The fix is simple, but it means we need hadoop 2.5+. 
> There are also a lot of cases where HBase calls "createUser" without specifying 
> the authentication method... I don't have a solution for those ones.





[jira] [Commented] (HBASE-14420) Zombie Stomping Session

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954644#comment-14954644
 ] 

Hadoop QA commented on HBASE-14420:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12766263/none_fix.txt
  against master branch at commit 2ff6d0fe4789857ab51685949711d755dedd459a.
  ATTACHMENT ID: 12766263

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev-support patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.lens.server.query.TestQueryEndEmailNotifier.testSuccessfulQuery(TestQueryEndEmailNotifier.java:250)
at 
org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
at 
org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
at org.testng.TestRunner.privateRun(TestRunner.java:767)
at org.testng.TestRunner.run(TestRunner.java:617)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
at org.testng.SuiteRunner.run(SuiteRunner.java:240)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1198)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1123)
at org.testng.TestNG.run(TestNG.java:1031)
at 
org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:69)
at 
org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.executeMulti(TestNGDirectoryTestSuite.java:181)
at 
org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute(TestNGDirectoryTestSuite.java:99)
at 
org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:113)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15982//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15982//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15982//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15982//console

This message is automatically generated.

> Zombie Stomping Session
> ---
>
> Key: HBASE-14420
> URL: https://issues.apache.org/jira/browse/HBASE-14420
> Project: HBase
>  Issue Type: Umbrella
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: hangers.txt, none_fix (1).txt, none_fix.txt, 
> none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, none_fix.txt, 
> none_fix.txt, none_fix.txt, none_fix.txt
>
>
> Patch builds are now failing most of the time because we are dropping zombies. 
> I confirm we are doing this on non-Apache build boxes too.
> Left-over zombies consume resources on build boxes (OOME cannot create native 
> threads). Having to do multiple test runs in the hope that we can get a 
> 

[jira] [Commented] (HBASE-14501) NPE in replication with TDE

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954515#comment-14954515
 ] 

Hudson commented on HBASE-14501:


SUCCESS: Integrated in HBase-1.1 #702 (See 
[https://builds.apache.org/job/HBase-1.1/702/])
HBASE-14501 NPE in replication with TDE (enis: rev 
55c2f132550ed1ccc4904211ac61fc6035367b24)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java


> NPE in replication with TDE
> ---
>
> Key: HBASE-14501
> URL: https://issues.apache.org/jira/browse/HBASE-14501
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: hbase-14501_v1.patch
>
>
> We are seeing a NPE when replication (or in this case async wal replay for 
> region replicas) is run on top of an HDFS cluster with TDE configured.
> This is the stack trace:
> {code}
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.CellUtil.matchingRow(CellUtil.java:370)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.countDistinctRowKeys(ReplicationSource.java:649)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.readAllEntriesToReplicateOrNextFile(ReplicationSource.java:450)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:346)
> {code}
> This stack trace can only happen if WALEdit.getCells() returns an array 
> containing null entries. I believe this happens because 
> {{KeyValueCodec.parseCell()}} uses {{KeyValueUtil.iscreate()}}, which returns 
> null in case of EOF at the beginning. However, the contract for 
> {{Decoder.parseCell()}} is not clear on whether returning null is acceptable or not. 
> The other Decoders (CompressedKvDecoder, CellCodec, etc.) do not return null 
> while KeyValueCodec does. 
> BaseDecoder has this code: 
> {code}
>   public boolean advance() throws IOException {
> if (!this.hasNext) return this.hasNext;
> if (this.in.available() == 0) {
>   this.hasNext = false;
>   return this.hasNext;
> }
> try {
>   this.current = parseCell();
> } catch (IOException ioEx) {
>   rethrowEofException(ioEx);
> }
> return this.hasNext;
>   }
> {code}
> which is not correct, since its use of {{InputStream.available()}} does not follow the 
> javadoc: 
> (https://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#available()).
>  DFSInputStream implements {{available()}} as the remaining bytes to read 
> from the stream, so we do not see the issue there. 
> {{CryptoInputStream.available()}} does a similar thing, but there we do see the issue. 
> So two questions: 
>  - What should be the interface for {{Decoder.parseCell()}}? Can it return null? 
>  - How to properly fix {{BaseDecoder.advance()}} to not rely on the {{available()}} 
> call. 





[jira] [Updated] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner

2015-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13082:
---
Attachment: HBASE-13082_1_WIP.patch

Parking a work-in-progress patch. Together with my previous patches this 
corrects most of the test cases; a few are still failing and I am looking into them.  
The patch introduces a state inside the StoreFile and marks the reader as 
COMPACTED or NOT_COMPACTED. Every time a scanner uses the storefile we also 
increment a ref count. Any scanner getting created checks the state, and if the 
state is COMPACTED those files are not used by the scanner; instead only the 
new file created after compaction is added. All the COMPACTED files are 
cleared by a background chore service that is instantiated per store and that 
cleans up the store files that are in COMPACTED state and have a ref count 
of 0.
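The state-plus-refcount scheme described above can be sketched roughly as follows; the enum, field, and method names here are illustrative stand-ins, not the WIP patch's actual API.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of per-storefile lifecycle state plus a scanner ref count.
class StoreFileRef {
  enum State { NOT_COMPACTED, COMPACTED }

  private volatile State state = State.NOT_COMPACTED;
  private final AtomicInteger refCount = new AtomicInteger();

  // A newly created scanner skips files that were already compacted away.
  boolean tryOpenForScan() {
    if (state == State.COMPACTED) return false;
    refCount.incrementAndGet();
    return true;
  }

  void closeScanner() {
    refCount.decrementAndGet();
  }

  // Compaction marks the old file; scanners already holding a ref keep using it.
  void markCompacted() {
    state = State.COMPACTED;
  }

  // The background chore cleans up files that are compacted and unreferenced.
  boolean eligibleForCleanup() {
    return state == State.COMPACTED && refCount.get() == 0;
  }
}
```

A real implementation would have to make the state check and the ref-count increment atomic with respect to markCompacted(); this sketch ignores that race for brevity.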


> Coarsen StoreScanner locks to RegionScanner
> ---
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: ramkrishna.s.vasudevan
> Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082_1_WIP.patch, gc.png, gc.png, 
> gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized.
> * Implementors of coprocessors need to be diligent in following the locking 
> contract. For example, Phoenix does not lock RegionScanner.nextRaw() as 
> required by the documentation (not picking on Phoenix, this one is my fault 
> as I told them it's OK).
> * Possible starvation of flushes and compactions under heavy read load: 
> RegionScanner operations would keep getting the locks, and the 
> flushes/compactions would not be able to finalize the set of files.
> I'll have a patch soon.
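The coarsened-lock idea reads roughly like the sketch below: the region-level scanner takes one lock and the store-level code runs under it with no synchronization of its own. Class and method names are invented for illustration, and the scanner "state" is a trivial counter, not real HBase code.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: one coarse lock at the region-scanner level guards all
// store-scanner state, so the store level needs no fences of its own.
class CoarseLockScanner {
  private final ReentrantLock regionLock = new ReentrantLock();
  private int position = 0;   // stand-in for mutable store-scanner state

  // RegionScanner entry point: every public call takes the coarse lock.
  int next() {
    regionLock.lock();
    try {
      return storeNext();     // store-level work relies on the caller's lock
    } finally {
      regionLock.unlock();
    }
  }

  // StoreScanner-level work: unsynchronized, safe only under regionLock.
  private int storeNext() {
    return position++;
  }
}
```

This is also where the drawback shows up: any caller that reaches the store-level internals without first taking the coarse lock silently breaks the contract.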





[jira] [Commented] (HBASE-14588) Stop accessing test resources from within src folder

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954611#comment-14954611
 ] 

Hudson commented on HBASE-14588:


FAILURE: Integrated in HBase-TRUNK #6901 (See 
[https://builds.apache.org/job/HBase-TRUNK/6901/])
HBASE-14588 Stop accessing test resources from within src folder (stack: rev 
a45cb72ef23e3d1c52091a1a6376e8a3aeceed94)
* 
hbase-server/src/test/resources/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* hbase-server/src/test/data/0016310
* hbase-server/src/test/resources/0016310
* 
hbase-server/src/test/data/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* pom.xml
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRecoveredEdits.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestReference.java


> Stop accessing test resources from within src folder
> 
>
> Key: HBASE-14588
> URL: https://issues.apache.org/jira/browse/HBASE-14588
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14588.001.patch, hbase-14588.001.patch, 
> hbase-14588.002.patch
>
>
> A few tests in hbase-server reach into the src/test/data folder to get test 
> resources, which is naughty since tests are supposed to only operate within 
> the target/ folder. It's better to put these into src/test/resources and let 
> them be automatically copied into target/ via the resources plugin, like 
> other test resources.
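After the move, a test loads its fixture from the test classpath (which Maven copies from src/test/resources into target/test-classes) rather than from a hard-coded src path. The helper below is a hypothetical illustration of that pattern, not code from the patch.

```java
import java.io.IOException;
import java.io.InputStream;

// Sketch: resolve a test fixture via the classloader instead of a src/ path.
class TestResourceLoading {
  static boolean resourceOnClasspath(String name) {
    try (InputStream in =
        TestResourceLoading.class.getClassLoader().getResourceAsStream(name)) {
      return in != null;    // null means the resource is not on the classpath
    } catch (IOException e) {
      return false;
    }
  }
}
```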





[jira] [Commented] (HBASE-14596) TestCellACLs failing... on 1.2 builds

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954609#comment-14954609
 ] 

Hudson commented on HBASE-14596:


FAILURE: Integrated in HBase-TRUNK #6901 (See 
[https://builds.apache.org/job/HBase-TRUNK/6901/])
HBASE-14596 TestCellACLs failing... on 1.2 builds; TUNEUP (stack: rev 
2428c5f46712ac026f2523bd12ad9ea88359fca6)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java


> TestCellACLs failing... on 1.2 builds
> 
>
> Key: HBASE-14596
> URL: https://issues.apache.org/jira/browse/HBASE-14596
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
> Attachments: 14596.txt
>
>
> Caught this in 1.7 builds:
> {code}
> "PriorityRpcServer.handler=4,queue=0,port=42214" daemon prio=10 
> tid=0x7f08d5786000 nid=0x2eed in Object.wait() [0x7f08918d5000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1659)
>   - locked <0x0007580e5f98> (a java.util.concurrent.atomic.AtomicLong)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1688)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
>   - locked <0x0007580f36f8> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.addUserPermission(AccessControlLists.java:176)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2175)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2172)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:425)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:205)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.grant(AccessController.java:2172)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7650)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1878)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32590)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2120)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> "PriorityRpcServer.handler=3,queue=1,port=42214" daemon prio=10 
> tid=0x7f08d5784000 nid=0x2eec in Object.wait() [0x7f08919d7000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1248)
>   - locked <0x0007cb61ecd8> (a org.apache.hadoop.hbase.ipc.Call)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> 

[jira] [Commented] (HBASE-14268) Improve KeyLocker

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954610#comment-14954610
 ] 

Hudson commented on HBASE-14268:


FAILURE: Integrated in HBase-TRUNK #6901 (See 
[https://builds.apache.org/job/HBase-TRUNK/6901/])
HBASE-14268 Improve KeyLocker (Hiroshi Ikeda) (stack: rev 
99e99f3b54bb8801565fbe2a2c071da44281868d)
* hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestKeyLocker.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/KeyLocker.java


> Improve KeyLocker
> -
>
> Key: HBASE-14268
> URL: https://issues.apache.org/jira/browse/HBASE-14268
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14268-V5.patch, HBASE-14268-V2.patch, 
> HBASE-14268-V3.patch, HBASE-14268-V4.patch, HBASE-14268-V5.patch, 
> HBASE-14268-V5.patch, HBASE-14268-V6.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268.patch, KeyLockerIncrKeysPerformance.java, 
> KeyLockerPerformance.java, ReferenceTestApp.java
>
>
> 1. In the implementation of {{KeyLocker}} it uses atomic variables inside a 
> synchronized block, which doesn't make sense. Moreover, the logic inside the 
> synchronized block is non-trivial, which hurts performance in heavily 
> multi-threaded environments.
> 2. {{KeyLocker}} gives out an instance of {{ReentrantLock}} which is already 
> locked, but that doesn't follow the contract of {{ReentrantLock}}, because you 
> are not allowed to freely invoke lock/unlock methods under that contract. 
> That introduces a potential risk: whenever you see a variable of type 
> {{ReentrantLock}}, you have to pay attention to where the instance 
> came from.
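A contract-respecting alternative is to hand out an unlocked lock and let the caller do the lock/unlock pair itself. The sketch below is illustrative; unlike HBase's actual KeyLocker it never removes idle locks from the map, trading a small leak for simplicity.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a per-key lock table that never hands out pre-locked instances.
class SimpleKeyLocker<K> {
  private final ConcurrentHashMap<K, ReentrantLock> locks = new ConcurrentHashMap<>();

  // Returns the (unlocked) lock for the key; the caller locks and unlocks it,
  // so the normal ReentrantLock contract is preserved.
  ReentrantLock getLock(K key) {
    return locks.computeIfAbsent(key, k -> new ReentrantLock());
  }
}
```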





[jira] [Commented] (HBASE-14588) Stop accessing test resources from within src folder

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954632#comment-14954632
 ] 

Hudson commented on HBASE-14588:


FAILURE: Integrated in HBase-1.3 #257 (See 
[https://builds.apache.org/job/HBase-1.3/257/])
HBASE-14588 Stop accessing test resources from within src folder (Andrew 
(stack: rev 6f282810526f88706111d21f4ec65960ce499b44)
* 
hbase-server/src/test/data/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* 
hbase-server/src/test/resources/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* pom.xml
* hbase-server/src/test/resources/0016310
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestReference.java
* hbase-server/src/test/data/0016310
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRecoveredEdits.java


> Stop accessing test resources from within src folder
> 
>
> Key: HBASE-14588
> URL: https://issues.apache.org/jira/browse/HBASE-14588
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14588.001.patch, hbase-14588.001.patch, 
> hbase-14588.002.patch
>
>
> A few tests in hbase-server reach into the src/test/data folder to get test 
> resources, which is naughty since tests are supposed to only operate within 
> the target/ folder. It's better to put these into src/test/resources and let 
> them be automatically copied into target/ via the resources plugin, like 
> other test resources.





[jira] [Commented] (HBASE-14596) TestCellACLs failing... on 1.2 builds

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954630#comment-14954630
 ] 

Hudson commented on HBASE-14596:


FAILURE: Integrated in HBase-1.3 #257 (See 
[https://builds.apache.org/job/HBase-1.3/257/])
HBASE-14596 TestCellACLs failing... on 1.2 builds; TUNEUP (stack: rev 
80a51d227e38f5276e8720fc7f2d32ed009064c2)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java


> TestCellACLs failing... on 1.2 builds
> 
>
> Key: HBASE-14596
> URL: https://issues.apache.org/jira/browse/HBASE-14596
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
> Attachments: 14596.txt
>
>
> Caught this in 1.7 builds:
> {code}
> "PriorityRpcServer.handler=4,queue=0,port=42214" daemon prio=10 
> tid=0x7f08d5786000 nid=0x2eed in Object.wait() [0x7f08918d5000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1659)
>   - locked <0x0007580e5f98> (a java.util.concurrent.atomic.AtomicLong)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1688)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
>   - locked <0x0007580f36f8> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.addUserPermission(AccessControlLists.java:176)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2175)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2172)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:425)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:205)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.grant(AccessController.java:2172)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7650)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1878)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32590)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2120)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> "PriorityRpcServer.handler=3,queue=1,port=42214" daemon prio=10 
> tid=0x7f08d5784000 nid=0x2eec in Object.wait() [0x7f08919d7000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1248)
>   - locked <0x0007cb61ecd8> (a org.apache.hadoop.hbase.ipc.Call)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> 

[jira] [Commented] (HBASE-14268) Improve KeyLocker

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954631#comment-14954631
 ] 

Hudson commented on HBASE-14268:


FAILURE: Integrated in HBase-1.3 #257 (See 
[https://builds.apache.org/job/HBase-1.3/257/])
HBASE-14268 Improve KeyLocker (Hiroshi Ikeda) (stack: rev 
98f1387f867835118cadf8b2bbc1e937a16de250)
* hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestKeyLocker.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/KeyLocker.java


> Improve KeyLocker
> -
>
> Key: HBASE-14268
> URL: https://issues.apache.org/jira/browse/HBASE-14268
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14268-V5.patch, HBASE-14268-V2.patch, 
> HBASE-14268-V3.patch, HBASE-14268-V4.patch, HBASE-14268-V5.patch, 
> HBASE-14268-V5.patch, HBASE-14268-V6.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268.patch, KeyLockerIncrKeysPerformance.java, 
> KeyLockerPerformance.java, ReferenceTestApp.java
>
>
> 1. In the implementation of {{KeyLocker}} it uses atomic variables inside a 
> synchronized block, which doesn't make sense. Moreover, the logic inside the 
> synchronized block is non-trivial, which hurts performance in heavily 
> multi-threaded environments.
> 2. {{KeyLocker}} gives out an instance of {{ReentrantLock}} which is already 
> locked, but that doesn't follow the contract of {{ReentrantLock}}, because you 
> are not allowed to freely invoke lock/unlock methods under that contract. 
> That introduces a potential risk: whenever you see a variable of type 
> {{ReentrantLock}}, you have to pay attention to where the instance 
> came from.





[jira] [Commented] (HBASE-14596) TestCellACLs failing... on 1.2 builds

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954652#comment-14954652
 ] 

Hudson commented on HBASE-14596:


FAILURE: Integrated in HBase-1.2 #245 (See 
[https://builds.apache.org/job/HBase-1.2/245/])
HBASE-14596 TestCellACLs failing... on 1.2 builds; TUNEUP (stack: rev 
c285bcd2d016358eeaf1ba83f7bfe93cfe43631e)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java


> TestCellACLs failing... on 1.2 builds
> 
>
> Key: HBASE-14596
> URL: https://issues.apache.org/jira/browse/HBASE-14596
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
> Attachments: 14596.txt
>
>
> Caught this in 1.7 builds:
> {code}
> "PriorityRpcServer.handler=4,queue=0,port=42214" daemon prio=10 
> tid=0x7f08d5786000 nid=0x2eed in Object.wait() [0x7f08918d5000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1659)
>   - locked <0x0007580e5f98> (a java.util.concurrent.atomic.AtomicLong)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1688)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
>   - locked <0x0007580f36f8> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.addUserPermission(AccessControlLists.java:176)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2175)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2172)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:425)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:205)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.grant(AccessController.java:2172)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7650)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1878)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32590)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2120)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> "PriorityRpcServer.handler=3,queue=1,port=42214" daemon prio=10 
> tid=0x7f08d5784000 nid=0x2eec in Object.wait() [0x7f08919d7000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1248)
>   - locked <0x0007cb61ecd8> (a org.apache.hadoop.hbase.ipc.Call)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> 

[jira] [Commented] (HBASE-14588) Stop accessing test resources from within src folder

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954654#comment-14954654
 ] 

Hudson commented on HBASE-14588:


FAILURE: Integrated in HBase-1.2 #245 (See 
[https://builds.apache.org/job/HBase-1.2/245/])
HBASE-14588 Stop accessing test resources from within src folder (Andrew 
(stack: rev e78a7e0806370b03413b29ac24ea81d534067bbc)
* hbase-server/src/test/data/0016310
* 
hbase-server/src/test/resources/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* hbase-server/src/test/resources/0016310
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRecoveredEdits.java
* pom.xml
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestReference.java
* 
hbase-server/src/test/data/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c


> Stop accessing test resources from within src folder
> 
>
> Key: HBASE-14588
> URL: https://issues.apache.org/jira/browse/HBASE-14588
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14588.001.patch, hbase-14588.001.patch, 
> hbase-14588.002.patch
>
>
> A few tests in hbase-server reach into the src/test/data folder to get test 
> resources, which is naughty since tests are supposed to only operate within 
> the target/ folder. It's better to put these into src/test/resources and let 
> them be automatically copied into target/ via the resources plugin, like 
> other test resources.





[jira] [Commented] (HBASE-14268) Improve KeyLocker

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954653#comment-14954653
 ] 

Hudson commented on HBASE-14268:


FAILURE: Integrated in HBase-1.2 #245 (See 
[https://builds.apache.org/job/HBase-1.2/245/])
HBASE-14268 Improve KeyLocker (Hiroshi Ikeda) (stack: rev 
0fc4614e5b830899e521b14d34db2e54126ddfd3)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/KeyLocker.java
* hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestKeyLocker.java


> Improve KeyLocker
> -
>
> Key: HBASE-14268
> URL: https://issues.apache.org/jira/browse/HBASE-14268
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14268-V5.patch, HBASE-14268-V2.patch, 
> HBASE-14268-V3.patch, HBASE-14268-V4.patch, HBASE-14268-V5.patch, 
> HBASE-14268-V5.patch, HBASE-14268-V6.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268.patch, KeyLockerIncrKeysPerformance.java, 
> KeyLockerPerformance.java, ReferenceTestApp.java
>
>
> 1. In the implementation of {{KeyLocker}} it uses atomic variables inside a 
> synchronized block, which doesn't make sense. Moreover, the logic inside the 
> synchronized block is non-trivial, which hurts performance in heavily 
> multi-threaded environments.
> 2. {{KeyLocker}} gives out an instance of {{ReentrantLock}} which is already 
> locked, but that doesn't follow the contract of {{ReentrantLock}}, because you 
> are not allowed to freely invoke lock/unlock methods under that contract. 
> That introduces a potential risk: whenever you see a variable of type 
> {{ReentrantLock}}, you have to pay attention to where the instance 
> came from.





[jira] [Commented] (HBASE-14501) NPE in replication with TDE

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954589#comment-14954589
 ] 

Hudson commented on HBASE-14501:


FAILURE: Integrated in HBase-1.3 #256 (See 
[https://builds.apache.org/job/HBase-1.3/256/])
HBASE-14501 NPE in replication with TDE (enis: rev 
52cfbf7ef05150431d00ee127eb45b545c28)
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java


> NPE in replication with TDE
> ---
>
> Key: HBASE-14501
> URL: https://issues.apache.org/jira/browse/HBASE-14501
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: hbase-14501_v1.patch
>
>
> We are seeing an NPE when replication (or in this case async WAL replay for 
> region replicas) is run on top of an HDFS cluster with TDE configured.
> This is the stack trace:
> {code}
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.CellUtil.matchingRow(CellUtil.java:370)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.countDistinctRowKeys(ReplicationSource.java:649)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.readAllEntriesToReplicateOrNextFile(ReplicationSource.java:450)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:346)
> {code}
> This stack trace can only happen if WALEdit.getCells() returns an array 
> containing null entries. I believe this happens because 
> {{KeyValueCodec.parseCell()}} uses {{KeyValueUtil.iscreate()}}, which returns 
> null on EOF at the beginning. However, the contract for 
> Decoder.parseCell() does not make clear whether returning null is acceptable. 
> The other decoders (CompressedKvDecoder, CellCodec, etc.) do not return null, 
> while KeyValueCodec does. 
> BaseDecoder has this code: 
> {code}
>   public boolean advance() throws IOException {
> if (!this.hasNext) return this.hasNext;
> if (this.in.available() == 0) {
>   this.hasNext = false;
>   return this.hasNext;
> }
> try {
>   this.current = parseCell();
> } catch (IOException ioEx) {
>   rethrowEofException(ioEx);
> }
> return this.hasNext;
>   }
> {code}
> which is not correct, since it uses {{InputStream.available()}} in a way its 
> javadoc does not support 
> (https://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#available()).
>  DFSInputStream implements {{available()}} as the remaining bytes to read 
> from the stream, so we do not see the issue there. 
> {{CryptoInputStream.available()}} does a similar thing, but there we do see 
> the issue. 
> So two questions: 
>  - What should the contract for Decoder.parseCell() be? Can it return null? 
>  - How do we properly fix BaseDecoder.advance() so it does not rely on the 
> {{available()}} call? 





[jira] [Commented] (HBASE-13819) Make RPC layer CellBlock buffer a DirectByteBuffer

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954605#comment-14954605
 ] 

Hudson commented on HBASE-13819:


FAILURE: Integrated in HBase-TRUNK #6900 (See 
[https://builds.apache.org/job/HBase-TRUNK/6900/])
HBASE-13819 Make RPC layer CellBlock buffer a DirectByteBuffer. (anoopsamjohn: 
rev 6143b7694cc02e905b931de86462c6125ca8b3b6)
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/BoundedByteBufferPool.java


> Make RPC layer CellBlock buffer a DirectByteBuffer
> --
>
> Key: HBASE-13819
> URL: https://issues.apache.org/jira/browse/HBASE-13819
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13819.patch, HBASE-13819_branch-1.patch, 
> HBASE-13819_branch-1.patch, HBASE-13819_branch-1.patch
>
>
> In the RPC layer, when we make a cellBlock to put as the RPC payload, we make 
> an on-heap byte buffer (via BoundedByteBufferPool). The pool keeps up to a 
> certain number of buffers. This jira aims at testing the possibility of making 
> these buffers off-heap ones (DBB). The advantages:
> 1. Unsafe-based writes to off-heap memory are faster than those to on-heap 
> memory. Currently we are not using unsafe-based writes at all; even if we add 
> them, DBB will be better.
> 2. When Cells are backed by off-heap memory (HBASE-11425), off-heap to 
> off-heap writes will be better.
> 3. Looking at the SocketChannel impl, if we pass a HeapByteBuffer to the 
> socket channel, it creates a temp DBB, copies the data there, and only DBBs 
> are moved to sockets. If we make a DBB first-hand, we can avoid this extra 
> level of copying.
> Will do different perf testing with the change and report back.
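The copy described in point 3 can be sketched with plain NIO (the class and method names below are this sketch's assumptions, not HBase code): building the payload straight into a direct buffer leaves nothing for the JDK socket path to re-copy.

```java
import java.nio.ByteBuffer;

// Illustrative sketch: a direct buffer handed to a SocketChannel is written
// as-is, whereas a heap buffer is first copied into a temporary direct buffer
// inside the JDK's socket write path.
class CellBlockBuffers {
    // Build the payload straight into a direct buffer, ready for a channel write.
    static ByteBuffer directPayload(byte[] cellBlock) {
        ByteBuffer buf = ByteBuffer.allocateDirect(cellBlock.length);
        buf.put(cellBlock);
        buf.flip();               // switch from filling to draining
        return buf;
    }
}
```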



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14501) NPE in replication with TDE

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954606#comment-14954606
 ] 

Hudson commented on HBASE-14501:


FAILURE: Integrated in HBase-TRUNK #6900 (See 
[https://builds.apache.org/job/HBase-TRUNK/6900/])
HBASE-14501 NPE in replication with TDE (enis: rev 
2ff6d0fe4789857ab51685949711d755dedd459a)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java


> NPE in replication with TDE
> ---
>
> Key: HBASE-14501
> URL: https://issues.apache.org/jira/browse/HBASE-14501
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: hbase-14501_v1.patch
>
>
> We are seeing a NPE when replication (or in this case async wal replay for 
> region replicas) is run on top of an HDFS cluster with TDE configured.
> This is the stack trace:
> {code}
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.CellUtil.matchingRow(CellUtil.java:370)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.countDistinctRowKeys(ReplicationSource.java:649)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.readAllEntriesToReplicateOrNextFile(ReplicationSource.java:450)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:346)
> {code}
> This stack trace can only happen if WALEdit.getCells() returns an array 
> containing null entries. I believe this happens because 
> {{KeyValueCodec.parseCell()}} uses {{KeyValueUtil.iscreate()}}, which returns 
> null in case of EOF at the beginning. However, the contract of 
> Decoder.parseCell() is not clear on whether returning null is acceptable. 
> The other Decoders (CompressedKvDecoder, CellCodec, etc) do not return null, 
> while KeyValueCodec does. 
> BaseDecoder has this code: 
> {code}
>   public boolean advance() throws IOException {
> if (!this.hasNext) return this.hasNext;
> if (this.in.available() == 0) {
>   this.hasNext = false;
>   return this.hasNext;
> }
> try {
>   this.current = parseCell();
> } catch (IOException ioEx) {
>   rethrowEofException(ioEx);
> }
> return this.hasNext;
>   }
> {code}
> which is not correct, since it uses {{IS.available()}} in a way that the 
> javadoc does not guarantee: 
> (https://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#available()).
> DFSInputStream implements {{available()}} as the remaining bytes to read 
> from the stream, so we do not see the issue there. 
> {{CryptoInputStream.available()}} does a similar thing, but there we do see 
> the issue. 
> So, two questions: 
>  - What should be the interface of Decoder.parseCell()? Can it return null? 
>  - How to properly fix BaseDecoder.advance() to not rely on the 
> {{available()}} call? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14529) Respond to SIGHUP to reload config

2015-10-13 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954634#comment-14954634
 ] 

Ashish Singhi commented on HBASE-14529:
---

[~eclark], with this change I am not able to run UTs on Windows. It fails to 
start the HBase processes. 
Can we avoid registering the HUP signal handler if the OS is Windows?

> Respond to SIGHUP to reload config
> --
>
> Key: HBASE-14529
> URL: https://issues.apache.org/jira/browse/HBASE-14529
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14529-v1.patch, HBASE-14529-v2.patch, 
> HBASE-14529.patch
>
>
> SIGHUP is the way everyone since the dawn of Unix has done config reload.
> Let's not be a special unique snowflake.
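A hedged sketch of the registration described above, using the JDK's sun.misc.Signal API (the class and method names below are this sketch's, not the patch's). It also skips registration on Windows, where SIGHUP does not exist, as raised earlier in this thread.

```java
import sun.misc.Signal;

// Illustrative sketch: register a SIGHUP handler that triggers a config
// reload. SIGHUP is not available on Windows, so registration is skipped
// there and the caller is told nothing was installed.
public class HupReload {
    public static boolean registerIfSupported(Runnable reloadConfig) {
        if (System.getProperty("os.name").toLowerCase().contains("windows")) {
            return false;  // no HUP signal on Windows
        }
        // SignalHandler is a functional interface, so a lambda works here.
        Signal.handle(new Signal("HUP"), signal -> reloadConfig.run());
        return true;
    }
}
```

sun.misc.Signal is an unsupported JDK API, which is part of why such handlers need an OS guard rather than failing at startup.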



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14596) TestCellACLs failing... on1.2 builds

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954657#comment-14954657
 ] 

Hudson commented on HBASE-14596:


FAILURE: Integrated in HBase-1.2-IT #204 (See 
[https://builds.apache.org/job/HBase-1.2-IT/204/])
HBASE-14596 TestCellACLs failing... on1.2 builds; TUNEUP (stack: rev 
c285bcd2d016358eeaf1ba83f7bfe93cfe43631e)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java


> TestCellACLs failing... on1.2 builds
> 
>
> Key: HBASE-14596
> URL: https://issues.apache.org/jira/browse/HBASE-14596
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
> Attachments: 14596.txt
>
>
> Caught this in 1.7 builds:
> {code}
> "PriorityRpcServer.handler=4,queue=0,port=42214" daemon prio=10 
> tid=0x7f08d5786000 nid=0x2eed in Object.wait() [0x7f08918d5000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1659)
>   - locked <0x0007580e5f98> (a java.util.concurrent.atomic.AtomicLong)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1688)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
>   - locked <0x0007580f36f8> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.addUserPermission(AccessControlLists.java:176)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2175)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController$8.run(AccessController.java:2172)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:425)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:205)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.grant(AccessController.java:2172)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7650)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1878)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32590)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2120)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> "PriorityRpcServer.handler=3,queue=1,port=42214" daemon prio=10 
> tid=0x7f08d5784000 nid=0x2eec in Object.wait() [0x7f08919d7000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1248)
>   - locked <0x0007cb61ecd8> (a org.apache.hadoop.hbase.ipc.Call)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> 

[jira] [Commented] (HBASE-14588) Stop accessing test resources from within src folder

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954659#comment-14954659
 ] 

Hudson commented on HBASE-14588:


FAILURE: Integrated in HBase-1.2-IT #204 (See 
[https://builds.apache.org/job/HBase-1.2-IT/204/])
HBASE-14588 Stop accessing test resources from within src folder (stack: rev 
e78a7e0806370b03413b29ac24ea81d534067bbc)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRecoveredEdits.java
* hbase-server/src/test/data/0016310
* 
hbase-server/src/test/data/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* hbase-server/src/test/resources/0016310
* pom.xml
* 
hbase-server/src/test/resources/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestReference.java


> Stop accessing test resources from within src folder
> 
>
> Key: HBASE-14588
> URL: https://issues.apache.org/jira/browse/HBASE-14588
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14588.001.patch, hbase-14588.001.patch, 
> hbase-14588.002.patch
>
>
> A few tests in hbase-server reach into the src/test/data folder to get test 
> resources, which is naughty since tests are supposed to only operate within 
> the target/ folder. It's better to put these into src/test/resources and let 
> them be automatically copied into target/ via the resources plugin, like 
> other test resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14268) Improve KeyLocker

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954658#comment-14954658
 ] 

Hudson commented on HBASE-14268:


FAILURE: Integrated in HBase-1.2-IT #204 (See 
[https://builds.apache.org/job/HBase-1.2-IT/204/])
HBASE-14268 Improve KeyLocker (Hiroshi Ikeda) (stack: rev 
0fc4614e5b830899e521b14d34db2e54126ddfd3)
* hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestKeyLocker.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/KeyLocker.java


> Improve KeyLocker
> -
>
> Key: HBASE-14268
> URL: https://issues.apache.org/jira/browse/HBASE-14268
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14268-V5.patch, HBASE-14268-V2.patch, 
> HBASE-14268-V3.patch, HBASE-14268-V4.patch, HBASE-14268-V5.patch, 
> HBASE-14268-V5.patch, HBASE-14268-V6.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268.patch, KeyLockerIncrKeysPerformance.java, 
> KeyLockerPerformance.java, ReferenceTestApp.java
>
>
> 1. The implementation of {{KeyLocker}} uses atomic variables inside a 
> synchronized block, which doesn't make sense. Moreover, the logic inside the 
> synchronized block is not trivial, which hurts performance in heavily 
> multi-threaded environments.
> 2. {{KeyLocker}} hands out an instance of {{ReentrantLock}} which is already 
> locked, but this doesn't follow the contract of {{ReentrantLock}}, because 
> you are not allowed to freely invoke its lock/unlock methods under that 
> contract. That introduces a potential risk: whenever you see a variable of 
> type {{ReentrantLock}}, you have to pay attention to where the included 
> instance came from.
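One way to avoid exposing a pre-locked ReentrantLock at all, sketched with plain java.util.concurrent (this is an illustrative simplification, not HBase's KeyLocker, and unlike the pooled original it never evicts unused locks): return an AutoCloseable handle whose only legal operation is release.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: the ReentrantLock stays internal; callers get an
// AutoCloseable handle, so the lock/unlock contract cannot be violated from
// outside (no way to re-lock or unlock twice through the handle type).
class SimpleKeyLocker<K> {
    private final ConcurrentHashMap<K, ReentrantLock> locks = new ConcurrentHashMap<>();

    public AutoCloseable acquire(K key) {
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        return lock::unlock;   // caller can only release, not re-lock
    }
}
```

With try-with-resources, release then becomes automatic and exception-safe at the call site.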



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14594) Use new DNS API introduced in HADOOP-12437

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954532#comment-14954532
 ] 

Hadoop QA commented on HBASE-14594:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12766257/HBASE-14594.001.patch
  against master branch at commit 2ff6d0fe4789857ab51685949711d755dedd459a.
  ATTACHMENT ID: 12766257

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1755 checkstyle errors (more than the master's current 1754 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd.testEndToEnd(TestFuzzyRowFilterEndToEnd.java:143)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15980//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15980//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15980//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15980//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15980//console

This message is automatically generated.

> Use new DNS API introduced in HADOOP-12437
> --
>
> Key: HBASE-14594
> URL: https://issues.apache.org/jira/browse/HBASE-14594
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.2.1, 1.0.3, 1.1.3
>
> Attachments: HBASE-14594.001.patch
>
>
> HADOOP-12437 introduced a new API to {{org.apache.hadoop.net.DNS}}: 
> {{getDefaultHost(String, String, boolean)}}.
> The purpose of this method (the boolean argument, really) is to change the 
> functionality so that when rDNS fails, {{InetAddress#getCanonicalHostName()}} 
> is consulted, which includes resolution via the hosts file.
> The direct application of this new method is relevant on hosts with multiple 
> NICs and Kerberos enabled.
> Sadly, this method only exists in 2.8.0-SNAPSHOT, so to benefit from the fix 
> without great pain, some reflection is required.
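The reflection fallback mentioned above can be sketched as follows. The two stub classes stand in for old and new Hadoop versions, since the real org.apache.hadoop.net.DNS is not available here; all names in this sketch are illustrative.

```java
import java.lang.reflect.Method;

// Stand-in for pre-HADOOP-12437 Hadoop: only the two-argument method exists.
class OldDns {
    public static String getDefaultHost(String itf, String ns) { return "old:" + itf; }
}

// Stand-in for 2.8.0+ Hadoop: the three-argument method is available.
class NewDns {
    public static String getDefaultHost(String itf, String ns, boolean useHosts) { return "new:" + itf; }
}

// Reflectively prefer the new three-argument API, falling back to the old one
// when the running version does not have it.
class DnsCompat {
    static String getDefaultHost(Class<?> dns, String itf, String ns) throws Exception {
        try {
            Method m = dns.getMethod("getDefaultHost", String.class, String.class, boolean.class);
            return (String) m.invoke(null, itf, ns, true);   // new API present
        } catch (NoSuchMethodException e) {
            Method m = dns.getMethod("getDefaultHost", String.class, String.class);
            return (String) m.invoke(null, itf, ns);         // old API fallback
        }
    }
}
```

The lookup cost can be paid once at startup by caching the resolved Method instead of probing on every call.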



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14580) Make the HBaseMiniCluster compliant with Kerberos

2015-10-13 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954583#comment-14954583
 ] 

Nicolas Liochon commented on HBASE-14580:
-

This second run makes more sense :-). I'm going to commit on the master branch. 
 [~ndimiduk], you may want this for the 1.2 branch?

> Make the HBaseMiniCluster compliant with Kerberos
> -
>
> Key: HBASE-14580
> URL: https://issues.apache.org/jira/browse/HBASE-14580
> Project: HBase
>  Issue Type: Improvement
>  Components: security, test
>Affects Versions: 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 2.0.0
>
> Attachments: hbase-14580.v2.patch, hbase-14580.v2.patch, 
> patch-14580.v1.patch
>
>
> When using MiniKDC and the minicluster in a unit test, there is a conflict 
> caused by HBaseTestingUtility:
> {code}
>   public static User getDifferentUser(final Configuration c,
> final String differentiatingSuffix)
>   throws IOException {
>// snip
> String username = User.getCurrent().getName() +
>   differentiatingSuffix; < problem here
> User user = User.createUserForTesting(c, username,
> new String[]{"supergroup"});
> return user;
>   }
> {code}
> This creates users like securedUser/localh...@example.com.hfs.0, and this 
> does not work.
> My fix is to return the current user when Kerberos is set. I don't think that 
> there is another option (any other opinion?). However this user is not in a 
> group so we have logs like 'WARN  [IPC Server handler 9 on 61366] 
> security.UserGroupInformation (UserGroupInformation.java:getGroupNames(1521)) 
> - No groups available for user securedUser' I'm not sure of its impact. 
> [~apurtell], what do you think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14268) Improve KeyLocker

2015-10-13 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954690#comment-14954690
 ] 

Nicolas Liochon commented on HBASE-14268:
-

I just saw that, I'm having a look.

> Improve KeyLocker
> -
>
> Key: HBASE-14268
> URL: https://issues.apache.org/jira/browse/HBASE-14268
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14268-V5.patch, HBASE-14268-V2.patch, 
> HBASE-14268-V3.patch, HBASE-14268-V4.patch, HBASE-14268-V5.patch, 
> HBASE-14268-V5.patch, HBASE-14268-V6.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268.patch, KeyLockerIncrKeysPerformance.java, 
> KeyLockerPerformance.java, ReferenceTestApp.java
>
>
> 1. The implementation of {{KeyLocker}} uses atomic variables inside a 
> synchronized block, which doesn't make sense. Moreover, the logic inside the 
> synchronized block is not trivial, which hurts performance in heavily 
> multi-threaded environments.
> 2. {{KeyLocker}} hands out an instance of {{ReentrantLock}} which is already 
> locked, but this doesn't follow the contract of {{ReentrantLock}}, because 
> you are not allowed to freely invoke its lock/unlock methods under that 
> contract. That introduces a potential risk: whenever you see a variable of 
> type {{ReentrantLock}}, you have to pay attention to where the included 
> instance came from.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14268) Improve KeyLocker

2015-10-13 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954701#comment-14954701
 ] 

Nicolas Liochon commented on HBASE-14268:
-

[~sreenivasulureddy] It should be ok now. I added the two missing files.

> Improve KeyLocker
> -
>
> Key: HBASE-14268
> URL: https://issues.apache.org/jira/browse/HBASE-14268
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14268-V5.patch, HBASE-14268-V2.patch, 
> HBASE-14268-V3.patch, HBASE-14268-V4.patch, HBASE-14268-V5.patch, 
> HBASE-14268-V5.patch, HBASE-14268-V6.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268.patch, KeyLockerIncrKeysPerformance.java, 
> KeyLockerPerformance.java, ReferenceTestApp.java
>
>
> 1. The implementation of {{KeyLocker}} uses atomic variables inside a 
> synchronized block, which doesn't make sense. Moreover, the logic inside the 
> synchronized block is not trivial, which hurts performance in heavily 
> multi-threaded environments.
> 2. {{KeyLocker}} hands out an instance of {{ReentrantLock}} which is already 
> locked, but this doesn't follow the contract of {{ReentrantLock}}, because 
> you are not allowed to freely invoke its lock/unlock methods under that 
> contract. That introduces a potential risk: whenever you see a variable of 
> type {{ReentrantLock}}, you have to pay attention to where the included 
> instance came from.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14463) Severe performance downgrade when parallel reading a single key from BucketCache

2015-10-13 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-14463:
--
Attachment: HBASE-14463_v11.patch

Latest patch making use of the WeakObjectPool introduced by HBASE-14268. Since 
HBASE-14268 is committed, let's see what HadoopQA says about this patch.

> Severe performance downgrade when parallel reading a single key from 
> BucketCache
> 
>
> Key: HBASE-14463
> URL: https://issues.apache.org/jira/browse/HBASE-14463
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14, 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14463.patch, HBASE-14463_v11.patch, 
> HBASE-14463_v2.patch, HBASE-14463_v3.patch, HBASE-14463_v4.patch, 
> HBASE-14463_v5.patch, TestBucketCache-new_with_IdLock.png, 
> TestBucketCache-new_with_IdReadWriteLock.png, 
> TestBucketCache_with_IdLock.png, 
> TestBucketCache_with_IdReadWriteLock-resolveLockLeak.png, 
> TestBucketCache_with_IdReadWriteLock.png
>
>
> We store feature data of online items in HBase, do machine learning on these 
> features, and supply the outputs to our online search engine. In such a 
> scenario we launch hundreds of yarn workers, and each worker reads all the 
> features of one item (i.e. a single rowkey in HBase), so there'll be heavy 
> parallel reading on a single rowkey.
> We were using LruCache but recently started trying BucketCache to resolve a 
> GC issue, and just as titled we have observed a severe performance downgrade. 
> After some analysis we found the root cause is the lock in 
> BucketCache#getBlock, as shown below
> {code}
>   try {
> lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
> // ...
> if (bucketEntry.equals(backingMap.get(key))) {
>   // ...
>   int len = bucketEntry.getLength();
>   Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>   bucketEntry.deserializerReference(this.deserialiserMap));
> {code}
> Since ioEngine.read involves an array copy, it's much more time-consuming 
> than the corresponding operation in LruCache. And since we're using 
> synchronized in IdLock#getLockEntry, parallel reads hitting the same bucket 
> are executed serially, which causes really bad performance.
> To resolve the problem, we propose to use a ReentrantReadWriteLock in 
> BucketCache, and introduce a new class called IdReadWriteLock to implement it.
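A minimal sketch of the proposed direction (the class below is illustrative, not the actual IdReadWriteLock): mapping each id to a ReentrantReadWriteLock lets concurrent readers of the same offset proceed in parallel, while an exclusive writer (e.g. eviction) takes the write lock.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: per-id read/write locks so that many readers of the
// same block offset are not serialized against each other. Unlike a pooled
// production version, this sketch never evicts locks for unused ids.
class IdReadWriteLockSketch {
    private final ConcurrentHashMap<Long, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

    // The same id always yields the same lock instance.
    ReentrantReadWriteLock getLock(long id) {
        return locks.computeIfAbsent(id, k -> new ReentrantReadWriteLock());
    }
}
```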



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14268) Improve KeyLocker

2015-10-13 Thread Y. SREENIVASULU REDDY (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954776#comment-14954776
 ] 

Y. SREENIVASULU REDDY commented on HBASE-14268:
---

[~nkeywal] after adding the missing files, compilation is successful. 
Thanks.

> Improve KeyLocker
> -
>
> Key: HBASE-14268
> URL: https://issues.apache.org/jira/browse/HBASE-14268
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14268-V5.patch, HBASE-14268-V2.patch, 
> HBASE-14268-V3.patch, HBASE-14268-V4.patch, HBASE-14268-V5.patch, 
> HBASE-14268-V5.patch, HBASE-14268-V6.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268.patch, KeyLockerIncrKeysPerformance.java, 
> KeyLockerPerformance.java, ReferenceTestApp.java
>
>
> 1. The implementation of {{KeyLocker}} uses atomic variables inside a 
> synchronized block, which doesn't make sense. Moreover, the logic inside the 
> synchronized block is not trivial, which hurts performance in heavily 
> multi-threaded environments.
> 2. {{KeyLocker}} hands out an instance of {{ReentrantLock}} which is already 
> locked, but this doesn't follow the contract of {{ReentrantLock}}, because 
> you are not allowed to freely invoke its lock/unlock methods under that 
> contract. That introduces a potential risk: whenever you see a variable of 
> type {{ReentrantLock}}, you have to pay attention to where the included 
> instance came from.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14501) NPE in replication with TDE

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954780#comment-14954780
 ] 

Hudson commented on HBASE-14501:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1105 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1105/])
HBASE-14501 NPE in replication with TDE (enis: rev 
c6608e652a118b1c5dc683e5e5b9694724e1d8f5)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java


> NPE in replication with TDE
> ---
>
> Key: HBASE-14501
> URL: https://issues.apache.org/jira/browse/HBASE-14501
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
> Attachments: hbase-14501_v1.patch
>
>
> We are seeing a NPE when replication (or in this case async wal replay for 
> region replicas) is run on top of an HDFS cluster with TDE configured.
> This is the stack trace:
> {code}
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.CellUtil.matchingRow(CellUtil.java:370)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.countDistinctRowKeys(ReplicationSource.java:649)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.readAllEntriesToReplicateOrNextFile(ReplicationSource.java:450)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:346)
> {code}
> This stack trace can only happen if WALEdit.getCells() returns an array 
> containing null entries. I believe this happens because 
> {{KeyValueCodec.parseCell()}} uses {{KeyValueUtil.iscreate()}}, which returns 
> null in case of EOF at the beginning. However, the contract of 
> Decoder.parseCell() is not clear on whether returning null is acceptable. 
> The other Decoders (CompressedKvDecoder, CellCodec, etc) do not return null, 
> while KeyValueCodec does. 
> BaseDecoder has this code: 
> {code}
>   public boolean advance() throws IOException {
> if (!this.hasNext) return this.hasNext;
> if (this.in.available() == 0) {
>   this.hasNext = false;
>   return this.hasNext;
> }
> try {
>   this.current = parseCell();
> } catch (IOException ioEx) {
>   rethrowEofException(ioEx);
> }
> return this.hasNext;
>   }
> {code}
> which is not correct, since it uses {{IS.available()}} in a way that the 
> javadoc does not guarantee: 
> (https://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#available()).
> DFSInputStream implements {{available()}} as the remaining bytes to read 
> from the stream, so we do not see the issue there. 
> {{CryptoInputStream.available()}} does a similar thing, but there we do see 
> the issue. 
> So, two questions: 
>  - What should be the interface of Decoder.parseCell()? Can it return null? 
>  - How to properly fix BaseDecoder.advance() to not rely on the 
> {{available()}} call? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14580) Make the HBaseMiniCluster compliant with Kerberos

2015-10-13 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-14580:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed on master only

> Make the HBaseMiniCluster compliant with Kerberos
> -
>
> Key: HBASE-14580
> URL: https://issues.apache.org/jira/browse/HBASE-14580
> Project: HBase
>  Issue Type: Improvement
>  Components: security, test
>Affects Versions: 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 2.0.0
>
> Attachments: hbase-14580.v2.patch, hbase-14580.v2.patch, 
> patch-14580.v1.patch
>
>
> When using MiniKDC and the minicluster in a unit test, there is a conflict 
> caused by HBaseTestingUtility:
> {code}
>   public static User getDifferentUser(final Configuration c,
> final String differentiatingSuffix)
>   throws IOException {
>// snip
> String username = User.getCurrent().getName() +
>   differentiatingSuffix; <-- problem here
> User user = User.createUserForTesting(c, username,
> new String[]{"supergroup"});
> return user;
>   }
> {code}
> This creates users like securedUser/localh...@example.com.hfs.0, and this 
> does not work.
> My fix is to return the current user when Kerberos is set. I don't think that 
> there is another option (any other opinion?). However this user is not in a 
> group so we have logs like 'WARN  [IPC Server handler 9 on 61366] 
> security.UserGroupInformation (UserGroupInformation.java:getGroupNames(1521)) 
> - No groups available for user securedUser' I'm not sure of its impact. 
> [~apurtell], what do you think?
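The proposed fix boils down to: with Kerberos, reuse the current principal unchanged instead of appending a suffix that yields an invalid principal name. An illustrative sketch of just that decision (the method and flag names here are placeholders, not the HBaseTestingUtility API):

```java
// Hypothetical sketch of the user-name logic described above: appending
// ".hfs.0" to a Kerberos principal like "user/host@REALM" produces an
// invalid principal, so when Kerberos is enabled the current user is
// returned as-is.
public class TestUserNames {
  static String testUserName(String currentUser, String suffix,
                             boolean kerberosEnabled) {
    if (kerberosEnabled) {
      // Keep the existing principal unchanged.
      return currentUser;
    }
    // Insecure clusters can safely differentiate test users by suffix.
    return currentUser + suffix;
  }

  public static void main(String[] args) {
    System.out.println(testUserName("alice", ".hfs.0", false));          // alice.hfs.0
    System.out.println(testUserName("alice/host@REALM", ".hfs.0", true)); // alice/host@REALM
  }
}
```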



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14557) MapReduce WALPlayer issue with NoTagsKeyValue

2015-10-13 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14557:
---
Attachment: (was: HBASE-14557_branch-1.patch)

> MapReduce WALPlayer issue with NoTagsKeyValue
> -
>
> Key: HBASE-14557
> URL: https://issues.apache.org/jira/browse/HBASE-14557
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14557.patch, HBASE-14557.patch, 
> HBASE-14557_V2.patch, HBASE-14557_branch-1.2.patch, HBASE-14557_branch-1.patch
>
>
> Running MapReduce WALPlayer to convert WAL into HFiles:
> {noformat}
> 15/10/05 20:28:08 INFO mapred.JobClient: Task Id : 
> attempt_201508031611_0029_m_00_0, Status : FAILED
> java.io.IOException: Type mismatch in value from map: expected 
> org.apache.hadoop.hbase.KeyValue, recieved 
> org.apache.hadoop.hbase.NoTagsKeyValue
> at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:997)
> at 
> org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:689)
> at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
> at 
> org.apache.hadoop.hbase.mapreduce.WALPlayer$WALKeyValueMapper.map(WALPlayer.java:111)
> at 
> org.apache.hadoop.hbase.mapreduce.WALPlayer$WALKeyValueMapper.map(WALPlayer.java:96)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:751)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:368)
> at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:369)
> at javax.security.auth.Subject.doAs(Subject.java:572)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1502)
> at org.apache.hadoop.mapred.Child.main(Child.java:249)
> {noformat}
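The failure happens because the map-output collector compares the value's runtime class for exact equality with the declared map output class, so a subclass such as NoTagsKeyValue is rejected even though it is assignable to KeyValue. A minimal self-contained reproduction of that exact-class check (stand-in classes, not the Hadoop or HBase code):

```java
// Reproduces the "Type mismatch in value from map" behavior: the check is
// an exact class comparison, not an instanceof test, so subclasses fail.
public class ExactClassCheck {
  static class KeyValue {}
  static class NoTagsKeyValue extends KeyValue {}

  static void collect(Class<?> declared, Object value) {
    if (value.getClass() != declared) {   // exact match, not instanceof
      throw new RuntimeException("Type mismatch in value from map: expected "
          + declared.getName() + ", received " + value.getClass().getName());
    }
  }

  public static void main(String[] args) {
    collect(KeyValue.class, new KeyValue());           // accepted
    try {
      collect(KeyValue.class, new NoTagsKeyValue());   // rejected, like WALPlayer
      throw new AssertionError("should have thrown");
    } catch (RuntimeException expected) {
      System.out.println(expected.getMessage());
    }
  }
}
```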



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14557) MapReduce WALPlayer issue with NoTagsKeyValue

2015-10-13 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14557:
---
Attachment: HBASE-14557_branch-1.patch

> MapReduce WALPlayer issue with NoTagsKeyValue
> -
>
> Key: HBASE-14557
> URL: https://issues.apache.org/jira/browse/HBASE-14557
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14557.patch, HBASE-14557.patch, 
> HBASE-14557_V2.patch, HBASE-14557_branch-1.2.patch, 
> HBASE-14557_branch-1.patch, HBASE-14557_branch-1.patch
>
>
> Running MapReduce WALPlayer to convert WAL into HFiles:
> {noformat}
> 15/10/05 20:28:08 INFO mapred.JobClient: Task Id : 
> attempt_201508031611_0029_m_00_0, Status : FAILED
> java.io.IOException: Type mismatch in value from map: expected 
> org.apache.hadoop.hbase.KeyValue, recieved 
> org.apache.hadoop.hbase.NoTagsKeyValue
> at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:997)
> at 
> org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:689)
> at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
> at 
> org.apache.hadoop.hbase.mapreduce.WALPlayer$WALKeyValueMapper.map(WALPlayer.java:111)
> at 
> org.apache.hadoop.hbase.mapreduce.WALPlayer$WALKeyValueMapper.map(WALPlayer.java:96)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:751)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:368)
> at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:369)
> at javax.security.auth.Subject.doAs(Subject.java:572)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1502)
> at org.apache.hadoop.mapred.Child.main(Child.java:249)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14268) Improve KeyLocker

2015-10-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954967#comment-14954967
 ] 

stack commented on HBASE-14268:
---

Thanks @nkeywal and [~sreenivasulureddy] for fixing my mess-up.

> Improve KeyLocker
> -
>
> Key: HBASE-14268
> URL: https://issues.apache.org/jira/browse/HBASE-14268
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14268-V5.patch, HBASE-14268-V2.patch, 
> HBASE-14268-V3.patch, HBASE-14268-V4.patch, HBASE-14268-V5.patch, 
> HBASE-14268-V5.patch, HBASE-14268-V6.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268-V7.patch, HBASE-14268-V7.patch, HBASE-14268-V7.patch, 
> HBASE-14268.patch, KeyLockerIncrKeysPerformance.java, 
> KeyLockerPerformance.java, ReferenceTestApp.java
>
>
> 1. In the implementation of {{KeyLocker}} it uses atomic variables inside a 
> synchronized block, which doesn't make sense. Moreover, the logic inside the 
> synchronized block is not trivial, which hurts performance in heavily 
> multi-threaded environments.
> 2. {{KeyLocker}} gives out an instance of {{ReentrantLock}} which is already 
> locked, but that doesn't follow the contract of {{ReentrantLock}}, because you 
> are not allowed to freely invoke lock/unlock methods under that contract. 
> That introduces a potential risk: whenever you see a variable of type 
> {{ReentrantLock}}, you have to pay attention to where the included instance is 
> coming from.
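Point 2 suggests handing out locks unlocked so callers use the normal lock()/unlock() contract themselves. A minimal sketch of that shape (not the actual HBase KeyLocker; a production version would also need to discard unused locks, which is omitted here):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: one lock per key from a lock-free map, returned UNLOCKED so the
// caller follows the standard ReentrantLock contract.
public class SimpleKeyLocker<K> {
  private final ConcurrentHashMap<K, ReentrantLock> locks =
      new ConcurrentHashMap<>();

  /** Returns the (unlocked) lock for the key; the caller locks/unlocks it. */
  public ReentrantLock getLock(K key) {
    return locks.computeIfAbsent(key, k -> new ReentrantLock());
  }

  public static void main(String[] args) {
    SimpleKeyLocker<String> locker = new SimpleKeyLocker<>();
    ReentrantLock a = locker.getLock("row-1");
    ReentrantLock b = locker.getLock("row-1");
    if (a != b) throw new AssertionError("same key must map to same lock");
    a.lock();
    try {
      // critical section for "row-1"
    } finally {
      a.unlock();
    }
    System.out.println("ok"); // prints ok
  }
}
```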



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14580) Make the HBaseMiniCluster compliant with Kerberos

2015-10-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954969#comment-14954969
 ] 

stack commented on HBASE-14580:
---

[~nkeywal] FYI, it's [~busbey] who is RM'ing 1.2. (You want this in 1.2, Sean?)

> Make the HBaseMiniCluster compliant with Kerberos
> -
>
> Key: HBASE-14580
> URL: https://issues.apache.org/jira/browse/HBASE-14580
> Project: HBase
>  Issue Type: Improvement
>  Components: security, test
>Affects Versions: 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 2.0.0
>
> Attachments: hbase-14580.v2.patch, hbase-14580.v2.patch, 
> patch-14580.v1.patch
>
>
> When using MiniKDC and the minicluster in a unit test, there is a conflict 
> caused by HBaseTestingUtility:
> {code}
>   public static User getDifferentUser(final Configuration c,
> final String differentiatingSuffix)
>   throws IOException {
>// snip
> String username = User.getCurrent().getName() +
>   differentiatingSuffix; <-- problem here
> User user = User.createUserForTesting(c, username,
> new String[]{"supergroup"});
> return user;
>   }
> {code}
> This creates users like securedUser/localh...@example.com.hfs.0, and this 
> does not work.
> My fix is to return the current user when Kerberos is set. I don't think that 
> there is another option (any other opinion?). However this user is not in a 
> group so we have logs like 'WARN  [IPC Server handler 9 on 61366] 
> security.UserGroupInformation (UserGroupInformation.java:getGroupNames(1521)) 
> - No groups available for user securedUser' I'm not sure of its impact. 
> [~apurtell], what do you think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14463) Severe performance downgrade when parallel reading a single key from BucketCache

2015-10-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954978#comment-14954978
 ] 

stack commented on HBASE-14463:
---

Thanks for doing up the summary, [~carp84]. It helps those of us trying to follow 
along.

> Severe performance downgrade when parallel reading a single key from 
> BucketCache
> 
>
> Key: HBASE-14463
> URL: https://issues.apache.org/jira/browse/HBASE-14463
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14, 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14463.patch, HBASE-14463_v11.patch, 
> HBASE-14463_v2.patch, HBASE-14463_v3.patch, HBASE-14463_v4.patch, 
> HBASE-14463_v5.patch, TestBucketCache-new_with_IdLock.png, 
> TestBucketCache-new_with_IdReadWriteLock.png, 
> TestBucketCache_with_IdLock.png, 
> TestBucketCache_with_IdReadWriteLock-resolveLockLeak.png, 
> TestBucketCache_with_IdReadWriteLock.png
>
>
> We store feature data of online items in HBase, do machine learning on these 
> features, and supply the outputs to our online search engine. In such a 
> scenario we launch hundreds of YARN workers, and each worker reads all 
> features of one item (i.e. a single rowkey in HBase), so there is heavy 
> parallel reading on a single rowkey.
> We were using LruCache but recently started trying BucketCache to resolve a 
> GC issue, and, just as titled, we have observed a severe performance downgrade. 
> After some analysis we found the root cause is the lock in 
> BucketCache#getBlock, as shown below
> {code}
>   try {
> lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
> // ...
> if (bucketEntry.equals(backingMap.get(key))) {
>   // ...
>   int len = bucketEntry.getLength();
>   Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>   bucketEntry.deserializerReference(this.deserialiserMap));
> {code}
> Since ioEngine.read involves an array copy, it is much more costly than the 
> corresponding operation in LruCache. And since we're using synchronized in 
> IdLock#getLockEntry, parallel reads landing on the same bucket are 
> executed serially, which causes really bad performance.
> To resolve the problem, we propose to use ReentrantReadWriteLock in 
> BucketCache, and introduce a new class called IdReadWriteLock to implement it.
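The core of the proposal is to map each bucket offset to a read-write lock so that concurrent readers of the same block no longer serialize. A minimal sketch of that idea (not the committed HBase IdReadWriteLock, which has more bookkeeping):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: one ReentrantReadWriteLock per id (e.g. bucket offset). Readers
// of the same block share the read lock; eviction would take the write lock.
public class IdReadWriteLockSketch {
  private final ConcurrentHashMap<Long, ReentrantReadWriteLock> locks =
      new ConcurrentHashMap<>();

  public ReentrantReadWriteLock getLock(long id) {
    return locks.computeIfAbsent(id, k -> new ReentrantReadWriteLock());
  }

  public static void main(String[] args) {
    IdReadWriteLockSketch locks = new IdReadWriteLockSketch();
    ReentrantReadWriteLock l = locks.getLock(42L);
    l.readLock().lock();
    // Read locks are shared: another acquisition of the same id's read lock
    // succeeds while the first is still held.
    boolean shared = locks.getLock(42L).readLock().tryLock();
    if (!shared) throw new AssertionError("read locks should be shared");
    locks.getLock(42L).readLock().unlock();
    l.readLock().unlock();
    System.out.println("shared=" + shared); // prints shared=true
  }
}
```

With the plain IdLock, the same two reads would contend on one mutually exclusive entry, which matches the serialization described above.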



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14463) Severe performance downgrade when parallel reading a single key from BucketCache

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954859#comment-14954859
 ] 

Hadoop QA commented on HBASE-14463:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12766296/HBASE-14463_v11.patch
  against master branch at commit 657078b353f215ab02ff7ac2b449006090c0c971.
  ATTACHMENT ID: 12766296

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.cxf.systest.ws.addr_wsdl.AddNumberImpl.execute(AddNumberImpl.java:53)
at 
org.apache.cxf.systest.ws.addr_wsdl.AddNumberImpl.addNumbers(AddNumberImpl.java:39)
at 
org.apache.cxf.systest.ws.addr_wsdl.AddNumberImpl.execute(AddNumberImpl.java:53)
at 
org.apache.cxf.systest.ws.addr_wsdl.AddNumberImpl.addNumbers(AddNumberImpl.java:39)
at 
org.apache.cxf.systest.ws.addr_wsdl.jaxwsmm.WSDLAddrPolicyAttachmentJaxwsMMProviderTest.testUsingAddressing(WSDLAddrPolicyAttachmentJaxwsMMProviderTest.java:117)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15984//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15984//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15984//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15984//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15984//console

This message is automatically generated.

> Severe performance downgrade when parallel reading a single key from 
> BucketCache
> 
>
> Key: HBASE-14463
> URL: https://issues.apache.org/jira/browse/HBASE-14463
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14, 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
> Attachments: HBASE-14463.patch, HBASE-14463_v11.patch, 
> HBASE-14463_v2.patch, HBASE-14463_v3.patch, HBASE-14463_v4.patch, 
> HBASE-14463_v5.patch, TestBucketCache-new_with_IdLock.png, 
> TestBucketCache-new_with_IdReadWriteLock.png, 
> TestBucketCache_with_IdLock.png, 
> TestBucketCache_with_IdReadWriteLock-resolveLockLeak.png, 
> TestBucketCache_with_IdReadWriteLock.png
>
>
> We store feature data of online items in HBase, do machine learning on these 
> features, and supply the outputs to our online search engine. In such a 
> scenario we launch hundreds of YARN workers, and each worker reads all 
> features of one item (i.e. a single rowkey in HBase), so there is heavy 
> parallel reading on a single rowkey.
> We were using LruCache but recently started trying BucketCache to resolve a 
> GC issue, and, just as titled, we have observed a severe performance downgrade. 
> After some analysis we found the root cause is the lock in 
> BucketCache#getBlock, as shown below
> {code}
>   try {
> lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
> // ...
> if (bucketEntry.equals(backingMap.get(key))) {
>   // ...
>   int len = bucketEntry.getLength();
>   Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>   bucketEntry.deserializerReference(this.deserialiserMap));
> {code}
> Since ioEngine.read involves an array copy, it is much more costly than the 
> corresponding operation in LruCache. And since we're using synchronized in 
> IdLock#getLockEntry, parallel reads landing on the same bucket are 
> executed serially, which causes really bad performance.
> To resolve the problem, we propose to use ReentrantReadWriteLock in 
> BucketCache, and introduce a new class called IdReadWriteLock to implement it.
