[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055606#comment-15055606
 ] 

Hadoop QA commented on HBASE-14936:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12776771/HBASE-14936-trunk-v1.patch
  against master branch at commit 555d9b70bd650a0df0ed9e382de449c337274493.
  ATTACHMENT ID: 12776771

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning message.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warning (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16848//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16848//artifact/patchprocess/patchReleaseAuditWarnings.txt
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16848//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16848//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16848//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16848//console

This message is automatically generated.

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3
>
> Attachments: HBASE-14936-trunk-v1.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.
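
For illustration, here is a self-contained sketch of why the delegation matters. It uses simplified stand-in classes, not HBase's actual CacheStats and not the attached patch; only the rollMetricsPeriod() forwarding mirrors the proposal above.

{code}
// Standalone sketch with simplified stand-ins for CacheStats; the point is that
// the combined stats object must forward the roll to both underlying stats.
class SimpleCacheStats {
  private long hits, misses;          // counters for the current period
  private double lastPeriodHitRatio;  // snapshot taken when the period rolls

  void hit()  { hits++; }
  void miss() { misses++; }

  void rollMetricsPeriod() {
    long total = hits + misses;
    lastPeriodHitRatio = total == 0 ? 0.0 : (double) hits / total;
    hits = 0;
    misses = 0;
  }

  double getHitRatioPastNPeriods() { return lastPeriodHitRatio; }
}

class CombinedCacheStatsSketch extends SimpleCacheStats {
  private final SimpleCacheStats lruCacheStats;
  private final SimpleCacheStats bucketCacheStats;

  CombinedCacheStatsSketch(SimpleCacheStats lru, SimpleCacheStats bucket) {
    this.lruCacheStats = lru;
    this.bucketCacheStats = bucket;
  }

  // Forward the roll to both underlying stats, as the description proposes;
  // without this, their per-period counters are never snapshotted and reset.
  @Override
  void rollMetricsPeriod() {
    lruCacheStats.rollMetricsPeriod();
    bucketCacheStats.rollMetricsPeriod();
  }
}
{code}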



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2015-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055612#comment-15055612
 ] 

Hadoop QA commented on HBASE-14970:
---

{color:green}+1 overall{color}.  

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16849//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16849//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16849//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16849//console

This message is automatically generated.

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2015-12-14 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14970:
---
Status: Open  (was: Patch Available)

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2015-12-14 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14970:
---
Status: Patch Available  (was: Open)

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2015-12-14 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14970:
---
Attachment: HBASE-14970_branch-1.patch

Trying QA once again as there was a failure in TestHeapSize.

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Jianwei Cui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianwei Cui updated HBASE-14936:

Attachment: HBASE-14936-trunk-v2.patch

add license for TestCombinedBlockCache.java

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3
>
> Attachments: HBASE-14936-trunk-v1.patch, HBASE-14936-trunk-v2.patch, 
> HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055705#comment-15055705
 ] 

Heng Chen commented on HBASE-14936:
---

Fixed the release audit warning and pushed to master and branch-1.3.
Could you upload a patch for branch-1.0, branch-1.1 and branch-1.2?

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3
>
> Attachments: HBASE-14936-trunk-v1.patch, HBASE-14936-trunk-v2.patch, 
> HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055739#comment-15055739
 ] 

Hadoop QA commented on HBASE-14936:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12777439/HBASE-14936-trunk-v2.patch
  against master branch at commit 04622254f7209c5cfeadcfa137a97fbed161075a.
  ATTACHMENT ID: 12777439

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16852//console

This message is automatically generated.

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3
>
> Attachments: HBASE-14936-trunk-v1.patch, HBASE-14936-trunk-v2.patch, 
> HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Jianwei Cui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianwei Cui updated HBASE-14936:

Attachment: HBASE-14936-branch-1.0-1.1.patch

patch for branch-1.0 and branch-1.1

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-trunk-v1.patch, HBASE-14936-trunk-v2.patch, 
> HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055786#comment-15055786
 ] 

Jianwei Cui commented on HBASE-14936:
-

Sure, it seems HBASE-14936-trunk-v2.patch can be applied to branch-1.2; I added 
a patch for branch-1.0 and branch-1.1.

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-trunk-v1.patch, HBASE-14936-trunk-v2.patch, 
> HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055802#comment-15055802
 ] 

Hadoop QA commented on HBASE-14936:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12777445/HBASE-14936-branch-1.0-1.1.patch
  against branch-1.0 branch at commit 04622254f7209c5cfeadcfa137a97fbed161075a.
  ATTACHMENT ID: 12777445

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail with Hadoop version 2.4.0.

Compilation error summary:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java:[22,58]
 error: CombinedCacheStats has private access in CombinedBlockCache
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java:[33,4]
 error: cannot find symbol
[ERROR]   symbol:   class CombinedCacheStats
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
(default-testCompile) on project hbase-server: Compilation failure: Compilation 
failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java:[22,58]
 error: CombinedCacheStats has private access in CombinedBlockCache
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java:[33,4]
 error: cannot find symbol
[ERROR] symbol:   class CombinedCacheStats
[ERROR] location: class TestCombinedBlockCache
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java:[34,12]
 error: cannot find symbol
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hbase-server


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16853//console

This message is automatically generated.

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-trunk-v1.patch, HBASE-14936-trunk-v2.patch, 
> HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14460) [Perf Regression] Merge of MVCC and SequenceId (HBASE-8763) slowed Increments, CheckAndPuts, batch operations

2015-12-14 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-14460:
-
Attachment: HBASE-14460-discussion.patch

I am thinking about an alternative way to improve the implementation of 
increment, checkAndPut, etc.
In each operation we could attach a write number per row, so that in an 
increment, mvcc.await() only has to wait for the previous operations on that 
row to finish.
I have drafted a rough patch (only for master) to do this for discussion, and I 
ran TestIncrement; the results are listed below.
{noformat}
1. testContendedSingleCellIncrementer:
  With the patch: 1st run is 228.185s. 2nd run is 232.453s. 3rd run is 
235.457s. 4th run is 229.003s.
  Without the patch: 1st run is 230.299s. 2nd run is 234.997s. 3rd run is 
219.224s. 4th run is 225.731s.
2. testUnContendedSingleCellIncrementer:
  With the patch: 59.244s.
  Without the patch: 81.667s.
{noformat}

The patch is attached to this JIRA for discussion. Thanks!
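
As a rough, self-contained sketch of the per-row write-number idea described above (hypothetical class and method names, not the attached HBASE-14460-discussion.patch): it assumes writes on a row complete in the order their numbers are assigned, which a real patch would have to handle more carefully.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in: each row gets its own write number so an increment only
// waits for earlier writes on the same row instead of the global mvcc read point.
class RowWriteNumbers {
  private final ConcurrentHashMap<String, AtomicLong> assigned  = new ConcurrentHashMap<>();
  private final ConcurrentHashMap<String, AtomicLong> completed = new ConcurrentHashMap<>();

  // Called when a write on the row starts; returns its per-row write number.
  long begin(String row) {
    return assigned.computeIfAbsent(row, r -> new AtomicLong()).incrementAndGet();
  }

  // Called when the write finishes; records the highest completed number for the row.
  void complete(String row, long writeNumber) {
    completed.computeIfAbsent(row, r -> new AtomicLong())
             .updateAndGet(prev -> Math.max(prev, writeNumber));
    synchronized (this) { notifyAll(); }
  }

  // Analogue of mvcc.await() scoped to one row: block until all earlier writes
  // on this row have completed (assumes in-order completion for simplicity).
  synchronized void await(String row, long writeNumber) throws InterruptedException {
    while (completed.computeIfAbsent(row, r -> new AtomicLong()).get() < writeNumber - 1) {
      wait();
    }
  }
}
{code}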

> [Perf Regression] Merge of MVCC and SequenceId (HBASE-8763) slowed 
> Increments, CheckAndPuts, batch operations
> ---
>
> Key: HBASE-14460
> URL: https://issues.apache.org/jira/browse/HBASE-14460
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 0.94.test.patch, 0.98.test.patch, 14460.txt, 
> HBASE-14460-discussion.patch, flamegraph-13120.svg.master.singlecell.svg, 
> flamegraph-26636.094.100.svg, flamegraph-28066.098.singlecell.svg, 
> flamegraph-28767.098.100.svg, flamegraph-31647.master.100.svg, 
> flamegraph-9466.094.singlecell.svg, m.test.patch, region_lock.png, 
> testincrement.094.patch, testincrement.098.patch, testincrement.master.patch
>
>
> As reported by 鈴木俊裕 up on the mailing list -- see "Performance degradation 
> between CDH5.3.1(HBase0.98.6) and CDH5.4.5(HBase1.0.0)" -- our unification of 
> sequenceid and MVCC slows Increments (and other ops) as the mvcc needs to 
> 'catch up' to our current point before we can read the last Increment value 
> that we need to update.
> We can say that our Increment is just done wrong (we should just be writing 
> Increments and summing on read), but checkAndPut as well as batch 
> operations have the same issue. Fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14936:
--
Attachment: HBASE-14936-branch-1.0-addendum.patch

fix compile error
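
The branch-1.0 QA failure above ("CombinedCacheStats has private access in CombinedBlockCache") is a plain Java visibility problem; here is a small self-contained illustration of it (hypothetical Outer/OuterTest names, not the actual addendum):

{code}
// A private nested class cannot be referenced from another top-level class, so a
// test like TestCombinedBlockCache cannot compile against it. Widening the nested
// class to package-private (or public) resolves the error.
class Outer {
  private static class PrivateStats { }   // analogous to the inaccessible CombinedCacheStats
  static class VisibleStats { }           // package-private: visible to tests in the same package
}

class OuterTest {
  // Outer.PrivateStats s;                // would not compile: PrivateStats has private access in Outer
  Outer.VisibleStats ok = new Outer.VisibleStats();
}
{code}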

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055823#comment-15055823
 ] 

Heng Chen commented on HBASE-14936:
---

Pushed to branch-1+. Thanks [~cuijianwei] for your nice patch.

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14936:
--
   Resolution: Fixed
Fix Version/s: 1.0
   Status: Resolved  (was: Patch Available)

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 1.0, 2.0.0, 1.2, 1.1, 1.3
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055847#comment-15055847
 ] 

Hudson commented on HBASE-14936:


FAILURE: Integrated in HBase-1.3-IT #371 (See 
[https://builds.apache.org/job/HBase-1.3-IT/371/])
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
3b9b8cc667d3a7ffb3473ac8f181f27cff8c1a4e)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java


> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3, 1.0
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055856#comment-15055856
 ] 

Jianwei Cui commented on HBASE-14936:
-

Thanks for your review [~chenheng]

> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3, 1.0
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14949) Skip duplicate entries when replay WAL.

2015-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055882#comment-15055882
 ] 

Hadoop QA commented on HBASE-14949:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12777027/HBASE-14949_v2.patch
  against master branch at commit 555d9b70bd650a0df0ed9e382de449c337274493.
  ATTACHMENT ID: 12777027

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning message.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16850//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16850//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16850//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16850//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16850//console

This message is automatically generated.

> Skip duplicate entries when replay WAL.
> ---
>
> Key: HBASE-14949
> URL: https://issues.apache.org/jira/browse/HBASE-14949
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
> Attachments: HBASE-14949.patch, HBASE-14949_v1.patch, 
> HBASE-14949_v2.patch
>
>
> As per the HBASE-14004 design, there will be duplicate entries in different 
> WALs. It happens when an hflush fails: we close the old WAL at the 'acked 
> hflushed' length, then open a new WAL and write the unacked hflushed entries 
> into it.
> So there may be some overlap between the old WAL and the new WAL.
> We should skip the duplicate entries when replaying. I think it does no harm 
> to the current logic, so maybe we do it first.
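
A rough, self-contained sketch of the skipping idea (hypothetical names, not a patch from this issue): remember the highest WAL sequence id already applied per region and drop any replayed entry at or below it.

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: WAL entries carry a monotonically increasing sequence id,
// so an entry at or below the last applied id for a region was already replayed
// from the overlapping old WAL and can be skipped.
class ReplayDeduper {
  private final Map<String, Long> lastApplied = new HashMap<>();  // keyed by encoded region name

  boolean shouldReplay(String encodedRegionName, long sequenceId) {
    long last = lastApplied.getOrDefault(encodedRegionName, -1L);
    if (sequenceId <= last) {
      return false;            // duplicate entry from the old WAL: skip it
    }
    lastApplied.put(encodedRegionName, sequenceId);
    return true;
  }
}
{code}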



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055893#comment-15055893
 ] 

Hudson commented on HBASE-14936:


FAILURE: Integrated in HBase-1.2-IT #338 (See 
[https://builds.apache.org/job/HBase-1.2-IT/338/])
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
db4d6c3ae39fac9a02ad5e57c115b78d64dbfcf4)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java


> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3, 1.0
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055896#comment-15055896
 ] 

Hudson commented on HBASE-14895:


FAILURE: Integrated in HBase-Trunk_matrix #549 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/549/])
HBASE-14895 Seek only to the newly flushed file on scanner reset on 
(ramkrishna: rev 555d9b70bd650a0df0ed9e382de449c337274493)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ChangedReadersObserver.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ReversedStoreScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWideScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestBlockEvictionFromClient.java


> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch, HBASE-14895_2.patch, HBASE-14895_3.patch, 
> HBASE-14895_3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14795) Enhance the spark-hbase scan operations

2015-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055895#comment-15055895
 ] 

Hudson commented on HBASE-14795:


FAILURE: Integrated in HBase-Trunk_matrix #549 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/549/])
HBASE-14795 Enhance the spark-hbase scan operations (Zhan Zhang) (tedyu: rev 
676ce01c82c137348e88d0acaa694ad214dc2f12)
* 
hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/datasources/package.scala
* 
hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/datasources/HBaseTableScanRDD.scala
* 
hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/datasources/Bound.scala
* hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
* 
hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/datasources/HBaseResources.scala
* 
hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/datasources/SerializableConfiguration.scala
* hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/DefaultSource.scala


> Enhance the spark-hbase scan operations
> ---
>
> Key: HBASE-14795
> URL: https://issues.apache.org/jira/browse/HBASE-14795
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Zhan Zhang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: 
> 0001-HBASE-14795-Enhance-the-spark-hbase-scan-operations.patch, 
> HBASE-14795-1.patch, HBASE-14795-2.patch, HBASE-14795-3.patch, 
> HBASE-14795-4.patch
>
>
> This is a sub-jira of HBASE-14789.  This jira focuses on replacing 
> TableInputFormat with a more custom scan implementation that will make the 
> following use case more effective.
> Use case:
> When you have multiple scan ranges on a single table within a single 
> query, TableInputFormat will scan the outer range from the scan start to the 
> end, whereas this implementation can be more targeted.
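
This is not the spark-hbase implementation from the patch, but a plain-Java illustration of the use case with the standard client API (the table name and row ranges are made up): issue one bounded Scan per requested range rather than a single Scan spanning the outer bounds.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Scans each requested range separately instead of one scan over the outer
// bounds; "test_table" and the ranges below are example values only.
public class MultiRangeScanSketch {
  public static void main(String[] args) throws IOException {
    byte[][][] ranges = {
        { Bytes.toBytes("a"), Bytes.toBytes("c") },
        { Bytes.toBytes("m"), Bytes.toBytes("p") } };
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("test_table"))) {
      for (byte[][] range : ranges) {
        Scan scan = new Scan();
        scan.setStartRow(range[0]);   // start of this range (inclusive)
        scan.setStopRow(range[1]);    // end of this range (exclusive)
        try (ResultScanner scanner = table.getScanner(scan)) {
          for (Result result : scanner) {
            System.out.println(Bytes.toString(result.getRow()));
          }
        }
      }
    }
  }
}
{code}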



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-14 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14895:
---
Attachment: HBASE-14895_addendum.patch

This addendum needs to be committed. Please review. I saw this in the code today, 
and interestingly the same thing failed in the build today after the commit:
https://builds.apache.org/job/HBase-Trunk_matrix/549/jdk=latest1.8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.regionserver/TestHRegion/testFlushCacheWhileScanning/


> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch, HBASE-14895_2.patch, HBASE-14895_3.patch, 
> HBASE-14895_3.patch, HBASE-14895_addendum.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-14 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reopened HBASE-14895:


Just reopening for committing the addendum.

> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch, HBASE-14895_2.patch, HBASE-14895_3.patch, 
> HBASE-14895_3.patch, HBASE-14895_addendum.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2015-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055932#comment-15055932
 ] 

Hadoop QA commented on HBASE-14970:
---

{color:green}+1 overall{color}.  

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16851//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16851//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16851//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16851//console

This message is automatically generated.

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056077#comment-15056077
 ] 

Hudson commented on HBASE-14936:


FAILURE: Integrated in HBase-1.3 #434 (See 
[https://builds.apache.org/job/HBase-1.3/434/])
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
3b9b8cc667d3a7ffb3473ac8f181f27cff8c1a4e)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java


> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3, 1.0
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056222#comment-15056222
 ] 

Hudson commented on HBASE-14936:


FAILURE: Integrated in HBase-1.1-JDK7 #1618 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1618/])
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
b7b9123e758d1d6b35651212a4d1a14521eb1606)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
d6000a8a611abe016e57e6575092e392c508aeaa)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java


> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3, 1.0
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14971) Figure out why there are broken links in the Javadocs

2015-12-14 Thread Misty Stanley-Jones (JIRA)
Misty Stanley-Jones created HBASE-14971:
---

 Summary: Figure out why there are broken links in the Javadocs
 Key: HBASE-14971
 URL: https://issues.apache.org/jira/browse/HBASE-14971
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Misty Stanley-Jones


Running the link checker produces the following results (also visible at 
https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/10/artifact/link_report/errorX.html
 as long as the job is available). I suspect it is because Javadoc refers to 
private classes.

{code}
#
# ERROR  30 missing html files (cross referenced)
#
/devapidocs/org/apache/hadoop/hbase/JitterScheduledThreadPoolExecutorImpl.JitteredRunnableScheduledFuture.html
used in 3 files:
/devapidocs/index-all.html
/devapidocs/org/apache/hadoop/hbase/package-tree.html
/devapidocs/overview-tree.html

/devapidocs/org/apache/hadoop/hbase/JitterScheduledThreadPoolExecutorImpl.html
used in 5 files:
/devapidocs/allclasses-noframe.html
/devapidocs/index-all.html
/devapidocs/org/apache/hadoop/hbase/package-summary.html
/devapidocs/org/apache/hadoop/hbase/package-tree.html
/devapidocs/overview-tree.html

/devapidocs/org/apache/hadoop/hbase/MultiActionResultTooLarge.html
used in 8 files:
/devapidocs/allclasses-noframe.html
/devapidocs/index-all.html
/devapidocs/org/apache/hadoop/hbase/NamespaceDescriptor.html

/devapidocs/org/apache/hadoop/hbase/classification/class-use/InterfaceAudience.Public.html
/devapidocs/org/apache/hadoop/hbase/package-summary.html
/devapidocs/org/apache/hadoop/hbase/package-tree.html
/devapidocs/overview-tree.html
/devapidocs/serialized-form.html

/devapidocs/org/apache/hadoop/hbase/RetryImmediatelyException.html
used in 8 files:
/devapidocs/allclasses-noframe.html
/devapidocs/index-all.html
/devapidocs/org/apache/hadoop/hbase/RegionTooBusyException.html

/devapidocs/org/apache/hadoop/hbase/classification/class-use/InterfaceAudience.Public.html
/devapidocs/org/apache/hadoop/hbase/package-summary.html
/devapidocs/org/apache/hadoop/hbase/package-tree.html
/devapidocs/overview-tree.html
/devapidocs/serialized-form.html

/devapidocs/org/apache/hadoop/hbase/client/VersionInfoUtil.html
used in 5 files:
/devapidocs/allclasses-noframe.html
/devapidocs/index-all.html
/devapidocs/org/apache/hadoop/hbase/client/package-summary.html
/devapidocs/org/apache/hadoop/hbase/client/package-tree.html
/devapidocs/overview-tree.html

/devapidocs/org/apache/hadoop/hbase/master/MetricsMasterProcSource.html
used in 8 files:
/devapidocs/allclasses-noframe.html
/devapidocs/constant-values.html
/devapidocs/index-all.html
/devapidocs/org/apache/hadoop/hbase/master/MetricsMaster.html
/devapidocs/org/apache/hadoop/hbase/master/package-summary.html
/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
/devapidocs/org/apache/hadoop/hbase/metrics/BaseSource.html
/devapidocs/overview-tree.html

/devapidocs/org/apache/hadoop/hbase/master/MetricsMasterProcSourceFactory.html
used in 5 files:
/devapidocs/allclasses-noframe.html
/devapidocs/index-all.html
/devapidocs/org/apache/hadoop/hbase/master/package-summary.html
/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
/devapidocs/overview-tree.html

/devapidocs/org/apache/hadoop/hbase/master/MetricsMasterProcSourceFactoryImpl.html
used in 5 files:
/devapidocs/allclasses-noframe.html
/devapidocs/index-all.html
/devapidocs/org/apache/hadoop/hbase/master/package-summary.html
/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
/devapidocs/overview-tree.html

/devapidocs/org/apache/hadoop/hbase/master/MetricsMasterProcSourceImpl.html
used in 6 files:
/devapidocs/allclasses-noframe.html
/devapidocs/index-all.html
/devapidocs/org/apache/hadoop/hbase/master/package-summary.html
/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
/devapidocs/org/apache/hadoop/hbase/metrics/BaseSource.html
/devapidocs/overview-tree.html

/devapidocs/org/apache/hadoop/hbase/regionserver/compactions/CompactedHFilesDischarger.html
used in 4 files:
/devapidocs/allclasses-noframe.html
/devapidocs/index-all.html
/devapidocs/org/apache/hadoop/hbase/regionserver/HRegion.html
/devapidocs/overview-tree.html

/devapidocs/org/apache/hadoop/hbase/regionserver/compactions/FIFOCompactionPolicy.html
used in 3 files:
/devapidocs/allclasses-noframe.html
/devapidocs/index-all.html
/devapidocs/overview-tree.html

/devapidocs/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.html
used in 4 files:
/devapidocs/allclasses-noframe.html
/devapidocs/index-all.html
/d

[jira] [Created] (HBASE-14972) TestHFileOutputFormat hanging

2015-12-14 Thread stack (JIRA)
stack created HBASE-14972:
-

 Summary: TestHFileOutputFormat hanging
 Key: HBASE-14972
 URL: https://issues.apache.org/jira/browse/HBASE-14972
 Project: HBase
  Issue Type: Sub-task
  Components: hangingTests, test
Reporter: stack


This one has been hanging for a while. It happened this morning here:

https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.2/jdk=latest1.7,label=Hadoop/440/consoleText

Will fill in more detail later. Builds.apache.org is crawling this morning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14955) OOME: cannot create native thread is back

2015-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056260#comment-15056260
 ] 

stack commented on HBASE-14955:
---

I like it, [~chenheng]. What is the recipe you have for doing this? Point me at 
an example and I'll make a patch for these tests.

I think the failure is probably environmental in this case... probably errant, 
concurrent tests running on the box consuming resources, but anything we can do 
to use less will make it more likely we'll pass in these low-resource 
circumstances. Thanks.

> OOME: cannot create native thread is back
> -
>
> Key: HBASE-14955
> URL: https://issues.apache.org/jira/browse/HBASE-14955
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: test
>Reporter: stack
>
> This failure is an OOME: cannot create native thread.  Two MR jobs fail:
> org.apache.hadoop.hbase.mapreduce.TestImportTSVWithVisibilityLabels (46 ms)
> org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan1 (42 ms)
> Was running 1.3 tests on H0.
> Could try and purge resources used by these tests.
> Making an issue in the meantime to keep an eye on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14678) Experiment: Temporarily disable balancer and a few others to see if root of crashed/timedout JVMs

2015-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056265#comment-15056265
 ] 

stack commented on HBASE-14678:
---

Ok. We seem to have figured out the test killer (it was self-inflicted). The 
long-time flakies and hangers are being addressed slowly. It will be time to 
start re-enabling these disabled tests soon.

> Experiment: Temporarily disable balancer and a few others to see if root of 
> crashed/timedout JVMs
> -
>
> Key: HBASE-14678
> URL: https://issues.apache.org/jira/browse/HBASE-14678
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>
> Looking at recent builds of 1.2, I see a few of the runs finishing with kills 
> and notice that a JVM exited without reporting back state. Running the 
> hanging-test finder, I can see that in at least one case the balancer 
> tests seem to be outstanding; looking at the test output, they seem to be 
> still going on. A few others are reported as hung, but they look like they have 
> just started running and are simply killed by surefire.
> This issue is about trying to disable a few of the problematic tests, like the 
> balancer tests, to see if our overall stability improves. If so, I can 
> concentrate on stabilizing these few tests; else I will just undo the experiment 
> and put the tests back online.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056273#comment-15056273
 ] 

stack commented on HBASE-14895:
---

Do you need a review on the addendum, [~ram_krish]? Why do you need to remove the 
flush check?

> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch, HBASE-14895_2.patch, HBASE-14895_3.patch, 
> HBASE-14895_3.patch, HBASE-14895_addendum.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056270#comment-15056270
 ] 

stack commented on HBASE-14895:
---

bq. I saw this in the code today and interestingly the same thing failed in 
the build today after the commit.
https://builds.apache.org/job/HBase-Trunk_matrix/549/jdk=latest1.8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.regionserver/TestHRegion/testFlushCacheWhileScanning/

Hurray for a CI that catches issues!!!



> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch, HBASE-14895_2.patch, HBASE-14895_3.patch, 
> HBASE-14895_3.patch, HBASE-14895_addendum.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056311#comment-15056311
 ] 

Hudson commented on HBASE-14936:


FAILURE: Integrated in HBase-1.0 #1122 (See 
[https://builds.apache.org/job/HBase-1.0/1122/])
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
fb08834f553ffb94ddc7fe0b2707ea5a18127633)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
2b35976d1f94d3dc5627f1a6b3471f3c0eaffdce)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java


> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3, 1.0
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.
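
For context, a toy, self-contained sketch (not HBase's actual CacheStats/CombinedBlockCache 
API) of why the override matters: unless the combined stats forwards rollMetricsPeriod() to 
both underlying stats objects, the per-period counters never advance and the past-periods 
hit ratio stays at 0.

{code}
// Toy stand-ins for CacheStats and CombinedBlockCache's combined stats; names are illustrative.
public class RollingStatsSketch {
  static class SimpleStats {
    long hitsThisPeriod, requestsThisPeriod, hitsPastPeriods, requestsPastPeriods;

    void hit()  { hitsThisPeriod++; requestsThisPeriod++; }
    void miss() { requestsThisPeriod++; }

    void rollMetricsPeriod() {
      hitsPastPeriods += hitsThisPeriod;
      requestsPastPeriods += requestsThisPeriod;
      hitsThisPeriod = 0;
      requestsThisPeriod = 0;
    }

    double hitRatioPastPeriods() {
      return requestsPastPeriods == 0 ? 0 : (double) hitsPastPeriods / requestsPastPeriods;
    }
  }

  static class CombinedStats extends SimpleStats {
    final SimpleStats lru = new SimpleStats();
    final SimpleStats bucket = new SimpleStats();

    // Without this override, only the unused counters of CombinedStats itself would roll,
    // and hitRatioPastPeriods() below would always return 0.
    @Override
    void rollMetricsPeriod() {
      lru.rollMetricsPeriod();
      bucket.rollMetricsPeriod();
    }

    @Override
    double hitRatioPastPeriods() {
      long hits = lru.hitsPastPeriods + bucket.hitsPastPeriods;
      long requests = lru.requestsPastPeriods + bucket.requestsPastPeriods;
      return requests == 0 ? 0 : (double) hits / requests;
    }
  }

  public static void main(String[] args) {
    CombinedStats stats = new CombinedStats();
    stats.lru.hit();
    stats.bucket.miss();
    stats.rollMetricsPeriod();
    System.out.println(stats.hitRatioPastPeriods()); // 0.5 with the override in place
  }
}
{code}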



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056316#comment-15056316
 ] 

Hudson commented on HBASE-14936:


ABORTED: Integrated in HBase-1.2 #441 (See 
[https://builds.apache.org/job/HBase-1.2/441/])
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
db4d6c3ae39fac9a02ad5e57c115b78d64dbfcf4)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java


> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3, 1.0
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056401#comment-15056401
 ] 

ramkrishna.s.vasudevan commented on HBASE-14895:


After this new patch we don't reset the heap to null when a flush happens. The
only time we do is when a close happens because heap.peek() does not return any
element. That is what happened here: the next() call did not yield any cell, so
the heap was closed and set to null, and when we then tried to peek on that heap
we got the NPE.
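
For readers following along, a toy sketch of that failure path (illustrative names only; 
the KeyValueHeap/StoreScanner details are simplified, this is not the actual patch):

{code}
// Illustrative only: 'heap' stands in for StoreScanner's KeyValueHeap.
import java.util.PriorityQueue;

public class ScannerHeapSketch {
  private PriorityQueue<String> heap = new PriorityQueue<>();

  String next() {
    String cell = (heap == null) ? null : heap.poll();
    if (cell == null) {
      close();   // heap exhausted: close the scanner and null the heap
    }
    return cell;
  }

  void close() {
    heap = null;
  }

  public static void main(String[] args) {
    ScannerHeapSketch scanner = new ScannerHeapSketch();
    System.out.println(scanner.next());   // no cells: heap is closed and set to null
    // scanner.heap.peek();               // a direct heap.peek() here would throw the NPE described above
  }
}
{code}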

> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch, HBASE-14895_2.patch, HBASE-14895_3.patch, 
> HBASE-14895_3.patch, HBASE-14895_addendum.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14895) Seek only to the newly flushed file on scanner reset on flush

2015-12-14 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14895:
---
Attachment: HBASE-14895_addendum_1.patch

Same addendum, just calling this.peek() instead of this.heap.peek() so that the
null check on the heap happens.
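
A minimal sketch of the difference, again with made-up names rather than the real 
StoreScanner code: peek() carries the null check on the heap, while a direct 
this.heap.peek() does not.

{code}
// Illustrative stand-in for the addendum's change: route callers through peek(),
// which guards against the heap having been closed (and nulled).
import java.util.PriorityQueue;

public class NullSafePeekSketch {
  private PriorityQueue<String> heap;   // may be null after close()

  // this.peek(): safe even when the heap has been closed.
  String peek() {
    return heap == null ? null : heap.peek();
  }

  // this.heap.peek() equivalent: throws NullPointerException once heap is null.
  String unsafePeek() {
    return heap.peek();
  }

  public static void main(String[] args) {
    NullSafePeekSketch s = new NullSafePeekSketch();
    System.out.println(s.peek());          // null, no exception
    // System.out.println(s.unsafePeek()); // would throw the NPE the addendum avoids
  }
}
{code}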

> Seek only to the newly flushed file on scanner reset on flush
> -
>
> Key: HBASE-14895
> URL: https://issues.apache.org/jira/browse/HBASE-14895
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14895.patch, HBASE-14895_1.patch, 
> HBASE-14895_1.patch, HBASE-14895_2.patch, HBASE-14895_3.patch, 
> HBASE-14895_3.patch, HBASE-14895_addendum.patch, HBASE-14895_addendum_1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056415#comment-15056415
 ] 

Hudson commented on HBASE-14936:


FAILURE: Integrated in HBase-1.1-JDK8 #1706 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1706/])
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
b7b9123e758d1d6b35651212a4d1a14521eb1606)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
d6000a8a611abe016e57e6575092e392c508aeaa)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java


> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3, 1.0
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Status: Open  (was: Patch Available)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v3.patch, HBASE-14030-v4.patch, 
> HBASE-14030-v5.patch, HBASE-14030-v6.patch, HBASE-14030-v7.patch, 
> HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Status: Patch Available  (was: Open)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v3.patch, HBASE-14030-v4.patch, 
> HBASE-14030-v5.patch, HBASE-14030-v6.patch, HBASE-14030-v7.patch, 
> HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10390) expose checkAndPut/Delete custom comparators thru HTable

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-10390:
--
Status: Open  (was: Patch Available)

> expose checkAndPut/Delete custom comparators thru HTable
> 
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-10390-v1.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, thru 
> HTable, there's no way to pass one, it always creates BinaryComparator from 
> value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10390) expose checkAndPut/Delete custom comparators thru HTable

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-10390:
--
Attachment: HBASE-10390-v2.patch

v2 patch.

> expose checkAndPut/Delete custom comparators thru HTable
> 
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-10390-v1.patch, HBASE-10390-v2.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, thru 
> HTable, there's no way to pass one, it always creates BinaryComparator from 
> value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056473#comment-15056473
 ] 

Vladimir Rodionov commented on HBASE-14951:
---

{quote}
Where does the *2 come from in the formula?
{quote}

Just to make sure that we do not trigger memstore flushes merely because we have
reached the maximum number of WAL files.
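
A small worked sketch of the formula from the issue description below, with illustrative 
defaults (40% global memstore ratio, WAL roll size of 0.95 * 128 MB block); the 
Math.max(32, ...) floor follows the release note wording, and the doubling supplies the 
headroom described above.

{code}
// Sketch of the calculation, not the actual WAL code; the heap size, memstore
// ratio, and roll size below are illustrative defaults.
public class MaxLogsSketch {
  static long maxLogs(long heapBytes, float memstoreRatio, long logRollSizeBytes) {
    // The factor of 2 adds headroom so that reaching the WAL-count cap alone is
    // not what normally forces memstore flushes.
    return Math.max(32, (long) (heapBytes * memstoreRatio * 2 / logRollSizeBytes));
  }

  public static void main(String[] args) {
    long gb = 1024L * 1024 * 1024;
    float memstoreRatio = 0.4f;                            // hbase.regionserver.global.memstore.size
    long logRollSize = (long) (0.95 * 128 * 1024 * 1024);  // assumes a 128 MB HDFS block
    System.out.println(maxLogs(1 * gb, memstoreRatio, logRollSize));  // 32 (floor applies)
    System.out.println(maxLogs(2 * gb, memstoreRatio, logRollSize));  // 32 (floor applies)
  }
}
{code}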

> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to maximum number of log files. 
> It was an agreement that we should calculate this number in a code but still 
> need to honor user's setting. 
> Maximum number of log files now is calculated as following:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2/ LogRollSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10390) expose checkAndPut/Delete custom comparators thru HTable

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-10390:
--
Status: Patch Available  (was: Open)

> expose checkAndPut/Delete custom comparators thru HTable
> 
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-10390-v1.patch, HBASE-10390-v2.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, thru 
> HTable, there's no way to pass one, it always creates BinaryComparator from 
> value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14951:
--
Release Note: 
Rolling WAL events across a cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table. Default value for maximum WAL files (* 
hbase.regionserver.maxlogs*), which controls WAL rolling events - 32 is too 
small for many modern deployments. 
Now we calculate this value dynamically (if not defined by user) using the 
following formula:

maxLogs = HBASE_HEAP_SIZE * memstoreRatio * 2/ LogRollSize, where

memstoreRatio is 
  

  was:
Rolling WAL events across a cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table. Default value for maximum WAL files (* 
hbase.regionserver.maxlogs*), which controls WAL rolling events - 32 is too 
small for many modern deployments. 
Now we calculate this value dynamically (if not defined by user) using the 
following formula:

  


> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to maximum number of log files. 
> It was an agreement that we should calculate this number in a code but still 
> need to honor user's setting. 
> Maximum number of log files now is calculated as following:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2/ LogRollSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14951:
--
Release Note: 
Rolling WAL events across a cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table. Default value for maximum WAL files (* 
hbase.regionserver.maxlogs*), which controls WAL rolling events - 32 is too 
small for many modern deployments. 
Now we calculate this value dynamically (if not defined by user) using the 
following formula:

  

> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to maximum number of log files. 
> It was an agreement that we should calculate this number in a code but still 
> need to honor user's setting. 
> Maximum number of log files now is calculated as following:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2/ LogRollSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14951:
--
Release Note: 
Rolling WAL events across a cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table. Default value for maximum WAL files (* 
hbase.regionserver.maxlogs*), which controls WAL rolling events - 32 is too 
small for many modern deployments. 
Now we calculate this value dynamically (if not defined by user), using the 
following formula:

maxLogs = Math.max( 32, HBASE_HEAP_SIZE * memstoreRatio * 2/ LogRollSize), where

memstoreRatio is *hbase.regionserver.global.memstore.size*
LogRollSize is maximum WAL file size (default 0.95 * HDFS block size)

The following table gives the new default maximum log files values for several 
different Region Server heap sizes:
||heap||memstore ratio||max logs||
|1G|40%|32|
|2G|40%|32|
|10G|40%|80|
|20G|40%|160|
|32G|40%|256|   
  

  was:
Rolling WAL events across a cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table. Default value for maximum WAL files (* 
hbase.regionserver.maxlogs*), which controls WAL rolling events - 32 is too 
small for many modern deployments. 
Now we calculate this value dynamically (if not defined by user) using the 
following formula:

maxLogs = HBASE_HEAP_SIZE * memstoreRatio * 2/ LogRollSize, where

memstoreRatio is 
  


> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to maximum number of log files. 
> It was an agreement that we should calculate this number in a code but still 
> need to honor user's setting. 
> Maximum number of log files now is calculated as following:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2/ LogRollSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14973) NPE on displaying region when a region moves

2015-12-14 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14973:
-

 Summary: NPE on displaying region when a region moves
 Key: HBASE-14973
 URL: https://issues.apache.org/jira/browse/HBASE-14973
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 1.2.0
Reporter: Elliott Clark
Priority: Minor


{code}
HTTP ERROR 500

Problem accessing /region.jsp. Reason:

INTERNAL_SERVER_ERROR
Caused by:

java.lang.NullPointerException
at 
org.apache.hadoop.hbase.generated.regionserver.region_jsp._jspService(region_jsp.java:65)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1354)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14951:
--
Release Note: 
Rolling WAL events across a cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table. Default value for maximum WAL files (* 
hbase.regionserver.maxlogs*), which controls WAL rolling events - 32 is too 
small for many modern deployments. 
Now we calculate this value dynamically (if not defined by user), using the 
following formula:

maxLogs = Math.max( 32, HBASE_HEAP_SIZE * memstoreRatio * 2/ LogRollSize), where

memstoreRatio is *hbase.regionserver.global.memstore.size*
LogRollSize is maximum WAL file size (default 0.95 * HDFS block size)

The following table gives the new default maximum log files values for several 
different Region Server heap sizes:

||heap||memstore ratio||max logs||
|1G|40%|32|
|2G|40%|32|
|10G|40%|80|
|20G|40%|160|
|32G|40%|256|   
  

  was:
Rolling WAL events across a cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table. Default value for maximum WAL files (* 
hbase.regionserver.maxlogs*), which controls WAL rolling events - 32 is too 
small for many modern deployments. 
Now we calculate this value dynamically (if not defined by user), using the 
following formula:

maxLogs = Math.max( 32, HBASE_HEAP_SIZE * memstoreRatio * 2/ LogRollSize), where

memstoreRatio is *hbase.regionserver.global.memstore.size*
LogRollSize is maximum WAL file size (default 0.95 * HDFS block size)

The following table gives the new default maximum log files values for several 
different Region Server heap sizes:
||heap||memstore ratio||max logs||
|1G|40%|32|
|2G|40%|32|
|10G|40%|80|
|20G|40%|160|
|32G|40%|256|   
  


> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to maximum number of log files. 
> It was an agreement that we should calculate this number in a code but still 
> need to honor user's setting. 
> Maximum number of log files now is calculated as following:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2/ LogRollSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14951:
--
Release Note: 
Rolling WAL events across a cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table. Default value for maximum WAL files (* 
hbase.regionserver.maxlogs*), which controls WAL rolling events - 32 is too 
small for many modern deployments. 
Now we calculate this value dynamically (if not defined by user), using the 
following formula:

maxLogs = Math.max( 32, HBASE_HEAP_SIZE * memstoreRatio * 2/ LogRollSize), where

memstoreRatio is *hbase.regionserver.global.memstore.size*
LogRollSize is maximum WAL file size (default 0.95 * HDFS block size)

The following table gives the new default maximum log files values for several 
different Region Server heap sizes:

heap    memstore perc   maxLogs
1G      40%             32
2G      40%             32
10G     40%             80
20G     40%             160
32G     40%             256

  

  was:
Rolling WAL events across a cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table. Default value for maximum WAL files (* 
hbase.regionserver.maxlogs*), which controls WAL rolling events - 32 is too 
small for many modern deployments. 
Now we calculate this value dynamically (if not defined by user), using the 
following formula:

maxLogs = Math.max( 32, HBASE_HEAP_SIZE * memstoreRatio * 2/ LogRollSize), where

memstoreRatio is *hbase.regionserver.global.memstore.size*
LogRollSize is maximum WAL file size (default 0.95 * HDFS block size)

The following table gives the new default maximum log files values for several 
different Region Server heap sizes:

||heap||memstore ratio||max logs||
|1G|40%|32|
|2G|40%|32|
|10G|40%|80|
|20G|40%|160|
|32G|40%|256|   
  


> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to maximum number of log files. 
> It was an agreement that we should calculate this number in a code but still 
> need to honor user's setting. 
> Maximum number of log files now is calculated as following:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2/ LogRollSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056545#comment-15056545
 ] 

Vladimir Rodionov commented on HBASE-14951:
---

Updated Release Notes. 

> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to maximum number of log files. 
> It was an agreement that we should calculate this number in a code but still 
> need to honor user's setting. 
> Maximum number of log files now is calculated as following:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2/ LogRollSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14974) Total number of Regions in Transition number on UI incorrect

2015-12-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14974:
--
Attachment: Screen Shot 2015-12-14 at 11.34.14 AM.png

> Total number of Regions in Transition number on UI incorrect
> 
>
> Key: HBASE-14974
> URL: https://issues.apache.org/jira/browse/HBASE-14974
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: Elliott Clark
>Priority: Trivial
> Attachments: Screen Shot 2015-12-14 at 11.34.14 AM.png
>
>
> Total number of Regions in Transition shows 100 when there are 100 or more 
> regions in transition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14974) Total number of Regions in Transition number on UI incorrect

2015-12-14 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14974:
-

 Summary: Total number of Regions in Transition number on UI 
incorrect
 Key: HBASE-14974
 URL: https://issues.apache.org/jira/browse/HBASE-14974
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Elliott Clark
Priority: Trivial
 Attachments: Screen Shot 2015-12-14 at 11.34.14 AM.png

Total number of Regions in Transition shows 100 when there are 100 or more 
regions in transition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14974) Total number of Regions in Transition number on UI incorrect

2015-12-14 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056553#comment-15056553
 ] 

Elliott Clark commented on HBASE-14974:
---

Attached a screenshot showing the issue. The total in the screenshot should read 288.

> Total number of Regions in Transition number on UI incorrect
> 
>
> Key: HBASE-14974
> URL: https://issues.apache.org/jira/browse/HBASE-14974
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: Elliott Clark
>Priority: Trivial
> Attachments: Screen Shot 2015-12-14 at 11.34.14 AM.png
>
>
> Total number of Regions in Transition shows 100 when there are 100 or more 
> regions in transition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14974) Total number of Regions in Transition number on UI incorrect

2015-12-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14974:
--
Attachment: Screen Shot 2015-12-14 at 11.37.13 AM.png

Attached screenshot with issue.

> Total number of Regions in Transition number on UI incorrect
> 
>
> Key: HBASE-14974
> URL: https://issues.apache.org/jira/browse/HBASE-14974
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: Elliott Clark
>Priority: Trivial
> Attachments: Screen Shot 2015-12-14 at 11.34.14 AM.png, Screen Shot 
> 2015-12-14 at 11.37.13 AM.png
>
>
> Total number of Regions in Transition shows 100 when there are 100 or more 
> regions in transition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14975) Don't color the total RIT line yellow if it's zero

2015-12-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14975:
--
Attachment: Screen Shot 2015-12-14 at 11.37.13 AM.png

> Don't color the total RIT line yellow if it's zero
> --
>
> Key: HBASE-14975
> URL: https://issues.apache.org/jira/browse/HBASE-14975
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: Elliott Clark
> Attachments: Screen Shot 2015-12-14 at 11.37.13 AM.png
>
>
> Right now if there are regions in transition, sometimes the RIT over 60 
> seconds line is colored yellow. It shouldn't be colored yellow if there are 
> no regions that have been in transition too long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14975) Don't color the total RIT line yellow if it's zero

2015-12-14 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14975:
-

 Summary: Don't color the total RIT line yellow if it's zero
 Key: HBASE-14975
 URL: https://issues.apache.org/jira/browse/HBASE-14975
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Elliott Clark


Right now if there are regions in transition, sometimes the RIT over 60 seconds 
line is colored yellow. It shouldn't be colored yellow if there are no regions 
that have been in transition too long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HBASE-14974) Total number of Regions in Transition number on UI incorrect

2015-12-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14974:
--
Comment: was deleted

(was: Attached screenshot with issue.)

> Total number of Regions in Transition number on UI incorrect
> 
>
> Key: HBASE-14974
> URL: https://issues.apache.org/jira/browse/HBASE-14974
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: Elliott Clark
>Priority: Trivial
> Attachments: Screen Shot 2015-12-14 at 11.34.14 AM.png
>
>
> Total number of Regions in Transition shows 100 when there are 100 or more 
> regions in transition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14974) Total number of Regions in Transition number on UI incorrect

2015-12-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14974:
--
Attachment: (was: Screen Shot 2015-12-14 at 11.37.13 AM.png)

> Total number of Regions in Transition number on UI incorrect
> 
>
> Key: HBASE-14974
> URL: https://issues.apache.org/jira/browse/HBASE-14974
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: Elliott Clark
>Priority: Trivial
> Attachments: Screen Shot 2015-12-14 at 11.34.14 AM.png
>
>
> Total number of Regions in Transition shows 100 when there are 100 or more 
> regions in transition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14976) Add RPC call queues to the web ui

2015-12-14 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14976:
-

 Summary: Add RPC call queues to the web ui
 Key: HBASE-14976
 URL: https://issues.apache.org/jira/browse/HBASE-14976
 Project: HBase
  Issue Type: Improvement
  Components: UI
Reporter: Elliott Clark
Priority: Minor


The size of the call queue for the regionserver is a critical metric for seeing
whether requests are being processed too slowly. We should add the call queue
size to the UI under the queues tab.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14460) [Perf Regression] Merge of MVCC and SequenceId (HBASE-HBASE-8763) slowed Increments, CheckAndPuts, batch operations

2015-12-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056586#comment-15056586
 ] 

stack commented on HBASE-14460:
---

Sweet, [~jingcheng...@intel.com], thank you. I like the numbers. Will be back
with comments on the patch.

> [Perf Regression] Merge of MVCC and SequenceId (HBASE-HBASE-8763) slowed 
> Increments, CheckAndPuts, batch operations
> ---
>
> Key: HBASE-14460
> URL: https://issues.apache.org/jira/browse/HBASE-14460
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 0.94.test.patch, 0.98.test.patch, 14460.txt, 
> HBASE-14460-discussion.patch, flamegraph-13120.svg.master.singlecell.svg, 
> flamegraph-26636.094.100.svg, flamegraph-28066.098.singlecell.svg, 
> flamegraph-28767.098.100.svg, flamegraph-31647.master.100.svg, 
> flamegraph-9466.094.singlecell.svg, m.test.patch, region_lock.png, 
> testincrement.094.patch, testincrement.098.patch, testincrement.master.patch
>
>
> As reported by 鈴木俊裕 up on the mailing list -- see "Performance degradation 
> between CDH5.3.1(HBase0.98.6) and CDH5.4.5(HBase1.0.0)" -- our unification of 
> sequenceid and MVCC slows Increments (and other ops) as the mvcc needs to 
> 'catch up' to our current point before we can read the last Increment value 
> that we need to update.
> We can say that our Increment is just done wrong, we should just be writing 
> Increments and summing on read, but checkAndPut as well as batching 
> operations have the same issue. Fix.
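
To make the 'catch up' cost concrete, a toy sketch of the read-modify-write pattern 
described above; the ToyMvcc class and its methods are invented for illustration and 
are not HBase's MultiVersionConcurrencyControl API.

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Toy model of the increment path described above: the reader must wait for the
// mvcc read point to catch up to all earlier writes before it can safely read
// the latest value, and that wait is where the reported slowdown shows up.
public class IncrementSketch {
  static class ToyMvcc {
    private final AtomicLong writePoint = new AtomicLong();
    private volatile long readPoint;

    long begin() { return writePoint.incrementAndGet(); }
    void complete(long point) { readPoint = point; }

    void waitForRead(long point) throws InterruptedException {
      while (readPoint < point) {
        Thread.sleep(1);   // in the real server this wait blocks the handler thread
      }
    }
  }

  private final ToyMvcc mvcc = new ToyMvcc();
  private final Map<String, Long> store = new HashMap<>();

  long increment(String row, long delta) throws InterruptedException {
    long myPoint = mvcc.begin();
    mvcc.waitForRead(myPoint - 1);   // "catch up" before reading the last value
    long next = store.getOrDefault(row, 0L) + delta;
    store.put(row, next);
    mvcc.complete(myPoint);
    return next;
  }

  public static void main(String[] args) throws InterruptedException {
    IncrementSketch region = new IncrementSketch();
    System.out.println(region.increment("row1", 1));  // 1
    System.out.println(region.increment("row1", 2));  // 3
  }
}
{code}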



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-12-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056590#comment-15056590
 ] 

Ted Yu commented on HBASE-6721:
---

From https://builds.apache.org/job/PreCommit-HBASE-Build/16827/consoleFull :
{code}
testKillRS(org.apache.hadoop.hbase.group.TestGroups)  Time elapsed: 10.725 sec  
<<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.hadoop.hbase.group.TestGroupsBase.testKillRS(TestGroupsBase.java:612)
{code}
I am running TestShell locally to see if it passes.

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, immediateAssignments Sequence 
> Diagram.svg, randomAssignment Sequence Diagram.svg, retainAssignment Sequence 
> Diagram.svg, roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it, provides a client application a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10390) expose checkAndPut/Delete custom comparators thru HTable

2015-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056592#comment-15056592
 ] 

Hadoop QA commented on HBASE-10390:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12777531/HBASE-10390-v2.patch
  against master branch at commit 04622254f7209c5cfeadcfa137a97fbed161075a.
  ATTACHMENT ID: 12777531

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16854//console

This message is automatically generated.

> expose checkAndPut/Delete custom comparators thru HTable
> 
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-10390-v1.patch, HBASE-10390-v2.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, thru 
> HTable, there's no way to pass one, it always creates BinaryComparator from 
> value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10390) expose checkAndPut/Delete custom comparators thru HTable

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-10390:
--
Status: Open  (was: Patch Available)

> expose checkAndPut/Delete custom comparators thru HTable
> 
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-10390-v1.patch, HBASE-10390-v2.patch, 
> HBASE-10390-v3.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, thru 
> HTable, there's no way to pass one, it always creates BinaryComparator from 
> value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10390) expose checkAndPut/Delete custom comparators thru HTable

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-10390:
--
Attachment: HBASE-10390-v3.patch

> expose checkAndPut/Delete custom comparators thru HTable
> 
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-10390-v1.patch, HBASE-10390-v2.patch, 
> HBASE-10390-v3.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, thru 
> HTable, there's no way to pass one, it always creates BinaryComparator from 
> value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10390) expose checkAndPut/Delete custom comparators thru HTable

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-10390:
--
Status: Patch Available  (was: Open)

> expose checkAndPut/Delete custom comparators thru HTable
> 
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-10390-v1.patch, HBASE-10390-v2.patch, 
> HBASE-10390-v3.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, thru 
> HTable, there's no way to pass one, it always creates BinaryComparator from 
> value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-12-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056619#comment-15056619
 ] 

Ted Yu commented on HBASE-6721:
---

Francis:
You can copy the following file from branch-1.1 to the master branch so that you
can run TestShell:
hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestShell.java

I ran TestShell locally and still got:
{code}
test_Test_Basic_Group_Commands(Hbase::GroupShellTest):
NativeException: org.apache.hadoop.hbase.exceptions.UnknownProtocolException: 
org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered 
master coprocessorservice found for name hbase.pb.GroupAdminService
  at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:682)
  at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:57964)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
  at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
  at java.lang.Thread.run(Thread.java:745)

sun/reflect/NativeConstructorAccessorImpl.java:-2:in `newInstance0'
sun/reflect/NativeConstructorAccessorImpl.java:57:in `newInstance'
sun/reflect/DelegatingConstructorAccessorImpl.java:45:in `newInstance'
java/lang/reflect/Constructor.java:526:in `newInstance'
org/apache/hadoop/ipc/RemoteException.java:106:in `instantiateException'
org/apache/hadoop/ipc/RemoteException.java:95:in `unwrapRemoteException'
org/apache/hadoop/hbase/ipc/AsyncCall.java:127:in `setFailed'
org/apache/hadoop/hbase/ipc/AsyncServerResponseHandler.java:83:in 
`channelRead'
io/netty/channel/AbstractChannelHandlerContext.java:308:in 
`invokeChannelRead'
io/netty/channel/AbstractChannelHandlerContext.java:294:in `fireChannelRead'
io/netty/handler/codec/ByteToMessageDecoder.java:244:in `channelRead'
io/netty/channel/AbstractChannelHandlerContext.java:308:in 
`invokeChannelRead'
io/netty/channel/AbstractChannelHandlerContext.java:294:in `fireChannelRead'
io/netty/channel/DefaultChannelPipeline.java:846:in `fireChannelRead'
io/netty/channel/nio/AbstractNioByteChannel.java:131:in `read'
io/netty/channel/nio/NioEventLoop.java:511:in `processSelectedKey'
io/netty/channel/nio/NioEventLoop.java:468:in `processSelectedKeysOptimized'
io/netty/channel/nio/NioEventLoop.java:382:in `processSelectedKeys'
io/netty/channel/nio/NioEventLoop.java:354:in `run'
io/netty/util/concurrent/SingleThreadEventExecutor.java:110:in `run'
java/lang/Thread.java:745:in `run'
./src/test/ruby/shell/group_shell_test.rb:43:in 
`test_Test_Basic_Group_Commands'
org/jruby/RubyProc.java:270:in `call'
org/jruby/RubyKernel.java:2105:in `send'
org/jruby/RubyArray.java:1620:in `each'
org/jruby/RubyArray.java:1620:in `each'

  2) Failure:
test_Test_bogus_arguments(Hbase::GroupShellTest)
[./src/test/ruby/shell/group_shell_test.rb:85:in `test_Test_bogus_arguments'
 org/jruby/RubyProc.java:270:in `call'
 org/jruby/RubyKernel.java:2105:in `send'
 org/jruby/RubyArray.java:1620:in `each'
 org/jruby/RubyArray.java:1620:in `each']:
 exception expected but was
Class: 
Message: <"org.apache.hadoop.hbase.exceptions.UnknownProtocolException: 
org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered 
master coprocessor service  found for name hbase.pb.GroupAdminService\n\tat 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:682)\n\tat
 org.apache.hadoop.hbase.  
protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:57964)\n\tat
 org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)\n\tat org.   
apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)\n\tat 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)\n\tat
 org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)\n\tat 
java.lang.Thread.run(Thread.java:745)\n">
{code}

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBA

[jira] [Updated] (HBASE-14976) Add RPC call queues to the web ui

2015-12-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14976:
--
Labels: beginner  (was: )

> Add RPC call queues to the web ui
> -
>
> Key: HBASE-14976
> URL: https://issues.apache.org/jira/browse/HBASE-14976
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Elliott Clark
>Priority: Minor
>  Labels: beginner
>
> The size of the call queue for the regionserver is a critical metric for seeing 
> whether requests are being processed too slowly. We should add the call queue 
> size to the UI under the queues tab.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14820) Region becomes unavailable after a region split is rolled back

2015-12-14 Thread Clara Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clara Xiong resolved HBASE-14820.
-
Resolution: Invalid

I confirmed this bug was introduced by our own new features. I fixed it in our own
branch. It will be OK when we push upstream.

> Region becomes unavailable after a region split is rolled back
> --
>
> Key: HBASE-14820
> URL: https://issues.apache.org/jira/browse/HBASE-14820
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.98.15
>Reporter: Clara Xiong
> Attachments: HBASE-14820-RegionServer.log, 
> HBASE-14820-testcase-0.98.patch, HBSE-14820-hmaster.log
>
>
> After the region server rolls back a timed out attempt of  region split, the 
> region becomes unavailable. 
> Symptoms:
> The RS displays the region open in the web UI.
> The meta table still points to the RS
> Requests for the regions receive a NotServingRegionException. 
> hbck reports 0 inconsistencies. 
> Moving the region fails. 
> Restarting the region server fixes the problem.
> We have see multiple occurrences which require operation intervention.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14804) HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute

2015-12-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14804:
---
Fix Version/s: 0.98.17

> HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute
> 
>
> Key: HBASE-14804
> URL: https://issues.apache.org/jira/browse/HBASE-14804
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Romil Choksi
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14804.v0-trunk.patch, HBASE-14804.v1-trunk.patch
>
>
> I am trying to create a new table and set the NORMALIZATION_ENABLED as true, 
> but seems like the argument NORMALIZATION_ENABLED is being ignored. And the 
> attribute NORMALIZATION_ENABLED is not displayed on doing a desc command on 
> that table
> {code}
> hbase(main):020:0> create 'test-table-4', 'cf', {NORMALIZATION_ENABLED => 
> 'true'}
> An argument ignored (unknown or overridden): NORMALIZATION_ENABLED
> 0 row(s) in 4.2670 seconds
> => Hbase::Table - test-table-4
> hbase(main):021:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC
> KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0430 seconds
> {code}
> However, on doing an alter command on that table we can set the 
> NORMALIZATION_ENABLED attribute for that table
> {code}
> hbase(main):022:0> alter 'test-table-4', {NORMALIZATION_ENABLED => 'true'}
> Unknown argument ignored: NORMALIZATION_ENABLED
> Updating all regions with the new schema...
> 1/1 regions updated.
> Done.
> 0 row(s) in 2.3640 seconds
> hbase(main):023:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4, {TABLE_ATTRIBUTES => {NORMALIZATION_ENABLED => 'true'}  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC
> KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0190 seconds
> {code}
> I think it would be better to have a single step process to enable 
> normalization while creating the table itself, rather than a two step process 
> to alter the table later on to enable normalization



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-12-14 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056717#comment-15056717
 ] 

Francis Liu commented on HBASE-6721:


Is this really how we're supposed to run the other shell unit tests? What if 
there's something that needs to be changed/fixed in TestShell?

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, immediateAssignments Sequence 
> Diagram.svg, randomAssignment Sequence Diagram.svg, retainAssignment Sequence 
> Diagram.svg, roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it, provides a client application a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14866) VerifyReplication should use peer configuration in peer connection

2015-12-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14866:
---
Attachment: HBASE-14866-0.98.patch

Attaching the 0.98 patch. The only API changes are the addition of static helper methods:

org.apache.hadoop.hbase.HBaseConfiguration (Public, Stable)
- HBaseConfiguration.createClusterConf ( Configuration baseConf, String clusterKey ) static : Configuration
- HBaseConfiguration.createClusterConf ( Configuration baseConf, String clusterKey, String overridePrefix ) static : Configuration
- HBaseConfiguration.setWithPrefix ( Configuration conf, String prefix, Iterable<Map.Entry<String,String>> properties ) static : void
- HBaseConfiguration.subset ( Configuration srcConf, String prefix ) static : Configuration

org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil (Public, Stable)
- TableMapReduceUtil.initCredentialsForCluster ( Job job, Configuration conf ) static : void
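For completeness, a minimal usage sketch of the new helpers in a VerifyReplication-style setting; the peer cluster key and the {{job}} variable are placeholders, not values from the patch:

{code}
// Sketch: build a Configuration for the peer cluster from its cluster key,
// preserving peer-specific properties, then obtain credentials for that cluster.
Configuration baseConf = HBaseConfiguration.create();
String peerClusterKey = "peer-zk1,peer-zk2,peer-zk3:2181:/hbase";  // placeholder
Configuration peerConf = HBaseConfiguration.createClusterConf(baseConf, peerClusterKey);
TableMapReduceUtil.initCredentialsForCluster(job, peerConf);       // 'job' is the MapReduce Job being configured
{code}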

> VerifyReplication should use peer configuration in peer connection
> --
>
> Key: HBASE-14866
> URL: https://issues.apache.org/jira/browse/HBASE-14866
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14866-0.98.patch, HBASE-14866.patch, 
> HBASE-14866_v1.patch, hbase-14866-branch-1-v1.patch, hbase-14866-v4.patch, 
> hbase-14866-v5.patch, hbase-14866-v6.patch, hbase-14866_v2.patch, 
> hbase-14866_v3.patch
>
>
> VerifyReplication uses the replication peer's configuration to construct the 
> ZooKeeper quorum address for the peer connection.  However, other 
> configuration properties in the peer's configuration are dropped.  It should 
> merge all configuration properties from the {{ReplicationPeerConfig}} when 
> creating the peer connection and obtaining credentials for the peer cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14866) VerifyReplication should use peer configuration in peer connection

2015-12-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14866:
---
Attachment: (was: HBASE-14866-0.98.patch)

> VerifyReplication should use peer configuration in peer connection
> --
>
> Key: HBASE-14866
> URL: https://issues.apache.org/jira/browse/HBASE-14866
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14866.patch, HBASE-14866_v1.patch, 
> hbase-14866-branch-1-v1.patch, hbase-14866-v4.patch, hbase-14866-v5.patch, 
> hbase-14866-v6.patch, hbase-14866_v2.patch, hbase-14866_v3.patch
>
>
> VerifyReplication uses the replication peer's configuration to construct the 
> ZooKeeper quorum address for the peer connection.  However, other 
> configuration properties in the peer's configuration are dropped.  It should 
> merge all configuration properties from the {{ReplicationPeerConfig}} when 
> creating the peer connection and obtaining credentials for the peer cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14866) VerifyReplication should use peer configuration in peer connection

2015-12-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14866:
---
Attachment: HBASE-14866-0.98.patch

> VerifyReplication should use peer configuration in peer connection
> --
>
> Key: HBASE-14866
> URL: https://issues.apache.org/jira/browse/HBASE-14866
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14866-0.98.patch, HBASE-14866.patch, 
> HBASE-14866_v1.patch, hbase-14866-branch-1-v1.patch, hbase-14866-v4.patch, 
> hbase-14866-v5.patch, hbase-14866-v6.patch, hbase-14866_v2.patch, 
> hbase-14866_v3.patch
>
>
> VerifyReplication uses the replication peer's configuration to construct the 
> ZooKeeper quorum address for the peer connection.  However, other 
> configuration properties in the peer's configuration are dropped.  It should 
> merge all configuration properties from the {{ReplicationPeerConfig}} when 
> creating the peer connection and obtaining credentials for the peer cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14960:
---
Attachment: HBASE-14960-0.98.patch

> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14960-0.98.patch, hbase-14960_v1.patch, 
> hbase-14960_v2.patch, hbase-14960_v3.patch, hbase-14960_v4.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory which is a Phoenix-specific one to configure 
> the priorities for index and system catalog table. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since hbase configuration is shared between 
> Phoenix-clients and HBase clients, having different configurations at the 
> client side is hard. 
> We can instead try to load up the RPCControllerFactory from conf, and if not 
> found, fall back to the default one (in case this is a pure HBase client). In 
> case Phoenix is already in the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used at the 
> server side. 
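A rough sketch of the load-with-fallback idea follows. This is not the actual patch; the config key and the default constructor are the standard client-side ones, but treat the details as illustrative:

{code}
// Sketch: try the configured controller factory class; if it is not on the
// classpath (pure-HBase client without Phoenix), fall back to the default.
public static RpcControllerFactory instantiate(Configuration conf) {
  String className = conf.get("hbase.rpc.controllerfactory.class",
      RpcControllerFactory.class.getName());
  try {
    Class<?> clazz = Class.forName(className);
    return (RpcControllerFactory) clazz.getConstructor(Configuration.class).newInstance(conf);
  } catch (ClassNotFoundException cnfe) {
    // Configured class (e.g. a Phoenix factory) is missing: use the plain default.
    return new RpcControllerFactory(conf);
  } catch (ReflectiveOperationException e) {
    throw new RuntimeException("Failed to instantiate " + className, e);
  }
}
{code}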



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14960:
---
Fix Version/s: 0.98.17

> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14960-0.98.patch, hbase-14960_v1.patch, 
> hbase-14960_v2.patch, hbase-14960_v3.patch, hbase-14960_v4.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory which is a Phoenix-specific one to configure 
> the priorities for index and system catalog table. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since hbase configuration is shared between 
> Phoenix-clients and HBase clients, having different configurations at the 
> client side is hard. 
> We can instead try to load up the RPCControllerFactory from conf, and if not 
> found, fall back to the default one (in case this is a pure HBase client). In 
> case Phoenix is already in the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used at the 
> server side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-12-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056829#comment-15056829
 ] 

Vladimir Rodionov commented on HBASE-14468:
---

[~saint@gmail.com]

I see this exception during HBase mini cluster shutdown.

{code}
2015-12-09 17:51:28,444 ERROR [RS:0;asf901:37225] 
hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer(145): Exception in run
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1429)
at java.util.HashMap$KeyIterator.next(HashMap.java:1453)
at java.util.AbstractCollection.toString(AbstractCollection.java:461)
at java.lang.String.valueOf(String.java:2994)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at org.apache.hadoop.hbase.ChoreService.shutdown(ChoreService.java:323)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.stopServiceThreads(HRegionServer.java:2127)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1084)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:156)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:108)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:140)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at 
org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:334)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:138)
at java.lang.Thread.run(Thread.java:745)
{code}

This seems to have nothing to do with the test in question. I cannot reproduce this 
issue in my local environment. Any suggestions?


> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has a low TTL and which is the 
> source of other data (after additional processing). Example: raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them into a CF with FIFO compaction policy; 
> periodically we run a task which creates rollup aggregates and compacts the 
> time-series, after which the original raw data can be discarded.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have a local SSD (1TB) which we can use as a block cache. No need for 
> compaction of the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network) and we do not evict hot data from the block cache. The result: improved 
> throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file number, 
> *hbase.hstore.blockingStoreFiles*, to a very large value.
>  
> h3. Limitations
> Do not use FIFO compaction if:
> * Table/CF has MIN_VERSIONS > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)
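A sketch of the accompanying table settings the description recommends, i.e. disabling splitting and raising the blocking store file count; the key names are standard HBase ones, the values are purely illustrative:

{code}
HTableDescriptor desc = new HTableDescriptor(tableName);
desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
    FIFOCompactionPolicy.class.getName());
// Disable region splitting for this table...
desc.setConfiguration(HConstants.HBASE_REGION_SPLIT_POLICY_KEY,
    DisabledRegionSplitPolicy.class.getName());
// ...and let many store files accumulate before writes are blocked.
desc.setConfiguration("hbase.hstore.blockingStoreFiles", "1000");
{code}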



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-12-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056833#comment-15056833
 ] 

Vladimir Rodionov commented on HBASE-14468:
---

Another observation: 

It fails under 1.8; my local JDK is the latest 1.7.

> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has a low TTL and which is the 
> source of other data (after additional processing). Example: raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them into a CF with FIFO compaction policy; 
> periodically we run a task which creates rollup aggregates and compacts the 
> time-series, after which the original raw data can be discarded.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have a local SSD (1TB) which we can use as a block cache. No need for 
> compaction of the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network) and we do not evict hot data from the block cache. The result: improved 
> throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file number, 
> *hbase.hstore.blockingStoreFiles*, to a very large value.
>  
> h3. Limitations
> Do not use FIFO compaction if:
> * Table/CF has MIN_VERSIONS > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7171) Initial web UI for region/memstore/storefiles details

2015-12-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056852#comment-15056852
 ] 

Sean Busbey commented on HBASE-7171:


there isn't supposed to be a branch-2 yet, AFAIK.

> Initial web UI for region/memstore/storefiles details
> -
>
> Key: HBASE-7171
> URL: https://issues.apache.org/jira/browse/HBASE-7171
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: stack
>Assignee: Mikhail Antonov
>  Labels: beginner
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-7171.patch, region_details.png, region_list.png, 
> storefile_details.png
>
>
> Click on a region in UI and get a listing of hfiles in HDFS and summary of 
> memstore content; click on an HFile and see its content



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7171) Initial web UI for region/memstore/storefiles details

2015-12-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056861#comment-15056861
 ] 

Sean Busbey commented on HBASE-7171:


please add a release note describing this new functionality.

> Initial web UI for region/memstore/storefiles details
> -
>
> Key: HBASE-7171
> URL: https://issues.apache.org/jira/browse/HBASE-7171
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: stack
>Assignee: Mikhail Antonov
>  Labels: beginner
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-7171.patch, region_details.png, region_list.png, 
> storefile_details.png
>
>
> Click on a region in UI and get a listing of hfiles in HDFS and summary of 
> memstore content; click on an HFile and see its content



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7171) Initial web UI for region/memstore/storefiles details

2015-12-14 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056865#comment-15056865
 ] 

Matteo Bertozzi commented on HBASE-7171:


branch-2 isn't supposed to exist yet. I guess it was a mistaken push from someone.

> Initial web UI for region/memstore/storefiles details
> -
>
> Key: HBASE-7171
> URL: https://issues.apache.org/jira/browse/HBASE-7171
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: stack
>Assignee: Mikhail Antonov
>  Labels: beginner
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-7171.patch, region_details.png, region_list.png, 
> storefile_details.png
>
>
> Click on a region in UI and get a listing of hfiles in HDFS and summary of 
> memstore content; click on an HFile and see its content



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-12-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056893#comment-15056893
 ] 

Vladimir Rodionov commented on HBASE-14468:
---

Ran the test multiple times under 1.8_65. No issues. Is this issue reproducible in 
the Apache build system?

> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has a low TTL and which is the 
> source of other data (after additional processing). Example: raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them into a CF with FIFO compaction policy; 
> periodically we run a task which creates rollup aggregates and compacts the 
> time-series, after which the original raw data can be discarded.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have a local SSD (1TB) which we can use as a block cache. No need for 
> compaction of the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network) and we do not evict hot data from the block cache. The result: improved 
> throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file number, 
> *hbase.hstore.blockingStoreFiles*, to a very large value.
>  
> h3. Limitations
> Do not use FIFO compaction if:
> * Table/CF has MIN_VERSIONS > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14936) CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()

2015-12-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056907#comment-15056907
 ] 

Hudson commented on HBASE-14936:


FAILURE: Integrated in HBase-Trunk_matrix #552 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/552/])
HBASE-14936 CombinedBlockCache should overwrite (chenheng: rev 
04622254f7209c5cfeadcfa137a97fbed161075a)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java


> CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
> --
>
> Key: HBASE-14936
> URL: https://issues.apache.org/jira/browse/HBASE-14936
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.1.2
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Fix For: 2.0.0, 1.2, 1.1, 1.3, 1.0
>
> Attachments: HBASE-14936-branch-1.0-1.1.patch, 
> HBASE-14936-branch-1.0-addendum.patch, HBASE-14936-trunk-v1.patch, 
> HBASE-14936-trunk-v2.patch, HBASE-14936-trunk.patch
>
>
> It seems CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod() as
> {code}
> public void rollMetricsPeriod() {
>   lruCacheStats.rollMetricsPeriod();
>   bucketCacheStats.rollMetricsPeriod();
> }
> {code}
> otherwise, CombinedBlockCache.getHitRatioPastNPeriods() and 
> CombinedBlockCache.getHitCachingRatioPastNPeriods() will always return 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056918#comment-15056918
 ] 

Vladimir Rodionov commented on HBASE-14951:
---

[~saint@gmail.com]. [~lhofhansl], [~enis] ping-ping

> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to the maximum number of log files. 
> The agreement was that we should calculate this number in code but still 
> honor the user's setting. 
> The maximum number of log files is now calculated as follows:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2 / LogRollSize
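To make the formula concrete, a back-of-the-envelope example with made-up but typical numbers (not taken from the patch):

{code}
// Illustrative only: 16 GB heap, global memstore ratio 0.4, 128 MB log roll size
// maxLogs = 16384 MB * 0.4 * 2 / 128 MB ≈ 102
{code}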



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10390) expose checkAndPut/Delete custom comparators in HTable

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-10390:
--
Summary: expose checkAndPut/Delete custom comparators in HTable  (was: 
expose checkAndPut/Delete custom comparators thru HTable)

> expose checkAndPut/Delete custom comparators in HTable
> --
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-10390-v1.patch, HBASE-10390-v2.patch, 
> HBASE-10390-v3.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, through 
> HTable there's no way to pass one; it always creates a BinaryComparator from 
> the value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-12-14 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056929#comment-15056929
 ] 

Enis Soztutar commented on HBASE-14468:
---

Seems {{ChoreService.shutdown()}} should be {{synchronized}}. Open new issue? 

> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has a low TTL and which is the 
> source of other data (after additional processing). Example: raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them into a CF with FIFO compaction policy; 
> periodically we run a task which creates rollup aggregates and compacts the 
> time-series, after which the original raw data can be discarded.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have a local SSD (1TB) which we can use as a block cache. No need for 
> compaction of the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network) and we do not evict hot data from the block cache. The result: improved 
> throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file number, 
> *hbase.hstore.blockingStoreFiles*, to a very large value.
>  
> h3. Limitations
> Do not use FIFO compaction if:
> * Table/CF has MIN_VERSIONS > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7171) Initial web UI for region/memstore/storefiles details

2015-12-14 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056937#comment-15056937
 ] 

Mikhail Antonov commented on HBASE-7171:


I've seen it here - https://github.com/apache/hbase/tree/branch-2

Yep, will add a release note.

> Initial web UI for region/memstore/storefiles details
> -
>
> Key: HBASE-7171
> URL: https://issues.apache.org/jira/browse/HBASE-7171
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: stack
>Assignee: Mikhail Antonov
>  Labels: beginner
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-7171.patch, region_details.png, region_list.png, 
> storefile_details.png
>
>
> Click on a region in UI and get a listing of hfiles in HDFS and summary of 
> memstore content; click on an HFile and see its content



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-7171) Initial web UI for region/memstore/storefiles details

2015-12-14 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-7171:
---
Release Note: 
HBASE-7171 adds 2 new pages to the region server Web UI to ease debugging and 
provide greater insight into the physical data layout.

Region names in the UI table listing all regions (on the RS status page) are now 
hyperlinks leading to a region detail page, which shows some aggregate memstore 
information (currently just memory used) along with the list of all Store Files 
(HFiles) in the region. Names of Store Files are also hyperlinks leading to a 
Store File detail page, which currently runs the 'hbase hfile' command behind the 
scenes and displays statistics about the store file.



> Initial web UI for region/memstore/storefiles details
> -
>
> Key: HBASE-7171
> URL: https://issues.apache.org/jira/browse/HBASE-7171
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: stack
>Assignee: Mikhail Antonov
>  Labels: beginner
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-7171.patch, region_details.png, region_list.png, 
> storefile_details.png
>
>
> Click on a region in UI and get a listing of hfiles in HDFS and summary of 
> memstore content; click on an HFile and see its content



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-12-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056962#comment-15056962
 ] 

Ted Yu commented on HBASE-6721:
---

Took a quick look at GroupShellTest, which misses master coprocessor setup similar 
to the following:
{code}
TEST_UTIL.getConfiguration().set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
    GroupAdminEndpoint.class.getName());
{code}

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, immediateAssignments Sequence 
> Diagram.svg, randomAssignment Sequence Diagram.svg, retainAssignment Sequence 
> Diagram.svg, roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it, provides a client application a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14947) WALProcedureStore improvements

2015-12-14 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056980#comment-15056980
 ] 

Matteo Bertozzi commented on HBASE-14947:
-

ping [~syuanjiang]

> WALProcedureStore improvements
> --
>
> Key: HBASE-14947
> URL: https://issues.apache.org/jira/browse/HBASE-14947
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Ashu Pachauri
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Attachments: HBASE-14947-v0.patch, HBASE-14947-v1.patch
>
>
> We ended up with a deadlock in HBASE-14943, with the storeTracker and lock 
> acquired in reverse order by syncLoop() and insert/update/delete. In 
> syncLoop() we don't need the lock when we try to roll or removeInactive. 
> Also, we can move the insert/update/delete tracker check into syncLoop(), 
> avoiding the extra lock operation.
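For readers less familiar with the failure mode: a classic lock-ordering deadlock of the kind described above, reduced to a toy sketch (the names are invented; this is not the WALProcedureStore code):

{code}
// Toy illustration of inverted lock order, not WALProcedureStore itself.
final Object lock = new Object();
final Object storeTracker = new Object();

Runnable syncLoop = () -> {
  synchronized (storeTracker) {        // takes storeTracker first...
    synchronized (lock) {              // ...then lock
      // roll / removeInactive work
    }
  }
};

Runnable insertUpdateDelete = () -> {
  synchronized (lock) {                // takes lock first...
    synchronized (storeTracker) {      // ...then storeTracker -> possible deadlock
      // tracker bookkeeping
    }
  }
};
{code}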



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-12-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056979#comment-15056979
 ] 

Vladimir Rodionov commented on HBASE-14468:
---

{quote}
Seems ChoreService.shutdown() should be synchronized. Open new issue?
{quote}

Not sure. This is called in HRegionServer.stopServiceThreads and is not 
supposed to be MT-safe. Unless we stop the same mini-cluster from multiple 
threads?

> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has a low TTL and which is the 
> source of other data (after additional processing). Example: raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them into a CF with FIFO compaction policy; 
> periodically we run a task which creates rollup aggregates and compacts the 
> time-series, after which the original raw data can be discarded.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have a local SSD (1TB) which we can use as a block cache. No need for 
> compaction of the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network) and we do not evict hot data from the block cache. The result: improved 
> throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file number, 
> *hbase.hstore.blockingStoreFiles*, to a very large value.
>  
> h3. Limitations
> Do not use FIFO compaction if:
> * Table/CF has MIN_VERSIONS > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10390) expose checkAndPut/Delete custom comparators in HTable

2015-12-14 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056991#comment-15056991
 ] 

Stephen Yuan Jiang commented on HBASE-10390:


+1 - LGTM

> expose checkAndPut/Delete custom comparators in HTable
> --
>
> Key: HBASE-10390
> URL: https://issues.apache.org/jira/browse/HBASE-10390
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Sergey Shelukhin
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-10390-v1.patch, HBASE-10390-v2.patch, 
> HBASE-10390-v3.patch
>
>
> checkAndPut/Delete appear to support custom comparators. However, through 
> HTable there's no way to pass one; it always creates a BinaryComparator from 
> the value. It would be good to expose the custom ones in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-12-14 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056992#comment-15056992
 ] 

Enis Soztutar commented on HBASE-14468:
---

I think the issue is that the HashMaps inside ChoreService are not thread-safe. 
All usages except for shutdown() are guarded by synchronized. Even one thread 
calling shutdown() while some other thread is accessing the same HashMaps 
will throw {{ConcurrentModificationException}}. 
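A minimal sketch of the fix direction being discussed, i.e. giving shutdown() the same monitor as the other accessors. This is a simplified stand-in class, not the real ChoreService:

{code}
// Simplified stand-in: all access to the map goes through synchronized methods,
// including shutdown(), so no thread iterates the HashMap while another mutates it.
class MiniChoreService {
  private final Map<String, ScheduledFuture<?>> scheduledChores = new HashMap<>();

  synchronized void schedule(String name, ScheduledFuture<?> future) {
    scheduledChores.put(name, future);
  }

  synchronized void cancel(String name) {
    scheduledChores.remove(name);
  }

  synchronized void shutdown() {
    // Iterating (or logging) the map here is now safe: no concurrent put/remove can interleave.
    for (ScheduledFuture<?> f : scheduledChores.values()) {
      f.cancel(true);
    }
    scheduledChores.clear();
  }
}
{code}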

> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has a low TTL and which is the 
> source of other data (after additional processing). Example: raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them into a CF with FIFO compaction policy; 
> periodically we run a task which creates rollup aggregates and compacts the 
> time-series, after which the original raw data can be discarded.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have a local SSD (1TB) which we can use as a block cache. No need for 
> compaction of the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network) and we do not evict hot data from the block cache. The result: improved 
> throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file number, 
> *hbase.hstore.blockingStoreFiles*, to a very large value.
>  
> h3. Limitations
> Do not use FIFO compaction if:
> * Table/CF has MIN_VERSIONS > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-12-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15057000#comment-15057000
 ] 

Vladimir Rodionov commented on HBASE-14468:
---

OK, [~enis]. I will open a JIRA.

> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has a low TTL and which is the 
> source of other data (after additional processing). Example: raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them into a CF with FIFO compaction policy; 
> periodically we run a task which creates rollup aggregates and compacts the 
> time-series, after which the original raw data can be discarded.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have a local SSD (1TB) which we can use as a block cache. No need for 
> compaction of the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network) and we do not evict hot data from the block cache. The result: improved 
> throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file number, 
> *hbase.hstore.blockingStoreFiles*, to a very large value.
>  
> h3. Limitations
> Do not use FIFO compaction if:
> * Table/CF has MIN_VERSIONS > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14977) ChoreService.shutdowm may result in ConcurrentModificationException

2015-12-14 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-14977:
-

 Summary: ChoreService.shutdowm may result in 
ConcurrentModificationException
 Key: HBASE-14977
 URL: https://issues.apache.org/jira/browse/HBASE-14977
 Project: HBase
  Issue Type: Bug
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov
Priority: Minor
 Fix For: 2.0.0


As seen in this test:
https://builds.apache.org/job/HBase-1.3/jdk=latest1.8,label=Hadoop/425/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.regionserver.compactions.TestFIFOCompactionPolicy-output.txt

We need to make the shutdown method synchronized to avoid this issue. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14977) ChoreService.shutdown may result in ConcurrentModificationException

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14977:
--
Summary: ChoreService.shutdown may result in 
ConcurrentModificationException  (was: ChoreService.shutdowm may result in 
ConcurrentModificationException)

> ChoreService.shutdown may result in ConcurrentModificationException
> ---
>
> Key: HBASE-14977
> URL: https://issues.apache.org/jira/browse/HBASE-14977
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
>
> As seen in this test:
> https://builds.apache.org/job/HBase-1.3/jdk=latest1.8,label=Hadoop/425/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.regionserver.compactions.TestFIFOCompactionPolicy-output.txt
> We need to make the shutdown method synchronized to avoid this issue. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14929) There is a space missing from Table "foo" is not currently available.

2015-12-14 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15057014#comment-15057014
 ] 

Jonathan Hsieh commented on HBASE-14929:


Thanks Carlos and Ted. lgtm. I'll commit.

> There is a space missing from Table "foo" is not currently available.
> -
>
> Key: HBASE-14929
> URL: https://issues.apache.org/jira/browse/HBASE-14929
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Malaska
>Assignee: Carlos A. Morillo
>Priority: Trivial
> Attachments: HBASE-14929.patch
>
>
> Go to the following line in LoadIncrementalHFiles.java
> throw new TableNotFoundException("Table " + table.getName() + "is not 
> currently available.");
> and add a space before is and after '
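Presumably the corrected line would then read:

{code}
throw new TableNotFoundException("Table " + table.getName() + " is not currently available.");
{code}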



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14977) ChoreService.shutdown may result in ConcurrentModificationException

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14977:
--
Attachment: HBASE-14977-v1.patch

Patch v1.

> ChoreService.shutdown may result in ConcurrentModificationException
> ---
>
> Key: HBASE-14977
> URL: https://issues.apache.org/jira/browse/HBASE-14977
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14977-v1.patch
>
>
> As seen in this test:
> https://builds.apache.org/job/HBase-1.3/jdk=latest1.8,label=Hadoop/425/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.regionserver.compactions.TestFIFOCompactionPolicy-output.txt
> We need to make the shutdown method synchronized to avoid this issue. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14977) ChoreService.shutdown may result in ConcurrentModificationException

2015-12-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14977:
--
Status: Patch Available  (was: Open)

> ChoreService.shutdown may result in ConcurrentModificationException
> ---
>
> Key: HBASE-14977
> URL: https://issues.apache.org/jira/browse/HBASE-14977
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14977-v1.patch
>
>
> As seen in this test:
> https://builds.apache.org/job/HBase-1.3/jdk=latest1.8,label=Hadoop/425/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.regionserver.compactions.TestFIFOCompactionPolicy-output.txt
> We need to make the shutdown method synchronized to avoid this issue. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14947) WALProcedureStore improvements

2015-12-14 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15057021#comment-15057021
 ] 

Ashu Pachauri commented on HBASE-14947:
---

[~mbertozzi] I had a look at the patch, looks good to me. +1

> WALProcedureStore improvements
> --
>
> Key: HBASE-14947
> URL: https://issues.apache.org/jira/browse/HBASE-14947
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Ashu Pachauri
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Attachments: HBASE-14947-v0.patch, HBASE-14947-v1.patch
>
>
> We ended up with a deadlock in HBASE-14943, with the storeTracker and lock 
> acquired in reverse order by syncLoop() and insert/update/delete. In 
> syncLoop() we don't need the lock when we try to roll or removeInactive. 
> Also, we can move the insert/update/delete tracker check into syncLoop(), 
> avoiding the extra lock operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

