[jira] [Commented] (HBASE-14227) Fold special cased MOB APIs into existing APIs

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088844#comment-15088844
 ] 

Hudson commented on HBASE-14227:


SUCCESS: Integrated in HBase-1.2-IT #384 (See 
[https://builds.apache.org/job/HBase-1.2-IT/384/])
HBASE-14227 Reduce the number of time row comparison is done in a Scan 
(ramkrishna: rev abcab52695e7737d78fe5285e22e2fe5caf78421)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java


> Fold special cased MOB APIs into existing APIs
> --
>
> Key: HBASE-14227
> URL: https://issues.apache.org/jira/browse/HBASE-14227
> Project: HBase
>  Issue Type: Task
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: Heng Chen
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-14227.patch, HBASE-14227_v1.patch, 
> HBASE-14227_v2.patch, HBASE-14227_v3.patch, HBASE-14227_v4.patch, 
> HBASE-14227_v5.patch, HBASE-14227_v5.patch, HBASE-14227_v6.patch, 
> HBASE-14227_v7.patch
>
>
> There are a number of APIs that came in with MOB that are not new actions for 
> HBase, simply new actions for a MOB implementation:
> - compactMob
> - compactMobs
> - majorCompactMob
> - majorCompactMobs
> - getMobCompactionState
> And in HBaseAdmin:
> - validateMobColumnFamily
> Remove these special cases from the Admin API where possible by folding them 
> into existing APIs.
> We definitely don't need one method for a singleton and another for 
> collections.
> Ideally we will not have any APIs named *Mob when finished; whether MOBs are 
> in use on a table or not should be largely an internal detail. Exposing it as 
> a schema option would be fine; this conforms to existing practice for other 
> features.
> Marking critical because I think removing the *Mob special cased APIs should 
> be a precondition for release of this feature either in 2.0 or as a backport.
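The folding proposed above can be sketched as follows. This is an illustrative stand-in only; the interface and method names here are hypothetical, not HBase's actual Admin API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Hypothetical sketch of folding singleton/collection method pairs into one
// API, in the spirit of replacing compactMob(...) plus compactMobs(...) with a
// single compact(...). Names are illustrative, not HBase's Admin interface.
interface AdminSketch {
    // One collection-taking method covers the "compactMobs" case.
    void compact(String table, Collection<String> families);

    // A varargs overload covers the singleton case by delegating to the
    // collection form, so no separate single-item method is needed.
    default void compact(String table, String... families) {
        compact(table, Arrays.asList(families));
    }
}
```

With this shape, callers pass one family or many through the same entry point, which is the "we definitely don't need one method for a singleton and another for collections" point made above.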



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13525) Update test-patch to leverage Apache Yetus

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088839#comment-15088839
 ] 

Hadoop QA commented on HBASE-13525:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} master passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 22s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} master passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
5s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 
23s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 100m 59s 
{color} | {color:green} root in the patch passed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 103m 13s 
{color} | {color:green} root in the patch passed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 254m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-08 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781129/HBASE-13525.2.patch |
| JIRA Issue | HBASE-13525 |
| Optional Tests |  asflicense  shellcheck  javac  javadoc  unit  xml  compile  
|
| uname | Linux d18cde2de622 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | master / f3ee6df |
| shellcheck | v0.4.1 |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build-rewrite/27/testReport/ |
| modules | C: . U: . |
| Max memory used | 174MB |
| Powered by | Apache Yetus 0.1.0   http://yetus.apache.org |
| Console outp

[jira] [Commented] (HBASE-14227) Fold special cased MOB APIs into existing APIs

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088837#comment-15088837
 ] 

Hudson commented on HBASE-14227:


FAILURE: Integrated in HBase-1.1-JDK8 #1722 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1722/])
HBASE-14227 Reduce the number of time row comparison is done in a Scan 
(ramkrishna: rev d2d3c149e6165b1308e565f3ee3136b10ac95b0b)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java







[jira] [Updated] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14970:
---
Status: Patch Available  (was: Open)

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1_1.patch, HBASE-14970_branch-1_2.patch, 
> HBASE-14970_branch-1_4.patch
>
>






[jira] [Updated] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14970:
---
Attachment: HBASE-14970_branch-1_4.patch

Updated patch for branch-1 with HBASE-15027 related changes. 







[jira] [Updated] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14970:
---
Status: Open  (was: Patch Available)

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1_1.patch, HBASE-14970_branch-1_2.patch
>
>






[jira] [Commented] (HBASE-13525) Update test-patch to leverage Apache Yetus

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088821#comment-15088821
 ] 

Hadoop QA commented on HBASE-13525:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12781129/HBASE-13525.2.patch
  against master branch at commit f3ee6df0f2d0955c2b334a9131eb3994c00af0c4.
  ATTACHMENT ID: 12781129

{color:red}-1 @author{color}.  The patch appears to contain 6 @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:red}-1 Anti-pattern{color}.  The patch appears to have anti-pattern 
where BYTES_COMPARATOR was omitted: +  warnings=$(${GREP} 'new 
TreeMap

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17168//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17168//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17168//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17168//console

This message is automatically generated.

> Update test-patch to leverage Apache Yetus
> --
>
> Key: HBASE-13525
> URL: https://issues.apache.org/jira/browse/HBASE-13525
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: jenkins
> Fix For: 2.0.0
>
> Attachments: HBASE-13525.1.patch, HBASE-13525.2.patch
>
>
> Once HADOOP-11746 lands over in Hadoop, incorporate its changes into our 
> test-patch. Most likely the easiest approach is to start with the Hadoop 
> version and add in the features we have locally that they don't.





[jira] [Commented] (HBASE-14213) Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in artifacts for 0.94

2016-01-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088815#comment-15088815
 ] 

Lars Hofhansl commented on HBASE-14213:
---

Turns out the large tarball size is due to HBASE-14747.


> Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in 
> artifacts for 0.94
> --
>
> Key: HBASE-14213
> URL: https://issues.apache.org/jira/browse/HBASE-14213
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 0.94.28
>
> Attachments: 14213-LICENSE.txt, 14213-addendum.txt, 
> 14213-combined.txt, 14213-part1.txt, 14213-part2.txt, 14213-part3.sh, 
> 14213-part4.sh, 14213-part5.sh, HBASE-14213.1.0.94.patch
>
>
> From tail of thread on HBASE-14085, opening a backport ticket for 0.94. Took 
> the liberty of assigning to [~busbey].





[jira] [Commented] (HBASE-14747) Make it possible to build Javadoc and xref reports for 0.94 again

2016-01-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088813#comment-15088813
 ] 

Lars Hofhansl commented on HBASE-14747:
---

This actually increases the size of the 0.94 tarball from 57MB to 255MB (much 
larger than any other - earlier or later - version).
I'm going to revert unless I hear a compelling reason not to. [~misty], 
[~stack].


> Make it possible to build Javadoc and xref reports for 0.94 again
> -
>
> Key: HBASE-14747
> URL: https://issues.apache.org/jira/browse/HBASE-14747
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 0.94.27
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 0.94.28
>
> Attachments: HBASE-14747-0.94.patch
>
>






[jira] [Updated] (HBASE-14213) Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in artifacts for 0.94

2016-01-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14213:
--
Attachment: 14213-addendum.txt

This fixes the build issues for me:
# not using shade plugin
# not referring to non-existent supplemental-model.xml
# not trying to filter missing LEGAL file

I still see the large tarball, but I saw this even after reverting this patch. 
So there is more work to do.






[jira] [Commented] (HBASE-14213) Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in artifacts for 0.94

2016-01-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088800#comment-15088800
 ] 

Lars Hofhansl commented on HBASE-14213:
---

Going to commit the addendum, unless I hear objections.






[jira] [Commented] (HBASE-14227) Fold special cased MOB APIs into existing APIs

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088796#comment-15088796
 ] 

Hudson commented on HBASE-14227:


SUCCESS: Integrated in HBase-1.3-IT #427 (See 
[https://builds.apache.org/job/HBase-1.3-IT/427/])
HBASE-14227 Reduce the number of time row comparison is done in a Scan 
(ramkrishna: rev e32e4df780bf1935d8421d70af2f3d84ae88f590)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java







[jira] [Commented] (HBASE-15077) Support OffheapKV write in compaction with out copying data on heap

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088782#comment-15088782
 ] 

ramkrishna.s.vasudevan commented on HBASE-15077:


bq.ByteBufferSupportedDataOutputStream may not be backed by BB it might be 
backed by array only. But it supports the BB write APIs. Am I making a point?
I went through the patch and the existing code. So previously the code 
retrieved bytes one by one and wrote each to the stream, and now we avoid that 
and copy the whole byte[] at once?
But the contents are still brought on heap, correct?
bq.userDataStream = new ByteBufferSupportedDataOutputStream(baosInMemory);
This is still backed by a ByteArrayOutputStream.
{code}
BufferGrabbingByteArrayOutputStream baosInMemoryCopy =
    new BufferGrabbingByteArrayOutputStream();
baosInMemory.writeTo(baosInMemoryCopy);
{code}
This got removed because the new ByteArrayOutputStream has getBuffer?  
In the case of a non-DBE block, when all the cells are off heap, can we not 
create a BB stream and write the BB underlying that stream directly to the 
FSOutputStream? Maybe that is future HDFS work: when FSOutputStream has a 
write() accepting a BB as a param, call that?


> Support OffheapKV write in compaction with out copying data on heap
> ---
>
> Key: HBASE-15077
> URL: https://issues.apache.org/jira/browse/HBASE-15077
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15077.patch
>
>
> HBASE-14832  is not enough to handle this.  Doing the remaining needed here.
> {code}
> if (cell instanceof ByteBufferedCell) {
>   out.writeShort(rowLen);
>   ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getRowByteBuffer(),
>       ((ByteBufferedCell) cell).getRowPosition(), rowLen);
>   out.writeByte(fLen);
>   ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getFamilyByteBuffer(),
>       ((ByteBufferedCell) cell).getFamilyPosition(), fLen);
>   ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getQualifierByteBuffer(),
>       ((ByteBufferedCell) cell).getQualifierPosition(), qLen);
> {code}
> We have done this but it is not really helping us!  
> In ByteBufferUtils#copyBufferToStream
> {code}
> public static void copyBufferToStream(OutputStream out, ByteBuffer in,
>   int offset, int length) throws IOException {
> if (in.hasArray()) {
>   out.write(in.array(), in.arrayOffset() + offset,
>   length);
> } else {
>   for (int i = 0; i < length; ++i) {
> out.write(toByte(in, offset + i));
>   }
> }
>   }
> {code}
> So for a DBB it is a very costly operation: writing byte by byte, reading 
> each byte onto the heap.
> Even if we use writeByteBuffer(OutputStream out, ByteBuffer b, int offset, 
> int length), it won't help us as the underlying stream is a 
> ByteArrayOutputStream and so we will end up in copying.
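To make the cost described above concrete, here is a standalone sketch (not HBase code; the class and helper names are made up) contrasting a byte-by-byte copy from a direct ByteBuffer with a chunked bulk copy through a temporary on-heap array:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

// Illustrative sketch only: contrasts the per-byte copy path that a direct
// (off-heap) ByteBuffer falls into with a chunked bulk copy. Not HBase code.
public class CopyBufferSketch {
    // One stream call and one ByteBuffer.get per byte: the costly path.
    static void copyByteByByte(OutputStream out, ByteBuffer in, int offset, int length)
            throws IOException {
        for (int i = 0; i < length; i++) {
            out.write(in.get(offset + i));
        }
    }

    // Bulk copy: bring bytes on heap in chunks, one stream call per chunk.
    static void copyBulk(OutputStream out, ByteBuffer in, int offset, int length)
            throws IOException {
        byte[] chunk = new byte[Math.min(length, 4096)];
        int copied = 0;
        while (copied < length) {
            int n = Math.min(chunk.length, length - copied);
            ByteBuffer dup = in.duplicate();   // leave caller's position untouched
            dup.position(offset + copied);
            dup.get(chunk, 0, n);
            out.write(chunk, 0, n);
            copied += n;
        }
    }

    public static void main(String[] args) throws IOException {
        ByteBuffer direct = ByteBuffer.allocateDirect(16);
        for (int i = 0; i < 16; i++) direct.put(i, (byte) i);
        ByteArrayOutputStream a = new ByteArrayOutputStream();
        ByteArrayOutputStream b = new ByteArrayOutputStream();
        copyByteByByte(a, direct, 4, 8);
        copyBulk(b, direct, 4, 8);
        System.out.println(java.util.Arrays.equals(a.toByteArray(), b.toByteArray()));
    }
}
```

Both paths produce the same bytes; the difference is the call count. The per-byte path issues one stream write (and one off-heap read) per byte, while the chunked path issues one per chunk. Note that both still bring the data on heap, which is the residual copy the description says remains when the target is a ByteArrayOutputStream.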





[jira] [Commented] (HBASE-14213) Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in artifacts for 0.94

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088766#comment-15088766
 ] 

Hudson commented on HBASE-14213:


FAILURE: Integrated in HBase-0.94 #1481 (See 
[https://builds.apache.org/job/HBase-0.94/1481/])
HBASE-14213 Ensure ASF policy compliant headers and correct LICENSE and (larsh: 
rev 498fd221d37064d146b5af246b494b8ca8cce526)
* src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
* 
src/main/java/org/apache/hadoop/hbase/master/handler/TableModifyFamilyHandler.java
* src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPCErrorHandler.java
* src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV1.java
* src/main/java/org/apache/hadoop/hbase/coprocessor/package-info.java
* src/main/java/org/apache/hadoop/hbase/client/Put.java
* src/main/java/org/apache/hadoop/hbase/io/hfile/CachedBlock.java
* src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerAccounting.java
* src/test/java/org/apache/hadoop/hbase/replication/ReplicationSourceDummy.java
* src/main/java/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.java
* src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithFilters.java
* src/test/java/org/apache/hadoop/hbase/master/TestRollingRestart.java
* src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java
* src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPResponseWrapper.java
* src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPResponseStream.java
* src/main/java/org/apache/hadoop/hbase/ipc/WritableRpcEngine.java
* src/main/java/org/apache/hadoop/hbase/mapred/HRegionPartitioner.java
* src/main/java/org/apache/hadoop/hbase/regionserver/RSStatusServlet.java
* src/main/java/org/apache/hadoop/hbase/monitoring/StateDumpServlet.java
* src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
* src/main/java/org/apache/hadoop/hbase/HServerInfo.java
* src/examples/thrift/DemoClient.php
* src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
* 
src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionRequest.java
* src/main/java/org/apache/hadoop/hbase/client/Operation.java
* src/test/java/org/apache/hadoop/hbase/ResourceChecker.java
* src/examples/thrift/DemoClient.rb
* src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java
* src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
* src/main/ruby/shell/commands/alter_status.rb
* src/main/java/org/apache/hadoop/hbase/client/HTableInterfaceFactory.java
* src/main/java/org/apache/hadoop/hbase/rest/ProtobufMessageHandler.java
* src/main/java/org/apache/hadoop/hbase/master/DeadServer.java
* src/main/java/org/apache/hadoop/hbase/io/hfile/InlineBlockWriter.java
* src/main/java/org/apache/hadoop/hbase/rest/model/TableSchemaModel.java
* 
src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerStatistics.java
* src/main/java/org/apache/hadoop/hbase/regionserver/OperationStatus.java
* src/main/java/org/apache/hadoop/hbase/filter/package-info.java
* src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverStacking.java
* src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java
* src/test/java/org/apache/hadoop/hbase/client/TestMetaScanner.java
* src/test/java/org/apache/hadoop/hbase/filter/TestRandomRowFilter.java
* 
src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java
* src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java
* 
src/main/java/org/apache/hadoop/hbase/monitoring/MemoryBoundedLogMessageBuffer.java
* 
src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithAbort.java
* src/main/java/org/apache/hadoop/hbase/util/GetJavaProperty.java
* 
src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkMetrics.java
* src/main/java/org/apache/hadoop/hbase/util/IncrementingEnvironmentEdge.java
* 
src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRootHandler.java
* src/test/java/org/apache/hadoop/hbase/rest/TestGZIPResponseWrapper.java
* src/main/java/org/apache/hadoop/hbase/util/FileSystemVersionException.java
* src/test/java/org/apache/hadoop/hbase/rest/model/TestCellSetModel.java
* src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* src/main/java/org/apache/hadoop/hbase/io/Reference.java
* bin/hbase-config.sh
* src/main/ruby/shell/commands/compact.rb
* src/main/java/org/apache/hadoop/hbase/util/Writables.java
* src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* src/test/java/org/apache/hadoop/hbase/rest/model/TestTableListModel.java
* 
src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactSelection.java
* src/test/java/org/apache/hadoop/hbase/TestGlobalMemStoreSize.java
* src/main/ruby/shell/commands/zk_dump.rb
* src/main/java/org/apache/hadoop/

[jira] [Commented] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088762#comment-15088762
 ] 

Hadoop QA commented on HBASE-15055:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12781116/HBASE-15055-v7.patch
  against master branch at commit f3ee6df0f2d0955c2b334a9131eb3994c00af0c4.
  ATTACHMENT ID: 12781116

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
new checkstyle errors. Check build console for list of new errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17167//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17167//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17167//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17167//console

This message is automatically generated.

> Major compaction is not triggered when both of TTL and 
> hbase.hstore.compaction.max.size are set
> ---
>
> Key: HBASE-15055
> URL: https://issues.apache.org/jira/browse/HBASE-15055
> Project: HBase
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Assignee: Eungsop Yoo
>Priority: Minor
> Attachments: HBASE-15055-v1.patch, HBASE-15055-v2.patch, 
> HBASE-15055-v3.patch, HBASE-15055-v4.patch, HBASE-15055-v5.patch, 
> HBASE-15055-v6.patch, HBASE-15055-v7.patch, HBASE-15055.patch
>
>
> Some large files may be skipped due to hbase.hstore.compaction.max.size during 
> candidate selection. This causes major compaction to be skipped, so TTL-expired 
> records still remain on disk and keep consuming disk space.
> To resolve this issue, I suggest skipping large files only if they contain no 
> TTL-expired records.
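As a minimal sketch of the proposed selection rule (the StoreFile stand-in and its fields are made up for illustration; this is not HBase's real selection code), a large file would only be excluded when it holds no TTL-expired cells:

```java
import java.util.ArrayList;
import java.util.List;

public class CompactionSelection {
  // Hypothetical stand-in for a store file: size plus the oldest cell timestamp.
  static class StoreFile {
    final long size;
    final long minTimestamp;
    StoreFile(long size, long minTimestamp) { this.size = size; this.minTimestamp = minTimestamp; }
    boolean hasExpiredRecords(long ttlMs, long now) { return now - minTimestamp > ttlMs; }
  }

  static List<StoreFile> selectCandidates(List<StoreFile> files, long maxSize, long ttlMs, long now) {
    List<StoreFile> candidates = new ArrayList<>();
    for (StoreFile f : files) {
      // Before the change: files over maxSize were always skipped, so a major
      // compaction that would purge expired cells could never be triggered.
      if (f.size > maxSize && !f.hasExpiredRecords(ttlMs, now)) {
        continue; // skip large files only when nothing in them has expired
      }
      candidates.add(f);
    }
    return candidates;
  }

  public static void main(String[] args) {
    long now = 1_000_000L, ttl = 10_000L, maxSize = 100L;
    List<StoreFile> files = new ArrayList<>();
    files.add(new StoreFile(500, now - 50_000)); // large, but has expired cells -> kept
    files.add(new StoreFile(500, now - 1_000));  // large, nothing expired -> skipped
    files.add(new StoreFile(50, now - 1_000));   // small -> kept
    System.out.println(selectCandidates(files, maxSize, ttl, now).size()); // 2
  }
}
```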



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088754#comment-15088754
 ] 

ramkrishna.s.vasudevan commented on HBASE-14970:


Let me revive this patch now that HBASE-15027 is committed to master. 

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1_1.patch, HBASE-14970_branch-1_2.patch
>
>






[jira] [Updated] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15027:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch, HBASE-15027_3.patch, 
> HBASE-15027_4.patch, HBASE-15027_5.patch, HBASE-15027_6.patch, 
> HBASE-15027_7.patch, HBASE-15027_8.patch
>
>
> As per the suggestion given over in HBASE-14970, if we need to create a single 
> thread pool service for the CompactedHFileDischarger, we need to create an 
> executor service at the RegionServer level, create discharger handler 
> threads (event handlers), and pass the event to the new executor service 
> created for the compacted-hfiles discharger. What should be the default 
> number of threads here?  If an HRS holds 100s of regions, will 10 threads be 
> enough?  This issue will try to resolve this with tests and discussions, and a 
> suitable patch will be updated in HBASE-14970 for branch-1 once this is 
> committed.
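The pooled-discharger idea above can be sketched with a plain JDK executor; all names, and the thread count of 10, are illustrative only, not the actual HBase implementation:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DischargerPoolSketch {
  public static void main(String[] args) throws InterruptedException {
    int threads = 10;        // the "will 10 threads be enough?" default
    int onlineRegions = 100; // e.g. an RS hosting 100 regions
    // One RegionServer-level pool shared by all regions, instead of a
    // thread (or chore) per region.
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    AtomicInteger cleaned = new AtomicInteger();
    for (int r = 0; r < onlineRegions; r++) {
      // each submitted event handler would close/archive that region's
      // compacted files; here it just counts the regions it serviced
      pool.execute(() -> cleaned.incrementAndGet());
    }
    pool.shutdown();
    pool.awaitTermination(30, TimeUnit.SECONDS);
    System.out.println(cleaned.get()); // 100
  }
}
```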





[jira] [Comment Edited] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088751#comment-15088751
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-15027 at 1/8/16 5:53 AM:


Thanks for the thorough reviews and comments, [~anoopsamjohn]. A lot of 
discussion happened before we arrived at this refactoring model. 


was (Author: ram_krish):
Thanks for the thorough reviews and comments. A lot of discussion happened 
before we arrived at this refactoring model. 

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch, HBASE-15027_3.patch, 
> HBASE-15027_4.patch, HBASE-15027_5.patch, HBASE-15027_6.patch, 
> HBASE-15027_7.patch, HBASE-15027_8.patch
>
>
> As per the suggestion given over in HBASE-14970, if we need to create a single 
> thread pool service for the CompactedHFileDischarger, we need to create an 
> executor service at the RegionServer level, create discharger handler 
> threads (event handlers), and pass the event to the new executor service 
> created for the compacted-hfiles discharger. What should be the default 
> number of threads here?  If an HRS holds 100s of regions, will 10 threads be 
> enough?  This issue will try to resolve this with tests and discussions, and a 
> suitable patch will be updated in HBASE-14970 for branch-1 once this is 
> committed.





[jira] [Updated] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15027:
---
Release Note: 
The property 'hbase.hfile.compactions.discharger.interval' has been renamed to 
'hbase.hfile.compaction.discharger.interval'; it describes the interval after 
which the compaction discharger chore service should run.
The property 'hbase.hfile.compaction.discharger.thread.count' describes the 
thread count that does the compaction discharge work. 
The CompactedHFilesDischarger is now a chore service started as part of the 
RegionServer; it iterates over all the online regions in that 
RS and uses the RegionServer's executor service to launch a set of threads that 
do the job of cleaning up compacted files. 

Thanks for the thorough reviews and comments. A lot of discussion happened 
before we arrived at this refactoring model. 
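For reference, the renamed properties from this release note would be set in hbase-site.xml along these lines (the values shown are illustrative, not documented defaults):

```xml
<!-- hbase-site.xml: property names per the release note above;
     the values here are illustrative examples only. -->
<property>
  <name>hbase.hfile.compaction.discharger.interval</name>
  <value>120000</value> <!-- ms between runs of the discharger chore -->
</property>
<property>
  <name>hbase.hfile.compaction.discharger.thread.count</name>
  <value>10</value> <!-- threads doing the compacted-file cleanup -->
</property>
```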

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch, HBASE-15027_3.patch, 
> HBASE-15027_4.patch, HBASE-15027_5.patch, HBASE-15027_6.patch, 
> HBASE-15027_7.patch, HBASE-15027_8.patch
>
>
> As per the suggestion given over in HBASE-14970, if we need to create a single 
> thread pool service for the CompactedHFileDischarger, we need to create an 
> executor service at the RegionServer level, create discharger handler 
> threads (event handlers), and pass the event to the new executor service 
> created for the compacted-hfiles discharger. What should be the default 
> number of threads here?  If an HRS holds 100s of regions, will 10 threads be 
> enough?  This issue will try to resolve this with tests and discussions, and a 
> suitable patch will be updated in HBASE-14970 for branch-1 once this is 
> committed.





[jira] [Commented] (HBASE-14221) Reduce the number of time row comparison is done in a Scan

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088736#comment-15088736
 ] 

ramkrishna.s.vasudevan commented on HBASE-14221:


Pushed to all 1.0+ branches. Added the required comments as well. Thanks for the 
reviews.
[~apurtell]
Do you want this patch in 0.98? 



> Reduce the number of time row comparison is done in a Scan
> --
>
> Key: HBASE-14221
> URL: https://issues.apache.org/jira/browse/HBASE-14221
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: 14221-0.98-takeALook.txt, HBASE-14221-branch-1.patch, 
> HBASE-14221.patch, HBASE-14221_1.patch, HBASE-14221_1.patch, 
> HBASE-14221_6.patch, HBASE-14221_9.patch, withmatchingRowspatch.png, 
> withoutmatchingRowspatch.png
>
>
> When we tried to do some profiling with the PE tool, we found this.
> Currently we do row comparisons in 3 places in a simple Scan case.
> 1) ScanQueryMatcher
> {code}
>int ret = this.rowComparator.compareRows(curCell, cell);
> if (!this.isReversed) {
>   if (ret <= -1) {
> return MatchCode.DONE;
>   } else if (ret >= 1) {
> // could optimize this, if necessary?
> // Could also be called SEEK_TO_CURRENT_ROW, but this
> // should be rare/never happens.
> return MatchCode.SEEK_NEXT_ROW;
>   }
> } else {
>   if (ret <= -1) {
> return MatchCode.SEEK_NEXT_ROW;
>   } else if (ret >= 1) {
> return MatchCode.DONE;
>   }
> }
> {code}
> 2) In StoreScanner next() while starting to scan the row
> {code}
> if (!scannerContext.hasAnyLimit(LimitScope.BETWEEN_CELLS) || 
> matcher.curCell == null ||
> isNewRow || !CellUtil.matchingRow(peeked, matcher.curCell)) {
>   this.countPerRow = 0;
>   matcher.setToNewRow(peeked);
> }
> {code}
> Particularly to see if we are in a new row.
> 3) In HRegion
> {code}
>   scannerContext.setKeepProgress(true);
>   heap.next(results, scannerContext);
>   scannerContext.setKeepProgress(tmpKeepProgress);
>   nextKv = heap.peek();
> moreCellsInRow = moreCellsInRow(nextKv, currentRowCell);
> {code}
> Here again there are cases where we need to be careful in the MultiCF case. I 
> was trying to solve this for the MultiCF case too, but it has a lot of cases 
> to solve. But at least for a single-CF case I think these comparisons can be 
> reduced.
> So for a single-CF case, in the SQM we are able to find whether we have 
> crossed a row using the code pasted above. That comparison is definitely needed.
> Now in the case of a single CF, the HRegion is going to have only one element 
> in the heap, so the 3rd comparison can surely be avoided if 
> StoreScanner.next() finished due to a MatchCode.DONE from the SQM.
> Coming to the 2nd compareRows that we do in StoreScanner.next() - even that 
> can be avoided if we know that the previous next() call finished due to a new 
> row. Doing all this, I found that compareRows, which was 19% in the profiler, 
> got reduced to 13%. Initially we can solve the single-CF case, which can 
> later be extended to MultiCF cases.





[jira] [Commented] (HBASE-15077) Support OffheapKV write in compaction with out copying data on heap

2016-01-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088718#comment-15088718
 ] 

stack commented on HBASE-15077:
---

bq. ByteBufferSupportedOutputStream may not be backed by a BB; it might be backed 
by an array only. But it supports the BB write APIs. Am I making a point?

That helps. It is not an OutputStream itself. It is just a marker that says 
this method is present:

{code}
@Override
public void write(ByteBuffer b, int off, int len) throws IOException {
  ByteBufferUtils.copyBufferToStream(out, b, off, len);
}
{code}

... and perhaps later other BB facility will be added.

Ok.

Could call it ByteBufferSupportOutputStream or ByteBufferWriter.

+1

> Support OffheapKV write in compaction with out copying data on heap
> ---
>
> Key: HBASE-15077
> URL: https://issues.apache.org/jira/browse/HBASE-15077
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15077.patch
>
>
> HBASE-14832 is not enough to handle this. Doing the remaining work needed here.
> {code}
> if (cell instanceof ByteBufferedCell) {
>   out.writeShort(rowLen);
>   ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getRowByteBuffer(),
>     ((ByteBufferedCell) cell).getRowPosition(), rowLen);
>   out.writeByte(fLen);
>   ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getFamilyByteBuffer(),
>     ((ByteBufferedCell) cell).getFamilyPosition(), fLen);
>   ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getQualifierByteBuffer(),
>     ((ByteBufferedCell) cell).getQualifierPosition(), qLen);
> {code}
> We have done this but it is not really helping us!  
> In ByteBufferUtils#copyBufferToStream
> {code}
> public static void copyBufferToStream(OutputStream out, ByteBuffer in,
>   int offset, int length) throws IOException {
> if (in.hasArray()) {
>   out.write(in.array(), in.arrayOffset() + offset,
>   length);
> } else {
>   for (int i = 0; i < length; ++i) {
> out.write(toByte(in, offset + i));
>   }
> }
>   }
>   {code}
> So for a DBB this is a very costly op: we write byte by byte, reading each 
> byte onto the heap.
> Even if we use writeByteBuffer(OutputStream out, ByteBuffer b, int offset, 
> int length), it won't help us, as the underlying stream is a 
> ByteArrayOutputStream and so we will end up copying.
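For illustration only, one common way around the byte-by-byte path for a direct buffer is to drain it through a small on-heap chunk. This is a sketch of that general idea, not the patch's actual approach:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class BufferCopy {
  // Hypothetical alternative to the byte-by-byte loop for non-array buffers:
  // bulk-get into a small on-heap chunk, then write the chunk to the stream.
  static void copyBufferToStream(OutputStream out, ByteBuffer in, int offset, int length)
      throws IOException {
    if (in.hasArray()) {
      out.write(in.array(), in.arrayOffset() + offset, length);
    } else {
      byte[] chunk = new byte[Math.min(length, 4096)];
      ByteBuffer dup = in.duplicate(); // do not disturb the caller's position/limit
      dup.position(offset).limit(offset + length);
      while (dup.hasRemaining()) {
        int n = Math.min(chunk.length, dup.remaining());
        dup.get(chunk, 0, n);   // one bulk copy off-heap -> heap
        out.write(chunk, 0, n); // one stream write per chunk
      }
    }
  }

  public static void main(String[] args) throws IOException {
    ByteBuffer direct = ByteBuffer.allocateDirect(16);
    for (int i = 0; i < 16; i++) direct.put((byte) i);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    copyBufferToStream(bos, direct, 4, 8);
    System.out.println(bos.size()); // 8
  }
}
```

This still copies, which is exactly the remaining problem the issue describes when the destination is a ByteArrayOutputStream.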





[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2016-01-07 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Status: Open  (was: Patch Available)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v21.patch, HBASE-14030-v22.patch, 
> HBASE-14030-v23.patch, HBASE-14030-v24.patch, HBASE-14030-v25.patch, 
> HBASE-14030-v26.patch, HBASE-14030-v27.patch, HBASE-14030-v28.patch, 
> HBASE-14030-v3.patch, HBASE-14030-v4.patch, HBASE-14030-v5.patch, 
> HBASE-14030-v6.patch, HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.





[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2016-01-07 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Attachment: HBASE-14030-v28.patch

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v21.patch, HBASE-14030-v22.patch, 
> HBASE-14030-v23.patch, HBASE-14030-v24.patch, HBASE-14030-v25.patch, 
> HBASE-14030-v26.patch, HBASE-14030-v27.patch, HBASE-14030-v28.patch, 
> HBASE-14030-v3.patch, HBASE-14030-v4.patch, HBASE-14030-v5.patch, 
> HBASE-14030-v6.patch, HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.





[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2016-01-07 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Status: Patch Available  (was: Open)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v21.patch, HBASE-14030-v22.patch, 
> HBASE-14030-v23.patch, HBASE-14030-v24.patch, HBASE-14030-v25.patch, 
> HBASE-14030-v26.patch, HBASE-14030-v27.patch, HBASE-14030-v28.patch, 
> HBASE-14030-v3.patch, HBASE-14030-v4.patch, HBASE-14030-v5.patch, 
> HBASE-14030-v6.patch, HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.





[jira] [Commented] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2016-01-07 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088711#comment-15088711
 ] 

Anoop Sam John commented on HBASE-15027:


bq. hbase.hfile.compactions.discharger.thread.count 
The config name can be hbase.hfile.compaction.discharger.thread.count.
The same goes for the other new configs.
This can be changed on commit.  +1
Please add release notes, as we have changed the config names and the possible 
total thread counts etc.
Thanks for the nice work.


> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch, HBASE-15027_3.patch, 
> HBASE-15027_4.patch, HBASE-15027_5.patch, HBASE-15027_6.patch, 
> HBASE-15027_7.patch, HBASE-15027_8.patch
>
>
> As per the suggestion given over in HBASE-14970, if we need to create a single 
> thread pool service for the CompactedHFileDischarger, we need to create an 
> executor service at the RegionServer level, create discharger handler 
> threads (event handlers), and pass the event to the new executor service 
> created for the compacted-hfiles discharger. What should be the default 
> number of threads here?  If an HRS holds 100s of regions, will 10 threads be 
> enough?  This issue will try to resolve this with tests and discussions, and a 
> suitable patch will be updated in HBASE-14970 for branch-1 once this is 
> committed.





[jira] [Commented] (HBASE-13525) Update test-patch to leverage Apache Yetus

2016-01-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088710#comment-15088710
 ] 

stack commented on HBASE-13525:
---

A link to download Yetus in the release is good by me. +1

> Update test-patch to leverage Apache Yetus
> --
>
> Key: HBASE-13525
> URL: https://issues.apache.org/jira/browse/HBASE-13525
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: jenkins
> Fix For: 2.0.0
>
> Attachments: HBASE-13525.1.patch, HBASE-13525.2.patch
>
>
> Once HADOOP-11746 lands over in Hadoop, incorporate its changes into our 
> test-patch. Most likely easiest approach is to start with the Hadoop version 
> and add in the features we have locally that they don't.





[jira] [Commented] (HBASE-14221) Reduce the number of time row comparison is done in a Scan

2016-01-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088708#comment-15088708
 ] 

stack commented on HBASE-14221:
---

Sweet. Add the above as a comment around the null setting on commit so someone 
reading the code knows why?

> Reduce the number of time row comparison is done in a Scan
> --
>
> Key: HBASE-14221
> URL: https://issues.apache.org/jira/browse/HBASE-14221
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: 14221-0.98-takeALook.txt, HBASE-14221-branch-1.patch, 
> HBASE-14221.patch, HBASE-14221_1.patch, HBASE-14221_1.patch, 
> HBASE-14221_6.patch, HBASE-14221_9.patch, withmatchingRowspatch.png, 
> withoutmatchingRowspatch.png
>
>
> When we tried to do some profiling with the PE tool, we found this.
> Currently we do row comparisons in 3 places in a simple Scan case.
> 1) ScanQueryMatcher
> {code}
>int ret = this.rowComparator.compareRows(curCell, cell);
> if (!this.isReversed) {
>   if (ret <= -1) {
> return MatchCode.DONE;
>   } else if (ret >= 1) {
> // could optimize this, if necessary?
> // Could also be called SEEK_TO_CURRENT_ROW, but this
> // should be rare/never happens.
> return MatchCode.SEEK_NEXT_ROW;
>   }
> } else {
>   if (ret <= -1) {
> return MatchCode.SEEK_NEXT_ROW;
>   } else if (ret >= 1) {
> return MatchCode.DONE;
>   }
> }
> {code}
> 2) In StoreScanner next() while starting to scan the row
> {code}
> if (!scannerContext.hasAnyLimit(LimitScope.BETWEEN_CELLS) || 
> matcher.curCell == null ||
> isNewRow || !CellUtil.matchingRow(peeked, matcher.curCell)) {
>   this.countPerRow = 0;
>   matcher.setToNewRow(peeked);
> }
> {code}
> Particularly to see if we are in a new row.
> 3) In HRegion
> {code}
>   scannerContext.setKeepProgress(true);
>   heap.next(results, scannerContext);
>   scannerContext.setKeepProgress(tmpKeepProgress);
>   nextKv = heap.peek();
> moreCellsInRow = moreCellsInRow(nextKv, currentRowCell);
> {code}
> Here again there are cases where we need to be careful in the MultiCF case. I 
> was trying to solve this for the MultiCF case too, but it has a lot of cases 
> to solve. But at least for a single-CF case I think these comparisons can be 
> reduced.
> So for a single-CF case, in the SQM we are able to find whether we have 
> crossed a row using the code pasted above. That comparison is definitely needed.
> Now in the case of a single CF, the HRegion is going to have only one element 
> in the heap, so the 3rd comparison can surely be avoided if 
> StoreScanner.next() finished due to a MatchCode.DONE from the SQM.
> Coming to the 2nd compareRows that we do in StoreScanner.next() - even that 
> can be avoided if we know that the previous next() call finished due to a new 
> row. Doing all this, I found that compareRows, which was 19% in the profiler, 
> got reduced to 13%. Initially we can solve the single-CF case, which can 
> later be extended to MultiCF cases.





[jira] [Commented] (HBASE-14221) Reduce the number of time row comparison is done in a Scan

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088700#comment-15088700
 ] 

ramkrishna.s.vasudevan commented on HBASE-14221:


bq. Whats with all the setting the row to null and the null check. Why that 
needed now?
It avoids the additional compareRows operations that we do once we move or 
seek to the next row. 
Take a case where either the filter or some column tracker says SEEK_NEXT_ROW. 
On seeing this, we are sure that we would have seeked to the next row if it is 
available (not null). Once it has seeked, in the loop we again do a 
compareRows() in SQM.match() and then say DONE. Now this patch avoids all such 
additional compares. 

Also, once we know we are DONE, we set curCell to null. Before this, when 
StoreScanner.next() was called for the next row it used to do one compare to 
identify that it had moved to the next row; now that is not needed. 
In my test run of TestMultiColumnScanner, this patch eliminated around 2k to 3k 
compares.  
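A toy model of that optimization (all names are illustrative; this is not the real ScanQueryMatcher): once the matcher reports DONE for a row it clears its current cell, so the next call can detect "new row" from a null check alone instead of a row comparison.

```java
public class NewRowCheckSketch {
  // Stand-ins for matcher.curCell and a comparison counter.
  static byte[] curCell = "row1".getBytes();
  static int comparisons = 0;

  static boolean isNewRow(byte[] peeked) {
    if (curCell == null) {
      return true; // DONE already told us we crossed the row: no compare needed
    }
    comparisons++;  // the row compare this change removes on the DONE path
    return !java.util.Arrays.equals(curCell, peeked);
  }

  public static void main(String[] args) {
    curCell = null; // matcher returned DONE for row1 and cleared its state
    System.out.println(isNewRow("row2".getBytes()) + " " + comparisons); // true 0
  }
}
```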

> Reduce the number of time row comparison is done in a Scan
> --
>
> Key: HBASE-14221
> URL: https://issues.apache.org/jira/browse/HBASE-14221
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: 14221-0.98-takeALook.txt, HBASE-14221-branch-1.patch, 
> HBASE-14221.patch, HBASE-14221_1.patch, HBASE-14221_1.patch, 
> HBASE-14221_6.patch, HBASE-14221_9.patch, withmatchingRowspatch.png, 
> withoutmatchingRowspatch.png
>
>
> When we tried to do some profiling with the PE tool, we found this.
> Currently we do row comparisons in 3 places in a simple Scan case.
> 1) ScanQueryMatcher
> {code}
>int ret = this.rowComparator.compareRows(curCell, cell);
> if (!this.isReversed) {
>   if (ret <= -1) {
> return MatchCode.DONE;
>   } else if (ret >= 1) {
> // could optimize this, if necessary?
> // Could also be called SEEK_TO_CURRENT_ROW, but this
> // should be rare/never happens.
> return MatchCode.SEEK_NEXT_ROW;
>   }
> } else {
>   if (ret <= -1) {
> return MatchCode.SEEK_NEXT_ROW;
>   } else if (ret >= 1) {
> return MatchCode.DONE;
>   }
> }
> {code}
> 2) In StoreScanner next() while starting to scan the row
> {code}
> if (!scannerContext.hasAnyLimit(LimitScope.BETWEEN_CELLS) || 
> matcher.curCell == null ||
> isNewRow || !CellUtil.matchingRow(peeked, matcher.curCell)) {
>   this.countPerRow = 0;
>   matcher.setToNewRow(peeked);
> }
> {code}
> Particularly to see if we are in a new row.
> 3) In HRegion
> {code}
>   scannerContext.setKeepProgress(true);
>   heap.next(results, scannerContext);
>   scannerContext.setKeepProgress(tmpKeepProgress);
>   nextKv = heap.peek();
> moreCellsInRow = moreCellsInRow(nextKv, currentRowCell);
> {code}
> Here again there are cases where we need to be careful in the MultiCF case. I 
> was trying to solve this for the MultiCF case too, but it has a lot of cases 
> to solve. But at least for a single-CF case I think these comparisons can be 
> reduced.
> So for a single-CF case, in the SQM we are able to find whether we have 
> crossed a row using the code pasted above. That comparison is definitely needed.
> Now in the case of a single CF, the HRegion is going to have only one element 
> in the heap, so the 3rd comparison can surely be avoided if 
> StoreScanner.next() finished due to a MatchCode.DONE from the SQM.
> Coming to the 2nd compareRows that we do in StoreScanner.next() - even that 
> can be avoided if we know that the previous next() call finished due to a new 
> row. Doing all this, I found that compareRows, which was 19% in the profiler, 
> got reduced to 13%. Initially we can solve the single-CF case, which can 
> later be extended to MultiCF cases.





[jira] [Commented] (HBASE-15079) TestMultiParallel.validateLoadedData AssertionError: null

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088701#comment-15088701
 ] 

Hudson commented on HBASE-15079:


SUCCESS: Integrated in HBase-1.2-IT #383 (See 
[https://builds.apache.org/job/HBase-1.2-IT/383/])
HBASE-15079 TestMultiParallel.validateLoadedData AssertionError: null (stack: 
rev 8cbe7c4b796cae0fc0e8fd8d0d511129d2e02515)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java


> TestMultiParallel.validateLoadedData AssertionError: null
> -
>
> Key: HBASE-15079
> URL: https://issues.apache.org/jira/browse/HBASE-15079
> Project: HBase
>  Issue Type: Bug
>  Components: Client, flakey, test
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14915-branch-1.2.patch
>
>
> Saw this failure on internal rig:
> {code}
> Stack Trace:
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.validateLoadedData(TestMultiParallel.java:676)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.doTestFlushCommits(TestMultiParallel.java:293)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.testFlushCommitsNoAbort(TestMultiParallel.java:241)
> {code}
> [~chenheng] actually added a fix for this failure over in HBASE-14915 but we 
> never committed it. Let me attach his patch here.





[jira] [Commented] (HBASE-15079) TestMultiParallel.validateLoadedData AssertionError: null

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088698#comment-15088698
 ] 

Hudson commented on HBASE-15079:


FAILURE: Integrated in HBase-Trunk_matrix #619 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/619/])
HBASE-15079 TestMultiParallel.validateLoadedData AssertionError: null (stack: 
rev f3ee6df0f2d0955c2b334a9131eb3994c00af0c4)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java


> TestMultiParallel.validateLoadedData AssertionError: null
> -
>
> Key: HBASE-15079
> URL: https://issues.apache.org/jira/browse/HBASE-15079
> Project: HBase
>  Issue Type: Bug
>  Components: Client, flakey, test
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14915-branch-1.2.patch
>
>
> Saw this failure on internal rig:
> {code}
> Stack Trace:
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.validateLoadedData(TestMultiParallel.java:676)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.doTestFlushCommits(TestMultiParallel.java:293)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.testFlushCommitsNoAbort(TestMultiParallel.java:241)
> {code}
> [~chenheng] actually added a fix for this failure over in HBASE-14915 but we 
> never committed it. Let me attach his patch here.





[jira] [Commented] (HBASE-15076) Add getScanner(Scan scan, List additionalScanners) API into Region interface

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088697#comment-15088697
 ] 

Hudson commented on HBASE-15076:


FAILURE: Integrated in HBase-Trunk_matrix #619 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/619/])
HBASE-15076 Add getScanner(Scan scan, List additionalScanners) API into Region 
interface (stack: rev 5bde960b9525f97d26f8917041d550eeb0e2b781)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java


> Add getScanner(Scan scan, List additionalScanners) API into 
> Region interface
> -
>
> Key: HBASE-15076
> URL: https://issues.apache.org/jira/browse/HBASE-15076
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: liu ming
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15076.patch
>
>
> HRegion method getScanner(Scan scan, List 
> additionalScanners, boolean copyCellsFromSharedMem) is protected.
> In Apache Trafodion, we need to invoke this getScanner method from a 
> coprocessor. Since it is protected, Trafodion must subclass the HRegion 
> class and override this method as public.
> It would be good to make this method public.
> It is very useful when one needs to combine several scan results in a 
> single scanner.
> thanks,
> Ming



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15076) Add getScanner(Scan scan, List additionalScanners) API into Region interface

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088690#comment-15088690
 ] 

ramkrishna.s.vasudevan commented on HBASE-15076:


Belated +1 on the patch. The contract should now be that coprocessors code 
against the Region interface. 

> Add getScanner(Scan scan, List additionalScanners) API into 
> Region interface
> -
>
> Key: HBASE-15076
> URL: https://issues.apache.org/jira/browse/HBASE-15076
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: liu ming
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15076.patch
>
>
> HRegion method getScanner(Scan scan, List 
> additionalScanners, boolean copyCellsFromSharedMem) is protected.
> In Apache Trafodion, we need to invoke this getScanner method from a 
> coprocessor. Since it is protected, Trafodion must subclass the HRegion 
> class and override this method as public.
> It would be good to make this method public.
> It is very useful when one needs to combine several scan results in a 
> single scanner.
> thanks,
> Ming



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15080) Remove synchronized block from MasterServiceStubMaker#releaseZooKeeperWatcher()

2016-01-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088683#comment-15088683
 ] 

Ted Yu commented on HBASE-15080:


Ran TestShell locally with patch which passed:
{code}
Running org.apache.hadoop.hbase.client.TestShell
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 110.31 sec - in 
org.apache.hadoop.hbase.client.TestShell
{code}

> Remove synchronized block from 
> MasterServiceStubMaker#releaseZooKeeperWatcher()
> ---
>
> Key: HBASE-15080
> URL: https://issues.apache.org/jira/browse/HBASE-15080
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.17
>
> Attachments: 15080-0.98.txt
>
>
> This is a follow up to HBASE-11460
> [~elserj] found that in 0.98, the synchronized block below should have been 
> taken out (as was done for branch-1 +):
> {code}
>   synchronized (masterAndZKLock) {
> if (keepAliveZookeeperUserCount.decrementAndGet() <= 0 ){
> {code}
> keepAliveZookeeperUserCount is an AtomicInteger. There is no need for the 
> synchronized block.
> This issue is to remove the synchronized block.
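The reasoning above can be sketched in isolation. The class and method names below are hypothetical stand-ins, not the actual MasterServiceStubMaker code: the point is only that AtomicInteger#decrementAndGet is a single atomic read-modify-write, so the enclosing synchronized block adds no safety for this check, assuming (as the patch does) that the lock guards nothing else at that point.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for the counter logic, not actual HBase code.
public class KeepAliveCounterSketch {
    private final AtomicInteger keepAliveZookeeperUserCount = new AtomicInteger();

    void retain() {
        keepAliveZookeeperUserCount.incrementAndGet();
    }

    // decrementAndGet() is itself an atomic read-modify-write, so the
    // decrement-and-compare needs no enclosing synchronized (masterAndZKLock)
    // block, provided the lock protects nothing else at this point.
    boolean release() {
        return keepAliveZookeeperUserCount.decrementAndGet() <= 0;
    }

    public static void main(String[] args) {
        KeepAliveCounterSketch c = new KeepAliveCounterSketch();
        c.retain();
        c.retain();
        if (c.release()) throw new AssertionError("count should still be positive");
        if (!c.release()) throw new AssertionError("count should have reached zero");
        System.out.println("ok");
    }
}
```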



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13525) Update test-patch to leverage Apache Yetus

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088675#comment-15088675
 ] 

Hadoop QA commented on HBASE-13525:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12781102/HBASE-13525.1.patch
  against master branch at commit f3ee6df0f2d0955c2b334a9131eb3994c00af0c4.
  ATTACHMENT ID: 12781102

{color:red}-1 @author{color}.  The patch appears to contain 6 @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:red}-1 Anti-pattern{color}.  The patch appears to have anti-pattern 
where BYTES_COMPARATOR was omitted: +  warnings=$(${GREP} 'new 
TreeMap

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17166//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17166//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17166//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17166//console

This message is automatically generated.
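For context, the anti-pattern being flagged is constructing `new TreeMap` keyed on byte arrays without supplying `Bytes.BYTES_COMPARATOR`. A minimal standalone sketch (with a hand-rolled comparator standing in for HBase's `org.apache.hadoop.hbase.util.Bytes.BYTES_COMPARATOR`) shows why the precommit check exists: `byte[]` does not implement `Comparable`, so a bare TreeMap with byte-array keys fails at runtime.

```java
import java.util.Comparator;
import java.util.TreeMap;

// Standalone illustration; BYTES_COMPARATOR here is a hand-rolled stand-in
// for org.apache.hadoop.hbase.util.Bytes.BYTES_COMPARATOR.
public class TreeMapAntiPatternSketch {
    static final Comparator<byte[]> BYTES_COMPARATOR = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int diff = (a[i] & 0xff) - (b[i] & 0xff);  // unsigned byte order
            if (diff != 0) return diff;
        }
        return a.length - b.length;
    };

    public static void main(String[] args) {
        // Anti-pattern: byte[] does not implement Comparable, so a TreeMap
        // built without a comparator throws ClassCastException on put.
        boolean threw = false;
        try {
            new TreeMap<byte[], String>().put("a".getBytes(), "x");
        } catch (ClassCastException expected) {
            threw = true;
        }
        if (!threw) throw new AssertionError("expected ClassCastException");

        // Correct form: supply a byte-wise comparator.
        TreeMap<byte[], String> map = new TreeMap<>(BYTES_COMPARATOR);
        map.put("b".getBytes(), "y");
        map.put("a".getBytes(), "x");
        if (!"x".equals(map.firstEntry().getValue())) {
            throw new AssertionError("keys not sorted byte-wise");
        }
        System.out.println("ok");
    }
}
```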

> Update test-patch to leverage Apache Yetus
> --
>
> Key: HBASE-13525
> URL: https://issues.apache.org/jira/browse/HBASE-13525
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: jenkins
> Fix For: 2.0.0
>
> Attachments: HBASE-13525.1.patch, HBASE-13525.2.patch
>
>
> Once HADOOP-11746 lands over in Hadoop, incorporate its changes into our 
> test-patch. Most likely easiest approach is to start with the Hadoop version 
> and add in the features we have locally that they don't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2016-01-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088672#comment-15088672
 ] 

ramkrishna.s.vasudevan commented on HBASE-15027:


No failures now. Good to commit?

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch, HBASE-15027_3.patch, 
> HBASE-15027_4.patch, HBASE-15027_5.patch, HBASE-15027_6.patch, 
> HBASE-15027_7.patch, HBASE-15027_8.patch
>
>
> As per the suggestion given over in HBASE-14970, if we need to create a 
> single thread pool service for the CompactedHFileDischarger, we need to 
> create an executor service at the RegionServer level, create discharger 
> handler threads (event handlers), and pass the event to the new executor 
> service that we create for the compacted hfiles discharger. What should be 
> the default number of threads here? If an HRS holds hundreds of regions, 
> will 10 threads be enough? This issue will try to resolve this with tests 
> and discussions, and a suitable patch will be updated in HBASE-14970 for 
> branch-1 once this is committed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15080) Remove synchronized block from MasterServiceStubMaker#releaseZooKeeperWatcher()

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088668#comment-15088668
 ] 

Hadoop QA commented on HBASE-15080:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12781113/15080-0.98.txt
  against 0.98 branch at commit f3ee6df0f2d0955c2b334a9131eb3994c00af0c4.
  ATTACHMENT ID: 12781113

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
28 warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestShell

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17165//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17165//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17165//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17165//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17165//console

This message is automatically generated.

> Remove synchronized block from 
> MasterServiceStubMaker#releaseZooKeeperWatcher()
> ---
>
> Key: HBASE-15080
> URL: https://issues.apache.org/jira/browse/HBASE-15080
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.17
>
> Attachments: 15080-0.98.txt
>
>
> This is a follow up to HBASE-11460
> [~elserj] found that in 0.98, the synchronized block below should have been 
> taken out (as was done for branch-1 +):
> {code}
>   synchronized (masterAndZKLock) {
> if (keepAliveZookeeperUserCount.decrementAndGet() <= 0 ){
> {code}
> keepAliveZookeeperUserCount is an AtomicInteger. There is no need for the 
> synchronized block.
> This issue is to remove the synchronized block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15077) Support OffheapKV write in compaction with out copying data on heap

2016-01-07 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-15077:
---
Description: 
HBASE-14832  is not enough to handle this.  Doing the remaining needed here.

{code}
if (cell instanceof ByteBufferedCell) {
  out.writeShort(rowLen);
  ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getRowByteBuffer(),
    ((ByteBufferedCell) cell).getRowPosition(), rowLen);
  out.writeByte(fLen);
  ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getFamilyByteBuffer(),
    ((ByteBufferedCell) cell).getFamilyPosition(), fLen);
  ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getQualifierByteBuffer(),
    ((ByteBufferedCell) cell).getQualifierPosition(), qLen);
{code}
We have done this but it is not really helping us!  
In ByteBufferUtils#copyBufferToStream
{code}
public static void copyBufferToStream(OutputStream out, ByteBuffer in,
  int offset, int length) throws IOException {
if (in.hasArray()) {
  out.write(in.array(), in.arrayOffset() + offset,
  length);
} else {
  for (int i = 0; i < length; ++i) {
out.write(toByte(in, offset + i));
  }
}
  }
  {code}
So for a DBB (direct ByteBuffer) this is a costly operation: each byte is 
read onto the heap and written out one at a time.
Even if we use writeByteBuffer(OutputStream out, ByteBuffer b, int offset, int 
length), it won't help, as the underlying stream is a ByteArrayOutputStream 
and so we will still end up copying.
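To make the cost concrete, here is a standalone sketch: the method body mirrors the quoted ByteBufferUtils#copyBufferToStream, with the internal toByte helper replaced by the equivalent absolute ByteBuffer.get. Heap-backed buffers get one bulk array write, while a direct buffer falls into the per-byte loop, paying a stream call for every byte it pulls onto the heap.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class CopyBufferToStreamSketch {
    // Same shape as the quoted ByteBufferUtils#copyBufferToStream; the
    // internal toByte(in, i) is replaced by the equivalent absolute in.get(i).
    static void copyBufferToStream(OutputStream out, ByteBuffer in,
            int offset, int length) throws IOException {
        if (in.hasArray()) {
            // Heap buffer: one bulk array write.
            out.write(in.array(), in.arrayOffset() + offset, length);
        } else {
            // Direct buffer: one stream call per byte, each reading the byte
            // from off-heap memory onto the heap.
            for (int i = 0; i < length; ++i) {
                out.write(in.get(offset + i));
            }
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "offheap-cell".getBytes();
        ByteBuffer direct = ByteBuffer.allocateDirect(data.length);
        direct.put(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copyBufferToStream(out, direct, 0, data.length);  // takes the slow path
        if (!"offheap-cell".equals(new String(out.toByteArray()))) {
            throw new AssertionError("copy mismatch");
        }
        System.out.println("ok");
    }
}
```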


  was:HBASE-14832  is not enough to handle this.  Doing the remaining needed 
here.


> Support OffheapKV write in compaction with out copying data on heap
> ---
>
> Key: HBASE-15077
> URL: https://issues.apache.org/jira/browse/HBASE-15077
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15077.patch
>
>
> HBASE-14832  is not enough to handle this.  Doing the remaining needed here.
> {code}
> if (cell instanceof ByteBufferedCell) {
>   out.writeShort(rowLen);
>   ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getRowByteBuffer(),
>     ((ByteBufferedCell) cell).getRowPosition(), rowLen);
>   out.writeByte(fLen);
>   ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getFamilyByteBuffer(),
>     ((ByteBufferedCell) cell).getFamilyPosition(), fLen);
>   ByteBufferUtils.copyBufferToStream(out, ((ByteBufferedCell) cell).getQualifierByteBuffer(),
>     ((ByteBufferedCell) cell).getQualifierPosition(), qLen);
> {code}
> We have done this but it is not really helping us!  
> In ByteBufferUtils#copyBufferToStream
> {code}
> public static void copyBufferToStream(OutputStream out, ByteBuffer in,
>   int offset, int length) throws IOException {
> if (in.hasArray()) {
>   out.write(in.array(), in.arrayOffset() + offset,
>   length);
> } else {
>   for (int i = 0; i < length; ++i) {
> out.write(toByte(in, offset + i));
>   }
> }
>   }
>   {code}
> So for a DBB (direct ByteBuffer) this is a costly operation: each byte is 
> read onto the heap and written out one at a time.
> Even if we use writeByteBuffer(OutputStream out, ByteBuffer b, int offset, 
> int length), it won't help, as the underlying stream is a 
> ByteArrayOutputStream and so we will still end up copying.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14030) HBase Backup/Restore Phase 1

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088649#comment-15088649
 ] 

Hadoop QA commented on HBASE-14030:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12781088/HBASE-14030-v27.patch
  against master branch at commit f3ee6df0f2d0955c2b334a9131eb3994c00af0c4.
  ATTACHMENT ID: 12781088

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 23 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:red}-1 javac{color}.  The applied patch generated 43 javac compiler 
warnings (more than the master's current 35 warnings).

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
new checkstyle errors. Check build console for list of new errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17164//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17164//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17164//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17164//console

This message is automatically generated.

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v21.patch, HBASE-14030-v22.patch, 
> HBASE-14030-v23.patch, HBASE-14030-v24.patch, HBASE-14030-v25.patch, 
> HBASE-14030-v26.patch, HBASE-14030-v27.patch, HBASE-14030-v3.patch, 
> HBASE-14030-v4.patch, HBASE-14030-v5.patch, HBASE-14030-v6.patch, 
> HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15079) TestMultiParallel.validateLoadedData AssertionError: null

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088647#comment-15088647
 ] 

Hudson commented on HBASE-15079:


FAILURE: Integrated in HBase-1.3-IT #426 (See 
[https://builds.apache.org/job/HBase-1.3-IT/426/])
HBASE-15079 TestMultiParallel.validateLoadedData AssertionError: null (stack: 
rev 90ca944e1bcfe18ef7ae846e234b4d88b178e0f1)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java


> TestMultiParallel.validateLoadedData AssertionError: null
> -
>
> Key: HBASE-15079
> URL: https://issues.apache.org/jira/browse/HBASE-15079
> Project: HBase
>  Issue Type: Bug
>  Components: Client, flakey, test
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14915-branch-1.2.patch
>
>
> Saw this failure on internal rig:
> {code}
> Stack Trace:
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.validateLoadedData(TestMultiParallel.java:676)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.doTestFlushCommits(TestMultiParallel.java:293)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.testFlushCommitsNoAbort(TestMultiParallel.java:241)
> {code}
> [~chenheng] actually added a fix for this failure over in HBASE-14915 but we 
> never committed it. Let me attach his patch here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15079) TestMultiParallel.validateLoadedData AssertionError: null

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088631#comment-15088631
 ] 

Hudson commented on HBASE-15079:


SUCCESS: Integrated in HBase-1.3 #485 (See 
[https://builds.apache.org/job/HBase-1.3/485/])
HBASE-15079 TestMultiParallel.validateLoadedData AssertionError: null (stack: 
rev 90ca944e1bcfe18ef7ae846e234b4d88b178e0f1)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java


> TestMultiParallel.validateLoadedData AssertionError: null
> -
>
> Key: HBASE-15079
> URL: https://issues.apache.org/jira/browse/HBASE-15079
> Project: HBase
>  Issue Type: Bug
>  Components: Client, flakey, test
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14915-branch-1.2.patch
>
>
> Saw this failure on internal rig:
> {code}
> Stack Trace:
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.validateLoadedData(TestMultiParallel.java:676)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.doTestFlushCommits(TestMultiParallel.java:293)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.testFlushCommitsNoAbort(TestMultiParallel.java:241)
> {code}
> [~chenheng] actually added a fix for this failure over in HBASE-14915 but we 
> never committed it. Let me attach his patch here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15071) Cleanup bypass semantic in MasterCoprocessorHost

2016-01-07 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-15071:
--
Attachment: HBASE-15071.patch

Attached a patch.

In MasterCoprocessorHost, only preMove/preBalance/preBalanceSwitch/preAssign/
preUnAssign now respect 'bypass'; the other interfaces simply ignore it. I 
have also added comments on the related interfaces.

Separately, preBalance/preAssign/preUnassign should be modified to follow the 
'bypass' semantic. For code compatibility, I have only noted this in a 'TODO'.

Any suggestions?
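For discussion, the contract being proposed can be modeled outside HBase. The toy host below is illustrative only (interface and method names are invented, not the real MasterCoprocessorHost API): only designated pre-hooks may veto an operation via bypass, and the host handles a veto the same way for every bypassable method instead of per-method special cases.

```java
import java.util.Arrays;
import java.util.List;

// Toy model of the proposed contract; names are illustrative, not the real
// MasterCoprocessorHost API.
public class BypassContractSketch {
    interface PreHook {
        // Returning true requests that the host skip the operation.
        boolean call(String operation);
    }

    // Uniform handling: any hook that bypasses stops the chain, and the host
    // reports "skipped" the same way for every bypassable operation.
    static boolean execOperation(List<PreHook> hooks, String operation) {
        for (PreHook hook : hooks) {
            if (hook.call(operation)) {
                return true;  // operation skipped by a coprocessor
            }
        }
        return false;  // proceed with the operation
    }

    public static void main(String[] args) {
        PreHook passive = op -> false;
        PreHook veto = op -> op.equals("move");

        if (execOperation(Arrays.asList(passive), "move")) {
            throw new AssertionError("nothing should bypass");
        }
        if (!execOperation(Arrays.asList(passive, veto), "move")) {
            throw new AssertionError("veto should bypass the move");
        }
        if (execOperation(Arrays.asList(passive, veto), "balance")) {
            throw new AssertionError("balance should proceed");
        }
        System.out.println("ok");
    }
}
```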


> Cleanup bypass semantic in MasterCoprocessorHost
> 
>
> Key: HBASE-15071
> URL: https://issues.apache.org/jira/browse/HBASE-15071
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Affects Versions: 2.0.0
>Reporter: stack
>Priority: Blocker
> Attachments: HBASE-15071.patch
>
>
> Lets decide on this one before we release 2.0.0.
> A bunch of methods in MasterCoprocessorHost on the 'pre' step allow returning 
> true which indicates the method invocation is not to proceed.
> Not all 'pre' steps do this. Just some.
> Seems a little arbitrary.
> How we skip out when we are not to proceed with the invocation is also a 
> little arbitrary.
> When a deleteColumn call is supposed to skip out, it returns a -1, a 
> non-procId. If we are to skip a balance call, we log that CP said skip and 
> then return false to indicate the balancer did not run (why?). Elsewhere we 
> just exit silently. In createNamespace we used to exit silently but 
> HBASE-14888 just changed it so we throw a BypassCoprocessorException 
> instead... 
> Lets make them all work the same way.
> (This issue comes of chat w/ Matteo)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15076) Add getScanner(Scan scan, List additionalScanners) API into Region interface

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088630#comment-15088630
 ] 

Hudson commented on HBASE-15076:


SUCCESS: Integrated in HBase-1.3 #485 (See 
[https://builds.apache.org/job/HBase-1.3/485/])
HBASE-15076 Add getScanner(Scan scan, List additionalScanners) API into 
Region interface (stack: rev 01ecd30906bef3f24b1753bbe2165c7ee40d6e01)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Add getScanner(Scan scan, List additionalScanners) API into 
> Region interface
> -
>
> Key: HBASE-15076
> URL: https://issues.apache.org/jira/browse/HBASE-15076
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: liu ming
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15076.patch
>
>
> HRegion method getScanner(Scan scan, List 
> additionalScanners, boolean copyCellsFromSharedMem) is protected.
> In Apache Trafodion, we need to invoke this getScanner method from a 
> coprocessor. Since it is protected, Trafodion must subclass the HRegion 
> class and override this method as public.
> It would be good to make this method public.
> It is very useful when one needs to combine several scan results in a 
> single scanner.
> thanks,
> Ming



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15080) Remove synchronized block from MasterServiceStubMaker#releaseZooKeeperWatcher()

2016-01-07 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088621#comment-15088621
 ] 

Josh Elser commented on HBASE-15080:


Looks right given what you and I talked about earlier, Ted!

Thanks for posting a patch.

> Remove synchronized block from 
> MasterServiceStubMaker#releaseZooKeeperWatcher()
> ---
>
> Key: HBASE-15080
> URL: https://issues.apache.org/jira/browse/HBASE-15080
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.17
>
> Attachments: 15080-0.98.txt
>
>
> This is a follow up to HBASE-11460
> [~elserj] found that in 0.98, the synchronized block below should have been 
> taken out (as was done for branch-1 +):
> {code}
>   synchronized (masterAndZKLock) {
> if (keepAliveZookeeperUserCount.decrementAndGet() <= 0 ){
> {code}
> keepAliveZookeeperUserCount is an AtomicInteger. There is no need for the 
> synchronized block.
> This issue is to remove the synchronized block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13525) Update test-patch to leverage Apache Yetus

2016-01-07 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13525:

Attachment: HBASE-13525.2.patch

-02 version I'll be committing shortly

  - Adds [~stack]'s suggested release note to the file as well.

> Update test-patch to leverage Apache Yetus
> --
>
> Key: HBASE-13525
> URL: https://issues.apache.org/jira/browse/HBASE-13525
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: jenkins
> Fix For: 2.0.0
>
> Attachments: HBASE-13525.1.patch, HBASE-13525.2.patch
>
>
> Once HADOOP-11746 lands over in Hadoop, incorporate its changes into our 
> test-patch. Most likely easiest approach is to start with the Hadoop version 
> and add in the features we have locally that they don't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15079) TestMultiParallel.validateLoadedData AssertionError: null

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088605#comment-15088605
 ] 

Hudson commented on HBASE-15079:


SUCCESS: Integrated in HBase-1.2 #493 (See 
[https://builds.apache.org/job/HBase-1.2/493/])
HBASE-15079 TestMultiParallel.validateLoadedData AssertionError: null (stack: 
rev 8cbe7c4b796cae0fc0e8fd8d0d511129d2e02515)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java


> TestMultiParallel.validateLoadedData AssertionError: null
> -
>
> Key: HBASE-15079
> URL: https://issues.apache.org/jira/browse/HBASE-15079
> Project: HBase
>  Issue Type: Bug
>  Components: Client, flakey, test
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14915-branch-1.2.patch
>
>
> Saw this failure on internal rig:
> {code}
> Stack Trace:
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.validateLoadedData(TestMultiParallel.java:676)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.doTestFlushCommits(TestMultiParallel.java:293)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.testFlushCommitsNoAbort(TestMultiParallel.java:241)
> {code}
> [~chenheng] actually added a fix for this failure over in HBASE-14915 but we 
> never committed it. Let me attach his patch here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15076) Add getScanner(Scan scan, List additionalScanners) API into Region interface

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088604#comment-15088604
 ] 

Hudson commented on HBASE-15076:


SUCCESS: Integrated in HBase-1.2 #493 (See 
[https://builds.apache.org/job/HBase-1.2/493/])
HBASE-15076 Add getScanner(Scan scan, List additionalScanners) API into 
Region interface (stack: rev 0836f4274b8b6ce79f86a4fab1c712d0bebc702e)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java


> Add getScanner(Scan scan, List additionalScanners) API into 
> Region interface
> -
>
> Key: HBASE-15076
> URL: https://issues.apache.org/jira/browse/HBASE-15076
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: liu ming
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15076.patch
>
>
> HRegion method getScanner(Scan scan, List 
> additionalScanners, boolean copyCellsFromSharedMem) is protected.
> In Apache Trafodion, we need to invoke this getScanner method from a 
> coprocessor. Since it is protected, Trafodion must subclass the HRegion 
> class and override this method as public.
> It would be good to make this method public.
> It is very useful when one needs to combine several scan results in a 
> single scanner.
> thanks,
> Ming



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14213) Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in artifacts for 0.94

2016-01-07 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088597#comment-15088597
 ] 

Sean Busbey commented on HBASE-14213:
-

I know the site goal failed for me before.

Maybe site includes those docs now? (are docs really ~200MB?)

> Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in 
> artifacts for 0.94
> --
>
> Key: HBASE-14213
> URL: https://issues.apache.org/jira/browse/HBASE-14213
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 0.94.28
>
> Attachments: 14213-LICENSE.txt, 14213-combined.txt, 14213-part1.txt, 
> 14213-part2.txt, 14213-part3.sh, 14213-part4.sh, 14213-part5.sh, 
> HBASE-14213.1.0.94.patch
>
>
> From tail of thread on HBASE-14085, opening a backport ticket for 0.94. Took 
> the liberty of assigning to [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15076) Add getScanner(Scan scan, List additionalScanners) API into Region interface

2016-01-07 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088590#comment-15088590
 ] 

Anoop Sam John commented on HBASE-15076:


No there is no private class issue now.
We have added the new method to Region interface which is exposed to CPs
{code}
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
@InterfaceStability.Evolving
public interface Region extends ConfigurationObserver {
{code}
So please use APIs from this interface only, and not from the HRegion class 
directly.

Thanks for the review and commit Stack.

> Add getScanner(Scan scan, List additionalScanners) API into 
> Region interface
> -
>
> Key: HBASE-15076
> URL: https://issues.apache.org/jira/browse/HBASE-15076
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: liu ming
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15076.patch
>
>
> HRegion method getScanner(Scan scan, List 
> additionalScanners, boolean copyCellsFromSharedMem) is protected.
> In Apache Trafodion, we need to invoke this getScanner method from a 
> coprocessor. Since it is protected, Trafodion must subclass the HRegion 
> class and override this method to make it public.
> It would be good to make this method public.
> It is very useful when one needs to combine several scan results in a single 
> scanner.
> thanks,
> Ming



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15077) Support OffheapKV write in compaction with out copying data on heap

2016-01-07 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088585#comment-15088585
 ] 

Anoop Sam John commented on HBASE-15077:


[~tedyu]
Only the int write is what we need now; if we need more later, we can add them 
then. These are our private interfaces.
ByteBufferSupportingOutputStream seems a better name. OK.
Taking performance numbers directly is a bit difficult, because for now this 
is an issue for compaction only, and only when the compacted file's block 
comes from the off-heap L2 cache. Later, when the write path is off-heap, it 
will have more impact, and always in flushes too.
[~ram_krish]
Yes, we have two streams here. It was like a java.io ByteArrayOS wrapped by a 
DataOS. We need to pass a DataOS only to other areas like DBE etc.
So here I have created both versions supporting the BB APIs.
Within the new DataOS, we again have another stream to which we have to write 
the BB bytes, and so we have to call BBUtil.

[~saint@gmail.com]
Sorry, I forgot to copy the details from that jira. Let me do that now.
No, only the line which calculates the max array size is copied. We have the 
same line with the same comment in another class too; I just copied it from 
there :-)
bq.Elsewhere we say ByteBufferedCell, etc. Why then do 
ByteBufferSupportedDataOutputStream and not ByteBufferedDOS?
When we say ByteBuffered, it is implicit that the stream is backed by a BB. 
Here it is different: a ByteBufferSupportedDataOutputStream may not be backed 
by a BB; it might be backed by an array only. But it supports the BB write 
APIs. Does that make sense?

Do you want me to write a test to make sure it works as we expect? I am not 
sure how. Any suggestions?

> Support OffheapKV write in compaction with out copying data on heap
> ---
>
> Key: HBASE-15077
> URL: https://issues.apache.org/jira/browse/HBASE-15077
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15077.patch
>
>
> HBASE-14832  is not enough to handle this.  Doing the remaining needed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14213) Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in artifacts for 0.94

2016-01-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088545#comment-15088545
 ] 

Lars Hofhansl commented on HBASE-14213:
---

Hmm... When fixed, I get a 250MB tarball (it was 57MB before). Somehow 
devapidoc and testapidoc are now included, but weren't before.

> Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in 
> artifacts for 0.94
> --
>
> Key: HBASE-14213
> URL: https://issues.apache.org/jira/browse/HBASE-14213
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 0.94.28
>
> Attachments: 14213-LICENSE.txt, 14213-combined.txt, 14213-part1.txt, 
> 14213-part2.txt, 14213-part3.sh, 14213-part4.sh, 14213-part5.sh, 
> HBASE-14213.1.0.94.patch
>
>
> From tail of thread on HBASE-14085, opening a backport ticket for 0.94. Took 
> the liberty of assigning to [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14213) Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in artifacts for 0.94

2016-01-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088517#comment-15088517
 ] 

Lars Hofhansl edited comment on HBASE-14213 at 1/8/16 1:10 AM:
---

Yesterday I only built with "-DskipTests install -Prelease". It seems that 
when actually building the assembly ("assembly:single"), the build is 
currently broken. Patch forthcoming.



was (Author: lhofhansl):
Yesterday I only built with "-DskipTests install". Seems when actually building 
the assembly ("assembly:single") the build is currently broken. Patch 
forthcoming.


> Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in 
> artifacts for 0.94
> --
>
> Key: HBASE-14213
> URL: https://issues.apache.org/jira/browse/HBASE-14213
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 0.94.28
>
> Attachments: 14213-LICENSE.txt, 14213-combined.txt, 14213-part1.txt, 
> 14213-part2.txt, 14213-part3.sh, 14213-part4.sh, 14213-part5.sh, 
> HBASE-14213.1.0.94.patch
>
>
> From tail of thread on HBASE-14085, opening a backport ticket for 0.94. Took 
> the liberty of assigning to [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14213) Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in artifacts for 0.94

2016-01-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088517#comment-15088517
 ] 

Lars Hofhansl commented on HBASE-14213:
---

Yesterday I only built with "-DskipTests install". It seems that when actually 
building the assembly ("assembly:single"), the build is currently broken. 
Patch forthcoming.


> Ensure ASF policy compliant headers and correct LICENSE and NOTICE files in 
> artifacts for 0.94
> --
>
> Key: HBASE-14213
> URL: https://issues.apache.org/jira/browse/HBASE-14213
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 0.94.28
>
> Attachments: 14213-LICENSE.txt, 14213-combined.txt, 14213-part1.txt, 
> 14213-part2.txt, 14213-part3.sh, 14213-part4.sh, 14213-part5.sh, 
> HBASE-14213.1.0.94.patch
>
>
> From tail of thread on HBASE-14085, opening a backport ticket for 0.94. Took 
> the liberty of assigning to [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15081) maven archetype: hbase-spark examples

2016-01-07 Thread Daniel Vimont (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088514#comment-15088514
 ] 

Daniel Vimont commented on HBASE-15081:
---

Just created this subproject at [~busbey]'s request.

> maven archetype: hbase-spark examples
> -
>
> Key: HBASE-15081
> URL: https://issues.apache.org/jira/browse/HBASE-15081
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, spark, Usability
>Reporter: Daniel Vimont
>
> Using the Java examples in hbase-spark subproject as starting point, create 
> archetype with standalone, fully-functioning Java code and corresponding test 
> code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13525) Update test-patch to leverage Apache Yetus

2016-01-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088511#comment-15088511
 ] 

Allen Wittenauer commented on HBASE-13525:
--

FWIW, my current plan for test-patch, etc., in Hadoop is in HADOOP-12651. It 
basically replaces them with wrappers that do downloads, etc.

> Update test-patch to leverage Apache Yetus
> --
>
> Key: HBASE-13525
> URL: https://issues.apache.org/jira/browse/HBASE-13525
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: jenkins
> Fix For: 2.0.0
>
> Attachments: HBASE-13525.1.patch
>
>
> Once HADOOP-11746 lands over in Hadoop, incorporate its changes into our 
> test-patch. Most likely easiest approach is to start with the Hadoop version 
> and add in the features we have locally that they don't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15081) maven archetype: hbase-spark examples

2016-01-07 Thread Daniel Vimont (JIRA)
Daniel Vimont created HBASE-15081:
-

 Summary: maven archetype: hbase-spark examples
 Key: HBASE-15081
 URL: https://issues.apache.org/jira/browse/HBASE-15081
 Project: HBase
  Issue Type: Sub-task
  Components: build, spark, Usability
Reporter: Daniel Vimont


Using the Java examples in hbase-spark subproject as starting point, create 
archetype with standalone, fully-functioning Java code and corresponding test 
code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-07 Thread Eungsop Yoo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HBASE-15055:

Status: Open  (was: Patch Available)

> Major compaction is not triggered when both of TTL and 
> hbase.hstore.compaction.max.size are set
> ---
>
> Key: HBASE-15055
> URL: https://issues.apache.org/jira/browse/HBASE-15055
> Project: HBase
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Assignee: Eungsop Yoo
>Priority: Minor
> Attachments: HBASE-15055-v1.patch, HBASE-15055-v2.patch, 
> HBASE-15055-v3.patch, HBASE-15055-v4.patch, HBASE-15055-v5.patch, 
> HBASE-15055-v6.patch, HBASE-15055-v7.patch, HBASE-15055.patch
>
>
> Some large files may be skipped by hbase.hstore.compaction.max.size in 
> candidate selection. This causes major compaction to be skipped, so 
> TTL-expired records still remain on disk and keep consuming disk space.
> To resolve this issue, I suggest skipping large files only if there are no 
> TTL-expired records.
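The selection rule described above can be sketched as follows. This is an illustrative stand-in, not the actual HBase compaction-policy code (class, field, and method names are invented for the example): a file larger than hbase.hstore.compaction.max.size is dropped from the candidate list only when it contains no TTL-expired cells, so a file holding expired data stays eligible for compaction.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "skip large files only if there are no TTL-expired records".
public class TtlAwareSelection {
  static class FileInfo {
    final long size;
    final boolean hasExpiredCells;
    FileInfo(long size, boolean hasExpiredCells) {
      this.size = size;
      this.hasExpiredCells = hasExpiredCells;
    }
  }

  public static List<FileInfo> select(List<FileInfo> files, long maxSize) {
    List<FileInfo> candidates = new ArrayList<>();
    for (FileInfo f : files) {
      if (f.size > maxSize && !f.hasExpiredCells) {
        continue; // safe to skip: over the size limit and nothing to reclaim
      }
      candidates.add(f); // small files, and large files with expired data, stay
    }
    return candidates;
  }

  public static void main(String[] args) {
    List<FileInfo> files = new ArrayList<>();
    files.add(new FileInfo(10, false));   // small: kept
    files.add(new FileInfo(500, false));  // large, nothing expired: skipped
    files.add(new FileInfo(500, true));   // large but has expired cells: kept
    if (select(files, 100).size() != 2) throw new AssertionError();
    System.out.println("ok");
  }
}
```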



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-07 Thread Eungsop Yoo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HBASE-15055:

Attachment: HBASE-15055-v7.patch

> Major compaction is not triggered when both of TTL and 
> hbase.hstore.compaction.max.size are set
> ---
>
> Key: HBASE-15055
> URL: https://issues.apache.org/jira/browse/HBASE-15055
> Project: HBase
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Assignee: Eungsop Yoo
>Priority: Minor
> Attachments: HBASE-15055-v1.patch, HBASE-15055-v2.patch, 
> HBASE-15055-v3.patch, HBASE-15055-v4.patch, HBASE-15055-v5.patch, 
> HBASE-15055-v6.patch, HBASE-15055-v7.patch, HBASE-15055.patch
>
>
> Some large files may be skipped by hbase.hstore.compaction.max.size in 
> candidate selection. This causes major compaction to be skipped, so 
> TTL-expired records still remain on disk and keep consuming disk space.
> To resolve this issue, I suggest skipping large files only if there are no 
> TTL-expired records.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-07 Thread Eungsop Yoo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HBASE-15055:

Status: Patch Available  (was: Open)

> Major compaction is not triggered when both of TTL and 
> hbase.hstore.compaction.max.size are set
> ---
>
> Key: HBASE-15055
> URL: https://issues.apache.org/jira/browse/HBASE-15055
> Project: HBase
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Assignee: Eungsop Yoo
>Priority: Minor
> Attachments: HBASE-15055-v1.patch, HBASE-15055-v2.patch, 
> HBASE-15055-v3.patch, HBASE-15055-v4.patch, HBASE-15055-v5.patch, 
> HBASE-15055-v6.patch, HBASE-15055-v7.patch, HBASE-15055.patch
>
>
> Some large files may be skipped by hbase.hstore.compaction.max.size in 
> candidate selection. This causes major compaction to be skipped, so 
> TTL-expired records still remain on disk and keep consuming disk space.
> To resolve this issue, I suggest skipping large files only if there are no 
> TTL-expired records.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14937) Make rpc call timeout for replication adaptive

2016-01-07 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088494#comment-15088494
 ] 

Ashish Singhi commented on HBASE-14937:
---

I tried to reproduce this, but so far I have not been able to see a case where 
the remote server is available and we are just sleeping.
Can you please give me the scenario? I would like to test it. Thanks.

> Make rpc call timeout for replication adaptive
> --
>
> Key: HBASE-14937
> URL: https://issues.apache.org/jira/browse/HBASE-14937
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>  Labels: replication
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14937.patch
>
>
> When peer cluster replication is disabled and a lot of writes are happening 
> in the active cluster, and later on peer cluster replication is enabled, 
> then there are chances that replication requests to the peer cluster may 
> time out.
> This is possible after HBASE-13153, and it can also happen when a large 
> amount of WAL data is still pending replication.
> Approach to this problem will be discussed in the comments.
> Approach to this problem will be discussed in the comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13525) Update test-patch to leverage Apache Yetus

2016-01-07 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088484#comment-15088484
 ] 

Sean Busbey commented on HBASE-13525:
-

bq. Suggest hoisting your how-to-run-it up to release note.

will do.

bq. Is test-patch missing from this patch? I see this patch removes 
test-patch.sh but it does not seem to include test-patch.

Not included. This patch presumes folks using it will install Yetus' 
test-patch. The rewritten Jenkins job, for example, keeps a cached install of 
the latest release. I'll include a link to "download yetus" in the release 
note to mitigate?

something like

{code}
You'll need a local installation of [Apache Yetus' precommit 
checker](http://yetus.apache.org/documentation/0.1.0/#yetus-precommit) to use 
this personality.

Download from: http://yetus.apache.org/downloads/ . You can either grab the 
source artifact and build from it, or use the convenience binaries provided on 
that download page.

To run against, e.g. HBASE-15074 you'd then do
```bash
test-patch --personality=dev-support/hbase-personality.sh HBASE-15074
```

If you want to skip the ~1 hour it'll take to do all the hadoop API checks, use
```bash
test-patch  --plugins=all,-hadoopcheck 
--personality=dev-support/hbase-personality.sh HBASE-15074
```


pass the `--jenkins` flag if you want to allow test-patch to destructively 
alter local working directory / branch in order to have things match what the 
issue patch requests.
{code}

{quote}
I see this in personality:
+ PATCH_BRANCH_DEFAULT=master
So, how do I run a patch against an old branch now? Does the trick where you 
add the branch name to the patch name work still?
{quote}

Yep. It works now for arbitrary branches/refs found in the repo, not just a 
whitelist. https://yetus.apache.org/documentation/0.1.0/precommit-patchnames/

{quote}
This list is impressive but a little OCD: 54 + HBASE_HADOOP_VERSIONS="2.4.0 
2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1" I suppose it makes sense to 
have this on hadoop-qa to catch the fail before it gets committed. We can turn 
this down on other builds?
{quote}

Yeah, I'd like to move most of these over to nightly and then just do i.e. 
2.4.0, 2.5.0, 2.6.1, and 2.7.1 in precommit.

{quote}
This seems to be untrue now: "even though we're not including that yet." 
regards zombie
{quote}

I added that after I commented out all the "check for zombies" code. We're no 
longer examining the process list for surefire after tests complete (though I 
suspect we're getting the same checks for them out of log parsing now).

> Update test-patch to leverage Apache Yetus
> --
>
> Key: HBASE-13525
> URL: https://issues.apache.org/jira/browse/HBASE-13525
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: jenkins
> Fix For: 2.0.0
>
> Attachments: HBASE-13525.1.patch
>
>
> Once HADOOP-11746 lands over in Hadoop, incorporate its changes into our 
> test-patch. Most likely easiest approach is to start with the Hadoop version 
> and add in the features we have locally that they don't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15080) Remove synchronized block from MasterServiceStubMaker#releaseZooKeeperWatcher()

2016-01-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15080:
---
Summary: Remove synchronized block from 
MasterServiceStubMaker#releaseZooKeeperWatcher()  (was: Remove synchronized 
keyword from MasterServiceStubMaker#releaseZooKeeperWatcher())

> Remove synchronized block from 
> MasterServiceStubMaker#releaseZooKeeperWatcher()
> ---
>
> Key: HBASE-15080
> URL: https://issues.apache.org/jira/browse/HBASE-15080
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.17
>
> Attachments: 15080-0.98.txt
>
>
> This is a follow up to HBASE-11460
> [~elserj] found that in 0.98, the synchronized block below should have been 
> taken out (as was done for branch-1 +):
> {code}
>   synchronized (masterAndZKLock) {
> if (keepAliveZookeeperUserCount.decrementAndGet() <= 0 ){
> {code}
> keepAliveZookeeperUserCount is an AtomicInteger. There is no need for the 
> synchronized block.
> This issue is to remove the synchronized block.
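The reasoning above can be demonstrated with plain JDK concurrency primitives. This is a minimal illustration, not HBase code: `AtomicInteger#decrementAndGet` is already an atomic read-modify-write, so wrapping it in a `synchronized` block adds lock contention without adding any safety.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Many threads decrement a shared AtomicInteger with no synchronized block;
// every decrement is still observed exactly once.
public class AtomicDecrementDemo {
  public static int drain(int threads, int countPerThread) {
    try {
      AtomicInteger counter = new AtomicInteger(threads * countPerThread);
      CountDownLatch done = new CountDownLatch(threads);
      for (int t = 0; t < threads; t++) {
        new Thread(() -> {
          for (int i = 0; i < countPerThread; i++) {
            counter.decrementAndGet(); // atomic on its own, no lock needed
          }
          done.countDown();
        }).start();
      }
      done.await();
      return counter.get();
    } catch (InterruptedException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    if (drain(8, 10000) != 0) throw new AssertionError();
    System.out.println("ok");
  }
}
```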



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13506) AES-GCM cipher support where available

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13506:
---
Fix Version/s: (was: 1.0.4)
   (was: 0.98.17)
   (was: 1.1.4)
   (was: 1.2.1)
   0.98.18

> AES-GCM cipher support where available
> --
>
> Key: HBASE-13506
> URL: https://issues.apache.org/jira/browse/HBASE-13506
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
>
> The initial encryption drop only had AES-CTR support because authenticated 
> modes such as GCM are only available in Java 7 and up, and our trunk at the 
> time was targeted at Java 6. However we can optionally use AES-GCM cipher 
> support where available. For HBase 1.0 and up, Java 7 is now the minimum so 
> use of AES-GCM can go in directly. It's probably possible to add support in 
> 0.98 too using reflection for cipher object initialization. 
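For reference, a minimal AES-GCM round trip with the standard JCE APIs looks like the sketch below. This is just the underlying cipher usage, not the HBase encryption codec; the `GCMParameterSpec` API appeared in Java 7, while the built-in SunJCE GCM implementation shipped with Java 8.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Encrypt-then-decrypt with AES/GCM; decryption also verifies the auth tag.
public class AesGcmDemo {
  public static byte[] roundTrip(byte[] plaintext) {
    try {
      KeyGenerator kg = KeyGenerator.getInstance("AES");
      kg.init(128);
      SecretKey key = kg.generateKey();

      byte[] iv = new byte[12]; // 96-bit IV, the recommended size for GCM
      new SecureRandom().nextBytes(iv);
      GCMParameterSpec spec = new GCMParameterSpec(128, iv); // 128-bit tag

      Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
      enc.init(Cipher.ENCRYPT_MODE, key, spec);
      byte[] ciphertext = enc.doFinal(plaintext); // tag is appended

      Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
      dec.init(Cipher.DECRYPT_MODE, key, spec); // throws if the tag is bad
      return dec.doFinal(ciphertext);
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    byte[] msg = "hello".getBytes(StandardCharsets.UTF_8);
    if (!Arrays.equals(roundTrip(msg), msg)) throw new AssertionError();
    System.out.println("ok");
  }
}
```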



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13504) Alias current AES cipher as AES-CTR

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13504:
---
Fix Version/s: (was: 1.0.4)
   (was: 0.98.17)
   (was: 1.1.4)
   (was: 1.2.1)
   0.98.18

> Alias current AES cipher as AES-CTR
> ---
>
> Key: HBASE-13504
> URL: https://issues.apache.org/jira/browse/HBASE-13504
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
>
> Alias the current cipher with the name "AES" to the name "AES-CTR".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12890) Provide a way to throttle the number of regions moved by the balancer

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12890:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Provide a way to throttle the number of regions moved by the balancer
> -
>
> Key: HBASE-12890
> URL: https://issues.apache.org/jira/browse/HBASE-12890
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.10
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
> Attachments: HBASE-12890.patch
>
>
> We have a very large cluster and we frequently add and remove quite a few 
> regionservers from our cluster.  Whenever we do this, the balancer moves 
> thousands of regions at once.  Instead, we provide a configuration 
> parameter: hbase.balancer.max.regions.  This limits the number of regions 
> that are balanced per iteration.
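The throttle described above can be sketched as follows. The property name hbase.balancer.max.regions comes from the description; the class and method names here are illustrative, not actual HBase code. The balancer computes its full set of region move plans as usual, then executes only the first N moves per iteration.

```java
import java.util.Arrays;
import java.util.List;

// Cap the number of region move plans executed in one balancer iteration.
public class BalancerThrottle {
  public static <T> List<T> capMoves(List<T> plans, int maxRegions) {
    if (maxRegions < 0 || plans.size() <= maxRegions) {
      return plans; // treat a negative limit as "unlimited"
    }
    return plans.subList(0, maxRegions);
  }

  public static void main(String[] args) {
    List<String> plans = Arrays.asList("move-r1", "move-r2", "move-r3", "move-r4");
    if (capMoves(plans, 2).size() != 2) throw new AssertionError();
    if (capMoves(plans, -1).size() != 4) throw new AssertionError();
    System.out.println("ok");
  }
}
```

The remaining moves are not lost; they are simply picked up again the next time the balancer runs, so the cluster converges gradually instead of moving thousands of regions at once.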



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12148:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Remove TimeRangeTracker as point of contention when many threads writing a 
> Store
> 
>
> Key: HBASE-12148
> URL: https://issues.apache.org/jira/browse/HBASE-12148
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 2.0.0, 0.99.1
>Reporter: stack
>Assignee: John Leach
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
> Attachments: 
> 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 
> 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, 
> HBASE-12148.txt, HBASE-12148V2.txt, Screen Shot 2014-10-01 at 3.39.46 PM.png, 
> Screen Shot 2014-10-01 at 3.41.07 PM.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13336) Consistent rules for security meta table protections

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13336:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Consistent rules for security meta table protections
> 
>
> Key: HBASE-13336
> URL: https://issues.apache.org/jira/browse/HBASE-13336
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
> Attachments: HBASE-13336.patch, HBASE-13336_v2.patch
>
>
> The AccessController and VisibilityController do different things regarding 
> protecting their meta tables. The AC allows schema changes and disable/enable 
> if the user has permission. The VC unconditionally disallows all admin 
> actions. Generally, bad things will happen if these meta tables are damaged, 
> disabled, or dropped. The likely outcome is random frequent (or constant) 
> server side op failures with nasty stack traces. On the other hand some 
> things like column family and table attribute changes can have valid use 
> cases. We should have consistent and sensible rules for protecting security 
> meta tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13031) Ability to snapshot based on a key range

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13031:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Ability to snapshot based on a key range
> 
>
> Key: HBASE-13031
> URL: https://issues.apache.org/jira/browse/HBASE-13031
> Project: HBase
>  Issue Type: Improvement
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
> Attachments: HBASE-13031-v1.patch, HBASE-13031.patch
>
>
> Posted on the mailing list, and it seems some people are interested.  A 
> little background for everyone.
> We have a very large table, we would like to snapshot and transfer the data 
> to another cluster (compressed data is always better to ship).  Our problem 
> lies in the fact it could take many weeks to transfer all of the data and 
> during that time with major compactions, the data stored in dfs has the 
> potential to double which would cause us to run out of disk space.
> So we were thinking about allowing the ability to snapshot a specific key 
> range.  
> Ideally I feel the approach is that the user would specify a start and stop 
> key, those would be associated with a region boundary.  If between the time 
> the user submits the request and the snapshot is taken the boundaries change 
> (due to merging or splitting of regions) the snapshot should fail.
> We would know which regions to snapshot and if those changed between when the 
> request was submitted and the regions locked, the snapshot could simply fail 
> and the user would try again, instead of potentially giving the user more / 
> less than what they had anticipated.  I was planning on storing the start / 
> stop key in the SnapshotDescription, and from there it looks pretty 
> straightforward: we just have to change the verifier code to accommodate the 
> key ranges.
> If this design sounds good to anyone, or if I am overlooking anything please 
> let me know.  Once we agree on the design, I'll write and submit the patches.
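The fail-if-boundaries-changed rule proposed above reduces to a simple check. This is an illustrative sketch with invented names, not actual HBase snapshot code: the region boundaries captured at request time must match the boundaries observed when the regions are locked, otherwise the snapshot fails and the user retries.

```java
import java.util.Arrays;
import java.util.List;

// Compare the region boundary keys seen at request time against those seen
// when the regions are locked; any split or merge in between changes the list.
public class KeyRangeSnapshotCheck {
  public static boolean boundariesUnchanged(List<String> atRequest, List<String> atLock) {
    return atRequest.equals(atLock);
  }

  public static void main(String[] args) {
    List<String> before = Arrays.asList("", "b", "m", "");
    // unchanged boundaries: snapshot may proceed
    if (!boundariesUnchanged(before, Arrays.asList("", "b", "m", ""))) {
      throw new AssertionError();
    }
    // a split added boundary "g": snapshot must fail so the user can retry
    if (boundariesUnchanged(before, Arrays.asList("", "b", "g", "m", ""))) {
      throw new AssertionError();
    }
    System.out.println("ok");
  }
}
```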



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13096) NPE from SecureWALCellCodec$EncryptedKvEncoder#write when using WAL encryption and Phoenix secondary indexes

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13096:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> NPE from SecureWALCellCodec$EncryptedKvEncoder#write when using WAL 
> encryption and Phoenix secondary indexes
> 
>
> Key: HBASE-13096
> URL: https://issues.apache.org/jira/browse/HBASE-13096
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>  Labels: phoenix
> Fix For: 0.98.18
>
>
> On user@phoenix Dhavi Rami reported:
> {quote}
> I tried using phoenix in hBase with Transparent Encryption of Data At Rest 
> enabled ( AES encryption) 
> Works fine for a table with primary key column.
> But it doesn't work if I create a secondary index on that table. I tried to 
> dig deep into the problem and found that WAL file encryption throws an 
> exception when I have a global secondary index created on my mutable table.
> Following is the error I was getting on one of the region server.
> {noformat}
> 2015-02-20 10:44:48,768 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: UNEXPECTED
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:767)
> at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:754)
> at org.apache.hadoop.hbase.KeyValue.getKeyLength(KeyValue.java:1253)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec$EncryptedKvEncoder.write(SecureWALCellCodec.java:194)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:117)
> at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$AsyncWriter.run(FSHLog.java:1137)
> at java.lang.Thread.run(Thread.java:745)
> 2015-02-20 10:44:48,776 INFO org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> regionserver60020-WAL.AsyncWriter exiting
> {noformat}
> I had to disable WAL encryption, and it started working fine with secondary 
> Index. So Hfile encryption works with secondary index but WAL encryption 
> doesn't work.
> {quote}
> Parking this here for later investigation. For now I'm going to assume this 
> is something in SecureWALCellCodec that needs looking at, but if it turns out 
> to be a Phoenix indexer issue I will move this JIRA there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11290) Unlock RegionStates

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11290:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Unlock RegionStates
> ---
>
> Key: HBASE-11290
> URL: https://issues.apache.org/jira/browse/HBASE-11290
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Francis Liu
>Assignee: Francis Liu
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
> Attachments: HBASE-11290-0.98.patch, HBASE-11290-0.98_v2.patch, 
> HBASE-11290.draft.patch, HBASE-11290_trunk.patch
>
>
> RegionStates is a highly accessed data structure in HMaster, yet most of its 
> methods are synchronized, which limits concurrency. Even simply making some 
> of the getters non-synchronized by using concurrent data structures has 
> helped with region assignments. We can go with an approach as simple as 
> this, or create locks per region, or a bucket lock per region bucket.
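The "non-synchronized getters via concurrent data structures" option can be sketched in plain JDK code. This is a simplified stand-in, not the real RegionStates class: reads go straight to a ConcurrentHashMap instead of contending on a single monitor.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Per-key atomic updates and lock-free reads, with no class-wide monitor.
public class RegionStatesSketch {
  private final Map<String, String> regionStates = new ConcurrentHashMap<>();

  public void setState(String region, String state) {
    regionStates.put(region, state); // atomic per-key, no global lock
  }

  public String getState(String region) {
    return regionStates.get(region); // no synchronized needed for the getter
  }

  public static void main(String[] args) {
    RegionStatesSketch rs = new RegionStatesSketch();
    rs.setState("region-1", "OPEN");
    if (!"OPEN".equals(rs.getState("region-1"))) throw new AssertionError();
    System.out.println("ok");
  }
}
```

Compound operations that read and then update several regions together would still need some locking, which is where the per-region or bucketed locks mentioned above come in.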



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14067) bundle ruby files for hbase shell into a jar.

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14067:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> bundle ruby files for hbase shell into a jar.
> -
>
> Key: HBASE-14067
> URL: https://issues.apache.org/jira/browse/HBASE-14067
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Sean Busbey
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
>
> We currently package all the ruby scripts for the hbase shell by placing them 
> in a directory within lib/. We should be able to put these in a jar file 
> since we rely on jruby.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13667) Backport HBASE-12975 to 1.0 and 0.98 without changing coprocessors hooks

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13667:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Backport HBASE-12975 to 1.0 and 0.98 without changing coprocessors hooks
> 
>
> Key: HBASE-13667
> URL: https://issues.apache.org/jira/browse/HBASE-13667
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 1.0.4, 0.98.18
>
>
> We can backport Split transaction, region merge transaction interfaces to 
> branch 1.0 and 0.98 without changing coprocessor hooks. Then it should be 
> compatible.





[jira] [Updated] (HBASE-13505) Deprecate the "AES" cipher type

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13505:
---
Fix Version/s: (was: 1.0.4)
   (was: 0.98.17)
   (was: 1.1.4)
   (was: 1.2.1)
   0.98.18

> Deprecate the "AES" cipher type
> ---
>
> Key: HBASE-13505
> URL: https://issues.apache.org/jira/browse/HBASE-13505
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
>
> Deprecate the "AES" cipher type. Remove internal references to it and use the 
> "AES-CTR" name instead.





[jira] [Updated] (HBASE-13511) Derive data keys with HKDF

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13511:
---
Fix Version/s: (was: 1.0.4)
   (was: 0.98.17)
   (was: 1.1.4)
   (was: 1.2.1)
   0.98.18

> Derive data keys with HKDF
> --
>
> Key: HBASE-13511
> URL: https://issues.apache.org/jira/browse/HBASE-13511
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
>
> When we are locally managing master key material and users have supplied 
> their own data key material, derive the actual data keys using HKDF 
> (https://tools.ietf.org/html/rfc5869):
> DK' = HKDF(S, DK, MK)
> where
> S = salt
> DK = user supplied data key
> MK = master key
> DK' = derived data key for the HFile
> User supplied key material may be weak or an attacker may have some partial 
> knowledge of it.
> Where we generate random data keys we can still use HKDF as a way to mix more 
> entropy into the secure random generator. 
> DK' = HKDF(R, MK)
> where
> R = random key material drawn from the system's secure random generator
> MK = master key
> (Salting isn't useful here because salt S and R would be drawn from the same 
> pool, so will not have statistical independence.)





[jira] [Updated] (HBASE-14049) SnapshotHFileCleaner should optionally clean up after failed snapshots

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14049:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> SnapshotHFileCleaner should optionally clean up after failed snapshots
> --
>
> Key: HBASE-14049
> URL: https://issues.apache.org/jira/browse/HBASE-14049
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.13
>Reporter: Andrew Purtell
> Fix For: 2.0.0, 1.3.0, 0.98.18
>
>
> SnapshotHFileCleaner should optionally clean up after failed snapshots rather 
> than just complain. Add a configuration option that, if set to true 
> (defaulting to false), instructs SnapshotHFileCleaner to recursively remove 
> failed snapshot temporary directories.





[jira] [Updated] (HBASE-14289) Backport HBASE-13965 'Stochastic Load Balancer JMX Metrics' to 0.98

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14289:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

Do we want to proceed here? (And how?) Or just close it.

> Backport HBASE-13965 'Stochastic Load Balancer JMX Metrics' to 0.98
> ---
>
> Key: HBASE-14289
> URL: https://issues.apache.org/jira/browse/HBASE-14289
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.18
>
> Attachments: 14289-0.98-v2.txt, 14289-0.98-v3.txt, 14289-0.98-v4.txt, 
> 14289-0.98-v5.txt
>
>
> The default HBase load balancer (the Stochastic load balancer) is cost 
> function based. The cost function weights are tunable but no visibility into 
> those cost function results is directly provided.
> This issue backports HBASE-13965 to 0.98 branch to provide visibility via JMX 
> into each cost function of the stochastic load balancer, as well as the 
> overall cost of the balancing plan.
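The general mechanism for this kind of visibility can be sketched with a plain JMX standard MBean. All names below are hypothetical and do not match the actual HBASE-13965 patch; this only shows how a cost value computed internally becomes readable by JMX clients.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class BalancerMetricsDemo {

    // Standard MBean interface: attribute "OverallCost" is derived from the getter name.
    public interface BalancerCostsMBean {
        double getOverallCost();
    }

    public static class BalancerCosts implements BalancerCostsMBean {
        private volatile double overallCost;
        public void setOverallCost(double c) { overallCost = c; }
        @Override public double getOverallCost() { return overallCost; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=BalancerCosts");
        BalancerCosts costs = new BalancerCosts();
        server.registerMBean(costs, name);

        costs.setOverallCost(42.5);  // the balancer would update this after each plan
        // A remote JMX client (e.g. jconsole) would now read the attribute;
        // here we read it back through the local platform MBean server.
        System.out.println(server.getAttribute(name, "OverallCost"));
    }
}
```

A per-cost-function variant would simply expose one getter per cost function alongside the overall cost.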





[jira] [Updated] (HBASE-14177) Full GC on client may lead to missing scan results

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14177:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Full GC on client may lead to missing scan results
> --
>
> Key: HBASE-14177
> URL: https://issues.apache.org/jira/browse/HBASE-14177
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.12, 0.98.13, 1.0.2
>Reporter: James Estes
>Priority: Critical
>  Labels: dataloss
> Fix For: 1.0.4, 0.98.18
>
>
> After adding a large row, scanning back that row winds up being empty. After 
> a few attempts it will succeed (all attempts over the same data on an hbase 
> getting no other writes).
> Looking at the logs, it seems this happens when there is memory pressure on 
> the client and several full GCs occur. There are then messages indicating 
> that region locations are being removed from the local client cache:
> 2015-07-31 12:50:24,647 [main] DEBUG 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation  
> - Removed 192.168.1.131:50981 as a location of 
> big_row_1438368609944,,1438368610048.880c849594807bdc7412f4f982337d6c. for 
> tableName=big_row_1438368609944 from cache
> Blaming the GC may sound fanciful, but if the test is run with -Xms4g -Xmx4g 
> then it always passes on the first scan attempt. Maybe the pause is enough to 
> remove something from the cache, or the client is using weak references 
> somewhere?
> More info 
> http://mail-archives.apache.org/mod_mbox/hbase-user/201507.mbox/%3CCAE8tVdnFf%3Dob569%3DfJkpw1ndVWOVTkihYj9eo6qt0FrzihYHgw%40mail.gmail.com%3E
> Test used to reproduce:
> https://github.com/housejester/hbase-debugging#fullgctest
> I tested and had failures in:
> 0.98.12 client/server
> 0.98.13 client 0.98.12 server
> 0.98.13 client/server
> 1.1.0 client 0.98.13 server
> 0.98.13 client and 1.1.0 server
> 0.98.12 client and 1.1.0 server
> I tested without failure in:
> 1.1.0 client/server
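The weak-reference hypothesis in the description above can be illustrated in isolation. This is not HBase client code; it only shows that a value reachable solely through a `WeakReference` can disappear after a collection, which would match cache entries vanishing exactly when the client is under memory pressure.

```java
import java.lang.ref.WeakReference;

public class WeakCacheDemo {
    public static void main(String[] args) {
        Object location = new Object();                 // stand-in for a cached region location
        WeakReference<Object> cached = new WeakReference<>(location);
        System.out.println(cached.get() != null);       // prints true: a strong ref is still held

        location = null;                                // drop the only strong reference
        System.gc();                                    // request a collection (best effort)
        // After GC the weak referent may be gone; a cache built on weak
        // references loses entries precisely under memory pressure.
        System.out.println(cached.get());
    }
}
```

Whether the second line prints `null` depends on the collector, which is exactly the nondeterminism that would explain the test passing only sometimes.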





[jira] [Commented] (HBASE-11460) Deadlock in HMaster on masterAndZKLock in HConnectionManager

2016-01-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088468#comment-15088468
 ] 

Ted Yu commented on HBASE-11460:


Created HBASE-15080 with patch for 0.98 branch.

> Deadlock in HMaster on masterAndZKLock in HConnectionManager
> 
>
> Key: HBASE-11460
> URL: https://issues.apache.org/jira/browse/HBASE-11460
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.96.0
>Reporter: Andrey Stepachev
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.99.0, 0.98.4, 2.0.0
>
> Attachments: 11460-v1-0.98.patch, 11460-v1.txt, threads.tdump
>
>
> On one of our clusters we got a deadlock in HMaster.
> In a nutshell, the deadlock is caused by using one HConnectionManager for 
> serving client-like calls and calls from HMaster RPC handlers.
> HBaseAdmin uses HConnectionManager, which takes the lock masterAndZKLock.
> On the other side of this game sits TablesNamespaceManager (TNM). This class 
> uses HConnectionManager too (in my case for getting the list of available 
> namespaces). 
> The problem is that the HMaster class uses TNM for serving RPC requests.
> If we look at TNM more closely, we can see that this class is totally 
> synchronized.
> That gives us a problem.
> The web interface issues a request via HConnectionManager and locks 
> masterAndZKLock.
> The connection is blocking, so RpcClient will spin, awaiting a reply (while 
> holding the lock).
> This is how it looks in the thread dump:
> {code}
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xc8905430> (a 
> org.apache.hadoop.hbase.ipc.RpcClient$Call)
>   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1435)
>   - locked <0xc8905430> (a 
> org.apache.hadoop.hbase.ipc.RpcClient$Call)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:40216)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(HConnectionManager.java:1467)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(HConnectionManager.java:2093)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1819)
>   - locked <0xd15dc668> (a java.lang.Object)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin$MasterCallable.prepare(HBaseAdmin.java:3187)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:119)
>   - locked <0xcd0c1238> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:96)
>   - locked <0xcd0c1238> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3214)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.listTableDescriptorsByNamespace(HBaseAdmin.java:2265)
> {code}
> Some other client then calls an HMaster RPC, which calls TablesNamespaceManager 
> methods, which in turn block on HConnectionManager's global lock 
> masterAndZKLock.
> This is how it looks:
> {code}
>   java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1699)
>   - waiting to lock <0xd15dc668> (a java.lang.Object)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:100)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isTableDisabled(HConnectionManager.java:874)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:1027)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:852)
>   at 
> org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:72)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:119)
>   - locked <0xcd0ef108> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at org.apache.hadoop.hbase.cli

[jira] [Updated] (HBASE-14546) Backport stub DNS re-resolution options to 0.98

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14546:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Backport stub DNS re-resolution options to 0.98
> ---
>
> Key: HBASE-14546
> URL: https://issues.apache.org/jira/browse/HBASE-14546
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 0.98.18
>
>
> HBASE-12943 and HBASE-13067 address infinite caching that prevented servers 
> from rejoining a cluster using the same hostname but a different IP address. 
> HBASE-14544 makes this behavior optional. 





[jira] [Updated] (HBASE-15080) Remove synchronized keyword from MasterServiceStubMaker#releaseZooKeeperWatcher()

2016-01-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15080:
---
Attachment: 15080-0.98.txt

> Remove synchronized keyword from 
> MasterServiceStubMaker#releaseZooKeeperWatcher()
> -
>
> Key: HBASE-15080
> URL: https://issues.apache.org/jira/browse/HBASE-15080
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.17
>
> Attachments: 15080-0.98.txt
>
>
> This is a follow up to HBASE-11460
> [~elserj] found that in 0.98, the synchronized block below should have been 
> taken out (as was done for branch-1 +):
> {code}
>   synchronized (masterAndZKLock) {
> if (keepAliveZookeeperUserCount.decrementAndGet() <= 0 ){
> {code}
> keepAliveZookeeperUserCount is an AtomicInteger. There is no need for the 
> synchronized block.
> This issue is to remove the synchronized block.





[jira] [Updated] (HBASE-15080) Remove synchronized keyword from MasterServiceStubMaker#releaseZooKeeperWatcher()

2016-01-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15080:
---
Status: Patch Available  (was: Open)

> Remove synchronized keyword from 
> MasterServiceStubMaker#releaseZooKeeperWatcher()
> -
>
> Key: HBASE-15080
> URL: https://issues.apache.org/jira/browse/HBASE-15080
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.17
>
> Attachments: 15080-0.98.txt
>
>
> This is a follow up to HBASE-11460
> [~elserj] found that in 0.98, the synchronized block below should have been 
> taken out (as was done for branch-1 +):
> {code}
>   synchronized (masterAndZKLock) {
> if (keepAliveZookeeperUserCount.decrementAndGet() <= 0 ){
> {code}
> keepAliveZookeeperUserCount is an AtomicInteger. There is no need for the 
> synchronized block.
> This issue is to remove the synchronized block.





[jira] [Updated] (HBASE-14872) Scan different timeRange per column family doesn't percolate down to the memstore

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14872:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Scan different timeRange per column family doesn't percolate down to the 
> memstore 
> --
>
> Key: HBASE-14872
> URL: https://issues.apache.org/jira/browse/HBASE-14872
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver, Scanners
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.0.0, 1.2.0, 0.98.18
>
> Attachments: HBASE-14872-0.98.patch, HBASE-14872-v1.patch, 
> HBASE-14872.patch
>
>
> The scan-different-time-range-per-column-family feature from HBASE-14355 was 
> not applied to the memstore; it was only done for the store files. This 
> breaks the contract.





[jira] [Created] (HBASE-15080) Remove synchronized keyword from MasterServiceStubMaker#releaseZooKeeperWatcher()

2016-01-07 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15080:
--

 Summary: Remove synchronized keyword from 
MasterServiceStubMaker#releaseZooKeeperWatcher()
 Key: HBASE-15080
 URL: https://issues.apache.org/jira/browse/HBASE-15080
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.17


This is a follow up to HBASE-11460

[~elserj] found that in 0.98, the synchronized block below should have been 
taken out (as was done for branch-1 +):
{code}
  synchronized (masterAndZKLock) {
if (keepAliveZookeeperUserCount.decrementAndGet() <= 0 ){
{code}
keepAliveZookeeperUserCount is an AtomicInteger. There is no need for the 
synchronized block.

This issue is to remove the synchronized block.
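The point above can be reduced to a small sketch: `decrementAndGet()` is already atomic, so wrapping it in a synchronized block adds contention without adding safety. Class and method names below are hypothetical stand-ins, not the actual 0.98 code.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class KeepAliveTracker {
    private final AtomicInteger keepAliveZookeeperUserCount = new AtomicInteger();

    public void acquire() {
        keepAliveZookeeperUserCount.incrementAndGet();
    }

    // After the fix: no synchronized (masterAndZKLock) block around the
    // decrement; the AtomicInteger alone makes the check-and-decrement safe.
    public boolean release() {
        return keepAliveZookeeperUserCount.decrementAndGet() <= 0;
    }

    public static void main(String[] args) throws InterruptedException {
        KeepAliveTracker t = new KeepAliveTracker();
        Thread[] threads = new Thread[8];
        for (int i = 0; i < 8; i++) {
            t.acquire();
            threads[i] = new Thread(t::release);  // concurrent releases, no lock
        }
        for (Thread th : threads) th.start();
        for (Thread th : threads) th.join();
        System.out.println(t.keepAliveZookeeperUserCount.get()); // prints 0
    }
}
```

The count is exact even with all releases racing, which is the whole argument for dropping the synchronized block.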





[jira] [Updated] (HBASE-14757) Reduce allocation pressure imposed by HFile block processing

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14757:
---
Fix Version/s: (was: 0.98.17)
   (was: 1.3.0)

> Reduce allocation pressure imposed by HFile block processing
> 
>
> Key: HBASE-14757
> URL: https://issues.apache.org/jira/browse/HBASE-14757
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0
>
>
> Using Flight Recorder to look at the object allocation profiles of 
> regionservers processing the various YCSB workloads when block encoding is 
> enabled (specifically, FAST_DIFF, but this applies to any), we can see:
> - Allocations of byte[] for block encoding contribute 40-70% of all 
> allocation pressure in TLABs. 
> - Of that subset of allocation pressure, ~50-70% is byte[] for SeekerState
> - Greater than 99% of allocation of byte[] outside of TLABs are for read 
> buffers for HFileBlock#readBlockDataInternal.
> This issue is for investigation of strategy for and impact of reducing that 
> allocation pressure. Reducing allocation pressure reduces demand for GC, 
> which reduces GC activity overall, which reduces a source of system latency.





[jira] [Updated] (HBASE-14927) Backport HBASE-13014 and HBASE-14749 to branch-1 and 0.98

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14927:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Backport HBASE-13014 and HBASE-14749 to branch-1 and 0.98
> -
>
> Key: HBASE-14927
> URL: https://issues.apache.org/jira/browse/HBASE-14927
> Project: HBase
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 1.3.0, 0.98.18
>
>






[jira] [Resolved] (HBASE-14931) Active master switches may cause region close forever

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-14931.

   Resolution: Duplicate
Fix Version/s: (was: 0.98.18)

> Active master switches may cause region close forever
> -
>
> Key: HBASE-14931
> URL: https://issues.apache.org/jira/browse/HBASE-14931
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.98.10
>Reporter: Shuaifeng Zhou
>Priority: Critical
>
> The 60010 web page shows that a region is online on one RS, but accessing 
> data in the region throws NotServingRegionException. After looking through 
> the source code and logs, we found that this happens because the active 
> master switches while the region is opening:
> 1. master1 opens region 'region1': it sends an open-region request to the RS 
> and creates a node in ZK
> 2. master1 stops
> 3. master2 becomes the active master
> 4. master2 obtains all region statuses; the 'region1' status is offline
> 5. the RS opens 'region1', changes the node to OPENED in ZK, and sends a 
> message to master2
> 6. master2 receives RS_ZK_REGION_OPENED, but the status is neither 
> PENDING_OPEN nor OPENING, so it sends an unassign to the RS and 'region1' is 
> closed
> {code:title=AssignmentManager.java|borderStyle=solid}
> case RS_ZK_REGION_OPENED:
>   // Should see OPENED after OPENING but possible after PENDING_OPEN.
>   if (regionState == null
>   || !regionState.isPendingOpenOrOpeningOnServer(sn)) {
> LOG.warn("Received OPENED for " + prettyPrintedRegionName
>   + " from " + sn + " but the region isn't PENDING_OPEN/OPENING 
> here: "
>   + regionStates.getRegionState(encodedName));
> if (regionState != null) {
>   // Close it without updating the internal region states,
>   // so as not to create double assignments in unlucky scenarios
>   // mentioned in OpenRegionHandler#process
>   unassign(regionState.getRegion(), null, -1, null, false, sn);
> }
> return;
>   }
> {code}
> 7. master2 continues handling the region info left over from when master1 
> stopped, finds that the 'region1' status in ZK is OPENED, and updates the 
> in-memory status to opened.
> 8. As a result, the 'region1' status shows as opened on the master status 
> web page, but the region is not open on any region server.





[jira] [Updated] (HBASE-14931) Active master switches may cause region close forever

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14931:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> Active master switches may cause region close forever
> -
>
> Key: HBASE-14931
> URL: https://issues.apache.org/jira/browse/HBASE-14931
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.98.10
>Reporter: Shuaifeng Zhou
>Priority: Critical
>
> The 60010 web page shows that a region is online on one RS, but accessing 
> data in the region throws NotServingRegionException. After looking through 
> the source code and logs, we found that this happens because the active 
> master switches while the region is opening:
> 1. master1 opens region 'region1': it sends an open-region request to the RS 
> and creates a node in ZK
> 2. master1 stops
> 3. master2 becomes the active master
> 4. master2 obtains all region statuses; the 'region1' status is offline
> 5. the RS opens 'region1', changes the node to OPENED in ZK, and sends a 
> message to master2
> 6. master2 receives RS_ZK_REGION_OPENED, but the status is neither 
> PENDING_OPEN nor OPENING, so it sends an unassign to the RS and 'region1' is 
> closed
> {code:title=AssignmentManager.java|borderStyle=solid}
> case RS_ZK_REGION_OPENED:
>   // Should see OPENED after OPENING but possible after PENDING_OPEN.
>   if (regionState == null
>   || !regionState.isPendingOpenOrOpeningOnServer(sn)) {
> LOG.warn("Received OPENED for " + prettyPrintedRegionName
>   + " from " + sn + " but the region isn't PENDING_OPEN/OPENING 
> here: "
>   + regionStates.getRegionState(encodedName));
> if (regionState != null) {
>   // Close it without updating the internal region states,
>   // so as not to create double assignments in unlucky scenarios
>   // mentioned in OpenRegionHandler#process
>   unassign(regionState.getRegion(), null, -1, null, false, sn);
> }
> return;
>   }
> {code}
> 7. master2 continues handling the region info left over from when master1 
> stopped, finds that the 'region1' status in ZK is OPENED, and updates the 
> in-memory status to opened.
> 8. As a result, the 'region1' status shows as opened on the master status 
> web page, but the region is not open on any region server.





[jira] [Updated] (HBASE-14932) bulkload fails because file not found

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14932:
---
Fix Version/s: (was: 0.98.17)
   0.98.18

> bulkload fails because file not found
> -
>
> Key: HBASE-14932
> URL: https://issues.apache.org/jira/browse/HBASE-14932
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.10
>Reporter: Shuaifeng Zhou
> Fix For: 0.98.18
>
>
> When a doBulkLoad call is made, one call may contain several HFiles to load, 
> but the call may time out while the region server is loading the files, and 
> the client will retry the load.
> But while the client is retrying, the region server may still be performing 
> the original load operation. If some files have already succeeded, the retry 
> will throw a FileNotFoundException, and this will cause the client to retry 
> again and again until its retries are exhausted and the bulkload fails.
> When this happens, some files have actually been loaded successfully, which 
> is an inconsistent state.





[jira] [Updated] (HBASE-14872) Scan different timeRange per column family doesn't percolate down to the memstore

2016-01-07 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14872:
---
Status: Open  (was: Patch Available)

Stale patch

> Scan different timeRange per column family doesn't percolate down to the 
> memstore 
> --
>
> Key: HBASE-14872
> URL: https://issues.apache.org/jira/browse/HBASE-14872
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver, Scanners
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.0.0, 1.2.0, 0.98.17
>
> Attachments: HBASE-14872-0.98.patch, HBASE-14872-v1.patch, 
> HBASE-14872.patch
>
>
> The scan-different-time-range-per-column-family feature from HBASE-14355 was 
> not applied to the memstore; it was only done for the store files. This 
> breaks the contract.





[jira] [Commented] (HBASE-15076) Add getScanner(Scan scan, List additionalScanners) API into Region interface

2016-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088456#comment-15088456
 ] 

Hudson commented on HBASE-15076:


SUCCESS: Integrated in HBase-1.2-IT #382 (See 
[https://builds.apache.org/job/HBase-1.2-IT/382/])
HBASE-15076 Add getScanner(Scan scan, List additionalScanners) API into Region 
interface (stack: rev 
0836f4274b8b6ce79f86a4fab1c712d0bebc702e)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Add getScanner(Scan scan, List additionalScanners) API into 
> Region interface
> -
>
> Key: HBASE-15076
> URL: https://issues.apache.org/jira/browse/HBASE-15076
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: liu ming
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15076.patch
>
>
> The HRegion method getScanner(Scan scan, List additionalScanners, 
> boolean copyCellsFromSharedMem) is protected.
> In Apache Trafodion, we need to invoke this getScanner method from a 
> coprocessor. Since it is protected, Trafodion must subclass HRegion and 
> override this method to make it public.
> It would be good to make this method public.
> It is very useful when one needs to combine several scan results in a single 
> scanner.
> thanks,
> Ming





[jira] [Commented] (HBASE-14937) Make rpc call timeout for replication adaptive

2016-01-07 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088455#comment-15088455
 ] 

Andrew Purtell commented on HBASE-14937:


Not convinced waiting longer is better than just retrying. Seems waiting longer 
can only lead us to be sleeping unnecessarily when the remote is available 
again.

> Make rpc call timeout for replication adaptive
> --
>
> Key: HBASE-14937
> URL: https://issues.apache.org/jira/browse/HBASE-14937
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>  Labels: replication
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14937.patch
>
>
> When replication to the peer cluster is disabled while many writes are 
> happening in the active cluster, and peer cluster replication is enabled 
> later, there is a chance that replication requests to the peer cluster will 
> time out.
> This is possible after HBASE-13153, and it can also happen when a large 
> amount of WAL data is still pending replication.
> The approach to this problem will be discussed in the comments.





[jira] [Commented] (HBASE-13525) Update test-patch to leverage Apache Yetus

2016-01-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088441#comment-15088441
 ] 

stack commented on HBASE-13525:
---

Suggest hoisting your how-to-run-it up to release note.

Is test-patch missing from this patch? I see this patch removes test-patch.sh 
but it does not seem to include test-patch.

I see this in personality:

+  PATCH_BRANCH_DEFAULT=master

So, how do I run a patch against an old branch now? Does the trick where you 
add the branch name to the patch name work still?

This list is impressive but a little OCD:   54 +  HBASE_HADOOP_VERSIONS="2.4.0 
2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1" I suppose it makes sense to 
have this on hadoop-qa to catch the fail before it gets committed. We can turn 
this down on other builds?

This seems to be untrue now: "even though we're not including that yet." 
regards zombie

bq. # TODO line length check? could ignore all java files since checkstyle gets 
them.

... and does a better job of it. Sounds good to me.

+1 for commit and fixing teething issues later.


> Update test-patch to leverage Apache Yetus
> --
>
> Key: HBASE-13525
> URL: https://issues.apache.org/jira/browse/HBASE-13525
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: jenkins
> Fix For: 2.0.0
>
> Attachments: HBASE-13525.1.patch
>
>
> Once HADOOP-11746 lands over in Hadoop, incorporate its changes into our 
> test-patch. Most likely easiest approach is to start with the Hadoop version 
> and add in the features we have locally that they don't.





[jira] [Updated] (HBASE-15075) Allow region split request to carry metadata

2016-01-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15075:
---
Attachment: 15075-v0.txt

Patch v0 shows API additions to Admin

> Allow region split request to carry metadata
> 
>
> Key: HBASE-15075
> URL: https://issues.apache.org/jira/browse/HBASE-15075
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Attachments: 15075-v0.txt
>
>
> During the process of improving region normalization feature, I found that if 
> region split request triggered by the execution of SplitNormalizationPlan 
> fails, there is no way of knowing whether the failed split originated from 
> region normalization.
> The association of particular split request with outcome of split would give 
> RegionNormalizer information so that it can make better normalization 
> decisions in the subsequent invocations.
> One approach is to embed metadata in SplitRequest which gets passed through 
> RegionStateTransitionContext when 
> RegionServerServices#reportRegionStateTransition() is called.
> This way, RegionStateListener can be notified with the metadata (id of the 
> requester).
> See discussion on dev mailing list
> http://search-hadoop.com/m/YGbbCXdkivihp2





[jira] [Updated] (HBASE-13525) Update test-patch to leverage Apache Yetus

2016-01-07 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13525:

Attachment: HBASE-13525.1.patch

-01

  - deletes pre-existing test-patch files
  - leaves zombie detector / finder as-is until we can prove out yetus handling
  - adds personality for our particular use of yetus.

Can invoke manually with, e.g., HBASE-15074:
{code}
test-patch --personality=dev-support/hbase-personality.sh HBASE-15074
{code}

If you want to skip the ~1 hour it'll take to do all the hadoop API checks, use
{code}
test-patch  --plugins=all,-hadoopcheck 
--personality=dev-support/hbase-personality.sh HBASE-15074
{code}

Pass the {{--jenkins}} flag if you want to allow test-patch to destructively 
alter the local working directory / branch in order to have things match what the 
issue patch requests.

> Update test-patch to leverage Apache Yetus
> --
>
> Key: HBASE-13525
> URL: https://issues.apache.org/jira/browse/HBASE-13525
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: jenkins
> Fix For: 2.0.0
>
> Attachments: HBASE-13525.1.patch
>
>
> Once HADOOP-11746 lands over in Hadoop, incorporate its changes into our 
> test-patch. Most likely easiest approach is to start with the Hadoop version 
> and add in the features we have locally that they don't.





[jira] [Updated] (HBASE-13525) Update test-patch to leverage Apache Yetus

2016-01-07 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13525:

Status: Patch Available  (was: In Progress)

> Update test-patch to leverage Apache Yetus
> --
>
> Key: HBASE-13525
> URL: https://issues.apache.org/jira/browse/HBASE-13525
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: jenkins
> Fix For: 2.0.0
>
> Attachments: HBASE-13525.1.patch
>
>
> Once HADOOP-11746 lands over in Hadoop, incorporate its changes into our 
> test-patch. Most likely easiest approach is to start with the Hadoop version 
> and add in the features we have locally that they don't.





[jira] [Commented] (HBASE-15074) Zombie maker to test zombie detector

2016-01-07 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088399#comment-15088399
 ] 

Sean Busbey commented on HBASE-15074:
-

Aight, I'm going to axe my attempt at including the zombie detector and put up 
a patch.

> Zombie maker to test zombie detector
> 
>
> Key: HBASE-15074
> URL: https://issues.apache.org/jira/browse/HBASE-15074
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15074.patch, 15074v2.patch, 15074v2.patch
>
>
> Our Sean thinks our zombie detector is not finding zombies (and I think he is 
> right). Here is a test that makes zombies. Let's see what happens.





[jira] [Commented] (HBASE-14221) Reduce the number of time row comparison is done in a Scan

2016-01-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088395#comment-15088395
 ] 

stack commented on HBASE-14221:
---

Patch looks fine. What's with all the setting of the row to null and the null 
checks? Why is that needed now?

> Reduce the number of time row comparison is done in a Scan
> --
>
> Key: HBASE-14221
> URL: https://issues.apache.org/jira/browse/HBASE-14221
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: 14221-0.98-takeALook.txt, HBASE-14221-branch-1.patch, 
> HBASE-14221.patch, HBASE-14221_1.patch, HBASE-14221_1.patch, 
> HBASE-14221_6.patch, HBASE-14221_9.patch, withmatchingRowspatch.png, 
> withoutmatchingRowspatch.png
>
>
> When we tried to do some profiling with the PE tool, we found this.
> Currently we do row comparisons in 3 places in a simple Scan case.
> 1) ScanQueryMatcher
> {code}
>int ret = this.rowComparator.compareRows(curCell, cell);
> if (!this.isReversed) {
>   if (ret <= -1) {
> return MatchCode.DONE;
>   } else if (ret >= 1) {
> // could optimize this, if necessary?
> // Could also be called SEEK_TO_CURRENT_ROW, but this
> // should be rare/never happens.
> return MatchCode.SEEK_NEXT_ROW;
>   }
> } else {
>   if (ret <= -1) {
> return MatchCode.SEEK_NEXT_ROW;
>   } else if (ret >= 1) {
> return MatchCode.DONE;
>   }
> }
> {code}
> 2) In StoreScanner next() while starting to scan the row
> {code}
> if (!scannerContext.hasAnyLimit(LimitScope.BETWEEN_CELLS) || 
> matcher.curCell == null ||
> isNewRow || !CellUtil.matchingRow(peeked, matcher.curCell)) {
>   this.countPerRow = 0;
>   matcher.setToNewRow(peeked);
> }
> {code}
> Particularly to see if we are in a new row.
> 3) In HRegion
> {code}
>   scannerContext.setKeepProgress(true);
>   heap.next(results, scannerContext);
>   scannerContext.setKeepProgress(tmpKeepProgress);
>   nextKv = heap.peek();
> moreCellsInRow = moreCellsInRow(nextKv, currentRowCell);
> {code}
> Here again there are cases where we need to be careful for a MultiCF case. I was 
> trying to solve this for the MultiCF case but it has a lot of cases to 
> solve. But at least for a single CF case I think these comparisons can be 
> reduced.
> So for a single CF case in the SQM we are able to find if we have crossed a 
> row using the code pasted above in SQM. That comparison is definitely needed.
> Now in case of a single CF the HRegion is going to have only one element in 
> the heap and so the 3rd comparison can surely be avoided if the 
> StoreScanner.next() was over due to MatchCode.DONE caused by SQM.
> Coming to the 2nd compareRows that we do in StoreScanner.next() - even that 
> can be avoided if we know that the previous next() call was over due to a new 
> row. Doing all this I found that compareRows, which was at 19% in the 
> profiler, got reduced to 13%. Initially we can solve the single CF case, which 
> can later be extended to MultiCF cases.
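A minimal standalone sketch of the idea in the description above: remember when the previous call already established that a new row started (e.g. the matcher returned DONE), and skip the redundant row comparison on the following call. Class and method names are illustrative, not the actual StoreScanner/ScanQueryMatcher code:

```java
// Illustrative sketch, not actual HBase code: cache the "we already know we
// crossed a row" fact so the second compareRows can be skipped.
public class RowCompareSketch {
  private byte[] curRow;        // row we are currently positioned on
  private boolean knownNewRow;  // set when the previous call crossed a row

  int comparisons = 0;          // instrumentation for the sketch

  /** Returns true if peekedRow starts a new row, comparing only when needed. */
  boolean isNewRow(byte[] peekedRow) {
    if (knownNewRow || curRow == null) {
      // The previous call already told us a new row starts here: no compare.
      knownNewRow = false;
      curRow = peekedRow;
      return true;
    }
    comparisons++;
    boolean differs = !java.util.Arrays.equals(curRow, peekedRow);
    if (differs) {
      curRow = peekedRow;
    }
    return differs;
  }

  /** Called when the matcher signals the current row is done (DONE). */
  void markRowDone() {
    knownNewRow = true;
  }

  public static void main(String[] args) {
    RowCompareSketch s = new RowCompareSketch();
    s.isNewRow(new byte[] {1}); // first row: no compare needed
    s.markRowDone();            // matcher said DONE, next row known in advance
    s.isNewRow(new byte[] {2}); // comparison skipped
    System.out.println("comparisons=" + s.comparisons);
  }
}
```

In this toy run no byte-level comparison happens at all, which is the single-CF shortcut the description argues for.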





[jira] [Updated] (HBASE-15079) TestMultiParallel.validateLoadedData AssertionError: null

2016-01-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15079:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
   1.2.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-1.2+. Thanks for the patch [~chenheng]

> TestMultiParallel.validateLoadedData AssertionError: null
> -
>
> Key: HBASE-15079
> URL: https://issues.apache.org/jira/browse/HBASE-15079
> Project: HBase
>  Issue Type: Bug
>  Components: Client, flakey, test
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14915-branch-1.2.patch
>
>
> Saw this failure on internal rig:
> {code}
> Stack Trace:
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.validateLoadedData(TestMultiParallel.java:676)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.doTestFlushCommits(TestMultiParallel.java:293)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.testFlushCommitsNoAbort(TestMultiParallel.java:241)
> {code}
> [~chenheng] actually added a fix for this failure over in HBASE-14915 but we 
> never committed it. Let me attach his patch here.





[jira] [Commented] (HBASE-15079) TestMultiParallel.validateLoadedData AssertionError: null

2016-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088381#comment-15088381
 ] 

Hadoop QA commented on HBASE-15079:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12781065/HBASE-14915-branch-1.2.patch
  against branch-1.2 branch at commit 3d3677932a4ec98c12121c879ac5e2ea71925ea5.
  ATTACHMENT ID: 12781065

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17163//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17163//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17163//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17163//console

This message is automatically generated.

> TestMultiParallel.validateLoadedData AssertionError: null
> -
>
> Key: HBASE-15079
> URL: https://issues.apache.org/jira/browse/HBASE-15079
> Project: HBase
>  Issue Type: Bug
>  Components: Client, flakey, test
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: Heng Chen
> Attachments: HBASE-14915-branch-1.2.patch
>
>
> Saw this failure on internal rig:
> {code}
> Stack Trace:
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.validateLoadedData(TestMultiParallel.java:676)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.doTestFlushCommits(TestMultiParallel.java:293)
> at 
> org.apache.hadoop.hbase.client.TestMultiParallel.testFlushCommitsNoAbort(TestMultiParallel.java:241)
> {code}
> [~chenheng] actually added a fix for this failure over in HBASE-14915 but we 
> never committed it. Let me attach his patch here.





[jira] [Commented] (HBASE-15074) Zombie maker to test zombie detector

2016-01-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088373#comment-15088373
 ] 

stack commented on HBASE-15074:
---

The test log console text is complaining about hbase-server failing, but 
TestZombie is in hbase-common, which is a little confusing. When I grep around 
in the consoleText I see no mention of TestZombie, which also does me in.

The yetus output is NOT confusing. It is actually beautiful.

> Zombie maker to test zombie detector
> 
>
> Key: HBASE-15074
> URL: https://issues.apache.org/jira/browse/HBASE-15074
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15074.patch, 15074v2.patch, 15074v2.patch
>
>
> Our Sean thinks our zombie detector is not finding zombies (and I think he is 
> right). Here is a test that makes zombies. Let's see what happens.





[jira] [Commented] (HBASE-15074) Zombie maker to test zombie detector

2016-01-07 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088368#comment-15088368
 ] 

Sean Busbey commented on HBASE-15074:
-

The end of the test log file just shows this:

{code}
Results :

Tests run: 237, Failures: 0, Errors: 0, Skipped: 1

[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 17:09 min
[INFO] Finished at: 2016-01-07T17:11:37-06:00
[INFO] Final Memory: 33M/582M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on 
project hbase-common: There was a timeout or other error in the fork -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

{code}

So AFAICT it did the right thing parsing the logs for the zombie name.

> Zombie maker to test zombie detector
> 
>
> Key: HBASE-15074
> URL: https://issues.apache.org/jira/browse/HBASE-15074
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15074.patch, 15074v2.patch, 15074v2.patch
>
>
> Our Sean thinks our zombie detector is not finding zombies (and I think he is 
> right). Here is a test that makes zombies. Let's see what happens.




