[jira] [Comment Edited] (HBASE-14548) Expand how table coprocessor jar and dependency path can be specified

2016-07-06 Thread li xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363889#comment-15363889
 ] 

li xiang edited comment on HBASE-14548 at 7/6/16 7:08 AM:
--

Hi Jerry, there is a format problem in my comment yesterday which I did not 
notice.
It should be corrected as:
=
1. I tested FileSystem.isDirectory() with a separate program against paths 
containing a wildcard. It works as expected: for inputs such as 
"/user/hbase/\*.jar" or "/user/hbase/coprocessor.\*", isDirectory() returns 
false.
=
I forgot to add the escape character "\" in front of the two "\*" in the 
sentence above, so the system displayed the words between them in bold.

Sorry for the confusion.


was (Author: water):
Hi Jerry, there is a format problem in my comment yesterday which I did not 
notice.
It should be corrected as:
=
1. I tested FileSystem.isDirectory() with a separate program against paths 
containing a wildcard. It works as expected: for inputs such as 
"/user/hbase/\*.jar" or "/user/hbase/coprocessor.\*", isDirectory() returns 
false.
=
I forgot to add the escape character "\" in front of the two "*" in the 
sentence above, so the system displayed the words between them in bold.

Sorry for the confusion.

> Expand how table coprocessor jar and dependency path can be specified
> -
>
> Key: HBASE-14548
> URL: https://issues.apache.org/jira/browse/HBASE-14548
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: li xiang
> Fix For: 2.0.0
>
> Attachments: HBASE-14548-1.2.0-v0.patch, HBASE-14548-1.2.0-v1.patch, 
> HBASE-14548-1.2.0-v2.patch, HBASE-14548-master-v1.patch, 
> HBASE-14548-master-v2.patch
>
>
> Currently you can specify the location of the coprocessor jar in the table 
> coprocessor attribute.
> The problem is that it only allows you to specify one jar that implements 
> the coprocessor. You will need to either bundle all the dependencies into 
> this jar, or copy the dependencies into the HBase lib dir.
> The first option may not always be ideal. The second can be troublesome 
> too, particularly when HBase region server nodes and dirs are dynamically 
> added/created.
> There are a couple of things we can expand here. We can allow the 
> coprocessor attribute to specify a directory location, probably on HDFS.
> We may even allow wildcards in there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14548) Expand how table coprocessor jar and dependency path can be specified

2016-07-06 Thread li xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363889#comment-15363889
 ] 

li xiang commented on HBASE-14548:
--

Hi Jerry, there is a format problem in my comment yesterday which I did not 
notice.
It should be corrected as:
=
1. I tested FileSystem.isDirectory() with a separate program against paths 
containing a wildcard. It works as expected: for inputs such as 
"/user/hbase/\*.jar" or "/user/hbase/coprocessor.\*", isDirectory() returns 
false.
=
I forgot to add the escape character "\" in front of the two "*" in the 
sentence above, so the system displayed the words between them in bold.

Sorry for the confusion.

> Expand how table coprocessor jar and dependency path can be specified
> -
>
> Key: HBASE-14548
> URL: https://issues.apache.org/jira/browse/HBASE-14548
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: li xiang
> Fix For: 2.0.0
>
> Attachments: HBASE-14548-1.2.0-v0.patch, HBASE-14548-1.2.0-v1.patch, 
> HBASE-14548-1.2.0-v2.patch, HBASE-14548-master-v1.patch, 
> HBASE-14548-master-v2.patch
>
>
> Currently you can specify the location of the coprocessor jar in the table 
> coprocessor attribute.
> The problem is that it only allows you to specify one jar that implements 
> the coprocessor. You will need to either bundle all the dependencies into 
> this jar, or copy the dependencies into the HBase lib dir.
> The first option may not always be ideal. The second can be troublesome 
> too, particularly when HBase region server nodes and dirs are dynamically 
> added/created.
> There are a couple of things we can expand here. We can allow the 
> coprocessor attribute to specify a directory location, probably on HDFS.
> We may even allow wildcards in there.







[jira] [Updated] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-16144:
--
Attachment: HBASE-16144-branch-1-v1.patch

patch for branch-1

> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-branch-1-v1.patch, HBASE-16144-v1.patch, 
> HBASE-16144-v2.patch, HBASE-16144-v3.patch, HBASE-16144-v4.patch, 
> HBASE-16144-v5.patch
>
>
> By default, we use a multi operation when we claimQueues from ZK. But if 
> we set hbase.zookeeper.useMulti=false, we add a lock first, then copy 
> nodes, and finally clean up the old queue and the lock. 
> However, if the RS acquiring the lock crashes before claimQueues is done, 
> the lock will stay there forever and no other RS can ever claim the queue.





[jira] [Updated] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-16144:
--
Attachment: HBASE-16144-branch-1.1-v1.patch

The patch for branch-1 can also be applied to branch-1.2/1.3. Uploading a patch for 1.1.

> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-branch-1-v1.patch, 
> HBASE-16144-branch-1.1-v1.patch, HBASE-16144-v1.patch, HBASE-16144-v2.patch, 
> HBASE-16144-v3.patch, HBASE-16144-v4.patch, HBASE-16144-v5.patch
>
>
> By default, we use a multi operation when we claimQueues from ZK. But if 
> we set hbase.zookeeper.useMulti=false, we add a lock first, then copy 
> nodes, and finally clean up the old queue and the lock. 
> However, if the RS acquiring the lock crashes before claimQueues is done, 
> the lock will stay there forever and no other RS can ever claim the queue.





[jira] [Commented] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

2016-07-06 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363925#comment-15363925
 ] 

Eshcar Hillel commented on HBASE-16162:
---

I added my comment to RB.
What I tried to say was that the common practice would be
{code}
try {
  take lock
  ...
} finally {
  release lock
}
{code}
rather than
{code}
take lock
try {
  ...
} finally {
  release lock
}
{code}
Other than this, +1.
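For readers following along, the difference between the two idioms can be sketched with plain java.util.concurrent (an illustrative example, not the patch's actual code; the class and method names here are invented):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockIdiom {
    static final ReentrantLock lock = new ReentrantLock();

    // Idiom: acquire before try. If lock() itself failed, we never enter
    // the try, so finally cannot try to release a lock we do not hold.
    static void lockOutsideTry() {
        lock.lock();
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }

    // Alternative: acquire inside try. If lock() threw here, the finally
    // block would still run and unlock() would throw
    // IllegalMonitorStateException from a thread that never owned the lock.
    static void lockInsideTry() {
        try {
            lock.lock();
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        lockOutsideTry();
        lockInsideTry();
        System.out.println("lock held afterwards: " + lock.isLocked());
    }
}
```

Both compile and behave identically on the happy path; they differ only in what happens when the acquisition itself fails.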

> Compacting Memstore : unnecessary push of active segments to pipeline
> -
>
> Key: HBASE-16162
> URL: https://issues.apache.org/jira/browse/HBASE-16162
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, 
> HBASE-16162_V3.patch, HBASE-16162_V4.patch
>
>
> We have flow like this
> {code}
> protected void checkActiveSize() {
>   if (shouldFlushInMemory()) {
>     InMemoryFlushRunnable runnable = new InMemoryFlushRunnable();
>     getPool().execute(runnable);
>   }
> }
> private boolean shouldFlushInMemory() {
> if(getActive().getSize() > inmemoryFlushSize) {
>   // size above flush threshold
>   return (allowCompaction.get() && !inMemoryFlushInProgress.get());
> }
> return false;
>   }
> void flushInMemory() throws IOException {
> // Phase I: Update the pipeline
> getRegionServices().blockUpdates();
> try {
>   MutableSegment active = getActive();
>   pushActiveToPipeline(active);
> } finally {
>   getRegionServices().unblockUpdates();
> }
> // Phase II: Compact the pipeline
> try {
>   if (allowCompaction.get() && 
> inMemoryFlushInProgress.compareAndSet(false, true)) {
> // setting the inMemoryFlushInProgress flag again for the case this 
> method is invoked
> // directly (only in tests) in the common path setting from true to 
> true is idempotent
> // Speculative compaction execution, may be interrupted if flush is 
> forced while
> // compaction is in progress
> compactor.startCompaction();
>   }
> {code}
> So every cell write triggers checkActiveSize(). When we are at the border 
> of an in-memory flush, many threads writing to this memstore can pass 
> checkActiveSize(). Yes, the AtomicBoolean is still false; it is turned on 
> only after some time, once the new thread has started running and pushed 
> the active segment to the pipeline.
> In the new thread's in-memory-flush code, there is no size check. It just 
> takes the active segment and pushes it to the pipeline. Yes, we don't allow 
> any new writes to the memstore at that time. But before that write lock on 
> the region is taken, other handler threads might also have added entries to 
> this thread pool. When the first one finishes, it releases the lock on the 
> region, and handler threads waiting to write to the memstore may get the 
> lock and add some data. Now the second in-memory-flush thread may get a 
> chance, take the lock, and simply flush the current active segment in 
> memory! This produces very small segments in the pipeline.





[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363933#comment-15363933
 ] 

Hadoop QA commented on HBASE-16144:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 35s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 100m 13s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
34s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 152m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint |
|   | hadoop.hbase.replication.TestMultiSlaveReplication |
|   | hadoop.hbase.replication.TestReplicationSmallTests |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816369/HBASE-16144-v5.patch |
| JIRA Issue | HBASE-16144 |
| Optional Tests |  asflicense  javac

[jira] [Commented] (HBASE-14548) Expand how table coprocessor jar and dependency path can be specified

2016-07-06 Thread li xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363937#comment-15363937
 ] 

li xiang commented on HBASE-14548:
--

Hi Jerry, thanks for your comments!

1. Yes, it still works if the user specifies a single jar. I verified it 
using HBase 1.2.0.
2. If a directory is specified, all jars (*.jar) directly in that directory 
are included. The code does not search the subtree rooted at the directory, 
i.e. its sub-directories. I highlighted that in the doc.
3. Yes, wildcards are supported in the last component of the path string, 
such as "/user/hbase/\*.jar" or "/user/hbase/coprocessor\*.jar".

> Expand how table coprocessor jar and dependency path can be specified
> -
>
> Key: HBASE-14548
> URL: https://issues.apache.org/jira/browse/HBASE-14548
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: li xiang
> Fix For: 2.0.0
>
> Attachments: HBASE-14548-1.2.0-v0.patch, HBASE-14548-1.2.0-v1.patch, 
> HBASE-14548-1.2.0-v2.patch, HBASE-14548-master-v1.patch, 
> HBASE-14548-master-v2.patch
>
>
> Currently you can specify the location of the coprocessor jar in the table 
> coprocessor attribute.
> The problem is that it only allows you to specify one jar that implements 
> the coprocessor. You will need to either bundle all the dependencies into 
> this jar, or copy the dependencies into the HBase lib dir.
> The first option may not always be ideal. The second can be troublesome 
> too, particularly when HBase region server nodes and dirs are dynamically 
> added/created.
> There are a couple of things we can expand here. We can allow the 
> coprocessor attribute to specify a directory location, probably on HDFS.
> We may even allow wildcards in there.





[jira] [Created] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-06 Thread li xiang (JIRA)
li xiang created HBASE-16183:


 Summary: Correct errors in example program of coprocessor in Ref 
Guide
 Key: HBASE-16183
 URL: https://issues.apache.org/jira/browse/HBASE-16183
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: li xiang
Assignee: li xiang
Priority: Minor








[jira] [Commented] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-06 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363947#comment-15363947
 ] 

Yu Li commented on HBASE-16183:
---

[~water] Please add more details about the error and how to correct it, thanks.

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: li xiang
>Assignee: li xiang
>Priority: Minor
>






[jira] [Commented] (HBASE-16172) Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas

2016-07-06 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363979#comment-15363979
 ] 

Nicolas Liochon commented on HBASE-16172:
-

bq.  is there any needs about 'synchronized' of 
RpcRetryingCallerWithReadReplicas.call() ?
It looks like the "synchronized" can be safely removed.


> Unify the retry logic in ScannerCallableWithReplicas and 
> RpcRetryingCallerWithReadReplicas
> --
>
> Key: HBASE-16172
> URL: https://issues.apache.org/jira/browse/HBASE-16172
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Ted Yu
> Attachments: 16172.v1.txt, 16172.v2.txt
>
>
> The issue is pointed out by [~devaraj] in HBASE-16132 (Thanks D.D.), that in 
> {{RpcRetryingCallerWithReadReplicas#call}} we will call 
> {{ResultBoundedCompletionService#take}} instead of {{poll}} to dead-wait on 
> the second one if the first replica timed out, while in 
> {{ScannerCallableWithReplicas#call}} we still use 
> {{ResultBoundedCompletionService#poll}} with some timeout for the 2nd replica.
> This JIRA aims at discussing whether to unify the logic in these two kinds of 
> caller with region replica and taking action if necessary.





[jira] [Commented] (HBASE-16181) Backup of hbase:backup table

2016-07-06 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364006#comment-15364006
 ] 

Heng Chen commented on HBASE-16181:
---

Currently, no table under the "hbase" namespace can be snapshotted; I think 
that is unreasonable. Only the meta table needs to forbid snapshots.

+1 for fixing snapshots.



> Backup of hbase:backup table
> 
>
> Key: HBASE-16181
> URL: https://issues.apache.org/jira/browse/HBASE-16181
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>
> Snapshot of HBase system tables is not supported, we need either move 
> hbase:backup into different name space or fix snapshots.





[jira] [Commented] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

2016-07-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364017#comment-15364017
 ] 

Anoop Sam John commented on HBASE-16162:


Checking other places within the HBase code base, the idiom is 
{code}
take lock
try {
...
} finally {
  release lock
}
{code}
FYI

> Compacting Memstore : unnecessary push of active segments to pipeline
> -
>
> Key: HBASE-16162
> URL: https://issues.apache.org/jira/browse/HBASE-16162
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, 
> HBASE-16162_V3.patch, HBASE-16162_V4.patch
>
>
> We have flow like this
> {code}
> protected void checkActiveSize() {
>   if (shouldFlushInMemory()) {
>     InMemoryFlushRunnable runnable = new InMemoryFlushRunnable();
>     getPool().execute(runnable);
>   }
> }
> private boolean shouldFlushInMemory() {
> if(getActive().getSize() > inmemoryFlushSize) {
>   // size above flush threshold
>   return (allowCompaction.get() && !inMemoryFlushInProgress.get());
> }
> return false;
>   }
> void flushInMemory() throws IOException {
> // Phase I: Update the pipeline
> getRegionServices().blockUpdates();
> try {
>   MutableSegment active = getActive();
>   pushActiveToPipeline(active);
> } finally {
>   getRegionServices().unblockUpdates();
> }
> // Phase II: Compact the pipeline
> try {
>   if (allowCompaction.get() && 
> inMemoryFlushInProgress.compareAndSet(false, true)) {
> // setting the inMemoryFlushInProgress flag again for the case this 
> method is invoked
> // directly (only in tests) in the common path setting from true to 
> true is idempotent
> // Speculative compaction execution, may be interrupted if flush is 
> forced while
> // compaction is in progress
> compactor.startCompaction();
>   }
> {code}
> So every cell write triggers checkActiveSize(). When we are at the border 
> of an in-memory flush, many threads writing to this memstore can pass 
> checkActiveSize(). Yes, the AtomicBoolean is still false; it is turned on 
> only after some time, once the new thread has started running and pushed 
> the active segment to the pipeline.
> In the new thread's in-memory-flush code, there is no size check. It just 
> takes the active segment and pushes it to the pipeline. Yes, we don't allow 
> any new writes to the memstore at that time. But before that write lock on 
> the region is taken, other handler threads might also have added entries to 
> this thread pool. When the first one finishes, it releases the lock on the 
> region, and handler threads waiting to write to the memstore may get the 
> lock and add some data. Now the second in-memory-flush thread may get a 
> chance, take the lock, and simply flush the current active segment in 
> memory! This produces very small segments in the pipeline.





[jira] [Comment Edited] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

2016-07-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364017#comment-15364017
 ] 

Anoop Sam John edited comment on HBASE-16162 at 7/6/16 9:01 AM:


Checking other places within the HBase code base, the idiom is 
{code}
take lock
try {
...
} finally {
  release lock
}
{code}
FYI

The critical section is just an atomic boolean set; we don't expect it to 
throw any exception anyway.
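The compareAndSet hand-off discussed in the issue can be illustrated with a small stand-alone sketch (plain java.util.concurrent, not the memstore code; the names below are invented):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean inMemoryFlushInProgress = new AtomicBoolean(false);
        AtomicInteger winners = new AtomicInteger();
        CountDownLatch start = new CountDownLatch(1);

        // Many writer threads reach the flush threshold together; only the
        // one whose compareAndSet(false, true) succeeds starts the flush.
        Runnable attemptFlush = () -> {
            try {
                start.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            if (inMemoryFlushInProgress.compareAndSet(false, true)) {
                winners.incrementAndGet(); // the single flush starter
            }
        };

        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(attemptFlush);
            threads[i].start();
        }
        start.countDown(); // release all threads at once
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("flush starters: " + winners.get()); // always 1
    }
}
```

Exactly one thread wins the CAS regardless of scheduling, which is why setting the flag from true to true never happens in the racing path.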


was (Author: anoop.hbase):
Check in other places within HBase code base. It is 
{code}
take lock
try{
...
} finally {
  release lock
}
{code}
FYI

> Compacting Memstore : unnecessary push of active segments to pipeline
> -
>
> Key: HBASE-16162
> URL: https://issues.apache.org/jira/browse/HBASE-16162
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, 
> HBASE-16162_V3.patch, HBASE-16162_V4.patch
>
>
> We have flow like this
> {code}
> protected void checkActiveSize() {
>   if (shouldFlushInMemory()) {
>     InMemoryFlushRunnable runnable = new InMemoryFlushRunnable();
>     getPool().execute(runnable);
>   }
> }
> private boolean shouldFlushInMemory() {
> if(getActive().getSize() > inmemoryFlushSize) {
>   // size above flush threshold
>   return (allowCompaction.get() && !inMemoryFlushInProgress.get());
> }
> return false;
>   }
> void flushInMemory() throws IOException {
> // Phase I: Update the pipeline
> getRegionServices().blockUpdates();
> try {
>   MutableSegment active = getActive();
>   pushActiveToPipeline(active);
> } finally {
>   getRegionServices().unblockUpdates();
> }
> // Phase II: Compact the pipeline
> try {
>   if (allowCompaction.get() && 
> inMemoryFlushInProgress.compareAndSet(false, true)) {
> // setting the inMemoryFlushInProgress flag again for the case this 
> method is invoked
> // directly (only in tests) in the common path setting from true to 
> true is idempotent
> // Speculative compaction execution, may be interrupted if flush is 
> forced while
> // compaction is in progress
> compactor.startCompaction();
>   }
> {code}
> So every cell write triggers checkActiveSize(). When we are at the border 
> of an in-memory flush, many threads writing to this memstore can pass 
> checkActiveSize(). Yes, the AtomicBoolean is still false; it is turned on 
> only after some time, once the new thread has started running and pushed 
> the active segment to the pipeline.
> In the new thread's in-memory-flush code, there is no size check. It just 
> takes the active segment and pushes it to the pipeline. Yes, we don't allow 
> any new writes to the memstore at that time. But before that write lock on 
> the region is taken, other handler threads might also have added entries to 
> this thread pool. When the first one finishes, it releases the lock on the 
> region, and handler threads waiting to write to the memstore may get the 
> lock and add some data. Now the second in-memory-flush thread may get a 
> chance, take the lock, and simply flush the current active segment in 
> memory! This produces very small segments in the pipeline.





[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364026#comment-15364026
 ] 

Phil Yang commented on HBASE-16144:
---

The newly committed HBASE-16087 breaks the latest tests; let me fix that.

> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-branch-1-v1.patch, 
> HBASE-16144-branch-1.1-v1.patch, HBASE-16144-v1.patch, HBASE-16144-v2.patch, 
> HBASE-16144-v3.patch, HBASE-16144-v4.patch, HBASE-16144-v5.patch
>
>
> By default, we use a multi operation when we claimQueues from ZK. But if 
> we set hbase.zookeeper.useMulti=false, we add a lock first, then copy 
> nodes, and finally clean up the old queue and the lock. 
> However, if the RS acquiring the lock crashes before claimQueues is done, 
> the lock will stay there forever and no other RS can ever claim the queue.





[jira] [Updated] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-16144:
--
Attachment: HBASE-16144-0.98.v1.patch

> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-0.98.v1.patch, 
> HBASE-16144-branch-1-v1.patch, HBASE-16144-branch-1.1-v1.patch, 
> HBASE-16144-v1.patch, HBASE-16144-v2.patch, HBASE-16144-v3.patch, 
> HBASE-16144-v4.patch, HBASE-16144-v5.patch
>
>
> By default, we use a multi operation when we claimQueues from ZK. But if
> we set hbase.zookeeper.useMulti=false, we add a lock first, then copy the
> nodes, and finally clean up the old queue and the lock.
> However, if the RS acquiring the lock crashes before claimQueues is done, the lock
> will stay there forever and no other RS can ever claim the queue.





[jira] [Commented] (HBASE-16168) Split cache usage into tables

2016-07-06 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364034#comment-15364034
 ] 

Heng Chen commented on HBASE-16168:
---

Seems to be a duplicate of HBASE-15643.

> Split cache usage into tables
> -
>
> Key: HBASE-16168
> URL: https://issues.apache.org/jira/browse/HBASE-16168
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.1.1, 0.98.20
>Reporter: darion yaphet
>
> Currently all tables on one region server share a single set of cache usage
> statistics, so it is hard to tell how much cache memory each table uses. I think
> we should split the cache usage statistics by table. This makes it much more
> convenient to see each table's share of the cache.
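The proposed split could look roughly like the following: one counter per table name instead of a single region-server-wide figure. This is a sketch with made-up names, not HBase's actual BlockCache API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch of per-table cache accounting: instead of one region-server-wide
// counter, each table name maps to its own byte counter. Class and method
// names are illustrative, not part of HBase's BlockCache interface.
public class PerTableCacheStats {
    private final Map<String, LongAdder> cachedBytes = new ConcurrentHashMap<>();

    /** Record that `bytes` of a block belonging to `table` entered the cache. */
    void recordCached(String table, long bytes) {
        cachedBytes.computeIfAbsent(table, t -> new LongAdder()).add(bytes);
    }

    /** Current cached bytes attributed to `table` (0 if never cached). */
    long usage(String table) {
        LongAdder sum = cachedBytes.get(table);
        return sum == null ? 0L : sum.sum();
    }
}
```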





[jira] [Commented] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

2016-07-06 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364033#comment-15364033
 ] 

Eshcar Hillel commented on HBASE-16162:
---

OK. My mistake.

> Compacting Memstore : unnecessary push of active segments to pipeline
> -
>
> Key: HBASE-16162
> URL: https://issues.apache.org/jira/browse/HBASE-16162
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, 
> HBASE-16162_V3.patch, HBASE-16162_V4.patch
>
>
> We have a flow like this:
> {code}
> protected void checkActiveSize() {
>   if (shouldFlushInMemory()) {
>     InMemoryFlushRunnable runnable = new InMemoryFlushRunnable();
>     getPool().execute(runnable);
>   }
> }
>
> private boolean shouldFlushInMemory() {
>   if (getActive().getSize() > inmemoryFlushSize) {
>     // size above flush threshold
>     return (allowCompaction.get() && !inMemoryFlushInProgress.get());
>   }
>   return false;
> }
>
> void flushInMemory() throws IOException {
>   // Phase I: Update the pipeline
>   getRegionServices().blockUpdates();
>   try {
>     MutableSegment active = getActive();
>     pushActiveToPipeline(active);
>   } finally {
>     getRegionServices().unblockUpdates();
>   }
>   // Phase II: Compact the pipeline
>   try {
>     if (allowCompaction.get() && inMemoryFlushInProgress.compareAndSet(false, true)) {
>       // setting the inMemoryFlushInProgress flag again for the case this method is invoked
>       // directly (only in tests); in the common path setting from true to true is idempotent
>       // Speculative compaction execution, may be interrupted if flush is forced while
>       // compaction is in progress
>       compactor.startCompaction();
>     }
> {code}
> So every cell write triggers checkActiveSize(). When we
> are at the border of an in-memory flush, many threads writing to this memstore
> can pass checkActiveSize(). Yes, the AtomicBoolean is still
> false at that point; it is turned ON only some time later, once the new thread has
> started running and has pushed the active segment to the pipeline, etc.
> The new in-memory flush thread's code has no size check. It just
> takes the active segment and pushes it to the pipeline. Yes, we don't allow any
> new writes to the memstore at this time. But before that write lock on the
> region is taken, other handler threads might also have added entries to this thread pool.
> When the 1st one finishes, it releases the lock on the region, and handler
> threads trying to write to the memstore might get the lock and add some data. Now
> this 2nd in-memory flush thread may get a chance, take the lock, and simply
> flush the current active segment in memory! This will push
> very small segments to the pipeline.
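One way to avoid the tiny segments described above is to re-check the active segment's size inside the flush task itself, so a second queued task simply skips the flush. The sketch below is self-contained and illustrative: `FLUSH_THRESHOLD`, `activeSize`, and the class name are stand-ins, not HBase's CompactingMemStore code.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative guard: the queued in-memory flush task re-verifies the active
// segment's size before pushing it to the pipeline, so a stale task queued
// during the race window does not flush a near-empty segment.
public class InMemFlushGuard {
    static final long FLUSH_THRESHOLD = 1024; // stand-in for inmemoryFlushSize
    final AtomicLong activeSize = new AtomicLong();     // active segment bytes
    final AtomicBoolean inMemoryFlushInProgress = new AtomicBoolean(false);

    /** Returns true only if this task actually flushed the active segment. */
    boolean flushInMemory() {
        // Re-check: a task queued before the previous flush sees a small
        // (freshly reset) active segment and bails out instead of flushing it.
        if (activeSize.get() <= FLUSH_THRESHOLD) {
            return false;
        }
        if (!inMemoryFlushInProgress.compareAndSet(false, true)) {
            return false; // another in-memory flush is already running
        }
        try {
            activeSize.set(0); // stands in for pushActiveToPipeline(active)
            return true;
        } finally {
            inMemoryFlushInProgress.set(false);
        }
    }
}
```

With this guard, a second queued task returns false instead of pushing a tiny segment to the pipeline.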





[jira] [Updated] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-16144:
--
Attachment: HBASE-16144-branch-1.1-v2.patch

> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-0.98.v1.patch, 
> HBASE-16144-branch-1-v1.patch, HBASE-16144-branch-1.1-v1.patch, 
> HBASE-16144-branch-1.1-v2.patch, HBASE-16144-v1.patch, HBASE-16144-v2.patch, 
> HBASE-16144-v3.patch, HBASE-16144-v4.patch, HBASE-16144-v5.patch
>
>
> By default, we use a multi operation when we claimQueues from ZK. But if
> we set hbase.zookeeper.useMulti=false, we add a lock first, then copy the
> nodes, and finally clean up the old queue and the lock.
> However, if the RS acquiring the lock crashes before claimQueues is done, the lock
> will stay there forever and no other RS can ever claim the queue.





[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364044#comment-15364044
 ] 

Hadoop QA commented on HBASE-16144:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
8s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 18s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 32s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 83m 18s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 128m 16s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816375/HBASE-16144-branch-1-v1.patch
 |
| JIRA Issue | HBASE-16144 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56

[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364052#comment-15364052
 ] 

Hadoop QA commented on HBASE-16144:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
20s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} branch-1.1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} branch-1.1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 59s 
{color} | {color:red} hbase-client in branch-1.1 has 15 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 47s 
{color} | {color:red} hbase-server in branch-1.1 has 79 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hbase-client in branch-1.1 failed with JDK v1.8.0. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hbase-client in the patch failed with JDK v1.8.0. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 40s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
39s {color} | {color:green} Patch does not generate ASF License 

[jira] [Updated] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-16144:
--
Attachment: HBASE-16144-branch-1-v2.patch

> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-0.98.v1.patch, 
> HBASE-16144-branch-1-v1.patch, HBASE-16144-branch-1-v2.patch, 
> HBASE-16144-branch-1.1-v1.patch, HBASE-16144-branch-1.1-v2.patch, 
> HBASE-16144-v1.patch, HBASE-16144-v2.patch, HBASE-16144-v3.patch, 
> HBASE-16144-v4.patch, HBASE-16144-v5.patch
>
>
> By default, we use a multi operation when we claimQueues from ZK. But if
> we set hbase.zookeeper.useMulti=false, we add a lock first, then copy the
> nodes, and finally clean up the old queue and the lock.
> However, if the RS acquiring the lock crashes before claimQueues is done, the lock
> will stay there forever and no other RS can ever claim the queue.





[jira] [Updated] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-16144:
--
Attachment: HBASE-16144-v6.patch

> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-0.98.v1.patch, 
> HBASE-16144-branch-1-v1.patch, HBASE-16144-branch-1-v2.patch, 
> HBASE-16144-branch-1.1-v1.patch, HBASE-16144-branch-1.1-v2.patch, 
> HBASE-16144-v1.patch, HBASE-16144-v2.patch, HBASE-16144-v3.patch, 
> HBASE-16144-v4.patch, HBASE-16144-v5.patch, HBASE-16144-v6.patch
>
>
> By default, we use a multi operation when we claimQueues from ZK. But if
> we set hbase.zookeeper.useMulti=false, we add a lock first, then copy the
> nodes, and finally clean up the old queue and the lock.
> However, if the RS acquiring the lock crashes before claimQueues is done, the lock
> will stay there forever and no other RS can ever claim the queue.





[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364077#comment-15364077
 ] 

Phil Yang commented on HBASE-16144:
---

All patches uploaded.

> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-0.98.v1.patch, 
> HBASE-16144-branch-1-v1.patch, HBASE-16144-branch-1-v2.patch, 
> HBASE-16144-branch-1.1-v1.patch, HBASE-16144-branch-1.1-v2.patch, 
> HBASE-16144-v1.patch, HBASE-16144-v2.patch, HBASE-16144-v3.patch, 
> HBASE-16144-v4.patch, HBASE-16144-v5.patch, HBASE-16144-v6.patch
>
>
> By default, we use a multi operation when we claimQueues from ZK. But if
> we set hbase.zookeeper.useMulti=false, we add a lock first, then copy the
> nodes, and finally clean up the old queue and the lock.
> However, if the RS acquiring the lock crashes before claimQueues is done, the lock
> will stay there forever and no other RS can ever claim the queue.





[jira] [Updated] (HBASE-16172) Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas

2016-07-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16172:
---
Attachment: 16172.v2.txt

> Unify the retry logic in ScannerCallableWithReplicas and 
> RpcRetryingCallerWithReadReplicas
> --
>
> Key: HBASE-16172
> URL: https://issues.apache.org/jira/browse/HBASE-16172
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Ted Yu
> Attachments: 16172.v1.txt, 16172.v2.txt, 16172.v2.txt
>
>
> The issue was pointed out by [~devaraj] in HBASE-16132 (thanks, D.D.): in
> {{RpcRetryingCallerWithReadReplicas#call}} we call
> {{ResultBoundedCompletionService#take}} instead of {{poll}}, dead-waiting on
> the second replica if the first one timed out, while in
> {{ScannerCallableWithReplicas#call}} we still use
> {{ResultBoundedCompletionService#poll}} with some timeout for the 2nd replica.
> This JIRA aims to discuss whether to unify the logic in these two kinds of
> region-replica caller, and to take action if necessary.
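The behavioural difference between the two waits can be shown with plain java.util.concurrent (a generic demo, not ResultBoundedCompletionService itself): take() would block until some task completes, while poll(timeout) bounds the wait and returns null on expiry:

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Generic demo of poll-with-timeout vs. take, not HBase's
// ResultBoundedCompletionService. A slow "primary" and a fast "replica" are
// submitted; poll(timeout) returns the first completed result, or null if
// nothing finishes in time (where take() would keep blocking).
public class ReplicaWaitSketch {
    public static String firstResult(ExecutorService pool, long timeoutMs)
            throws Exception {
        CompletionService<String> cs = new ExecutorCompletionService<>(pool);
        cs.submit(() -> { Thread.sleep(5_000); return "primary"; }); // slow
        cs.submit(() -> "replica");                                  // fast
        Future<String> first = cs.poll(timeoutMs, TimeUnit.MILLISECONDS);
        return first == null ? null : first.get();
    }
}
```

With a pool of two threads and a 2-second timeout, the fast "replica" result comes back well within the bound; the slow "primary" is simply abandoned.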





[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-06 Thread li xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

li xiang updated HBASE-16183:
-
Description: 
1. In Section 89.3.3
change 
  String path = "hdfs://:/user//coprocessor.jar";
into 
  Path path = new Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
reason
  The second parameter of HTableDescriptor.addCoprocessor() is org.apache.hadoop.fs.Path, not String.
  See http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
change
  HBaseAdmin admin = new HBaseAdmin(conf);
into 
  Connection connection = ConnectionFactory.createConnection(conf);
  Admin admin = connection.getAdmin();
reason
  HBASE-12083 deprecates new HBaseAdmin(); an Admin instance is supposed to be obtained from Connection.getAdmin().
  Also see http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
change
  public void preGetOp(final ObserverContext e, final Get get, final List results)
into
  public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)

change 
  List kvs = new ArrayList(results.size());
into
  List<Cell> kvs = new ArrayList<Cell>(results.size());

change
  public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
into
  public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,

change
  public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
  final List results, final int limit, final boolean hasMore) throws IOException {
into
  public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
  final List<Result> results, final int limit, final boolean hasMore) throws IOException {

change
  Iterator iterator = results.iterator();
into
  Iterator<Result> iterator = results.iterator();

reason
  Generics

4. In section 90.1
change
  preGet(e, get, kvs);
into 
  super.preGetOp(e, get, kvs);
reason
  There is no function called preGet() provided by BaseRegionObserver or its super class/interface. I believe we need to call preGetOp() of the super class of RegionObserverExample here.

 5. In section 90.1
change
  kvs.add(KeyValueUtil.ensureKeyValue(c));
into
  kvs.add(c);
reason
  KeyValueUtil.ensureKeyValue() is deprecated.
  See http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
  and https://issues.apache.org/jira/browse/HBASE-12079

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: li xiang
>Assignee: li xiang
>Priority: Minor
>
> 1. In Section 89.3.3
> change 
>   String path = "hdfs://:/user//coprocessor.jar";
> into 
>   Path path = new Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
> reason
>   The second parameter of HTableDescriptor.addCoprocessor() is org.apache.hadoop.fs.Path, not String.
>   See http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html
> 2. In Section 89.3.3
> change
>   HBaseAdmin admin = new HBaseAdmin(conf);
> into 
>   Connection connection = ConnectionFactory.createConnection(conf);
>   Admin admin = connection.getAdmin();
> reason
>   HBASE-12083 deprecates new HBaseAdmin(); an Admin instance is supposed to be obtained from Connection.getAdmin().
>   Also see http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html
> 3. In section 90.1
> change
>   public void preGetOp(final ObserverContext e, final Get get, final List results)
> into
>   public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)
> change 
>   List kvs = new ArrayList(results.size());
> into
>   List<Cell> kvs = new ArrayList<Cell>(results.size());
> change
>   public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
> into
>   public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
> change
>   public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
>   final List results, final int limit, final boolean hasMore) throws IOException {
> into
>   public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
>   final List<Result> results, final int limit, final boolean hasMore) throws IOException {
> change
>   Iterator iterator = results.iterator();
> into
>   Iterator<Result> iterator = results.iterator();
> reason
>   Generics
> 4. In section 90.1
> change
>   preGet(e, get, kvs);
> into 
>   super.preGetOp(e, get, kvs);
> reason
>   There is no function called preGet() provided by BaseRegionObserver or its super class/interface. I believe we need to call preGetOp() of the super class of RegionObserverExample 

[jira] [Commented] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-06 Thread li xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364147#comment-15364147
 ] 

li xiang commented on HBASE-16183:
--

Hi Yu, I updated the description. Sorry...

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: li xiang
>Assignee: li xiang
>Priority: Minor
>
> 1. In Section 89.3.3
> change 
>   String path = "hdfs://:/user//coprocessor.jar";
> into 
>   Path path = new Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
> reason
>   The second parameter of HTableDescriptor.addCoprocessor() is org.apache.hadoop.fs.Path, not String.
>   See http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html
> 2. In Section 89.3.3
> change
>   HBaseAdmin admin = new HBaseAdmin(conf);
> into 
>   Connection connection = ConnectionFactory.createConnection(conf);
>   Admin admin = connection.getAdmin();
> reason
>   HBASE-12083 deprecates new HBaseAdmin(); an Admin instance is supposed to be obtained from Connection.getAdmin().
>   Also see http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html
> 3. In section 90.1
> change
>   public void preGetOp(final ObserverContext e, final Get get, final List results)
> into
>   public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)
> change 
>   List kvs = new ArrayList(results.size());
> into
>   List<Cell> kvs = new ArrayList<Cell>(results.size());
> change
>   public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
> into
>   public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
> change
>   public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
>   final List results, final int limit, final boolean hasMore) throws IOException {
> into
>   public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
>   final List<Result> results, final int limit, final boolean hasMore) throws IOException {
> change
>   Iterator iterator = results.iterator();
> into
>   Iterator<Result> iterator = results.iterator();
> reason
>   Generics
> 4. In section 90.1
> change
>   preGet(e, get, kvs);
> into 
>   super.preGetOp(e, get, kvs);
> reason
>   There is no function called preGet() provided by BaseRegionObserver or its super class/interface. I believe we need to call preGetOp() of the super class of RegionObserverExample here.
>  5. In section 90.1
> change
>   kvs.add(KeyValueUtil.ensureKeyValue(c));
> into
>   kvs.add(c);
> reason
>   KeyValueUtil.ensureKeyValue() is deprecated.
>   See http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
>   and https://issues.apache.org/jira/browse/HBASE-12079





[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-06 Thread li xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

li xiang updated HBASE-16183:
-
Description: 
1. In Section 89.3.3
change
{code}
String path = "hdfs://:/user//coprocessor.jar";
{code}
into 
  Path path = new 
Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
reason
  The second parameter of HTableDescriptor.addCoprocessor() is 
org.apache.hadoop.fs.Path, not String.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
change
  HBaseAdmin admin = new HBaseAdmin(conf);
into 
  Connection connection = ConnectionFactory.createConnection(conf);
  Admin admin = connection.getAdmin();
reason
  HBASE-12083 makes new HBaseAdmin() deprecated and the instance of Admin is 
supposed to get from Connection.getAdmin()
  Also see 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
change
  public void preGetOp(final ObserverContext e, final Get get, final List results)
into
  public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)

change
  List kvs = new ArrayList(results.size());
into
  List<Cell> kvs = new ArrayList<Cell>(results.size());

change
  public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
into
  public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,

change
  public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
  final List results, final int limit, final boolean hasMore) throws IOException {
into
  public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
  final List<Result> results, final int limit, final boolean hasMore) throws IOException {

change
  Iterator iterator = results.iterator();
into
  Iterator<Result> iterator = results.iterator();

reason
  The generic type parameters are missing.

4. In section 90.1
change
  preGet(e, get, kvs);
into 
  super.preGetOp(e, get, kvs);
reason
  There is no method named preGet() in BaseRegionObserver or its superclasses or 
interfaces; I believe the example needs to call preGetOp() of the superclass of 
RegionObserverExample here.

 5. In section 90.1
change
  kvs.add(KeyValueUtil.ensureKeyValue(c));
into
  kvs.add(c);
reason
  KeyValueUtil.ensureKeyValue() is deprecated.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
  and https://issues.apache.org/jira/browse/HBASE-12079

  was:
1. In Section 89.3.3
change 
  String path = "hdfs://:/user//coprocessor.jar";
into 
  Path path = new 
Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
reason
  The second parameter of HTableDescriptor.addCoprocessor() is 
org.apache.hadoop.fs.Path, not String.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
change
  HBaseAdmin admin = new HBaseAdmin(conf);
into 
  Connection connection = ConnectionFactory.createConnection(conf);
  Admin admin = connection.getAdmin();
reason
  HBASE-12083 deprecates new HBaseAdmin(); an Admin instance should instead be 
obtained from Connection.getAdmin().
  Also see 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
change
  public void preGetOp(final ObserverContext e, final Get get, final List results)
into
  public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)

change
  List kvs = new ArrayList(results.size());
into
  List<Cell> kvs = new ArrayList<Cell>(results.size());

change
  public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
into
  public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,

change
  public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
  final List results, final int limit, final boolean hasMore) throws IOException {
into
  public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
  final List<Result> results, final int limit, final boolean hasMore) throws IOException {

change
  Iterator iterator = results.iterator();
into
  Iterator<Result> iterator = results.iterator();

reason
  The generic type parameters are missing.

4. In section 90.1
change
  preGet(e, get, kvs);
into 
  super.preGetOp(e, get, kvs);
reason
  There is no method named preGet() in BaseRegionObserver or its superclasses or 
interfaces; I believe the example needs to call preGetOp() of the superclass of 
RegionObserverExample here.

 5. In section 90.1
change
  kvs.add(KeyValueUtil.ensureKeyValue(c));
into
  kvs.add(c);
reason
  KeyValueUtil.ensureKeyValue() is deprecated.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
  and https://issues.apache.org/jira/browse/HBASE-12079


> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183

[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-06 Thread li xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

li xiang updated HBASE-16183:
-
Description: 
1. In Section 89.3.3
change
{code}
String path = "hdfs://:/user//coprocessor.jar";
{code}
into
{code}
Path path = new 
Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
{code}
reason
  The second parameter of HTableDescriptor.addCoprocessor() is 
org.apache.hadoop.fs.Path, not String.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
change
{code}
HBaseAdmin admin = new HBaseAdmin(conf);
{code}
into
{code}
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
{code}
reason
  HBASE-12083 deprecates new HBaseAdmin(); an Admin instance should instead be 
obtained from Connection.getAdmin().
  Also see 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
change
{code}
public void preGetOp(final ObserverContext e, final Get get, final List results)
{code}
into
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)
{code}
change
{code}
List kvs = new ArrayList(results.size());
{code}
into
{code}
List<Cell> kvs = new ArrayList<Cell>(results.size());
{code}
change
{code}
public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
{code}
into
{code}
public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
{code}
change
{code}
public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
final List results, final int limit, final boolean hasMore) throws IOException {
{code}
into
{code}
public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
final List<Result> results, final int limit, final boolean hasMore) throws IOException {
{code}
change
{code}
Iterator iterator = results.iterator();
{code}
into
{code}
Iterator<Result> iterator = results.iterator();
{code}
reason
  The generic type parameters are missing.

4. In section 90.1
change
{code}
preGet(e, get, kvs);
{code}
into
{code}
super.preGetOp(e, get, kvs);
{code}
reason
  There is no method named preGet() in BaseRegionObserver or its superclasses or 
interfaces; I believe the example needs to call preGetOp() of the superclass of 
RegionObserverExample here.

 5. In section 90.1
change
{code}
kvs.add(KeyValueUtil.ensureKeyValue(c));
{code}
into
{code}
kvs.add(c);
{code}
reason
  KeyValueUtil.ensureKeyValue() is deprecated.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
  and https://issues.apache.org/jira/browse/HBASE-12079

  was:
1. In Section 89.3.3
change
{code}
String path = "hdfs://:/user//coprocessor.jar";
{code}
into
{code}
Path path = new Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
{code}
reason
  The second parameter of HTableDescriptor.addCoprocessor() is 
org.apache.hadoop.fs.Path, not String.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
change
  HBaseAdmin admin = new HBaseAdmin(conf);
into 
  Connection connection = ConnectionFactory.createConnection(conf);
  Admin admin = connection.getAdmin();
reason
  HBASE-12083 deprecates new HBaseAdmin(); an Admin instance should instead be 
obtained from Connection.getAdmin().
  Also see 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
change
  public void preGetOp(final ObserverContext e, final Get get, final List results)
into
  public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)

change
  List kvs = new ArrayList(results.size());
into
  List<Cell> kvs = new ArrayList<Cell>(results.size());

change
  public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
into
  public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,

change
  public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
  final List results, final int limit, final boolean hasMore) throws IOException {
into
  public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
  final List<Result> results, final int limit, final boolean hasMore) throws IOException {

change
  Iterator iterator = results.iterator();
into
  Iterator<Result> iterator = results.iterator();

reason
  The generic type parameters are missing.

4. In section 90.1
change
  preGet(e, get, kvs);
into 
  super.preGetOp(e, get, kvs);
reason
  There is no method named preGet() in BaseRegionObserver or its superclasses or 
interfaces; I believe the example needs to call preGetOp() of the superclass of 
RegionObserverExample here.

 5. In section 90.1
change
  kvs.add(KeyValueUtil.ensureKeyValue(c));
into
  kvs.add(c);
reason
  KeyValueUtil.ensureKeyValue() is deprecated.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
  and https://issues.apache.org/jira/browse/HBASE-12079


> Correct errors in example program of

[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-06 Thread li xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

li xiang updated HBASE-16183:
-
Description: 
1. In Section 89.3.3
change
{code}
String path = "hdfs://:/user//coprocessor.jar";
{code}
into
{code}
Path path = new 
Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
{code}
reason
  The second parameter of HTableDescriptor.addCoprocessor() is 
org.apache.hadoop.fs.Path, not String.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
change
{code}
HBaseAdmin admin = new HBaseAdmin(conf);
{code}
into
{code}
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
{code}
reason
  HBASE-12083 deprecates new HBaseAdmin(); an Admin instance should instead be 
obtained from Connection.getAdmin().
  Also see 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
change
{code}
public void preGetOp(final ObserverContext e, final Get get, final List results)
{code}
into
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)
{code}
change
{code}
List kvs = new ArrayList(results.size());
{code}
into
{code}
List<Cell> kvs = new ArrayList<Cell>(results.size());
{code}
change
{code}
public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
{code}
into
{code}
public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
{code}
change
{code}
public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
final List results, final int limit, final boolean hasMore) throws IOException {
{code}
into
{code}
public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
final List<Result> results, final int limit, final boolean hasMore) throws IOException {
{code}
change
{code}
Iterator iterator = results.iterator();
{code}
into
{code}
Iterator<Result> iterator = results.iterator();
{code}
reason
  The generic type parameters are missing.

4. In section 90.1
change
{code}
preGet(e, get, kvs);
{code}
into
{code}
super.preGetOp(e, get, kvs);
{code}
reason
  There is no method named preGet() in BaseRegionObserver or its superclasses or 
interfaces; I believe the example needs to call preGetOp() of the superclass of 
RegionObserverExample here.

 5. In section 90.1
change
{code}
kvs.add(KeyValueUtil.ensureKeyValue(c));
{code}
into
{code}
kvs.add(c);
{code}
reason
  KeyValueUtil.ensureKeyValue() is deprecated.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
  and https://issues.apache.org/jira/browse/HBASE-12079

  was:
1. In Section 89.3.3
change
{code}
String path = "hdfs://:/user//coprocessor.jar";
{code}
into
{code}
Path path = new 
Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
{code}
reason
  The second parameter of HTableDescriptor.addCoprocessor() is 
org.apache.hadoop.fs.Path, not String.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
change
{code}
HBaseAdmin admin = new HBaseAdmin(conf);
{code}
into
{code}
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
{code}
reason
  HBASE-12083 deprecates new HBaseAdmin(); an Admin instance should instead be 
obtained from Connection.getAdmin().
  Also see 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
change
{code}
public void preGetOp(final ObserverContext e, final Get get, final List results)
{code}
into
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)
{code}
change
{code}
List kvs = new ArrayList(results.size());
{code}
into
{code}
List<Cell> kvs = new ArrayList<Cell>(results.size());
{code}
change
{code}
public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
{code}
into
{code}
public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
{code}
change
{code}
public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
final List results, final int limit, final boolean hasMore) throws IOException {
{code}
into
{code}
public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
final List<Result> results, final int limit, final boolean hasMore) throws IOException {
{code}
change
{code}
Iterator iterator = results.iterator();
{code}
into
{code}
Iterator<Result> iterator = results.iterator();
{code}
reason
  The generic type parameters are missing.

4. In section 90.1
change
{code}
preGet(e, get, kvs);
{code}
into
{code}
super.preGetOp(e, get, kvs);
{code}
reason
  There is no method named preGet() in BaseRegionObserver or its superclasses or 
interfaces; I believe the example needs to call preGetOp() of the superclass of 
RegionObserverExample here.

 5. In section 90.1
change
{code}
kvs.add(KeyValueUtil.ensureKeyValue(c));
{code}
into
{code}
kvs.add(c);
{code}
reason
  KeyValueUtil.ensureKeyValue() is d

[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-06 Thread li xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

li xiang updated HBASE-16183:
-
Description: 
1. In Section 89.3.3
change
{code}
String path = "hdfs://:/user//coprocessor.jar";
{code}
into
{code}
Path path = new 
Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
{code}
Reason:
  The second parameter of HTableDescriptor.addCoprocessor() is 
org.apache.hadoop.fs.Path, not String.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
change
{code}
HBaseAdmin admin = new HBaseAdmin(conf);
{code}
into
{code}
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
{code}
Reason:
  HBASE-12083 deprecates new HBaseAdmin(); an Admin instance should instead be 
obtained from Connection.getAdmin().
  Also see 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
change
{code}
public void preGetOp(final ObserverContext e, final Get get, final List results)
{code}
into
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)
{code}
change
{code}
List kvs = new ArrayList(results.size());
{code}
into
{code}
List<Cell> kvs = new ArrayList<Cell>(results.size());
{code}
change
{code}
public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
{code}
into
{code}
public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
{code}
change
{code}
public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
final List results, final int limit, final boolean hasMore) throws IOException {
{code}
into
{code}
public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
final List<Result> results, final int limit, final boolean hasMore) throws IOException {
{code}
change
{code}
Iterator iterator = results.iterator();
{code}
into
{code}
Iterator<Result> iterator = results.iterator();
{code}
Reason:
  The generic type parameters are missing.
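The missing type parameters matter because the example's filtering loop depends on them: with raw types every element comes back as Object and needs a cast. Below is a minimal, self-contained sketch of the same iterator-removal pattern, using plain java.util collections and an illustrative row-prefix filter in place of HBase's List<Result> (class and method names here are hypothetical, not from the Ref Guide):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class GenericFilterSketch {
    // Mirrors the shape of the postScannerNext filtering loop: walk the
    // results with a typed Iterator and remove matching entries in place.
    static List<String> dropRows(List<String> results, String prefix) {
        Iterator<String> iterator = results.iterator();
        while (iterator.hasNext()) {
            String row = iterator.next();  // no cast needed, thanks to <String>
            if (row.startsWith(prefix)) {
                iterator.remove();         // safe structural removal mid-iteration
            }
        }
        return results;
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<String>(Arrays.asList("row1", "secret-row", "row2"));
        System.out.println(dropRows(rows, "secret"));  // [row1, row2]
    }
}
```

Using Iterator.remove() rather than List.remove() inside the loop avoids a ConcurrentModificationException, which is why the Ref Guide example iterates this way.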

4. In section 90.1
change
{code}
preGet(e, get, kvs);
{code}
into
{code}
super.preGetOp(e, get, kvs);
{code}
Reason:
  There is no method named preGet() in BaseRegionObserver or its superclasses or 
interfaces; I believe the example needs to call preGetOp() of the superclass of 
RegionObserverExample here.
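The fix in item 4 is the usual pattern of delegating to an inherited hook via super. A self-contained sketch under stated assumptions: BaseObserver and ObserverExample below are hypothetical stand-ins for BaseRegionObserver and RegionObserverExample, not the HBase types.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for BaseRegionObserver: default hooks are no-ops.
class BaseObserver {
    public void preGetOp(String row, List<String> results) {
        // Intentionally empty, like BaseRegionObserver's default methods.
    }
}

public class ObserverExample extends BaseObserver {
    @Override
    public void preGetOp(String row, List<String> results) {
        if ("hidden".equals(row)) {
            results.add("masked");      // custom behavior before delegating
        }
        super.preGetOp(row, results);   // delegate to the inherited hook, as the fix suggests
    }

    public static void main(String[] args) {
        List<String> results = new ArrayList<String>();
        new ObserverExample().preGetOp("hidden", results);
        System.out.println(results);    // [masked]
    }
}
```

Calling super.preGetOp(...) keeps whatever default behavior the base class provides, instead of invoking a nonexistent preGet().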

5. In section 90.1
change
{code}
kvs.add(KeyValueUtil.ensureKeyValue(c));
{code}
into
{code}
kvs.add(c);
{code}
Reason:
  KeyValueUtil.ensureKeyValue() is deprecated.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
  and https://issues.apache.org/jira/browse/HBASE-12079

  was:
1. In Section 89.3.3
change
{code}
String path = "hdfs://:/user//coprocessor.jar";
{code}
into
{code}
Path path = new 
Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
{code}
reason
  The second parameter of HTableDescriptor.addCoprocessor() is 
org.apache.hadoop.fs.Path, not String.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
change
{code}
HBaseAdmin admin = new HBaseAdmin(conf);
{code}
into
{code}
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
{code}
reason
  HBASE-12083 deprecates new HBaseAdmin(); an Admin instance should instead be 
obtained from Connection.getAdmin().
  Also see 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
change
{code}
public void preGetOp(final ObserverContext e, final Get get, final List results)
{code}
into
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)
{code}
change
{code}
List kvs = new ArrayList(results.size());
{code}
into
{code}
List<Cell> kvs = new ArrayList<Cell>(results.size());
{code}
change
{code}
public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
{code}
into
{code}
public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
{code}
change
{code}
public boolean postScannerNext(final ObserverContext e, final InternalScanner s,
final List results, final int limit, final boolean hasMore) throws IOException {
{code}
into
{code}
public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
final List<Result> results, final int limit, final boolean hasMore) throws IOException {
{code}
change
{code}
Iterator iterator = results.iterator();
{code}
into
{code}
Iterator<Result> iterator = results.iterator();
{code}
reason
  The generic type parameters are missing.

4. In section 90.1
change
{code}
preGet(e, get, kvs);
{code}
into
{code}
super.preGetOp(e, get, kvs);
{code}
reason
  There is no method named preGet() in BaseRegionObserver or its superclasses or 
interfaces; I believe the example needs to call preGetOp() of the superclass of 
RegionObserverExample here.

 5. In section 90.1
change
{code}
kvs.add(KeyValueUtil.ensureKeyValue(c));
{code}
into
{code}
kvs.add(c);
{code}
reason
  KeyValueUtil.ensureKeyValue(

[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364168#comment-15364168
 ] 

Hadoop QA commented on HBASE-16144:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
6s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} branch-1.1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} branch-1.1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 0s 
{color} | {color:red} hbase-client in branch-1.1 has 15 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 47s 
{color} | {color:red} hbase-server in branch-1.1 has 79 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s 
{color} | {color:red} hbase-client in branch-1.1 failed with JDK v1.8.0. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s 
{color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 15s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hbase-client in the patch failed with JDK v1.8.0. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 37s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 20s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
40s {color} | {color:green} Patch does not generate ASF License wa

[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364182#comment-15364182
 ] 

Hadoop QA commented on HBASE-16144:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
48s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 51s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 12s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 78m 42s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 115m 35s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816397/HBASE-16144-branch-1-v2.patch
 |
| JIRA Issue | HBASE-16144 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:

[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364197#comment-15364197
 ] 

Hadoop QA commented on HBASE-16144:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
9s {color} | {color:green} 0.98 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} 0.98 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} 0.98 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
14s {color} | {color:green} 0.98 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} 0.98 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 53s 
{color} | {color:red} hbase-client in 0.98 has 19 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 37s 
{color} | {color:red} hbase-server in 0.98 has 85 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-client in 0.98 failed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 34s 
{color} | {color:red} hbase-server in 0.98 failed with JDK v1.8.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} 0.98 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hbase-client in the patch failed with JDK v1.8.0. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 36s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 125m 14s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
43s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {col

[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364238#comment-15364238
 ] 

Hadoop QA commented on HBASE-16144:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 20s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 139m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
|   | hadoop.hbase.replication.TestReplicationSmallTests |
| Timed out junit tests | org.apache.hadoop.hbase.client.TestAdmin2 |
|   | org.apache.hadoop.hbase.client.TestHCM |
|   | org.apache.hadoop.hbase.client.TestReplicasClient |
|   | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient |
|   | 
org.apache.hadoop.hbase.client.Tes

[jira] [Updated] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

2016-07-06 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16162:
---
Affects Version/s: 2.0.0

> Compacting Memstore : unnecessary push of active segments to pipeline
> -
>
> Key: HBASE-16162
> URL: https://issues.apache.org/jira/browse/HBASE-16162
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, 
> HBASE-16162_V3.patch, HBASE-16162_V4.patch
>
>
> We have a flow like this:
> {code}
> protected void checkActiveSize() {
>   if (shouldFlushInMemory()) {
>     InMemoryFlushRunnable runnable = new InMemoryFlushRunnable();
>     getPool().execute(runnable);
>   }
> }
>
> private boolean shouldFlushInMemory() {
>   if (getActive().getSize() > inmemoryFlushSize) {
>     // size above flush threshold
>     return (allowCompaction.get() && !inMemoryFlushInProgress.get());
>   }
>   return false;
> }
>
> void flushInMemory() throws IOException {
>   // Phase I: Update the pipeline
>   getRegionServices().blockUpdates();
>   try {
>     MutableSegment active = getActive();
>     pushActiveToPipeline(active);
>   } finally {
>     getRegionServices().unblockUpdates();
>   }
>   // Phase II: Compact the pipeline
>   try {
>     if (allowCompaction.get() && inMemoryFlushInProgress.compareAndSet(false, true)) {
>       // setting the inMemoryFlushInProgress flag again for the case this method is
>       // invoked directly (only in tests); in the common path setting from true to
>       // true is idempotent
>       // Speculative compaction execution, may be interrupted if flush is forced
>       // while compaction is in progress
>       compactor.startCompaction();
>     }
> {code}
> So every cell write triggers the checkActiveSize() check. When we are at the
> border of an in-memory flush, many threads writing to this memstore can pass
> checkActiveSize(), because the AtomicBoolean is still false; it is turned on
> only later, once the new thread has started running and pushed the active
> segment to the pipeline.
> The in-memory flush code itself has no size check: it simply takes the active
> segment and pushes it to the pipeline. New writes to the memstore are blocked
> while this happens, but before that write lock on the region is taken, other
> handler threads may already have queued entries in the thread pool. When the
> first flush finishes, it releases the region lock; the handler threads waiting
> to write to the memstore acquire it and add some data, and then the second
> in-memory flush thread gets the lock and flushes the current active segment in
> memory. This produces very small segments in the pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
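The race described in the issue above, where several writer threads pass the size check before the AtomicBoolean flips, can be sketched with a small guard class. This is a hypothetical illustration, not HBase's actual patch: the class name FlushGuard and the threshold constant are invented. The point is that folding the compareAndSet into the size check lets exactly one thread win the right to schedule the in-memory flush.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical guard illustrating the fix idea: the CAS happens inside the
// size check itself, so only one caller can schedule an in-memory flush
// per flush cycle, no matter how many writers cross the threshold at once.
class FlushGuard {
    static final long FLUSH_SIZE = 1024;
    final AtomicBoolean inProgress = new AtomicBoolean(false);

    // Returns true for exactly one caller after the threshold is crossed;
    // every later caller sees the flag already set and returns false.
    boolean shouldFlushInMemory(long activeSize) {
        if (activeSize > FLUSH_SIZE) {
            return inProgress.compareAndSet(false, true);
        }
        return false;
    }
}
```

The flush thread would reset the flag (inProgress.set(false)) after pushing the active segment, reopening the gate for the next cycle.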


[jira] [Updated] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

2016-07-06 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16162:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Compacting Memstore : unnecessary push of active segments to pipeline
> -
>
> Key: HBASE-16162
> URL: https://issues.apache.org/jira/browse/HBASE-16162
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, 
> HBASE-16162_V3.patch, HBASE-16162_V4.patch
>
>





[jira] [Updated] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

2016-07-06 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16162:
---
Fix Version/s: 2.0.0

> Compacting Memstore : unnecessary push of active segments to pipeline
> -
>
> Key: HBASE-16162
> URL: https://issues.apache.org/jira/browse/HBASE-16162
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, 
> HBASE-16162_V3.patch, HBASE-16162_V4.patch
>
>





[jira] [Commented] (HBASE-16172) Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364302#comment-15364302
 ] 

Hadoop QA commented on HBASE-16172:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 100m 49s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 157m 9s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestHCM |
|   | hadoop.hbase.replication.TestReplicationSyncUpTool |
|   | hadoop.hbase.replication

[jira] [Commented] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364424#comment-15364424
 ] 

Hudson commented on HBASE-16162:


FAILURE: Integrated in HBase-Trunk_matrix #1179 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1179/])
HBASE-16162 Compacting Memstore : unnecessary push of active segments to 
(anoopsamjohn: rev 581d2b7de517ee29b81b62c521ef5ca27c41f38d)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreCompactor.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java


> Compacting Memstore : unnecessary push of active segments to pipeline
> -
>
> Key: HBASE-16162
> URL: https://issues.apache.org/jira/browse/HBASE-16162
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, 
> HBASE-16162_V3.patch, HBASE-16162_V4.patch
>
>





[jira] [Commented] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1536#comment-1536
 ] 

Reid Chan commented on HBASE-14345:
---

Hi,

I have fixed it. Can anyone give me permissions to attach a screenshot and a patch?

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Priority: Trivial
>  Labels: beginner
>
> Investigating the use of {{itlav}} is a little screwy. Subclasses are not 
> overriding the {{printUsage()}} methods correctly, so you have to pass 
> {{--help}} to get some info and no arguments to get the rest.
> {noformat}
> [hbase@ndimiduk-112rc2-7 ~]$ hbase 
> org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify --help
> usage: bin/hbase org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify 
> 
> Options:
>  -h,--help Show usage
>  -m,--monkey  Which chaos monkey to run
>  -monkeyProps The properties file for specifying chaos monkey 
> properties.
>  -ncc,--noClusterCleanUp   Don't clean up the cluster at the end
> [hbase@ndimiduk-112rc2-7 ~]$ hbase 
> org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify
> IntegrationTestLoadAndVerify [-Doptions] 
>   Loads a table with row dependencies and verifies the dependency chains
> Options
>   -Dloadmapper.table=Table to write/verify (default autogen)
>   -Dloadmapper.backrefs=Number of backreferences per row (default 
> 50)
>   -Dloadmapper.num_to_write=Number of rows per mapper (default 100,000 
> per mapper)
>   -Dloadmapper.deleteAfter=  Delete after a successful verify (default 
> true)
>   -Dloadmapper.numPresplits=Number of presplit regions to start with 
> (default 40)
>   -Dloadmapper.map.tasks=   Number of map tasks for load (default 200)
>   -Dverify.reduce.tasks=Number of reduce tasks for verify (default 
> 35)
>   -Dverify.scannercaching=  Number hbase scanner caching rows to read 
> (default 50)
> {noformat}



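The consolidation the report asks for can be sketched as an override pattern. This is a hypothetical, simplified sketch — the class and option names below are illustrative, not HBase's actual test hierarchy: the subclass overrides the base usage method and appends its own options, so {{--help}} and the bare invocation can print one consolidated block.

```java
// Hypothetical sketch of consolidated usage text; names are illustrative.
abstract class IntegrationTestBase {
    // Common options every integration test shares.
    protected String usage() {
        return "Options:\n  -h,--help  Show usage\n";
    }
}

class LoadAndVerifyTest extends IntegrationTestBase {
    // The subclass overrides usage() and appends its own -D options, so both
    // the --help path and the no-argument path print the same full text.
    @Override
    protected String usage() {
        return super.usage()
            + "  -Dloadmapper.table=<name>  Table to write/verify\n";
    }
}
```

With this shape, whichever code path handles {{--help}} or a missing-argument error calls the same usage() method, removing the split output the reporter describes.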


[jira] [Updated] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14345:
--
Status: Patch Available  (was: Open)

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Assignee: Reid Chan
>Priority: Trivial
>  Labels: beginner
>





[jira] [Assigned] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan reassigned HBASE-14345:
-

Assignee: Reid Chan

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Assignee: Reid Chan
>Priority: Trivial
>  Labels: beginner
>





[jira] [Updated] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14345:
--
Attachment: HBASE-14345.patch

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Assignee: Reid Chan
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-14345.patch
>
>
> Investigating the use of {{itlav}} is a little screwy. Subclasses are not 
> overriding the {{printUsage()}} methods correctly, so you have to pass 
> {{--help}} to get some info and no arguments to get the rest.
> {noformat}
> [hbase@ndimiduk-112rc2-7 ~]$ hbase 
> org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify --help
> usage: bin/hbase org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify 
> 
> Options:
>  -h,--help Show usage
>  -m,--monkey  Which chaos monkey to run
>  -monkeyProps The properties file for specifying chaos monkey 
> properties.
>  -ncc,--noClusterCleanUp   Don't clean up the cluster at the end
> [hbase@ndimiduk-112rc2-7 ~]$ hbase 
> org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify
> IntegrationTestLoadAndVerify [-Doptions] 
>   Loads a table with row dependencies and verifies the dependency chains
> Options
>   -Dloadmapper.table=Table to write/verify (default autogen)
>   -Dloadmapper.backrefs=Number of backreferences per row (default 
> 50)
>   -Dloadmapper.num_to_write=Number of rows per mapper (default 100,000 
> per mapper)
>   -Dloadmapper.deleteAfter=  Delete after a successful verify (default 
> true)
>   -Dloadmapper.numPresplits=Number of presplit regions to start with 
> (default 40)
>   -Dloadmapper.map.tasks=   Number of map tasks for load (default 200)
>   -Dverify.reduce.tasks=Number of reduce tasks for verify (default 
> 35)
>   -Dverify.scannercaching=  Number hbase scanner caching rows to read 
> (default 50)
> {noformat}
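The consolidation the issue asks for can be sketched as a base class that owns a single usage method which subclasses extend, so common options and tool-specific options always print together instead of being split between the {{--help}} path and the no-argument path. All class and method names below are hypothetical illustrations, not HBase's actual API:

```java
// Hypothetical sketch: one overridable hook so every help path prints
// the common options plus the subclass's own options in one place.
abstract class AbstractIntegrationTool {
    // Subclasses append their tool-specific option lines here.
    protected String getToolUsage() {
        return "";
    }

    // Single consolidated usage string used by --help and the no-arg path.
    public final String buildUsage() {
        StringBuilder sb = new StringBuilder();
        sb.append("Options:\n");
        sb.append("  -h,--help     Show usage\n");
        sb.append("  -m,--monkey   Which chaos monkey to run\n");
        sb.append(getToolUsage());
        return sb.toString();
    }
}

class LoadAndVerifyTool extends AbstractIntegrationTool {
    @Override
    protected String getToolUsage() {
        return "  -Dloadmapper.table=<name>   Table to write/verify (default autogen)\n";
    }
}

public class UsageDemo {
    public static void main(String[] args) {
        // Common and tool-specific options now appear in a single listing.
        System.out.print(new LoadAndVerifyTool().buildUsage());
    }
}
```

With this shape, a subclass cannot accidentally shadow the base usage method: it only contributes its own lines.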





[jira] [Updated] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14345:
--
Status: Open  (was: Patch Available)

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Assignee: Reid Chan
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-14345.patch
>
>





[jira] [Updated] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14345:
--
Attachment: HBASE-14345.patch

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Assignee: Reid Chan
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-14345.patch, itlv.png
>
>





[jira] [Updated] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14345:
--
Attachment: itlv.png

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Assignee: Reid Chan
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-14345.patch, itlv.png
>
>





[jira] [Updated] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14345:
--
Attachment: (was: HBASE-14345.patch)

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Assignee: Reid Chan
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-14345.patch, itlv.png
>
>





[jira] [Updated] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14345:
--
Status: Patch Available  (was: Open)

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Assignee: Reid Chan
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-14345.patch, itlv.png
>
>





[jira] [Commented] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364452#comment-15364452
 ] 

Reid Chan commented on HBASE-14345:
---

Attached.

Thanks.

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Assignee: Reid Chan
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-14345.patch, itlv.png
>
>





[jira] [Created] (HBASE-16184) Shell test fails due to undefined method `getAgeOfLastAppliedOp'

2016-07-06 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16184:
--

 Summary: Shell test fails due to undefined method 
`getAgeOfLastAppliedOp'
 Key: HBASE-16184
 URL: https://issues.apache.org/jira/browse/HBASE-16184
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


Running TestShell ended up with the following on master branch:
{code}
  1) Error:
test_Get_replication_sink_metrics_information(Hbase::AdminAlterTableTest):
NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
/home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'

file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
 `each'
/home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
/home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
`replication_status'
./src/test/ruby/hbase/admin_test.rb:427:in 
`test_Get_replication_sink_metrics_information'
org/jruby/RubyProc.java:270:in `call'
org/jruby/RubyKernel.java:2105:in `send'
org/jruby/RubyArray.java:1620:in `each'
org/jruby/RubyArray.java:1620:in `each'

  2) Error:
test_Get_replication_source_metrics_information(Hbase::AdminAlterTableTest):
NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
/home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'

file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
 `each'
/home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
/home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
`replication_status'
./src/test/ruby/hbase/admin_test.rb:423:in 
`test_Get_replication_source_metrics_information'
org/jruby/RubyProc.java:270:in `call'
org/jruby/RubyKernel.java:2105:in `send'
org/jruby/RubyArray.java:1620:in `each'
org/jruby/RubyArray.java:1620:in `each'

  3) Error:
test_Get_replication_status(Hbase::AdminAlterTableTest):
NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
/home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'

file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
 `each'
/home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
/home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
`replication_status'
./src/test/ruby/hbase/admin_test.rb:419:in `test_Get_replication_status'
org/jruby/RubyProc.java:270:in `call'
org/jruby/RubyKernel.java:2105:in `send'
org/jruby/RubyArray.java:1620:in `each'
org/jruby/RubyArray.java:1620:in `each'
{code}
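The NoMethodError above is a nil dereference: the sink metrics object was absent when {{status}} called {{getAgeOfLastAppliedOp}}. A minimal Java analogue of the guard that avoids this failure mode, with illustrative names rather than the actual shell/admin API:

```java
// Hypothetical sketch: guard against an absent metrics source before
// dereferencing it, mirroring the nil call that broke admin.rb.
class SinkMetrics {
    long getAgeOfLastAppliedOp() {
        return 42L;  // stand-in value for demonstration
    }
}

public class MetricsGuardDemo {
    // Returns a printable age, or "N/A" when no sink metrics are available.
    static String ageOfLastAppliedOp(SinkMetrics metrics) {
        if (metrics == null) {
            return "N/A";  // avoids the NullPointerException / NoMethodError path
        }
        return Long.toString(metrics.getAgeOfLastAppliedOp());
    }

    public static void main(String[] args) {
        System.out.println(ageOfLastAppliedOp(null));              // N/A
        System.out.println(ageOfLastAppliedOp(new SinkMetrics())); // 42
    }
}
```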





[jira] [Created] (HBASE-16185) TestReplicationSmallTests fails in master branch

2016-07-06 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16185:
--

 Summary: TestReplicationSmallTests fails in master branch
 Key: HBASE-16185
 URL: https://issues.apache.org/jira/browse/HBASE-16185
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu


As of commit 581d2b7de517ee29b81b62c521ef5ca27c41f38d, the following test failure 
can be reproduced:
{code}
testReplicationStatus(org.apache.hadoop.hbase.replication.TestReplicationSmallTests)
  Time elapsed: 2.691 sec  <<< FAILURE!
java.lang.AssertionError: failed to get ReplicationLoadSourceList
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hbase.replication.TestReplicationSmallTests.testReplicationStatus(TestReplicationSmallTests.java:741)
{code}





[jira] [Commented] (HBASE-16185) TestReplicationSmallTests fails in master branch

2016-07-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364468#comment-15364468
 ] 

Ted Yu commented on HBASE-16185:


In the test output, I saw:
{code}
2016-07-06 08:21:15,438 DEBUG 
[RpcServer.FifoWFPBQ.replication.handler=2,queue=0,port=46430] 
ipc.CallRunner(121): RpcServer.FifoWFPBQ.replication.handler=2,queue=0,   
port=46430: callId: 2 service: AdminService methodName: ReplicateWALEntry size: 
2.5 K connection: 172.18.128.12:50207
java.lang.NullPointerException
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2257)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)
2016-07-06 08:21:15,439 WARN  
[RS:1;cn012:55894.replicationSource.cn012.l42scl.hortonworks.com%2C55894%2C1467818460771,2]
 regionserver.
HBaseInterClusterReplicationEndpoint(278): Can't replicate because of an error 
on the remote cluster:
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.lang.NullPointerException):
 java.lang.NullPointerException
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2257)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)

  at 
org.apache.hadoop.hbase.ipc.AsyncServerResponseHandler.createRemoteException(AsyncServerResponseHandler.java:120)
  at 
org.apache.hadoop.hbase.ipc.AsyncServerResponseHandler.channelRead0(AsyncServerResponseHandler.java:76)
  at 
org.apache.hadoop.hbase.ipc.AsyncServerResponseHandler.channelRead0(AsyncServerResponseHandler.java:38)
  at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
  at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
  at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
  at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
  at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
  at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
  at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
  at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
{code}

> TestReplicationSmallTests fails in master branch
> 
>
> Key: HBASE-16185
> URL: https://issues.apache.org/jira/browse/HBASE-16185
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> As of commit 581d2b7de517ee29b81b62c521ef5ca27c41f38d, the following test 
> failure can be reproduced:
> {code}
> testReplicationStatus(org.apache.hadoop.hbase.replication.TestReplicationSmallTests)
>   Time elapsed: 2.691 sec  <<< FAILURE!
> java.lang.AssertionError: failed to get ReplicationLoadSourceList
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hbase.replication.TestReplicationSmallTests.testReplicationStatus(TestReplicationSmallTests.java:741)
> {code}





[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-07-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364534#comment-15364534
 ] 

Ted Yu commented on HBASE-16179:


See this thread:

http://search-hadoop.com/m/q3RTtL7Wg54KKcD&subj=Re+Discuss+commit+to+Scala+2+10+support+for+Spark+2+x+lifecycle

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> I tried building the hbase-spark module against a Spark 2.0 snapshot and got 
> the following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.





[jira] [Commented] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364536#comment-15364536
 ] 

Hadoop QA commented on HBASE-14345:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 4s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s 
{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816443/HBASE-14345.patch |
| JIRA Issue | HBASE-14345 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 581d2b7 |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2543/testReport/ |
| modules | C: hb

[jira] [Updated] (HBASE-16180) Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent

2016-07-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16180:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch-1.0+ (but not to master since it does not have this issue)

> Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent
> -
>
> Key: HBASE-16180
> URL: https://issues.apache.org/jira/browse/HBASE-16180
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: stack
>Assignee: stack
> Fix For: 1.3.0, 1.1.3, 1.0.3, 1.2.0
>
> Attachments: HBASE-16180.branch-1.001.patch
>
>






[jira] [Updated] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15988:
---
Description: When a new table is added to backup table set, the incremental 
backup involving the new table should be full backup.
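The rule described can be sketched as a small decision: if any requested table lacks a prior full backup image, the request is promoted from incremental to full. All names here are hypothetical, not the backup module's real API:

```java
import java.util.Set;

// Hypothetical sketch of the policy: a newly added table with no full
// backup image forces the whole request to be a full backup.
public class BackupPolicyDemo {
    enum BackupType { FULL, INCREMENTAL }

    // tablesWithFullBackup: tables that already have a full backup image.
    static BackupType resolveType(Set<String> requested, Set<String> tablesWithFullBackup) {
        for (String table : requested) {
            if (!tablesWithFullBackup.contains(table)) {
                return BackupType.FULL;  // newly added table, no baseline yet
            }
        }
        return BackupType.INCREMENTAL;  // every table has a full baseline
    }

    public static void main(String[] args) {
        Set<String> haveFull = Set.of("t1", "t2");
        System.out.println(resolveType(Set.of("t1", "t2"), haveFull)); // INCREMENTAL
        System.out.println(resolveType(Set.of("t1", "t3"), haveFull)); // FULL
    }
}
```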

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt
>
>
> When a new table is added to the backup table set, the incremental backup 
> involving the new table should be a full backup.





[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-07-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364551#comment-15364551
 ] 

Sean Busbey commented on HBASE-16179:
-

(here's that same thread on lists.apache: https://s.apache.org/BjIY )

The discussion seems kind of back and forth, to me. I think it says they're 
keeping it in 2.0 at least. Let's work under that presumption and we can ask 
them for clarification.

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> I tried building the hbase-spark module against a Spark 2.0 snapshot and got 
> the following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.





[jira] [Updated] (HBASE-14743) Add metrics around HeapMemoryManager

2016-07-06 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14743:
--
Attachment: (was: HBASE-14743.011.patch)

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.009.patch, HBASE-14743.009.rw3.patch, 
> HBASE-14743.009.v2.patch, HBASE-14743.010.patch, HBASE-14743.010.v2.patch, 
> Metrics snapshot 2016-6-30.png, Screen Shot 2016-06-16 at 5.39.13 PM.png, 
> test2_1.png, test2_2.png, test2_3.png, test2_4.png
>
>
> it would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.
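The counters asked for above can be sketched as a small set of atomic longs, one per tuner decision. This is a hypothetical illustration — the class and method names are invented, and it does not use HBase's actual metrics API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the requested HeapMemoryManager counters.
public class HeapTunerMetricsSketch {
    enum Decision { EXPAND_MEMSTORE, EXPAND_BLOCK_CACHE, DO_NOTHING }

    private final AtomicLong invocations = new AtomicLong();
    private final AtomicLong memstoreExpansions = new AtomicLong();
    private final AtomicLong blockCacheExpansions = new AtomicLong();
    private final AtomicLong noOps = new AtomicLong();

    // Called once per tuner run with the decision it made.
    void record(Decision d) {
        invocations.incrementAndGet();
        switch (d) {
            case EXPAND_MEMSTORE:    memstoreExpansions.incrementAndGet(); break;
            case EXPAND_BLOCK_CACHE: blockCacheExpansions.incrementAndGet(); break;
            case DO_NOTHING:         noOps.incrementAndGet(); break;
        }
    }

    long invocations()        { return invocations.get(); }
    long memstoreExpansions() { return memstoreExpansions.get(); }
    long noOps()              { return noOps.get(); }

    public static void main(String[] args) {
        HeapTunerMetricsSketch m = new HeapTunerMetricsSketch();
        m.record(Decision.EXPAND_MEMSTORE);
        m.record(Decision.DO_NOTHING);
        System.out.println(m.invocations());        // 2
        System.out.println(m.memstoreExpansions()); // 1
    }
}
```

With counters like these, a test can assert on exact decision counts instead of inferring tuner behavior from heap sizes, which is the cleanup the issue suggests.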





[jira] [Commented] (HBASE-16185) TestReplicationSmallTests fails in master branch

2016-07-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364593#comment-15364593
 ] 

stack commented on HBASE-16185:
---

Let me fill in missing info:

Commit 581d2b7de517ee29b81b62c521ef5ca27c41f38d is:


commit 581d2b7de517ee29b81b62c521ef5ca27c41f38d
Author: anoopsjohn 
Date:   Wed Jul 6 18:54:35 2016 +0530

HBASE-16162 Compacting Memstore : unnecessary push of active segments to 
pipeline.

FYI [~anoopsamjohn]

> TestReplicationSmallTests fails in master branch
> 
>
> Key: HBASE-16185
> URL: https://issues.apache.org/jira/browse/HBASE-16185
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> As of commit 581d2b7de517ee29b81b62c521ef5ca27c41f38d, the following test 
> failure can be reproduced:
> {code}
> testReplicationStatus(org.apache.hadoop.hbase.replication.TestReplicationSmallTests)
>   Time elapsed: 2.691 sec  <<< FAILURE!
> java.lang.AssertionError: failed to get ReplicationLoadSourceList
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hbase.replication.TestReplicationSmallTests.testReplicationStatus(TestReplicationSmallTests.java:741)
> {code}





[jira] [Commented] (HBASE-16184) Shell test fails due to undefined method `getAgeOfLastAppliedOp'

2016-07-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364594#comment-15364594
 ] 

stack commented on HBASE-16184:
---

What broke shell?

> Shell test fails due to undefined method `getAgeOfLastAppliedOp'
> 
>
> Key: HBASE-16184
> URL: https://issues.apache.org/jira/browse/HBASE-16184
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> Running TestShell ended up with the following on master branch:
> {code}
>   1) Error:
> test_Get_replication_sink_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:427:in 
> `test_Get_replication_sink_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   2) Error:
> test_Get_replication_source_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:423:in 
> `test_Get_replication_source_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   3) Error:
> test_Get_replication_status(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:419:in `test_Get_replication_status'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
> {code}





[jira] [Commented] (HBASE-16185) TestReplicationSmallTests fails in master branch

2016-07-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364599#comment-15364599
 ] 

Ted Yu commented on HBASE-16185:


HBASE-16162 touched CompactingMemstore, which may not be in the code path of 
TestReplicationSmallTests.

The test failure might be caused by some other recent check-in(s).

> TestReplicationSmallTests fails in master branch
> 
>
> Key: HBASE-16185
> URL: https://issues.apache.org/jira/browse/HBASE-16185
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> As of commit 581d2b7de517ee29b81b62c521ef5ca27c41f38d, the following test 
> failure can be reproduced:
> {code}
> testReplicationStatus(org.apache.hadoop.hbase.replication.TestReplicationSmallTests)
>   Time elapsed: 2.691 sec  <<< FAILURE!
> java.lang.AssertionError: failed to get ReplicationLoadSourceList
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hbase.replication.TestReplicationSmallTests.testReplicationStatus(TestReplicationSmallTests.java:741)
> {code}





[jira] [Updated] (HBASE-16171) Fix the potential problems in TestHCM.testConnectionCloseAllowsInterrupt

2016-07-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16171:
--
Attachment: HBASE-16171.001.patch

Retry

> Fix the potential problems in TestHCM.testConnectionCloseAllowsInterrupt
> 
>
> Key: HBASE-16171
> URL: https://issues.apache.org/jira/browse/HBASE-16171
> Project: HBase
>  Issue Type: Bug
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HBASE-16171.001.patch, HBASE-16171.001.patch
>
>
> TestHCM.testConnectionCloseAllowsInterrupt is not stable in QA runs and has 
> always failed there.





[jira] [Commented] (HBASE-16171) Fix the potential problems in TestHCM.testConnectionCloseAllowsInterrupt

2016-07-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364604#comment-15364604
 ] 

stack commented on HBASE-16171:
---

Where do you see the failures, [~colin_mjj], so I know where to commit? Thank 
you. Let me retry your patch. The failures seem unrelated, given you are 
changing TestHCM config only.

> Fix the potential problems in TestHCM.testConnectionCloseAllowsInterrupt
> 
>
> Key: HBASE-16171
> URL: https://issues.apache.org/jira/browse/HBASE-16171
> Project: HBase
>  Issue Type: Bug
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HBASE-16171.001.patch, HBASE-16171.001.patch
>
>
> TestHCM.testConnectionCloseAllowsInterrupt is not stable in QA runs and has 
> always failed there.





[jira] [Commented] (HBASE-16185) TestReplicationSmallTests fails in master branch

2016-07-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364606#comment-15364606
 ] 

Anoop Sam John commented on HBASE-16185:


This commit is very specific to a new feature, the compacting memstore, and 
touches files in that area. I don't think the failure is related to this commit.
I reverted that commit in my local box and ran the test, and I am getting the 
same failure again. FYI. We need to find the root cause.

> TestReplicationSmallTests fails in master branch
> 
>
> Key: HBASE-16185
> URL: https://issues.apache.org/jira/browse/HBASE-16185
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> As of commit 581d2b7de517ee29b81b62c521ef5ca27c41f38d, the following test 
> failure can be reproduced:
> {code}
> testReplicationStatus(org.apache.hadoop.hbase.replication.TestReplicationSmallTests)
>   Time elapsed: 2.691 sec  <<< FAILURE!
> java.lang.AssertionError: failed to get ReplicationLoadSourceList
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hbase.replication.TestReplicationSmallTests.testReplicationStatus(TestReplicationSmallTests.java:741)
> {code}





[jira] [Commented] (HBASE-16185) TestReplicationSmallTests fails in master branch

2016-07-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364611#comment-15364611
 ] 

Anoop Sam John commented on HBASE-16185:


20a99b4c06ecb77c29c3ff173052a00174b9af8c.
Reverting to that commit seems to make the tests pass.

> TestReplicationSmallTests fails in master branch
> 
>
> Key: HBASE-16185
> URL: https://issues.apache.org/jira/browse/HBASE-16185
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> As of commit 581d2b7de517ee29b81b62c521ef5ca27c41f38d, the following test 
> failure can be reproduced:
> {code}
> testReplicationStatus(org.apache.hadoop.hbase.replication.TestReplicationSmallTests)
>   Time elapsed: 2.691 sec  <<< FAILURE!
> java.lang.AssertionError: failed to get ReplicationLoadSourceList
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hbase.replication.TestReplicationSmallTests.testReplicationStatus(TestReplicationSmallTests.java:741)
> {code}





[jira] [Commented] (HBASE-16185) TestReplicationSmallTests fails in master branch

2016-07-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364625#comment-15364625
 ] 

Ted Yu commented on HBASE-16185:


20a99b4c06ecb77c29c3ff173052a00174b9af8c itself was a revert.

git reset --hard 20a99b4c06ecb77c29c3ff173052a00174b9af8c

Then the test passes.

> TestReplicationSmallTests fails in master branch
> 
>
> Key: HBASE-16185
> URL: https://issues.apache.org/jira/browse/HBASE-16185
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> As of commit 581d2b7de517ee29b81b62c521ef5ca27c41f38d, the following test 
> failure can be reproduced:
> {code}
> testReplicationStatus(org.apache.hadoop.hbase.replication.TestReplicationSmallTests)
>   Time elapsed: 2.691 sec  <<< FAILURE!
> java.lang.AssertionError: failed to get ReplicationLoadSourceList
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hbase.replication.TestReplicationSmallTests.testReplicationStatus(TestReplicationSmallTests.java:741)
> {code}





[jira] [Updated] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15988:
---
Attachment: 15988.v2.txt

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt, 15988.v2.txt
>
>
> When a new table is added to a backup table set, the incremental backup 
> involving the new table should be a full backup.





[jira] [Commented] (HBASE-16091) Canary takes lot more time when there are delete markers in the table

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364638#comment-15364638
 ] 

Hudson commented on HBASE-16091:


FAILURE: Integrated in HBase-0.98-matrix #364 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/364/])
HBASE-16091 Canary takes lot more time when there are delete markers in 
(apurtell: rev e3ef8b69bf6834b8a1b7e33aee53792e8ef1f7cb)
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestCanaryTool.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java


> Canary takes lot more time when there are delete markers in the table
> -
>
> Key: HBASE-16091
> URL: https://issues.apache.org/jira/browse/HBASE-16091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-16091.00.patch, HBASE-16091.01.patch, 
> HBASE-16091.02.patch
>
>
> We have a table which has a lot of delete markers, and we run a Canary test on 
> a regular interval; sometimes the tests time out because reading the first 
> row has to skip all these delete markers. Since the purpose of Canary is to 
> check the health of the region, I think setting raw=true would not defeat that 
> purpose but would provide a good perf improvement. 
> The following is an example of one such scan. 
> Without changing code, it took 62.3 sec for one region scan:
> 2016-06-23 08:49:11,670 INFO  [pool-2-thread-1] tool.Canary - read from 
> region  . column family 0 in 62338ms
> Whereas after setting raw=true, it reduced to 58ms:
> 2016-06-23 08:45:20,259 INFO  [pool-2-thread-1] tests.Canary - read from 
> region . column family 0 in 58ms
> Taking this over multiple tables, with multiple regions, would be a good 
> performance gain.
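The speedup quoted above comes from not having to resolve delete markers before returning a row. A toy model of the difference, in plain Java — this is not the HBase client API (in the real client the switch is made with something like `Scan.setRaw(true)`), and all names here are invented for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class RawScanSketch {
    // Toy model of a region's cells: either a live put or a delete marker.
    static class Cell {
        final String row;
        final boolean deleteMarker;
        Cell(String row, boolean deleteMarker) {
            this.row = row;
            this.deleteMarker = deleteMarker;
        }
    }

    // Normal read: the scanner must walk past every delete marker before it
    // can return the first live row -- cost grows with the marker count.
    static String firstRowNormal(List<Cell> cells) {
        for (Cell c : cells) {
            if (!c.deleteMarker) {
                return c.row;
            }
        }
        return null;
    }

    // Raw read: return the first physical cell, delete markers included.
    // That is enough to prove the region answers reads, which is all a
    // health-check canary needs.
    static String firstRowRaw(List<Cell> cells) {
        return cells.isEmpty() ? null : cells.get(0).row;
    }

    public static void main(String[] args) {
        List<Cell> cells = Arrays.asList(
                new Cell("r1", true),    // delete marker
                new Cell("r2", true),    // delete marker
                new Cell("r3", false));  // first live row
        System.out.println(firstRowNormal(cells)); // r3
        System.out.println(firstRowRaw(cells));    // r1
    }
}
```

In the toy model the normal read scales with the number of markers while the raw read is constant-time, which mirrors the 62338ms vs 58ms numbers in the report.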





[jira] [Commented] (HBASE-16185) TestReplicationSmallTests fails in master branch

2016-07-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364644#comment-15364644
 ] 

Anoop Sam John commented on HBASE-16185:


I am sorry. The commit that caused the failure is 
ae92668dd6eff5271ceeecc435165f5fc14fab48.
Resetting (reverting) to 20a99b4c06ecb77c29c3ff173052a00174b9af8c passed 
the test.


> TestReplicationSmallTests fails in master branch
> 
>
> Key: HBASE-16185
> URL: https://issues.apache.org/jira/browse/HBASE-16185
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> As of commit 581d2b7de517ee29b81b62c521ef5ca27c41f38d, the following test 
> failure can be reproduced:
> {code}
> testReplicationStatus(org.apache.hadoop.hbase.replication.TestReplicationSmallTests)
>   Time elapsed: 2.691 sec  <<< FAILURE!
> java.lang.AssertionError: failed to get ReplicationLoadSourceList
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hbase.replication.TestReplicationSmallTests.testReplicationStatus(TestReplicationSmallTests.java:741)
> {code}





[jira] [Commented] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364639#comment-15364639
 ] 

Hudson commented on HBASE-15650:


FAILURE: Integrated in HBase-0.98-matrix #364 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/364/])
Revert "HBASE-15650 Remove TimeRangeTracker as point of contention when 
(apurtell: rev 47c19607c2c88c44f226940a2357087294fe70a5)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MockStoreFile.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java


> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.4.0
>
> Attachments: 15650.branch-1.2.patch, 15650.branch-1.patch, 
> 15650.branch-1.patch, 15650.patch, 15650.patch, 15650v2.branch-1.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 15650v5.patch, 15650v6.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.





[jira] [Commented] (HBASE-16091) Canary takes lot more time when there are delete markers in the table

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364680#comment-15364680
 ] 

Hudson commented on HBASE-16091:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1236 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1236/])
HBASE-16091 Canary takes lot more time when there are delete markers in 
(apurtell: rev e3ef8b69bf6834b8a1b7e33aee53792e8ef1f7cb)
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestCanaryTool.java


> Canary takes lot more time when there are delete markers in the table
> -
>
> Key: HBASE-16091
> URL: https://issues.apache.org/jira/browse/HBASE-16091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-16091.00.patch, HBASE-16091.01.patch, 
> HBASE-16091.02.patch
>
>
> We have a table which has a lot of delete markers, and we run a Canary test on 
> a regular interval; sometimes the tests time out because reading the first 
> row has to skip all these delete markers. Since the purpose of Canary is to 
> check the health of the region, I think setting raw=true would not defeat that 
> purpose but would provide a good perf improvement. 
> The following is an example of one such scan. 
> Without changing code, it took 62.3 sec for one region scan:
> 2016-06-23 08:49:11,670 INFO  [pool-2-thread-1] tool.Canary - read from 
> region  . column family 0 in 62338ms
> Whereas after setting raw=true, it reduced to 58ms:
> 2016-06-23 08:45:20,259 INFO  [pool-2-thread-1] tests.Canary - read from 
> region . column family 0 in 58ms
> Taking this over multiple tables, with multiple regions, would be a good 
> performance gain.





[jira] [Commented] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364681#comment-15364681
 ] 

Hudson commented on HBASE-15650:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1236 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1236/])
Revert "HBASE-15650 Remove TimeRangeTracker as point of contention when 
(apurtell: rev 47c19607c2c88c44f226940a2357087294fe70a5)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MockStoreFile.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java


> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.4.0
>
> Attachments: 15650.branch-1.2.patch, 15650.branch-1.patch, 
> 15650.branch-1.patch, 15650.patch, 15650.patch, 15650v2.branch-1.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 15650v5.patch, 15650v6.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.





[jira] [Commented] (HBASE-16137) Fix findbugs warning introduced by hbase-14730

2016-07-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364708#comment-15364708
 ] 

Anoop Sam John commented on HBASE-16137:


HBASE-16180 added an annotation to suppress the warning.

> Fix findbugs warning introduced by hbase-14730
> --
>
> Key: HBASE-16137
> URL: https://issues.apache.org/jira/browse/HBASE-16137
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.3.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
>
> From stack:
> "Lads. This patch makes for a new findbugs warning: 
> https://builds.apache.org/job/PreCommit-HBASE-Build/2390/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
> If you are good w/ the code, i can fix the findbugs warning... just say."





[jira] [Updated] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15988:
---
Status: Patch Available  (was: Open)

Ran the tests in the org.apache.hadoop.hbase.backup package; all of them passed.

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt, 15988.v2.txt
>
>
> When a new table is added to a backup table set, the incremental backup 
> involving the new table should be a full backup.





[jira] [Commented] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364730#comment-15364730
 ] 

Hadoop QA commented on HBASE-15988:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} 
| {color:red} HBASE-15988 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816459/15988.v2.txt |
| JIRA Issue | HBASE-15988 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2545/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt, 15988.v2.txt
>
>
> When a new table is added to a backup table set, the incremental backup 
> involving the new table should be a full backup.





[jira] [Updated] (HBASE-16184) Shell test fails due to undefined method `getAgeOfLastAppliedOp' for nil:NilClass

2016-07-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16184:
---
Summary: Shell test fails due to undefined method `getAgeOfLastAppliedOp' 
for nil:NilClass  (was: Shell test fails due to undefined method 
`getAgeOfLastAppliedOp')

> Shell test fails due to undefined method `getAgeOfLastAppliedOp' for 
> nil:NilClass
> -
>
> Key: HBASE-16184
> URL: https://issues.apache.org/jira/browse/HBASE-16184
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> Running TestShell ended up with the following on master branch:
> {code}
>   1) Error:
> test_Get_replication_sink_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:427:in 
> `test_Get_replication_sink_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   2) Error:
> test_Get_replication_source_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:423:in 
> `test_Get_replication_source_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   3) Error:
> test_Get_replication_status(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:419:in `test_Get_replication_status'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
> {code}





[jira] [Commented] (HBASE-16133) RSGroupBasedLoadBalancer.retainAssignment() might miss a region

2016-07-06 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364744#comment-15364744
 ] 

Francis Liu commented on HBASE-16133:
-

Sorry just got to this. Good catch.

> RSGroupBasedLoadBalancer.retainAssignment() might miss a region
> ---
>
> Key: HBASE-16133
> URL: https://issues.apache.org/jira/browse/HBASE-16133
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: hbase-16133_v1.patch
>
>
> We have seen, in tests run through IntegrationTestRSGroup, that we may miss 
> assigning a region. 
> It is a simple logic error here: 
> {code}
> if (server != null && !assignments.containsKey(server)) {
>   assignments.put(server, new ArrayList());
> } else if (server != null) {
>   assignments.get(server).add(region);
> } else {
> {code}
> In the first condition, we are not adding the region to the newly created 
> ArrayList. 
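One way to make the dropped-region bug described above impossible is to merge the two non-null branches so that creating the list and adding the region happen together. This is an illustrative sketch with simplified types (`String` stand-ins for HBase's `ServerName`/`HRegionInfo`; the names are invented and this is not the actual patch):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RetainAssignmentSketch {
    // Fixed shape of the buggy branch: the region is always added, whether
    // or not the server's list had to be created first.
    static void assign(Map<String, List<String>> assignments,
                       String server, String region) {
        if (server == null) {
            return; // unassigned-region handling elided in this sketch
        }
        List<String> regions = assignments.get(server);
        if (regions == null) {
            regions = new ArrayList<String>();
            assignments.put(server, regions);
        }
        regions.add(region); // runs on both the first and later regions
    }

    public static void main(String[] args) {
        Map<String, List<String>> assignments =
                new HashMap<String, List<String>>();
        assign(assignments, "server1", "regionA"); // creates the list AND adds
        assign(assignments, "server1", "regionB"); // reuses the list
        System.out.println(assignments.get("server1")); // [regionA, regionB]
    }
}
```

The original code's first branch created the list and fell through without adding; fetching-or-creating the list before a single `add` call removes that failure mode entirely.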





[jira] [Resolved] (HBASE-16184) Shell test fails due to undefined method `getAgeOfLastAppliedOp' for nil:NilClass

2016-07-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-16184.

Resolution: Duplicate

Turns out to be a dup of HBASE-16185.

Switching to commit 20a99b4c06ecb77c29c3ff173052a00174b9af8c, the test passes.

In this test failure, rLoadSink was nil.

> Shell test fails due to undefined method `getAgeOfLastAppliedOp' for 
> nil:NilClass
> -
>
> Key: HBASE-16184
> URL: https://issues.apache.org/jira/browse/HBASE-16184
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> Running TestShell ended up with the following on master branch:
> {code}
>   1) Error:
> test_Get_replication_sink_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:427:in 
> `test_Get_replication_sink_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   2) Error:
> test_Get_replication_source_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:423:in 
> `test_Get_replication_source_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   3) Error:
> test_Get_replication_status(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:419:in `test_Get_replication_status'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16137) Fix findbugs warning introduced by hbase-14730

2016-07-06 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364773#comment-15364773
 ] 

huaxiang sun commented on HBASE-16137:
--

Thanks [~anoop.hbase] for the notice; I did not get a chance to work on it. I was 
thinking of changing the code as suggested by

http://stackoverflow.com/questions/21136302/findbugs-error-write-to-static-field-from-instance-method

to remove the findbugs warning. Is this still needed after stack's fix? 
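For reference, the ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD pattern and one common fix can be sketched as follows (names are illustrative, not the actual HBase code):

```java
public class StaticWriteSketch {
  private static int cacheSize;   // class-wide state shared by all instances

  // The pattern findbugs flags, kept as a comment for illustration:
  //   void configure(int n) { cacheSize = n; }   // instance method writes static field

  // One common fix: make the writer static too, so the class-wide mutation
  // is explicit at the call site instead of hidden behind an instance.
  static void setCacheSize(int n) {
    cacheSize = n;
  }

  static int getCacheSize() {
    return cacheSize;
  }

  public static void main(String[] args) {
    StaticWriteSketch.setCacheSize(42);
    System.out.println(StaticWriteSketch.getCacheSize()); // 42
  }
}
```

The alternative fix, when the field does not truly need to be shared, is to make the field itself non-static.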

> Fix findbugs warning introduced by hbase-14730
> --
>
> Key: HBASE-16137
> URL: https://issues.apache.org/jira/browse/HBASE-16137
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.3.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
>
> From stack:
> "Lads. This patch makes for a new findbugs warning: 
> https://builds.apache.org/job/PreCommit-HBASE-Build/2390/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
> If you are good w/ the code, i can fix the findbugs warning... just say."



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364798#comment-15364798
 ] 

Vladimir Rodionov commented on HBASE-15988:
---

-1 on the patch.

[~tedyu], can you trigger a full backup for the tables during the add-tables-to-table-set 
operation? 

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt, 15988.v2.txt
>
>
> When a new table is added to backup table set, the incremental backup 
> involving the new table should be full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364798#comment-15364798
 ] 

Vladimir Rodionov edited comment on HBASE-15988 at 7/6/16 6:21 PM:
---

-1 on the patch.

[~tedyu], can you trigger a full backup for the tables during the add-tables-to-table-set 
operation? 

BackupAdmin.addToBackupSet() <- here


was (Author: vrodionov):
-1 on patch.

[~tedyu], can you trigger full backup for tables on add tables to table set 
operation? 

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt, 15988.v2.txt
>
>
> When a new table is added to backup table set, the incremental backup 
> involving the new table should be full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364812#comment-15364812
 ] 

Ted Yu commented on HBASE-15988:


Since tables can be added to a backup set in batches, I don't see why a full 
backup should be triggered for each addition to the backup set.

This also deviates from the user's view of what adding to a backup set does.

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt, 15988.v2.txt
>
>
> When a new table is added to backup table set, the incremental backup 
> involving the new table should be full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16182) Increase IntegrationTestRpcClient timeout

2016-07-06 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16182:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I've pushed this. It is a trivial change. 

> Increase IntegrationTestRpcClient timeout 
> --
>
> Key: HBASE-16182
> URL: https://issues.apache.org/jira/browse/HBASE-16182
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-16182_v1.patch
>
>
> We have seen IntegrationTestRpcClient fail recently with a timeout. On further 
> inspection, the root cause seems to be that a very underpowered node running the 
> test caused the timeout, since there are no BLOCKED threads among the handlers, 
> readers, listener, or the client-side threads. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364855#comment-15364855
 ] 

Vladimir Rodionov commented on HBASE-15988:
---

Hmm.

Suppose you have a table set of 150 tables and all of them are OK (with full 
backups), and you add a new one which does not have a backup yet. The next time 
you run an incremental backup on this set, will the backup be converted to FULL 
for all 151 tables, or to FULL for only the 1 new table(?). I do not think the 
user expects either one. 

When we add new tables to a backup set, we need to check 3 conditions:

# all tables have backups - nothing to do
# all tables do not have backups - nothing to do
# some have, some do not - trigger a FULL backup for the tables which do not have 
backups yet, or throw an exception ("Cannot add ..., run FULL backup first for 
tables ..."). This can be configurable, say *-force* will trigger a full backup 
automatically.
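The three cases above can be sketched as a small check (a hypothetical helper, not the proposed patch):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class BackupSetAddSketch {
  // Given each table's backup status, return the tables that would need a
  // FULL backup before the set can be incrementally backed up.
  static List<String> tablesNeedingFullBackup(Map<String, Boolean> hasBackup) {
    List<String> missing = new ArrayList<>();
    for (Map.Entry<String, Boolean> e : hasBackup.entrySet()) {
      if (!e.getValue()) {
        missing.add(e.getKey());
      }
    }
    // cases 1 and 2: all tables have backups, or none do - nothing to trigger
    if (missing.isEmpty() || missing.size() == hasBackup.size()) {
      return Collections.emptyList();
    }
    // case 3: mixed - these tables need a FULL backup (or the add is rejected)
    return missing;
  }

  public static void main(String[] args) {
    Map<String, Boolean> set = Map.of("t1", true, "t2", true, "t3", false);
    System.out.println(tablesNeedingFullBackup(set)); // [t3]
  }
}
```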

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt, 15988.v2.txt
>
>
> When a new table is added to backup table set, the incremental backup 
> involving the new table should be full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364855#comment-15364855
 ] 

Vladimir Rodionov edited comment on HBASE-15988 at 7/6/16 6:39 PM:
---

Hmm.

Suppose you have a table set of 150 tables and all of them are OK (with full 
backups), and you add a new one which does not have a backup yet. The next time 
you run an incremental backup on this set, will the backup be converted to FULL 
for all 151 tables, or to FULL for only the 1 new table(?). I do not think the 
user expects either one. 

When we add new tables to a backup set, we need to check 3 conditions:

# all tables have backups - nothing to do
# all tables do not have backups - nothing to do
# some have, some do not - trigger a FULL backup for the tables which do not have 
backups yet, or throw an exception ("Cannot add ..., run FULL backup first for 
tables ..."). This can be configurable, say *-force* will trigger a full backup 
automatically.


was (Author: vrodionov):
Hmm.

Suppose you have table set of 150 tables and all of them are OK (with full 
backup), you add new one, which does not have backup yet. Next time you run 
incremental backup on this set, the backup will be converted to FULL for all 
151 tables, or to FULL for only 1(?). I do not think user expects either one. 

When we add new tables to a backup set, we need to 3 conditions:

# all tables have backups - nothing to do
# all tables do not have backups - nothing to do
# some have, some do not - trigger FULL backup for tables which do not have 
backups yet or throw exception ("Can not add ..., run FULL backup first for 
tables ...). This can be configurable, say *-force* will trigger full backup 
automatically.

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt, 15988.v2.txt
>
>
> When a new table is added to backup table set, the incremental backup 
> involving the new table should be full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14345) Consolidate printUsage in IntegrationTestLoadAndVerify

2016-07-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364863#comment-15364863
 ] 

Enis Soztutar commented on HBASE-14345:
---

We want the no-arg invocation and {{--help}} to print the same thing, which contains 
all the available arguments. Is this patch doing that? 

> Consolidate printUsage in IntegrationTestLoadAndVerify
> --
>
> Key: HBASE-14345
> URL: https://issues.apache.org/jira/browse/HBASE-14345
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Nick Dimiduk
>Assignee: Reid Chan
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-14345.patch, itlv.png
>
>
> Investigating the use of {{itlav}} is a little screwy. Subclasses are not 
> overriding the {{printUsage()}} methods correctly, so you have to pass 
> {{--help}} to get some info and no arguments to get the rest.
> {noformat}
> [hbase@ndimiduk-112rc2-7 ~]$ hbase 
> org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify --help
> usage: bin/hbase org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify 
> 
> Options:
>  -h,--help Show usage
>  -m,--monkey  Which chaos monkey to run
>  -monkeyProps The properties file for specifying chaos monkey 
> properties.
>  -ncc,--noClusterCleanUp   Don't clean up the cluster at the end
> [hbase@ndimiduk-112rc2-7 ~]$ hbase 
> org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify
> IntegrationTestLoadAndVerify [-Doptions] 
>   Loads a table with row dependencies and verifies the dependency chains
> Options
>   -Dloadmapper.table=Table to write/verify (default autogen)
>   -Dloadmapper.backrefs=Number of backreferences per row (default 
> 50)
>   -Dloadmapper.num_to_write=Number of rows per mapper (default 100,000 
> per mapper)
>   -Dloadmapper.deleteAfter=  Delete after a successful verify (default 
> true)
>   -Dloadmapper.numPresplits=Number of presplit regions to start with 
> (default 40)
>   -Dloadmapper.map.tasks=   Number of map tasks for load (default 200)
>   -Dverify.reduce.tasks=Number of reduce tasks for verify (default 
> 35)
>   -Dverify.scannercaching=  Number hbase scanner caching rows to read 
> (default 50)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16087) Replication shouldn't start on a master if it only hosts system tables

2016-07-06 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364869#comment-15364869
 ] 

Elliott Clark commented on HBASE-16087:
---

You don't have to create the table, just list it in the config.

> Replication shouldn't start on a master if it only hosts system tables
> --
>
> Key: HBASE-16087
> URL: https://issues.apache.org/jira/browse/HBASE-16087
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-16087.patch, HBASE-16087.v1.patch
>
>
> System tables aren't replicated, so we shouldn't start up a replication master 
> if there are no user tables on the master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16186) Fix AssignmentManager MBean name

2016-07-06 Thread Lars George (JIRA)
Lars George created HBASE-16186:
---

 Summary: Fix AssignmentManager MBean name
 Key: HBASE-16186
 URL: https://issues.apache.org/jira/browse/HBASE-16186
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 1.2.1
Reporter: Lars George
 Fix For: 2.0.0


The MBean has a spelling error: it is listed as "AssignmentManger" (note the missing 
"a"). This is a publicly available name that tools might already use to filter 
metrics etc. Should we change this across major versions only?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool

2016-07-06 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16095:
--
Attachment: hbase-16095_v2.patch

v2 adds test category. 

> Add priority to TableDescriptor and priority region open thread pool
> 
>
> Key: HBASE-16095
> URL: https://issues.apache.org/jira/browse/HBASE-16095
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-16095_v0.patch, hbase-16095_v1.patch, 
> hbase-16095_v2.patch
>
>
> This is in a similar area to HBASE-15816, and is also required for the 
> current secondary indexing for Phoenix. 
> The problem with Phoenix secondary indexes is that data table regions depend on 
> index regions to be able to make progress. Possible distributed deadlocks can 
> be prevented via custom RpcScheduler + RpcController configuration via 
> HBASE-11048 and PHOENIX-938. However, region opening has the same 
> deadlock situation, because a data region open has to replay the WAL edits to 
> the index regions. There is only 1 thread pool to open regions, with 3 workers 
> by default. So if the cluster is recovering / restarting from scratch, the 
> deadlock happens because some index regions cannot be opened while they sit 
> in the same queue waiting for data regions to open (which in turn wait on 
> RPCs to index regions that are not yet open). This is reproduced in almost all 
> Phoenix secondary index clusters (mutable tables w/o transactions) that we 
> see. 
> The proposal is to have a "high priority" region opening thread pool, and 
> have the HTD carry the relative priority of a table. This may be useful for 
> other "framework" level tables from Phoenix, Tephra, Trafodion, etc., if they 
> want some specific tables to come online faster. 
> As a follow-up patch, we can also look at how this priority 
> information can be used by the rpc scheduler on the server side or the rpc 
> controller on the client side, so that we do not have to set priorities 
> manually per-operation. 
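The proposed routing could look roughly like this (pool sizes, the priority threshold, and all names here are assumptions for illustration, not the actual HBase implementation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PriorityRegionOpenSketch {
  // Pure routing decision: tables whose descriptor carries a priority above
  // the (assumed) threshold go to the dedicated high-priority pool.
  static String choosePool(int tablePriority) {
    return tablePriority > 0 ? "priority" : "normal";
  }

  public static void main(String[] args) throws Exception {
    ExecutorService normalPool = Executors.newFixedThreadPool(3);   // default workers
    ExecutorService priorityPool = Executors.newFixedThreadPool(1); // e.g. index tables
    Runnable openIndexRegion =
        () -> System.out.println("opening index region on " + Thread.currentThread().getName());
    // Index-table opens go to their own pool, so they no longer queue behind
    // data-region opens that are blocked waiting on them.
    ExecutorService pool =
        choosePool(1).equals("priority") ? priorityPool : normalPool;
    pool.submit(openIndexRegion).get();
    normalPool.shutdown();
    priorityPool.shutdown();
  }
}
```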



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16187) Fix typo in blog post for metrics2

2016-07-06 Thread Lars George (JIRA)
Lars George created HBASE-16187:
---

 Summary: Fix typo in blog post for metrics2
 Key: HBASE-16187
 URL: https://issues.apache.org/jira/browse/HBASE-16187
 Project: HBase
  Issue Type: Bug
  Components: website
Reporter: Lars George
Assignee: Sean Busbey


See https://blogs.apache.org/hbase/entry/migration_to_the_new_metrics

s/sudo/pseudo





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16187) Fix typo in blog post for metrics2

2016-07-06 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16187:

Fix Version/s: 2.0.0

> Fix typo in blog post for metrics2
> --
>
> Key: HBASE-16187
> URL: https://issues.apache.org/jira/browse/HBASE-16187
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Lars George
>Assignee: Sean Busbey
> Fix For: 2.0.0
>
>
> See https://blogs.apache.org/hbase/entry/migration_to_the_new_metrics
> s/sudo/pseudo



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16180) Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364904#comment-15364904
 ] 

Hudson commented on HBASE-16180:


FAILURE: Integrated in HBase-1.3-IT #744 (See 
[https://builds.apache.org/job/HBase-1.3-IT/744/])
HBASE-16180 Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs (stack: rev 
1ac755e40d51010753125db90f9a20c4fd028966)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java


> Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent
> -
>
> Key: HBASE-16180
> URL: https://issues.apache.org/jira/browse/HBASE-16180
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: stack
>Assignee: stack
> Fix For: 1.2.0, 1.3.0, 1.0.3, 1.1.3
>
> Attachments: HBASE-16180.branch-1.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16188) Add EventCounter information to log4j properties file

2016-07-06 Thread Lars George (JIRA)
Lars George created HBASE-16188:
---

 Summary: Add EventCounter information to log4j properties file
 Key: HBASE-16188
 URL: https://issues.apache.org/jira/browse/HBASE-16188
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.2.1
Reporter: Lars George
Priority: Minor


Hadoop's {{JvmMetrics}}, which HBase also uses in Metrics2 and exposes as an 
MBean, has the ability to count log4j log calls. This is tracked by a 
special {{Appender}} class, also provided by Hadoop, called {{EventCounter}}. 

We should add some info on how to enable this (or maybe even enable it by 
default?).

The appender needs to be added in two places, shown here:

{noformat}
hbase.root.logger=INFO,console
...
# Define the root logger to the system property "hbase.root.logger".
log4j.rootLogger=${hbase.root.logger}, EventCounter

log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
{noformat}

We could simply add this, commented out, akin to {{hbase-env.sh}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16171) Fix the potential problems in TestHCM.testConnectionCloseAllowsInterrupt

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364917#comment-15364917
 ] 

Hadoop QA commented on HBASE-16171:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
0s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 59s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 18s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 138m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint |
| Timed out junit tests | 
org.apache.hadoop.hbase.replication.TestReplicationSyncUpToolWithBulkLoadedData 
|
|   | org.apache.hadoop.hbase.replication.TestReplicationSmallTests |
|   | org.apache.hadoop.hbase.replication.TestMultiSlaveReplication |
|   | org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816456/HBASE-16171.001.patch 
|
| JIRA Issue | HBASE-16171 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool |

[jira] [Created] (HBASE-16189) [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers

2016-07-06 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16189:
-

 Summary: [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x 
servers
 Key: HBASE-16189
 URL: https://issues.apache.org/jira/browse/HBASE-16189
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Priority: Critical
 Fix For: 2.0.0


HBASE-10800 added MetaCellComparator, which gets written to the HFile. 1.x code 
does not have the new class, hence it fails to open the regions. I did not check 
whether this is only for meta or for regular tables as well. 

{code}
Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
reading HFile Trailer from file 
hdfs://cn017.l42scl.hortonworks.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/aa96e4ef463b4a82956330b236440437
at 
org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:483)
at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:511)
at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1123)
at 
org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:267)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:409)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:512)
at 
org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:687)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:130)
at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:554)
at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:551)
... 6 more
Caused by: java.io.IOException: java.lang.ClassNotFoundException: 
org.apache.hadoop.hbase.CellComparator$MetaCellComparator
at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:581)
at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserializeFromPB(FixedFileTrailer.java:300)
at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserialize(FixedFileTrailer.java:242)
at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:407)
at 
org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:468)
... 15 more
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.hbase.CellComparator$MetaCellComparator
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:579)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364932#comment-15364932
 ] 

Ted Yu commented on HBASE-15988:


bq. I do not think user expects either one.
bq. some have, some do not

For the above case, how about throwing a DoNotRetryIOException informing the user 
that an incremental backup for the current table set cannot be performed?
The user can decide whether to shrink the table set or trigger a full backup on the 
newly added tables.

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt, 15988.v2.txt
>
>
> When a new table is added to backup table set, the incremental backup 
> involving the new table should be full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16187) Fix typo in blog post for metrics2

2016-07-06 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364944#comment-15364944
 ] 

Elliott Clark commented on HBASE-16187:
---

s/Clarke/Clark/g

> Fix typo in blog post for metrics2
> --
>
> Key: HBASE-16187
> URL: https://issues.apache.org/jira/browse/HBASE-16187
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Lars George
>Assignee: Sean Busbey
> Fix For: 2.0.0
>
>
> See https://blogs.apache.org/hbase/entry/migration_to_the_new_metrics
> s/sudo/pseudo



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-06 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364952#comment-15364952
 ] 

Vladimir Rodionov commented on HBASE-15988:
---

{quote}
User can decide whether to shrink the table set or trigger full backup on the 
newly added tables.
{quote}

The user will need to compile a list of the tables which failed the incremental 
backup, then run a backup on those tables in full mode, and then retry the 
incremental backup on the backup set again? This is going to confuse users. 
What do you think, [~tedyu]? cc: [~enis].

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt, 15988.v2.txt
>
>
> When a new table is added to backup table set, the incremental backup 
> involving the new table should be full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16180) Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364958#comment-15364958
 ] 

Hudson commented on HBASE-16180:


SUCCESS: Integrated in HBase-1.2-IT #545 (See 
[https://builds.apache.org/job/HBase-1.2-IT/545/])
HBASE-16180 Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs (stack: rev 
78ef0513b4f9ae8c59dbb77100c7a12d5ec45e62)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java


> Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent
> -
>
> Key: HBASE-16180
> URL: https://issues.apache.org/jira/browse/HBASE-16180
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: stack
>Assignee: stack
> Fix For: 1.2.0, 1.3.0, 1.0.3, 1.1.3
>
> Attachments: HBASE-16180.branch-1.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

