[jira] [Comment Edited] (HBASE-14548) Expand how table coprocessor jar and dependency path can be specified

2016-07-05 Thread li xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15362486#comment-15362486
 ] 

li xiang edited comment on HBASE-14548 at 7/6/16 6:56 AM:
--

Hi Jerry, thanks for the review.

1. I tested FileSystem.isDirectory() with a separate program, against paths 
containing wildcards. It works as expected for inputs such as 
"/user/hbase/\*.jar" or "/user/hbase/coprocessor.\*": isDirectory() returns 
false.
The case that fails is this: suppose the jar file is placed at 
/user/hbase/coprocessor.jar and "/user/h*" is used as the input. Because 
isDirectory() returns false, the code does not append "*.jar". 
FileSystem.globStatus() then returns all directories and files under /user 
whose names start with "h", such as hbase, hive... (if they exist), but the 
JarFile that follows cannot handle a directory.

So I added the following logic: for each item globStatus() returns, process it 
only if it is a file and skip it if it is a directory. If all items are 
directories, throw FileNotFoundException: No file found matching hdfs:///xxx*
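
For reference, a minimal sketch of that filtering (illustrative class and 
method names, not the actual patch):

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class CoprocessorJarGlob {
  // Expand a (possibly wildcarded) path and keep only regular files,
  // because the JarFile opened later cannot handle a directory.
  static List<Path> resolveJarPaths(FileSystem fs, String pathPattern)
      throws IOException {
    FileStatus[] statuses = fs.globStatus(new Path(pathPattern));
    List<Path> jarPaths = new ArrayList<Path>();
    if (statuses != null) {
      for (FileStatus status : statuses) {
        if (status.isDirectory()) {
          continue;                      // skip directories returned by the glob
        }
        jarPaths.add(status.getPath());
      }
    }
    if (jarPaths.isEmpty()) {
      // nothing matched, or every match was a directory
      throw new FileNotFoundException("No file found matching " + pathPattern);
    }
    return jarPaths;
  }
}
{code}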

I uploaded v2 for branch 1.2.0 and master to include those changes.

2. Regarding the enhancement to ClassLoaderTestHelper, I opened a new JIRA: 
https://issues.apache.org/jira/browse/HBASE-16173


was (Author: water):
Hi Jerry, thanks for the review.

1. I tested FileSystem.isDirectory() with a separate program, against paths 
containing wildcards. It works as expected for inputs such as 
"/user/hbase/*.jar" or "/user/hbase/coprocessor.*": isDirectory() returns false.
The case that fails is this: suppose the jar file is placed at 
/user/hbase/coprocessor.jar and "/user/h*" is used as the input. Because 
isDirectory() returns false, the code does not append "*.jar". 
FileSystem.globStatus() then returns all directories and files under /user 
whose names start with "h", such as hbase, hive... (if they exist), but the 
JarFile that follows cannot handle a directory.

So I added the following logic: for each item globStatus() returns, process it 
only if it is a file and skip it if it is a directory. If all items are 
directories, throw FileNotFoundException: No file found matching hdfs:///xxx*

I uploaded v2 for branch 1.2.0 and master to include those changes.

2. Regarding the enhancement to ClassLoaderTestHelper, I opened a new JIRA: 
https://issues.apache.org/jira/browse/HBASE-16173

> Expand how table coprocessor jar and dependency path can be specified
> -
>
> Key: HBASE-14548
> URL: https://issues.apache.org/jira/browse/HBASE-14548
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: li xiang
> Fix For: 2.0.0
>
> Attachments: HBASE-14548-1.2.0-v0.patch, HBASE-14548-1.2.0-v1.patch, 
> HBASE-14548-1.2.0-v2.patch, HBASE-14548-master-v1.patch, 
> HBASE-14548-master-v2.patch
>
>
> Currently you can specify the location of the coprocessor jar in the table 
> coprocessor attribute.
> The problem is that it only allows you to specify one jar that implements the 
> coprocessor.  You will need to either bundle all the dependencies into this 
> jar, or you will need to copy the dependencies into HBase lib dir.
> The first option may not be ideal sometimes.  The second choice can be 
> troublesome too, particularly when the hbase region sever node and dirs are 
> dynamically added/created.
> There are a couple things we can expand here.  We can allow the coprocessor 
> attribute to specify a directory location, probably on hdfs.
> We may even allow some wildcard in there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16172) Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas

2016-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363878#comment-15363878
 ] 

Hadoop QA commented on HBASE-16172:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 59s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 45s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 141m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.TestReplicationSmallTests |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://

[jira] [Commented] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

2016-07-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363797#comment-15363797
 ] 

ramkrishna.s.vasudevan commented on HBASE-16162:


+1 to commit this.

> Compacting Memstore : unnecessary push of active segments to pipeline
> -
>
> Key: HBASE-16162
> URL: https://issues.apache.org/jira/browse/HBASE-16162
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, 
> HBASE-16162_V3.patch, HBASE-16162_V4.patch
>
>
> We have flow like this
> {code}
> protected void checkActiveSize() {
>   if (shouldFlushInMemory()) {
>     InMemoryFlushRunnable runnable = new InMemoryFlushRunnable();
>     getPool().execute(runnable);
>   }
> }
>
> private boolean shouldFlushInMemory() {
>   if (getActive().getSize() > inmemoryFlushSize) {
>     // size above flush threshold
>     return (allowCompaction.get() && !inMemoryFlushInProgress.get());
>   }
>   return false;
> }
>
> void flushInMemory() throws IOException {
>   // Phase I: Update the pipeline
>   getRegionServices().blockUpdates();
>   try {
>     MutableSegment active = getActive();
>     pushActiveToPipeline(active);
>   } finally {
>     getRegionServices().unblockUpdates();
>   }
>   // Phase II: Compact the pipeline
>   try {
>     if (allowCompaction.get() && inMemoryFlushInProgress.compareAndSet(false, true)) {
>       // setting the inMemoryFlushInProgress flag again for the case this method is
>       // invoked directly (only in tests); in the common path, setting from true to
>       // true is idempotent
>       // Speculative compaction execution, may be interrupted if flush is forced
>       // while compaction is in progress
>       compactor.startCompaction();
>     }
> {code}
> So every cell write triggers the checkActiveSize() check. When we are at the 
> border of an in-memory flush, many threads writing to this memstore can get 
> checkActiveSize() to pass, because the AtomicBoolean is still false; it is 
> only turned ON some time later, once the new thread has started and pushed 
> the active segment to the pipeline.
> In the new in-memory-flush thread there is no size check. It just takes the 
> active segment and pushes it to the pipeline. Yes, we do not allow any new 
> writes to the memstore at that time, but before that write lock on the region 
> is taken, other handler threads may already have added entries to this thread 
> pool. When the first flush finishes, it releases the lock on the region, 
> handler threads waiting to write to the memstore may get the lock and add 
> some data, and then the second in-memory-flush thread gets its chance, takes 
> the current active segment and flushes it in memory! This pushes very small 
> segments to the pipeline.
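
For illustration, here is one possible shape of a guard, sketched only from the 
description above and reusing its method names; this is not necessarily what 
the attached patch does:

{code}
// Possible guard (sketch only): claim the flag first, then re-check the size
// while updates are blocked, so a runnable that was queued late becomes a
// no-op instead of pushing a near-empty active segment.
void flushInMemory() throws IOException {
  if (!inMemoryFlushInProgress.compareAndSet(false, true)) {
    return;                                      // another in-memory flush owns this cycle
  }
  boolean pushed = false;
  getRegionServices().blockUpdates();
  try {
    MutableSegment active = getActive();
    if (active.getSize() > inmemoryFlushSize) {  // re-check under blocked updates
      pushActiveToPipeline(active);
      pushed = true;
    }
  } finally {
    getRegionServices().unblockUpdates();
  }
  if (pushed && allowCompaction.get()) {
    compactor.startCompaction();                 // assumed to clear the flag when done
  } else {
    inMemoryFlushInProgress.set(false);          // nothing pushed; release the flag
  }
}
{code}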



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-05 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-16144:
--
Attachment: HBASE-16144-v5.patch

> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-v1.patch, HBASE-16144-v2.patch, 
> HBASE-16144-v3.patch, HBASE-16144-v4.patch, HBASE-16144-v5.patch
>
>
> By default we use a multi operation when we claimQueues from ZK. But if 
> hbase.zookeeper.useMulti=false is set, we add a lock first, then copy the 
> nodes, and finally clean up the old queue and the lock.
> However, if the RS holding the lock crashes before claimQueues is done, the 
> lock stays there forever and no other RS can ever claim the queue.
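
For illustration only, here is roughly what that non-multi flow looks like, 
sketched with the plain ZooKeeper client rather than HBase's actual 
replication-queue code (the znode layout below is simplified and assumed):

{code}
// Sketch with the plain ZooKeeper client (illustrative znode layout, not the
// real ReplicationQueues code): the claim is several independent operations,
// so a crash right after step 1 leaves the lock znode behind forever.
void claimQueueWithoutMulti(ZooKeeper zk, String deadRsQueues, String myQueues)
    throws KeeperException, InterruptedException {
  // 1. take the lock on the dead RS's queues
  zk.create(deadRsQueues + "/lock", new byte[0],
      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  // 2. copy each queue node under our own RS znode
  for (String child : zk.getChildren(deadRsQueues, false)) {
    if (child.equals("lock")) {
      continue;
    }
    byte[] data = zk.getData(deadRsQueues + "/" + child, false, null);
    zk.create(myQueues + "/" + child, data,
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  }
  // 3. clean up the old queue and the lock; never reached if we crashed above
  for (String child : zk.getChildren(deadRsQueues, false)) {
    zk.delete(deadRsQueues + "/" + child, -1);
  }
  zk.delete(deadRsQueues, -1);
}
{code}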



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-05 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363786#comment-15363786
 ] 

Phil Yang commented on HBASE-16144:
---

In this test class, three clusters share the same ZK cluster. At the end of 
testZKLockCleaner, cluster1 is closed, so the shared ZK is closed too. If we 
run testZKLockCleaner first and then testMultiSlaveReplication, the closed ZK 
throws an exception when utility2.startMiniCluster() is called.

We can add a line "utility1.setZkCluster(miniZK);" in setUpBeforeClass to 
prevent the ZK cluster from being closed when we close the cluster; see the 
sketch below.
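
For reference, a rough sketch of what that setUpBeforeClass would look like 
(the utility1/utility2/utility3 and conf1/conf2/conf3 fields are assumed to be 
the usual ones in these replication tests):

{code}
// Sketch only: registering the shared MiniZooKeeperCluster via setZkCluster()
// marks it as externally managed, so shutting down cluster1 in
// testZKLockCleaner no longer stops the ZK that utility2/utility3 still use.
@BeforeClass
public static void setUpBeforeClass() throws Exception {
  utility1 = new HBaseTestingUtility(conf1);
  utility1.startMiniZKCluster();
  MiniZooKeeperCluster miniZK = utility1.getZkCluster();
  utility1.setZkCluster(miniZK);   // the proposed extra line
  utility2 = new HBaseTestingUtility(conf2);
  utility2.setZkCluster(miniZK);
  utility3 = new HBaseTestingUtility(conf3);
  utility3.setZkCluster(miniZK);
}
{code}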

I'll upload a new patch soon, for master and other branches. Thanks.

> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-v1.patch, HBASE-16144-v2.patch, 
> HBASE-16144-v3.patch, HBASE-16144-v4.patch
>
>
> By default we use a multi operation when we claimQueues from ZK. But if 
> hbase.zookeeper.useMulti=false is set, we add a lock first, then copy the 
> nodes, and finally clean up the old queue and the lock.
> However, if the RS holding the lock crashes before claimQueues is done, the 
> lock stays there forever and no other RS can ever claim the queue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

2016-07-05 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363770#comment-15363770
 ] 

Anoop Sam John commented on HBASE-16162:


Any comments?

> Compacting Memstore : unnecessary push of active segments to pipeline
> -
>
> Key: HBASE-16162
> URL: https://issues.apache.org/jira/browse/HBASE-16162
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, 
> HBASE-16162_V3.patch, HBASE-16162_V4.patch
>
>
> We have flow like this
> {code}
> protected void checkActiveSize() {
>   if (shouldFlushInMemory()) {
>     InMemoryFlushRunnable runnable = new InMemoryFlushRunnable();
>     getPool().execute(runnable);
>   }
> }
>
> private boolean shouldFlushInMemory() {
>   if (getActive().getSize() > inmemoryFlushSize) {
>     // size above flush threshold
>     return (allowCompaction.get() && !inMemoryFlushInProgress.get());
>   }
>   return false;
> }
>
> void flushInMemory() throws IOException {
>   // Phase I: Update the pipeline
>   getRegionServices().blockUpdates();
>   try {
>     MutableSegment active = getActive();
>     pushActiveToPipeline(active);
>   } finally {
>     getRegionServices().unblockUpdates();
>   }
>   // Phase II: Compact the pipeline
>   try {
>     if (allowCompaction.get() && inMemoryFlushInProgress.compareAndSet(false, true)) {
>       // setting the inMemoryFlushInProgress flag again for the case this method is
>       // invoked directly (only in tests); in the common path, setting from true to
>       // true is idempotent
>       // Speculative compaction execution, may be interrupted if flush is forced
>       // while compaction is in progress
>       compactor.startCompaction();
>     }
> {code}
> So every cell write triggers the checkActiveSize() check. When we are at the 
> border of an in-memory flush, many threads writing to this memstore can get 
> checkActiveSize() to pass, because the AtomicBoolean is still false; it is 
> only turned ON some time later, once the new thread has started and pushed 
> the active segment to the pipeline.
> In the new in-memory-flush thread there is no size check. It just takes the 
> active segment and pushes it to the pipeline. Yes, we do not allow any new 
> writes to the memstore at that time, but before that write lock on the region 
> is taken, other handler threads may already have added entries to this thread 
> pool. When the first flush finishes, it releases the lock on the region, 
> handler threads waiting to write to the memstore may get the lock and add 
> some data, and then the second in-memory-flush thread gets its chance, takes 
> the current active segment and flushes it in memory! This pushes very small 
> segments to the pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-05 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363758#comment-15363758
 ] 

Heng Chen commented on HBASE-15643:
---

Could you upload it to Review Board?

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster, but only some of them need to be read 
> online.
> We could improve read performance with the block cache, but we need some 
> metrics for it at the table level. There are a few we can collect: 
> BlockCacheCount, BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, 
> BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-05 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363756#comment-15363756
 ] 

Heng Chen commented on HBASE-15643:
---

Thanks [~aliciashu] for taking it up. Let me take a look. :)

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster, but only some of them need to be read 
> online.
> We could improve read performance with the block cache, but we need some 
> metrics for it at the table level. There are a few we can collect: 
> BlockCacheCount, BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, 
> BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16172) Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas

2016-07-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16172:
---
Attachment: 16172.v2.txt

Patch v2 has a trivial change in TestRegionReplicaFailover so that the 
hbase-server tests are triggered.

> Unify the retry logic in ScannerCallableWithReplicas and 
> RpcRetryingCallerWithReadReplicas
> --
>
> Key: HBASE-16172
> URL: https://issues.apache.org/jira/browse/HBASE-16172
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Ted Yu
> Attachments: 16172.v1.txt, 16172.v2.txt
>
>
> The issue was pointed out by [~devaraj] in HBASE-16132 (thanks D.D.): in 
> {{RpcRetryingCallerWithReadReplicas#call}} we call 
> {{ResultBoundedCompletionService#take}} instead of {{poll}}, so we dead-wait 
> on the second replica if the first replica timed out, while in 
> {{ScannerCallableWithReplicas#call}} we still use 
> {{ResultBoundedCompletionService#poll}} with some timeout for the second 
> replica.
> This JIRA aims at discussing whether to unify the logic in these two kinds of 
> region-replica callers and at taking action if necessary.
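
As a side note, the difference between the two waiting styles can be 
illustrated with the plain JDK CompletionService; this is only an analogy, 
since ResultBoundedCompletionService is HBase's own class:

{code}
import java.util.concurrent.*;

// take() blocks until the next task completes; poll(timeout, unit) gives up
// and returns null once the timeout expires.
public class TakeVsPoll {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    CompletionService<String> cs = new ExecutorCompletionService<>(pool);
    cs.submit(() -> { Thread.sleep(50); return "primary replica"; });
    cs.submit(() -> { Thread.sleep(5); return "secondary replica"; });

    Future<String> bounded = cs.poll(10, TimeUnit.MILLISECONDS); // may be null
    Future<String> blocking = cs.take();                         // waits as long as needed
    System.out.println((bounded == null ? "poll timed out" : bounded.get())
        + " / " + blocking.get());
    pool.shutdown();
  }
}
{code}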



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363727#comment-15363727
 ] 

stack commented on HBASE-16074:
---

What [~enis] says. There is this doc where we try to talk up tooling for ITBLL 
debugging, of which there is a paucity: 
https://docs.google.com/document/d/14Tvu5yWYNBDFkh8xCqLkU9tlyNWhJv3GjDGOkqZU1eE/edit

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see 
> if someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363725#comment-15363725
 ] 

stack commented on HBASE-16074:
---

I seem to fail even though I have HBASE-16132.

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see 
> if someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16172) Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas

2016-07-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363706#comment-15363706
 ] 

Ted Yu commented on HBASE-16172:


commit c2f6f479adc9fb108ba69e0407799dbdf5eaefa7
Author: nkeywal 
Date:   Fri Mar 7 14:00:21 2014 +

HBASE-10355 Failover RPC's from client using region replicas

[~nkeywal]:
Can you remind us?

> Unify the retry logic in ScannerCallableWithReplicas and 
> RpcRetryingCallerWithReadReplicas
> --
>
> Key: HBASE-16172
> URL: https://issues.apache.org/jira/browse/HBASE-16172
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Ted Yu
> Attachments: 16172.v1.txt
>
>
> The issue was pointed out by [~devaraj] in HBASE-16132 (thanks D.D.): in 
> {{RpcRetryingCallerWithReadReplicas#call}} we call 
> {{ResultBoundedCompletionService#take}} instead of {{poll}}, so we dead-wait 
> on the second replica if the first replica timed out, while in 
> {{ScannerCallableWithReplicas#call}} we still use 
> {{ResultBoundedCompletionService#poll}} with some timeout for the second 
> replica.
> This JIRA aims at discussing whether to unify the logic in these two kinds of 
> region-replica callers and at taking action if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16172) Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas

2016-07-05 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363697#comment-15363697
 ] 

Heng Chen commented on HBASE-16172:
---

Notice that {{RpcRetryingCallerWithReadReplicas.call()}} is only called from 
HTable.get(), and we create a new instance of 
RpcRetryingCallerWithReadReplicas in each HTable.get(); is there any need for 
'synchronized' on {{RpcRetryingCallerWithReadReplicas.call()}}?

> Unify the retry logic in ScannerCallableWithReplicas and 
> RpcRetryingCallerWithReadReplicas
> --
>
> Key: HBASE-16172
> URL: https://issues.apache.org/jira/browse/HBASE-16172
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Ted Yu
> Attachments: 16172.v1.txt
>
>
> The issue was pointed out by [~devaraj] in HBASE-16132 (thanks D.D.): in 
> {{RpcRetryingCallerWithReadReplicas#call}} we call 
> {{ResultBoundedCompletionService#take}} instead of {{poll}}, so we dead-wait 
> on the second replica if the first replica timed out, while in 
> {{ScannerCallableWithReplicas#call}} we still use 
> {{ResultBoundedCompletionService#poll}} with some timeout for the second 
> replica.
> This JIRA aims at discussing whether to unify the logic in these two kinds of 
> region-replica callers and at taking action if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363683#comment-15363683
 ] 

Mikhail Antonov commented on HBASE-16074:
-

Dang :( Somehow I missed the emails with pings over there. Looks suspicious. I 
don't think I tried this fix on my tip of 1.3. Let me run with this patch plus 
all the recent TRT fixes by Stack. If that's the root cause I should see it 
pretty quickly. 

Should have some results by tomorrow morning.

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see 
> if someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-07-05 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363680#comment-15363680
 ] 

Sean Busbey commented on HBASE-16179:
-

A Jenkins job maintained on some non-ASF service doesn't answer the question. 
Is there a DISCUSS or VOTE thread somewhere, a roadmap, or a compatibility 
guide (like we have for Java versions)?

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> I tried building the hbase-spark module against a Spark 2.0 snapshot and got 
> the following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> The hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363675#comment-15363675
 ] 

Yu Li commented on HBASE-16074:
---

Got a chance to try HBASE-16132? That problem has existed for a long time, ever 
since the region replica stuff went in (that is, it exists in 1.x but not in 
0.98), and we encountered it in our online system.

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see 
> if someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363673#comment-15363673
 ] 

Mikhail Antonov commented on HBASE-16074:
-

https://docs.google.com/document/d/1ye6eUlOljduktn5E95qLpzb6q4ns2L3eoexzEjSJEg0/edit?usp=sharing
  - capturing details here to keep track of what's been tried

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see 
> if someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-07-05 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363670#comment-15363670
 ] 

Mikhail Antonov commented on HBASE-15650:
-

Understood. Thanks in advance!

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.4.0
>
> Attachments: 15650.branch-1.2.patch, 15650.branch-1.patch, 
> 15650.branch-1.patch, 15650.patch, 15650.patch, 15650v2.branch-1.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 15650v5.patch, 15650v6.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363664#comment-15363664
 ] 

stack commented on HBASE-16074:
---

I just ran 1.3 tip plus HBASE-16176 (which has had a tendency to make failures 
happen less often) and after the second turn through the loop it failed with 
"16/07/05 18:20:02 ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes 
which lost big or tiny families, count=127" and " UNREFERENCED=175". Looking. 
Will start up a loop this evening before bed with the revert again (it passed 
for me last night).



> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or env setup 
> issue, but need figure it out. Opening this to raise awareness and see if 
> someone saw that recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16087) Replication shouldn't start on a master if it only hosts system tables

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363661#comment-15363661
 ] 

Hudson commented on HBASE-16087:


FAILURE: Integrated in HBase-1.3 #770 (See 
[https://builds.apache.org/job/HBase-1.3/770/])
HBASE-16087 Replication shouldn't start on a master if if only hosts (eclark: 
rev 59c5900fae4392b9a5fcca8dbf5543e1bea1e452)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java


> Replication shouldn't start on a master if it only hosts system tables
> --
>
> Key: HBASE-16087
> URL: https://issues.apache.org/jira/browse/HBASE-16087
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-16087.patch, HBASE-16087.v1.patch
>
>
> System tables aren't replicated so we shouldn't start up a replication master 
> if there are no user tables on the master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16087) Replication shouldn't start on a master if it only hosts system tables

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363611#comment-15363611
 ] 

Hudson commented on HBASE-16087:


SUCCESS: Integrated in HBase-1.4 #273 (See 
[https://builds.apache.org/job/HBase-1.4/273/])
HBASE-16087 Replication shouldn't start on a master if if only hosts (eclark: 
rev ff8c2fcac0324a1cfa5608062e0e5fc263cb8160)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


> Replication shouldn't start on a master if it only hosts system tables
> --
>
> Key: HBASE-16087
> URL: https://issues.apache.org/jira/browse/HBASE-16087
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-16087.patch, HBASE-16087.v1.patch
>
>
> System tables aren't replicated so we shouldn't start up a replication master 
> if there are no user tables on the master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16157) The incorrect block cache count and size are caused by removing duplicate block key in the LruBlockCache

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363610#comment-15363610
 ] 

Hudson commented on HBASE-16157:


SUCCESS: Integrated in HBase-1.4 #273 (See 
[https://builds.apache.org/job/HBase-1.4/273/])
HBASE-16157 The incorrect block cache count and size are caused by (tedyu: rev 
368c32e3229a21d017eb9e6248b29315f7c51211)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java


> The incorrect block cache count and size are caused by removing duplicate 
> block key in the LruBlockCache
> 
>
> Key: HBASE-16157
> URL: https://issues.apache.org/jira/browse/HBASE-16157
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16157-v1.patch, HBASE-16157-v2.patch, 
> HBASE-16157-v3.patch, HBASE-16157-v4.patch
>
>
> {code:title=LruBlockCache.java|borderStyle=solid}
> // Check return value from the Map#remove before updating the metrics
> protected long evictBlock(LruCachedBlock block, boolean evictedByEvictionProcess) {
>   map.remove(block.getCacheKey());
>   updateSizeMetrics(block, true);
>   ...
> }
> {code}
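
A generic sketch of that fix idea (illustrative class, not LruBlockCache 
itself): treat the return value of Map#remove as the signal that this caller 
actually evicted the entry, so a duplicate eviction cannot decrement the 
counters twice.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

class CacheEvictionSketch {
  private final ConcurrentMap<String, byte[]> map = new ConcurrentHashMap<>();
  private final AtomicLong size = new AtomicLong();
  private final AtomicLong count = new AtomicLong();

  long evict(String key) {
    byte[] removed = map.remove(key);
    if (removed == null) {
      return 0;            // someone else already evicted it; leave the metrics alone
    }
    count.decrementAndGet();
    return size.addAndGet(-removed.length);
  }
}
{code}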



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-07-05 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363606#comment-15363606
 ] 

Andrew Purtell commented on HBASE-15650:


bq. do you regularly run ITBLL on a sizable cluster with 0.98 which has this 
optimization in? If you do and you never saw any lost refs

[~mantonov] No I do not run ITBLL on a sizable cluster for 0.98. Question of 
access to resources. Looking to change that with something DIY on EC2 with 
[~dimaspivak]'s clusterdock stuff. 

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.4.0
>
> Attachments: 15650.branch-1.2.patch, 15650.branch-1.patch, 
> 15650.branch-1.patch, 15650.patch, 15650.patch, 15650v2.branch-1.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 15650v5.patch, 15650v6.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-07-05 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363606#comment-15363606
 ] 

Andrew Purtell edited comment on HBASE-15650 at 7/6/16 1:56 AM:


bq. do you regularly run ITBLL on a sizable cluster with 0.98 which has this 
optimization in? If you do and you never saw any lost refs

[~mantonov] No I do not run ITBLL on a sizable cluster for 0.98. Question of 
access to resources. Looking to change that with something DIY on EC2 with 
[~dimaspivak]'s clusterdock stuff. Will look for this problem on 0.98. 


was (Author: apurtell):
bq. do you regularly run ITBLL on a sizable cluster with 0.98 which has this 
optimization in? If you do and you never saw any lost refs

[~mantonov] No I do not run ITBLL on a sizable cluster for 0.98. Question of 
access to resources. Looking to change that with something DIY on EC2 with 
[~dimaspivak]'s clusterdock stuff. 

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.4.0
>
> Attachments: 15650.branch-1.2.patch, 15650.branch-1.patch, 
> 15650.branch-1.patch, 15650.patch, 15650.patch, 15650v2.branch-1.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 15650v5.patch, 15650v6.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15988) Backup set add command MUST initiate full backup for a table(s) being added

2016-07-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15988:
---
Attachment: 15988.v1.txt

> Backup set add command MUST initiate full backup for a table(s) being added
> ---
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15988.v1.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16180) Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent

2016-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363558#comment-15363558
 ] 

Hadoop QA commented on HBASE-16180:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
55s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
59s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 55s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} hbase-server generated 0 new + 0 unchanged - 1 fixed = 
0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 85m 15s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 115m 1s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816325/HBASE-16180.branch-1.001.patch
 |
| JIRA Issue | HBASE-16180 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | branch-1 / 368c32e |
| Default Java | 

[jira] [Commented] (HBASE-16174) Hook cell test up, and fix broken cell test.

2016-07-05 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363551#comment-15363551
 ] 

Enis Soztutar commented on HBASE-16174:
---

+1.

> Hook cell test up, and fix broken cell test.
> 
>
> Key: HBASE-16174
> URL: https://issues.apache.org/jira/browse/HBASE-16174
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-16174.HBASE-14850.patch
>
>
> Make sure that cell test is working properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16176) Bug fixes/improvements on HBASE-15650 Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363546#comment-15363546
 ] 

Hadoop QA commented on HBASE-16176:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
42s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} branch-1.3 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hbase-server in branch-1.3 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 81m 16s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
42s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 127m 1s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816322/HBASE-16176.branch-1.3.002.patch
 |
| JIRA Issue | HBASE-16176 |
| Optional Tests |  asflicense  javac  javadoc  u

[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363540#comment-15363540
 ] 

Mikhail Antonov commented on HBASE-16074:
-

So, I still don't think the TRT optimization is what introduced the bug, since 
I did see one bad iteration with this change reverted from 1.3. But with the 
TRT changes I now see this happening on 1.2.2, which makes me suspect it has 
been in 1.2.2 before.

Let me try the TRT changes on the 1.2.0 tag and maybe 1.1, if they apply 
cleanly. Then we'll have more data points and can see how to go from there.

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see if 
> someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363536#comment-15363536
 ] 

Mikhail Antonov commented on HBASE-16074:
-

Didn't try yet.

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see if 
> someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-07-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363538#comment-15363538
 ] 

Ted Yu commented on HBASE-16179:


Scala 2.10 support is maintained:

https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/SPARK-master-COMPILE-sbt-SCALA-2.10/

Let's see how hard it is to keep the hbase-spark module working against Spark 1.6.1.

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> I tried building the hbase-spark module against a Spark 2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> The hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363533#comment-15363533
 ] 

Mikhail Antonov commented on HBASE-16074:
-

The keys are there - re-running verify w/o monkeys works. They don't get 
returned in the scan.

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see if 
> someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363517#comment-15363517
 ] 

Enis Soztutar commented on HBASE-16074:
---

HFilePrettyPrinter can search for a key in a file using {{-w}}. I think I once wrote 
a script to go over all the hfiles in the table and search for the missing key, 
using HFPP. 
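
For readers less familiar with the tool, here is a minimal, illustrative Java driver for HFilePrettyPrinter (not part of this issue). It uses the tool's {{-p}}, {{-w}} (seekToRow) and {{-f}} options as described in the tool's usage text; the HDFS path and the row key are made-up placeholders.

{code}
// Illustrative sketch only: drive HFilePrettyPrinter to look for one row in one
// HFile. The -p/-w/-f flags come from the tool's usage text; the path and the
// row key are made-up placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter;
import org.apache.hadoop.util.ToolRunner;

public class FindRowInHFile {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    int rc = ToolRunner.run(conf, new HFilePrettyPrinter(), new String[] {
        "-p",                               // print the key/values found
        "-w", "some-missing-row",           // seek to this row only (placeholder)
        "-f", "hdfs:///hbase/data/default/mytable/region/fam/hfile"  // placeholder
    });
    System.exit(rc);
  }
}
{code}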

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see if 
> someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363515#comment-15363515
 ] 

Enis Soztutar commented on HBASE-16074:
---

One trick I used to do with ITBLL is to run with this config: 
{code}

<property>
  <name>hbase.master.hfilecleaner.ttl</name>
  <value>60480</value>
</property>
<property>
  <name>hbase.master.logcleaner.ttl</name>
  <value>60480</value>
</property>
<property>
  <name>hbase.region.archive.recovered.edits</name>
  <value>true</value>
</property>

{code} 

which will keep ALL HFiles, WALs and recovered edits around in the archive. 
Then I usually find one of the missing row keys and search for it through all the 
HFiles, WAL files, and recovered edits. The Search tool in ITBLL does this, 
but it has been some time since I used it. 

You should also enable DEBUG level logging everywhere. 
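
If you prefer not to touch log4j.properties while driving such a test, a small, purely illustrative Java snippet can raise the HBase log level programmatically, assuming the log4j 1.x API bundled with HBase 1.x (it only affects the local JVM; on a cluster you would still edit log4j.properties on each node):

{code}
// Minimal sketch, assuming log4j 1.x (as bundled with HBase 1.x). Normally you
// would set this in log4j.properties instead; this is just the programmatic form
// and only applies to the current JVM.
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class EnableHBaseDebugLogging {
  public static void main(String[] args) {
    // Turn on DEBUG for all org.apache.hadoop.hbase classes in this JVM.
    Logger.getLogger("org.apache.hadoop.hbase").setLevel(Level.DEBUG);
  }
}
{code}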

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see if 
> someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-07-05 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363511#comment-15363511
 ] 

Sean Busbey commented on HBASE-16179:
-

Presumably the fix here will aim to still work with Spark 1.6?

Is Spark 2.0 still keeping Scala 2.10 around?

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> I tried building the hbase-spark module against a Spark 2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> The hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16182) Increase IntegrationTestRpcClient timeout

2016-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363508#comment-15363508
 ] 

Hadoop QA commented on HBASE-16182:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 55s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s 
{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816340/hbase-16182_v1.patch |
| JIRA Issue | HBASE-16182 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ae92668 |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2533/testReport/ |
| modules | C

[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363480#comment-15363480
 ] 

Sean Busbey commented on HBASE-16074:
-

does the same failure show on 1.2.0?

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see if 
> someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16182) Increase IntegrationTestRpcClient timeout

2016-07-05 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363468#comment-15363468
 ] 

Enis Soztutar commented on HBASE-16182:
---

[~sergey.soldatov] mind a quick review, since you worked on this recently? 

> Increase IntegrationTestRpcClient timeout 
> --
>
> Key: HBASE-16182
> URL: https://issues.apache.org/jira/browse/HBASE-16182
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-16182_v1.patch
>
>
> We have seen IntegrationTestRpcClient fail recently with a timeout. On further 
> inspection, the root cause seems to be that a very underpowered node running the 
> test caused the timeout, since there are no BLOCKED threads among the handlers, 
> readers, listener, or the client-side threads. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16182) Increase IntegrationTestRpcClient timeout

2016-07-05 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16182:
--
Attachment: hbase-16182_v1.patch

Simple patch. 

> Increase IntegrationTestRpcClient timeout 
> --
>
> Key: HBASE-16182
> URL: https://issues.apache.org/jira/browse/HBASE-16182
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-16182_v1.patch
>
>
> We have seen IntegrationTestRpcClient fail recently with a timeout. On further 
> inspection, the root cause seems to be that a very underpowered node running the 
> test caused the timeout, since there are no BLOCKED threads among the handlers, 
> readers, listener, or the client-side threads. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16182) Increase IntegrationTestRpcClient timeout

2016-07-05 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16182:
--
Status: Patch Available  (was: Open)

> Increase IntegrationTestRpcClient timeout 
> --
>
> Key: HBASE-16182
> URL: https://issues.apache.org/jira/browse/HBASE-16182
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-16182_v1.patch
>
>
> We have seen IntegrationTestRpcClient fail recently with a timeout. On further 
> inspection, the root cause seems to be that a very underpowered node running the 
> test caused the timeout, since there are no BLOCKED threads among the handlers, 
> readers, listener, or the client-side threads. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16182) Increase IntegrationTestRpcClient timeout

2016-07-05 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16182:
-

 Summary: Increase IntegrationTestRpcClient timeout 
 Key: HBASE-16182
 URL: https://issues.apache.org/jira/browse/HBASE-16182
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.4.0


We have seen IntegrationTestRpcClient fail recently with a timeout. On further 
inspection, the root cause seems to be that a very underpowered node running the 
test caused the timeout, since there are no BLOCKED threads among the handlers, 
readers, listener, or the client-side threads. 




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363456#comment-15363456
 ] 

Hadoop QA commented on HBASE-15643:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-15643 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816338/HBASE-15643.patch |
| JIRA Issue | HBASE-15643 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2532/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster, and only some of them need to be read 
> online. 
> We could improve read performance with the cache, but we need some metrics 
> for it at the table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent
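
As a side note on the counters listed in the description above, the derived hit percent is just simple arithmetic over the hit and miss counts. A tiny illustrative sketch follows; the class and field names are made up and do not correspond to an existing HBase metrics API.

{code}
// Illustrative only: how a per-table BlockCacheHitPercent could be derived from
// the hit/miss counters named in the description. Names are hypothetical.
public class TableCacheStats {
  long blockCacheHitCount;
  long blockCacheMissCount;

  double blockCacheHitPercent() {
    long total = blockCacheHitCount + blockCacheMissCount;
    return total == 0 ? 0.0 : 100.0 * blockCacheHitCount / total;  // avoid divide-by-zero
  }
}
{code}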



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HBASE-15682) HBase Backup Phase 3: Possible data loss during incremental WAL files copy

2016-07-05 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-15682 started by Vladimir Rodionov.
-
> HBase Backup Phase 3: Possible data loss during incremental WAL files copy
> --
>
> Key: HBASE-15682
> URL: https://issues.apache.org/jira/browse/HBASE-15682
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
>
> We collect the list of files in the WALs and oldWALs directories and launch a DistCp job. 
> Some files can be moved from WALs to the oldWALs directory by the RS during the job's 
> run, which can result in potential data loss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363453#comment-15363453
 ] 

Mikhail Antonov commented on HBASE-16074:
-

[~busbey] ^

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see if 
> someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-7912) HBase Backup/Restore Based on HBase Snapshot

2016-07-05 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-7912:
-
Attachment: HBaseBackupAndRestore -0.91.pdf

Updated design doc.

> HBase Backup/Restore Based on HBase Snapshot
> 
>
> Key: HBASE-7912
> URL: https://issues.apache.org/jira/browse/HBASE-7912
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Richard Ding
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBaseBackupAndRestore - v0.8.pdf, HBaseBackupAndRestore 
> -0.91.pdf, HBaseBackupAndRestore-v0.9.pdf, HBaseBackupAndRestore.pdf, 
> HBaseBackupRestore-Jira-7912-DesignDoc-v1.pdf, 
> HBaseBackupRestore-Jira-7912-DesignDoc-v2.pdf, 
> HBaseBackupRestore-Jira-7912-v4.pdf, HBaseBackupRestore-Jira-7912-v5 .pdf, 
> HBaseBackupRestore-Jira-7912-v6.pdf, HBase_BackupRestore-Jira-7912-CLI-v1.pdf
>
>
> Finally, we completed the implementation of our backup/restore solution, and 
> would like to share it with the community through this jira. 
> We are leveraging the existing hbase snapshot feature and providing a general 
> solution to common users. Our full backup uses snapshots to capture 
> metadata locally and exportsnapshot to move data to another cluster; 
> the incremental backup uses an offline WALPlayer to back up HLogs; we also 
> leverage distributed log roll and distributed flush to improve performance; plus other 
> added-on values such as convert, merge, progress report, and CLI commands, so 
> that a common user can back up hbase data without in-depth knowledge of hbase. 
> Our solution also contains some usability features for enterprise users. 
> The detailed design document and CLI commands will be attached to this jira. We 
> plan to use 10~12 subtasks to share each of the following features, and 
> document the detailed implementation in the subtasks: 
> * *Full Backup* : provide local and remote back/restore for a list of tables
> * *offline-WALPlayer* to convert HLog to HFiles offline (for incremental 
> backup)
> * *distributed* Logroll and distributed flush 
> * Backup *Manifest* and history
> * *Incremental* backup: to build on top of full backup as daily/weekly backup 
> * *Convert*  incremental backup WAL files into hfiles
> * *Merge* several backup images into one(like merge weekly into monthly)
> * *add and remove* table to and from Backup image
> * *Cancel* a backup process
> * backup progress *status*
> * full backup based on *existing snapshot*
> *-*
> *Below is the original description, kept here as the history of the 
> design and discussion back in 2013*
> There have been attempts in the past to come up with a viable HBase 
> backup/restore solution (e.g., HBASE-4618).  Recently, there are many 
> advancements and new features in HBase, for example, FileLink, Snapshot, and 
> Distributed Barrier Procedure. This is a proposal for a backup/restore 
> solution that utilizes these new features to achieve better performance and 
> consistency. 
>  
> A common practice of backup and restore in databases is to first take a full 
> baseline backup, and then periodically take incremental backups that capture 
> the changes since the full baseline backup. An HBase cluster can store a massive 
> amount of data. The combination of full backups with incremental backups has 
> tremendous benefit for HBase as well. The following is a typical scenario 
> for full and incremental backup.
> # The user takes a full backup of a table or a set of tables in HBase. 
> # The user schedules periodical incremental backups to capture the changes 
> from the full backup, or from last incremental backup.
> # The user needs to restore table data to a past point of time.
> # The full backup is restored to the table(s) or to different table name(s).  
> Then the incremental backups that are up to the desired point in time are 
> applied on top of the full backup. 
> We would support the following key features and capabilities.
> * Full backup uses HBase snapshot to capture HFiles.
> * Use HBase WALs to capture incremental changes, but we use bulk load of 
> HFiles for fast incremental restore.
> * Support single table or a set of tables, and column family level backup and 
> restore.
> * Restore to different table names.
> * Support adding additional tables or CF to backup set without interruption 
> of incremental backup schedule.
> * Support rollup/combining of incremental backups into longer period and 
> bigger incremental backups.
> * Unified command line interface for all the above.
> The solution will support HBase backup to FileSystem, either on the same 
> cluster or across clusters.  It has the flexibility to support backup to 
> other devices and 

[jira] [Commented] (HBASE-16074) ITBLL fails, reports lost big or tiny families

2016-07-05 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363451#comment-15363451
 ] 

Mikhail Antonov commented on HBASE-16074:
-

I applied the backported patches (2 of them) to the latest 1.2.2, and the very first 
run showed lost data. 

> ITBLL fails, reports lost big or tiny families
> --
>
> Key: HBASE-16074
> URL: https://issues.apache.org/jira/browse/HBASE-16074
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.3.0, 0.98.20
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: 16074.test.branch-1.3.patch, 16074.test.patch, 
> HBASE-16074.branch-1.3.001.patch, HBASE-16074.branch-1.3.002.patch, 
> HBASE-16074.branch-1.3.003.patch, HBASE-16074.branch-1.3.003.patch, 
> changes_to_stress_ITBLL.patch, changes_to_stress_ITBLL__a_bit_relaxed_.patch, 
> itbll log with failure, itbll log with success
>
>
> Underlying MR jobs succeed but I'm seeing the following in the logs (mid-size 
> distributed test cluster):
> ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or 
> tiny families, count=164
> I do not know exactly yet whether it's a bug, a test issue or an env setup 
> issue, but I need to figure it out. Opening this to raise awareness and see if 
> someone has seen this recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-05 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated HBASE-15643:

Status: Patch Available  (was: Open)

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster, and only some of them need to be read 
> online. 
> We could improve read performance with the cache, but we need some metrics 
> for it at the table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-05 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated HBASE-15643:

Attachment: HBASE-15643.patch

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster, and only some of them need to be read 
> online. 
> We could improve read performance with the cache, but we need some metrics 
> for it at the table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-05 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363445#comment-15363445
 ] 

Alicia Ying Shu commented on HBASE-15643:
-

Rebased the patch and uploaded again. 

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
>
> There are many tables on our cluster, and only some of them need to be read 
> online. 
> We could improve read performance with the cache, but we need some metrics 
> for it at the table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-05 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated HBASE-15643:

Attachment: (was: HBASE-15643.patch)

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
>
> There are many tables on our cluster, and only some of them need to be read 
> online. 
> We could improve read performance with the cache, but we need some metrics 
> for it at the table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16144) Replication queue's lock will live forever if RS acquiring the lock has died prematurely

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363443#comment-15363443
 ] 

Hudson commented on HBASE-16144:


FAILURE: Integrated in HBase-Trunk_matrix #1175 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1175/])
HBASE-16144 Revert - test failure in (tedyu: rev 
20a99b4c06ecb77c29c3ff173052a00174b9af8c)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/ReplicationZKLockCleanerChore.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMultiSlaveReplication.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java


> Replication queue's lock will live forever if RS acquiring the lock has died 
> prematurely
> 
>
> Key: HBASE-16144
> URL: https://issues.apache.org/jira/browse/HBASE-16144
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16144-v1.patch, HBASE-16144-v2.patch, 
> HBASE-16144-v3.patch, HBASE-16144-v4.patch
>
>
> By default, we use a ZK multi operation when we claimQueues. But if 
> we set hbase.zookeeper.useMulti=false, we add a lock first, then copy the 
> nodes, and finally clean up the old queue and the lock. 
> However, if the RS acquiring the lock crashes before claimQueues is done, the lock 
> will stay there forever and other RSs can never claim the queue.
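
For context, the sketch below shows the general shape of a cleanup for such stale locks. It is NOT the actual HBASE-16144 patch (the real fix adds a ReplicationZKLockCleanerChore); the znode layout and the assumption that the lock znode's data names the server holding it are illustrative guesses only.

{code}
// Rough sketch of the idea only -- not the real fix. Assumes each dead server's
// replication queue lives under rsQueuesRoot/<server> and that a "lock" child
// znode stores the name of the region server that took the lock.
import java.nio.charset.StandardCharsets;
import java.util.Set;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class StaleReplicationLockSweep {
  static void sweep(ZooKeeper zk, String rsQueuesRoot, Set<String> liveServers)
      throws KeeperException, InterruptedException {
    for (String queueOwner : zk.getChildren(rsQueuesRoot, false)) {
      String lockZnode = rsQueuesRoot + "/" + queueOwner + "/lock";  // assumed name
      Stat stat = zk.exists(lockZnode, false);
      if (stat == null) {
        continue;                                  // no lock here, nothing to clean
      }
      String holder = new String(zk.getData(lockZnode, false, stat),
          StandardCharsets.UTF_8);                 // assumed data format
      if (!liveServers.contains(holder)) {
        zk.delete(lockZnode, -1);                  // holder died mid-claim; free the queue
      }
    }
  }
}
{code}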



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-05 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated HBASE-15643:

Status: Open  (was: Patch Available)

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster, and only some of them need to be read 
> online. 
> We could improve read performance with the cache, but we need some metrics 
> for it at the table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16091) Canary takes a lot more time when there are delete markers in the table

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363444#comment-15363444
 ] 

Hudson commented on HBASE-16091:


FAILURE: Integrated in HBase-Trunk_matrix #1175 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1175/])
HBASE-16091 Canary takes lot more time when there are delete markers in 
(apurtell: rev 318751cfd621cfb848d90d623fdd9db1d19894ed)
* hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestCanaryTool.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java


> Canary takes a lot more time when there are delete markers in the table
> -
>
> Key: HBASE-16091
> URL: https://issues.apache.org/jira/browse/HBASE-16091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-16091.00.patch, HBASE-16091.01.patch, 
> HBASE-16091.02.patch
>
>
> We have a table which has a lot of delete markers, and we run the Canary test at 
> a regular interval; sometimes the tests time out because reading the first 
> row has to skip all these delete markers. Since the purpose of Canary is to check the 
> health of the region, I think setting raw=true would not defeat the purpose 
> but would provide a good perf improvement. 
> The following is an example of one such scan: 
> without changing the code it took 62.3 sec for one region scan
> 2016-06-23 08:49:11,670 INFO  [pool-2-thread-1] tool.Canary - read from 
> region  . column family 0 in 62338ms
> whereas after setting raw=true, it reduced to 58ms
> 2016-06-23 08:45:20,259 INFO  [pool-2-thread-1] tests.Canary - read from 
> region . column family 0 in 58ms
> Taking this over multiple tables, with multiple regions, would be a good 
> performance gain.
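
To make the raw=true idea above concrete, here is a minimal sketch of probing a table's first row with a raw scan so delete markers are not filtered out server-side. This is not the actual Canary.java change; the table name is a placeholder.

{code}
// Minimal sketch of the raw-scan probe described above (not the actual Canary
// change). The table name is a placeholder.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

public class RawFirstRowProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"))) {  // placeholder
      Scan scan = new Scan();
      scan.setRaw(true);                         // do not filter delete markers
      scan.setFilter(new FirstKeyOnlyFilter());  // one cell per row is enough
      scan.setCaching(1);
      scan.setCacheBlocks(false);
      long start = System.currentTimeMillis();
      try (ResultScanner scanner = table.getScanner(scan)) {
        Result first = scanner.next();           // only the first row is probed
        System.out.println("probe took " + (System.currentTimeMillis() - start)
            + " ms, got a row: " + (first != null));
      }
    }
  }
}
{code}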



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16087) Replication shouldn't start on a master if it only hosts system tables

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363441#comment-15363441
 ] 

Hudson commented on HBASE-16087:


FAILURE: Integrated in HBase-Trunk_matrix #1175 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1175/])
HBASE-16087 Replication shouldn't start on a master if if only hosts (eclark: 
rev ae92668dd6eff5271ceeecc435165f5fc14fab48)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java


> Replication shouldn't start on a master if it only hosts system tables
> --
>
> Key: HBASE-16087
> URL: https://issues.apache.org/jira/browse/HBASE-16087
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-16087.patch, HBASE-16087.v1.patch
>
>
> System tables aren't replicated so we shouldn't start up a replication master 
> if there are no user tables on the master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16177) In dev mode thrift server can't be run

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363440#comment-15363440
 ] 

Hudson commented on HBASE-16177:


FAILURE: Integrated in HBase-Trunk_matrix #1175 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1175/])
HBASE-16177 In dev mode thrift server can't be run (eclark: rev 
2eef33930c358fa00347376604c3fc4ee68019c1)
* hbase-assembly/pom.xml


> In dev mode thrift server can't be run
> --
>
> Key: HBASE-16177
> URL: https://issues.apache.org/jira/browse/HBASE-16177
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16177.patch
>
>
> {code}
> Error: Could not find or load main class 
> org.apache.hadoop.hbase.thrift2.ThriftServer
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15985) clarify promises about edits from replication in ref guide

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363442#comment-15363442
 ] 

Hudson commented on HBASE-15985:


FAILURE: Integrated in HBase-Trunk_matrix #1175 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1175/])
HBASE-15985 clarify promises about edits from replication in ref guide (busbey: 
rev 29c46c4834a3f96e9fca33cb16bc7f3748fcd60c)
* src/main/asciidoc/_chapters/ops_mgt.adoc


> clarify promises about edits from replication in ref guide
> --
>
> Key: HBASE-15985
> URL: https://issues.apache.org/jira/browse/HBASE-15985
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, Replication
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0
>
> Attachments: HBASE-15985.1.patch
>
>
> We should make clear in a callout that replication only provides 
> at-least-once delivery and doesn't guarantee ordering, so that e.g. folks 
> using increments aren't surprised.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16087) Replication shouldn't start on a master if it only hosts system tables

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363429#comment-15363429
 ] 

Hudson commented on HBASE-16087:


SUCCESS: Integrated in HBase-1.3-IT #743 (See 
[https://builds.apache.org/job/HBase-1.3-IT/743/])
HBASE-16087 Replication shouldn't start on a master if if only hosts (eclark: 
rev 59c5900fae4392b9a5fcca8dbf5543e1bea1e452)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java


> Replication shouldn't start on a master if it only hosts system tables
> --
>
> Key: HBASE-16087
> URL: https://issues.apache.org/jira/browse/HBASE-16087
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-16087.patch, HBASE-16087.v1.patch
>
>
> System tables aren't replicated so we shouldn't start up a replication master 
> if there are no user tables on the master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16177) In dev mode thrift server can't be run

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363428#comment-15363428
 ] 

Hudson commented on HBASE-16177:


SUCCESS: Integrated in HBase-1.3-IT #743 (See 
[https://builds.apache.org/job/HBase-1.3-IT/743/])
HBASE-16177 In dev mode thrift server can't be run (eclark: rev 
603decdbf7eea4f86386496d141d3548f384f409)
* hbase-assembly/pom.xml


> In dev mode thrift server can't be run
> --
>
> Key: HBASE-16177
> URL: https://issues.apache.org/jira/browse/HBASE-16177
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16177.patch
>
>
> {code}
> Error: Could not find or load main class 
> org.apache.hadoop.hbase.thrift2.ThriftServer
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16177) In dev mode thrift server can't be run

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363426#comment-15363426
 ] 

Hudson commented on HBASE-16177:


SUCCESS: Integrated in HBase-1.3 #769 (See 
[https://builds.apache.org/job/HBase-1.3/769/])
HBASE-16177 In dev mode thrift server can't be run (eclark: rev 
603decdbf7eea4f86386496d141d3548f384f409)
* hbase-assembly/pom.xml


> In dev mode thrift server can't be run
> --
>
> Key: HBASE-16177
> URL: https://issues.apache.org/jira/browse/HBASE-16177
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16177.patch
>
>
> {code}
> Error: Could not find or load main class 
> org.apache.hadoop.hbase.thrift2.ThriftServer
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16181) Backup of hbase:backup table

2016-07-05 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-16181:
-

 Summary: Backup of hbase:backup table
 Key: HBASE-16181
 URL: https://issues.apache.org/jira/browse/HBASE-16181
 Project: HBase
  Issue Type: Task
Reporter: Vladimir Rodionov


Snapshotting HBase system tables is not supported; we need to either move 
hbase:backup into a different namespace or fix snapshots.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16180) Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent

2016-07-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16180:
--
Status: Patch Available  (was: Open)

> Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent
> -
>
> Key: HBASE-16180
> URL: https://issues.apache.org/jira/browse/HBASE-16180
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: stack
>Assignee: stack
> Fix For: 1.3.0, 1.1.3, 1.0.3, 1.2.0
>
> Attachments: HBASE-16180.branch-1.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16180) Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent

2016-07-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16180:
--
Attachment: HBASE-16180.branch-1.001.patch

> Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent
> -
>
> Key: HBASE-16180
> URL: https://issues.apache.org/jira/browse/HBASE-16180
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: stack
>Assignee: stack
> Fix For: 1.2.0, 1.3.0, 1.0.3, 1.1.3
>
> Attachments: HBASE-16180.branch-1.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16174) Hook cell test up, and fix broken cell test.

2016-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363401#comment-15363401
 ] 

Hadoop QA commented on HBASE-16174:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
36s {color} | {color:green} HBASE-14850 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 55s 
{color} | {color:green} HBASE-14850 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 29s 
{color} | {color:green} HBASE-14850 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
2s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 
2s {color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 23s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 101m 6s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
50s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 149m 2s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816263/HBASE-16174.HBASE-14850.patch
 |
| JIRA Issue | HBASE-16174 |
| Optional Tests |  asflicense  shellcheck  shelldocs  cc  unit  compile  |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | HBASE-14850 / ad276ef |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2528/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/2528/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2528/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2528/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Hook cell test up, and fix broken cell test.
> 
>
> Key: HBASE-16174
> URL: https://issues.apache.org/jira/browse/HBASE-16174
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-16174.HBASE-14850.patch
>
>
> Make sure that cell test is working properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16180) Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent

2016-07-05 Thread stack (JIRA)
stack created HBASE-16180:
-

 Summary: Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs 
introduced by parent
 Key: HBASE-16180
 URL: https://issues.apache.org/jira/browse/HBASE-16180
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16176) Bug fixes/improvements on HBASE-15650 Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-07-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16176:
--
Attachment: HBASE-16176.branch-1.3.002.patch

Retry. Flakey test.

> Bug fixes/improvements on HBASE-15650 Remove TimeRangeTracker as point of 
> contention when many threads reading a StoreFile
> --
>
> Key: HBASE-16176
> URL: https://issues.apache.org/jira/browse/HBASE-16176
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-16176.branch-1.3.001.patch, 
> HBASE-16176.branch-1.3.002.patch, HBASE-16176.branch-1.3.002.patch
>
>
> Debugging the parent issue, I came up with some improvements on the old HBASE-15650 
> "Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile". Let's get them in. Here are the changes:
> {code}
>   6  Change HFile Writer constructor so we pass in the TimeRangeTracker, 
> if one,
>   7  on construction rather than set later (the flag and reference were 
> not
>   8  volatile so could have made for issues in concurrent case) 2. Make 
> sure the
>   9  construction of a TimeRange from a TimeRangeTracker on open of an 
> HFile Reader
>  10  never makes a bad minimum value, one that would preclude us reading 
> any
>  11  values from a file (add a log and set min to 0)
>  12 M hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
>  13  Call through to next constructor (if minStamp was 0, we'd skip 
> setting allTime=true)
>  14 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
>  15  Add constructor override that takes a TimeRangeTracker (set when 
> flushing but
>  16  not when compacting)
>  17 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
>  18  Add override creating an HFile in tmp that takes a TimeRangeTracker
>  19 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
>  20  Add override for HFile Writer that takes a TimeRangeTracker
>  21  Take it on construction instead of having it passed by a setter 
> later (flags
>  22  and reference set by the setter were not volatile... could have been 
> prob
>  23  in concurrent case)
>  24 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
>  25  Log WARN if bad initial TimeRange value (and then 'fix' it)
>  26 M 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java
>  27  A few tests to prove serialization works as expected and that we'll 
> get a bad min if
>  28  not constructed properly.
> {code}
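
The constructor-vs-setter point in the change list above is about safe publication. Below is a tiny illustrative sketch (not the real StoreFile/HFile writer code) of why a final field set at construction is preferable to a non-volatile field set by a later setter.

{code}
// Illustrative only -- not the actual StoreFile.Writer API. A final field
// assigned in the constructor is safely published to other threads; a plain,
// non-volatile field assigned by a later setter may never become visible to a
// concurrent reader, which is the hazard the change list describes.
public class PublicationSketch {
  static class Tracker { /* stand-in for TimeRangeTracker */ }

  static class SetterStyle {
    private Tracker tracker;               // not volatile: unsafe publication
    void setTracker(Tracker t) { this.tracker = t; }
    Tracker tracker() { return tracker; }  // may observe null under concurrency
  }

  static class ConstructorStyle {
    private final Tracker tracker;         // final: safe publication guaranteed
    ConstructorStyle(Tracker t) { this.tracker = t; }
    Tracker tracker() { return tracker; }
  }
}
{code}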



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16176) Bug fixes/improvements on HBASE-15650 Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-07-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363384#comment-15363384
 ] 

stack commented on HBASE-16176:
---

Yessir. Thanks for review [~mantonov]

> Bug fixes/improvements on HBASE-15650 Remove TimeRangeTracker as point of 
> contention when many threads reading a StoreFile
> --
>
> Key: HBASE-16176
> URL: https://issues.apache.org/jira/browse/HBASE-16176
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-16176.branch-1.3.001.patch, 
> HBASE-16176.branch-1.3.002.patch
>
>
> Debugging the parent issue, came up with some improvements on old HBASE-15650 
> "Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile". Lets get them in. Here are the changes:
> {code}
>   6  Change HFile Writer constructor so we pass in the TimeRangeTracker, 
> if one,
>   7  on construction rather than set later (the flag and reference were 
> not
>   8  volatile so could have made for issues in concurrent case) 2. Make 
> sure the
>   9  construction of a TimeRange from a TimeRangeTracker on open of an 
> HFile Reader
>  10  never makes a bad minimum value, one that would preclude us reading 
> any
>  11  values from a file (add a log and set min to 0)
>  12 M hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
>  13  Call through to next constructor (if minStamp was 0, we'd skip 
> setting allTime=true)
>  14 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
>  15  Add constructor override that takes a TimeRangeTracker (set when 
> flushing but
>  16  not when compacting)
>  17 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
>  18  Add override creating an HFile in tmp that takes a TimeRangeTracker
>  19 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
>  20  Add override for HFile Writer that takes a TimeRangeTracker
>  21  Take it on construction instead of having it passed by a setter 
> later (flags
>  22  and reference set by the setter were not volatile... could have been 
> prob
>  23  in concurrent case)
>  24 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
>  25  Log WARN if bad initial TimeRange value (and then 'fix' it)
>  26 M 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java
>  27  A few tests to prove serialization works as expected and that we'll 
> get a bad min if
>  28  not constructed properly.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16093) Splits failed before creating daughter regions leave meta inconsistent

2016-07-05 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363383#comment-15363383
 ] 

Enis Soztutar commented on HBASE-16093:
---

[~ndimiduk] I think this ONLY affects zk-less assignment. So it should be fine 
for branch-1.1 backport, since zk-less is not default there. 

> Splits failed before creating daughter regions leave meta inconsistent
> --
>
> Key: HBASE-16093
> URL: https://issues.apache.org/jira/browse/HBASE-16093
> Project: HBase
>  Issue Type: Bug
>  Components: master, Region Assignment
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 1.3.0, 1.4.0, 1.2.2, 1.1.6
>
> Attachments: HBASE-16093.branch-1.patch
>
>
> This is on branch-1 based code only.
> Here's the sequence of events.
> # A regionserver opens a new region. That region looks like it should split.
> # So the regionserver starts a split transaction.
> # Split transaction starts to execute
> # Split transaction encounters an error in stepsBeforePONR
> # Split transaction starts rollback
> # Split transaction notifies master that it's rolling back using 
> HMasterRpcServices#reportRegionStateTransition
> # AssignmentManager#onRegionTransition is called with SPLIT_REVERTED
> # AssignmentManager#onRegionSplitReverted is called.
> # That onlines the parent region and offlines the daughter regions.
> However, the daughter regions were never created in meta, so all that gets 
> done is that the state for those rows is set to OFFLINE. Now all clients 
> trying to get the parent instead get the offline daughter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16096) Replication keeps accumulating znodes

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16096:
---
Description: 
If there is an error while creating the replication source on adding the peer, 
the source is not added to the in-memory list of sources but the replication 
peer is. 
However, in such a scenario, when you remove the peer, it is deleted from 
zookeeper successfully, but the peer is only removed from the in-memory list of 
peers once the corresponding sources are deleted (which, as noted, do not exist 
because of the error creating the source). 
The problem here is the ordering of operations for adding/removing the source 
and the peer. 
Modifying the code to always remove queues from the underlying storage, even if 
no sources exist, also requires a small refactoring of 
TableBasedReplicationQueuesImpl so that it does not abort on removeQueues() of 
an empty queue.

  was:
If there is an error while creating the replication source on adding the peer, 
the source is not added to the in memory list of sources but the replication 
peer is. 
However, in such a scenario, when you remove the peer, it is deleted from 
zookeeper successfully but for removing the in memory list of peers, we wait 
for the corresponding sources to get deleted (which as we said don't exist 
because of error creating the source). 
The problem here is the ordering of operations for adding/removing source and 
peer. 


> Replication keeps accumulating znodes
> -
>
> Key: HBASE-16096
> URL: https://issues.apache.org/jira/browse/HBASE-16096
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
> Attachments: HBASE-16096.patch
>
>
> If there is an error while creating the replication source on adding the 
> peer, the source is not added to the in memory list of sources but the 
> replication peer is. 
> However, in such a scenario, when you remove the peer, it is deleted from 
> zookeeper successfully but for removing the in memory list of peers, we wait 
> for the corresponding sources to get deleted (which as we said don't exist 
> because of error creating the source). 
> The problem here is the ordering of operations for adding/removing source and 
> peer. 
> Modifying the code to always remove queues from the underlying storage, even 
> if no sources exist, also requires a small refactoring of 
> TableBasedReplicationQueuesImpl to not abort on removeQueues() of an empty 
> queue
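
A minimal, hypothetical sketch of the ordering fix described above; the class and 
method names are illustrative only and are not the actual replication code:

{code:title=RemovePeerSketch.java|borderStyle=solid}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RemovePeerSketch {

  /** Stand-in for the persisted queue state (e.g. znodes or a table). */
  interface ReplicationQueues {
    // Must tolerate removing a queue that is empty or was never populated.
    void removeQueue(String peerId);
  }

  /** Stand-in for the in-memory list of replication sources. */
  private final Map<String, Object> sources = new ConcurrentHashMap<>();

  void removePeer(String peerId, ReplicationQueues queues) {
    // 1. Always clean up the underlying storage first, even if the source was
    //    never created because adding the peer failed part-way.
    queues.removeQueue(peerId);
    // 2. Then drop the in-memory source, if one was ever registered; removing
    //    a missing key is a no-op rather than something to wait on.
    sources.remove(peerId);
  }
}
{code}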



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16096) Replication keeps accumulating znodes

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16096:
---
Attachment: HBASE-16096.patch

> Replication keeps accumulating znodes
> -
>
> Key: HBASE-16096
> URL: https://issues.apache.org/jira/browse/HBASE-16096
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
> Attachments: HBASE-16096.patch
>
>
> If there is an error while creating the replication source on adding the 
> peer, the source is not added to the in memory list of sources but the 
> replication peer is. 
> However, in such a scenario, when you remove the peer, it is deleted from 
> zookeeper successfully but for removing the in memory list of peers, we wait 
> for the corresponding sources to get deleted (which as we said don't exist 
> because of error creating the source). 
> The problem here is the ordering of operations for adding/removing source and 
> peer. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16096) Replication keeps accumulating znodes

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16096:
---
Status: Patch Available  (was: Open)

> Replication keeps accumulating znodes
> -
>
> Key: HBASE-16096
> URL: https://issues.apache.org/jira/browse/HBASE-16096
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.2.0, 2.0.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
> Attachments: HBASE-16096.patch
>
>
> If there is an error while creating the replication source on adding the 
> peer, the source is not added to the in memory list of sources but the 
> replication peer is. 
> However, in such a scenario, when you remove the peer, it is deleted from 
> zookeeper successfully but for removing the in memory list of peers, we wait 
> for the corresponding sources to get deleted (which as we said don't exist 
> because of error creating the source). 
> The problem here is the ordering of operations for adding/removing source and 
> peer. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16096) Replication keeps accumulating znodes

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16096:
---
Attachment: (was: HBASE-16096.patch)

> Replication keeps accumulating znodes
> -
>
> Key: HBASE-16096
> URL: https://issues.apache.org/jira/browse/HBASE-16096
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>
> If there is an error while creating the replication source on adding the 
> peer, the source is not added to the in memory list of sources but the 
> replication peer is. 
> However, in such a scenario, when you remove the peer, it is deleted from 
> zookeeper successfully but for removing the in memory list of peers, we wait 
> for the corresponding sources to get deleted (which as we said don't exist 
> because of error creating the source). 
> The problem here is the ordering of operations for adding/removing source and 
> peer. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16096) Replication keeps accumulating znodes

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16096:
---
Attachment: HBASE-16096.patch

> Replication keeps accumulating znodes
> -
>
> Key: HBASE-16096
> URL: https://issues.apache.org/jira/browse/HBASE-16096
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
> Attachments: HBASE-16096.patch
>
>
> If there is an error while creating the replication source on adding the 
> peer, the source is not added to the in memory list of sources but the 
> replication peer is. 
> However, in such a scenario, when you remove the peer, it is deleted from 
> zookeeper successfully but for removing the in memory list of peers, we wait 
> for the corresponding sources to get deleted (which as we said don't exist 
> because of error creating the source). 
> The problem here is the ordering of operations for adding/removing source and 
> peer. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16096) Replication keeps accumulating znodes

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16096:
---
Attachment: (was: HBASE-16096.patch)

> Replication keeps accumulating znodes
> -
>
> Key: HBASE-16096
> URL: https://issues.apache.org/jira/browse/HBASE-16096
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>
> If there is an error while creating the replication source on adding the 
> peer, the source is not added to the in memory list of sources but the 
> replication peer is. 
> However, in such a scenario, when you remove the peer, it is deleted from 
> zookeeper successfully but for removing the in memory list of peers, we wait 
> for the corresponding sources to get deleted (which as we said don't exist 
> because of error creating the source). 
> The problem here is the ordering of operations for adding/removing source and 
> peer. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16096) Replication keeps accumulating znodes

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16096:
---
Status: Open  (was: Patch Available)

> Replication keeps accumulating znodes
> -
>
> Key: HBASE-16096
> URL: https://issues.apache.org/jira/browse/HBASE-16096
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.2.0, 2.0.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>
> If there is an error while creating the replication source on adding the 
> peer, the source is not added to the in memory list of sources but the 
> replication peer is. 
> However, in such a scenario, when you remove the peer, it is deleted from 
> zookeeper successfully but for removing the in memory list of peers, we wait 
> for the corresponding sources to get deleted (which as we said don't exist 
> because of error creating the source). 
> The problem here is the ordering of operations for adding/removing source and 
> peer. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16177) In dev mode thrift server can't be run

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363335#comment-15363335
 ] 

Hudson commented on HBASE-16177:


SUCCESS: Integrated in HBase-1.4 #272 (See 
[https://builds.apache.org/job/HBase-1.4/272/])
HBASE-16177 In dev mode thrift server can't be run (eclark: rev 
1318e84e14112a524935f50a380b9a9da29385fd)
* hbase-assembly/pom.xml


> In dev mode thrift server can't be run
> --
>
> Key: HBASE-16177
> URL: https://issues.apache.org/jira/browse/HBASE-16177
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16177.patch
>
>
> {code}
> Error: Could not find or load main class 
> org.apache.hadoop.hbase.thrift2.ThriftServer
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16091) Canary takes lot more time when there are delete markers in the table

2016-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363336#comment-15363336
 ] 

Hudson commented on HBASE-16091:


SUCCESS: Integrated in HBase-1.4 #272 (See 
[https://builds.apache.org/job/HBase-1.4/272/])
HBASE-16091 Canary takes lot more time when there are delete markers in 
(apurtell: rev 8efc6148b9ccaa29d2608d1d7348d0d3c5d8158d)
* hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestCanaryTool.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java


> Canary takes lot more time when there are delete markers in the table
> -
>
> Key: HBASE-16091
> URL: https://issues.apache.org/jira/browse/HBASE-16091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-16091.00.patch, HBASE-16091.01.patch, 
> HBASE-16091.02.patch
>
>
> We have a table which has a lot of delete markers, and we run the Canary test 
> on a regular interval; sometimes the tests time out because reading the first 
> row has to skip all these delete markers. Since the purpose of Canary is to 
> check the health of the region, I think setting raw=true would not defeat the 
> purpose but would provide a good performance improvement. 
> Following is an example of one such scan. Without changing the code, it took 
> 62.3 seconds for one region scan:
> 2016-06-23 08:49:11,670 INFO  [pool-2-thread-1] tool.Canary - read from 
> region  . column family 0 in 62338ms
> whereas after setting raw=true, it reduced to 58ms:
> 2016-06-23 08:45:20,259 INFO  [pool-2-thread-1] tests.Canary - read from 
> region . column family 0 in 58ms
> Taking this over multiple tables, with multiple regions, would be a good 
> performance gain.
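
For illustration, a minimal sketch of a read probe that enables raw scanning so 
delete markers are returned instead of being skipped one by one; the table name is 
hypothetical and this is not the actual Canary code:

{code:title=RawScanSketch.java|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class RawScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("probe_table"))) {
      Scan scan = new Scan();
      scan.setRaw(true);      // return delete markers instead of applying them
      scan.setCaching(1);     // the probe only needs the first row
      try (ResultScanner scanner = table.getScanner(scan)) {
        Result first = scanner.next();   // returns quickly even with many delete markers
        System.out.println("first row present: " + (first != null));
      }
    }
  }
}
{code}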



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16176) Bug fixes/improvements on HBASE-15650 Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363273#comment-15363273
 ] 

Hadoop QA commented on HBASE-16176:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 39s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
45s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
29s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} branch-1.3 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 52s 
{color} | {color:red} hbase-server in branch-1.3 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 24s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 48s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 34s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 48s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
39s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 130m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816272/HBASE

[jira] [Created] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-07-05 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16179:
--

 Summary: Fix compilation errors when building hbase-spark against 
Spark 2.0
 Key: HBASE-16179
 URL: https://issues.apache.org/jira/browse/HBASE-16179
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


I tried building hbase-spark module against Spark-2.0 snapshot and got the 
following compilation errors:

http://pastebin.com/bg3w247a

Some Spark classes such as DataTypeParser and Logging are no longer accessible 
to downstream projects.

hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14548) Expand how table coprocessor jar and dependency path can be specified

2016-07-05 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363259#comment-15363259
 ] 

Jerry He commented on HBASE-14548:
--

OK, a wildcard path is probably caught by FileSystem.isDirectory() as a 
FileNotFoundException, so it returns false.

I think we only need to guarantee these basic cases work:
1. No regression.  If a user specifies a jar path, it should work as it 
currently does.
2. If it is a directory path, it should work by including the jars under it.
3. Support a jar path wildcard at the last level of the path string.

The patch looks good.  See if [~apurtell] [~anoop.hbase] have comments.
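
For illustration, a sketch of how a coprocessor path with a wildcard could be 
expanded with the Hadoop FileSystem API so that only plain files (or the jars 
directly under a matched directory) are kept; this is just the idea behind cases 2 
and 3 above, not the attached patch:

{code:title=GlobJarsSketch.java|borderStyle=solid}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GlobJarsSketch {

  /** Expand a path that may contain a wildcard, keeping only jar files. */
  static List<Path> resolveJarPaths(String pathStr, Configuration conf) throws IOException {
    Path pattern = new Path(pathStr);
    FileSystem fs = pattern.getFileSystem(conf);
    List<Path> jars = new ArrayList<>();
    FileStatus[] matches = fs.globStatus(pattern);
    if (matches == null) {
      return jars;                         // nothing matched at all
    }
    for (FileStatus match : matches) {
      if (match.isDirectory()) {
        // a matched directory contributes the jars directly under it
        for (FileStatus child : fs.listStatus(match.getPath())) {
          if (!child.isDirectory() && child.getPath().getName().endsWith(".jar")) {
            jars.add(child.getPath());
          }
        }
      } else {
        // a plain file (exact path or wildcard match) is used as-is
        jars.add(match.getPath());
      }
    }
    return jars;                           // callers may treat an empty list as an error
  }
}
{code}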

> Expand how table coprocessor jar and dependency path can be specified
> -
>
> Key: HBASE-14548
> URL: https://issues.apache.org/jira/browse/HBASE-14548
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: li xiang
> Fix For: 2.0.0
>
> Attachments: HBASE-14548-1.2.0-v0.patch, HBASE-14548-1.2.0-v1.patch, 
> HBASE-14548-1.2.0-v2.patch, HBASE-14548-master-v1.patch, 
> HBASE-14548-master-v2.patch
>
>
> Currently you can specify the location of the coprocessor jar in the table 
> coprocessor attribute.
> The problem is that it only allows you to specify one jar that implements the 
> coprocessor.  You will need to either bundle all the dependencies into this 
> jar, or you will need to copy the dependencies into HBase lib dir.
> The first option may not be ideal sometimes.  The second choice can be 
> troublesome too, particularly when the hbase region sever node and dirs are 
> dynamically added/created.
> There are a couple things we can expand here.  We can allow the coprocessor 
> attribute to specify a directory location, probably on hdfs.
> We may even allow some wildcard in there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363253#comment-15363253
 ] 

Ted Yu commented on HBASE-14921:


Please check test failures in TestCompactingToCellArrayMapMemStore, etc.

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, HBASE-14921-V04-CA-V02.patch, 
> HBASE-14921-V04-CA.patch, HBASE-14921-V05-CAO.patch, 
> InitialCellArrayMapEvaluation.pdf, IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Attachment: HBASE-16081.patch

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Critical
> Attachments: HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.
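
One common way to avoid this kind of hang is to replace the blocking take with a 
bounded poll that re-checks the executor's state between attempts; the sketch below 
is only an illustration of that idea, not the attached patch:

{code:title=BoundedPollSketch.java|borderStyle=solid}
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BoundedPollSketch {

  /**
   * Waits for the next completed task, but gives up (returns null) once the
   * executor has terminated, instead of blocking forever the way take() can
   * if the pool was shut down underneath us.
   */
  static <T> Future<T> awaitNext(CompletionService<T> cs, ExecutorService pool)
      throws InterruptedException {
    while (true) {
      Future<T> done = cs.poll(1, TimeUnit.SECONDS);   // bounded wait
      if (done != null) {
        return done;                                   // a task completed
      }
      if (pool.isTerminated()) {
        return null;                                   // nothing will ever complete
      }
    }
  }
}
{code}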



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool

2016-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363238#comment-15363238
 ] 

Hadoop QA commented on HBASE-16095:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 4s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 23s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.TestCheckTestClasses |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812976/hbase-16095_v1.patch |
| JIRA Issue | HBASE-16095 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PR

[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Attachment: (was: HBASE-16081.patch)

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Critical
> Attachments: HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16157) The incorrect block cache count and size are caused by removing duplicate block key in the LruBlockCache

2016-07-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16157:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, ChiaPing.

Thanks for the reviews.

> The incorrect block cache count and size are caused by removing duplicate 
> block key in the LruBlockCache
> 
>
> Key: HBASE-16157
> URL: https://issues.apache.org/jira/browse/HBASE-16157
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16157-v1.patch, HBASE-16157-v2.patch, 
> HBASE-16157-v3.patch, HBASE-16157-v4.patch
>
>
> {code:title=LruBlockCache.java|borderStyle=solid}
> // Check return value from the Map#remove before updating the metrics
>   protected long evictBlock(LruCachedBlock block, boolean 
> evictedByEvictionProcess) {
> map.remove(block.getCacheKey());
> updateSizeMetrics(block, true);
> ...
> }
> {code}
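
A self-contained sketch of the guard the snippet above is asking for: update the 
metrics only when Map#remove actually removed an entry, so evicting the same key 
twice cannot double-count. The fields here are simplified stand-ins, not the real 
LruBlockCache members:

{code:title=EvictGuardSketch.java|borderStyle=solid}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class EvictGuardSketch {
  private final ConcurrentMap<String, Long> map = new ConcurrentHashMap<>();
  private final AtomicLong size = new AtomicLong();
  private final AtomicLong count = new AtomicLong();

  void cacheBlock(String key, long blockSize) {
    if (map.putIfAbsent(key, blockSize) == null) {
      size.addAndGet(blockSize);
      count.incrementAndGet();
    }
  }

  long evictBlock(String key) {
    Long removed = map.remove(key);
    if (removed == null) {
      return 0L;                    // already evicted elsewhere: leave the metrics alone
    }
    size.addAndGet(-removed);       // safe: this call really removed the entry
    count.decrementAndGet();
    return removed;
  }
}
{code}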



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16145) MultiRowRangeFilter constructor shouldn't throw IOException

2016-07-05 Thread Konstantin Ryakhovskiy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363228#comment-15363228
 ] 

Konstantin Ryakhovskiy commented on HBASE-16145:


The failed test TestHRegionWithInMemoryFlush does not use the functionality that 
is patched; it is not related to the patch.

> MultiRowRangeFilter constructor shouldn't throw IOException
> ---
>
> Key: HBASE-16145
> URL: https://issues.apache.org/jira/browse/HBASE-16145
> Project: HBase
>  Issue Type: Wish
>Reporter: Konstantin Ryakhovskiy
>Assignee: Konstantin Ryakhovskiy
>Priority: Minor
> Attachments: HBASE-16145.master.001.patch, 
> HBASE-16145.master.002.patch
>
>
> The MultiRowRangeFilter constructor declares IOException.
> The constructor:
> - sorts and merges its incoming argument, a list of ranges,
> - assigns the sorted list to a private variable and does not do anything else.
> There is no reason to declare IOException.
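
As a small illustration of the point (with a made-up class, not the real filter): a 
constructor that only sorts and stores its input has nothing that can throw a checked 
IOException, so the throws clause can simply be dropped:

{code:title=NoCheckedThrowSketch.java|borderStyle=solid}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class NoCheckedThrowSketch {
  private final List<Integer> sortedRanges;

  // Before: public NoCheckedThrowSketch(List<Integer> ranges) throws IOException { ... }
  // Nothing in the body performs I/O, so the declaration below is enough.
  public NoCheckedThrowSketch(List<Integer> ranges) {
    List<Integer> copy = new ArrayList<>(ranges);
    Collections.sort(copy);
    this.sortedRanges = copy;
  }

  public List<Integer> getSortedRanges() {
    return Collections.unmodifiableList(sortedRanges);
  }
}
{code}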



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16157) The incorrect block cache count and size are caused by removing duplicate block key in the LruBlockCache

2016-07-05 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363212#comment-15363212
 ] 

Enis Soztutar commented on HBASE-16157:
---

[~ted_yu] is this committed? Why is it still open with no fixVersions set? 

> The incorrect block cache count and size are caused by removing duplicate 
> block key in the LruBlockCache
> 
>
> Key: HBASE-16157
> URL: https://issues.apache.org/jira/browse/HBASE-16157
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Attachments: HBASE-16157-v1.patch, HBASE-16157-v2.patch, 
> HBASE-16157-v3.patch, HBASE-16157-v4.patch
>
>
> {code:title=LruBlockCache.java|borderStyle=solid}
> // Check return value from the Map#remove before updating the metrics
>   protected long evictBlock(LruCachedBlock block, boolean 
> evictedByEvictionProcess) {
> map.remove(block.getCacheKey());
> updateSizeMetrics(block, true);
> ...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16145) MultiRowRangeFilter constructor shouldn't throw IOException

2016-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363206#comment-15363206
 ] 

Hadoop QA commented on HBASE-16145:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 41s 
{color} | {color:red} hbase-rest in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 12s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 16s 
{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
42s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 168m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush |
\\
\\
|| Subsystem || Report/Notes 

[jira] [Commented] (HBASE-16178) HBase restore command fails on cluster with encrypted HDFS

2016-07-05 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363202#comment-15363202
 ] 

Dima Spivak commented on HBASE-16178:
-

This doesn't seem to be a bug in HBase. HDFS doesn't allow moving (renaming) files 
between encryption zones (or from a location that's not in an encryption zone into 
one that is), and it looks like you're trying to do a bulk load of HFiles that 
aren't in an encryption zone into an HBase root directory that is. Since it looks 
like you're using HDP, it might be easiest to get help from a Hortonworks user 
forum.
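
For context, a minimal sketch of the usual workaround for getting a file into an 
encryption zone: copy it (the data is re-encrypted under the target zone's key on 
write) and delete the source, instead of renaming across the zone boundary. The 
paths are hypothetical and this is not HBase's restore code:

{code:title=CopyIntoEncryptionZoneSketch.java|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CopyIntoEncryptionZoneSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path src = new Path("/apps/hbase/staging/outside-zone/hfile");   // hypothetical
    Path dst = new Path("/apps/hbase/data/inside-zone/hfile");       // hypothetical

    // rename() cannot cross an encryption-zone boundary, so move by copy + delete.
    boolean moved = FileUtil.copy(fs, src, fs, dst, true /* delete source */, conf);
    System.out.println("moved: " + moved);
  }
}
{code}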

> HBase restore command fails on cluster with encrypted HDFS
> --
>
> Key: HBASE-16178
> URL: https://issues.apache.org/jira/browse/HBASE-16178
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
> Environment: Cluster with Encrypted HDFS
>Reporter: Romil Choksi
>  Labels: restore
>
> HBase restore command fails to move hfile into an encryption zone
> {code:title= HDFS namenode log}
> 2016-07-05 07:27:00,580 INFO  ipc.Server (Server.java:logException(2401)) - 
> IPC Server handler 31 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.rename from :53481 
> Call#130 Retry#0
> java.io.IOException: 
> /apps/hbase/staging/hbase__table_29ov3nxj1o__7o65g4lakspqe1mlku17g0n6e2c61v72o632puuntpfcf3tf41n69bfaso00gvlp/cf1/8cf0242072534ee0a7ee8710b9235c3e
>  can't be moved into an encryption zone.
> at 
> org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager.checkMoveValidity(EncryptionZoneManager.java:272)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.unprotectedRenameTo(FSDirRenameOp.java:187)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:474)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3761)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:986)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:583)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Attachment: (was: HBASE-16081.patch)

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Critical
> Attachments: HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Attachment: HBASE-16081.patch

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Critical
> Attachments: HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16178) HBase restore command fails on cluster with encrypted HDFS

2016-07-05 Thread Romil Choksi (JIRA)
Romil Choksi created HBASE-16178:


 Summary: HBase restore command fails on cluster with encrypted HDFS
 Key: HBASE-16178
 URL: https://issues.apache.org/jira/browse/HBASE-16178
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
 Environment: Cluster with Encrypted HDFS
Reporter: Romil Choksi


HBase restore command fails to move hfile into an encryption zone

{code:title= HDFS namenode log}
2016-07-05 07:27:00,580 INFO  ipc.Server (Server.java:logException(2401)) - IPC 
Server handler 31 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.rename from :53481 
Call#130 Retry#0
java.io.IOException: 
/apps/hbase/staging/hbase__table_29ov3nxj1o__7o65g4lakspqe1mlku17g0n6e2c61v72o632puuntpfcf3tf41n69bfaso00gvlp/cf1/8cf0242072534ee0a7ee8710b9235c3e
 can't be moved into an encryption zone.
at 
org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager.checkMoveValidity(EncryptionZoneManager.java:272)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.unprotectedRenameTo(FSDirRenameOp.java:187)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:474)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3761)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:986)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:583)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-05 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363145#comment-15363145
 ] 

Elliott Clark commented on HBASE-16081:
---

Please describe the whole deadlock in the code. Forcing people to read from two 
places means that the info won't be read by many.

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Critical
> Attachments: HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-05 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-16081:
--
Priority: Critical  (was: Major)

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Critical
> Attachments: HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Attachment: HBASE-16081.patch

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Reporter: Ashu Pachauri
>Assignee: Joseph
> Attachments: HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-05 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph reassigned HBASE-16081:
--

Assignee: Joseph  (was: Ashu Pachauri)

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Reporter: Ashu Pachauri
>Assignee: Joseph
> Attachments: HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16174) Hook cell test up, and fix broken cell test.

2016-07-05 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-16174:
--
Status: Patch Available  (was: Open)

> Hook cell test up, and fix broken cell test.
> 
>
> Key: HBASE-16174
> URL: https://issues.apache.org/jira/browse/HBASE-16174
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-16174.HBASE-14850.patch
>
>
> Make sure that cell test is working properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14138) HBase Backup/Restore Phase 3: Security

2016-07-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-14138:
--

Assignee: Ted Yu

> HBase Backup/Restore Phase 3: Security
> --
>
> Key: HBASE-14138
> URL: https://issues.apache.org/jira/browse/HBASE-14138
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: 2.0.0
>
>
> Security is not supported. Only an authorized user (GLOBAL ADMIN) should be 
> allowed to perform backup/restore. See HBASE-7367 for a good discussion of the 
> snapshot security model. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15998) Cancel restore operation support

2016-07-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-15998:
--

Assignee: Ted Yu

> Cancel restore operation support
> 
>
> Key: HBASE-15998
> URL: https://issues.apache.org/jira/browse/HBASE-15998
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: 2.0.0
>
>
> This issue is to add support for user to cancel on-going restore operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15565) Rewrite restore with Procedure V2

2016-07-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363106#comment-15363106
 ] 

Ted Yu commented on HBASE-15565:


This is a pre-requisite for HBASE-15998.

> Rewrite restore with Procedure V2
> -
>
> Key: HBASE-15565
> URL: https://issues.apache.org/jira/browse/HBASE-15565
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15565-v1.txt, 15565.v5.txt, 15565.v8.txt
>
>
> Currently restore is driven by RestoreClientImpl#restore().
> This issue rewrites the flow using Procedure V2.
> RestoreTablesProcedure would replace RestoreClientImpl.
> Main logic would be driven by executeFromState() method.
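
For readers unfamiliar with the state-machine style referred to above, a generic 
sketch of an executeFromState-style driver; this is deliberately not the real 
Procedure V2 API, and the states shown are made up:

{code:title=StateMachineSketch.java|borderStyle=solid}
public class StateMachineSketch {

  enum RestoreState { PREPARE, RESTORE_IMAGES, FINISH }
  enum Flow { HAS_MORE_STATE, NO_MORE_STATE }

  private RestoreState state = RestoreState.PREPARE;

  Flow executeFromState(RestoreState current) {
    switch (current) {
      case PREPARE:
        // e.g. validate the backup images and take the target tables offline
        state = RestoreState.RESTORE_IMAGES;
        return Flow.HAS_MORE_STATE;
      case RESTORE_IMAGES:
        // e.g. restore the full image, then replay incremental images
        state = RestoreState.FINISH;
        return Flow.HAS_MORE_STATE;
      case FINISH:
      default:
        // e.g. bring the tables back online and report success
        return Flow.NO_MORE_STATE;
    }
  }

  void run() {
    // A real framework would persist the state after each step so the flow
    // can resume (or be cancelled) after a failure.
    while (executeFromState(state) == Flow.HAS_MORE_STATE) {
      // loop until the terminal state reports NO_MORE_STATE
    }
  }
}
{code}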



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15998) Cancel restore operation support

2016-07-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15998:
---
Description: This issue is to add support for user to cancel on-going 
restore operation.

> Cancel restore operation support
> 
>
> Key: HBASE-15998
> URL: https://issues.apache.org/jira/browse/HBASE-15998
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
>
> This issue is to add support for user to cancel on-going restore operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

