[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454562#comment-15454562
 ] 

ramkrishna.s.vasudevan commented on HBASE-16501:


I am not sure how to see the count, but when you run 
TestHRegion#testReverseScanner_StackOverflow with and without the patch you 
get this difference:
Without patch - 50014998: we make this many getNext() calls as part of 
seekToPreviousRow().

With patch - 29997.
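
For reference, a minimal sketch of how such a counter could be wired in; the 
attached sysocount.patch may do this differently, and the class below is 
purely illustrative:

{code}
// Hypothetical instrumentation (not the actual sysocount.patch): bump the
// counter from inside the scanner's skip loop, then print the total once the
// test finishes.
import java.util.concurrent.atomic.AtomicLong;

public final class SeekCounter {
  public static final AtomicLong NEXT_CALLS = new AtomicLong();

  private SeekCounter() {
  }

  // From the skip loop: SeekCounter.NEXT_CALLS.incrementAndGet();
  public static void dump() {
    System.out.println("next() calls during seekToPreviousRow(): " + NEXT_CALLS.get());
  }
}
{code}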


> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch
>
>
> Need to check the details and see how to implement it, but the problem is this:
> in the seekToPreviousRow() impl, in the case of a reverse scan, say we have
> rows row1 to row2 and we are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was
> skipped due to mvcc; we move to the previous row 'row1'. Now we read
> row1 and even if this does not match in mvcc we skip it and again read
> row2 and do the same.
> Like this we keep going until we come to row1, and this time we read until
> row2 just to know we have to skip it. The same problem happens in
> StoreFileScanner also, where we do a lot of seek() and next() calls. Better
> to solve this case.
> [~zjushch] - FYI.
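
To make the cost concrete, a hedged illustration of the pattern described
above; the rows are invented for the example, only the shape matches the
counts reported in the comments:

{noformat}
reverse scan, read point below every cell's seqId:
  seekToPreviousRow(rowN)   -> lands on rowN-1, next()s forward, all cells skipped
  seekToPreviousRow(rowN-1) -> next()s through rowN-1 onwards again, all skipped
  ...
  seekToPreviousRow(row2)   -> next()s through row1 onwards, all skipped
each seek re-reads everything from the seek point onwards, so N rows cost
roughly O(N^2) next() calls instead of O(N).
{noformat}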



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16501:
---
Attachment: HBASE-16501_sysocount.patch

Just to show how the count was taken with and without patch.

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it. But the problem is this
> In seekToPReviousRow impl in case of a reverse scan, say we have rows 
> row1 to row2. We are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was 
> skipped due to mvcc we move to the previous row 'row1'. Now we read this 
> row1 and even if this does not match in mvcc we skip and again read 
> row2 and do the same. 
> Like this we keep doing til we come to row1 and this time we read til 
> row2 just to k now we have to skip it. The same problem happens in 
> Storefilescanner also and there we do lot of seek and next(). Better to solve 
> this case. 
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16536) Make the HBase minicluster easy to use for testing downstream applications.

2016-09-01 Thread Niels Basjes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454571#comment-15454571
 ] 

Niels Basjes commented on HBASE-16536:
--

I do not understand how my patch could have affected the tests that failed. 
Please advise what I should change to make it work.
(Or is this a problem in master that I didn't cause?)

> Make the HBase minicluster easy to use for testing downstream applications.
> ---
>
> Key: HBASE-16536
> URL: https://issues.apache.org/jira/browse/HBASE-16536
> Project: HBase
>  Issue Type: Improvement
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: HBASE-16536-01.patch, HBASE-16536-02.patch
>
>
> In many applications I write, I use HBase to store information.
> A big problem is testing these applications.
> I have seen several situations where people have written tests that create
> tables in the development cluster and, due to firewalls and such, couldn't
> run those tests from Jenkins.
> A while ago I wrote the FilterTestingCluster class, which makes unit testing
> the client-side filters a lot easier. With this ticket I propose to make this
> more generic, so that user applications can easily incorporate it into their
> own unit tests without any major modifications to their application.
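
As an aside, a hedged sketch of the kind of downstream test this would enable,
written against the existing HBaseTestingUtility; the table name and column
family here are invented for the example:

{code}
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Table;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MyDownstreamAppIT {
  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    UTIL.startMiniCluster();   // in-process HBase, no dev cluster or firewall issues
  }

  @AfterClass
  public static void tearDown() throws Exception {
    UTIL.shutdownMiniCluster();
  }

  @Test
  public void testApplicationLogic() throws Exception {
    Table table = UTIL.createTable(TableName.valueOf("t1"), "cf");
    // exercise the application code against 'table' here
  }
}
{code}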



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454566#comment-15454566
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-16501 at 9/1/16 7:11 AM:


Just to show how the count was taken with and without the patch. So if you 
apply the sysocount.patch and run the test case 
testReverseScanner_StackOverflow, you can see this difference. It is not 
directly possible to measure the number of times we do the seek() and 
next(), I think.
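
For anyone reproducing this, the single test can be run the usual surefire
way (shown as an illustration):

{noformat}
mvn test -pl hbase-server -Dtest=TestHRegion#testReverseScanner_StackOverflow
{noformat}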


was (Author: ram_krish):
Just to show how the count was taken with and without patch.

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it. But the problem is this
> In seekToPReviousRow impl in case of a reverse scan, say we have rows 
> row1 to row2. We are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was 
> skipped due to mvcc we move to the previous row 'row1'. Now we read this 
> row1 and even if this does not match in mvcc we skip and again read 
> row2 and do the same. 
> Like this we keep doing til we come to row1 and this time we read til 
> row2 just to k now we have to skip it. The same problem happens in 
> Storefilescanner also and there we do lot of seek and next(). Better to solve 
> this case. 
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454579#comment-15454579
 ] 

ramkrishna.s.vasudevan commented on HBASE-16501:


bq. we make this many getNext() calls as part of seekToPreviousRow().
To be more precise, the getNext() call will make next() calls as part of its 
do/while loop, which accounts for the count mentioned above.
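
For context, that loop has roughly this shape (an illustrative sketch, not
the exact SegmentScanner source):

{code}
import java.util.Iterator;
import org.apache.hadoop.hbase.Cell;

// Each do/while iteration is one of the counted next() calls: cells whose
// sequence id is above the read point get skipped one by one.
final class SkipLoopSketch {
  static Cell getNext(Iterator<Cell> iter, long readPoint) {
    Cell cell;
    do {
      if (!iter.hasNext()) {
        return null;            // segment exhausted, nothing visible
      }
      cell = iter.next();       // one next() call per skipped cell
    } while (cell.getSequenceId() > readPoint);
    return cell;
  }
}
{code}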

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it. But the problem is this
> In seekToPReviousRow impl in case of a reverse scan, say we have rows 
> row1 to row2. We are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was 
> skipped due to mvcc we move to the previous row 'row1'. Now we read this 
> row1 and even if this does not match in mvcc we skip and again read 
> row2 and do the same. 
> Like this we keep doing til we come to row1 and this time we read til 
> row2 just to k now we have to skip it. The same problem happens in 
> Storefilescanner also and there we do lot of seek and next(). Better to solve 
> this case. 
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454597#comment-15454597
 ] 

Anoop Sam John commented on HBASE-16501:


So in this test case we have all the cells with seqId < readPnt, correct? 
Because otherwise there is a boolean to stop skipping from the next rows (in 
the case of the seek-back direction).

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it. But the problem is this
> In seekToPReviousRow impl in case of a reverse scan, say we have rows 
> row1 to row2. We are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was 
> skipped due to mvcc we move to the previous row 'row1'. Now we read this 
> row1 and even if this does not match in mvcc we skip and again read 
> row2 and do the same. 
> Like this we keep doing til we come to row1 and this time we read til 
> row2 just to k now we have to skip it. The same problem happens in 
> Storefilescanner also and there we do lot of seek and next(). Better to solve 
> this case. 
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15513) hbase.hregion.memstore.chunkpool.maxsize is 0.0 by default

2016-09-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454613#comment-15454613
 ] 

Anoop Sam John commented on HBASE-15513:


Ya, at least the tests we did using CMS and G1 show that the chunk pool still 
has value. With G1GC and without the chunk pool, we were able to get latency 
and GC pauses the same as in the pool case, BUT with much more heap size 
allocated for the RS. That said, the chunk pool was helping us achieve what 
we wanted with a lower heap size. Also worth mentioning that here we set the 
global memstore upper limit to only 43%, as we wanted to keep the IHOP for 
G1GC at something like 50% max.
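
For reference, the knobs discussed above, set to the values mentioned (a
sketch; the 0.5 chunk-pool size is an arbitrary example, not the value used
in these tests):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class MemstoreG1Sketch {
  static Configuration tunedConf() {
    Configuration conf = HBaseConfiguration.create();
    // global memstore upper limit of 43% of heap, kept under the ~50% G1 IHOP target
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.43f);
    // let the MSLAB chunk pool retain chunks (0.0 disables the pool entirely)
    conf.setFloat("hbase.hregion.memstore.chunkpool.maxsize", 0.5f);
    return conf;
  }
  // JVM side (hbase-env.sh): -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=50
}
{code}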

> hbase.hregion.memstore.chunkpool.maxsize is 0.0 by default
> --
>
> Key: HBASE-15513
> URL: https://issues.apache.org/jira/browse/HBASE-15513
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-15513-v1.patch
>
>
> That results in excessive MemStoreLAB chunk allocations because we cannot 
> reuse them. Not sure why it has been disabled by default. Maybe the code 
> has not been tested well?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15134) Add visibility into Flush and Compaction queues

2016-09-01 Thread Abhishek Singh Chouhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated HBASE-15134:
---
Attachment: HBASE-15134.patch

Initial patch that adds, per region, metrics for the number of flushes and 
compactions queued for that region.
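
A minimal sketch of the idea, with illustrative names rather than the
patch's own:

{code}
import java.util.concurrent.atomic.AtomicLong;

// Per-region counters, bumped when work is queued and released when it runs,
// so the metrics system can report queue depth for each region separately.
final class RegionQueueMetricsSketch {
  private final AtomicLong flushesQueued = new AtomicLong();
  private final AtomicLong compactionsQueued = new AtomicLong();

  void flushQueued()       { flushesQueued.incrementAndGet(); }
  void flushStarted()      { flushesQueued.decrementAndGet(); }
  void compactionQueued()  { compactionsQueued.incrementAndGet(); }
  void compactionStarted() { compactionsQueued.decrementAndGet(); }

  long getNumFlushesQueued()     { return flushesQueued.get(); }
  long getNumCompactionsQueued() { return compactionsQueued.get(); }
}
{code}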

> Add visibility into Flush and Compaction queues
> ---
>
> Key: HBASE-15134
> URL: https://issues.apache.org/jira/browse/HBASE-15134
> Project: HBase
>  Issue Type: New Feature
>Reporter: Elliott Clark
> Attachments: HBASE-15134.patch
>
>
> On busy spurts we can see regionservers build up large queues for
> compaction. It's really hard to tell if the server is queueing a lot of
> compactions for the same region, lots of compactions for lots of regions, or
> just falling behind.
> For flushes it's much the same. There can be flushes in the queue that aren't
> being run because of delayed flushes. There's no way to know from the metrics
> how many flushes there are for each region, how many are delayed, etc.
> We should either add more metrics around this (num per region, max per
> region, min per region) or add a UI page that has the list of compactions
> and flushes.
> Or both.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15134) Add visibility into Flush and Compaction queues

2016-09-01 Thread Abhishek Singh Chouhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated HBASE-15134:
---
Status: Patch Available  (was: Open)

Getting a QA run

> Add visibility into Flush and Compaction queues
> ---
>
> Key: HBASE-15134
> URL: https://issues.apache.org/jira/browse/HBASE-15134
> Project: HBase
>  Issue Type: New Feature
>Reporter: Elliott Clark
> Attachments: HBASE-15134.patch
>
>
> On busy spurts we can see regionservers build up large queues for
> compaction. It's really hard to tell if the server is queueing a lot of
> compactions for the same region, lots of compactions for lots of regions, or
> just falling behind.
> For flushes it's much the same. There can be flushes in the queue that aren't
> being run because of delayed flushes. There's no way to know from the metrics
> how many flushes there are for each region, how many are delayed, etc.
> We should either add more metrics around this (num per region, max per
> region, min per region) or add a UI page that has the list of compactions
> and flushes.
> Or both.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454737#comment-15454737
 ] 

ramkrishna.s.vasudevan commented on HBASE-16501:


Yes.
As said in my first comment:
bq. the var 'stopSkippingKVsIfNextRow' is for avoiding this unnecessary skip, 
I think, but the condition does not work when startKV itself is null.
This does not work in this case. Anyway, HBASE-15871 will help to avoid 
memstore scanners if the read pt is less than the last flushed seqId, because 
in that case all the cells are in the file.
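
The HBASE-15871 idea referenced above, as a hedged sketch; the names here are
hypothetical:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.regionserver.KeyValueScanner;

final class ScannerSelectionSketch {
  // If the read point is below the last flushed sequence id, every visible
  // cell is already in a store file, so the memstore scanners add nothing.
  static List<KeyValueScanner> selectScanners(long readPoint, long lastFlushedSeqId,
      List<KeyValueScanner> storeFileScanners, List<KeyValueScanner> memStoreScanners) {
    List<KeyValueScanner> scanners = new ArrayList<>(storeFileScanners);
    if (readPoint >= lastFlushedSeqId) {
      scanners.addAll(memStoreScanners);  // memstore may still hold visible cells
    }
    return scanners;
  }
}
{code}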

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it, but the problem is this:
> in the seekToPreviousRow() impl, in the case of a reverse scan, say we have
> rows row1 to row2 and we are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was
> skipped due to mvcc; we move to the previous row 'row1'. Now we read
> row1 and even if this does not match in mvcc we skip it and again read
> row2 and do the same.
> Like this we keep going until we come to row1, and this time we read until
> row2 just to know we have to skip it. The same problem happens in
> StoreFileScanner also, where we do a lot of seek() and next() calls. Better
> to solve this case.
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454740#comment-15454740
 ] 

ramkrishna.s.vasudevan commented on HBASE-16501:


Hence this patch too tries to optimize seekToPreviousRow() only when 
'stopSkippingKVsIfNextRow' is set to true.

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it, but the problem is this:
> in the seekToPreviousRow() impl, in the case of a reverse scan, say we have
> rows row1 to row2 and we are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was
> skipped due to mvcc; we move to the previous row 'row1'. Now we read
> row1 and even if this does not match in mvcc we skip it and again read
> row2 and do the same.
> Like this we keep going until we come to row1, and this time we read until
> row2 just to know we have to skip it. The same problem happens in
> StoreFileScanner also, where we do a lot of seek() and next() calls. Better
> to solve this case.
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16447) Replication by namespaces config in peer

2016-09-01 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16447:
---
Release Note: 
Support replication by namespaces config.
1. Configuring a namespace in a peer means that all tables in this namespace 
will be replicated.
2. If the peer didn't configure any namespaces, then the table-cfs config 
alone decides which tables' edits can be replicated.
3. If the peer configures a namespace, then the peer can't configure any 
table of this namespace. If the peer configures a table, then the peer can't 
configure this table's namespace either.

  was:
Support replication by namespaces config.
1. If the peer didn't config any namespaces, then table-cfs config only decide 
which table's edit can be replicated.
2. Config a namespace in peer means that all tables in this namespace will be 
replicated.
3. If the peer config a namespace, then the peer can't config any table of this 
namespace. If the peer config a table, then the peer can't config this table's 
namespace too.
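
For illustration, configuring a peer under the new rules might look like this;
ReplicationPeerConfig#setNamespaces is the API this change introduces, so the
exact method name is an assumption:

{code}
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

final class NamespacePeerSketch {
  static ReplicationPeerConfig nsPeer() {
    Set<String> namespaces = new HashSet<>();
    namespaces.add("ns1");                        // replicate every table in ns1

    ReplicationPeerConfig peerConfig = new ReplicationPeerConfig();
    peerConfig.setClusterKey("zk1:2181:/hbase");  // example slave cluster key
    peerConfig.setNamespaces(namespaces);         // proposed by this JIRA
    // Per rule 3 above, do NOT also add a table from ns1 to the table-cfs config.
    return peerConfig;
  }
}
{code}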


> Replication by namespaces config in peer
> 
>
> Key: HBASE-16447
> URL: https://issues.apache.org/jira/browse/HBASE-16447
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-16447-v1.patch
>
>
> Now we only config table-cfs in a peer. But in our production cluster, there
> are a dozen namespaces and every namespace has dozens of tables. It is
> complicated to config all the table-cfs in the peer. For some namespaces, we
> need to replicate all tables to the other slave cluster. It will be easier to
> config if we support replication by namespace. Suggestions and discussions
> are welcome.
> Review board: https://reviews.apache.org/r/51521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15134) Add visibility into Flush and Compaction queues

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454813#comment-15454813
 ] 

Hadoop QA commented on HBASE-15134:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 13 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 37s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s 
{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 38s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 1s {c

[jira] [Updated] (HBASE-16447) Replication by namespaces config in peer

2016-09-01 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16447:
---
Attachment: HBASE-16447-v2.patch

Uploaded a v2 patch, fixed per the review comments. The TestReplicationShell ut passed.

> Replication by namespaces config in peer
> 
>
> Key: HBASE-16447
> URL: https://issues.apache.org/jira/browse/HBASE-16447
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-16447-v1.patch, HBASE-16447-v2.patch
>
>
> Now we only config table-cfs in a peer. But in our production cluster, there
> are a dozen namespaces and every namespace has dozens of tables. It is
> complicated to config all the table-cfs in the peer. For some namespaces, we
> need to replicate all tables to the other slave cluster. It will be easier to
> config if we support replication by namespace. Suggestions and discussions
> are welcome.
> Review board: https://reviews.apache.org/r/51521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454879#comment-15454879
 ] 

Anoop Sam John commented on HBASE-16501:


When the SegmentScanner is created the currentCell is initialized. So in this 
case, it will go through all the cells and see that all are above the readPnt. 
So there is no point in keeping this SegmentScanner, as nothing will come out 
of it. Would it be better to close it off right then, rather than adding all 
this logic?
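
The alternative being suggested, as a sketch with hypothetical names:

{code}
import java.util.List;
import org.apache.hadoop.hbase.regionserver.KeyValueScanner;

final class EagerCloseSketch {
  // If the initial seek already finds nothing visible at this read point,
  // close the scanner up front instead of carrying it through the scan logic.
  static void maybeAdd(KeyValueScanner scanner, List<KeyValueScanner> scanners) {
    if (scanner.peek() == null) {   // every cell is above the read point
      scanner.close();              // nothing will ever come out of this scanner
    } else {
      scanners.add(scanner);
    }
  }
}
{code}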

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it, but the problem is this:
> in the seekToPreviousRow() impl, in the case of a reverse scan, say we have
> rows row1 to row2 and we are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was
> skipped due to mvcc; we move to the previous row 'row1'. Now we read
> row1 and even if this does not match in mvcc we skip it and again read
> row2 and do the same.
> Like this we keep going until we come to row1, and this time we read until
> row2 just to know we have to skip it. The same problem happens in
> StoreFileScanner also, where we do a lot of seek() and next() calls. Better
> to solve this case.
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16447) Replication by namespaces config in peer

2016-09-01 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16447:
---
Attachment: (was: HBASE-16447-v2.patch)

> Replication by namespaces config in peer
> 
>
> Key: HBASE-16447
> URL: https://issues.apache.org/jira/browse/HBASE-16447
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-16447-v1.patch, HBASE-16447-v2.patch
>
>
> Now we only config table-cfs in a peer. But in our production cluster, there
> are a dozen namespaces and every namespace has dozens of tables. It is
> complicated to config all the table-cfs in the peer. For some namespaces, we
> need to replicate all tables to the other slave cluster. It will be easier to
> config if we support replication by namespace. Suggestions and discussions
> are welcome.
> Review board: https://reviews.apache.org/r/51521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16447) Replication by namespaces config in peer

2016-09-01 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16447:
---
Attachment: HBASE-16447-v2.patch

> Replication by namespaces config in peer
> 
>
> Key: HBASE-16447
> URL: https://issues.apache.org/jira/browse/HBASE-16447
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-16447-v1.patch, HBASE-16447-v2.patch
>
>
> Now we only config table-cfs in a peer. But in our production cluster, there
> are a dozen namespaces and every namespace has dozens of tables. It is
> complicated to config all the table-cfs in the peer. For some namespaces, we
> need to replicate all tables to the other slave cluster. It will be easier to
> config if we support replication by namespace. Suggestions and discussions
> are welcome.
> Review board: https://reviews.apache.org/r/51521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454893#comment-15454893
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-16501 at 9/1/16 9:33 AM:


bq. When the SegmentScanner is created the currentCell is initialized. So in 
this case, it will go through all the cells and see that all are above the readPnt.

Yes, that will be done by the other JIRA, HBASE-15871. When we know the read 
point is already lesser than the flushed pt, then there is no point in adding 
the memstore scanner itself.


was (Author: ram_krish):
bq.When the SegmentScanner is created the currentCell is initialized. So in 
this case, it will go throw all the cells and see all are above readPnt.

Yes that will be done by the other JIRA HBASE-15871. When we know the thread 
point is already than the flushed pt then there is no point in adding the 
memstore scanner itself.

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it, but the problem is this:
> in the seekToPreviousRow() impl, in the case of a reverse scan, say we have
> rows row1 to row2 and we are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was
> skipped due to mvcc; we move to the previous row 'row1'. Now we read
> row1 and even if this does not match in mvcc we skip it and again read
> row2 and do the same.
> Like this we keep going until we come to row1, and this time we read until
> row2 just to know we have to skip it. The same problem happens in
> StoreFileScanner also, where we do a lot of seek() and next() calls. Better
> to solve this case.
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454893#comment-15454893
 ] 

ramkrishna.s.vasudevan commented on HBASE-16501:


bq. When the SegmentScanner is created the currentCell is initialized. So in 
this case, it will go through all the cells and see that all are above the readPnt.

Yes, that will be done by the other JIRA, HBASE-15871. When we know the read 
point is already lesser than the flushed pt, then there is no point in adding 
the memstore scanner itself.

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it, but the problem is this:
> in the seekToPreviousRow() impl, in the case of a reverse scan, say we have
> rows row1 to row2 and we are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was
> skipped due to mvcc; we move to the previous row 'row1'. Now we read
> row1 and even if this does not match in mvcc we skip it and again read
> row2 and do the same.
> Like this we keep going until we come to row1, and this time we read until
> row2 just to know we have to skip it. The same problem happens in
> StoreFileScanner also, where we do a lot of seek() and next() calls. Better
> to solve this case.
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15134) Add visibility into Flush and Compaction queues

2016-09-01 Thread Abhishek Singh Chouhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated HBASE-15134:
---
Attachment: HBASE-15134.patch

Fixing the heap test and checkstyle.

> Add visibility into Flush and Compaction queues
> ---
>
> Key: HBASE-15134
> URL: https://issues.apache.org/jira/browse/HBASE-15134
> Project: HBase
>  Issue Type: New Feature
>Reporter: Elliott Clark
> Attachments: HBASE-15134.patch, HBASE-15134.patch
>
>
> On busy spurts we can see regionservers build up large queues for
> compaction. It's really hard to tell if the server is queueing a lot of
> compactions for the same region, lots of compactions for lots of regions, or
> just falling behind.
> For flushes it's much the same. There can be flushes in the queue that aren't
> being run because of delayed flushes. There's no way to know from the metrics
> how many flushes there are for each region, how many are delayed, etc.
> We should either add more metrics around this (num per region, max per
> region, min per region) or add a UI page that has the list of compactions
> and flushes.
> Or both.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454904#comment-15454904
 ] 

Hadoop QA commented on HBASE-16501:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 51s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 15s 
{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 33s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 138m 4s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Unread field:SegmentScanner.java:[line 60] |
| Failed junit tests | hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-09-01 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826570/HBASE-16501_sysocount.patch
 |
| JIRA Issue | HBASE-16501 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbasea

[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454952#comment-15454952
 ] 

Anoop Sam John commented on HBASE-16501:


In that case I don't think we need this jira and fix at all; it is kind of a 
duplicate. When we do that jira, this case will never come up. The already 
present boolean-based check will make sure we won't cross the row boundary.

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it, but the problem is this:
> in the seekToPreviousRow() impl, in the case of a reverse scan, say we have
> rows row1 to row2 and we are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was
> skipped due to mvcc; we move to the previous row 'row1'. Now we read
> row1 and even if this does not match in mvcc we skip it and again read
> row2 and do the same.
> Like this we keep going until we come to row1, and this time we read until
> row2 just to know we have to skip it. The same problem happens in
> StoreFileScanner also, where we do a lot of seek() and next() calls. Better
> to solve this case.
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454988#comment-15454988
 ] 

ramkrishna.s.vasudevan commented on HBASE-16501:


Yes, I agree with that. If that is fixed then this is not needed.

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it, but the problem is this:
> in the seekToPreviousRow() impl, in the case of a reverse scan, say we have
> rows row1 to row2 and we are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was
> skipped due to mvcc; we move to the previous row 'row1'. Now we read
> row1 and even if this does not match in mvcc we skip it and again read
> row2 and do the same.
> Like this we keep going until we come to row1, and this time we read until
> row2 just to know we have to skip it. The same problem happens in
> StoreFileScanner also, where we do a lot of seek() and next() calls. Better
> to solve this case.
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16501) seekToPrevoiusRow() can be optimized

2016-09-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455002#comment-15455002
 ] 

ramkrishna.s.vasudevan commented on HBASE-16501:


Also, HBASE-15871 is much more involved, whereas this is simple and 
straightforward. When I raised this JIRA I was not aware of that; only after 
I solved the problem and investigated did I find that it already tries to 
solve the nextRow case, but this problem occurs only in such unique cases 
where the readpt is lesser than all the cells.
In fact, in StoreFileScanner this issue is not there, since we do get a cell 
before we decide whether to skip or not. Hence I did not apply the fix to 
StoreFileScanner.

> seekToPrevoiusRow() can be optimized
> 
>
> Key: HBASE-16501
> URL: https://issues.apache.org/jira/browse/HBASE-16501
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16501.patch, HBASE-16501_1.patch, 
> HBASE-16501_sysocount.patch
>
>
> Need to check the details and see how to implement it, but the problem is this:
> in the seekToPreviousRow() impl, in the case of a reverse scan, say we have
> rows row1 to row2 and we are doing a reverse scan.
> The scan starts from row2 and we read all columns. Assume this row was
> skipped due to mvcc; we move to the previous row 'row1'. Now we read
> row1 and even if this does not match in mvcc we skip it and again read
> row2 and do the same.
> Like this we keep going until we come to row1, and this time we read until
> row2 just to know we have to skip it. The same problem happens in
> StoreFileScanner also, where we do a lot of seek() and next() calls. Better
> to solve this case.
> [~zjushch] - FYI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16447) Replication by namespaces config in peer

2016-09-01 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16447:
---
Release Note: 
Support replication by namespaces config in a peer.
1. Setting a namespace in the peer config means that all tables in this 
namespace will be replicated.
2. If the namespaces config is null, then the table-cfs config decides which 
tables' edits can be replicated. If the table-cfs config is null, then the 
namespaces config decides which tables' edits can be replicated.
3. If you have already set a namespace in the peer config, then you can't add 
any table of this namespace to the peer config. If you have already set a 
table in the peer config, then you can't add this table's namespace to the 
peer config.

  was:
Support replication by namespaces config.
1. Config a namespace in peer means that all tables in this namespace will be 
replicated.
2. If the peer didn't config any namespaces, then table-cfs config only decide 
which table's edit can be replicated.
3. If the peer config a namespace, then the peer can't config any table of this 
namespace. If the peer config a table, then the peer can't config this table's 
namespace too.


> Replication by namespaces config in peer
> 
>
> Key: HBASE-16447
> URL: https://issues.apache.org/jira/browse/HBASE-16447
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-16447-v1.patch
>
>
> Now we only config table-cfs in a peer. But in our production cluster, there
> are a dozen namespaces and every namespace has dozens of tables. It is
> complicated to config all the table-cfs in the peer. For some namespaces, we
> need to replicate all tables to the other slave cluster. It will be easier to
> config if we support replication by namespace. Suggestions and discussions
> are welcome.
> Review board: https://reviews.apache.org/r/51521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16447) Replication by namespaces config in peer

2016-09-01 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16447:
---
Attachment: (was: HBASE-16447-v2.patch)

> Replication by namespaces config in peer
> 
>
> Key: HBASE-16447
> URL: https://issues.apache.org/jira/browse/HBASE-16447
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-16447-v1.patch
>
>
> Now we only config table-cfs in a peer. But in our production cluster, there
> are a dozen namespaces and every namespace has dozens of tables. It is
> complicated to config all the table-cfs in the peer. For some namespaces, we
> need to replicate all tables to the other slave cluster. It will be easier to
> config if we support replication by namespace. Suggestions and discussions
> are welcome.
> Review board: https://reviews.apache.org/r/51521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16447) Replication by namespaces config in peer

2016-09-01 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16447:
---
Attachment: HBASE-16447-v2.patch

Updated v2; updated some comments.

> Replication by namespaces config in peer
> 
>
> Key: HBASE-16447
> URL: https://issues.apache.org/jira/browse/HBASE-16447
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-16447-v1.patch, HBASE-16447-v2.patch
>
>
> Now we only config table-cfs in a peer. But in our production cluster, there
> are a dozen namespaces and every namespace has dozens of tables. It is
> complicated to config all the table-cfs in the peer. For some namespaces, we
> need to replicate all tables to the other slave cluster. It will be easier to
> config if we support replication by namespace. Suggestions and discussions
> are welcome.
> Review board: https://reviews.apache.org/r/51521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16541) Avoid unnecessary cell copy in Result.compareResults

2016-09-01 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-16541:
-

 Summary: Avoid unnecessary cell copy in Result.compareResults
 Key: HBASE-16541
 URL: https://issues.apache.org/jira/browse/HBASE-16541
 Project: HBase
  Issue Type: Improvement
Reporter: ChiaPing Tsai
Priority: Trivial


{code:title=Bar.java|borderStyle=solid}
// Bytes.equals(a, b) should be replaced by Bytes.equals(a, off, len, b, off, len);
  public static void compareResults(Result res1, Result res2)
  throws Exception {
...
Cell[] ourKVs = res1.rawCells();
Cell[] replicatedKVs = res2.rawCells();
for (int i = 0; i < res1.size(); i++) {
  if (!ourKVs[i].equals(replicatedKVs[i]) ||
  !Bytes.equals(CellUtil.cloneValue(ourKVs[i]), 
CellUtil.cloneValue(replicatedKVs[i]))) {
throw new Exception("This result was different: "
+ res1.toString() + " compared to " + res2.toString());
  }
}
  }
{code}
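
A copy-free check along the suggested lines (a sketch, not necessarily the
committed fix):

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;

final class ValueCompareSketch {
  // Compare value bytes in place instead of cloning both values first.
  static boolean sameValue(Cell a, Cell b) {
    // CellUtil.matchingValue does the offset/length comparison internally;
    // spelled out, it is equivalent to:
    //   Bytes.equals(a.getValueArray(), a.getValueOffset(), a.getValueLength(),
    //       b.getValueArray(), b.getValueOffset(), b.getValueLength());
    return CellUtil.matchingValue(a, b);
  }
}
{code}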



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16541) Avoid unnecessary cell copy in Result.compareResults

2016-09-01 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455132#comment-15455132
 ] 

ChiaPing Tsai commented on HBASE-16541:
---

Result.compareResults is used by VerifyReplication, which compares the data 
from a local table with a remote one.

There is a lot of unnecessary copying if the tables are large.

> Avoid unnecessary cell copy in Result.compareResults
> 
>
> Key: HBASE-16541
> URL: https://issues.apache.org/jira/browse/HBASE-16541
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Trivial
>
> {code:title=Bar.java|borderStyle=solid}
> // Bytes.equals(a, b) should be replaced by Bytes.equals(a, off, len, b, off, len);
>   public static void compareResults(Result res1, Result res2)
>   throws Exception {
> ...
> Cell[] ourKVs = res1.rawCells();
> Cell[] replicatedKVs = res2.rawCells();
> for (int i = 0; i < res1.size(); i++) {
>   if (!ourKVs[i].equals(replicatedKVs[i]) ||
>   !Bytes.equals(CellUtil.cloneValue(ourKVs[i]), 
> CellUtil.cloneValue(replicatedKVs[i]))) {
> throw new Exception("This result was different: "
> + res1.toString() + " compared to " + res2.toString());
>   }
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16498) NPE when Scan's stopRow is set NULL

2016-09-01 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-16498:
-
Attachment: HBASE-16498-V3.patch

> NPE when Scan's stopRow is set NULL
> ---
>
> Key: HBASE-16498
> URL: https://issues.apache.org/jira/browse/HBASE-16498
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Attachments: HBASE-16498-V2.patch, HBASE-16498-V3.patch, 
> HBASE-16498.patch
>
>
> During a scan operation we validate whether this is the last region of the
> table; if not, records will be retrieved from the next scanner. If the stop
> row is set to null, an NPE will be thrown while validating the stop row
> against the region end key.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.checkScanStopRow(ClientScanner.java:217)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:266)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:237)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:537)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:363)
>   at 
> org.apache.hadoop.hbase.client.ClientSimpleScanner.next(ClientSimpleScanner.java:50)
>   at 
> org.apache.hadoop.hbase.client.AbstractClientScanner.next(AbstractClientScanner.java:70)
>   at 
> org.apache.hadoop.hbase.client.TestAdmin2.testScanWithSplitKeysAndNullStartEndRow(TestAdmin2.java:803)
> {noformat}
> We should return an empty byte array when the start/end row is set to NULL.
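
The proposed fix, as a minimal sketch (where exactly the guard goes in the
client code is left open here):

{code}
import org.apache.hadoop.hbase.HConstants;

final class StopRowGuardSketch {
  // Normalize a null start/stop row to the empty byte array so the
  // region-boundary comparison never dereferences null.
  static byte[] normalize(byte[] row) {
    return row == null ? HConstants.EMPTY_BYTE_ARRAY : row;
  }
}
{code}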



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16498) NPE when Scan's stopRow is set NULL

2016-09-01 Thread Pankaj Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455151#comment-15455151
 ] 

Pankaj Kumar commented on HBASE-16498:
--

Attached the V3 patch. Thanks for the reviews.

> NPE when Scan's stopRow is set NULL
> ---
>
> Key: HBASE-16498
> URL: https://issues.apache.org/jira/browse/HBASE-16498
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Attachments: HBASE-16498-V2.patch, HBASE-16498-V3.patch, 
> HBASE-16498.patch
>
>
> During a scan operation we validate whether this is the last region of the
> table; if not, records will be retrieved from the next scanner. If the stop
> row is set to null, an NPE will be thrown while validating the stop row
> against the region end key.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.checkScanStopRow(ClientScanner.java:217)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:266)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:237)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:537)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:363)
>   at 
> org.apache.hadoop.hbase.client.ClientSimpleScanner.next(ClientSimpleScanner.java:50)
>   at 
> org.apache.hadoop.hbase.client.AbstractClientScanner.next(AbstractClientScanner.java:70)
>   at 
> org.apache.hadoop.hbase.client.TestAdmin2.testScanWithSplitKeysAndNullStartEndRow(TestAdmin2.java:803)
> {noformat}
> We should return an empty byte array when the start/end row is set to NULL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16399) Provide an API to get list of failed regions and servername in Canary

2016-09-01 Thread Vishal Khandelwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455178#comment-15455178
 ] 

Vishal Khandelwal commented on HBASE-16399:
---

This message is not required; I kept it in for testing. We are already logging 
the error when the exception is caught:
{code}
Updating Write Failure List from region:%s on region server
{code}

For this code I had earlier thought of moving it into main, but it is specific 
to the region scan. I think you are right that we don't need it: the error is 
already printed in the catch block, and the list is already exposed so the 
caller can consume it. I will remove it; I had mainly added it for my own 
testing.

{code}
  Map<String, String> readFailures = sink.getReadFailures();
+  if (readFailures != null && readFailures.size() > 0) {
+    LOG.info("=== Read Canary Failure Summary ===");
+    LOG.info("Region \t Server Name");
+    for (Map.Entry<String, String> e : readFailures.entrySet()) {
+      LOG.error(e.getKey() + "\t" + e.getValue());
+    }
+  }
+
+  Map<String, String> writeFailures = sink.getWriteFailures();
+  if (writeFailures != null && writeFailures.size() > 0) {
+    LOG.info("=== Write Canary Failure Summary ===");
+    LOG.info("Region \t Server Name");
+    for (Map.Entry<String, String> e : writeFailures.entrySet()) {
+      LOG.error(e.getKey() + "\t" + e.getValue());
+    }
+  }

{code}

> Provide an API to get list of failed regions and servername in Canary
> -
>
> Key: HBASE-16399
> URL: https://issues.apache.org/jira/browse/HBASE-16399
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 1.3.1, 0.98.21
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
> Fix For: 1.3.1, 0.98.23
>
> Attachments: HBASE-16399.0.98.00.patch, HBASE-16399.0.98.01.patch, 
> HBASE-16399.00.patch, HBASE-16399.01.patch, HBASE-16399.02.patch, 
> HBASE-16399.branch-1.00.patch, HBASE-16399.branch-1.01.patch, 
> HBASE-16399.branch-1.02.patch
>
>
> At present the HBase Canary tool only prints failures as part of its logs. It 
> does not provide an API to get the list or a summary, so the caller cannot 
> take action on the failed host. This JIRA adds an API so the caller can get 
> read or write canary failures.
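
A sketch of the accessor shape this suggests; the Map<String, String> typing 
(region name -> server name) and the interface name are assumptions, not the 
patch's exact signatures:

{code}
import java.util.Map;

// Hypothetical sink surface: expose accumulated canary failures so a caller
// can act on them programmatically instead of scraping logs.
public interface FailureReportingSink {
  Map<String, String> getReadFailures();   // region name -> server name
  Map<String, String> getWriteFailures();  // region name -> server name
}
{code}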



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15134) Add visibility into Flush and Compaction queues

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455188#comment-15455188
 ] 

Hadoop QA commented on HBASE-15134:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 39s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 46s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s 
{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 8s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
50s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 145m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush |
\\
\\
|| Subsystem || Report/Notes 

[jira] [Updated] (HBASE-16399) Provide an API to get list of failed regions and servername in Canary

2016-09-01 Thread Vishal Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Khandelwal updated HBASE-16399:
--
Attachment: HBASE-16399.03.patch
HBASE-16399.0.98.02.patch

> Provide an API to get list of failed regions and servername in Canary
> -
>
> Key: HBASE-16399
> URL: https://issues.apache.org/jira/browse/HBASE-16399
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 1.3.1, 0.98.21
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
> Fix For: 1.3.1, 0.98.23
>
> Attachments: HBASE-16399.0.98.00.patch, HBASE-16399.0.98.01.patch, 
> HBASE-16399.0.98.02.patch, HBASE-16399.00.patch, HBASE-16399.01.patch, 
> HBASE-16399.02.patch, HBASE-16399.03.patch, HBASE-16399.branch-1.00.patch, 
> HBASE-16399.branch-1.01.patch, HBASE-16399.branch-1.02.patch
>
>
> At present the HBase Canary tool only prints failures as part of its logs. It 
> does not provide an API to get the list or a summary, so the caller cannot 
> take action on the failed host. This JIRA adds an API so the caller can get 
> read or write canary failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16399) Provide an API to get list of failed regions and servername in Canary

2016-09-01 Thread Vishal Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Khandelwal updated HBASE-16399:
--
Attachment: (was: HBASE-16399.0.98.02.patch)

> Provide an API to get list of failed regions and servername in Canary
> -
>
> Key: HBASE-16399
> URL: https://issues.apache.org/jira/browse/HBASE-16399
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 1.3.1, 0.98.21
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
> Fix For: 1.3.1, 0.98.23
>
> Attachments: HBASE-16399.0.98.00.patch, HBASE-16399.0.98.01.patch, 
> HBASE-16399.00.patch, HBASE-16399.01.patch, HBASE-16399.02.patch, 
> HBASE-16399.branch-1.00.patch, HBASE-16399.branch-1.01.patch, 
> HBASE-16399.branch-1.02.patch
>
>
> At present the HBase Canary tool only prints failures as part of its logs. It 
> does not provide an API to get the list or a summary, so the caller cannot 
> take action on the failed host. This JIRA adds an API so the caller can get 
> read or write canary failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16399) Provide an API to get list of failed regions and servername in Canary

2016-09-01 Thread Vishal Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Khandelwal updated HBASE-16399:
--
Attachment: (was: HBASE-16399.03.patch)

> Provide an API to get list of failed regions and servername in Canary
> -
>
> Key: HBASE-16399
> URL: https://issues.apache.org/jira/browse/HBASE-16399
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 1.3.1, 0.98.21
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
> Fix For: 1.3.1, 0.98.23
>
> Attachments: HBASE-16399.0.98.00.patch, HBASE-16399.0.98.01.patch, 
> HBASE-16399.00.patch, HBASE-16399.01.patch, HBASE-16399.02.patch, 
> HBASE-16399.branch-1.00.patch, HBASE-16399.branch-1.01.patch, 
> HBASE-16399.branch-1.02.patch
>
>
> At present the HBase Canary tool only prints failures as part of its logs. It 
> does not provide an API to get the list or a summary, so the caller cannot 
> take action on the failed host. This JIRA adds an API so the caller can get 
> read or write canary failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16447) Replication by namespaces config in peer

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455251#comment-15455251
 ] 

Hadoop QA commented on HBASE-16447:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red} 0m 14s 
{color} | {color:red} The patch generated 59 new + 358 unchanged - 13 fixed = 
417 total (was 371) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red} 0m 9s 
{color} | {color:red} The patch generated 48 new + 257 unchanged - 0 fixed = 
305 total (was 257) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 102m 4s 
{col

[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Loknath Priyatham Teja Singamsetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Loknath Priyatham Teja Singamsetty  updated HBASE-16375:

Attachment: HBASE-16375_0.98_and_above_with_tests.patch

> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix, for the Phoenix e2e test to succeed, the lines below 
> need to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf 
> properties to the HBaseTestingUtility conf object.
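
A small sketch of that "copy everything relevant" idea; copyIfSet is a 
hypothetical helper, not part of HBaseTestingUtility:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobConf;

public final class ConfPropagation {
  // Copy each listed key from the mini cluster's jobConf onto the testing
  // utility's conf, but only when the mini cluster actually set a value.
  static void copyIfSet(Configuration dst, JobConf src, String... keys) {
    for (String key : keys) {
      String value = src.get(key);
      if (value != null) {
        dst.set(key, value);
      }
    }
  }
}
{code}

With such a helper, the two webapp addresses above collapse into a single 
call: copyIfSet(conf, jobConf, "yarn.resourcemanager.webapp.address", 
"mapreduce.jobhistory.webapp.address").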



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16502) Reduce garbage in BufferedDataBlockEncoder

2016-09-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455281#comment-15455281
 ] 

Hudson commented on HBASE-16502:


FAILURE: Integrated in Jenkins build HBase-0.98-matrix #395 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/395/])
HBASE-16502 Reduce garbage in BufferedDataBlockEncoder (binlijin) (apurtell: 
rev b861c331f05df8a585d03959dc471e9e8b045047)
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
HBASE-16502 Reduce garbage in BufferedDataBlockEncoder - addendum adopts 
(apurtell: rev e792d570a98374a9829522b22d63b90e25c89104)
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java


> Reduce garbage in BufferedDataBlockEncoder
> --
>
> Key: HBASE-16502
> URL: https://issues.apache.org/jira/browse/HBASE-16502
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0, 1.4.0, 0.98.22
>
> Attachments: HBASE-16502-master.patch, 
> HBASE-16502-master_addnum_v1.patch, HBASE-16502-master_addnum_v2.patch, 
> HBASE-16502-master_v2.patch, HBASE-16502.branch-1.addnumv1.patch, 
> HBASE-16502.branch-1.v1.patch
>
>
> In BufferedDataBlockEncoder.SeekerState, every read allocates a new 
> tagsBuffer for compressTags. This is unnecessary when there are no tags or 
> when compressTags=false, so we can avoid the new byte[] and reduce garbage.
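
A minimal sketch of that lazy-allocation pattern, assuming illustrative field 
and method names (this is not the committed patch):

{code}
import org.apache.hadoop.hbase.HConstants;

public class LazyTagsBuffer {
  // Start with the shared empty array so tag-free reads allocate nothing;
  // only grow the buffer when tags are present and compression is enabled.
  private byte[] tagsBuffer = HConstants.EMPTY_BYTE_ARRAY;

  void ensureTagsBuffer(int tagsLength, boolean compressTags) {
    if (tagsLength == 0 || !compressTags) {
      return; // nothing to decompress for this cell
    }
    if (tagsBuffer.length < tagsLength) {
      tagsBuffer = new byte[tagsLength];
    }
  }
}
{code}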



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Loknath Priyatham Teja Singamsetty (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455277#comment-15455277
 ] 

Loknath Priyatham Teja Singamsetty  commented on HBASE-16375:
-

Added the test and attached the patch. Please review.

> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix, for the Phoenix e2e test to succeed, the lines below 
> need to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf 
> properties to the HBaseTestingUtility conf object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455302#comment-15455302
 ] 

Hadoop QA commented on HBASE-16375:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-16375 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826614/HBASE-16375_0.98_and_above_with_tests.patch
 |
| JIRA Issue | HBASE-16375 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3383/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix, for the Phoenix e2e test to succeed, the lines below 
> need to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf 
> properties to the HBaseTestingUtility conf object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15134) Add visibility into Flush and Compaction queues

2016-09-01 Thread Abhishek Singh Chouhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455307#comment-15455307
 ] 

Abhishek Singh Chouhan commented on HBASE-15134:


[~eclark] [~anoopsamjohn] Does this patch look to be heading in the right 
direction?
I thought about adding a metric to track the maximum as well, but the only 
possibilities seemed to be either to put it in 
"org.apache.hadoop.hbase.regionserver.MetricsRegionWrapperImpl.HRegionMetricsWrapperRunnable",
 in which case it might not report correctly, since it runs at a 45-second 
interval, by which time we might have hit the max queue count and cleared it 
off too, or to check against a previous max every time a request is queued, 
but that would also have to be synchronized (not very sure that is a good 
idea). Any thoughts?
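
One lock-free alternative to synchronizing on every enqueue, sketched with 
hypothetical names: track the running maximum with a CAS loop, which is cheap 
enough to run once per queued request.

{code}
import java.util.concurrent.atomic.AtomicInteger;

public class QueueMaxTracker {
  private final AtomicInteger maxQueueSize = new AtomicInteger();

  // Called whenever a flush/compaction request is queued; raises the recorded
  // maximum without holding a lock.
  void onRequestQueued(int currentQueueSize) {
    int prev = maxQueueSize.get();
    while (currentQueueSize > prev
        && !maxQueueSize.compareAndSet(prev, currentQueueSize)) {
      prev = maxQueueSize.get();
    }
  }

  // A periodic reporter (e.g. the 45s metrics runnable) reads and clears the
  // max, so spikes that occur between runs are still captured.
  int getAndResetMax() {
    return maxQueueSize.getAndSet(0);
  }
}
{code}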

> Add visibility into Flush and Compaction queues
> ---
>
> Key: HBASE-15134
> URL: https://issues.apache.org/jira/browse/HBASE-15134
> Project: HBase
>  Issue Type: New Feature
>Reporter: Elliott Clark
> Attachments: HBASE-15134.patch, HBASE-15134.patch
>
>
> During busy spurts we can see regionservers build up large queues for 
> compaction. It's really hard to tell if the server is queueing a lot of 
> compactions for the same region, lots of compactions for lots of regions, or 
> just falling behind.
> For flushes it's much the same. There can be flushes in the queue that aren't 
> being run because of delayed flushes. There's no way to know from the metrics 
> how many flushes are for each region, how many are delayed, etc.
> We should add either more metrics around this ( num per region, max per 
> region, min per region ) or add on a UI page that has the list of compactions 
> and flushes.
> Or both.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16498) NPE when Scan's stopRow is set NULL

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455313#comment-15455313
 ] 

Hadoop QA commented on HBASE-16498:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 45s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 45s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
12s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-09-01 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826610/HBASE-16498-V3.patch |
| JIRA Issue | HBASE-16498 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 347de1e3a74c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/

[jira] [Updated] (HBASE-16399) Provide an API to get list of failed regions and servername in Canary

2016-09-01 Thread Vishal Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Khandelwal updated HBASE-16399:
--
Attachment: HBASE-16399.branch-1.03.patch
HBASE-16399.03.patch
HBASE-16399.0.98.02.patch

Incorporating review comments and removing extra log messages.

> Provide an API to get list of failed regions and servername in Canary
> -
>
> Key: HBASE-16399
> URL: https://issues.apache.org/jira/browse/HBASE-16399
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 1.3.1, 0.98.21
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
> Fix For: 1.3.1, 0.98.23
>
> Attachments: HBASE-16399.0.98.00.patch, HBASE-16399.0.98.01.patch, 
> HBASE-16399.0.98.02.patch, HBASE-16399.00.patch, HBASE-16399.01.patch, 
> HBASE-16399.02.patch, HBASE-16399.03.patch, HBASE-16399.branch-1.00.patch, 
> HBASE-16399.branch-1.01.patch, HBASE-16399.branch-1.02.patch, 
> HBASE-16399.branch-1.03.patch
>
>
> At present the HBase Canary tool only prints failures as part of its logs. It 
> does not provide an API to get the list or a summary, so the caller cannot 
> take action on the failed host. This JIRA adds an API so the caller can get 
> read or write canary failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16447) Replication by namespaces config in peer

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455371#comment-15455371
 ] 

Hadoop QA commented on HBASE-16447:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 48s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red} 0m 17s 
{color} | {color:red} The patch generated 59 new + 358 unchanged - 13 fixed = 
417 total (was 371) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red} 0m 10s 
{color} | {color:red} The patch generated 48 new + 257 unchanged - 0 fixed = 
305 total (was 257) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 51s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 4s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 125m 12s 
{color} |

[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455391#comment-15455391
 ] 

Sean Busbey commented on HBASE-16375:
-

Please make sure the initial patch is for the master branch. If backports are 
not clean, or a particular branch requires special handling then please name 
the patch according to the guidelines provided in the precommit feedback.

> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix, for the Phoenix e2e test to succeed, the lines below 
> need to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf 
> properties to the HBaseTestingUtility conf object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455408#comment-15455408
 ] 

Sean Busbey commented on HBASE-16375:
-

Also, please use {{git format-patch}} to create your patch file so that we 
have a commit message and your authorship information. For more information and 
some helper scripts, check out [the "submitting patches" section of the ref 
guide|http://hbase.apache.org/book.html#submitting.patches]
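
For example (the file name here is only illustrative of the naming 
guidelines):

{code}
# Turn the latest commit into a single patch file that carries the commit
# message and authorship information.
git format-patch -1 --stdout > HBASE-16375.master.001.patch
{code}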

> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix, for the Phoenix e2e test to succeed, the lines below 
> need to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf 
> properties to the HBaseTestingUtility conf object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16318) fail build if license isn't in whitelist

2016-09-01 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455474#comment-15455474
 ] 

Sean Busbey commented on HBASE-16318:
-

Looks like 0.98 is missing HBASE-16340. We shouldn't need license info for it 
because we shouldn't be shipping it.

> fail build if license isn't in whitelist
> 
>
> Key: HBASE-16318
> URL: https://issues.apache.org/jira/browse/HBASE-16318
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3, 0.98.22
>
> Attachments: 16318-0.98-addendum2.txt, HBASE-16318.0.patch, 
> HBASE-16318.1.patch, HBASE-16318.2.patch, HBASE-16318.3.patch, 
> HBASE-16318.v3addendum.0.98.patch
>
>
> We use supplemental-models.xml to make sure we have consistent names and 
> descriptions for licenses. We also know what licenses we expect to see in our 
> build. If we see a different one:
> # fail the velocity template process
> # if possible, include some information about why this happened



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16318) fail build if license isn't in whitelist

2016-09-01 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455483#comment-15455483
 ] 

Sean Busbey commented on HBASE-16318:
-

Hurm. This probably means branch-1.2 and branch-1.1 need HBASE-16340 as well.

> fail build if license isn't in whitelist
> 
>
> Key: HBASE-16318
> URL: https://issues.apache.org/jira/browse/HBASE-16318
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3, 0.98.22
>
> Attachments: 16318-0.98-addendum2.txt, HBASE-16318.0.patch, 
> HBASE-16318.1.patch, HBASE-16318.2.patch, HBASE-16318.3.patch, 
> HBASE-16318.v3addendum.0.98.patch
>
>
> We use supplemental-models.xml to make sure we have consistent names and 
> descriptions for licenses. We also know what licenses we expect to see in our 
> build. If we see a different one:
> # fail the velocity template process
> # if possible, include some information about why this happened



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-16340) ensure no Xerces jars included

2016-09-01 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reopened HBASE-16340:
-

Reopening due to an issue found by @larsh on HBASE-16318. We should include 
this on the other active branches to make sure we don't get Xerces when 
building against Hadoop 2.7.

> ensure no Xerces jars included
> --
>
> Key: HBASE-16340
> URL: https://issues.apache.org/jira/browse/HBASE-16340
> Project: HBase
>  Issue Type: Task
>  Components: dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16340.1.patch, HBASE-16340.2.patch
>
>
> When we moved our pom to Hadoop 2.7 we picked up a transitive Xerces 
> implementation. We should exclude it to ensure we don't get a conflict with 
> the implementation that ships with the JVM.
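
The exclusion would look roughly like this wherever the Hadoop dependency is 
declared (hadoop-common is just one example artifact; any module pulling 
Hadoop 2.7 transitively needs the same treatment):

{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <!-- Keep the JVM's built-in JAXP implementation; drop the transitive
         xerces jar picked up via Hadoop 2.7. -->
    <exclusion>
      <groupId>xerces</groupId>
      <artifactId>xercesImpl</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}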



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16447) Replication by namespaces config in peer

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1540#comment-1540
 ] 

Hadoop QA commented on HBASE-16447:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 9s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red} 0m 18s 
{color} | {color:red} The patch generated 61 new + 362 unchanged - 13 fixed = 
423 total (was 375) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red} 0m 10s 
{color} | {color:red} The patch generated 48 new + 259 unchanged - 0 fixed = 
307 total (was 259) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 0s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {co

[jira] [Updated] (HBASE-16516) Revisit the implementation of PayloadCarryingRpcController

2016-09-01 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16516:
--
Attachment: HBASE-16516-v4.patch

Changed the notifyOnCancel API, as I found that we still need to do something 
even if the call is already cancelled before we write it out...
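
For context, a minimal sketch of the shape such a cancellation-notification API 
can take, assuming the semantics described above; {{SketchRpcController}} and 
its callback type are illustrative stand-ins, not the actual HBase classes:

{code:java}
// Sketch only: the cancellation callback still fires when the call was
// cancelled *before* the callback was registered (i.e. before the request was
// written out), so cleanup is never silently skipped.
class SketchRpcController {
  interface CancellationCallback {
    void run(boolean alreadyCancelled);
  }

  private boolean cancelled;
  private CancellationCallback callback;

  public synchronized void startCancel() {
    if (!cancelled) {
      cancelled = true;
      if (callback != null) {
        callback.run(false);
      }
    }
  }

  public synchronized void notifyOnCancel(CancellationCallback cb) {
    this.callback = cb;
    if (cancelled) {
      cb.run(true); // already cancelled: run the cleanup immediately
    }
  }
}
{code}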

> Revisit the implementation of PayloadCarryingRpcController
> --
>
> Key: HBASE-16516
> URL: https://issues.apache.org/jira/browse/HBASE-16516
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16516-v1.patch, HBASE-16516-v2.patch, 
> HBASE-16516-v3.patch, HBASE-16516-v4.patch, HBASE-16516.patch
>
>
> First, it should be an interface; the current implementation of 
> {{DelegatingPayloadCarryingRpcController}} is weird.
> Second, we need to be more careful when dealing with cancellation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16399) Provide an API to get list of failed regions and servername in Canary

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455656#comment-15455656
 ] 

Hadoop QA commented on HBASE-16399:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 18s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 24s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 141m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-09-01 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826612/HBASE-16399.03.patch |
| JIRA Issue | HBASE-16399 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 9ce2282a5941 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / e30a66b |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/jav

[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Loknath Priyatham Teja Singamsetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Loknath Priyatham Teja Singamsetty  updated HBASE-16375:

Attachment: HBASE-16375_0.98_and_above_with_tests_format.patch

> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch, 
> HBASE-16375_0.98_and_above_with_tests_format.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not being 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix for the Phoenix e2e test to succeed, the below lines need 
> to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf properties 
> to the HBaseTestingUtility conf object.
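
As an aside, a hedged sketch of the cleanup suggested in that last point: the 
repeated null-check-and-set pattern above could be factored into a small helper, 
which also makes it easy to add the two webapp addresses. {{MiniMrConfCopier}} 
and {{copyIfSet}} are hypothetical names, not existing HBase code:

{code:java}
// Sketch under the assumption that plain Configuration get/set is all we need.
import org.apache.hadoop.conf.Configuration;

public final class MiniMrConfCopier {
  private MiniMrConfCopier() {}

  /** Copy a property from src to dst only if the mini cluster actually set it. */
  static void copyIfSet(Configuration src, Configuration dst, String key) {
    String value = src.get(key);
    if (value != null) {
      dst.set(key, value);
    }
  }

  /** Covers the existing copies plus the two webapp addresses requested above. */
  static void copyMiniMrProps(Configuration jobConf, Configuration conf) {
    copyIfSet(jobConf, conf, "mapreduce.jobtracker.address");
    copyIfSet(jobConf, conf, "yarn.resourcemanager.address");
    copyIfSet(jobConf, conf, "yarn.resourcemanager.scheduler.address");
    copyIfSet(jobConf, conf, "yarn.resourcemanager.webapp.address");
    copyIfSet(jobConf, conf, "mapreduce.jobhistory.address");
    copyIfSet(jobConf, conf, "mapreduce.jobhistory.webapp.address");
  }
}
{code}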



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455713#comment-15455713
 ] 

Hadoop QA commented on HBASE-16375:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-16375 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826644/HBASE-16375_0.98_and_above_with_tests_format.patch
 |
| JIRA Issue | HBASE-16375 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3386/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch, 
> HBASE-16375_0.98_and_above_with_tests_format.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not being 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix for the Phoenix e2e test to succeed, the below lines need 
> to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf properties 
> to the HBaseTestingUtility conf object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Loknath Priyatham Teja Singamsetty (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455715#comment-15455715
 ] 

Loknath Priyatham Teja Singamsetty  commented on HBASE-16375:
-

Attached the patch generated using --format-patch. In this case, the single 
patch applies to the master branch as well, so I am not attaching multiple 
patches.

> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch, 
> HBASE-16375_0.98_and_above_with_tests_format.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not being 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix for the Phoenix e2e test to succeed, the below lines need 
> to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf properties 
> to the HBaseTestingUtility conf object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16399) Provide an API to get list of failed regions and servername in Canary

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455727#comment-15455727
 ] 

Hadoop QA commented on HBASE-16399:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
58s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 30s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 38s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 112m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestFailedAppendAndSync |
|   | hadoop.hbase.mapred.TestMultiTableSnapshotInputFormat |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-09-01 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826620/HBASE-16399.branch-1.03.patch
 |
| JIRA Issue | HBASE-16399 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 6918a128954c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |

[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455742#comment-15455742
 ] 

Andrew Purtell commented on HBASE-16375:


The latest patch is not correctly named. See the guide Sean provided a link to. 
To submit and test for the master branch, name the patch HBASE-16375.patch. 
Because you included the string "0.98", it will be tested against 0.98, which 
is not what we want.

> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch, 
> HBASE-16375_0.98_and_above_with_tests_format.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not being 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix for the Phoenix e2e test to succeed, the below lines need 
> to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf properties 
> to the HBaseTestingUtility conf object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Loknath Priyatham Teja Singamsetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Loknath Priyatham Teja Singamsetty  updated HBASE-16375:

Attachment: HBASE-16375.master.001.patch

> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375.master.001.patch, 
> HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch, 
> HBASE-16375_0.98_and_above_with_tests_format.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not being 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix for the Phoenix e2e test to succeed, the below lines need 
> to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf properties 
> to the HBaseTestingUtility conf object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Loknath Priyatham Teja Singamsetty (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455829#comment-15455829
 ] 

Loknath Priyatham Teja Singamsetty  commented on HBASE-16375:
-

I have created and attached HBASE-16375.master.001.patch for the master branch. 
Do I need to create separate patches for all the branches > 0.98? Is there any 
way the same patch can be applied to all branches > 0.98?

> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375.master.001.patch, 
> HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch, 
> HBASE-16375_0.98_and_above_with_tests_format.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not being 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix for the Phoenix e2e test to succeed, the below lines need 
> to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf properties 
> to the HBaseTestingUtility conf object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455849#comment-15455849
 ] 

Andrew Purtell commented on HBASE-16375:


Correct, you'd need to make a patch per branch. However, in this case I think 
it's fine to put up a patch just for master. Precommit results will generally 
apply, and the committer can easily run the new test when applying the patch to 
the other target branches to check there is no unexpected variation.

> Mapreduce mini cluster using HBaseTestingUtility not setting correct 
> resourcemanager and jobhistory webapp address of MapReduceTestingShim  
> 
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375.master.001.patch, 
> HBASE-16375_0.98_and_above.patch, 
> HBASE-16375_0.98_and_above_with_tests.patch, 
> HBASE-16375_0.98_and_above_with_tests_format.patch
>
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set 
> "yarn.resourcemanager.webapp.address" and 
> "mapreduce.jobhistory.webapp.address", which are required for getting the 
> submitted YARN apps via the mapreduce webapp. These properties are not being 
> copied from the jobConf of MapReduceTestingShim, resulting in default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites 
> this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance 
> and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly 
> run.  So we copy the
> // necessary config properties here.  YARN-129 required adding a few 
> properties.
> conf.set("mapreduce.jobtracker.address", 
> jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress =
>   jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix for the Phoenix e2e test to succeed, the below lines need 
> to be added as well
> {quote}
> String rmWebappAddress = 
> jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = 
> jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf properties 
> to the HBaseTestingUtility conf object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16507) Procedure v2 - Force DDL operation to always roll forward

2016-09-01 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455853#comment-15455853
 ] 

Stephen Yuan Jiang commented on HBASE-16507:


I +1'd in the RB. Since this patch contains some improvements to the rollback 
and is critical for the proc-v2-based AM implementation, I'd like to see this 
patch done to unblock the AM work.

However, I'd like to have an additional JIRA to re-implement DDL rollback in 
the future (preferably before the 3.0 release - it does not have to be 2.0, it 
could be 2.x).
"
General comments (some from our private conversation):

I know that from a technical point of view, the DDL rollback code is tricky. 
However, I think we do need to provide a robust way for users to roll back a 
hanging or transiently failed DDL operation. We can have this patch committed 
and later implement a better version of DDL rollback.

More comments:
- The direction should be to support robust rollback for all DDLs, instead of 
removing it.
DDL rollback has been part of database technology forever; I don't see any 
major database technology that does not support rollback. Technical difficulty 
apart, we should move in a direction that makes rollback more robust instead of 
disallowing it - why does a region split have to be completed even with all the 
errors, why not allow it to roll back quickly so users can start to access the 
region? If a create table hits tons of issues, instead of hanging there forever 
or requiring manual cleanup, isn't "kill it and let the inconsistent state be 
cleaned up automatically" a logical choice?

- We already have DDL rollbacks in HBase; we cannot just reduce functionality 
(a good functionality) due to complications of the procedure implementation.
"

> Procedure v2 - Force DDL operation to always roll forward
> -
>
> Key: HBASE-16507
> URL: https://issues.apache.org/jira/browse/HBASE-16507
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16507-v0.patch, HBASE-16507-v1.patch
>
>
> Having rollback for DDLs was a bad idea, 
> and it turns out to be unexpected behavior for the user. 
> DDLs only have transient errors (e.g. zk, hdfs, meta down); 
> if we abort/rollback on a transient failure the user will get a failure, 
> and it is not clear why the user needs to retry the command when the system 
> can do that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16340) ensure no Xerces jars included

2016-09-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455884#comment-15455884
 ] 

Andrew Purtell commented on HBASE-16340:


Are you doing this, [~busbey]? I'm on deck for release work today and can take 
care of this as part of working on 0.98.

> ensure no Xerces jars included
> --
>
> Key: HBASE-16340
> URL: https://issues.apache.org/jira/browse/HBASE-16340
> Project: HBase
>  Issue Type: Task
>  Components: dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16340.1.patch, HBASE-16340.2.patch
>
>
> When we moved our pom to Hadoop 2.7 we picked up a transitive Xerces 
> implementation. We should exclude this to ensure we don't get a conflict with 
> the implementation that ships with the JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16340) ensure no Xerces jars included

2016-09-01 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455893#comment-15455893
 ] 

Sean Busbey commented on HBASE-16340:
-

I had scheduled it for myself, but I've gotten caught up in a (non-HBase) perf 
issue. It'd be great if you could take care of it.

> ensure no Xerces jars included
> --
>
> Key: HBASE-16340
> URL: https://issues.apache.org/jira/browse/HBASE-16340
> Project: HBase
>  Issue Type: Task
>  Components: dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16340.1.patch, HBASE-16340.2.patch
>
>
> When we moved our pom to Hadoop 2.7 we picked up a transitive Xerces 
> implementation. We should exclude this to ensure we don't get a conflict with 
> the implementation that ships with the JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16318) fail build if license isn't in whitelist

2016-09-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455885#comment-15455885
 ] 

Andrew Purtell commented on HBASE-16318:


OK, let's pick 16340 back rather than commit the second addendum here.

> fail build if license isn't in whitelist
> 
>
> Key: HBASE-16318
> URL: https://issues.apache.org/jira/browse/HBASE-16318
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3, 0.98.22
>
> Attachments: 16318-0.98-addendum2.txt, HBASE-16318.0.patch, 
> HBASE-16318.1.patch, HBASE-16318.2.patch, HBASE-16318.3.patch, 
> HBASE-16318.v3addendum.0.98.patch
>
>
> We use supplemental-models.xml to make sure we have consistent names and 
> descriptions for licenses. We also know what licenses we expect to see in our 
> build. If we see a different one:
> # fail the velocity template process
> # if possible, include some information about why this happened



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14417) Incremental backup and bulk loading

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14417:
---
Description: 
Currently, incremental backup is based on WAL files. Bulk data loading bypasses 
WALs for obvious reasons, breaking incremental backups. The only way to 
continue backups after bulk loading is to create a new full backup of a table. 
This may not be feasible for customers who do bulk loading regularly (say, 
every day).

Google doc for design:
https://docs.google.com/document/d/1ACCLsecHDvzVSasORgqqRNrloGx4mNYIbvAU7lq5lJE

  was:Currently, incremental backup is based on WAL files. Bulk data loading 
bypasses WALs for obvious reasons, breaking incremental backups. The only way 
to continue backups after bulk loading is to create a new full backup of a table. 
This may not be feasible for customers who do bulk loading regularly (say, 
every day).


> Incremental backup and bulk loading
> ---
>
> Key: HBASE-14417
> URL: https://issues.apache.org/jira/browse/HBASE-14417
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Critical
>  Labels: backup
> Fix For: 2.0.0
>
>
> Currently, incremental backup is based on WAL files. Bulk data loading 
> bypasses WALs for obvious reasons, breaking incremental backups. The only way 
> to continue backups after bulk loading is to create a new full backup of a 
> table. This may not be feasible for customers who do bulk loading regularly 
> (say, every day).
> Google doc for design:
> https://docs.google.com/document/d/1ACCLsecHDvzVSasORgqqRNrloGx4mNYIbvAU7lq5lJE



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15449) HBase Backup Phase 3: Support physical table layout change

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15449:
---
Attachment: 15449.v8.txt

Patch v8 makes the test in TestIncrementalBackup clearer:

After the full backup, column family f2 is added to table1 and column family f3 
is dropped from table1.

> HBase Backup Phase 3: Support physical table layout change 
> ---
>
> Key: HBASE-15449
> URL: https://issues.apache.org/jira/browse/HBASE-15449
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15449.v1.txt, 15449.v2.txt, 15449.v4.txt, 15449.v5.txt, 
> 15449.v7.txt, 15449.v8.txt
>
>
> Table operations such as add column family, delete column family, truncate, or 
> delete table may result in subsequent backup restore failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2

2016-09-01 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455951#comment-15455951
 ] 

Ted Yu commented on HBASE-14123:


Patch v14 is based on commit d7022551cf3ad8b9e97292d04d8f68e04d0e068a

> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: 14123-master.v14.txt, 14123-master.v2.txt, 
> 14123-master.v3.txt, 14123-master.v5.txt, 14123-master.v6.txt, 
> 14123-master.v7.txt, 14123-master.v8.txt, 14123-master.v9.txt, 14123-v14.txt, 
> HBASE-14123-for-7912-v1.patch, HBASE-14123-for-7912-v6.patch, 
> HBASE-14123-v1.patch, HBASE-14123-v10.patch, HBASE-14123-v11.patch, 
> HBASE-14123-v12.patch, HBASE-14123-v13.patch, HBASE-14123-v15.patch, 
> HBASE-14123-v16.patch, HBASE-14123-v2.patch, HBASE-14123-v3.patch, 
> HBASE-14123-v4.patch, HBASE-14123-v5.patch, HBASE-14123-v6.patch, 
> HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16505) Add AsyncRegion interface to pass deadline and support async operations

2016-09-01 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455975#comment-15455975
 ] 

Yu Li commented on HBASE-16505:
---

[~yangzhe1991] Just to confirm: in the current design, I assume the usage 
would be like:
# Initialize a {{RegionOperationContext}} instance
# Start a new thread (or hand to Netty thread) to call the AsyncRegion with 
this RegionOperationContext
# In the main thread, use {{RegionOperationContext#getResult}} to get the 
result asynchronously, rather than blocking and waiting

Correct?

When talking about an "asynchronous non-blocking way", I just meant to emphasize 
*non-blocking*. In other words, it's not like the current AsyncRpcClient, where 
although we hand over the request to Netty, which does things in an async way, 
we still wait for the result (rather than using a callback/listener), so it's 
still blocking mode.
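
A minimal sketch of that usage pattern, assuming a future-backed context; 
{{RegionOperationContext}}, {{AsyncRegion}}, and {{onResult}} below are 
stand-ins for the proposed interfaces, not a committed API:

{code:java}
// Sketch only: the caller hands the context to the region and registers a
// callback; nothing in the calling thread blocks waiting for the result.
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

class RegionOperationContext<R> {
  private final CompletableFuture<R> future = new CompletableFuture<>();

  void complete(R result) {             // invoked by the region when done
    future.complete(result);
  }

  void onResult(Consumer<R> callback) { // non-blocking alternative to get()
    future.thenAccept(callback);
  }
}

interface AsyncRegion {
  // Runs the read, possibly on a Netty/worker thread, and completes ctx.
  void get(byte[] row, RegionOperationContext<byte[]> ctx);
}

class Caller {
  static void example(AsyncRegion region) {
    RegionOperationContext<byte[]> ctx = new RegionOperationContext<>();
    region.get("row1".getBytes(), ctx);
    ctx.onResult(value -> System.out.println("got " + value.length + " bytes"));
    // control returns immediately; the callback fires when the region completes
  }
}
{code}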

> Add AsyncRegion interface to pass deadline and support async operations
> ---
>
> Key: HBASE-16505
> URL: https://issues.apache.org/jira/browse/HBASE-16505
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16505-v1.patch
>
>
> If we want to know the correct timeout setting in the read/write path, we need 
> to add a new parameter to the operation methods of Region.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16507) Procedure v2 - Force DDL operation to always roll forward

2016-09-01 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455992#comment-15455992
 ] 

Matteo Bertozzi commented on HBASE-16507:
-

The whole point of the proc-v2 work is to remove conflict situations via a 
single writer/coordinator.

In this DDL case the rollback is not really removed for technical difficulties, 
aside from the fact that some implementations like enable/disable/modify are 
wrong and end up causing possible corruptions. The point here is that DDLs are 
operations that will always be able to complete. There may be transient 
failures along the way (e.g. zk/hdfs/meta hiccups), but as soon as those are 
resolved and the master is fully operational again, the operation will be able 
to complete.

At the moment we roll back on transient failures, which results in users being 
confused about why the operation failed when a simple retry will succeed, 
ending up in support calls and requests about why the system can't handle it 
since it was just a transient failure.

RS puts behave with a roll-forward approach like the one we have in this patch. 
Once the operation is written to the WAL, we know that the operation will 
complete (at some point) no matter how many transient failures we get.

Rollbacks as in traditional databases are triggered by situations where the 
operation is stuck due to conflicts and will never be able to complete. In 
this case our DDLs will always be able to complete.

The only case where rollback can be applied to DDLs here is when the user 
requests an abort of the operation. But DDLs are too short in duration for the 
user to be able to abort them once started. Note that in the attached patch 
the user is able to roll back the DDL operation up to the first prepare step, 
which is the most likely case where you want to abort, e.g. your operation is 
stuck behind another operation that is making slow progress and you decide to 
abort it.

Long operations like assignment and snapshot/backups will have rollback 
support, because they are long and because, due to "technical difficulties", 
they may end up stuck in conflicts.
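
A tiny sketch of the roll-forward contract described above, under the 
assumption that every failure the step throws is transient; the names are 
illustrative, not the actual procedure framework:

{code:java}
// Sketch only: instead of rolling back on a transient zk/hdfs/meta failure,
// the step is retried with backoff until the master is healthy again, so the
// DDL always completes eventually once its intent has been persisted.
import java.io.IOException;

class RollForwardExecutor {
  interface Step {
    void run() throws IOException; // throws only on transient failures
  }

  static void executeUntilDone(Step step) throws InterruptedException {
    long backoffMs = 100;
    while (true) {
      try {
        step.run();
        return; // completed; no rollback path exists for this step
      } catch (IOException transientFailure) {
        Thread.sleep(backoffMs); // wait out the hiccup, then retry
        backoffMs = Math.min(backoffMs * 2, 30000L);
      }
    }
  }
}
{code}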

> Procedure v2 - Force DDL operation to always roll forward
> -
>
> Key: HBASE-16507
> URL: https://issues.apache.org/jira/browse/HBASE-16507
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16507-v0.patch, HBASE-16507-v1.patch
>
>
> Having rollback for DDLs was a bad idea, 
> and it turns out to be unexpected behavior for the user. 
> DDLs only have transient errors (e.g. zk, hdfs, meta down);
> if we abort/rollback on a transient failure the user will get a failure,
> and it is not clear why the user needs to retry the command when the system 
> can do that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16516) Revisit the implementation of PayloadCarryingRpcController

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456020#comment-15456020
 ] 

Hadoop QA commented on HBASE-16516:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 2s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 38s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
36s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 146m 38s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-09-01 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826640/HBASE-16516-v4.patch |
| JIRA Issue | HBASE-16516 |
| Optional Tests |  asflicense  javac  javadoc  unit

[jira] [Updated] (HBASE-15449) HBase Backup Phase 3: Support physical table layout change

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15449:
---
Attachment: (was: 15449.v8.txt)

> HBase Backup Phase 3: Support physical table layout change 
> ---
>
> Key: HBASE-15449
> URL: https://issues.apache.org/jira/browse/HBASE-15449
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15449.v1.txt, 15449.v2.txt, 15449.v4.txt, 15449.v5.txt, 
> 15449.v7.txt, 15449.v8.txt
>
>
> Table operation such as add column family, delete column family, truncate , 
> delete table may result in subsequent backup restore failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15449) HBase Backup Phase 3: Support physical table layout change

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15449:
---
Attachment: 15449.v8.txt

> HBase Backup Phase 3: Support physical table layout change 
> ---
>
> Key: HBASE-15449
> URL: https://issues.apache.org/jira/browse/HBASE-15449
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 15449.v1.txt, 15449.v2.txt, 15449.v4.txt, 15449.v5.txt, 
> 15449.v7.txt, 15449.v8.txt
>
>
> Table operation such as add column family, delete column family, truncate , 
> delete table may result in subsequent backup restore failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16502) Reduce garbage in BufferedDataBlockEncoder

2016-09-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456067#comment-15456067
 ] 

Hudson commented on HBASE-16502:


FAILURE: Integrated in Jenkins build HBase-0.98-on-Hadoop-1.1 #1268 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1268/])
HBASE-16502 Reduce garbage in BufferedDataBlockEncoder (binlijin) (apurtell: 
rev b861c331f05df8a585d03959dc471e9e8b045047)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
HBASE-16502 Reduce garbage in BufferedDataBlockEncoder - addendum adopts 
(apurtell: rev e792d570a98374a9829522b22d63b90e25c89104)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java


> Reduce garbage in BufferedDataBlockEncoder
> --
>
> Key: HBASE-16502
> URL: https://issues.apache.org/jira/browse/HBASE-16502
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0, 1.4.0, 0.98.22
>
> Attachments: HBASE-16502-master.patch, 
> HBASE-16502-master_addnum_v1.patch, HBASE-16502-master_addnum_v2.patch, 
> HBASE-16502-master_v2.patch, HBASE-16502.branch-1.addnumv1.patch, 
> HBASE-16502.branch-1.v1.patch
>
>
> In BufferedDataBlockEncoder.SeekerState every read will new a tagsBuffer for  
> compressTags. There is no need when no tags or compressTags=false, so we can 
> reduce this new byte[] to reduce garbage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16538) Version mismatch in HBaseConfiguration.checkDefaultsVersion

2016-09-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456081#comment-15456081
 ] 

stack commented on HBASE-16538:
---

I was going to call BS since this has been working forever, but after chatting 
w/ [~mbertozzi], you may be on to something, [~appy]. Can we print out what the 
annotation is finding at static time? We could commit something temporarily, 
since this condition only seems to show up on Apache Jenkins.

> Version mismatch in HBaseConfiguration.checkDefaultsVersion
> ---
>
> Key: HBASE-16538
> URL: https://issues.apache.org/jira/browse/HBASE-16538
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>  Labels: configuration, test-failure
>
> {noformat}
> org.apache.hadoop.hbase.procedure2.TestYieldProcedures
> testYieldEachExecutionStep(org.apache.hadoop.hbase.procedure2.TestYieldProcedures)
>   Time elapsed: 0.255 sec  <<< ERROR!
> java.lang.RuntimeException: hbase-default.xml file seems to be for an older 
> version of HBase (2.0.0-SNAPSHOT), this version is Unknown
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:73)
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:83)
>   at 
> org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:98)
>   at 
> org.apache.hadoop.hbase.HBaseCommonTestingUtility.(HBaseCommonTestingUtility.java:46)
>   at 
> org.apache.hadoop.hbase.procedure2.TestYieldProcedures.setUp(TestYieldProcedures.java:63)
> {noformat}
> (Exact test is not important)
> Reference run:
> https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=JDK%201.8%20(latest),label=yahoo-not-h2/1515/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14123) HBase Backup/Restore Phase 2

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14123:
---
Attachment: 14123-master.v15.txt

> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: 14123-master.v14.txt, 14123-master.v15.txt, 
> 14123-master.v2.txt, 14123-master.v3.txt, 14123-master.v5.txt, 
> 14123-master.v6.txt, 14123-master.v7.txt, 14123-master.v8.txt, 
> 14123-master.v9.txt, 14123-v14.txt, HBASE-14123-for-7912-v1.patch, 
> HBASE-14123-for-7912-v6.patch, HBASE-14123-v1.patch, HBASE-14123-v10.patch, 
> HBASE-14123-v11.patch, HBASE-14123-v12.patch, HBASE-14123-v13.patch, 
> HBASE-14123-v15.patch, HBASE-14123-v16.patch, HBASE-14123-v2.patch, 
> HBASE-14123-v3.patch, HBASE-14123-v4.patch, HBASE-14123-v5.patch, 
> HBASE-14123-v6.patch, HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16542) Skip full backup in selected backup tests

2016-09-01 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16542:
--

 Summary: Skip full backup in selected backup tests
 Key: HBASE-16542
 URL: https://issues.apache.org/jira/browse/HBASE-16542
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


Since automatic mode is always used (HBASE-16037), some tests take longer 
to run:

1. restore full backup
2. restore incremental backup

Action 2 would execute action 1 again.

We can selectively skip full backup in backup / restore tests where incremental 
backup is involved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16536) Make the HBase minicluster easy to use for testing downstream applications.

2016-09-01 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456122#comment-15456122
 ] 

Dima Spivak commented on HBASE-16536:
-

Any specific reason to change the name from {{FilterTestingCluster}} to 
{{HBaseTestingClusterAutostarter}}? I just wonder if that name is going to 
confuse new developers.

> Make the HBase minicluster easy to use for testing downstream applications.
> ---
>
> Key: HBASE-16536
> URL: https://issues.apache.org/jira/browse/HBASE-16536
> Project: HBase
>  Issue Type: Improvement
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: HBASE-16536-01.patch, HBASE-16536-02.patch
>
>
> In many applications I write I use HBase to store information.
> A big problem is testing these applications.
> I have seen several situations where people have written tests that create 
> tables in the development cluster and due to firewalls and such couldn't run 
> those tests from Jenkins.
> A while ago I wrote the FilterTestingCluster class that makes unit testing 
> the client side filters a lot easier. With this ticket I propose to make this 
> more generic and make it so that user applications can easily incorporate it 
> into their own unit tests without any major modifications to their 
> application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16345) RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer Exceptions

2016-09-01 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-16345:
-
Attachment: HBASE-16345.master.003.patch

> RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer 
> Exceptions
> --
>
> Key: HBASE-16345
> URL: https://issues.apache.org/jira/browse/HBASE-16345
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-16345-v001.patch, HBASE-16345.master.001.patch, 
> HBASE-16345.master.002.patch, HBASE-16345.master.003.patch
>
>
> Update to the description: debugged more on this front based on the comments 
> from Enis. 
> The cause is that for the primary replica, if its retry is exhausted too 
> fast, f.get() [1] returns an ExecutionException. This exception needs to be 
> ignored so that we can continue with the replicas.
> The other issue is that after adding calls for the replicas, if the first 
> completed task gets an ExecutionException (due to the retry being exhausted), 
> it throws the exception to the client [2].
> In this case, we need to loop through these tasks, waiting for a successful 
> one. If none succeeds, throw the exception.
> The same applies to scans.
> [1] 
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L197
> [2] 
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L219
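
A plain-Java sketch of the handling described above (illustrative only, not the 
actual patch): take completed replica calls one at a time, ignore an 
ExecutionException (e.g. retries exhausted on the primary), and return the first 
successful result; if none succeeds, rethrow.

{code}
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;

public class FirstSuccessSketch {
  // Wait for the first task that completes successfully; if every task fails
  // with an ExecutionException, rethrow the last one to the caller.
  static <T> T firstSuccess(CompletionService<T> tasks, int count)
      throws InterruptedException, ExecutionException {
    ExecutionException last = null;
    for (int i = 0; i < count; i++) {
      try {
        return tasks.take().get();  // first successful completion wins
      } catch (ExecutionException e) {
        last = e;                   // this replica failed; wait on the others
      }
    }
    throw last;                     // no replica succeeded: surface the failure
  }
}
{code}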



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16345) RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer Exceptions

2016-09-01 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456135#comment-15456135
 ] 

huaxiang sun commented on HBASE-16345:
--

Hi [~enis], I just uploaded a new patch which addresses your comments (the 
major one being the addition of unit test cases). Could you review and provide 
feedback? Thanks!

> RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer 
> Exceptions
> --
>
> Key: HBASE-16345
> URL: https://issues.apache.org/jira/browse/HBASE-16345
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-16345-v001.patch, HBASE-16345.master.001.patch, 
> HBASE-16345.master.002.patch, HBASE-16345.master.003.patch
>
>
> Update to the description: debugged more on this front based on the comments 
> from Enis. 
> The cause is that for the primary replica, if its retry is exhausted too 
> fast, f.get() [1] returns an ExecutionException. This exception needs to be 
> ignored so that we can continue with the replicas.
> The other issue is that after adding calls for the replicas, if the first 
> completed task gets an ExecutionException (due to the retry being exhausted), 
> it throws the exception to the client [2].
> In this case, we need to loop through these tasks, waiting for a successful 
> one. If none succeeds, throw the exception.
> The same applies to scans.
> [1] 
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L197
> [2] 
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L219



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16543) Separate Create/Modify Table operations from open/reopen regions

2016-09-01 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-16543:
---

 Summary: Separate Create/Modify Table operations from open/reopen 
regions
 Key: HBASE-16543
 URL: https://issues.apache.org/jira/browse/HBASE-16543
 Project: HBase
  Issue Type: Sub-task
  Components: master
Affects Versions: 2.0.0
Reporter: Matteo Bertozzi
 Fix For: 2.0.0


At the moment, create table and modify table operations trigger an 
open/reopen of the regions inside the DDL operation. 
We should split the operation into two parts:
 - create table, enable table regions
 - modify table, reopen table regions



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456191#comment-15456191
 ] 

Hadoop QA commented on HBASE-16375:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 8s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 138m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegion |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-09-01 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826648/HBASE-16375.master.001.patch
 |
| JIRA Issue | HBASE-16375 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 75ee2dadf339 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-s

[jira] [Commented] (HBASE-15513) hbase.hregion.memstore.chunkpool.maxsize is 0.0 by default

2016-09-01 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456219#comment-15456219
 ] 

Vladimir Rodionov commented on HBASE-15513:
---

[~anoop.hbase], how about +1?

> hbase.hregion.memstore.chunkpool.maxsize is 0.0 by default
> --
>
> Key: HBASE-15513
> URL: https://issues.apache.org/jira/browse/HBASE-15513
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-15513-v1.patch
>
>
> That results in excessive MemStoreLAB chunk allocations because we cannot 
> reuse them. Not sure why it has been disabled by default. Maybe the code 
> has not been tested well?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16527) IOExceptions from DFS client still can cause CatalogJanitor to delete referenced files

2016-09-01 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456232#comment-15456232
 ] 

Ted Yu commented on HBASE-16527:


lgtm

We can handle filter optimization in another issue.

> IOExceptions from DFS client still can cause CatalogJanitor to delete 
> referenced files
> --
>
> Key: HBASE-16527
> URL: https://issues.apache.org/jira/browse/HBASE-16527
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-16527-v1.patch, HBASE-16527-v2.patch
>
>
> That was partially fixed in HBASE-13331, but the issue still exists, now a 
> little bit deeper in the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16544) Remove or Clarify 'Using Amazon S3 Storage' section in the reference guide

2016-09-01 Thread Yi Liang (JIRA)
Yi Liang created HBASE-16544:


 Summary: Remove or Clarify  'Using Amazon S3 Storage' section in 
the reference guide
 Key: HBASE-16544
 URL: https://issues.apache.org/jira/browse/HBASE-16544
 Project: HBase
  Issue Type: Bug
  Components: documentation, snapshots
Affects Versions: 2.0.0
Reporter: Yi Liang


reference guide at https://hbase.apache.org/book.html#amazon_s3_configuration

(1) The title 'Using Amazon S3 Storage' is confusing. From my point of view, this 
title implies that we can use S3 storage to replace HDFS. I really tried this :(, 
but it always gave me errors and HBase could not even start; see the error 
mentioned in JIRA HBASE-11045.

(2) The details in this section are more about deploying HBase on an Amazon EC2 
cluster, which has nothing to do with 'using Amazon S3 storage'.

(3) In all, I think we need to remove this section, or at least clarify it once 
someone fully tests HBase on S3. See HBASE-15646 for more details about this doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16545) Add backup test where data is ingested during backup procedure

2016-09-01 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16545:
--

 Summary: Add backup test where data is ingested during backup 
procedure
 Key: HBASE-16545
 URL: https://issues.apache.org/jira/browse/HBASE-16545
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu


Currently the backup / restore tests do the following:

* ingest data
* perform full backup
* ingest more data

Data ingestion in step 3 above happens after the backup completes.

This issue is to add concurrent data ingestion while a backup is in progress.
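
A self-contained shape of the concurrent variant (the ingest/backup bodies below 
are stubs; a real test would call the backup / restore test utility methods 
instead):

{code}
public class ConcurrentIngestSketch {
  static void ingest() { /* write more rows to the table under test */ }
  static void backup() { /* run a full backup of the same table */ }

  public static void main(String[] args) throws InterruptedException {
    Thread ingester = new Thread(ConcurrentIngestSketch::ingest);
    ingester.start();   // ingestion overlaps the backup below
    backup();           // the backup now runs against a moving data set
    ingester.join();
    // then: restore from the backup and verify the restored data is consistent
  }
}
{code}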



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16545) Add backup test where data is ingested during backup procedure

2016-09-01 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456297#comment-15456297
 ] 

Vladimir Rodionov commented on HBASE-16545:
---

[~tedyu] can you link all the backup tickets you are opening to the master issue 
HBASE-14414?

> Add backup test where data is ingested during backup procedure
> --
>
> Key: HBASE-16545
> URL: https://issues.apache.org/jira/browse/HBASE-16545
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> Currently the backup / restore tests do the following:
> * ingest data
> * perform full backup
> * ingest more data
> Data ingestion in step 3 above happens after the backup completes.
> This issue is to add concurrent data ingestion while a backup is in 
> progress.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16545) Add backup test where data is ingested during backup procedure

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16545:
---
Labels: backup  (was: )

> Add backup test where data is ingested during backup procedure
> --
>
> Key: HBASE-16545
> URL: https://issues.apache.org/jira/browse/HBASE-16545
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>  Labels: backup
>
> Currently the backup / restore tests do the following:
> * ingest data
> * perform full backup
> * ingest more data
> Data ingestion in step 3 above happens after the backup completes.
> This issue is to add concurrent data ingestion while a backup is in 
> progress.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16527) IOExceptions from DFS client still can cause CatalogJanitor to delete referenced files

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16527:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Vlad.

> IOExceptions from DFS client still can cause CatalogJanitor to delete 
> referenced files
> --
>
> Key: HBASE-16527
> URL: https://issues.apache.org/jira/browse/HBASE-16527
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16527-v1.patch, HBASE-16527-v2.patch
>
>
> That was partially fixed in HBASE-13331, but the issue still exists, now a 
> little bit deeper in the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16527) IOExceptions from DFS client still can cause CatalogJanitor to delete referenced files

2016-09-01 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456409#comment-15456409
 ] 

Vladimir Rodionov commented on HBASE-16527:
---

{quote}
lgtm
{quote}

Good. Commit it, [~tedyu]

> IOExceptions from DFS client still can cause CatalogJanitor to delete 
> referenced files
> --
>
> Key: HBASE-16527
> URL: https://issues.apache.org/jira/browse/HBASE-16527
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16527-v1.patch, HBASE-16527-v2.patch
>
>
> That was partially fixed in HBASE-13331, but the issue still exists, now a 
> little bit deeper in the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16527) IOExceptions from DFS client still can cause CatalogJanitor to delete referenced files

2016-09-01 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456414#comment-15456414
 ] 

Vladimir Rodionov commented on HBASE-16527:
---

[~tedyu] I think this one should be backported to all 1.+ releases and to 0.98 
as well. I will prepare separate patches.

> IOExceptions from DFS client still can cause CatalogJanitor to delete 
> referenced files
> --
>
> Key: HBASE-16527
> URL: https://issues.apache.org/jira/browse/HBASE-16527
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16527-v1.patch, HBASE-16527-v2.patch
>
>
> That was partially fixed in HBASE-13331, but the issue still exists, now a 
> little bit deeper in the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16527) IOExceptions from DFS client still can cause CatalogJanitor to delete referenced files

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16527:
---
Fix Version/s: 1.2.4
   1.1.7
   1.3.0

> IOExceptions from DFS client still can cause CatalogJanitor to delete 
> referenced files
> --
>
> Key: HBASE-16527
> URL: https://issues.apache.org/jira/browse/HBASE-16527
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>
> Attachments: HBASE-16527-v1.patch, HBASE-16527-v2.patch
>
>
> That was partially fixed in HBASE-13331, but the issue still exists, now a 
> little bit deeper in the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16527) IOExceptions from DFS client still can cause CatalogJanitor to delete referenced files

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16527:
---
Fix Version/s: 0.98.22

Integrated to active branches.

> IOExceptions from DFS client still can cause CatalogJanitor to delete 
> referenced files
> --
>
> Key: HBASE-16527
> URL: https://issues.apache.org/jira/browse/HBASE-16527
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.22, 1.1.7, 1.2.4
>
> Attachments: HBASE-16527-v1.patch, HBASE-16527-v2.patch
>
>
> That was partially fixed in HBASE-13331, but the issue still exists, now a 
> little bit deeper in the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16345) RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer Exceptions

2016-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456459#comment-15456459
 ] 

Hadoop QA commented on HBASE-16345:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 41s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 1s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
37s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 135m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas |
| Timed out junit tests | 
org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
|   | org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingTTL |
|   | org.apache

[jira] [Commented] (HBASE-16527) IOExceptions from DFS client still can cause CatalogJanitor to delete referenced files

2016-09-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456470#comment-15456470
 ] 

Hudson commented on HBASE-16527:


FAILURE: Integrated in Jenkins build HBase-1.4 #387 (See 
[https://builds.apache.org/job/HBase-1.4/387/])
HBASE-16527 IOExceptions from DFS client still can cause CatalogJanitor (tedyu: 
rev a034a2bdcb13c005512665e00655a8049efde675)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java


> IOExceptions from DFS client still can cause CatalogJanitor to delete 
> referenced files
> --
>
> Key: HBASE-16527
> URL: https://issues.apache.org/jira/browse/HBASE-16527
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.22, 1.1.7, 1.2.4
>
> Attachments: HBASE-16527-v1.patch, HBASE-16527-v2.patch
>
>
> That was partially fixed in HBASE-13331, but the issue still exists, now a 
> little bit deeper in the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16366) Restore operation into new table may fail

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16366:
---
Summary: Restore operation into new table may fail  (was: Restore operation 
into new table can fail)

> Restore operation into new table may fail
> -
>
> Key: HBASE-16366
> URL: https://issues.apache.org/jira/browse/HBASE-16366
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-16366-v1.patch, HBASE-16366-v2.patch
>
>
> If we restore from a backup into a new table, we need to make sure that the 
> new table is available online. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16366) Restore operation into new table may fail

2016-09-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16366:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Fixed spacing around:
{code}
+  if (EnvironmentEdgeManager.currentTime() - startTime > 
TABLE_AVAILABILITY_WAIT_TIME) {
{code}

Thanks for the patch, Vlad.

> Restore operation into new table may fail
> -
>
> Key: HBASE-16366
> URL: https://issues.apache.org/jira/browse/HBASE-16366
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-16366-v1.patch, HBASE-16366-v2.patch
>
>
> If we restore from a backup into a new table, we need to make sure that the 
> new table is available online. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16546) please ignore

2016-09-01 Thread Joe Programmer (JIRA)
Joe Programmer created HBASE-16546:
--

 Summary: please ignore
 Key: HBASE-16546
 URL: https://issues.apache.org/jira/browse/HBASE-16546
 Project: HBase
  Issue Type: Bug
Reporter: Joe Programmer


test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-16546) please ignore

2016-09-01 Thread Dima Spivak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dima Spivak resolved HBASE-16546.
-
Resolution: Invalid

No JIRA tests, please. :)

> please ignore
> -
>
> Key: HBASE-16546
> URL: https://issues.apache.org/jira/browse/HBASE-16546
> Project: HBase
>  Issue Type: Bug
>Reporter: Joe Programmer
>
> test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16311) Audit log for delete snapshot operation is missing in case of snapshot owner deleting the same

2016-09-01 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456492#comment-15456492
 ] 

Yi Liang commented on HBASE-16311:
--

Thanks for reviewing, [~jerryhe]. Yes, the snapshot owner may not have global 
Action.ADMIN permission, so we can just set it as null.
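
For the audit line itself, something along these lines is what I had in mind (a 
sketch only; the exact {{AuthResult}} factory signature may differ, and the 
snapshot name is included to address point 2 of the description):

{code}
@Override
public void preDeleteSnapshot(final ObserverContext<MasterCoprocessorEnvironment> ctx,
    final SnapshotDescription snapshot) throws IOException {
  if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, getActiveUser())) {
    // Snapshot owner is allowed to delete the snapshot; record the decision.
    // Action is null because no global ADMIN permission was required here.
    // (Sketch: the real AuthResult.allow(...) may take different arguments.)
    logResult(AuthResult.allow("deleteSnapshot",
        "Snapshot owner check allowed for snapshot " + snapshot.getName(),
        getActiveUser(), null, null, null));
  } else {
    requirePermission("deleteSnapshot", Action.ADMIN);
  }
}
{code}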

> Audit log for delete snapshot operation is missing in case of snapshot owner 
> deleting the same
> --
>
> Key: HBASE-16311
> URL: https://issues.apache.org/jira/browse/HBASE-16311
> Project: HBase
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 2.0.0
>Reporter: Abhishek Kumar
>Assignee: Yi Liang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16311-V1.patch, HBASE-16311-V2.patch, 
> HBASE-16311-V3.patch
>
>
> 1. Audit log seems to be left as a TODO task in AccessController.java:
> {code}
>   @Override
>   public void preDeleteSnapshot(final 
> ObserverContext ctx,
>   final SnapshotDescription snapshot) throws IOException {
> if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, getActiveUser())) {
>   // Snapshot owner is allowed to delete the snapshot
>   // TODO: We are not logging this for audit
> } else {
>   requirePermission("deleteSnapshot", Action.ADMIN);
> }
>   }
> {code}
> 2. Also, snapshot name is not getting logged in the audit logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

