[jira] [Commented] (HBASE-20517) Fix PerformanceEvaluation 'column' parameter

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464650#comment-16464650
 ] 

Hudson commented on HBASE-20517:


Results for branch branch-2.0
[build #260 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/260/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/260//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/260//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/260//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix PerformanceEvaluation 'column' parameter
> 
>
> Key: HBASE-20517
> URL: https://issues.apache.org/jira/browse/HBASE-20517
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.2.7, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20517-branch-1.patch, HBASE-20517.patch
>
>
> PerformanceEvaluation's 'column' parameter looks broken to me.
> To test:
> 1. Write some data with 20 columns.
> 2. Do a scan test selecting one column.
> 3. Do a scan test selecting ten columns.
> You'd expect the amount of data returned to vary but no, because the read 
> side isn't selecting the same qualifiers that are written. Bytes returned in 
> case 3 should be 10x those in case 2.
> I'm in branch-1 code at the moment. Probably affects trunk too.
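A minimal sketch of the symmetry the fix needs: a scan only returns the qualifiers it explicitly requests, so the read side has to ask for the same qualifier names the write path stored. Family and qualifier names below are illustrative, not PE's actual ones.

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnSelectionSketch {
  // Case 3 of the repro: select ten of the qualifiers the write phase stored.
  static Scan tenColumnScan() {
    byte[] family = Bytes.toBytes("info");   // illustrative family name
    Scan scan = new Scan();
    for (int i = 0; i < 10; i++) {
      scan.addColumn(family, Bytes.toBytes(Integer.toString(i)));
    }
    return scan;
  }
}
{code}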



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17391) [Shell] Add shell command to get list of servers, with filters

2018-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464647#comment-16464647
 ] 

Hadoop QA commented on HBASE-17391:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 1 new + 412 unchanged - 0 fixed = 
413 total (was 412) {color} |
| {color:orange}-0{color} | {color:orange} ruby-lint {color} | {color:orange}  
0m  4s{color} | {color:orange} The patch generated 6 new + 725 unchanged - 0 
fixed = 731 total (was 725) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
25s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-17391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922137/HBASE-17391.master.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux bd4b41c28e52 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 291dedbf81 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_162 |
| rubocop | v0.54.0 |
| rubocop | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12729/artifact/patchprocess/diff-patch-rubocop.txt
 |
| ruby-lint | v2.3.1 |
| ruby-lint | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12729/artifact/patchprocess/diff-patch-ruby-lint.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12729/testReport/ |
| Max. process+thread count | 2163 (vs. ulimit of 1) |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12729/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> [Shell] Add shell command to get list of servers, with filters
> --
>
> Key: HBASE-17391
> URL: https://issues.apache.org/jira/browse/HBASE-17391
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 1.3.0
>Reporter: Lars George
>Assignee: Sreeram Venkatasubramanian
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-17391.master.000.patch, 
> HBASE-17391.master.001.patch, HBASE-17391.master.002.patch
>
>
> For some operations, for example calling {{update_config}}, the user needs to 
> specify the full server name. For region servers that is easier to find, but 
> not so much for the master (using {{zk_dump}} works but is noisy). It would 
> be good to add a utility call th

[jira] [Commented] (HBASE-20517) Fix PerformanceEvaluation 'column' parameter

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464646#comment-16464646
 ] 

Hudson commented on HBASE-20517:


Results for branch branch-2
[build #696 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/696/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/696//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/696//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/696//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix PerformanceEvaluation 'column' parameter
> 
>
> Key: HBASE-20517
> URL: https://issues.apache.org/jira/browse/HBASE-20517
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.2.7, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20517-branch-1.patch, HBASE-20517.patch
>
>
> PerformanceEvaluation's 'column' parameter looks broken to me.
> To test:
> 1. Write some data with 20 columns.
> 2. Do a scan test selecting one column.
> 3. Do a scan test selecting ten columns.
> You'd expect the amount of data returned to vary but no, because the read 
> side isn't selecting the same qualifiers that are written. Bytes returned in 
> case 3 should be 10x those in case 2.
> I'm in branch-1 code at the moment. Probably affects trunk too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-17391) [Shell] Add shell command to get list of servers, with filters

2018-05-04 Thread Sreeram Venkatasubramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreeram Venkatasubramanian updated HBASE-17391:
---
Attachment: HBASE-17391.master.002.patch

> [Shell] Add shell command to get list of servers, with filters
> --
>
> Key: HBASE-17391
> URL: https://issues.apache.org/jira/browse/HBASE-17391
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 1.3.0
>Reporter: Lars George
>Assignee: Sreeram Venkatasubramanian
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-17391.master.000.patch, 
> HBASE-17391.master.001.patch, HBASE-17391.master.002.patch
>
>
> For some operations, for example calling {{update_config}}, the user needs to 
> specify the full server name. For region servers that is easier to find, but 
> not so much for the master (using {{zk_dump}} works but is noisy). It would 
> be good to add a utility call that lists the servers, preferably with an 
> optional filter (a regexp, server type, or globbing-style format) that allows 
> whittling down the potentially long list of servers. For example:
> {noformat}
> hbase(main):001:0> list_servers "master"
> master-1.internal.larsgeorge.com,16000,1483018890074
> hbase(main):002:0> list_servers "rs"
> slave-1.internal.larsgeorge.com,16020,1482996572051
> slave-3.internal.larsgeorge.com,16020,1482996572481
> slave-2.internal.larsgeorge.com,16020,1482996570909
> hbase(main):003:0> list_servers "rs:s.*\.com.*"
> slave-1.internal.larsgeorge.com,16020,1482996572051
> slave-3.internal.larsgeorge.com,16020,1482996572481
> slave-2.internal.larsgeorge.com,16020,1482996570909
> hbase(main):004:0> list_servers ":.*160?0.*"
> master-1.internal.larsgeorge.com,16000,1483018890074
> slave-1.internal.larsgeorge.com,16020,1482996572051
> slave-3.internal.larsgeorge.com,16020,1482996572481
> slave-2.internal.larsgeorge.com,16020,1482996570909
> {noformat}
> I could imagine having {{master}}, {{backup-master}}, {{rs}}, and maybe even 
> {{zk}} too. The optional regexp shown uses a colon as a divider. This 
> combines the "by-type" selection with a filter. Example #4 skips the type and 
> only uses the filter.
> Of course, you could also implement this differently, say with two 
> parameters... just suggesting.
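Until such a shell command lands, the same listing can be approximated with the Java Admin API; a rough sketch assuming the HBase 1.x client (the class name and the way the regexp is applied are illustrative only):

{code}
import java.io.IOException;
import java.util.regex.Pattern;
import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListServers {
  public static void main(String[] args) throws IOException {
    Pattern filter = Pattern.compile(args.length > 0 ? args[0] : ".*");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      ClusterStatus status = admin.getClusterStatus();
      // Master first, then the live region servers that match the filter.
      System.out.println("master: " + status.getMaster().getServerName());
      for (ServerName sn : status.getServers()) {
        if (filter.matcher(sn.getServerName()).matches()) {
          System.out.println("rs: " + sn.getServerName());
        }
      }
    }
  }
}
{code}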



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-17391) [Shell] Add shell command to get list of servers, with filters

2018-05-04 Thread Sreeram Venkatasubramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreeram Venkatasubramanian updated HBASE-17391:
---
Attachment: (was: HBASE-17391.master.002.txt)

> [Shell] Add shell command to get list of servers, with filters
> --
>
> Key: HBASE-17391
> URL: https://issues.apache.org/jira/browse/HBASE-17391
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 1.3.0
>Reporter: Lars George
>Assignee: Sreeram Venkatasubramanian
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-17391.master.000.patch, 
> HBASE-17391.master.001.patch, HBASE-17391.master.002.patch
>
>
> For some operations, for example calling {{update_config}}, the user needs to 
> specify the full server name. For region servers that is easier to find, but 
> not so much for the master (using {{zk_dump}} works but is noisy). It would 
> be good to add a utility call that lists the servers, preferably with an 
> optional filter (a regexp, server type, or globbing-style format) that allows 
> whittling down the potentially long list of servers. For example:
> {noformat}
> hbase(main):001:0> list_servers "master"
> master-1.internal.larsgeorge.com,16000,1483018890074
> hbase(main):002:0> list_servers "rs"
> slave-1.internal.larsgeorge.com,16020,1482996572051
> slave-3.internal.larsgeorge.com,16020,1482996572481
> slave-2.internal.larsgeorge.com,16020,1482996570909
> hbase(main):003:0> list_servers "rs:s.*\.com.*"
> slave-1.internal.larsgeorge.com,16020,1482996572051
> slave-3.internal.larsgeorge.com,16020,1482996572481
> slave-2.internal.larsgeorge.com,16020,1482996570909
> hbase(main):004:0> list_servers ":.*160?0.*"
> master-1.internal.larsgeorge.com,16000,1483018890074
> slave-1.internal.larsgeorge.com,16020,1482996572051
> slave-3.internal.larsgeorge.com,16020,1482996572481
> slave-2.internal.larsgeorge.com,16020,1482996570909
> {noformat}
> I could imagine having {{master}}, {{backup-master}}, {{rs}}, and maybe even 
> {{zk}} too. The optional regexp shown uses a colon as a divider. This 
> combines the "by-type" selection with a filter. Example #4 skips the type and 
> only uses the filter.
> Of course, you could also implement this differently, say with two 
> parameters... just suggesting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-17391) [Shell] Add shell command to get list of servers, with filters

2018-05-04 Thread Sreeram Venkatasubramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreeram Venkatasubramanian updated HBASE-17391:
---
Attachment: HBASE-17391.master.002.txt

> [Shell] Add shell command to get list of servers, with filters
> --
>
> Key: HBASE-17391
> URL: https://issues.apache.org/jira/browse/HBASE-17391
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 1.3.0
>Reporter: Lars George
>Assignee: Sreeram Venkatasubramanian
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-17391.master.000.patch, 
> HBASE-17391.master.001.patch, HBASE-17391.master.002.txt
>
>
> For some operations, for example calling {{update_config}}, the user needs to 
> specify the full server name. For region servers that is easier to find, but 
> not so much for the master (using {{zk_dump}} works but is noisy). It would 
> be good to add a utility call that lists the servers, preferably with an 
> optional filter (a regexp, server type, or globbing-style format) that allows 
> whittling down the potentially long list of servers. For example:
> {noformat}
> hbase(main):001:0> list_servers "master"
> master-1.internal.larsgeorge.com,16000,1483018890074
> hbase(main):002:0> list_servers "rs"
> slave-1.internal.larsgeorge.com,16020,1482996572051
> slave-3.internal.larsgeorge.com,16020,1482996572481
> slave-2.internal.larsgeorge.com,16020,1482996570909
> hbase(main):003:0> list_servers "rs:s.*\.com.*"
> slave-1.internal.larsgeorge.com,16020,1482996572051
> slave-3.internal.larsgeorge.com,16020,1482996572481
> slave-2.internal.larsgeorge.com,16020,1482996570909
> hbase(main):004:0> list_servers ":.*160?0.*"
> master-1.internal.larsgeorge.com,16000,1483018890074
> slave-1.internal.larsgeorge.com,16020,1482996572051
> slave-3.internal.larsgeorge.com,16020,1482996572481
> slave-2.internal.larsgeorge.com,16020,1482996570909
> {noformat}
> I could imagine having {{master}}, {{backup-master}}, {{rs}}, and maybe even 
> {{zk}} too. The optional regexp shown uses a colon as a divider. This 
> combines the "by-type" selection with a filter. Example #4 skips the type and 
> only uses the filter.
> Of course, you could also implement this differently, say with two 
> parameters... just suggesting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20526) multithreads bulkload performance

2018-05-04 Thread Key Hutu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Key Hutu updated HBASE-20526:
-
Affects Version/s: (was: 1.2.0)
   1.2.5
Fix Version/s: 1.2.5

> multithreads bulkload performance
> -
>
> Key: HBASE-20526
> URL: https://issues.apache.org/jira/browse/HBASE-20526
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, Zookeeper
>Affects Versions: 1.2.5
> Environment: hbase-server-1.2.0-cdh5.12.1 
> spark version 1.6
>Reporter: Key Hutu
>Assignee: Key Hutu
>Priority: Minor
>  Labels: performance
> Fix For: 1.2.5
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> When doing a bulkload, the interaction with ZooKeeper to get the region key 
> range may cost extra time.
> In a multithreaded environment, this may take 5 minutes or more.
> In the executor log, entries like 'Reading reply sessionid:0x262fb37f4a07080 , 
> packet:: clientPath:null server ...' appear many times.
>  
> It would likely help to provide a new bulkload method that caches the key range 
> externally.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20526) multithreads bulkload performance

2018-05-04 Thread Key Hutu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464628#comment-16464628
 ] 

Key Hutu commented on HBASE-20526:
--

In the application, the doBulkload(hpath, admin, table, regionLocator) method is 
called.
To ensure real-time performance, many small files are loaded at a high frequency.
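A hedged sketch of the workaround implied above: build the Connection, Table, and RegionLocator once and reuse them for every small file, so region key ranges are not re-fetched per load. The table name and directories are placeholders, and the 1.x LoadIncrementalHFiles API is assumed.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class ReusedBulkLoad {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName name = TableName.valueOf("my_table");   // placeholder table name
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(name);
         RegionLocator locator = conn.getRegionLocator(name)) {
      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
      // One RegionLocator for all batches: region lookups are cached in the
      // shared Connection instead of being repeated for every small HFile directory.
      for (String dir : args) {
        loader.doBulkLoad(new Path(dir), admin, table, locator);
      }
    }
  }
}
{code}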

> multithreads bulkload performance
> -
>
> Key: HBASE-20526
> URL: https://issues.apache.org/jira/browse/HBASE-20526
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, Zookeeper
>Affects Versions: 1.2.0
> Environment: hbase-server-1.2.0-cdh5.12.1 
> spark version 1.6
>Reporter: Key Hutu
>Assignee: Key Hutu
>Priority: Minor
>  Labels: performance
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> When doing a bulkload, the interaction with ZooKeeper to get the region key 
> range may cost extra time.
> In a multithreaded environment, this may take 5 minutes or more.
> In the executor log, entries like 'Reading reply sessionid:0x262fb37f4a07080 , 
> packet:: clientPath:null server ...' appear many times.
>  
> It would likely help to provide a new bulkload method that caches the key range 
> externally.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20526) multithreads bulkload performance

2018-05-04 Thread Key Hutu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464627#comment-16464627
 ] 

Key Hutu commented on HBASE-20526:
--

Thank you for your attention, Ted Yu.
The executor log looks like this:

{panel:title=executor stderr}
2018-05-05 12:19:41,948- WARN -330831[Executor task launch worker for task 
187159]-(HBaseConfiguration.java:195)-Config option 
"hbase.regionserver.lease.period" is deprecated. Instead, use 
"hbase.client.scanner.timeout.period"
2018-05-05 12:19:41,948-DEBUG -330831[Executor task launch worker for task 
187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply 
sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null 
finished:false header:: 199,8  replyHeader:: 199,197642441638,0  request:: 
'/hbase,F  response:: 
v{'replication,'schema,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'master,'running,'balancer,'recovering-regions,'draining,'namespace,'hbaseid,'table}
 
2018-05-05 12:19:41,949-DEBUG -330832[Executor task launch worker for task 
187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply 
sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null 
finished:false header:: 200,4  replyHeader:: 200,197642441638,0  request:: 
'/hbase/meta-region-server,F  response:: 
#0001a726567696f6e7365727665723a3630303230ffb6ffac57ffadff80ff80ffa8b50425546a17aa686f73742d382d31323810fff4ffd4318ffd2ff8affe7ffd9ffaf2c100183,s{197568498964,197568498964,1524633515423,1524633515423,0,0,0,0,64,0,197568498964}
 
2018-05-05 12:19:41,950-DEBUG -330833[Executor task launch worker for task 
187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply 
sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null 
finished:false header:: 201,8  replyHeader:: 201,197642441638,0  request:: 
'/hbase,F  response:: 
v{'replication,'schema,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'master,'running,'balancer,'recovering-regions,'draining,'namespace,'hbaseid,'table}
 
2018-05-05 12:19:41,950-DEBUG -330833[Executor task launch worker for task 
187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply 
sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null 
finished:false header:: 202,4  replyHeader:: 202,197642441638,0  request:: 
'/hbase/meta-region-server,F  response:: 
#0001a726567696f6e7365727665723a3630303230ffb6ffac57ffadff80ff80ffa8b50425546a17aa686f73742d382d31323810fff4ffd4318ffd2ff8affe7ffd9ffaf2c100183,s{197568498964,197568498964,1524633515423,1524633515423,0,0,0,0,64,0,197568498964}
 
2018-05-05 12:19:41,950-DEBUG -330833[Executor task launch worker for task 
187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply 
sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null 
finished:false header:: 203,8  replyHeader:: 203,197642441638,0  request:: 
'/hbase,F  response:: 
v{'replication,'schema,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'master,'running,'balancer,'recovering-regions,'draining,'namespace,'hbaseid,'table}
 
2018-05-05 12:19:41,951-DEBUG -330834[Executor task launch worker for task 
187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply 
sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null 
finished:false header:: 204,4  replyHeader:: 204,197642441638,0  request:: 
'/hbase/meta-region-server,F  response:: 
#0001a726567696f6e7365727665723a3630303230ffb6ffac57ffadff80ff80ffa8b50425546a17aa686f73742d382d31323810fff4ffd4318ffd2ff8affe7ffd9ffaf2c100183,s{197568498964,197568498964,1524633515423,1524633515423,0,0,0,0,64,0,197568498964}
 
2018-05-05 12:19:42,002-DEBUG -330885[Executor task launch worker for task 
201898]-(TaskMemoryManager.java:221)-Task 201898 acquired 256.0 KB for 
org.apache.spark.shuffle.sort.ShuffleExternalSorter@18f196e
2018-05-05 12:19:42,003-DEBUG -330886[Executor task launch worker for task 
201898]-(TaskMemoryManager.java:230)-Task 201898 release 128.0 KB from 
org.apache.spark.shuffle.sort.ShuffleExternalSorter@18f196e
2018-05-05 12:19:42,053-DEBUG -330936[Executor task launch worker for task 
187159-SendThread(host-8-2:2181)]-(ClientCnxn.java:818)-Reading reply 
sessionid:0x162fb3760b1ea01, packet:: clientPath:null serverPath:null 
finished:false header:: 205,8  replyHeader:: 205,197642441638,0  request:: 
'/hbase,F  response:: 
v{'replication,'schema,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'region-in-transition,'online-snapshot,'master,'running,'balancer,'recovering-

[jira] [Commented] (HBASE-20527) Remove unused code in MetaTableAccessor

2018-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464589#comment-16464589
 ] 

Hadoop QA commented on HBASE-20527:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
50s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
47s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
0s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20527 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922128/HBASE-20527.v0.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 6269ca952d24 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 291dedbf81 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12728/testReport/ |
| Max. process+thread count | 258 (vs. ulimit of 1) |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12728/console |
| Po

[jira] [Commented] (HBASE-20523) PE tool should support configuring client side buffering sizes

2018-05-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464579#comment-16464579
 ] 

stack commented on HBASE-20523:
---

+1 Can you take this back to 1.2 please [~ram_krish] so I can use it in my 
compares? Thanks.

> PE tool should support configuring client side buffering sizes
> --
>
> Key: HBASE-20523
> URL: https://issues.apache.org/jira/browse/HBASE-20523
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HBASE-20523.patch, HBASE-20523_1.patch
>
>
> The client-side buffering size impacts the write load and the write 
> performance. Hence, for testing purposes, it is better to allow client-side 
> buffering to be configurable in PE. YCSB already has such a facility.
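For context, the knob in question is the client write buffer (hbase.client.write.buffer); outside PE it can be set per mutator, roughly like this (the table name and the 8 MB size are arbitrary examples):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.BufferedMutatorParams;
import org.apache.hadoop.hbase.client.Connection;

public class WriteBufferExample {
  // Returns a mutator whose client-side write buffer is 8 MB instead of the default.
  static BufferedMutator bufferedMutator(Connection conn) throws IOException {
    BufferedMutatorParams params =
        new BufferedMutatorParams(TableName.valueOf("TestTable")).writeBufferSize(8L * 1024 * 1024);
    return conn.getBufferedMutator(params);
  }
}
{code}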



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20527) Remove unused code in MetaTableAccessor

2018-05-04 Thread Mingdao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingdao Yang updated HBASE-20527:
-
Attachment: HBASE-20527.v0.patch
Status: Patch Available  (was: Open)

> Remove unused code in MetaTableAccessor
> ---
>
> Key: HBASE-20527
> URL: https://issues.apache.org/jira/browse/HBASE-20527
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Mingdao Yang
>Assignee: Mingdao Yang
>Priority: Trivial
> Attachments: HBASE-20527.v0.patch
>
>
> META_REGION_PREFIX isn't used. I'll clean it up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20527) Remove unused code in MetaTableAccessor

2018-05-04 Thread Mingdao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingdao Yang updated HBASE-20527:
-
Attachment: (was: HBASE-20527.v0.patch)

> Remove unused code in MetaTableAccessor
> ---
>
> Key: HBASE-20527
> URL: https://issues.apache.org/jira/browse/HBASE-20527
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Mingdao Yang
>Assignee: Mingdao Yang
>Priority: Trivial
>
> META_REGION_PREFIX isn't used. I'll clean it up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20527) Remove unused code in MetaTableAccessor

2018-05-04 Thread Mingdao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingdao Yang updated HBASE-20527:
-
Attachment: HBASE-20527.v0.patch

> Remove unused code in MetaTableAccessor
> ---
>
> Key: HBASE-20527
> URL: https://issues.apache.org/jira/browse/HBASE-20527
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Mingdao Yang
>Assignee: Mingdao Yang
>Priority: Trivial
>
> META_REGION_PREFIX isn't used. I'll clean it up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20505) PE should support multi column family read and write cases

2018-05-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20505:
---
Attachment: (was: HBASE-20505-branch-1.patch)

> PE should support multi column family read and write cases
> --
>
> Key: HBASE-20505
> URL: https://issues.apache.org/jira/browse/HBASE-20505
> Project: HBase
>  Issue Type: Test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0
>
> Attachments: HBASE-20505-branch-1.patch, HBASE-20505.patch
>
>
> PerformanceEvaluation has a --columns parameter but this adjusts the number 
> of distinct column qualifiers to write (and, with --addColumns, to add to the 
> scan), not the number of column families. 
> We need something like a new --families parameter that will increase the 
> number of column families defined in the test table schema, written to, and 
> included in gets and scans. Default is 1, current behavior.
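For illustration only (the --families flag is still a proposal), creating a test table with N column families through the 2.x admin API could look roughly like this; the table and family names are made up:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class MultiFamilySchemaSketch {
  // Create "TestTable" with column families family_0 .. family_{n-1}.
  static void createTestTable(Admin admin, int families) throws IOException {
    TableDescriptorBuilder builder =
        TableDescriptorBuilder.newBuilder(TableName.valueOf("TestTable"));
    for (int i = 0; i < families; i++) {
      builder.setColumnFamily(ColumnFamilyDescriptorBuilder.of("family_" + i));
    }
    admin.createTable(builder.build());
  }
}
{code}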



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20513) Collect and emit ScanMetrics in PerformanceEvaluation

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464565#comment-16464565
 ] 

Hudson commented on HBASE-20513:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #402 (See 
[https://builds.apache.org/job/HBase-1.3-IT/402/])
HBASE-20513 Collect and emit ScanMetrics in PerformanceEvaluation (apurtell: 
rev 7a4a7d2e4ab581083229e62228eea3b5916b32df)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java


> Collect and emit ScanMetrics in PerformanceEvaluation
> -
>
> Key: HBASE-20513
> URL: https://issues.apache.org/jira/browse/HBASE-20513
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20513-branch-1.patch, HBASE-20513.patch
>
>
> To better understand changes in scanning behavior between versions, enable 
> ScanMetrics collection in PerformanceEvaluation and collect and roll up the 
> results into a report at termination.
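For reference, scan metrics are opt-in on the client Scan; a minimal sketch of collecting them with a 2.x client (the roll-up into PE's report is not shown, and the class name is illustrative):

{code}
import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.metrics.ScanMetrics;

public class ScanMetricsSketch {
  static void scanWithMetrics(Table table) throws IOException {
    Scan scan = new Scan();
    scan.setScanMetricsEnabled(true);   // record per-scan counters on the client
    try (ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        // consume results as the workload normally would
      }
      ScanMetrics metrics = scanner.getScanMetrics();
      Map<String, Long> counters = metrics.getMetricsMap();
      counters.forEach((k, v) -> System.out.println(k + " = " + v));
    }
  }
}
{code}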



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20505) PE should support multi column family read and write cases

2018-05-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20505:
---
Attachment: (was: HBASE-20505.patch)

> PE should support multi column family read and write cases
> --
>
> Key: HBASE-20505
> URL: https://issues.apache.org/jira/browse/HBASE-20505
> Project: HBase
>  Issue Type: Test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0
>
> Attachments: HBASE-20505-branch-1.patch, HBASE-20505.patch
>
>
> PerformanceEvaluation has a --columns parameter but this adjusts the number 
> of distinct column qualifiers to write (and, with --addColumns, to add to the 
> scan), not the number of column families. 
> We need something like a new --families parameter that will increase the 
> number of column families defined in the test table schema, written to, and 
> included in gets and scans. Default is 1, current behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20505) PE should support multi column family read and write cases

2018-05-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20505:
---
Attachment: HBASE-20505.patch
HBASE-20505-branch-1.patch

> PE should support multi column family read and write cases
> --
>
> Key: HBASE-20505
> URL: https://issues.apache.org/jira/browse/HBASE-20505
> Project: HBase
>  Issue Type: Test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0
>
> Attachments: HBASE-20505-branch-1.patch, HBASE-20505.patch
>
>
> PerformanceEvaluation has a --columns parameter but this adjusts the number 
> of distinct column qualifiers to write (and, with --addColumns, to add to the 
> scan), not the number of column families. 
> We need something like a new --families parameter that will increase the 
> number of column families defined in the test table schema, written to, and 
> included in gets and scans. Default is 1, current behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20513) Collect and emit ScanMetrics in PerformanceEvaluation

2018-05-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-20513.

Resolution: Fixed

Pushed to 1.3 and up

> Collect and emit ScanMetrics in PerformanceEvaluation
> -
>
> Key: HBASE-20513
> URL: https://issues.apache.org/jira/browse/HBASE-20513
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20513-branch-1.patch, HBASE-20513.patch
>
>
> To better understand changes in scanning behavior between versions, enable 
> ScanMetrics collection in PerformanceEvaluation and collect and roll up the 
> results into a report at termination.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464555#comment-16464555
 ] 

stack commented on HBASE-19722:
---

This looks great. Why as an example? How is this different from 'normal' 
per-region metrics? It does not seem to show where requests are coming from. 
Would that be possible, so we could finger a bad actor? Accesses against meta 
have a general pattern. Could we count the types of access? Thanks.

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.patch, HBASE-19722.patch.1, 
> HBASE-19722.patch.2, HBASE-19722.patch.3, HBASE-19722.patch.4, 
> HBASE-19722.patch.5
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.
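A minimal sketch of the coprocessor idea, assuming the HBase 2.x coprocessor API: only per-table Get counting on the meta region is shown, and the wiring that would expose these counters as a metrics source is omitted. Class and field names are illustrative, not the patch's.

{code}
import java.io.IOException;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaQueryStatsSketch implements RegionCoprocessor, RegionObserver {
  // Per-table Get counters; exporting them as a metrics source is omitted here.
  private final ConcurrentHashMap<String, LongAdder> getsByTable = new ConcurrentHashMap<>();

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public void preGetOp(ObserverContext<RegionCoprocessorEnvironment> ctx, Get get,
      List<Cell> results) throws IOException {
    // hbase:meta row keys start with the table name; bucket requests by that prefix.
    String row = Bytes.toString(get.getRow());
    String table = row.split(",", 2)[0];
    getsByTable.computeIfAbsent(table, t -> new LongAdder()).increment();
  }
}
{code}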



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20517) Fix PerformanceEvaluation 'column' parameter

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464552#comment-16464552
 ] 

Hudson commented on HBASE-20517:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #401 (See 
[https://builds.apache.org/job/HBase-1.3-IT/401/])
HBASE-20517 Fix PerformanceEvaluation 'column' parameter (apurtell: rev 
b6bb5211026d3419e6ee24da2ce44d9c92287f84)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java


> Fix PerformanceEvaluation 'column' parameter
> 
>
> Key: HBASE-20517
> URL: https://issues.apache.org/jira/browse/HBASE-20517
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.2.7, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20517-branch-1.patch, HBASE-20517.patch
>
>
> PerformanceEvaluation's 'column' parameter looks broken to me.
> To test:
> 1. Write some data with 20 columns.
> 2. Do a scan test selecting one column.
> 3. Do a scan test selecting ten columns.
> You'd expect the amount of data returned to vary but no, because the read 
> side isn't selecting the same qualifiers that are written. Bytes returned in 
> case 3 should be 10x those in case 2.
> I'm in branch-1 code at the moment. Probably affects trunk too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20517) Fix PerformanceEvaluation 'column' parameter

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464551#comment-16464551
 ] 

Hudson commented on HBASE-20517:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1108 (See 
[https://builds.apache.org/job/HBase-1.2-IT/1108/])
HBASE-20517 Fix PerformanceEvaluation 'column' parameter (apurtell: rev 
a275e863124e979745bea493c578c1f9deb400be)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java


> Fix PerformanceEvaluation 'column' parameter
> 
>
> Key: HBASE-20517
> URL: https://issues.apache.org/jira/browse/HBASE-20517
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.2.7, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20517-branch-1.patch, HBASE-20517.patch
>
>
> PerformanceEvaluation's 'column' parameter looks broken to me.
> To test:
> 1. Write some data with 20 columns.
> 2. Do a scan test selecting one column.
> 3. Do a scan test selecting ten columns.
> You'd expect the amount of data returned to vary but no, because the read 
> side isn't selecting the same qualifiers that are written. Bytes returned in 
> case 3 should be 10x those in case 2.
> I'm in branch-1 code at the moment. Probably affects trunk too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20513) Collect and emit ScanMetrics in PerformanceEvaluation

2018-05-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20513:
---
Fix Version/s: 1.4.5
   2.0.1
   1.3.3

> Collect and emit ScanMetrics in PerformanceEvaluation
> -
>
> Key: HBASE-20513
> URL: https://issues.apache.org/jira/browse/HBASE-20513
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20513-branch-1.patch, HBASE-20513.patch
>
>
> To better understand changes in scanning behavior between versions, enable 
> ScanMetrics collection in PerformanceEvaluation and collect and roll up the 
> results into a report at termination.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20530) Composition of backup directory containing namespace when restoring is different from the actual hfile location

2018-05-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464531#comment-16464531
 ] 

Ted Yu edited comment on HBASE-20530 at 5/5/18 12:54 AM:
-

MultiTableHFileOutputFormat.configureIncrementalLoad has this code:
{code}
  
allTableNames.add(tableInfo.getRegionLocator().getName().getNameAsString());
{code}
From TableName:
{code}
  // The name does not include the namespace when it's the default one.
  this.nameAsString = qualifierAsString;
{code}
I think this is why the namespace was missing in the path.

HBackupFileSystem.getTableBackupDir can use similar logic to accommodate the 
above for incremental backup.


was (Author: yuzhih...@gmail.com):
MultiTableHFileOutputFormat.configureIncrementalLoad has this code:
{code}
  
allTableNames.add(tableInfo.getRegionLocator().getName().getNameAsString());
{code}
From TableName:
{code}
  // The name does not include the namespace when it's the default one.
  this.nameAsString = qualifierAsString;
{code}
I think this is why the namespace was missing in the path.

HBackupFileSystem.getTableBackupDir can use similar logic to accommodate the 
above.

> Composition of backup directory containing namespace when restoring is 
> different from the actual hfile location
> ---
>
> Key: HBASE-20530
> URL: https://issues.apache.org/jira/browse/HBASE-20530
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>Priority: Critical
>
> Here is a partial listing of the output from an incremental backup:
> {code}
> 5306 2018-05-04 02:38 
> hdfs://mycluster/user/hbase/backup_loc/backup_1525401467793/table_almphxih4u/cf1/5648501da7194783947bbf07b172f07e
> {code}
> When restoring, here is what HBackupFileSystem.getTableBackupDir returns:
> {code}
> fileBackupDir=hdfs://mycluster/user/hbase/backup_loc/backup_1525401467793/default/table_almphxih4u
> {code}
> You can see that the namespace gets in the way, leading to the inability to 
> find the proper hfile.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20517) Fix PerformanceEvaluation 'column' parameter

2018-05-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-20517.

Resolution: Fixed

Pushed to 1.2 and up

> Fix PerformanceEvaluation 'column' parameter
> 
>
> Key: HBASE-20517
> URL: https://issues.apache.org/jira/browse/HBASE-20517
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.2.7, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20517-branch-1.patch, HBASE-20517.patch
>
>
> PerformanceEvaluation's 'column' parameter looks broken to me.
> To test:
> 1. Write some data with 20 columns.
> 2. Do a scan test selecting one column.
> 3. Do a scan test selecting ten columns.
> You'd expect the amount of data returned to vary but no, because the read 
> side isn't selecting the same qualifiers that are written. Bytes returned in 
> case 3 should be 10x those in case 2.
> I'm in branch-1 code at the moment. Probably affects trunk too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20530) Composition of backup directory containing namespace when restoring is different from the actual hfile location

2018-05-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464531#comment-16464531
 ] 

Ted Yu commented on HBASE-20530:


MultiTableHFileOutputFormat.configureIncrementalLoad has this code:
{code}
  
allTableNames.add(tableInfo.getRegionLocator().getName().getNameAsString());
{code}
From TableName:
{code}
  // The name does not include the namespace when it's the default one.
  this.nameAsString = qualifierAsString;
{code}
I think this is why the namespace was missing in the path.

HBackupFileSystem.getTableBackupDir can use similar logic to accommodate the 
above.
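A small illustration of the asymmetry (the table name is taken from the listing above): for a default-namespace table, getNameAsString() drops the namespace, while the backup path construction adds it back.

{code}
import org.apache.hadoop.hbase.TableName;

public class TableNameNamespaceDemo {
  public static void main(String[] args) {
    TableName tn = TableName.valueOf("table_almphxih4u");   // default namespace
    // Prints "table_almphxih4u" -- no "default" prefix.
    System.out.println(tn.getNameAsString());
    // Prints "default:table_almphxih4u" -- namespace included.
    System.out.println(tn.getNameWithNamespaceInclAsString());
  }
}
{code}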

> Composition of backup directory containing namespace when restoring is 
> different from the actual hfile location
> ---
>
> Key: HBASE-20530
> URL: https://issues.apache.org/jira/browse/HBASE-20530
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>Priority: Critical
>
> Here is a partial listing of the output from an incremental backup:
> {code}
> 5306 2018-05-04 02:38 
> hdfs://mycluster/user/hbase/backup_loc/backup_1525401467793/table_almphxih4u/cf1/5648501da7194783947bbf07b172f07e
> {code}
> When restoring, here is what HBackupFileSystem.getTableBackupDir returns:
> {code}
> fileBackupDir=hdfs://mycluster/user/hbase/backup_loc/backup_1525401467793/default/table_almphxih4u
> {code}
> You can see that the namespace gets in the way, leading to the inability to 
> find the proper hfile.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20513) Collect and emit ScanMetrics in PerformanceEvaluation

2018-05-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464315#comment-16464315
 ] 

Andrew Purtell commented on HBASE-20513:


Going to commit this minor test-only improvement later today unless objection.

> Collect and emit ScanMetrics in PerformanceEvaluation
> -
>
> Key: HBASE-20513
> URL: https://issues.apache.org/jira/browse/HBASE-20513
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0
>
> Attachments: HBASE-20513-branch-1.patch, HBASE-20513.patch
>
>
> To better understand changes in scanning behavior between versions, enable 
> ScanMetrics collection in PerformanceEvaluation and collect and roll up the 
> results into a report at termination.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464287#comment-16464287
 ] 

Hadoop QA commented on HBASE-19722:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
50s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
51s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 50s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hbase-examples in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-19722 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922087/HBASE-19722.patch.5 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux d1efc62c7fea 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 87f5b5f341 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12727/testReport/ |
| Max. process+thread count | 2682 (vs. ulimi

[jira] [Assigned] (HBASE-20532) Use try-with-resources in BackupSystemTable

2018-05-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-20532:
--

Assignee: Andy Lin

> Use try-with-resources in BackupSystemTable
> 
>
> Key: HBASE-20532
> URL: https://issues.apache.org/jira/browse/HBASE-20532
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andy Lin
>Assignee: Andy Lin
>Priority: Trivial
>
> Use try-with-resources in BackupSystemTable for describeBackupSet and 
> listBackupSets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20532) Use try-with-resources in BackupSystemTable

2018-05-04 Thread Andy Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464252#comment-16464252
 ] 

Andy Lin commented on HBASE-20532:
--

Hi, I'd like to work on this issue. Thanks.

> Use try-with-resources in BackupSystemTable
> 
>
> Key: HBASE-20532
> URL: https://issues.apache.org/jira/browse/HBASE-20532
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andy Lin
>Priority: Trivial
>
> Use try-with-resources in BackupSystemTable for describeBackupSet and 
> listBackupSets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20532) Use try-with-resources in BackupSystemTable

2018-05-04 Thread Andy Lin (JIRA)
Andy Lin created HBASE-20532:


 Summary: Use try-with-resources in BackupSystemTable
 Key: HBASE-20532
 URL: https://issues.apache.org/jira/browse/HBASE-20532
 Project: HBase
  Issue Type: Improvement
Reporter: Andy Lin


Use try-with-resources in BackupSystemTable for describeBackupSet and 
listBackupSets.
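For illustration, the pattern being requested, shown on a made-up helper rather than the actual BackupSystemTable internals:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class TryWithResourcesExample {
  // The Table is closed automatically, even if the Get throws.
  static Result readRow(Connection conn, TableName name, Get get) throws IOException {
    try (Table table = conn.getTable(name)) {
      return table.get(get);
    }
  }
}
{code}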



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20517) Fix PerformanceEvaluation 'column' parameter

2018-05-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464220#comment-16464220
 ] 

Andrew Purtell commented on HBASE-20517:


Any concerns about this change? Otherwise I'm going to proceed soon with 
committing this test-only change, so I can unblock HBASE-20513 and HBASE-20505, 
which depend on this.

> Fix PerformanceEvaluation 'column' parameter
> 
>
> Key: HBASE-20517
> URL: https://issues.apache.org/jira/browse/HBASE-20517
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.2.7, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20517-branch-1.patch, HBASE-20517.patch
>
>
> PerformanceEvaluation's 'column' parameter looks broken to me.
> To test:
> 1. Write some data with 20 columns.
> 2. Do a scan test selecting one column.
> 3. Do a scan test selecting ten columns.
> You'd expect the amount of data returned to vary but no, because the read 
> side isn't selecting the same qualifiers that are written. Bytes returned in 
> case 3 should be 10x those in case 2.
> I'm in branch-1 code at the moment. Probably affects trunk too.
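
To make the quoted test recipe concrete, here is a minimal sketch of a read side that selects exactly the columns the writer produced. This is not PE's actual code; the family name "info" and the numeric qualifier names are assumptions for illustration only.

{code:java}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class SelectWrittenColumnsSketch {
  private static final byte[] FAMILY = Bytes.toBytes("info"); // assumed family name

  // Build a scan that requests the first `columns` qualifiers, using the same
  // naming scheme assumed for the writer ("0", "1", "2", ...). If the read side
  // asks for different qualifiers, selecting 1 vs. 10 columns returns the same bytes.
  static Scan scanSelecting(int columns) {
    Scan scan = new Scan();
    for (int i = 0; i < columns; i++) {
      scan.addColumn(FAMILY, Bytes.toBytes(Integer.toString(i)));
    }
    return scan;
  }

  public static void main(String[] args) {
    System.out.println(scanSelecting(1));
    System.out.println(scanSelecting(10)); // should return roughly 10x the bytes of the scan above
  }
}
{code}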



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-04 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-19722:

Attachment: HBASE-19722.patch.5

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.patch, HBASE-19722.patch.1, 
> HBASE-19722.patch.2, HBASE-19722.patch.3, HBASE-19722.patch.4, 
> HBASE-19722.patch.5
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request count, top meta rowkeys by request count, and 
> top clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20528) Revise collections copying from iteration to built-in function

2018-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464169#comment-16464169
 ] 

Hadoop QA commented on HBASE-20528:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
48s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
45s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 49s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
8s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}115m 
50s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
27s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}193m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20528 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921961/0001-Revise-collections-copying-to-built-in-function.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseant

[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464159#comment-16464159
 ] 

Hadoop QA commented on HBASE-19722:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
16s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
12s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m  1s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hbase-examples in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-19722 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922080/HBASE-19722.patch.4 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 1400b228f908 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 87f5b5f341 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC3 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HB

[jira] [Updated] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-04 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-19722:

Attachment: HBASE-19722.patch.4

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.patch, HBASE-19722.patch.1, 
> HBASE-19722.patch.2, HBASE-19722.patch.3, HBASE-19722.patch.4
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request count, top meta rowkeys by request count, and 
> top clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20378) Provide a hbck option to cleanup replication barrier for a table

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464075#comment-16464075
 ] 

Hudson commented on HBASE-20378:


Results for branch branch-2
[build #694 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/694/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/694//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/694//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/694//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Provide a hbck option to cleanup replication barrier for a table
> 
>
> Key: HBASE-20378
> URL: https://issues.apache.org/jira/browse/HBASE-20378
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20378.master.001.patch, 
> HBASE-20378.master.002.patch, HBASE-20378.master.003.patch, 
> HBASE-20378.master.004.patch, HBASE-20378.master.005.patch, 
> HBASE-20378.master.006.patch, HBASE-20378.master.007.patch, 
> HBASE-20378.master.008.patch
>
>
> It is not easy to deal with the scenario where a user changes the replication 
> scope from global to local, since they may change the scope back while we are 
> cleaning in the background. And I think this is a rare operation, so just provide 
> an hbck option to deal with it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20481) Replicate entries from same region serially in ReplicationEndpoint for serial replication

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464076#comment-16464076
 ] 

Hudson commented on HBASE-20481:


Results for branch branch-2
[build #694 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/694/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/694//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/694//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/694//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Replicate entries from same region serially in ReplicationEndpoint for serial 
> replication
> -
>
> Key: HBASE-20481
> URL: https://issues.apache.org/jira/browse/HBASE-20481
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20481.v1.patch, HBASE-20481.v2.patch, 
> HBASE-20481.v3.patch, HBASE-20481.v3.patch, HBASE-20481.v4.patch
>
>
> When debugging HBASE-20475, [~openinx] found that 
> HBaseInterClusterReplicationEndpoint may send the entries for the same 
> regions concurrently, which breaks serial replication.
> Since there can be multiple ReplicationEndpoint implementations, just fixing 
> HBaseInterClusterReplicationEndpoint is not enough; we need to add a 
> setSerial method to ReplicationEndpoint to tell the implementation that it 
> should keep the order of the entries from the same region.
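
A hypothetical sketch of that proposal (this is not the committed HBase API or the HBASE-20481 patch; the setSerial hook and the batching logic below are illustrative only): when serial mode is requested, group entries by region so each region's entries ship as one ordered batch instead of being interleaved across shipping threads.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SerialShippingSketch {

  static class Entry {
    final String encodedRegionName;
    final long sequenceId;
    Entry(String encodedRegionName, long sequenceId) {
      this.encodedRegionName = encodedRegionName;
      this.sequenceId = sequenceId;
    }
    @Override
    public String toString() {
      return encodedRegionName + "#" + sequenceId;
    }
  }

  private boolean serial;

  /** Mirrors the proposed setSerial(...) hook: keep per-region order when true. */
  void setSerial(boolean serial) {
    this.serial = serial;
  }

  /** In serial mode, return one ordered batch per region; otherwise one big batch. */
  List<List<Entry>> batches(List<Entry> entries) {
    if (!serial) {
      return Collections.singletonList(entries);
    }
    Map<String, List<Entry>> byRegion = new LinkedHashMap<>();
    for (Entry e : entries) {
      byRegion.computeIfAbsent(e.encodedRegionName, k -> new ArrayList<>()).add(e);
    }
    return new ArrayList<>(byRegion.values());
  }

  public static void main(String[] args) {
    SerialShippingSketch sketch = new SerialShippingSketch();
    sketch.setSerial(true);
    System.out.println(sketch.batches(Arrays.asList(
        new Entry("region-1", 1), new Entry("region-2", 1), new Entry("region-1", 2))));
  }
}
{code}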



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20523) PE tool should support configuring client side buffering sizes

2018-05-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464034#comment-16464034
 ] 

Ted Yu commented on HBASE-20523:


lgtm

> PE tool should support configuring client side buffering sizes
> --
>
> Key: HBASE-20523
> URL: https://issues.apache.org/jira/browse/HBASE-20523
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HBASE-20523.patch, HBASE-20523_1.patch
>
>
> The client-side buffering size impacts the write load and the write 
> performance. Hence, for testing purposes it is better to allow client-side 
> buffering to be configurable in PE. YCSB already has such a facility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20531) RS may throw NPE when close meta regions in shutdown procedure.

2018-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464008#comment-16464008
 ] 

Hadoop QA commented on HBASE-20531:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
18s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
17s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m  7s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}166m 
22s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}207m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20531 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921936/HBASE-20531.v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 0d971b3e0d9e 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 
12:16:42 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 87f5b5f341 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12723/testReport/ |
| Max. process+thread count | 4716 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12723/console |
| P

[jira] [Commented] (HBASE-20528) Revise collections copying from iteration to built-in function

2018-05-04 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463989#comment-16463989
 ] 

Chia-Ping Tsai commented on HBASE-20528:


There are some rules for patches. Please take a look at the 
[doc|https://hbase.apache.org/book.html#submitting.patches].
{code:java}
curFunctionCosts[i] = tempFunctionCosts[i];{code}
I grepped for the above keyword and got two results in StochasticLoadBalancer. 
Could you fix both of them?
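
For reference, a minimal sketch of the kind of rewrite being requested (not the actual StochasticLoadBalancer patch; the arrays here are stand-ins): element-by-element copy loops become Arrays.copyOf/System.arraycopy, and manual add() loops become Collections.addAll.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CopyRewriteSketch {
  public static void main(String[] args) {
    double[] tempFunctionCosts = {1.0, 2.0, 3.0};

    // Before: for (int i = 0; i < n; i++) curFunctionCosts[i] = tempFunctionCosts[i];
    // After, when a fresh array of the same length is fine:
    double[] curFunctionCosts = Arrays.copyOf(tempFunctionCosts, tempFunctionCosts.length);

    // After, when copying into an already allocated array of the same length:
    System.arraycopy(tempFunctionCosts, 0, curFunctionCosts, 0, tempFunctionCosts.length);

    // For collections, Collections.addAll replaces a manual add() loop:
    List<String> names = new ArrayList<>();
    Collections.addAll(names, "a", "b", "c");

    System.out.println(Arrays.toString(curFunctionCosts) + " " + names);
  }
}
{code}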

> Revise collections copying from iteration to built-in function
> --
>
> Key: HBASE-20528
> URL: https://issues.apache.org/jira/browse/HBASE-20528
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Hua-Yi Ho
>Assignee: Hua-Yi Ho
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: 
> 0001-Revise-collections-copying-to-built-in-function.patch
>
>
> Some collection-handling code in 
> StochasticLoadBalancer.java, AbstractHBaseTool.java, HFileInputFormat.java, 
> Result.java, and WalPlayer.java uses manual iteration to copy whole 
> collections. The iterations can be replaced by the built-in Collections.addAll 
> and Arrays.copyOf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20530) Composition of backup directory containing namespace when restoring is different from the actual hfile location

2018-05-04 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov reassigned HBASE-20530:
-

Assignee: Vladimir Rodionov

> Composition of backup directory containing namespace when restoring is 
> different from the actual hfile location
> ---
>
> Key: HBASE-20530
> URL: https://issues.apache.org/jira/browse/HBASE-20530
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>Priority: Critical
>
> Here is a partial listing of output from an incremental backup:
> {code}
> 5306 2018-05-04 02:38 
> hdfs://mycluster/user/hbase/backup_loc/backup_1525401467793/table_almphxih4u/cf1/5648501da7194783947bbf07b172f07e
> {code}
> When restoring, here is what HBackupFileSystem.getTableBackupDir returns:
> {code}
> fileBackupDir=hdfs://mycluster/user/hbase/backup_loc/backup_1525401467793/default/table_almphxih4u
> {code}
> You can see that the namespace gets in the way, making it impossible to find 
> the proper hfile.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20530) Composition of backup directory containing namespace when restoring is different from the actual hfile location

2018-05-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20530:
---
Summary: Composition of backup directory containing namespace when 
restoring is different from the actual hfile location  (was: Composition of 
backup directory incorrectly contains namespace when restoring)

> Composition of backup directory containing namespace when restoring is 
> different from the actual hfile location
> ---
>
> Key: HBASE-20530
> URL: https://issues.apache.org/jira/browse/HBASE-20530
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Critical
>
> Here is a partial listing of output from an incremental backup:
> {code}
> 5306 2018-05-04 02:38 
> hdfs://mycluster/user/hbase/backup_loc/backup_1525401467793/table_almphxih4u/cf1/5648501da7194783947bbf07b172f07e
> {code}
> When restoring, here is what HBackupFileSystem.getTableBackupDir returns:
> {code}
> fileBackupDir=hdfs://mycluster/user/hbase/backup_loc/backup_1525401467793/default/table_almphxih4u
> {code}
> You can see that the namespace gets in the way, making it impossible to find 
> the proper hfile.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20528) Revise collections copying from iteration to built-in function

2018-05-04 Thread Hua-Yi Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hua-Yi Ho updated HBASE-20528:
--
Fix Version/s: 3.0.0
   Attachment: 0001-Revise-collections-copying-to-built-in-function.patch
   Status: Patch Available  (was: Open)

> Revise collections copying from iteration to built-in function
> --
>
> Key: HBASE-20528
> URL: https://issues.apache.org/jira/browse/HBASE-20528
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Hua-Yi Ho
>Assignee: Hua-Yi Ho
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: 
> 0001-Revise-collections-copying-to-built-in-function.patch
>
>
> Some collection-handling code in 
> StochasticLoadBalancer.java, AbstractHBaseTool.java, HFileInputFormat.java, 
> Result.java, and WalPlayer.java uses manual iteration to copy whole 
> collections. The iterations can be replaced by the built-in Collections.addAll 
> and Arrays.copyOf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19324) Backport HBASE-19311 to branch-1.x

2018-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463902#comment-16463902
 ] 

Hadoop QA commented on HBASE-19324:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
22s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
12s{color} | {color:red} hbase-common in branch-1 failed with JDK v1.7.0_181. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} hbase-server in branch-1 failed with JDK v1.7.0_181. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hbase-it in branch-1 failed with JDK v1.7.0_181. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
36s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 22s{color} 
| {color:red} hbase-common-jdk1.8.0_172 with JDK v1.8.0_172 generated 1 new + 
41 unchanged - 1 fixed = 42 total (was 42) {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
11s{color} | {color:red} hbase-common in the patch failed with JDK v1.7.0_181. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_181. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
16s{color} | {color:red} hbase-it in the patch failed with JDK v1.7.0_181. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 11s{color} 
| {color:red} hbase-common in the patch failed with JDK v1.7.0_181. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 17s{color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_181. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 16s{color} 
| {color:red} hbase-it in the patch failed with JDK v1.7.0_181. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hbase-common: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
16s{color} | {color:red} hbase-server: The patch generated 5 new + 9 unchanged 
- 15 fixed = 14 total (was 24) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
24s{color} | {color:red} hbase-it: The patch generated 1 new + 0 unchanged - 1 
fixed = 1 total (was 1) {color} |

[jira] [Commented] (HBASE-20523) PE tool should support configuring client side buffering sizes

2018-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463863#comment-16463863
 ] 

Hadoop QA commented on HBASE-20523:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
27s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} hbase-mapreduce: The patch generated 10 new + 32 
unchanged - 0 fixed = 42 total (was 32) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 8s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 40s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
28s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921945/HBASE-20523_1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 39f923722515 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 87f5b5f341 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12724/artifact/patchprocess/diff-checkstyle-hbase-mapreduce.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12724/testReport/ |
| Max. process+thread count | 4577 (vs. ulimit of 1) |
| modules | C: hbase-mapreduce U: hbase-mapreduce |
| Console

[jira] [Commented] (HBASE-20378) Provide a hbck option to cleanup replication barrier for a table

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463849#comment-16463849
 ] 

Hudson commented on HBASE-20378:


Results for branch master
[build #320 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/320/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Provide a hbck option to cleanup replication barrier for a table
> 
>
> Key: HBASE-20378
> URL: https://issues.apache.org/jira/browse/HBASE-20378
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20378.master.001.patch, 
> HBASE-20378.master.002.patch, HBASE-20378.master.003.patch, 
> HBASE-20378.master.004.patch, HBASE-20378.master.005.patch, 
> HBASE-20378.master.006.patch, HBASE-20378.master.007.patch, 
> HBASE-20378.master.008.patch
>
>
> It is not easy to deal with the scenario where a user changes the replication 
> scope from global to local, since they may change the scope back while we are 
> cleaning in the background. And I think this is a rare operation, so just provide 
> an hbck option to deal with it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20481) Replicate entries from same region serially in ReplicationEndpoint for serial replication

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463848#comment-16463848
 ] 

Hudson commented on HBASE-20481:


Results for branch master
[build #320 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/320/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Replicate entries from same region serially in ReplicationEndpoint for serial 
> replication
> -
>
> Key: HBASE-20481
> URL: https://issues.apache.org/jira/browse/HBASE-20481
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20481.v1.patch, HBASE-20481.v2.patch, 
> HBASE-20481.v3.patch, HBASE-20481.v3.patch, HBASE-20481.v4.patch
>
>
> When debugging HBASE-20475, [~openinx] found that 
> HBaseInterClusterReplicationEndpoint may send the entries for the same 
> regions concurrently, which breaks serial replication.
> Since there can be multiple ReplicationEndpoint implementations, just fixing 
> HBaseInterClusterReplicationEndpoint is not enough; we need to add a 
> setSerial method to ReplicationEndpoint to tell the implementation that it 
> should keep the order of the entries from the same region.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20507) Do not need to call recoverLease on the broken file when we fail to create a wal writer

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463846#comment-16463846
 ] 

Hudson commented on HBASE-20507:


Results for branch master
[build #320 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/320/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Do not need to call recoverLease on the broken file when we fail to create a 
> wal writer
> ---
>
> Key: HBASE-20507
> URL: https://issues.apache.org/jira/browse/HBASE-20507
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: 20507.addendum.patch, HBASE-20507.patch
>
>
> I tried locally with a UT: if we overwrite a file which is currently being 
> written, the old file will be completed and then deleted. If you call close 
> on the previous file, a no-lease exception will be thrown, which means that 
> the file has already been completed.
> So we do not need to close a file if it will be overwritten immediately, 
> since recoverLease may take a very long time...
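
A hedged sketch of that observation in isolation (this is not the WAL code; the retry helper and the path are made up for illustration): if creating the writer fails part-way, re-creating the same path with overwrite=true lets the NameNode complete and drop the half-written file, so there is no need to call recoverLease on it first.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OverwriteInsteadOfRecoverLeaseSketch {
  static FSDataOutputStream createWriter(FileSystem fs, Path walPath) throws IOException {
    try {
      return fs.create(walPath, false);   // normal attempt: do not overwrite
    } catch (IOException firstAttemptFailed) {
      // Retry with overwrite=true; per the observation above, the overwrite
      // completes and removes the broken file, so recoverLease is unnecessary.
      return fs.create(walPath, true);
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    try (FSDataOutputStream out = createWriter(fs, new Path("/tmp/sketch-wal"))) {
      out.writeUTF("demo");
    }
  }
}
{code}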



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20524) Need to clear metrics when ReplicationSourceManager refresh replication sources

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463847#comment-16463847
 ] 

Hudson commented on HBASE-20524:


Results for branch master
[build #320 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/320/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/320//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Need to clear metrics when ReplicationSourceManager refresh replication 
> sources
> ---
>
> Key: HBASE-20524
> URL: https://issues.apache.org/jira/browse/HBASE-20524
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20524.master.001.patch, 
> HBASE-20524.master.002.patch
>
>
> When ReplicationSourceManager refreshes replication sources, it closes the 
> old source first, then starts up a new source. The new source will use new 
> metrics, but we forgot to clear the metrics for the old sources.
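
A simplified illustration of the fix direction (this is not the actual ReplicationSourceManager or metrics code; the classes below are stand-ins): when the old source is closed as part of a refresh, its metrics are cleared so the new source starts from zero instead of inheriting stale values.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

public class RefreshMetricsSketch {
  static class SourceMetrics {
    final AtomicLong sizeOfLogQueue = new AtomicLong();
    void clear() { sizeOfLogQueue.set(0); } // drop gauges owned by this source
  }

  static class Source {
    final SourceMetrics metrics = new SourceMetrics();
    void close() {
      // The reported bug: closing stopped the threads but left the gauges
      // behind; clearing here keeps refreshed sources from reporting stale data.
      metrics.clear();
    }
  }

  public static void main(String[] args) {
    Source old = new Source();
    old.metrics.sizeOfLogQueue.set(42);
    old.close();                        // refresh: close the old source ...
    Source refreshed = new Source();    // ... then start a new one with fresh metrics
    System.out.println(old.metrics.sizeOfLogQueue.get() + " "
        + refreshed.metrics.sizeOfLogQueue.get());
  }
}
{code}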



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20523) PE tool should support configuring client side buffering sizes

2018-05-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-20523:
---
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

Updated patch. Will commit this later. 

> PE tool should support configuring client side buffering sizes
> --
>
> Key: HBASE-20523
> URL: https://issues.apache.org/jira/browse/HBASE-20523
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HBASE-20523.patch, HBASE-20523_1.patch
>
>
> The client-side buffering size impacts the write load and the write 
> performance. Hence, for testing purposes it is better to allow client-side 
> buffering to be configurable in PE. YCSB already has such a facility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20523) PE tool should support configuring client side buffering sizes

2018-05-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-20523:
---
Attachment: HBASE-20523_1.patch

> PE tool should support configuring client side buffering sizes
> --
>
> Key: HBASE-20523
> URL: https://issues.apache.org/jira/browse/HBASE-20523
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HBASE-20523.patch, HBASE-20523_1.patch
>
>
> The client-side buffering size impacts the write load and the write 
> performance. Hence, for testing purposes it is better to allow client-side 
> buffering to be configurable in PE. YCSB already has such a facility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20523) PE tool should support configuring client side buffering sizes

2018-05-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463799#comment-16463799
 ] 

ramkrishna.s.vasudevan commented on HBASE-20523:


I followed the YCSB naming here, where 'clientSideBuffering' controls whether 
buffering is enabled and 'writeBufferSize' sets its size. OK, I will rename the 
config.
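
For context, a minimal sketch of what client-side buffering means for a writer (this is not the PE patch; the table and column names are placeholders): a BufferedMutator with an explicit write-buffer size batches Puts on the client and flushes them in bulk, which is the kind of knob the PE option is meant to expose.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.BufferedMutatorParams;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedWriteSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    BufferedMutatorParams params =
        new BufferedMutatorParams(TableName.valueOf("TestTable")) // placeholder table
            .writeBufferSize(4L * 1024 * 1024);                   // 4 MB client-side buffer
    try (Connection connection = ConnectionFactory.createConnection(conf);
         BufferedMutator mutator = connection.getBufferedMutator(params)) {
      for (int i = 0; i < 10000; i++) {
        Put put = new Put(Bytes.toBytes("row-" + i));
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("0"), Bytes.toBytes(i));
        mutator.mutate(put); // buffered on the client; flushed when the buffer fills
      }
      mutator.flush(); // push out whatever remains before closing
    }
  }
}
{code}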

> PE tool should support configuring client side buffering sizes
> --
>
> Key: HBASE-20523
> URL: https://issues.apache.org/jira/browse/HBASE-20523
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HBASE-20523.patch
>
>
> The client-side buffering size impacts the write load and the write 
> performance. Hence, for testing purposes it is better to allow client-side 
> buffering to be configurable in PE. YCSB already has such a facility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20531) RS may throw NPE when close meta regions in shutdown procedure.

2018-05-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463781#comment-16463781
 ] 

Ted Yu commented on HBASE-20531:


Lgtm

> RS may throw NPE when close meta regions in shutdown procedure. 
> 
>
> Key: HBASE-20531
> URL: https://issues.apache.org/jira/browse/HBASE-20531
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20531.v1.patch
>
>
> See also : 
> https://issues.apache.org/jira/browse/HBASE-20475?focusedCommentId=16463322&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16463322
> The NPE stack is as follows: 
> {code}
> 2018-05-03 21:05:58,075 ERROR [RS_CLOSE_REGION-regionserver/instance-2:0-1] 
> helpers.MarkerIgnoringBase(159): * ABORTING region server 
> instance-2.c.gcp-hbase.internal,42063,1525381545380: Unrecoverable exception 
> while closing region tes
> t,,1525381436038.66de217a470764f3b37d8faebfd8e8c8., still finishing close 
> *
> java.io.IOException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1637)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1466)
> at 
> org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.reportFileArchivalForQuotas(HRegionServer.java:3709)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.reportArchivedFilesForQuota(HStore.java:2718)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2649)
> at org.apache.hadoop.hbase.regionserver.HStore.close(HStore.java:929)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1615)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ... 3 more
> {code}
> In HRegionServer#run(),  we have the following: 
> {code}
> @Override
> public void run() {
> ..
> // Stop the quota manager
> if (rsQuotaManager != null) {
>   rsQuotaManager.stop();
> }
> if (rsSpaceQuotaManager != null) {
>   rsSpaceQuotaManager.stop();
>   rsSpaceQuotaManager = null;
> }
> ..
> // Closing the compactSplit thread before closing meta regions
> if (!this.killed && containsMetaTableRegions()) {
>   if (!abortRequested || this.fsOk) {
> if (this.compactSplitThread != null) {
>   this.compactSplitThread.join();
>   this.compactSplitThread = null;
> }
> closeMetaTableRegions(abortRequested);
>   }
> }
> ..
> }
> {code}
> We stop the rsSpaceQuotaManager first and then close the meta regions, but 
> when closing a meta region we still need rsSpaceQuotaManager in 
> reportFileArchivalForQuotas(), just as the stack trace shows...
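
A simplified, hypothetical illustration of that ordering problem (names mirror the report, but this is not the actual HBase code): any late use of the quota manager has to re-read the field into a local and bail out if it has already been nulled during shutdown.

{code:java}
public class ShutdownOrderSketch {
  static class QuotaManager {
    void report(String file) { System.out.println("archived: " + file); }
    void stop() { System.out.println("quota manager stopped"); }
  }

  private volatile QuotaManager rsSpaceQuotaManager = new QuotaManager();

  void reportFileArchivalForQuotas(String file) {
    QuotaManager manager = this.rsSpaceQuotaManager; // read the field once
    if (manager == null) {
      return; // already shut down; closing meta regions must not NPE here
    }
    manager.report(file);
  }

  void shutdown() {
    rsSpaceQuotaManager.stop();
    rsSpaceQuotaManager = null;            // happens before the meta regions close
    reportFileArchivalForQuotas("hfile");  // now a harmless no-op instead of an NPE
  }

  public static void main(String[] args) {
    new ShutdownOrderSketch().shutdown();
  }
}
{code}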



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20531) RS may throw NPE when close meta regions in shutdown procedure.

2018-05-04 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20531:
-
Status: Patch Available  (was: Open)

> RS may throw NPE when close meta regions in shutdown procedure. 
> 
>
> Key: HBASE-20531
> URL: https://issues.apache.org/jira/browse/HBASE-20531
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20531.v1.patch
>
>
> See also : 
> https://issues.apache.org/jira/browse/HBASE-20475?focusedCommentId=16463322&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16463322
> The NPE stack is as follows: 
> {code}
> 2018-05-03 21:05:58,075 ERROR [RS_CLOSE_REGION-regionserver/instance-2:0-1] 
> helpers.MarkerIgnoringBase(159): * ABORTING region server 
> instance-2.c.gcp-hbase.internal,42063,1525381545380: Unrecoverable exception 
> while closing region tes
> t,,1525381436038.66de217a470764f3b37d8faebfd8e8c8., still finishing close 
> *
> java.io.IOException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1637)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1466)
> at 
> org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.reportFileArchivalForQuotas(HRegionServer.java:3709)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.reportArchivedFilesForQuota(HStore.java:2718)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2649)
> at org.apache.hadoop.hbase.regionserver.HStore.close(HStore.java:929)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1615)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ... 3 more
> {code}
> In HRegionServer#run(),  we have the following: 
> {code}
> @Override
> public void run() {
> ..
> // Stop the quota manager
> if (rsQuotaManager != null) {
>   rsQuotaManager.stop();
> }
> if (rsSpaceQuotaManager != null) {
>   rsSpaceQuotaManager.stop();
>   rsSpaceQuotaManager = null;
> }
> ..
> // Closing the compactSplit thread before closing meta regions
> if (!this.killed && containsMetaTableRegions()) {
>   if (!abortRequested || this.fsOk) {
> if (this.compactSplitThread != null) {
>   this.compactSplitThread.join();
>   this.compactSplitThread = null;
> }
> closeMetaTableRegions(abortRequested);
>   }
> }
> ..
> }
> {code}
> We stop the rsSpaceQuotaManager first and then close the meta regions, but 
> when closing the meta regions we still need the rsSpaceQuotaManager to call 
> reportFileArchivalForQuotas(), just as the stack trace above shows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20531) RS may throw NPE when close meta regions in shutdown procedure.

2018-05-04 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20531:
-
Attachment: HBASE-20531.v1.patch

> RS may throw NPE when close meta regions in shutdown procedure. 
> 
>
> Key: HBASE-20531
> URL: https://issues.apache.org/jira/browse/HBASE-20531
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20531.v1.patch
>
>
> See also : 
> https://issues.apache.org/jira/browse/HBASE-20475?focusedCommentId=16463322&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16463322
> The NPE stack is as follows: 
> {code}
> 2018-05-03 21:05:58,075 ERROR [RS_CLOSE_REGION-regionserver/instance-2:0-1] 
> helpers.MarkerIgnoringBase(159): * ABORTING region server 
> instance-2.c.gcp-hbase.internal,42063,1525381545380: Unrecoverable exception 
> while closing region tes
> t,,1525381436038.66de217a470764f3b37d8faebfd8e8c8., still finishing close 
> *
> java.io.IOException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1637)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1466)
> at 
> org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.reportFileArchivalForQuotas(HRegionServer.java:3709)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.reportArchivedFilesForQuota(HStore.java:2718)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2649)
> at org.apache.hadoop.hbase.regionserver.HStore.close(HStore.java:929)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1615)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ... 3 more
> {code}
> In HRegionServer#run(),  we have the following: 
> {code}
> @Override
> public void run() {
> ..
> // Stop the quota manager
> if (rsQuotaManager != null) {
>   rsQuotaManager.stop();
> }
> if (rsSpaceQuotaManager != null) {
>   rsSpaceQuotaManager.stop();
>   rsSpaceQuotaManager = null;
> }
> ..
> // Closing the compactSplit thread before closing meta regions
> if (!this.killed && containsMetaTableRegions()) {
>   if (!abortRequested || this.fsOk) {
> if (this.compactSplitThread != null) {
>   this.compactSplitThread.join();
>   this.compactSplitThread = null;
> }
> closeMetaTableRegions(abortRequested);
>   }
> }
> ..
> }
> {code}
> We stop the rsSpaceQuotaManager first and then close the meta regions, but 
> when closing the meta regions we still need the rsSpaceQuotaManager to call 
> reportFileArchivalForQuotas(), just as the stack trace above shows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15809) Basic Replication WebUI

2018-05-04 Thread Jingyun Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463764#comment-16463764
 ] 

Jingyun Tian commented on HBASE-15809:
--

[~busbey], please have a look at these two patches when you get time.

> Basic Replication WebUI
> ---
>
> Key: HBASE-15809
> URL: https://issues.apache.org/jira/browse/HBASE-15809
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, UI
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Jingyun Tian
>Priority: Critical
> Fix For: 3.0.0, 2.1.0, 1.5.0
>
> Attachments: HBASE-15809-v0.patch, HBASE-15809-v0.png, 
> HBASE-15809-v1.patch, rep_web_ui.zip
>
>
> At the moment the only way to get some insight into replication from the web UI 
> is looking at zkdump and metrics.
> The basic information useful to get started debugging is: peer information 
> and the view of WAL offsets for each peer.
> https://reviews.apache.org/r/47275/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19324) Backport HBASE-19311 to branch-1.x

2018-05-04 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19324:
-
Status: Patch Available  (was: Open)

> Backport HBASE-19311 to branch-1.x
> --
>
> Key: HBASE-19324
> URL: https://issues.apache.org/jira/browse/HBASE-19324
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-19324.branch-1.001.patch, 
> HBASE-19324.branch-1.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19324) Backport HBASE-19311 to branch-1.x

2018-05-04 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19324:
-
Attachment: HBASE-19324.branch-1.002.patch

> Backport HBASE-19311 to branch-1.x
> --
>
> Key: HBASE-19324
> URL: https://issues.apache.org/jira/browse/HBASE-19324
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-19324.branch-1.001.patch, 
> HBASE-19324.branch-1.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20151) Bug with SingleColumnValueFilter and FamilyFilter

2018-05-04 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463702#comment-16463702
 ] 

Reid Chan commented on HBASE-20151:
---

Got it, I will upload a new patch after you finish your review.

> Bug with SingleColumnValueFilter and FamilyFilter
> -
>
> Key: HBASE-20151
> URL: https://issues.apache.org/jira/browse/HBASE-20151
> Project: HBase
>  Issue Type: Bug
> Environment: MacOS 10.13.3
> HBase 1.3.1
>Reporter: Steven Sadowski
>Assignee: Reid Chan
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20151.master.001.patch, 
> HBASE-20151.master.002.patch, HBASE-20151.master.003.patch, 
> HBASE-20151.master.004.patch, HBASE-20151.master.004.patch
>
>
> When running the following queries, the result is sometimes returned correctly 
> and other times incorrectly, depending on the qualifier queried.
> Setup:
> {code:java}
> create 'test', 'a', 'b'
> test = get_table 'test'
> test.put '1', 'a:1', nil
> test.put '1', 'a:10', nil
> test.put '1', 'b:2', nil
> {code}
>  
>  This query works fine when the SCVF's qualifier has length 1 (i.e. '1') :
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','1',=,'binary:',true,true) AND 
> FamilyFilter(=,'binary:b') )"})
> ROW   COLUMN+CELL
>  1     column=b:2, 
> timestamp=1520455888059, value=
> 1 row(s) in 0.0060 seconds
> {code}
>  
> The query should return the same result when passed a qualifier of length 2 
> (i.e. '10') :
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','10',=,'binary:',true,true) AND 
> FamilyFilter(=,'binary:b') )"})
> ROW   COLUMN+CELL
> 0 row(s) in 0.0110 seconds
> {code}
> However, in this case, it does not return any row (expected result would be 
> to return the same result as the first query).
>  
> Removing the family filter while the qualifier is '10' yields expected 
> results:
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','10',=,'binary:',true,true) )"})
> ROW   COLUMN+CELL
>  1     column=a:1, 
> timestamp=1520455887954, value=
>  1     column=a:10, 
> timestamp=1520455888024, value=
>  1     column=b:2, 
> timestamp=1520455888059, value=
> 1 row(s) in 0.0140 seconds
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20475) Fix the flaky TestReplicationDroppedTables unit test.

2018-05-04 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463684#comment-16463684
 ] 

Zheng Hu commented on HBASE-20475:
--

Filed HBASE-20531 to address the above NPE. 

> Fix the flaky TestReplicationDroppedTables unit test.
> -
>
> Key: HBASE-20475
> URL: https://issues.apache.org/jira/browse/HBASE-20475
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20475-addendum-v2.patch, 
> HBASE-20475-addendum-v3.patch, HBASE-20475-addendum.patch, HBASE-20475.patch
>
>
> See 
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20524) Need to clear metrics when ReplicationSourceManager refresh replication sources

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463682#comment-16463682
 ] 

Hudson commented on HBASE-20524:


Results for branch branch-2
[build #693 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/693/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/693//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/693//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/693//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Need to clear metrics when ReplicationSourceManager refresh replication 
> sources
> ---
>
> Key: HBASE-20524
> URL: https://issues.apache.org/jira/browse/HBASE-20524
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20524.master.001.patch, 
> HBASE-20524.master.002.patch
>
>
> When ReplicationSourceManager refreshes replication sources, it will close the 
> old source first, then start up a new source. The new source will use new 
> metrics, but we forgot to clear the metrics for the old sources.
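
A purely hypothetical sketch of that idea, for readers skimming the digest (the helper names and structure below are assumptions, not the actual HBASE-20524 patch):

{code:java}
// Hypothetical sketch only: when a peer's source is replaced during a refresh,
// clear the metrics registered by the old source before starting the new one.
void refreshSource(String peerId) throws IOException {
  ReplicationSourceInterface oldSource = this.sources.remove(peerId);
  if (oldSource != null) {
    oldSource.terminate("Peer " + peerId + " state or config changed");
    // Assumed helper: drop the per-peer metrics the old source registered,
    // so the new source starts from a clean slate instead of stale counters.
    clearMetrics(oldSource);
  }
  ReplicationSourceInterface newSource = createSource(peerId);  // assumed helper
  this.sources.put(peerId, newSource);
  newSource.startup();
}
{code}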



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20531) RS may throw NPE when close meta regions in shutdown procedure.

2018-05-04 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463683#comment-16463683
 ] 

Zheng Hu commented on HBASE-20531:
--

The patch will be quite easy: adjust the order of stopping 
rsSpaceQuotaManager and closing the meta table regions, and add a null check in 
reportFileArchivalForQuotas() (a rough sketch of that check is below).
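
A minimal, illustrative sketch of such a guard, assuming a simplified method signature (this is not the actual HBASE-20531.v1.patch):

{code:java}
// Illustrative sketch: the space quota manager may already have been stopped
// and nulled out during shutdown, so skip the report instead of throwing an
// NPE while the meta regions are still being closed.
void reportFileArchivalForQuotas(/* table name, archived files, ... */) {
  RegionServerSpaceQuotaManager quotaManager = this.rsSpaceQuotaManager;
  if (quotaManager == null) {
    // Shutdown in progress; nothing to report against.
    return;
  }
  // ... build and send the file archival report as before ...
}
{code}

Reordering HRegionServer#run() so that closeMetaTableRegions() runs before rsSpaceQuotaManager.stop() covers the other half of the fix described above.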

> RS may throw NPE when close meta regions in shutdown procedure. 
> 
>
> Key: HBASE-20531
> URL: https://issues.apache.org/jira/browse/HBASE-20531
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
>
> See also : 
> https://issues.apache.org/jira/browse/HBASE-20475?focusedCommentId=16463322&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16463322
> The NPE stack is as follows: 
> {code}
> 2018-05-03 21:05:58,075 ERROR [RS_CLOSE_REGION-regionserver/instance-2:0-1] 
> helpers.MarkerIgnoringBase(159): * ABORTING region server 
> instance-2.c.gcp-hbase.internal,42063,1525381545380: Unrecoverable exception 
> while closing region tes
> t,,1525381436038.66de217a470764f3b37d8faebfd8e8c8., still finishing close 
> *
> java.io.IOException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1637)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1466)
> at 
> org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.reportFileArchivalForQuotas(HRegionServer.java:3709)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.reportArchivedFilesForQuota(HStore.java:2718)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2649)
> at org.apache.hadoop.hbase.regionserver.HStore.close(HStore.java:929)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1615)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ... 3 more
> {code}
> In HRegionServer#run(),  we have the following: 
> {code}
> @Override
> public void run() {
> ..
> // Stop the quota manager
> if (rsQuotaManager != null) {
>   rsQuotaManager.stop();
> }
> if (rsSpaceQuotaManager != null) {
>   rsSpaceQuotaManager.stop();
>   rsSpaceQuotaManager = null;
> }
> ..
> // Closing the compactSplit thread before closing meta regions
> if (!this.killed && containsMetaTableRegions()) {
>   if (!abortRequested || this.fsOk) {
> if (this.compactSplitThread != null) {
>   this.compactSplitThread.join();
>   this.compactSplitThread = null;
> }
> closeMetaTableRegions(abortRequested);
>   }
> }
> ..
> }
> {code}
> We stop the rsSpaceQuotaManager first and then close the meta regions, but 
> when closing the meta regions we still need the rsSpaceQuotaManager to call 
> reportFileArchivalForQuotas(), just as the stack trace above shows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20531) RS may throw NPE when close meta regions in shutdown procedure.

2018-05-04 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20531:
-
Description: 
See also : 
https://issues.apache.org/jira/browse/HBASE-20475?focusedCommentId=16463322&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16463322

The NPE stack is as follows: 

{code}
2018-05-03 21:05:58,075 ERROR [RS_CLOSE_REGION-regionserver/instance-2:0-1] 
helpers.MarkerIgnoringBase(159): * ABORTING region server 
instance-2.c.gcp-hbase.internal,42063,1525381545380: Unrecoverable exception 
while closing region tes
t,,1525381436038.66de217a470764f3b37d8faebfd8e8c8., still finishing close *
java.io.IOException: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1637)
at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1466)
at 
org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.reportFileArchivalForQuotas(HRegionServer.java:3709)
at 
org.apache.hadoop.hbase.regionserver.HStore.reportArchivedFilesForQuota(HStore.java:2718)
at 
org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2649)
at org.apache.hadoop.hbase.regionserver.HStore.close(HStore.java:929)
at 
org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1615)
at 
org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1612)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
{code}

In HRegionServer#run(),  we have the following: 

{code}
@Override
public void run() {
..
// Stop the quota manager
if (rsQuotaManager != null) {
  rsQuotaManager.stop();
}
if (rsSpaceQuotaManager != null) {
  rsSpaceQuotaManager.stop();
  rsSpaceQuotaManager = null;
}
..
// Closing the compactSplit thread before closing meta regions
if (!this.killed && containsMetaTableRegions()) {
  if (!abortRequested || this.fsOk) {
if (this.compactSplitThread != null) {
  this.compactSplitThread.join();
  this.compactSplitThread = null;
}
closeMetaTableRegions(abortRequested);
  }
}
..
}
{code}

We stop the rsSpaceQuotaManager first and then close the meta regions, but 
when closing the meta regions we still need the rsSpaceQuotaManager to call 
reportFileArchivalForQuotas(), just as the stack trace above shows.


  was:
See also : 
https://issues.apache.org/jira/browse/HBASE-20475?focusedCommentId=16463322&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16463322

The NPE stack is as following: 

{code}
2018-05-03 21:05:58,075 ERROR [RS_CLOSE_REGION-regionserver/instance-2:0-1] 
helpers.MarkerIgnoringBase(159): * ABORTING region server 
instance-2.c.gcp-hbase.internal,42063,1525381545380: Unrecoverable exception 
while closing region tes
t,,1525381436038.66de217a470764f3b37d8faebfd8e8c8., still finishing close *
java.io.IOException: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1637)
at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1466)
at 
org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.reportFileArchivalForQuotas(HRegionServer.java:3709)
at 
org.apache.hadoop.hbase.regionserver.HStore.reportArchivedFilesForQuota(HStore.java:2718)
at 
org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2649)
at org.apache.hadoop.hbase.regionserver.HStore.close(HStore.java:929)
at 
org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1615)
at 
org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1612)
at java.util.concurrent.FutureTask.run(FutureTask.ja

[jira] [Created] (HBASE-20531) RS may throw NPE when close meta regions in shutdown procedure.

2018-05-04 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-20531:


 Summary: RS may throw NPE when close meta regions in shutdown 
procedure. 
 Key: HBASE-20531
 URL: https://issues.apache.org/jira/browse/HBASE-20531
 Project: HBase
  Issue Type: Bug
Reporter: Zheng Hu
Assignee: Zheng Hu


See also : 
https://issues.apache.org/jira/browse/HBASE-20475?focusedCommentId=16463322&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16463322

The NPE stack is as follows: 

{code}
2018-05-03 21:05:58,075 ERROR [RS_CLOSE_REGION-regionserver/instance-2:0-1] 
helpers.MarkerIgnoringBase(159): * ABORTING region server 
instance-2.c.gcp-hbase.internal,42063,1525381545380: Unrecoverable exception 
while closing region tes
t,,1525381436038.66de217a470764f3b37d8faebfd8e8c8., still finishing close *
java.io.IOException: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1637)
at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1466)
at 
org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.reportFileArchivalForQuotas(HRegionServer.java:3709)
at 
org.apache.hadoop.hbase.regionserver.HStore.reportArchivedFilesForQuota(HStore.java:2718)
at 
org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2649)
at org.apache.hadoop.hbase.regionserver.HStore.close(HStore.java:929)
at 
org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1615)
at 
org.apache.hadoop.hbase.regionserver.HRegion$2.call(HRegion.java:1612)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
{code}

In HRegionServer#run(),  we have the following: 

{code}
@Override
public void run() {
...
// Stop the quota manager
if (rsQuotaManager != null) {
  rsQuotaManager.stop();
}
if (rsSpaceQuotaManager != null) {
  rsSpaceQuotaManager.stop();
  rsSpaceQuotaManager = null;
}
...  
// Closing the compactSplit thread before closing meta regions
if (!this.killed && containsMetaTableRegions()) {
  if (!abortRequested || this.fsOk) {
if (this.compactSplitThread != null) {
  this.compactSplitThread.join();
  this.compactSplitThread = null;
}
closeMetaTableRegions(abortRequested);
  }
}
}
{code}

We stop the rsSpaceQuotaManager first and then close the meta regions, but 
when closing the meta regions we still need the rsSpaceQuotaManager to call 
reportFileArchivalForQuotas(), just as the stack trace above shows.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20151) Bug with SingleColumnValueFilter and FamilyFilter

2018-05-04 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463677#comment-16463677
 ] 

Chia-Ping Tsai commented on HBASE-20151:


{quote}Does this count as default impl?
{quote}
Yes, that is the default impl. And the default impl should be in Filter rather 
than FilterBase, so that existing custom filters keep compiling (sketched below).
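
For illustration, moving that body up into the abstract Filter class could look like the sketch below (the method and its parameters are taken from the transformReturnCode snippet quoted elsewhere in this thread; this is not the committed API):

{code:java}
// Sketch: a concrete default in Filter itself, so existing custom Filter
// subclasses keep compiling and behaving as before without any change.
public abstract class Filter {
  // ... existing abstract methods unchanged ...

  public ReturnCode transformReturnCode(FilterListBase.LOGIC logic, ReturnCode originRC) {
    // By default a filter does not alter the return code chosen by the list logic.
    return originRC;
  }
}
{code}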

> Bug with SingleColumnValueFilter and FamilyFilter
> -
>
> Key: HBASE-20151
> URL: https://issues.apache.org/jira/browse/HBASE-20151
> Project: HBase
>  Issue Type: Bug
> Environment: MacOS 10.13.3
> HBase 1.3.1
>Reporter: Steven Sadowski
>Assignee: Reid Chan
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20151.master.001.patch, 
> HBASE-20151.master.002.patch, HBASE-20151.master.003.patch, 
> HBASE-20151.master.004.patch, HBASE-20151.master.004.patch
>
>
> When running the following queries, the result is sometimes returned correctly 
> and other times incorrectly, depending on the qualifier queried.
> Setup:
> {code:java}
> create 'test', 'a', 'b'
> test = get_table 'test'
> test.put '1', 'a:1', nil
> test.put '1', 'a:10', nil
> test.put '1', 'b:2', nil
> {code}
>  
>  This query works fine when the SCVF's qualifier has length 1 (i.e. '1') :
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','1',=,'binary:',true,true) AND 
> FamilyFilter(=,'binary:b') )"})
> ROW   COLUMN+CELL
>  1     column=b:2, 
> timestamp=1520455888059, value=
> 1 row(s) in 0.0060 seconds
> {code}
>  
> The query should return the same result when passed a qualifier of length 2 
> (i.e. '10') :
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','10',=,'binary:',true,true) AND 
> FamilyFilter(=,'binary:b') )"})
> ROW   COLUMN+CELL
> 0 row(s) in 0.0110 seconds
> {code}
> However, in this case, it does not return any row (expected result would be 
> to return the same result as the first query).
>  
> Removing the family filter while the qualifier is '10' yields expected 
> results:
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','10',=,'binary:',true,true) )"})
> ROW   COLUMN+CELL
>  1     column=a:1, 
> timestamp=1520455887954, value=
>  1     column=a:10, 
> timestamp=1520455888024, value=
>  1     column=b:2, 
> timestamp=1520455888059, value=
> 1 row(s) in 0.0140 seconds
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20151) Bug with SingleColumnValueFilter and FamilyFilter

2018-05-04 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463671#comment-16463671
 ] 

Reid Chan commented on HBASE-20151:
---

{code:title=In FilterBase}
@Override
public ReturnCode transformReturnCode(FilterListBase.LOGIC logic, ReturnCode 
originRC) {
 return originRC;
}
{code}
Does this count as default impl?

You can refer to the discussions and descriptions above to catch up on this issue 
more quickly.

> Bug with SingleColumnValueFilter and FamilyFilter
> -
>
> Key: HBASE-20151
> URL: https://issues.apache.org/jira/browse/HBASE-20151
> Project: HBase
>  Issue Type: Bug
> Environment: MacOS 10.13.3
> HBase 1.3.1
>Reporter: Steven Sadowski
>Assignee: Reid Chan
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20151.master.001.patch, 
> HBASE-20151.master.002.patch, HBASE-20151.master.003.patch, 
> HBASE-20151.master.004.patch, HBASE-20151.master.004.patch
>
>
> When running the following queries, the result is sometimes returned correctly 
> and other times incorrectly, depending on the qualifier queried.
> Setup:
> {code:java}
> create 'test', 'a', 'b'
> test = get_table 'test'
> test.put '1', 'a:1', nil
> test.put '1', 'a:10', nil
> test.put '1', 'b:2', nil
> {code}
>  
>  This query works fine when the SCVF's qualifier has length 1 (i.e. '1') :
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','1',=,'binary:',true,true) AND 
> FamilyFilter(=,'binary:b') )"})
> ROW   COLUMN+CELL
>  1     column=b:2, 
> timestamp=1520455888059, value=
> 1 row(s) in 0.0060 seconds
> {code}
>  
> The query should return the same result when passed a qualifier of length 2 
> (i.e. '10') :
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','10',=,'binary:',true,true) AND 
> FamilyFilter(=,'binary:b') )"})
> ROW   COLUMN+CELL
> 0 row(s) in 0.0110 seconds
> {code}
> However, in this case, it does not return any row (expected result would be 
> to return the same result as the first query).
>  
> Removing the family filter while the qualifier is '10' yields expected 
> results:
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','10',=,'binary:',true,true) )"})
> ROW   COLUMN+CELL
>  1     column=a:1, 
> timestamp=1520455887954, value=
>  1     column=a:10, 
> timestamp=1520455888024, value=
>  1     column=b:2, 
> timestamp=1520455888059, value=
> 1 row(s) in 0.0140 seconds
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20151) Bug with SingleColumnValueFilter and FamilyFilter

2018-05-04 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463658#comment-16463658
 ] 

Chia-Ping Tsai commented on HBASE-20151:


I'm trying to catch up on this issue... Will add more comments later. However, I 
noticed the patch adds a new method to Filter. Filter is a public class, so the 
change breaks source compatibility for anyone with a custom filter implementation. 
Could we add a default impl of transformReturnCode?

> Bug with SingleColumnValueFilter and FamilyFilter
> -
>
> Key: HBASE-20151
> URL: https://issues.apache.org/jira/browse/HBASE-20151
> Project: HBase
>  Issue Type: Bug
> Environment: MacOS 10.13.3
> HBase 1.3.1
>Reporter: Steven Sadowski
>Assignee: Reid Chan
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20151.master.001.patch, 
> HBASE-20151.master.002.patch, HBASE-20151.master.003.patch, 
> HBASE-20151.master.004.patch, HBASE-20151.master.004.patch
>
>
> When running the following queries, the result is sometimes returned correctly 
> and other times incorrectly, depending on the qualifier queried.
> Setup:
> {code:java}
> create 'test', 'a', 'b'
> test = get_table 'test'
> test.put '1', 'a:1', nil
> test.put '1', 'a:10', nil
> test.put '1', 'b:2', nil
> {code}
>  
>  This query works fine when the SCVF's qualifier has length 1 (i.e. '1') :
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','1',=,'binary:',true,true) AND 
> FamilyFilter(=,'binary:b') )"})
> ROW   COLUMN+CELL
>  1     column=b:2, 
> timestamp=1520455888059, value=
> 1 row(s) in 0.0060 seconds
> {code}
>  
> The query should return the same result when passed a qualifier of length 2 
> (i.e. '10') :
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','10',=,'binary:',true,true) AND 
> FamilyFilter(=,'binary:b') )"})
> ROW   COLUMN+CELL
> 0 row(s) in 0.0110 seconds
> {code}
> However, in this case, it does not return any row (expected result would be 
> to return the same result as the first query).
>  
> Removing the family filter while the qualifier is '10' yields expected 
> results:
> {code:java}
> test.scan({ FILTER => "( 
> SingleColumnValueFilter('a','10',=,'binary:',true,true) )"})
> ROW   COLUMN+CELL
>  1     column=a:1, 
> timestamp=1520455887954, value=
>  1     column=a:10, 
> timestamp=1520455888024, value=
>  1     column=b:2, 
> timestamp=1520455888059, value=
> 1 row(s) in 0.0140 seconds
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20481) Replicate entries from same region serially in ReplicationEndpoint for serial replication

2018-05-04 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20481:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Replicate entries from same region serially in ReplicationEndpoint for serial 
> replication
> -
>
> Key: HBASE-20481
> URL: https://issues.apache.org/jira/browse/HBASE-20481
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20481.v1.patch, HBASE-20481.v2.patch, 
> HBASE-20481.v3.patch, HBASE-20481.v3.patch, HBASE-20481.v4.patch
>
>
> When debugging HBASE-20475, [~openinx] found that the 
> HBaseInterClusterReplicationEndpoint may send the entries for the same 
> regions concurrently, which breaks serial replication.
> Since we can have multiple ReplicationEndpoint implementations, just fixing 
> HBaseInterClusterReplicationEndpoint is not enough; we need to add a 
> setSerial method to ReplicationEndpoint to tell the implementation that it 
> should keep the order of the entries from the same region.
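
A rough, purely illustrative sketch of such a knob (everything here except the setSerial name from the description is an assumption, not the committed API):

{code:java}
// Illustrative only: the endpoint remembers whether it must preserve the order
// of entries coming from the same region; a serial-aware implementation checks
// this flag before sharding a batch across parallel shippers.
public abstract class SerialAwareEndpoint extends BaseReplicationEndpoint {
  private volatile boolean serial;

  /** Hypothetical setter, called when the peer is configured for serial replication. */
  public void setSerial(boolean serial) {
    this.serial = serial;
  }

  protected boolean isSerial() {
    return serial;
  }
}
{code}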



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20481) Replicate entries from same region serially in ReplicationEndpoint for serial replication

2018-05-04 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463512#comment-16463512
 ] 

Zheng Hu commented on HBASE-20481:
--

The failed UT is unrelated. Fixed the checkstyle issue and pushed to master & 
branch-2. Thanks [~Apache9] for reviewing.

> Replicate entries from same region serially in ReplicationEndpoint for serial 
> replication
> -
>
> Key: HBASE-20481
> URL: https://issues.apache.org/jira/browse/HBASE-20481
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20481.v1.patch, HBASE-20481.v2.patch, 
> HBASE-20481.v3.patch, HBASE-20481.v3.patch, HBASE-20481.v4.patch
>
>
> When debugging HBASE-20475, [~openinx] found that the 
> HBaseInterClusterReplicationEndpoint may send the entries for the same 
> regions concurrently, which breaks serial replication.
> Since we can have multiple ReplicationEndpoint implementations, just fixing 
> HBaseInterClusterReplicationEndpoint is not enough; we need to add a 
> setSerial method to ReplicationEndpoint to tell the implementation that it 
> should keep the order of the entries from the same region.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20378) Provide a hbck option to cleanup replication barrier for a table

2018-05-04 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20378:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Thanks [~tianjingyun] for contributing.

> Provide a hbck option to cleanup replication barrier for a table
> 
>
> Key: HBASE-20378
> URL: https://issues.apache.org/jira/browse/HBASE-20378
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20378.master.001.patch, 
> HBASE-20378.master.002.patch, HBASE-20378.master.003.patch, 
> HBASE-20378.master.004.patch, HBASE-20378.master.005.patch, 
> HBASE-20378.master.006.patch, HBASE-20378.master.007.patch, 
> HBASE-20378.master.008.patch
>
>
> It is not easy to deal with the scenario where a user changes the replication 
> scope from global to local, since the user may change the scope back while we are 
> cleaning in the background. I think this is a rare operation, so we just provide 
> an hbck option to deal with it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)