[jira] [Commented] (HBASE-22013) SpaceQuota utilization issue with region replicas enabled

2019-03-11 Thread Uma Maheswari (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790249#comment-16790249
 ] 

Uma Maheswari commented on HBASE-22013:
---

FileSystemUtilizationChore does not calculate the size of region replicas.

 

Consider the scenario of a table created with 1 region and 1 replica.

In our case,

FileSystemUtilizationChore reports the size of the 1 region only, skipping the size
of the region replica.

+QuotaObserverChore:+

hbase.master.quotas.observer.report.percent=0.95 
_//percentRegionsReportedThreshold_

In the filterInsufficientlyReportedTables() method:

 
{noformat}
int numRegionsInTable = getNumRegions(table);  // returns 2 regions (1 actual region + 1 replica)
ratioReported = ((double) reportedRegionsInQuota) / numRegionsInTable;
// ratioReported = 0.5, which is less than percentRegionsReportedThreshold (0.95)
{noformat}


The Master therefore keeps waiting for the region replica report as well, and usage is never calculated.

In short, QuotaObserverChore counts region replicas in its region total, while FileSystemUtilizationChore never reports them.
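
To make the arithmetic concrete, here is a minimal sketch of the threshold check described above (class name, variable names and values are illustrative assumptions, not the actual HBase code):

{code:java}
// Illustrative sketch of the "insufficiently reported" check described above.
// Names and values are assumptions for the example, not the actual HBase implementation.
public class QuotaRatioExample {
  public static void main(String[] args) {
    // hbase.master.quotas.observer.report.percent
    double percentRegionsReportedThreshold = 0.95;
    // getNumRegions(table) counts both the actual region and its replica
    int numRegionsInTable = 2;
    // FileSystemUtilizationChore only reported the primary region's size
    int reportedRegionsInQuota = 1;

    double ratioReported = ((double) reportedRegionsInQuota) / numRegionsInTable; // 0.5

    if (ratioReported < percentRegionsReportedThreshold) {
      // Table is filtered out as insufficiently reported, so no Usage/State is computed.
      System.out.println("Skipping table: ratio " + ratioReported
          + " < threshold " + percentRegionsReportedThreshold);
    }
  }
}
{code}

With replicas enabled, the reported ratio for such a table can never reach the 0.95 threshold, which matches the missing Usage and State observed in the UI.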

 

> SpaceQuota utilization issue with region replicas enabled
> -
>
> Key: HBASE-22013
> URL: https://issues.apache.org/jira/browse/HBASE-22013
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Priority: Major
>
> Space Quota issue: if a table is created with region replicas, then quota 
> calculation does not happen.
> Steps:
> 1: Create a table with 100 regions with region replica 3.
> 2: Observe that the 'hbase:quota' table doesn't have a usage entry for this 
> table, so in the UI only Limit and Policy are shown but not Usage and State.
> Reason: 
>  It looks like the file system utilization chore is sending data for the 100 regions 
> but not the size of the region replicas.
>  But the quota observer chore is considering the total region count (actual regions + 
> replica regions), 
>  so the ratio of reported regions is less than the configured 
> percentRegionsReportedThreshold.
> So quota calculation does not happen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790211#comment-16790211
 ] 

Hadoop QA commented on HBASE-22032:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
28s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
33s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 43s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-22032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962060/HBASE-22032.v03.patch 
|
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 7a83698103b6 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 648fb72702 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16343/testReport/ |
| Max. process+thread count | 284 (vs. ulimit of 1) |
| modules | C: hbase-common U: hbase-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16343/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> KeyValue validation 

[jira] [Created] (HBASE-22038) fix building failures

2019-03-11 Thread Junhong Xu (JIRA)
Junhong Xu created HBASE-22038:
--

 Summary: fix building failures
 Key: HBASE-22038
 URL: https://issues.apache.org/jira/browse/HBASE-22038
 Project: HBase
  Issue Type: Sub-task
Reporter: Junhong Xu
Assignee: Junhong Xu


When building the HBase C++ client with the Dockerfile, the build fails because 
some URL resources are not found. This patch only solves the problem temporarily: 
if some dependent libraries are removed someday, the failure will appear again. 
In the long run we may need a base Docker image, maintained by us, that contains 
all these dependencies.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790206#comment-16790206
 ] 

Zheng Hu commented on HBASE-22032:
--

Well got it, Thanks.

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.4, 2.1.3
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch, HBASE-22032.v02.patch, 
> HBASE-22032.v03.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 
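
A minimal sketch of the kind of guard the description asks for (method name and messages are illustrative assumptions, not the committed patch):

{code:java}
// Illustrative only: validate the byte array itself before any offset/length
// checks that would otherwise throw an unrelated NullPointerException.
public static void checkKeyValueBytes(byte[] buf, int offset, int length) {
  if (buf == null) {
    throw new IllegalArgumentException("Invalid KeyValue: the byte array must not be null");
  }
  if (offset < 0 || length < 0 || offset + length > buf.length) {
    throw new IllegalArgumentException("Invalid KeyValue: offset/length out of bounds");
  }
  // field-level validation (row/family/qualifier/timestamp/type), as added by
  // HBASE-21401, would follow here
}
{code}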



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790199#comment-16790199
 ] 

stack commented on HBASE-21935:
---

2.0.006 updates CHANGES.md and RELEASENOTES.md properly when it is a second RC -- 
it removes the previous RC's edits before overlaying the new set of CHANGES.md.

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.1.001.patch, 
> HBASE-21935.branch-2.1.002.patch, HBASE-21935.branch-2.1.003.patch, 
> HBASE-21935.branch-2.1.004.patch, HBASE-21935.branch-2.1.005.patch, 
> HBASE-21935.branch-2.1.006.patch, HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21926) Profiler servlet

2019-03-11 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790203#comment-16790203
 ] 

Andrew Purtell commented on HBASE-21926:


Yes I have a patch for branch-1. I’ll attach it and a patch for branch-2 and 
master tomorrow. 

> Profiler servlet
> 
>
> Key: HBASE-21926
> URL: https://issues.apache.org/jira/browse/HBASE-21926
> Project: HBase
>  Issue Type: New Feature
>  Components: master, Operability, regionserver
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 1.6.0, 2.3.0
>
>
> HIVE-20202 describes how Hive added a web endpoint for online in production 
> profiling based on async-profiler. The endpoint was added as a servlet to 
> httpserver and supports retrieval of flamegraphs compiled from the profiler 
> trace. Async profiler 
> ([https://github.com/jvm-profiling-tools/async-profiler] ) can also profile 
> heap allocations, lock contention, and HW performance counters in addition to 
> CPU.
> The profiling overhead is pretty low and is safe to run in production. The 
> async-profiler project measured and describes CPU and memory overheads on 
> these issues: 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/14] and 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/131] 
> We have an httpserver based servlet stack so we can use HIVE-20202 as an 
> implementation template for a similar feature for HBase daemons. Ideally we 
> achieve these requirements:
>  * Retrieve flamegraph SVG generated from latest profile trace.
>  * Online enable and disable of profiling activity. (async-profiler does not 
> do instrumentation based profiling so this should not cause the code gen 
> related perf problems of that other approach and can be safely toggled on and 
> off while under production load.)
>  * CPU profiling.
>  * ALLOCATION profiling.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21926) Profiler servlet

2019-03-11 Thread Allan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790202#comment-16790202
 ] 

Allan Yang edited comment on HBASE-21926 at 3/12/19 4:00 AM:
-

[~apurtell], any progress on this one?


was (Author: allan163):
[~apurtell], any progress on this on?

> Profiler servlet
> 
>
> Key: HBASE-21926
> URL: https://issues.apache.org/jira/browse/HBASE-21926
> Project: HBase
>  Issue Type: New Feature
>  Components: master, Operability, regionserver
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 1.6.0, 2.3.0
>
>
> HIVE-20202 describes how Hive added a web endpoint for online in production 
> profiling based on async-profiler. The endpoint was added as a servlet to 
> httpserver and supports retrieval of flamegraphs compiled from the profiler 
> trace. Async profiler 
> ([https://github.com/jvm-profiling-tools/async-profiler] ) can also profile 
> heap allocations, lock contention, and HW performance counters in addition to 
> CPU.
> The profiling overhead is pretty low and is safe to run in production. The 
> async-profiler project measured and describes CPU and memory overheads on 
> these issues: 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/14] and 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/131] 
> We have an httpserver based servlet stack so we can use HIVE-20202 as an 
> implementation template for a similar feature for HBase daemons. Ideally we 
> achieve these requirements:
>  * Retrieve flamegraph SVG generated from latest profile trace.
>  * Online enable and disable of profiling activity. (async-profiler does not 
> do instrumentation based profiling so this should not cause the code gen 
> related perf problems of that other approach and can be safely toggled on and 
> off while under production load.)
>  * CPU profiling.
>  * ALLOCATION profiling.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21926) Profiler servlet

2019-03-11 Thread Allan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790202#comment-16790202
 ] 

Allan Yang commented on HBASE-21926:


[~apurtell], any progress on this on?

> Profiler servlet
> 
>
> Key: HBASE-21926
> URL: https://issues.apache.org/jira/browse/HBASE-21926
> Project: HBase
>  Issue Type: New Feature
>  Components: master, Operability, regionserver
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 1.6.0, 2.3.0
>
>
> HIVE-20202 describes how Hive added a web endpoint for online in production 
> profiling based on async-profiler. The endpoint was added as a servlet to 
> httpserver and supports retrieval of flamegraphs compiled from the profiler 
> trace. Async profiler 
> ([https://github.com/jvm-profiling-tools/async-profiler] ) can also profile 
> heap allocations, lock contention, and HW performance counters in addition to 
> CPU.
> The profiling overhead is pretty low and is safe to run in production. The 
> async-profiler project measured and describes CPU and memory overheads on 
> these issues: 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/14] and 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/131] 
> We have an httpserver based servlet stack so we can use HIVE-20202 as an 
> implementation template for a similar feature for HBase daemons. Ideally we 
> achieve these requirements:
>  * Retrieve flamegraph SVG generated from latest profile trace.
>  * Online enable and disable of profiling activity. (async-profiler does not 
> do instrumentation based profiling so this should not cause the code gen 
> related perf problems of that other approach and can be safely toggled on and 
> off while under production load.)
>  * CPU profiling.
>  * ALLOCATION profiling.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-11 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21935:
--
Attachment: HBASE-21935.branch-2.0.006.patch

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.1.001.patch, 
> HBASE-21935.branch-2.1.002.patch, HBASE-21935.branch-2.1.003.patch, 
> HBASE-21935.branch-2.1.004.patch, HBASE-21935.branch-2.1.005.patch, 
> HBASE-21935.branch-2.1.006.patch, HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Geoffrey Jacoby (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790191#comment-16790191
 ] 

Geoffrey Jacoby commented on HBASE-22032:
-

[~openinx] - Turns out that the NullPointerException I was tracking down is 
more likely coming from a subclass of KeyValue in Phoenix -- see PHOENIX-5188 
-- but still seems good to have a check at the HBase level. Thanks!

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.4, 2.1.3
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch, HBASE-22032.v02.patch, 
> HBASE-22032.v03.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated HBASE-22032:

Status: Patch Available  (was: Open)

Fixed another checkstyle warning. 

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3, 2.0.4, 3.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch, HBASE-22032.v02.patch, 
> HBASE-22032.v03.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated HBASE-22032:

Status: Open  (was: Patch Available)

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3, 2.0.4, 3.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch, HBASE-22032.v02.patch, 
> HBASE-22032.v03.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated HBASE-22032:

Attachment: HBASE-22032.v03.patch

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.4, 2.1.3
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch, HBASE-22032.v02.patch, 
> HBASE-22032.v03.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22037) Re-enable TestAvoidCellReferencesIntoShippedBlocks

2019-03-11 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-22037:
-

 Summary: Re-enable TestAvoidCellReferencesIntoShippedBlocks
 Key: HBASE-22037
 URL: https://issues.apache.org/jira/browse/HBASE-22037
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22036) Rewrite TestScannerHeartbeatMessages

2019-03-11 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-22036:
-

 Summary: Rewrite TestScannerHeartbeatMessages
 Key: HBASE-22036
 URL: https://issues.apache.org/jira/browse/HBASE-22036
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790179#comment-16790179
 ] 

Hudson commented on HBASE-20952:


Results for branch HBASE-20952
[build #73 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/73/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/73//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/73//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/73//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and 
> backup Replication has the use-case for "tail"'ing the WAL which we 
> should provide via our new API. B doesn't do anything fancy (IIRC). We 
> should make sure all consumers are generally going to be OK with the API we 
> create.
> The API may be "OK" (or OK in a part). We need to also consider other methods 
> which were "bolted" on such as {{AbstractFSWAL}} and 
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the 
> {{WALSplitter}} should also be looked at to use WAL-APIs only).
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21717) Implement Connection based on AsyncConnection

2019-03-11 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21717:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch HBASE-21512.

> Implement Connection based on AsyncConnection
> -
>
> Key: HBASE-21717
> URL: https://issues.apache.org/jira/browse/HBASE-21717
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: HBASE-21512
>
> Attachments: HBASE-21717-HBASE-21512-v1.patch, 
> HBASE-21717-HBASE-21512-v10.patch, HBASE-21717-HBASE-21512-v2.patch, 
> HBASE-21717-HBASE-21512-v3.patch, HBASE-21717-HBASE-21512-v4.patch, 
> HBASE-21717-HBASE-21512-v5.patch, HBASE-21717-HBASE-21512-v6.patch, 
> HBASE-21717-HBASE-21512-v7.patch, HBASE-21717-HBASE-21512-v8.patch, 
> HBASE-21717-HBASE-21512-v9.patch, HBASE-21717-HBASE-21512-v9.patch, 
> HBASE-21717-HBASE-21512-v9.patch, HBASE-21717-HBASE-21512-v9.patch, 
> HBASE-21717-HBASE-21512.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22035) Backup /Incremental backup in HBase version 1.3.1

2019-03-11 Thread Sandipan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandipan updated HBASE-22035:
-
Summary: Backup /Incremental backup in HBase version 1.3.1  (was: Backup 
/Increamental backup in HBase version 1.3.1)

> Backup /Incremental backup in HBase version 1.3.1
> -
>
> Key: HBASE-22035
> URL: https://issues.apache.org/jira/browse/HBASE-22035
> Project: HBase
>  Issue Type: Wish
>  Components: backuprestore
>Affects Versions: 1.3.1
> Environment: AWS EMR 5.10.0
>Reporter: Sandipan
>Priority: Major
>
> Hi All,
> I am looking to enable HBase backup and incremental backup in HBase 
> version 1.3.1. I tried applying the patches from HBASE-11085 and HBASE-19000 as per 
> the link below.
>  
> https://issues.apache.org/jira/browse/HBASE-11085
>  
> Version 1 ([https://reviews.apache.org/r/21492/])
>  * [^HBASE-11085-trunk-v1.patch]: incremental update/restore code
>  * [^HBASE-11085-trunk-v1-contains-HBASE-10900-trunk-v4.patch]: contain both 
> [^HBASE-11085-trunk-v1.patch] and [^HBASE-10900-trunk-v4.patch]
>  
> but I could see there are still some classes missing, like HLog, HLogUtil, etc.
> Can someone help with how to enable backup in HBase version 1.3.1?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22035) Backup /Increamental backup in HBase version 1.3.1

2019-03-11 Thread Sandipan (JIRA)
Sandipan created HBASE-22035:


 Summary: Backup /Increamental backup in HBase version 1.3.1
 Key: HBASE-22035
 URL: https://issues.apache.org/jira/browse/HBASE-22035
 Project: HBase
  Issue Type: Wish
  Components: backuprestore
Affects Versions: 1.3.1
 Environment: AWS EMR 5.10.0
Reporter: Sandipan


Hi All,

I am looking to enable HBase backup and incremental backup in HBase 
version 1.3.1. I tried applying the patches from HBASE-11085 and HBASE-19000 as per 
the link below.

 

https://issues.apache.org/jira/browse/HBASE-11085

 

Version 1 ([https://reviews.apache.org/r/21492/])
 * [^HBASE-11085-trunk-v1.patch]: incremental update/restore code
 * [^HBASE-11085-trunk-v1-contains-HBASE-10900-trunk-v4.patch]: contain both 
[^HBASE-11085-trunk-v1.patch] and [^HBASE-10900-trunk-v4.patch]

 

but I could see there are still some classes missing, like HLog, HLogUtil, etc.

Can someone help with how to enable backup in HBase version 1.3.1?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22015) UserPermission should be annotated as InterfaceAudience.Public

2019-03-11 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790153#comment-16790153
 ] 

Guanghao Zhang commented on HBASE-22015:


[~Yi Mei] The Admin method may be better to return UserPermission directly, as it is 
IA.Public now...

> UserPermission should be annotated as InterfaceAudience.Public
> --
>
> Key: HBASE-22015
> URL: https://issues.apache.org/jira/browse/HBASE-22015
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Blocker
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22015.master.001.patch
>
>
> HBASE-11318 marked UserPermission as InterfaceAudience.Private.
> HBASE-11452 introduced AccessControlClient#getUserPermissions, which returns a 
> UserPermission list, but the UserPermission class is Private.
> I also encountered the same problem when I wanted to move the getUserPermissions 
> method into the Admin API in HBASE-21911, otherwise the API of getUserPermissions 
> may be 
> {code:java}
> Map> getUserPermissions{code}
> So shall we mark UserPermission as Public? Discussions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21717) Implement Connection based on AsyncConnection

2019-03-11 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21717:
--
Component/s: Client
 asyncclient

> Implement Connection based on AsyncConnection
> -
>
> Key: HBASE-21717
> URL: https://issues.apache.org/jira/browse/HBASE-21717
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-21717-HBASE-21512-v1.patch, 
> HBASE-21717-HBASE-21512-v10.patch, HBASE-21717-HBASE-21512-v2.patch, 
> HBASE-21717-HBASE-21512-v3.patch, HBASE-21717-HBASE-21512-v4.patch, 
> HBASE-21717-HBASE-21512-v5.patch, HBASE-21717-HBASE-21512-v6.patch, 
> HBASE-21717-HBASE-21512-v7.patch, HBASE-21717-HBASE-21512-v8.patch, 
> HBASE-21717-HBASE-21512-v9.patch, HBASE-21717-HBASE-21512-v9.patch, 
> HBASE-21717-HBASE-21512-v9.patch, HBASE-21717-HBASE-21512-v9.patch, 
> HBASE-21717-HBASE-21512.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21717) Implement Connection based on AsyncConnection

2019-03-11 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21717:
--
Fix Version/s: HBASE-21512

> Implement Connection based on AsyncConnection
> -
>
> Key: HBASE-21717
> URL: https://issues.apache.org/jira/browse/HBASE-21717
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: HBASE-21512
>
> Attachments: HBASE-21717-HBASE-21512-v1.patch, 
> HBASE-21717-HBASE-21512-v10.patch, HBASE-21717-HBASE-21512-v2.patch, 
> HBASE-21717-HBASE-21512-v3.patch, HBASE-21717-HBASE-21512-v4.patch, 
> HBASE-21717-HBASE-21512-v5.patch, HBASE-21717-HBASE-21512-v6.patch, 
> HBASE-21717-HBASE-21512-v7.patch, HBASE-21717-HBASE-21512-v8.patch, 
> HBASE-21717-HBASE-21512-v9.patch, HBASE-21717-HBASE-21512-v9.patch, 
> HBASE-21717-HBASE-21512-v9.patch, HBASE-21717-HBASE-21512-v9.patch, 
> HBASE-21717-HBASE-21512.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22024) Clean up BUCK related things for C++ native client

2019-03-11 Thread Junhong Xu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junhong Xu updated HBASE-22024:
---
Attachment: HBASE-22024.HBASE-14850.v01.patch

> Clean up BUCK related things for C++ native client
> --
>
> Key: HBASE-22024
> URL: https://issues.apache.org/jira/browse/HBASE-22024
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Junhong Xu
>Assignee: Junhong Xu
>Priority: Minor
> Attachments: HBASE-22024.HBASE-14850.v01.patch
>
>
> BUCK is not supported for building the HBase C++ client, but there are still 
> some BUCK-related things left, like comments or configurations, which will confuse 
> people who are not familiar with that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21987) Simplify RSGroupInfoManagerImpl#flushConfig() for offline mode

2019-03-11 Thread Xiang Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790141#comment-16790141
 ] 

Xiang Li commented on HBASE-21987:
--

Thanks Xu Cang for the review!

> Simplify RSGroupInfoManagerImpl#flushConfig() for offline mode
> --
>
> Key: HBASE-21987
> URL: https://issues.apache.org/jira/browse/HBASE-21987
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>  Labels: rsgroup
> Fix For: 3.0.0, 2.2.0, 1.5.1, 2.2.1
>
> Attachments: HBASE-21987.branch-1.000.patch, 
> HBASE-21987.master.000.patch, HBASE-21987.master.001.patch, 
> HBASE-21987.master.002.patch, HBASE-21987.master.003.patch, 
> HBASE-21987.master.004.patch
>
>
> The logic to handle offline mode in 
> RSGroupInfoManagerImpl#flushConfig(Map<String, RSGroupInfo> newGroupMap) 
> could be simplified.
> {code:title=RSGroupInfoManagerImpl.java # flushConfig(Map<String, RSGroupInfo> newGroupMap)|borderStyle=solid}
> if (!isOnline()) {
> Map<String, RSGroupInfo> m = Maps.newHashMap(rsGroupMap);
> RSGroupInfo oldDefaultGroup = m.remove(RSGroupInfo.DEFAULT_GROUP);
> RSGroupInfo newDefaultGroup = 
> newGroupMap.remove(RSGroupInfo.DEFAULT_GROUP);
> if (!m.equals(newGroupMap) ||
> !oldDefaultGroup.getTables().equals(newDefaultGroup.getTables())) {
> throw new IOException("Only default servers can be updated during 
> offline mode");
> }
> newGroupMap.put(RSGroupInfo.DEFAULT_GROUP, newDefaultGroup);
> rsGroupMap = newGroupMap;
> return;
>  }
> {code}
> The logic is to make a copy of the private member called "rsGroupMap" as m, 
> and get the default group out of m and newGroupMap and then compare. Then 
> restore the newGroupMap and update rsGroupMap.
> This function is called by 
> {code:title=RSGroupInfoManagerImpl.java # flushConfig() |borderStyle=solid}
> private synchronized void flushConfig() throws IOException {
> flushConfig(this.rsGroupMap);
> }
> {code}
> by RSGroupInfoManagerImpl.RSGroupStartupWorker#waitForGroupTableOnline() 
> while HMaster starts. In that case newGroupMap (the input of flushConfig()) is 
> this.rsGroupMap itself, so the comparison is not needed because they are the same.
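
A minimal sketch of the simplification the above implies (illustrative only, not the committed patch; since the only offline-mode caller passes this.rsGroupMap itself, the copy-and-compare of default groups can simply be dropped):

{code:java}
// Illustrative sketch: the only offline-mode caller is the no-arg flushConfig(),
// which passes this.rsGroupMap, so old and new maps are always the same object
// and the default-group copy/compare can be removed.
if (!isOnline()) {
  rsGroupMap = newGroupMap;  // same object when called from flushConfig(); nothing to verify
  return;
}
{code}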



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-12133) Add FastLongHistogram for metric computation

2019-03-11 Thread Abhishek Singh Chouhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated HBASE-12133:
---
Description: FastLongHistogram is a thread-safe class that estimate 
distribution of data and computes the quantiles. It's useful for computing 
aggregated metrics like P99/P95.  (was: _emphasized text_FastLongHistogram is a 
thread-safe class that estimate distribution of data and computes the 
quantiles. It's useful for computing aggregated metrics like P99/P95.
)

> Add FastLongHistogram for metric computation
> 
>
> Key: HBASE-12133
> URL: https://issues.apache.org/jira/browse/HBASE-12133
> Project: HBase
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 0.98.8
>Reporter: Yi Deng
>Assignee: Yi Deng
>Priority: Minor
>  Labels: histogram, metrics
> Fix For: 0.99.1, 1.3.0
>
> Attachments: 
> 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
> 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
> 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
> 12133.addendum.txt
>
>
> FastLongHistogram is a thread-safe class that estimate distribution of data 
> and computes the quantiles. It's useful for computing aggregated metrics like 
> P99/P95.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22002) Remove the deprecated methods in Admin interface

2019-03-11 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22002:
--
Attachment: HBASE-22002-v2.patch

> Remove the deprecated methods in Admin interface
> 
>
> Key: HBASE-22002
> URL: https://issues.apache.org/jira/browse/HBASE-22002
> Project: HBase
>  Issue Type: Task
>  Components: Admin, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-22002-v1.patch, HBASE-22002-v2.patch, 
> HBASE-22002.patch
>
>
> For API cleanup, and will make the work in HBASE-21718 a little easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-12133) Add FastLongHistogram for metric computation

2019-03-11 Thread Abhishek Singh Chouhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated HBASE-12133:
---
Description: 
_emphasized text_FastLongHistogram is a thread-safe class that estimate 
distribution of data and computes the quantiles. It's useful for computing 
aggregated metrics like P99/P95.


  was:
FastLongHistogram is a thread-safe class that estimate distribution of data and 
computes the quantiles. It's useful for computing aggregated metrics like 
P99/P95.



> Add FastLongHistogram for metric computation
> 
>
> Key: HBASE-12133
> URL: https://issues.apache.org/jira/browse/HBASE-12133
> Project: HBase
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 0.98.8
>Reporter: Yi Deng
>Assignee: Yi Deng
>Priority: Minor
>  Labels: histogram, metrics
> Fix For: 0.99.1, 1.3.0
>
> Attachments: 
> 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
> 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
> 0001-Add-FastLongHistogram-for-fast-histogram-estimation.patch, 
> 12133.addendum.txt
>
>
> _emphasized text_FastLongHistogram is a thread-safe class that estimate 
> distribution of data and computes the quantiles. It's useful for computing 
> aggregated metrics like P99/P95.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790127#comment-16790127
 ] 

Zheng Hu commented on HBASE-22032:
--

LGTM, [~gjacoby], please help to fix this checkstyle: 
{code}
./hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java:628:
//can't add to testCheckKeyValueBytesFailureCase because it goes through the 
InputStream KeyValue API: Line is longer than 100 characters (found 105). 
[LineLength]
{code}
BTW, in what case did you find this NullPointerException?

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.4, 2.1.3
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch, HBASE-22032.v02.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21970) Document that how to upgrade from 2.0 or 2.1 to 2.2+

2019-03-11 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-21970:
---
Fix Version/s: 2.2.0

> Document that how to upgrade from 2.0 or 2.1 to 2.2+
> 
>
> Key: HBASE-21970
> URL: https://issues.apache.org/jira/browse/HBASE-21970
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21970.master.001.patch, 
> HBASE-21970.master.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-21956) Auto-generate api report in release script

2019-03-11 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-21956.
---
Resolution: Duplicate

Parent scripts do this.

> Auto-generate api report in release script
> --
>
> Key: HBASE-21956
> URL: https://issues.apache.org/jira/browse/HBASE-21956
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> Integrate the generation of API doc into release script.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22027) Move non-MR parts of TokenUtil into hbase-client

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790065#comment-16790065
 ] 

Hadoop QA commented on HBASE-22027:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
32s{color} | {color:red} hbase-client: The patch generated 4 new + 0 unchanged 
- 0 fixed = 4 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
16s{color} | {color:red} hbase-server: The patch generated 2 new + 2 unchanged 
- 3 fixed = 4 total (was 5) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}132m  
9s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-22027 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962034/0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 1c2ee39c0722 4.4.0-139-generic #165~14.04.1-Ubuntu SMP 

[jira] [Commented] (HBASE-22025) RAT check fails in nightlies; fails on (old) test data files.

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790052#comment-16790052
 ] 

Hudson commented on HBASE-22025:


Results for branch branch-2.2
[build #100 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/100/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/100//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/100//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/100//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/100//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> RAT check fails in nightlies; fails on (old) test data files.
> -
>
> Key: HBASE-22025
> URL: https://issues.apache.org/jira/browse/HBASE-22025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.4, 2.2.1
>
> Attachments: HBASE-22025.branch-2.1.001.patch
>
>
> The nightly runs where we check RM steps fails in branch-2.1 because the rat 
> test complains about old test data files not having licenses. See HBASE-22022 
> for how we turned up this issue. This JIRA adds exclusions for these files 
> that cause failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22011) ThriftUtilities.getFromThrift should set filter when not set columns

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790051#comment-16790051
 ] 

Hudson commented on HBASE-22011:


Results for branch branch-2.2
[build #100 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/100/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/100//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/100//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/100//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/100//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> ThriftUtilities.getFromThrift should set filter when not set columns
> 
>
> Key: HBASE-22011
> URL: https://issues.apache.org/jira/browse/HBASE-22011
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Major
> Fix For: 2.2.1
>
> Attachments: HBASE-22011-branch-2-v1.patch, 
> HBASE-22011-branch-2.2-v1.patch, HBASE-22011-v1.patch
>
>
> In ThriftUtilities.getFromThrift, if the TGet is without columns, the filter is ignored.
> {code:java}
> if (!in.isSetColumns()) {
>  return out;
> }
> for (TColumn column : in.getColumns()) {
>  if (column.isSetQualifier()) {
>  out.addColumn(column.getFamily(), column.getQualifier());
>  } else {
>  out.addFamily(column.getFamily());
>  }
> }
> if (in.isSetFilterBytes()) {
>  out.setFilter(filterFromThrift(in.getFilterBytes()));
> }{code}
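
A minimal sketch of the fix direction implied by the summary (illustrative only, not the committed patch): apply the filter before the "no columns" early return, so a TGet without columns still honors its filter.

{code:java}
// Illustrative sketch: set the filter before the early return, so it is
// applied even when the TGet has no columns set.
if (in.isSetFilterBytes()) {
  out.setFilter(filterFromThrift(in.getFilterBytes()));
}
if (!in.isSetColumns()) {
  return out;
}
for (TColumn column : in.getColumns()) {
  if (column.isSetQualifier()) {
    out.addColumn(column.getFamily(), column.getQualifier());
  } else {
    out.addFamily(column.getFamily());
  }
}
{code}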



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790035#comment-16790035
 ] 

Hadoop QA commented on HBASE-22032:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
29s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
24s{color} | {color:red} hbase-common: The patch generated 1 new + 36 unchanged 
- 0 fixed = 37 total (was 36) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
29s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-22032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962040/HBASE-22032.v02.patch 
|
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux fc15d7acea7e 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 648fb72702 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16341/artifact/patchprocess/diff-checkstyle-hbase-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16341/testReport/ |
| Max. process+thread count | 309 (vs. ulimit of 1) |
| modules | C: hbase-common U: hbase-common |
| Console output | 

[jira] [Updated] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated HBASE-22032:

Attachment: HBASE-22032.v02.patch

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.4, 2.1.3
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch, HBASE-22032.v02.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated HBASE-22032:

Status: Open  (was: Patch Available)

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3, 2.0.4, 3.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch, HBASE-22032.v02.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated HBASE-22032:

Status: Patch Available  (was: Open)

v2 patch to fix some checkstyle line length warnings.

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3, 2.0.4, 3.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch, HBASE-22032.v02.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22002) Remove the deprecated methods in Admin interface

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790019#comment-16790019
 ] 

Hadoop QA commented on HBASE-22002:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 93 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 38s{color} 
| {color:red} hbase-client generated 10 new + 76 unchanged - 15 fixed = 86 
total (was 91) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} The patch passed checkstyle in hbase-common {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} hbase-client: The patch generated 0 new + 170 
unchanged - 49 fixed = 170 total (was 219) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} hbase-server: The patch generated 0 new + 707 
unchanged - 51 fixed = 707 total (was 758) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} The patch passed checkstyle in hbase-mapreduce 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} The patch passed checkstyle in hbase-thrift {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch passed checkstyle in hbase-shell {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} The patch passed checkstyle in hbase-endpoint 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} hbase-backup: The patch generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} hbase-it: The patch generated 0 new + 100 unchanged 
- 1 fixed = 100 total (was 101) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} hbase-rest: The patch generated 0 new + 33 unchanged 
- 1 fixed = 33 total (was 34) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch passed checkstyle in hbase-client-project 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch passed checkstyle in 
hbase-shaded-client-project {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 39 

[jira] [Commented] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789993#comment-16789993
 ] 

Hadoop QA commented on HBASE-22032:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
23s{color} | {color:red} hbase-common: The patch generated 2 new + 36 unchanged 
- 0 fixed = 38 total (was 36) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
47s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-22032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962039/HBASE-22032.v01.patch 
|
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 914f07dfd36b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 648fb72702 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16340/artifact/patchprocess/diff-checkstyle-hbase-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16340/testReport/ |
| Max. process+thread count | 331 (vs. ulimit of 1) |
| modules | C: hbase-common U: hbase-common |
| Console output | 

[jira] [Commented] (HBASE-22027) Move non-MR parts of TokenUtil into hbase-client

2019-03-11 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HBASE-22027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789980#comment-16789980
 ] 

Stig Rohde Døssing commented on HBASE-22027:


I only need obtainAndCacheToken. The other 4 methods expose protobufs or the 
private AuthenticationTokenIdentifier/Token. I can't make any of the methods 
private, as they need to be accessible to both 
ClientTokenUtil.obtainAndCacheToken and the forwarding methods in TokenUtil. 
How about I make the methods package private in hbase-client, leave the 
forwarding from TokenUtil to ClientTokenUtil in place, and deprecate the 
corresponding public methods in TokenUtil in hbase-server?
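
For illustration, a rough sketch of the forwarding-plus-deprecation arrangement described above (names and signatures are assumptions here, not the committed API):
{code:java}
// Hypothetical sketch: TokenUtil (hbase-server) keeps a deprecated public entry
// point that simply delegates to the new ClientTokenUtil (hbase-client).
@Deprecated
public static void obtainAndCacheToken(final Connection conn, final User user)
    throws IOException, InterruptedException {
  ClientTokenUtil.obtainAndCacheToken(conn, user);
}
{code}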

> Move non-MR parts of TokenUtil into hbase-client
> 
>
> Key: HBASE-22027
> URL: https://issues.apache.org/jira/browse/HBASE-22027
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3
>Reporter: Stig Rohde Døssing
>Priority: Major
> Attachments: 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch, 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch
>
>
> HBASE-14208 moved TokenUtil from hbase-client to hbase-server.
> I have a project depending on hbase-client 1.4.4, which I'd like to upgrade 
> to 2.1.3. My project uses TokenUtil (specifically obtainAndCacheToken), which 
> is included in hbase-client 1.4.4. At the same time I also have a dependency 
> on Jetty 9.4, which is incompatible with the current version used by Hadoop. 
> I can fix this for hbase-client by using hbase-shaded-client instead, since 
> Jetty is shaded in this jar, but TokenUtil is only present in hbase-server as 
> of 2.0.0. Since there is no hbase-shaded-server, I can't use TokenUtil and 
> Jetty 9.4 at the same time.
> TokenUtil can be split into server-only parts, and a client relevant part 
> that can go back to hbase-client. The TokenUtil in hbase-server can retain 
> the moved methods, and delegate to the util in hbase-client if backward 
> compatibility is a concern.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22027) Move non-MR parts of TokenUtil into hbase-client

2019-03-11 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HBASE-22027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789980#comment-16789980
 ] 

Stig Rohde Døssing edited comment on HBASE-22027 at 3/11/19 9:42 PM:
-

I only need obtainAndCacheToken. The other 4 methods expose protobufs or the 
private AuthenticationTokenIdentifier/Token. I can't make any of the methods 
private, as they need to be accessible to both 
ClientTokenUtil.obtainAndCacheToken and the forwarding methods in TokenUtil. 
How about I make the methods package private in hbase-client, leave the 
forwarding from TokenUtil to ClientTokenUtil in place, and deprecate the 
corresponding public methods in TokenUtil in hbase-server?

Edit: I figure the methods are still useful internally. Maybe move the 4 
methods to another new class in hbase-client marked InterfaceAudience.Private, 
and leave ClientTokenUtil as public with only the obtainAndCacheToken method 
defined?


was (Author: srdo):
I only need obtainAndCacheToken. The other 4 methods expose protobufs or the 
private AuthenticationTokenIdentifier/Token. I can't make any of the methods 
private, as they need to be accessible to both 
ClientTokenUtil.obtainAndCacheToken and the forwarding methods in TokenUtil. 
How about I make the methods package private in hbase-client, leave the 
forwarding from TokenUtil to ClientTokenUtil in place, and deprecate the 
corresponding public methods in TokenUtil in hbase-server?

> Move non-MR parts of TokenUtil into hbase-client
> 
>
> Key: HBASE-22027
> URL: https://issues.apache.org/jira/browse/HBASE-22027
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3
>Reporter: Stig Rohde Døssing
>Priority: Major
> Attachments: 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch, 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch
>
>
> HBASE-14208 moved TokenUtil from hbase-client to hbase-server.
> I have a project depending on hbase-client 1.4.4, which I'd like to upgrade 
> to 2.1.3. My project uses TokenUtil (specifically obtainAndCacheToken), which 
> is included in hbase-client 1.4.4. At the same time I also have a dependency 
> on Jetty 9.4, which is incompatible with the current version used by Hadoop. 
> I can fix this for hbase-client by using hbase-shaded-client instead, since 
> Jetty is shaded in this jar, but TokenUtil is only present in hbase-server as 
> of 2.0.0. Since there is no hbase-shaded-server, I can't use TokenUtil and 
> Jetty 9.4 at the same time.
> TokenUtil can be split into server-only parts, and a client relevant part 
> that can go back to hbase-client. The TokenUtil in hbase-server can retain 
> the moved methods, and delegate to the util in hbase-client if backward 
> compatibility is a concern.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21991) Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements

2019-03-11 Thread Xu Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789971#comment-16789971
 ] 

Xu Cang commented on HBASE-21991:
-

+1 LGTM.

Thanks [~jatsakthi]
I will commit to master and branch-2.
Would you mind providing a branch-1 patch if applicable? Thank you.


> Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
> improvements
> 
>
> Key: HBASE-21991
> URL: https://issues.apache.org/jira/browse/HBASE-21991
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, metrics
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Attachments: hbase-21991.master.001.patch, 
> hbase-21991.master.002.patch, hbase-21991.master.003.patch, 
> hbase-21991.master.004.patch, hbase-21991.master.005.patch, 
> hbase-21991.master.006.patch
>
>
> Here is a list of the issues related to the MetaMetrics implementation:
> +*Bugs*+:
>  # [_Lossy counting for top-k_] *Faulty remove logic of non-eligible meters*: 
> Under certain conditions, we might end up storing/exposing all the meters 
> rather than top-k-ish
>  # MetaMetrics can throw NPE resulting in aborting of the RS because of a 
> *Race Condition*.
> +*Improvements*+:
>  # With a high number of regions in the cluster, exposing metrics for each 
> region blows up the JMX output from ~140 KB to 100+ MB depending on the number of 
> regions. It's better to use *lossy counting to maintain top-k for region 
> metrics* as well.
>  # As the lossy meters do not represent actual counts, I think it'll be 
> better to *rename the meters to include "lossy" in the name*. It would be 
> more informative while monitoring the metrics and there would be less 
> confusion between actual counts and lossy counts.
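
As a side note, the "lossy counting to maintain top-k" mentioned above refers to the standard approximate-counting scheme; a toy sketch of the idea (not the MetaMetrics code itself) looks roughly like this:
{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy illustration of lossy counting: keep approximate per-key counters and
// periodically prune keys whose counts cannot be frequent, so only the heavy
// hitters ("top-k-ish") survive in memory.
class LossyCounter {
  private final Map<String, Long> counts = new HashMap<>();
  private final long bucketSize;      // roughly 1 / allowed error rate
  private long seen;
  private long currentBucket = 1;

  LossyCounter(long bucketSize) {
    this.bucketSize = bucketSize;
  }

  void add(String key) {
    counts.merge(key, 1L, Long::sum);
    if (++seen % bucketSize == 0) {
      // Drop entries whose count is at or below the current bucket index.
      counts.values().removeIf(c -> c <= currentBucket);
      currentBucket++;
    }
  }

  Map<String, Long> snapshot() {
    return new HashMap<>(counts);
  }
}
{code}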



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21991) Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements

2019-03-11 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789972#comment-16789972
 ] 

Sakthi commented on HBASE-21991:


I will [~xucang]. Thank you. Also, I would like this in branch-2.1 as well. And 
I guess for this patch to be applicable there, we need HBASE-21800 there as 
well.

> Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
> improvements
> 
>
> Key: HBASE-21991
> URL: https://issues.apache.org/jira/browse/HBASE-21991
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, metrics
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Attachments: hbase-21991.master.001.patch, 
> hbase-21991.master.002.patch, hbase-21991.master.003.patch, 
> hbase-21991.master.004.patch, hbase-21991.master.005.patch, 
> hbase-21991.master.006.patch
>
>
> Here is a list of the issues related to the MetaMetrics implementation:
> +*Bugs*+:
>  # [_Lossy counting for top-k_] *Faulty remove logic of non-eligible meters*: 
> Under certain conditions, we might end up storing/exposing all the meters 
> rather than top-k-ish
>  # MetaMetrics can throw NPE resulting in aborting of the RS because of a 
> *Race Condition*.
> +*Improvements*+:
>  # With a high number of regions in the cluster, exposing metrics for each 
> region blows up the JMX output from ~140 KB to 100+ MB depending on the number of 
> regions. It's better to use *lossy counting to maintain top-k for region 
> metrics* as well.
>  # As the lossy meters do not represent actual counts, I think it'll be 
> better to *rename the meters to include "lossy" in the name*. It would be 
> more informative while monitoring the metrics and there would be less 
> confusion between actual counts and lossy counts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated HBASE-22032:

Status: Patch Available  (was: Open)

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3, 2.0.4, 3.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated HBASE-22032:

Attachment: HBASE-22032.v01.patch

> KeyValue validation should check for null byte array
> 
>
> Key: HBASE-22032
> URL: https://issues.apache.org/jira/browse/HBASE-22032
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.4, 2.1.3
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: HBASE-22032.v01.patch
>
>
> HBASE-21401 added some nice validation checks to throw precise errors if a 
> KeyValue is constructed using invalid parameters. However it implicitly 
> assumes that the KeyValue buffer is not null. It should validate this 
> assumption and alert accordingly rather than throwing an NPE from an 
> unrelated check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21987) Simplify RSGroupInfoManagerImpl#flushConfig() for offline mode

2019-03-11 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-21987:

   Resolution: Fixed
Fix Version/s: 2.2.1
   1.5.1
   2.2.0
   3.0.0
   Status: Resolved  (was: Patch Available)

> Simplify RSGroupInfoManagerImpl#flushConfig() for offline mode
> --
>
> Key: HBASE-21987
> URL: https://issues.apache.org/jira/browse/HBASE-21987
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>  Labels: rsgroup
> Fix For: 3.0.0, 2.2.0, 1.5.1, 2.2.1
>
> Attachments: HBASE-21987.branch-1.000.patch, 
> HBASE-21987.master.000.patch, HBASE-21987.master.001.patch, 
> HBASE-21987.master.002.patch, HBASE-21987.master.003.patch, 
> HBASE-21987.master.004.patch
>
>
> The logic to handle offline mode in 
> RSGroupInfoManagerImpl#flushConfig(Map<String, RSGroupInfo> newGroupMap) 
> could be simplified.
> {code:title=RSGroupInfoManagerImpl.java # flushConfig(Map<String, RSGroupInfo> newGroupMap)|borderStyle=solid}
> if (!isOnline()) {
> Map<String, RSGroupInfo> m = Maps.newHashMap(rsGroupMap);
> RSGroupInfo oldDefaultGroup = m.remove(RSGroupInfo.DEFAULT_GROUP);
> RSGroupInfo newDefaultGroup = 
> newGroupMap.remove(RSGroupInfo.DEFAULT_GROUP);
> if (!m.equals(newGroupMap) ||
> !oldDefaultGroup.getTables().equals(newDefaultGroup.getTables())) {
> throw new IOException("Only default servers can be updated during 
> offline mode");
> }
> newGroupMap.put(RSGroupInfo.DEFAULT_GROUP, newDefaultGroup);
> rsGroupMap = newGroupMap;
> return;
>  }
> {code}
> The logic is to make a copy of the private member called "rsGroupMap" as m, 
> and get the default group out of m and newGroupMap and then compare. Then 
> restore the newGroupMap and update rsGroupMap.
> This function is called by 
> {code:title=RSGroupInfoManagerImpl.java # flushConfig() |borderStyle=solid}
> private synchronized void flushConfig() throws IOException {
> flushConfig(this.rsGroupMap);
> }
> {code}
> by RSGroupInfoManagerImpl.RSGroupStartupWorker#waitForGroupTableOnline() 
> during HMaster startup, in which newGroupMap (the input of flushConfig()) is 
> this.rsGroupMap, the comparison is not needed, because they are the same.
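
A hypothetical sketch of the simplification being suggested (illustrative only, not the committed patch): since the only offline-mode caller passes the manager's own map, the copy-and-compare block can be short-circuited.
{code:java}
// Sketch only: in offline mode flushConfig() is invoked with this.rsGroupMap itself,
// so the old and new maps are the same object and there is nothing to compare.
if (!isOnline()) {
  if (newGroupMap == this.rsGroupMap) {
    return;
  }
  // Any other offline-mode update would still need the original validation.
}
{code}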



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21987) Simplify RSGroupInfoManagerImpl#flushConfig() for offline mode

2019-03-11 Thread Xu Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789968#comment-16789968
 ] 

Xu Cang commented on HBASE-21987:
-

Pushed to branch-1.
Thanks [~water]! 

> Simplify RSGroupInfoManagerImpl#flushConfig() for offline mode
> --
>
> Key: HBASE-21987
> URL: https://issues.apache.org/jira/browse/HBASE-21987
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>  Labels: rsgroup
> Attachments: HBASE-21987.branch-1.000.patch, 
> HBASE-21987.master.000.patch, HBASE-21987.master.001.patch, 
> HBASE-21987.master.002.patch, HBASE-21987.master.003.patch, 
> HBASE-21987.master.004.patch
>
>
> The logic to handle offline mode in 
> RSGroupInfoManagerImpl#flushConfig(Map<String, RSGroupInfo> newGroupMap) 
> could be simplified.
> {code:title=RSGroupInfoManagerImpl.java # flushConfig(Map<String, RSGroupInfo> newGroupMap)|borderStyle=solid}
> if (!isOnline()) {
> Map<String, RSGroupInfo> m = Maps.newHashMap(rsGroupMap);
> RSGroupInfo oldDefaultGroup = m.remove(RSGroupInfo.DEFAULT_GROUP);
> RSGroupInfo newDefaultGroup = 
> newGroupMap.remove(RSGroupInfo.DEFAULT_GROUP);
> if (!m.equals(newGroupMap) ||
> !oldDefaultGroup.getTables().equals(newDefaultGroup.getTables())) {
> throw new IOException("Only default servers can be updated during 
> offline mode");
> }
> newGroupMap.put(RSGroupInfo.DEFAULT_GROUP, newDefaultGroup);
> rsGroupMap = newGroupMap;
> return;
>  }
> {code}
> The logic is to make a copy of the private member called "rsGroupMap" as m, 
> and get the default group out of m and newGroupMap and then compare. Then 
> restore the newGroupMap and update rsGroupMap.
> This function is called by 
> {code:title=RSGroupInfoManagerImpl.java # flushConfig() |borderStyle=solid}
> private synchronized void flushConfig() throws IOException {
> flushConfig(this.rsGroupMap);
> }
> {code}
> by RSGroupInfoManagerImpl.RSGroupStartupWorker#waitForGroupTableOnline() 
> during HMaster startup, in which newGroupMap (the input of flushConfig()) is 
> this.rsGroupMap, the comparison is not needed, because they are the same.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22027) Move non-MR parts of TokenUtil into hbase-client

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789961#comment-16789961
 ] 

stack edited comment on HBASE-22027 at 3/11/19 8:58 PM:


That said, [~Apache9] raises a problem w/ the patch [~Srdo] in that we need to 
fix exposing protobufs to users. toToken is public API in a public class. We 
should be deprecating/removing methods like this. Can the toTokens be made 
private in your new class?


was (Author: stack):
That said, [~Apache9] raises a problem w/ the patch [~Srdo] in that we need to 
fix exposing protobufs to users. toToken is public API in a public class. We 
should be deprecating/removing methods like this.

> Move non-MR parts of TokenUtil into hbase-client
> 
>
> Key: HBASE-22027
> URL: https://issues.apache.org/jira/browse/HBASE-22027
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3
>Reporter: Stig Rohde Døssing
>Priority: Major
> Attachments: 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch, 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch
>
>
> HBASE-14208 moved TokenUtil from hbase-client to hbase-server.
> I have a project depending on hbase-client 1.4.4, which I'd like to upgrade 
> to 2.1.3. My project uses TokenUtil (specifically obtainAndCacheToken), which 
> is included in hbase-client 1.4.4. At the same time I also have a dependency 
> on Jetty 9.4, which is incompatible with the current version used by Hadoop. 
> I can fix this for hbase-client by using hbase-shaded-client instead, since 
> Jetty is shaded in this jar, but TokenUtil is only present in hbase-server as 
> of 2.0.0. Since there is no hbase-shaded-server, I can't use TokenUtil and 
> Jetty 9.4 at the same time.
> TokenUtil can be split into server-only parts, and a client relevant part 
> that can go back to hbase-client. The TokenUtil in hbase-server can retain 
> the moved methods, and delegate to the util in hbase-client if backward 
> compatibility is a concern.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22027) Move non-MR parts of TokenUtil into hbase-client

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789961#comment-16789961
 ] 

stack commented on HBASE-22027:
---

That said, [~Apache9] raises a problem w/ the patch [~Srdo] in that we need to 
fix exposing protobufs to users. toToken is public API in a public class. We 
should be deprecating/removing methods like this.

> Move non-MR parts of TokenUtil into hbase-client
> 
>
> Key: HBASE-22027
> URL: https://issues.apache.org/jira/browse/HBASE-22027
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3
>Reporter: Stig Rohde Døssing
>Priority: Major
> Attachments: 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch, 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch
>
>
> HBASE-14208 moved TokenUtil from hbase-client to hbase-server.
> I have a project depending on hbase-client 1.4.4, which I'd like to upgrade 
> to 2.1.3. My project uses TokenUtil (specifically obtainAndCacheToken), which 
> is included in hbase-client 1.4.4. At the same time I also have a dependency 
> on Jetty 9.4, which is incompatible with the current version used by Hadoop. 
> I can fix this for hbase-client by using hbase-shaded-client instead, since 
> Jetty is shaded in this jar, but TokenUtil is only present in hbase-server as 
> of 2.0.0. Since there is no hbase-shaded-server, I can't use TokenUtil and 
> Jetty 9.4 at the same time.
> TokenUtil can be split into server-only parts, and a client relevant part 
> that can go back to hbase-client. The TokenUtil in hbase-server can retain 
> the moved methods, and delegate to the util in hbase-client if backward 
> compatibility is a concern.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22027) Move non-MR parts of TokenUtil into hbase-client

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789957#comment-16789957
 ] 

stack commented on HBASE-22027:
---

Retry... to see if the failures are legit.

> Move non-MR parts of TokenUtil into hbase-client
> 
>
> Key: HBASE-22027
> URL: https://issues.apache.org/jira/browse/HBASE-22027
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3
>Reporter: Stig Rohde Døssing
>Priority: Major
> Attachments: 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch, 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch
>
>
> HBASE-14208 moved TokenUtil from hbase-client to hbase-server.
> I have a project depending on hbase-client 1.4.4, which I'd like to upgrade 
> to 2.1.3. My project uses TokenUtil (specifically obtainAndCacheToken), which 
> is included in hbase-client 1.4.4. At the same time I also have a dependency 
> on Jetty 9.4, which is incompatible with the current version used by Hadoop. 
> I can fix this for hbase-client by using hbase-shaded-client instead, since 
> Jetty is shaded in this jar, but TokenUtil is only present in hbase-server as 
> of 2.0.0. Since there is no hbase-shaded-server, I can't use TokenUtil and 
> Jetty 9.4 at the same time.
> TokenUtil can be split into server-only parts, and a client relevant part 
> that can go back to hbase-client. The TokenUtil in hbase-server can retain 
> the moved methods, and delegate to the util in hbase-client if backward 
> compatibility is a concern.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22027) Move non-MR parts of TokenUtil into hbase-client

2019-03-11 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-22027:
--
Attachment: 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch

> Move non-MR parts of TokenUtil into hbase-client
> 
>
> Key: HBASE-22027
> URL: https://issues.apache.org/jira/browse/HBASE-22027
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.3
>Reporter: Stig Rohde Døssing
>Priority: Major
> Attachments: 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch, 
> 0001-HBase-22027-Split-non-MR-related-parts-of-TokenUtil-.patch
>
>
> HBASE-14208 moved TokenUtil from hbase-client to hbase-server.
> I have a project depending on hbase-client 1.4.4, which I'd like to upgrade 
> to 2.1.3. My project uses TokenUtil (specifically obtainAndCacheToken), which 
> is included in hbase-client 1.4.4. At the same time I also have a dependency 
> on Jetty 9.4, which is incompatible with the current version used by Hadoop. 
> I can fix this for hbase-client by using hbase-shaded-client instead, since 
> Jetty is shaded in this jar, but TokenUtil is only present in hbase-server as 
> of 2.0.0. Since there is no hbase-shaded-server, I can't use TokenUtil and 
> Jetty 9.4 at the same time.
> TokenUtil can be split into server-only parts, and a client relevant part 
> that can go back to hbase-client. The TokenUtil in hbase-server can retain 
> the moved methods, and delegate to the util in hbase-client if backward 
> compatibility is a concern.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22022) nightly fails rat check down in the dev-support/hbase_nightly_source-artifact.sh check

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789939#comment-16789939
 ] 

stack edited comment on HBASE-22022 at 3/11/19 8:42 PM:


This ok by you [~busbey] to commit everywhere? Main objection is that it is 
ugly, I think


was (Author: stack):
This ok by you [~busbey] to commit everywhere?

> nightly fails rat check down in the 
> dev-support/hbase_nightly_source-artifact.sh check
> --
>
> Key: HBASE-22022
> URL: https://issues.apache.org/jira/browse/HBASE-22022
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
> Attachments: HBASE-22022.branch-2.1.001.patch
>
>
> Nightlies include a nice check that runs through the rc-making steps. See 
> dev-support/hbase_nightly_source-artifact.sh. Currently the nightly is 
> failing here, which causes the nightly runs to fail even though all 
> tests often pass. It looks like the cause is the rat check. Unfortunately, running 
> the nightly script locally, all comes up smelling sweet -- it's a context thing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22022) nightly fails rat check down in the dev-support/hbase_nightly_source-artifact.sh check

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789939#comment-16789939
 ] 

stack commented on HBASE-22022:
---

This ok by you [~busbey] to commit everywhere?

> nightly fails rat check down in the 
> dev-support/hbase_nightly_source-artifact.sh check
> --
>
> Key: HBASE-22022
> URL: https://issues.apache.org/jira/browse/HBASE-22022
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
> Attachments: HBASE-22022.branch-2.1.001.patch
>
>
> Nightlies include a nice check that runs through the rc-making steps. See 
> dev-support/hbase_nightly_source-artifact.sh. Currently the nightly is 
> failing here, which causes the nightly runs to fail even though all 
> tests often pass. It looks like the cause is the rat check. Unfortunately, running 
> the nightly script locally, all comes up smelling sweet -- it's a context thing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789907#comment-16789907
 ] 

stack commented on HBASE-22029:
---

The exception in HBASE-20076 is the same and in the same place. Trying to figure 
out why it came back.

> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing a RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding an explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22025) RAT check fails in nightlies; fails on (old) test data files.

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789887#comment-16789887
 ] 

Hudson commented on HBASE-22025:


Results for branch branch-2
[build #1744 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> RAT check fails in nightlies; fails on (old) test data files.
> -
>
> Key: HBASE-22025
> URL: https://issues.apache.org/jira/browse/HBASE-22025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.4, 2.2.1
>
> Attachments: HBASE-22025.branch-2.1.001.patch
>
>
> The nightly runs where we check RM steps fail in branch-2.1 because the rat 
> test complains about old test data files not having licenses. See HBASE-22022 
> for how we turned up this issue. This JIRA adds exclusions for these files 
> that cause failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22001) Polish the Admin interface

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789885#comment-16789885
 ] 

Hudson commented on HBASE-22001:


Results for branch branch-2
[build #1744 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Polish the Admin interface
> --
>
> Key: HBASE-22001
> URL: https://issues.apache.org/jira/browse/HBASE-22001
> Project: HBase
>  Issue Type: Task
>  Components: Admin, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22001-v1.patch, HBASE-22001-v2.patch, 
> HBASE-22001-v3.patch, HBASE-22001-v3.patch, HBASE-22001-v4.patch, 
> HBASE-22001-v4.patch, HBASE-22001-v5.patch, HBASE-22001.patch
>
>
> The snapshot-related methods are not well declared; we missed several methods 
> which have the restoreAcl parameter. Also, the snapshotAsync method 
> returns nothing, which is a bit strange.
> And we can use default methods to reduce the code in HBaseAdmin and also in the 
> new Admin implementation in the future.
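
For illustration, a small sketch of the "default methods" idea (a hypothetical example, not necessarily how the patch implements it):
{code:java}
// Hypothetical illustration: a convenience overload implemented directly in the
// Admin interface, so HBaseAdmin (and any future Admin implementation) does not
// have to repeat the boilerplate.
default void snapshot(String snapshotName, TableName tableName)
    throws IOException, SnapshotCreationException, IllegalArgumentException {
  snapshot(new SnapshotDescription(snapshotName, tableName));
}
{code}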



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22011) ThriftUtilities.getFromThrift should set filter when not set columns

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789886#comment-16789886
 ] 

Hudson commented on HBASE-22011:


Results for branch branch-2
[build #1744 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1744//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ThriftUtilities.getFromThrift should set filter when not set columns
> 
>
> Key: HBASE-22011
> URL: https://issues.apache.org/jira/browse/HBASE-22011
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Major
> Fix For: 2.2.1
>
> Attachments: HBASE-22011-branch-2-v1.patch, 
> HBASE-22011-branch-2.2-v1.patch, HBASE-22011-v1.patch
>
>
> In ThriftUtilities.getFromThrift, if a TGet is without columns, the filter is ignored.
> {code:java}
> if (!in.isSetColumns()) {
>  return out;
> }
> for (TColumn column : in.getColumns()) {
>  if (column.isSetQualifier()) {
>  out.addColumn(column.getFamily(), column.getQualifier());
>  } else {
>  out.addFamily(column.getFamily());
>  }
> }
> if (in.isSetFilterBytes()) {
>  out.setFilter(filterFromThrift(in.getFilterBytes()));
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22032) KeyValue validation should check for null byte array

2019-03-11 Thread Geoffrey Jacoby (JIRA)
Geoffrey Jacoby created HBASE-22032:
---

 Summary: KeyValue validation should check for null byte array
 Key: HBASE-22032
 URL: https://issues.apache.org/jira/browse/HBASE-22032
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.1.3, 2.0.4, 3.0.0
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby


HBASE-21401 added some nice validation checks to throw precise errors if a 
KeyValue is constructed using invalid parameters. However it implicitly assumes 
that the KeyValue buffer is not null. It should validate this assumption and 
alert accordingly rather than throwing an NPE from an unrelated check. 
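
For illustration, the kind of guard being proposed might look roughly like this (hypothetical method name and message; not the committed patch):
{code:java}
// Hypothetical sketch: fail fast with a descriptive error instead of an NPE when
// the backing buffer of a KeyValue is null.
private static void checkKeyValueBytes(final byte[] buf, final int offset, final int length) {
  if (buf == null) {
    throw new IllegalArgumentException("Invalid to have null byte array in KeyValue.");
  }
  // ... the existing offset/length/field validation from HBASE-21401 follows ...
}
{code}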



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22034) Backport HBASE-21404 and HBASE-22032 to branch-1

2019-03-11 Thread Geoffrey Jacoby (JIRA)
Geoffrey Jacoby created HBASE-22034:
---

 Summary: Backport HBASE-21404 and HBASE-22032 to branch-1
 Key: HBASE-22034
 URL: https://issues.apache.org/jira/browse/HBASE-22034
 Project: HBase
  Issue Type: Improvement
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby


Branch-2 and master have good validation checks when constructing KeyValues. We 
should also have them on branch-1. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22033) Update to maven-javadoc-plugin 3.1.0 and switch to non-forking aggregate goals

2019-03-11 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-22033:
---

 Summary: Update to maven-javadoc-plugin 3.1.0 and switch to 
non-forking aggregate goals
 Key: HBASE-22033
 URL: https://issues.apache.org/jira/browse/HBASE-22033
 Project: HBase
  Issue Type: Task
  Components: build, website
Reporter: Sean Busbey


MJAVADOC-444 got into the 3.1.0 release of the maven-javadoc-plugin so now 
there are versions of the aggregate javadoc goals that don't include a forked 
build.

update our build to make use of this new feature. (a before/after on build time 
would be nice to know as well)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22033) Update to maven-javadoc-plugin 3.1.0 and switch to non-forking aggregate goals

2019-03-11 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789870#comment-16789870
 ] 

Sean Busbey commented on HBASE-22033:
-

ex:
http://maven.apache.org/plugins/maven-javadoc-plugin/examples/aggregate-nofork.html

http://maven.apache.org/plugins/maven-javadoc-plugin/aggregate-no-fork-mojo.html

http://maven.apache.org/plugins/maven-javadoc-plugin/test-aggregate-no-fork-mojo.html


> Update to maven-javadoc-plugin 3.1.0 and switch to non-forking aggregate goals
> --
>
> Key: HBASE-22033
> URL: https://issues.apache.org/jira/browse/HBASE-22033
> Project: HBase
>  Issue Type: Task
>  Components: build, website
>Reporter: Sean Busbey
>Priority: Major
>
> MJAVADOC-444 got into the 3.1.0 release of the maven-javadoc-plugin so now 
> there are versions of the aggregate javadoc goals that don't include a forked 
> build.
> update our build to make use of this new feature. (a before/after on build 
> time would be nice to know as well)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22025) RAT check fails in nightlies; fails on (old) test data files.

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789857#comment-16789857
 ] 

Hudson commented on HBASE-22025:


Results for branch master
[build #854 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/854/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/854//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/854//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/854//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> RAT check fails in nightlies; fails on (old) test data files.
> -
>
> Key: HBASE-22025
> URL: https://issues.apache.org/jira/browse/HBASE-22025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.4, 2.2.1
>
> Attachments: HBASE-22025.branch-2.1.001.patch
>
>
> The nightly runs where we check RM steps fail in branch-2.1 because the rat 
> test complains about old test data files not having licenses. See HBASE-22022 
> for how we turned up this issue. This JIRA adds exclusions for these files 
> that cause failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789853#comment-16789853
 ] 

stack commented on HBASE-22029:
---

Hmm... no good. Same failure.

Running the same mvn command from the failed build spot just has the build 
succeed and run to the end.

I searched the net and came up w/ HBASE-20076

> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing a RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding an explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789850#comment-16789850
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #22 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/22/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/22//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/22//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/22//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/22//console].


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the hfile into an on-heap byte[], then 
> copy that on-heap byte[] to the offheap bucket cache asynchronously. In my 
> 100% get performance test, I also observed frequent young gc; the 
> largest memory footprint in the young gen should be the on-heap block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of into a 
> byte[] to reduce young gc. We did not implement this before 
> because the older HDFS client had no ByteBuffer reading interface, but 2.7+ 
> supports this now, so we can fix it now, I think. 
> Will provide a patch and some perf-comparison for this.
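
As a rough illustration of the idea, a minimal sketch (not the actual HBASE-21879 patch): a block read could fill a caller-supplied ByteBuffer through the ByteBufferReadable path that HDFS 2.7+ exposes via FSDataInputStream.read(ByteBuffer), so no temporary on-heap byte[] is allocated, and the buffer could come from an off-heap pool such as the bucket cache allocator. The sketch assumes the underlying stream supports ByteBufferReadable; the class and method names are illustrative only.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.FSDataInputStream;

public final class ByteBufferBlockReader {
  /**
   * Reads exactly len bytes starting at offset into dst (which must have at least len
   * bytes remaining), using the ByteBuffer read path instead of an intermediate byte[].
   */
  public static void readBlockFully(FSDataInputStream in, long offset, ByteBuffer dst, int len)
      throws IOException {
    in.seek(offset);                        // position the stream at the block start
    dst.limit(dst.position() + len);        // cap the read at the block length
    while (dst.hasRemaining()) {
      int n = in.read(dst);                 // ByteBufferReadable read, HDFS 2.7+
      if (n < 0) {
        throw new IOException("Premature EOF, " + dst.remaining() + " bytes still expected");
      }
    }
  }
}
{code}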



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-11 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789847#comment-16789847
 ] 

Sean Busbey commented on HBASE-21935:
-

thanks for the heads up. got the correct one, I think.

{code}

Busbey-MBA:hbase busbey$ smart-apply-patch --project=hbase --committer 
HBASE-21935
Processing: HBASE-21935
WARNING: HBASE-21935 issue status is not matched with "Patch Available".
HBASE-21935 patch is being downloaded at Mon Mar 11 13:21:50 CDT 2019 from
  
https://issues.apache.org/jira/secure/attachment/12962016/HBASE-21935.branch-2.0.005.patch
 -> Downloaded
Applying the patch:
Mon Mar 11 13:21:51 CDT 2019
cd /Users/busbey/tmp_projects/hbase
git am --signoff --whitespace=fix -p1 /tmp/yetus-758.11257/patch
.git/rebase-apply/patch:770: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: HBASE-21935 Replace make_rc.sh with customized 
spark/dev/create-release
{code}

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789846#comment-16789846
 ] 

Hudson commented on HBASE-22029:


Results for branch branch-2.0
[build #1429 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1429/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1429//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1429//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1429//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing a RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789838#comment-16789838
 ] 

stack edited comment on HBASE-21935 at 3/11/19 6:15 PM:


Updated the patch to latest. Has set +x for the moment while still playing. 
Here is how I run it currently... doing just the build step w/o doing any 
commits:

{code}
./do-release-docker.sh -d ~/Downloads/rm -n -s build
{code}

You get asked a few questions. I answer branch-2.0... It'll volunteer 2.0.5. 
When asked what RC #, I say '1' (there is already a tag for 2.0.5RC1... that's 
what you'll build -- it'll ask you to confirm that you want to build against the 
existing 2.0.5RC1 tag). It'll ask what gpg key to use ... then away it goes. It 
makes an output dir and in the output dir spews log to build.log.

FYI [~busbey] Be careful... it's 2.0.0005, not the last patch in the list.


was (Author: stack):
Updated the patch to latest. Has set +x for the moment while still playing. 
Here is how I run it currently... doing just the build step w/o doing any 
commits:

{code}
./do-release-docker.sh -d ~/Downloads/rm -n -s build
{code}

You get asked a few questions. I answer branch-2.0... It'll volunteer 2.0.5. 
When asked what RC #, I say '1' (there is already a tag for 2.0.5RC1... that's 
what you'll build -- it'll ask you to confirm that you want to build against the 
existing 2.0.5RC1 tag). It'll ask what gpg key to use ... then away it goes. It 
makes an output dir and in the output dir spews log to build.log.

FYI [~busbey]

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789838#comment-16789838
 ] 

stack commented on HBASE-21935:
---

Updated the patch to latest. Has set +x for the moment while still playing. 
Here is how I run it currently... doing just the build step w/o doing any 
commits:

{code}
./do-release-docker.sh -d ~/Downloads/rm -n -s build
{code}

You get asked a few questions. I answer branch-2.0... It'll volunteer 2.0.5. 
When asked what RC #, I say '1' (there is already a tag for 2.0.5RC1... that's 
what you'll build -- it'll ask you to confirm that you want to build against the 
existing 2.0.5RC1 tag). It'll ask what gpg key to use ... then away it goes. It 
makes an output dir and in the output dir spews log to build.log.

FYI [~busbey]

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-11 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21935:
--
Attachment: HBASE-21935.branch-2.0.005.patch

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21959) CompactionTool should close the store it uses for compacting files, in order to properly archive compacted files.

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789817#comment-16789817
 ] 

Hadoop QA commented on HBASE-21959:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} branch-1 passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} branch-1 passed with JDK v1.7.0_211 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
28s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} branch-1 passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} branch-1 passed with JDK v1.7.0_211 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_211 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
27s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 34s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_211 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}113m 
59s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:61288f8 |
| JIRA Issue | HBASE-21959 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962000/HBASE-21959-branch-1-002.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 4618cd073c45 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789790#comment-16789790
 ] 

stack commented on HBASE-22029:
---

Thanks for +1 [~busbey] Trying the patch first. Takes a bunch of hours to get 
to the break point... bated breath.

> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing a RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-11 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789781#comment-16789781
 ] 

Sean Busbey commented on HBASE-22029:
-

+1 lgtm. would be nice if we could get all the hbase-rest and hbase-it jars put 
in their own directory so it's easier to keep them out of the normal 
classpaths. maybe something for hbase 3.

> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing a RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789765#comment-16789765
 ] 

stack commented on HBASE-22029:
---

[~busbey]
{code}
 mvn version: Apache Maven 3.5.2
 Maven home: /usr/share/maven
 Java version: 1.8.0_191, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/java-8-openjdk-amd64/jre
 Default locale: en, platform encoding: UTF-8
 OS name: "linux", version: "4.9.125-linuxkit", arch: "amd64", family: "unix"
{code}

> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing a RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22027) Move non-MR parts of TokenUtil into hbase-client

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789752#comment-16789752
 ] 

Hadoop QA commented on HBASE-22027:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 5s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
30s{color} | {color:red} hbase-client: The patch generated 4 new + 0 unchanged 
- 0 fixed = 4 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
3s{color} | {color:red} hbase-server: The patch generated 2 new + 2 unchanged - 
3 fixed = 4 total (was 5) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m  9s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
23s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}265m  0s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}311m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
|   | hadoop.hbase.replication.TestReplicationDisableInactivePeer |
|   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
|   | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
|   | hadoop.hbase.client.TestAdmin1 |
|   | hadoop.hbase.client.TestFromClientSide |
|   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
\\
\\
|| Subsystem || Report/Notes 

[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-11 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789745#comment-16789745
 ] 

Sean Busbey commented on HBASE-22029:
-

what's the output of `mvn --version` in the RM build?

> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing a RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21810) bulkload support set hfile compression on client

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789740#comment-16789740
 ] 

Hadoop QA commented on HBASE-21810:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} branch-1 passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} branch-1 passed with JDK v1.7.0_211 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
45s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} branch-1 passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-1 passed with JDK v1.7.0_211 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_211 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
49s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 38s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed with JDK v1.7.0_211 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}120m 
33s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:61288f8 |
| JIRA Issue | HBASE-21810 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961993/HBASE-21810.branch-1.003.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 6d19994aaa56 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HBASE-21810) bulkload support set hfile compression on client

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789736#comment-16789736
 ] 

Hadoop QA commented on HBASE-21810:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
30s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_211 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
36s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_211 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_211 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
34s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 29s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_211 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 
31s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:34a9b27 |
| JIRA Issue | HBASE-21810 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961995/HBASE-21810.branch-1.2.003.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 694d7862b106 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |

[jira] [Updated] (HBASE-21959) CompactionTool should close the store it uses for compacting files, in order to properly archive compacted files.

2019-03-11 Thread Wellington Chevreuil (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-21959:
-
Attachment: HBASE-21959-branch-1-002.patch

> CompactionTool should close the store it uses for compacting files, in order 
> to properly archive compacted files.
> -
>
> Key: HBASE-21959
> URL: https://issues.apache.org/jira/browse/HBASE-21959
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Attachments: HBASE-21959-branch-1-001.patch, 
> HBASE-21959-branch-1-002.patch, HBASE-21959-master-001.patch, 
> HBASE-21959-master-002.patch, HBASE-21959-master-003.patch
>
>
> While using CompactionTool to offload RSes, I noticed compacted files were 
> never archived from the original region dir, causing the space used by the region 
> to actually double. Going through the compaction-related code in HStore, 
> which CompactionTool uses for performing compactions, I found that archiving 
> of compacted files happens mainly while closing the HStore instance. 
> CompactionTool never explicitly closes its HStore instance, so I'm adding a 
> simple patch that properly closes the store.
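
For reference, a minimal sketch of the shape of such a fix, assuming CompactionTool holds an HStore instance for the family it is compacting; compactStoreFiles() below is a hypothetical stand-in for the tool's existing compaction logic, not actual HBase API:

{code}
// Illustrative only: close the store in a finally block so that, whatever happens during
// the compaction itself, HStore.close() runs and the compacted-away files get archived.
void compactAndClose(HStore store) throws IOException {
  try {
    compactStoreFiles(store);   // hypothetical helper wrapping the existing compaction work
  } finally {
    store.close();              // closing the store triggers archiving of compacted files
  }
}
{code}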



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21959) CompactionTool should close the store it uses for compacting files, in order to properly archive compacted files.

2019-03-11 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789700#comment-16789700
 ] 

Wellington Chevreuil commented on HBASE-21959:
--

New branch-1 patch addressing the whitespace issue.

> CompactionTool should close the store it uses for compacting files, in order 
> to properly archive compacted files.
> -
>
> Key: HBASE-21959
> URL: https://issues.apache.org/jira/browse/HBASE-21959
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Attachments: HBASE-21959-branch-1-001.patch, 
> HBASE-21959-branch-1-002.patch, HBASE-21959-master-001.patch, 
> HBASE-21959-master-002.patch, HBASE-21959-master-003.patch
>
>
> While using CompactionTool to offload RSes, I noticed compacted files were 
> never archived from the original region dir, causing the space used by the region 
> to actually double. Going through the compaction-related code in HStore, 
> which CompactionTool uses for performing compactions, I found that archiving 
> of compacted files happens mainly while closing the HStore instance. 
> CompactionTool never explicitly closes its HStore instance, so I'm adding a 
> simple patch that properly closes the store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-11 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789695#comment-16789695
 ] 

Duo Zhang commented on HBASE-22029:
---

We still need jackson for hbase-rest.

> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing a RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22002) Remove the deprecated methods in Admin interface

2019-03-11 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22002:
--
Attachment: HBASE-22002-v1.patch

> Remove the deprecated methods in Admin interface
> 
>
> Key: HBASE-22002
> URL: https://issues.apache.org/jira/browse/HBASE-22002
> Project: HBase
>  Issue Type: Task
>  Components: Admin, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-22002-v1.patch, HBASE-22002.patch
>
>
> For API cleanup, and will make the work in HBASE-21718 a little easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789660#comment-16789660
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #131 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/131/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/131//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/131//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/131//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/131//console].


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22031) Provide a constructor of RSGroupInfo with shadow copy

2019-03-11 Thread Xiang Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-22031:
-
Description: 
As for org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor 
performs deep copies of both the servers and tables passed in.
{code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
  this.name = name;
  this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
  this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
}
{code}

> Provide a constructor of RSGroupInfo with shadow copy
> -
>
> Key: HBASE-22031
> URL: https://issues.apache.org/jira/browse/HBASE-22031
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>
> As for org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor 
> performs deep copies of both the servers and tables passed in.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
> RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
>   this.name = name;
>   this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
>   this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22031) Provide a constructor of RSGroupInfo with shadow copy

2019-03-11 Thread Xiang Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-22031:
-
Description: 
As for org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor 
performs deep copies of both the servers and tables passed in.
{code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
  this.name = name;
  this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
  this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
}
{code}
The constructor of TreeSet is heavy and I think it is better to have a new 
constructor with shadow copy and it could be used at least in
{code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
private synchronized void refresh(boolean forceOnline) throws IOException {
  ...
  groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, getDefaultServers(), 
orphanTables));
  ...
{code}
It is not necessary to allocate a new TreeSet to deep copy the output of 
getDefaultServers() and orphanTables: both are allocated in the nearby 
context and are not modified by the code that follows, so it is safe to make a shadow 
copy here.

  was:
As for org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor 
performs deep copies of both the servers and tables passed in.
{code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
  this.name = name;
  this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
  this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
}
{code}
The constructor of TreeSet is heavy and I think it is better to have a new 
constructor with shadow copy and it could be used at least in
{code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
private synchronized void refresh(boolean forceOnline) throws IOException {
...
groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
getDefaultServers(), orphanTables));
...
{code}
It is not necessary to allocate a new TreeSet to deep copy the output of 
getDefaultServers() and orphanTables: both are allocated in the nearby 
context and are not modified by the code that follows, so it is safe to make a shadow 
copy here.


> Provide a constructor of RSGroupInfo with shadow copy
> -
>
> Key: HBASE-22031
> URL: https://issues.apache.org/jira/browse/HBASE-22031
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>
> As for org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor 
> performs deep copies of both the servers and tables passed in.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
> RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
>   this.name = name;
>   this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
>   this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
> }
> {code}
> The constructor of TreeSet is heavy and I think it is better to have a new 
> constructor with shadow copy and it could be used at least in
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
> private synchronized void refresh(boolean forceOnline) throws IOException {
>   ...
>   groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
> getDefaultServers(), orphanTables));
>   ...
> {code}
> It is not necessary to allocate a new TreeSet to deep copy the output of 
> getDefaultServers() and orphanTables: both are allocated in the nearby 
> context and are not modified by the code that follows, so it is safe to make a shadow 
> copy here.
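
One possible shape for that, purely as a minimal sketch and not the committed HBASE-22031 change, is an extra constructor that adopts the caller-supplied sets instead of copying them; it is only safe when the caller hands over freshly built sets that it will not mutate afterwards, as is the case for getDefaultServers() and orphanTables in refresh(). The flag name below is illustrative only.

{code:title=Illustrative sketch of a set-adopting overload|borderStyle=solid}
RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables,
    boolean copySets) {
  this.name = name;
  if (copySets) {               // current behaviour: defensive deep copy
    this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
    this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
  } else {                      // adopt the sets the caller just built, no extra TreeSet
    this.servers = (servers == null) ? new TreeSet<>() : servers;
    this.tables  = (tables  == null) ? new TreeSet<>() : tables;
  }
}
{code}

refresh() could then pass getDefaultServers() and orphanTables straight through with copySets = false and skip the two TreeSet allocations.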



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22015) UserPermission should be annotated as InterfaceAudience.Public

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789664#comment-16789664
 ] 

Hadoop QA commented on HBASE-22015:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
37s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} The patch passed checkstyle in hbase-client {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} hbase-server: The patch generated 0 new + 74 
unchanged - 7 fixed = 74 total (was 81) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}141m 
24s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}194m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-22015 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961953/HBASE-22015.master.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux a9a7c6b704e0 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-22031) Provide a constructor of RSGroupInfo with shadow copy

2019-03-11 Thread Xiang Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789662#comment-16789662
 ] 

Xiang Li commented on HBASE-22031:
--

[~xucang] Does it make any sense to you?

> Provide a constructor of RSGroupInfo with shadow copy
> -
>
> Key: HBASE-22031
> URL: https://issues.apache.org/jira/browse/HBASE-22031
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>
> As for org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor 
> performs deep copies of both the servers and tables passed in.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
> RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
>   this.name = name;
>   this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
>   this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
> }
> {code}
> The constructor of TreeSet is heavy and I think it is better to have a new 
> constructor with shadow copy and it could be used at least in
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
> private synchronized void refresh(boolean forceOnline) throws IOException {
> ...
> groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
> getDefaultServers(), orphanTables));
> ...
> {code}
> It is not necessary to allocate a new TreeSet to deep copy the output of 
> getDefaultServers() and orphanTables: both are allocated in the nearby 
> context and are not modified by the code that follows, so it is safe to make a shadow 
> copy here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22031) Provide a constructor of RSGroupInfo with shadow copy

2019-03-11 Thread Xiang Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-22031:
-
Description: 
In org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor
performs a deep copy of both the servers and the tables passed in.
{code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
  this.name = name;
  this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
  this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
}
{code}
The TreeSet copy constructor is heavy, so I think it is better to add a new
constructor that performs a shadow copy; it could be used at least in
{code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
private synchronized void refresh(boolean forceOnline) throws IOException {
...
groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
getDefaultServers(), orphanTables));
...
{code}
There is no need to allocate a new TreeSet to deep-copy the output of
getDefaultServers() and orphanTables: both are allocated in the surrounding
context and are not modified afterwards, so it is safe to make a shadow copy
here.

  was:
In org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor
performs a deep copy of both the servers and the tables passed in.
{code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
  this.name = name;
  this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
  this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
}
{code}
The TreeSet copy constructor is heavy, so I think it is better to add a
constructor that performs a shadow copy; it could be used at least in
{code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
private synchronized void refresh(boolean forceOnline) throws IOException {
...
groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
getDefaultServers(), orphanTables));
...
{code}
There is no need to allocate a new TreeSet to deep-copy the output of
getDefaultServers() and orphanTables: both are allocated in the surrounding
context and are not modified afterwards, so it is safe to make a shadow copy
here.


> Provide a constructor of RSGroupInfo with shadow copy
> -
>
> Key: HBASE-22031
> URL: https://issues.apache.org/jira/browse/HBASE-22031
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>
> In org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor
> performs a deep copy of both the servers and the tables passed in.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
> RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
>   this.name = name;
>   this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
>   this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
> }
> {code}
> The TreeSet copy constructor is heavy, so I think it is better to add a new
> constructor that performs a shadow copy; it could be used at least in
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
> private synchronized void refresh(boolean forceOnline) throws IOException {
> ...
> groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
> getDefaultServers(), orphanTables));
> ...
> {code}
> There is no need to allocate a new TreeSet to deep-copy the output of
> getDefaultServers() and orphanTables: both are allocated in the surrounding
> context and are not modified afterwards, so it is safe to make a shadow copy
> here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22031) Provide a constructor of RSGroupInfo with shadow copy

2019-03-11 Thread Xiang Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-22031:
-
Description: 
In org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor
performs a deep copy of both the servers and the tables passed in.
{code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
  this.name = name;
  this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
  this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
}
{code}
The TreeSet copy constructor is heavy, so I think it is better to add a
constructor that performs a shadow copy; it could be used at least in
{code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
private synchronized void refresh(boolean forceOnline) throws IOException {
...
groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
getDefaultServers(), orphanTables));
...
{code}
There is no need to allocate a new TreeSet to deep-copy the output of
getDefaultServers() and orphanTables: both are allocated in the surrounding
context and are not modified afterwards, so it is safe to make a shadow copy
here.

  was:
In org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor
performs a deep copy of both the servers and the tables passed in.
{code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
  this.name = name;
  this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
  this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
}
{code}


> Provide a constructor of RSGroupInfo with shadow copy
> -
>
> Key: HBASE-22031
> URL: https://issues.apache.org/jira/browse/HBASE-22031
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>
> In org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor
> performs a deep copy of both the servers and the tables passed in.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
> RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
>   this.name = name;
>   this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
>   this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
> }
> {code}
> The TreeSet copy constructor is heavy, so I think it is better to add a
> constructor that performs a shadow copy; it could be used at least in
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
> private synchronized void refresh(boolean forceOnline) throws IOException {
> ...
> groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
> getDefaultServers(), orphanTables));
> ...
> {code}
> There is no need to allocate a new TreeSet to deep-copy the output of
> getDefaultServers() and orphanTables: both are allocated in the surrounding
> context and are not modified afterwards, so it is safe to make a shadow copy
> here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22031) Provide a constructor of RSGroupInfo with shadow copy

2019-03-11 Thread Xiang Li (JIRA)
Xiang Li created HBASE-22031:


 Summary: Provide a constructor of RSGroupInfo with shadow copy
 Key: HBASE-22031
 URL: https://issues.apache.org/jira/browse/HBASE-22031
 Project: HBase
  Issue Type: Improvement
  Components: rsgroup
Reporter: Xiang Li
Assignee: Xiang Li






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-11 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789656#comment-16789656
 ] 

stack commented on HBASE-22029:
---

Thanks for taking a look [~busbey]...

Here is what shows...

{code}
[INFO] -
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR] 
/opt/hbase-rm/output/hbase/hbase-it/src/test/java/org/apache/hadoop/hbase/RESTApiClusterManager.java:[250,48]
 cannot find symbol
  symbol:   method readEntity(java.lang.Class)
  location: variable response of type javax.ws.rs.core.Response
{code}

I see the jaxrs reference was made local in hbase-rest. Hoping doing the same
fixes the above. It seems we are picking up an old jaxrs -- pre-2.0 -- when we
see the above message. The code has been there a while and is legit if
compiled against jaxrs 2.0+.

This is branch-2.0. It happens about five hours into my RM build.
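
For context, the failing call is of this general shape (a hedged
reconstruction, not the exact hbase-it code). Response#readEntity(Class) only
exists in the JAX-RS 2.0 API, so compiling against a pre-2.0 javax.ws.rs jar
produces exactly this kind of "cannot find symbol" error:
{code}
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class ReadEntityExample {
  public static void main(String[] args) {
    Client client = ClientBuilder.newClient();
    // The endpoint URL is a placeholder for illustration only.
    Response response = client.target("http://localhost:8080/api/status")
        .request(MediaType.APPLICATION_JSON).get();
    // readEntity(Class<T>) was introduced in JAX-RS 2.0; against an older
    // javax.ws.rs.core.Response this line fails with "cannot find symbol".
    String body = response.readEntity(String.class);
    System.out.println(body);
    response.close();
    client.close();
  }
}
{code}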

> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing an RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding an explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-03-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789645#comment-16789645
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #21 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/21/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/21//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/21//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/21//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/21//console].


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the HFile into an on-heap
> byte[], then copy that byte[] into the off-heap bucket cache asynchronously.
> In my 100% get performance test I also observed frequent young GCs, and the
> largest memory footprint in the young generation should be the on-heap block
> byte[].
> In fact, we can read the HFile block into a ByteBuffer directly instead of
> into a byte[] to reduce young GC. We did not implement this before because
> the older HDFS client had no ByteBuffer reading interface, but 2.7+ supports
> it, so we can fix this now.
> Will provide a patch and some performance comparison for this (a rough sketch
> of the direction follows below).
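
As a rough illustration only (not the attached patch), reading a block's
on-disk bytes into a ByteBuffer via the ByteBufferReadable read that
FSDataInputStream exposes could look like the sketch below; checksum
verification, header handling and buffer pooling are deliberately omitted:
{code}
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.FSDataInputStream;

final class ByteBufferBlockReader {
  /**
   * Sketch: fill a direct ByteBuffer with onDiskSizeWithHeader bytes starting
   * at offset. Assumes the underlying stream supports ByteBufferReadable
   * (HDFS streams do); otherwise read(ByteBuffer) throws
   * UnsupportedOperationException.
   */
  static ByteBuffer readBlock(FSDataInputStream is, long offset, int onDiskSizeWithHeader)
      throws IOException {
    ByteBuffer buf = ByteBuffer.allocateDirect(onDiskSizeWithHeader);
    is.seek(offset);
    while (buf.hasRemaining()) {
      int n = is.read(buf); // ByteBufferReadable#read(ByteBuffer)
      if (n < 0) {
        throw new IOException("Premature EOF reading block at offset " + offset);
      }
    }
    buf.flip();
    return buf;
  }
}
{code}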



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21810) bulkload support set hfile compression on client

2019-03-11 Thread Yechao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yechao Chen updated HBASE-21810:

Attachment: HBASE-21810.branch-1.2.003.patch

> bulkload  support set hfile compression on client 
> --
>
> Key: HBASE-21810
> URL: https://issues.apache.org/jira/browse/HBASE-21810
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 1.3.3, 1.4.9, 2.0.4, 2.1.3, 1.2.11
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Major
> Attachments: HBASE-21810.branch-1.001.patch, 
> HBASE-21810.branch-1.002.patch, HBASE-21810.branch-1.003.patch, 
> HBASE-21810.branch-1.2.001.patch, HBASE-21810.branch-1.2.002.patch, 
> HBASE-21810.branch-1.2.002.patch, HBASE-21810.branch-1.2.003.patch, 
> HBASE-21810.branch-2.001.patch, HBASE-21810.branch-2.002.patch, 
> HBASE-21810.master.001.patch, HBASE-21810.master.001.patch, 
> HBASE-21810.master.002.patch, HBASE-21810.master.003.patch, 
> HBASE-21810.master.003.patch
>
>
> HBase bulkload (HFileOutputFormat2) generates HFiles whose compression is
> taken from the table (column family) compression.
> It would be useful if the compression could also be set on the client side.
> Some cases from our production:
> 1. HFile bulkload replication between data centers with limited bandwidth: we
> can set the compression of the bulkload HFiles without changing the table
> compression.
> 2. Bulkload HFiles written without compression while the table compression is
> gz/zstd/snappy...: this reduces the HFile creation time, and compaction will
> eventually rewrite the HFiles with the table compression.
> 3. Sometimes the YARN nodes (where the reducers create the HFiles) or the
> bulkload client have no compression library while the HBase cluster does; a
> client-side setting helps in this case (a sketch of the idea follows below).
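
To make the idea concrete, a client-side override could be wired through the
job configuration roughly as below. The key "hfile.compression.override" is a
made-up name used purely for illustration; the real patch may use a different
property and different plumbing inside HFileOutputFormat2:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.compress.Compression;

public class BulkloadCompressionExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Hypothetical key: ask for SNAPPY-compressed HFiles on the client,
    // regardless of the table / column family compression setting.
    conf.set("hfile.compression.override", Compression.Algorithm.SNAPPY.name());
    // An HFileOutputFormat2 honoring such a key would consult it when creating
    // HFile writers instead of always using the column family compression.
    System.out.println("requested hfile compression = "
        + conf.get("hfile.compression.override"));
  }
}
{code}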



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21810) bulkload support set hfile compression on client

2019-03-11 Thread Yechao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789625#comment-16789625
 ] 

Yechao Chen commented on HBASE-21810:
-

Updated the patch for branch-1, retrying QA.

> bulkload  support set hfile compression on client 
> --
>
> Key: HBASE-21810
> URL: https://issues.apache.org/jira/browse/HBASE-21810
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 1.3.3, 1.4.9, 2.0.4, 2.1.3, 1.2.11
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Major
> Attachments: HBASE-21810.branch-1.001.patch, 
> HBASE-21810.branch-1.002.patch, HBASE-21810.branch-1.003.patch, 
> HBASE-21810.branch-1.2.001.patch, HBASE-21810.branch-1.2.002.patch, 
> HBASE-21810.branch-1.2.002.patch, HBASE-21810.branch-2.001.patch, 
> HBASE-21810.branch-2.002.patch, HBASE-21810.master.001.patch, 
> HBASE-21810.master.001.patch, HBASE-21810.master.002.patch, 
> HBASE-21810.master.003.patch, HBASE-21810.master.003.patch
>
>
> HBase bulkload (HFileOutputFormat2) generates HFiles whose compression is
> taken from the table (column family) compression.
> It would be useful if the compression could also be set on the client side.
> Some cases from our production:
> 1. HFile bulkload replication between data centers with limited bandwidth: we
> can set the compression of the bulkload HFiles without changing the table
> compression.
> 2. Bulkload HFiles written without compression while the table compression is
> gz/zstd/snappy...: this reduces the HFile creation time, and compaction will
> eventually rewrite the HFiles with the table compression.
> 3. Sometimes the YARN nodes (where the reducers create the HFiles) or the
> bulkload client have no compression library while the HBase cluster does; a
> client-side setting helps in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21810) bulkload support set hfile compression on client

2019-03-11 Thread Yechao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789622#comment-16789622
 ] 

Yechao Chen commented on HBASE-21810:
-

Why can't [^HBASE-21810.branch-1.2.002.patch] pass the Yetus precommit?

> bulkload  support set hfile compression on client 
> --
>
> Key: HBASE-21810
> URL: https://issues.apache.org/jira/browse/HBASE-21810
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 1.3.3, 1.4.9, 2.0.4, 2.1.3, 1.2.11
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Major
> Attachments: HBASE-21810.branch-1.001.patch, 
> HBASE-21810.branch-1.002.patch, HBASE-21810.branch-1.003.patch, 
> HBASE-21810.branch-1.2.001.patch, HBASE-21810.branch-1.2.002.patch, 
> HBASE-21810.branch-1.2.002.patch, HBASE-21810.branch-2.001.patch, 
> HBASE-21810.branch-2.002.patch, HBASE-21810.master.001.patch, 
> HBASE-21810.master.001.patch, HBASE-21810.master.002.patch, 
> HBASE-21810.master.003.patch, HBASE-21810.master.003.patch
>
>
> HBase bulkload (HFileOutputFormat2) generates HFiles whose compression is
> taken from the table (column family) compression.
> It would be useful if the compression could also be set on the client side.
> Some cases from our production:
> 1. HFile bulkload replication between data centers with limited bandwidth: we
> can set the compression of the bulkload HFiles without changing the table
> compression.
> 2. Bulkload HFiles written without compression while the table compression is
> gz/zstd/snappy...: this reduces the HFile creation time, and compaction will
> eventually rewrite the HFiles with the table compression.
> 3. Sometimes the YARN nodes (where the reducers create the HFiles) or the
> bulkload client have no compression library while the HBase cluster does; a
> client-side setting helps in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21810) bulkload support set hfile compression on client

2019-03-11 Thread Yechao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yechao Chen updated HBASE-21810:

Attachment: HBASE-21810.branch-1.003.patch

> bulkload  support set hfile compression on client 
> --
>
> Key: HBASE-21810
> URL: https://issues.apache.org/jira/browse/HBASE-21810
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 1.3.3, 1.4.9, 2.0.4, 2.1.3, 1.2.11
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Major
> Attachments: HBASE-21810.branch-1.001.patch, 
> HBASE-21810.branch-1.002.patch, HBASE-21810.branch-1.003.patch, 
> HBASE-21810.branch-1.2.001.patch, HBASE-21810.branch-1.2.002.patch, 
> HBASE-21810.branch-1.2.002.patch, HBASE-21810.branch-2.001.patch, 
> HBASE-21810.branch-2.002.patch, HBASE-21810.master.001.patch, 
> HBASE-21810.master.001.patch, HBASE-21810.master.002.patch, 
> HBASE-21810.master.003.patch, HBASE-21810.master.003.patch
>
>
> HBase bulkload (HFileOutputFormat2) generates HFiles whose compression is
> taken from the table (column family) compression.
> It would be useful if the compression could also be set on the client side.
> Some cases from our production:
> 1. HFile bulkload replication between data centers with limited bandwidth: we
> can set the compression of the bulkload HFiles without changing the table
> compression.
> 2. Bulkload HFiles written without compression while the table compression is
> gz/zstd/snappy...: this reduces the HFile creation time, and compaction will
> eventually rewrite the HFiles with the table compression.
> 3. Sometimes the YARN nodes (where the reducers create the HFiles) or the
> bulkload client have no compression library while the HBase cluster does; a
> client-side setting helps in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22030) links to specific nightly reports in jira comment from bot are wrong

2019-03-11 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-22030:
---

 Summary: links to specific nightly reports in jira comment from 
bot are wrong
 Key: HBASE-22030
 URL: https://issues.apache.org/jira/browse/HBASE-22030
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Sean Busbey


AFAICT all branches are impacted.

When the nightly job comments on JIRA, it is supposed to include links to the
specific reports when possible, so that e.g. when jdk8/hadoop 3 fails on
branch-2.1 a committer can go directly to that report and see what's wrong.

Right now all the links end up taking you to the build's top-level status page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21810) bulkload support set hfile compression on client

2019-03-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789600#comment-16789600
 ] 

Hadoop QA commented on HBASE-21810:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HBASE-21810 does not apply to 2. Rebase required? Wrong Branch? 
See https://yetus.apache.org/documentation/0.8.0/precommit-patchnames for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-21810 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961990/HBASE-21810.branch-1.2.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16334/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> bulkload  support set hfile compression on client 
> --
>
> Key: HBASE-21810
> URL: https://issues.apache.org/jira/browse/HBASE-21810
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 1.3.3, 1.4.9, 2.0.4, 2.1.3, 1.2.11
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Major
> Attachments: HBASE-21810.branch-1.001.patch, 
> HBASE-21810.branch-1.002.patch, HBASE-21810.branch-1.2.001.patch, 
> HBASE-21810.branch-1.2.002.patch, HBASE-21810.branch-1.2.002.patch, 
> HBASE-21810.branch-2.001.patch, HBASE-21810.branch-2.002.patch, 
> HBASE-21810.master.001.patch, HBASE-21810.master.001.patch, 
> HBASE-21810.master.002.patch, HBASE-21810.master.003.patch, 
> HBASE-21810.master.003.patch
>
>
> HBase bulkload (HFileOutputFormat2) generates HFiles whose compression is
> taken from the table (column family) compression.
> It would be useful if the compression could also be set on the client side.
> Some cases from our production:
> 1. HFile bulkload replication between data centers with limited bandwidth: we
> can set the compression of the bulkload HFiles without changing the table
> compression.
> 2. Bulkload HFiles written without compression while the table compression is
> gz/zstd/snappy...: this reduces the HFile creation time, and compaction will
> eventually rewrite the HFiles with the table compression.
> 3. Sometimes the YARN nodes (where the reducers create the HFiles) or the
> bulkload client have no compression library while the HBase cluster does; a
> client-side setting helps in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-11 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789601#comment-16789601
 ] 

Sean Busbey commented on HBASE-22029:
-

I thought we had done a bunch of work to reduce/remove jackson, which is why 
I'm concerned.

> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing an RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding an explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21810) bulkload support set hfile compression on client

2019-03-11 Thread Yechao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yechao Chen updated HBASE-21810:

Attachment: HBASE-21810.branch-1.2.002.patch

> bulkload  support set hfile compression on client 
> --
>
> Key: HBASE-21810
> URL: https://issues.apache.org/jira/browse/HBASE-21810
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 1.3.3, 1.4.9, 2.0.4, 2.1.3, 1.2.11
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Major
> Attachments: HBASE-21810.branch-1.001.patch, 
> HBASE-21810.branch-1.002.patch, HBASE-21810.branch-1.2.001.patch, 
> HBASE-21810.branch-1.2.002.patch, HBASE-21810.branch-1.2.002.patch, 
> HBASE-21810.branch-2.001.patch, HBASE-21810.branch-2.002.patch, 
> HBASE-21810.master.001.patch, HBASE-21810.master.001.patch, 
> HBASE-21810.master.002.patch, HBASE-21810.master.003.patch, 
> HBASE-21810.master.003.patch
>
>
> HBase bulkload (HFileOutputFormat2) generates HFiles whose compression is
> taken from the table (column family) compression.
> It would be useful if the compression could also be set on the client side.
> Some cases from our production:
> 1. HFile bulkload replication between data centers with limited bandwidth: we
> can set the compression of the bulkload HFiles without changing the table
> compression.
> 2. Bulkload HFiles written without compression while the table compression is
> gz/zstd/snappy...: this reduces the HFile creation time, and compaction will
> eventually rewrite the HFiles with the table compression.
> 3. Sometimes the YARN nodes (where the reducers create the HFiles) or the
> bulkload client have no compression library while the HBase cluster does; a
> client-side setting helps in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22009) Improve RSGroupInfoManagerImpl#getDefaultServers()

2019-03-11 Thread Xiang Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16789593#comment-16789593
 ] 

Xiang Li commented on HBASE-22009:
--

[~xucang], if I understand correctly, almost every test under
hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup exercises the
modified function, covering the following conditions:
* No servers in other groups (all are in default group)
* Some servers in other groups and the others in default group

> Improve RSGroupInfoManagerImpl#getDefaultServers()
> --
>
> Key: HBASE-22009
> URL: https://issues.apache.org/jira/browse/HBASE-22009
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-22009.master.000.patch
>
>
> {code:title=RSGroupInfoManagerImpl.java|borderStyle=solid}
> private SortedSet<Address> getDefaultServers() throws IOException {
>   SortedSet<Address> defaultServers = Sets.newTreeSet();
>   for (ServerName serverName : getOnlineRS()) {
> Address server = Address.fromParts(serverName.getHostname(), 
> serverName.getPort());
> boolean found = false;
> for (RSGroupInfo rsgi : listRSGroups()) {
>   if (!RSGroupInfo.DEFAULT_GROUP.equals(rsgi.getName()) && 
> rsgi.containsServer(server)) {
> found = true;
> break;
>   }
> }
> if (!found) {
>   defaultServers.add(server);
> }
>   }
>   return defaultServers;
> }
> {code}
> That is two nested loops, and for each server listRSGroups() allocates a new
> LinkedList and calls Map#values(), both of which are heavy operations.
> Maybe the inner loop could be hoisted out, that is (see the sketch after this
> quote):
> # Build a set of the servers that belong to groups other than the default group.
> # Iterate over the online servers and check whether each one is in that set;
> if it is not, it belongs to the default group.
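
A minimal sketch of that two-step shape, assuming the same helpers
(getOnlineRS(), listRSGroups(), RSGroupInfo#getServers()) and the Address
element type; this illustrates the idea in the description, not the attached
patch:
{code}
// Sketch only: hoist the group scan out of the per-server loop.
private SortedSet<Address> getDefaultServers() throws IOException {
  // Step 1: collect every server that is assigned to a non-default group.
  Set<Address> nonDefaultServers = new HashSet<>();
  for (RSGroupInfo rsgi : listRSGroups()) {
    if (!RSGroupInfo.DEFAULT_GROUP.equals(rsgi.getName())) {
      nonDefaultServers.addAll(rsgi.getServers());
    }
  }
  // Step 2: any online server not claimed by another group is in "default".
  SortedSet<Address> defaultServers = new TreeSet<>();
  for (ServerName serverName : getOnlineRS()) {
    Address server = Address.fromParts(serverName.getHostname(), serverName.getPort());
    if (!nonDefaultServers.contains(server)) {
      defaultServers.add(server);
    }
  }
  return defaultServers;
}
{code}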



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

