[GitHub] [hbase] HorizonNet merged pull request #179: HBASE-22231 Removed unused and '*' import

2019-04-24 Thread GitBox
HorizonNet merged pull request #179: HBASE-22231 Removed unused and '*' import
URL: https://github.com/apache/hbase/pull/179
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (HBASE-22296) Remove TestFromClientSide.testGetStartEndKeysWithRegionReplicas

2019-04-24 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-22296.
---
  Resolution: Fixed
Assignee: Duo Zhang
Hadoop Flags: Reviewed

Pushed to branch-2.2+.

> Remove TestFromClientSide.testGetStartEndKeysWithRegionReplicas
> ---
>
> Key: HBASE-22296
> URL: https://issues.apache.org/jira/browse/HBASE-22296
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
>
> It tests nothing after HBASE-21753...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21920) Ignoring 'empty' end_key while calculating end_key for new region in HBCK -fixHdfsOverlaps command can cause data loss

2019-04-24 Thread Syeda Arshiya Tabreen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Syeda Arshiya Tabreen updated HBASE-21920:
--
Attachment: HBASE-21920.branch-1.002.patch

> Ignoring 'empty' end_key while calculating end_key for new region in HBCK 
> -fixHdfsOverlaps command can cause data loss
> --
>
> Key: HBASE-21920
> URL: https://issues.apache.org/jira/browse/HBASE-21920
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 1.0.0
>Reporter: Syeda Arshiya Tabreen
>Assignee: Syeda Arshiya Tabreen
>Priority: Major
> Attachments: HBASE-21920.branch-1.001.patch, 
> HBASE-21920.branch-1.002.patch, HBASE-21920.branch-1.002.patch, 
> HBASE-21920.branch-1.patch
>
>
> When the *-fixHdfsOverlaps* command is run because regions of a table 
> overlap, it moves all the hfiles of the overlapping regions into a new 
> region whose start_key and end_key are calculated from the minimum and 
> maximum start_key and end_key of all the overlapping regions.
> When calculating the start_key and end_key for the new region, an 'empty' 
> end_key is not considered, which leads to data loss when scanning with 
> *'startrow'.*
> *For example:*
>  1. Create table 't'.
>  2. Insert records \{00,111,200} into the table 't' and flush the data.
>  3. Split the table 't' with split-key '100'.
>  4. Now we have three regions (1 parent and two daughter regions):
>  1. *Region-1*('Empty','Empty') => \{00,111,200}
>  2. *Region-2*('Empty','100') => \{00}
>  3. *Region-3*('100','Empty') => \{111,200}
> 5. Make sure the parent region is not deleted in the file system and run 
> the -*fixHdfsOverlaps* command.
> The -*fixHdfsOverlaps* command will move all the hfiles of the three regions 
> {*Region-1, Region-2, Region-3*} into a new region (*Region-4*) created with 
> start_key='*Empty'* and end_key='*100'*.
> This happens because the command does not consider end_key=*'Empty'* and 
> instead takes end_key=*'100'* as the maximum, which moves all the hfiles of 
> the three regions into the new region even though some records in the hfiles 
> are beyond end_key='*100'*. One empty region, *Region-5* (100,Empty), will 
> also be created because the table's region end key was not empty.
> Now we have 2 regions:
> 1. *Region-4*(Empty,100) => \{00,111,200}
> 2. *Region-5*(100,Empty) => {}
> When the entire table is scanned, all the records are displayed and there is 
> no data loss, but scans with a start_key give the results below:
> 1. scan 't', \{ STARTROW => '00'} => \{00,111,200}
> 2. scan 't', \{ STARTROW => '100'} => {}
> The second scan gives an empty result because it searches for rows in 
> *Region-5*(100,Empty), which contains no records, while the records 
> \{111,200} are actually present in *Region-4*(Empty,100).
> The problem exists only when an end_key of *'Empty'* is present in any of 
> the overlapping regions. I think if an 'empty' end_key is present in any of 
> the overlapping regions, we have to treat it as the maximum end_key.
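
A minimal sketch of the proposed end_key handling, with illustrative names 
rather than the actual HBaseFsck code (branch-1 types assumed): an 'empty' 
end_key means "to the end of the table", so it must win when computing the 
merged region's end_key.

{code:java}
import java.util.List;

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative sketch, not the actual HBaseFsck code: when merging
// overlapping regions, an empty end_key ('Empty') covers every later row
// and therefore must be treated as the maximum possible end_key.
final class MergedEndKey {
  static byte[] mergedEndKey(List<HRegionInfo> overlapping) {
    byte[] max = null;
    for (HRegionInfo region : overlapping) {
      byte[] end = region.getEndKey();
      if (end.length == 0) {
        return HConstants.EMPTY_END_ROW; // 'Empty' is the table's last key
      }
      if (max == null || Bytes.compareTo(end, max) > 0) {
        max = end;
      }
    }
    return max;
  }
}
{code}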





[GitHub] [hbase] carp84 merged pull request #181: HBASE-22283 Print row and table information when failed to get region location

2019-04-24 Thread GitBox
carp84 merged pull request #181: HBASE-22283 Print row and table information 
when failed to get region location
URL: https://github.com/apache/hbase/pull/181
 
 
   




[jira] [Resolved] (HBASE-22283) Print row and table information when failed to get region location

2019-04-24 Thread Yu Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li resolved HBASE-22283.
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.5.1
   2.2.1
   3.0.0

Merged in:
master: ab3d6cf811
branch-1: 4648ab1db6
branch-2: 54b944a10f

> Print row and table information when failed to get region location
> --
>
> Key: HBASE-22283
> URL: https://issues.apache.org/jira/browse/HBASE-22283
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, logging
>Affects Versions: 1.4.9, 2.0.5, 2.1.4
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
> Fix For: 3.0.0, 2.2.1, 1.5.1
>
>
> Currently, when we fail to get a region location, especially when the 
> {{RegionLocations}} returned is null in 
> {{RpcRetryingCallerWithReadReplicas.getRegionLocations}} (we may see a more 
> useful message if an exception is thrown), we only log the replica id 
> w/o any detailed information about the row and table, which makes debugging 
> difficult. Below is an example error message:
> {noformat}
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't 
> get the location for replica 0
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:372)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:277)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:438)
>   at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:312)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:639)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:409)
> {noformat}
> Here we propose to improve this part.
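
A hedged sketch of the kind of enriched failure message proposed here; the 
helper and its name are illustrative, not the committed patch:

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RetriesExhaustedException;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative only: fail with table and row context instead of just the
// replica id, so the operator can tell which get/scan was affected.
final class LocationErrors {
  static RetriesExhaustedException noLocation(TableName table, byte[] row, int replicaId) {
    return new RetriesExhaustedException("Can't get the location for replica "
        + replicaId + " of table " + table + ", row='"
        + Bytes.toStringBinary(row) + "'");
  }
}
{code}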





[GitHub] [hbase] carp84 commented on issue #181: HBASE-22283 Print row and table information when failed to get region location

2019-04-24 Thread GitBox
carp84 commented on issue #181: HBASE-22283 Print row and table information 
when failed to get region location
URL: https://github.com/apache/hbase/pull/181#issuecomment-486112318
 
 
   Thanks @saintstack, merged into master and manually pushed into 
branch-1/branch-2




[jira] [Updated] (HBASE-22294) Remove deprecated method from WALKeyImpl

2019-04-24 Thread Sayed Anisul Hoque (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sayed Anisul Hoque updated HBASE-22294:
---
Fix Version/s: 3.0.0
  Description: 
The method _getLogSeqNum_ in the class WALKeyImpl is a deprecated function 
that needs to be removed in 3.0.0.

According to [HBASE-15158|https://issues.apache.org/jira/browse/HBASE-15158] 
this function has been deprecated since 2.0.0.

  was:the method - _getLogSeqNum_ in the class WALKeyImpl is deprecated 
function that needs to be removed in 3.0.0


> Remove deprecated method from WALKeyImpl
> 
>
> Key: HBASE-22294
> URL: https://issues.apache.org/jira/browse/HBASE-22294
> Project: HBase
>  Issue Type: Task
>Reporter: Sayed Anisul Hoque
>Assignee: Sayed Anisul Hoque
>Priority: Minor
> Fix For: 3.0.0
>
>
> The method _getLogSeqNum_ in the class WALKeyImpl is a deprecated function 
> that needs to be removed in 3.0.0.
> According to [HBASE-15158|https://issues.apache.org/jira/browse/HBASE-15158] 
> this function has been deprecated since 2.0.0.
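
For context, a hedged migration sketch, assuming callers simply switch to the 
non-deprecated accessor on the WALKey interface:

{code:java}
import org.apache.hadoop.hbase.wal.WALKeyImpl;

// Migration sketch: getSequenceId() is the non-deprecated way to read the
// sequence number, replacing the deprecated getLogSeqNum().
final class WALKeyMigration {
  static long sequenceId(WALKeyImpl key) {
    return key.getSequenceId();
  }
}
{code}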





[jira] [Commented] (HBASE-22144) MultiRowRangeFilter does not work with reversed scans

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824934#comment-16824934
 ] 

Hudson commented on HBASE-22144:


Results for branch branch-1
[build #787 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> MultiRowRangeFilter does not work with reversed scans
> -
>
> Key: HBASE-22144
> URL: https://issues.apache.org/jira/browse/HBASE-22144
> Project: HBase
>  Issue Type: Bug
>  Components: Filters, scan
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.0.6, 2.1.5
>
> Attachments: HBASE-22144.001.patch, HBASE-22144.002.patch, 
> HBASE-22144.002.patch
>
>
> It appears that MultiRowRangeFilter was never written to function with 
> reverse scans. There is too much logic that operates on the assumption that 
> we are always moving "forward" through increasing ranges. It needs to be 
> rewritten to "traverse" forward or backward depending on the direction of 
> the scan being used.
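
A conceptual sketch of what direction-aware traversal means here; the names 
and structure are illustrative, not the actual filter internals:

{code:java}
import java.util.List;

import org.apache.hadoop.hbase.util.Bytes;

// Conceptual sketch only: for a forward scan the "next" range is the first
// one starting after the current row; for a reversed scan it is the last
// one starting at or before it.
final class RangeTraversal {
  static int locate(List<byte[]> sortedRangeStarts, byte[] row, boolean reversed) {
    int i = 0;
    while (i < sortedRangeStarts.size()
        && Bytes.compareTo(sortedRangeStarts.get(i), row) <= 0) {
      i++;
    }
    return reversed ? i - 1 : i; // index of the range to try next
  }
}
{code}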





[jira] [Commented] (HBASE-16488) Starting namespace and quota services in master startup asynchronously

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824935#comment-16824935
 ] 

Hudson commented on HBASE-16488:


Results for branch branch-1
[build #787 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Starting namespace and quota services in master startup asynchronously
> --
>
> Key: HBASE-16488
> URL: https://issues.apache.org/jira/browse/HBASE-16488
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 1.3.0, 1.0.3, 1.4.0, 1.1.5, 1.2.2, 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Xu Cang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-16488.branch-1.012.patch, 
> HBASE-16488.branch-1.012.patch, HBASE-16488.branch-1.013.patch, 
> HBASE-16488.branch-1.013.patch, HBASE-16488.branch-1.014.patch, 
> HBASE-16488.branch-1.015.patch, HBASE-16488.branch-1.016.patch, 
> HBASE-16488.branch-1.017.patch, HBASE-16488.revisit.v11-branch-1.patch, 
> HBASE-16488.v1-branch-1.patch, HBASE-16488.v1-master.patch, 
> HBASE-16488.v10-branch-1.patch, HBASE-16488.v2-branch-1.patch, 
> HBASE-16488.v2-branch-1.patch, HBASE-16488.v3-branch-1.patch, 
> HBASE-16488.v3-branch-1.patch, HBASE-16488.v4-branch-1.patch, 
> HBASE-16488.v5-branch-1.patch, HBASE-16488.v6-branch-1.patch, 
> HBASE-16488.v7-branch-1.patch, HBASE-16488.v8-branch-1.patch, 
> HBASE-16488.v9-branch-1.patch
>
>
> From time to time, during internal IT tests and at customers, we often see 
> master initialization fail because the namespace table region takes a long 
> time to assign (e.g. split log takes a long time or hangs; the RS is 
> temporarily unavailable; or some unknown assignment issue). In the past 
> there were proposals to improve this situation, e.g. HBASE-13556 / 
> HBASE-14190 (assign system tables ahead of user region assignment), 
> HBASE-13557 (special WAL handling for system tables), or HBASE-14623 
> (implement a dedicated WAL for system tables).
> This JIRA proposes another way to solve this master initialization failure: 
> the namespace service is only used by a handful of operations (e.g. create 
> table / namespace DDL / get namespace API / some RS group DDL). Only the 
> quota manager depends on it, and quota management is off by default. 
> Therefore the namespace service is not really needed for the master to be 
> functional, so we could start it asynchronously without blocking master 
> startup, as sketched below.
>  
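
A minimal sketch of the idea, with illustrative names (not the committed 
patch): initialize the namespace service on a background thread so master 
startup does not block, and make only the operations that need it wait.

{code:java}
import java.util.concurrent.CompletableFuture;

// Illustrative sketch: master startup continues immediately while the
// namespace service initializes in the background; only the handful of
// operations that need it (create table, namespace DDL, ...) wait for it.
final class AsyncNamespaceStartup {
  private final CompletableFuture<Void> namespaceReady;

  AsyncNamespaceStartup(Runnable initNamespaceService) {
    this.namespaceReady = CompletableFuture.runAsync(initNamespaceService);
  }

  void ensureNamespaceReady() {
    namespaceReady.join(); // blocks only the callers that actually need it
  }
}
{code}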





[jira] [Commented] (HBASE-22215) Backport MultiRowRangeFilter does not work with reverse scans

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824933#comment-16824933
 ] 

Hudson commented on HBASE-22215:


Results for branch branch-1
[build #787 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/787//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Backport MultiRowRangeFilter does not work with reverse scans
> -
>
> Key: HBASE-22215
> URL: https://issues.apache.org/jira/browse/HBASE-22215
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 1.5.0, 1.4.10
>
> Attachments: HBASE-22215.001.branch-1.patch, HBASE-22215.001.patch
>
>
> See parent. Modify and apply to 1.x lines.





[jira] [Commented] (HBASE-22215) Backport MultiRowRangeFilter does not work with reverse scans

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824955#comment-16824955
 ] 

Hudson commented on HBASE-22215:


Results for branch branch-1.4
[build #761 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/761/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/761//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/761//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/761//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Backport MultiRowRangeFilter does not work with reverse scans
> -
>
> Key: HBASE-22215
> URL: https://issues.apache.org/jira/browse/HBASE-22215
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 1.5.0, 1.4.10
>
> Attachments: HBASE-22215.001.branch-1.patch, HBASE-22215.001.patch
>
>
> See parent. Modify and apply to 1.x lines.





[jira] [Commented] (HBASE-22144) MultiRowRangeFilter does not work with reversed scans

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824956#comment-16824956
 ] 

Hudson commented on HBASE-22144:


Results for branch branch-1.4
[build #761 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/761/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/761//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/761//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/761//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> MultiRowRangeFilter does not work with reversed scans
> -
>
> Key: HBASE-22144
> URL: https://issues.apache.org/jira/browse/HBASE-22144
> Project: HBase
>  Issue Type: Bug
>  Components: Filters, scan
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.0.6, 2.1.5
>
> Attachments: HBASE-22144.001.patch, HBASE-22144.002.patch, 
> HBASE-22144.002.patch
>
>
> It appears that MultiRowRangeFilter was never written to function with 
> reverse scans. There is too much logic that operates on the assumption that 
> we are always moving "forward" through increasing ranges. It needs to be 
> rewritten to "traverse" forward or backward depending on the direction of 
> the scan being used.





[jira] [Commented] (HBASE-22298) branch-2.2 nightly fails "[ForOverride] Method annotated @ForOverride must have protected or package-private visibility"

2019-04-24 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824973#comment-16824973
 ] 

Guanghao Zhang commented on HBASE-22298:


[~stack] Sorry sir. I already pushed an RC2 to git yesterday... Do we need a new RC3?

> branch-2.2 nightly fails "[ForOverride] Method annotated @ForOverride must 
> have protected or package-private visibility"
> 
>
> Key: HBASE-22298
> URL: https://issues.apache.org/jira/browse/HBASE-22298
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.5, 2.2.1
>
> Attachments: HBASE-22298.branch-2.2.001.patch
>
>
> The change to use the guava service happened a long time ago, but errorprone 
> only complains now... update?
> {code}
> 
> [INFO] 97 warnings 
> [INFO] -
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterSchemaServiceImpl.java:[60,27]
>  error: [ForOverride] Method annotated @ForOverride must have protected or 
> package-private visibility
> (see https://errorprone.info/bugpattern/ForOverride)
> [INFO] 1 error
> {code}
> See https://errorprone.info/bugpattern/ForOverride
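
The shape of fix that errorprone asks for, sketched against plain guava for 
illustration (HBase actually uses the shaded thirdparty guava, and the real 
change is in ClusterSchemaServiceImpl):

{code:java}
import com.google.common.util.concurrent.AbstractService;

// Illustrative sketch: AbstractService.doStart()/doStop() are annotated
// @ForOverride, so overrides must keep protected (or package-private)
// visibility; declaring them 'public' trips the errorprone check.
final class ExampleService extends AbstractService {
  @Override
  protected void doStart() {
    notifyStarted();
  }

  @Override
  protected void doStop() {
    notifyStopped();
  }
}
{code}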





[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824988#comment-16824988
 ] 

Sean Busbey commented on HBASE-22301:
-

{code}
junit.framework.AssertionFailedError: Waiting timed out after [1,000] msec
at 
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testSlowSyncLogRolling(TestLogRolling.java:321)

{code}

A per-test timeout of 1s isn't going to work for this test.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure, not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write-heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS-level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near-uniformly over the 
> keyspace, the probability that any given service endpoint will dispatch a 
> request to an impacted regionserver, even a single regionserver, approaches 
> 1.0. So the probability that all service endpoints will be affected 
> approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there are HDFS-level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below.
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> can still be susceptible; branch-2's sync WAL is susceptible.
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now back to 
> the last time we checked. If the sync time exceeds a configured threshold, 
> roll the log writer then too. Fortunately we don't need to know which 
> datanode is making the WAL write pipeline slow, only that syncs on the 
> pipeline are too slow and exceed a threshold. This is enough information to 
> know when to roll it. Once we roll it, we will get three new randomly 
> selected datanodes. On most clusters the probability that the new pipeline 
> includes the slow datanode will be low. (And if for some reason it does end 
> up with a problematic datanode again, we roll again.)
> This is not a silver bullet, but it can be a reasonably effective mitigation.
> Provide a metric for tracking when a log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval, that probably means there 
> is a widespread problem with the fleet, so our mitigation is not helping and 
> may be exacerbating those problems or operator difficulties. Ensure log roll 
> requests triggered by this new feature happen infrequently enough not to 
> cause difficulties under either normal or abnormal conditions. A very simple 
> strategy that could work well under both normal and abnormal conditions is to 
> define a fairly lengthy interval, default 5 minutes, and ensure we do not 
> roll more than once during this interval for this reason.
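
A minimal sketch of the proposed mitigation, with illustrative names and 
values (not the committed patch): roll when a sync exceeds a threshold, but 
at most once per configured interval so roll requests stay infrequent.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: request a WAL roll when the last sync was too slow,
// rate-limited to one roll request per interval (e.g. 5 minutes) so an
// unhealthy fleet cannot trigger a storm of rolls.
final class SlowSyncRollPolicy {
  private final long slowSyncThresholdMs;   // e.g. 10000
  private final long minRollIntervalMs;     // e.g. 5 * 60 * 1000
  private final AtomicLong lastRollMs = new AtomicLong();

  SlowSyncRollPolicy(long slowSyncThresholdMs, long minRollIntervalMs) {
    this.slowSyncThresholdMs = slowSyncThresholdMs;
    this.minRollIntervalMs = minRollIntervalMs;
  }

  /** Returns true if the caller should request a log roll now. */
  boolean shouldRoll(long lastSyncDurationMs, long nowMs) {
    if (lastSyncDurationMs < slowSyncThresholdMs) {
      return false;
    }
    long last = lastRollMs.get();
    return nowMs - last >= minRollIntervalMs && lastRollMs.compareAndSet(last, nowMs);
  }
}
{code}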





[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824989#comment-16824989
 ] 

Sean Busbey commented on HBASE-22301:
-

{code}
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java:77:
switch (reason) {: switch without "default" clause. [MissingSwitchDefault]
{code}

This is a good find from checkstyle: without a default clause that fails, it'll 
be easy for someone to add a value to the enum but forget to update the metrics.
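
The shape checkstyle is asking for, sketched with illustrative names (not the 
actual MetricsWAL code):

{code:java}
// Illustrative sketch: a default clause that fails loudly means a newly
// added enum constant cannot silently skip its metrics update.
enum RollReason { ERROR, LOW_REPLICATION, SIZE, SLOW_SYNC }

final class RollMetrics {
  static void count(RollReason reason) {
    switch (reason) {
      case ERROR:
        // increment error-roll counter
        break;
      case LOW_REPLICATION:
        // increment low-replication-roll counter
        break;
      case SIZE:
        // increment size-roll counter
        break;
      case SLOW_SYNC:
        // increment slow-sync-roll counter
        break;
      default:
        throw new IllegalArgumentException("Unknown roll reason: " + reason);
    }
  }
}
{code}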

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure, not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write-heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS-level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near-uniformly over the 
> keyspace, the probability that any given service endpoint will dispatch a 
> request to an impacted regionserver, even a single regionserver, approaches 
> 1.0. So the probability that all service endpoints will be affected 
> approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there are HDFS-level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below.
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> can still be susceptible; branch-2's sync WAL is susceptible.
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now back to 
> the last time we checked. If the sync time exceeds a configured threshold, 
> roll the log writer then too. Fortunately we don't need to know which 
> datanode is making the WAL write pipeline slow, only that syncs on the 
> pipeline are too slow and exceed a threshold. This is enough information to 
> know when to roll it. Once we roll it, we will get three new randomly 
> selected datanodes. On most clusters the probability that the new pipeline 
> includes the slow datanode will be low. (And if for some reason it does end 
> up with a problematic datanode again, we roll again.)
> This is not a silver bullet, but it can be a reasonably effective mitigation.
> Provide a metric for tracking when a log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval, that probably means there 
> is a widespread problem with the fleet, so our mitigation is not helping and 
> may be exacerbating those problems or operator difficulties. Ensure log roll 
> requests triggered by this new feature happen infrequently enough not to 
> cause difficulties under either normal or abnormal conditions. A very simple 
> strategy that could work well under both normal and abnormal conditions is to 
> define a fairly lengthy interval, default 5 minutes, and ensure we do not 
> roll more than once during this interval for this reason.





[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824990#comment-16824990
 ] 

Sean Busbey commented on HBASE-22301:
-

{code}
java.lang.AssertionError: The regionserver should have thrown an exception
at 
org.apache.hadoop.hbase.regionserver.TestFailedAppendAndSync.testLockupAroundBadAssignSync(TestFailedAppendAndSync.java:258)
{code}

This looks like it might be related, since the changed code path is getting 
exercised, but I haven't dug in enough to figure out what's going on with it.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure, not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write-heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS-level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near-uniformly over the 
> keyspace, the probability that any given service endpoint will dispatch a 
> request to an impacted regionserver, even a single regionserver, approaches 
> 1.0. So the probability that all service endpoints will be affected 
> approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there are HDFS-level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below.
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> can still be susceptible; branch-2's sync WAL is susceptible.
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now back to 
> the last time we checked. If the sync time exceeds a configured threshold, 
> roll the log writer then too. Fortunately we don't need to know which 
> datanode is making the WAL write pipeline slow, only that syncs on the 
> pipeline are too slow and exceed a threshold. This is enough information to 
> know when to roll it. Once we roll it, we will get three new randomly 
> selected datanodes. On most clusters the probability that the new pipeline 
> includes the slow datanode will be low. (And if for some reason it does end 
> up with a problematic datanode again, we roll again.)
> This is not a silver bullet, but it can be a reasonably effective mitigation.
> Provide a metric for tracking when a log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval, that probably means there 
> is a widespread problem with the fleet, so our mitigation is not helping and 
> may be exacerbating those problems or operator difficulties. Ensure log roll 
> requests triggered by this new feature happen infrequently enough not to 
> cause difficulties under either normal or abnormal conditions. A very simple 
> strategy that could work well under both normal and abnormal conditions is to 
> define a fairly lengthy interval, default 5 minutes, and ensure we do not 
> roll more than once during this interval for this reason.





[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824992#comment-16824992
 ] 

Sean Busbey commented on HBASE-22301:
-

bq. hadoop.hbase.util.hbck.TestOfflineMetaRebuildBase

I don't think this one is related. It just started to fail in the last 
nightly build.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure, not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write-heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS-level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near-uniformly over the 
> keyspace, the probability that any given service endpoint will dispatch a 
> request to an impacted regionserver, even a single regionserver, approaches 
> 1.0. So the probability that all service endpoints will be affected 
> approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there are HDFS-level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below.
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> can still be susceptible; branch-2's sync WAL is susceptible.
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now back to 
> the last time we checked. If the sync time exceeds a configured threshold, 
> roll the log writer then too. Fortunately we don't need to know which 
> datanode is making the WAL write pipeline slow, only that syncs on the 
> pipeline are too slow and exceed a threshold. This is enough information to 
> know when to roll it. Once we roll it, we will get three new randomly 
> selected datanodes. On most clusters the probability that the new pipeline 
> includes the slow datanode will be low. (And if for some reason it does end 
> up with a problematic datanode again, we roll again.)
> This is not a silver bullet, but it can be a reasonably effective mitigation.
> Provide a metric for tracking when a log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval, that probably means there 
> is a widespread problem with the fleet, so our mitigation is not helping and 
> may be exacerbating those problems or operator difficulties. Ensure log roll 
> requests triggered by this new feature happen infrequently enough not to 
> cause difficulties under either normal or abnormal conditions. A very simple 
> strategy that could work well under both normal and abnormal conditions is to 
> define a fairly lengthy interval, default 5 minutes, and ensure we do not 
> roll more than once during this interval for this reason.





[jira] [Commented] (HBASE-21920) Ignoring 'empty' end_key while calculating end_key for new region in HBCK -fixHdfsOverlaps command can cause data loss

2019-04-24 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824994#comment-16824994
 ] 

HBase QA commented on HBASE-21920:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
34s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
37s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
14s{color} | {color:red} hbase-server: The patch generated 1 new + 181 
unchanged - 1 fixed = 182 total (was 182) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
35s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 40s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 23s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestMasterFailover |
|   | hadoop.hbase.coprocessor.TestCoprocessorEndpoint |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/170/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-21920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966849/HBASE-21920.branch-1.002.patch
 |
| Optional T

[jira] [Commented] (HBASE-22298) branch-2.2 nightly fails "[ForOverride] Method annotated @ForOverride must have protected or package-private visibility"

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824997#comment-16824997
 ] 

Hudson commented on HBASE-22298:


Results for branch master
[build #957 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/957/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> branch-2.2 nightly fails "[ForOverride] Method annotated @ForOverride must 
> have protected or package-private visibility"
> 
>
> Key: HBASE-22298
> URL: https://issues.apache.org/jira/browse/HBASE-22298
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.5, 2.2.1
>
> Attachments: HBASE-22298.branch-2.2.001.patch
>
>
> The change to use the guava service happened a long time ago, but errorprone 
> only complains now... update?
> {code}
> 
> [INFO] 97 warnings 
> [INFO] -
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterSchemaServiceImpl.java:[60,27]
>  error: [ForOverride] Method annotated @ForOverride must have protected or 
> package-private visibility
> (see https://errorprone.info/bugpattern/ForOverride)
> [INFO] 1 error
> {code}
> See https://errorprone.info/bugpattern/ForOverride





[jira] [Commented] (HBASE-22250) The same constants used in many places should be placed in constant classes

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825001#comment-16825001
 ] 

Hudson commented on HBASE-22250:


Results for branch master
[build #957 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/957/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The same constants used in many places should be placed in constant classes
> ---
>
> Key: HBASE-22250
> URL: https://issues.apache.org/jira/browse/HBASE-22250
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, conf, regionserver
>Affects Versions: 1.2.0, 2.0.0, 2.1.1, 2.1.4
>Reporter: lixiaobao
>Assignee: lixiaobao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
> Attachments: HBASE-22250.patch
>
>
> I think we should put these configuration keys in the HConstants class to 
> avoid having to modify a lot of places when we change them later.
> {code:java}
> public static final String MASTER_KRB_PRINCIPAL = 
> "hbase.master.kerberos.principal";
> public static final String MASTER_KRB_KEYTAB_FILE = 
> "hbase.master.keytab.file";
> public static final String REGIONSERVER_KRB_PRINCIPAL = 
> "hbase.regionserver.kerberos.principal";
> public static final String REGIONSERVER_KRB_KEYTAB_FILE = 
> "hbase.regionserver.keytab.file";{code}





[jira] [Commented] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824998#comment-16824998
 ] 

Hudson commented on HBASE-22086:


Results for branch master
[build #957 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/957/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, space
> Fix For: 3.0.0
>
> Attachments: hbase-22086.master.001.patch, 
> hbase-22086.master.002.patch, hbase-22086.master.003.patch, 
> hbase-22086.master.004.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps:
> 1. set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2. ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3. ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4. snapshot 'bugatti','bugatti_snapshot'
> 5. ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6. major_compact 'bugatti'
> 7. delete_snapshot 'bugatti_snapshot'
> Now check the usage and observe that it is not getting updated.





[jira] [Commented] (HBASE-22296) Remove TestFromClientSide.testGetStartEndKeysWithRegionReplicas

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16824999#comment-16824999
 ] 

Hudson commented on HBASE-22296:


Results for branch master
[build #957 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/957/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/957//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove TestFromClientSide.testGetStartEndKeysWithRegionReplicas
> ---
>
> Key: HBASE-22296
> URL: https://issues.apache.org/jira/browse/HBASE-22296
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
>
> It tests nothing after HBASE-21753...





[jira] [Commented] (HBASE-22298) branch-2.2 nightly fails "[ForOverride] Method annotated @ForOverride must have protected or package-private visibility"

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825002#comment-16825002
 ] 

Hudson commented on HBASE-22298:


Results for branch branch-2
[build #1842 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1842/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1842//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1842//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1842//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> branch-2.2 nightly fails "[ForOverride] Method annotated @ForOverride must 
> have protected or package-private visibility"
> 
>
> Key: HBASE-22298
> URL: https://issues.apache.org/jira/browse/HBASE-22298
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.5, 2.2.1
>
> Attachments: HBASE-22298.branch-2.2.001.patch
>
>
> The change to use the guava service happened a long time ago, but errorprone 
> only complains now... update?
> {code}
> 
> [INFO] 97 warnings 
> [INFO] -
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterSchemaServiceImpl.java:[60,27]
>  error: [ForOverride] Method annotated @ForOverride must have protected or 
> package-private visibility
> (see https://errorprone.info/bugpattern/ForOverride)
> [INFO] 1 error
> {code}
> See https://errorprone.info/bugpattern/ForOverride





[jira] [Commented] (HBASE-22250) The same constants used in many places should be placed in constant classes

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825003#comment-16825003
 ] 

Hudson commented on HBASE-22250:


Results for branch branch-2
[build #1842 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1842/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1842//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1842//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1842//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The same constants used in many places should be placed in constant classes
> ---
>
> Key: HBASE-22250
> URL: https://issues.apache.org/jira/browse/HBASE-22250
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, conf, regionserver
>Affects Versions: 1.2.0, 2.0.0, 2.1.1, 2.1.4
>Reporter: lixiaobao
>Assignee: lixiaobao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
> Attachments: HBASE-22250.patch
>
>
> I think we should put these configuration keys in the HConstants class so 
> that a later change only needs to touch a single place.
> {code:java}
> public static final String MASTER_KRB_PRINCIPAL = 
> "hbase.master.kerberos.principal";
> public static final String MASTER_KRB_KEYTAB_FILE = 
> "hbase.master.keytab.file";
> public static final String REGIONSERVER_KRB_PRINCIPAL = 
> "hbase.regionserver.kerberos.principal";
> public static final String REGIONSERVER_KRB_KEYTAB_FILE = 
> "hbase.regionserver.keytab.file";{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] anis016 opened a new pull request #186: HBASE-22294 Removed deprecated method from WALKeyImpl

2019-04-24 Thread GitBox
anis016 opened a new pull request #186: HBASE-22294 Removed deprecated method 
from WALKeyImpl
URL: https://github.com/apache/hbase/pull/186
 
 
   Removed deprecated method from WALKeyImpl.
   Fixes: [HBASE-22294](https://issues.apache.org/jira/browse/HBASE-22294)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] HorizonNet merged pull request #165: HBASE-22272 Fixed Checkstyle errors in hbase-backup

2019-04-24 Thread GitBox
HorizonNet merged pull request #165: HBASE-22272 Fixed Checkstyle errors in 
hbase-backup
URL: https://github.com/apache/hbase/pull/165
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (HBASE-22272) Fix Checkstyle errors in hbase-backup

2019-04-24 Thread Jan Hentschel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel resolved HBASE-22272.
---
   Resolution: Fixed
Fix Version/s: 3.0.0

> Fix Checkstyle errors in hbase-backup
> -
>
> Key: HBASE-22272
> URL: https://issues.apache.org/jira/browse/HBASE-22272
> Project: HBase
>  Issue Type: Task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: 3.0.0
>
>
> There are a few Checkstyle errors in {{hbase-backup}}, which should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-22299) Documentation has incorrect default number of versions

2019-04-24 Thread Sayed Anisul Hoque (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sayed Anisul Hoque reassigned HBASE-22299:
--

Assignee: Sayed Anisul Hoque

> Documentation has incorrect default number of versions
> --
>
> Key: HBASE-22299
> URL: https://issues.apache.org/jira/browse/HBASE-22299
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Peter Somogyi
>Assignee: Sayed Anisul Hoque
>Priority: Trivial
>  Labels: beginner
>
> Reference guide has this section under 
> [compaction|https://hbase.apache.org/book.html#compaction].
> {quote}
> Compaction and Versions
> When you create a Column Family, you can specify the maximum number of 
> versions to keep, by specifying HColumnDescriptor.setMaxVersions(int 
> versions). The default value is 3. If more versions than the specified 
> maximum exist, the excess versions are filtered out and not written back to 
> the compacted StoreFile.
> {quote}
> This is incorrect; the default value is 1.
> Additionally, HColumnDescriptor is deprecated and the example should use 
> ColumnFamilyDescriptorBuilder#setMaxVersions(int) instead.
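For reference, a small sketch of the non-deprecated builder API the docs should point to; the table and family names here are illustrative:
{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class MaxVersionsExample {
  public static void main(String[] args) {
    // The default number of versions is 1; set it explicitly if more
    // versions should survive compaction.
    ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("cf"))
        .setMaxVersions(5)
        .build();
    TableDescriptorBuilder.newBuilder(TableName.valueOf("t"))
        .setColumnFamily(cf)
        .build();
  }
}
{code}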



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-22299) Documentation has incorrect default number of versions

2019-04-24 Thread Sayed Anisul Hoque (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22299 started by Sayed Anisul Hoque.
--
> Documentation has incorrect default number of versions
> --
>
> Key: HBASE-22299
> URL: https://issues.apache.org/jira/browse/HBASE-22299
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Peter Somogyi
>Assignee: Sayed Anisul Hoque
>Priority: Trivial
>  Labels: beginner
>
> Reference guide has this section under 
> [compaction|https://hbase.apache.org/book.html#compaction].
> {quote}
> Compaction and Versions
> When you create a Column Family, you can specify the maximum number of 
> versions to keep, by specifying HColumnDescriptor.setMaxVersions(int 
> versions). The default value is 3. If more versions than the specified 
> maximum exist, the excess versions are filtered out and not written back to 
> the compacted StoreFile.
> {quote}
> This is incorrect; the default value is 1.
> Additionally, HColumnDescriptor is deprecated and the example should use 
> ColumnFamilyDescriptorBuilder#setMaxVersions(int) instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22283) Print row and table information when failed to get region location

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825077#comment-16825077
 ] 

Hudson commented on HBASE-22283:


Results for branch master
[build #958 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/958/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/958//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/958//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/958//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Print row and table information when failed to get region location
> --
>
> Key: HBASE-22283
> URL: https://issues.apache.org/jira/browse/HBASE-22283
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, logging
>Affects Versions: 1.4.9, 2.0.5, 2.1.4
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
> Fix For: 3.0.0, 2.2.1, 1.5.1
>
>
> Currently, when we fail to get a region location, especially when the 
> {{RegionLocations}} returned is null in 
> {{RpcRetryingCallerWithReadReplicas.getRegionLocations}} (we may see a more 
> useful message if there's an exception thrown), we only log the replica id 
> w/o any detailed information about the row and table, which makes debugging 
> difficult. Below is an example error message:
> {noformat}
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't 
> get the location for replica 0
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:372)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:277)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:438)
>   at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:312)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:639)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:409)
> {noformat}
> And here we propose to improve this part.
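A hypothetical sketch of the kind of enriched message proposed; the helper and its wording are illustrative, and the committed patch may differ:
{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RetriesExhaustedException;
import org.apache.hadoop.hbase.util.Bytes;

final class LocationErrorSketch {
  // Hypothetical helper: include the row and table in the message
  // instead of only the replica id.
  static RetriesExhaustedException cannotLocate(int replicaId, byte[] row,
      TableName tableName) {
    return new RetriesExhaustedException("Cannot get the location for replica "
        + replicaId + " of region for row '" + Bytes.toStringBinary(row)
        + "' in table '" + tableName + "'");
  }
}
{code}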



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22250) The same constants used in many places should be placed in constant classes

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825079#comment-16825079
 ] 

Hudson commented on HBASE-22250:


Results for branch branch-2.2
[build #209 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/209/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/209//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/209//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/209//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The same constants used in many places should be placed in constant classes
> ---
>
> Key: HBASE-22250
> URL: https://issues.apache.org/jira/browse/HBASE-22250
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, conf, regionserver
>Affects Versions: 1.2.0, 2.0.0, 2.1.1, 2.1.4
>Reporter: lixiaobao
>Assignee: lixiaobao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
> Attachments: HBASE-22250.patch
>
>
> I think we should put these configuration keys in the HConstants class so 
> that a later change only needs to touch a single place.
> {code:java}
> public static final String MASTER_KRB_PRINCIPAL = 
> "hbase.master.kerberos.principal";
> public static final String MASTER_KRB_KEYTAB_FILE = 
> "hbase.master.keytab.file";
> public static final String REGIONSERVER_KRB_PRINCIPAL = 
> "hbase.regionserver.kerberos.principal";
> public static final String REGIONSERVER_KRB_KEYTAB_FILE = 
> "hbase.regionserver.keytab.file";{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] anis016 opened a new pull request #187: HBASE-22299 Documentation has incorrect default number of versions

2019-04-24 Thread GitBox
anis016 opened a new pull request #187: HBASE-22299 Documentation has incorrect 
default number of versions
URL: https://github.com/apache/hbase/pull/187
 
 
   Corrected the default value to 1. Also removed the deprecated 
HColumnDescriptor class and added ColumnFamilyDescriptorBuilder in the 
documentation.
   
   Fixes: [HBASE-22299](https://issues.apache.org/jira/browse/HBASE-22299)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825082#comment-16825082
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #74 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/74/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/74//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/74//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/74//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the HFile into an on-heap 
> byte[], then copy that on-heap byte[] to the offheap bucket cache 
> asynchronously. In my 100% get performance test, I also observed frequent 
> young GCs; the largest memory footprint in the young gen should be the 
> on-heap block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead 
> of into a byte[] to reduce young GC pressure. We did not implement this 
> before because the older HDFS client had no ByteBuffer reading interface, 
> but 2.7+ supports it now, so we can fix this.
> Will provide a patch and some perf comparisons for this.
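As a hedged sketch of that direction (not the actual patch): reading straight into a ByteBuffer via the ByteBufferReadable contract on FSDataInputStream, with checksum verification and buffer-pool handling omitted:
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.ByteBufferReadable;
import org.apache.hadoop.fs.FSDataInputStream;

final class ByteBufferBlockRead {
  // Reads 'len' bytes at 'offset' straight into 'buf', avoiding the
  // intermediate on-heap byte[] described above.
  static void readFully(FSDataInputStream in, long offset, ByteBuffer buf,
      int len) throws IOException {
    if (!(in.getWrappedStream() instanceof ByteBufferReadable)) {
      throw new IOException("underlying stream has no ByteBuffer read support");
    }
    in.seek(offset);
    buf.limit(buf.position() + len);
    while (buf.hasRemaining()) {
      if (in.read(buf) < 0) {
        throw new IOException("premature EOF reading block at " + offset);
      }
    }
  }
}
{code}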



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache-HBase commented on issue #187: HBASE-22299 Documentation has incorrect default number of versions

2019-04-24 Thread GitBox
Apache-HBase commented on issue #187: HBASE-22299 Documentation has incorrect 
default number of versions
URL: https://github.com/apache/hbase/pull/187#issuecomment-486212646
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 68 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 376 | master passed |
   | 0 | refguide | 529 | branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 272 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | 0 | refguide | 469 | patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 17 | The patch does not generate ASF License warnings. |
   | | | 1798 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-187/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/187 |
   | Optional Tests |  dupname  asflicense  refguide  |
   | uname | Linux 4122f04ddf16 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / d3bf9c0a77 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-187/1/artifact/out/branch-site/book.html
 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-187/1/artifact/out/patch-site/book.html
 |
   | Max. process+thread count | 96 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-187/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22020:

   Resolution: Fixed
Fix Version/s: 1.3.5
   2.2.1
   2.1.5
   2.3.0
   1.4.10
   1.5.0
   3.0.0
   Status: Resolved  (was: Patch Available)

> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references DTDs that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing 
> because, among other reasons, there is bad xml in the build... notably, the 
> unresolvable DTDs.
> I'd just update the DTDs, but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22122) Change to release mob hfile's block after rpc server shipped response to client

2019-04-24 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-22122:
-
Attachment: HBASE-22122.HBASE-21879.v01.patch

> Change to release mob hfile's block  after rpc server shipped response to 
> client   
> ---
>
> Key: HBASE-22122
> URL: https://issues.apache.org/jira/browse/HBASE-22122
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22122.HBASE-21879.v01.patch, unit-test.patch
>
>
> In HBASE-22005, there's a known bug [1], and I just copied the cell's 
> byte[] from the block to on-heap directly in HBASE-22005, so that 
> HBASE-22005 could move forward. 
> I marked it as a TODO subtask to eliminate the offheap-to-heap copying 
> here. 
> 1. 
> https://issues.apache.org/jira/browse/HBASE-22005?focusedCommentId=16803734&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16803734
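A rough, hypothetical sketch of the intended lifecycle using HBase's Shipper callback; the holder class and the 'blockRelease' hand-off are illustrative only, not the patch:
{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.regionserver.Shipper;

// Hypothetical sketch: keep a handle on the mob block's backing buffer and
// release it only after the RPC layer has shipped the response, instead of
// eagerly copying the cell's bytes on-heap.
final class MobBlockHolder implements Shipper {
  private Runnable blockRelease; // e.g. a ref-count decrement; illustrative

  void retain(Runnable release) {
    this.blockRelease = release;
  }

  @Override
  public void shipped() throws IOException {
    if (blockRelease != null) {
      blockRelease.run(); // safe now: the response bytes are on the wire
      blockRelease = null;
    }
  }
}
{code}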



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22122) Change to release mob hfile's block after rpc server shipped response to client

2019-04-24 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-22122:
-
Status: Patch Available  (was: Open)

> Change to release mob hfile's block  after rpc server shipped response to 
> client   
> ---
>
> Key: HBASE-22122
> URL: https://issues.apache.org/jira/browse/HBASE-22122
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22122.HBASE-21879.v01.patch, unit-test.patch
>
>
> In HBASE-22005, there's a known bug [1], and I just copied the cell's 
> byte[] from the block to on-heap directly in HBASE-22005, so that 
> HBASE-22005 could move forward. 
> I marked it as a TODO subtask to eliminate the offheap-to-heap copying 
> here. 
> 1. 
> https://issues.apache.org/jira/browse/HBASE-22005?focusedCommentId=16803734&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16803734



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825128#comment-16825128
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #75 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/75/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/75//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/75//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/75//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the HFile into an on-heap 
> byte[], then copy that on-heap byte[] to the offheap bucket cache 
> asynchronously. In my 100% get performance test, I also observed frequent 
> young GCs; the largest memory footprint in the young gen should be the 
> on-heap block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead 
> of into a byte[] to reduce young GC pressure. We did not implement this 
> before because the older HDFS client had no ByteBuffer reading interface, 
> but 2.7+ supports it now, so we can fix this.
> Will provide a patch and some perf comparisons for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-22109) Update hbase shaded content checker after guava update in hadoop branch-3.0 to 27.0-jre

2019-04-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-22109:
---

Assignee: Sean Busbey  (was: Gabor Bota)

> Update hbase shaded content checker after guava update in hadoop branch-3.0 
> to 27.0-jre
> ---
>
> Key: HBASE-22109
> URL: https://issues.apache.org/jira/browse/HBASE-22109
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Gabor Bota
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-22109.001.patch
>
>
> I'm updating guava version from 11.0.2 to 27.0-jre in HADOOP-15960 because of 
> a CVE. I will create a patch for branch-3.0, 3.1, 3.2 and trunk (3.3).  
> I wanted to be sure that HBase works with the updated guava, so I compiled 
> and ran the HBase tests with my hadoop snapshot containing the updated 
> version, but there were some issues that I had to fix:
> * New shaded dependency: org.checkerframework
> * New license needs to be added to LICENSE.vm: Apache 2.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-22109) Update hbase shaded content checker after guava update in hadoop branch-3.0 to 27.0-jre

2019-04-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22109 started by Sean Busbey.
---
> Update hbase shaded content checker after guava update in hadoop branch-3.0 
> to 27.0-jre
> ---
>
> Key: HBASE-22109
> URL: https://issues.apache.org/jira/browse/HBASE-22109
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Gabor Bota
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-22109.001.patch
>
>
> I'm updating guava version from 11.0.2 to 27.0-jre in HADOOP-15960 because of 
> a CVE. I will create a patch for branch-3.0, 3.1, 3.2 and trunk (3.3).  
> I wanted to be sure that HBase works with the updated guava, so I compiled 
> and ran the HBase tests with my hadoop snapshot containing the updated 
> version, but there were some issues that I had to fix:
> * New shaded dependency: org.checkerframework
> * New license needs to be added to LICENSE.vm: Apache 2.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22109) Update hbase shaded content checker after guava update in hadoop branch-3.0 to 27.0-jre

2019-04-24 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825151#comment-16825151
 ] 

Gabor Bota commented on HBASE-22109:


No, I'm not working on this right now - I need to do the update on hadoop first.
Feel free to work on this.

> Update hbase shaded content checker after guava update in hadoop branch-3.0 
> to 27.0-jre
> ---
>
> Key: HBASE-22109
> URL: https://issues.apache.org/jira/browse/HBASE-22109
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Gabor Bota
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-22109.001.patch
>
>
> I'm updating guava version from 11.0.2 to 27.0-jre in HADOOP-15960 because of 
> a CVE. I will create a patch for branch-3.0, 3.1, 3.2 and trunk (3.3).  
> I wanted to be sure that HBase works with the updated guava, so I compiled 
> and ran the HBase tests with my hadoop snapshot containing the updated 
> version, but there were some issues that I had to fix:
> * New shaded dependency: org.checkerframework
> * New license needs to be added to LICENSE.vm: Apache 2.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] petersomogyi merged pull request #187: HBASE-22299 Documentation has incorrect default number of versions

2019-04-24 Thread GitBox
petersomogyi merged pull request #187: HBASE-22299 Documentation has incorrect 
default number of versions
URL: https://github.com/apache/hbase/pull/187
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-22302) Fix TestHbck

2019-04-24 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-22302:
-

 Summary: Fix TestHbck
 Key: HBASE-22302
 URL: https://issues.apache.org/jira/browse/HBASE-22302
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Duo Zhang
Assignee: Duo Zhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-22299) Documentation has incorrect default number of versions

2019-04-24 Thread Peter Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi resolved HBASE-22299.
---
   Resolution: Fixed
Fix Version/s: 3.0.0

Merged PR#187. Thanks [~anis016] for your contribution!

> Documentation has incorrect default number of versions
> --
>
> Key: HBASE-22299
> URL: https://issues.apache.org/jira/browse/HBASE-22299
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Peter Somogyi
>Assignee: Sayed Anisul Hoque
>Priority: Trivial
>  Labels: beginner
> Fix For: 3.0.0
>
>
> Reference guide has this section under 
> [compaction|https://hbase.apache.org/book.html#compaction].
> {quote}
> Compaction and Versions
> When you create a Column Family, you can specify the maximum number of 
> versions to keep, by specifying HColumnDescriptor.setMaxVersions(int 
> versions). The default value is 3. If more versions than the specified 
> maximum exist, the excess versions are filtered out and not written back to 
> the compacted StoreFile.
> {quote}
> This is incorrect; the default value is 1.
> Additionally, HColumnDescriptor is deprecated and the example should use 
> ColumnFamilyDescriptorBuilder#setMaxVersions(int) instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache-HBase commented on issue #186: HBASE-22294 Removed deprecated method from WALKeyImpl

2019-04-24 Thread GitBox
Apache-HBase commented on issue #186: HBASE-22294 Removed deprecated method 
from WALKeyImpl
URL: https://github.com/apache/hbase/pull/186#issuecomment-486262249
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 20 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 269 | master passed |
   | +1 | compile | 52 | master passed |
   | +1 | checkstyle | 76 | master passed |
   | +1 | shadedjars | 279 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 193 | master passed |
   | +1 | javadoc | 34 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 251 | the patch passed |
   | +1 | compile | 54 | the patch passed |
   | +1 | javac | 54 | the patch passed |
   | +1 | checkstyle | 74 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 278 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 538 | Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. |
   | +1 | findbugs | 193 | the patch passed |
   | +1 | javadoc | 32 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 8604 | hbase-server in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 11044 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-186/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/186 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 054187f2f0e7 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / ab3d6cf811 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-186/1/testReport/
 |
   | Max. process+thread count | 4849 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-186/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-22302) Fix TestHbck

2019-04-24 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22302:
--
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-21512

> Fix TestHbck
> 
>
> Key: HBASE-22302
> URL: https://issues.apache.org/jira/browse/HBASE-22302
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22122) Change to release mob hfile's block after rpc server shipped response to client

2019-04-24 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825217#comment-16825217
 ] 

HBase QA commented on HBASE-22122:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  6m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-21879 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
49s{color} | {color:green} HBASE-21879 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} HBASE-21879 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} HBASE-21879 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
45s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
38s{color} | {color:blue} hbase-server in HBASE-21879 has 11 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} HBASE-21879 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
26s{color} | {color:red} hbase-server: The patch generated 1 new + 97 unchanged 
- 0 fixed = 98 total (was 97) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
42s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 58s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 39s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.mob.TestMobFile |
|   | hadoop.hbase.mob.TestCachedMobFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/171/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22122 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966882/HBASE-22122.HBASE-21879.v01.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux f6aef6647c6e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | HBASE-21879 / 2a098281d9 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/171/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| unit | 
https://build

[jira] [Updated] (HBASE-22302) Fix TestHbck

2019-04-24 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22302:
--
Attachment: (was: HBASE-22302.patch)

> Fix TestHbck
> 
>
> Key: HBASE-22302
> URL: https://issues.apache.org/jira/browse/HBASE-22302
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22302-HBASE-21512.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22302) Fix TestHbck

2019-04-24 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22302:
--
Attachment: HBASE-22302-HBASE-21512.patch

> Fix TestHbck
> 
>
> Key: HBASE-22302
> URL: https://issues.apache.org/jira/browse/HBASE-22302
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22302-HBASE-21512.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22302) Fix TestHbck

2019-04-24 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22302:
--
Attachment: HBASE-22302.patch

> Fix TestHbck
> 
>
> Key: HBASE-22302
> URL: https://issues.apache.org/jira/browse/HBASE-22302
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22302-HBASE-21512.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22302) Fix TestHbck

2019-04-24 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22302:
--
Status: Patch Available  (was: Open)

> Fix TestHbck
> 
>
> Key: HBASE-22302
> URL: https://issues.apache.org/jira/browse/HBASE-22302
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22302-HBASE-21512.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22303) Fix TestReplicationDroppedTables

2019-04-24 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-22303:
-

 Summary: Fix TestReplicationDroppedTables
 Key: HBASE-22303
 URL: https://issues.apache.org/jira/browse/HBASE-22303
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang
Assignee: Duo Zhang


It is broken by HBASE-22239...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825239#comment-16825239
 ] 

Hudson commented on HBASE-22020:


Results for branch branch-1.2
[build #745 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/745/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/745//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/745//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/745//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references DTDs that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing 
> because, among other reasons, there is bad xml in the build... notably, the 
> unresolvable DTDs.
> I'd just update the DTDs, but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22303) Fix TestReplicationDroppedTables

2019-04-24 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22303:
--
Attachment: HBASE-22303-HBASE-21512.patch

> Fix TestReplicationDroppedTables
> 
>
> Key: HBASE-22303
> URL: https://issues.apache.org/jira/browse/HBASE-22303
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22303-HBASE-21512.patch
>
>
> It is broken by HBASE-22239...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22303) Fix TestReplicationDroppedTables

2019-04-24 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22303:
--
Status: Patch Available  (was: Open)

> Fix TestReplicationDroppedTables
> 
>
> Key: HBASE-22303
> URL: https://issues.apache.org/jira/browse/HBASE-22303
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22303-HBASE-21512.patch
>
>
> It is broken by HBASE-22239...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22304) Fix remaining Checkstyle issues in hbase-endpoint

2019-04-24 Thread Jan Hentschel (JIRA)
Jan Hentschel created HBASE-22304:
-

 Summary: Fix remaining Checkstyle issues in hbase-endpoint
 Key: HBASE-22304
 URL: https://issues.apache.org/jira/browse/HBASE-22304
 Project: HBase
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Jan Hentschel
Assignee: Jan Hentschel


The module {{hbase-endpoint}} still has a small number of Checkstyle issues, 
which should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-22304) Fix remaining Checkstyle issues in hbase-endpoint

2019-04-24 Thread Jan Hentschel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22304 started by Jan Hentschel.
-
> Fix remaining Checkstyle issues in hbase-endpoint
> -
>
> Key: HBASE-22304
> URL: https://issues.apache.org/jira/browse/HBASE-22304
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
>
> The module {{hbase-endpoint}} still has a small number of Checkstyle issues, 
> which should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] HorizonNet opened a new pull request #188: HBASE-22304 Fixed remaining Checkstyle issues in hbase-endpoint

2019-04-24 Thread GitBox
HorizonNet opened a new pull request #188: HBASE-22304 Fixed remaining 
Checkstyle issues in hbase-endpoint
URL: https://github.com/apache/hbase/pull/188
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825254#comment-16825254
 ] 

Hudson commented on HBASE-22020:


Results for branch branch-1.3
[build #737 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/737/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/737//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/737//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/737//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references DTDs that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing 
> because, among other reasons, there is bad xml in the build... notably, the 
> unresolvable DTDs.
> I'd just update the DTDs, but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache-HBase commented on issue #180: HBASE-22231 Removed unused and '*' import

2019-04-24 Thread GitBox
Apache-HBase commented on issue #180: HBASE-22231 Removed unused and '*' import
URL: https://github.com/apache/hbase/pull/180#issuecomment-486299081
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 57 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 81 new or modified test 
files. |
   ||| _ branch-2.0 Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 236 | branch-2.0 passed |
   | +1 | compile | 282 | branch-2.0 passed |
   | +1 | checkstyle | 292 | branch-2.0 passed |
   | +1 | shadedjars | 322 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 514 | branch-2.0 passed |
   | +1 | javadoc | 199 | branch-2.0 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for patch |
   | +1 | mvninstall | 215 | the patch passed |
   | +1 | compile | 275 | the patch passed |
   | +1 | javac | 275 | the patch passed |
   | +1 | checkstyle | 28 | hbase-common: The patch generated 0 new + 11 
unchanged - 3 fixed = 11 total (was 14) |
   | +1 | checkstyle | 44 | hbase-client: The patch generated 0 new + 208 
unchanged - 20 fixed = 208 total (was 228) |
   | +1 | checkstyle | 16 | hbase-zookeeper: The patch generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) |
   | +1 | checkstyle | 15 | hbase-replication: The patch generated 0 new + 9 
unchanged - 1 fixed = 9 total (was 10) |
   | +1 | checkstyle | 19 | hbase-procedure: The patch generated 0 new + 1 
unchanged - 7 fixed = 1 total (was 8) |
   | +1 | checkstyle | 95 | hbase-server: The patch generated 0 new + 497 
unchanged - 127 fixed = 497 total (was 624) |
   | +1 | checkstyle | 25 | hbase-mapreduce: The patch generated 0 new + 97 
unchanged - 12 fixed = 97 total (was 109) |
   | +1 | checkstyle | 27 | hbase-it: The patch generated 0 new + 88 unchanged 
- 9 fixed = 88 total (was 97) |
   | +1 | checkstyle | 21 | hbase-rest: The patch generated 0 new + 15 
unchanged - 3 fixed = 15 total (was 18) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 316 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 690 | Patch does not cause any errors with Hadoop 2.6.5 
2.7.4 or 3.0.0. |
   | +1 | findbugs | 597 | the patch passed |
   | +1 | javadoc | 199 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 188 | hbase-common in the patch passed. |
   | +1 | unit | 225 | hbase-client in the patch passed. |
   | +1 | unit | 55 | hbase-zookeeper in the patch passed. |
   | +1 | unit | 19 | hbase-replication in the patch passed. |
   | +1 | unit | 236 | hbase-procedure in the patch passed. |
   | +1 | unit | 11324 | hbase-server in the patch passed. |
   | +1 | unit | 1379 | hbase-mapreduce in the patch passed. |
   | +1 | unit | 86 | hbase-it in the patch passed. |
   | +1 | unit | 479 | hbase-rest in the patch passed. |
   | +1 | asflicense | 285 | The patch does not generate ASF License warnings. |
   | | | 19026 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-180/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/180 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 5b031bb5c861 4.4.0-145-generic #171-Ubuntu SMP Tue Mar 26 
12:43:40 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | branch-2.0 / c3f926419d |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.0-RC3 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-180/4/testReport/
 |
   | Max. process+thread count | 4878 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-client hbase-zookeeper hbase-replication 
hbase-procedure hbase-server hbase-mapreduce hbase-it hbase-rest U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-180/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] [hbase] Apache-HBase commented on issue #188: HBASE-22304 Fixed remaining Checkstyle issues in hbase-endpoint

2019-04-24 Thread GitBox
Apache-HBase commented on issue #188: HBASE-22304 Fixed remaining Checkstyle 
issues in hbase-endpoint
URL: https://github.com/apache/hbase/pull/188#issuecomment-486302598
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 65 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
file. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 295 | master passed |
   | +1 | compile | 25 | master passed |
   | +1 | checkstyle | 14 | master passed |
   | +1 | shadedjars | 278 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 38 | master passed |
   | +1 | javadoc | 12 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 261 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | +1 | checkstyle | 11 | hbase-endpoint: The patch generated 0 new + 0 
unchanged - 4 fixed = 0 total (was 4) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 265 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 574 | Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 11 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 311 | hbase-endpoint in the patch passed. |
   | +1 | asflicense | 11 | The patch does not generate ASF License warnings. |
   | | | 2314 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-188/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/188 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 7237a0c0bb88 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / e39f7dc930 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-188/1/testReport/
 |
   | Max. process+thread count | 2510 (vs. ulimit of 1) |
   | modules | C: hbase-endpoint U: hbase-endpoint |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-188/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825294#comment-16825294
 ] 

Hudson commented on HBASE-22020:


Results for branch branch-1.4
[build #762 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/762/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/762//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/762//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/762//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references dtds that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing, 
> among other reasons, with the complaint that there is bad xml in the build... 
> notably, the unresolvable DTDs.
> I'd just update the DTDs but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825298#comment-16825298
 ] 

Hudson commented on HBASE-22020:


Results for branch branch-1
[build #789 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/789/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/789//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/789//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/789//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references dtds that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing, 
> among other reasons, with the complaint that there is bad xml in the build... 
> notably, the unresolvable DTDs.
> I'd just update the DTDs but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22283) Print row and table information when failed to get region location

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825297#comment-16825297
 ] 

Hudson commented on HBASE-22283:


Results for branch branch-1
[build #789 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/789/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/789//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/789//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/789//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Print row and table information when failed to get region location
> --
>
> Key: HBASE-22283
> URL: https://issues.apache.org/jira/browse/HBASE-22283
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, logging
>Affects Versions: 1.4.9, 2.0.5, 2.1.4
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
> Fix For: 3.0.0, 2.2.1, 1.5.1
>
>
> Currently, when we fail to get a region location, especially when the 
> {{RegionLocations}} returned is null in 
> {{RpcRetryingCallerWithReadReplicas.getRegionLocations}} (we may see a more 
> useful message if there's an exception thrown), we only log the replica id 
> w/o any detailed information about row and table, which makes debugging 
> difficult. Below is an example error message:
> {noformat}
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't 
> get the location for replica 0
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:372)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:277)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:438)
>   at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:312)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:639)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:409)
> {noformat}
> And here we propose to improve this part.
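
A minimal sketch of the proposed improvement (the exception text and helper name here are assumptions for illustration; the issue only states that row and table should appear alongside the replica id):

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RetriesExhaustedException;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative only: enrich the error with table and row so the failing
// key is visible in the client-side stack trace.
static RetriesExhaustedException cannotLocate(int replicaId, TableName tableName, byte[] row) {
  return new RetriesExhaustedException("Cannot get the location for replica "
      + replicaId + " of region for row '" + Bytes.toStringBinary(row)
      + "' in table '" + tableName + "'");
}
{code}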



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22298) branch-2.2 nightly fails "[ForOverride] Method annotated @ForOverride must have protected or package-private visibility"

2019-04-24 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825300#comment-16825300
 ] 

stack commented on HBASE-22298:
---

[~zghaobac] The nightlies were failing w/o this since about April 18th. With 
this back in place, #208 from last night passed. Failures are just flakies now. 
 
https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2.2/
 This fixed an errorprone complaint that found a legit issue. If tests are 
passing for you w/o this fix, then go ahead w/ RC2 I'd say.

Were you able to use the build script to generate the RC2?



> branch-2.2 nightly fails "[ForOverride] Method annotated @ForOverride must 
> have protected or package-private visibility"
> 
>
> Key: HBASE-22298
> URL: https://issues.apache.org/jira/browse/HBASE-22298
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.5, 2.2.1
>
> Attachments: HBASE-22298.branch-2.2.001.patch
>
>
> The change to use guava service happened a long time ago but errorprone 
> only complains now... update?
> {code}
> 
> [INFO] 97 warnings 
> [INFO] -
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterSchemaServiceImpl.java:[60,27]
>  error: [ForOverride] Method annotated @ForOverride must have protected or 
> package-private visibility
> (see https://errorprone.info/bugpattern/ForOverride)
> [INFO] 1 error
> {code}
> See https://errorprone.info/bugpattern/ForOverride
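
A minimal sketch of the kind of change this check demands (illustrative only; it assumes the fix simply narrows the override's visibility, the method bodies are placeholders, and a plain Guava import is used for brevity even though HBase itself uses the shaded thirdparty variant):

{code:java}
import com.google.common.util.concurrent.AbstractService;

// Guava's AbstractService#doStart()/#doStop() are annotated @ForOverride,
// so overrides must keep protected (or package-private) visibility.
// Declaring them public trips the errorprone check quoted above.
class ExampleService extends AbstractService {
  @Override
  protected void doStart() {   // was: public void doStart()
    notifyStarted();           // report a successful start
  }

  @Override
  protected void doStop() {    // was: public void doStop()
    notifyStopped();           // report a successful stop
  }
}
{code}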



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825301#comment-16825301
 ] 

Hudson commented on HBASE-22020:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #544 (See 
[https://builds.apache.org/job/HBase-1.3-IT/544/])
HBASE-22020 update nightly to yetus 0.9.0 (busbey: 
[https://github.com/apache/hbase/commit/887b27048c3d747b1b10a63c83c545f2b1804aeb])
* (edit) dev-support/Jenkinsfile


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references dtds that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing, 
> among other reasons, with the complaint that there is bad xml in the build... 
> notably, the unresolvable DTDs.
> I'd just update the DTDs but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22305) LRU Block cache may not retain recently used blocks during eviction

2019-04-24 Thread Biju Nair (JIRA)
Biju Nair created HBASE-22305:
-

 Summary: LRU Block cache may not retain recently used blocks 
during eviction
 Key: HBASE-22305
 URL: https://issues.apache.org/jira/browse/HBASE-22305
 Project: HBase
  Issue Type: Improvement
  Components: BlockCache
Reporter: Biju Nair


During block 
[eviction|https://github.com/apache/hbase/blob/8ec93ea193f6765fd2639ce851ef8cac7df3f555/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java#L628-L648],
 the LRU block cache creates an LruCachedBlockQueue and adds blocks from the 
concurrent hash map (the cache) to identify the ones to retain. During the 
[add|https://github.com/apache/hbase/blob/8ec93ea193f6765fd2639ce851ef8cac7df3f555/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruCachedBlockQueue.java#L67],
 entries from the cache are added to the queue without any check on the block 
access time until the total size of the queued blocks reaches the queue's max 
size. After the max size is reached, only blocks with an [access time greater 
than the access time of the last block in the queue are added from the cache 
to the 
queue|https://github.com/apache/hbase/blob/8ec93ea193f6765fd2639ce851ef8cac7df3f555/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruCachedBlockQueue.java#L73].

But as part of cache operation, on a cache hit the [access time of the block 
is updated to the latest 
time|https://github.com/apache/hbase/blob/8ec93ea193f6765fd2639ce851ef8cac7df3f555/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java#L507].
 While iterating through the cache and adding blocks to the 
LruCachedBlockQueue, if the last element added when the queue reached its max 
size has an access time greater than the rest of the blocks in the cache map, 
the remaining blocks will not be added to the queue to be considered for 
retention, even though their access times are greater than those of blocks 
added to the queue before the last one.
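
A simplified sketch of the admission behavior described above (all names are illustrative, not the real LruCachedBlockQueue API): during the fill phase blocks are queued with no access-time check, and once the queue is full a candidate is admitted only if it is more recent than the comparison block, so a very recently used comparison block can shut out everything scanned after it.

{code:java}
import java.util.PriorityQueue;

// Hypothetical simplification of the described logic.
class RetentionQueue {
  interface Block { long heapSize(); long getAccessTime(); }

  private final PriorityQueue<Block> queue =
      new PriorityQueue<>((a, b) -> Long.compare(a.getAccessTime(), b.getAccessTime()));
  private final long maxSize;
  private long heapSize;
  private Block boundary;            // block in hand when the queue filled up

  RetentionQueue(long maxSize) { this.maxSize = maxSize; }

  void add(Block cb) {
    if (heapSize < maxSize) {
      queue.add(cb);                 // fill phase: no access-time check
      heapSize += cb.heapSize();
      boundary = cb;
    } else if (cb.getAccessTime() > boundary.getAccessTime()) {
      queue.add(cb);                 // considered for retention
    }                                // else: skipped, however recently used
  }
}
{code}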



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache-HBase commented on issue #178: HBASE-22231 Removed unused and '*' import

2019-04-24 Thread GitBox
Apache-HBase commented on issue #178: HBASE-22231 Removed unused and '*' import
URL: https://github.com/apache/hbase/pull/178#issuecomment-486317651
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 283 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 66 new or modified test 
files. |
   ||| _ branch-2.2 Compile Tests _ |
   | 0 | mvndep | 9 | Maven dependency ordering for branch |
   | +1 | mvninstall | 240 | branch-2.2 passed |
   | +1 | compile | 172 | branch-2.2 passed |
   | +1 | checkstyle | 178 | branch-2.2 passed |
   | +1 | shadedjars | 244 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 330 | branch-2.2 passed |
   | +1 | javadoc | 122 | branch-2.2 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | +1 | mvninstall | 232 | the patch passed |
   | +1 | compile | 173 | the patch passed |
   | +1 | javac | 173 | the patch passed |
   | +1 | checkstyle | 21 | hbase-common: The patch generated 0 new + 11 
unchanged - 3 fixed = 11 total (was 14) |
   | +1 | checkstyle | 34 | hbase-client: The patch generated 0 new + 202 
unchanged - 18 fixed = 202 total (was 220) |
   | +1 | checkstyle | 68 | hbase-server: The patch generated 0 new + 278 
unchanged - 102 fixed = 278 total (was 380) |
   | +1 | checkstyle | 17 | hbase-mapreduce: The patch generated 0 new + 91 
unchanged - 12 fixed = 91 total (was 103) |
   | +1 | checkstyle | 16 | hbase-it: The patch generated 0 new + 88 unchanged 
- 9 fixed = 88 total (was 97) |
   | +1 | checkstyle | 17 | hbase-rest: The patch generated 0 new + 15 
unchanged - 2 fixed = 15 total (was 17) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 232 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 473 | Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. |
   | +1 | findbugs | 352 | the patch passed |
   | +1 | javadoc | 119 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 152 | hbase-common in the patch passed. |
   | +1 | unit | 199 | hbase-client in the patch passed. |
   | -1 | unit | 15397 | hbase-server in the patch failed. |
   | -1 | unit | 1665 | hbase-mapreduce in the patch failed. |
   | +1 | unit | 81 | hbase-it in the patch passed. |
   | +1 | unit | 500 | hbase-rest in the patch passed. |
   | +1 | asflicense | 203 | The patch does not generate ASF License warnings. |
   | | | 21691 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestFromClientSideWithCoprocessor 
|
   |   | hadoop.hbase.coprocessor.TestMetaTableMetrics |
   |   | hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-178/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/178 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 5ecb38f2d620 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | branch-2.2 / e14e2db122 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-178/4/artifact/out/patch-unit-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-178/4/artifact/out/patch-unit-hbase-mapreduce.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-178/4/testReport/
 |
   | Max. process+thread count | 5596 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-client hbase-server hbase-mapreduce 
hbase-it hbase-rest U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-178/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825331#comment-16825331
 ] 

Hudson commented on HBASE-22020:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1222 (See 
[https://builds.apache.org/job/HBase-1.2-IT/1222/])
HBASE-22020 update nightly to yetus 0.9.0 (busbey: 
[https://github.com/apache/hbase/commit/406593c2a6b2dd2650bd9290ff58d90b6ba33fbb])
* (edit) dev-support/Jenkinsfile


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references dtds that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing, 
> among other reasons, with the complaint that there is bad xml in the build... 
> notably, the unresolvable DTDs.
> I'd just update the DTDs but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] HorizonNet merged pull request #180: HBASE-22231 Removed unused and '*' import

2019-04-24 Thread GitBox
HorizonNet merged pull request #180: HBASE-22231 Removed unused and '*' import
URL: https://github.com/apache/hbase/pull/180
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] HorizonNet merged pull request #188: HBASE-22304 Fixed remaining Checkstyle issues in hbase-endpoint

2019-04-24 Thread GitBox
HorizonNet merged pull request #188: HBASE-22304 Fixed remaining Checkstyle 
issues in hbase-endpoint
URL: https://github.com/apache/hbase/pull/188
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22231) Remove unused and * imports

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825339#comment-16825339
 ] 

Hudson commented on HBASE-22231:


Results for branch branch-2.1
[build #1081 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1081/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1081//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1081//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1081//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove unused and * imports
> ---
>
> Key: HBASE-22231
> URL: https://issues.apache.org/jira/browse/HBASE-22231
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 3.0.0
>
>
> Currently there are a lot of unused imports, as well as '*' imports, are 
> used. They should be removed or replaced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825353#comment-16825353
 ] 

Andrew Purtell commented on HBASE-22301:


There is no default switch case by deliberate choice, but since it triggers 
checkstyle I'll change that.

Whoops on the test timeout, will make it 10x.

Agree TestOfflineMetaRebuildBase failure is not related.

Let me update the patch in a bit, and we need one for branch-2 and up, at least 
for the same code in the sync WAL.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.
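
A minimal sketch of the mitigation described above (the threshold values, field names, and metric are assumptions for illustration, not the actual patch): roll the writer when the last observed sync time exceeds a threshold, but at most once per guard interval (the text suggests a 5-minute default).

{code:java}
// Illustrative only; thresholds and names are hypothetical.
class SlowSyncRollPolicy {
  private final long slowSyncRollThresholdMs;  // e.g. 10 seconds
  private final long rollGuardIntervalMs;      // e.g. 5 minutes, per the text
  private long lastSlowSyncRollTime;
  private long slowSyncRollsRequested;         // the metric described above

  SlowSyncRollPolicy(long thresholdMs, long guardMs) {
    this.slowSyncRollThresholdMs = thresholdMs;
    this.rollGuardIntervalMs = guardMs;
  }

  /** @return true if the WAL writer should be rolled for this sync time. */
  boolean shouldRoll(long lastSyncDurationMs) {
    long now = System.currentTimeMillis();
    if (lastSyncDurationMs > slowSyncRollThresholdMs
        && now - lastSlowSyncRollTime > rollGuardIntervalMs) {
      lastSlowSyncRollTime = now;    // guard: at most one roll per interval
      slowSyncRollsRequested++;      // rolling picks a fresh datanode pipeline
      return true;
    }
    return false;
  }
}
{code}

For example, `new SlowSyncRollPolicy(10_000, 300_000).shouldRoll(syncMs)` would request at most one slow-sync-triggered roll every five minutes, keeping the mitigation from thrashing when the whole fleet is unhealthy.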



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825353#comment-16825353
 ] 

Andrew Purtell edited comment on HBASE-22301 at 4/24/19 5:15 PM:
-

There is no default switch case by deliberate choice, but since it triggers 
checkstyle I'll change that.

Whoops on the test timeout, will make it 10x.

Agree TestOfflineMetaRebuildBase failure is not related.

Let me update the patch in a bit, and we need one for branch-2 and up, at least 
for the same code in the sync WAL.

The TestFailedAppendAndSync one is potentially related, I'll see if I can 
reproduce it locally (perhaps in a loop). Probably just need to tweak config 
for the test.


was (Author: apurtell):
There is no default switch case by deliberate choice, but since it triggers 
checkstyle I'll change that.

Whoops on the test timeout, will make it 10x.

Agree TestOfflineMetaRebuildBase failure is not related.

Let me update the patch in a bit, and we need one for branch-2 and up, at least 
for the same code in the sync WAL.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.

[jira] [Comment Edited] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825353#comment-16825353
 ] 

Andrew Purtell edited comment on HBASE-22301 at 4/24/19 5:19 PM:
-

There is no default switch case by deliberate choice, but since it triggers 
checkstyle I'll change that.

Whoops on the test timeout, will make it 10x. Although, normally a wait should 
not be necessary, so this could be a legitimate failure. 

Agree TestOfflineMetaRebuildBase failure is not related.

Let me update the patch in a bit, and we need one for branch-2 and up, at least 
for the same code in the sync WAL.

The TestFailedAppendAndSync one is potentially related, I'll see if I can 
reproduce it locally (perhaps in a loop). Probably just need to tweak config 
for the test.


was (Author: apurtell):
There is no default switch case by deliberate choice, but since it triggers 
checkstyle I'll change that.

Whoops on the test timeout, will make it 10x.

Agree TestOfflineMetaRebuildBase failure is not related.

Let me update the patch in a bit, and we need one for branch-2 and up, at least 
for the same code in the sync WAL.

The TestFailedAppendAndSync one is potentially related, I'll see if I can 
reproduce it locally (perhaps in a loop). Probably just need to tweak config 
for the test.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.

[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825384#comment-16825384
 ] 

Hudson commented on HBASE-22020:


Results for branch master
[build #959 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/959/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/959//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/959//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/959//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references dtds that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing, 
> among other reasons, with the complaint that there is bad xml in the build... 
> notably, the unresolvable DTDs.
> I'd just update the DTDs but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22272) Fix Checkstyle errors in hbase-backup

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825383#comment-16825383
 ] 

Hudson commented on HBASE-22272:


Results for branch master
[build #959 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/959/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/959//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/959//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/959//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Fix Checkstyle errors in hbase-backup
> -
>
> Key: HBASE-22272
> URL: https://issues.apache.org/jira/browse/HBASE-22272
> Project: HBase
>  Issue Type: Task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: 3.0.0
>
>
> There are a few Checkstyle errors in {{hbase-backup}}, which should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825386#comment-16825386
 ] 

Hudson commented on HBASE-22020:


Results for branch branch-2.1
[build #1082 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1082/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1082//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1082//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1082//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references dtds that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing 
> for among other reasons, complaint that there is bad xml in the build... 
> notably,  the unresolvable DTDs.
> I'd just update the DTDs but there is a need for a js engine some where and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which in case its rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825393#comment-16825393
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #192 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/192/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/192//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/192//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/192//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.
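
An illustrative sketch of the idea (the connection and call types below are hypothetical stand-ins, not the actual RSProcedureDispatcher code): with a CompletableFuture-returning async connection, the dispatcher reacts on completion instead of parking a pooled thread on a delay before retrying.

{code:java}
import java.util.concurrent.CompletableFuture;

// Hypothetical types for illustration only.
interface RemoteCall { String name(); }
interface AsyncConn { CompletableFuture<Void> execute(RemoteCall call); }

class Dispatcher {
  void dispatch(AsyncConn conn, RemoteCall call) {
    conn.execute(call).whenComplete((result, error) -> {
      if (error != null) {
        dispatch(conn, call);  // retry by chaining, no delay or thread pool
      }
    });
  }
}
{code}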



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22274) Cell size limit check on append should consider cell's previous size.

2019-04-24 Thread Abhishek Singh Chouhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825420#comment-16825420
 ] 

Abhishek Singh Chouhan commented on HBASE-22274:


Lgtm +1. Thanks [~xucang]

> Cell size limit check on append should consider cell's previous size.
> -
>
> Key: HBASE-22274
> URL: https://issues.apache.org/jira/browse/HBASE-22274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0, 1.3.5
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-22274-branch-1.001.patch, 
> HBASE-22274-branch-1.002.patch, HBASE-22274-master.001.patch, 
> HBASE-22274-master.002.patch, HBASE-22274-master.002.patch, 
> HBASE-22274-master.003.patch
>
>
> Now we have a cell size limit check based on the parameter 
> *hbase.server.keyvalue.maxsize*. 
> One case was missing: appending to a cell only takes the append op's cell 
> size into account against this limit check. We should check against the 
> potential final cell size after the append.
> It's easy to reproduce this:
>  
> Apply this diff
>  
> {code:java}
> diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
> index 5a285ef6ba..8633177ebe 100644
> --- a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
> +++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
> @@ -6455,7 +6455,7 @@
> - t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[10 * 1024]));
> + t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[2 * 1024]));
> {code}
>  
> The fix is to add this check in #reckonDeltas in the HRegion class, where we 
> already have the appended cell's size. 
> It will throw a DoNotRetryIOException if the check fails.
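
A minimal sketch of the proposed check (illustrative, not the committed patch): validate the size of the resulting cell after the append is applied, rather than only the size of the delta carried by the Append op.

{code:java}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.DoNotRetryIOException;
import org.apache.hadoop.hbase.KeyValueUtil;

// Illustrative helper: reject the mutation when the post-append cell would
// exceed hbase.server.keyvalue.maxsize (maxCellSize; <= 0 disables the check).
static void checkPostAppendCellSize(Cell resultCell, long maxCellSize)
    throws DoNotRetryIOException {
  int newSize = KeyValueUtil.length(resultCell);  // serialized cell length
  if (maxCellSize > 0 && newSize > maxCellSize) {
    throw new DoNotRetryIOException("Cell with size " + newSize
        + " exceeds limit of " + maxCellSize + " bytes after append");
  }
}
{code}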



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22296) Remove TestFromClientSide.testGetStartEndKeysWithRegionReplicas

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825421#comment-16825421
 ] 

Hudson commented on HBASE-22296:


Results for branch branch-2.2
[build #210 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/210/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/210//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/210//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/210//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove TestFromClientSide.testGetStartEndKeysWithRegionReplicas
> ---
>
> Key: HBASE-22296
> URL: https://issues.apache.org/jira/browse/HBASE-22296
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.1
>
>
> It tests nothing after HBASE-21753...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825422#comment-16825422
 ] 

Hudson commented on HBASE-22020:


Results for branch branch-2.2
[build #210 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/210/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/210//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/210//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/210//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references dtds that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing, 
> among other reasons, with the complaint that there is bad xml in the build... 
> notably, the unresolvable DTDs.
> I'd just update the DTDs but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825433#comment-16825433
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #191 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/191/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/191//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/191//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/191//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/191//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.
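
To make the CompletableFuture point concrete, here is a minimal sketch of 
retrying a dispatch after a delay without holding a thread in a pool. It is a 
hypothetical illustration (Java 9+; dispatch() stands in for an async 
regionserver RPC), not the HBase code:

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public final class RetrySketch {
  // Stand-in for an async RPC that may fail and need a delayed retry.
  static CompletableFuture<Void> dispatch() {
    return CompletableFuture.failedFuture(new RuntimeException("not ready"));
  }

  // The delayed executor schedules the continuation for us, so no pool
  // thread sits blocked waiting out the retry delay.
  static CompletableFuture<Void> dispatchWithRetry(int retriesLeft, long delayMs) {
    return dispatch().handle((v, err) -> {
      if (err == null) {
        return CompletableFuture.completedFuture(v);
      }
      if (retriesLeft <= 0) {
        return CompletableFuture.<Void>failedFuture(err);
      }
      return CompletableFuture.supplyAsync(() -> (Void) null,
              CompletableFuture.delayedExecutor(delayMs, TimeUnit.MILLISECONDS))
          .thenCompose(ignored -> dispatchWithRetry(retriesLeft - 1, delayMs));
    }).thenCompose(f -> f);
  }
}
{code}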



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22301:
---
Attachment: HBASE-22301-branch-1.patch

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.
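
As a rough illustration of the slow-sync roll strategy described above (class, 
method, and default values here are hypothetical, not the attached patch), the 
check amounts to a threshold comparison plus a once-per-interval guard:

{code:java}
import java.util.concurrent.atomic.AtomicLong;

public class SlowSyncRollPolicy {
  private final long slowSyncRollThresholdMs; // e.g. roll if a sync takes > 10s
  private final long minRollIntervalMs;       // e.g. 5 minutes between rolls
  private final AtomicLong lastRollRequestMs = new AtomicLong();

  public SlowSyncRollPolicy(long slowSyncRollThresholdMs, long minRollIntervalMs) {
    this.slowSyncRollThresholdMs = slowSyncRollThresholdMs;
    this.minRollIntervalMs = minRollIntervalMs;
  }

  /**
   * Called with the duration of the last sync (or the max over the check
   * interval). Returns true if a roll should be requested: the sync exceeded
   * the threshold and we have not already rolled for this reason recently.
   */
  public boolean shouldRequestRoll(long syncTimeMs, long nowMs) {
    if (syncTimeMs < slowSyncRollThresholdMs) {
      return false;
    }
    long last = lastRollRequestMs.get();
    // Rate-limit: at most one roll per interval, even under sustained slowness.
    return nowMs - last >= minRollIntervalMs
        && lastRollRequestMs.compareAndSet(last, nowMs);
  }
}
{code}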



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825437#comment-16825437
 ] 

Andrew Purtell commented on HBASE-22301:


Updated patch. Fixes checkstyle nit. I broke the test by updating the config 
constants per request in FSHLog in a fast pass without running the test, oops. 
Fixed. Cannot reproduce TestFailedAppendAndSync failure. Although it is flagged 
as a medium test it completes almost instantly and never fails. Forgot to 
update FSHLog FIXED_OVERHEAD, did so this time around.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825437#comment-16825437
 ] 

Andrew Purtell edited comment on HBASE-22301 at 4/24/19 6:56 PM:
-

Updated patch. Fixes checkstyle nit. I broke the test by updating the config 
constants per request in FSHLog in a fast pass without updating or running the 
test, oops. Fixed. In fact the waiter isn't necessary. Cannot reproduce 
TestFailedAppendAndSync failure. Although it is flagged as a medium test it 
completes almost instantly and never fails. Forgot to update FSHLog 
FIXED_OVERHEAD, did so this time around.


was (Author: apurtell):
Updated patch. Fixes checkstyle nit. I broke the test by updating the config 
constants per request in FSHLog in a fast pass without running the test, oops. 
Fixed. Cannot reproduce TestFailedAppendAndSync failure. Although it is flagged 
as a medium test it completes almost instantly and never fails. Forgot to 
update FSHLog FIXED_OVERHEAD, did so this time around.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.

[jira] [Updated] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-24 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22301:
---
Attachment: HBASE-22301-branch-1.patch

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22054) Space Quota: Compaction is not working for super user in case of NO_WRITES_COMPACTIONS

2019-04-24 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-22054:
---
Attachment: hbase-22054.master.004.patch

> Space Quota: Compaction is not working for super user in case of 
> NO_WRITES_COMPACTIONS
> --
>
> Key: HBASE-22054
> URL: https://issues.apache.org/jira/browse/HBASE-22054
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, Space
> Attachments: hbase-22054.master.001.patch, 
> hbase-22054.master.002.patch, hbase-22054.master.003.patch, 
> hbase-22054.master.004.patch
>
>
> Space Quota: Compaction is not working for the super user. The compaction 
> command is issued successfully at the client, but no compaction actually 
> happens.
> In the debug log the below message is printed:
> as an active space quota violation policy disallows compaction.
>  Reference: 
>  
> [https://lists.apache.org/thread.html/d09aa7abaacf1f0be9d59fa9260515ddc0c17ac0aba9cc0f2ac569bf@%3Cuser.hbase.apache.org%3E]
> Actually, in the requestCompactionInternal method of the CompactSplit class, 
> there is no check for the super user, so compactions are disallowed:
> {noformat}
>   RegionServerSpaceQuotaManager spaceQuotaManager =
> this.server.getRegionServerSpaceQuotaManager();
> if (spaceQuotaManager != null &&
> 
> spaceQuotaManager.areCompactionsDisabled(region.getTableDescriptor().getTableName()))
>  {
>   String reason = "Ignoring compaction request for " + region +
>   " as an active space quota violation " + " policy disallows 
> compactions.";
>   tracker.notExecuted(store, reason);
>   completeTracker.completed(store);
>   LOG.debug(reason);
>   return;
> }
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22274) Cell size limit check on append should consider cell's previous size.

2019-04-24 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825446#comment-16825446
 ] 

Andrew Purtell commented on HBASE-22274:


I'm working in this general area and on branch-1 in general. Let me commit this 
in a bit after some local checks.

> Cell size limit check on append should consider cell's previous size.
> -
>
> Key: HBASE-22274
> URL: https://issues.apache.org/jira/browse/HBASE-22274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0, 1.3.5
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-22274-branch-1.001.patch, 
> HBASE-22274-branch-1.002.patch, HBASE-22274-master.001.patch, 
> HBASE-22274-master.002.patch, HBASE-22274-master.002.patch, 
> HBASE-22274-master.003.patch
>
>
> Now we have cell size limit check based on this parameter 
> *hbase.server.keyvalue.maxsize* 
> One case was missing: appending to a cell only takes the append op's cell 
> size into account against this limit check. We should check against the 
> potential final cell size after the append.
> It's easy to reproduce this:
>  
> Apply this diff
>  
> {code:java}
> diff --git 
> a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  
> b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  index 5a285ef6ba..8633177ebe 100644 --- 
> a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  +++ 
> b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  @@ -6455,7 +6455,7 
> - t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[10 * 
> 1024])); 
> + t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[2 * 1024])); 
> {code}
>  
> The fix is to add this check in #reckonDeltas in the HRegion class, where we 
> already have the appended cell's size. 
> A DoNotRetryIOException will be thrown if the check fails.
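
To make the described check concrete, here is a hypothetical helper (not the 
actual HRegion#reckonDeltas code) that validates the size the cell would have 
after the append, rather than just the size of the delta:

{code:java}
import org.apache.hadoop.hbase.DoNotRetryIOException;

public final class AppendSizeCheck {
  private AppendSizeCheck() {
  }

  static void checkAppendedCellSize(long existingValueLen, long deltaValueLen,
      long maxCellSize) throws DoNotRetryIOException {
    long finalLen = existingValueLen + deltaValueLen;
    if (maxCellSize > 0 && finalLen > maxCellSize) {
      throw new DoNotRetryIOException("Appended cell would be " + finalLen
          + " bytes, exceeding hbase.server.keyvalue.maxsize=" + maxCellSize);
    }
  }
}
{code}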



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22274) Cell size limit check on append should consider cell's previous size.

2019-04-24 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825446#comment-16825446
 ] 

Andrew Purtell edited comment on HBASE-22274 at 4/24/19 7:14 PM:
-

I'm working in this area and on branch-1 in general. Let me commit this in a 
bit after some local checks.


was (Author: apurtell):
I'm working in this general area and on branch-1 in general. Let me commit this 
in a bit after some local checks.

> Cell size limit check on append should consider cell's previous size.
> -
>
> Key: HBASE-22274
> URL: https://issues.apache.org/jira/browse/HBASE-22274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0, 1.3.5
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-22274-branch-1.001.patch, 
> HBASE-22274-branch-1.002.patch, HBASE-22274-master.001.patch, 
> HBASE-22274-master.002.patch, HBASE-22274-master.002.patch, 
> HBASE-22274-master.003.patch
>
>
> Now we have cell size limit check based on this parameter 
> *hbase.server.keyvalue.maxsize* 
> One case was missing: appending to a cell only takes the append op's cell 
> size into account against this limit check. We should check against the 
> potential final cell size after the append.
> It's easy to reproduce this:
>  
> Apply this diff
>  
> {code:java}
> diff --git 
> a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  
> b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  index 5a285ef6ba..8633177ebe 100644 --- 
> a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  +++ 
> b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  @@ -6455,7 +6455,7 
> - t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[10 * 
> 1024])); 
> + t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[2 * 1024])); 
> {code}
>  
> The fix is to add this check in #reckonDeltas in the HRegion class, where we 
> already have the appended cell's size. 
> A DoNotRetryIOException will be thrown if the check fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-22304) Fix remaining Checkstyle issues in hbase-endpoint

2019-04-24 Thread Jan Hentschel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel resolved HBASE-22304.
---
   Resolution: Fixed
Fix Version/s: 2.2.1
   2.1.5
   2.3.0
   3.0.0

> Fix remaining Checkstyle issues in hbase-endpoint
> -
>
> Key: HBASE-22304
> URL: https://issues.apache.org/jira/browse/HBASE-22304
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: 3.0.0, 2.3.0, 2.1.5, 2.2.1
>
>
> The module {{hbase-endpoint}} still has a small number of Checkstyle issues, 
> which should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] HorizonNet merged pull request #186: HBASE-22294 Removed deprecated method from WALKeyImpl

2019-04-24 Thread GitBox
HorizonNet merged pull request #186: HBASE-22294 Removed deprecated method from 
WALKeyImpl
URL: https://github.com/apache/hbase/pull/186
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (HBASE-22294) Remove deprecated method from WALKeyImpl

2019-04-24 Thread Jan Hentschel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel resolved HBASE-22294.
---
  Resolution: Fixed
Hadoop Flags: Incompatible change
Release Note: Removed WALKeyImpl#getLogSeqNum, which was deprecated in 
2.0.0 by HBASE-15158. Use WALKeyImpl#getSequenceId instead.
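
For downstream code the migration is a one-line change; a minimal sketch, 
assuming a WALKeyImpl in hand:

{code:java}
import org.apache.hadoop.hbase.wal.WALKeyImpl;

final class WalKeyMigrationSketch {
  private WalKeyMigrationSketch() {
  }

  static long sequenceIdOf(WALKeyImpl key) {
    // Before (removed in 3.0.0): return key.getLogSeqNum();
    return key.getSequenceId();
  }
}
{code}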

> Remove deprecated method from WALKeyImpl
> 
>
> Key: HBASE-22294
> URL: https://issues.apache.org/jira/browse/HBASE-22294
> Project: HBase
>  Issue Type: Task
>Reporter: Sayed Anisul Hoque
>Assignee: Sayed Anisul Hoque
>Priority: Minor
> Fix For: 3.0.0
>
>
> the method - _getLogSeqNum_ in the class WALKeyImpl is a deprecated function 
> that needs to be removed in 3.0.0
> According to [HBASE-15158|https://issues.apache.org/jira/browse/HBASE-15158] 
> this function is deprecated in 2.0.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22274) Cell size limit check on append should consider cell's previous size.

2019-04-24 Thread Xu Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825474#comment-16825474
 ] 

Xu Cang commented on HBASE-22274:
-

OK. thanks [~apurtell]

Thanks for the review [~abhishek.chouhan]

> Cell size limit check on append should consider cell's previous size.
> -
>
> Key: HBASE-22274
> URL: https://issues.apache.org/jira/browse/HBASE-22274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0, 1.3.5
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-22274-branch-1.001.patch, 
> HBASE-22274-branch-1.002.patch, HBASE-22274-master.001.patch, 
> HBASE-22274-master.002.patch, HBASE-22274-master.002.patch, 
> HBASE-22274-master.003.patch
>
>
> Now we have cell size limit check based on this parameter 
> *hbase.server.keyvalue.maxsize* 
> One case was missing: appending to a cell only takes the append op's cell 
> size into account against this limit check. We should check against the 
> potential final cell size after the append.
> It's easy to reproduce this:
>  
> Apply this diff
>  
> {code:java}
> diff --git 
> a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  
> b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  index 5a285ef6ba..8633177ebe 100644 --- 
> a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  +++ 
> b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  @@ -6455,7 +6455,7 
> - t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[10 * 
> 1024])); 
> + t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[2 * 1024])); 
> {code}
>  
> The fix is to add this check in #reckonDeltas in the HRegion class, where we 
> already have the appended cell's size. 
> A DoNotRetryIOException will be thrown if the check fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22054) Space Quota: Compaction is not working for super user in case of NO_WRITES_COMPACTIONS

2019-04-24 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825480#comment-16825480
 ] 

Sakthi commented on HBASE-22054:


I have changed the check logic. Now, if the received user is null we don't 
even need to check for a space quota violation and can proceed with the 
compaction, under the assumption that it's a systemRequestedCompaction. I have 
also added the superusers reloading in the RegionServer. 

I re-ran all the above failed tests with the patch and they passed, except 
TestCompactionLifeCycleTracker#testSpaceQuotaViolation(), which assumes that 
compaction wouldn't happen with a null user in case of a space quota 
violation. Hence I think this test can be ignored/removed completely. For now 
I have @Ignored it and uploaded a patch to see what the QA makes of the unit 
tests. 
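
A minimal sketch of that decision logic (illustrative only; the real check 
lives in CompactSplit#requestCompactionInternal, and the Object parameter 
below stands in for org.apache.hadoop.hbase.security.User):

{code:java}
final class CompactionQuotaGate {
  private CompactionQuotaGate() {
  }

  /**
   * Returns true if the compaction may proceed. A null user indicates a
   * system-requested compaction, which bypasses the space quota check.
   */
  static boolean mayCompact(Object user, boolean compactionsDisabledByQuota) {
    if (user == null) {
      return true; // system compaction: never blocked by the quota policy
    }
    return !compactionsDisabledByQuota;
  }
}
{code}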

Ping [~elserj] !

> Space Quota: Compaction is not working for super user in case of 
> NO_WRITES_COMPACTIONS
> --
>
> Key: HBASE-22054
> URL: https://issues.apache.org/jira/browse/HBASE-22054
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, Space
> Attachments: hbase-22054.master.001.patch, 
> hbase-22054.master.002.patch, hbase-22054.master.003.patch, 
> hbase-22054.master.004.patch
>
>
> Space Quota: Compaction is not working for the super user. The compaction 
> command is issued successfully at the client, but no compaction actually 
> happens.
> In the debug log the below message is printed:
> as an active space quota violation policy disallows compaction.
>  Reference: 
>  
> [https://lists.apache.org/thread.html/d09aa7abaacf1f0be9d59fa9260515ddc0c17ac0aba9cc0f2ac569bf@%3Cuser.hbase.apache.org%3E]
> Actually, in the requestCompactionInternal method of the CompactSplit class, 
> there is no check for the super user, so compactions are disallowed:
> {noformat}
>   RegionServerSpaceQuotaManager spaceQuotaManager =
> this.server.getRegionServerSpaceQuotaManager();
> if (spaceQuotaManager != null &&
> 
> spaceQuotaManager.areCompactionsDisabled(region.getTableDescriptor().getTableName()))
>  {
>   String reason = "Ignoring compaction request for " + region +
>   " as an active space quota violation " + " policy disallows 
> compactions.";
>   tracker.notExecuted(store, reason);
>   completeTracker.completed(store);
>   LOG.debug(reason);
>   return;
> }
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22054) Space Quota: Compaction is not working for super user in case of NO_WRITES_COMPACTIONS

2019-04-24 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825481#comment-16825481
 ] 

Sakthi commented on HBASE-22054:


Will update the doc with this info once we are okay with the patches.

> Space Quota: Compaction is not working for super user in case of 
> NO_WRITES_COMPACTIONS
> --
>
> Key: HBASE-22054
> URL: https://issues.apache.org/jira/browse/HBASE-22054
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, Space
> Attachments: hbase-22054.master.001.patch, 
> hbase-22054.master.002.patch, hbase-22054.master.003.patch, 
> hbase-22054.master.004.patch
>
>
> Space Quota: Compaction is not working for the super user. The compaction 
> command is issued successfully at the client, but no compaction actually 
> happens.
> In the debug log the below message is printed:
> as an active space quota violation policy disallows compaction.
>  Reference: 
>  
> [https://lists.apache.org/thread.html/d09aa7abaacf1f0be9d59fa9260515ddc0c17ac0aba9cc0f2ac569bf@%3Cuser.hbase.apache.org%3E]
> Actually, in the requestCompactionInternal method of the CompactSplit class, 
> there is no check for the super user, so compactions are disallowed:
> {noformat}
>   RegionServerSpaceQuotaManager spaceQuotaManager =
> this.server.getRegionServerSpaceQuotaManager();
> if (spaceQuotaManager != null &&
> 
> spaceQuotaManager.areCompactionsDisabled(region.getTableDescriptor().getTableName()))
>  {
>   String reason = "Ignoring compaction request for " + region +
>   " as an active space quota violation " + " policy disallows 
> compactions.";
>   tracker.notExecuted(store, reason);
>   completeTracker.completed(store);
>   LOG.debug(reason);
>   return;
> }
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22302) Fix TestHbck

2019-04-24 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825484#comment-16825484
 ] 

HBase QA commented on HBASE-22302:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
59s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-21512 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
16s{color} | {color:green} HBASE-21512 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HBASE-21512 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} HBASE-21512 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
17s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} HBASE-21512 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} HBASE-21512 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}307m 41s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}350m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.namespace.TestNamespaceAuditor |
|   | hadoop.hbase.master.TestMasterShutdown |
|   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
|   | hadoop.hbase.client.replication.TestReplicationAdmin |
|   | hadoop.hbase.master.procedure.TestTruncateTableProcedure |
|   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
|   | hadoop.hbase.client.replication.TestReplicationAdminWithClusters |
|   | hadoop.hbase.client.TestCloneSnapshotFromClientNormal |
|   | hadoop.hbase.tool.TestSecureBulkLoadHFiles |
|   | hadoop.hbase.replication.TestReplicationStatus |
|   | hadoop.hbase.util.TestFromClientSide3WoUnsafe |
|   | hadoop.hbase.replication.TestReplicationSmallTests |
|   | hadoop.hbase.tool.TestBulkLoadHFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/172/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22302 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966908/HBASE-22302-HBASE-21512.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopc

[jira] [Commented] (HBASE-22274) Cell size limit check on append should consider cell's previous size.

2019-04-24 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825486#comment-16825486
 ] 

Andrew Purtell commented on HBASE-22274:


That test failure in precommit may be short-circuiting other units that should 
run, like TestHRegion. I'm seeing the following failure, reproducible with the 
master patch, which does not occur on HEAD of master:
{noformat}
$ mvn clean install -DskipITs -Dtest=TestFromClientSide,TestHRegion
...
[ERROR] Tests run: 105, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
115.593 s <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.TestHRegion
[ERROR] 
testCheckAndMutate_WithCorrectValue(org.apache.hadoop.hbase.regionserver.TestHRegion)
  Time elapsed: 0.179 s  <<< FAILURE!
java.lang.AssertionError: expected: but was:
    at 
org.apache.hadoop.hbase.regionserver.TestHRegion.testCheckAndMutate_WithCorrectValue(TestHRegion.java:1867)
[INFO] Running org.apache.hadoop.hbase.client.TestFromClientSide
[WARNING] Tests run: 89, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
187.189 s - in org.apache.hadoop.hbase.client.TestFromClientSide
[INFO]
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR]   TestHRegion.testCheckAndMutate_WithCorrectValue:1867 expected: 
but was:
[INFO]
[ERROR] Tests run: 194, Failures: 1, Errors: 0, Skipped: 4
{noformat}

I think more test changes to account for this improvement will be necessary.

> Cell size limit check on append should consider cell's previous size.
> -
>
> Key: HBASE-22274
> URL: https://issues.apache.org/jira/browse/HBASE-22274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0, 1.3.5
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-22274-branch-1.001.patch, 
> HBASE-22274-branch-1.002.patch, HBASE-22274-master.001.patch, 
> HBASE-22274-master.002.patch, HBASE-22274-master.002.patch, 
> HBASE-22274-master.003.patch
>
>
> Now we have cell size limit check based on this parameter 
> *hbase.server.keyvalue.maxsize* 
> One case was missing: appending to a cell only takes the append op's cell 
> size into account against this limit check. We should check against the 
> potential final cell size after the append.
> It's easy to reproduce this:
>  
> Apply this diff
>  
> {code:java}
> diff --git 
> a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  
> b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  index 5a285ef6ba..8633177ebe 100644 --- 
> a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  +++ 
> b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  @@ -6455,7 +6455,7 
> - t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[10 * 
> 1024])); 
> + t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[2 * 1024])); 
> {code}
>  
> The fix is to add this check in #reckonDeltas in the HRegion class, where we 
> already have the appended cell's size. 
> A DoNotRetryIOException will be thrown if the check fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22274) Cell size limit check on append should consider cell's previous size.

2019-04-24 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22274:
---
Status: Open  (was: Patch Available)

> Cell size limit check on append should consider cell's previous size.
> -
>
> Key: HBASE-22274
> URL: https://issues.apache.org/jira/browse/HBASE-22274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0, 1.3.5
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-22274-branch-1.001.patch, 
> HBASE-22274-branch-1.002.patch, HBASE-22274-master.001.patch, 
> HBASE-22274-master.002.patch, HBASE-22274-master.002.patch, 
> HBASE-22274-master.003.patch
>
>
> Now we have cell size limit check based on this parameter 
> *hbase.server.keyvalue.maxsize* 
> One case was missing: appending to a cell only takes the append op's cell 
> size into account against this limit check. We should check against the 
> potential final cell size after the append.
> It's easy to reproduce this:
>  
> Apply this diff
>  
> {code:java}
> diff --git 
> a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  
> b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  index 5a285ef6ba..8633177ebe 100644 --- 
> a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  +++ 
> b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
>  @@ -6455,7 +6455,7 
> - t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[10 * 
> 1024])); 
> + t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[2 * 1024])); 
> {code}
>  
> The fix is to add this check in #reckonDeltas in the HRegion class, where we 
> already have the appended cell's size. 
> A DoNotRetryIOException will be thrown if the check fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22263) Master creates duplicate ServerCrashProcedure on initialization, leading to assignment hanging in region-dense clusters

2019-04-24 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825490#comment-16825490
 ] 

Andrew Purtell commented on HBASE-22263:


The WIP patch makes sense, and some questionable things are still XXX so that 
seems fine. 

The part where you skip scheduling another SCP for a failed server found in 
scan of the WAL dir because it already has a queued SCP is the essential change.

In ServerManager:
bq. when a server is already in the dead server list (including start code) do 
we need to schedule an SCP?

Good question. I updated DeadServers handling some time ago but only to make 
the state in DeadServers internally consistent, I left callers alone. 
It would seem to me that if we can assure the invariant that we only add a 
server to the dead server list after checking that either an SCP is already 
scheduled or we just scheduled one, then the answer to your question at this 
point in the code is no.
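
A minimal sketch of that invariant (hypothetical names, not the WIP patch): 
mark a server dead and schedule its ServerCrashProcedure at most once per 
exact ServerName (host, port, start code).

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class CrashProcedureScheduler {
  private final Set<String> scheduled = ConcurrentHashMap.newKeySet();

  /**
   * Schedules an SCP for serverName unless one is already queued;
   * returns true only for the first caller.
   */
  boolean scheduleIfAbsent(String serverName, Runnable scheduleScp) {
    if (!scheduled.add(serverName)) {
      return false; // an SCP for this exact ServerName was already queued
    }
    scheduleScp.run(); // mark dead + enqueue the ServerCrashProcedure
    return true;
  }
}
{code}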

> Master creates duplicate ServerCrashProcedure on initialization, leading to 
> assignment hanging in region-dense clusters
> ---
>
> Key: HBASE-22263
> URL: https://issues.apache.org/jira/browse/HBASE-22263
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HBASE-22263-branch-1.v0.patch
>
>
> h3. Problem:
> During Master initialization we
>  # restore existing procedures that still need to run from prior active 
> Master instances
>  # look for signs that Region Servers have died and need to be recovered 
> while we were out and schedule a ServerCrashProcedure (SCP) for each them
>  # turn on the assignment manager
> The normal turn of events for a ServerCrashProcedure will attempt to use a 
> bulk assignment to maintain the set of regions on a RS if possible. However, 
> we wait around and retry a bit later if the assignment manager isn’t ready 
> yet.
> Note that currently #2 has no notion of whether or not a previous active 
> Master instances has already done a check. This means we might schedule an 
> SCP for a ServerName (host, port, start code) that already has an SCP 
> scheduled. Ideally, such a duplicate should be a no-op.
> However, before step #2 schedules the SCP it first marks the region server as 
> dead and not yet processed, with the expectation that the SCP it just created 
> will look if there is log splitting work and then mark the server as easy for 
> region assignment. At the same time, any restored SCPs that are past the step 
> of log splitting will be waiting for the AssignmentManager still. As a part 
> of restoring themselves, they do not update with the current master instance 
> to show that they are past the point of WAL processing.
> Once the AssignmentManager starts in #3 the restored SCP continues; it will 
> eventually get to the assignment phase and find that its server is marked as 
> dead and in need of wal processing. Such assignments are skipped with a log 
> message. Thus as we iterate over the regions to assign we’ll skip all of 
> them. This non-intuitively shifts the “no-op” status from the newer SCP we 
> scheduled at #2 to the older SCP that was restored in #1.
> Bulk assignment works by sending the assign calls via a pool to allow more 
> parallelism. Once we’ve set up the pool we just wait to see if the region 
> state updates to online. Unfortunately, since all of the assigns got skipped, 
> we’ll never change the state for any of these regions. That means the bulk 
> assign, and the older SCP that started it, will wait until it hits a timeout.
> By default the timeout for a bulk assignment is the smaller of {{(# Regions 
> in the plan * 10s)}} or {{(# Regions in the most loaded RS in the plan * 1s + 
> 60s + # of RegionServers in the cluster * 30s)}}. For even modest clusters 
> with several hundreds of regions per region server, this means the “no-op” 
> SCP will end up waiting ~tens-of-minutes (e.g. ~50 minutes for an average 
> region density of 300 regions per region server on a 100 node cluster. ~11 
> minutes for 300 regions per region server on a 10 node cluster). During this 
> time, the SCP will hold one of the available procedure execution slots for 
> both the overall pool and for the specific server queue.
> As previously mentioned, restored SCPs will retry their submission if the 
> assignment manager has not yet been activated (done in #3), this can cause 
> them to be scheduled after the newer SCPs (created in #2). Thus the order of 
> execution of no-op and usable SCPs can vary from run-to-run of master 
> initialization.
> This means that unless you get luc
