[jira] [Updated] (HBASE-22208) Create access checker and expose it in RS

2019-04-18 Thread Yi Mei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei updated HBASE-22208:
---
Attachment: HBASE-22208.branch-2.001.patch

> Create access checker and expose it in RS
> -
>
> Key: HBASE-22208
> URL: https://issues.apache.org/jira/browse/HBASE-22208
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Attachments: HBASE-22208.branch-2.001.patch, 
> HBASE-22208.master.001.patch, HBASE-22208.master.002.patch, 
> HBASE-22208.master.003.patch, HBASE-22208.master.004.patch, 
> HBASE-22208.master.005.patch, HBASE-22208.master.006.patch, 
> HBASE-22208.master.007.patch, HBASE-22208.master.008.patch, 
> HBASE-22208.master.009.patch
>
>
> In the HBase access control service, the access checker performs authorization 
> checks against a given user's assigned permissions. The access checker holds an 
> auth manager instance which caches all global, namespace and table permissions.
> An access checker is created when the master, RS and regions load 
> AccessController, and the permission cache is refreshed when the acl znode 
> changes.
> We can create the access checker when the master and RS start and expose it, so 
> that a procedure can refresh its cache rather than watching ZK.
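The idea above can be sketched as a minimal, self-contained illustration. None of these are HBase's actual classes: `AccessChecker`, `RegionServerSketch`, the cache key format and `refreshCache` are assumed names standing in for the real access checker, auth manager and refresh procedure.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the server owns a single AccessChecker; coprocessors
// fetch it via a getter, and a procedure refreshes its cache instead of each
// component installing its own ZK watcher on the acl znode.
final class AccessChecker {
    // cache of permissions keyed by an assumed "scope:user" format
    private final Map<String, String> permissionCache = new ConcurrentHashMap<>();

    boolean hasPermission(String scopeAndUser, String action) {
        String granted = permissionCache.get(scopeAndUser);
        return granted != null && granted.contains(action);
    }

    // invoked by the refresh procedure, replacing the acl-znode watcher
    void refreshCache(Map<String, String> latestAcls) {
        permissionCache.clear();
        permissionCache.putAll(latestAcls);
    }
}

final class RegionServerSketch {
    // created at server startup, before any coprocessor loads
    private final AccessChecker accessChecker = new AccessChecker();

    // exposed so AccessController can reuse the server-owned checker
    AccessChecker getAccessChecker() { return accessChecker; }
}
```

In the real change, the getter would live on the shared server services so the AccessController coprocessor picks up the server-owned checker instead of building its own.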



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery

2019-04-18 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821671#comment-16821671
 ] 

ramkrishna.s.vasudevan commented on HBASE-22072:


bq. Also in an earlier comment I have raised some more issues where we just open 
scanners on some files and do not use those scanners, as their TTL does not 
match the scan. Well, that is not happening in this issue, but they are still 
issues.

I have not checked that comment or that part of the code. If the problem is 
there, I think we can fix it in a new issue.

> High read/write intensive regions may cause long crash recovery
> ---
>
> Key: HBASE-22072
> URL: https://issues.apache.org/jira/browse/HBASE-22072
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, Recovery
>Affects Versions: 2.1.2
>Reporter: Pavel
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a region under high read load may leave compacted files 
> undeleted because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted 
> file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file 
> has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load, this happens quite often, and the 
> region may have few storefiles but tons of undeleted compacted hdfs files.
> The region keeps all those files (in my case thousands) until the graceful 
> region closing procedure, which ignores existing references and drops obsolete 
> files. This works fine apart from consuming some extra hdfs space, but only in 
> the case of normal region closing. If a region server crashes, then the new 
> region server responsible for that overfilled region reads the hdfs folder and 
> tries to deal with all the undeleted files, producing tons of storefiles and 
> compaction tasks and consuming an abnormal amount of memory, which may lead to 
> an OutOfMemory exception and further region server crashes. This stops writes 
> to the region because the number of storefiles reaches the 
> *hbase.hstore.blockingStoreFiles* limit, forces high GC duty, and may take 
> hours to compact all files into a working set of files.
> A workaround is to periodically check the file count in the hdfs folders and 
> force region assignment for the ones with too many files.
> It would be nice if the regionserver had a setting similar to 
> hbase.hstore.blockingStoreFiles and attempted to drop undeleted compacted 
> files when the number of files reaches this setting.
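The proposed setting could work roughly like the following sketch. Everything here is hypothetical: `CompactedFileJanitor`, `FORCE_ARCHIVE_THRESHOLD` and the map-based bookkeeping stand in for HStore's real compacted-file tracking; the point is only the threshold check.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the proposed knob: normally only compacted-away files
// with no remaining scanner references are archived, but once the backlog
// crosses a configurable threshold the rest are dropped too, mirroring what
// graceful region close already does by ignoring stale references.
final class CompactedFileJanitor {
    static final int FORCE_ARCHIVE_THRESHOLD = 1000; // assumed config value

    // input: fileName -> refCount of scanners still reading it
    static List<String> filesToArchive(Map<String, Integer> compactedAway) {
        boolean force = compactedAway.size() >= FORCE_ARCHIVE_THRESHOLD;
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Integer> e : compactedAway.entrySet()) {
            if (force || e.getValue() == 0) {
                out.add(e.getKey()); // safe to archive, or forced past the limit
            }
        }
        return out;
    }
}
```

Below the threshold this behaves like today's cleaner (only refCount == 0 files go); above it, the whole backlog is archived so a crashed region never hands thousands of stale files to its successor.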





[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery

2019-04-18 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821670#comment-16821670
 ] 

ramkrishna.s.vasudevan commented on HBASE-22072:


bq. But the scanner is still not over. And so the scanner did not get a chance 
to update the readers. So we cannot really do this immediate-return model.

This is what I tried to check in the code. From my reading, once a StoreScanner 
calls close(false) in the next() or reseek() flow, no other scan at the region 
level is going to happen through that StoreScanner. Finally, after a shipped() 
call, this store scanner will be closed when the scan completes. So I felt it is 
better that we just don't update the readers in that case, and that is why, if 
there has been a close() call, we avoid updateReaders() altogether. The other 
way to look at it is to make 'closing' true in all cases.
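The behaviour described (treating updateReaders() as a no-op once the scanner has been closed) can be illustrated with a toy model. None of these names are HBase's; it only shows the state transition under discussion.

```java
// Toy model of the proposal: once close() has been called on a scanner, even
// with close(false), updateReaders() does nothing, because no further scan
// will ever go through that scanner at the region level.
final class StoreScannerSketch {
    private boolean closed = false;
    private int readerVersion = 0;

    void close(boolean withHeapClose) {
        // 'closing' is effectively treated as true in all cases, per the comment
        closed = true;
    }

    // called when a compaction swaps new store files into the store
    void updateReaders(int newVersion) {
        if (closed) {
            return; // skip: this scanner will never scan again
        }
        readerVersion = newVersion;
    }

    int readerVersion() { return readerVersion; }
    boolean isClosed() { return closed; }
}
```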

> High read/write intensive regions may cause long crash recovery
> ---
>
> Key: HBASE-22072
> URL: https://issues.apache.org/jira/browse/HBASE-22072
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, Recovery
>Affects Versions: 2.1.2
>Reporter: Pavel
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a region under high read load may leave compacted files 
> undeleted because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted 
> file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file 
> has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load, this happens quite often, and the 
> region may have few storefiles but tons of undeleted compacted hdfs files.
> The region keeps all those files (in my case thousands) until the graceful 
> region closing procedure, which ignores existing references and drops obsolete 
> files. This works fine apart from consuming some extra hdfs space, but only in 
> the case of normal region closing. If a region server crashes, then the new 
> region server responsible for that overfilled region reads the hdfs folder and 
> tries to deal with all the undeleted files, producing tons of storefiles and 
> compaction tasks and consuming an abnormal amount of memory, which may lead to 
> an OutOfMemory exception and further region server crashes. This stops writes 
> to the region because the number of storefiles reaches the 
> *hbase.hstore.blockingStoreFiles* limit, forces high GC duty, and may take 
> hours to compact all files into a working set of files.
> A workaround is to periodically check the file count in the hdfs folders and 
> force region assignment for the ones with too many files.
> It would be nice if the regionserver had a setting similar to 
> hbase.hstore.blockingStoreFiles and attempted to drop undeleted compacted 
> files when the number of files reaches this setting.





[jira] [Commented] (HBASE-22206) dist.apache.org must not be used for public downloads

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821669#comment-16821669
 ] 

Sean Busbey commented on HBASE-22206:
-

+1 from me too. Let me know if you'd like me to push the commit, Dima.

> dist.apache.org must not be used for public downloads
> -
>
> Key: HBASE-22206
> URL: https://issues.apache.org/jira/browse/HBASE-22206
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Sebb
>Assignee: Dima Spivak
>Priority: Major
> Attachments: HBASE-22206.master.001.patch, 
> HBASE-22206.master.001.patch
>
>
> The dist.apache.org server is only intended for use by developers in staging 
> releases.
> It must not be used on public download pages.
> Please use www.apache.org/dist (for KEYS, hashes and sigs) and the mirror 
> system instead.
> The current download page has lots of references to dist.a.o; please replace 
> these.





[jira] [Commented] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-18 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821662#comment-16821662
 ] 

Sakthi commented on HBASE-22264:


I have uploaded a patch on the assumption that we won't need these jars in the 
client tarball. Could you please review, [~busbey]?

> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch
>
>
> This is a continuation of HBASE-22249. When compiled with JDK 8 and run on 
> JDK 11, the master branch throws the following exception during an attempt to 
> start the HBase REST server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}





[jira] [Updated] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-18 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-22264:
---
Attachment: hbase-22264.master.002.patch

> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch
>
>
> This is a continuation of HBASE-22249. When compiled with JDK 8 and run on 
> JDK 11, the master branch throws the following exception during an attempt to 
> start the HBase REST server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}





[jira] [Commented] (HBASE-22206) dist.apache.org must not be used for public downloads

2019-04-18 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821661#comment-16821661
 ] 

HBase QA commented on HBASE-22206:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/118/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22206 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965670/HBASE-22206.master.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  xml  |
| uname | Linux c0df075639b2 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 9e2181c85f |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Max. process+thread count | 82 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/118/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> dist.apache.org must not be used for public downloads
> -
>
> Key: HBASE-22206
> URL: https://issues.apache.org/jira/browse/HBASE-22206
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Sebb
>Assignee: Dima Spivak
>Priority: Major
> Attachments: HBASE-22206.master.001.patch, 
> HBASE-22206.master.001.patch
>
>
> The dist.apache.org server is only intended for use by developers in staging 
> releases.
> It must not be used on public download pages.
> Please use www.apache.org/dist (for KEYS, hashes and sigs) and the mirror 
> system instead.
> The current download page has lots of references to dist.a.o; please replace 
> these.





[jira] [Commented] (HBASE-22208) Create access checker and expose it in RS

2019-04-18 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821660#comment-16821660
 ] 

Guanghao Zhang commented on HBASE-22208:


Pushed to master, but it can't be applied to branch-2 directly.

> Create access checker and expose it in RS
> -
>
> Key: HBASE-22208
> URL: https://issues.apache.org/jira/browse/HBASE-22208
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Attachments: HBASE-22208.master.001.patch, 
> HBASE-22208.master.002.patch, HBASE-22208.master.003.patch, 
> HBASE-22208.master.004.patch, HBASE-22208.master.005.patch, 
> HBASE-22208.master.006.patch, HBASE-22208.master.007.patch, 
> HBASE-22208.master.008.patch, HBASE-22208.master.009.patch
>
>
> In the HBase access control service, the access checker performs authorization 
> checks against a given user's assigned permissions. The access checker holds an 
> auth manager instance which caches all global, namespace and table permissions.
> An access checker is created when the master, RS and regions load 
> AccessController, and the permission cache is refreshed when the acl znode 
> changes.
> We can create the access checker when the master and RS start and expose it, so 
> that a procedure can refresh its cache rather than watching ZK.





[jira] [Commented] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-18 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821659#comment-16821659
 ] 

Sakthi commented on HBASE-22264:


I generated the client tarball and wanted to see if it contains any of the 
javax* jars. I found the following ones, but I'm not sure whether we use them. 
mvn dependency:tree in hbase-client didn't yield any javax* listing. Is there 
any other way I could check, [~busbey], whether we need these, or the jars in 
the patch here?
{code:java}
$ ls hbase-3.0.0-SNAPSHOT-client/lib/ | grep javax
javax.activation-1.2.0.jar
javax.el-3.0.1-b08.jar
javax.inject-2.5.0-b32.jar
javax.servlet-api-3.1.0.jar
javax.servlet.jsp-2.3.2.jar
javax.servlet.jsp-api-2.3.1.jar
javax.ws.rs-api-2.0.1.jar
{code}

> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch
>
>
> This is a continuation of HBASE-22249. When compiled with JDK 8 and run on 
> JDK 11, the master branch throws the following exception during an attempt to 
> start the HBase REST server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}





[jira] [Resolved] (HBASE-22171) Release 1.2.12

2019-04-18 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-22171.
-
Resolution: Fixed

Announcement email sent to 
[dev|https://lists.apache.org/thread.html/e8b95f94d5c619fafad83a93483972e7af0efe8c342cb66d0baaef56@%3Cdev.hbase.apache.org%3E]
 and 
[user|https://lists.apache.org/thread.html/dec2afc424007ff3eaee42e65e6b9f0d63a31426c057361eed72a5f2@%3Cuser.hbase.apache.org%3E]

> Release 1.2.12
> --
>
> Key: HBASE-22171
> URL: https://issues.apache.org/jira/browse/HBASE-22171
> Project: HBase
>  Issue Type: Task
>  Components: community
>Affects Versions: 1.2.12
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.2.12
>
>
> Last planned release out of the 1.2 line, so the steps are a bit different 
> this time.





[jira] [Commented] (HBASE-22208) Create access checker and expose it in RS

2019-04-18 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821655#comment-16821655
 ] 

Guanghao Zhang commented on HBASE-22208:


+1 for 009 patch.

> Create access checker and expose it in RS
> -
>
> Key: HBASE-22208
> URL: https://issues.apache.org/jira/browse/HBASE-22208
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Attachments: HBASE-22208.master.001.patch, 
> HBASE-22208.master.002.patch, HBASE-22208.master.003.patch, 
> HBASE-22208.master.004.patch, HBASE-22208.master.005.patch, 
> HBASE-22208.master.006.patch, HBASE-22208.master.007.patch, 
> HBASE-22208.master.008.patch, HBASE-22208.master.009.patch
>
>
> In the HBase access control service, the access checker performs authorization 
> checks against a given user's assigned permissions. The access checker holds an 
> auth manager instance which caches all global, namespace and table permissions.
> An access checker is created when the master, RS and regions load 
> AccessController, and the permission cache is refreshed when the acl znode 
> changes.
> We can create the access checker when the master and RS start and expose it, so 
> that a procedure can refresh its cache rather than watching ZK.





[GitHub] [hbase] openinx commented on a change in pull request #163: HBASE-21995 Add a coprocessor to set HDFS ACL for hbase granted user

2019-04-18 Thread GitBox
openinx commented on a change in pull request #163: HBASE-21995 Add a 
coprocessor to set HDFS ACL for hbase granted user
URL: https://github.com/apache/hbase/pull/163#discussion_r276897931
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/HDFSAclController.java
 ##
 @@ -0,0 +1,634 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.security.access;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseInterfaceAudience;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.SnapshotDescription;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.coprocessor.CoreCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.HasMasterServices;
+import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.MasterObserver;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.security.UserProvider;
+import org.apache.hadoop.hbase.security.access.HDFSAclHelper.PathHelper;
+import org.apache.hadoop.hbase.security.access.Permission.Action;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Set HDFS ACLs on HFiles so that HBase granted users have permission to scan 
snapshots
+ * 
+ * To use this feature, please make sure the HDFS config has:
+ * 
+ * dfs.permissions.enabled = true
+ * fs.permissions.umask-mode = 027 (or smaller umask than 027)
+ * 
+ * 
+ * 
+ * The implementation of this feature is as follows:
+ * 
+ * For public directories such as 'data' and 'archive', set the 'other' 
permission to '--x' so that
everyone has permission to access the directory.
+ * For namespace or table directories such as 'data/ns/table', 
'archive/ns/table' and
+ * '.hbase-snapshot/snapshotName', set user 'r-x' acl and default 'r-x' acl 
when the following
+ * operations happen:
+ * 
+ * grant user with global, namespace or table permission;
+ * revoke user from global, namespace or table;
+ * snapshot table;
+ * truncate table;
+ * 
+ * 
+ * Note: Because snapshots are at the table level, this feature only considers 
users with global,
+ * namespace or table permissions, and ignores users with table, CF or cell 
permissions.
+ * 
+ * 
+ */
+@CoreCoprocessor
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG)
+public class HDFSAclController implements MasterCoprocessor, MasterObserver {
+  private static final Logger LOG = 
LoggerFactory.getLogger(HDFSAclController.class);
+
+  public static final String HDFS_ACL_ENABLE = "hbase.hdfs.acl.enable";
+  public static final String HDFS_ACL_THREAD_NUMBER = 
"hbase.hdfs.acl.thread.number";
+  // the tmp directory to restore snapshot, it can not be a sub directory of 
HBase root dir
+  public static final String SNAPSHOT_RESTORE_TMP_DIR = 
"hbase.snapshot.restore.tmp.dir";
+  public 

[GitHub] [hbase] openinx commented on a change in pull request #163: HBASE-21995 Add a coprocessor to set HDFS ACL for hbase granted user

2019-04-18 Thread GitBox
openinx commented on a change in pull request #163: HBASE-21995 Add a 
coprocessor to set HDFS ACL for hbase granted user
URL: https://github.com/apache/hbase/pull/163#discussion_r276895386
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
 ##
 @@ -750,6 +750,17 @@ default void postSnapshot(final 
ObserverContext ct
   final SnapshotDescription snapshot, final TableDescriptor 
tableDescriptor)
   throws IOException {}
 
+  /**
+   * Called after the snapshot operation has been completed.
+   * @param ctx the environment to interact with the framework and master
+   * @param snapshot the SnapshotDescriptor for the snapshot
+   * @param tableDescriptor the TableDescriptor of the table to snapshot
+   */
+  default void postCompletedSnapshotAction(final 
ObserverContext ctx,
 
 Review comment:
   What's the difference between this method and the existing one:
   
   ```java
 /**
  * Called after the snapshot operation has been requested.
  * Called as part of snapshot RPC call.
  * @param ctx the environment to interact with the framework and master
  * @param snapshot the SnapshotDescriptor for the snapshot
  * @param tableDescriptor the TableDescriptor of the table to snapshot
  */
 default void postSnapshot(final 
ObserverContext ctx,
 final SnapshotDescription snapshot, final TableDescriptor 
tableDescriptor)
 throws IOException {}
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] openinx commented on a change in pull request #163: HBASE-21995 Add a coprocessor to set HDFS ACL for hbase granted user

2019-04-18 Thread GitBox
openinx commented on a change in pull request #163: HBASE-21995 Add a 
coprocessor to set HDFS ACL for hbase granted user
URL: https://github.com/apache/hbase/pull/163#discussion_r276895902
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java
 ##
 @@ -314,12 +316,24 @@ private FSDataInputStream tryOpen() throws IOException {
   return(in);
 } catch (FileNotFoundException e) {
   // Try another file location
+} catch (AccessControlException e) {
 
 Review comment:
   Here, I prefer to simplify the logic as a small method:
   1. remember the thrown exception as e;
   2. if notfound or accessControl exception, continue to try another file;
   3.  if still not find an right file.  then throw the e. 
   Please consider this.
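The suggested refactor might look like this sketch. The names are illustrative, not the actual FileLink code, and the nested AccessControlException is a stand-in for Hadoop's class of the same name.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.List;

// Sketch of the suggestion: remember the last thrown exception, keep trying
// the remaining file locations on a not-found or access-control failure, and
// rethrow the remembered exception only if no location works.
final class FileLinkSketch {
    interface Opener {
        String open(String location) throws IOException;
    }

    // stand-in for org.apache.hadoop.security.AccessControlException
    static final class AccessControlException extends IOException {
        AccessControlException(String msg) { super(msg); }
    }

    static String tryOpen(List<String> locations, Opener opener) throws IOException {
        IOException last = null;
        for (String loc : locations) {
            try {
                return opener.open(loc);                  // first readable location wins
            } catch (FileNotFoundException | AccessControlException e) {
                last = e;                                 // remember and try the next one
            }
        }
        throw last != null ? last : new FileNotFoundException("no locations given");
    }
}
```

This keeps the two retryable cases in one place instead of duplicating the "try another location" logic in a second catch block.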




[GitHub] [hbase] openinx commented on a change in pull request #163: HBASE-21995 Add a coprocessor to set HDFS ACL for hbase granted user

2019-04-18 Thread GitBox
openinx commented on a change in pull request #163: HBASE-21995 Add a 
coprocessor to set HDFS ACL for hbase granted user
URL: https://github.com/apache/hbase/pull/163#discussion_r276898011
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/HDFSAclController.java
 ##
 @@ -0,0 +1,634 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.security.access;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseInterfaceAudience;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.SnapshotDescription;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.coprocessor.CoreCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.HasMasterServices;
+import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.MasterObserver;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.security.UserProvider;
+import org.apache.hadoop.hbase.security.access.HDFSAclHelper.PathHelper;
+import org.apache.hadoop.hbase.security.access.Permission.Action;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Set HDFS ACLs to hFiles to make HBase granted users have permission to scan 
snapshot
+ * 
+ * To use this feature, please make sure the HDFS configuration is:
+ * 
+ * dfs.permissions.enabled = true
+ * fs.permissions.umask-mode = 027 (or smaller umask than 027)
+ * 
+ * 
+ * 
+ * The implementation of this feature is as follows:
+ * 
+ * For public directories such as 'data' and 'archive', set the 'other'
permission to '--x' so that
+ * everyone has permission to access the directory.
+ * For namespace or table directories such as 'data/ns/table', 
'archive/ns/table' and
+ * '.hbase-snapshot/snapshotName', set user 'r-x' acl and default 'r-x' acl 
when following
+ * operations happen:
+ * 
+ * grant user with global, namespace or table permission;
+ * revoke user from global, namespace or table;
+ * snapshot table;
+ * truncate table;
+ * 
+ * 
+ * Note: Because snapshots are at table level, this feature only
considers users with global,
+ * namespace or table permissions, and ignores users with table CF or cell
permissions.
+ * 
+ * 
+ */
+@CoreCoprocessor
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG)
+public class HDFSAclController implements MasterCoprocessor, MasterObserver {
+  private static final Logger LOG = 
LoggerFactory.getLogger(HDFSAclController.class);
+
+  public static final String HDFS_ACL_ENABLE = "hbase.hdfs.acl.enable";
+  public static final String HDFS_ACL_THREAD_NUMBER = 
"hbase.hdfs.acl.thread.number";
+  // the tmp directory to restore snapshot, it can not be a sub directory of 
HBase root dir
+  public static final String SNAPSHOT_RESTORE_TMP_DIR = 
"hbase.snapshot.restore.tmp.dir";
+  public 
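The two prerequisites named in the quoted javadoc correspond to standard Hadoop configuration keys. A minimal sketch, with the caveat that exact file placement can vary by deployment (`fs.permissions.umask-mode` is often set in core-site.xml):

```xml
<!-- hdfs-site.xml: enable permission checking so ACLs are enforced -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>true</value>
</property>
<!-- core-site.xml or hdfs-site.xml: umask 027 (or a smaller umask) so newly
     created files keep group read access and the ACL entries stay effective -->
<property>
  <name>fs.permissions.umask-mode</name>
  <value>027</value>
</property>
```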

[GitHub] [hbase] openinx commented on a change in pull request #163: HBASE-21995 Add a coprocessor to set HDFS ACL for hbase granted user

2019-04-18 Thread GitBox
openinx commented on a change in pull request #163: HBASE-21995 Add a 
coprocessor to set HDFS ACL for hbase granted user
URL: https://github.com/apache/hbase/pull/163#discussion_r276897635
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/HDFSAclController.java
 ##
 @@ -0,0 +1,634 @@

[GitHub] [hbase] openinx commented on a change in pull request #163: HBASE-21995 Add a coprocessor to set HDFS ACL for hbase granted user

2019-04-18 Thread GitBox
openinx commented on a change in pull request #163: HBASE-21995 Add a 
coprocessor to set HDFS ACL for hbase granted user
URL: https://github.com/apache/hbase/pull/163#discussion_r276895956
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java
 ##
 @@ -405,14 +419,22 @@ public Path getAvailablePath(FileSystem fs) throws 
IOException {
* @throws IOException on unexpected error.
*/
   public FileStatus getFileStatus(FileSystem fs) throws IOException {
+AccessControlException accessControlException = null;
 for (int i = 0; i < locations.length; ++i) {
   try {
 return fs.getFileStatus(locations[i]);
   } catch (FileNotFoundException e) {
 // Try another file location
+  } catch (AccessControlException e) {
 
 Review comment:
   Same question above.




[GitHub] [hbase] openinx commented on a change in pull request #163: HBASE-21995 Add a coprocessor to set HDFS ACL for hbase granted user

2019-04-18 Thread GitBox
openinx commented on a change in pull request #163: HBASE-21995 Add a 
coprocessor to set HDFS ACL for hbase granted user
URL: https://github.com/apache/hbase/pull/163#discussion_r276896899
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/HDFSAclController.java
 ##
 @@ -0,0 +1,634 @@

[GitHub] [hbase] openinx commented on a change in pull request #163: HBASE-21995 Add a coprocessor to set HDFS ACL for hbase granted user

2019-04-18 Thread GitBox
openinx commented on a change in pull request #163: HBASE-21995 Add a 
coprocessor to set HDFS ACL for hbase granted user
URL: https://github.com/apache/hbase/pull/163#discussion_r276896795
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/HDFSAclController.java
 ##
 @@ -0,0 +1,634 @@

[GitHub] [hbase] openinx commented on a change in pull request #163: HBASE-21995 Add a coprocessor to set HDFS ACL for hbase granted user

2019-04-18 Thread GitBox
openinx commented on a change in pull request #163: HBASE-21995 Add a 
coprocessor to set HDFS ACL for hbase granted user
URL: https://github.com/apache/hbase/pull/163#discussion_r276896682
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/HDFSAclController.java
 ##
 @@ -0,0 +1,634 @@
+@CoreCoprocessor
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG)
+public class HDFSAclController implements MasterCoprocessor, MasterObserver {
 
 Review comment:
   Not a good class name. The class wants to sync file ACLs between HBase and
HDFS, and mostly for the directories touched when scanning a snapshot? We don't
consider directories unrelated to snapshots, such as WALs, oldWALs,
etc. Please consider another name.



[jira] [Comment Edited] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI

2019-04-18 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821647#comment-16821647
 ] 

Yu Li edited comment on HBASE-20626 at 4/19/19 4:18 AM:


Makes sense, and using {{totalRowActionRequestCount}} is also recommended in the
release note of HBASE-18469. Patch LGTM, +1. Thanks for the efforts
[~andrewcheng].


was (Author: carp84):
Makes sense. Patch LGTM, +1. Thanks for the efforts [~andrewcheng].

> Change the value of "Requests Per Second" on WEBUI
> --
>
> Key: HBASE-20626
> URL: https://issues.apache.org/jira/browse/HBASE-20626
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics, UI
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20626.master.001.patch, 
> HBASE-20626.master.002.patch
>
>
> Now we use "totalRequestCount" (RSRpcServices#requestCount) to calculate 
> requests per second.
>  After HBASE-18469, "totalRequestCount" counts a multi request only once 
> (and includes requests that are not serviced by regions).
>  When we have a large number of read and write requests, the value of 
> "Requests Per Second" is very small, which does not reflect the load of the 
> cluster.
> Maybe it is more reasonable to use "totalRowActionRequestCount" to calculate 
> RPS?
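The figure being discussed amounts to sampling a monotonically increasing counter and dividing the delta by the elapsed time; which counter is sampled (totalRequestCount vs. totalRowActionRequestCount) only changes what each increment counts. A minimal sketch of that arithmetic (not HBase's actual metrics code):

```java
public class RequestsPerSecond {
  // Two samples of a monotonically increasing request counter, taken
  // intervalMillis apart, yield the average request rate over the window.
  static double rps(long prevCount, long currCount, long intervalMillis) {
    if (intervalMillis <= 0) {
      return 0.0; // avoid division by zero on degenerate samples
    }
    return (currCount - prevCount) * 1000.0 / intervalMillis;
  }

  public static void main(String[] args) {
    // 5,000 row actions observed over a 10 second reporting window
    System.out.println(rps(100_000, 105_000, 10_000)); // prints 500.0
  }
}
```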



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22206) dist.apache.org must not be used for public downloads

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821649#comment-16821649
 ] 

Sean Busbey commented on HBASE-22206:
-

started a new precommit build

https://builds.apache.org/view/H-L/view/HBase/job/PreCommit-HBASE-Build/118/

> dist.apache.org must not be used for public downloads
> -
>
> Key: HBASE-22206
> URL: https://issues.apache.org/jira/browse/HBASE-22206
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Sebb
>Assignee: Dima Spivak
>Priority: Major
> Attachments: HBASE-22206.master.001.patch, 
> HBASE-22206.master.001.patch
>
>
> The dist.apache.org server is only intended for use by developers in staging 
> releases.
> It must not be used on public download pages.
> Please use www.apache.org/dist (for KEYS, hashes and sigs) and the mirror 
> system instead.
> The current download page has lots of references to dist.a.o; please replace 
> these.





[jira] [Updated] (HBASE-22222) Site build fails after hbase-thirdparty upgrade

2019-04-18 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-2:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

pushed to target branches and checked that the website builds.

> Site build fails after hbase-thirdparty upgrade
> ---
>
> Key: HBASE-2
> URL: https://issues.apache.org/jira/browse/HBASE-2
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Peter Somogyi
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-2.patch
>
>
> After hbase-thirdparty upgrade the hbase_generate_website job is failing in 
> mvn site target on javadoc.
>  
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.7.1:site (default-site) on 
> project hbase: Error generating maven-javadoc-plugin:3.0.1:aggregate report:
> [ERROR] Exit code: 1 - 
> /home/jenkins/jenkins-slave/workspace/hbase_generate_website/hbase/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:1034:
>  warning - Tag @link: can't find tagsIterator(Cell) in 
> org.apache.hadoop.hbase.CellUtil
> [ERROR] javadoc: error - class file for 
> org.apache.hbase.thirdparty.com.google.errorprone.annotations.Immutable not 
> found{noformat}
> After reverting thirdparty upgrade locally the site build passed. 





[jira] [Commented] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI

2019-04-18 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821647#comment-16821647
 ] 

Yu Li commented on HBASE-20626:
---

Makes sense. Patch LGTM, +1. Thanks for the efforts [~andrewcheng].

> Change the value of "Requests Per Second" on WEBUI
> --
>
> Key: HBASE-20626
> URL: https://issues.apache.org/jira/browse/HBASE-20626
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics, UI
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20626.master.001.patch, 
> HBASE-20626.master.002.patch
>
>
> Now we use "totalRequestCount" (RSRpcServices#requestCount) to calculate 
> requests per second.
>  After HBASE-18469, "totalRequestCount" counts a multi request only once 
> (and includes requests that are not serviced by regions).
>  When we have a large number of read and write requests, the value of 
> "Requests Per Second" is very small, which does not reflect the load of the 
> cluster.
> Maybe it is more reasonable to use "totalRowActionRequestCount" to calculate 
> RPS?





[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input

2019-04-18 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821641#comment-16821641
 ] 

Zheng Hu commented on HBASE-21937:
--

Ping [~anoop.hbase], [~ram_krish], [~Apache9], any concerns?

> Make the Compression#decompress can accept ByteBuff as input 
> -
>
> Key: HBASE-21937
> URL: https://issues.apache.org/jira/browse/HBASE-21937
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-21937.HBASE-21879.v1.patch, 
> HBASE-21937.HBASE-21879.v2.patch, HBASE-21937.HBASE-21879.v3.patch
>
>
> When decompressing a compressed block, we are also allocating a
> HeapByteBuffer for the unpacked block. We should allocate a ByteBuff from the
> global ByteBuffAllocator instead. Skimming the code, the key point is that we
> need a decompress interface which accepts a ByteBuff, not the following:
> {code}
> # Compression.java
>   public static void decompress(byte[] dest, int destOffset,
>   InputStream bufferedBoundedStream, int compressedSize,
>   int uncompressedSize, Compression.Algorithm compressAlgo)
>   throws IOException {
>   //...
> }
> {code}
> Not very high priority; let me first make blocks without compression
> off-heap.
> In HBASE-22005, I ignored these unit tests:
> 1. TestLoadAndSwitchEncodeOnDisk;
> 2. TestHFileBlock#testPreviousOffset;
> Need to resolve this issue and make those UTs work fine.
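A minimal sketch of what a buffer-accepting decompress interface could look like, using java.nio.ByteBuffer as a stand-in for HBase's ByteBuff and java.util.zip for the codec (hypothetical; this is not the actual Compression.java code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.zip.DeflaterInputStream;
import java.util.zip.InflaterInputStream;

// Sketch of a decompress method that fills a caller-supplied buffer
// (which may be direct/off-heap) instead of a byte[] destination.
public class BufferDecompressSketch {
    public static void decompress(ByteBuffer dest, InputStream compressedStream,
                                  int uncompressedSize) throws IOException {
        InputStream in = new InflaterInputStream(compressedStream);
        byte[] chunk = new byte[1024];
        int remaining = uncompressedSize;
        while (remaining > 0) {
            int n = in.read(chunk, 0, Math.min(chunk.length, remaining));
            if (n < 0) {
                throw new IOException("compressed stream ended early");
            }
            dest.put(chunk, 0, n); // dest can be a heap or a direct buffer
            remaining -= n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] raw = "hello hbase".getBytes("UTF-8");
        // Compress with the matching deflate stream for a round trip.
        InputStream compressed =
            new DeflaterInputStream(new ByteArrayInputStream(raw));
        ByteBuffer dest = ByteBuffer.allocateDirect(raw.length); // off-heap target
        decompress(dest, compressed, raw.length);
        dest.flip();
        byte[] out = new byte[dest.remaining()];
        dest.get(out);
        System.out.println(new String(out, "UTF-8")); // prints "hello hbase"
    }
}
```
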



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] ndimiduk commented on a change in pull request #157: HBASE-16002 Made constructors of DataType subclasses public

2019-04-18 Thread GitBox
ndimiduk commented on a change in pull request #157: HBASE-16002 Made 
constructors of DataType subclasses public
URL: https://github.com/apache/hbase/pull/157#discussion_r276892920
 
 

 ##
 File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/types/OrderedBlob.java
 ##
 @@ -32,7 +32,7 @@
   public static final OrderedBlob ASCENDING = new OrderedBlob(Order.ASCENDING);
   public static final OrderedBlob DESCENDING = new 
OrderedBlob(Order.DESCENDING);
 
-  protected OrderedBlob(Order order) {
+  public OrderedBlob(Order order) {
 
 Review comment:
   So that's precisely it. See my last comment on JIRA. The original design of 
this part of the API was to try to prevent callers from making needless 
allocations. That couldn't be done uniformly, though, as some of these 
implementations require explicit construction parameters. I myself filed a 
ticket wondering why I had left some constructors private. I believe the 
uniformity of public constructors across all concrete types is more consistent 
than having static constants in most, but not all, places.
   
   This is definitely a subjective area of the API design and I'd love to get 
other opinions :)
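The trade-off can be shown with a toy type that mirrors the OrderedBlob pattern under discussion: shared static instances for the common cases plus a public constructor ("OrderedThing" and its nested "Order" enum are illustrative stand-ins, not HBase classes):

```java
// Toy type mirroring the pattern: reusable constants for the common
// configurations, plus a public constructor for uniform construction
// across all concrete types.
public class OrderedThing {
    public enum Order { ASCENDING, DESCENDING }

    public static final OrderedThing ASCENDING = new OrderedThing(Order.ASCENDING);
    public static final OrderedThing DESCENDING = new OrderedThing(Order.DESCENDING);

    private final Order order;

    // With a public constructor every concrete type can be built the same
    // way, at the cost of letting callers make needless allocations.
    public OrderedThing(Order order) {
        this.order = order;
    }

    public Order getOrder() {
        return order;
    }

    public static void main(String[] args) {
        // Preferred: reuse the shared constant.
        OrderedThing a = OrderedThing.ASCENDING;
        // Also legal once the constructor is public: a fresh instance.
        OrderedThing b = new OrderedThing(Order.ASCENDING);
        System.out.println(a.getOrder() == b.getOrder()); // prints true
    }
}
```
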


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821625#comment-16821625
 ] 

Hudson commented on HBASE-22020:


Results for branch HBASE-22020
[build #16 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22020/16/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22020/16//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22020/16//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22020/16//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references dtds that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing, 
> among other reasons, with complaints that there is bad xml in the build... 
> notably, the unresolvable DTDs.
> I'd just update the DTDs, but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22269) Consider simplifying the logic of BucketCache eviction.

2019-04-18 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-22269:


 Summary: Consider simplifying the logic of BucketCache eviction.
 Key: HBASE-22269
 URL: https://issues.apache.org/jira/browse/HBASE-22269
 Project: HBase
  Issue Type: Sub-task
Reporter: Zheng Hu


As discussed in review board: https://reviews.apache.org/r/70465 , [~Apache9] 
made a comment: 

bq. I think with the new reference counted framework, we do not need to treat 
rpc reference specially? Just release the bucket from oldest to newest, until 
we can find enough free space? We could know if the space has been freed from 
the return value of release ? Can be a follow on issue, maybe.

Currently we only choose non-RPC-referenced blocks to mark as evicted; maybe 
we can simplify the logic here, just as [~Apache9] said.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22171) Release 1.2.12

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821619#comment-16821619
 ] 

Sean Busbey commented on HBASE-22171:
-

* updated website docs with contents from release:
{code}
Busbey-MBA:hbase-site busbey$ rm -rf 1.2/*
Busbey-MBA:hbase-site busbey$ tar --strip-components=2 -xzf 
~/projects/hbase_project/hbase-releases/hbase-1.2.12/hbase-1.2.12-bin.tar.gz -C 
1.2/ hbase-1.2.12/docs/
{code}

(which works for 1.2 because we still include all of the docs in the binary 
tarball on that release line)

> Release 1.2.12
> --
>
> Key: HBASE-22171
> URL: https://issues.apache.org/jira/browse/HBASE-22171
> Project: HBase
>  Issue Type: Task
>  Components: community
>Affects Versions: 1.2.12
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.2.12
>
>
> Last planned release out of the 1.2 line, so the steps are a bit different 
> this time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery

2019-04-18 Thread Anoop Sam John (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821616#comment-16821616
 ] 

Anoop Sam John commented on HBASE-22072:


Also, in an earlier comment I raised some more issues where we open scanners 
on some files and then never use those scanners because their TTL does not 
match the scan. Well, that is not what is happening in this issue, but they 
are still issues. Should we open new jiras to solve those, or fix them in this 
issue itself?

> High read/write intensive regions may cause long crash recovery
> ---
>
> Key: HBASE-22072
> URL: https://issues.apache.org/jira/browse/HBASE-22072
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, Recovery
>Affects Versions: 2.1.2
>Reporter: Pavel
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a region under high read load may leave compacted files 
> undeleted because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted 
> file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file 
> has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load this happens quite often, and 
> the region may have few storefiles but tons of undeleted compacted hdfs 
> files.
> The region keeps all those files (in my case thousands) until the graceful 
> region closing procedure, which ignores existing references and drops 
> obsolete files. It works fine apart from consuming some extra hdfs space, 
> but only in the case of a normal region close. If a region server crashes, 
> the new region server responsible for that overfilled region reads the hdfs 
> folder and tries to deal with all the undeleted files, producing tons of 
> storefiles and compaction tasks and consuming an abnormal amount of memory, 
> which may lead to an OutOfMemory exception and further region server 
> crashes. This stops writes to the region because the number of storefiles 
> reaches the *hbase.hstore.blockingStoreFiles* limit, forces high GC duty and 
> may take hours to compact all files back into a working set.
> A workaround is to periodically check the file count of the hdfs folders and 
> force region assignment for the ones with too many files.
> It would be nice if the regionserver had a setting similar to 
> hbase.hstore.blockingStoreFiles that triggered an attempt to drop undeleted 
> compacted files once the file count reaches this setting.
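The proposed threshold could work along these lines; a minimal sketch under stated assumptions (the class name, the "blockingCompactedFiles" setting, and the reference check are all hypothetical, not actual HBase configuration or API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of the proposal: once the count of
// compacted-away-but-undeleted files crosses a threshold, force a
// cleanup attempt instead of waiting for region close.
public class CompactedFilesGuard {
    private final int blockingCompactedFiles; // analogous to hbase.hstore.blockingStoreFiles
    private final List<String> compactedAwayFiles = new ArrayList<>();

    public CompactedFilesGuard(int blockingCompactedFiles) {
        this.blockingCompactedFiles = blockingCompactedFiles;
    }

    public void onCompactionFinished(String file) {
        compactedAwayFiles.add(file);
        if (compactedAwayFiles.size() >= blockingCompactedFiles) {
            forceCleanup();
        }
    }

    // Archive every compacted file whose read reference count has
    // dropped to zero; referenced files stay for a later attempt.
    private void forceCleanup() {
        for (Iterator<String> it = compactedAwayFiles.iterator(); it.hasNext();) {
            String f = it.next();
            if (!isReferencedInReads(f)) {
                it.remove(); // archive the unreferenced compacted file
            }
        }
    }

    private boolean isReferencedInReads(String file) {
        return false; // placeholder: a real check would consult scanner refCounts
    }

    public int pendingCount() {
        return compactedAwayFiles.size();
    }
}
```
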



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery

2019-04-18 Thread Anoop Sam John (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821615#comment-16821615
 ] 

Anoop Sam John commented on HBASE-22072:


bq. Is it possible if other thread, performing updateReaders, sees the closing 
flag still false after StoreScanner#close accomplished?
The 'closing' variable needs to be volatile.
One issue with the patch: say one thread is doing close(false). This is not a 
real scanner close, so we will not set 'closing' to true. Assume that thread 
is holding the lock now. At the same time another thread comes in for 
updateReaders(). Its tryLock() call will fail because the lock is held by the 
other thread, so we just log and come out. But the scanner is not over yet, 
and so it never got a chance to update the readers. So we cannot really use 
this immediate-return model.
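The failure mode described above can be demonstrated with a minimal sketch (illustrative only, not StoreScanner's actual code): if updateReaders() uses tryLock() and returns immediately, the reader update is silently lost whenever another thread, e.g. one inside close(false), happens to hold the lock.

```java
import java.util.concurrent.locks.ReentrantLock;

// Demonstrates why an immediate-return tryLock() in updateReaders()
// is unsafe: the update is skipped, not retried, if the lock is busy.
public class TryLockSketch {
    public static boolean tryUpdateReaders(ReentrantLock scannerLock) {
        if (!scannerLock.tryLock()) {
            // Lock held elsewhere: with an immediate return the scanner
            // never learns about the new store file readers.
            return false;
        }
        try {
            // ... swap in the new store file readers here ...
            return true;
        } finally {
            scannerLock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // simulate a thread inside close(false) holding the lock
        final boolean[] updated = new boolean[1];
        Thread updater = new Thread(() -> updated[0] = tryUpdateReaders(lock));
        updater.start();
        updater.join();
        lock.unlock();
        System.out.println(updated[0]); // prints false: the reader update was lost
    }
}
```
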

> High read/write intensive regions may cause long crash recovery
> ---
>
> Key: HBASE-22072
> URL: https://issues.apache.org/jira/browse/HBASE-22072
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, Recovery
>Affects Versions: 2.1.2
>Reporter: Pavel
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a region under high read load may leave compacted files 
> undeleted because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted 
> file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file 
> has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load this happens quite often, and 
> the region may have few storefiles but tons of undeleted compacted hdfs 
> files.
> The region keeps all those files (in my case thousands) until the graceful 
> region closing procedure, which ignores existing references and drops 
> obsolete files. It works fine apart from consuming some extra hdfs space, 
> but only in the case of a normal region close. If a region server crashes, 
> the new region server responsible for that overfilled region reads the hdfs 
> folder and tries to deal with all the undeleted files, producing tons of 
> storefiles and compaction tasks and consuming an abnormal amount of memory, 
> which may lead to an OutOfMemory exception and further region server 
> crashes. This stops writes to the region because the number of storefiles 
> reaches the *hbase.hstore.blockingStoreFiles* limit, forces high GC duty and 
> may take hours to compact all files back into a working set.
> A workaround is to periodically check the file count of the hdfs folders and 
> force region assignment for the ones with too many files.
> It would be nice if the regionserver had a setting similar to 
> hbase.hstore.blockingStoreFiles that triggered an attempt to drop undeleted 
> compacted files once the file count reaches this setting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22244) Make use of MetricsConnection in async client

2019-04-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821585#comment-16821585
 ] 

Hudson commented on HBASE-22244:


Results for branch branch-2.2
[build #194 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/194/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/194//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/194//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/194//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Make use of MetricsConnection in async client
> -
>
> Key: HBASE-22244
> URL: https://issues.apache.org/jira/browse/HBASE-22244
> Project: HBase
>  Issue Type: Sub-task
>  Components: asynclient, metrics
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22222) Site build fails after hbase-thirdparty upgrade

2019-04-18 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821584#comment-16821584
 ] 

Guanghao Zhang commented on HBASE-2:


+1

> Site build fails after hbase-thirdparty upgrade
> ---
>
> Key: HBASE-2
> URL: https://issues.apache.org/jira/browse/HBASE-2
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Peter Somogyi
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-2.patch
>
>
> After the hbase-thirdparty upgrade, the hbase_generate_website job is 
> failing in the mvn site target on javadoc.
>  
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.7.1:site (default-site) on 
> project hbase: Error generating maven-javadoc-plugin:3.0.1:aggregate report:
> [ERROR] Exit code: 1 - 
> /home/jenkins/jenkins-slave/workspace/hbase_generate_website/hbase/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:1034:
>  warning - Tag @link: can't find tagsIterator(Cell) in 
> org.apache.hadoop.hbase.CellUtil
> [ERROR] javadoc: error - class file for 
> org.apache.hbase.thirdparty.com.google.errorprone.annotations.Immutable not 
> found{noformat}
> After reverting the thirdparty upgrade locally, the site build passed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22222) Site build fails after hbase-thirdparty upgrade

2019-04-18 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821566#comment-16821566
 ] 

HBase QA commented on HBASE-2:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
27s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
27s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 20s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}253m 37s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}292m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestFromClientSide3 |
|   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/117/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-2 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965807/HBASE-2.patch |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  xml  shadedjars  
hadoopcheck  compile  |
| uname | Linux 65cea8c88ab7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / a3d2a2df3a |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/117/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/117/testReport/ |
| Max. process+thread count | 5227 (vs. ulimit of 1) |
| modules | C: hbase-resource-bundle . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/117/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |



[jira] [Commented] (HBASE-22244) Make use of MetricsConnection in async client

2019-04-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821540#comment-16821540
 ] 

Hudson commented on HBASE-22244:


Results for branch master
[build #943 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/943/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/943//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/943//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/943//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Make use of MetricsConnection in async client
> -
>
> Key: HBASE-22244
> URL: https://issues.apache.org/jira/browse/HBASE-22244
> Project: HBase
>  Issue Type: Sub-task
>  Components: asynclient, metrics
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22259) Removed deprecated method in ReplicationLoadSource

2019-04-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821539#comment-16821539
 ] 

Hudson commented on HBASE-22259:


Results for branch master
[build #943 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/943/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/943//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/943//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/943//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Removed deprecated method in ReplicationLoadSource
> --
>
> Key: HBASE-22259
> URL: https://issues.apache.org/jira/browse/HBASE-22259
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: 3.0.0
>
>
> ReplicationLoadSource#getTimeStampOfLastShippedOp was deprecated in 2.0.0 and 
> should be removed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22244) Make use of MetricsConnection in async client

2019-04-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821532#comment-16821532
 ] 

Hudson commented on HBASE-22244:


Results for branch branch-2
[build #1828 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1828/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1828//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1828//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1828//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Make use of MetricsConnection in async client
> -
>
> Key: HBASE-22244
> URL: https://issues.apache.org/jira/browse/HBASE-22244
> Project: HBase
>  Issue Type: Sub-task
>  Components: asynclient, metrics
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-18 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-22264:
---
Description: 
This is in continuation with HBASE-22249. When compiled with jdk 8 and run on 
jdk 11, the master branch throws the following exception during an attempt to 
start the hbase rest server:
{code:java}
Exception in thread "main" java.lang.NoClassDefFoundError: 
javax/annotation/Priority
at 
org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
at 
org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
at 
org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
at 
org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
at 
org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
at 
org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
at 
org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
at 
org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
{code}

  was:
This is in continuation with HBASE-22249. When compiled with jdk 8 and run on 
jdk 11, the master branch throws the following exception while an attempt to 
start the hbase rest server:
{code:java}
Exception in thread "main" java.lang.NoClassDefFoundError: 
javax/annotation/Priority
at 
org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
at 
org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
at 
org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
at 
org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
at 
org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
at 
org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
at 
org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
at 
org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
{code}


> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch
>
>
> This is in continuation with HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}




[GitHub] [hbase] Apache-HBase commented on issue #120: HBASE-20494 Updated the version of metrics-core to 3.2.6

2019-04-18 Thread GitBox
Apache-HBase commented on issue #120: HBASE-20494 Updated the version of 
metrics-core to 3.2.6
URL: https://github.com/apache/hbase/pull/120#issuecomment-484666268
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 341 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 352 | master passed |
   | +1 | compile | 219 | master passed |
   | +1 | shadedjars | 370 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | javadoc | 207 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 331 | the patch passed |
   | +1 | compile | 219 | the patch passed |
   | +1 | javac | 219 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedjars | 353 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 688 | Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. |
   | +1 | javadoc | 215 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 20128 | root in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 23535 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestAdminShell2 |
   |   | hadoop.hbase.client.TestShell |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-120/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/120 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  |
   | uname | Linux acffcbb19400 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / f4aaf735e4 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-120/2/artifact/out/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-120/2/testReport/
 |
   | Max. process+thread count | 5446 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-120/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821448#comment-16821448
 ] 

Sean Busbey commented on HBASE-22264:
-

are any of these jars needed by the client tarball? I think no?

> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch
>
>
> This is a continuation of HBASE-22249. When compiled with JDK 8 and run on 
> JDK 11, the master branch throws the following exception while attempting to 
> start the HBase REST server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821445#comment-16821445
 ] 

Sean Busbey commented on HBASE-22264:
-

{quote}

# Add lib/jdk11 jars to the classpath

JAVA=$JAVA_HOME/bin/java
version=$("$JAVA" -version 2>&1 | awk -F '"' '/version/ {print $2}')
if [[ "$version" > "11" ]]; then
  for f in "${HBASE_HOME}"/lib/jdk11/*; do
    if [ -f "${f}" ]; then
      CLASSPATH="${CLASSPATH}:${f}"
    fi
  done
fi
{quote}

Given the brittleness of JDK version strings, can we 1) in debug mode output 
some text about what we're doing here, and 2) add an environment variable that 
overrides this detection to include/exclude these dependencies when it is 
defined?
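
A hedged sketch of what that detection-plus-override could look like. The `HBASE_JDK11_INCLUDE` variable name, the debug messages, and the numeric major-version parsing are assumptions for illustration, not the patch's actual implementation:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: decide whether lib/jdk11 jars go on the classpath.
# HBASE_JDK11_INCLUDE (assumed name) set to "include" or "exclude" bypasses
# version detection entirely; otherwise parse the numeric major version
# rather than doing a brittle lexical string compare.
jdk11_jars_decision() {
  local version="$1"
  if [ -n "${HBASE_JDK11_INCLUDE:-}" ]; then
    echo "DEBUG: HBASE_JDK11_INCLUDE=${HBASE_JDK11_INCLUDE} overrides JDK detection" >&2
    echo "${HBASE_JDK11_INCLUDE}"
    return
  fi
  # Strip a leading "1." (pre-JDK9 scheme like 1.8.0_181), then keep the
  # digits up to the first separator, yielding the major version number.
  local major="${version#1.}"
  major="${major%%[.+_-]*}"
  if [ "${major:-0}" -ge 11 ] 2>/dev/null; then
    echo "DEBUG: detected JDK ${major}, adding lib/jdk11 jars" >&2
    echo "include"
  else
    echo "exclude"
  fi
}

jdk11_jars_decision "11.0.2"     # include
jdk11_jars_decision "1.8.0_181"  # exclude
```

Callers would append the jars only when the function prints `include`, so the override and the detection share one code path.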




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22222) Site build fails after hbase-thirdparty upgrade

2019-04-18 Thread Peter Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-2:
--
Status: Patch Available  (was: Open)

> Site build fails after hbase-thirdparty upgrade
> ---
>
> Key: HBASE-2
> URL: https://issues.apache.org/jira/browse/HBASE-2
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Peter Somogyi
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-2.patch
>
>
> After the hbase-thirdparty upgrade, the hbase_generate_website job fails in 
> the mvn site target on javadoc.
>  
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.7.1:site (default-site) on 
> project hbase: Error generating maven-javadoc-plugin:3.0.1:aggregate report:
> [ERROR] Exit code: 1 - 
> /home/jenkins/jenkins-slave/workspace/hbase_generate_website/hbase/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:1034:
>  warning - Tag @link: can't find tagsIterator(Cell) in 
> org.apache.hadoop.hbase.CellUtil
> [ERROR] javadoc: error - class file for 
> org.apache.hbase.thirdparty.com.google.errorprone.annotations.Immutable not 
> found{noformat}
> After reverting the thirdparty upgrade locally, the site build passed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22268) Update shading for javax.activation

2019-04-18 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821397#comment-16821397
 ] 

HBase QA commented on HBASE-22268:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
28s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
21s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 29s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hbase-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hbase-shaded-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/116/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966400/HBASE-22268.master.002.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  |
| uname | Linux 5f6245cd7da8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / a3d2a2df3a |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/116/testReport/ |
| Max. process+thread count | 95 (vs. ulimit of 1) |
| modules | C: hbase-shaded hbase-shaded/hbase-shaded-client U: hbase-shaded |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/116/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HBASE-22263) Master creates duplicate ServerCrashProcedure on initialization, leading to assignment hanging in region-dense clusters

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821370#comment-16821370
 ] 

Sean Busbey commented on HBASE-22263:
-

yea definitely. Also "forgot to turn it off" is a pretty big risk.

> Master creates duplicate ServerCrashProcedure on initialization, leading to 
> assignment hanging in region-dense clusters
> ---
>
> Key: HBASE-22263
> URL: https://issues.apache.org/jira/browse/HBASE-22263
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> h3. Problem:
> During Master initialization we
>  # restore existing procedures that still need to run from prior active 
> Master instances
>  # look for signs that Region Servers have died and need to be recovered 
> while we were out and schedule a ServerCrashProcedure (SCP) for each of them
>  # turn on the assignment manager
> The normal turn of events for a ServerCrashProcedure will attempt to use a 
> bulk assignment to maintain the set of regions on a RS if possible. However, 
> we wait around and retry a bit later if the assignment manager isn’t ready 
> yet.
> Note that currently #2 has no notion of whether or not a previous active 
> Master instance has already done a check. This means we might schedule an 
> SCP for a ServerName (host, port, start code) that already has an SCP 
> scheduled. Ideally, such a duplicate should be a no-op.
> However, before step #2 schedules the SCP it first marks the region server as 
> dead and not yet processed, with the expectation that the SCP it just created 
> will check whether there is log splitting work and then mark the server as ready for 
> region assignment. At the same time, any restored SCPs that are past the step 
> of log splitting will be waiting for the AssignmentManager still. As a part 
> of restoring themselves, they do not update with the current master instance 
> to show that they are past the point of WAL processing.
> Once the AssignmentManager starts in #3 the restored SCP continues; it will 
> eventually get to the assignment phase and find that its server is marked as 
> dead and in need of wal processing. Such assignments are skipped with a log 
> message. Thus as we iterate over the regions to assign we’ll skip all of 
> them. This non-intuitively shifts the “no-op” status from the newer SCP we 
> scheduled at #2 to the older SCP that was restored in #1.
> Bulk assignment works by sending the assign calls via a pool to allow more 
> parallelism. Once we’ve set up the pool we just wait to see if the region 
> state updates to online. Unfortunately, since all of the assigns got skipped, 
> we’ll never change the state for any of these regions. That means the bulk 
> assign, and the older SCP that started it, will wait until it hits a timeout.
> By default the timeout for a bulk assignment is the smaller of {{(# Regions 
> in the plan * 10s)}} or {{(# Regions in the most loaded RS in the plan * 1s + 
> 60s + # of RegionServers in the cluster * 30s)}}. For even modest clusters 
> with several hundreds of regions per region server, this means the “no-op” 
> SCP will end up waiting ~tens-of-minutes (e.g. ~50 minutes for an average 
> region density of 300 regions per region server on a 100 node cluster. ~11 
> minutes for 300 regions per region server on a 10 node cluster). During this 
> time, the SCP will hold one of the available procedure execution slots for 
> both the overall pool and for the specific server queue.
> As previously mentioned, restored SCPs will retry their submission if the 
> assignment manager has not yet been activated (done in #3), this can cause 
> them to be scheduled after the newer SCPs (created in #2). Thus the order of 
> execution of no-op and usable SCPs can vary from run-to-run of master 
> initialization.
> This means that unless you get lucky with SCP ordering, impacted regions will 
> remain as RIT for an extended period of time. If you get particularly unlucky 
> and a critical system table is included in the regions that are being 
> recovered, then master initialization itself will end up blocked on this 
> sequence of SCP timeouts. If there are enough of them to exceed the master 
> initialization timeouts, then the situation can be self-sustaining as 
> additional master failovers cause even more duplicate SCPs to be scheduled.
> h3. Indicators:
>  * Master appears to hang; failing to assign regions to available region 
> servers.
>  * Master appears to hang during initialization; shows waiting for the meta 
> or namespace regions.
>  * Repeated master restarts allow some progress to be made on 
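
The timeout arithmetic in the description above can be checked with a short sketch. This is illustrative shell reproducing the stated default formula, not HBase's Java implementation:

```shell
#!/usr/bin/env bash
# Default bulk-assignment timeout, in seconds, per the description:
# min(regions_in_plan * 10,
#     regions_on_most_loaded_RS * 1 + 60 + num_region_servers * 30)
bulk_assign_timeout() {
  local plan_regions=$1 max_per_rs=$2 num_rs=$3
  local a=$(( plan_regions * 10 ))
  local b=$(( max_per_rs * 1 + 60 + num_rs * 30 ))
  if (( a < b )); then echo "$a"; else echo "$b"; fi
}

# 300 regions per RS, 100-node cluster: min(3000, 3360) = 3000s, ~50 minutes
bulk_assign_timeout 300 300 100
# 300 regions per RS, 10-node cluster: min(3000, 660) = 660s, ~11 minutes
bulk_assign_timeout 300 300 10
```

Both results match the ~50-minute and ~11-minute figures quoted in the description, which is the per-SCP wait an operator would observe while the no-op procedure holds an execution slot.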

[jira] [Commented] (HBASE-22263) Master creates duplicate ServerCrashProcedure on initialization, leading to assignment hanging in region-dense clusters

2019-04-18 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821364#comment-16821364
 ] 

Andrew Purtell commented on HBASE-22263:


Coprocessor has risks an external tool would not have, including the need to 
bounce the process. FWIW


[jira] [Commented] (HBASE-22263) Master creates duplicate ServerCrashProcedure on initialization, leading to assignment hanging in region-dense clusters

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821359#comment-16821359
 ] 

Sean Busbey commented on HBASE-22263:
-

another approach would be to add coprocessor visibility to the procedure system 
so that as an operator I can deploy something in the master process to 
intercede when I know it's going down a bad path.


[jira] [Updated] (HBASE-22268) Update shading for javax.activation

2019-04-18 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HBASE-22268:
---
Attachment: HBASE-22268.master.002.patch

> Update shading for javax.activation
> ---
>
> Key: HBASE-22268
> URL: https://issues.apache.org/jira/browse/HBASE-22268
> Project: HBase
>  Issue Type: Bug
>Reporter: Adam Antal
>Priority: Major
> Attachments: HBASE-22268.master.001.patch, 
> HBASE-22268.master.002.patch
>
>
> The javax.activation dependency was added in Hadoop trunk (3.3.0, 
> HADOOP-15775), and HBase no longer compiles successfully against Hadoop trunk.
> The dependency is required for supporting JDK 11 in Hadoop.
> HBASE-22087 will cover other dependencies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22268) Update shading for javax.activation

2019-04-18 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HBASE-22268:
---
Summary: Update shading for javax.activation  (was: Update LICENSE/shading 
for javax.activation)




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22268) Update LICENSE/shading for javax.activation

2019-04-18 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821354#comment-16821354
 ] 

Adam Antal commented on HBASE-22268:


I am not sure about your first question, but I guess no, so I updated the 
commit msg. (License inclusion was added in HBASE-21371, so we indeed don't 
touch licensing.)

I uploaded patch v2 which addressed your other concern.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22263) Master creates duplicate ServerCrashProcedure on initialization, leading to assignment hanging in region-dense clusters

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821353#comment-16821353
 ] 

Sean Busbey commented on HBASE-22263:
-

bq. hbck doesn't handle procedures. Should hbck have a switch/mode where it can 
scan a deep queue of SCPs, determine which are redundant, and use the above 
mentioned cancel facility to remove them from the queue? This is orthogonal to 
fixing the underlying problem but reasonable to consider.

Yeah I've been chatting with some of our frontline support folks about building 
something like this.

Also note that hbck2 expressly added a "skip a procedure" option for those on 
HBase 2. In my experience that's more dangerous than doing the same on HBase 
1, so I hope it wouldn't be blocked anymore.

> Master creates duplicate ServerCrashProcedure on initialization, leading to 
> assignment hanging in region-dense clusters
> ---
>
> Key: HBASE-22263
> URL: https://issues.apache.org/jira/browse/HBASE-22263
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> h3. Problem:
> During Master initialization we
>  # restore existing procedures that still need to run from prior active 
> Master instances
>  # look for signs that Region Servers have died and need to be recovered 
> while we were out and schedule a ServerCrashProcedure (SCP) for each of them
>  # turn on the assignment manager
> The normal turn of events for a ServerCrashProcedure will attempt to use a 
> bulk assignment to maintain the set of regions on a RS if possible. However, 
> we wait around and retry a bit later if the assignment manager isn’t ready 
> yet.
> Note that currently #2 has no notion of whether or not a previous active 
> Master instance has already done a check. This means we might schedule an 
> SCP for a ServerName (host, port, start code) that already has an SCP 
> scheduled. Ideally, such a duplicate should be a no-op.
> However, before step #2 schedules the SCP it first marks the region server as 
> dead and not yet processed, with the expectation that the SCP it just created 
> will look if there is log splitting work and then mark the server as easy for 
> region assignment. At the same time, any restored SCPs that are past the step 
> of log splitting will be waiting for the AssignmentManager still. As a part 
> of restoring themselves, they do not update with the current master instance 
> to show that they are past the point of WAL processing.
> Once the AssignmentManager starts in #3 the restored SCP continues; it will 
> eventually get to the assignment phase and find that its server is marked as 
> dead and in need of wal processing. Such assignments are skipped with a log 
> message. Thus as we iterate over the regions to assign we’ll skip all of 
> them. This non-intuitively shifts the “no-op” status from the newer SCP we 
> scheduled at #2 to the older SCP that was restored in #1.
> Bulk assignment works by sending the assign calls via a pool to allow more 
> parallelism. Once we’ve set up the pool we just wait to see if the region 
> state updates to online. Unfortunately, since all of the assigns got skipped, 
> we’ll never change the state for any of these regions. That means the bulk 
> assign, and the older SCP that started it, will wait until it hits a timeout.
> By default the timeout for a bulk assignment is the smaller of {{(# Regions 
> in the plan * 10s)}} or {{(# Regions in the most loaded RS in the plan * 1s + 
> 60s + # of RegionServers in the cluster * 30s)}}. For even modest clusters 
> with several hundreds of regions per region server, this means the “no-op” 
> SCP will end up waiting ~tens-of-minutes (e.g. ~50 minutes for an average 
> region density of 300 regions per region server on a 100 node cluster. ~11 
> minutes for 300 regions per region server on a 10 node cluster). During this 
> time, the SCP will hold one of the available procedure execution slots for 
> both the overall pool and for the specific server queue.
> As previously mentioned, restored SCPs will retry their submission if the 
> assignment manager has not yet been activated (done in #3), this can cause 
> them to be scheduled after the newer SCPs (created in #2). Thus the order of 
> execution of no-op and usable SCPs can vary from run-to-run of master 
> initialization.
> This means that unless you get lucky with SCP ordering, impacted regions will 
> remain as RIT for an extended period of time. If you get particularly unlucky 
> and a critical system table is included in the regions that are being 
> recovered, then master initialization itself 
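For concreteness, the default bulk-assignment timeout formula quoted in the 
report above can be worked through in a short sketch (illustrative Python, not 
HBase's actual implementation; the function name is hypothetical):

```python
# Sketch of the default bulk-assignment timeout described in the report:
# the smaller of (# regions in the plan * 10s) and
# (# regions on the most loaded RS * 1s + 60s + # region servers * 30s).

def bulk_assign_timeout_s(regions_in_plan, regions_on_most_loaded_rs, region_servers):
    """Seconds a 'no-op' SCP may wait before its bulk assignment times out."""
    return min(regions_in_plan * 10,
               regions_on_most_loaded_rs * 1 + 60 + region_servers * 30)

# 300 regions per RS on a 100-node cluster: min(3000s, 3360s) = 3000s
print(bulk_assign_timeout_s(300, 300, 100) / 60)   # -> 50.0 minutes
# 300 regions per RS on a 10-node cluster: min(3000s, 660s) = 660s
print(bulk_assign_timeout_s(300, 300, 10) / 60)    # -> 11.0 minutes
```

This matches the ~50 and ~11 minute figures in the report, and shows why a 
single stray no-op SCP can pin a procedure execution slot for a long time on a 
region-dense cluster.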

[jira] [Commented] (HBASE-22087) Update LICENSE/shading for the dependencies from the latest Hadoop trunk

2019-04-18 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821347#comment-16821347
 ] 

Adam Antal commented on HBASE-22087:


Filed HBASE-22268 for decoupling the javax dependency. The others can be 
discussed here.

> Update LICENSE/shading for the dependencies from the latest Hadoop trunk
> 
>
> Key: HBASE-22087
> URL: https://issues.apache.org/jira/browse/HBASE-22087
> Project: HBase
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-22087.master.001.patch, depcheck_hadoop33.log
>
>
> The following list of dependencies were added in Hadoop trunk (3.3.0) and 
> HBase does not compile successfully:
> YARN-8778 added jline 3.9.0
> HADOOP-15775 added javax.activation
> HADOOP-15531 added org.apache.common.text (commons-text)
> HADOOP-15764 added dnsjava (org.xbill)
> Some of these are needed to support JDK9/10/11 in Hadoop.





[jira] [Comment Edited] (HBASE-22263) Master creates duplicate ServerCrashProcedure on initialization, leading to assignment hanging in region-dense clusters

2019-04-18 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821329#comment-16821329
 ] 

Andrew Purtell edited comment on HBASE-22263 at 4/18/19 5:28 PM:
-

{quote}Maybe worth some extra notes on this in the guide, and/or even extra 
messaging on RIT related log messages adverting the risk of playing with master 
proc wals.
{quote}
It's fine to advertise the risks, but if this is the only way to get the master 
back up and running when you are staring the outage of a production service in 
the face, we are forced to disregard them.

We have encountered procedure-related production issues before. That time, it 
was of our own making. An errant process scheduled hundreds of bogus 
procedures. We removed the proc wal to clear them out. Afterward we asked for a 
facility to cancel and clear out in-queue procedures, but this idea was not 
well received, basically vetoed if I recall. Well, let me assert again that we 
operators need something like this.

hbck doesn't handle procedures. Should hbck have a switch/mode where it can 
scan a deep queue of SCPs, determine which are redundant, and use the above 
mentioned cancel facility to remove them from the queue? This is orthogonal to 
fixing the underlying problem but reasonable to consider.

In the absence of "blessed" production recovery tools and procedures we will do 
what we have to. (shrug)

 Edit: Or condense the above down into a tool that can, whether the cluster is 
offline or online, remove/clear the master proc wal after throwing up a 
suitably dire warning that this is an expert action to clear with the 
support or operations team first. It can become smarter over time, as we add 
switches to turn on or off filters to drop or retain various procedure types, 
or to filter on procedure submission details. Removing the proc wal is a very 
blunt tool. Perhaps we can give operators a tool that can be more surgical.



> Master creates duplicate ServerCrashProcedure on initialization, leading to 
> assignment hanging in region-dense clusters
> ---
>
> Key: HBASE-22263
> URL: https://issues.apache.org/jira/browse/HBASE-22263
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> h3. Problem:
> During Master initialization we
>  # restore existing procedures that still need to run from prior active 
> Master instances
>  # look for signs that Region Servers have died and need to be recovered 
> while we were out and schedule a ServerCrashProcedure (SCP) for each of them
>  # turn on the assignment manager
> The normal turn of events for a ServerCrashProcedure will attempt to use a 
> bulk assignment to maintain the set of regions on a RS if possible. However, 
> we wait around and retry a bit later if the assignment manager isn’t ready 
> yet.
> Note that currently #2 has no notion of whether or not a previous active 
> Master instance has already done a check. This means we might schedule an 
> SCP for a ServerName (host, port, start code) that already has an SCP 
> scheduled. Ideally, such a duplicate should be a no-op.
> However, before step #2 schedules the SCP it first marks the region server as 
> dead and not yet processed, with the expectation that the SCP it just created 
> will look if there is log splitting work and then mark the server as easy for 
> region assignment. 

[jira] [Commented] (HBASE-22263) Master creates duplicate ServerCrashProcedure on initialization, leading to assignment hanging in region-dense clusters

2019-04-18 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821329#comment-16821329
 ] 

Andrew Purtell commented on HBASE-22263:


{quote}Maybe worth some extra notes on this in the guide, and/or even extra 
messaging on RIT related log messages adverting the risk of playing with master 
proc wals.
{quote}
It's fine to advertise the risks, but if this is the only way to get the master 
back up and running when you are staring the outage of a production service in 
the face, we are forced to disregard them.

We have encountered procedure-related production issues before. That time, it 
was of our own making. An errant process scheduled hundreds of bogus 
procedures. We removed the proc wal to clear them out. Afterward we asked for a 
facility to cancel and clear out in-queue procedures, but this idea was not 
well received, basically vetoed if I recall. Well, let me assert again that we 
operators need something like this.

hbck doesn't handle procedures. Should hbck have a switch/mode where it can 
scan a deep queue of SCPs, determine which are redundant, and use the above 
mentioned cancel facility to remove them from the queue? This is orthogonal to 
fixing the underlying problem but reasonable to consider.

In the absence of "blessed" production recovery tools and procedures we will do 
what we have to. (shrug)

 

> Master creates duplicate ServerCrashProcedure on initialization, leading to 
> assignment hanging in region-dense clusters
> ---
>
> Key: HBASE-22263
> URL: https://issues.apache.org/jira/browse/HBASE-22263
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> h3. Problem:
> During Master initialization we
>  # restore existing procedures that still need to run from prior active 
> Master instances
>  # look for signs that Region Servers have died and need to be recovered 
> while we were out and schedule a ServerCrashProcedure (SCP) for each of them
>  # turn on the assignment manager
> The normal turn of events for a ServerCrashProcedure will attempt to use a 
> bulk assignment to maintain the set of regions on a RS if possible. However, 
> we wait around and retry a bit later if the assignment manager isn’t ready 
> yet.
> Note that currently #2 has no notion of whether or not a previous active 
> Master instance has already done a check. This means we might schedule an 
> SCP for a ServerName (host, port, start code) that already has an SCP 
> scheduled. Ideally, such a duplicate should be a no-op.
> However, before step #2 schedules the SCP it first marks the region server as 
> dead and not yet processed, with the expectation that the SCP it just created 
> will look if there is log splitting work and then mark the server as easy for 
> region assignment. At the same time, any restored SCPs that are past the step 
> of log splitting will be waiting for the AssignmentManager still. As a part 
> of restoring themselves, they do not update with the current master instance 
> to show that they are past the point of WAL processing.
> Once the AssignmentManager starts in #3 the restored SCP continues; it will 
> eventually get to the assignment phase and find that its server is marked as 
> dead and in need of wal processing. Such assignments are skipped with a log 
> message. Thus as we iterate over the regions to assign we’ll skip all of 
> them. This non-intuitively shifts the “no-op” status from the newer SCP we 
> scheduled at #2 to the older SCP that was restored in #1.
> Bulk assignment works by sending the assign calls via a pool to allow more 
> parallelism. Once we’ve set up the pool we just wait to see if the region 
> state updates to online. Unfortunately, since all of the assigns got skipped, 
> we’ll never change the state for any of these regions. That means the bulk 
> assign, and the older SCP that started it, will wait until it hits a timeout.
> By default the timeout for a bulk assignment is the smaller of {{(# Regions 
> in the plan * 10s)}} or {{(# Regions in the most loaded RS in the plan * 1s + 
> 60s + # of RegionServers in the cluster * 30s)}}. For even modest clusters 
> with several hundreds of regions per region server, this means the “no-op” 
> SCP will end up waiting ~tens-of-minutes (e.g. ~50 minutes for an average 
> region density of 300 regions per region server on a 100 node cluster. ~11 
> minutes for 300 regions per region server on a 10 node cluster). During this 
> time, the SCP will hold one of the available procedure execution slots for 
> both the overall pool and for the specific server queue.
> 

[jira] [Commented] (HBASE-21959) CompactionTool should close the store it uses for compacting files, in order to properly archive compacted files.

2019-04-18 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821320#comment-16821320
 ] 

Andrew Purtell commented on HBASE-21959:


No, this test appears to interact with other tests in the suite causing them to 
fail with ConnectionRefused exceptions and other oddness. After removing this 
test that behavior no longer manifests.

> CompactionTool should close the store it uses for compacting files, in order 
> to properly archive compacted files.
> -
>
> Key: HBASE-21959
> URL: https://issues.apache.org/jira/browse/HBASE-21959
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0
>
> Attachments: HBASE-21959-branch-1-001.patch, 
> HBASE-21959-branch-1-002.patch, HBASE-21959-branch-1.patch, 
> HBASE-21959-master-001.patch, HBASE-21959-master-002.patch, 
> HBASE-21959-master-003.patch
>
>
> While using CompactionTool to offload RSes, noticed compacted files were 
> never archived from the original region dir, causing the space used by the 
> region to actually double. Going through its compaction-related code on 
> HStore, which is used by CompactionTool for performing compactions, found 
> out that compacted-file archiving happens mainly while closing the HStore 
> instance. CompactionTool never explicitly closes its HStore instance, so 
> adding a simple patch that properly closes the store.





[jira] [Commented] (HBASE-22268) Update LICENSE/shading for javax.activation

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821295#comment-16821295
 ] 

Sean Busbey commented on HBASE-22268:
-

Does the javax.activation jar end up in the assembly when building with the 
impacted version of hadoop?

If it doesn't, could you update the commit message to make clear no LICENSE 
change is happening?

Could you add a note to the shading exclusion that it comes in when particular 
Hadoop versions are used (and what version is expected)? Otherwise it might get 
removed during a cleanup since we don't ourselves declare a need for it.
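For illustration, such an annotation might look like the following pom.xml 
fragment (purely hypothetical: the element name, artifact pattern, and 
placement depend on how the actual patch declares the exclusion):

```xml
<!-- Hypothetical sketch: javax.activation comes in transitively only when
     building against Hadoop trunk (3.3.0+, HADOOP-15775). Keep this entry
     even though HBase itself declares no direct dependency on it. -->
<exclude>javax.activation:*</exclude>
```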

> Update LICENSE/shading for javax.activation
> ---
>
> Key: HBASE-22268
> URL: https://issues.apache.org/jira/browse/HBASE-22268
> Project: HBase
>  Issue Type: Bug
>Reporter: Adam Antal
>Priority: Major
> Attachments: HBASE-22268.master.001.patch
>
>
> The javax.activation dependency is added in Hadoop trunk (3.3.0, 
> HADOOP-15775) and HBase does not compile against hadoop trunk successfully.
> It is required for supporting JDK11 in Hadoop.
> HBASE-22087 will concern other dependencies.





[jira] [Commented] (HBASE-22268) Update LICENSE/shading for javax.activation

2019-04-18 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821263#comment-16821263
 ] 

HBase QA commented on HBASE-22268:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
29s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
25s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 21s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hbase-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hbase-shaded-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/115/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966379/HBASE-22268.master.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  |
| uname | Linux 75c12df6c3c2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / a3d2a2df3a |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/115/testReport/ |
| Max. process+thread count | 95 (vs. ulimit of 1) |
| modules | C: hbase-shaded hbase-shaded/hbase-shaded-client U: hbase-shaded |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/115/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> Update 

[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2019-04-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821241#comment-16821241
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #183 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/183/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/183//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/183//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/183//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.





[jira] [Commented] (HBASE-22268) Update LICENSE/shading for javax.activation

2019-04-18 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821239#comment-16821239
 ] 

Adam Antal commented on HBASE-22268:


CCing [~busbey], [~jojochuang] and [~jatsakthi].

Are you okay with separating this dependency and pushing this in? (We can 
handle the others in HBASE-22087.)

> Update LICENSE/shading for javax.activation
> ---
>
> Key: HBASE-22268
> URL: https://issues.apache.org/jira/browse/HBASE-22268
> Project: HBase
>  Issue Type: Bug
>Reporter: Adam Antal
>Priority: Major
> Attachments: HBASE-22268.master.001.patch
>
>
> The javax.activation dependency is added in Hadoop trunk (3.3.0, 
> HADOOP-15775) and HBase does not compile against hadoop trunk successfully.
> It is required for supporting JDK11 in Hadoop.
> HBASE-22087 will concern other dependencies.





[jira] [Commented] (HBASE-22268) Update LICENSE/shading for javax.activation

2019-04-18 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821232#comment-16821232
 ] 

Adam Antal commented on HBASE-22268:


Uploaded the patch from HBASE-22087 created by [~jojochuang], but only the part 
concerning the javax.activation license error. Pending on jenkins.

> Update LICENSE/shading for javax.activation
> ---
>
> Key: HBASE-22268
> URL: https://issues.apache.org/jira/browse/HBASE-22268
> Project: HBase
>  Issue Type: Bug
>Reporter: Adam Antal
>Priority: Major
> Attachments: HBASE-22268.master.001.patch
>
>
> The javax.activation dependency is added in Hadoop trunk (3.3.0, 
> HADOOP-15775) and HBase does not compile against hadoop trunk successfully.
> It is required for supporting JDK11 in Hadoop.
> HBASE-22087 will concern other dependencies.





[jira] [Updated] (HBASE-22268) Update LICENSE/shading for javax.activation

2019-04-18 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HBASE-22268:
---
Status: Patch Available  (was: Open)

> Update LICENSE/shading for javax.activation
> ---
>
> Key: HBASE-22268
> URL: https://issues.apache.org/jira/browse/HBASE-22268
> Project: HBase
>  Issue Type: Bug
>Reporter: Adam Antal
>Priority: Major
> Attachments: HBASE-22268.master.001.patch
>
>
> The javax.activation dependency is added in Hadoop trunk (3.3.0, 
> HADOOP-15775) and HBase does not compile against hadoop trunk successfully.
> It is required for supporting JDK11 in Hadoop.
> HBASE-22087 will concern other dependencies.





[jira] [Updated] (HBASE-22268) Update LICENSE/shading for javax.activation

2019-04-18 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HBASE-22268:
---
Attachment: HBASE-22268.master.001.patch

> Update LICENSE/shading for javax.activation
> ---
>
> Key: HBASE-22268
> URL: https://issues.apache.org/jira/browse/HBASE-22268
> Project: HBase
>  Issue Type: Bug
>Reporter: Adam Antal
>Priority: Major
> Attachments: HBASE-22268.master.001.patch
>
>
> The javax.activation dependency is added in Hadoop trunk (3.3.0, 
> HADOOP-15775) and HBase does not compile against hadoop trunk successfully.
> It is required for supporting JDK11 in Hadoop.
> HBASE-22087 will concern other dependencies.





[jira] [Created] (HBASE-22268) Update LICENSE/shading for javax.activation

2019-04-18 Thread Adam Antal (JIRA)
Adam Antal created HBASE-22268:
--

 Summary: Update LICENSE/shading for javax.activation
 Key: HBASE-22268
 URL: https://issues.apache.org/jira/browse/HBASE-22268
 Project: HBase
  Issue Type: Bug
Reporter: Adam Antal


The javax.activation dependency is added in Hadoop trunk (3.3.0, HADOOP-15775) 
and HBase does not compile against hadoop trunk successfully.

It is required for supporting JDK11 in Hadoop.

HBASE-22087 will concern other dependencies.





[jira] [Commented] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one

2019-04-18 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821204#comment-16821204
 ] 

HBase QA commented on HBASE-21957:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-21879 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
23s{color} | {color:green} HBASE-21879 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} HBASE-21879 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} HBASE-21879 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
46s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m  
3s{color} | {color:blue} hbase-server in HBASE-21879 has 11 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HBASE-21879 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} hbase-server generated 0 new + 6 unchanged - 2 fixed 
= 6 total (was 8) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} The patch passed checkstyle in hbase-common {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} hbase-server: The patch generated 0 new + 133 
unchanged - 10 fixed = 133 total (was 143) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
55s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m  5s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | {color:red} hbase-server generated 2 new + 0 unchanged - 0 fixed = 
2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}144m 33s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}194m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.quotas.TestClusterScopeQuotaThrottle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 

[jira] [Created] (HBASE-22267) Implement client push back for async client

2019-04-18 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-22267:
-

 Summary: Implement client push back for async client
 Key: HBASE-22267
 URL: https://issues.apache.org/jira/browse/HBASE-22267
 Project: HBase
  Issue Type: Sub-task
  Components: asyncclient
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22252) ClientBackoffPolicy should not be IA.Public

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821196#comment-16821196
 ] 

Sean Busbey commented on HBASE-22252:
-

(y)

> ClientBackoffPolicy should not be IA.Public
> ---
>
> Key: HBASE-22252
> URL: https://issues.apache.org/jira/browse/HBASE-22252
> Project: HBase
>  Issue Type: Improvement
>Reporter: Duo Zhang
>Priority: Major
>
> We expose this interface as IA.Public, but one of the parameters of the
> getBackoffTime method is IA.Private, which means we do not expect users to
> implement the interface, and it should not be called by users directly either,
> so I think it should be marked as IA.Private. As for its subclasses, we
> should mark them as IA.LimitedPrivate(CONFIG), so users can make use of them
> through the config file.
> It also seems it is only used in AsyncProcess, which is super complicated, and I
> want to completely remove it by re-implementing the sync client on top of the
> async client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22252) ClientBackoffPolicy should not be IA.Public

2019-04-18 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22252:
--
Summary: ClientBackoffPolicy should not be IA.Public  (was: Implement 
client pushback for async client)

> ClientBackoffPolicy should not be IA.Public
> ---
>
> Key: HBASE-22252
> URL: https://issues.apache.org/jira/browse/HBASE-22252
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Priority: Major
>
> We expose this interface as IA.Public, but one of the parameters of the
> getBackoffTime method is IA.Private, which means we do not expect users to
> implement the interface, and it should not be called by users directly either,
> so I think it should be marked as IA.Private. As for its subclasses, we
> should mark them as IA.LimitedPrivate(CONFIG), so users can make use of them
> through the config file.
> It also seems it is only used in AsyncProcess, which is super complicated, and I
> want to completely remove it by re-implementing the sync client on top of the
> async client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22252) ClientBackoffPolicy should not be IA.Public

2019-04-18 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22252:
--
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HBASE-17856)

> ClientBackoffPolicy should not be IA.Public
> ---
>
> Key: HBASE-22252
> URL: https://issues.apache.org/jira/browse/HBASE-22252
> Project: HBase
>  Issue Type: Improvement
>Reporter: Duo Zhang
>Priority: Major
>
> We expose this interface as IA.Public, but one of the parameters of the
> getBackoffTime method is IA.Private, which means we do not expect users to
> implement the interface, and it should not be called by users directly either,
> so I think it should be marked as IA.Private. As for its subclasses, we
> should mark them as IA.LimitedPrivate(CONFIG), so users can make use of them
> through the config file.
> It also seems it is only used in AsyncProcess, which is super complicated, and I
> want to completely remove it by re-implementing the sync client on top of the
> async client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22252) Implement client pushback for async client

2019-04-18 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22252:
--
Description: 
We expose this interface as IA.Public, but one of the parameters of the
getBackoffTime method is IA.Private, which means we do not expect users to
implement the interface, and it should not be called by users directly either,
so I think it should be marked as IA.Private. As for its subclasses, we
should mark them as IA.LimitedPrivate(CONFIG), so users can make use of them
through the config file.

It also seems it is only used in AsyncProcess, which is super complicated, and I
want to completely remove it by re-implementing the sync client on top of the
async client.


  was:
ClientBackoffPolicy should not be IA.Public

We expose this interface as IA.Public, but one of the parameters of the
getBackoffTime method is IA.Private, which means we do not expect users to
implement the interface, and it should not be called by users directly either,
so I think it should be marked as IA.Private. As for its subclasses, we
should mark them as IA.LimitedPrivate(CONFIG), so users can make use of them
through the config file.

It also seems it is only used in AsyncProcess, which is super complicated, and I
want to completely remove it by re-implementing the sync client on top of the
async client.

(*edit*: original subject put in start of description)



> Implement client pushback for async client
> --
>
> Key: HBASE-22252
> URL: https://issues.apache.org/jira/browse/HBASE-22252
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Priority: Major
>
> We expose this interface as IA.Public, but one of the parameters of the
> getBackoffTime method is IA.Private, which means we do not expect users to
> implement the interface, and it should not be called by users directly either,
> so I think it should be marked as IA.Private. As for its subclasses, we
> should mark them as IA.LimitedPrivate(CONFIG), so users can make use of them
> through the config file.
> It also seems it is only used in AsyncProcess, which is super complicated, and I
> want to completely remove it by re-implementing the sync client on top of the
> async client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22252) Implement client pushback for async client

2019-04-18 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821191#comment-16821191
 ] 

Duo Zhang commented on HBASE-22252:
---

OK, so let me open an issue for implementing the logic in the async client, and
change the title back. I think the annotation change should be done in a
separate issue.

> Implement client pushback for async client
> --
>
> Key: HBASE-22252
> URL: https://issues.apache.org/jira/browse/HBASE-22252
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Priority: Major
>
> ClientBackoffPolicy should not be IA.Public
> We expose this interface as IA.Public, but one of the parameters of the
> getBackoffTime method is IA.Private, which means we do not expect users to
> implement the interface, and it should not be called by users directly either,
> so I think it should be marked as IA.Private. As for its subclasses, we
> should mark them as IA.LimitedPrivate(CONFIG), so users can make use of them
> through the config file.
> It also seems it is only used in AsyncProcess, which is super complicated, and I
> want to completely remove it by re-implementing the sync client on top of the
> async client.
> (*edit*: original subject put in start of description)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22252) Implement client pushback for async client

2019-04-18 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821179#comment-16821179
 ] 

Sean Busbey commented on HBASE-22252:
-

In general I prefer deprecating first and then changing in the next major
version, but I'm on the conservative side wrt API handling. In this case,
either changing the annotation or changing the name would be fine on master.

> Implement client pushback for async client
> --
>
> Key: HBASE-22252
> URL: https://issues.apache.org/jira/browse/HBASE-22252
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Priority: Major
>
> ClientBackoffPolicy should not be IA.Public
> We expose this interface as IA.Public, but one of the parameters of the
> getBackoffTime method is IA.Private, which means we do not expect users to
> implement the interface, and it should not be called by users directly either,
> so I think it should be marked as IA.Private. As for its subclasses, we
> should mark them as IA.LimitedPrivate(CONFIG), so users can make use of them
> through the config file.
> It also seems it is only used in AsyncProcess, which is super complicated, and I
> want to completely remove it by re-implementing the sync client on top of the
> async client.
> (*edit*: original subject put in start of description)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22252) Implement client pushback for async client

2019-04-18 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821168#comment-16821168
 ] 

Duo Zhang commented on HBASE-22252:
---

The implementation in the async client will land in branch-2 at least, or even
branch-2.2/branch-2.1.

As for changing the IA.Public annotation, I'm not sure. Maybe we mark them as
deprecated first, and on master we change the names of the classes? Or just
change the annotation?

> Implement client pushback for async client
> --
>
> Key: HBASE-22252
> URL: https://issues.apache.org/jira/browse/HBASE-22252
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Priority: Major
>
> ClientBackoffPolicy should not be IA.Public
> We expose this interface as IA.Public, but one of the parameters of the
> getBackoffTime method is IA.Private, which means we do not expect users to
> implement the interface, and it should not be called by users directly either,
> so I think it should be marked as IA.Private. As for its subclasses, we
> should mark them as IA.LimitedPrivate(CONFIG), so users can make use of them
> through the config file.
> It also seems it is only used in AsyncProcess, which is super complicated, and I
> want to completely remove it by re-implementing the sync client on top of the
> async client.
> (*edit*: original subject put in start of description)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-22244) Make use of MetricsConnection in async client

2019-04-18 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-22244.
---
   Resolution: Fixed
 Assignee: Duo Zhang
 Hadoop Flags: Reviewed
Fix Version/s: 2.3.0
   2.2.0
   3.0.0

Pushed to branch-2.2+.

Thanks [~zghaobac] for reviewing.

> Make use of MetricsConnection in async client
> -
>
> Key: HBASE-22244
> URL: https://issues.apache.org/jira/browse/HBASE-22244
> Project: HBase
>  Issue Type: Sub-task
>  Components: asynclient, metrics
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache9 merged pull request #155: HBASE-22244 Make use of MetricsConnection in async client

2019-04-18 Thread GitBox
Apache9 merged pull request #155: HBASE-22244 Make use of MetricsConnection in 
async client
URL: https://github.com/apache/hbase/pull/155
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22260) Remove deprecated methods in ReplicationLoadSink

2019-04-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821130#comment-16821130
 ] 

Hudson commented on HBASE-22260:


Results for branch master
[build #942 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/942/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/942//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/942//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/942//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove deprecated methods in ReplicationLoadSink
> 
>
> Key: HBASE-22260
> URL: https://issues.apache.org/jira/browse/HBASE-22260
> Project: HBase
>  Issue Type: Task
>Reporter: Sayed Anisul Hoque
>Assignee: Sayed Anisul Hoque
>Priority: Trivial
> Fix For: 3.0.0
>
>
> The ReplicationLoadSink class defines the deprecated
> getTimeStampsOfLastAppliedOp() method, which should be removed. For now this
> change should target 3.0.0 only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-04-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821121#comment-16821121
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #67 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/67/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/67//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/67//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/67//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the HFile into an on-heap byte[],
> then copy the on-heap byte[] to the off-heap bucket cache asynchronously. In my
> 100% get performance test, I also observed some frequent young GCs; the
> largest memory footprint in the young gen should be the on-heap block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of
> into a byte[] to reduce young GC pressure. We did not implement this before
> because the older HDFS client had no ByteBuffer reading interface, but 2.7+
> supports it now, so we can fix this.
> Will provide a patch and some perf comparison for this.
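The core idea in the description above (filling a possibly off-heap ByteBuffer from the file instead of allocating a fresh on-heap byte[] per block) can be sketched with plain java.nio. This is an illustrative stand-in, not the actual HFileBlock code: the `readBlock` method and its parameters are hypothetical, and the real read path goes through HDFS's FSDataInputStream rather than a local FileChannel.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ByteBufferReadSketch {
  // Hypothetical stand-in for the block read: fill a direct (off-heap)
  // ByteBuffer instead of a fresh on-heap byte[], so the young gen never
  // sees the block payload.
  static ByteBuffer readBlock(FileChannel ch, long offset, int size) throws IOException {
    ByteBuffer buf = ByteBuffer.allocateDirect(size);
    while (buf.hasRemaining()) {
      // Positional read; may return fewer bytes than requested, hence the loop.
      int n = ch.read(buf, offset + buf.position());
      if (n < 0) {
        throw new IOException("EOF before the block was fully read");
      }
    }
    buf.flip();
    return buf;
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("block", ".bin");
    byte[] data = new byte[64];
    for (int i = 0; i < data.length; i++) {
      data[i] = (byte) i;
    }
    Files.write(tmp, data);
    try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
      ByteBuffer block = readBlock(ch, 16, 32);
      System.out.println(block.remaining()); // 32
      System.out.println(block.get(0));      // 16
    } finally {
      Files.delete(tmp);
    }
  }
}
```

Per the description, the HDFS 2.7+ client provides the analogous ByteBuffer reading interface that makes this workable in the actual read path.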



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-22245) Please add my public key to committer keys

2019-04-18 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-22245.
-
Resolution: Fixed

pushed updated KEYS file

> Please add my public key to committer keys
> --
>
> Key: HBASE-22245
> URL: https://issues.apache.org/jira/browse/HBASE-22245
> Project: HBase
>  Issue Type: Task
>  Components: community, release
>Reporter: Balazs Meszaros
>Assignee: Sean Busbey
>Priority: Major
> Attachments: meszibalu.asc
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase-connectors] asfgit commented on issue #23: HBASE-22266 Add yetus personality to connectors to avoid scaladoc issues

2019-04-18 Thread GitBox
asfgit commented on issue #23: HBASE-22266 Add yetus personality to connectors 
to avoid scaladoc issues
URL: https://github.com/apache/hbase-connectors/pull/23#issuecomment-484505322
 
 
   
   Refer to this link for build results (access rights to CI server needed): 
   https://builds.apache.org/job/PreCommit-HBASE-CONNECTORS-Build/30/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22208) Create access checker and expose it in RS

2019-04-18 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821066#comment-16821066
 ] 

HBase QA commented on HBASE-22208:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
30s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 27s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}124m 
42s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/113/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22208 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966340/HBASE-22208.master.009.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 41d5373b2231 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / f4aaf735e4 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| 

[jira] [Updated] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one

2019-04-18 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-21957:
-
Attachment: HBASE-21957.HBASE-21879.v10.patch

> Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one
> -
>
> Key: HBASE-21957
> URL: https://issues.apache.org/jira/browse/HBASE-21957
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-21957.HBASE-21879.v1.patch, 
> HBASE-21957.HBASE-21879.v10.patch, HBASE-21957.HBASE-21879.v2.patch, 
> HBASE-21957.HBASE-21879.v3.patch, HBASE-21957.HBASE-21879.v4.patch, 
> HBASE-21957.HBASE-21879.v5.patch, HBASE-21957.HBASE-21879.v6.patch, 
> HBASE-21957.HBASE-21879.v8.patch, HBASE-21957.HBASE-21879.v9.patch, 
> HBASE-21957.HBASE-21879.v9.patch
>
>
> After HBASE-12295, we have blocks with MemoryType.SHARED or
> MemoryType.EXCLUSIVE; a block in the off-heap BucketCache will be shared and
> has a reference count to track its life cycle. If no RPC references the
> shared block, then the block can be evicted.
> After HBASE-21916, we introduced a refcount for ByteBuff, so I
> think we can unify the two into one. I tried to fix this when preparing the
> patch for HBASE-21879, but it can be a separate sub-task and won't affect the
> main logic of HBASE-21879, so I created a separate one.
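A minimal sketch of what a unified reference count could look like: one shared counter whose deallocator runs exactly once, when the last holder (cache eviction or an in-flight RPC) releases. The `RefCnt` class and its API here are illustrative assumptions, not the actual HBase or Netty implementation.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCntSketch {
  // Hypothetical unified reference count: the cache entry and the ByteBuff
  // it wraps share one counter, so there is a single place that decides
  // when the underlying off-heap memory can be reclaimed.
  static final class RefCnt {
    private final AtomicInteger cnt = new AtomicInteger(1); // created with one reference
    private final Runnable deallocator;

    RefCnt(Runnable deallocator) {
      this.deallocator = deallocator;
    }

    RefCnt retain() {
      cnt.incrementAndGet();
      return this;
    }

    // Returns true when this call dropped the last reference and freed the memory.
    boolean release() {
      int v = cnt.decrementAndGet();
      if (v == 0) {
        deallocator.run();
        return true;
      }
      if (v < 0) {
        throw new IllegalStateException("released too many times");
      }
      return false;
    }

    int refCnt() {
      return cnt.get();
    }
  }

  public static void main(String[] args) {
    final boolean[] freed = {false};
    RefCnt rc = new RefCnt(() -> freed[0] = true); // the cache holds the initial ref
    rc.retain();                      // an in-flight RPC takes a reference
    System.out.println(rc.release()); // RPC done -> false, block still cached
    System.out.println(freed[0]);     // false
    System.out.println(rc.release()); // eviction drops the last ref -> true
    System.out.println(freed[0]);     // true
  }
}
```

The point of unifying the two counters is exactly this single-owner property: neither the cache nor the RPC path can free (or double-free) memory the other still holds.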



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22245) Please add my public key to committer keys

2019-04-18 Thread Balazs Meszaros (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820960#comment-16820960
 ] 

Balazs Meszaros commented on HBASE-22245:
-

I replaced the original attachment. Thanks [~busbey]!

> Please add my public key to committer keys
> --
>
> Key: HBASE-22245
> URL: https://issues.apache.org/jira/browse/HBASE-22245
> Project: HBase
>  Issue Type: Task
>  Components: community, release
>Reporter: Balazs Meszaros
>Assignee: Sean Busbey
>Priority: Major
> Attachments: meszibalu.asc
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22245) Please add my public key to committer keys

2019-04-18 Thread Balazs Meszaros (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-22245:

Attachment: meszibalu.asc

> Please add my public key to committer keys
> --
>
> Key: HBASE-22245
> URL: https://issues.apache.org/jira/browse/HBASE-22245
> Project: HBase
>  Issue Type: Task
>  Components: community, release
>Reporter: Balazs Meszaros
>Assignee: Sean Busbey
>Priority: Major
> Attachments: meszibalu.asc
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22245) Please add my public key to committer keys

2019-04-18 Thread Balazs Meszaros (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-22245:

Attachment: (was: meszibalu.asc)

> Please add my public key to committer keys
> --
>
> Key: HBASE-22245
> URL: https://issues.apache.org/jira/browse/HBASE-22245
> Project: HBase
>  Issue Type: Task
>  Components: community, release
>Reporter: Balazs Meszaros
>Assignee: Sean Busbey
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase-connectors] meszibalu opened a new pull request #23: HBASE-22266 Add yetus personality to connectors to avoid scaladoc issues

2019-04-18 Thread GitBox
meszibalu opened a new pull request #23: HBASE-22266 Add yetus personality to 
connectors to avoid scaladoc issues
URL: https://github.com/apache/hbase-connectors/pull/23
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-22266) Add yetus personality to connectors to avoid scaladoc issues

2019-04-18 Thread Balazs Meszaros (JIRA)
Balazs Meszaros created HBASE-22266:
---

 Summary: Add yetus personality to connectors to avoid scaladoc 
issues
 Key: HBASE-22266
 URL: https://issues.apache.org/jira/browse/HBASE-22266
 Project: HBase
  Issue Type: Task
  Components: hbase-connectors
Affects Versions: connector-1.0.0
Reporter: Balazs Meszaros
Assignee: Balazs Meszaros
 Fix For: connector-1.0.0


The yetus scaladoc plugin failed, because yetus tries to run it on every module:
{noformat}
[ERROR] No plugin found for prefix 'scala' in the current project and in the 
plugin groups [org.apache.maven.plugins, org.codehaus.mojo] available from the 
repositories [local 
(/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-CONNECTORS-Build/yetus-m2/hbase-connectors-master-patch-1),
 central (https://repo.maven.apache.org/maven2)] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/NoPluginFoundForPrefixException
{noformat}

We should enable the scaladoc plugin only on the hbase-spark project.
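One hedged way to do that (a sketch, not necessarily the committed fix; the plugin coordinates and version shown are illustrative) is to declare the scala-maven-plugin only in the hbase-spark module's pom.xml rather than in the parent, so the `scala` plugin prefix resolves only where scaladoc actually applies:

```xml
<!-- Sketch: build section of the hbase-spark module pom.xml only.
     Version is illustrative; keeping this out of the parent pom means
     the scaladoc check no longer fails on the other modules. -->
<build>
  <plugins>
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>3.4.4</version>
    </plugin>
  </plugins>
</build>
```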



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22208) Create access checker and expose it in RS

2019-04-18 Thread Yi Mei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei updated HBASE-22208:
---
Attachment: HBASE-22208.master.009.patch

> Create access checker and expose it in RS
> -
>
> Key: HBASE-22208
> URL: https://issues.apache.org/jira/browse/HBASE-22208
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Attachments: HBASE-22208.master.001.patch, 
> HBASE-22208.master.002.patch, HBASE-22208.master.003.patch, 
> HBASE-22208.master.004.patch, HBASE-22208.master.005.patch, 
> HBASE-22208.master.006.patch, HBASE-22208.master.007.patch, 
> HBASE-22208.master.008.patch, HBASE-22208.master.009.patch
>
>
> In HBase access control service, the access checker performs authorization 
> checks against a given user's assigned permissions. The access checker holds 
> an auth manager instance which caches all global, namespace and table 
> permissions.
> An access checker is created when master, RS and region load 
> AccessController; the permission cache is refreshed when the acl znode 
> changes.
> We can create the access checker when master and RS start and expose it, in 
> order to use a procedure to refresh its cache rather than watching ZK.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22087) Update LICENSE/shading for the dependencies from the latest Hadoop trunk

2019-04-18 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820919#comment-16820919
 ] 

Adam Antal commented on HBASE-22087:


Thanks [~jojochuang]. I agree with following that line and not creating Maven 
profiles.

Any objections against the commit?

> Update LICENSE/shading for the dependencies from the latest Hadoop trunk
> 
>
> Key: HBASE-22087
> URL: https://issues.apache.org/jira/browse/HBASE-22087
> Project: HBase
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-22087.master.001.patch, depcheck_hadoop33.log
>
>
> The following list of dependencies were added in Hadoop trunk (3.3.0) and 
> HBase does not compile successfully:
> YARN-8778 added jline 3.9.0
> HADOOP-15775 added javax.activation
> HADOOP-15531 added org.apache.common.text (commons-text)
> HADOOP-15764 added dnsjava (org.xbill)
> Some of these are needed to support JDK9/10/11 in Hadoop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-22259) Removed deprecated method in ReplicationLoadSource

2019-04-18 Thread Jan Hentschel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel resolved HBASE-22259.
---
   Resolution: Fixed
 Hadoop Flags: Incompatible change
Fix Version/s: 3.0.0
 Release Note: Removed the deprecated getTimeStampOfLastShippedOp() method 
from ReplicationLoadSource. Use 
ReplicationLoadSource#getTimestampOfLastShippedOp() instead.

> Removed deprecated method in ReplicationLoadSource
> --
>
> Key: HBASE-22259
> URL: https://issues.apache.org/jira/browse/HBASE-22259
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: 3.0.0
>
>
> ReplicationLoadSource#getTimeStampOfLastShippedOp was deprecated in 2.0.0 and 
> should be removed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] HorizonNet merged pull request #160: HBASE-22259 Removed deprecated method in ReplicationLoadSource

2019-04-18 Thread GitBox
HorizonNet merged pull request #160: HBASE-22259 Removed deprecated method in 
ReplicationLoadSource
URL: https://github.com/apache/hbase/pull/160
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-21959) CompactionTool should close the store it uses for compacting files, in order to properly archive compacted files.

2019-04-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820893#comment-16820893
 ] 

Hudson commented on HBASE-21959:


Results for branch branch-1
[build #779 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/779/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/779//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/779//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/779//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> CompactionTool should close the store it uses for compacting files, in order 
> to properly archive compacted files.
> -
>
> Key: HBASE-21959
> URL: https://issues.apache.org/jira/browse/HBASE-21959
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0
>
> Attachments: HBASE-21959-branch-1-001.patch, 
> HBASE-21959-branch-1-002.patch, HBASE-21959-branch-1.patch, 
> HBASE-21959-master-001.patch, HBASE-21959-master-002.patch, 
> HBASE-21959-master-003.patch
>
>
> While using CompactionTool to offload RSes, noticed compacted files were 
> never archived from the original region dir, causing the space used by the 
> region to actually double. Going through the compaction related code on 
> HStore, which is used by CompactionTool for performing compactions, found 
> out that compacted file archiving happens mainly while closing the HStore 
> instance. CompactionTool never explicitly closes its HStore instance, so 
> adding a simple patch that properly closes the store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI

2019-04-18 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-20626:
--
Attachment: HBASE-20626.master.002.patch

> Change the value of "Requests Per Second" on WEBUI
> --
>
> Key: HBASE-20626
> URL: https://issues.apache.org/jira/browse/HBASE-20626
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics, UI
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20626.master.001.patch, 
> HBASE-20626.master.002.patch
>
>
> Now we use "totalRequestCount" (RSRpcServices#requestCount) to calculate 
> requests per second.
>  After HBASE-18469, "totalRequestCount" counts only once for a multi 
> request (including requests that are not serviced by regions).
>  When we have a large number of read and write requests, the value of 
> "Requests Per Second" is very small, which does not reflect the load of 
> the cluster.
> Maybe it is more reasonable to use "totalRowActionRequestCount" to 
> calculate RPS?
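The distinction above can be sketched numerically (the function name is illustrative, not an actual HBase API): the UI derives requests per second from the delta of a monotonically increasing counter between metric snapshots, so a single multi() carrying many row actions barely moves "totalRequestCount" while "totalRowActionRequestCount" tracks the real load.

```python
# Sketch: rate computed from two snapshots of a monotonic counter.
def requests_per_second(prev_count, curr_count, interval_secs):
    return (curr_count - prev_count) / interval_secs

# One multi() batching 1000 row actions, observed over a 1s interval:
print(requests_per_second(0, 1, 1))     # totalRequestCount view -> 1.0
print(requests_per_second(0, 1000, 1))  # totalRowActionRequestCount view -> 1000.0
```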



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21959) CompactionTool should close the store it uses for compacting files, in order to properly archive compacted files.

2019-04-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820876#comment-16820876
 ] 

Hudson commented on HBASE-21959:


Results for branch branch-1.4
[build #754 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/754/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/754//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/754//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/754//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> CompactionTool should close the store it uses for compacting files, in order 
> to properly archive compacted files.
> -
>
> Key: HBASE-21959
> URL: https://issues.apache.org/jira/browse/HBASE-21959
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0
>
> Attachments: HBASE-21959-branch-1-001.patch, 
> HBASE-21959-branch-1-002.patch, HBASE-21959-branch-1.patch, 
> HBASE-21959-master-001.patch, HBASE-21959-master-002.patch, 
> HBASE-21959-master-003.patch
>
>
> While using CompactionTool to offload RSes, noticed compacted files were 
> never archived from the original region dir, causing the space used by the 
> region to actually double. Going through the compaction related code on 
> HStore, which is used by CompactionTool for performing compactions, found 
> out that compacted file archiving happens mainly while closing the HStore 
> instance. CompactionTool never explicitly closes its HStore instance, so 
> adding a simple patch that properly closes the store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22263) Master creates duplicate ServerCrashProcedure on initialization, leading to assignment hanging in region-dense clusters

2019-04-18 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820874#comment-16820874
 ] 

Wellington Chevreuil commented on HBASE-22263:
--

Yeah, the master proc wal removal workaround has been widely used: especially 
after several unsuccessful master restart attempts cause even more SCPs to be 
piled on it, operators observe an increasing number of master proc wal files 
and resort to excluding those. It's a bit concerning that this is now seen as 
a "standard" approach for branch-1 based releases among the admin/support 
community; I had already seen cases where this was attempted for branch-2 AMv2 
assignment issues, and it didn't end very well. Maybe worth some extra notes 
on this in the guide, and/or even extra messaging on RIT related log messages 
warning about the risk of playing with master proc wals.

> Master creates duplicate ServerCrashProcedure on initialization, leading to 
> assignment hanging in region-dense clusters
> ---
>
> Key: HBASE-22263
> URL: https://issues.apache.org/jira/browse/HBASE-22263
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> h3. Problem:
> During Master initialization we
>  # restore existing procedures that still need to run from prior active 
> Master instances
>  # look for signs that Region Servers have died and need to be recovered 
> while we were out and schedule a ServerCrashProcedure (SCP) for each them
>  # turn on the assignment manager
> The normal turn of events for a ServerCrashProcedure will attempt to use a 
> bulk assignment to maintain the set of regions on a RS if possible. However, 
> we wait around and retry a bit later if the assignment manager isn’t ready 
> yet.
> Note that currently #2 has no notion of whether or not a previous active 
> Master instance has already done a check. This means we might schedule an 
> SCP for a ServerName (host, port, start code) that already has an SCP 
> scheduled. Ideally, such a duplicate should be a no-op.
> However, before step #2 schedules the SCP it first marks the region server as 
> dead and not yet processed, with the expectation that the SCP it just created 
> will check whether there is log splitting work and then mark the server as 
> ready for region assignment. At the same time, any restored SCPs that are 
> past the log splitting step will still be waiting for the AssignmentManager. 
> As a part of restoring themselves, they do not update with the current 
> master instance to show that they are past the point of WAL processing.
> Once the AssignmentManager starts in #3 the restored SCP continues; it will 
> eventually get to the assignment phase and find that its server is marked as 
> dead and in need of wal processing. Such assignments are skipped with a log 
> message. Thus as we iterate over the regions to assign we’ll skip all of 
> them. This non-intuitively shifts the “no-op” status from the newer SCP we 
> scheduled at #2 to the older SCP that was restored in #1.
> Bulk assignment works by sending the assign calls via a pool to allow more 
> parallelism. Once we’ve set up the pool we just wait to see if the region 
> state updates to online. Unfortunately, since all of the assigns got skipped, 
> we’ll never change the state for any of these regions. That means the bulk 
> assign, and the older SCP that started it, will wait until it hits a timeout.
> By default the timeout for a bulk assignment is the smaller of {{(# Regions 
> in the plan * 10s)}} or {{(# Regions in the most loaded RS in the plan * 1s + 
> 60s + # of RegionServers in the cluster * 30s)}}. For even modest clusters 
> with several hundreds of regions per region server, this means the “no-op” 
> SCP will end up waiting ~tens-of-minutes (e.g. ~50 minutes for an average 
> region density of 300 regions per region server on a 100 node cluster. ~11 
> minutes for 300 regions per region server on a 10 node cluster). During this 
> time, the SCP will hold one of the available procedure execution slots for 
> both the overall pool and for the specific server queue.
> As previously mentioned, restored SCPs will retry their submission if the 
> assignment manager has not yet been activated (done in #3), this can cause 
> them to be scheduled after the newer SCPs (created in #2). Thus the order of 
> execution of no-op and usable SCPs can vary from run-to-run of master 
> initialization.
> This means that unless you get lucky with SCP ordering, impacted regions will 
> remain as RIT for an extended period of time. If you get particularly unlucky 
> and a critical system 
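The timeout bound described in the report can be checked with a short sketch (the function name is illustrative; the constants are the defaults quoted above):

```python
# Sketch of the default bulk-assignment timeout: the smaller of
# (# regions in the plan * 10s) and
# (# regions on the most loaded RS * 1s + 60s + # region servers * 30s).
def bulk_assign_timeout_secs(regions_in_plan, max_regions_on_one_rs,
                             num_region_servers):
    per_region_bound = regions_in_plan * 10
    per_server_bound = max_regions_on_one_rs * 1 + 60 + num_region_servers * 30
    return min(per_region_bound, per_server_bound)

# 300 regions per RS on a 100-node cluster: ~50 minutes
print(bulk_assign_timeout_secs(300, 300, 100) / 60)  # 50.0
# 300 regions per RS on a 10-node cluster: ~11 minutes
print(bulk_assign_timeout_secs(300, 300, 10) / 60)   # 11.0
```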

[jira] [Commented] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI

2019-04-18 Thread Guangxu Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820871#comment-16820871
 ] 

Guangxu Cheng commented on HBASE-20626:
---

 Ping [~carp84] [~zghaobac], mind taking a look at it? Thanks.

> Change the value of "Requests Per Second" on WEBUI
> --
>
> Key: HBASE-20626
> URL: https://issues.apache.org/jira/browse/HBASE-20626
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics, UI
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20626.master.001.patch
>
>
> Now we use "totalRequestCount" (RSRpcServices#requestCount) to calculate 
> requests per second.
>  After HBASE-18469, "totalRequestCount" counts only once for a multi 
> request (including requests that are not serviced by regions).
>  When we have a large number of read and write requests, the value of 
> "Requests Per Second" is very small, which does not reflect the load of 
> the cluster.
> Maybe it is more reasonable to use "totalRowActionRequestCount" to 
> calculate RPS?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22256) Enabling FavoredStochasticBalancer on existing cluster leaves regions unassigned

2019-04-18 Thread Nikhil Bafna (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820870#comment-16820870
 ] 

Nikhil Bafna commented on HBASE-22256:
--

TestReplicationKillSlaveRS failed on the build server, but it passes locally.

> Enabling FavoredStochasticBalancer on existing cluster leaves regions 
> unassigned
> 
>
> Key: HBASE-22256
> URL: https://issues.apache.org/jira/browse/HBASE-22256
> Project: HBase
>  Issue Type: Bug
>  Components: Balancer
>Affects Versions: 2.1.3
>Reporter: Nikhil Bafna
>Priority: Major
> Attachments: HBASE-22256.master.001.patch, 
> HBASE-22256.master.001.patch
>
>
> This is related to HBASE-18349.
> The test that fails corresponding to this is 
> TestFavoredStochasticLoadBalancer#testMisplacedRegions. When a region is 
> misplaced w.r.t. the favored nodes, this balancer unassigns the region and 
> the new RegionPlan has the source server as null, leading to an NPE later. 
> This leaves the affected regions unassigned after the balancer run.
> This is problematic especially when moving from a different balancer to the 
> FavoredStochasticLoadBalancer, because all regions would be "misplaced" in 
> the favored balancer's run.
> The fix is along the lines of HBASE-18602.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21959) CompactionTool should close the store it uses for compacting files, in order to properly archive compacted files.

2019-04-18 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820856#comment-16820856
 ] 

Wellington Chevreuil commented on HBASE-21959:
--

Hey [~apurtell], is this test introducing timeout failures? Maybe we should 
reduce the number of files for compaction in the test.

> CompactionTool should close the store it uses for compacting files, in order 
> to properly archive compacted files.
> -
>
> Key: HBASE-21959
> URL: https://issues.apache.org/jira/browse/HBASE-21959
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0
>
> Attachments: HBASE-21959-branch-1-001.patch, 
> HBASE-21959-branch-1-002.patch, HBASE-21959-branch-1.patch, 
> HBASE-21959-master-001.patch, HBASE-21959-master-002.patch, 
> HBASE-21959-master-003.patch
>
>
> While using CompactionTool to offload RSes, noticed compacted files were 
> never archived from the original region dir, causing the space used by the 
> region to actually double. Going through the compaction related code on 
> HStore, which is used by CompactionTool for performing compactions, found 
> out that compacted file archiving happens mainly while closing the HStore 
> instance. CompactionTool never explicitly closes its HStore instance, so 
> adding a simple patch that properly closes the store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22256) Enabling FavoredStochasticBalancer on existing cluster leaves regions unassigned

2019-04-18 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820851#comment-16820851
 ] 

HBase QA commented on HBASE-22256:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
45s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 8s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
12m 44s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}169m 47s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}216m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/112/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22256 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966317/HBASE-22256.master.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux cbe18638fec2 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 428afa9c5e |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/112/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/112/testReport/ |
| Max. process+thread count | 4839 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/112/console |
| Powered by | Apache 

[jira] [Commented] (HBASE-15533) Add RSGroup Favored Balancer

2019-04-18 Thread Nikhil Bafna (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820840#comment-16820840
 ] 

Nikhil Bafna commented on HBASE-15533:
--

Thanks, [~thiruvel].

I had also submitted a patch in HBASE-22256 to fix 
TestFavoredStochasticLoadBalancer#testMisplacedRegions.

 

> Add RSGroup Favored Balancer
> 
>
> Key: HBASE-15533
> URL: https://issues.apache.org/jira/browse/HBASE-15533
> Project: HBase
>  Issue Type: Sub-task
>  Components: FavoredNodes
>Reporter: Francis Liu
>Assignee: Thiruvel Thirumoolan
>Priority: Major
> Attachments: HBASE-15533.master.001.patch, 
> HBASE-15533.master.002.patch, HBASE-15533.patch, HBASE-15533.rough.draft.patch
>
>
> HBASE-16942 added favored stochastic load balancer so we can pick and choose 
> nodes to assign based on the favored nodes and load/locality. The intention 
> of this jira is to add a group based load balancer on top of the favored 
> stochastic balancer. This will ensure splits/merges will only use favored 
> nodes from that group and will inherit from the parents appropriately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22208) Create access checker and expose it in RS

2019-04-18 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820836#comment-16820836
 ] 

HBase QA commented on HBASE-22208:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
40s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}191m 
10s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}238m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/111/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22208 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966310/HBASE-22208.master.008.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 890b774497c2 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 428afa9c5e |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |

[jira] [Commented] (HBASE-15533) Add RSGroup Favored Balancer

2019-04-18 Thread Thiruvel Thirumoolan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820830#comment-16820830
 ] 

Thiruvel Thirumoolan commented on HBASE-15533:
--

[~zodvik],

Thanks for the interest. Most of this patch should work as it is. Since the 
majority of these tests depend on the FavoredStochastic unit tests, lemme get 
HBASE-18349 in before continuing to work on this. I think I have a patch to 
fix all but one unit test, so I will post a partial patch or the whole patch 
on HBASE-18349 and then resume work on this.

> Add RSGroup Favored Balancer
> 
>
> Key: HBASE-15533
> URL: https://issues.apache.org/jira/browse/HBASE-15533
> Project: HBase
>  Issue Type: Sub-task
>  Components: FavoredNodes
>Reporter: Francis Liu
>Assignee: Thiruvel Thirumoolan
>Priority: Major
> Attachments: HBASE-15533.master.001.patch, 
> HBASE-15533.master.002.patch, HBASE-15533.patch, HBASE-15533.rough.draft.patch
>
>
> HBASE-16942 added the favored stochastic load balancer so we can pick and choose 
> nodes to assign based on the favored nodes and load/locality. The intention 
> of this jira is to add a group-based load balancer on top of the favored 
> stochastic balancer. This will ensure that splits/merges only use favored 
> nodes from that group and that the resulting regions inherit favored nodes 
> from their parents appropriately.
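
The layering described above can be sketched in plain Java. This is illustrative only, not the actual HBase `LoadBalancer` API: the `Server` and `Region` records and the round-robin inner balancer are hypothetical stand-ins. The idea is to partition servers and regions by RSGroup and delegate each group's placement to an inner (in HBase, favored-stochastic) balancer, so no region is ever assigned outside its group.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;

/**
 * Hypothetical sketch of a group-aware balancer: partition by group,
 * then delegate each group's assignment to an inner balancer.
 */
public class GroupedBalancerSketch {

  // Hypothetical stand-ins for server/region metadata carrying a group tag.
  record Server(String name, String group) {}
  record Region(String name, String group) {}

  /** Assign each region only to servers of its own group. */
  static Map<String, List<String>> balance(
      List<Server> servers, List<Region> regions,
      BiFunction<List<Server>, List<Region>, Map<String, List<String>>> inner) {
    // Partition servers by group.
    Map<String, List<Server>> serversByGroup = new HashMap<>();
    for (Server s : servers) {
      serversByGroup.computeIfAbsent(s.group(), g -> new ArrayList<>()).add(s);
    }
    // Partition regions by group.
    Map<String, List<Region>> regionsByGroup = new HashMap<>();
    for (Region r : regions) {
      regionsByGroup.computeIfAbsent(r.group(), g -> new ArrayList<>()).add(r);
    }
    // Delegate each group's placement to the inner balancer, restricted
    // to that group's servers; merge the per-group assignment plans.
    Map<String, List<String>> plan = new HashMap<>();
    for (Map.Entry<String, List<Region>> e : regionsByGroup.entrySet()) {
      List<Server> groupServers = serversByGroup.getOrDefault(e.getKey(), List.of());
      plan.putAll(inner.apply(groupServers, e.getValue()));
    }
    return plan;
  }

  /** Trivial round-robin inner balancer standing in for the favored one. */
  static Map<String, List<String>> roundRobin(List<Server> servers, List<Region> regions) {
    Map<String, List<String>> out = new HashMap<>();
    for (int i = 0; i < regions.size(); i++) {
      Server s = servers.get(i % servers.size());
      out.computeIfAbsent(s.name(), k -> new ArrayList<>()).add(regions.get(i).name());
    }
    return out;
  }

  public static void main(String[] args) {
    List<Server> servers = List.of(new Server("rs1", "g1"), new Server("rs2", "g2"));
    List<Region> regions = List.of(
        new Region("r1", "g1"), new Region("r2", "g2"), new Region("r3", "g1"));
    Map<String, List<String>> plan = balance(servers, regions, GroupedBalancerSketch::roundRobin);
    // Every region lands on a server of its own group.
    if (!plan.get("rs1").equals(List.of("r1", "r3"))) throw new AssertionError(plan);
    if (!plan.get("rs2").equals(List.of("r2"))) throw new AssertionError(plan);
  }
}
```

Because each inner invocation only sees its own group's servers, the group constraint holds by construction, which mirrors why layering a group balancer over the favored balancer keeps favored nodes within the group.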



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache-HBase commented on issue #163: HBASE-21995 Add a coprocessor to set HDFS ACL for hbase granted user

2019-04-18 Thread GitBox
Apache-HBase commented on issue #163: HBASE-21995 Add a coprocessor to set HDFS 
ACL for hbase granted user
URL: https://github.com/apache/hbase/pull/163#issuecomment-484377327
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 394 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 302 | master passed |
   | +1 | compile | 55 | master passed |
   | +1 | checkstyle | 72 | master passed |
   | +1 | shadedjars | 281 | branch has no errors when building our shaded downstream artifacts. |
   | +1 | findbugs | 215 | master passed |
   | +1 | javadoc | 34 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 258 | the patch passed |
   | +1 | compile | 53 | the patch passed |
   | +1 | javac | 53 | the patch passed |
   | -1 | checkstyle | 67 | hbase-server: The patch generated 3 new + 51 unchanged - 2 fixed = 54 total (was 53) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 269 | patch has no errors when building our shaded downstream artifacts. |
   | +1 | hadoopcheck | 554 | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
   | +1 | findbugs | 213 | the patch passed |
   | +1 | javadoc | 34 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 12876 | hbase-server in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 15781 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestFromClientSide |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-163/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/163 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 495d4df97ab4 4.4.0-137-generic #163-Ubuntu SMP Mon Sep 24 13:14:43 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / 428afa9c5e |
   | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-163/1/artifact/out/diff-checkstyle-hbase-server.txt |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-163/1/artifact/out/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-163/1/testReport/ |
   | Max. process+thread count | 4769 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-163/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

