[jira] [Commented] (HBASE-24876) Fix the flaky job url for branch-2.2

2020-08-12 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176776#comment-17176776
 ] 

Duo Zhang commented on HBASE-24876:
---

It is not only for branch-2.2; we have the same incorrect url on master. So 
let's fix it for all branches?
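
The fix amounts to pointing the precommit script at the new ci-hadoop.apache.org Jenkins host instead of the retired builds.apache.org one. A hedged sketch (the function name and exact job path are assumptions, not the actual patch):

```shell
# Hedged sketch: build the flaky-excludes URL against ci-hadoop.apache.org
# instead of the retired builds.apache.org host. The job path is an assumption.
flaky_excludes_url() {
  local branch="$1"
  echo "https://ci-hadoop.apache.org/job/HBase/job/HBase-Find-Flaky-Tests/job/${branch}/lastSuccessfulBuild/artifact/excludes/"
}

# The precommit script would then fetch it, ignoring a 404 rather than
# aborting the run, e.g.:
#   wget -q -O excludes "$(flaky_excludes_url branch-2.2)" || echo "Ignoring and proceeding."
flaky_excludes_url branch-2.2
```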

> Fix the flaky job url for branch-2.2
> 
>
> Key: HBASE-24876
> URL: https://issues.apache.org/jira/browse/HBASE-24876
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Priority: Major
>
>  
> I found that the precommit job of branch-2.2 still used the wrong url. See 
> [https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2247/1/console]
> {code:java}
> 16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: Personality: patch unit
> 16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: EXCLUDE_TESTS_URL=
> 16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: INCLUDE_TESTS_URL=
> 16:56:14  --2020-08-12 08:56:14--  
> https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/excludes/
> 16:56:14  Resolving builds.apache.org (builds.apache.org)... 195.201.213.130, 
> 2a01:4f8:c0:2cc9::2
> 16:56:14  Connecting to builds.apache.org 
> (builds.apache.org)|195.201.213.130|:443... connected.
> 16:56:15  HTTP request sent, awaiting response... 404 
> 16:56:15  2020-08-12 08:56:15 ERROR 404: (no description).
> 16:56:15  
> 16:56:15  Wget error 8 in fetching excludes file from url 
> https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/excludes/.
>  Ignoring and proceeding.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] taklwu edited a comment on pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …

2020-08-12 Thread GitBox


taklwu edited a comment on pull request #2237:
URL: https://github.com/apache/hbase/pull/2237#issuecomment-673253206


   first of all, thanks Duo again. 
   
   > I think for the scenario here, we just need to write the cluster id and 
other things to zookeeper? Just make sure that the current code in HBase will 
not consider us as a fresh new cluster. We do not need to rebuild meta?
   
   So, let me confirm your suggestion: if we add one more field to the ZNode, 
e.g. a boolean `completedMetaBoostrap`, and we find both `clusterId` and 
`completedMetaBoostrap` in ZK, then we will not delete the meta directory?
   
   Follow-up: if ZNode data is used to determine whether this is a fresh new 
cluster, can we also skip deleting the meta directory when `clusterId` and 
`completedMetaBoostrap` were never set but we do find a meta directory? This 
is the cloud use case, where we don't have ZK to make the decision, so we 
cannot know whether the meta is partial. IMO we should just leave the meta 
directory, and if anything bad happens the operator can still run HBCK. (If 
we go the other way and always delete the meta, we lose the possibility that 
the cluster can heal itself, and we still cannot confirm whether it is 
partial, can we?)
   
> For the InitMetaProcedure, the assumption is that, if we found that the 
meta table directory is there, then it means the procedure itself has crashed 
before finishing the creation of meta table, i.e, the meta table is 'partial'. 
So it is safe to just remove it and create again. I think this is a very common 
trick in distributed system for handling failures?
   
   Do you mean the `trick` is being `idempotent`? `InitMetaProcedure` may be 
idempotent and can bring `hbase:meta` online (as an empty table), but I don't 
think the cluster/HMaster itself is automatically `idempotent`. Yes, it can 
rebuild the data content of the original meta with the help of HBCK, but if 
the HMaster continues the flow with some existing data, e.g. the namespace 
table (sorry, on branch-2 we still have a namespace table), and restarts with 
an empty meta, then based on the experiment I did, the cluster hangs and the 
HMaster cannot finish initialization.
   
   If we step back and just think about the definition of a `partial` meta, it 
would be great if the meta table itself could tell whether it is partial, 
because it is still a table in HBase and HFiles are immutable. E.g., can we 
tell whether a user table is partial by looking at its data? I may be wrong, 
but it seems we cannot tell from the HFiles alone; we need ZK and the WAL to 
define it.
   
   So, again, IMO the data content of a table is sensitive ([updated] sorry if 
you guys think the data in the meta table is not sensitive), and I'm 
proposing not to delete the meta directory if possible (deleting and 
rebuilding it is also something running HBCK could do).
   
   Based on our discussion here, IMO we have two proposals for defining a 
`partial meta`:
   
   1. add a boolean in the WAL, like proc-level data
   2. write a boolean to the ZNode to tell whether the bootstrap completed
   *. no matter whether we choose 1) or 2) above, we have an additional 
condition: if we don't find any WAL or ZK record of this flag, we should not 
delete the meta table.
   
   It seems 2) + *) would be the simplest solution; what do you guys think?
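
Proposal 2) combined with condition *) can be sketched as a small decision helper. This is a hypothetical illustration only, not HBase code; the class and flag names (including `completedMetaBoostrap`, spelled as in the comment above) are assumptions:

```java
// Hypothetical sketch of proposal 2) + *): decide whether InitMetaProcedure
// may delete an existing meta directory. All names are illustrative.
final class MetaBootstrapCheck {
  /**
   * @param clusterIdInZk     true if the clusterId znode exists
   * @param bootstrapFlagInZk true if the completedMetaBoostrap znode exists
   * @param metaDirExists     true if the meta table directory is on the FS
   * @return true only when we positively know the meta directory is partial
   */
  static boolean mayDeleteMetaDir(boolean clusterIdInZk,
                                  boolean bootstrapFlagInZk,
                                  boolean metaDirExists) {
    if (!metaDirExists) {
      return false;               // nothing to delete
    }
    if (clusterIdInZk && bootstrapFlagInZk) {
      return false;               // bootstrap completed: meta is not partial
    }
    if (!clusterIdInZk && !bootstrapFlagInZk) {
      // Condition *): no ZK evidence either way (e.g. the cloud case) --
      // keep the directory and let the operator run HBCK if needed.
      return false;
    }
    // clusterId present but bootstrap flag missing: the procedure crashed
    // mid-bootstrap, so the meta directory is partial and safe to recreate.
    return true;
  }
}
```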
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24750) All executor service should start using guava ThreadFactory

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176759#comment-17176759
 ] 

Hudson commented on HBASE-24750:


Results for branch branch-2
[build #6 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> All executor service should start using guava ThreadFactory
> ---
>
> Key: HBASE-24750
> URL: https://issues.apache.org/jira/browse/HBASE-24750
> Project: HBase
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> Currently, the majority of our executor services use guava's 
> ThreadFactoryBuilder when creating fixed-size thread pools. Some executors 
> use our internal hbase-common Threads class, which provides util methods for 
> creating a thread factory.
> Although there is no perf impact, we should let all executors use our 
> internal library for the ThreadFactory rather than carrying the external 
> guava dependency (which is nothing more than a builder class). We might have 
> to add a couple more arguments to support a full-fledged ThreadFactory, but 
> let's do it and stop using guava's builder class.
> *Update:*
> Based on the consensus, we should use only the guava library and retire our 
> internal code that maintains ThreadFactory creation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24583) Normalizer can't actually merge empty regions when neighbor is larger than average size

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176761#comment-17176761
 ] 

Hudson commented on HBASE-24583:


Results for branch branch-2
[build #6 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Normalizer can't actually merge empty regions when neighbor is larger than 
> average size
> ---
>
> Key: HBASE-24583
> URL: https://issues.apache.org/jira/browse/HBASE-24583
> Project: HBase
>  Issue Type: Bug
>  Components: master, Normalizer
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0
>
>
> There are plenty of cases where empty regions can accumulate -- incorrect 
> guesses at split points, old data automatically expiring off, etc. The 
> normalizer stubbornly refuses to handle this case, despite it being an 
> original feature the normalizer was intended to support (HBASE-6613).
> Earlier discussion raised concerns about a user pre-splitting a table and 
> then the normalizer coming along and merging those splits away before they 
> could be populated. Thus, right now, the default behavior via 
> {{hbase.normalizer.merge.min_region_size.mb=1}} is to not merge any region 
> that small. Later, we added 
> {{hbase.normalizer.merge.min_region_age.days=3}}, which prevents us from 
> merging any region that is too young. So there are plenty of knobs for an 
> operator to customize the behavior.
> But when I set {{hbase.normalizer.merge.min_region_size.mb=0}}, I still end 
> up with stubborn regions that won't merge away. It looks like a large 
> neighbor will prevent a merge from going through.
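
For reference, the two knobs mentioned above are set in hbase-site.xml; an illustrative fragment using the values quoted in the description:

```xml
<!-- Illustrative values only, taken from the description above. -->
<property>
  <name>hbase.normalizer.merge.min_region_size.mb</name>
  <value>0</value>
</property>
<property>
  <name>hbase.normalizer.merge.min_region_age.days</name>
  <value>3</value>
</property>
```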



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24844) Exception on standalone (master) shutdown

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176760#comment-17176760
 ] 

Hudson commented on HBASE-24844:


Results for branch branch-2
[build #6 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/6//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Exception on standalone (master) shutdown
> -
>
> Key: HBASE-24844
> URL: https://issues.apache.org/jira/browse/HBASE-24844
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 3.0.0-alpha-1
>Reporter: Nick Dimiduk
>Assignee: wenfeiyi666
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0, 2.2.7
>
>
> Running HBase ({{master}} branch) in standalone mode, terminating the process 
> results in the following stack traces logged at error. It appears we shut 
> down the zookeeper client out of order with the {{shutdown}} of the thread 
> pools.
> {noformat}
> 2020-08-10 14:21:46,777 INFO  [RS:0;localhost:16020] zookeeper.ZooKeeper: 
> Session: 0x100111361f20001 closed
> 2020-08-10 14:21:46,778 INFO  [RS:0;localhost:16020] 
> regionserver.HRegionServer: Exiting; stopping=localhost,16020,1597094491257; 
> zookeeper connection closed.
> 2020-08-10 14:21:46,778 ERROR [main-EventThread] zookeeper.ClientCnxn: Error 
> while calling watcher 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e61af4b rejected from 
> java.util.concurrent.ThreadPoolExecutor@6a5e365f[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 4]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:559)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> 2020-08-10 14:21:46,778 INFO  [shutdown-hook-0] regionserver.ShutdownHook: 
> Starting fs shutdown hook thread.
> 2020-08-10 14:21:46,779 ERROR [main-EventThread] zookeeper.ClientCnxn: Error 
> while calling watcher 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@7d41da91 rejected from 
> java.util.concurrent.ThreadPoolExecutor@6a5e365f[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 4]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:559)
> at 
> org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:40)

[jira] [Commented] (HBASE-23035) Retain region to the last RegionServer make the failover slower

2020-08-12 Thread Bo Cui (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176744#comment-17176744
 ] 

Bo Cui commented on HBASE-23035:


[~zghao]

During startup, HBase needs to assign regions back to their previous RS so as 
not to hurt scan performance, so we can add a conf to solve this problem

> Retain region to the last RegionServer make the failover slower
> ---
>
> Key: HBASE-23035
> URL: https://issues.apache.org/jira/browse/HBASE-23035
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.2.1, 2.1.6
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.1.7, 2.2.2
>
>
> Now if one RS crashes, the regions will try to use the old location for 
> region deployment. But one RS only has 3 threads to open regions by default. 
> If an RS has hundreds of regions, the failover is very slow. Assigning to 
> the same RS may give good locality if the DataNode is deployed on the same 
> host, but slower failover makes the availability worse. And locality is not 
> a big deal when deploying HBase on the cloud.
> This was introduced by HBASE-18946.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #2253: HBASE-24876 Fix the flaky job url for branch-2.2

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2253:
URL: https://github.com/apache/hbase/pull/2253#issuecomment-673231409


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for branch  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  8s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   ||| _ Other Tests _ |
   | +0 :ok: |  asflicense  |   0m  0s |  ASF License check generated no 
output?  |
   |  |   |   2m  5s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2253/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2253 |
   | Optional Tests | dupname asflicense shellcheck shelldocs |
   | uname | Linux 7af3b38123ed 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2253/out/precommit/personality/provided.sh
 |
   | git revision | branch-2.2 / a7cc7d8239 |
   | Max. process+thread count | 48 (vs. ulimit of 1) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2253/1/console
 |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] nyl3532016 commented on pull request #2250: HBASE-24872 refactor valueOf PoolType

2020-08-12 Thread GitBox


nyl3532016 commented on pull request #2250:
URL: https://github.com/apache/hbase/pull/2250#issuecomment-673228584


   > Can you explain a bit more about
   > `since Reusable PoolType has been removed, no need to check 
allowedPoolTypes`? Went through the patch, still not clear about the 
connection, thanks.
   @huaxiangsun The Reusable PoolType may have a potential bug and is not used 
any more, so I removed it in PR 
[#2208](https://github.com/apache/hbase/pull/2208). The allowedPoolTypes 
parameter in the valueOf method prevented use of the Reusable PoolType, so it 
can be removed as well



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] infraio opened a new pull request #2253: HBASE-24876 Fix the flaky job url for branch-2.2

2020-08-12 Thread GitBox


infraio opened a new pull request #2253:
URL: https://github.com/apache/hbase/pull/2253


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#issuecomment-673221492


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 37s |  master passed  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 29s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m 55s |  hbase-shell in the patch passed.  |
   |  |   |  19m 42s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2241 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux f13678e3dac2 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / b8fd621201 |
   | Default Java | 2020-01-14 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/5/testReport/
 |
   | Max. process+thread count | 1576 (vs. ulimit of 12500) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/5/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-24876) Fix the flaky job url for branch-2.2

2020-08-12 Thread Guanghao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-24876:
---
Description: 
 

I found that the precommit job of branch-2.2 still used the wrong url. See 
[https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2247/1/console]
{code:java}
16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: Personality: patch unit
16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: EXCLUDE_TESTS_URL=
16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: INCLUDE_TESTS_URL=
16:56:14  --2020-08-12 08:56:14--  
https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/excludes/
16:56:14  Resolving builds.apache.org (builds.apache.org)... 195.201.213.130, 
2a01:4f8:c0:2cc9::2
16:56:14  Connecting to builds.apache.org 
(builds.apache.org)|195.201.213.130|:443... connected.
16:56:15  HTTP request sent, awaiting response... 404 
16:56:15  2020-08-12 08:56:15 ERROR 404: (no description).
16:56:15  
16:56:15  Wget error 8 in fetching excludes file from url 
https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/excludes/.
 Ignoring and proceeding.{code}

> Fix the flaky job url for branch-2.2
> 
>
> Key: HBASE-24876
> URL: https://issues.apache.org/jira/browse/HBASE-24876
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Priority: Major
>
>  
> I found that the precommit job of branch-2.2 still used the wrong url. See 
> [https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2247/1/console]
> {code:java}
> 16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: Personality: patch unit
> 16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: EXCLUDE_TESTS_URL=
> 16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: INCLUDE_TESTS_URL=
> 16:56:14  --2020-08-12 08:56:14--  
> https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/excludes/
> 16:56:14  Resolving builds.apache.org (builds.apache.org)... 195.201.213.130, 
> 2a01:4f8:c0:2cc9::2
> 16:56:14  Connecting to builds.apache.org 
> (builds.apache.org)|195.201.213.130|:443... connected.
> 16:56:15  HTTP request sent, awaiting response... 404 
> 16:56:15  2020-08-12 08:56:15 ERROR 404: (no description).
> 16:56:15  
> 16:56:15  Wget error 8 in fetching excludes file from url 
> https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/excludes/.
>  Ignoring and proceeding.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-24876) Fix the flaky job url for branch-2.2

2020-08-12 Thread Guanghao Zhang (Jira)
Guanghao Zhang created HBASE-24876:
--

 Summary: Fix the flaky job url for branch-2.2
 Key: HBASE-24876
 URL: https://issues.apache.org/jira/browse/HBASE-24876
 Project: HBase
  Issue Type: Sub-task
Reporter: Guanghao Zhang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#issuecomment-673220262


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 33s |  master passed  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 31s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   6m 57s |  hbase-shell in the patch passed.  |
   |  |   |  15m 59s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2241 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux 01188d5cfa90 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / b8fd621201 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/5/testReport/
 |
   | Max. process+thread count | 2372 (vs. ulimit of 12500) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/5/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-24875) The describe and param for unassign not correct since the implementation changed at server side

2020-08-12 Thread Zheng Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Wang updated HBASE-24875:
---
Summary: The describe and param for unassign not correct since the 
implementation changed at server side  (was: The describe and param for 
unassign not correct since the implement changed at server side)

> The describe and param for unassign not correct since the implementation 
> changed at server side
> ---
>
> Key: HBASE-24875
> URL: https://issues.apache.org/jira/browse/HBASE-24875
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
>
> The description of unassign in Admin.java, shown below, is no longer accurate: in 
> fact, we just close the region now, and we no longer need the force param.
> {code:java}
> /**
>  * Unassign a region from current hosting regionserver.  Region will then be 
> assigned to a
>  * regionserver chosen at random.  Region could be reassigned back to the 
> same server.  Use {@link
>  * #move(byte[], ServerName)} if you want to control the region movement.
>  *
>  * @param regionName Region to unassign. Will clear any existing RegionPlan 
> if one found.
>  * @param force If true, force unassign (Will remove region from 
> regions-in-transition too if
>  * present. If results in double assignment use hbck -fix to resolve. To be 
> used by experts).
>  * @throws IOException if a remote or network exception occurs
>  */
> void unassign(byte[] regionName, boolean force)
> throws IOException;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-24875) The describe and param for unassign not correct since the implement changed at server side

2020-08-12 Thread Zheng Wang (Jira)
Zheng Wang created HBASE-24875:
--

 Summary: The describe and param for unassign not correct since the 
implement changed at server side
 Key: HBASE-24875
 URL: https://issues.apache.org/jira/browse/HBASE-24875
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Zheng Wang
Assignee: Zheng Wang


The description of unassign in Admin.java, shown below, is no longer accurate: in fact, 
we just close the region now, and we no longer need the force param.
{code:java}
/**
 * Unassign a region from current hosting regionserver.  Region will then be 
assigned to a
 * regionserver chosen at random.  Region could be reassigned back to the same 
server.  Use {@link
 * #move(byte[], ServerName)} if you want to control the region movement.
 *
 * @param regionName Region to unassign. Will clear any existing RegionPlan if 
one found.
 * @param force If true, force unassign (Will remove region from 
regions-in-transition too if
 * present. If results in double assignment use hbck -fix to resolve. To be 
used by experts).
 * @throws IOException if a remote or network exception occurs
 */
void unassign(byte[] regionName, boolean force)
throws IOException;
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#issuecomment-673216860


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  rubocop  |   0m  7s |  There were no new rubocop 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m  9s |  The patch does not generate 
ASF License warnings.  |
   |  |   |   2m 23s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/5/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2241 |
   | Optional Tests | dupname asflicense rubocop |
   | uname | Linux 3c5aca8ba4a2 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / b8fd621201 |
   | Max. process+thread count | 42 (vs. ulimit of 12500) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/5/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
rubocop=0.80.0 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24874) hbase-shell should not use ModifiableTableDescriptor directly

2020-08-12 Thread Elliot Miller (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176727#comment-17176727
 ] 

Elliot Miller commented on HBASE-24874:
---

[~zhangduo], we do have unit tests for both these cases in TestAdminShell.java. 
These shell unit tests are actually the reason that one of my open PRs 
([https://github.com/apache/hbase/pull/2232]) is failing CI testing.

I just took a look at the nightly build for both JDK 8 and 11. hbase-shell is 
passing for both of them. (/)

I think the problem is that *TestAdminShell is not always run in CI*. It was 
definitely run for my recently opened PR. However, *TestAdminShell now appears 
in the list of flaky excludes* for the master branch, so it is not being run on 
nightly builds. This may have other effects, but I'm not familiar enough with 
our CI to comment further at the moment. See 
[https://ci-hadoop.apache.org/job/HBase/job/HBase-Find-Flaky-Tests/job/master/lastSuccessfulBuild/artifact/dashboard.html]

I'll have a chance to look into this further tomorrow.

> hbase-shell should not use ModifiableTableDescriptor directly
> -
>
> Key: HBASE-24874
> URL: https://issues.apache.org/jira/browse/HBASE-24874
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0-alpha-1
>Reporter: Elliot Miller
>Assignee: Elliot Miller
>Priority: Major
>
> HBASE-20819 prepared us for HBase 3.x by removing usages of the deprecated 
> HTableDescriptor and HColumnDescriptor classes from the shell. However, it 
> did use two methods from the ModifiableTableDescriptor, which was only public 
> for compatibility/migration and was marked with 
> {{@InterfaceAudience.Private}}. When {{ModifiableTableDescriptor}} was made 
> private last week by HBASE-24507 it broke two hbase-shell commands 
> (*describe* and *alter* when used to set a coprocessor) that were using 
> methods from {{ModifiableTableDescriptor}} (these methods are not present on 
> the general {{TableDescriptor}} interface).
> This story will remove the two references in hbase-shell to methods on the 
> now-private {{ModifiableTableDescriptor}} class and will find appropriate 
> replacements for the calls.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24841) Change the jenkins job urls in our jenkinsfile

2020-08-12 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176712#comment-17176712
 ] 

Duo Zhang commented on HBASE-24841:
---

I think it is this line?

https://github.com/apache/hbase/blob/b8fd621201591d320819b3fbc79ca2cc3b441085/dev-support/hbase-personality.sh#L321

On branch-2.3+ we pass the flag in, but on branch-2.2 we do not, so we will 
arrive at this line and use the wrong url?
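
The intended default can be illustrated with a small standalone sketch. This is not the actual hbase-personality.sh code (that is shell); the method and class names below are hypothetical, but the URL shape matches the EXCLUDE_TESTS_URL quoted elsewhere in this thread: the ci-hadoop.apache.org layout nests all HBase jobs under an extra "job/HBase" folder segment, which the old builds.apache.org default lacked.

```java
// Illustrative sketch only: shows the corrected flaky-test excludes URL,
// including the extra "job/HBase" folder segment used on ci-hadoop.apache.org.
public class FlakyExcludesUrl {
  static String excludesUrl(String jenkinsUrl, String branch) {
    return jenkinsUrl + "/job/HBase/job/HBase-Find-Flaky-Tests/job/" + branch
        + "/lastSuccessfulBuild/artifact/excludes/";
  }

  public static void main(String[] args) {
    // Prints the URL the branch-2.2 precommit job should be fetching.
    System.out.println(excludesUrl("https://ci-hadoop.apache.org", "branch-2.2"));
  }
}
```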

Could you file a new issue to fix it?

Thanks.

> Change the jenkins job urls in our jenkinsfile
> --
>
> Key: HBASE-24841
> URL: https://issues.apache.org/jira/browse/HBASE-24841
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, scripts
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.3.7, 1.7.0, 2.4.0, 1.4.14, 2.2.6
>
>
> On ci-hadoop.a.o, we have a folder for all the hbase job, so the job url 
> contains an extra 'job/HBase'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache9 commented on pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …

2020-08-12 Thread GitBox


Apache9 commented on pull request #2237:
URL: https://github.com/apache/hbase/pull/2237#issuecomment-673201873


   > Thanks @Apache9 , I want to agree with you on having an HBCK option, but one 
concern I keep struggling with is making this automated instead of an HBCK 
option. If an HBase cluster has hundreds of tables with thousands of regions, 
how would the operator recover the cluster? Does he/she (offline/online) 
repair the meta table by scanning the storage of each region? (Instead, can we 
just load the meta without rebuilding it?)
   
   I think for the scenario here, we just need to write the cluster id and 
other things to zookeeper? Just make sure that the current code in HBase will 
not consider us as a fresh new cluster. We do not need to rebuild meta?
   > 
   > Tbh, I felt bad to bring this meta table issue because normal HBase 
cluster does not assume Zookeeper (and WAL) could be gone after the cluster 
starts and restarts.
   > 
   > [updated] for this PR/JIRA, mainly, I'm questioning what a `partial meta` 
should be (e.g. it's now relying on the state of `InitMetaProcedure` instead of 
the data of meta table), any thoughts ?
   
   After introducing proc-v2, we rely on it to record the state of a 
multi-step operation. Here, I believe the problem is that we schedule an 
InitMetaProcedure even when we already have a meta table in place. For the 
InitMetaProcedure, the assumption is that if we find the meta table 
directory is there, then the procedure itself crashed before finishing the 
creation of the meta table, i.e., the meta table is 'partial'. So it is safe 
to just remove it and create it again. I think this is a very common trick 
in distributed systems for handling failures?
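
   The trick described above, discarding partial state and redoing the step, can be sketched in isolation. The class and method names below are hypothetical stand-ins, not the actual InitMetaProcedure code, and the sketch assumes what the comment assumes: the directory's mere presence implies a crashed, unfinished run.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class IdempotentInitSketch {
  // Hypothetical stand-in for the "init meta" step: if the directory already
  // exists, a previous attempt must have crashed before completing, so the
  // partial state is removed and the step is redone from scratch.
  static void initStep(Path dir) throws IOException {
    if (Files.exists(dir)) {
      deleteRecursively(dir); // discard the partial result of a crashed run
    }
    Files.createDirectories(dir);
    Files.writeString(dir.resolve("bootstrap"), "done"); // the step's real work
  }

  static void deleteRecursively(Path p) throws IOException {
    // Walk depth-first in reverse order so children are deleted before parents.
    try (Stream<Path> walk = Files.walk(p)) {
      walk.sorted(Comparator.reverseOrder()).forEach(q -> {
        try {
          Files.delete(q);
        } catch (IOException e) {
          throw new UncheckedIOException(e);
        }
      });
    }
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("init-sketch").resolve("meta");
    initStep(dir);
    initStep(dir); // re-running after a simulated crash is safe (idempotent)
    System.out.println(Files.readString(dir.resolve("bootstrap")));
  }
}
```

   The safety of this pattern rests entirely on the invariant Duo describes: nothing else may create the directory, so its existence can only mean an unfinished init.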
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24874) hbase-shell should not use ModifiableTableDescriptor directly

2020-08-12 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176707#comment-17176707
 ] 

Duo Zhang commented on HBASE-24874:
---

We do not have UTs for these two commands? IIRC I did not see any failures in 
the hbase-shell UTs.

> hbase-shell should not use ModifiableTableDescriptor directly
> -
>
> Key: HBASE-24874
> URL: https://issues.apache.org/jira/browse/HBASE-24874
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0-alpha-1
>Reporter: Elliot Miller
>Assignee: Elliot Miller
>Priority: Major
>
> HBASE-20819 prepared us for HBase 3.x by removing usages of the deprecated 
> HTableDescriptor and HColumnDescriptor classes from the shell. However, it 
> did use two methods from the ModifiableTableDescriptor, which was only public 
> for compatibility/migration and was marked with 
> {{@InterfaceAudience.Private}}. When {{ModifiableTableDescriptor}} was made 
> private last week by HBASE-24507 it broke two hbase-shell commands 
> (*describe* and *alter* when used to set a coprocessor) that were using 
> methods from {{ModifiableTableDescriptor}} (these methods are not present on 
> the general {{TableDescriptor}} interface).
> This story will remove the two references in hbase-shell to methods on the 
> now-private {{ModifiableTableDescriptor}} class and will find appropriate 
> replacements for the calls.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24841) Change the jenkins job urls in our jenkinsfile

2020-08-12 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176701#comment-17176701
 ] 

Duo Zhang commented on HBASE-24841:
---

Maybe we define the url at another place for branch-2.2?

> Change the jenkins job urls in our jenkinsfile
> --
>
> Key: HBASE-24841
> URL: https://issues.apache.org/jira/browse/HBASE-24841
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, scripts
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.3.7, 1.7.0, 2.4.0, 1.4.14, 2.2.6
>
>
> On ci-hadoop.a.o, we have a folder for all the hbase job, so the job url 
> contains an extra 'job/HBase'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24841) Change the jenkins job urls in our jenkinsfile

2020-08-12 Thread Guanghao Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176690#comment-17176690
 ] 

Guanghao Zhang commented on HBASE-24841:


EXCLUDE_TESTS_URL = 
"${JENKINS_URL}/job/HBase/job/HBase-Find-Flaky-Tests/job/${CHANGE_TARGET}/lastSuccessfulBuild/artifact/exclude

 

[~zhangduo] sir, how do we pass the JENKINS_URL to the precommit job?

 

I found that the precommit job of branch-2.2 still used the wrong url. See 
[https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2247/1/console]
{code:java}
16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: Personality: patch unit
16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: EXCLUDE_TESTS_URL=
16:56:14  [Wed Aug 12 08:56:14 UTC 2020 INFO]: INCLUDE_TESTS_URL=
16:56:14  --2020-08-12 08:56:14--  
https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/excludes/
16:56:14  Resolving builds.apache.org (builds.apache.org)... 195.201.213.130, 
2a01:4f8:c0:2cc9::2
16:56:14  Connecting to builds.apache.org 
(builds.apache.org)|195.201.213.130|:443... connected.
16:56:15  HTTP request sent, awaiting response... 404 
16:56:15  2020-08-12 08:56:15 ERROR 404: (no description).
16:56:15  
16:56:15  Wget error 8 in fetching excludes file from url 
https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/excludes/.
 Ignoring and proceeding.
{code}

> Change the jenkins job urls in our jenkinsfile
> --
>
> Key: HBASE-24841
> URL: https://issues.apache.org/jira/browse/HBASE-24841
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, scripts
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.3.7, 1.7.0, 2.4.0, 1.4.14, 2.2.6
>
>
> On ci-hadoop.a.o, we have a folder for all the hbase job, so the job url 
> contains an extra 'job/HBase'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24650) Change the return types of the new checkAndMutate methods introduced in HBASE-8458

2020-08-12 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-24650:
-
Release Note: 
HBASE-24650 introduced the CheckAndMutateResult class and changed the return type 
of the checkAndMutate methods to this class in order to support CheckAndMutate with 
Increment/Append. CheckAndMutateResult has two fields: *success*, which indicates 
whether the operation succeeded, and *result*, which holds the result of the 
operation and is used for CheckAndMutate with Increment/Append.

The new APIs for the Table interface:
```
/**
 * checkAndMutate that atomically checks if a row matches the specified 
condition. If it does,
 * it performs the specified action.
 *
 * @param checkAndMutate The CheckAndMutate object.
 * @return A CheckAndMutateResult object that represents the result for the 
CheckAndMutate.
 * @throws IOException if a remote or network exception occurs.
 */
default CheckAndMutateResult checkAndMutate(CheckAndMutate checkAndMutate) 
throws IOException {
  return checkAndMutate(Collections.singletonList(checkAndMutate)).get(0);
}

/**
 * Batch version of checkAndMutate. The specified CheckAndMutates are batched 
only in the sense
 * that they are sent to a RS in one RPC, but each CheckAndMutate operation is 
still executed
 * atomically (and thus, each may fail independently of others).
 *
 * @param checkAndMutates The list of CheckAndMutate.
 * @return A list of CheckAndMutateResult objects that represents the result 
for each
 *   CheckAndMutate.
 * @throws IOException if a remote or network exception occurs.
 */
default List checkAndMutate(List 
checkAndMutates)
  throws IOException {
  throw new NotImplementedException("Add an implementation!");
}
```

The new APIs for the AsyncTable interface:
```
/**
 * checkAndMutate that atomically checks if a row matches the specified 
condition. If it does,
 * it performs the specified action.
 *
 * @param checkAndMutate The CheckAndMutate object.
 * @return A {@link CompletableFuture}s that represent the result for the 
CheckAndMutate.
 */
CompletableFuture checkAndMutate(CheckAndMutate 
checkAndMutate);

/**
 * Batch version of checkAndMutate. The specified CheckAndMutates are batched 
only in the sense
 * that they are sent to a RS in one RPC, but each CheckAndMutate operation is 
still executed
 * atomically (and thus, each may fail independently of others).
 *
 * @param checkAndMutates The list of CheckAndMutate.
 * @return A list of {@link CompletableFuture}s that represent the result for 
each
 *   CheckAndMutate.
 */
List> checkAndMutate(
  List checkAndMutates);

/**
 * A simple version of batch checkAndMutate. It will fail if there are any 
failures.
 *
 * @param checkAndMutates The list of rows to apply.
 * @return A {@link CompletableFuture} that wraps the result list.
 */
default CompletableFuture> checkAndMutateAll(
  List checkAndMutates) {
  return allOf(checkAndMutate(checkAndMutates));
}
```


  was:
HBASE-24650 introduced CheckAndMutateResult class and changed the return type 
of checkAndMutate methods to this class in order to support CheckAndMutate with 
Increment/Append.

The new APIs for the Table interface:
```
/**
 * checkAndMutate that atomically checks if a row matches the specified 
condition. If it does,
 * it performs the specified action.
 *
 * @param checkAndMutate The CheckAndMutate object.
 * @return A CheckAndMutateResult object that represents the result for the 
CheckAndMutate.
 * @throws IOException if a remote or network exception occurs.
 */
default CheckAndMutateResult checkAndMutate(CheckAndMutate checkAndMutate) 
throws IOException {
  return checkAndMutate(Collections.singletonList(checkAndMutate)).get(0);
}

/**
 * Batch version of checkAndMutate. The specified CheckAndMutates are batched 
only in the sense
 * that they are sent to a RS in one RPC, but each CheckAndMutate operation is 
still executed
 * atomically (and thus, each may fail independently of others).
 *
 * @param checkAndMutates The list of CheckAndMutate.
 * @return A list of CheckAndMutateResult objects that represents the result 
for each
 *   CheckAndMutate.
 * @throws IOException if a remote or network exception occurs.
 */
default List checkAndMutate(List 
checkAndMutates)
  throws IOException {
  throw new NotImplementedException("Add an implementation!");
}
```

The new APIs for the AsyncTable interface:
```
/**
 * checkAndMutate that atomically checks if a row matches the specified 
condition. If it does,
 * it performs the specified action.
 *
 * @param checkAndMutate The CheckAndMutate object.
 * @return A {@link CompletableFuture}s that represent the result for the 
CheckAndMutate.
 */
CompletableFuture checkAndMutate(CheckAndMutate 
checkAndMutate);

/**
 * Batch version of checkAndMutate. The specified CheckAndMutates are batched 
only in the sense
 * that they 

[jira] [Updated] (HBASE-24650) Change the return types of the new checkAndMutate methods introduced in HBASE-8458

2020-08-12 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-24650:
-
Release Note: 
HBASE-24650 introduced the CheckAndMutateResult class and changed the return type 
of the checkAndMutate methods to this class in order to support CheckAndMutate with 
Increment/Append.

The new APIs for the Table interface:
```
/**
 * checkAndMutate that atomically checks if a row matches the specified 
condition. If it does,
 * it performs the specified action.
 *
 * @param checkAndMutate The CheckAndMutate object.
 * @return A CheckAndMutateResult object that represents the result for the 
CheckAndMutate.
 * @throws IOException if a remote or network exception occurs.
 */
default CheckAndMutateResult checkAndMutate(CheckAndMutate checkAndMutate) 
throws IOException {
  return checkAndMutate(Collections.singletonList(checkAndMutate)).get(0);
}

/**
 * Batch version of checkAndMutate. The specified CheckAndMutates are batched 
only in the sense
 * that they are sent to a RS in one RPC, but each CheckAndMutate operation is 
still executed
 * atomically (and thus, each may fail independently of others).
 *
 * @param checkAndMutates The list of CheckAndMutate.
 * @return A list of CheckAndMutateResult objects that represents the result 
for each
 *   CheckAndMutate.
 * @throws IOException if a remote or network exception occurs.
 */
default List checkAndMutate(List 
checkAndMutates)
  throws IOException {
  throw new NotImplementedException("Add an implementation!");
}
```

The new APIs for the AsyncTable interface:
```
/**
 * checkAndMutate that atomically checks if a row matches the specified 
condition. If it does,
 * it performs the specified action.
 *
 * @param checkAndMutate The CheckAndMutate object.
 * @return A {@link CompletableFuture}s that represent the result for the 
CheckAndMutate.
 */
CompletableFuture checkAndMutate(CheckAndMutate 
checkAndMutate);

/**
 * Batch version of checkAndMutate. The specified CheckAndMutates are batched 
only in the sense
 * that they are sent to a RS in one RPC, but each CheckAndMutate operation is 
still executed
 * atomically (and thus, each may fail independently of others).
 *
 * @param checkAndMutates The list of CheckAndMutate.
 * @return A list of {@link CompletableFuture}s that represent the result for 
each
 *   CheckAndMutate.
 */
List> checkAndMutate(
  List checkAndMutates);

/**
 * A simple version of batch checkAndMutate. It will fail if there are any 
failures.
 *
 * @param checkAndMutates The list of rows to apply.
 * @return A {@link CompletableFuture} that wraps the result list.
 */
default CompletableFuture> checkAndMutateAll(
  List checkAndMutates) {
  return allOf(checkAndMutate(checkAndMutates));
}
```


  was:
HBASE-24650 introduced CheckAndMutateResult class and changed the return type 
of checkAndMutate methods to this class in order to support CheckAndMutate with 
Increment/Append.

The new APIs for the Table interface:
{code}
/**
 * checkAndMutate that atomically checks if a row matches the specified 
condition. If it does,
 * it performs the specified action.
 *
 * @param checkAndMutate The CheckAndMutate object.
 * @return A CheckAndMutateResult object that represents the result for the 
CheckAndMutate.
 * @throws IOException if a remote or network exception occurs.
 */
default CheckAndMutateResult checkAndMutate(CheckAndMutate checkAndMutate) 
throws IOException {
  return checkAndMutate(Collections.singletonList(checkAndMutate)).get(0);
}

/**
 * Batch version of checkAndMutate. The specified CheckAndMutates are batched 
only in the sense
 * that they are sent to a RS in one RPC, but each CheckAndMutate operation is 
still executed
 * atomically (and thus, each may fail independently of others).
 *
 * @param checkAndMutates The list of CheckAndMutate.
 * @return A list of CheckAndMutateResult objects that represents the result 
for each
 *   CheckAndMutate.
 * @throws IOException if a remote or network exception occurs.
 */
default List checkAndMutate(List 
checkAndMutates)
  throws IOException {
  throw new NotImplementedException("Add an implementation!");
}
{code}

The new APIs for the AsyncTable interface:
{code}
/**
 * checkAndMutate that atomically checks if a row matches the specified 
condition. If it does,
 * it performs the specified action.
 *
 * @param checkAndMutate The CheckAndMutate object.
 * @return A {@link CompletableFuture}s that represent the result for the 
CheckAndMutate.
 */
CompletableFuture checkAndMutate(CheckAndMutate 
checkAndMutate);

/**
 * Batch version of checkAndMutate. The specified CheckAndMutates are batched 
only in the sense
 * that they are sent to a RS in one RPC, but each CheckAndMutate operation is 
still executed
 * atomically (and thus, each may fail independently of others).
 *
 * @param checkAndMutates The list of CheckAndMutate.
 * @return A list of {@link 

[jira] [Commented] (HBASE-8458) Support for batch version of checkAndMutate()

2020-08-12 Thread Toshihiro Suzuki (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176670#comment-17176670
 ] 

Toshihiro Suzuki commented on HBASE-8458:
-

[~ndimiduk] Thank you for pointing that out. I just changed the Release Note.

> Support for batch version of checkAndMutate()
> -
>
> Key: HBASE-8458
> URL: https://issues.apache.org/jira/browse/HBASE-8458
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver
>Reporter: Hari Mankude
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> The use case is that the user has multiple threads loading hundreds of keys 
> into a hbase table. Occasionally there are collisions in the keys being 
> uploaded by different threads. So for correctness, it is required to do 
> checkAndMutate() instead of a put(). However, doing a checkAndMutate() rpc 
> for every key update is non optimal. It would be good to have a batch version 
> of checkAndMutate() similar to batch put(). The client can partition the keys 
> on region boundaries.
> The jira is NOT looking for any type of cross-row locking or multi-row 
> atomicity with checkAndMutate().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-8458) Support for batch version of checkAndMutate()

2020-08-12 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-8458:

Release Note: 
HBASE-8458 introduced the CheckAndMutate class, which is used to perform 
CheckAndMutate operations. Use the builder class to instantiate a 
CheckAndMutate object. The builder class provides a fluent-style API; the code 
looks like:
```
// A CheckAndMutate operation that performs the specified action if the column (specified by
// the family and the qualifier) of the row equals the specified value
CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
  .ifEquals(family, qualifier, value)
  .build(put);

// A CheckAndMutate operation that performs the specified action if the column (specified by
// the family and the qualifier) of the row doesn't exist
CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
  .ifNotExists(family, qualifier)
  .build(put);

// A CheckAndMutate operation that performs the specified action if the row matches the filter
CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
  .ifMatches(filter)
  .build(delete);
```

This also added new checkAndMutate APIs to the Table and AsyncTable interfaces, 
and deprecated the old checkAndMutate APIs. Example code for the new APIs is 
as follows:
```
Table table = ...;

CheckAndMutate checkAndMutate = ...;

// Perform the checkAndMutate operation
boolean success = table.checkAndMutate(checkAndMutate);

CheckAndMutate checkAndMutate1 = ...;
CheckAndMutate checkAndMutate2 = ...;

// Batch version
List<Boolean> successList = table.checkAndMutate(Arrays.asList(checkAndMutate1, 
checkAndMutate2));
```

This also has Protocol Buffers level changes. Old clients without this patch 
will work against new servers with this patch. However, new clients will break 
against old servers without this patch for checkAndMutate with RM and 
mutateRow. So, for rolling upgrade, we will need to upgrade servers first, and 
then roll out the new clients.

  was:
HBASE-8458 introduced CheckAndMutate class that's used to perform 
CheckAndMutate operations.
The following is the JavaDoc for this class:
{code}
 * Used to perform CheckAndMutate operations. Currently {@link Put}, {@link 
Delete}
 * and {@link RowMutations} are supported.
 * 
 * Use the builder class to instantiate a CheckAndMutate object.
 * This builder class is fluent style APIs, the code are like:
 * 
 * 
 * // A CheckAndMutate operation where do the specified action if the column 
(specified by the
 * // family and the qualifier) of the row equals to the specified value
 * CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
 *   .ifEquals(family, qualifier, value)
 *   .build(put);
 *
 * // A CheckAndMutate operation where do the specified action if the column 
(specified by the
 * // family and the qualifier) of the row doesn't exist
 * CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
 *   .ifNotExists(family, qualifier)
 *   .build(put);
 *
 * // A CheckAndMutate operation where do the specified action if the row 
matches the filter
 * CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
 *   .ifMatches(filter)
 *   .build(delete);
 * 
 * 
{code}

And it added new checkAndMutate APIs to the Table and AsyncTable interfaces, 
and deprecated the old checkAndMutate APIs.
The new APIs for the Table interface:
{code}
/**
  * checkAndMutate that atomically checks if a row matches the specified condition. If it does,
  * it performs the specified action.
  *
  * @param checkAndMutate The CheckAndMutate object.
  * @return boolean that represents the result for the CheckAndMutate.
  * @throws IOException if a remote or network exception occurs.
  */
 default boolean checkAndMutate(CheckAndMutate checkAndMutate) throws IOException {
   return checkAndMutate(Collections.singletonList(checkAndMutate))[0];
 }

 /**
  * Batch version of checkAndMutate.
  *
  * @param checkAndMutates The list of CheckAndMutate.
  * @return An array of boolean that represents the result for each CheckAndMutate.
  * @throws IOException if a remote or network exception occurs.
  */
 default boolean[] checkAndMutate(List<CheckAndMutate> checkAndMutates) throws 
IOException {
   throw new NotImplementedException("Add an implementation!");
 }
{code}

The new APIs for the AsyncTable interface:
{code}
/**
 * checkAndMutate that atomically checks if a row matches the specified condition. If it does,
 * it performs the specified action.
 *
 * @param checkAndMutate The CheckAndMutate object.
 * @return A {@link CompletableFuture}s that represent the result for the CheckAndMutate.
 */
CompletableFuture<Boolean> checkAndMutate(CheckAndMutate checkAndMutate);

/**
 * Batch version of checkAndMutate.
 *
 * @param checkAndMutates The list of CheckAndMutate.
 * @return A list of {@link CompletableFuture}s that represent the result for each
 *   CheckAndMutate.
 */
List<CompletableFuture<Boolean>> checkAndMutate(List<CheckAndMutate> checkAndMutates);

/**
 * A 

[jira] [Commented] (HBASE-11288) Splittable Meta

2020-08-12 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176644#comment-17176644
 ] 

Duo Zhang commented on HBASE-11288:
---

But you still have not answered my question directly, right? Why must root be a 
table?

In my implementation, root is still stored as a ‘region’, so we could still 
reuse most of the code, right? And I’ve done a refactoring on master to 
generalize a CatalogFamilyFormat for holding these methods. And we already 
have a framework to distribute the load of root, so there is no such 
‘specialized solution’; we just use an existing solution for distributing 
‘cluster bootstrap information’.

Passing ITBLL is good. Even without splitting support, this means our proc-v2 
framework is stable enough, after several years of polishing, to support 
complicated logic, and introducing a root table is generally fine. Though I still 
do not like that we introduce a lot of new states in SCP. It is easy to add new 
states but hard to delete them.

But I’m very disappointed that, starting from the first comment, you never 
changed anything. Though I explained a lot, you just did not see it and kept 
saying root-as-table is the only good way and all other ways are compromises. You 
just kept pushing your solution; I do not think this is a good way to 
collaborate.





> Splittable Meta
> ---
>
> Key: HBASE-11288
> URL: https://issues.apache.org/jira/browse/HBASE-11288
> Project: HBase
>  Issue Type: Umbrella
>  Components: meta
>Reporter: Francis Christopher Liu
>Assignee: Francis Christopher Liu
>Priority: Major
> Attachments: jstack20200807_bad_rpc_priority.txt, root_priority.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] huaxiangsun commented on pull request #2249: HBASE-24871 Replication may loss data when refresh recovered replicat…

2020-08-12 Thread GitBox


huaxiangsun commented on pull request #2249:
URL: https://github.com/apache/hbase/pull/2249#issuecomment-673145600


   As @infraio said, it would be great to add a UT to show the issue without the 
change. Otherwise, looks good to me.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] taklwu commented on a change in pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …

2020-08-12 Thread GitBox


taklwu commented on a change in pull request #2237:
URL: https://github.com/apache/hbase/pull/2237#discussion_r469554017



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##
@@ -915,6 +917,11 @@ private void 
finishActiveMasterInitialization(MonitoredTask status)
   this.tableDescriptors.getAll();
 }
 
+// check cluster Id stored in ZNode before, and use it to indicate if a 
cluster has been
+// restarted with an existing Zookeeper quorum.
+isClusterRestartWithExistingZNodes =

Review comment:
   > So this says the need to keep this boolean info somewhere once we find 
that and even before creating the zk node for ClusterId. Am I making the 
concern clear this time?
   
   Ack, I got the concern. 
   
   But before I move to implement the proc-level information, let's wait a bit 
on @Apache9 for the question on META's `tableInfo` and `partial meta`. The 
ideal case is that we may be able to use 
`FSTableDescriptors.getTableDescriptorFromFs()`: if the descriptor is found 
and can be read, that indicates meta is not partial. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] taklwu commented on a change in pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …

2020-08-12 Thread GitBox


taklwu commented on a change in pull request #2237:
URL: https://github.com/apache/hbase/pull/2237#discussion_r469554017



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##
@@ -915,6 +917,11 @@ private void 
finishActiveMasterInitialization(MonitoredTask status)
   this.tableDescriptors.getAll();
 }
 
+// check cluster Id stored in ZNode before, and use it to indicate if a 
cluster has been
+// restarted with an existing Zookeeper quorum.
+isClusterRestartWithExistingZNodes =

Review comment:
   > So this says the need to keep this boolean info somewhere once we find 
that and even before creating the zk node for ClusterId. Am I making the 
concern clear this time?
   
   Ack, I got the concern. 
   
   But before I move to implement the proc-level information, let's wait a bit 
on @Apache9 for the question on META's `tableInfo` and `partial meta`. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-24568) do-release need not wait for tag

2020-08-12 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-24568:
-
Fix Version/s: 3.0.0-alpha-1

> do-release need not wait for tag
> 
>
> Key: HBASE-24568
> URL: https://issues.apache.org/jira/browse/HBASE-24568
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> Making release failed waiting for tag to propagate to GitHub. On inspection, 
> it seems the GitHub url is missing host information.
> {noformat}
> Waiting up to 30 seconds for tag to propagate to github mirror...
> + sleep 30
> + max_propagation_time=0
> + check_for_tag 2.3.0RC0
> + curl -s --head --fail /releases/tag/2.3.0RC0
> + ((  max_propagation_time <= 0  ))
> + echo 'ERROR: Taking more than 5 minutes to propagate Release Tag 2.3.0RC0 
> to github mirror.'
> ERROR: Taking more than 5 minutes to propagate Release Tag 2.3.0RC0 to github 
> mirror.
> {noformat}
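The quoted log shows the probe hitting a bare path (`/releases/tag/2.3.0RC0`) because the GitHub host was dropped from the URL. A minimal sketch of the corrected check is below; the `ASF_GITHUB_MIRROR` variable name is an assumption for illustration only, not the actual variable used by do-release:

```shell
# The failing probe used a bare path because the mirror-host variable expanded
# to an empty string. Restoring the host makes curl probe a real URL.
ASF_GITHUB_MIRROR="https://github.com/apache/hbase"

check_for_tag() {
  # --head --fail: exit 0 only if the tag page already exists on the mirror
  curl -s --head --fail "${ASF_GITHUB_MIRROR}/releases/tag/$1" > /dev/null
}

# Show the URL that would now be probed (previously just "/releases/tag/2.3.0RC0"):
echo "${ASF_GITHUB_MIRROR}/releases/tag/2.3.0RC0"
# prints https://github.com/apache/hbase/releases/tag/2.3.0RC0
```

With the host present, `curl --head --fail` returns a nonzero exit status until the tag propagates, so the surrounding retry loop behaves as intended instead of failing immediately.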



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-24568) do-release need not wait for tag

2020-08-12 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-24568.
--
Resolution: Fixed

> do-release need not wait for tag
> 
>
> Key: HBASE-24568
> URL: https://issues.apache.org/jira/browse/HBASE-24568
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> Making release failed waiting for tag to propagate to GitHub. On inspection, 
> it seems the GitHub url is missing host information.
> {noformat}
> Waiting up to 30 seconds for tag to propagate to github mirror...
> + sleep 30
> + max_propagation_time=0
> + check_for_tag 2.3.0RC0
> + curl -s --head --fail /releases/tag/2.3.0RC0
> + ((  max_propagation_time <= 0  ))
> + echo 'ERROR: Taking more than 5 minutes to propagate Release Tag 2.3.0RC0 
> to github mirror.'
> ERROR: Taking more than 5 minutes to propagate Release Tag 2.3.0RC0 to github 
> mirror.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-11554) Remove Reusable poolmap Rpc client type.

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176589#comment-17176589
 ] 

Hudson commented on HBASE-11554:


Results for branch master
[build #6 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/6/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/6/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/6/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/6/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/6//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Remove Reusable poolmap Rpc client type.
> 
>
> Key: HBASE-11554
> URL: https://issues.apache.org/jira/browse/HBASE-11554
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0, 0.99.0, 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Major
> Fix For: 0.99.0, 0.98.5
>
> Attachments: hbase-11554.patch
>
>
> From HBASE-11313 we found that the "Reusable" RpcClient PoolType was 
> impossible to set via configuration and that it was essentially incorrect 
> because it didn't bound instances.  This patch removes the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24827) BackPort HBASE-11554 Remove Reusable poolmap Rpc client type.

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176590#comment-17176590
 ] 

Hudson commented on HBASE-24827:


Results for branch master
[build #6 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/6/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/6/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/6/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/6/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/6//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> BackPort HBASE-11554 Remove Reusable poolmap Rpc client type.
> -
>
> Key: HBASE-24827
> URL: https://issues.apache.org/jira/browse/HBASE-24827
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: niuyulin
>Assignee: niuyulin
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ndimiduk merged pull request #2015: HBASE-24568 do-release need not wait for tag

2020-08-12 Thread GitBox


ndimiduk merged pull request #2015:
URL: https://github.com/apache/hbase/pull/2015


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ndimiduk commented on pull request #2015: HBASE-24568 do-release need not wait for tag

2020-08-12 Thread GitBox


ndimiduk commented on pull request #2015:
URL: https://github.com/apache/hbase/pull/2015#issuecomment-673093342


   This is what I used to generate 2.3.1RC0.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-24874) hbase-shell should not use ModifiableTableDescriptor directly

2020-08-12 Thread Elliot Miller (Jira)
Elliot Miller created HBASE-24874:
-

 Summary: hbase-shell should not use ModifiableTableDescriptor 
directly
 Key: HBASE-24874
 URL: https://issues.apache.org/jira/browse/HBASE-24874
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 3.0.0-alpha-1
Reporter: Elliot Miller
Assignee: Elliot Miller


HBASE-20819 prepared us for HBase 3.x by removing usages of the deprecated 
HTableDescriptor and HColumnDescriptor classes from the shell. However, it did 
use two methods from the ModifiableTableDescriptor, which was only public for 
compatibility/migration and was marked with {{@InterfaceAudience.Private}}. 
When {{ModifiableTableDescriptor}} was made private last week by HBASE-24507, 
it broke two hbase-shell commands (*describe* and *alter* when used to set a 
coprocessor) that were using methods from {{ModifiableTableDescriptor}} (these 
methods are not present on the general {{TableDescriptor}} interface).

This story will remove the two references in hbase-shell to methods on the 
now-private {{ModifiableTableDescriptor}} class and will find appropriate 
replacements for the calls.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24873) Not able to access Flink 1.7.2 with HBase 2.0.0 included in HDP cluster 3.0.1

2020-08-12 Thread Pasha Shaik (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pasha Shaik updated HBASE-24873:

Hadoop Flags: Incompatible change

> Not able to access Flink 1.7.2 with HBase 2.0.0 included in HDP cluster 3.0.1
> -
>
> Key: HBASE-24873
> URL: https://issues.apache.org/jira/browse/HBASE-24873
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
> Environment: * I am using Ambari Server 2.7.1 with HDP cluster 3.0.1 
> with YARN 3.1.1 and HBase 2.0.0
>  * Using hbase-client-2.0.0.jar along with Flink 1.7.2
>  * 
> [flink-1.7.2-bin-hadoop27-scala_2.11.tgz|https://archive.apache.org/dist/flink/flink-1.7.2/flink-1.7.2-bin-hadoop27-scala_2.11.tgz]
>  using this one for trying. 
>Reporter: Pasha Shaik
>Priority: Blocker
> Attachments: 0874A89C-597E-451A-8986-E619A0E8237B.jpeg, 
> 0ECDB56A-3A76-424F-8926-A9FEB1BD96BB.jpeg, 
> 4237EAEA-D3CD-4791-8322-49E2F9FA8666.png, 
> 667CC1DB-CF0E-44E9-B9E8-1B44151FC00E.jpeg, 
> 87939306-6571-4886-A23B-4780897B88D4.jpeg
>
>
> * I am not able to access Flink 1.7.2 with HDP 3.0.1
>  * The YARN version is 3.1.1 and HBASE is 2.0.0
>  * Flink is successfully getting mounted on Yarn and showing it as RUNNING. 
>  * But when I actually try to test my code, it shows the error below.
>  * The .tgz which I used is  
> [flink-1.7.2-bin-hadoop27-scala_2.11.tgz|https://archive.apache.org/dist/flink/flink-1.7.2/flink-1.7.2-bin-hadoop27-scala_2.11.tgz]
>  * The reason for the failure: with HDP 3.0.1, the associated HBase-Client 
> ("org.apache.hbase:hbase-client:2.0.0") is still not in sync with 
> Flink-Hbase_2.11-1.7.2, as the HTable constructor is completely removed 
> in this version; the respective Flink classes still use it and throw the 
> I/O exception below.
>  * Please find the logs and screenshots for more info.
>  
>  
>                                           HERE ARE THE LOGS BELOW
> *org.apache.flink.runtime.client.JobExecutionException: Failed to submit job 
> cbb64a9b4e2e3ad0167eb4ceeb53ac87 (Flink Java Job at Tue Aug 11 10:10:47 CEST 
> 2020) at* 
> org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1325)
>  ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:447)
>  ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
> ~[scala-library-2.11.11.jar:?] at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:38)
>  ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
> ~[scala-library-2.11.11.jar:?] at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33) 
> ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28) 
> ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) 
> ~[scala-library-2.11.11.jar:?] at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>  ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> akka.actor.Actor$class.aroundReceive(Actor.scala:502) 
> ~[akka-actor_2.11-2.4.20.jar:?] at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:122)
>  ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:526) 
> ~[akka-actor_2.11-2.4.20.jar:?] at 
> akka.actor.ActorCell.invoke(ActorCell.scala:495) 
> ~[akka-actor_2.11-2.4.20.jar:?] at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257) 
> ~[akka-actor_2.11-2.4.20.jar:?] at 
> akka.dispatch.Mailbox.run(Mailbox.scala:224) ~[akka-actor_2.11-2.4.20.jar:?] 
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234) 
> ~[akka-actor_2.11-2.4.20.jar:?] at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
> ~[scala-library-2.11.11.jar:?] at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>  ~[scala-library-2.11.11.jar:?] at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
> ~[scala-library-2.11.11.jar:?] at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>  ~
>  
> *[scala-library-2.11.11.jar:?]Caused by: 
> org.apache.flink.runtime.JobException: Creating the input splits caused an 
> error: connection is closed at 
> 

[jira] [Created] (HBASE-24873) Not able to access Flink 1.7.2 with HBase 2.0.0 included in HDP cluster 3.0.1

2020-08-12 Thread Pasha Shaik (Jira)
Pasha Shaik created HBASE-24873:
---

 Summary: Not able to access Flink 1.7.2 with HBase 2.0.0 included 
in HDP cluster 3.0.1
 Key: HBASE-24873
 URL: https://issues.apache.org/jira/browse/HBASE-24873
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0
 Environment: * I am using Ambari Server 2.7.1 with HDP cluster 3.0.1 
with YARN 3.1.1 and HBase 2.0.0
 * Using hbase-client-2.0.0.jar along with Flink 1.7.2
 * 
[flink-1.7.2-bin-hadoop27-scala_2.11.tgz|https://archive.apache.org/dist/flink/flink-1.7.2/flink-1.7.2-bin-hadoop27-scala_2.11.tgz]
 using this one for trying. 
Reporter: Pasha Shaik
 Attachments: 0874A89C-597E-451A-8986-E619A0E8237B.jpeg, 
0ECDB56A-3A76-424F-8926-A9FEB1BD96BB.jpeg, 
4237EAEA-D3CD-4791-8322-49E2F9FA8666.png, 
667CC1DB-CF0E-44E9-B9E8-1B44151FC00E.jpeg, 
87939306-6571-4886-A23B-4780897B88D4.jpeg

* I am not able to access Flink 1.7.2 with HDP 3.0.1
 * The YARN version is 3.1.1 and HBASE is 2.0.0
 * Flink is successfully getting mounted on Yarn and showing it as RUNNING. 
 * But when I actually try to test my code, it shows the error below.
 * The .tgz which I used is  
[flink-1.7.2-bin-hadoop27-scala_2.11.tgz|https://archive.apache.org/dist/flink/flink-1.7.2/flink-1.7.2-bin-hadoop27-scala_2.11.tgz]
 * The reason for the failure: with HDP 3.0.1, the associated HBase-Client 
("org.apache.hbase:hbase-client:2.0.0") is still not in sync with 
Flink-Hbase_2.11-1.7.2, as the HTable constructor is completely removed in 
this version; the respective Flink classes still use it and throw the I/O 
exception below.
 * Please find the logs and screenshots for more info.

 

 


                                          HERE ARE THE LOGS BELOW

*org.apache.flink.runtime.client.JobExecutionException: Failed to submit job 
cbb64a9b4e2e3ad0167eb4ceeb53ac87 (Flink Java Job at Tue Aug 11 10:10:47 CEST 
2020) at* 
org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1325)
 ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:447)
 ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
~[scala-library-2.11.11.jar:?] at 
org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:38)
 ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
~[scala-library-2.11.11.jar:?] at 
org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33) 
~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28) 
~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) 
~[scala-library-2.11.11.jar:?] at 
org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28) 
~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
akka.actor.Actor$class.aroundReceive(Actor.scala:502) 
~[akka-actor_2.11-2.4.20.jar:?] at 
org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:122)
 ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
akka.actor.ActorCell.receiveMessage(ActorCell.scala:526) 
~[akka-actor_2.11-2.4.20.jar:?] at 
akka.actor.ActorCell.invoke(ActorCell.scala:495) 
~[akka-actor_2.11-2.4.20.jar:?] at 
akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257) 
~[akka-actor_2.11-2.4.20.jar:?] at akka.dispatch.Mailbox.run(Mailbox.scala:224) 
~[akka-actor_2.11-2.4.20.jar:?] at 
akka.dispatch.Mailbox.exec(Mailbox.scala:234) ~[akka-actor_2.11-2.4.20.jar:?] 
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
~[scala-library-2.11.11.jar:?] at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 ~[scala-library-2.11.11.jar:?] at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
~[scala-library-2.11.11.jar:?] at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 ~

 

*[scala-library-2.11.11.jar:?]Caused by: org.apache.flink.runtime.JobException: 
Creating the input splits caused an error: connection is closed at 
org.apache.flink.runtime.executiongraph.ExecutionJobVertex.*(ExecutionJobVertex.java:262)
 ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:810)
 ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:180)
 ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 

[jira] [Commented] (HBASE-24527) Improve region housekeeping status observability

2020-08-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176551#comment-17176551
 ] 

Viraj Jasani commented on HBASE-24527:
--

{quote}at regionserver scope:
 * listing the current state of a regionserver's compaction, split, and merge 
tasks and threads
 * counting (simple view) and listing (detailed view) a regionserver's 
compaction queues
 * listing a region's currently compacting, splitting, or merging status

at master scope, aggregations of the above detailed information into:
 * listing the active compaction tasks and threads for a given table, the 
extension of _compaction_state_ with a new detailed view
 * listing the active split or merge tasks and threads for a given table's 
regions{quote}
Among the scopes listed here, from an operator's viewpoint the master scope seems 
more relevant, because usually we want to know what is going on with the 
regions of the table we are interested in. 

For the regionserver scope, if we store all region task and thread info at the 
regionserver, perhaps we should not allow the client to query all RSes and 
aggregate the results, because each RS might have accumulated info for many 
region tasks; only one RS should be queried for the detailed view of a region at a time.

The master scope can provide a table -> regions (with RS and current state) mapping, 
and the operator can query a specific RS for the detailed view of a region. On the other 
hand, querying all RSes with a table/region filter might require too many RPC 
calls from the client (which the operator is likely to keep repeating until all 
regions reach the intended states). Hence, the two scopes above, when 
used together, might provide better results (with likely optimal performance).

Thoughts?

> Improve region housekeeping status observability
> 
>
> Key: HBASE-24527
> URL: https://issues.apache.org/jira/browse/HBASE-24527
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Compaction, Operability, shell, UI
>Reporter: Andrew Kyle Purtell
>Priority: Major
>
> We provide a coarse grained admin API and associated shell command for 
> determining the compaction status of a table:
> {noformat}
> hbase(main):001:0> help "compaction_state"
> Here is some help for this command:
>  Gets compaction status (MAJOR, MAJOR_AND_MINOR, MINOR, NONE) for a table:
>  hbase> compaction_state 'ns1:t1'
>  hbase> compaction_state 't1'
> {noformat}
> We also log  compaction activity, including a compaction journal at 
> completion, via log4j to whatever log aggregation solution is available in 
> production.  
> This is not sufficient for online and interactive observation, debugging, or 
> performance analysis of current compaction activity. In this kind of activity 
> an operator is attempting to observe and analyze compaction activity in real 
> time. Log aggregation and presentation solutions have typical latencies (end 
> to end visibility of log lines on the order of ~minutes) which make that not 
> possible today.
> We don't offer any API or tools for directly interrogating split and merge 
> activity in real time. Some indirect knowledge of split or merge activity can 
> be inferred from RIT information via ClusterStatus. It can also be scraped, 
> with some difficulty, from the debug servlet. 
> We should have new APIs and shell commands, and perhaps also new admin UI 
> views, for
> at regionserver scope:
> * listing the current state of a regionserver's compaction, split, and merge 
> tasks and threads
> * counting (simple view) and listing (detailed view) a regionserver's 
> compaction queues
> * listing a region's currently compacting, splitting, or merging status
> at master scope, aggregations of the above detailed information into:
> * listing the active compaction tasks and threads for a given table, the 
> extension of _compaction_state_ with a new detailed view
> * listing the active split or merge tasks and threads for a given table's 
> regions
> Compaction detail should include the names of the effective engine and policy 
> classes, and the results and timestamp of the last compaction selection 
> evaluation. Split and merge detail should include the names of the effective 
> policy classes and the result of the last split or merge criteria evaluation. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24528) Improve balancer decision observability

2020-08-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176539#comment-17176539
 ] 

Viraj Jasani commented on HBASE-24528:
--

Thanks Josh (y)

> Improve balancer decision observability
> ---
>
> Key: HBASE-24528
> URL: https://issues.apache.org/jira/browse/HBASE-24528
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Balancer, Operability, shell, UI
>Reporter: Andrew Kyle Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2020-08-12 at 11.50.43 PM.png
>
>
> We provide detailed INFO and DEBUG level logging of balancer decision 
> factors, outcome, and reassignment planning, as well as similarly detailed 
> logging of the resulting assignment manager activity. However, an operator 
> may need to perform online and interactive observation, debugging, or 
> performance analysis of current balancer activity. Scraping and correlating 
> the many log lines resulting from a balancer execution is labor intensive and 
> has a lot of latency (order of ~minutes to acquire and index, order of 
> ~minutes to correlate). 
> The balancer should maintain a rolling window of history, e.g. the last 100 
> region move plans, or last 1000 region move plans submitted to the assignment 
> manager. This history should include decision factor details and weights and 
> costs. The rsgroups balancer may be able to provide fairly simple decision 
> factors, like for example "this table was reassigned to that regionserver 
> group". The underlying or vanilla stochastic balancer on the other hand, 
> after a walk over random assignment plans, will have considered a number of 
> cost functions with various inputs (locality, load, etc.) and multipliers, 
> including custom cost functions. We can devise an extensible class structure 
> that represents explanations for balancer decisions, and for each region move 
> plan that is actually submitted to the assignment manager, we can keep the 
> explanations of all relevant decision factors alongside the other details of 
> the assignment plan like the region name, and the source and destination 
> regionservers. 
> This history should be available via API for use by new shell commands and 
> admin UI widgets.
> The new shell commands and UI widgets can unpack the representation of 
> balancer decision components into human readable output. 





[jira] [Commented] (HBASE-24528) Improve balancer decision observability

2020-08-12 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176538#comment-17176538
 ] 

Josh Elser commented on HBASE-24528:


This JSON looks excellent to me.

> Improve balancer decision observability
> ---
>
> Key: HBASE-24528
> URL: https://issues.apache.org/jira/browse/HBASE-24528
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Balancer, Operability, shell, UI
>Reporter: Andrew Kyle Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2020-08-12 at 11.50.43 PM.png
>
>
> We provide detailed INFO and DEBUG level logging of balancer decision 
> factors, outcome, and reassignment planning, as well as similarly detailed 
> logging of the resulting assignment manager activity. However, an operator 
> may need to perform online and interactive observation, debugging, or 
> performance analysis of current balancer activity. Scraping and correlating 
> the many log lines resulting from a balancer execution is labor intensive and 
> has a lot of latency (order of ~minutes to acquire and index, order of 
> ~minutes to correlate). 
> The balancer should maintain a rolling window of history, e.g. the last 100 
> region move plans, or last 1000 region move plans submitted to the assignment 
> manager. This history should include decision factor details and weights and 
> costs. The rsgroups balancer may be able to provide fairly simple decision 
> factors, like for example "this table was reassigned to that regionserver 
> group". The underlying or vanilla stochastic balancer on the other hand, 
> after a walk over random assignment plans, will have considered a number of 
> cost functions with various inputs (locality, load, etc.) and multipliers, 
> including custom cost functions. We can devise an extensible class structure 
> that represents explanations for balancer decisions, and for each region move 
> plan that is actually submitted to the assignment manager, we can keep the 
> explanations of all relevant decision factors alongside the other details of 
> the assignment plan like the region name, and the source and destination 
> regionservers. 
> This history should be available via API for use by new shell commands and 
> admin UI widgets.
> The new shell commands and UI widgets can unpack the representation of 
> balancer decision components into human readable output. 





[jira] [Comment Edited] (HBASE-24528) Improve balancer decision observability

2020-08-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176530#comment-17176530
 ] 

Viraj Jasani edited comment on HBASE-24528 at 8/12/20, 6:42 PM:


Attached one sample JSON for the balancer decision output. Please let me know 
whether an operator would find it useful as shell command output, as a way to 
check the history of balancer decisions along with the list of region plans.

The final list output is expected to represent the records present in the 
HMaster ring buffer in FIFO order.
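The rolling window of history described here can be sketched as a small 
fixed-capacity FIFO buffer. This is an illustrative stand-in for discussion, 
not the actual HMaster implementation; the class and method names are 
hypothetical:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/** Fixed-capacity FIFO history of balancer decisions (illustrative sketch). */
public class BalancerDecisionHistory {
    private final int capacity;
    private final Deque<String> buffer = new ArrayDeque<>();

    public BalancerDecisionHistory(int capacity) { this.capacity = capacity; }

    /** Record one decision; evict the oldest entry once the window is full. */
    public synchronized void add(String decisionJson) {
        if (buffer.size() == capacity) {
            buffer.removeFirst();
        }
        buffer.addLast(decisionJson);
    }

    /** Snapshot in FIFO order (oldest first), as a shell command would render it. */
    public synchronized List<String> snapshot() {
        return new ArrayList<>(buffer);
    }

    public static void main(String[] args) {
        BalancerDecisionHistory history = new BalancerDecisionHistory(3);
        for (int i = 1; i <= 5; i++) {
            history.add("plan-" + i);
        }
        System.out.println(history.snapshot()); // [plan-3, plan-4, plan-5]
    }
}
```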


was (Author: vjasani):
Attached one sample json for balancer decision output, please let me know if 
operator can find it useful as shell command output and this way, check the 
history of balancer decisions with list of region plans.

The final list output is expected to represent records present on ring buffer 
in FIFO order.

> Improve balancer decision observability
> ---
>
> Key: HBASE-24528
> URL: https://issues.apache.org/jira/browse/HBASE-24528
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Balancer, Operability, shell, UI
>Reporter: Andrew Kyle Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2020-08-12 at 11.50.43 PM.png
>
>
> We provide detailed INFO and DEBUG level logging of balancer decision 
> factors, outcome, and reassignment planning, as well as similarly detailed 
> logging of the resulting assignment manager activity. However, an operator 
> may need to perform online and interactive observation, debugging, or 
> performance analysis of current balancer activity. Scraping and correlating 
> the many log lines resulting from a balancer execution is labor intensive and 
> has a lot of latency (order of ~minutes to acquire and index, order of 
> ~minutes to correlate). 
> The balancer should maintain a rolling window of history, e.g. the last 100 
> region move plans, or last 1000 region move plans submitted to the assignment 
> manager. This history should include decision factor details and weights and 
> costs. The rsgroups balancer may be able to provide fairly simple decision 
> factors, like for example "this table was reassigned to that regionserver 
> group". The underlying or vanilla stochastic balancer on the other hand, 
> after a walk over random assignment plans, will have considered a number of 
> cost functions with various inputs (locality, load, etc.) and multipliers, 
> including custom cost functions. We can devise an extensible class structure 
> that represents explanations for balancer decisions, and for each region move 
> plan that is actually submitted to the assignment manager, we can keep the 
> explanations of all relevant decision factors alongside the other details of 
> the assignment plan like the region name, and the source and destination 
> regionservers. 
> This history should be available via API for use by new shell commands and 
> admin UI widgets.
> The new shell commands and UI widgets can unpack the representation of 
> balancer decision components into human readable output. 





[jira] [Comment Edited] (HBASE-24528) Improve balancer decision observability

2020-08-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176530#comment-17176530
 ] 

Viraj Jasani edited comment on HBASE-24528 at 8/12/20, 6:41 PM:


Attached one sample json for balancer decision output, please let me know if 
operator can find it useful as shell command output and this way, check the 
history of balancer decisions with list of region plans.

The final list output is expected to represent records present on ring buffer 
in FIFO order.


was (Author: vjasani):
Attached one sample json for balancer decision output, please let me know if 
operator would find it useful as shell command output and check the history of 
balancer decision as list of such json values.

> Improve balancer decision observability
> ---
>
> Key: HBASE-24528
> URL: https://issues.apache.org/jira/browse/HBASE-24528
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Balancer, Operability, shell, UI
>Reporter: Andrew Kyle Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2020-08-12 at 11.50.43 PM.png
>
>
> We provide detailed INFO and DEBUG level logging of balancer decision 
> factors, outcome, and reassignment planning, as well as similarly detailed 
> logging of the resulting assignment manager activity. However, an operator 
> may need to perform online and interactive observation, debugging, or 
> performance analysis of current balancer activity. Scraping and correlating 
> the many log lines resulting from a balancer execution is labor intensive and 
> has a lot of latency (order of ~minutes to acquire and index, order of 
> ~minutes to correlate). 
> The balancer should maintain a rolling window of history, e.g. the last 100 
> region move plans, or last 1000 region move plans submitted to the assignment 
> manager. This history should include decision factor details and weights and 
> costs. The rsgroups balancer may be able to provide fairly simple decision 
> factors, like for example "this table was reassigned to that regionserver 
> group". The underlying or vanilla stochastic balancer on the other hand, 
> after a walk over random assignment plans, will have considered a number of 
> cost functions with various inputs (locality, load, etc.) and multipliers, 
> including custom cost functions. We can devise an extensible class structure 
> that represents explanations for balancer decisions, and for each region move 
> plan that is actually submitted to the assignment manager, we can keep the 
> explanations of all relevant decision factors alongside the other details of 
> the assignment plan like the region name, and the source and destination 
> regionservers. 
> This history should be available via API for use by new shell commands and 
> admin UI widgets.
> The new shell commands and UI widgets can unpack the representation of 
> balancer decision components into human readable output. 





[jira] [Commented] (HBASE-24528) Improve balancer decision observability

2020-08-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176530#comment-17176530
 ] 

Viraj Jasani commented on HBASE-24528:
--

Attached one sample json for balancer decision output, please let me know if 
operator would find it useful as shell command output and check the history of 
balancer decision as list of such json values.

> Improve balancer decision observability
> ---
>
> Key: HBASE-24528
> URL: https://issues.apache.org/jira/browse/HBASE-24528
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Balancer, Operability, shell, UI
>Reporter: Andrew Kyle Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2020-08-12 at 11.50.43 PM.png
>
>
> We provide detailed INFO and DEBUG level logging of balancer decision 
> factors, outcome, and reassignment planning, as well as similarly detailed 
> logging of the resulting assignment manager activity. However, an operator 
> may need to perform online and interactive observation, debugging, or 
> performance analysis of current balancer activity. Scraping and correlating 
> the many log lines resulting from a balancer execution is labor intensive and 
> has a lot of latency (order of ~minutes to acquire and index, order of 
> ~minutes to correlate). 
> The balancer should maintain a rolling window of history, e.g. the last 100 
> region move plans, or last 1000 region move plans submitted to the assignment 
> manager. This history should include decision factor details and weights and 
> costs. The rsgroups balancer may be able to provide fairly simple decision 
> factors, like for example "this table was reassigned to that regionserver 
> group". The underlying or vanilla stochastic balancer on the other hand, 
> after a walk over random assignment plans, will have considered a number of 
> cost functions with various inputs (locality, load, etc.) and multipliers, 
> including custom cost functions. We can devise an extensible class structure 
> that represents explanations for balancer decisions, and for each region move 
> plan that is actually submitted to the assignment manager, we can keep the 
> explanations of all relevant decision factors alongside the other details of 
> the assignment plan like the region name, and the source and destination 
> regionservers. 
> This history should be available via API for use by new shell commands and 
> admin UI widgets.
> The new shell commands and UI widgets can unpack the representation of 
> balancer decision components into human readable output. 





[jira] [Updated] (HBASE-24528) Improve balancer decision observability

2020-08-12 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-24528:
-
Attachment: Screenshot 2020-08-12 at 11.50.43 PM.png

> Improve balancer decision observability
> ---
>
> Key: HBASE-24528
> URL: https://issues.apache.org/jira/browse/HBASE-24528
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Balancer, Operability, shell, UI
>Reporter: Andrew Kyle Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2020-08-12 at 11.50.43 PM.png
>
>
> We provide detailed INFO and DEBUG level logging of balancer decision 
> factors, outcome, and reassignment planning, as well as similarly detailed 
> logging of the resulting assignment manager activity. However, an operator 
> may need to perform online and interactive observation, debugging, or 
> performance analysis of current balancer activity. Scraping and correlating 
> the many log lines resulting from a balancer execution is labor intensive and 
> has a lot of latency (order of ~minutes to acquire and index, order of 
> ~minutes to correlate). 
> The balancer should maintain a rolling window of history, e.g. the last 100 
> region move plans, or last 1000 region move plans submitted to the assignment 
> manager. This history should include decision factor details and weights and 
> costs. The rsgroups balancer may be able to provide fairly simple decision 
> factors, like for example "this table was reassigned to that regionserver 
> group". The underlying or vanilla stochastic balancer on the other hand, 
> after a walk over random assignment plans, will have considered a number of 
> cost functions with various inputs (locality, load, etc.) and multipliers, 
> including custom cost functions. We can devise an extensible class structure 
> that represents explanations for balancer decisions, and for each region move 
> plan that is actually submitted to the assignment manager, we can keep the 
> explanations of all relevant decision factors alongside the other details of 
> the assignment plan like the region name, and the source and destination 
> regionservers. 
> This history should be available via API for use by new shell commands and 
> admin UI widgets.
> The new shell commands and UI widgets can unpack the representation of 
> balancer decision components into human readable output. 





[jira] [Updated] (HBASE-24583) Normalizer can't actually merge empty regions when neighbor is larger than average size

2020-08-12 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-24583:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Backported as far as branch-2.3. It could probably go back further, but 
branch-2.2 appears to also be missing HBASE-24376 and HBASE-24588.

> Normalizer can't actually merge empty regions when neighbor is larger than 
> average size
> ---
>
> Key: HBASE-24583
> URL: https://issues.apache.org/jira/browse/HBASE-24583
> Project: HBase
>  Issue Type: Bug
>  Components: master, Normalizer
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0
>
>
> There are plenty of cases where empty regions can accumulate -- incorrect 
> guessing at split points, old data automatically expiring off, and so on. The 
> normalizer stubbornly refuses to handle this case, despite this being an 
> original feature it was intended to support (HBASE-6613).
> Earlier discussion had concerns for a user pre-splitting a table and then the 
> normalizer coming along and merging those splits away before they could be 
> populated. Thus, right now, the default behavior via 
> {{hbase.normalizer.merge.min_region_size.mb=1}} is to not merge any region 
> that small. Later, we added 
> {{hbase.normalizer.merge.min_region_age.days=3}}, which prevents us from 
> merging any region too young. So there's plenty of knobs for an operator to 
> customize their behavior.
> But when I set {{hbase.normalizer.merge.min_region_size.mb=0}}, I still end 
> up with stubborn regions that won't merge away. Looks like a large neighbor 
> will prevent a merge from going through.
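The two knobs named in the description above live in hbase-site.xml; a minimal 
sketch, with min_region_size.mb lowered to 0 (the setting tried above, to make 
empty regions merge candidates) and min_region_age.days left at the stated 
default of 3:

```xml
<!-- Normalizer merge knobs discussed above (hbase-site.xml sketch). -->
<property>
  <name>hbase.normalizer.merge.min_region_size.mb</name>
  <value>0</value>
</property>
<property>
  <name>hbase.normalizer.merge.min_region_age.days</name>
  <value>3</value>
</property>
```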





[GitHub] [hbase] ndimiduk opened a new pull request #2252: Backport "HBASE-24583 Normalizer can't actually merge empty regions..." to branch-2.3

2020-08-12 Thread GitBox


ndimiduk opened a new pull request #2252:
URL: https://github.com/apache/hbase/pull/2252


   when neighbor is larger than average size
   
   * add `testMergeEmptyRegions` to explicitly cover different
 interleaving of 0-sized regions.
   * fix bug where merging a 0-size region is skipped due to large
 neighbor.
   * remove unused `splitPoint` from `SplitNormalizationPlan`.
   * generate `toString`, `hashCode`, and `equals` methods from Apache
 Commons Lang3 template on `SplitNormalizationPlan` and
 `MergeNormalizationPlan`.
   * simplify test to use equality matching over `*NormalizationPlan`
 instances as plain pojos.
   * tests make use of the handy `TableNameTestRule`.
   * fix line-length issues in `TestSimpleRegionNormalizer`
   
   Signed-off-by: Wellington Chevreuil 
   Signed-off-by: Viraj Jasani 
   Signed-off-by: huaxiangsun 
   Signed-off-by: Aman Poonia 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-24583) Normalizer can't actually merge empty regions when neighbor is larger than average size

2020-08-12 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-24583:
-
Fix Version/s: 2.3.1

> Normalizer can't actually merge empty regions when neighbor is larger than 
> average size
> ---
>
> Key: HBASE-24583
> URL: https://issues.apache.org/jira/browse/HBASE-24583
> Project: HBase
>  Issue Type: Bug
>  Components: master, Normalizer
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0
>
>
> There are plenty of cases where empty regions can accumulate -- incorrect 
> guessing at split points, old data automatically expiring off, and so on. The 
> normalizer stubbornly refuses to handle this case, despite this being an 
> original feature it was intended to support (HBASE-6613).
> Earlier discussion had concerns for a user pre-splitting a table and then the 
> normalizer coming along and merging those splits away before they could be 
> populated. Thus, right now, the default behavior via 
> {{hbase.normalizer.merge.min_region_size.mb=1}} is to not merge any region 
> that small. Later, we added 
> {{hbase.normalizer.merge.min_region_age.days=3}}, which prevents us from 
> merging any region too young. So there's plenty of knobs for an operator to 
> customize their behavior.
> But when I set {{hbase.normalizer.merge.min_region_size.mb=0}}, I still end 
> up with stubborn regions that won't merge away. Looks like a large neighbor 
> will prevent a merge from going through.





[GitHub] [hbase] ndimiduk merged pull request #2252: Backport "HBASE-24583 Normalizer can't actually merge empty regions..." to branch-2.3

2020-08-12 Thread GitBox


ndimiduk merged pull request #2252:
URL: https://github.com/apache/hbase/pull/2252


   







[jira] [Updated] (HBASE-24583) Normalizer can't actually merge empty regions when neighbor is larger than average size

2020-08-12 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-24583:
-
Fix Version/s: 2.4.0

> Normalizer can't actually merge empty regions when neighbor is larger than 
> average size
> ---
>
> Key: HBASE-24583
> URL: https://issues.apache.org/jira/browse/HBASE-24583
> Project: HBase
>  Issue Type: Bug
>  Components: master, Normalizer
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> There are plenty of cases where empty regions can accumulate -- incorrect 
> guessing at split points, old data automatically expiring off, and so on. The 
> normalizer stubbornly refuses to handle this case, despite this being an 
> original feature it was intended to support (HBASE-6613).
> Earlier discussion had concerns for a user pre-splitting a table and then the 
> normalizer coming along and merging those splits away before they could be 
> populated. Thus, right now, the default behavior via 
> {{hbase.normalizer.merge.min_region_size.mb=1}} is to not merge any region 
> that small. Later, we added 
> {{hbase.normalizer.merge.min_region_age.days=3}}, which prevents us from 
> merging any region too young. So there's plenty of knobs for an operator to 
> customize their behavior.
> But when I set {{hbase.normalizer.merge.min_region_size.mb=0}}, I still end 
> up with stubborn regions that won't merge away. Looks like a large neighbor 
> will prevent a merge from going through.





[GitHub] [hbase] ndimiduk opened a new pull request #2251: Backport "HBASE-24583 Normalizer can't actually merge empty regions..." to branch-2

2020-08-12 Thread GitBox


ndimiduk opened a new pull request #2251:
URL: https://github.com/apache/hbase/pull/2251


   when neighbor is larger than average size
   
   * add `testMergeEmptyRegions` to explicitly cover different
 interleaving of 0-sized regions.
   * fix bug where merging a 0-size region is skipped due to large
 neighbor.
   * remove unused `splitPoint` from `SplitNormalizationPlan`.
   * generate `toString`, `hashCode`, and `equals` methods from Apache
 Commons Lang3 template on `SplitNormalizationPlan` and
 `MergeNormalizationPlan`.
   * simplify test to use equality matching over `*NormalizationPlan`
 instances as plain pojos.
   * tests make use of the handy `TableNameTestRule`.
   * fix line-length issues in `TestSimpleRegionNormalizer`
   
   Signed-off-by: Wellington Chevreuil 
   Signed-off-by: Viraj Jasani 
   Signed-off-by: huaxiangsun 
   Signed-off-by: Aman Poonia 







[GitHub] [hbase] ndimiduk merged pull request #2251: Backport "HBASE-24583 Normalizer can't actually merge empty regions..." to branch-2

2020-08-12 Thread GitBox


ndimiduk merged pull request #2251:
URL: https://github.com/apache/hbase/pull/2251


   







[GitHub] [hbase] anoopsjohn commented on a change in pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …

2020-08-12 Thread GitBox


anoopsjohn commented on a change in pull request #2237:
URL: https://github.com/apache/hbase/pull/2237#discussion_r469412600



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##
@@ -915,6 +917,11 @@ private void 
finishActiveMasterInitialization(MonitoredTask status)
   this.tableDescriptors.getAll();
 }
 
+// check cluster Id stored in ZNode before, and use it to indicate if a 
cluster has been
+// restarted with an existing Zookeeper quorum.
+isClusterRestartWithExistingZNodes =

Review comment:
   Ya, at the proc level. But before that, one more thing.
   Say there is a cluster recreated over existing data, so there is no zk node 
for the clusterId. We will get true for 'isClusterRestartWithExistingZNodes'. 
In the next lines, we will create and write the zk node for the clusterId. Now 
assume that after executing those lines, the HM restarted. So the zk node was 
created, but the InitMetaProc was NOT submitted. After the restart, when we 
come here, we have zk data for the clusterId, so 
'isClusterRestartWithExistingZNodes' will become false. This time the 
InitMetaProc starts and, as part of that, we will end up deleting the meta dir.
   So this shows the need to persist this boolean somewhere as soon as we 
determine it, even before creating the zk node for the clusterId. Am I making 
the concern clear this time?
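The ordering concern raised here can be modeled with a toy example. The map 
stands in for ZooKeeper and every name is hypothetical; the point is only that 
the "fresh znodes" flag must be captured (and persisted durably) before the 
clusterId znode is created, since a re-check after a master restart gives the 
wrong answer:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of the race: capture the flag BEFORE creating the clusterId
 *  znode; re-checking after an HM restart loses the information. */
public class ClusterIdRaceSketch {
    static final Map<String, String> zk = new HashMap<>(); // stand-in for ZooKeeper

    /** First master start-up: no clusterId znode exists yet. */
    static boolean firstStartup() {
        boolean freshZnodes = !zk.containsKey("/hbase/clusterId"); // capture first
        zk.put("/hbase/clusterId", "cluster-1");                   // then create
        return freshZnodes;
    }

    /** Restarted master: the znode now exists, so a naive re-check is misleading. */
    static boolean naiveRecheckAfterRestart() {
        return !zk.containsKey("/hbase/clusterId");
    }

    public static void main(String[] args) {
        boolean capturedBeforeCreate = firstStartup();              // true
        boolean recheckedAfterRestart = naiveRecheckAfterRestart(); // false
        System.out.println(capturedBeforeCreate + " " + recheckedAfterRestart);
        // prints "true false": the flag must be persisted at capture time.
    }
}
```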









[jira] [Updated] (HBASE-24583) Normalizer can't actually merge empty regions when neighbor is larger than average size

2020-08-12 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-24583:
-
Fix Version/s: 3.0.0-alpha-1

> Normalizer can't actually merge empty regions when neighbor is larger than 
> average size
> ---
>
> Key: HBASE-24583
> URL: https://issues.apache.org/jira/browse/HBASE-24583
> Project: HBase
>  Issue Type: Bug
>  Components: master, Normalizer
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> There are plenty of cases where empty regions can accumulate -- incorrect 
> guessing at split points, old data automatically expiring off, and so on. The 
> normalizer stubbornly refuses to handle this case, despite this being an 
> original feature it was intended to support (HBASE-6613).
> Earlier discussion had concerns for a user pre-splitting a table and then the 
> normalizer coming along and merging those splits away before they could be 
> populated. Thus, right now, the default behavior via 
> {{hbase.normalizer.merge.min_region_size.mb=1}} is to not merge any region 
> that small. Later, we added 
> {{hbase.normalizer.merge.min_region_age.days=3}}, which prevents us from 
> merging any region too young. So there's plenty of knobs for an operator to 
> customize their behavior.
> But when I set {{hbase.normalizer.merge.min_region_size.mb=0}}, I still end 
> up with stubborn regions that won't merge away. Looks like a large neighbor 
> will prevent a merge from going through.





[GitHub] [hbase] ndimiduk merged pull request #1922: HBASE-24583 Normalizer can't actually merge empty regions when neighbor is larger than average size

2020-08-12 Thread GitBox


ndimiduk merged pull request #1922:
URL: https://github.com/apache/hbase/pull/1922


   







[jira] [Commented] (HBASE-24844) Exception on standalone (master) shutdown

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176478#comment-17176478
 ] 

Hudson commented on HBASE-24844:


Results for branch branch-2.2
[build #4 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/4/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/4//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/4//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/4//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Exception on standalone (master) shutdown
> -
>
> Key: HBASE-24844
> URL: https://issues.apache.org/jira/browse/HBASE-24844
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 3.0.0-alpha-1
>Reporter: Nick Dimiduk
>Assignee: wenfeiyi666
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0, 2.2.7
>
>
> Running HBase ({{master}} branch) in standalone mode, terminating the process 
> results in the following stack traces logged at error. It appears we shutdown 
> the zookeeper client out-of-order with {{shutdown}} of the thread pools.
> {noformat}
> 2020-08-10 14:21:46,777 INFO  [RS:0;localhost:16020] zookeeper.ZooKeeper: 
> Session: 0x100111361f20001 closed
> 2020-08-10 14:21:46,778 INFO  [RS:0;localhost:16020] 
> regionserver.HRegionServer: Exiting; stopping=localhost,16020,1597094491257; 
> zookeeper connection closed.
> 2020-08-10 14:21:46,778 ERROR [main-EventThread] zookeeper.ClientCnxn: Error 
> while calling watcher 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e61af4b rejected from 
> java.util.concurrent.ThreadPoolExecutor@6a5e365f[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 4]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:559)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> 2020-08-10 14:21:46,778 INFO  [shutdown-hook-0] regionserver.ShutdownHook: 
> Starting fs shutdown hook thread.
> 2020-08-10 14:21:46,779 ERROR [main-EventThread] zookeeper.ClientCnxn: Error 
> while calling watcher 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@7d41da91 rejected from 
> java.util.concurrent.ThreadPoolExecutor@6a5e365f[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 4]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:559)
> at 
> org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:40)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> 2020-08-10 14:21:46,780 INFO  [main-EventThread] zookeeper.ClientCnxn: 
> EventThread shut down for session: 0x100111361f20001
> 2020-08-10 14:21:46,780 INFO  [shutdown-hook-0] regionserver.ShutdownHook: 
> Shutdown hook finished.
> {noformat}
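The RejectedExecutionException in the trace above is the standard behavior of a terminated ThreadPoolExecutor: its default AbortPolicy rejects any task submitted after shutdown. A minimal standalone sketch (not HBase code) reproducing it:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class RejectedAfterShutdown {
    // Returns true when the submit is rejected, mirroring what happens when a
    // watcher callback arrives after the ZKWatcher's executor has terminated.
    static boolean submitAfterShutdown() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.shutdown(); // default AbortPolicy rejects every later submission
        try {
            pool.submit(() -> { });
            return false;
        } catch (RejectedExecutionException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(submitAfterShutdown() ? "rejected" : "accepted"); // prints "rejected"
    }
}
```

This is why closing the ZooKeeper client before shutting down the pools matters: as long as the pool can still accept work, late watcher events are handled instead of throwing.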



--
This message was sent by Atlassian Jira

[jira] [Commented] (HBASE-22524) Refactor TestReplicationSyncUpTool

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176477#comment-17176477
 ] 

Hudson commented on HBASE-22524:


Results for branch branch-2.2
[build #4 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/4/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/4//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/4//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/4//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Refactor TestReplicationSyncUpTool
> --
>
> Key: HBASE-22524
> URL: https://issues.apache.org/jira/browse/HBASE-22524
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
>
> Especially that TestReplicationSyncUpToolWithBulkLoadedData overrides a test 
> method, which is a bit hard to change in the future.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-24715) Cleanup RELEASENOTES.md in the wake of HBASE-24711

2020-08-12 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-24715.
--
Resolution: Fixed

> Cleanup RELEASENOTES.md in the wake of HBASE-24711
> --
>
> Key: HBASE-24715
> URL: https://issues.apache.org/jira/browse/HBASE-24715
> Project: HBase
>  Issue Type: Sub-task
>  Components: community
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.3.1
>
>
> Seems it'll need some manual adjustment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ndimiduk commented on pull request #2244: HBASE-24715 Cleanup RELEASENOTES.md in the wake of HBASE-24711

2020-08-12 Thread GitBox


ndimiduk commented on pull request #2244:
URL: https://github.com/apache/hbase/pull/2244#issuecomment-672986050


   Thanks for the quick reviews.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] joshelser commented on a change in pull request #2191: HBASE-24813 ReplicationSource should clear buffer usage on Replicatio…

2020-08-12 Thread GitBox


joshelser commented on a change in pull request #2191:
URL: https://github.com/apache/hbase/pull/2191#discussion_r469395639



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
##
@@ -309,6 +310,16 @@ public WALEntryBatch poll(long timeout) throws 
InterruptedException {
 return entryBatchQueue.poll(timeout, TimeUnit.MILLISECONDS);
   }
 
+  public void clearWALEntryBatch() {

Review comment:
   > after we certify that neither the shipper, nor the reader threads are 
alive anymore, so I don't think it would be an issue. Of course, there's the 
risk someone inadvertently call this method somewhere else, so maybe we should 
put a warning comment?
   
   Also OK to just put a warning if moving this check doesn't make things more 
clear :). Thanks for clarifying for me.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ndimiduk merged pull request #2244: HBASE-24715 Cleanup RELEASENOTES.md in the wake of HBASE-24711

2020-08-12 Thread GitBox


ndimiduk merged pull request #2244:
URL: https://github.com/apache/hbase/pull/2244


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] wchevreuil commented on a change in pull request #2191: HBASE-24813 ReplicationSource should clear buffer usage on Replicatio…

2020-08-12 Thread GitBox


wchevreuil commented on a change in pull request #2191:
URL: https://github.com/apache/hbase/pull/2191#discussion_r469394487



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
##
@@ -309,6 +310,16 @@ public WALEntryBatch poll(long timeout) throws 
InterruptedException {
 return entryBatchQueue.poll(timeout, TimeUnit.MILLISECONDS);
   }
 
+  public void clearWALEntryBatch() {

Review comment:
   > We are only calling it from ReplicationSource.terminate (see line 
#606), after we certify that neither the shipper, nor the reader threads are 
alive anymore
   
   Actually, we could move that check to clearWALEntryBatch method itself, and 
since shipper has a reference to reader, but not the other way around, we can 
move it to shipper, instead? Let me give it a try.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] wchevreuil commented on a change in pull request #2191: HBASE-24813 ReplicationSource should clear buffer usage on Replicatio…

2020-08-12 Thread GitBox


wchevreuil commented on a change in pull request #2191:
URL: https://github.com/apache/hbase/pull/2191#discussion_r469391760



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
##
@@ -309,6 +310,16 @@ public WALEntryBatch poll(long timeout) throws 
InterruptedException {
 return entryBatchQueue.poll(timeout, TimeUnit.MILLISECONDS);
   }
 
+  public void clearWALEntryBatch() {
+entryBatchQueue.forEach(w -> {
+  entryBatchQueue.remove(w);
+  w.getWalEntries().forEach(e -> {

Review comment:
   Makes sense, coming soon...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] wchevreuil commented on a change in pull request #2191: HBASE-24813 ReplicationSource should clear buffer usage on Replicatio…

2020-08-12 Thread GitBox


wchevreuil commented on a change in pull request #2191:
URL: https://github.com/apache/hbase/pull/2191#discussion_r469391582



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
##
@@ -600,6 +600,10 @@ public void terminate(String reason, Exception cause, 
boolean clearMetrics, bool
 if (worker.entryReader.isAlive()) {
   worker.entryReader.interrupt();
 }
+  } else {
+//If worker is already stopped but there was still entries batched,
+//wee need to clear buffer used for non processed entries

Review comment:
   oops...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] huaxiangsun commented on pull request #2250: HBASE-24872 refactor valueOf PoolType

2020-08-12 Thread GitBox


huaxiangsun commented on pull request #2250:
URL: https://github.com/apache/hbase/pull/2250#issuecomment-672981681


   Can you explain a bit more about
   ```since Reusable PoolType has been removed, no need to check 
allowedPoolTypes```?  Went through the patch, still not clear about the 
connection, thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ndimiduk commented on pull request #1922: HBASE-24583 Normalizer can't actually merge empty regions when neighbor is larger than average size

2020-08-12 Thread GitBox


ndimiduk commented on pull request #1922:
URL: https://github.com/apache/hbase/pull/1922#issuecomment-672980734


   > > Can you explain why this change in controversial?
   > 
   > It's not controversial. I got your thought, while I incline to code case 
by case to avoid unnecessary codes. So combining both I gave -0 which means 
it's ok to go (that's why not -1), but superfluous in this case (that's why not 
+1).
   
   Understood. Thank you for clarifying.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24844) Exception on standalone (master) shutdown

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176453#comment-17176453
 ] 

Hudson commented on HBASE-24844:


Results for branch branch-1
[build #4 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/4/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/4//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/4//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/4//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> Exception on standalone (master) shutdown
> -
>
> Key: HBASE-24844
> URL: https://issues.apache.org/jira/browse/HBASE-24844
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 3.0.0-alpha-1
>Reporter: Nick Dimiduk
>Assignee: wenfeiyi666
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0, 2.2.7
>
>
> Running HBase ({{master}} branch) in standalone mode, terminating the process 
> results in the following stack traces logged at error. It appears we shutdown 
> the zookeeper client out-of-order with {{shutdown}} of the thread pools.
> {noformat}
> 2020-08-10 14:21:46,777 INFO  [RS:0;localhost:16020] zookeeper.ZooKeeper: 
> Session: 0x100111361f20001 closed
> 2020-08-10 14:21:46,778 INFO  [RS:0;localhost:16020] 
> regionserver.HRegionServer: Exiting; stopping=localhost,16020,1597094491257; 
> zookeeper connection closed.
> 2020-08-10 14:21:46,778 ERROR [main-EventThread] zookeeper.ClientCnxn: Error 
> while calling watcher 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e61af4b rejected from 
> java.util.concurrent.ThreadPoolExecutor@6a5e365f[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 4]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:559)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> 2020-08-10 14:21:46,778 INFO  [shutdown-hook-0] regionserver.ShutdownHook: 
> Starting fs shutdown hook thread.
> 2020-08-10 14:21:46,779 ERROR [main-EventThread] zookeeper.ClientCnxn: Error 
> while calling watcher 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@7d41da91 rejected from 
> java.util.concurrent.ThreadPoolExecutor@6a5e365f[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 4]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:559)
> at 
> org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:40)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> 2020-08-10 14:21:46,780 INFO  [main-EventThread] zookeeper.ClientCnxn: 
> EventThread shut down for session: 0x100111361f20001
> 2020-08-10 14:21:46,780 INFO  [shutdown-hook-0] regionserver.ShutdownHook: 
> Shutdown hook finished.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Reidddddd commented on pull request #1922: HBASE-24583 Normalizer can't actually merge empty regions when neighbor is larger than average size

2020-08-12 Thread GitBox


Reidddddd commented on pull request #1922:
URL: https://github.com/apache/hbase/pull/1922#issuecomment-672975747


   > Can you explain why this change in controversial?
   
   It's not controversial. I got your thought, while I incline to code case by 
case to avoid unnecessary codes. So combining both I gave -0 which means it's 
ok to go (that's why not -1), but superfluous in this case (that's why not +1).
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] wchevreuil commented on a change in pull request #2191: HBASE-24813 ReplicationSource should clear buffer usage on Replicatio…

2020-08-12 Thread GitBox


wchevreuil commented on a change in pull request #2191:
URL: https://github.com/apache/hbase/pull/2191#discussion_r469382747



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
##
@@ -309,6 +310,16 @@ public WALEntryBatch poll(long timeout) throws 
InterruptedException {
 return entryBatchQueue.poll(timeout, TimeUnit.MILLISECONDS);
   }
 
+  public void clearWALEntryBatch() {

Review comment:
   We are only calling it from `ReplicationSource.terminate` (see line 
#606), after we certify that neither the shipper, nor the reader threads are 
alive anymore, so I don't think it would be an issue. Of course, there's the 
risk someone inadvertently call this method somewhere else, so maybe we should 
put a warning comment? I don't think there's any gain of synchronising accesses 
to totalBufferUsed variable here, concurrent threads could still succeed on the 
double decrement, if we call clearWALEntryBatch while shipper thread is still 
running.
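One idiomatic way to drain such a queue without calling remove() inside forEach is a poll() loop, which also makes a repeated call a no-op. This is a hypothetical sketch with made-up names, not the actual ReplicationSourceWALReader API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class BufferDrain {
    // Drains batch heap sizes with poll(), which removes the head atomically,
    // instead of calling remove() inside forEach. A second call drains nothing,
    // so the buffer counter cannot be decremented twice.
    static long clearBatches(BlockingQueue<Long> entryBatchQueue, AtomicLong totalBufferUsed) {
        long released = 0;
        Long batchHeapSize;
        while ((batchHeapSize = entryBatchQueue.poll()) != null) {
            released += batchHeapSize;
        }
        totalBufferUsed.addAndGet(-released);
        return released;
    }

    // Small self-contained demo.
    static long demo() {
        BlockingQueue<Long> queue = new LinkedBlockingQueue<>();
        queue.add(10L);
        queue.add(5L);
        AtomicLong used = new AtomicLong(15L);
        clearBatches(queue, used);   // releases 15
        clearBatches(queue, used);   // releases 0, no double decrement
        return used.get();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 0
    }
}
```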





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] joshelser commented on a change in pull request #2228: HBASE-24602 Add Increment and Append support to CheckAndMutate

2020-08-12 Thread GitBox


joshelser commented on a change in pull request #2228:
URL: https://github.com/apache/hbase/pull/2228#discussion_r469353813



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
##
@@ -3805,6 +3838,196 @@ public void 
prepareMiniBatchOperations(MiniBatchOperationInProgress mi
   }
 }
 
+/**
+ * Do coprocessor pre-increment or pre-append call.
+ * @return Result returned out of the coprocessor, which means bypass all 
further processing
+ *   and return the proffered Result instead, or null which means proceed.

Review comment:
   nit: preferred

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
##
@@ -3805,6 +3838,196 @@ public void 
prepareMiniBatchOperations(MiniBatchOperationInProgress mi
   }
 }
 
+/**
+ * Do coprocessor pre-increment or pre-append call.
+ * @return Result returned out of the coprocessor, which means bypass all 
further processing
+ *   and return the proffered Result instead, or null which means proceed.
+ */
+private Result doCoprocessorPreCall(Mutation mutation) throws IOException {

Review comment:
   Maybe `doCoprocessorPreCallAfterRowLock()` and indicate that this method 
is a no-op for Mutations which do not have a `pre*AfterRowLock()` method in the 
javadoc?

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
##
@@ -3805,6 +3838,196 @@ public void 
prepareMiniBatchOperations(MiniBatchOperationInProgress mi
   }
 }
 
+/**
+ * Do coprocessor pre-increment or pre-append call.
+ * @return Result returned out of the coprocessor, which means bypass all 
further processing
+ *   and return the proffered Result instead, or null which means proceed.
+ */
+private Result doCoprocessorPreCall(Mutation mutation) throws IOException {
+  assert mutation instanceof Increment || mutation instanceof Append;
+  Result result = null;
+  if (region.coprocessorHost != null) {
+if (mutation instanceof Increment) {
+  result = region.coprocessorHost.preIncrementAfterRowLock((Increment) 
mutation);
+} else {
+  result = region.coprocessorHost.preAppendAfterRowLock((Append) 
mutation);
+}
+  }
+  return result;
+}
+
+private Map> reckonDeltas(Mutation mutation, List 
results)

Review comment:
   What about `compute` or `calculate` instead of `reckon`? I had to go to 
a dictionary :)

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
##
@@ -3805,6 +3838,196 @@ public void 
prepareMiniBatchOperations(MiniBatchOperationInProgress mi
   }
 }
 
+/**
+ * Do coprocessor pre-increment or pre-append call.
+ * @return Result returned out of the coprocessor, which means bypass all 
further processing
+ *   and return the proffered Result instead, or null which means proceed.
+ */
+private Result doCoprocessorPreCall(Mutation mutation) throws IOException {
+  assert mutation instanceof Increment || mutation instanceof Append;
+  Result result = null;
+  if (region.coprocessorHost != null) {
+if (mutation instanceof Increment) {
+  result = region.coprocessorHost.preIncrementAfterRowLock((Increment) 
mutation);
+} else {
+  result = region.coprocessorHost.preAppendAfterRowLock((Append) 
mutation);
+}
+  }
+  return result;
+}
+
+private Map> reckonDeltas(Mutation mutation, List 
results)
+  throws IOException {
+  long now = EnvironmentEdgeManager.currentTime();
+  Map> ret = new HashMap<>();
+  // Process a Store/family at a time.
+  for (Map.Entry> entry: 
mutation.getFamilyCellMap().entrySet()) {
+final byte[] columnFamilyName = entry.getKey();
+List deltas = entry.getValue();
+// Reckon for the Store what to apply to WAL and MemStore.
+List toApply = 
reckonDeltasByStore(region.stores.get(columnFamilyName), mutation,
+  now, deltas, results);
+if (!toApply.isEmpty()) {
+  for (Cell cell : toApply) {

Review comment:
   Genuine question, will this save us anything? Not sure how the JIT will 
(or won't) optimize such a thing away. I guess, at a minimum, it would save 
construction of an Iterator object?

##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTable.java
##
@@ -1282,6 +1438,80 @@ public void 
testCheckAndMutateBatchWithFilterAndTimeRange() throws Throwable {
 assertEquals("f", Bytes.toString(result.getValue(FAMILY, 
Bytes.toBytes("F";
   }
 
+  @Test
+  public void testCheckAndIncrementBatch() throws Throwable {
+AsyncTable table = getTable.get();
+byte[] row2 = Bytes.toBytes(Bytes.toString(row) + "2");
+
+table.putAll(Arrays.asList(
+  new 

[GitHub] [hbase] wchevreuil commented on a change in pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


wchevreuil commented on a change in pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#discussion_r469354711



##
File path: hbase-shell/src/main/ruby/shell/commands/assign.rb
##
@@ -22,8 +22,8 @@ module Commands
 class Assign < Command
   def help
 <<-EOF
-Assign a region. Use with caution. If region already assigned,
-this command will do a force reassign. For experts only.
+Assign a region. It could be executed only when region in expected 
state(CLOSED, OFFLINE).

Review comment:
   Nit: Instead of `you can use "assigns" which supported by HBCK2`, let's 
say `you can use "assigns" command available on HBCK2 tool`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24869) migrate website generation to new asf jenkins

2020-08-12 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176392#comment-17176392
 ] 

Sean Busbey commented on HBASE-24869:
-

merged the PR. disabled the job on builds.a.o. updated the job on ci-hadoop.a.o 
to use master instead of the feature branch. started a test build

> migrate website generation to new asf jenkins
> -
>
> Key: HBASE-24869
> URL: https://issues.apache.org/jira/browse/HBASE-24869
> Project: HBase
>  Issue Type: Task
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
>
> Update our website generation so we can use it on the new jenkins ci server
> * needs a job name that has no spaces (or fix the script to handle paths with 
> spaces)
> * needs to only run on nodes labeled git-websites (so it will have the creds 
> to push updates)
> * needs to set editable email notification on failure (details in comment)
> Also we will need to convert to a pipeline DSL
> * define tools, namely maven (alternative get [Tool Environment 
> Plugin|https://plugins.jenkins.io/toolenv/])
> * set timeout for 4 hours (alternative get [build timeout 
> plugin|https://plugins.jenkins.io/build-timeout/])
> * needs to clean workspace when done (haven't found an alternative; maybe 
> it's a default for non-pipeline jobs now?)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] asfgit closed pull request #2246: HBASE-24869 migrate website generation to new asf jenkins

2020-08-12 Thread GitBox


asfgit closed pull request #2246:
URL: https://github.com/apache/hbase/pull/2246


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2248: HBASE-24870 Ignore TestAsyncTableRSCrashPublish

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2248:
URL: https://github.com/apache/hbase/pull/2248#issuecomment-672887463


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 10s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-2.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 16s |  branch-2.2 passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  branch-2.2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  branch-2.2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m  0s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  branch-2.2 passed  |
   | +0 :ok: |  spotbugs  |   3m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  9s |  branch-2.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 11s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  25m 12s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 2.10.0 or 3.1.2 3.2.1.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 348m 32s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 412m  5s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestFromClientSide3 |
   |   | hadoop.hbase.replication.TestReplicationChangingPeerRegionservers |
   |   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
   |   | hadoop.hbase.client.replication.TestReplicationAdminWithClusters |
   |   | hadoop.hbase.client.TestCloneSnapshotFromClientNormal |
   |   | hadoop.hbase.client.TestAdmin1 |
   |   | hadoop.hbase.replication.TestReplicationKillSlaveRS |
   |   | hadoop.hbase.replication.TestReplicationKillSlaveRSWithSeparateOldWALs 
|
   |   | hadoop.hbase.client.TestSnapshotTemporaryDirectory |
   |   | hadoop.hbase.replication.TestReplicationSmallTests |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2248/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2248 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux f19e8fb01e66 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-home/workspace/HBase-PreCommit-GitHub-PR_PR-2248/out/precommit/personality/provided.sh
 |
   | git revision | branch-2.2 / d3a72d99d5 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2248/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2248/1/testReport/
 |
   | Max. process+thread count | 5105 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2248/1/console
 |
   | versions | git=2.11.0 maven=(2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#issuecomment-672854696


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 30s |  master passed  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 18s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   6m 49s |  hbase-shell in the patch passed.  |
   |  |   |  17m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2241 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux 6f401dfa4a64 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 066be4a76f |
   | Default Java | 2020-01-14 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/4/testReport/
 |
   | Max. process+thread count | 2334 (vs. ulimit of 12500) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/4/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#issuecomment-672854029


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  master passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 25s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m  6s |  hbase-shell in the patch passed.  |
   |  |   |  16m 16s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2241 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux 172755ae4e03 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 066be4a76f |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/4/testReport/
 |
   | Max. process+thread count | 2352 (vs. ulimit of 12500) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/4/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#issuecomment-672848412


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 55s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  rubocop  |   0m 10s |  There were no new rubocop 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate 
ASF License warnings.  |
   |  |   |   3m 39s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2241 |
   | Optional Tests | dupname asflicense rubocop |
   | uname | Linux 69b1d9568f23 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 066be4a76f |
   | Max. process+thread count | 47 (vs. ulimit of 12500) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/4/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
rubocop=0.80.0 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bsglz commented on a change in pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


bsglz commented on a change in pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#discussion_r469226122



##
File path: hbase-shell/src/main/ruby/shell/commands/unassign.rb
##
@@ -22,8 +22,10 @@ module Commands
 class Unassign < Command
   def help
 <<-EOF
-Unassign a region. Unassign will close region in current location and then
-reopen it again.  Pass 'true' to force the unassignment ('force' will clear
+Unassign a region. It could be executed only when region in expected 
state(OPEN).
+In addition, you can use "unassigns" which supported by hbck2 to skip the 
state check.
+See 
https://github.com/apache/hbase-operator-tools/blob/master/hbase-hbck2/README.md
 for more info.

Review comment:
   The semantics of unassign have changed and the param is no longer needed; 
maybe it is better to fix this in a separate Jira.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bsglz commented on a change in pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


bsglz commented on a change in pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#discussion_r469205948



##
File path: hbase-shell/src/main/ruby/shell/commands/unassign.rb
##
@@ -22,8 +22,10 @@ module Commands
 class Unassign < Command
   def help
 <<-EOF
-Unassign a region. Unassign will close region in current location and then
-reopen it again.  Pass 'true' to force the unassignment ('force' will clear
+Unassign a region. It could be executed only when region in expected 
state(OPEN).
+In addition, you can use "unassigns" which supported by hbck2 to skip the 
state check.
+See 
https://github.com/apache/hbase-operator-tools/blob/master/hbase-hbck2/README.md
 for more info.

Review comment:
   Make sense.
   BTW, this part seems incorrect too, since we do nothing with the "force" 
param on the server side now.
   ```
   Pass 'true' to force the unassignment ('force' will clear all in-memory 
state in
   master before the reassign. If results in double assignment use hbck -fix to 
resolve.
   To be used by experts).
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[jira] [Resolved] (HBASE-24844) Exception on standalone (master) shutdown

2020-08-12 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HBASE-24844.
--
Fix Version/s: 2.2.7
   2.4.0
   1.7.0
   2.3.1
   3.0.0-alpha-1
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Exception on standalone (master) shutdown
> -
>
> Key: HBASE-24844
> URL: https://issues.apache.org/jira/browse/HBASE-24844
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 3.0.0-alpha-1
>Reporter: Nick Dimiduk
>Assignee: wenfeiyi666
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0, 2.2.7
>
>
> Running HBase ({{master}} branch) in standalone mode, terminating the process 
> results in the following stack traces logged at ERROR. It appears we shut down 
> the zookeeper client out of order with {{shutdown}} of the thread pools.
> {noformat}
> 2020-08-10 14:21:46,777 INFO  [RS:0;localhost:16020] zookeeper.ZooKeeper: 
> Session: 0x100111361f20001 closed
> 2020-08-10 14:21:46,778 INFO  [RS:0;localhost:16020] 
> regionserver.HRegionServer: Exiting; stopping=localhost,16020,1597094491257; 
> zookeeper connection closed.
> 2020-08-10 14:21:46,778 ERROR [main-EventThread] zookeeper.ClientCnxn: Error 
> while calling watcher 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e61af4b rejected from 
> java.util.concurrent.ThreadPoolExecutor@6a5e365f[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 4]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:559)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> 2020-08-10 14:21:46,778 INFO  [shutdown-hook-0] regionserver.ShutdownHook: 
> Starting fs shutdown hook thread.
> 2020-08-10 14:21:46,779 ERROR [main-EventThread] zookeeper.ClientCnxn: Error 
> while calling watcher 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@7d41da91 rejected from 
> java.util.concurrent.ThreadPoolExecutor@6a5e365f[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 4]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:559)
> at 
> org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:40)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> 2020-08-10 14:21:46,780 INFO  [main-EventThread] zookeeper.ClientCnxn: 
> EventThread shut down for session: 0x100111361f20001
> 2020-08-10 14:21:46,780 INFO  [shutdown-hook-0] regionserver.ShutdownHook: 
> Shutdown hook finished.
> {noformat}
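The RejectedExecutionException above comes from the watcher submitting work to a pool that has already been terminated. A minimal sketch of that race and of a defensive guard is below; this is not the actual HBASE-24844 patch, and the `deliverEvent` helper is a hypothetical stand-in for `ZKWatcher.process` submitting to its executor.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class ShutdownOrder {
  // Hypothetical helper: deliver a watcher event to the pool, tolerating the
  // case where shutdown already happened. Returns true if the event was queued.
  static boolean deliverEvent(ExecutorService pool, Runnable event) {
    if (pool.isShutdown()) {
      return false; // pool already shut down; drop the late event quietly
    }
    try {
      pool.submit(event);
      return true;
    } catch (RejectedExecutionException e) {
      return false; // lost the race with shutdown; still no noisy ERROR log
    }
  }

  public static void main(String[] args) {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    System.out.println(deliverEvent(pool, () -> {})); // accepted while running
    pool.shutdown();
    System.out.println(deliverEvent(pool, () -> {})); // rejected after shutdown
  }
}
```

An unguarded `pool.submit` in the second call would throw the same RejectedExecutionException seen in the log; the ordering fix in the issue amounts to making sure the ZooKeeper client (the event source) stops before its consumer pool does.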



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24844) Exception on standalone (master) shutdown

2020-08-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176284#comment-17176284
 ] 

Viraj Jasani commented on HBASE-24844:
--

Thanks [~wenfeiyi666] for the patch. Applied it to master, branch-2, 2.3, 2.2, 
branch-1.

> Exception on standalone (master) shutdown
> -
>
> Key: HBASE-24844
> URL: https://issues.apache.org/jira/browse/HBASE-24844
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 3.0.0-alpha-1
>Reporter: Nick Dimiduk
>Assignee: wenfeiyi666
>Priority: Minor
>
> Running HBase ({{master}} branch) in standalone mode, terminating the process 
> results in the following stack traces logged at ERROR. It appears we shut down 
> the zookeeper client out of order with {{shutdown}} of the thread pools.
> {noformat}
> 2020-08-10 14:21:46,777 INFO  [RS:0;localhost:16020] zookeeper.ZooKeeper: 
> Session: 0x100111361f20001 closed
> 2020-08-10 14:21:46,778 INFO  [RS:0;localhost:16020] 
> regionserver.HRegionServer: Exiting; stopping=localhost,16020,1597094491257; 
> zookeeper connection closed.
> 2020-08-10 14:21:46,778 ERROR [main-EventThread] zookeeper.ClientCnxn: Error 
> while calling watcher 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e61af4b rejected from 
> java.util.concurrent.ThreadPoolExecutor@6a5e365f[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 4]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:559)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> 2020-08-10 14:21:46,778 INFO  [shutdown-hook-0] regionserver.ShutdownHook: 
> Starting fs shutdown hook thread.
> 2020-08-10 14:21:46,779 ERROR [main-EventThread] zookeeper.ClientCnxn: Error 
> while calling watcher 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@7d41da91 rejected from 
> java.util.concurrent.ThreadPoolExecutor@6a5e365f[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 4]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:559)
> at 
> org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:40)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> 2020-08-10 14:21:46,780 INFO  [main-EventThread] zookeeper.ClientCnxn: 
> EventThread shut down for session: 0x100111361f20001
> 2020-08-10 14:21:46,780 INFO  [shutdown-hook-0] regionserver.ShutdownHook: 
> Shutdown hook finished.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] virajjasani commented on a change in pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


virajjasani commented on a change in pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#discussion_r469198506



##
File path: hbase-shell/src/main/ruby/shell/commands/unassign.rb
##
@@ -22,8 +22,10 @@ module Commands
 class Unassign < Command
   def help
 <<-EOF
-Unassign a region. Unassign will close region in current location and then
-reopen it again.  Pass 'true' to force the unassignment ('force' will clear
+Unassign a region. It could be executed only when region in expected 
state(OPEN).
+In addition, you can use "unassigns" which supported by hbck2 to skip the 
state check.
+See 
https://github.com/apache/hbase-operator-tools/blob/master/hbase-hbck2/README.md
 for more info.

Review comment:
   Also, we can keep the link in brackets, like `(For more info on HBCK2: 
https://github.com/apache/hbase-operator-tools/blob/master/hbase-hbck2/README.md)`

##
File path: hbase-shell/src/main/ruby/shell/commands/unassign.rb
##
@@ -22,8 +22,10 @@ module Commands
 class Unassign < Command
   def help
 <<-EOF
-Unassign a region. Unassign will close region in current location and then
-reopen it again.  Pass 'true' to force the unassignment ('force' will clear
+Unassign a region. It could be executed only when region in expected 
state(OPEN).
+In addition, you can use "unassigns" which supported by hbck2 to skip the 
state check.
+See 
https://github.com/apache/hbase-operator-tools/blob/master/hbase-hbck2/README.md
 for more info.

Review comment:
   I think we can write this at the end, similar to `assign.rb` above (above 
Examples).





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] virajjasani closed pull request #2239: HBASE-24844 RecoverableZookeeper#close followed by ExecutorService shutdown

2020-08-12 Thread GitBox


virajjasani closed pull request #2239:
URL: https://github.com/apache/hbase/pull/2239


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#issuecomment-672819894


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 11s |  master passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  1s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   6m 53s |  hbase-shell in the patch passed.  |
   |  |   |  18m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2241 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux 0b6be3cb1dee 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 8646ac139d |
   | Default Java | 2020-01-14 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/3/testReport/
 |
   | Max. process+thread count | 2390 (vs. ulimit of 12500) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/3/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#issuecomment-672819190


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 46s |  master passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 26s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   6m 54s |  hbase-shell in the patch passed.  |
   |  |   |  16m 16s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2241 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux 4d12f79ffa94 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 8646ac139d |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/3/testReport/
 |
   | Max. process+thread count | 2366 (vs. ulimit of 12500) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/3/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24843) Sort the constants in `hbase_constants.rb`

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176275#comment-17176275
 ] 

Hudson commented on HBASE-24843:


Results for branch master
[build #5 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Sort the constants in `hbase_constants.rb`
> --
>
> Key: HBASE-24843
> URL: https://issues.apache.org/jira/browse/HBASE-24843
> Project: HBase
>  Issue Type: Task
>  Components: shell
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> The list of constant definitions in {{hbase_constants.rb}} is a bit of a 
> mess. Sort them.
> Breaks out the minor cleanup from HBASE-24627 / 
> [PR#2215|https://github.com/apache/hbase/pull/2215] into its own ticket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24750) All executor service should start using guava ThreadFactory

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176274#comment-17176274
 ] 

Hudson commented on HBASE-24750:


Results for branch master
[build #5 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> All executor service should start using guava ThreadFactory
> ---
>
> Key: HBASE-24750
> URL: https://issues.apache.org/jira/browse/HBASE-24750
> Project: HBase
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> Currently, the majority of our executor services use guava's 
> ThreadFactoryBuilder when creating fixed-size thread pools, while some 
> executors use our internal hbase-common Threads class, which provides util 
> methods for creating a thread factory.
> Although there is no perf impact, we should let all executors start using our 
> internal library for creating a ThreadFactory rather than keeping the external guava 
> dependency (which is nothing more than a builder class). We might have to add 
> a couple more arguments to support a full-fledged ThreadFactory, but let's do 
> it and stop using guava's builder class.
> *Update:*
> Based on the consensus, we should use only the guava library and retire our 
> internal code that maintains ThreadFactory creation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24856) Fix error prone error in FlushTableSubprocedure

2020-08-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176276#comment-17176276
 ] 

Hudson commented on HBASE-24856:


Results for branch master
[build #5 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/5//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Fix error prone error in FlushTableSubprocedure
> ---
>
> Key: HBASE-24856
> URL: https://issues.apache.org/jira/browse/HBASE-24856
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/HBase_HBase_Nightly_master/component/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/flush/FlushTableSubprocedure.java:[105,30]
>  error: [ArraysAsListPrimitiveArray] Arrays.asList does not autobox primitive 
> arrays, as one might expect.
> (see https://errorprone.info/bugpattern/ArraysAsListPrimitiveArray)
>   Did you mean 'families = Bytes.asList(Bytes.toBytes(family));'?
> [INFO] 1 error
> {noformat}
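The bug pattern error-prone flags can be reproduced in a few lines of plain Java (a standalone illustration, not the HBase code in question):

```java
import java.util.Arrays;
import java.util.List;

public class AsListPitfall {
    public static void main(String[] args) {
        byte[] family = "cf".getBytes();
        // Arrays.asList on a primitive array does NOT autobox: it produces a
        // single-element List<byte[]> wrapping the whole array, not a
        // List<Byte> of its elements.
        List<byte[]> wrapped = Arrays.asList(family);
        assert wrapped.size() == 1;
        assert wrapped.get(0) == family;

        // With a boxed array the elements are spread out as one might expect.
        Byte[] boxed = {1, 2, 3};
        List<Byte> list = Arrays.asList(boxed);
        assert list.size() == 3;
        System.out.println("wrapped=" + wrapped.size() + " boxed=" + list.size());
    }
}
```

The suggested fix in the error message (HBase's {{Bytes.asList}}) avoids the pitfall by operating on the byte array's elements directly.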



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #2241: HBASE-24854 Correct the help content of assign and unassign commands …

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2241:
URL: https://github.com/apache/hbase/pull/2241#issuecomment-672813572


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  rubocop  |   0m  8s |  There were no new rubocop 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 12s |  The patch does not generate 
ASF License warnings.  |
   |  |   |   2m 33s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2241 |
   | Optional Tests | dupname asflicense rubocop |
   | uname | Linux dcc0c3b869cc 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 8646ac139d |
   | Max. process+thread count | 46 (vs. ulimit of 12500) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2241/3/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
rubocop=0.80.0 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2239: HBASE-24844 Exception on standalone (master) shutdown

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2239:
URL: https://github.com/apache/hbase/pull/2239#issuecomment-672812421


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 53s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   7m  9s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 22s |  hbase-zookeeper in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 53s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 18s |  hbase-zookeeper in the patch 
failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 47s |  hbase-zookeeper in the patch 
passed.  |
   |  |   |  29m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2239/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2239 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 61be5cfc1363 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 8646ac139d |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2239/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-zookeeper.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2239/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-zookeeper.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2239/2/testReport/
 |
   | Max. process+thread count | 447 (vs. ulimit of 12500) |
   | modules | C: hbase-zookeeper U: hbase-zookeeper |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2239/2/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2239: HBASE-24844 Exception on standalone (master) shutdown

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2239:
URL: https://github.com/apache/hbase/pull/2239#issuecomment-672812686


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  1s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  0s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   0m 33s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 42s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 11s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 14s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 12s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  30m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2239/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2239 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux b7e8658bcc2d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 8646ac139d |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-zookeeper U: hbase-zookeeper |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2239/2/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2239: HBASE-24844 Exception on standalone (master) shutdown

2020-08-12 Thread GitBox


Apache-HBase commented on pull request #2239:
URL: https://github.com/apache/hbase/pull/2239#issuecomment-672812142


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 55s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  6s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   7m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 21s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 54s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 49s |  hbase-zookeeper in the patch 
passed.  |
   |  |   |  29m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2239/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2239 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 115ef9f9cff8 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 8646ac139d |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2239/2/testReport/
 |
   | Max. process+thread count | 344 (vs. ulimit of 12500) |
   | modules | C: hbase-zookeeper U: hbase-zookeeper |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2239/2/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] virajjasani commented on pull request #2239: HBASE-24844 Exception on standalone (master) shutdown

2020-08-12 Thread GitBox


virajjasani commented on pull request #2239:
URL: https://github.com/apache/hbase/pull/2239#issuecomment-672809153


   Let me keep the patch title: "RecoverableZookeeper#close followed by 
ExecutorService shutdown"
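The ordering the title describes — close the ZooKeeper handle first, then shut the event executor down — can be sketched with stand-in classes. This is illustrative only; {{FakeZk}} is a hypothetical placeholder, not HBase's RecoverableZooKeeper:

```java
import java.io.Closeable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CloseThenShutdown {
    // Stand-in for the ZK handle: a resource whose callbacks are routed to
    // the executor until it is closed.
    static class FakeZk implements Closeable {
        volatile boolean closed;
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService events = Executors.newSingleThreadExecutor();
        FakeZk zk = new FakeZk();

        // Close the handle first so no new callbacks are submitted...
        zk.close();
        // ...then shut the executor down and drain in-flight tasks, so a
        // late callback never hits an already-terminated pool.
        events.shutdown();
        boolean done = events.awaitTermination(5, TimeUnit.SECONDS);

        assert zk.closed && done;
        System.out.println("clean shutdown");
    }
}
```

Shutting the pool down before closing the source of callbacks is what produces RejectedExecutionException-style noise on shutdown.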



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-11288) Splittable Meta

2020-08-12 Thread Francis Christopher Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176265#comment-17176265
 ] 

Francis Christopher Liu commented on HBASE-11288:
-

The run I mentioned in the previous comment completed successfully. I'll try to 
run using AsyncFSWAL and remove workaround #4.

> Splittable Meta
> ---
>
> Key: HBASE-11288
> URL: https://issues.apache.org/jira/browse/HBASE-11288
> Project: HBase
>  Issue Type: Umbrella
>  Components: meta
>Reporter: Francis Christopher Liu
>Assignee: Francis Christopher Liu
>Priority: Major
> Attachments: jstack20200807_bad_rpc_priority.txt, root_priority.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-11288) Splittable Meta

2020-08-12 Thread Francis Christopher Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176263#comment-17176263
 ] 

Francis Christopher Liu commented on HBASE-11288:
-

Hi [~zhangduo], there's a lot of code for a few reasons: 1. I wrote the patch 
to make it easy to see how adding root merely piggybacks on and extends much of 
the meta code, hence the large amount of copy-paste. 2. The code to split meta 
will basically touch existing code that has been touched already, but will be 
generalized. 3. Some code has already gone in to address splittable meta. 4. 
There's some refactoring (renaming root to meta), generalization, etc. The 
production code changes would mainly be generalizing and extending existing 
code paths. I could try to clean up the branch-2 patch so we can see what it 
would look like, if the size is a big concern.

With regard to biasing toward root as a table: root is half of the catalog, and 
its responsibilities are similar to those of the meta table, so I think there 
is reasonable motivation for the bias. Given that, I would like to understand 
why we are building a specialized solution for root instead of coming up with a 
general one for the catalog as a whole, and why it is worth it. Sorry if I 
sound like a broken record; I see that you responded to my question in a 
previous post. I just want to make sure I understand your current position (as 
some things might have changed these past two weeks).




> Splittable Meta
> ---
>
> Key: HBASE-11288
> URL: https://issues.apache.org/jira/browse/HBASE-11288
> Project: HBase
>  Issue Type: Umbrella
>  Components: meta
>Reporter: Francis Christopher Liu
>Assignee: Francis Christopher Liu
>Priority: Major
> Attachments: jstack20200807_bad_rpc_priority.txt, root_priority.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] virajjasani commented on pull request #2239: HBASE-24844 Exception on standalone (master) shutdown

2020-08-12 Thread GitBox


virajjasani commented on pull request #2239:
URL: https://github.com/apache/hbase/pull/2239#issuecomment-672803045


   @WenFeiYi can you update the PR title with what we did? Something that 
represents the threadpool shutdown followed by the close of RecoverableZookeeper. With 
only the error msg in the commit msg, it might not indicate what is done as part of the 
commit.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



