[jira] [Commented] (HBASE-26001) When turn on access control, the cell level TTL of Increment and Append operations is invalid.

2021-07-27 Thread Yutong Xiao (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387846#comment-17387846
 ] 

Yutong Xiao commented on HBASE-26001:
-

I ran the tests on branch-2.3 locally and got the following message:

2021-07-27 15:14:46,016 DEBUG [master/192.168.1.127:0:becomeActiveMaster] 
asyncfs.FanOutOneBlockAsyncDFSOutputHelper(265): ClientProtocol::create wrong 
number of arguments, should be hadoop 3.2 or below
2021-07-27 15:14:46,016 DEBUG [master/192.168.1.127:0:becomeActiveMaster] 
asyncfs.FanOutOneBlockAsyncDFSOutputHelper(271): ClientProtocol::create wrong 
number of arguments, should be hadoop 2.x
2021-07-27 15:14:46,029 DEBUG [master/192.168.1.127:0:becomeActiveMaster] 
asyncfs.FanOutOneBlockAsyncDFSOutputHelper(280): can not find SHOULD_REPLICATE 
flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE

I also reran the tests under branch-2 and they passed.
Maybe the test failures are due to a Hadoop version mismatch. [~stack]

> When turn on access control, the cell level TTL of Increment and Append 
> operations is invalid.
> --
>
> Key: HBASE-26001
> URL: https://issues.apache.org/jira/browse/HBASE-26001
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.6.7, 2.5.0, 2.4.5
>
>
> The AccessController postIncrementBeforeWAL() and postAppendBeforeWAL() methods 
> rewrite the new cell's tags from the old cell's tags. This makes the other kinds 
> of tags in the new cell (such as the TTL tag) invisible afterwards. In Increment 
> and Append operations the new cell has already carried forward all tags of the 
> old cell plus the TTL tag from the mutation, so here in AccessController we do 
> not need to rewrite the tags again; otherwise the TTL tag of newCell becomes 
> invisible in the newly created cell. Since the newCell has already copied all 
> tags of the oldCell, the oldCell is useless here.
> {code:java}
> private Cell createNewCellWithTags(Mutation mutation, Cell oldCell, Cell newCell) {
>     // Collect any ACLs from the old cell
>     List<Tag> tags = Lists.newArrayList();
>     List<Tag> aclTags = Lists.newArrayList();
>     ListMultimap<String, Permission> perms = ArrayListMultimap.create();
>     if (oldCell != null) {
>       Iterator<Tag> tagIterator = PrivateCellUtil.tagsIterator(oldCell);
>       while (tagIterator.hasNext()) {
>         Tag tag = tagIterator.next();
>         if (tag.getType() != PermissionStorage.ACL_TAG_TYPE) {
>           // Not an ACL tag, just carry it through
>           if (LOG.isTraceEnabled()) {
>             LOG.trace("Carrying forward tag from " + oldCell + ": type " + tag.getType()
>                 + " length " + tag.getValueLength());
>           }
>           tags.add(tag);
>         } else {
>           aclTags.add(tag);
>         }
>       }
>     }
> 
>     // Do we have an ACL on the operation?
>     byte[] aclBytes = mutation.getACL();
>     if (aclBytes != null) {
>       // Yes, use it
>       tags.add(new ArrayBackedTag(PermissionStorage.ACL_TAG_TYPE, aclBytes));
>     } else {
>       // No, use what we carried forward
>       if (perms != null) {
>         // TODO: If we collected ACLs from more than one tag we may have a
>         // List<Permission> of size > 1, this can be collapsed into a single
>         // Permission
>         if (LOG.isTraceEnabled()) {
>           LOG.trace("Carrying forward ACLs from " + oldCell + ": " + perms);
>         }
>         tags.addAll(aclTags);
>       }
>     }
> 
>     // If we have no tags to add, just return
>     if (tags.isEmpty()) {
>       return newCell;
>     }
> 
>     // Here the new cell's original tags (e.g. the TTL tag) are replaced and become invisible.
>     return PrivateCellUtil.createCell(newCell, tags);
>   }
> {code}
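A minimal sketch of the direction described above (illustrative only, not necessarily the committed patch): keep newCell's own tags, which in Increment/Append already include the TTL tag and everything carried forward from oldCell, and only attach the ACL tag when the mutation supplies one.

{code:java}
// Illustrative sketch only: preserve newCell's existing tags (TTL included)
// and handle just the ACL tag, so oldCell no longer needs to be consulted.
private Cell createNewCellWithTags(Mutation mutation, Cell newCell) {
  byte[] aclBytes = mutation.getACL();
  if (aclBytes == null) {
    // No ACL on the operation; newCell already carries forward oldCell's tags.
    return newCell;
  }
  List<Tag> tags = Lists.newArrayList();
  Iterator<Tag> tagIterator = PrivateCellUtil.tagsIterator(newCell);
  while (tagIterator.hasNext()) {
    Tag tag = tagIterator.next();
    // Keep every non-ACL tag (TTL tag included) that newCell already carries.
    if (tag.getType() != PermissionStorage.ACL_TAG_TYPE) {
      tags.add(tag);
    }
  }
  // Attach the ACL supplied on the mutation.
  tags.add(new ArrayBackedTag(PermissionStorage.ACL_TAG_TYPE, aclBytes));
  return PrivateCellUtil.createCell(newCell, tags);
}
{code}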





[GitHub] [hbase] Apache-HBase commented on pull request #3528: HBASE-26120 New replication gets stuck or data loss when multiwal gro…

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3528:
URL: https://github.com/apache/hbase/pull/3528#issuecomment-887286495


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   6m 21s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  2s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 37s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 229m 15s |  hbase-server in the patch passed.  
|
   |  |   | 261m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3528 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e260bb957bda 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/1/testReport/
 |
   | Max. process+thread count | 2263 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3503: HBASE-26096 Cleanup the deprecated methods in HBTU related classes and format code

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3503:
URL: https://github.com/apache/hbase/pull/3503#issuecomment-887293395


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   8m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   6m 32s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 55s |  master passed  |
   | +1 :green_heart: |  shadedjars  |  10m 12s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m  2s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |  10m 15s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 44s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m  1s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 233m 58s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |  17m 26s |  hbase-mapreduce in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 21s |  hbase-it in the patch passed.  |
   |  |   | 314m 37s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3503/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3503 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 6c996045a834 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 02d263e7dd |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3503/4/testReport/
 |
   | Max. process+thread count | 3308 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-zookeeper hbase-server hbase-mapreduce 
hbase-it U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3503/4/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Created] (HBASE-26142) NullPointerException when set "hbase.hregion.memstore.mslab.indexchunksize.percent" to zero"

2021-07-27 Thread chenglei (Jira)
chenglei created HBASE-26142:


 Summary: NullPointerException when set 
"hbase.hregion.memstore.mslab.indexchunksize.percent" to zero"
 Key: HBASE-26142
 URL: https://issues.apache.org/jira/browse/HBASE-26142
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.4.0, 3.0.0-alpha-1
Reporter: chenglei








[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Summary: NullPointerException when set 
'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero  (was: 
NullPointerException when set 
"hbase.hregion.memstore.mslab.indexchunksize.percent" to zero")

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Priority: Critical
>






[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: By default, we use {{DefaultMemStore}} , which use no  
{{IndexChunk}} and {{ChunkCreator.indexChunksPool}} is useless, so we could set 

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Priority: Critical
>
> By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
> {{ChunkCreator.indexChunksPool}} is useless, so we could set 





[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: By default, we use {{DefaultMemStore}} , which use no  
{{IndexChunk}} and {{ChunkCreator.indexChunksPool}} is useless, so we could set 
 {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 tos save memory 
space, but after r  (was: By default, we use {{DefaultMemStore}} , which use no 
 {{IndexChunk}} and {{ChunkCreator.indexChunksPool}} is useless, so we could 
set )

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
>
> By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
> {{ChunkCreator.indexChunksPool}} is useless, so we could set  
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 tos save memory 
> space, but after r





[jira] [Assigned] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei reassigned HBASE-26142:


Assignee: chenglei

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
>
> By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
> {{ChunkCreator.indexChunksPool}} is useless, so we could set  
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 tos save memory 
> space, but after r





[GitHub] [hbase] nyl3532016 commented on a change in pull request #3499: HBASE-26089 Support RegionCoprocessor on CompactionServer

2021-07-27 Thread GitBox


nyl3532016 commented on a change in pull request #3499:
URL: https://github.com/apache/hbase/pull/3499#discussion_r677224073



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
##
@@ -409,6 +432,18 @@ void loadTableCoprocessors(final Configuration conf) {
   @Override
   public RegionEnvironment createEnvironment(RegionCoprocessor instance, int 
priority, int seq,
   Configuration conf) {
+if (coprocessorService instanceof HCompactionServer) {

Review comment:
   Yes, we can introduce `RegionCompactionCoprocessorHost`, which would only be 
used for the CompactionServer.








[jira] [Updated] (HBASE-26114) when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster cannot start normally

2021-07-27 Thread Anoop Sam John (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-26114:
---
Priority: Minor  (was: Major)

> when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster 
> cannot start normally 
> ---
>
> Key: HBASE-26114
> URL: https://issues.apache.org/jira/browse/HBASE-26114
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.2.0, 2.4.4
> Environment: HBase 2.2.2
> os.name=Linux
> os.arch=amd64
> os.version=5.4.0-72-generic
> java.version=1.8.0_191
> java.vendor=Oracle Corporation
>Reporter: Jingxuan Fu
>Priority: Minor
>  Labels: patch
> Fix For: 3.0.0-alpha-1
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> In hbase-default.xml:
>   
> {code:xml}
> <property>
>   <name>hbase.mob.compaction.threads.max</name>
>   <value>1</value>
>   <description>
>     The max number of threads used in MobCompactor.
>   </description>
> </property>
> {code}
>  
> When the value is set to a negative number, such as -1, HMaster cannot start 
> normally. The log file will output:
>   
> {code:java}
> 2021-07-22 18:54:13,758 ERROR [master/JavaFuzz:16000:becomeActiveMaster] master.HMaster: Failed to become active master
> java.lang.IllegalArgumentException
>   at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1314)
>   at org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
>   at org.apache.hadoop.hbase.master.MobCompactionChore.<init>(MobCompactionChore.java:51)
>   at org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278)
>   at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
>   at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
>   at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
>   at java.lang.Thread.run(Thread.java:748)
> 2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] master.HMaster: Master server abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
> 2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] master.HMaster: * ABORTING master javafuzz,16000,1626951243154: Unhandled exception. Starting shutdown. *
> java.lang.IllegalArgumentException
>   at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1314)
>   at org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
>   at org.apache.hadoop.hbase.master.MobCompactionChore.<init>(MobCompactionChore.java:51)
>   at org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278)
>   at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
>   at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
>   at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
>   at java.lang.Thread.run(Thread.java:748)
> 2021-07-22 18:54:13,760 INFO  [master/JavaFuzz:16000:becomeActiveMaster] regionserver.HRegionServer: * STOPPING region server 'javafuzz,16000,1626951243154' *
> {code}
>  
> In MobUtils.java (package org.apache.hadoop.hbase.mob), this method is the 
> same from version 2.2.0 to version 2.4.4:
> {code:java}
>   public static ExecutorService createMobCompactorThreadPool(Configuration conf) {
>     int maxThreads = conf.getInt(MobConstants.MOB_COMPACTION_THREADS_MAX,
>         MobConstants.DEFAULT_MOB_COMPACTION_THREADS_MAX);
>     if (maxThreads == 0) {
>       maxThreads = 1;
>     }
>     final SynchronousQueue<Runnable> queue = new SynchronousQueue<>();
>     ThreadPoolExecutor pool = new ThreadPoolExecutor(1, maxThreads, 60, TimeUnit.SECONDS, queue,
>         Threads.newDaemonThreadFactory("MobCompactor"), new RejectedExecutionHandler() {
>           @Override
>           public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
>             try {
>               // waiting for a thread to pick up instead of throwing exceptions.
>               queue.put(r);
>             } catch (InterruptedException e) {
>               throw new RejectedExecutionException(e);
>             }
>           }
>         });
>     ((ThreadPoolExecutor) pool).allowCoreThreadTimeOut(true);
>     return pool;
>   }
> {code}
> When MOB_COMPACTION_THREADS_MAX is set to 0, MobUtils will set it to 1. But 
> the program does not take into account that it is set 
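A minimal guard along these lines (an editorial sketch, not necessarily the committed fix) would treat any non-positive configured value as 1 before the pool is constructed:

{code:java}
// Sketch: clamp the configured value so ThreadPoolExecutor never receives
// maximumPoolSize <= 0, which triggers the IllegalArgumentException above.
int maxThreads = conf.getInt(MobConstants.MOB_COMPACTION_THREADS_MAX,
    MobConstants.DEFAULT_MOB_COMPACTION_THREADS_MAX);
if (maxThreads <= 0) {
  maxThreads = 1;
}
{code}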

[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: 
By default, we use {{DefaultMemStore}}, which uses no {{IndexChunk}}, so 
{{ChunkCreator.indexChunksPool}} is useless and we could set 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space.
But after running for a while, the {{RegionServer}} throws a 
{{NullPointerException}} and aborts:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
{code}

The problem is caused by line 608 in {{ChunkCreator.putbackChunks}}: for a 
{{DataChunk}}, {{Chunk.isIndexChunk}} returns true, yet 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
594  synchronized void putbackChunks(Set<Integer> chunks) {
595    // if there is no pool just try to clear the chunkIdMap in case there is something
596    if (dataChunksPool == null && indexChunksPool == null) {
597      this.removeChunks(chunks);
598      return;
599    }
600
601    // if there is a pool, go over all chunk IDs that came back, the chunks may be from pool or not
602    for (int chunkID : chunks) {
603      // translate chunk ID to chunk, if chunk initially wasn't in pool
604      // this translation will (most likely) return null
605      Chunk chunk = ChunkCreator.this.getChunk(chunkID);
606      if (chunk != null) {
607        if (chunk.isFromPool() && chunk.isIndexChunk()) {
608          indexChunksPool.putbackChunks(chunk);
{code}

For a {{DataChunk}}, {{Chunk.isIndexChunk}} returns true because 
{{Chunk.isIndexChunk}} determines the type of the {{chunk}} based on {{Chunk.size}}:
{code:java}
 boolean isIndexChunk() {
return size == 
ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
  }
{code}

and {{ChunkCreator.getChunkSize}} incorrectly returns the {{DataChunk}} size when 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
int getChunkSize(ChunkType chunkType) {
switch (chunkType) {
  case INDEX_CHUNK:
if (indexChunksPool != null) {
  return indexChunksPool.getChunkSize();
}
  case DATA_CHUNK:
if (dataChunksPool != null) {
  return dataChunksPool.getChunkSize();
} else { // When pools are empty
  return chunkSize;
}
  default:
throw new IllegalArgumentException(
"chunkType must either be INDEX_CHUNK or DATA_CHUNK");
}
  }
{code}
In my opinion, in addition to fixing the erroneous implementation of 
{{ChunkCreator.getChunkSize}}, we had better not determine the type of a 
{{Chunk}} based on {{Chunk.size}}, because 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by the user and 
the sizes of {{IndexChunk}} and {{DataChunk}} could be the same.
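As a minimal illustration of the {{ChunkCreator.getChunkSize}} problem only (a sketch, not the committed patch; {{indexChunkSize}} is a hypothetical name for whatever field holds the configured index chunk size), the INDEX_CHUNK case should not fall through to DATA_CHUNK when the index pool is disabled:

{code:java}
int getChunkSize(ChunkType chunkType) {
  switch (chunkType) {
    case INDEX_CHUNK:
      if (indexChunksPool != null) {
        return indexChunksPool.getChunkSize();
      }
      // Return the configured index chunk size instead of falling through
      // to the DATA_CHUNK case when the index pool is disabled.
      return indexChunkSize;
    case DATA_CHUNK:
      if (dataChunksPool != null) {
        return dataChunksPool.getChunkSize();
      }
      return chunkSize;
    default:
      throw new IllegalArgumentException("chunkType must either be INDEX_CHUNK or DATA_CHUNK");
  }
}
{code}

A more robust direction, as argued above, is to record the chunk type explicitly on the {{Chunk}} itself rather than inferring it from the size.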



  was:By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
{{ChunkCreator.indexChunksPool}} is useless, so we could set  
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 tos save memory 
space, but after r


> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
>
> By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
> {{ChunkCreator.indexChunksPool}} is useless, so we could set  
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
> space, 
> But after running a while, the {{RegionServer}} throws 
> {{NullPointerException}}  and abort:
> {code:java}
>Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.acce

[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: 
By default, we use {{DefaultMemStore}}, which uses no {{IndexChunk}}, so 
{{ChunkCreator.indexChunksPool}} is useless and we could set 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space.
But after running for a while, the {{RegionServer}} throws a 
{{NullPointerException}} and aborts:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
{code}

The problem is caused by line 608 in {{ChunkCreator.putbackChunks}}: 
{{Chunk.isIndexChunk}} returns true for a {{DataChunk}}, yet 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
594  synchronized void putbackChunks(Set<Integer> chunks) {
595    // if there is no pool just try to clear the chunkIdMap in case there is something
596    if (dataChunksPool == null && indexChunksPool == null) {
597      this.removeChunks(chunks);
598      return;
599    }
600
601    // if there is a pool, go over all chunk IDs that came back, the chunks may be from pool or not
602    for (int chunkID : chunks) {
603      // translate chunk ID to chunk, if chunk initially wasn't in pool
604      // this translation will (most likely) return null
605      Chunk chunk = ChunkCreator.this.getChunk(chunkID);
606      if (chunk != null) {
607        if (chunk.isFromPool() && chunk.isIndexChunk()) {
608          indexChunksPool.putbackChunks(chunk);
{code}

For a {{DataChunk}}, {{Chunk.isIndexChunk}} returns true because 
{{Chunk.isIndexChunk}} determines the type of the {{chunk}} based on {{Chunk.size}}:
{code:java}
 boolean isIndexChunk() {
return size == 
ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
  }
{code}

and {{ChunkCreator.getChunkSize}} incorrectly returns the {{DataChunk}} size when 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
int getChunkSize(ChunkType chunkType) {
switch (chunkType) {
  case INDEX_CHUNK:
if (indexChunksPool != null) {
  return indexChunksPool.getChunkSize();
}
  case DATA_CHUNK:
if (dataChunksPool != null) {
  return dataChunksPool.getChunkSize();
} else { // When pools are empty
  return chunkSize;
}
  default:
throw new IllegalArgumentException(
"chunkType must either be INDEX_CHUNK or DATA_CHUNK");
}
  }
{code}
In my opinion, in addition to fixing the erroneous implementation of 
{{ChunkCreator.getChunkSize}}, we had better not determine the type of a 
{{Chunk}} based on {{Chunk.size}}, because 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by the user and 
the sizes of {{IndexChunk}} and {{DataChunk}} could be the same.



  was:
By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
{{ChunkCreator.indexChunksPool}} is useless, so we could set  
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space, 
But after running a while, the {{RegionServer}} throws {{NullPointerException}} 
 and abort:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlu

[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: 
By default, we use {{DefaultMemStore}}, which uses no {{IndexChunk}}, so 
{{ChunkCreator.indexChunksPool}} is useless and we could set 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space.
But after running for a while, the {{RegionServer}} throws a 
{{NullPointerException}} and aborts:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
{code}

The problem is caused by line 608 in {{ChunkCreator.putbackChunks}}: 
{{Chunk.isIndexChunk}} returns true for a {{DataChunk}}, yet 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
594  synchronized void putbackChunks(Set<Integer> chunks) {
595    // if there is no pool just try to clear the chunkIdMap in case there is something
596    if (dataChunksPool == null && indexChunksPool == null) {
597      this.removeChunks(chunks);
598      return;
599    }
600
601    // if there is a pool, go over all chunk IDs that came back, the chunks may be from pool or not
602    for (int chunkID : chunks) {
603      // translate chunk ID to chunk, if chunk initially wasn't in pool
604      // this translation will (most likely) return null
605      Chunk chunk = ChunkCreator.this.getChunk(chunkID);
606      if (chunk != null) {
607        if (chunk.isFromPool() && chunk.isIndexChunk()) {
608          indexChunksPool.putbackChunks(chunk);
{code}

For a {{DataChunk}}, {{Chunk.isIndexChunk}} returns true because 
{{Chunk.isIndexChunk}} determines the type of the {{chunk}} based on {{Chunk.size}}:
{code:java}
 boolean isIndexChunk() {
return size == 
ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
  }
{code}

and {{ChunkCreator.getChunkSize}} incorrectly returns the {{DataChunk}} size when 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
int getChunkSize(ChunkType chunkType) {
switch (chunkType) {
  case INDEX_CHUNK:
if (indexChunksPool != null) {
  return indexChunksPool.getChunkSize();
}
  case DATA_CHUNK:
if (dataChunksPool != null) {
  return dataChunksPool.getChunkSize();
} else { // When pools are empty
  return chunkSize;
}
  default:
throw new IllegalArgumentException(
"chunkType must either be INDEX_CHUNK or DATA_CHUNK");
}
  }
{code}
In my opinion, in addition to fixing the erroneous implementation of 
{{ChunkCreator.getChunkSize}}, we had better not determine the type of a 
{{Chunk}} based on {{Chunk.size}}, because 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by the user and 
the sizes of {{IndexChunk}} and {{DataChunk}} could be the same.



  was:
By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
{{ChunkCreator.indexChunksPool}} is useless, so we could set  
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space, 
But after running a while, the {{RegionServer}} throws {{NullPointerException}} 
 and abort:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlus

[GitHub] [hbase] Apache-HBase commented on pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#issuecomment-887335735


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 47s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   3m 23s |  HBASE-25714 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 28s |  HBASE-25714 passed  |
   | +1 :green_heart: |  spotbugs  |   2m 14s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 13s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 13s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  18m  1s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  48m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/5/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3468 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 4e39f4c357da 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/5/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3529: HBASE-26124 Backport HBASE-25373 "Remove HTrace completely in code base and try to make use of OpenTelemetry" to branch-2

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3529:
URL: https://github.com/apache/hbase/pull/3529#issuecomment-887342234


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 15s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 54s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   3m 13s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   7m 42s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   6m 54s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m  5s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 54s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   6m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 257m 49s |  root in the patch passed.  |
   |  |   | 309m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3529 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e3b18447e592 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/1/testReport/
 |
   | Max. process+thread count | 4118 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-common hbase-client 
hbase-zookeeper hbase-asyncfs hbase-server hbase-mapreduce hbase-shell hbase-it 
hbase-shaded hbase-shaded/hbase-shaded-client hbase-external-blockcache 
hbase-shaded/hbase-shaded-testing-util . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] comnetwork opened a new pull request #3530: HBASE-26142

2021-07-27 Thread GitBox


comnetwork opened a new pull request #3530:
URL: https://github.com/apache/hbase/pull/3530


   






[jira] [Commented] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387905#comment-17387905
 ] 

Anoop Sam John commented on HBASE-26142:


what is the default BB pool size that this index chunk pool will create?

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
>
> By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
> {{ChunkCreator.indexChunksPool}} is useless, so we could set  
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
> space, 
> But after running a while, the {{RegionServer}} throws 
> {{NullPointerException}}  and abort:
> {code:java}
>Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
> {code}
> The problem is caused by line 608 in {{ChunkCreator.putbackChunks}} 
> :{{Chunk.isIndexChunk}} return true for{{DataChunk}}  , and
>  {{ChunkCreator.indexChunksPool}} is null: 
> {code:java}
> 594 synchronized void putbackChunks(Set chunks) {
> 595// if there is no pool just try to clear the chunkIdMap in case there 
> is something
> 596if (dataChunksPool == null && indexChunksPool == null) {
> 597  this.removeChunks(chunks);
> 598  return;
> 599}
> 600
> 601   // if there is a pool, go over all chunk IDs that came back, the chunks 
> may be from pool or not
> 602for (int chunkID : chunks) {
> 603 // translate chunk ID to chunk, if chunk initially wasn't in pool
> 604  // this translation will (most likely) return null
> 605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
> 606 if (chunk != null) {
> 607if (chunk.isFromPool() && chunk.isIndexChunk()) {
> 608  indexChunksPool.putbackChunks(chunk);
> {code}
>  For {{DataChunk}} , {{Chunk.isIndexChunk}} return true because  
> {{Chunk.isIndexChunk}}  determines the type of {{chunk}} based on 
> {{Chunk.size}}
> {code:java}
>  boolean isIndexChunk() {
> return size == 
> ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
>   }
> {code}
> and {{ChunkCreator.getChunkSize}} incorrectly return {{DataChunk}} size when 
> {{ChunkCreator.indexChunksPool}} is null:
> {code:java}
> int getChunkSize(ChunkType chunkType) {
> switch (chunkType) {
>   case INDEX_CHUNK:
> if (indexChunksPool != null) {
>   return indexChunksPool.getChunkSize();
> }
>   case DATA_CHUNK:
> if (dataChunksPool != null) {
>   return dataChunksPool.getChunkSize();
> } else { // When pools are empty
>   return chunkSize;
> }
>   default:
> throw new IllegalArgumentException(
> "chunkType must either be INDEX_CHUNK or DATA_CHUNK");
> }
>   }
> {code}
> In my opinion, in addition to erroneous implementation of 
> {{ChunkCreator.getChunkSize}}, we would better not determine the type of 
> {{Chunk}} based on {{Chunk.size}}, because
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by user and 
> the size of {{IndexChunk}} and {{DataChunk}} could be the same.





[GitHub] [hbase] Apache-HBase commented on pull request #3530: HBASE-26142

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3530:
URL: https://github.com/apache/hbase/pull/3530#issuecomment-887345054










[GitHub] [hbase] Apache-HBase commented on pull request #3530: HBASE-26142

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3530:
URL: https://github.com/apache/hbase/pull/3530#issuecomment-887345239


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  4s |  https://github.com/apache/hbase/pull/3530 
does not apply to master. Rebase required? Wrong Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hbase/pull/3530 |
   | JIRA Issue | HBASE-26142 |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3530/1/console
 |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] comnetwork closed pull request #3530: HBASE-26142

2021-07-27 Thread GitBox


comnetwork closed pull request #3530:
URL: https://github.com/apache/hbase/pull/3530


   






[GitHub] [hbase] comnetwork opened a new pull request #3531: HBASE-26142 NullPointerException when set 'hbase.hregion.memstore.msl…

2021-07-27 Thread GitBox


comnetwork opened a new pull request #3531:
URL: https://github.com/apache/hbase/pull/3531


   …ab.indexchunksize.percent' to zero






[jira] [Commented] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387925#comment-17387925
 ] 

chenglei commented on HBASE-26142:
--

[~anoop.hbase], the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is 0.1, that is to say 
, the byte size of the index chunk pool is 
{hbase.regionserver.global.memstore.size}*0.1 by default

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
>
> By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
> {{ChunkCreator.indexChunksPool}} is useless, so we could set  
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
> space, 
> But after running a while, the {{RegionServer}} throws 
> {{NullPointerException}}  and abort:
> {code:java}
>Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
> {code}
> The problem is caused by line 608 in {{ChunkCreator.putbackChunks}} 
> :{{Chunk.isIndexChunk}} return true for{{DataChunk}}  , and
>  {{ChunkCreator.indexChunksPool}} is null: 
> {code:java}
> 594 synchronized void putbackChunks(Set chunks) {
> 595// if there is no pool just try to clear the chunkIdMap in case there 
> is something
> 596if (dataChunksPool == null && indexChunksPool == null) {
> 597  this.removeChunks(chunks);
> 598  return;
> 599}
> 600
> 601   // if there is a pool, go over all chunk IDs that came back, the chunks 
> may be from pool or not
> 602for (int chunkID : chunks) {
> 603 // translate chunk ID to chunk, if chunk initially wasn't in pool
> 604  // this translation will (most likely) return null
> 605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
> 606 if (chunk != null) {
> 607if (chunk.isFromPool() && chunk.isIndexChunk()) {
> 608  indexChunksPool.putbackChunks(chunk);
> {code}
>  For {{DataChunk}} , {{Chunk.isIndexChunk}} return true because  
> {{Chunk.isIndexChunk}}  determines the type of {{chunk}} based on 
> {{Chunk.size}}
> {code:java}
>  boolean isIndexChunk() {
> return size == 
> ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
>   }
> {code}
> and {{ChunkCreator.getChunkSize}} incorrectly return {{DataChunk}} size when 
> {{ChunkCreator.indexChunksPool}} is null:
> {code:java}
> int getChunkSize(ChunkType chunkType) {
> switch (chunkType) {
>   case INDEX_CHUNK:
> if (indexChunksPool != null) {
>   return indexChunksPool.getChunkSize();
> }
>   case DATA_CHUNK:
> if (dataChunksPool != null) {
>   return dataChunksPool.getChunkSize();
> } else { // When pools are empty
>   return chunkSize;
> }
>   default:
> throw new IllegalArgumentException(
> "chunkType must either be INDEX_CHUNK or DATA_CHUNK");
> }
>   }
> {code}
> In my opinion, in addition to erroneous implementation of 
> {{ChunkCreator.getChunkSize}}, we would better not determine the type of 
> {{Chunk}} based on {{Chunk.size}}, because
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by user and 
> the size of {{IndexChunk}} and {{DataChunk}} could be the same.





[jira] [Comment Edited] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387925#comment-17387925
 ] 

chenglei edited comment on HBASE-26142 at 7/27/21, 9:26 AM:


[~anoop.hbase], the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is 0.1, that is to say 
, the byte size of the index chunk pool is 
{hbase.regionserver.global.memstore.size} * 0.1 by default


was (Author: comnetwork):
[~anoop.hbase], the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is 0.1, that is to say 
, the byte size of the index chunk pool is 
{hbase.regionserver.global.memstore.size}*0.1 by default

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
>
> By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
> {{ChunkCreator.indexChunksPool}} is useless, so we could set  
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
> space, 
> But after running a while, the {{RegionServer}} throws 
> {{NullPointerException}}  and abort:
> {code:java}
>Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
> {code}
> The problem is caused by line 608 in {{ChunkCreator.putbackChunks}} 
> :{{Chunk.isIndexChunk}} return true for{{DataChunk}}  , and
>  {{ChunkCreator.indexChunksPool}} is null: 
> {code:java}
> 594 synchronized void putbackChunks(Set chunks) {
> 595// if there is no pool just try to clear the chunkIdMap in case there 
> is something
> 596if (dataChunksPool == null && indexChunksPool == null) {
> 597  this.removeChunks(chunks);
> 598  return;
> 599}
> 600
> 601   // if there is a pool, go over all chunk IDs that came back, the chunks 
> may be from pool or not
> 602for (int chunkID : chunks) {
> 603 // translate chunk ID to chunk, if chunk initially wasn't in pool
> 604  // this translation will (most likely) return null
> 605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
> 606 if (chunk != null) {
> 607if (chunk.isFromPool() && chunk.isIndexChunk()) {
> 608  indexChunksPool.putbackChunks(chunk);
> {code}
>  For {{DataChunk}} , {{Chunk.isIndexChunk}} return true because  
> {{Chunk.isIndexChunk}}  determines the type of {{chunk}} based on 
> {{Chunk.size}}
> {code:java}
>  boolean isIndexChunk() {
> return size == 
> ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
>   }
> {code}
> and {{ChunkCreator.getChunkSize}} incorrectly return {{DataChunk}} size when 
> {{ChunkCreator.indexChunksPool}} is null:
> {code:java}
> int getChunkSize(ChunkType chunkType) {
> switch (chunkType) {
>   case INDEX_CHUNK:
> if (indexChunksPool != null) {
>   return indexChunksPool.getChunkSize();
> }
>   case DATA_CHUNK:
> if (dataChunksPool != null) {
>   return dataChunksPool.getChunkSize();
> } else { // When pools are empty
>   return chunkSize;
> }
>   default:
> throw new IllegalArgumentException(
> "chunkType must either be INDEX_CHUNK or DATA_CHUNK");
> }
>   }
> {code}
> In my opinion, in addition to erroneous implementation of 
> {{ChunkCreator.getChunkSize}}, we would better not determine the type of 
> {{Chunk}} based on {{Chunk.size}}, because
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by user and 
> the size of {{IndexChunk}} and {{DataChunk}} could be the same.

[jira] [Comment Edited] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387925#comment-17387925
 ] 

chenglei edited comment on HBASE-26142 at 7/27/21, 9:26 AM:


[~anoop.hbase], the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is 0.1; that is to say, 
the byte size of the index chunk pool is 
{{hbase.regionserver.global.memstore.size}} * 0.1 by default.


was (Author: comnetwork):
[~anoop.hbase], the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is 0.1, that is to say 
, the byte size of the index chunk pool is 
{hbase.regionserver.global.memstore.size} * 0.1 by default

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
>
> By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
> {{ChunkCreator.indexChunksPool}} is useless, so we could set  
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
> space, 
> But after running a while, the {{RegionServer}} throws 
> {{NullPointerException}}  and abort:
> {code:java}
>Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
> {code}
> The problem is caused by line 608 in {{ChunkCreator.putbackChunks}} 
> :{{Chunk.isIndexChunk}} return true for{{DataChunk}}  , and
>  {{ChunkCreator.indexChunksPool}} is null: 
> {code:java}
> 594 synchronized void putbackChunks(Set chunks) {
> 595// if there is no pool just try to clear the chunkIdMap in case there 
> is something
> 596if (dataChunksPool == null && indexChunksPool == null) {
> 597  this.removeChunks(chunks);
> 598  return;
> 599}
> 600
> 601   // if there is a pool, go over all chunk IDs that came back, the chunks 
> may be from pool or not
> 602for (int chunkID : chunks) {
> 603 // translate chunk ID to chunk, if chunk initially wasn't in pool
> 604  // this translation will (most likely) return null
> 605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
> 606 if (chunk != null) {
> 607if (chunk.isFromPool() && chunk.isIndexChunk()) {
> 608  indexChunksPool.putbackChunks(chunk);
> {code}
>  For {{DataChunk}} , {{Chunk.isIndexChunk}} return true because  
> {{Chunk.isIndexChunk}}  determines the type of {{chunk}} based on 
> {{Chunk.size}}
> {code:java}
>  boolean isIndexChunk() {
> return size == 
> ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
>   }
> {code}
> and {{ChunkCreator.getChunkSize}} incorrectly return {{DataChunk}} size when 
> {{ChunkCreator.indexChunksPool}} is null:
> {code:java}
> int getChunkSize(ChunkType chunkType) {
> switch (chunkType) {
>   case INDEX_CHUNK:
> if (indexChunksPool != null) {
>   return indexChunksPool.getChunkSize();
> }
>   case DATA_CHUNK:
> if (dataChunksPool != null) {
>   return dataChunksPool.getChunkSize();
> } else { // When pools are empty
>   return chunkSize;
> }
>   default:
> throw new IllegalArgumentException(
> "chunkType must either be INDEX_CHUNK or DATA_CHUNK");
> }
>   }
> {code}
> In my opinion, in addition to erroneous implementation of 
> {{ChunkCreator.getChunkSize}}, we would better not determine the type of 
> {{Chunk}} based on {{Chunk.size}}, because
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by user and 
> the size of {{IndexChunk}} and {{DataChunk}} could be the same.

[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Status: Patch Available  (was: Open)

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.0, 3.0.0-alpha-1
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
>
> By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
> {{ChunkCreator.indexChunksPool}} is useless, so we could set  
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
> space, 
> But after running a while, the {{RegionServer}} throws 
> {{NullPointerException}}  and abort:
> {code:java}
>Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
> {code}
> The problem is caused by line 608 in {{ChunkCreator.putbackChunks}} 
> :{{Chunk.isIndexChunk}} return true for{{DataChunk}}  , and
>  {{ChunkCreator.indexChunksPool}} is null: 
> {code:java}
> 594 synchronized void putbackChunks(Set chunks) {
> 595// if there is no pool just try to clear the chunkIdMap in case there 
> is something
> 596if (dataChunksPool == null && indexChunksPool == null) {
> 597  this.removeChunks(chunks);
> 598  return;
> 599}
> 600
> 601   // if there is a pool, go over all chunk IDs that came back, the chunks 
> may be from pool or not
> 602for (int chunkID : chunks) {
> 603 // translate chunk ID to chunk, if chunk initially wasn't in pool
> 604  // this translation will (most likely) return null
> 605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
> 606 if (chunk != null) {
> 607if (chunk.isFromPool() && chunk.isIndexChunk()) {
> 608  indexChunksPool.putbackChunks(chunk);
> {code}
>  For {{DataChunk}} , {{Chunk.isIndexChunk}} return true because  
> {{Chunk.isIndexChunk}}  determines the type of {{chunk}} based on 
> {{Chunk.size}}
> {code:java}
>  boolean isIndexChunk() {
> return size == 
> ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
>   }
> {code}
> and {{ChunkCreator.getChunkSize}} incorrectly return {{DataChunk}} size when 
> {{ChunkCreator.indexChunksPool}} is null:
> {code:java}
> int getChunkSize(ChunkType chunkType) {
> switch (chunkType) {
>   case INDEX_CHUNK:
> if (indexChunksPool != null) {
>   return indexChunksPool.getChunkSize();
> }
>   case DATA_CHUNK:
> if (dataChunksPool != null) {
>   return dataChunksPool.getChunkSize();
> } else { // When pools are empty
>   return chunkSize;
> }
>   default:
> throw new IllegalArgumentException(
> "chunkType must either be INDEX_CHUNK or DATA_CHUNK");
> }
>   }
> {code}
> In my opinion, in addition to erroneous implementation of 
> {{ChunkCreator.getChunkSize}}, we would better not determine the type of 
> {{Chunk}} based on {{Chunk.size}}, because
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by user and 
> the size of {{IndexChunk}} and {{DataChunk}} could be the same.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: 
By default we use {{DefaultMemStore}}, which uses no {{IndexChunk}}, so 
{{ChunkCreator.indexChunksPool}} is useless and we could set 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space. 
But after running for a while, the {{RegionServer}} throws a 
{{NullPointerException}} and aborts:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
{code}

The problem is caused by line 608 in {{ChunkCreator.putbackChunks}}: 
{{Chunk.isIndexChunk}} returns true for a {{DataChunk}}, and 
{{ChunkCreator.indexChunksPool}} is null: 
{code:java}
594 synchronized void putbackChunks(Set chunks) {
595// if there is no pool just try to clear the chunkIdMap in case there is 
something
596if (dataChunksPool == null && indexChunksPool == null) {
597  this.removeChunks(chunks);
598  return;
599}
600
601   // if there is a pool, go over all chunk IDs that came back, the chunks 
may be from pool or not
602for (int chunkID : chunks) {
603 // translate chunk ID to chunk, if chunk initially wasn't in pool
604  // this translation will (most likely) return null
605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
606 if (chunk != null) {
607if (chunk.isFromPool() && chunk.isIndexChunk()) {
608  indexChunksPool.putbackChunks(chunk);
{code}

 For a {{DataChunk}}, {{Chunk.isIndexChunk}} returns true because 
{{Chunk.isIndexChunk}} determines the type of the {{Chunk}} based on {{Chunk.size}}:
{code:java}
 boolean isIndexChunk() {
return size == 
ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
  }
{code}

and {{ChunkCreator.getChunkSize}} incorrectly returns the {{DataChunk}} size when 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
int getChunkSize(ChunkType chunkType) {
switch (chunkType) {
  case INDEX_CHUNK:
if (indexChunksPool != null) {
  return indexChunksPool.getChunkSize();
}
  case DATA_CHUNK:
if (dataChunksPool != null) {
  return dataChunksPool.getChunkSize();
} else { // When pools are empty
  return chunkSize;
}
  default:
throw new IllegalArgumentException(
"chunkType must either be INDEX_CHUNK or DATA_CHUNK");
}
  }
{code}
In my opinion, besides fixing the erroneous implementation of 
{{ChunkCreator.getChunkSize}}, we had better not determine the type of a 
{{Chunk}} based on {{Chunk.size}}, because 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by the user and 
the sizes of {{IndexChunk}} and {{DataChunk}} could be the same.



  was:
By default, we use {{DefaultMemStore}} , which use no  {{IndexChunk}} and 
{{ChunkCreator.indexChunksPool}} is useless, so we could set  
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space, 
But after running a while, the {{RegionServer}} throws {{NullPointerException}} 
 and abort:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.reg

[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: 
The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}}, 
introduced by HBASE-24892, is 0.1. But when we use the default 
{{DefaultMemStore}}, which has no {{IndexChunk}}, 
{{ChunkCreator.indexChunksPool}} is useless, so we set 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space. 
After running for a while, the {{RegionServer}} throws a 
{{NullPointerException}} and aborts:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
{code}

The problem is caused by line 608 in {{ChunkCreator.putbackChunks}} 
:{{Chunk.isIndexChunk}} return true for{{DataChunk}}  , and
 {{ChunkCreator.indexChunksPool}} is null: 
{code:java}
594 synchronized void putbackChunks(Set chunks) {
595// if there is no pool just try to clear the chunkIdMap in case there is 
something
596if (dataChunksPool == null && indexChunksPool == null) {
597  this.removeChunks(chunks);
598  return;
599}
600
601   // if there is a pool, go over all chunk IDs that came back, the chunks 
may be from pool or not
602for (int chunkID : chunks) {
603 // translate chunk ID to chunk, if chunk initially wasn't in pool
604  // this translation will (most likely) return null
605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
606 if (chunk != null) {
607if (chunk.isFromPool() && chunk.isIndexChunk()) {
608  indexChunksPool.putbackChunks(chunk);
{code}

 For {{DataChunk}} , {{Chunk.isIndexChunk}} return true because  
{{Chunk.isIndexChunk}}  determines the type of {{chunk}} based on {{Chunk.size}}
{code:java}
 boolean isIndexChunk() {
return size == 
ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
  }
{code}

and {{ChunkCreator.getChunkSize}} incorrectly return {{DataChunk}} size when 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
int getChunkSize(ChunkType chunkType) {
switch (chunkType) {
  case INDEX_CHUNK:
if (indexChunksPool != null) {
  return indexChunksPool.getChunkSize();
}
  case DATA_CHUNK:
if (dataChunksPool != null) {
  return dataChunksPool.getChunkSize();
} else { // When pools are empty
  return chunkSize;
}
  default:
throw new IllegalArgumentException(
"chunkType must either be INDEX_CHUNK or DATA_CHUNK");
}
  }
{code}
In my opinion, in addition to erroneous implementation of 
{{ChunkCreator.getChunkSize}}, we would better not determine the type of 
{{Chunk}} based on {{Chunk.size}}, because
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by user and the 
size of {{IndexChunk}} and {{DataChunk}} could be the same.



  was:
The default value of {{}} By default, we use {{DefaultMemStore}} , which use no 
 {{IndexChunk}} and {{ChunkCreator.indexChunksPool}} is useless, so we could 
set  {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save 
memory space, 
But after running a while, the {{RegionServer}} throws {{NullPointerException}} 
 and abort:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at

[jira] [Created] (HBASE-26143) The default value of 'hbase.hregion.memstore.mslab.indexchunksize.percent' should depend on MemStore type

2021-07-27 Thread chenglei (Jira)
chenglei created HBASE-26143:


 Summary: The default value of 
'hbase.hregion.memstore.mslab.indexchunksize.percent' should depend on MemStore 
type
 Key: HBASE-26143
 URL: https://issues.apache.org/jira/browse/HBASE-26143
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.4.0, 3.0.0-alpha-1
Reporter: chenglei






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387925#comment-17387925
 ] 

chenglei edited comment on HBASE-26142 at 7/27/21, 9:43 AM:


[~anoop.hbase], the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is 0.1; that is to say, 
the byte size of the index chunk pool is 
{{hbase.regionserver.global.memstore.size}} * 0.1 by default.
Opened another JIRA, HBASE-26143, to optimize the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}}.


was (Author: comnetwork):
[~anoop.hbase], the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is 0.1, that is to say 
, the byte size of the index chunk pool is 
{{hbase.regionserver.global.memstore.size}}* 0.1 by default

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
>
> The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}} 
> introduce by HBASE-24892 is 0.1, but when we use {{DefaultMemStore}} by 
> default , which has no  {{IndexChunk}} and {{ChunkCreator.indexChunksPool}} 
> is useless, so we set  
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
> space, 
> But after running a while, the {{RegionServer}} throws 
> {{NullPointerException}}  and abort:
> {code:java}
>Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
> {code}
> The problem is caused by line 608 in {{ChunkCreator.putbackChunks}} 
> :{{Chunk.isIndexChunk}} return true for{{DataChunk}}  , and
>  {{ChunkCreator.indexChunksPool}} is null: 
> {code:java}
> 594 synchronized void putbackChunks(Set chunks) {
> 595// if there is no pool just try to clear the chunkIdMap in case there 
> is something
> 596if (dataChunksPool == null && indexChunksPool == null) {
> 597  this.removeChunks(chunks);
> 598  return;
> 599}
> 600
> 601   // if there is a pool, go over all chunk IDs that came back, the chunks 
> may be from pool or not
> 602for (int chunkID : chunks) {
> 603 // translate chunk ID to chunk, if chunk initially wasn't in pool
> 604  // this translation will (most likely) return null
> 605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
> 606 if (chunk != null) {
> 607if (chunk.isFromPool() && chunk.isIndexChunk()) {
> 608  indexChunksPool.putbackChunks(chunk);
> {code}
>  For {{DataChunk}} , {{Chunk.isIndexChunk}} return true because  
> {{Chunk.isIndexChunk}}  determines the type of {{chunk}} based on 
> {{Chunk.size}}
> {code:java}
>  boolean isIndexChunk() {
> return size == 
> ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
>   }
> {code}
> and {{ChunkCreator.getChunkSize}} incorrectly return {{DataChunk}} size when 
> {{ChunkCreator.indexChunksPool}} is null:
> {code:java}
> int getChunkSize(ChunkType chunkType) {
> switch (chunkType) {
>   case INDEX_CHUNK:
> if (indexChunksPool != null) {
>   return indexChunksPool.getChunkSize();
> }
>   case DATA_CHUNK:
> if (dataChunksPool != null) {
>   return dataChunksPool.getChunkSize();
> } else { // When pools are empty
>   return chunkSize;
> }
>   default:
> throw new IllegalArgumentException(
> "chunkType must either be INDEX_CHUNK or DATA_CHUNK");
> }
>   }
> {code}
> In my opinion, in addition to erroneous implementation of 
> {{ChunkCreator

[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: 
The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}}, 
introduced by HBASE-24892, is 0.1. But when we use the default 
{{DefaultMemStore}}, which has no {{IndexChunk}}, 
{{ChunkCreator.indexChunksPool}} is useless, so we set 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space. 
After running for a while, the {{RegionServer}} throws a 
{{NullPointerException}} and aborts:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
{code}

The problem is caused by line 608 in {{ChunkCreator.putbackChunks}} 
:{{Chunk.isIndexChunk}} return true for{{DataChunk}}  , and
 {{ChunkCreator.indexChunksPool}} is null: 
{code:java}
594 synchronized void putbackChunks(Set chunks) {
595// if there is no pool just try to clear the chunkIdMap in case there is 
something
596if (dataChunksPool == null && indexChunksPool == null) {
597  this.removeChunks(chunks);
598  return;
599}
600
601   // if there is a pool, go over all chunk IDs that came back, the chunks 
may be from pool or not
602for (int chunkID : chunks) {
603 // translate chunk ID to chunk, if chunk initially wasn't in pool
604  // this translation will (most likely) return null
605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
606 if (chunk != null) {
607if (chunk.isFromPool() && chunk.isIndexChunk()) {
608  indexChunksPool.putbackChunks(chunk);
{code}

 For {{DataChunk}} , {{Chunk.isIndexChunk}} return true because  
{{Chunk.isIndexChunk}}  determines the type of {{chunk}} based on {{Chunk.size}}
{code:java}
 boolean isIndexChunk() {
return size == 
ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
  }
{code}

and {{ChunkCreator.getChunkSize}} incorrectly return {{DataChunk}} size when 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
int getChunkSize(ChunkType chunkType) {
switch (chunkType) {
  case INDEX_CHUNK:
if (indexChunksPool != null) {
  return indexChunksPool.getChunkSize();
}
  case DATA_CHUNK:
if (dataChunksPool != null) {
  return dataChunksPool.getChunkSize();
} else { // When pools are empty
  return chunkSize;
}
  default:
throw new IllegalArgumentException(
"chunkType must either be INDEX_CHUNK or DATA_CHUNK");
}
  }
{code}
In my opinion, in addition to erroneous implementation of 
{{ChunkCreator.getChunkSize}}, we would better not determine the type of 
{{Chunk}} based on {{Chunk.size}}, because
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by user and the 
size of {{IndexChunk}} and {{DataChunk}} could be the same.



  was:
The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}} 
introduce by HBASE-24892 is 0.1, but when we use {{DefaultMemStore}} by default 
, which has no  {{IndexChunk}} and {{ChunkCreator.indexChunksPool}} is useless, 
so we set  {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save 
memory space, 
But after running a while, the {{RegionServer}} throws {{NullPointerException}} 
 and abort:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
o

[jira] [Comment Edited] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387925#comment-17387925
 ] 

chenglei edited comment on HBASE-26142 at 7/27/21, 9:44 AM:


[~anoop.hbase], the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is 0.1; that is to say, 
the byte size of the index chunk pool is 
{{hbase.regionserver.global.memstore.size}} * 0.1 by default.
Opened another JIRA, HBASE-26143, to optimize the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}}.


was (Author: comnetwork):
[~anoop.hbase], the default value of 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is 0.1, that is to say 
, the byte size of the index chunk pool is 
{{hbase.regionserver.global.memstore.size}}* 0.1 by default.
Open another JIRA HBASE-26143 to optimize the default value of  
{{hbase.hregion.memstore.mslab.indexchunksize.percent}}

> NullPointerException when set 
> 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero
> ---
>
> Key: HBASE-26142
> URL: https://issues.apache.org/jira/browse/HBASE-26142
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
>
> The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}} 
> introduce by HBASE-24892 is 0.1, but when we use {{DefaultMemStore}} by 
> default , which has no  {{IndexChunk}} and {{ChunkCreator.indexChunksPool}} 
> is useless, so we set  
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
> space, 
> But after running a while, the {{RegionServer}} throws 
> {{NullPointerException}}  and abort:
> {code:java}
>Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
> {code}
> The problem is caused by line 608 in {{ChunkCreator.putbackChunks}} 
> :{{Chunk.isIndexChunk}} return true for{{DataChunk}}  , and
>  {{ChunkCreator.indexChunksPool}} is null: 
> {code:java}
> 594 synchronized void putbackChunks(Set chunks) {
> 595// if there is no pool just try to clear the chunkIdMap in case there 
> is something
> 596if (dataChunksPool == null && indexChunksPool == null) {
> 597  this.removeChunks(chunks);
> 598  return;
> 599}
> 600
> 601   // if there is a pool, go over all chunk IDs that came back, the chunks 
> may be from pool or not
> 602for (int chunkID : chunks) {
> 603 // translate chunk ID to chunk, if chunk initially wasn't in pool
> 604  // this translation will (most likely) return null
> 605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
> 606 if (chunk != null) {
> 607if (chunk.isFromPool() && chunk.isIndexChunk()) {
> 608  indexChunksPool.putbackChunks(chunk);
> {code}
>  For {{DataChunk}} , {{Chunk.isIndexChunk}} return true because  
> {{Chunk.isIndexChunk}}  determines the type of {{chunk}} based on 
> {{Chunk.size}}
> {code:java}
>  boolean isIndexChunk() {
> return size == 
> ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
>   }
> {code}
> and {{ChunkCreator.getChunkSize}} incorrectly return {{DataChunk}} size when 
> {{ChunkCreator.indexChunksPool}} is null:
> {code:java}
> int getChunkSize(ChunkType chunkType) {
> switch (chunkType) {
>   case INDEX_CHUNK:
> if (indexChunksPool != null) {
>   return indexChunksPool.getChunkSize();
> }
>   case DATA_CHUNK:
> if (dataChunksPool != null) {
>   return dataChunksPool.getChunkSize();
> } else { // When pools are empty
>   return chunkSize;
> }
>   default:
> throw new IllegalArgumentException(
> "chunkType must either be INDEX_

[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: 
The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}}, 
introduced by HBASE-24892, is 0.1. But when we use the default 
{{DefaultMemStore}}, which has no {{IndexChunk}}, 
{{ChunkCreator.indexChunksPool}} is useless ({{IndexChunk}} is only used by 
{{CompactingMemStore}}), so we set 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space. 
After running for a while, the {{RegionServer}} throws a 
{{NullPointerException}} and aborts:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
{code}

The problem is caused by line 608 in {{ChunkCreator.putbackChunks}} 
:{{Chunk.isIndexChunk}} return true for{{DataChunk}}  , and
 {{ChunkCreator.indexChunksPool}} is null: 
{code:java}
594 synchronized void putbackChunks(Set chunks) {
595// if there is no pool just try to clear the chunkIdMap in case there is 
something
596if (dataChunksPool == null && indexChunksPool == null) {
597  this.removeChunks(chunks);
598  return;
599}
600
601   // if there is a pool, go over all chunk IDs that came back, the chunks 
may be from pool or not
602for (int chunkID : chunks) {
603 // translate chunk ID to chunk, if chunk initially wasn't in pool
604  // this translation will (most likely) return null
605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
606 if (chunk != null) {
607if (chunk.isFromPool() && chunk.isIndexChunk()) {
608  indexChunksPool.putbackChunks(chunk);
{code}

 For {{DataChunk}} , {{Chunk.isIndexChunk}} return true because  
{{Chunk.isIndexChunk}}  determines the type of {{chunk}} based on {{Chunk.size}}
{code:java}
 boolean isIndexChunk() {
return size == 
ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
  }
{code}

and {{ChunkCreator.getChunkSize}} incorrectly return {{DataChunk}} size when 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
int getChunkSize(ChunkType chunkType) {
switch (chunkType) {
  case INDEX_CHUNK:
if (indexChunksPool != null) {
  return indexChunksPool.getChunkSize();
}
  case DATA_CHUNK:
if (dataChunksPool != null) {
  return dataChunksPool.getChunkSize();
} else { // When pools are empty
  return chunkSize;
}
  default:
throw new IllegalArgumentException(
"chunkType must either be INDEX_CHUNK or DATA_CHUNK");
}
  }
{code}
In my opinion, in addition to erroneous implementation of 
{{ChunkCreator.getChunkSize}}, we would better not determine the type of 
{{Chunk}} based on {{Chunk.size}}, because
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by user and the 
size of {{IndexChunk}} and {{DataChunk}} could be the same.



  was:
The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}} 
introduced by HBASE-24892 is 0.1, but when we use {{DefaultMemStore}} by 
default , which has no  {{IndexChunk}} and {{ChunkCreator.indexChunksPool}} is 
useless, so we set  {{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 
0 to save memory space, 
But after running a while, the {{RegionServer}} throws {{NullPointerException}} 
 and abort:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemSto

[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: 
The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}}, 
introduced by HBASE-24892, is 0.1. But when we use the default 
{{DefaultMemStore}}, which has no {{IndexChunk}}, 
{{ChunkCreator.indexChunksPool}} is useless ({{IndexChunk}} is only used by 
{{CompactingMemStore}}), so we set 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space. 
After running for a while, the {{RegionServer}} throws a 
{{NullPointerException}} and aborts:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
{code}

The problem is caused by line 608 in {{ChunkCreator.putbackChunks}}: 
{{Chunk.isIndexChunk}} returns true for a {{DataChunk}}, and 
{{ChunkCreator.indexChunksPool}} is null: 
{code:java}
594 synchronized void putbackChunks(Set chunks) {
595// if there is no pool just try to clear the chunkIdMap in case there is 
something
596if (dataChunksPool == null && indexChunksPool == null) {
597  this.removeChunks(chunks);
598  return;
599}
600
601   // if there is a pool, go over all chunk IDs that came back, the chunks 
may be from pool or not
602for (int chunkID : chunks) {
603 // translate chunk ID to chunk, if chunk initially wasn't in pool
604  // this translation will (most likely) return null
605  Chunk chunk = ChunkCreator.this.getChunk(chunkID);
606 if (chunk != null) {
607if (chunk.isFromPool() && chunk.isIndexChunk()) {
608  indexChunksPool.putbackChunks(chunk);
{code}

 For a {{DataChunk}}, {{Chunk.isIndexChunk}} returns true because 
{{Chunk.isIndexChunk}} determines the type of the {{Chunk}} based on {{Chunk.size}}:
{code:java}
 boolean isIndexChunk() {
return size == 
ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
  }
{code}

and {{ChunkCreator.getChunkSize}} incorrectly returns the {{DataChunk}} size when 
{{ChunkCreator.indexChunksPool}} is null:
{code:java}
int getChunkSize(ChunkType chunkType) {
switch (chunkType) {
  case INDEX_CHUNK:
if (indexChunksPool != null) {
  return indexChunksPool.getChunkSize();
}
  case DATA_CHUNK:
if (dataChunksPool != null) {
  return dataChunksPool.getChunkSize();
} else { // When pools are empty
  return chunkSize;
}
  default:
throw new IllegalArgumentException(
"chunkType must either be INDEX_CHUNK or DATA_CHUNK");
}
  }
{code}
In my opinion, besides fixing the erroneous implementation of 
{{ChunkCreator.getChunkSize}}, we had better not determine the type of a 
{{Chunk}} based on {{Chunk.size}}, because 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by the user and 
the sizes of {{IndexChunk}} and {{DataChunk}} could be the same. Tagging the 
{{Chunk}} with a {{ChunkType}} is a better choice.
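
A small self-contained sketch of the idea above, tagging each chunk with its 
type at creation time so the check no longer depends on the user-configurable 
sizes. The classes here are simplified stand-ins, not the actual HBase ones:

{code:java}
public class ChunkTypeTagSketch {
  enum ChunkType { INDEX_CHUNK, DATA_CHUNK }

  static class Chunk {
    private final ChunkType chunkType; // carried from creation time
    private final int size;

    Chunk(ChunkType chunkType, int size) {
      this.chunkType = chunkType;
      this.size = size;
    }

    boolean isIndexChunk() {
      // Independent of size, so equal index/data chunk sizes are harmless.
      return chunkType == ChunkType.INDEX_CHUNK;
    }
  }

  public static void main(String[] args) {
    int sameSize = 2 * 1024 * 1024; // user configured both types to 2 MiB
    Chunk data = new Chunk(ChunkType.DATA_CHUNK, sameSize);
    Chunk index = new Chunk(ChunkType.INDEX_CHUNK, sameSize);
    System.out.println(data.isIndexChunk());  // false
    System.out.println(index.isIndexChunk()); // true
  }
}
{code}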



  was:
The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}} 
introduced by HBASE-24892 is 0.1, but when we use {{DefaultMemStore}} by 
default , which has no  {{IndexChunk}} and {{ChunkCreator.indexChunksPool}} is 
useless({{IndexChunk}} is only used by {{CompactingMemStore}}), so we set  
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space, 
But after running a while, the {{RegionServer}} throws {{NullPointerException}} 
 and abort:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hb

[GitHub] [hbase] Apache-HBase commented on pull request #3531: HBASE-26142 NullPointerException when set 'hbase.hregion.memstore.msl…

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3531:
URL: https://github.com/apache/hbase/pull/3531#issuecomment-887406709


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 42s |  master passed  |
   | +1 :green_heart: |  compile  |   7m 58s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 43s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 42s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 37s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  32m 37s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   4m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  83m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3531/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3531 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 1c873855d152 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 02d263e7dd |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3531/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3529: HBASE-26124 Backport HBASE-25373 "Remove HTrace completely in code base and try to make use of OpenTelemetry" to branch-2

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3529:
URL: https://github.com/apache/hbase/pull/3529#issuecomment-887412593


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 35s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   3m 14s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   7m 14s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   6m 42s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 11s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m  4s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   2m 52s |  root generated 1 new + 54 unchanged 
- 1 fixed = 55 total (was 55)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 365m 11s |  root in the patch failed.  |
   |  |   | 416m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3529 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 0e8da7932790 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/1/artifact/yetus-jdk8-hadoop2-check/output/diff-javadoc-javadoc-root.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/1/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/1/testReport/
 |
   | Max. process+thread count | 2432 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-common hbase-client 
hbase-zookeeper hbase-asyncfs hbase-server hbase-mapreduce hbase-shell hbase-it 
hbase-shaded hbase-shaded/hbase-shaded-client hbase-external-blockcache 
hbase-shaded/hbase-shaded-testing-util . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#issuecomment-887421005


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 42s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  HBASE-25714 passed  |
   | +1 :green_heart: |  shadedjars  |   8m 18s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  3s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 50s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 150m 42s |  hbase-server in the patch failed.  |
   |  |   | 180m 47s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3468 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux dac159fb85c1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/5/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/5/testReport/
 |
   | Max. process+thread count | 4504 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/5/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3528: HBASE-26120 New replication gets stuck or data loss when multiwal gro…

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3528:
URL: https://github.com/apache/hbase/pull/3528#issuecomment-887426750


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  10m 59s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  6s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   3m 43s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  branch-2 passed  |
   | +1 :green_heart: |  spotbugs  |   2m 14s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 22s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  13m  7s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  54m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3528 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux fc0a5066f7bc 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bbeaudreault opened a new pull request #3532: HBASE-26122: Implement an optional maximum size for Gets, after which a partial result is returned

2021-07-27 Thread GitBox


bbeaudreault opened a new pull request #3532:
URL: https://github.com/apache/hbase/pull/3532


   https://issues.apache.org/jira/browse/HBASE-26122
   
   Adds a `Get#setMaxResultSize(long)` method, similar to the one in Scan. The 
default is -1, meaning not enabled. When a max result size is added to a Get 
and there are more cells than fit in the specified size, 
`Result#mayHaveMoreCellsInRow()` will be true. This utilizes ScannerContext on 
the server side, since Gets are backed by single row scans.
   
   Unlike Scans, no response stitching is implemented. The user must handle the 
possible true value in `Result#mayHaveMoreCellsInRow()` accordingly. This seems 
like a fine initial behavior since the default is unlimited, meaning a user 
would have to opt in to this new functionality with the intent to handle the 
possible return values.
   
   I've added tests to TestHRegion for the HRegion implementation. For the 
RSRpcServices I wasn't sure of the best convention, so I added it to the 
existing TestPartialResultsFromClientSide. All new tests pass.
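   
   As a rough illustration of the intended client usage (my sketch, not code from this PR: `boundedGet` and `my_table` are made-up names, `Get#setMaxResultSize(long)` is the method proposed here, and `Result#mayHaveMoreCellsInRow()` is the existing client method referenced above):
   
```java
// Sketch only: setMaxResultSize(long) on Get is the API proposed by this PR,
// not something available in current releases; the table name is arbitrary.
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class BoundedGetExample {
  static Result boundedGet(Connection conn, byte[] row) throws IOException {
    try (Table table = conn.getTable(TableName.valueOf("my_table"))) {
      Get get = new Get(row);
      get.setMaxResultSize(1024 * 1024); // proposed API: cap the response at ~1 MB
      Result result = table.get(get);
      if (result.mayHaveMoreCellsInRow()) {
        // No stitching is done for Gets: the caller decides whether the partial
        // row is enough or whether to fall back to a full row Scan.
      }
      return result;
    }
  }
}
```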


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-26144) The HStore.snapshot method is never called in main code

2021-07-27 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-26144:
-

 Summary: The HStore.snapshot method is never called in main code
 Key: HBASE-26144
 URL: https://issues.apache.org/jira/browse/HBASE-26144
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Duo Zhang
Assignee: Duo Zhang


In the comment on HStore.flushCache, we say that the HStore.snapshot method must be 
called first. But actually, we call memstore.snapshot directly from 
StoreFlusherImpl.prepare, without holding the store's write lock. The reason we do not 
need to hold the write lock is that we already hold HRegion.updatesLock in the upper 
layer, so it is OK.

See HBASE-10087 for more discussion about this.

So in general, I think we could remove the snapshot method in HStore, and I do not 
think we need to hold the write lock when calling clearSnapshot either. In the 
HRegion layer we guarantee that there is only one ongoing flush of the region, so 
while that flush is running it is already safe to operate on the snapshot of the 
memstore, especially since the clearSnapshot method itself is thread safe.
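
Below is a minimal, hypothetical sketch of the coordination described above. It is 
not the actual HBase code: Region, Store, updatesLock and flushInProgress are 
invented stand-ins for HRegion/HStore internals. The point is only that the region 
layer serializes flushes and blocks writers while snapshotting, so the store can 
take and later clear its snapshot without a store-level write lock.

{code:java}
// Simplified model only; these classes are hypothetical stand-ins, not the
// real HRegion/HStore implementations.
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class Region {
  private final ReadWriteLock updatesLock = new ReentrantReadWriteLock();
  private final AtomicBoolean flushInProgress = new AtomicBoolean(false);
  private final Store store = new Store();

  void flush() {
    // Only one flush per region may run at a time.
    if (!flushInProgress.compareAndSet(false, true)) {
      return;
    }
    try {
      updatesLock.writeLock().lock(); // blocks concurrent writes while snapshotting
      try {
        store.snapshotMemstore();     // no store-level write lock needed here
      } finally {
        updatesLock.writeLock().unlock();
      }
      store.flushSnapshotToDisk();    // runs outside the updates lock
      store.clearSnapshot();          // thread safe on its own
    } finally {
      flushInProgress.set(false);
    }
  }
}

class Store {
  void snapshotMemstore() { /* move the active memstore contents into a snapshot */ }
  void flushSnapshotToDisk() { /* write the snapshot out as a store file */ }
  void clearSnapshot() { /* drop the snapshot once the store file is committed */ }
}
{code}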



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3532: HBASE-26122: Implement an optional maximum size for Gets, after which a partial result is returned

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3532:
URL: https://github.com/apache/hbase/pull/3532#issuecomment-887457123


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m 56s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  8s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 47s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m 36s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m  7s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   2m  5s |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 20s |  hbase-server in the patch failed.  |
   | -0 :warning: |  javac  |   0m 20s |  hbase-server in the patch failed.  |
   | -1 :x: |  shadedjars  |   4m 28s |  patch has 10 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 43s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 26s |  hbase-protocol in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 38s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  |   0m 23s |  hbase-server in the patch failed.  |
   |  |   |  34m 30s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3532 |
   | JIRA Issue | HBASE-26122 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 536468bd32e6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-jdk8-hadoop2-check/output/patch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-jdk8-hadoop2-check/output/patch-compile-hbase-server.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-jdk8-hadoop2-check/output/patch-compile-hbase-server.txt
 |
   | shadedjars | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-jdk8-hadoop2-check/output/patch-shadedjars.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/testReport/
 |
   | Max. process+thread count | 345 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-protocol hbase-client 
hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3531: HBASE-26142 NullPointerException when set 'hbase.hregion.memstore.msl…

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3531:
URL: https://github.com/apache/hbase/pull/3531#issuecomment-887459232


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 28s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 46s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 19s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 140m 22s |  hbase-server in the patch passed.  
|
   |  |   | 173m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3531/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3531 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux be951c247c18 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 02d263e7dd |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3531/1/testReport/
 |
   | Max. process+thread count | 4448 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3531/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3531: HBASE-26142 NullPointerException when set 'hbase.hregion.memstore.msl…

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3531:
URL: https://github.com/apache/hbase/pull/3531#issuecomment-887462362


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 57s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 24s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 42s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 146m 55s |  hbase-server in the patch passed.  
|
   |  |   | 178m 16s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3531/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3531 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 3da9ee4512b0 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 02d263e7dd |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3531/1/testReport/
 |
   | Max. process+thread count | 4654 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3531/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3532: HBASE-26122: Implement an optional maximum size for Gets, after which a partial result is returned

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3532:
URL: https://github.com/apache/hbase/pull/3532#issuecomment-887465009


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 49s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   6m 35s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 20s |  branch-2 passed  |
   | +1 :green_heart: |  spotbugs  |   7m 54s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   1m 39s |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 20s |  hbase-server in the patch failed.  |
   | -0 :warning: |  cc  |   0m 20s |  hbase-server in the patch failed.  |
   | -0 :warning: |  javac  |   0m 20s |  hbase-server in the patch failed.  |
   | -0 :warning: |  checkstyle  |   1m 13s |  hbase-server: The patch 
generated 8 new + 272 unchanged - 0 fixed = 280 total (was 272)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  hadoopcheck  |   2m  7s |  The patch causes 10 errors with 
Hadoop v3.1.2.  |
   | -1 :x: |  hadoopcheck  |   4m  9s |  The patch causes 10 errors with 
Hadoop v3.2.1.  |
   | -1 :x: |  hbaseprotoc  |   0m 21s |  hbase-server in the patch failed.  |
   | -1 :x: |  spotbugs  |   0m 18s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 43s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  46m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3532 |
   | JIRA Issue | HBASE-26122 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile cc hbaseprotoc prototool |
   | uname | Linux cb2aae62b5d6 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-general-check/output/patch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-general-check/output/patch-compile-hbase-server.txt
 |
   | cc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-general-check/output/patch-compile-hbase-server.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-general-check/output/patch-compile-hbase-server.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | hadoopcheck | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-general-check/output/patch-javac-3.1.2.txt
 |
   | hadoopcheck | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-general-check/output/patch-javac-3.2.1.txt
 |
   | hbaseprotoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-general-check/output/patch-hbaseprotoc-hbase-server.txt
 |
   | spotbugs | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/artifact/yetus-general-check/output/patch-spotbugs-hbase-server.txt
 |
   | Max. process+thread count | 96 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-protocol hbase-client 
hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.

[jira] [Commented] (HBASE-26049) Remove DfsBuilderUtility

2021-07-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388028#comment-17388028
 ] 

Hudson commented on HBASE-26049:


Results for branch master
[build #353 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove DfsBuilderUtility
> 
>
> Key: HBASE-26049
> URL: https://issues.apache.org/jira/browse/HBASE-26049
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1, 2.5.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-2
>
>
> DfsBuilderUtility was created to reflectively access 
> DistributedFileSystem$HdfsDataOutputStreamBuilder, which was added by 
> HDFS-11170 and available since Hadoop 2.9.0.
> We can remove this class and access the HDFS builder class directly in HBase 
> 3 and 2.5.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26119) Polish TestAsyncNonMetaRegionLocator

2021-07-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388027#comment-17388027
 ] 

Hudson commented on HBASE-26119:


Results for branch master
[build #353 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Polish TestAsyncNonMetaRegionLocator
> 
>
> Key: HBASE-26119
> URL: https://issues.apache.org/jira/browse/HBASE-26119
> Project: HBase
>  Issue Type: Improvement
>  Components: meta replicas, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> It creates a Connection in constructor but only close it in AfterClass 
> method, which leaks Connections and make the code a bit ugly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-21946) Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable

2021-07-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388029#comment-17388029
 ] 

Hudson commented on HBASE-21946:


Results for branch master
[build #353 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable
> --
>
> Key: HBASE-21946
> URL: https://issues.apache.org/jira/browse/HBASE-21946
> Project: HBase
>  Issue Type: Improvement
>  Components: Offheaping
>Reporter: Zheng Hu
>Assignee: Wei-Chiu Chuang
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-2
>
> Attachments: HBASE-21946.HBASE-21879.v01.patch, 
> HBASE-21946.HBASE-21879.v02.patch, HBASE-21946.HBASE-21879.v03.patch, 
> HBASE-21946.HBASE-21879.v04.patch
>
>
> [~stakiar] is working on HDFS-3246,  so now we have to keep the byte[] pread 
> in HFileBlock reading.  Once it get resolved, we can upgrade the hadoop 
> version and do the replacement. 
> I think it will be a great p999 latency improvement in 100% Get case, anyway 
> file a issue address this firstly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26118) The HStore.commitFile and HStore.moveFileIntoPlace almost have the same logic

2021-07-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388030#comment-17388030
 ] 

Hudson commented on HBASE-26118:


Results for branch master
[build #353 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/353/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The HStore.commitFile and HStore.moveFileIntoPlace almost have the same logic
> -
>
> Key: HBASE-26118
> URL: https://issues.apache.org/jira/browse/HBASE-26118
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, regionserver
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> We should unify them and only have single entry point for committing 
> storefiles.
> This is good for implementing HBASE-26067.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3529: HBASE-26124 Backport HBASE-25373 "Remove HTrace completely in code base and try to make use of OpenTelemetry" to branch-2

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3529:
URL: https://github.com/apache/hbase/pull/3529#issuecomment-887482606


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 48s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 20s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   9m 55s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  branch-2 passed  |
   | +1 :green_heart: |  spotbugs  |  20m 22s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 49s |  the patch passed  |
   | -0 :warning: |  javac  |   9m 49s |  root generated 3 new + 1887 unchanged 
- 0 fixed = 1890 total (was 1887)  |
   | -0 :warning: |  checkstyle  |   2m 41s |  root: The patch generated 27 new 
+ 742 unchanged - 21 fixed = 769 total (was 763)  |
   | -0 :warning: |  rubocop  |   0m  4s |  The patch generated 4 new + 3 
unchanged - 8 fixed = 7 total (was 11)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m 15s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 35s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |  23m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   2m 44s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 113m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3529 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile xml rubocop |
   | uname | Linux 67fdf8d83f26 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/artifact/yetus-general-check/output/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/artifact/yetus-general-check/output/diff-checkstyle-root.txt
 |
   | rubocop | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/artifact/yetus-general-check/output/diff-patch-rubocop.txt
 |
   | Max. process+thread count | 126 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-common hbase-client 
hbase-zookeeper hbase-asyncfs hbase-server hbase-mapreduce hbase-shell hbase-it 
hbase-shaded hbase-shaded/hbase-shaded-client hbase-external-blockcache 
hbase-shaded/hbase-shaded-testing-util . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 rubocop=0.80.0 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-26142) NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero

2021-07-27 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26142:
-
Description: 
The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}}, 
introduced by HBASE-24892, is 0.1. But we use the default {{DefaultMemStore}}, 
which has no {{IndexChunk}}, so {{ChunkCreator.indexChunksPool}} is useless 
({{IndexChunk}} is only used by {{CompactingMemStore}}), and we set 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory. 
After running for a while, the {{RegionServer}} throws a {{NullPointerException}} 
and aborts:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.close(MemStoreLABImpl.java:268)
at org.apache.hadoop.hbase.regionserver.Segment.close(Segment.java:149)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.clearSnapshot(AbstractMemStore.java:251)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1244)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:137)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2461)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2963)
{code}

The problem is caused by line 608 in {{ChunkCreator.putbackChunks}}: 
{{Chunk.isIndexChunk}} incorrectly returns true for a {{DataChunk}}, so the code 
unexpectedly invokes {{indexChunksPool.putbackChunks}} while 
{{indexChunksPool}} is null:

{code:java}
594  synchronized void putbackChunks(Set<Integer> chunks) {
595    // if there is no pool just try to clear the chunkIdMap in case there is something
596    if (dataChunksPool == null && indexChunksPool == null) {
597      this.removeChunks(chunks);
598      return;
599    }
600
601    // if there is a pool, go over all chunk IDs that came back, the chunks may be from pool or not
602    for (int chunkID : chunks) {
603      // translate chunk ID to chunk, if chunk initially wasn't in pool
604      // this translation will (most likely) return null
605      Chunk chunk = ChunkCreator.this.getChunk(chunkID);
606      if (chunk != null) {
607        if (chunk.isFromPool() && chunk.isIndexChunk()) {
608          indexChunksPool.putbackChunks(chunk);
{code}

For a {{DataChunk}}, {{Chunk.isIndexChunk}} returns true because 
{{Chunk.isIndexChunk}} determines the type of the chunk based on {{Chunk.size}}:
{code:java}
  boolean isIndexChunk() {
    return size ==
      ChunkCreator.getInstance().getChunkSize(ChunkCreator.ChunkType.INDEX_CHUNK);
  }
{code}

and {{ChunkCreator.getChunkSize}} incorrectly returns the {{DataChunk}} size when 
{{ChunkCreator.indexChunksPool}} is null, because the {{INDEX_CHUNK}} case falls 
through to {{DATA_CHUNK}}:
{code:java}
int getChunkSize(ChunkType chunkType) {
switch (chunkType) {
  case INDEX_CHUNK:
if (indexChunksPool != null) {
  return indexChunksPool.getChunkSize();
}
  case DATA_CHUNK:
if (dataChunksPool != null) {
  return dataChunksPool.getChunkSize();
} else { // When pools are empty
  return chunkSize;
}
  default:
throw new IllegalArgumentException(
"chunkType must either be INDEX_CHUNK or DATA_CHUNK");
}
  }
{code}
In my opinion, besides fixing the erroneous implementation of 
{{ChunkCreator.getChunkSize}}, we had better not determine the type of a 
{{Chunk}} from {{Chunk.size}} at all, because 
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} is set by the user and the 
sizes of {{IndexChunk}} and {{DataChunk}} could be equal. Tagging each {{Chunk}} 
with a {{ChunkType}} is a better choice.
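
As a rough sketch of the tagging idea (illustration only, not the actual patch; the 
enum mirrors the existing ChunkCreator.ChunkType values), each Chunk would record 
its type at creation time so isIndexChunk no longer depends on sizes or pool 
configuration:

{code:java}
// Hypothetical sketch: tag each chunk with an explicit type instead of
// inferring the type from its size.
enum ChunkType { INDEX_CHUNK, DATA_CHUNK }

abstract class Chunk {
  protected final int size;
  protected final ChunkType chunkType; // recorded once when the chunk is allocated

  protected Chunk(int size, ChunkType chunkType) {
    this.size = size;
    this.chunkType = chunkType;
  }

  boolean isIndexChunk() {
    // No size comparison: a DataChunk can never be mistaken for an IndexChunk,
    // even when the index chunk pool is disabled or the two sizes are equal.
    return chunkType == ChunkType.INDEX_CHUNK;
  }
}
{code}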



  was:
The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}} 
introduced by HBASE-24892 is 0.1, but when we use {{DefaultMemStore}} by 
default , which has no  {{IndexChunk}} and {{ChunkCreator.indexChunksPool}} is 
useless({{IndexChunk}} is only used by {{CompactingMemStore}}), so we set  
{{hbase.hregion.memstore.mslab.indexchunksize.percent}} to 0 to save memory 
space, 
But after running a while, the {{RegionServer}} throws {{NullPointerException}} 
 and abort:
{code:java}
   Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator$MemStoreChunkPool.access$900(ChunkCreator.java:310)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.putbackChunks(ChunkCreator.java:608)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.recycleChunks(MemStoreLABImpl.java:297)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.

[jira] [Created] (HBASE-26145) Update fails with DoNotRetryIOException while the region is being split

2021-07-27 Thread Oleg Muravskiy (Jira)
Oleg Muravskiy created HBASE-26145:
--

 Summary: Update fails with DoNotRetryIOException while the region 
is being split
 Key: HBASE-26145
 URL: https://issues.apache.org/jira/browse/HBASE-26145
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.4.4
 Environment: We are running HBase compiled from the release 2.4.4 tag 
with hadoop3 profile enabled, no other changes:

JVM Version Red Hat, Inc. 1.8.0_292-25.292-b10 JVM vendor and version
HBase Version 2.4.4-H3, revision=Unknown HBase version and revision
HBase Compiled Fri Jul 23 08:21:49 UTC 2021, giioper When HBase version was 
compiled and by whom
HBase Source Checksum 
148222e0d18db5336d86bb2fda0d12a164bcbd710cd7181a98467a5bf41fa55648d7bf626d0fc32551d62986b82a7ce214c917c0f8ab6293702610c2034660d3
 HBase source SHA512 checksum
Hadoop Version 3.2.2, revision=7a3bc90b05f257c8ace2f76d74264906f0f7a932 Hadoop 
version and revision
Hadoop Compiled 2021-01-03T12:14Z, hexiaoqiao When Hadoop version was compiled 
and by whom
Hadoop Source Checksum 5a8f564f46624254b27f6a33126ff4 Hadoop source MD5 checksum
ZooKeeper Client Version 3.5.7, 
revision=f0fdd52973d373ffd9c86b81d99842dc2c7f660e ZooKeeper client version and 
revision hash
ZooKeeper Client Compiled 02/10/2020 11:30 GMT When ZooKeeper client version 
was compiled
ZooKeeper Quorum zookeeper1:2181
zookeeper2:2181
zookeeper3:2181
zookeeper4:2181
zookeeper5:2181 Addresses of all registered ZK servers. For more, see zk dump.
ZooKeeper Base Path /hbase Root node of this cluster in ZK.
Cluster Key zookeeper1:2181
zookeeper2:2181
zookeeper3:2181
zookeeper4:2181
zookeeper5:2181:/hbase Key to add this cluster as a peer for replication. Use 
'help "add_peer"' in the shell for details.
HBase Root Directory hdfs://dune/hbase Location of HBase home directory
HMaster Start Time Fri Jul 23 09:53:50 UTC 2021 Date stamp of when this HMaster 
was started
HMaster Active Time Fri Jul 23 09:53:52 UTC 2021 Date stamp of when this 
HMaster became active
HBase Cluster ID 6cd0faae-6cf3-4888-87e0-e977a45faeaf Unique identifier 
generated for each HBase cluster
Load average 156.28 Average number of regions per regionserver. Naive 
computation.
Coprocessors [] Coprocessors currently loaded by the master
LoadBalancer org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer 
LoadBalancer to be used in the Master
Reporter: Oleg Muravskiy


While inserting records into HBase, I'm getting the following exceptions on the 
client side:
{noformat}
2021-07-27 12:07:33,986 WARN  [hconnection-0x6d4081cd-shared-pool-158] 
o.a.h.h.c.AsyncRequestFutureImpl  - id=1, table=ris:ris-updates, attempt=1/16, 
failureCount=9ops, last 
exception=org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.ExceptionInI
nitializerError on worker008,16020,1627053306686, tracking started Tue Jul 27 
12:07:33 UTC 2021; NOT retrying, failed=9 -- final attempt!
2021-07-27 12:07:34,008 ERROR [Thread-4] o.a.hadoop.hbase.client.BatchErrors - 
Exception occurred! Exception details: 
[org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, org.a
pache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerEr
ror, org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError];
Actions: 
[{"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE5z\\x04P\\xF9\\xD3\\xD9\\x00\\x00\\x00\\x00","families":{"-":[{"qualifier":"mrt","vlen":117,"tag":[],"timestamp":"1627387651450"},{"qualifier":"source","vlen":11,"tag":[],"timest
amp":"1627387651450"}]},"ts":"1627387651450"}, 
{"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE5z\\x04P\\xF9\\xD1\\xB0\\x00\\x00\\x00\\x01","families":{"-":[{"qualifier":"mrt","vlen":117,"tag":[],"timestamp":"1627387651450"},{"qualifier
":"source","vlen":11,"tag":[],"timestamp":"1627387651450"}]},"ts":"1627387651450"},
 
{"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE5z\\x04P\\xF9\\xD2\\xD0\\x00\\x00\\x00\\x02","families":{"-":[{"qualifier":"mrt","vlen":125,"tag":[],"ti
mestamp":"1627387651450"},{"qualifier":"source","vlen":11,"tag":[],"timestamp":"1627387651450"}]},"ts":"1627387651450"},
 {"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE5z\\x06 
\\x01\\x07\\xF8\\x00\\x01\\x00\\x00\\x00\\x00\\xA5\\x02\\x9
4g\\x00\\x01\\x00\\x00\\x00\\x03","families":{"-":[{"qualifier":"mrt","vlen":175,"tag

[jira] [Updated] (HBASE-26114) when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster cannot start normally

2021-07-27 Thread Jingxuan Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingxuan Fu updated HBASE-26114:

Status: Patch Available  (was: Open)

> when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster 
> cannot start normally 
> ---
>
> Key: HBASE-26114
> URL: https://issues.apache.org/jira/browse/HBASE-26114
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.4.4, 2.2.0
> Environment: HBase 2.2.2
> os.name=Linux
> os.arch=amd64
> os.version=5.4.0-72-generic
> java.version=1.8.0_191
> java.vendor=Oracle Corporation
>Reporter: Jingxuan Fu
>Priority: Minor
>  Labels: patch
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> In hbase-default.xml:
>   
> {code:java}
>  
> hbase.mob.compaction.threads.max 
> 1 
>    
>   The max number of threads used in MobCompactor. 
>    
> {code}
>  
> When the value is set to a negative number, such as -1, Hmaster cannot start 
> normally.
> The log file will output:
>   
> {code:cpp}
> 2021-07-22 18:54:13,758 ERROR [master/JavaFuzz:16000:becomeActiveMaster] 
> master.HMaster: Failed to become active master 
> java.lang.IllegalArgumentException
> at 
> java.util.concurrent.ThreadPoolExecutor.(ThreadPoolExecutor.java:1314)  
> at 
> org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
> at org.apache.hadoop.hbase.master.MobCompactionChore.
> (MobCompactionChore.java:51)   at 
> org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278) 
>   at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
>  
> at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
> at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
> at java.lang.Thread.run(Thread.java:748) 
> 2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] 
> master.HMaster: Master server abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 
> 2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] 
> master.HMaster: * ABORTING master javafuzz,16000,1626951243154: Unhandled 
> exception. Starting shutdown. * java.lang.IllegalArgumentException 
> at 
> java.util.concurrent.ThreadPoolExecutor.(ThreadPoolExecutor.java:1314)  
>  at 
> org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
>  
> at 
> org.apache.hadoop.hbase.master.MobCompactionChore.(MobCompactionChore.java:51)
>  
>   at org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278) 
>   at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
>  
>   at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
>  
> at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580) 
>   at java.lang.Thread.run(Thread.java:748) 
> 2021-07-
> 22 18:54:13,760 INFO  [master/JavaFuzz:16000:becomeActiveMaster] 
> regionserver.HRegionServer: * STOPPING region server 
> 'javafuzz,16000,1626951243154' *{code}
>  
> In MobUtils.java(package org.apache.hadoop.hbase.mob) 
>  This method from version 2.2.0 to version 2.4.4 is the same
> {code:java}
>   public static ExecutorService createMobCompactorThreadPool(Configuration 
> conf) { int maxThreads = 
> conf.getInt(MobConstants.MOB_COMPACTION_THREADS_MAX, 
> MobConstants.DEFAULT_MOB_COMPACTION_THREADS_MAX); 
> if (maxThreads == 0) { 
>maxThreads = 1;    
>    } 
> final SynchronousQueue queue = new SynchronousQueue<>();
> ThreadPoolExecutor pool = new ThreadPoolExecutor(1, maxThreads, 60, 
> TimeUnit.SECONDS, queue,   
> Threads.newDaemonThreadFactory("MobCompactor"), new 
> RejectedExecutionHandler() {
>    @Override
> public void rejectedExecution(Runnable r, ThreadPoolExecutor 
> executor) {   
> try { 
> // waiting for a thread to pick up instead of throwing 
> exceptions.     
> queue.put(r);   
> } catch (InterruptedException e) { 
> throw new RejectedExecutionException(e);   
> } 
>   }   
> }); 
> ((ThreadPoolExecutor) pool).allowCoreThreadTimeOut(true); 
> return pool;   
>}{code}
> When MOB_COMPACTION_THREADS_MAX is set to 0, mobUtil will set it to 1. But 
> the program does not take into account that it is set to a negative number. 
> When it is se
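
To make the quoted report concrete, here is a minimal sketch of the guard it is 
asking for (my illustration, not the project's committed fix; 
MobCompactionThreadsCheck and sanitizedMaxThreads are made-up names, while the 
MobConstants keys are the real configuration names): clamp any non-positive 
configured value to 1 before the ThreadPoolExecutor is constructed.

{code:java}
// Hypothetical guard shown as a standalone helper.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.mob.MobConstants;

public final class MobCompactionThreadsCheck {

  static int sanitizedMaxThreads(Configuration conf) {
    int maxThreads = conf.getInt(MobConstants.MOB_COMPACTION_THREADS_MAX,
        MobConstants.DEFAULT_MOB_COMPACTION_THREADS_MAX);
    if (maxThreads <= 0) {
      // Previously only == 0 was clamped; a negative value reached
      // new ThreadPoolExecutor(1, maxThreads, ...) and triggered the
      // IllegalArgumentException that aborts the master.
      maxThreads = 1;
    }
    return maxThreads;
  }

  private MobCompactionThreadsCheck() {
  }
}
{code}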

[jira] [Updated] (HBASE-26114) when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster cannot start normally

2021-07-27 Thread Jingxuan Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingxuan Fu updated HBASE-26114:

Status: Open  (was: Patch Available)

> when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster 
> cannot start normally 
> ---
>
> Key: HBASE-26114
> URL: https://issues.apache.org/jira/browse/HBASE-26114
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.4.4, 2.2.0
> Environment: HBase 2.2.2
> os.name=Linux
> os.arch=amd64
> os.version=5.4.0-72-generic
> java.version=1.8.0_191
> java.vendor=Oracle Corporation
>Reporter: Jingxuan Fu
>Priority: Minor
>  Labels: patch
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> In hbase-default.xml:
>   
> {code:java}
>  
> hbase.mob.compaction.threads.max 
> 1 
>    
>   The max number of threads used in MobCompactor. 
>    
> {code}
>  
> When the value is set to a negative number, such as -1, Hmaster cannot start 
> normally.
> The log file will output:
>   
> {code:cpp}
> 2021-07-22 18:54:13,758 ERROR [master/JavaFuzz:16000:becomeActiveMaster] 
> master.HMaster: Failed to become active master 
> java.lang.IllegalArgumentException
> at 
> java.util.concurrent.ThreadPoolExecutor.(ThreadPoolExecutor.java:1314)  
> at 
> org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
> at org.apache.hadoop.hbase.master.MobCompactionChore.
> (MobCompactionChore.java:51)   at 
> org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278) 
>   at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
>  
> at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
> at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
> at java.lang.Thread.run(Thread.java:748) 
> 2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] 
> master.HMaster: Master server abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 
> 2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] 
> master.HMaster: * ABORTING master javafuzz,16000,1626951243154: Unhandled 
> exception. Starting shutdown. * java.lang.IllegalArgumentException 
> at 
> java.util.concurrent.ThreadPoolExecutor.(ThreadPoolExecutor.java:1314)  
>  at 
> org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
>  
> at 
> org.apache.hadoop.hbase.master.MobCompactionChore.(MobCompactionChore.java:51)
>  
>   at org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278) 
>   at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
>  
>   at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
>  
> at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580) 
>   at java.lang.Thread.run(Thread.java:748) 
> 2021-07-
> 22 18:54:13,760 INFO  [master/JavaFuzz:16000:becomeActiveMaster] 
> regionserver.HRegionServer: * STOPPING region server 
> 'javafuzz,16000,1626951243154' *{code}
>  
> In MobUtils.java(package org.apache.hadoop.hbase.mob) 
>  This method from version 2.2.0 to version 2.4.4 is the same
> {code:java}
>   public static ExecutorService createMobCompactorThreadPool(Configuration 
> conf) { int maxThreads = 
> conf.getInt(MobConstants.MOB_COMPACTION_THREADS_MAX, 
> MobConstants.DEFAULT_MOB_COMPACTION_THREADS_MAX); 
> if (maxThreads == 0) { 
>maxThreads = 1;    
>    } 
> final SynchronousQueue queue = new SynchronousQueue<>();
> ThreadPoolExecutor pool = new ThreadPoolExecutor(1, maxThreads, 60, 
> TimeUnit.SECONDS, queue,   
> Threads.newDaemonThreadFactory("MobCompactor"), new 
> RejectedExecutionHandler() {
>    @Override
> public void rejectedExecution(Runnable r, ThreadPoolExecutor 
> executor) {   
> try { 
> // waiting for a thread to pick up instead of throwing 
> exceptions.     
> queue.put(r);   
> } catch (InterruptedException e) { 
> throw new RejectedExecutionException(e);   
> } 
>   }   
> }); 
> ((ThreadPoolExecutor) pool).allowCoreThreadTimeOut(true); 
> return pool;   
>}{code}
> When MOB_COMPACTION_THREADS_MAX is set to 0, mobUtil will set it to 1. But 
> the program does not take into account that it is set to a negative number. 
> When it is se

[GitHub] [hbase] Apache-HBase commented on pull request #3528: HBASE-26120 New replication gets stuck or data loss when multiwal gro…

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3528:
URL: https://github.com/apache/hbase/pull/3528#issuecomment-887505252


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  7s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  8s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m  0s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m  1s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 150m 27s |  hbase-server in the patch passed.  
|
   |  |   | 176m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3528 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux adf3c3fa030c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/2/testReport/
 |
   | Max. process+thread count | 3553 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-26145) Update fails with DoNotRetryIOException while the region is being split

2021-07-27 Thread Oleg Muravskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Muravskiy updated HBASE-26145:
---
Description: 
While inserting records into HBase, I'm getting the following exceptions on the 
client side:
{noformat}
2021-07-27 12:07:33,986 WARN  [hconnection-0x6d4081cd-shared-pool-158] 
o.a.h.h.c.AsyncRequestFutureImpl  - id=1, table=ris:ris-updates, attempt=1/16, 
failureCount=9ops, last 
exception=org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError on worker008,16020,1627053306686, 
tracking started Tue Jul 27 12:07:33 UTC 2021; NOT retrying, failed=9 -- final 
attempt!
2021-07-27 12:07:34,008 ERROR [Thread-4] o.a.hadoop.hbase.client.BatchErrors - 
Exception occurred! Exception details: 
[org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError, 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ExceptionInInitializerError];
Actions: 
[{"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE5z\\x04P\\xF9\\xD3\\xD9\\x00\\x00\\x00\\x00","families":{"-":[{"qualifier":"mrt","vlen":117,"tag":[],"timestamp":"1627387651450"},{"qualifier":"source","vlen":11,"tag":[],"timestamp":"1627387651450"}]},"ts":"1627387651450"},
 
{"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE5z\\x04P\\xF9\\xD1\\xB0\\x00\\x00\\x00\\x01","families":{"-":[{"qualifier":"mrt","vlen":117,"tag":[],"timestamp":"1627387651450"},{"qualifier":"source","vlen":11,"tag":[],"timestamp":"1627387651450"}]},"ts":"1627387651450"},
 
{"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE5z\\x04P\\xF9\\xD2\\xD0\\x00\\x00\\x00\\x02","families":{"-":[{"qualifier":"mrt","vlen":125,"tag":[],"timestamp":"1627387651450"},{"qualifier":"source","vlen":11,"tag":[],"timestamp":"1627387651450"}]},"ts":"1627387651450"},
 {"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE5z\\x06 
\\x01\\x07\\xF8\\x00\\x01\\x00\\x00\\x00\\x00\\xA5\\x02\\x94g\\x00\\x01\\x00\\x00\\x00\\x03","families":{"-":[{"qualifier":"mrt","vlen":175,"tag":[],"timestamp":"1627387651450"},{"qualifier":"source","vlen":11,"tag":[],"timestamp":"1627387651450"}]},"ts":"1627387651450"},
 
{"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE9b\\x04P\\xF9\\xD3\\xD9\\x00\\x00\\x00\\x00","families":{"-":[{"qualifier":"mrt","vlen":125,"tag":[],"timestamp":"1627387652450"},{"qualifier":"source","vlen":11,"tag":[],"timestamp":"1627387652450"}]},"ts":"1627387652450"},
 {"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE9b\\x06 
\\x01\\x07\\xF8\\x00\\x01\\x00\\x00\\x00\\x00\\xA5\\x00\\x84U\\x00\\x01\\x00\\x00\\x00\\x01","families":{"-":[{"qualifier":"mrt","vlen":75,"tag":[],"timestamp":"1627387652450"},{"qualifier":"source","vlen":11,"tag":[],"timestamp":"1627387652450"}]},"ts":"1627387652450"},
 
{"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE9b\\x04P\\xF9\\xD1\\xB0\\x00\\x00\\x00\\x02","families":{"-":[{"qualifier":"mrt","vlen":110,"tag":[],"timestamp":"1627387652450"},{"qualifier":"source","vlen":11,"tag":[],"timestamp":"1627387652450"}]},"ts":"1627387652450"},
 {"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE9b\\x06 
\\x01\\x07\\xF8\\x00\\x01\\x00\\x00\\x00\\x00\\xA5\\x02\\x94g\\x00\\x01\\x00\\x00\\x00\\x03","families":{"-":[{"qualifier":"mrt","vlen":171,"tag":[],"timestamp":"1627387652450"},{"qualifier":"source","vlen":11,"tag":[],"timestamp":"1627387652450"}]},"ts":"1627387652450"},
 
{"totalColumns":2,"row":"\\x00\\x03\\x00\\x00\\x01z\\xE7\\xDC\\xE9b\\x04P\\xF9\\xD3\\xD9\\x00\\x00\\x00\\x04","families":{"-":[{"qualifier":"mrt","vlen":115,"tag":[],"timestamp":"1627387652450"},{"qualifier":"source","vlen":11,"tag":[],"timestamp":"1627387652450"}]},"ts":"1627387652450"}]

2021-07-27 12:07:34,015 ERROR [Thread-4] n.r.g.r.a.AbstractApp  - Error: 
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 9 
actions: java.lang.ExceptionInInitializerError: 9 times, servers with issues: 
worker008,16020,1627053306686
at 
org.apache.hadoop.hbase.client.BatchErrors.makeException(BatchErrors.java:54)
at 
org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.getErrors(AsyncRequestFutureImpl.java:1196)
at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:451)
at org.apache.hadoop.

[jira] [Updated] (HBASE-26114) when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster cannot start normally

2021-07-27 Thread Jingxuan Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingxuan Fu updated HBASE-26114:

Fix Version/s: (was: 3.0.0-alpha-1)

> when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster 
> cannot start normally 
> ---
>
> Key: HBASE-26114
> URL: https://issues.apache.org/jira/browse/HBASE-26114
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.2.0, 2.4.4
> Environment: HBase 2.2.2
> os.name=Linux
> os.arch=amd64
> os.version=5.4.0-72-generic
> java.version=1.8.0_191
> java.vendor=Oracle Corporation
>Reporter: Jingxuan Fu
>Priority: Minor
>  Labels: patch
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> In hbase-default.xml:
>   
> {code:java}
> <property>
>   <name>hbase.mob.compaction.threads.max</name>
>   <value>1</value>
>   <description>
>     The max number of threads used in MobCompactor.
>   </description>
> </property>
> {code}
>  
> When the value is set to a negative number, such as -1, HMaster cannot start 
> normally.
> The log file will output:
>   
> {code:cpp}
> 2021-07-22 18:54:13,758 ERROR [master/JavaFuzz:16000:becomeActiveMaster] 
> master.HMaster: Failed to become active master
> java.lang.IllegalArgumentException
>   at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1314)
>   at org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
>   at org.apache.hadoop.hbase.master.MobCompactionChore.<init>(MobCompactionChore.java:51)
>   at org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278)
>   at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
>   at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
>   at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
>   at java.lang.Thread.run(Thread.java:748)
> 2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] 
> master.HMaster: Master server abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
> 2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] 
> master.HMaster: * ABORTING master javafuzz,16000,1626951243154: Unhandled 
> exception. Starting shutdown. *
> java.lang.IllegalArgumentException
>   at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1314)
>   at org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
>   at org.apache.hadoop.hbase.master.MobCompactionChore.<init>(MobCompactionChore.java:51)
>   at org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278)
>   at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
>   at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
>   at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
>   at java.lang.Thread.run(Thread.java:748)
> 2021-07-22 18:54:13,760 INFO  [master/JavaFuzz:16000:becomeActiveMaster] 
> regionserver.HRegionServer: * STOPPING region server 
> 'javafuzz,16000,1626951243154' *{code}
>  
> In MobUtils.java (package org.apache.hadoop.hbase.mob), 
>  this method is the same from version 2.2.0 to version 2.4.4:
> {code:java}
>   public static ExecutorService createMobCompactorThreadPool(Configuration conf) {
>     int maxThreads = conf.getInt(MobConstants.MOB_COMPACTION_THREADS_MAX,
>       MobConstants.DEFAULT_MOB_COMPACTION_THREADS_MAX);
>     if (maxThreads == 0) {
>       maxThreads = 1;
>     }
>     final SynchronousQueue<Runnable> queue = new SynchronousQueue<>();
>     ThreadPoolExecutor pool = new ThreadPoolExecutor(1, maxThreads, 60, TimeUnit.SECONDS,
>       queue, Threads.newDaemonThreadFactory("MobCompactor"), new RejectedExecutionHandler() {
>         @Override
>         public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
>           try {
>             // waiting for a thread to pick up instead of throwing exceptions.
>             queue.put(r);
>           } catch (InterruptedException e) {
>             throw new RejectedExecutionException(e);
>           }
>         }
>       });
>     ((ThreadPoolExecutor) pool).allowCoreThreadTimeOut(true);
>     return pool;
>   }{code}
> When MOB_COMPACTION_THREADS_MAX is set to 0, MobUtils will set it to 1, but 
> the program does not take into account that it may be set to a negative number. 
> When it is
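
A minimal sketch of the guard the report implies, for illustration only: it assumes the
fix direction is simply to clamp any non-positive configured value to 1 before the pool
is constructed, and it uses a simplified standalone method (taking the resolved int
rather than a Configuration) instead of the real MobUtils code.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class MobCompactorPoolSketch {

  // Hypothetical stand-in for MobUtils.createMobCompactorThreadPool: the only
  // behavioural change from the quoted code is "<= 0" instead of "== 0".
  public static ExecutorService createMobCompactorThreadPool(int configuredMaxThreads) {
    int maxThreads = configuredMaxThreads;
    if (maxThreads <= 0) {
      // Covers 0 as well as negative values such as -1, which would otherwise make
      // the ThreadPoolExecutor constructor throw IllegalArgumentException.
      maxThreads = 1;
    }
    final SynchronousQueue<Runnable> queue = new SynchronousQueue<>();
    ThreadPoolExecutor pool = new ThreadPoolExecutor(1, maxThreads, 60, TimeUnit.SECONDS,
      queue, new RejectedExecutionHandler() {
        @Override
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
          try {
            // Wait for a worker to pick the task up instead of throwing.
            queue.put(r);
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException(e);
          }
        }
      });
    pool.allowCoreThreadTimeOut(true);
    return pool;
  }
}
{code}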


[GitHub] [hbase] Apache9 opened a new pull request #3533: HBASE-26144 The HStore.snapshot method is never called in main code

2021-07-27 Thread GitBox


Apache9 opened a new pull request #3533:
URL: https://github.com/apache/hbase/pull/3533


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3528: HBASE-26120 New replication gets stuck or data loss when multiwal gro…

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3528:
URL: https://github.com/apache/hbase/pull/3528#issuecomment-887512702


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m 58s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 26s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 56s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m  2s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 153m 37s |  hbase-server in the patch failed.  |
   |  |   | 185m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3528 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 9371ac470444 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/2/testReport/
 |
   | Max. process+thread count | 4145 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3528/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3532: HBASE-26122: Implement an optional maximum size for Gets, after which a partial result is returned

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3532:
URL: https://github.com/apache/hbase/pull/3532#issuecomment-887524522


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 44s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  1s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 43s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   7m 55s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 46s |  branch-2 passed  |
   | +1 :green_heart: |  spotbugs  |   9m 45s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 46s |  the patch passed  |
   | +1 :green_heart: |  cc  |   7m 46s |  the patch passed  |
   | +1 :green_heart: |  javac  |   7m 46s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m 32s |  hbase-server: The patch 
generated 5 new + 272 unchanged - 0 fixed = 277 total (was 272)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  16m  1s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  hbaseprotoc  |   2m 49s |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |  10m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  82m 11s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3532 |
   | JIRA Issue | HBASE-26122 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile cc hbaseprotoc prototool |
   | uname | Linux 2298b95587ac 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 96 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-protocol hbase-client 
hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-26146) Allow custom opts for hbck in hbase bin

2021-07-27 Thread Bryan Beaudreault (Jira)
Bryan Beaudreault created HBASE-26146:
-

 Summary: Allow custom opts for hbck in hbase bin
 Key: HBASE-26146
 URL: https://issues.apache.org/jira/browse/HBASE-26146
 Project: HBase
  Issue Type: Improvement
Reporter: Bryan Beaudreault
Assignee: Bryan Beaudreault


https://issues.apache.org/jira/browse/HBASE-15145 made it so that when you 
execute {{hbase hbck}}, the regionserver or JAAS opts are added automatically 
to the command line. This is problematic in some cases depending on what 
regionserver opts have been set. For instance, one might configure a jmx port 
for the regionserver but then hbck will fail due to a port conflict if run on 
the same host as a regionserver. Another example would be that a regionserver 
might define an {{-Xms}} value which is significantly more than hbck requires.

 

We should make it possible for users to define their own HBASE_HBCK_OPTS which 
take precedence over the server opts added by default.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bbeaudreault opened a new pull request #3534: HBASE-26146: Add support for HBASE_HBCK_OPTS

2021-07-27 Thread GitBox


bbeaudreault opened a new pull request #3534:
URL: https://github.com/apache/hbase/pull/3534


   https://issues.apache.org/jira/browse/HBASE-26146
   
   This adds a new env var to the hbase bin, HBASE_HBCK_OPTS. As with other 
per-command opts, this will get added to HBASE_OPTS last so it will take 
precedence over previous values.
   
   In order to maintain backwards compatibility with 
https://issues.apache.org/jira/browse/HBASE-15145, a default value for 
HBASE_HBCK_OPTS is pulled from AUTH_AS_SERVER_OPTS. 
   
   I've tested this on our internal installation of hbase 1 and 2, with 
HBASE_HBCK_OPTS set and not set. Let me know if there are other tests I should 
be adding or running as well. I don't see any tests added in the similar patch 
provided for HBASE-15145.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3533: HBASE-26144 The HStore.snapshot method is never called in main code

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3533:
URL: https://github.com/apache/hbase/pull/3533#issuecomment-887577046


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 53s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 21s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 15s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 10s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 42s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 11s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  hbase-server: The patch 
generated 0 new + 35 unchanged - 6 fixed = 35 total (was 41)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  19m 18s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  53m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3533/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3533 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 09f928372c8b 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 02d263e7dd |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3533/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3529: HBASE-26124 Backport HBASE-25373 "Remove HTrace completely in code base and try to make use of OpenTelemetry" to branch-2

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3529:
URL: https://github.com/apache/hbase/pull/3529#issuecomment-887590959


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 40s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 30s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   4m  6s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   9m  7s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   8m 14s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 40s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 40s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 48s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m 46s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 169m 20s |  root in the patch passed.  |
   |  |   | 230m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3529 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 58e94616b4a2 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/testReport/
 |
   | Max. process+thread count | 6220 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-common hbase-client 
hbase-zookeeper hbase-asyncfs hbase-server hbase-mapreduce hbase-shell hbase-it 
hbase-shaded hbase-shaded/hbase-shaded-client hbase-external-blockcache 
hbase-shaded/hbase-shaded-testing-util . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26120) New replication gets stuck or data loss when multiwal groups more than 10

2021-07-27 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388133#comment-17388133
 ] 

Andrew Kyle Purtell commented on HBASE-26120:
-

Thanks for the fix [~zhangduo]. 

> New replication gets stuck or data loss when multiwal groups more than 10
> -
>
> Key: HBASE-26120
> URL: https://issues.apache.org/jira/browse/HBASE-26120
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.7.1, 2.4.5
>Reporter: Jasee Tao
>Assignee: Duo Zhang
>Priority: Critical
>
> {code:java}
> void preLogRoll(Path newLog) throws IOException {
>   recordLog(newLog);
>   String logName = newLog.getName();
>   String logPrefix = DefaultWALProvider.getWALPrefixFromWALName(logName);
>   synchronized (latestPaths) {
> Iterator<Path> iterator = latestPaths.iterator();
> while (iterator.hasNext()) {
>   Path path = iterator.next();
>   if (path.getName().contains(logPrefix)) {
> iterator.remove();
> break;
>   }
> }
> this.latestPaths.add(newLog);
>   }
> }
> {code}
> ReplicationSourceManager uses _latestPaths_ to track each WAL group's last 
> WAL, and all of them will be enqueued for replication when a new replication 
> peer is added.
> If we set hbase.wal.regiongrouping.numgroups > 10, say 12, the WAL group 
> names will be _regionserver.null0.timestamp_ to 
> _regionserver.null11.timestamp_. *_String.contains_* is used in _preLogRoll_ 
> to replace old logs of the same group, so when _regionserver.null1.ts_ comes, 
> _regionserver.null11.ts_ may be replaced, and *_latestPaths_ grows with wrong 
> logs*.
> Replication then gets partly stuck because _regionserver.null1.ts_ does not 
> exist on HDFS, and data may not be replicated to the slave because 
> _regionserver.null11.ts_ is not in the replication queue at startup.
> Because of 
> [ZOOKEEPER-706|https://issues.apache.org/jira/browse/ZOOKEEPER-706], if there 
> are too many logs in the zk znode _/hbase/replication/rs/regionserver/peer_, 
> remove_peer may not delete this znode, and other regionservers cannot pick up 
> this queue for replication failover. 
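
A small standalone illustration (not from the attached patch; the class and helper names
are made up for this sketch) of why substring matching on the WAL group prefix misfires
once there are more than 10 groups, and how an exact prefix comparison avoids it:

{code:java}
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public final class WalGroupMatchSketch {

  // With contains(), the entry for group "null11" also matches the prefix of group
  // "null1", so the wrong latest path can be evicted from the set.
  static void replaceByContains(Set<String> latestLogNames, String newLogName, String logPrefix) {
    Iterator<String> it = latestLogNames.iterator();
    while (it.hasNext()) {
      if (it.next().contains(logPrefix)) { // "....null11.ts".contains("....null1") is true
        it.remove();
        break;
      }
    }
    latestLogNames.add(newLogName);
  }

  // Comparing the extracted prefix for equality keeps each WAL group independent.
  static void replaceByExactPrefix(Set<String> latestLogNames, String newLogName, String logPrefix) {
    latestLogNames.removeIf(name -> prefixOf(name).equals(logPrefix));
    latestLogNames.add(newLogName);
  }

  // Hypothetical prefix extraction: everything before the trailing ".<timestamp>".
  static String prefixOf(String walName) {
    return walName.substring(0, walName.lastIndexOf('.'));
  }

  public static void main(String[] args) {
    Set<String> latest = new HashSet<>();
    latest.add("regionserver.null11.1627053306686");
    replaceByContains(latest, "regionserver.null1.1627053400000", "regionserver.null1");
    System.out.println(latest); // the null11 entry was wrongly removed
  }
}
{code}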



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26120) New replication gets stuck or data loss when multiwal groups more than 10

2021-07-27 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-26120:

Fix Version/s: 2.4.5
   2.5.0

> New replication gets stuck or data loss when multiwal groups more than 10
> -
>
> Key: HBASE-26120
> URL: https://issues.apache.org/jira/browse/HBASE-26120
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.7.1, 2.4.5
>Reporter: Jasee Tao
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.5.0, 2.4.5
>
>
> {code:java}
> void preLogRoll(Path newLog) throws IOException {
>   recordLog(newLog);
>   String logName = newLog.getName();
>   String logPrefix = DefaultWALProvider.getWALPrefixFromWALName(logName);
>   synchronized (latestPaths) {
> Iterator<Path> iterator = latestPaths.iterator();
> while (iterator.hasNext()) {
>   Path path = iterator.next();
>   if (path.getName().contains(logPrefix)) {
> iterator.remove();
> break;
>   }
> }
> this.latestPaths.add(newLog);
>   }
> }
> {code}
> ReplicationSourceManager uses _latestPaths_ to track each WAL group's last 
> WAL, and all of them will be enqueued for replication when a new replication 
> peer is added.
> If we set hbase.wal.regiongrouping.numgroups > 10, say 12, the WAL group 
> names will be _regionserver.null0.timestamp_ to 
> _regionserver.null11.timestamp_. *_String.contains_* is used in _preLogRoll_ 
> to replace old logs of the same group, so when _regionserver.null1.ts_ comes, 
> _regionserver.null11.ts_ may be replaced, and *_latestPaths_ grows with wrong 
> logs*.
> Replication then gets partly stuck because _regionserver.null1.ts_ does not 
> exist on HDFS, and data may not be replicated to the slave because 
> _regionserver.null11.ts_ is not in the replication queue at startup.
> Because of 
> [ZOOKEEPER-706|https://issues.apache.org/jira/browse/ZOOKEEPER-706], if there 
> are too many logs in the zk znode _/hbase/replication/rs/regionserver/peer_, 
> remove_peer may not delete this znode, and other regionservers cannot pick up 
> this queue for replication failover. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26120) New replication gets stuck or data loss when multiwal groups more than 10

2021-07-27 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-26120:

Fix Version/s: 1.7.2

> New replication gets stuck or data loss when multiwal groups more than 10
> -
>
> Key: HBASE-26120
> URL: https://issues.apache.org/jira/browse/HBASE-26120
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.7.1, 2.4.5
>Reporter: Jasee Tao
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.5.0, 2.4.5, 1.7.2
>
>
> {code:java}
> void preLogRoll(Path newLog) throws IOException {
>   recordLog(newLog);
>   String logName = newLog.getName();
>   String logPrefix = DefaultWALProvider.getWALPrefixFromWALName(logName);
>   synchronized (latestPaths) {
> Iterator<Path> iterator = latestPaths.iterator();
> while (iterator.hasNext()) {
>   Path path = iterator.next();
>   if (path.getName().contains(logPrefix)) {
> iterator.remove();
> break;
>   }
> }
> this.latestPaths.add(newLog);
>   }
> }
> {code}
> ReplicationSourceManager uses _latestPaths_ to track each WAL group's last 
> WAL, and all of them will be enqueued for replication when a new replication 
> peer is added.
> If we set hbase.wal.regiongrouping.numgroups > 10, say 12, the WAL group 
> names will be _regionserver.null0.timestamp_ to 
> _regionserver.null11.timestamp_. *_String.contains_* is used in _preLogRoll_ 
> to replace old logs of the same group, so when _regionserver.null1.ts_ comes, 
> _regionserver.null11.ts_ may be replaced, and *_latestPaths_ grows with wrong 
> logs*.
> Replication then gets partly stuck because _regionserver.null1.ts_ does not 
> exist on HDFS, and data may not be replicated to the slave because 
> _regionserver.null11.ts_ is not in the replication queue at startup.
> Because of 
> [ZOOKEEPER-706|https://issues.apache.org/jira/browse/ZOOKEEPER-706], if there 
> are too many logs in the zk znode _/hbase/replication/rs/regionserver/peer_, 
> remove_peer may not delete this znode, and other regionservers cannot pick up 
> this queue for replication failover. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26120) New replication gets stuck or data loss when multiwal groups more than 10

2021-07-27 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388135#comment-17388135
 ] 

Andrew Kyle Purtell commented on HBASE-26120:
-

Let me make an attempt to port and apply this to branch-1 as well. 

> New replication gets stuck or data loss when multiwal groups more than 10
> -
>
> Key: HBASE-26120
> URL: https://issues.apache.org/jira/browse/HBASE-26120
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.7.1, 2.4.5
>Reporter: Jasee Tao
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.5.0, 2.4.5, 1.7.2
>
>
> {code:java}
> void preLogRoll(Path newLog) throws IOException {
>   recordLog(newLog);
>   String logName = newLog.getName();
>   String logPrefix = DefaultWALProvider.getWALPrefixFromWALName(logName);
>   synchronized (latestPaths) {
> Iterator<Path> iterator = latestPaths.iterator();
> while (iterator.hasNext()) {
>   Path path = iterator.next();
>   if (path.getName().contains(logPrefix)) {
> iterator.remove();
> break;
>   }
> }
> this.latestPaths.add(newLog);
>   }
> }
> {code}
> ReplicationSourceManager uses _latestPaths_ to track each WAL group's last 
> WAL, and all of them will be enqueued for replication when a new replication 
> peer is added.
> If we set hbase.wal.regiongrouping.numgroups > 10, say 12, the WAL group 
> names will be _regionserver.null0.timestamp_ to 
> _regionserver.null11.timestamp_. *_String.contains_* is used in _preLogRoll_ 
> to replace old logs of the same group, so when _regionserver.null1.ts_ comes, 
> _regionserver.null11.ts_ may be replaced, and *_latestPaths_ grows with wrong 
> logs*.
> Replication then gets partly stuck because _regionserver.null1.ts_ does not 
> exist on HDFS, and data may not be replicated to the slave because 
> _regionserver.null11.ts_ is not in the replication queue at startup.
> Because of 
> [ZOOKEEPER-706|https://issues.apache.org/jira/browse/ZOOKEEPER-706], if there 
> are too many logs in the zk znode _/hbase/replication/rs/regionserver/peer_, 
> remove_peer may not delete this znode, and other regionservers cannot pick up 
> this queue for replication failover. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] apurtell merged pull request #3528: HBASE-26120 New replication gets stuck or data loss when multiwal gro…

2021-07-27 Thread GitBox


apurtell merged pull request #3528:
URL: https://github.com/apache/hbase/pull/3528


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-26120) New replication gets stuck or data loss when multiwal groups more than 10

2021-07-27 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-26120:

Fix Version/s: 2.3.7

> New replication gets stuck or data loss when multiwal groups more than 10
> -
>
> Key: HBASE-26120
> URL: https://issues.apache.org/jira/browse/HBASE-26120
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.7.1, 2.4.5
>Reporter: Jasee Tao
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.5.0, 2.4.5, 1.7.2, 2.3.7
>
>
> {code:java}
> void preLogRoll(Path newLog) throws IOException {
>   recordLog(newLog);
>   String logName = newLog.getName();
>   String logPrefix = DefaultWALProvider.getWALPrefixFromWALName(logName);
>   synchronized (latestPaths) {
> Iterator<Path> iterator = latestPaths.iterator();
> while (iterator.hasNext()) {
>   Path path = iterator.next();
>   if (path.getName().contains(logPrefix)) {
> iterator.remove();
> break;
>   }
> }
> this.latestPaths.add(newLog);
>   }
> }
> {code}
> ReplicationSourceManager uses _latestPaths_ to track each WAL group's last 
> WAL, and all of them will be enqueued for replication when a new replication 
> peer is added.
> If we set hbase.wal.regiongrouping.numgroups > 10, say 12, the WAL group 
> names will be _regionserver.null0.timestamp_ to 
> _regionserver.null11.timestamp_. *_String.contains_* is used in _preLogRoll_ 
> to replace old logs of the same group, so when _regionserver.null1.ts_ comes, 
> _regionserver.null11.ts_ may be replaced, and *_latestPaths_ grows with wrong 
> logs*.
> Replication then gets partly stuck because _regionserver.null1.ts_ does not 
> exist on HDFS, and data may not be replicated to the slave because 
> _regionserver.null11.ts_ is not in the replication queue at startup.
> Because of 
> [ZOOKEEPER-706|https://issues.apache.org/jira/browse/ZOOKEEPER-706], if there 
> are too many logs in the zk znode _/hbase/replication/rs/regionserver/peer_, 
> remove_peer may not delete this znode, and other regionservers cannot pick up 
> this queue for replication failover. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3532: HBASE-26122: Implement an optional maximum size for Gets, after which a partial result is returned

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3532:
URL: https://github.com/apache/hbase/pull/3532#issuecomment-887612794


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 47s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  8s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 14s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   3m 20s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   7m 50s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 26s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m  3s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 56s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 35s |  hbase-protocol in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 28s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 146m 10s |  hbase-server in the patch passed.  
|
   |  |   | 190m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3532 |
   | JIRA Issue | HBASE-26122 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux b89302716a1f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/2/testReport/
 |
   | Max. process+thread count | 3605 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-protocol hbase-client 
hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3532: HBASE-26122: Implement an optional maximum size for Gets, after which a partial result is returned

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3532:
URL: https://github.com/apache/hbase/pull/3532#issuecomment-887621871


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 44s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 34s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m 40s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 22s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 46s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 46s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 43s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 43s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 27s |  hbase-protocol in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 45s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 163m 51s |  hbase-server in the patch passed.  
|
   |  |   | 201m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3532 |
   | JIRA Issue | HBASE-26122 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a16ca566652c 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/2/testReport/
 |
   | Max. process+thread count | 3403 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-protocol hbase-client 
hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3532/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26092) JVM core dump in the replication path

2021-07-27 Thread Huaxiang Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388150#comment-17388150
 ] 

Huaxiang Sun commented on HBASE-26092:
--

Hmm, probably fine with SimpleRPCClient, but I have not read the code to confirm. 
The idea of the crash is similar to HBASE-24984: a ByteBuffer is referenced in 
different threads' contexts, one thread frees it, and the ByteBuffer is reused.
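
A minimal, generic sketch of the reference-counting idea mentioned above; the class and
method names are invented for illustration and are not HBase or Netty APIs. The point is
that a pooled buffer is only recycled once every asynchronous user has released it:

{code:java}
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: a pooled buffer that is recycled when the last holder releases it.
final class RefCountedBuffer {
  private final ByteBuffer buffer;
  private final AtomicInteger refCount = new AtomicInteger(1); // creator holds one reference
  private final Runnable recycler; // e.g. hands the buffer back to its pool

  RefCountedBuffer(ByteBuffer buffer, Runnable recycler) {
    this.buffer = buffer;
    this.recycler = recycler;
  }

  ByteBuffer buffer() {
    return buffer;
  }

  // Called before handing the buffer to another thread, e.g. an async RPC write.
  RefCountedBuffer retain() {
    if (refCount.getAndIncrement() <= 0) {
      throw new IllegalStateException("buffer already released");
    }
    return this;
  }

  // Each holder releases exactly once; the last release recycles the buffer.
  void release() {
    if (refCount.decrementAndGet() == 0) {
      recycler.run();
    }
  }
}
{code}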

> JVM core dump in the replication path
> -
>
> Key: HBASE-26092
> URL: https://issues.apache.org/jira/browse/HBASE-26092
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.3.5
>Reporter: Huaxiang Sun
>Priority: Critical
>
> When replication is turned on, we found the following core dump in the region 
> server. 
> I checked the core dump for replication and I think I have some ideas. For 
> replication, when an RS receives walEdits from a remote cluster, it needs to 
> send them on to the final RS. In this case, NettyRpcConnection is used, and 
> calls are queued while they refer to a ByteBuffer in the context of the 
> replication handler (the buffer is returned to the pool once the handler 
> returns). A core dump will happen since the ByteBuffer has been reused. This 
> asynchronous processing needs reference counting.
>  
> Feel free to take it; otherwise, I will try to work on a patch later.
>  
>  
> {code:java}
> Stack: [0x7fb1bf039000,0x7fb1bf13a000],  sp=0x7fb1bf138560,  free 
> space=1021k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> J 28175 C2 
> org.apache.hadoop.hbase.ByteBufferKeyValue.write(Ljava/io/OutputStream;Z)I 
> (21 bytes) @ 0x7fd2663c [0x7fd263c0+0x27c]
> J 14912 C2 
> org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.writeRequest(Lorg/apache/hbase/thirdparty/io/netty/channel/ChannelHandlerContext;Lorg/apache/hadoop/hbase/ipc/Call;Lorg/apache/hbase/thirdparty/io/netty/channel/ChannelPromise;)V
>  (370 bytes) @ 0x7fdbbb94b590 [0x7fdbbb949c00+0x1990]
> J 14911 C2 
> org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.write(Lorg/apache/hbase/thirdparty/io/netty/channel/ChannelHandlerContext;Ljava/lang/Object;Lorg/apache/hbase/thirdparty/io/netty/channel/ChannelPromise;)V
>  (30 bytes) @ 0x7fdbb972d1d4 [0x7fdbb972d1a0+0x34]
> J 30476 C2 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(Ljava/lang/Object;ZLorg/apache/hbase/thirdparty/io/netty/channel/ChannelPromise;)V
>  (149 bytes) @ 0x7fdbbd4e7084 [0x7fdbbd4e6900+0x784]
> J 14914 C2 org.apache.hadoop.hbase.ipc.NettyRpcConnection$6$1.run()V (22 
> bytes) @ 0x7fdbbb9344ec [0x7fdbbb934280+0x26c]
> J 23528 C2 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(J)Z
>  (106 bytes) @ 0x7fdbbcbb0efc [0x7fdbbcbb0c40+0x2bc]
> J 15987% C2 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run()V (461 
> bytes) @ 0x7fdbbbaf1580 [0x7fdbbbaf1360+0x220]
> j  
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run()V+44
> j  
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run()V+11
> j  
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run()V+4
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-26120) New replication gets stuck or data loss when multiwal groups more than 10

2021-07-27 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell resolved HBASE-26120.
-
Hadoop Flags: Reviewed
  Resolution: Fixed

> New replication gets stuck or data loss when multiwal groups more than 10
> -
>
> Key: HBASE-26120
> URL: https://issues.apache.org/jira/browse/HBASE-26120
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.7.1, 2.4.5
>Reporter: Jasee Tao
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.5.0, 1.7.2, 2.3.7, 2.4.5
>
>
> {code:java}
> void preLogRoll(Path newLog) throws IOException {
>   recordLog(newLog);
>   String logName = newLog.getName();
>   String logPrefix = DefaultWALProvider.getWALPrefixFromWALName(logName);
>   synchronized (latestPaths) {
> Iterator<Path> iterator = latestPaths.iterator();
> while (iterator.hasNext()) {
>   Path path = iterator.next();
>   if (path.getName().contains(logPrefix)) {
> iterator.remove();
> break;
>   }
> }
> this.latestPaths.add(newLog);
>   }
> }
> {code}
> ReplicationSourceManager uses _latestPaths_ to track each WAL group's last 
> WAL, and all of them will be enqueued for replication when a new replication 
> peer is added.
> If we set hbase.wal.regiongrouping.numgroups > 10, say 12, the WAL group 
> names will be _regionserver.null0.timestamp_ to 
> _regionserver.null11.timestamp_. *_String.contains_* is used in _preLogRoll_ 
> to replace old logs of the same group, so when _regionserver.null1.ts_ comes, 
> _regionserver.null11.ts_ may be replaced, and *_latestPaths_ grows with wrong 
> logs*.
> Replication then gets partly stuck because _regionserver.null1.ts_ does not 
> exist on HDFS, and data may not be replicated to the slave because 
> _regionserver.null11.ts_ is not in the replication queue at startup.
> Because of 
> [ZOOKEEPER-706|https://issues.apache.org/jira/browse/ZOOKEEPER-706], if there 
> are too many logs in the zk znode _/hbase/replication/rs/regionserver/peer_, 
> remove_peer may not delete this znode, and other regionservers cannot pick up 
> this queue for replication failover. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] taklwu commented on pull request #3529: HBASE-26124 Backport HBASE-25373 "Remove HTrace completely in code base and try to make use of OpenTelemetry" to branch-2

2021-07-27 Thread GitBox


taklwu commented on pull request #3529:
URL: https://github.com/apache/hbase/pull/3529#issuecomment-887666142


   The failed test TestSnapshotScannerHDFSAclController2 passed locally and the 
error of `NoSuchColumnFamilyException` for  table `'hbase:acl'`  should not be 
related to this change. 
   
   ```
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   [INFO] Running 
org.apache.hadoop.hbase.security.access.TestSnapshotScannerHDFSAclController2
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
38.725 s - in 
org.apache.hadoop.hbase.security.access.TestSnapshotScannerHDFSAclController2
   [INFO]
   [INFO] Results:
   [INFO]
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
   [INFO]
   [INFO]
   [INFO] --- maven-surefire-plugin:3.0.0-M4:test (secondPartTestsExecution) @ 
hbase-server ---
   [INFO] Tests are skipped.
   [INFO] 

   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  03:31 min
   [INFO] Finished at: 2021-07-27T09:36:30-07:00
   [INFO] 

   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26114) when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster cannot start normally

2021-07-27 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388177#comment-17388177
 ] 

Anoop Sam John commented on HBASE-26114:


[~fujx] you can provide a PR for the change.

> when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster 
> cannot start normally 
> ---
>
> Key: HBASE-26114
> URL: https://issues.apache.org/jira/browse/HBASE-26114
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.2.0, 2.4.4
> Environment: HBase 2.2.2
> os.name=Linux
> os.arch=amd64
> os.version=5.4.0-72-generic
> java.version=1.8.0_191
> java.vendor=Oracle Corporation
>Reporter: Jingxuan Fu
>Priority: Minor
>  Labels: patch
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> In hbase-default.xml:
>   
> {code:xml}
> <property>
>   <name>hbase.mob.compaction.threads.max</name>
>   <value>1</value>
>   <description>
>     The max number of threads used in MobCompactor.
>   </description>
> </property>
> {code}
>  
> When the value is set to a negative number, such as -1, Hmaster cannot start 
> normally.
> The log file will output:
>   
> {code}
> 2021-07-22 18:54:13,758 ERROR [master/JavaFuzz:16000:becomeActiveMaster] master.HMaster: Failed to become active master
> java.lang.IllegalArgumentException
>   at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1314)
>   at org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
>   at org.apache.hadoop.hbase.master.MobCompactionChore.<init>(MobCompactionChore.java:51)
>   at org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278)
>   at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
>   at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
>   at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
>   at java.lang.Thread.run(Thread.java:748)
> 2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] master.HMaster: Master server abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
> 2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] master.HMaster: * ABORTING master javafuzz,16000,1626951243154: Unhandled exception. Starting shutdown. *
> java.lang.IllegalArgumentException
>   at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1314)
>   at org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
>   at org.apache.hadoop.hbase.master.MobCompactionChore.<init>(MobCompactionChore.java:51)
>   at org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278)
>   at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
>   at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
>   at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
>   at java.lang.Thread.run(Thread.java:748)
> 2021-07-22 18:54:13,760 INFO  [master/JavaFuzz:16000:becomeActiveMaster] regionserver.HRegionServer: * STOPPING region server 'javafuzz,16000,1626951243154' *
> {code}
>  
> In MobUtils.java (package org.apache.hadoop.hbase.mob), this method is the same 
> from version 2.2.0 through 2.4.4:
> {code:java}
>   public static ExecutorService createMobCompactorThreadPool(Configuration conf) {
>     int maxThreads = conf.getInt(MobConstants.MOB_COMPACTION_THREADS_MAX,
>       MobConstants.DEFAULT_MOB_COMPACTION_THREADS_MAX);
>     if (maxThreads == 0) {
>       maxThreads = 1;
>     }
>     final SynchronousQueue<Runnable> queue = new SynchronousQueue<>();
>     ThreadPoolExecutor pool = new ThreadPoolExecutor(1, maxThreads, 60, TimeUnit.SECONDS, queue,
>       Threads.newDaemonThreadFactory("MobCompactor"), new RejectedExecutionHandler() {
>         @Override
>         public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
>           try {
>             // waiting for a thread to pick up instead of throwing exceptions.
>             queue.put(r);
>           } catch (InterruptedException e) {
>             throw new RejectedExecutionException(e);
>           }
>         }
>       });
>     ((ThreadPoolExecutor) pool).allowCoreThreadTimeOut(true);
>     return pool;
>   }
> {code}
> When MOB_COMPACTION_THREADS_MAX is set to 0, MobUtils will set it to 1. But 
> the program does not take 
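
Assuming the configuration key and constants quoted above, a minimal sketch of 
the kind of guard this report suggests (not necessarily the committed 
HBASE-26114 patch) would only need to widen the zero check so that negative 
values also fall back to a single thread:

{code:java}
// Hypothetical guard: treat 0 and negative values the same way, so an illegal
// maximumPoolSize is never passed to the ThreadPoolExecutor constructor.
int maxThreads = conf.getInt(MobConstants.MOB_COMPACTION_THREADS_MAX,
  MobConstants.DEFAULT_MOB_COMPACTION_THREADS_MAX);
if (maxThreads <= 0) {
  maxThreads = 1;
}
{code}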

[GitHub] [hbase] Apache-HBase commented on pull request #3533: HBASE-26144 The HStore.snapshot method is never called in main code

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3533:
URL: https://github.com/apache/hbase/pull/3533#issuecomment-887681288


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  7s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 19s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 10s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 36s |  hbase-server generated 1 new + 21 
unchanged - 0 fixed = 22 total (was 21)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 151m 18s |  hbase-server in the patch passed.  
|
   |  |   | 181m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3533/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3533 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e70dc236faa7 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 02d263e7dd |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3533/1/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3533/1/testReport/
 |
   | Max. process+thread count | 4712 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3533/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3529: HBASE-26124 Backport HBASE-25373 "Remove HTrace completely in code base and try to make use of OpenTelemetry" to branch-2

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3529:
URL: https://github.com/apache/hbase/pull/3529#issuecomment-887688697


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 35s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m 15s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 58s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   5m 44s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 17s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 26s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   5m 47s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 327m 11s |  root in the patch failed.  |
   |  |   | 368m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3529 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 80e68b0d9f11 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 20a4aaedcc |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/testReport/
 |
   | Max. process+thread count | 4826 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-common hbase-client 
hbase-zookeeper hbase-asyncfs hbase-server hbase-mapreduce hbase-shell hbase-it 
hbase-shaded hbase-shaded/hbase-shaded-client hbase-external-blockcache 
hbase-shaded/hbase-shaded-testing-util . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3529/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] petersomogyi commented on a change in pull request #3529: HBASE-26124 Backport HBASE-25373 "Remove HTrace completely in code base and try to make use of OpenTelemetry" to branch-2

2021-07-27 Thread GitBox


petersomogyi commented on a change in pull request #3529:
URL: https://github.com/apache/hbase/pull/3529#discussion_r677656453



##
File path: hbase-shaded/hbase-shaded-testing-util/pom.xml
##
@@ -225,9 +225,11 @@
 com.github.spotbugs:*
 org.apache.htrace:*
 org.apache.yetus:*
+
org.apache.logging.log4j:*

Review comment:
   Same here.

##
File path: hbase-shaded/hbase-shaded-client/pom.xml
##
@@ -76,9 +76,11 @@
 com.github.spotbugs:*
 org.apache.htrace:*
 org.apache.yetus:*
+
org.apache.logging.log4j:*

Review comment:
   Not from the original commit. Is this needed?

##
File path: hbase-shaded/pom.xml
##
@@ -156,9 +156,11 @@
 
com.github.spotbugs:*
 org.apache.htrace:*
 org.apache.yetus:*
+
org.apache.logging.log4j:*

Review comment:
   Same here.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] caroliney14 opened a new pull request #3535: HBASE-25469 Create RIT servlet in HMaster to track more detailed RIT info not captured in metrics

2021-07-27 Thread GitBox


caroliney14 opened a new pull request #3535:
URL: https://github.com/apache/hbase/pull/3535


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-25469) Create RIT servlet in HMaster to track more detailed RIT info not captured in metrics

2021-07-27 Thread Caroline Zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline Zhou updated HBASE-25469:
--
Attachment: Screen Shot 2021-07-27 at 10.34.53.png

> Create RIT servlet in HMaster to track more detailed RIT info not captured in 
> metrics
> -
>
> Key: HBASE-25469
> URL: https://issues.apache.org/jira/browse/HBASE-25469
> Project: HBase
>  Issue Type: Improvement
>Reporter: Caroline Zhou
>Assignee: Caroline Zhou
>Priority: Minor
> Attachments: Screen Shot 2021-07-27 at 10.34.45.png, Screen Shot 
> 2021-07-27 at 10.34.53.png
>
>
> In HBase 2.1+, there is a RIT JSP page that was added as part of HBASE-21410.
> There are some additional RIT details that would be helpful to have in one 
> place:
>  * RIT Start Time
>  * RIT Duration (ms)
>  * Server
>  * Procedure Type
> This info can be added to the table under the {{/rit.jsp}} page, and we can 
> also add a button on that page to view info as JSON, for easy parsing into 
> metrics, etc. This JSON dump can be served as a servlet.
> We may also consider different ways of grouping the JSON results, such as by 
> state or server name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-25469) Create RIT servlet in HMaster to track more detailed RIT info not captured in metrics

2021-07-27 Thread Caroline Zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline Zhou updated HBASE-25469:
--
Description: 
In HBase 2.1+, there is a RIT JSP page that was added as part of HBASE-21410.

There are some additional RIT details that would be helpful to have in one 
place:
 * RIT Start Time
 * RIT Duration (ms)
 * Server
 * Procedure Type

This info can be added to the table under the {{/rit.jsp}} page, and we can 
also add a button on that page to view info as JSON, for easy parsing into 
metrics, etc. This JSON dump can be served as a servlet.

We may also consider different ways of grouping the JSON results, such as by 
state or server name.

!Screen Shot 2021-07-27 at 10.34.45.png!

!Screen Shot 2021-07-27 at 10.34.53.png!

  was:
In HBase 2.1+, there is a RIT JSP page that was added as part of HBASE-21410.

There are some additional RIT details that would be helpful to have in one 
place:
 * RIT Start Time
 * RIT Duration (ms)
 * Server
 * Procedure Type

This info can be added to the table under the {{/rit.jsp}} page, and we can 
also add a button on that page to view info as JSON, for easy parsing into 
metrics, etc. This JSON dump can be served as a servlet.

We may also consider different ways of grouping the JSON results, such as by 
state or server name.


> Create RIT servlet in HMaster to track more detailed RIT info not captured in 
> metrics
> -
>
> Key: HBASE-25469
> URL: https://issues.apache.org/jira/browse/HBASE-25469
> Project: HBase
>  Issue Type: Improvement
>Reporter: Caroline Zhou
>Assignee: Caroline Zhou
>Priority: Minor
> Attachments: Screen Shot 2021-07-27 at 10.34.45.png, Screen Shot 
> 2021-07-27 at 10.34.53.png
>
>
> In HBase 2.1+, there is a RIT JSP page that was added as part of HBASE-21410.
> There are some additional RIT details that would be helpful to have in one 
> place:
>  * RIT Start Time
>  * RIT Duration (ms)
>  * Server
>  * Procedure Type
> This info can be added to the table under the {{/rit.jsp}} page, and we can 
> also add a button on that page to view info as JSON, for easy parsing into 
> metrics, etc. This JSON dump can be served as a servlet.
> We may also consider different ways of grouping the JSON results, such as by 
> state or server name.
> !Screen Shot 2021-07-27 at 10.34.45.png!
> !Screen Shot 2021-07-27 at 10.34.53.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-25469) Create RIT servlet in HMaster to track more detailed RIT info not captured in metrics

2021-07-27 Thread Caroline Zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline Zhou updated HBASE-25469:
--
Attachment: Screen Shot 2021-07-27 at 10.34.45.png

> Create RIT servlet in HMaster to track more detailed RIT info not captured in 
> metrics
> -
>
> Key: HBASE-25469
> URL: https://issues.apache.org/jira/browse/HBASE-25469
> Project: HBase
>  Issue Type: Improvement
>Reporter: Caroline Zhou
>Assignee: Caroline Zhou
>Priority: Minor
> Attachments: Screen Shot 2021-07-27 at 10.34.45.png, Screen Shot 
> 2021-07-27 at 10.34.53.png
>
>
> In HBase 2.1+, there is a RIT JSP page that was added as part of HBASE-21410.
> There are some additional RIT details that would be helpful to have in one 
> place:
>  * RIT Start Time
>  * RIT Duration (ms)
>  * Server
>  * Procedure Type
> This info can be added to the table under the {{/rit.jsp}} page, and we can 
> also add a button on that page to view info as JSON, for easy parsing into 
> metrics, etc. This JSON dump can be served as a servlet.
> We may also consider different ways of grouping the JSON results, such as by 
> state or server name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3534: HBASE-26146: Add support for HBASE_HBCK_OPTS

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3534:
URL: https://github.com/apache/hbase/pull/3534#issuecomment-887701006


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 46s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  9s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3534/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3534 |
   | JIRA Issue | HBASE-26146 |
   | Optional Tests |  |
   | uname | Linux a1ea4a3c51b2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / dd22fecc2b |
   | Max. process+thread count | 46 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3534/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3534: HBASE-26146: Add support for HBASE_HBCK_OPTS

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3534:
URL: https://github.com/apache/hbase/pull/3534#issuecomment-887701308


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 16s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   |   3m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3534/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3534 |
   | JIRA Issue | HBASE-26146 |
   | Optional Tests | dupname asflicense shellcheck shelldocs |
   | uname | Linux c194ee5f4bf7 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / dd22fecc2b |
   | Max. process+thread count | 47 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3534/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3534: HBASE-26146: Add support for HBASE_HBCK_OPTS

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3534:
URL: https://github.com/apache/hbase/pull/3534#issuecomment-887703164


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 51s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3534/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3534 |
   | JIRA Issue | HBASE-26146 |
   | Optional Tests |  |
   | uname | Linux 9391528bea2b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / dd22fecc2b |
   | Max. process+thread count | 45 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3534/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-25469) Create RIT servlet in HMaster to track more detailed RIT info not captured in metrics

2021-07-27 Thread Caroline Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388214#comment-17388214
 ] 

Caroline Zhou commented on HBASE-25469:
---

[~apurtell] [~stack] [~tianjingyun] [~bharathv] [~vjasani] Please take a look. 
Added a few fields to the rit.jsp page as well as created a new servlet to 
serve individual RIT information as json (RIT info in one place, can be parsed 
for metrics). A couple of things to consider/I would like your feedback on:
 * Any other fields we should add to the json output? (Aggregates like 
ritCount, ritCountOverThreshold, etc. can be found in AssignmentManager metrics 
so I didn't include those here.)
 * Should we include the ability for the json to display only RIT over 
threshold/RITs grouped by state or server name?

I would also like to backport this to branch-1, after master PR is approved.

Thanks.

> Create RIT servlet in HMaster to track more detailed RIT info not captured in 
> metrics
> -
>
> Key: HBASE-25469
> URL: https://issues.apache.org/jira/browse/HBASE-25469
> Project: HBase
>  Issue Type: Improvement
>Reporter: Caroline Zhou
>Assignee: Caroline Zhou
>Priority: Minor
> Attachments: Screen Shot 2021-07-27 at 10.34.45.png, Screen Shot 
> 2021-07-27 at 10.34.53.png
>
>
> In HBase 2.1+, there is a RIT JSP page that was added as part of HBASE-21410.
> There are some additional RIT details that would be helpful to have in one 
> place:
>  * RIT Start Time
>  * RIT Duration (ms)
>  * Server
>  * Procedure Type
> This info can be added to the table under the {{/rit.jsp}} page, and we can 
> also add a button on that page to view info as JSON, for easy parsing into 
> metrics, etc. This JSON dump can be served as a servlet.
> We may also consider different ways of grouping the JSON results, such as by 
> state or server name.
> !Screen Shot 2021-07-27 at 10.34.45.png!
> !Screen Shot 2021-07-27 at 10.34.53.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26001) When turn on access control, the cell level TTL of Increment and Append operations is invalid.

2021-07-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388215#comment-17388215
 ] 

Hudson commented on HBASE-26001:


Results for branch branch-2.3
[build #263 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/263/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/263/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/263/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/263/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/263/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> When turn on access control, the cell level TTL of Increment and Append 
> operations is invalid.
> --
>
> Key: HBASE-26001
> URL: https://issues.apache.org/jira/browse/HBASE-26001
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.6.7, 2.5.0, 2.4.5
>
>
> AccessController postIncrementBeforeWAL() and postAppendBeforeWAL() methods 
> will rewrite the new cell's tags by the old cell's. This will makes the other 
> kinds of tag in new cell invisible (such as TTL tag) after this. As in 
> Increment and Append operations, the new cell has already catch forward all 
> tags of the old cell and TTL tag from mutation operation, here in 
> AccessController we do not need to rewrite the tags once again. Also, the TTL 
> tag of newCell will be invisible in the new created cell. Actually, in 
> Increment and Append operations, the newCell has already copied all tags of 
> the oldCell. So the oldCell is useless here.
> {code:java}
>   private Cell createNewCellWithTags(Mutation mutation, Cell oldCell, Cell newCell) {
>     // Collect any ACLs from the old cell
>     List<Tag> tags = Lists.newArrayList();
>     List<Tag> aclTags = Lists.newArrayList();
>     ListMultimap<String, Permission> perms = ArrayListMultimap.create();
>     if (oldCell != null) {
>       Iterator<Tag> tagIterator = PrivateCellUtil.tagsIterator(oldCell);
>       while (tagIterator.hasNext()) {
>         Tag tag = tagIterator.next();
>         if (tag.getType() != PermissionStorage.ACL_TAG_TYPE) {
>           // Not an ACL tag, just carry it through
>           if (LOG.isTraceEnabled()) {
>             LOG.trace("Carrying forward tag from " + oldCell + ": type " + tag.getType()
>                 + " length " + tag.getValueLength());
>           }
>           tags.add(tag);
>         } else {
>           aclTags.add(tag);
>         }
>       }
>     }
>     // Do we have an ACL on the operation?
>     byte[] aclBytes = mutation.getACL();
>     if (aclBytes != null) {
>       // Yes, use it
>       tags.add(new ArrayBackedTag(PermissionStorage.ACL_TAG_TYPE, aclBytes));
>     } else {
>       // No, use what we carried forward
>       if (perms != null) {
>         // TODO: If we collected ACLs from more than one tag we may have a
>         // List<Permission> of size > 1, this can be collapsed into a single
>         // Permission
>         if (LOG.isTraceEnabled()) {
>           LOG.trace("Carrying forward ACLs from " + oldCell + ": " + perms);
>         }
>         tags.addAll(aclTags);
>       }
>     }
>     // If we have no tags to add, just return
>     if (tags.isEmpty()) {
>       return newCell;
>     }
>     // Here the new cell's own tags will be invisible.
>     return PrivateCellUtil.createCell(newCell, tags);
>   }
> {code}
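
Following the reasoning in the description, a minimal sketch of the direction 
it suggests (hypothetical, not necessarily the committed fix) would drop the 
oldCell parameter and only apply the operation-level ACL, reusing the same 
helpers as the quoted method so the newCell's TTL and other tags stay visible:

{code:java}
  // Hypothetical sketch: for Increment/Append the newCell already carries all of
  // oldCell's tags, so only the ACL supplied with the mutation (if any) needs applying.
  private Cell createNewCellWithTags(Mutation mutation, Cell newCell) {
    byte[] aclBytes = mutation.getACL();
    if (aclBytes == null) {
      // No ACL on the operation: return newCell untouched, keeping its TTL tag visible.
      return newCell;
    }
    // Keep every non-ACL tag already on newCell and replace any ACL tag with the new one.
    List<Tag> tags = Lists.newArrayList();
    Iterator<Tag> tagIterator = PrivateCellUtil.tagsIterator(newCell);
    while (tagIterator.hasNext()) {
      Tag tag = tagIterator.next();
      if (tag.getType() != PermissionStorage.ACL_TAG_TYPE) {
        tags.add(tag);
      }
    }
    tags.add(new ArrayBackedTag(PermissionStorage.ACL_TAG_TYPE, aclBytes));
    return PrivateCellUtil.createCell(newCell, tags);
  }
{code}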



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HBASE-25469) Create RIT servlet in HMaster to track more detailed RIT info not captured in metrics

2021-07-27 Thread Caroline Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388214#comment-17388214
 ] 

Caroline Zhou edited comment on HBASE-25469 at 7/27/21, 5:44 PM:
-

[~apurtell] [~stack] [~tianjingyun] [~bharathv] [~vjasani] Please take a look. 
Added a few fields to the rit.jsp page as well as created a new servlet to 
serve individual RIT information as json (RIT info in one place, can be parsed 
for metrics). A couple of things to consider/I would like your feedback on:
 * Any other fields we should add to the json output? (Aggregates like 
ritCount, ritCountOverThreshold, etc. can be found in AssignmentManager metrics 
so I didn't include those here, but we could also provide counts by RIT 
state/by server.)
 * Should we include the ability for the json to display only RIT over 
threshold/RITs grouped by state or server name?

I would also like to backport this to branch-1, after master PR is approved.

Thanks.


was (Author: caroliney14):
[~apurtell] [~stack] [~tianjingyun] [~bharathv] [~vjasani] Please take a look. 
Added a few fields to the rit.jsp page as well as created a new servlet to 
serve individual RIT information as json (RIT info in one place, can be parsed 
for metrics). A couple of things to consider/I would like your feedback on:
 * Any other fields we should add to the json output? (Aggregates like 
ritCount, ritCountOverThreshold, etc. can be found in AssignmentManager metrics 
so I didn't include those here.)
 * Should we include the ability for the json to display only RIT over 
threshold/RITs grouped by state or server name?

I would also like to backport this to branch-1, after master PR is approved.

Thanks.

> Create RIT servlet in HMaster to track more detailed RIT info not captured in 
> metrics
> -
>
> Key: HBASE-25469
> URL: https://issues.apache.org/jira/browse/HBASE-25469
> Project: HBase
>  Issue Type: Improvement
>Reporter: Caroline Zhou
>Assignee: Caroline Zhou
>Priority: Minor
> Attachments: Screen Shot 2021-07-27 at 10.34.45.png, Screen Shot 
> 2021-07-27 at 10.34.53.png
>
>
> In HBase 2.1+, there is a RIT JSP page that was added as part of HBASE-21410.
> There are some additional RIT details that would be helpful to have in one 
> place:
>  * RIT Start Time
>  * RIT Duration (ms)
>  * Server
>  * Procedure Type
> This info can be added to the table under the {{/rit.jsp}} page, and we can 
> also add a button on that page to view info as JSON, for easy parsing into 
> metrics, etc. This JSON dump can be served as a servlet.
> We may also consider different ways of grouping the JSON results, such as by 
> state or server name.
> !Screen Shot 2021-07-27 at 10.34.45.png!
> !Screen Shot 2021-07-27 at 10.34.53.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-25469) Create RIT servlet in HMaster to track more detailed RIT info not captured in metrics

2021-07-27 Thread Caroline Zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline Zhou updated HBASE-25469:
--
Description: 
In HBase 2.1+, there is a RIT jsp page that was added as part of HBASE-21410.

There are some additional RIT details that would be helpful to have in one 
place:
 * RIT Start Time
 * RIT Duration (ms)
 * Server
 * Procedure Type

This info can be added to the table under the {{/rit.jsp}} page, and we can 
also add a button on that page to view info as JSON, for easy parsing into 
metrics, etc. This JSON dump can be served as a servlet.

We may also consider different ways of grouping the JSON results, such as by 
state, table, or server name, and/or adding counts of RIT by state or server 
name.

!Screen Shot 2021-07-27 at 10.34.45.png!

!Screen Shot 2021-07-27 at 10.34.53.png!

  was:
In HBase 2.1+, there is a RIT JSP page that was added as part of HBASE-21410.

There are some additional RIT details that would be helpful to have in one 
place:
 * RIT Start Time
 * RIT Duration (ms)
 * Server
 * Procedure Type

This info can be added to the table under the {{/rit.jsp}} page, and we can 
also add a button on that page to view info as JSON, for easy parsing into 
metrics, etc. This JSON dump can be served as a servlet.

We may also consider different ways of grouping the JSON results, such as by 
state or server name.

!Screen Shot 2021-07-27 at 10.34.45.png!

!Screen Shot 2021-07-27 at 10.34.53.png!


> Create RIT servlet in HMaster to track more detailed RIT info not captured in 
> metrics
> -
>
> Key: HBASE-25469
> URL: https://issues.apache.org/jira/browse/HBASE-25469
> Project: HBase
>  Issue Type: Improvement
>Reporter: Caroline Zhou
>Assignee: Caroline Zhou
>Priority: Minor
> Attachments: Screen Shot 2021-07-27 at 10.34.45.png, Screen Shot 
> 2021-07-27 at 10.34.53.png
>
>
> In HBase 2.1+, there is a RIT jsp page that was added as part of HBASE-21410.
> There are some additional RIT details that would be helpful to have in one 
> place:
>  * RIT Start Time
>  * RIT Duration (ms)
>  * Server
>  * Procedure Type
> This info can be added to the table under the {{/rit.jsp}} page, and we can 
> also add a button on that page to view info as JSON, for easy parsing into 
> metrics, etc. This JSON dump can be served as a servlet.
> We may also consider different ways of grouping the JSON results, such as by 
> state, table, or server name, and/or adding counts of RIT by state or server 
> name.
> !Screen Shot 2021-07-27 at 10.34.45.png!
> !Screen Shot 2021-07-27 at 10.34.53.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26119) Polish TestAsyncNonMetaRegionLocator

2021-07-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388230#comment-17388230
 ] 

Hudson commented on HBASE-26119:


Results for branch branch-2
[build #307 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307//console].


> Polish TestAsyncNonMetaRegionLocator
> 
>
> Key: HBASE-26119
> URL: https://issues.apache.org/jira/browse/HBASE-26119
> Project: HBase
>  Issue Type: Improvement
>  Components: meta replicas, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> It creates a Connection in the constructor but only closes it in the 
> AfterClass method, which leaks Connections and makes the code a bit ugly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26118) The HStore.commitFile and HStore.moveFileIntoPlace almost have the same logic

2021-07-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388232#comment-17388232
 ] 

Hudson commented on HBASE-26118:


Results for branch branch-2
[build #307 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307//console].


> The HStore.commitFile and HStore.moveFileIntoPlace almost have the same logic
> -
>
> Key: HBASE-26118
> URL: https://issues.apache.org/jira/browse/HBASE-26118
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, regionserver
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> We should unify them and only have single entry point for committing 
> storefiles.
> This is good for implementing HBASE-26067.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-21946) Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable

2021-07-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388231#comment-17388231
 ] 

Hudson commented on HBASE-21946:


Results for branch branch-2
[build #307 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/307//console].


> Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable
> --
>
> Key: HBASE-21946
> URL: https://issues.apache.org/jira/browse/HBASE-21946
> Project: HBase
>  Issue Type: Improvement
>  Components: Offheaping
>Reporter: Zheng Hu
>Assignee: Wei-Chiu Chuang
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-2
>
> Attachments: HBASE-21946.HBASE-21879.v01.patch, 
> HBASE-21946.HBASE-21879.v02.patch, HBASE-21946.HBASE-21879.v03.patch, 
> HBASE-21946.HBASE-21879.v04.patch
>
>
> [~stakiar] is working on HDFS-3246, so for now we have to keep the byte[] 
> pread in HFileBlock reading. Once it gets resolved, we can upgrade the hadoop 
> version and do the replacement.
> I think it will be a great p999 latency improvement in the 100% Get case; 
> anyway, filing this issue first to address it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3352: HBASE-25913 Introduce EnvironmentEdge.Clock and Clock.currentTimeAdvancing

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3352:
URL: https://github.com/apache/hbase/pull/3352#issuecomment-887742096


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 53s |  master passed  |
   | +1 :green_heart: |  compile  |   4m 35s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   3m 28s |  master passed  |
   | -0 :warning: |  patch  |   2m 25s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 37s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 24s |  hbase-common: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | -0 :warning: |  checkstyle  |   1m  8s |  hbase-server: The patch 
generated 4 new + 200 unchanged - 3 fixed = 204 total (was 203)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  18m 58s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | -1 :x: |  spotbugs  |   0m 57s |  hbase-common generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  57m 45s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hbase.util.IncrementingEnvironmentEdge.timeIncrement; locked 
71% of time  Unsynchronized access at IncrementingEnvironmentEdge.java:71% of 
time  Unsynchronized access at IncrementingEnvironmentEdge.java:[line 49] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3352/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3352 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux f2fe2583054c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 02d263e7dd |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3352/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-common.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3352/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | spotbugs | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3352/4/artifact/yetus-general-check/output/new-spotbugs-hbase-common.html
 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-server hbase-backup U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3352/4/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3533: HBASE-26144 The HStore.snapshot method is never called in main code

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3533:
URL: https://github.com/apache/hbase/pull/3533#issuecomment-887745397


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m  8s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 48s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 53s |  master passed  |
   | +1 :green_heart: |  shadedjars  |  11m 55s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 26s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 215m 50s |  hbase-server in the patch passed.  
|
   |  |   | 258m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3533/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3533 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux d0a79fbfd17b 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 02d263e7dd |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3533/1/testReport/
 |
   | Max. process+thread count | 3158 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3533/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer

2021-07-27 Thread GitBox


Apache-HBase commented on pull request #3360:
URL: https://github.com/apache/hbase/pull/3360#issuecomment-887762891


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 18s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 55s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 33s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 22s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 31s |  hbase-hadoop-compat generated 1 new + 
102 unchanged - 1 fixed = 103 total (was 103)  |
   | -0 :warning: |  javac  |   3m 51s |  hbase-server generated 1 new + 192 
unchanged - 1 fixed = 193 total (was 193)  |
   | -0 :warning: |  checkstyle  |   1m 21s |  hbase-server: The patch 
generated 27 new + 175 unchanged - 0 fixed = 202 total (was 175)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  21m 57s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 23s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  60m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/22/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3360 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 79d8051c2109 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 02d263e7dd |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/22/artifact/yetus-general-check/output/diff-compile-javac-hbase-hadoop-compat.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/22/artifact/yetus-general-check/output/diff-compile-javac-hbase-server.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/22/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-hadoop-compat hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/22/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24095) HBase Bad Substitution ERROR on hadoop-functions.sh

2021-07-27 Thread Gaurav Kanade (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388274#comment-17388274
 ] 

Gaurav Kanade commented on HBASE-24095:
---

Have you checked if you might be hitting this? 
https://issues.apache.org/jira/browse/HADOOP-16167

> HBase Bad Substitution ERROR on hadoop-functions.sh
> ---
>
> Key: HBASE-24095
> URL: https://issues.apache.org/jira/browse/HBASE-24095
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3
>Affects Versions: 2.2.3
> Environment: hbase 2.2.3 with hadoop 3.2.1:
> Installed both hadoop and hbase according to the Apache "Getting Started" 
> guides. For hbase, I have removed the hadoop jar files it came bundled with, 
> which do not match my current version of hadoop, as per the documentation.
>Reporter: Alex Swarner
>Priority: Major
>
> Any time I make a call to hbase (e.g. "hbase version" or "hbase-daemon.sh 
> start master"), I receive this error message:
> */usr/hdeco/hadoop/bin/../libexec/hadoop-functions.sh: line 2366: 
> HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_USER: bad substitution*
> */usr/hdeco/hadoop/bin/../libexec/hadoop-functions.sh: line 2461: 
> HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_OPTS: bad substitution*
>  
> "hbase version" does provide version information after this error message, 
> but I am unable to start the hbase master, so I am unable to use hbase 
> further.
>  
> I have never posted in any forum before, so let me know if more information 
> is needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] taklwu commented on a change in pull request #3529: HBASE-26124 Backport HBASE-25373 "Remove HTrace completely in code base and try to make use of OpenTelemetry" to branch-2

2021-07-27 Thread GitBox


taklwu commented on a change in pull request #3529:
URL: https://github.com/apache/hbase/pull/3529#discussion_r677764183



##
File path: hbase-shaded/hbase-shaded-client/pom.xml
##
@@ -76,9 +76,11 @@
     <exclude>com.github.spotbugs:*</exclude>
     <exclude>org.apache.htrace:*</exclude>
     <exclude>org.apache.yetus:*</exclude>
+    <exclude>org.apache.logging.log4j:*</exclude>

Review comment:
   you're right, I will remove them 








[GitHub] [hbase] saintstack merged pull request #3534: HBASE-26146: Add support for HBASE_HBCK_OPTS

2021-07-27 Thread GitBox


saintstack merged pull request #3534:
URL: https://github.com/apache/hbase/pull/3534


   






[jira] [Resolved] (HBASE-26146) Allow custom opts for hbck in hbase bin

2021-07-27 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-26146.
---
Fix Version/s: 3.0.0-alpha-2
   2.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Thanks for the patch [~bbeaudreault]. Merged to branch-2+. It didn't go to 
branch-2.4 because of conflicts. Make a subtask for a backport if you want it 
in 2.4/2.3, boss.

> Allow custom opts for hbck in hbase bin
> ---
>
> Key: HBASE-26146
> URL: https://issues.apache.org/jira/browse/HBASE-26146
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> https://issues.apache.org/jira/browse/HBASE-15145 made it so that when you 
> execute {{hbase hbck}}, the regionserver or JAAS opts are added automatically 
> to the command line. This is problematic in some cases depending on what 
> regionserver opts have been set. For instance, one might configure a JMX port 
> for the regionserver, but then hbck will fail due to a port conflict if run on 
> the same host as a regionserver. Another example would be a regionserver that 
> defines an {{-Xms}} value significantly larger than hbck requires.
>  
> We should make it possible for users to define their own HBASE_HBCK_OPTS, 
> which takes precedence over the server opts added by default.
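A hedged illustration of how the override would be used: the variable name HBASE_HBCK_OPTS comes from this issue, while the concrete opts values and the fallback behaviour of bin/hbase are assumptions, not taken from the merged patch.

{code:bash}
# Assumed usage sketch. Give hbck its own modest heap and skip the
# regionserver JMX/JAAS settings that caused the conflicts described above:
export HBASE_HBCK_OPTS="-Xms256m -Xmx1g"
hbase hbck

# With HBASE_HBCK_OPTS unset, bin/hbase is expected to fall back to the
# server opts it has appended since HBASE-15145 (the source of the port
# conflict and oversized -Xms problems described above).
{code}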





[GitHub] [hbase] saintstack commented on a change in pull request #3359: HBASE-25891 remove dependence storing wal filenames for backup

2021-07-27 Thread GitBox


saintstack commented on a change in pull request #3359:
URL: https://github.com/apache/hbase/pull/3359#discussion_r677780945



##
File path: 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManifest.java
##
@@ -512,11 +512,11 @@ public void addDependentImage(BackupImage image) {
* Set the incremental timestamp map directly.
* @param incrTimestampMap timestamp map
*/
-  public void setIncrTimestampMap(HashMap<TableName, HashMap<String, Long>> incrTimestampMap) {
+  public void setIncrTimestampMap(Map<TableName, Map<String, Long>> incrTimestampMap) {
 this.backupImage.setIncrTimeRanges(incrTimestampMap);
   }
 
-  public Map<TableName, HashMap<String, Long>> getIncrTimestampMap() {
+  public Map<TableName, Map<String, Long>> getIncrTimestampMap() {

Review comment:
   Might have been better to do these conversions -- HashMap to Map -- as 
an issue of their own, because they distract from the meat of this PR (they are 
good, just off the point, I'd say). Next time.

##
File path: 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.java
##
@@ -1011,148 +1006,6 @@ public void deleteIncrementalBackupTableSet(String 
backupRoot) throws IOExceptio
 }
   }
 
-  /**
-   * Register WAL files as eligible for deletion
-   * @param files files
-   * @param backupId backup id
-   * @param backupRoot root directory path to backup destination
-   * @throws IOException exception
-   */
-  public void addWALFiles(List<String> files, String backupId, String backupRoot)
-  throws IOException {
-if (LOG.isTraceEnabled()) {
-  LOG.trace("add WAL files to backup system table: " + backupId + " " + 
backupRoot + " files ["
-  + StringUtils.join(files, ",") + "]");
-}
-if (LOG.isDebugEnabled()) {
-  files.forEach(file -> LOG.debug("add :" + file));
-}
-try (Table table = connection.getTable(tableName)) {
-  List<Put> puts = createPutsForAddWALFiles(files, backupId, backupRoot);
-  table.put(puts);
-}
-  }
-
-  /**
-   * Register WAL files as eligible for deletion
-   * @param backupRoot root directory path to backup
-   * @throws IOException exception
-   */
-  public Iterator<WALItem> getWALFilesIterator(String backupRoot) throws IOException {
-LOG.trace("get WAL files from backup system table");
-
-final Table table = connection.getTable(tableName);
-Scan scan = createScanForGetWALs(backupRoot);
-final ResultScanner scanner = table.getScanner(scan);
-final Iterator<Result> it = scanner.iterator();
-return new Iterator<WALItem>() {
-
-  @Override
-  public boolean hasNext() {
-boolean next = it.hasNext();
-if (!next) {
-  // close all
-  try {
-scanner.close();
-table.close();
-  } catch (IOException e) {
-LOG.error("Close WAL Iterator", e);
-  }
-}
-return next;
-  }
-
-  @Override
-  public WALItem next() {
-Result next = it.next();
-List<Cell> cells = next.listCells();
-byte[] buf = cells.get(0).getValueArray();
-int len = cells.get(0).getValueLength();
-int offset = cells.get(0).getValueOffset();
-String backupId = new String(buf, offset, len);
-buf = cells.get(1).getValueArray();
-len = cells.get(1).getValueLength();
-offset = cells.get(1).getValueOffset();
-String walFile = new String(buf, offset, len);
-buf = cells.get(2).getValueArray();
-len = cells.get(2).getValueLength();
-offset = cells.get(2).getValueOffset();
-String backupRoot = new String(buf, offset, len);
-return new WALItem(backupId, walFile, backupRoot);
-  }
-
-  @Override
-  public void remove() {
-// not implemented
-throw new RuntimeException("remove is not supported");
-  }
-};
-  }
-
-  /**
-   * Check if WAL file is eligible for deletion Future: to support all backup 
destinations
-   * @param file name of a file to check
-   * @return true, if deletable, false otherwise.
-   * @throws IOException exception
-   */
-  // TODO: multiple backup destination support
-  public boolean isWALFileDeletable(String file) throws IOException {
-if (LOG.isTraceEnabled()) {
-  LOG.trace("Check if WAL file has been already backed up in backup system 
table " + file);
-}
-try (Table table = connection.getTable(tableName)) {
-  Get get = createGetForCheckWALFile(file);
-  Result res = table.get(get);
-  return (!res.isEmpty());
-}
-  }
-
-  /**
-   * Check if WAL file is eligible for deletion using multi-get
-   * @param files names of a file to check
-   * @return map of results (key: FileStatus object. value: true if the file 
is deletable, false
-   * otherwise)
-   * @throws IOException exception
-   */
-  public Map<FileStatus, Boolean> areWALFilesDeletable(Iterable<FileStatus> files)
-  throws IOException {
-final int BUF_SIZE = 100;
-
-Map<FileStatus, Boolean> ret = new HashMap<>();
-try (Table table = connection.getTable(tableName)) {
-  List<Get> getBuffer = new ArrayList<>()
