[GitHub] [hbase] Apache-HBase commented on pull request #3583: HBASE-26193 Do not store meta region location as permanent state on z…

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3583:
URL: https://github.com/apache/hbase/pull/3583#issuecomment-898817699


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m  3s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 19s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 11s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 21s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m  9s |  hbase-server: The patch generated 1 new + 94 unchanged - 0 fixed = 95 total (was 94)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 14s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 11s |  The patch does not generate ASF License warnings.  |
   |  |   |  51m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/3/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3583 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux b4745694882e 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 44d5624908 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
   | Max. process+thread count | 85 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/3/console |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3586: HBASE-26197 Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3586:
URL: https://github.com/apache/hbase/pull/3586#issuecomment-898816810


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 17s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 10s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m  5s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  2s |  hbase-common in the patch passed.  |
   |  |   |  30m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3586 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ef57a59c87e5 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 44d5624908 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/2/testReport/ |
   | Max. process+thread count | 302 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/2/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3586: HBASE-26197 Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3586:
URL: https://github.com/apache/hbase/pull/3586#issuecomment-898816625


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 25s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 59s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 10s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 44s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 11s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 52s |  hbase-common in the patch passed.  |
   |  |   |  29m 14s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3586 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 2c9bba475059 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 44d5624908 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/2/testReport/ |
   | Max. process+thread count | 338 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/2/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-25680) Non-idempotent test in TestReplicationHFileCleaner

2021-08-13 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25680:
--
Fix Version/s: 2.3.7
   2.4.6
   3.0.0-alpha-2
   2.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to branch-2.3+.

Thanks [~lzx404243] for contributing.

> Non-idempotent test in TestReplicationHFileCleaner
> --
>
> Key: HBASE-25680
> URL: https://issues.apache.org/jira/browse/HBASE-25680
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Zhengxi Li
>Assignee: Zhengxi Li
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.6, 2.3.7
>
> Attachments: HBASE-25680.master.001.patch
>
>
> The test 
> *{{org.apache.hadoop.hbase.master.cleaner.TestReplicationHFileCleaner.testIsFileDeletable}}*
> is not idempotent and fails if run twice in the same JVM, because it pollutes 
> state shared among tests. It may be good to clean up this state pollution 
> so that other tests do not fail in the future due to the shared state 
> polluted by this test.
> h3. Detail
> Running {{TestReplicationHFileCleaner.testIsFileDeletable}} twice would 
> result in the second run failing with the following assertion error:
> {noformat}
> java.lang.AssertionError: Cleaner should allow to delete this file as there is 
> no hfile reference node for it in the queue.
> {noformat}
> The root cause is that an hfile reference is added during the first test 
> run and is not removed when the test exits. Therefore, in the second test 
> run, {{cleaner.isFileDeletable(fs.getFileStatus(file))}} would return 
> {{false}}, resulting in the assertion error.
>  
> PR link: https://github.com/apache/hbase/pull/2984
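
A minimal sketch of the kind of cleanup that restores idempotence, using a 
static set as a stand-in for the shared hfile-reference state (illustrative 
only, not the actual HBase test code; all names here are hypothetical):

{code:java}
import static org.junit.Assert.assertTrue;

import java.util.HashSet;
import java.util.Set;
import org.junit.After;
import org.junit.Test;

public class IdempotentCleanerTestSketch {
  // Stand-in for the hfile-reference state shared across runs in one JVM.
  private static final Set<String> sharedHFileRefs = new HashSet<>();

  @Test
  public void testIsFileDeletable() {
    // With no reference registered, the file must be deletable.
    assertTrue(isFileDeletable("file"));
    // This is the pollution the report describes: a reference is added...
    sharedHFileRefs.add("file");
  }

  // ...so removing it on exit keeps a second run from seeing stale state.
  @After
  public void cleanUpSharedState() {
    sharedHFileRefs.remove("file");
  }

  private boolean isFileDeletable(String file) {
    return !sharedHFileRefs.contains(file);
  }
}
{code}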



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-25680) Non-idempotent test in TestReplicationHFileCleaner

2021-08-13 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HBASE-25680:
-

Assignee: Zhengxi Li

> Non-idempotent test in TestReplicationHFileCleaner
> --
>
> Key: HBASE-25680
> URL: https://issues.apache.org/jira/browse/HBASE-25680
> Project: HBase
>  Issue Type: Test
>Reporter: Zhengxi Li
>Assignee: Zhengxi Li
>Priority: Minor
> Attachments: HBASE-25680.master.001.patch
>
>
> The test 
> *{{org.apache.hadoop.hbase.master.cleaner.TestReplicationHFileCleaner.testIsFileDeletable}}*
> is not idempotent and fails if run twice in the same JVM, because it pollutes 
> state shared among tests. It may be good to clean up this state pollution 
> so that other tests do not fail in the future due to the shared state 
> polluted by this test.
> h3. Detail
> Running {{TestReplicationHFileCleaner.testIsFileDeletable}} twice would 
> result in the second run failing with the following assertion error:
> {noformat}
> java.lang.AssertionError: Cleaner should allow to delete this file as there is 
> no hfile reference node for it in the queue.
> {noformat}
> The root cause is that an hfile reference is added during the first test 
> run and is not removed when the test exits. Therefore, in the second test 
> run, {{cleaner.isFileDeletable(fs.getFileStatus(file))}} would return 
> {{false}}, resulting in the assertion error.
>  
> PR link: https://github.com/apache/hbase/pull/2984



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-25680) Non-idempotent test in TestReplicationHFileCleaner

2021-08-13 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25680:
--
Component/s: test

> Non-idempotent test in TestReplicationHFileCleaner
> --
>
> Key: HBASE-25680
> URL: https://issues.apache.org/jira/browse/HBASE-25680
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Zhengxi Li
>Assignee: Zhengxi Li
>Priority: Minor
> Attachments: HBASE-25680.master.001.patch
>
>
> The test 
> *{{org.apache.hadoop.hbase.master.cleaner.TestReplicationHFileCleaner.testIsFileDeletable}}*
> is not idempotent and fails if run twice in the same JVM, because it pollutes 
> state shared among tests. It may be good to clean up this state pollution 
> so that other tests do not fail in the future due to the shared state 
> polluted by this test.
> h3. Detail
> Running {{TestReplicationHFileCleaner.testIsFileDeletable}} twice would 
> result in the second run failing with the following assertion error:
> {noformat}
> java.lang.AssertionError: Cleaner should allow to delete this file as there is 
> no hfile reference node for it in the queue.
> {noformat}
> The root cause is that an hfile reference is added during the first test 
> run and is not removed when the test exits. Therefore, in the second test 
> run, {{cleaner.isFileDeletable(fs.getFileStatus(file))}} would return 
> {{false}}, resulting in the assertion error.
>  
> PR link: https://github.com/apache/hbase/pull/2984



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-25680) Non-idempotent test in TestReplicationHFileCleaner

2021-08-13 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25680:
--
Issue Type: Improvement  (was: Test)

> Non-idempotent test in TestReplicationHFileCleaner
> --
>
> Key: HBASE-25680
> URL: https://issues.apache.org/jira/browse/HBASE-25680
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zhengxi Li
>Assignee: Zhengxi Li
>Priority: Minor
> Attachments: HBASE-25680.master.001.patch
>
>
> The test 
> *{{org.apache.hadoop.hbase.master.cleaner.TestReplicationHFileCleaner.testIsFileDeletable}}*
> is not idempotent and fails if run twice in the same JVM, because it pollutes 
> state shared among tests. It may be good to clean up this state pollution 
> so that other tests do not fail in the future due to the shared state 
> polluted by this test.
> h3. Detail
> Running {{TestReplicationHFileCleaner.testIsFileDeletable}} twice would 
> result in the second run failing with the following assertion error:
> {noformat}
> java.lang.AssertionError: Cleaner should allow to delete this file as there is 
> no hfile reference node for it in the queue.
> {noformat}
> The root cause is that an hfile reference is added during the first test 
> run and is not removed when the test exits. Therefore, in the second test 
> run, {{cleaner.isFileDeletable(fs.getFileStatus(file))}} would return 
> {{false}}, resulting in the assertion error.
>  
> PR link: https://github.com/apache/hbase/pull/2984



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache9 merged pull request #2984: HBASE-25680 Non-idempotent test in TestReplicationHFileCleaner

2021-08-13 Thread GitBox


Apache9 merged pull request #2984:
URL: https://github.com/apache/hbase/pull/2984


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache9 commented on a change in pull request #3566: HBASE-26172 Deprecated MasterRegistry and allow getBootstrapNodes to …

2021-08-13 Thread GitBox


Apache9 commented on a change in pull request #3566:
URL: https://github.com/apache/hbase/pull/3566#discussion_r688853460



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
##
@@ -308,18 +310,35 @@
    */
   private static final long DEFAULT_REGION_SERVER_RPC_MINIMUM_SCAN_TIME_LIMIT_DELTA = 10;
 
-  /*
+  /**
    * Whether to reject rows with size > threshold defined by
    * {@link RSRpcServices#BATCH_ROWS_THRESHOLD_NAME}
    */
   private static final String REJECT_BATCH_ROWS_OVER_THRESHOLD =
     "hbase.rpc.rows.size.threshold.reject";
 
-  /*
+  /**
    * Default value of config {@link RSRpcServices#REJECT_BATCH_ROWS_OVER_THRESHOLD}
    */
   private static final boolean DEFAULT_REJECT_BATCH_ROWS_OVER_THRESHOLD = false;
 
+  /**
+   * Determine the bootstrap nodes we want to return to the client connection registry.
+   * 
+   * {@link #MASTER}: return masters as bootstrap nodes.

Review comment:
   Email sent.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache9 commented on pull request #2984: HBASE-25680 Non-idempotent test in TestReplicationHFileCleaner

2021-08-13 Thread GitBox


Apache9 commented on pull request #2984:
URL: https://github.com/apache/hbase/pull/2984#issuecomment-898797248


   Oh, sorry... Will merge soon.
   
Thanks for the reminder~


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] lzx404243 commented on pull request #2984: HBASE-25680 Non-idempotent test in TestReplicationHFileCleaner

2021-08-13 Thread GitBox


lzx404243 commented on pull request #2984:
URL: https://github.com/apache/hbase/pull/2984#issuecomment-898793163


   Thanks @Apache9  for the approval! Is there anything I can do before this 
can be merged? Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bharathv commented on a change in pull request #3566: HBASE-26172 Deprecated MasterRegistry and allow getBootstrapNodes to …

2021-08-13 Thread GitBox


bharathv commented on a change in pull request #3566:
URL: https://github.com/apache/hbase/pull/3566#discussion_r688844196



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
##
@@ -308,18 +310,35 @@
    */
   private static final long DEFAULT_REGION_SERVER_RPC_MINIMUM_SCAN_TIME_LIMIT_DELTA = 10;
 
-  /*
+  /**
    * Whether to reject rows with size > threshold defined by
    * {@link RSRpcServices#BATCH_ROWS_THRESHOLD_NAME}
    */
   private static final String REJECT_BATCH_ROWS_OVER_THRESHOLD =
     "hbase.rpc.rows.size.threshold.reject";
 
-  /*
+  /**
    * Default value of config {@link RSRpcServices#REJECT_BATCH_ROWS_OVER_THRESHOLD}
    */
   private static final boolean DEFAULT_REJECT_BATCH_ROWS_OVER_THRESHOLD = false;
 
+  /**
+   * Determine the bootstrap nodes we want to return to the client connection registry.
+   * 
+   * {@link #MASTER}: return masters as bootstrap nodes.

Review comment:
   ya okay. I understand your concern now. Thanks for the explanation.
   
   >  If we really think we should remove this feature, then we'd better send a 
discussion email to dev list first. If there are no big concerns, then we can 
do this.
   
+1 on this fwiw. Since the decision to inline masters in the RW path was a wrong 
design choice, I vote to get rid of it in the long term.
   
   Patch lgtm otherwise.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-25051) DIGEST based auth broken for MasterRegistry

2021-08-13 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17399002#comment-17399002
 ] 

Duo Zhang commented on HBASE-25051:
---

Negotiating the authentication method through the first exchange of packets is 
typical in network protocols.

So in general I think this is a possible way. Let's draw a whole picture of how 
we do rpc connection setup first?

Thanks.

> DIGEST based auth broken for MasterRegistry
> ---
>
> Key: HBASE-25051
> URL: https://issues.apache.org/jira/browse/HBASE-25051
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, security
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Minor
>
> DIGEST-MD5 based sasl auth depends on the cluster ID to obtain tokens. With 
> master registry, we have a circular dependency here because the master registry 
> needs an rpcClient to talk to masters (and to get the cluster ID), and the 
> rpcClient needs a clusterId if DIGEST based auth is configured. Earlier, there was a ZK 
> client that has its own authentication mechanism to fetch the cluster ID.
> HBASE-23330, I think, doesn't fully fix the problem. It depends on an active 
> connection to fetch delegation tokens for the MR job, and that inherently 
> assumes that the active connection does not use DIGEST auth.
> It is not clear to me how common it is to use DIGEST based auth in 
> connections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache9 commented on a change in pull request #3566: HBASE-26172 Deprecated MasterRegistry and allow getBootstrapNodes to …

2021-08-13 Thread GitBox


Apache9 commented on a change in pull request #3566:
URL: https://github.com/apache/hbase/pull/3566#discussion_r688842192



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
##
@@ -308,18 +310,35 @@
    */
   private static final long DEFAULT_REGION_SERVER_RPC_MINIMUM_SCAN_TIME_LIMIT_DELTA = 10;
 
-  /*
+  /**
    * Whether to reject rows with size > threshold defined by
    * {@link RSRpcServices#BATCH_ROWS_THRESHOLD_NAME}
    */
   private static final String REJECT_BATCH_ROWS_OVER_THRESHOLD =
     "hbase.rpc.rows.size.threshold.reject";
 
-  /*
+  /**
    * Default value of config {@link RSRpcServices#REJECT_BATCH_ROWS_OVER_THRESHOLD}
    */
   private static final boolean DEFAULT_REJECT_BATCH_ROWS_OVER_THRESHOLD = false;
 
+  /**
+   * Determine the bootstrap nodes we want to return to the client connection registry.
+   * 
+   * {@link #MASTER}: return masters as bootstrap nodes.

Review comment:
There are no compatibility issues between client and server...

The key difference between options 1 and 2 is whether we still support using 
masters as registry endpoints after the removal of MasterRegistry in 4.0.0. 
If we still support this feature in the future, then I think a release note and 
a notice email to the dev list are enough. If we really think we should remove this 
feature, then we'd better send a discussion email to the dev list first. If there 
are no big concerns, then we can do this. If not, I think we'd better go with 
option 1, letting RpcConnectionRegistry keep the ability to use masters as 
registry endpoints.

Is this clear enough? Let me point out the key problem again: it is about 
whether people could still use masters as registry endpoints in 4.0.0, not about 
now...
   
   Thanks.
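
For concreteness, a hedged sketch of what "masters as registry endpoints" could 
look like on the client side with RpcConnectionRegistry. The property names 
(hbase.client.registry.impl, hbase.client.bootstrap.servers) and the host:port 
values are assumptions based on this discussion, not something this thread 
confirms:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RegistryConfigSketch {
  /** Point a client at the masters as bootstrap nodes (assumed property names). */
  public static Configuration masterBootstrapConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.client.registry.impl",
        "org.apache.hadoop.hbase.client.RpcConnectionRegistry");
    // Listing masters here is what option 1 would keep supporting in 4.0.0.
    conf.set("hbase.client.bootstrap.servers",
        "master1.example.com:16000,master2.example.com:16000");
    return conf;
  }
}
```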




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3580: HBASE-26089 Support RegionCoprocessor on CompactionServer

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3580:
URL: https://github.com/apache/hbase/pull/3580#issuecomment-898768372


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 21s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  HBASE-25714 passed  |
   | +1 :green_heart: |  shadedjars  |   9m 11s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 20s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 52s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 218m 15s |  hbase-server in the patch passed.  |
   |  |   | 251m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3580/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3580 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 1d56893b30f7 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3580/1/testReport/ |
   | Max. process+thread count | 3455 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3580/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (HBASE-25051) DIGEST based auth broken for MasterRegistry

2021-08-13 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398980#comment-17398980
 ] 

Bharath Vissapragada edited comment on HBASE-25051 at 8/13/21, 11:11 PM:
-

I think one way to fix this problem is to "advertise" the cluster ID as a part 
of channel setup. This is an IPC protocol change. Currently, connection 
setup works as follows:

1. client to server: socket connect()
2. client sends a connection preamble (validated on server and connection 
closed if malformed)
3. client optionally does a sasl handshake (if configured)

(3) is where the cluster ID is needed (if a token is configured as a part of 
DIGEST based auth). 

Now my proposal is to modify it as follows.

1. client to server: socket connect()
2. server responds with a 16 byte UUID that is read by the client (the client 
can determine the actual SASL mode with this clusterId info by looking up the 
tokens)
3. client sends a connection preamble (validated on server and connection 
closed if malformed)
4. client optionally does a sasl handshake (using the sasl mode and token from 
step (2)).

A sample implementation using netty IPC is something like this: 
https://github.com/bharathv/hbase/commit/95ff0d65828e8459d212e6173c00648b9c7b6814
The patch still needs to do the following:

1. Move providers.selectProvider(clusterId, ticket) .. to after step (2)
2. Implement an equivalent change in Blocking IPC.

One problem I can think of is compatibility between an old client and a new 
server (say during a rolling upgrade where the server is upgraded first); I can 
see if I can get it working by making the client ignore this piece of response. 

cc: [~zhangduo] WDYT. Do you think this breaks anything else, or is there a 
better way?



was (Author: bharathv):
I think one way to fix this problem is to "advertise" the cluster ID as a part 
of channel setup. This is an IPC protocol change. Currently the way connection 
setup works is 

1. client to server: socket connect()
2. client sends a connection preamble (validated on server and connection 
closed if malformed)
3. client optionally does a sasl handshake (if configured)

(3) is where cluster ID is needed (if token is configured as a part of DIGEST 
based auth). 

Now my proposal is to modify it as follows.

1. client to server: socket connect()
2. server responds with a 16 byte UUID that is read by the client (client can 
get the actual SASL mode with this clusterId info and looking up the tokens)
2. client sends a connection preamble (validated on server and connection 
closed if malformed)
3. client optionally does a sasl handshake (using the sasl and token from 
step(2)).

A sample implementation using netty IPC is something like this 
https://github.com/bharathv/hbase/commit/95ff0d65828e8459d212e6173c00648b9c7b6814
 . The patch still needs to do the following,

1. Move providers.selectProvider(clusterId, ticket) .. to after step (2)
2. Implement an equivalent change in Blocking IPC.

One problem I can think of is compatibility between old client and new server 
(say during a rolling upgrade when server is upgraded first), I can see if I 
can get it working by making client ignore this piece of response. 

cc: [~zhangduo] WDYT. You think this breaks anything else too or is there a 
better way?


> DIGEST based auth broken for MasterRegistry
> ---
>
> Key: HBASE-25051
> URL: https://issues.apache.org/jira/browse/HBASE-25051
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, security
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Minor
>
> DIGEST-MD5 based sasl auth depends on the cluster ID to obtain tokens. With 
> master registry, we have a circular dependency here because the master registry 
> needs an rpcClient to talk to masters (and to get the cluster ID), and the 
> rpcClient needs a clusterId if DIGEST based auth is configured. Earlier, there was a ZK 
> client that has its own authentication mechanism to fetch the cluster ID.
> HBASE-23330, I think, doesn't fully fix the problem. It depends on an active 
> connection to fetch delegation tokens for the MR job, and that inherently 
> assumes that the active connection does not use DIGEST auth.
> It is not clear to me how common it is to use DIGEST based auth in 
> connections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25051) DIGEST based auth broken for MasterRegistry

2021-08-13 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398980#comment-17398980
 ] 

Bharath Vissapragada commented on HBASE-25051:
--

I think one way to fix this problem is to "advertise" the cluster ID as a part 
of channel setup. This is an IPC protocol change. Currently, connection 
setup works as follows:

1. client to server: socket connect()
2. client sends a connection preamble (validated on server and connection 
closed if malformed)
3. client optionally does a sasl handshake (if configured)

(3) is where the cluster ID is needed (if a token is configured as a part of 
DIGEST based auth). 

Now my proposal is to modify it as follows.

1. client to server: socket connect()
2. server responds with a 16 byte UUID that is read by the client (the client 
can determine the actual SASL mode with this clusterId info by looking up the 
tokens)
3. client sends a connection preamble (validated on server and connection 
closed if malformed)
4. client optionally does a sasl handshake (using the sasl mode and token from 
step (2)).

A sample implementation using netty IPC is something like this: 
https://github.com/bharathv/hbase/commit/95ff0d65828e8459d212e6173c00648b9c7b6814
The patch still needs to do the following:

1. Move providers.selectProvider(clusterId, ticket) .. to after step (2)
2. Implement an equivalent change in Blocking IPC.

One problem I can think of is compatibility between an old client and a new 
server (say during a rolling upgrade where the server is upgraded first); I can 
see if I can get it working by making the client ignore this piece of response. 

cc: [~zhangduo] WDYT. Do you think this breaks anything else, or is there a 
better way?
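
A hedged sketch of the proposed exchange, using plain sockets for illustration 
(readClusterId/writeClusterId are hypothetical names, not HBase IPC code):

{code:java}
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;
import java.util.UUID;

public class ClusterIdHandshakeSketch {
  /** Client side: read the 16-byte cluster UUID the server pushes on connect. */
  static UUID readClusterId(Socket socket) throws Exception {
    DataInputStream in = new DataInputStream(socket.getInputStream());
    long msb = in.readLong(); // most significant 8 bytes
    long lsb = in.readLong(); // least significant 8 bytes
    // The client can now select the SASL provider / DIGEST token up front.
    return new UUID(msb, lsb);
  }

  /** Server side: advertise the cluster ID as the first bytes on the wire. */
  static void writeClusterId(Socket socket, UUID clusterId) throws Exception {
    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    out.writeLong(clusterId.getMostSignificantBits());
    out.writeLong(clusterId.getLeastSignificantBits());
    out.flush();
    // The client then proceeds with the normal connection preamble.
  }
}
{code}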


> DIGEST based auth broken for MasterRegistry
> ---
>
> Key: HBASE-25051
> URL: https://issues.apache.org/jira/browse/HBASE-25051
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, security
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Minor
>
> DIGEST-MD5 based sasl auth depends on the cluster ID to obtain tokens. With 
> master registry, we have a circular dependency here because the master registry 
> needs an rpcClient to talk to masters (and to get the cluster ID), and the 
> rpcClient needs a clusterId if DIGEST based auth is configured. Earlier, there was a ZK 
> client that has its own authentication mechanism to fetch the cluster ID.
> HBASE-23330, I think, doesn't fully fix the problem. It depends on an active 
> connection to fetch delegation tokens for the MR job, and that inherently 
> assumes that the active connection does not use DIGEST auth.
> It is not clear to me how common it is to use DIGEST based auth in 
> connections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bharathv commented on a change in pull request #3566: HBASE-26172 Deprecated MasterRegistry and allow getBootstrapNodes to …

2021-08-13 Thread GitBox


bharathv commented on a change in pull request #3566:
URL: https://github.com/apache/hbase/pull/3566#discussion_r688816428



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
##
@@ -308,18 +310,35 @@
    */
   private static final long DEFAULT_REGION_SERVER_RPC_MINIMUM_SCAN_TIME_LIMIT_DELTA = 10;
 
-  /*
+  /**
    * Whether to reject rows with size > threshold defined by
    * {@link RSRpcServices#BATCH_ROWS_THRESHOLD_NAME}
    */
   private static final String REJECT_BATCH_ROWS_OVER_THRESHOLD =
     "hbase.rpc.rows.size.threshold.reject";
 
-  /*
+  /**
    * Default value of config {@link RSRpcServices#REJECT_BATCH_ROWS_OVER_THRESHOLD}
    */
   private static final boolean DEFAULT_REJECT_BATCH_ROWS_OVER_THRESHOLD = false;
 
+  /**
+   * Determine the bootstrap nodes we want to return to the client connection registry.
+   * 
+   * {@link #MASTER}: return masters as bootstrap nodes.

Review comment:
   What I'm saying in my above comment is "deprecate MasterRegistry and 
rename RpcConnectionRegistry" (similar to option 2) and _everything still works 
as expected for current MasterRegistry users_. Details in 
https://github.com/apache/hbase/pull/3566#discussion_r687025151
   
If we go with option 2, can you explain to me what does not work for current 
users of MasterRegistry? If you explain how it is broken, I'm +1 on option (1). 
(See my last comment as to why I think it is not broken; maybe I misunderstood 
something.) https://github.com/apache/hbase/pull/3566#discussion_r687025151

The only broken case I can think of is an old client binary not being compatible 
with a new server binary, but that is not a problem we are worried about, is it? 
Is there any other case?
   Thanks.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3586: HBASE-26197 Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3586:
URL: https://github.com/apache/hbase/pull/3586#issuecomment-898754114


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 39s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 43s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 16s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 28s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  3s |  hbase-common in the patch passed.  |
   |  |   |  32m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3586 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux f02680d64caf 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 11222fc4df |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/1/testReport/ |
   | Max. process+thread count | 301 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3586: HBASE-26197 Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3586:
URL: https://github.com/apache/hbase/pull/3586#issuecomment-898753533


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m  5s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 10s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 12s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 48s |  hbase-common in the patch passed.  |
   |  |   |  30m 37s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3586 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e7c4caff3dd9 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 11222fc4df |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/1/testReport/ |
   | Max. process+thread count | 337 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3586/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3585: HBASE-26197 Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3585:
URL: https://github.com/apache/hbase/pull/3585#issuecomment-898753496


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 59s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 49s |  hbase-common generated 5 new + 155 unchanged - 0 fixed = 160 total (was 155)  |
   | -0 :warning: |  checkstyle  |   0m 25s |  hbase-common: The patch generated 2 new + 25 unchanged - 1 fixed = 27 total (was 26)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  18m 27s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate ASF License warnings.  |
   |  |   |  39m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3585/1/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3585 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 9bc048a1adaa 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 11222fc4df |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3585/1/artifact/yetus-general-check/output/diff-compile-javac-hbase-common.txt |
   | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3585/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-common.txt |
   | Max. process+thread count | 95 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3585/1/console |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3585: HBASE-26197 Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3585:
URL: https://github.com/apache/hbase/pull/3585#issuecomment-898750699






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3580: HBASE-26089 Support RegionCoprocessor on CompactionServer

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3580:
URL: https://github.com/apache/hbase/pull/3580#issuecomment-898746086


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 28s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  HBASE-25714 passed  |
   | +1 :green_heart: |  shadedjars  |   7m 44s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 48s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 141m 21s |  hbase-server in the patch passed.  |
   |  |   | 172m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3580/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3580 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 13e8a60d71eb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3580/1/testReport/ |
   | Max. process+thread count | 4047 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3580/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei reassigned HBASE-26197:


Assignee: chenglei

> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
> {{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 below 
> illustrate. The logic is only right when the src {{ByteBuff}} is also a 
> {{MultiByteBuff}} and every {{ByteBuffer}} in {{src.items}} has exactly the 
> same byte size as every {{ByteBuffer}} in {{dest.items}}; but looking at the 
> usage of this method in the hbase project, that assumption is obviously not 
> right.
> {code:java}
> 746 public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
> 747   checkRefCount();
> 748   int destItemIndex = getItemIndex(offset);
> 749   int srcItemIndex = getItemIndex(srcOffset);
> 750   ByteBuffer destItem = this.items[destItemIndex];
> 751   offset = offset - this.itemBeginPos[destItemIndex];
> 752
> 753   ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
> 754   srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
> ...
> {code}
>
> * If src is a {{SingleByteBuff}} and its remaining space is smaller than 
> length, then once that space is exhausted this {{MultiByteBuff.put}} method 
> does not throw any exception and continues to put the src {{ByteBuff}} once 
> again from position 0, because the following {{MultiByteBuff.getItemByteBuffer}} 
> ignores the index parameter for a {{SingleByteBuff}}. Obviously, this 
> behavior is strange and unexpected.
>   {code:java}
>   private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
>     return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
>       : ((MultiByteBuff) buf).items[index];
>   }
>   {code}
> Why do the tests seem OK despite so many bugs? Because in normal cases we 
> just use {{SingleByteBuff}}, not {{MultiByteBuff}}.
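
A simplified illustration (plain arrays, not HBase code) of the first bug: 
looking up a source offset in the destination buffer's offset table only 
yields the right segment when both buffers happen to be segmented identically:

{code:java}
public class IndexMixUpDemo {
  // Begin offsets of each segment: dest is split 4+4 bytes, src is split 2+6.
  static final int[] DEST_BEGIN_POS = {0, 4};
  static final int[] SRC_BEGIN_POS = {0, 2};

  public static void main(String[] args) {
    int srcOffset = 3; // byte 3 of src lives in src segment 1 (begins at 2)
    // Buggy: index srcOffset against the DEST table, as the quoted code does.
    int buggyItem = findItem(DEST_BEGIN_POS, srcOffset);   // -> segment 0
    // Correct: index srcOffset against the SRC table.
    int correctItem = findItem(SRC_BEGIN_POS, srcOffset);  // -> segment 1
    System.out.println("buggy=" + buggyItem + " correct=" + correctItem);
  }

  // Last segment whose begin offset is <= the requested offset.
  static int findItem(int[] beginPos, int offset) {
    for (int i = beginPos.length - 1; i >= 0; i--) {
      if (offset >= beginPos[i]) {
        return i;
      }
    }
    throw new IllegalArgumentException("offset out of range: " + offset);
  }
}
{code}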



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Status: Patch Available  (was: Open)

> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.5, 3.0.0-alpha-1
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It seems to mix up {{items}} in the {{src}} {{MultiByteBuff}} and {{items}} in the 
> {{dest}} {{MultiByteBuff}}, just as line 749 and line 754 illustrate. The 
> logic is only correct when the src {{ByteBuff}} is also a {{MultiByteBuff}} and 
> every {{ByteBuffer}} in {{src.items}} has exactly the same byte size as every 
> {{ByteBuffer}} in {{dest.items}}; but looking at the usage of this 
> method in the hbase project, that assumption obviously does not hold.
> {code:java}
> 746  public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
> 747    checkRefCount();
> 748    int destItemIndex = getItemIndex(offset);
> 749    int srcItemIndex = getItemIndex(srcOffset);
> 750    ByteBuffer destItem = this.items[destItemIndex];
> 751    offset = offset - this.itemBeginPos[destItemIndex];
> 752
> 753    ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
> 754    srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
> ...
> {code}
>
> * If src is a {{SingleByteBuff}} and its remaining space is smaller than 
> length, then once the remaining space is exhausted this {{MultiByteBuff.put}} method 
> does not throw any exception and continues to put the src {{ByteBuff}} once again 
> from position 0, because the following {{MultiByteBuff.getItemByteBuffer}} ignores 
> the index parameter for {{SingleByteBuff}}. Obviously, this behavior is very 
> strange and unexpected.
>   {code:java}
>   private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
>     return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
>       : ((MultiByteBuff) buf).items[index];
>   }
>{code} 
> Why do the tests pass with so many bugs? Because in normal cases we just 
> use {{SingleByteBuff}}, not {{MultiByteBuff}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] comnetwork opened a new pull request #3586: HBASE-26197 Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread GitBox


comnetwork opened a new pull request #3586:
URL: https://github.com/apache/hbase/pull/3586


   HBASE-26197 Fix some obvious bugs in MultiByteBuff.put


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] comnetwork closed pull request #3585: HBASE-26197 Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread GitBox


comnetwork closed pull request #3585:
URL: https://github.com/apache/hbase/pull/3585


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3580: HBASE-26089 Support RegionCoprocessor on CompactionServer

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3580:
URL: https://github.com/apache/hbase/pull/3580#issuecomment-898695030


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 56s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   3m 16s |  HBASE-25714 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  HBASE-25714 passed  |
   | +1 :green_heart: |  spotbugs  |   2m  9s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 13s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 13s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m 10s |  hbase-server: The patch 
generated 14 new + 268 unchanged - 4 fixed = 282 total (was 272)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  18m  7s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  47m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3580/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3580 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux a07bcf0bff28 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3580/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3580/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#issuecomment-898657267


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m  5s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 42s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  HBASE-25714 passed  |
   | +1 :green_heart: |  shadedjars  |   9m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 29s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 216m 31s |  hbase-server in the patch passed.  
|
   |  |   | 250m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3468 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 72893a73cfb8 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/3/testReport/
 |
   | Max. process+thread count | 2956 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/3/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#issuecomment-898652616


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  1s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 13s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  HBASE-25714 passed  |
   | +1 :green_heart: |  shadedjars  |   8m 30s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 19s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 32s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 207m  6s |  hbase-server in the patch passed.  
|
   |  |   | 241m  1s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3468 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 9d58794e3e40 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/3/testReport/
 |
   | Max. process+thread count | 3158 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/3/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26026) HBase Write may be stuck forever when using CompactingMemStore

2021-08-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398825#comment-17398825
 ] 

Hudson commented on HBASE-26026:


Results for branch branch-2.3
[build #274 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/274/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/274/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/274/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/274/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/274/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HBase Write may be stuck forever when using CompactingMemStore
> --
>
> Key: HBASE-26026
> URL: https://issues.apache.org/jira/browse/HBASE-26026
> Project: HBase
>  Issue Type: Bug
>  Components: in-memory-compaction
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.6, 2.3.7
>
>
> Sometimes I observed that HBase writes might be stuck in my hbase cluster 
> which enables {{CompactingMemStore}}. I have simulated the problem by a unit 
> test in my PR. 
> The problem is caused by {{CompactingMemStore.checkAndAddToActiveSize}} : 
> {code:java}
> 425  private boolean checkAndAddToActiveSize(MutableSegment currActive, Cell cellToAdd,
> 426      MemStoreSizing memstoreSizing) {
> 427    if (shouldFlushInMemory(currActive, cellToAdd, memstoreSizing)) {
> 428      if (currActive.setInMemoryFlushed()) {
> 429        flushInMemory(currActive);
> 430        if (setInMemoryCompactionFlag()) {
> 431          // The thread is dispatched to do in-memory compaction in the background
>   ..
>  }
> {code}
> In line 427, {{shouldFlushInMemory}} checks whether {{currActive.getDataSize}} 
> plus the size of {{cellToAdd}} exceeds 
> {{CompactingMemStore.inmemoryFlushSize}}; if true, then {{currActive}} 
> should be flushed, and {{currActive.setInMemoryFlushed()}} is invoked in line 
> 428:
> {code:java}
>   public boolean setInMemoryFlushed() {
>     return flushed.compareAndSet(false, true);
>   }
> {code}
> After successfully setting {{currActive.flushed}} to true, line 429 above 
> ({{flushInMemory(currActive)}}) invokes 
> {{CompactingMemStore.pushActiveToPipeline}}:
> {code:java}
>   protected void pushActiveToPipeline(MutableSegment currActive) {
>     if (!currActive.isEmpty()) {
>       pipeline.pushHead(currActive);
>       resetActive();
>     }
>   }
> {code}
> In the {{CompactingMemStore.pushActiveToPipeline}} method above, if 
> {{currActive.cellSet}} is empty, then nothing is done. Due to concurrent 
> writes, and because we first add the cell size to {{currActive.getDataSize}} and 
> only then actually add the cell to {{currActive.cellSet}}, it is possible that 
> {{currActive.getDataSize}} can no longer accommodate {{cellToAdd}} while 
> {{currActive.cellSet}} is still empty, because pending writes have not yet 
> added their cells to {{currActive.cellSet}}.
> So if {{currActive.cellSet}} is empty at this point, then no new {{ActiveSegment}} 
> is created and new writes still target {{currActive}}; but {{currActive.flushed}} 
> is true, so {{currActive}} can never enter 
> {{flushInMemory(currActive)}} again, and a new {{ActiveSegment}} can never be 
> created. In the end all writes are stuck.
> In my opinion, once {{currActive.flushed}} is set to true, it should no longer 
> be used as the {{ActiveSegment}}; and because of concurrent pending writes, 
> only after {{currActive.updatesLock.writeLock()}} is acquired (i.e. 
> {{currActive.waitForUpdates}} is called) in 
> {{CompactingMemStore.inMemoryCompaction}} can we safely say whether {{currActive}} 
> is empty or not.
> My fix is to remove the {{if (!currActive.isEmpty())}} check here and leave the 
> check to the background {{InMemoryCompactionRunnable}} after 
> {{currActive.waitForUpdates}} is called. An alternative fix is to use a 
> synchronization mechanism in 
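
As a hedged, self-contained illustration of the interleaving described above
(illustrative names; the real code paths are the HBase methods quoted earlier),
the following sketch shows how accounting the size before the cell lands in the
cell set lets a flusher observe an over-threshold but still empty segment:

{code:java}
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

public class SegmentRaceDemo {
  static final long FLUSH_SIZE = 8;

  static final AtomicLong dataSize = new AtomicLong();           // like currActive.getDataSize
  static final ConcurrentSkipListSet<String> cellSet = new ConcurrentSkipListSet<>();
  static final AtomicBoolean flushed = new AtomicBoolean(false); // like currActive.flushed

  public static void main(String[] args) {
    // Writer thread: the size is accounted BEFORE the cell lands in the set.
    dataSize.addAndGet(16);          // step 1: already over the flush threshold
    // (the write pauses here -- cellSet.add has not happened yet)

    // Flusher thread interleaves now:
    if (dataSize.get() > FLUSH_SIZE && flushed.compareAndSet(false, true)) {
      if (!cellSet.isEmpty()) {      // the isEmpty() guard from pushActiveToPipeline
        // push to pipeline and reset the active segment -- never reached here
      }
      // Nothing was pushed and nothing was reset: flushed stays true, the
      // segment is never swapped, and no later write can flush it -- stuck.
    }

    cellSet.add("cell");             // step 2: the pending write finally lands
    System.out.println("flushed=" + flushed.get() + " cells=" + cellSet.size());
  }
}
{code}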

[jira] [Commented] (HBASE-26155) JVM crash when scan

2021-08-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398824#comment-17398824
 ] 

Hudson commented on HBASE-26155:


Results for branch branch-2.3
[build #274 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/274/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/274/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/274/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/274/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/274/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> JVM crash when scan
> ---
>
> Key: HBASE-26155
> URL: https://issues.apache.org/jira/browse/HBASE-26155
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 3.0.0-alpha-1
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.6, 2.3.7
>
> Attachments: scan-error.png
>
>
> There are regionserver JVM coredump problems caused by scanner close on our 
> production clusters.
> {code:java}
> Stack: [0x7fca4b0cc000,0x7fca4b1cd000],  sp=0x7fca4b1cb0d8,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0x7fd314]
> J 2810  sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V 
> (0 bytes) @ 0x7fdae55a9e61 [0x7fdae55a9d80+0xe1]
> j  
> org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V+36
> j  
> org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V+69
> j  
> org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V+39
> j  
> org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I+31
> j  
> org.apache.hadoop.hbase.KeyValueUtil.appendKeyTo(Lorg/apache/hadoop/hbase/Cell;[BI)I+43
> J 14724 C2 org.apache.hadoop.hbase.regionserver.StoreScanner.shipped()V (51 
> bytes) @ 0x7fdae6a298d0 [0x7fdae6a29780+0x150]
> J 21387 C2 
> org.apache.hadoop.hbase.regionserver.RSRpcServices$RegionScannerShippedCallBack.run()V
>  (53 bytes) @ 0x7fdae622bab8 [0x7fdae622acc0+0xdf8]
> J 26353 C2 
> org.apache.hadoop.hbase.ipc.ServerCall.setResponse(Lorg/apache/hbase/thirdparty/com/google/protobuf/Message;Lorg/apache/hadoop/hbase/CellScanner;Ljava/lang/Throwable;Ljava/lang/String;)V
>  (384 bytes) @ 0x7fdae7f139d8 [0x7fdae7f12980+0x1058]
> J 26226 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1554 bytes) @ 
> 0x7fdae959f68c [0x7fdae959e400+0x128c]
> J 19598% C2 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(Ljava/util/concurrent/BlockingQueue;Ljava/util/concurrent/atomic/AtomicInteger;)V
>  (338 bytes) @ 0x7fdae81c54d4 [0x7fdae81c53e0+0xf4]
> {code}
> There are also scan RPC errors when the coredump happens at the handler,
> !scan-error.png|width=585,height=235!
> I found some clues in the logs: some blocks may be replaced when their 
> nextBlockOnDiskSize is less than that of the newly read block, in the method 
>  
> {code:java}
> public static boolean shouldReplaceExistingCacheBlock(BlockCache blockCache,
> BlockCacheKey cacheKey, Cacheable newBlock) {
>   if (cacheKey.toString().indexOf(".") != -1) { // reference file
> LOG.warn("replace existing cached block, cache key is : " + cacheKey);
> return true;
>   }
>   Cacheable existingBlock = blockCache.getBlock(cacheKey, false, false, 
> false);
>   if (existingBlock == null) {
> return true;
>   }
>   try {
> int comparison = BlockCacheUtil.validateBlockAddition(existingBlock, 
> newBlock, cacheKey);
> if (comparison < 0) {
>   LOG.warn("Cached block contents differ by nextBlockOnDiskSize, the new 
> block has "
>   + "nextBlockOnDiskSize set. Caching new block.");
>   return true;
> ..{code}
>  
> And the block will be replaced if it is not in the RAMCache but in the 
> BucketCache.
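
As a rough, self-contained sketch of the hazard described above (illustrative
names, not the actual HBase cache code): if a cached block can be replaced and
its backing memory released while a scanner still holds a reference, the
scanner's copy reads freed memory, which matches the coredump in
{{StoreScanner.shipped}}. Reference counting along these lines is one way to
make replacement safe:

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: a block whose backing memory may be released
// only once nobody (cache or scanner) still references it.
public class RefCountedBlock {
  private final byte[] payload;
  private final AtomicInteger refCnt = new AtomicInteger(1); // the cache's reference

  RefCountedBlock(byte[] payload) { this.payload = payload; }

  byte[] retain() {                 // a scanner takes a reference before reading
    refCnt.incrementAndGet();
    return payload;
  }

  void release() {                  // cache replacement or scanner shipped()
    if (refCnt.decrementAndGet() == 0) {
      // only now is it safe to free the backing (off-heap) memory;
      // freeing while refCnt > 1 is exactly the crash described above
    }
  }

  public static void main(String[] args) {
    RefCountedBlock block = new RefCountedBlock(new byte[] { 1, 2, 3 });
    byte[] data = block.retain();   // scanner in flight
    block.release();                // cache wants to replace the block...
    System.out.println(data[0]);    // ...but the scanner still reads safely
    block.release();                // scanner done; count hits 0, memory freed
  }
}
{code}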
> When using 
>  
> {code:java}
> private void 

[GitHub] [hbase] Apache-HBase commented on pull request #3583: HBASE-26193 Do not store meta region location as permanent state on z…

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3583:
URL: https://github.com/apache/hbase/pull/3583#issuecomment-898585354


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 23s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 21s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 14s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  2s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 19s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 19s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 13s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  53m 23s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3583 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 169476c664c1 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 11222fc4df |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#issuecomment-898544947


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  0s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   3m 17s |  HBASE-25714 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  HBASE-25714 passed  |
   | +1 :green_heart: |  spotbugs  |   2m  6s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 12s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 12s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 57s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  47m 33s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3468 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 5a3677debf17 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/3/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26195) Data is present in replicated cluster but not present in primary cluster.

2021-08-13 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398739#comment-17398739
 ] 

Andrew Kyle Purtell commented on HBASE-26195:
-

[~vjasani]  ^^^

> Data is present in replicated cluster but not present in primary cluster.
> -
>
> Key: HBASE-26195
> URL: https://issues.apache.org/jira/browse/HBASE-26195
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 3.0.0-alpha-1, 1.7.0, 2.5.0
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
>
> We encountered a case where we are seeing some rows (via Phoenix) in the 
> replicated cluster that are not present in the source/active cluster.
> Triaging further, we found memstore rollback logs in a few of the region servers.
> {noformat}
> 2021-07-28 14:17:59,353 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,353 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,355 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,355 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,356 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> {noformat}
> Looking further into the logs, we found that there were some HDFS-layer issues 
> syncing the WAL to HDFS.
> It was taking around 6 minutes to sync the WAL. Logs below:
> {noformat}
> 2021-07-28 14:19:30,511 WARN  [sync.0] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391210ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,589 WARN  [sync.1] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391148ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,589 WARN  [sync.2] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391147ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,591 INFO  [sync.0] wal.FSHLog - Slow sync cost: 391289 
> ms, current pipeline: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]]
> 2021-07-28 14:19:30,591 INFO  [sync.1] wal.FSHLog - Slow sync cost: 391227 
> ms, current pipeline: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]]
> 2021-07-28 14:19:30,591 WARN  [sync.1] wal.FSHLog - Requesting log roll 
> because we exceeded slow sync threshold; time=391227 ms, threshold=1 ms, 
> current pipeline: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> 

[jira] [Commented] (HBASE-26195) Data is present in replicated cluster but not present in primary cluster.

2021-08-13 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398737#comment-17398737
 ] 

Andrew Kyle Purtell commented on HBASE-26195:
-

We have known suboptimal HDFS-level timeout settings where HDFS clients will 
wait longer than our WAL sync timeout … 

Anyway, it is true that the FS WAL in particular has the risk that timeouts must 
lead to aborts for safety, but I think the async WAL, and therefore HBase 2 and up, 
has better handling of lost writes: it can roll the writer without needing to 
reach a safe point and redo the write into the new log. So your analysis needs 
to include the capabilities of modern versions of HBase too, not just 1.x.  

> Data is present in replicated cluster but not present in primary cluster.
> -
>
> Key: HBASE-26195
> URL: https://issues.apache.org/jira/browse/HBASE-26195
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 3.0.0-alpha-1, 1.7.0, 2.5.0
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
>
> We encountered a case where we are seeing some rows (via Phoenix) in the 
> replicated cluster that are not present in the source/active cluster.
> Triaging further, we found memstore rollback logs in a few of the region servers.
> {noformat}
> 2021-07-28 14:17:59,353 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,353 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,355 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,355 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,356 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> {noformat}
> Looking further into the logs, we found that there were some HDFS-layer issues 
> syncing the WAL to HDFS.
> It was taking around 6 minutes to sync the WAL. Logs below:
> {noformat}
> 2021-07-28 14:19:30,511 WARN  [sync.0] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391210ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,589 WARN  [sync.1] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391148ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,589 WARN  [sync.2] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391147ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,591 INFO  [sync.0] wal.FSHLog - Slow sync cost: 391289 
> ms, current pipeline: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]]
> 2021-07-28 14:19:30,591 INFO  [sync.1] wal.FSHLog - Slow sync cost: 391227 
> ms, current pipeline: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> 

[GitHub] [hbase] Apache-HBase commented on pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#issuecomment-898505046


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 37s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 31s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  HBASE-25714 passed  |
   | +1 :green_heart: |  shadedjars  |   8m 35s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 37s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 250m 50s |  hbase-server in the patch failed.  |
   |  |   | 283m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3468 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux f8c7971e4f36 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/2/testReport/
 |
   | Max. process+thread count | 3115 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#issuecomment-898499009


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 43s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 17s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  HBASE-25714 passed  |
   | +1 :green_heart: |  shadedjars  |   8m 43s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 47s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 238m 41s |  hbase-server in the patch failed.  |
   |  |   | 274m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3468 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ab56fefb5603 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/2/testReport/
 |
   | Max. process+thread count | 3148 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3583: HBASE-26193 Do not store meta region location as permanent state on z…

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3583:
URL: https://github.com/apache/hbase/pull/3583#issuecomment-898486213


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  2s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 18s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m  2s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 44s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 34s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 213m 33s |  hbase-server in the patch failed.  |
   |  |   | 251m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3583 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 2afa3f888629 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 11222fc4df |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/1/testReport/
 |
   | Max. process+thread count | 2842 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3584: HBASE-26194 Introduce a ReplicationServerSourceManager to simplify HR…

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3584:
URL: https://github.com/apache/hbase/pull/3584#issuecomment-898485663


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-24666 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 42s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 46s |  HBASE-24666 passed  |
   | +1 :green_heart: |  compile  |   2m 11s |  HBASE-24666 passed  |
   | +1 :green_heart: |  shadedjars  |   8m 15s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  HBASE-24666 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 12s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 12s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 13s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 39s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 15s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 27s |  hbase-replication in the patch 
passed.  |
   | -1 :x: |  unit  | 150m 23s |  hbase-server in the patch failed.  |
   |  |   | 189m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3584/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3584 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux bbdd90584aab 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-24666 / fd2f3d1abd |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3584/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3584/1/testReport/
 |
   | Max. process+thread count | 4796 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-replication hbase-server U: . 
|
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3584/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] nyl3532016 commented on a change in pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


nyl3532016 commented on a change in pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#discussion_r688509255



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
##
@@ -344,14 +344,27 @@ private StoreContext 
initializeStoreContext(ColumnFamilyDescriptor family) throw
   }
 
   private InetSocketAddress[] getFavoredNodes() {
-InetSocketAddress[] favoredNodes = null;
 if (region.getRegionServerServices() != null) {
-  favoredNodes = region.getRegionServerServices().getFavoredNodesForRegion(
-  region.getRegionInfo().getEncodedName());
+  return region.getRegionServerServices()
+  .getFavoredNodesForRegion(region.getRegionInfo().getEncodedName());
 }
 return favoredNodes;
   }
 
+  // Favored nodes used by compaction offload
+  private InetSocketAddress[] favoredNodes = null;

Review comment:
   It must be public because the caller is not in the same package. Let me change the 
name to `assignFavoredNodesForCompactionOffload` and add an annotation.
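
   A rough sketch of the shape being discussed; the class, member, and accessor 
names are assumptions for illustration, not the final patch:

{code:java}
import java.net.InetSocketAddress;

// Hypothetical sketch only -- not the actual HStore change.
public class HStoreSketch {
  // Favored nodes handed over by the compaction-offload path.
  private InetSocketAddress[] favoredNodes;

  // Public because the compaction-offload caller lives in a different
  // package; in the real patch this would carry an audience annotation
  // (e.g. @InterfaceAudience.Private) marking it as internal API.
  public void assignFavoredNodesForCompactionOffload(InetSocketAddress[] favoredNodes) {
    this.favoredNodes = favoredNodes;
  }
}
{code}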




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] nyl3532016 commented on a change in pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


nyl3532016 commented on a change in pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#discussion_r688509255



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
##
@@ -344,14 +344,27 @@ private StoreContext 
initializeStoreContext(ColumnFamilyDescriptor family) throw
   }
 
   private InetSocketAddress[] getFavoredNodes() {
-InetSocketAddress[] favoredNodes = null;
 if (region.getRegionServerServices() != null) {
-  favoredNodes = region.getRegionServerServices().getFavoredNodesForRegion(
-  region.getRegionInfo().getEncodedName());
+  return region.getRegionServerServices()
+  .getFavoredNodesForRegion(region.getRegionInfo().getEncodedName());
 }
 return favoredNodes;
   }
 
+  // Favored nodes used by compaction offload
+  private InetSocketAddress[] favoredNodes = null;

Review comment:
   It must be public because the caller is not in the same package. Let me change the 
name to `assignFavoredNodesForCompactionOffload` and add an annotation.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] nyl3532016 commented on a change in pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


nyl3532016 commented on a change in pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#discussion_r688509255



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
##
@@ -344,14 +344,27 @@ private StoreContext 
initializeStoreContext(ColumnFamilyDescriptor family) throw
   }
 
   private InetSocketAddress[] getFavoredNodes() {
-InetSocketAddress[] favoredNodes = null;
 if (region.getRegionServerServices() != null) {
-  favoredNodes = region.getRegionServerServices().getFavoredNodesForRegion(
-  region.getRegionInfo().getEncodedName());
+  return region.getRegionServerServices()
+  .getFavoredNodesForRegion(region.getRegionInfo().getEncodedName());
 }
 return favoredNodes;
   }
 
+  // Favored nodes used by compaction offload
+  private InetSocketAddress[] favoredNodes = null;

Review comment:
   It must be public because the caller is not in the same package. Let me change the 
name to `assignFavoredNodesForCompactionOffload` and add an annotation.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache9 commented on a change in pull request #3575: HBASE-26178 Improve data structure and algorithm for BalanceClusterSt…

2021-08-13 Thread GitBox


Apache9 commented on a change in pull request #3575:
URL: https://github.com/apache/hbase/pull/3575#discussion_r688499232



##
File path: 
hbase-balancer/src/main/java/org/apache/hadoop/hbase/master/balancer/BalancerClusterState.java
##
@@ -207,15 +212,20 @@ public String getRack(ServerName server) {
 
 serverIndexToHostIndex = new int[numServers];
 serverIndexToRackIndex = new int[numServers];
-regionsPerServer = new int[numServers][];
-serverIndexToRegionsOffset = new int[numServers];
-regionsPerHost = new int[numHosts][];
-regionsPerRack = new int[numRacks][];
-primariesOfRegionsPerServer = new int[numServers][];
-primariesOfRegionsPerHost = new int[numHosts][];
-primariesOfRegionsPerRack = new int[numRacks][];
+regionsPerServer = new ArrayList<>(numServers);
+regionsPerHost = new ArrayList<>(numHosts);
+regionsPerRack = new ArrayList<>(numRacks);
+primariesOfRegionsPerServer = new ArrayList<>(numServers);
+primariesOfRegionsPerHost = new ArrayList<>(numHosts);

Review comment:
   And do we really need a HashMap here? The key is just an index?

##
File path: 
hbase-balancer/src/main/java/org/apache/hadoop/hbase/master/balancer/BalancerClusterState.java
##
@@ -207,15 +212,20 @@ public String getRack(ServerName server) {
 
 serverIndexToHostIndex = new int[numServers];
 serverIndexToRackIndex = new int[numServers];
-regionsPerServer = new int[numServers][];
-serverIndexToRegionsOffset = new int[numServers];
-regionsPerHost = new int[numHosts][];
-regionsPerRack = new int[numRacks][];
-primariesOfRegionsPerServer = new int[numServers][];
-primariesOfRegionsPerHost = new int[numHosts][];
-primariesOfRegionsPerRack = new int[numRacks][];
+regionsPerServer = new ArrayList<>(numServers);
+regionsPerHost = new ArrayList<>(numHosts);
+regionsPerRack = new ArrayList<>(numRacks);
+primariesOfRegionsPerServer = new ArrayList<>(numServers);
+primariesOfRegionsPerHost = new ArrayList<>(numHosts);

Review comment:
   This could just be new HashMap[numHosts].

##
File path: 
hbase-balancer/src/main/java/org/apache/hadoop/hbase/master/balancer/BalancerClusterState.java
##
@@ -207,15 +212,20 @@ public String getRack(ServerName server) {
 
 serverIndexToHostIndex = new int[numServers];
 serverIndexToRackIndex = new int[numServers];
-regionsPerServer = new int[numServers][];
-serverIndexToRegionsOffset = new int[numServers];
-regionsPerHost = new int[numHosts][];
-regionsPerRack = new int[numRacks][];
-primariesOfRegionsPerServer = new int[numServers][];
-primariesOfRegionsPerHost = new int[numHosts][];
-primariesOfRegionsPerRack = new int[numRacks][];
+regionsPerServer = new ArrayList<>(numServers);

Review comment:
   I think we could implement a simple IntArrayList; this could save a lot 
of memory. Here we could write code like
   
   regionsPerServer = new IntArrayList[numServers];
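   
   A minimal sketch of such a primitive-int list (illustrative only, not 
existing HBase code; names are hypothetical):
   
   import java.util.Arrays;
   
   /** Growable list of primitive ints; avoids Integer boxing overhead. */
   public final class IntArrayList {
     private int[] data;
     private int size;
   
     public IntArrayList(int capacity) {
       data = new int[Math.max(capacity, 4)];
     }
   
     /** Appends a value, growing the backing array geometrically. */
     public void add(int value) {
       if (size == data.length) {
         data = Arrays.copyOf(data, data.length * 2);
       }
       data[size++] = value;
     }
   
     public int get(int index) {
       if (index >= size) {
         throw new IndexOutOfBoundsException(index + " >= " + size);
       }
       return data[index];
     }
   
     public int size() {
       return size;
     }
   }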




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3583: HBASE-26193 Do not store meta region location as permanent state on z…

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3583:
URL: https://github.com/apache/hbase/pull/3583#issuecomment-898443313


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 47s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 55s |  master passed  |
   | +1 :green_heart: |  shadedjars  |  10m 20s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  1s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  1s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m  9s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 140m 58s |  hbase-server in the patch failed.  |
   |  |   | 183m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3583 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 188b8099aacc 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 11222fc4df |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/1/testReport/
 |
   | Max. process+thread count | 3834 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3583/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache9 commented on a change in pull request #3572: HBASE-26184 TestTableSnapshotScanner.testMergeRegion error message is…

2021-08-13 Thread GitBox


Apache9 commented on a change in pull request #3572:
URL: https://github.com/apache/hbase/pull/3572#discussion_r688490034



##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
##
@@ -460,7 +460,7 @@ public void testMergeRegion() throws Exception {
   }
 } catch (Exception e) {
   LOG.error("scan snapshot error", e);
-  Assert.fail("Should not throw FileNotFoundException");
+  Assert.fail("Should not throw Exception: " + e.getMessage());

Review comment:
   We have a fail call here, so aren't the two asserts below useless?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26026) HBase Write may be stuck forever when using CompactingMemStore

2021-08-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398634#comment-17398634
 ] 

Hudson commented on HBASE-26026:


Results for branch master
[build #366 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HBase Write may be stuck forever when using CompactingMemStore
> --
>
> Key: HBASE-26026
> URL: https://issues.apache.org/jira/browse/HBASE-26026
> Project: HBase
>  Issue Type: Bug
>  Components: in-memory-compaction
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.6, 2.3.7
>
>
> Sometimes I have observed that HBase writes may get stuck in my HBase 
> cluster, which enables {{CompactingMemStore}}. I have simulated the problem 
> with a unit test in my PR.
> The problem is caused by {{CompactingMemStore.checkAndAddToActiveSize}}:
> {code:java}
> 425   private boolean checkAndAddToActiveSize(MutableSegment currActive, Cell cellToAdd,
> 426       MemStoreSizing memstoreSizing) {
> 427     if (shouldFlushInMemory(currActive, cellToAdd, memstoreSizing)) {
> 428       if (currActive.setInMemoryFlushed()) {
> 429         flushInMemory(currActive);
> 430         if (setInMemoryCompactionFlag()) {
> 431           // The thread is dispatched to do in-memory compaction in the background
>           ......
>  }
> {code}
> In line 427, {{shouldFlushInMemory}} checks whether {{currActive.getDataSize}} 
> plus the size of {{cellToAdd}} exceeds 
> {{CompactingMemStore.inmemoryFlushSize}}; if so, {{currActive}} should be 
> flushed, and {{currActive.setInMemoryFlushed()}} is invoked in line 428:
> {code:java}
> public boolean setInMemoryFlushed() {
>   return flushed.compareAndSet(false, true);
> }
> {code}
> After successfully setting {{currActive.flushed}} to true, 
> {{flushInMemory(currActive)}} in line 429 invokes 
> {{CompactingMemStore.pushActiveToPipeline}}:
> {code:java}
> protected void pushActiveToPipeline(MutableSegment currActive) {
>   if (!currActive.isEmpty()) {
>     pipeline.pushHead(currActive);
>     resetActive();
>   }
> }
> {code}
> In the above {{CompactingMemStore.pushActiveToPipeline}} method, if 
> {{currActive.cellSet}} is empty, nothing is done. Due to concurrent writes, 
> and because we first add the cell size to {{currActive.getDataSize}} and only 
> then actually add the cell to {{currActive.cellSet}}, it is possible that 
> {{currActive.getDataSize}} cannot accommodate {{cellToAdd}} while 
> {{currActive.cellSet}} is still empty, because pending writes have not yet 
> added their cells to {{currActive.cellSet}}.
> So if {{currActive.cellSet}} is empty now, no new {{ActiveSegment}} is 
> created, and new writes still target {{currActive}}; but since 
> {{currActive.flushed}} is true, {{currActive}} can never enter 
> {{flushInMemory(currActive)}} again, and a new {{ActiveSegment}} can never be 
> created! In the end all writes would be stuck.
> In my opinion, once {{currActive.flushed}} is set to true, it cannot continue 
> to be used as the {{ActiveSegment}}; and because of concurrent pending 
> writes, only after {{currActive.updatesLock.writeLock()}} is acquired (i.e. 
> {{currActive.waitForUpdates}} is called) in 
> {{CompactingMemStore.inMemoryCompaction}} can we safely say whether 
> {{currActive}} is empty or not.
> My fix is to remove the {{if (!currActive.isEmpty())}} check here and leave 
> the check to the background {{InMemoryCompactionRunnable}} after 
> {{currActive.waitForUpdates}} is called. An alternative fix is to use a 
> synchronization mechanism in {{checkAndAddToActiveSize}} to prevent all 
> writes, wait for all pending writes to complete (i.e. 
> currActive.waitForUpdates is called), and if {{currActive}} is still empty, 
> set {{currActive.flushed}} back to false; but I am not inclined to use 
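
For illustration, a minimal sketch of the race described above, using 
simplified stand-in types rather than the actual HBase classes:

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for MutableSegment: size is accounted before the cell actually
// lands in the cell set, which is exactly the window the race needs.
class Segment {
  final AtomicLong dataSize = new AtomicLong();
  final AtomicBoolean flushed = new AtomicBoolean(false);
  volatile boolean cellSetEmpty = true;

  boolean setInMemoryFlushed() {
    return flushed.compareAndSet(false, true);
  }
}

public class StuckWriteSketch {
  public static void main(String[] args) {
    Segment active = new Segment();
    long inmemoryFlushSize = 64;

    // Writer A reserves its size first...
    active.dataSize.addAndGet(100);
    // ...and before A inserts its cell, writer B sees the segment as "full"
    // and wins the flushed flag:
    if (active.dataSize.get() > inmemoryFlushSize && active.setInMemoryFlushed()) {
      // pushActiveToPipeline: the cell set is still empty, so the segment
      // is neither pushed nor replaced -- yet flushed stays true forever.
      if (active.cellSetEmpty) {
        System.out.println("empty segment kept as active with flushed=true:"
            + " no new ActiveSegment can ever be created -> writes stuck");
      }
    }
  }
}
{code}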

[jira] [Commented] (HBASE-26185) Fix TestMaster#testMoveRegionWhenNotInitialized with hbase.min.version.move.system.tables

2021-08-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398632#comment-17398632
 ] 

Hudson commented on HBASE-26185:


Results for branch master
[build #366 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Fix TestMaster#testMoveRegionWhenNotInitialized with 
> hbase.min.version.move.system.tables
> -
>
> Key: HBASE-26185
> URL: https://issues.apache.org/jira/browse/HBASE-26185
> Project: HBase
>  Issue Type: Test
>Reporter: Viraj Jasani
>Assignee: Rushabh Shah
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2, 1.7.2, 2.4.6, 2.3.7
>
>
> In order to protect against unexpected meta region movement during an 
> upgrade with rsGroup enabled, it is good practice to keep 
> hbase.min.version.move.system.tables in hbase-default for the specific 
> branch, so that the use-case for the specific version of HBase is well under 
> control. However, TestMaster#testMoveRegionWhenNotInitialized would fail 
> because it would not find a server to move meta to. We should fix this.
>  
> {code:java}
> INFO  [Time-limited test] master.HMaster(2029): Passed destination servername 
> is null/empty so choosing a server at random
> java.lang.UnsupportedOperationException
>   at java.util.AbstractList.add(AbstractList.java:148)
>   at java.util.AbstractList.add(AbstractList.java:108)
>   at org.apache.hadoop.hbase.master.HMaster.move(HMaster.java:2031)
>   at org.apache.hadoop.hbase.master.TestMaster.testMoveRegionWhenNotInitialized(TestMaster.java:181)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26155) JVM crash when scan

2021-08-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398633#comment-17398633
 ] 

Hudson commented on HBASE-26155:


Results for branch master
[build #366 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/366/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> JVM crash when scan
> ---
>
> Key: HBASE-26155
> URL: https://issues.apache.org/jira/browse/HBASE-26155
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 3.0.0-alpha-1
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.6, 2.3.7
>
> Attachments: scan-error.png
>
>
> There are regionserver JVM coredump problems caused by scanner close on our 
> production clusters.
> {code:java}
> Stack: [0x7fca4b0cc000,0x7fca4b1cd000],  sp=0x7fca4b1cb0d8,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0x7fd314]
> J 2810  sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V 
> (0 bytes) @ 0x7fdae55a9e61 [0x7fdae55a9d80+0xe1]
> j  
> org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V+36
> j  
> org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V+69
> j  
> org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V+39
> j  
> org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I+31
> j  
> org.apache.hadoop.hbase.KeyValueUtil.appendKeyTo(Lorg/apache/hadoop/hbase/Cell;[BI)I+43
> J 14724 C2 org.apache.hadoop.hbase.regionserver.StoreScanner.shipped()V (51 
> bytes) @ 0x7fdae6a298d0 [0x7fdae6a29780+0x150]
> J 21387 C2 
> org.apache.hadoop.hbase.regionserver.RSRpcServices$RegionScannerShippedCallBack.run()V
>  (53 bytes) @ 0x7fdae622bab8 [0x7fdae622acc0+0xdf8]
> J 26353 C2 
> org.apache.hadoop.hbase.ipc.ServerCall.setResponse(Lorg/apache/hbase/thirdparty/com/google/protobuf/Message;Lorg/apache/hadoop/hbase/CellScanner;Ljava/lang/Throwable;Ljava/lang/String;)V
>  (384 bytes) @ 0x7fdae7f139d8 [0x7fdae7f12980+0x1058]
> J 26226 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1554 bytes) @ 
> 0x7fdae959f68c [0x7fdae959e400+0x128c]
> J 19598% C2 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(Ljava/util/concurrent/BlockingQueue;Ljava/util/concurrent/atomic/AtomicInteger;)V
>  (338 bytes) @ 0x7fdae81c54d4 [0x7fdae81c53e0+0xf4]
> {code}
> There are also scan rpc errors when the coredump happens at the handler:
> !scan-error.png|width=585,height=235!
> I found some clues in the logs that some blocks may be replaced when their 
> nextBlockOnDiskSize is less than that of the newly cached one, in the method 
>  
> {code:java}
> public static boolean shouldReplaceExistingCacheBlock(BlockCache blockCache,
>     BlockCacheKey cacheKey, Cacheable newBlock) {
>   if (cacheKey.toString().indexOf(".") != -1) { // reference file
>     LOG.warn("replace existing cached block, cache key is : " + cacheKey);
>     return true;
>   }
>   Cacheable existingBlock = blockCache.getBlock(cacheKey, false, false, false);
>   if (existingBlock == null) {
>     return true;
>   }
>   try {
>     int comparison = BlockCacheUtil.validateBlockAddition(existingBlock,
>         newBlock, cacheKey);
>     if (comparison < 0) {
>       LOG.warn("Cached block contents differ by nextBlockOnDiskSize, the new block has "
>           + "nextBlockOnDiskSize set. Caching new block.");
>       return true;
> ..{code}
>  
> And the block will be replaced if it is not in the RAMCache but in the 
> BucketCache.
> When using 
>  
> {code:java}
> private void putIntoBackingMap(BlockCacheKey key, BucketEntry bucketEntry) {
>   BucketEntry previousEntry = backingMap.put(key, bucketEntry);
>   if (previousEntry != null && previousEntry != bucketEntry) {
>     ReentrantReadWriteLock lock = 

[jira] [Comment Edited] (HBASE-26195) Data is present in replicated cluster but not present in primary cluster.

2021-08-13 Thread Rushabh Shah (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398621#comment-17398621
 ] 

Rushabh Shah edited comment on HBASE-26195 at 8/13/21, 12:29 PM:
-

> since this is a rare condition and I don't expect it to affect availability 
> much.

I agree that approach 3 is the correct one. Just adding one more data point. 
During the time of this slow hdfs sync issue, there were 21 region servers 
which did memstore rollback. There were 295 RS in that cluster. If we go with 
the "Abort RS" route, then we would have lost around 7% of the capacity. In 
smaller clusters, it would be more impactful since those bad/slow DNs would be 
part of many WAL pipelines.
[~gjacoby] [~apurtell] 


was (Author: shahrs87):
> since this is a rare condition and I don't expect it to affect availability 
> much.

I agree that approach 3 is the correct one. Just adding one more data point. 
During the time of this slow hdfs sync issue, there were 21 region servers 
which did memstore rollback. There were 295 RS in that cluster. If we go with 
the "Abort RS" route, then we would have lost around 7% of the capacity. In 
smaller clusters, it would be more impactful since those bad DN's would be part 
of many WAL pipelines.
[~gjacoby] [~apurtell] 

> Data is present in replicated cluster but not present in primary cluster.
> -
>
> Key: HBASE-26195
> URL: https://issues.apache.org/jira/browse/HBASE-26195
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 3.0.0-alpha-1, 1.7.0, 2.5.0
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
>
> We encountered a case where we are seeing some rows (via Phoenix) in the 
> replicated cluster but they are not present in the source/active cluster.
> Triaging further, we found memstore rollback logs in a few of the region 
> servers.
> {noformat}
> 2021-07-28 14:17:59,353 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,353 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,355 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,355 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,356 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> {noformat}
> Looking more into the logs, we found that there were some hdfs layer issues 
> syncing the wal to hdfs.
> It was taking around 6 mins to sync the wal. Logs below:
> {noformat}
> 2021-07-28 14:19:30,511 WARN  [sync.0] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391210ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,589 WARN  [sync.1] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391148ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,589 WARN  [sync.2] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391147ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> 

[jira] [Commented] (HBASE-26190) High rate logging of BucketAllocatorException: Allocation too big

2021-08-13 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398624#comment-17398624
 ] 

Viraj Jasani commented on HBASE-26190:
--

+1 to logging WARN for bad allocations once per minute.

> High rate logging of BucketAllocatorException: Allocation too big 
> --
>
> Key: HBASE-26190
> URL: https://issues.apache.org/jira/browse/HBASE-26190
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, Operability
>Affects Versions: 2.4.5
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.6
>
>
> These log lines may be printed at high frequency when using a schema that 
> creates large blocks and the bucket cache is enabled. It makes sense to warn 
> about this initially, but the very high rate of warnings when the use case is 
> legit and this is expected does not. Print this once, then not again; or, 
> rate-limit this message to be printed at a more reasonable rate, like once 
> per minute. 
> {noformat}
> 2021-08-11 23:42:10,902 WARN  [main-BucketCacheWriter-0]
> bucket.BucketCache: Failed allocation for 
> 4842189251414b8a9212f6831462e415_218610843;
> org.apache.hadoop.hbase.io.hfile.bucket.BucketAllocatorException:
> Allocation too big size=1049337;
> adjust BucketCache sizes hbase.bucketcache.bucket.sizes to accomodate
> if size seems reasonable and you want it cached.
> {noformat}
> Also, it might be better to log this at INFO given the caveat "if size seems 
> reasonable and you want it cached". Reads like an informational message.
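
For illustration, rate-limiting such a warning to roughly once per minute 
could look like the following sketch (illustrative names, not the actual 
HBase implementation):

{code:java}
import java.util.concurrent.atomic.AtomicLong;

/** Sketch: allow at most one warning per interval, dropping the rest. */
final class RateLimitedWarner {
  private static final long INTERVAL_NS = 60L * 1_000_000_000L; // one minute
  // Start one interval in the past so the very first warning is logged.
  private final AtomicLong lastLogNs = new AtomicLong(System.nanoTime() - INTERVAL_NS);

  /** Returns true if the caller should log now; thread-safe and lock-free. */
  boolean shouldLog() {
    long now = System.nanoTime();
    long last = lastLogNs.get();
    return now - last >= INTERVAL_NS && lastLogNs.compareAndSet(last, now);
  }
}

// Usage at the allocation-failure site (illustrative):
//   if (warner.shouldLog()) {
//     LOG.warn("Failed allocation for {}", cacheKey, allocatorException);
//   }
{code}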



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26195) Data is present in replicated cluster but not present in primary cluster.

2021-08-13 Thread Rushabh Shah (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398621#comment-17398621
 ] 

Rushabh Shah commented on HBASE-26195:
--

> since this is a rare condition and I don't expect it to affect availability 
> much.

I agree that approach 3 is the correct one. Just adding one more data point. 
During the time of this slow hdfs sync issue, there were 21 region servers 
which did memstore rollback. There were 295 RS in that cluster. If we go with 
the "Abort RS" route, then we would have lost around 7% of the capacity. In 
> smaller clusters, it would be more impactful since those bad DNs would be part 
of many WAL pipelines.
[~gjacoby] [~apurtell] 

> Data is present in replicated cluster but not present in primary cluster.
> -
>
> Key: HBASE-26195
> URL: https://issues.apache.org/jira/browse/HBASE-26195
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 3.0.0-alpha-1, 1.7.0, 2.5.0
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
>
> We encountered a case where we are seeing some rows (via Phoenix) in the 
> replicated cluster but they are not present in the source/active cluster.
> Triaging further, we found memstore rollback logs in a few of the region 
> servers.
> {noformat}
> 2021-07-28 14:17:59,353 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,353 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,354 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,355 DEBUG [3,queue=3,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,355 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> 2021-07-28 14:17:59,356 DEBUG [,queue=25,port=60020] regionserver.HRegion - 
> rollbackMemstore rolled back 23
> {noformat}
> Looking more into the logs, we found that there were some hdfs layer issues 
> syncing the wal to hdfs.
> It was taking around 6 mins to sync the wal. Logs below:
> {noformat}
> 2021-07-28 14:19:30,511 WARN  [sync.0] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391210ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,589 WARN  [sync.1] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391148ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,589 WARN  [sync.2] hdfs.DataStreamer - Slow 
> waitForAckedSeqno took 391147ms (threshold=3ms). File being written: 
> /hbase/WALs/,60020,1626191371499/%2C60020%2C1626191371499.1627480615620,
>  block: BP-958889176--1567030695029:blk_1689647875_616028364, Write 
> pipeline datanodes: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]].
> 2021-07-28 14:19:30,591 INFO  [sync.0] wal.FSHLog - Slow sync cost: 391289 
> ms, current pipeline: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-505dabb0-0fd6-42d9-b25d-f25e249fe504,DISK],
>  
> DatanodeInfoWithStorage[:50010,DS-6c585673-d4d0-4ec6-bafe-ad4cd861fb4b,DISK]]
> 2021-07-28 14:19:30,591 INFO  [sync.1] wal.FSHLog - Slow sync cost: 391227 
> ms, current pipeline: 
> [DatanodeInfoWithStorage[:50010,DS-b5747702-8ab9-4a5e-916e-5fae6e305738,DISK],
>  
> 

[GitHub] [hbase] Apache-HBase commented on pull request #3584: HBASE-26194 Introduce a ReplicationServerSourceManager to simplify HR…

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3584:
URL: https://github.com/apache/hbase/pull/3584#issuecomment-898410627


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-24666 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 41s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  HBASE-24666 passed  |
   | +1 :green_heart: |  compile  |   5m 22s |  HBASE-24666 passed  |
   | +1 :green_heart: |  checkstyle  |   2m  9s |  HBASE-24666 passed  |
   | +1 :green_heart: |  spotbugs  |   4m 13s |  HBASE-24666 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   5m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m  5s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  18m 23s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   4m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 52s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  63m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3584/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3584 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 95da2cc73334 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-24666 / fd2f3d1abd |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-replication hbase-server U: . 
|
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3584/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26026) HBase Write may be stuck forever when using CompactingMemStore

2021-08-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398606#comment-17398606
 ] 

Hudson commented on HBASE-26026:


Results for branch branch-2
[build #322 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/322/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/322/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/322/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/322/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/322/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HBase Write may be stuck forever when using CompactingMemStore
> --
>
> Key: HBASE-26026
> URL: https://issues.apache.org/jira/browse/HBASE-26026
> Project: HBase
>  Issue Type: Bug
>  Components: in-memory-compaction
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.4.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.6, 2.3.7
>
>
> Sometimes I have observed that HBase writes may get stuck in my HBase 
> cluster, which enables {{CompactingMemStore}}. I have simulated the problem 
> with a unit test in my PR.
> The problem is caused by {{CompactingMemStore.checkAndAddToActiveSize}}:
> {code:java}
> 425   private boolean checkAndAddToActiveSize(MutableSegment currActive, Cell cellToAdd,
> 426       MemStoreSizing memstoreSizing) {
> 427     if (shouldFlushInMemory(currActive, cellToAdd, memstoreSizing)) {
> 428       if (currActive.setInMemoryFlushed()) {
> 429         flushInMemory(currActive);
> 430         if (setInMemoryCompactionFlag()) {
> 431           // The thread is dispatched to do in-memory compaction in the background
>           ......
>  }
> {code}
> In line 427, {{shouldFlushInMemory}} checks whether {{currActive.getDataSize}} 
> plus the size of {{cellToAdd}} exceeds 
> {{CompactingMemStore.inmemoryFlushSize}}; if so, {{currActive}} should be 
> flushed, and {{currActive.setInMemoryFlushed()}} is invoked in line 428:
> {code:java}
> public boolean setInMemoryFlushed() {
>   return flushed.compareAndSet(false, true);
> }
> {code}
> After successfully setting {{currActive.flushed}} to true, 
> {{flushInMemory(currActive)}} in line 429 invokes 
> {{CompactingMemStore.pushActiveToPipeline}}:
> {code:java}
> protected void pushActiveToPipeline(MutableSegment currActive) {
>   if (!currActive.isEmpty()) {
>     pipeline.pushHead(currActive);
>     resetActive();
>   }
> }
> {code}
> In the above {{CompactingMemStore.pushActiveToPipeline}} method, if 
> {{currActive.cellSet}} is empty, nothing is done. Due to concurrent writes, 
> and because we first add the cell size to {{currActive.getDataSize}} and only 
> then actually add the cell to {{currActive.cellSet}}, it is possible that 
> {{currActive.getDataSize}} cannot accommodate {{cellToAdd}} while 
> {{currActive.cellSet}} is still empty, because pending writes have not yet 
> added their cells to {{currActive.cellSet}}.
> So if {{currActive.cellSet}} is empty now, no new {{ActiveSegment}} is 
> created, and new writes still target {{currActive}}; but since 
> {{currActive.flushed}} is true, {{currActive}} can never enter 
> {{flushInMemory(currActive)}} again, and a new {{ActiveSegment}} can never be 
> created! In the end all writes would be stuck.
> In my opinion, once {{currActive.flushed}} is set to true, it cannot continue 
> to be used as the {{ActiveSegment}}; and because of concurrent pending 
> writes, only after {{currActive.updatesLock.writeLock()}} is acquired (i.e. 
> {{currActive.waitForUpdates}} is called) in 
> {{CompactingMemStore.inMemoryCompaction}} can we safely say whether 
> {{currActive}} is empty or not.
> My fix is to remove the {{if (!currActive.isEmpty())}} check here and leave 
> the check to the background {{InMemoryCompactionRunnable}} after 
> {{currActive.waitForUpdates}} is called. An alternative fix is to use a 
> synchronization mechanism in {{checkAndAddToActiveSize}} 

[GitHub] [hbase-connectors] Apache-HBase commented on pull request #72: [HBASE-25357] allow specifying binary row key range to pre-split regions

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #72:
URL: https://github.com/apache/hbase-connectors/pull/72#issuecomment-898397760


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  1s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  master passed  |
   | +1 :green_heart: |  scaladoc  |   0m 46s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  scalac  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  scaladoc  |   0m 46s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m  3s |  hbase-spark in the patch passed.  |
   |  |   |  13m 48s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-Connectors-PreCommit/job/PR-72/1/artifact/yetus-precommit-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase-connectors/pull/72 |
   | Optional Tests | dupname scalac scaladoc unit compile |
   | uname | Linux b9487a03e2cc 5.4.0-1025-aws #25~18.04.1-Ubuntu SMP Fri Sep 
11 12:03:04 UTC 2020 x86_64 GNU/Linux |
   | Build tool | hb_maven |
   | Personality | dev-support/jenkins/hbase-personality.sh |
   | git revision | master / fddb433 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-Connectors-PreCommit/job/PR-72/1/testReport/
 |
   | Max. process+thread count | 918 (vs. ulimit of 12500) |
   | modules | C: spark/hbase-spark U: spark/hbase-spark |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-Connectors-PreCommit/job/PR-72/1/console
 |
   | versions | git=2.20.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache9 commented on a change in pull request #3574: HBASE-26187 Write straight into the store directory when Splitting an…

2021-08-13 Thread GitBox


Apache9 commented on a change in pull request #3574:
URL: https://github.com/apache/hbase/pull/3574#discussion_r688434905



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java
##
@@ -638,47 +598,36 @@ void cleanupDaughterRegion(final RegionInfo regionInfo) 
throws IOException {
*/
   public Path commitDaughterRegion(final RegionInfo regionInfo)
   throws IOException {
-Path regionDir = new Path(this.tableDir, regionInfo.getEncodedName());
-Path daughterTmpDir = this.getSplitsDir(regionInfo);
-
-if (fs.exists(daughterTmpDir)) {
-
+Path regionDir = this.getSplitsDir(regionInfo);
+if (fs.exists(regionDir)) {
   // Write HRI to a file in case we need to recover hbase:meta
-  Path regionInfoFile = new Path(daughterTmpDir, REGION_INFO_FILE);
+  Path regionInfoFile = new Path(regionDir, REGION_INFO_FILE);
   byte[] regionInfoContent = getRegionInfoFileContent(regionInfo);
   writeRegionInfoFileContent(conf, fs, regionInfoFile, regionInfoContent);
-
-  // Move the daughter temp dir to the table dir
-  if (!rename(daughterTmpDir, regionDir)) {
-throw new IOException("Unable to rename " + daughterTmpDir + " to " + 
regionDir);
-  }
 }
 
 return regionDir;
   }
 
   /**
-   * Create the region splits directory.
+   * Creates region split daughter directories under the table dir. If the 
daughter regions already
+   * exist, for example, in the case of a recovery from a previous failed 
split procedure, this
+   * method deletes the given region dir recursively, then recreates it again.
*/
   public void createSplitsDir(RegionInfo daughterA, RegionInfo daughterB) 
throws IOException {
-Path splitdir = getSplitsDir();
-if (fs.exists(splitdir)) {
-  LOG.info("The " + splitdir + " directory exists.  Hence deleting it to 
recreate it");
-  if (!deleteDir(splitdir)) {
-throw new IOException("Failed deletion of " + splitdir + " before 
creating them again.");
-  }
+Path daughterADir = getSplitsDir(daughterA);
+if (fs.exists(daughterADir)) {
+  fs.delete(daughterADir, true);

Review comment:
   Why do we not use the deleteDir method in HRegionFileSystem? It seems 
better to keep the old way: test the return value of deleteDir and throw an 
IOException.
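   
   Something like the following sketch of the suggested pattern (names taken 
from the diff above; illustrative only):
   
   Path daughterADir = getSplitsDir(daughterA);
   if (fs.exists(daughterADir) && !deleteDir(daughterADir)) {
     // Surface the failure instead of silently proceeding with a stale dir.
     throw new IOException("Failed deletion of " + daughterADir
         + " before recreating it");
   }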

##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionFileSystem.java
##
@@ -70,6 +70,7 @@
   private static final Logger LOG = 
LoggerFactory.getLogger(TestHRegionFileSystem.class);
 
   public static final byte[] FAMILY_NAME = Bytes.toBytes("info");
+

Review comment:
   Do we really need to modify this file?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
{{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 illustrate. 
The logic is only right when the src {{ByteBuff}} is also a {{MultiByteBuff}} 
and every {{ByteBuffer}} in {{src.items}} has exactly the same byte size as 
every {{ByteBuffer}} in {{dest.items}}; but looking at the usage of this 
method in the hbase project, that assumption is obviously not right.
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
   


* If src is a {{SingleByteBuff}} and its remaining space is less than length, 
then when the remaining space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} once 
again from position 0, because the following 
{{MultiByteBuff.getItemByteBuffer}} ignores the index parameter for a 
{{SingleByteBuff}}. Obviously, this behavior is very strange and unexpected.
  {code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
  {code} 

Why do the tests seem OK despite so many bugs? Because in normal cases we just 
use {{SingleByteBuff}}, not {{MultiByteBuff}}.

  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
an obvious bug:
* It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
{{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 illustrate. 
The logic is only right when the src {{ByteBuff}} is also a {{MultiByteBuff}} 
and every {{ByteBuffer}} in {{src.items}} has exactly the same byte size as 
every {{ByteBuffer}} in {{dest.items}}; but looking at the usage of this 
method in the hbase project, that assumption is obviously not right.
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
   


* If src is a {{SingleByteBuff}} and its remaining space is less than length, 
then when the remaining space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} once 
again from position 0, because the following 
{{MultiByteBuff.getItemByteBuffer}} ignores the index parameter for a 
{{SingleByteBuff}}. Obviously, this behavior is very strange and unexpected.
  {code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
  {code} 

Why do the tests seem OK despite so many bugs? Because in normal cases we just 
use {{SingleByteBuff}}, not {{MultiByteBuff}}.


> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
> {{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 illustrate. 
> The logic is only right when the src {{ByteBuff}} is also a {{MultiByteBuff}} 
> and every {{ByteBuffer}} in {{src.items}} has exactly the same byte size as 
> every {{ByteBuffer}} in {{dest.items}}; but looking at the usage of this 
> method in the hbase project, that assumption is obviously not right.
> {code:java}
> 746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int 
> length) 
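
For illustration, a sketch of how the source index lookup would need to 
change when src is itself a {{MultiByteBuff}} (illustrative only; a real fix 
must also handle {{SingleByteBuff}} and the length bounds):

{code:java}
// Sketch: resolve the source item index against the *source* buffer's own
// offset table, not against the destination's itemBeginPos.
MultiByteBuff msrc = (MultiByteBuff) src;
int srcItemIndex = msrc.getItemIndex(srcOffset);          // src's mapping
srcOffset = srcOffset - msrc.itemBeginPos[srcItemIndex];  // src's offsets
ByteBuffer srcItem = msrc.items[srcItemIndex];
{code}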

[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
an obvious bug:
* It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
{{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 illustrate. 
The logic is only right when the src {{ByteBuff}} is also a {{MultiByteBuff}} 
and every {{ByteBuffer}} in {{src.items}} has exactly the same byte size as 
every {{ByteBuffer}} in {{dest.items}}; but looking at the usage of this 
method in the hbase project, that assumption is obviously not right.
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
   


* If src is a {{SingleByteBuff}} and its remaining space is less than length, 
then when the remaining space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} once 
again from position 0, because the following 
{{MultiByteBuff.getItemByteBuffer}} ignores the index parameter for a 
{{SingleByteBuff}}. Obviously, this behavior is very strange and unexpected.
  {code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
  {code} 

Why do the tests seem OK despite so many bugs? Because in normal cases we just 
use {{SingleByteBuff}}, not {{MultiByteBuff}}.

  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
an obvious bug:
* It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
{{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 illustrate. 
The logic is only right when the src {{ByteBuff}} is also a {{MultiByteBuff}} 
and every {{ByteBuffer}} in {{src.items}} has exactly the same byte size as 
every {{ByteBuffer}} in {{dest.items}}; but looking at the usage of this 
method in the hbase project, that assumption is obviously not right.
{code:java}:
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
   


* If src is a {{SingleByteBuff}} and its remaining space is less than length, 
then when the remaining space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} once 
again from position 0, because the following 
{{MultiByteBuff.getItemByteBuffer}} ignores the index parameter for a 
{{SingleByteBuff}}. Obviously, this behavior is very strange and unexpected.
  {code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
  {code} 

Why do the tests seem OK despite so many bugs? Because in normal cases we just 
use {{SingleByteBuff}}, not {{MultiByteBuff}}.


> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has an obvious bug:
> * It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
> {{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 illustrate. 
> The logic is only right when the src {{ByteBuff}} is also a {{MultiByteBuff}} 
> and every {{ByteBuffer}} in {{src.items}} has exactly the same byte size as 
> every {{ByteBuffer}} in {{dest.items}}; but looking at the usage of this 
> method in the hbase project, that assumption is obviously not right.
> {code:java}
> 746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int 
> length) 

[GitHub] [hbase] ddupg opened a new pull request #3584: HBASE-26194 Introduce a ReplicationServerSourceManager to simplify HR…

2021-08-13 Thread GitBox


ddupg opened a new pull request #3584:
URL: https://github.com/apache/hbase/pull/3584


   …eplicationServer


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache9 commented on a change in pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


Apache9 commented on a change in pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#discussion_r688422424



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
##
@@ -344,14 +344,27 @@ private StoreContext 
initializeStoreContext(ColumnFamilyDescriptor family) throw
   }
 
   private InetSocketAddress[] getFavoredNodes() {
-InetSocketAddress[] favoredNodes = null;
 if (region.getRegionServerServices() != null) {
-  favoredNodes = region.getRegionServerServices().getFavoredNodesForRegion(
-  region.getRegionInfo().getEncodedName());
+  return region.getRegionServerServices()
+  .getFavoredNodesForRegion(region.getRegionInfo().getEncodedName());
 }
 return favoredNodes;
   }
 
+  // Favored nodes used by compaction offload
+  private InetSocketAddress[] favoredNodes = null;

Review comment:
   Is it possible to not make this method public? Or at least change the 
method name? The name setXXX suggests that we can call it multiple times...
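   For illustration, one way to address that concern is an init-style, 
call-once method instead of a plain setter. This is only a hypothetical sketch 
of the naming/contract idea, not code from the patch; `initFavoredNodes` is an 
invented name, and `Preconditions` stands for the Guava-style check used 
elsewhere in HBase:
{code:java}
  // Favored nodes used by compaction offload; may be initialized at most
  // once, right after the store is constructed.
  private InetSocketAddress[] favoredNodes;

  void initFavoredNodes(List<InetSocketAddress> nodes) {
    Preconditions.checkState(this.favoredNodes == null,
      "favored nodes already initialized for this store");
    if (nodes != null && !nodes.isEmpty()) {
      this.favoredNodes = nodes.toArray(new InetSocketAddress[0]);
    }
  }
{code}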




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


Apache-HBase commented on pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#issuecomment-898367351


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-25714 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 52s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   3m 10s |  HBASE-25714 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  HBASE-25714 passed  |
   | +1 :green_heart: |  spotbugs  |   2m  7s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 14s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 57s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  47m 16s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3468 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 3e0974a922e1 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / 85f02919da |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3468/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] virajjasani commented on a change in pull request #3579: HBASE-26189 Reduce log level of CompactionProgress notice to DEBUG

2021-08-13 Thread GitBox


virajjasani commented on a change in pull request #3579:
URL: https://github.com/apache/hbase/pull/3579#discussion_r688403882



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionProgress.java
##
@@ -77,8 +77,10 @@ public void complete() {
*/
   public long getTotalCompactingKVs() {
 if (totalCompactingKVs < currentCompactedKVs) {
-  LOG.warn("totalCompactingKVs={} less than currentCompactedKVs={}",
+  if (LOG.isDebugEnabled()) {

Review comment:
   +1 once check is removed
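   For reference, the guard-free form being asked for would look like the 
sketch below; with SLF4J-style parameterized logging the message is only 
rendered when DEBUG is enabled, so the explicit isDebugEnabled() check is 
redundant (based on the warn line in the diff above):
{code:java}
  // The {} placeholders defer string formatting until the DEBUG level is
  // actually enabled, so no LOG.isDebugEnabled() guard is needed.
  LOG.debug("totalCompactingKVs={} less than currentCompactedKVs={}",
    totalCompactingKVs, currentCompactedKVs);
{code}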




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache9 opened a new pull request #3583: HBASE-26193 Do not store meta region location as permanent state on z…

2021-08-13 Thread GitBox


Apache9 opened a new pull request #3583:
URL: https://github.com/apache/hbase/pull/3583


   …ookeeper


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-26193) Do not store meta region location as permanent state on zookeeper

2021-08-13 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-26193:
--
Summary: Do not store meta region location as permanent state on zookeeper  
(was: Do not store meta region location on zookeeper)

> Do not store meta region location as permanent state on zookeeper
> -
>
> Key: HBASE-26193
> URL: https://issues.apache.org/jira/browse/HBASE-26193
> Project: HBase
>  Issue Type: Improvement
>  Components: meta, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> As it breaks one of our design rules:
> https://hbase.apache.org/book.html#design.invariants.zk.data
> We used to think HBase should recover automatically when all the data on 
> zk (except the replication data) is cleared, but obviously, if you clear the 
> meta region location, the cluster will be in trouble and operational tools 
> will be needed to recover it.
> So here, along with the ConnectionRegistry improvements, we should also 
> consider moving the meta region location off zookeeper.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
{{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 illustrate. 
The logic is only correct when the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and every {{ByteBuffer}} in {{src.items}} has exactly the 
same byte size as every {{ByteBuffer}} in {{dest.items}}; but looking at the 
usages of this method in the HBase project, that assumption obviously does not 
hold.
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} once 
again from position 0, because the following 
{{MultiByteBuff.getItemByteBuffer}} ignores the index parameter for a 
{{SingleByteBuff}}. Obviously, this behavior is very strange and unexpected.
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

Why do the tests seem OK despite so many bugs? Because in normal cases we just 
use {{SingleByteBuff}}, not {{MultiByteBuff}}.

  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
{{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 illustrate. 
The logic is only correct when the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and every {{ByteBuffer}} in {{src.items}} has exactly the 
same byte size as every {{ByteBuffer}} in {{dest.items}}; but looking at the 
usages of this method in the HBase project, that assumption obviously does not 
hold.
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} once 
again from position 0, because the following 
{{MultiByteBuff.getItemByteBuffer}} ignores the index parameter for a 
{{SingleByteBuff}}. Obviously, this behavior is very strange and unexpected.
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

Why does the test seem OK despite so many bugs? Because in normal cases we 
just use {{SingleByteBuff}}, not {{MultiByteBuff}}.


> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
> {{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 
> illustrate. The logic is only correct when the src {{ByteBuff}} is also a 
> {{MultiByteBuff}} and every {{ByteBuffer}} in {{src.items}} has exactly the 
> same byte size as every {{ByteBuffer}} in {{dest.items}}; but looking at the 
> usages of this method in the HBase project, that assumption obviously does 
> not hold.
> {code:java}
> 746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int 
> length) 

[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
{{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 illustrate. 
The logic is only correct when the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and every {{ByteBuffer}} in {{src.items}} has exactly the 
same byte size as every {{ByteBuffer}} in {{dest.items}}; but looking at the 
usages of this method in the HBase project, that assumption obviously does not 
hold.
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} once 
again from position 0, because the following 
{{MultiByteBuff.getItemByteBuffer}} ignores the index parameter for a 
{{SingleByteBuff}}. Obviously, this behavior is very strange and unexpected.
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

Why does the test seem OK despite so many bugs? Because in normal cases we 
just use {{SingleByteBuff}}, not {{MultiByteBuff}}.

  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} 
and the {{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 
illustrate. The logic is only correct when the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and every {{ByteBuffer}} in {{src.items}} has exactly the 
same byte size as every {{ByteBuffer}} in {{dest.items}}; but looking at the 
usages of this method in the HBase project, that assumption obviously does not 
hold.
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} once 
again from position 0, because the following 
{{MultiByteBuff.getItemByteBuffer}} ignores the index parameter for a 
{{SingleByteBuff}}. Obviously, this behavior is very strange and unexpected.
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

Why does the test seem OK despite so many bugs? Because in normal cases we 
just use {{SingleByteBuff}}, not {{MultiByteBuff}}.


> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} and the 
> {{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 
> illustrate. The logic is only correct when the src {{ByteBuff}} is also a 
> {{MultiByteBuff}} and every {{ByteBuffer}} in {{src.items}} has exactly the 
> same byte size as every {{ByteBuffer}} in {{dest.items}}; but looking at the 
> usages of this method in the HBase project, that assumption obviously does 
> not hold.
> 

[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to mix up the {{items}} of the {{src}} {{MultiByteBuff}} 
and the {{items}} of the {{dest}} {{MultiByteBuff}}, as lines 749 and 754 
illustrate. The logic is only correct when the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and every {{ByteBuffer}} in {{src.items}} has exactly the 
same byte size as every {{ByteBuffer}} in {{dest.items}}; but looking at the 
usages of this method in the HBase project, that assumption obviously does not 
hold.
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} once 
again from position 0, because the following 
{{MultiByteBuff.getItemByteBuffer}} ignores the index parameter for a 
{{SingleByteBuff}}. Obviously, this behavior is very strange and unexpected.
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

Why does the test seem OK despite so many bugs? Because in normal cases we 
just use {{SingleByteBuff}}, not {{MultiByteBuff}}.

  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is very 
strange and unexpected.



> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It is a common utility method and may be used in many situations, but its 
> implementation seems to mix up the {{items}} of the {{src}} 
> {{MultiByteBuff}} and the {{items}} of the {{dest}} {{MultiByteBuff}}, as 
> lines 749 and 754 illustrate. The logic is only correct when the src 
> {{ByteBuff}} is also a {{MultiByteBuff}} and byte size of

[GitHub] [hbase] nyl3532016 commented on a change in pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


nyl3532016 commented on a change in pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#discussion_r688384735



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
##
@@ -344,14 +344,27 @@ private StoreContext 
initializeStoreContext(ColumnFamilyDescriptor family) throw
   }
 
   private InetSocketAddress[] getFavoredNodes() {
-InetSocketAddress[] favoredNodes = null;
 if (region.getRegionServerServices() != null) {
-  favoredNodes = region.getRegionServerServices().getFavoredNodesForRegion(
-  region.getRegionInfo().getEncodedName());
+  return region.getRegionServerServices()
+  .getFavoredNodesForRegion(region.getRegionInfo().getEncodedName());
 }
 return favoredNodes;
   }
 
+  // Favored nodes used by compaction offload
+  private InetSocketAddress[] favoredNodes = null;

Review comment:
   Not thread safe, but it is only called once, after `initializeStoreContext` 
and before the compaction actually runs, because we initialize a new store 
every time for a compaction request.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is very 
strange and unexpected.


  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is very 
strange and unexpected.



> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It is a common utility method and may be used in many situations, but its 
> implementation seems to assume that the src {{ByteBuff}} is also a 
> {{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
> the same byte size as every {{ByteBuffer}} in the dest 
> {{MultiByteBuff}}, as lines 749 and 754 

[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is very 
strange and unexpected.


  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is 
strange and unexpected.



> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It is a common utility method and may be used in many situations, but its 
> implementation seems to assume that the src {{ByteBuff}} is also a 
> {{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
> the same byte size as every {{ByteBuffer}} in the dest 
> {{MultiByteBuff}}, as lines 749 and 754 illustrate:
> 

[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is very 
strange and unexpected.


  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is very 
strange and unexpected.



> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It is a common utility method and may be used in many situations, but its 
> implementation seems to assume that the src {{ByteBuff}} is also a 
> {{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
> the same byte size as every {{ByteBuffer}} in the dest 
> {{MultiByteBuff}}, as lines 749 and 754 illustrate:

[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is 
strange and unexpected.


  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is 
strange and unexpected.



> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It is a common utility method and may be used in many situations, but its 
> implementation seems to assume that the src {{ByteBuff}} is also a 
> {{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
> the same byte size as every {{ByteBuffer}} in the dest 
> {{MultiByteBuff}}, as lines 749 and 754 illustrate:
> 

[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is 
strange and unexpected.


  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is 
strange and unexpected.



> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It is a common utility method and may be used in many situations, but its 
> implementation seems to assume that the src {{ByteBuff}} is also a 
> {{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
> the same byte size as every {{ByteBuffer}} in the dest 
> {{MultiByteBuff}}, as lines 749 and 754 illustrate:
> 

[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is 
strange and unexpected.


  was:
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is 
strange and unexpected.



> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It is a common utility method and may be used in many situations, but its 
> implementation seems to assume that the src {{ByteBuff}} is also a 
> {{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
> the same byte size as every {{ByteBuffer}} in the dest 
> {{MultiByteBuff}}, as lines 749 and 754 illustrate:
> {code:java}
> 746   public MultiByteBuff put(int 

[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Description: 
MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) has 
some obvious bugs:
* It is a common utility method and may be used in many situations, but its 
implementation seems to assume that the src {{ByteBuff}} is also a 
{{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
the same byte size as every {{ByteBuffer}} in the dest {{MultiByteBuff}}, as 
lines 749 and 754 illustrate:
{code:java}
746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
747     checkRefCount();
748     int destItemIndex = getItemIndex(offset);
749     int srcItemIndex = getItemIndex(srcOffset);
750     ByteBuffer destItem = this.items[destItemIndex];
751     offset = offset - this.itemBeginPos[destItemIndex];
752
753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
...
{code}
But looking at the usages of this method in the HBase project, that assumption 
obviously does not hold; even the following 
{{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
{{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be a 
{{SingleByteBuff}}, which contradicts line 754 of the above 
{{MultiByteBuff.put}}:
{code:java}
  private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
    return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
        : ((MultiByteBuff) buf).items[index];
  }
{code}

* If src is a {{SingleByteBuff}} and its remaining space is smaller than 
length, then once that space is exhausted this {{MultiByteBuff.put}} method 
does not throw any exception but continues to put the src {{ByteBuff}} again 
from position 0, because the above {{MultiByteBuff.getItemByteBuffer}} ignores 
the index parameter for a {{SingleByteBuff}}. Obviously, this behavior is 
strange and unexpected.


> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>
> MultiByteBuff.put(int destOffset, ByteBuff src, int srcOffset, int length) 
> has some obvious bugs:
> * It is a common utility method and may be used in many situations, but its 
> implementation seems to assume that the src {{ByteBuff}} is also a 
> {{MultiByteBuff}} and that every {{ByteBuffer}} in {{src.items}} has exactly 
> the same byte size as every {{ByteBuffer}} in the dest 
> {{MultiByteBuff}}, as lines 749 and 754 illustrate:
> {code:java}
> 746   public MultiByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
> 747     checkRefCount();
> 748     int destItemIndex = getItemIndex(offset);
> 749     int srcItemIndex = getItemIndex(srcOffset);
> 750     ByteBuffer destItem = this.items[destItemIndex];
> 751     offset = offset - this.itemBeginPos[destItemIndex];
> 752
> 753     ByteBuffer srcItem = getItemByteBuffer(src, srcItemIndex);
> 754     srcOffset = srcOffset - this.itemBeginPos[srcItemIndex];
> ...
> {code}
> But looking at the usages of this method in the HBase project, that 
> assumption obviously does not hold; even the following 
> {{MultiByteBuff.getItemByteBuffer}}, which is called inside the above 
> {{MultiByteBuff.put}}, considers the case where the src {{ByteBuff}} may be 
> a {{SingleByteBuff}}, which contradicts line 754 of the above 
> {{MultiByteBuff.put}}:
> {code:java}
> private static ByteBuffer getItemByteBuffer(ByteBuff buf, int index) {
>   return (buf instanceof SingleByteBuff) ? buf.nioByteBuffers()[0]
>       : ((MultiByteBuff) buf).items[index];
> }
> {code}
> * If src is a {{SingleByteBuff}} and its remaining space is smaller than 
> length, then once that space is exhausted this {{MultiByteBuff.put}} method 
> does not throw any exception but continues to put the src {{ByteBuff}} 
> again from position 0, because the above 
> {{MultiByteBuff.getItemByteBuffer}} ignores the index parameter for a 
> {{SingleByteBuff}}. Obviously, this behavior is strange and unexpected.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] nyl3532016 commented on a change in pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


nyl3532016 commented on a change in pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#discussion_r688369232



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
##
@@ -344,14 +344,27 @@ private StoreContext 
initializeStoreContext(ColumnFamilyDescriptor family) throw
   }
 
   private InetSocketAddress[] getFavoredNodes() {
-InetSocketAddress[] favoredNodes = null;
 if (region.getRegionServerServices() != null) {
-  favoredNodes = region.getRegionServerServices().getFavoredNodesForRegion(
-  region.getRegionInfo().getEncodedName());
+  return region.getRegionServerServices()
+  .getFavoredNodesForRegion(region.getRegionInfo().getEncodedName());
 }
 return favoredNodes;
   }
 
+  // Favored nodes used by compaction offload
+  private InetSocketAddress[] favoredNodes = null;
+
+  public void setFavoredNodes(List<InetSocketAddress> favoredNodes) {
+if (favoredNodes != null && favoredNodes.size() > 0) {

Review comment:
   ok




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] nyl3532016 commented on a change in pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


nyl3532016 commented on a change in pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#discussion_r688366528



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
##
@@ -344,14 +344,27 @@ private StoreContext initializeStoreContext(ColumnFamilyDescriptor family) throw
   }
 
   private InetSocketAddress[] getFavoredNodes() {
-    InetSocketAddress[] favoredNodes = null;
     if (region.getRegionServerServices() != null) {
-      favoredNodes = region.getRegionServerServices().getFavoredNodesForRegion(
-          region.getRegionInfo().getEncodedName());
+      return region.getRegionServerServices()
+          .getFavoredNodesForRegion(region.getRegionInfo().getEncodedName());
     }
     return favoredNodes;
   }
 
+  // Favored nodes used by compaction offload
+  private InetSocketAddress[] favoredNodes = null;
+
+  public void setFavoredNodes(
+      List favoredNodes) {

Review comment:
   ok




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] nyl3532016 commented on a change in pull request #3468: HBASE-26076 Support favoredNodes when do compaction offload

2021-08-13 Thread GitBox


nyl3532016 commented on a change in pull request #3468:
URL: https://github.com/apache/hbase/pull/3468#discussion_r688363023



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionThreadManager.java
##
@@ -150,6 +150,27 @@ public void requestCompaction(CompactionTask compactionTask) throws IOException
     }
   }
 
+  /**
+   * Open store, and clean stale compacted file in cache
+   */
+  private HStore openStore(RegionInfo regionInfo, ColumnFamilyDescriptor cfd, boolean major,

Review comment:
   Not just a refactor: we need to call setFavoredNodes on the store, so this 
logic is moved out of the `selectCompaction` method, as sketched below.

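   To spell that out, here is a hypothetical sketch of the resulting call order (all names and signatures are assumptions based on the diff, not the final patch):

```java
// Hypothetical sketch: the store is opened and given its favored nodes
// before compaction selection runs, which is why the open logic cannot
// stay inside selectCompaction().
import java.net.InetSocketAddress;
import java.util.List;

class OrderingSketch {
  interface Store {
    void setFavoredNodes(List<InetSocketAddress> favoredNodes);
    void selectCompaction();
  }

  static void compactOffloaded(Store store, List<InetSocketAddress> favoredNodes) {
    store.setFavoredNodes(favoredNodes); // set first, outside selection
    store.selectCompaction();            // selection sees the favored nodes
  }
}
```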



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache9 commented on a change in pull request #3566: HBASE-26172 Deprecated MasterRegistry and allow getBootstrapNodes to …

2021-08-13 Thread GitBox


Apache9 commented on a change in pull request #3566:
URL: https://github.com/apache/hbase/pull/3566#discussion_r688283296



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
##
@@ -308,18 +310,35 @@
    */
   private static final long DEFAULT_REGION_SERVER_RPC_MINIMUM_SCAN_TIME_LIMIT_DELTA = 10;
 
-  /*
+  /**
    * Whether to reject rows with size > threshold defined by
    * {@link RSRpcServices#BATCH_ROWS_THRESHOLD_NAME}
    */
   private static final String REJECT_BATCH_ROWS_OVER_THRESHOLD =
     "hbase.rpc.rows.size.threshold.reject";
 
-  /*
+  /**
    * Default value of config {@link RSRpcServices#REJECT_BATCH_ROWS_OVER_THRESHOLD}
    */
   private static final boolean DEFAULT_REJECT_BATCH_ROWS_OVER_THRESHOLD = false;
 
+  /**
+   * Determine the bootstrap nodes we want to return to the client connection registry.
+   *
+   * {@link #MASTER}: return masters as bootstrap nodes.

Review comment:
   > It appeared that you intended to keep both MasterRegistry, RpcConnectionRegistry with masters.

   Let me explain more...

   This issue aims to remove MasterRegistry in the future, so:
   1. Ideally, RpcConnectionRegistry should cover what we have in MasterRegistry, at least through some configuration.
   2. We just do not want to support masters any more, so we do not need to provide configuration at the server side; after MasterRegistry is removed, users cannot use masters as the connection registry endpoint any more.

   But if we go with option 2, it may break some users, so maybe we should keep MasterRegistry and not mark it as deprecated? In that case, RpcConnectionRegistry had better be renamed to RegionServerRegistry.

   So let me conclude. There are 3 options:
   1. Deprecate MasterRegistry, plan its removal for 4.0.0, and let RpcConnectionRegistry still provide the ability to use masters as the connection registry endpoint.
   2. Deprecate MasterRegistry, plan its removal for 4.0.0, but keep RpcConnectionRegistry using only region servers as the connection registry endpoint.
   3. Do not deprecate MasterRegistry; rename RpcConnectionRegistry to RegionServerRegistry.

   I do not think we need to rename RpcConnectionRegistry to RegionServerRegistry if we still want to remove MasterRegistry, so there is no option 4.

   Please let me know your choice. But if you choose 2 or 3, I think we had better post a discussion email to the dev list, since that removes a feature currently in use and we need to collect more information from users.

   Thanks.

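   To make the options concrete, here is a tiny hypothetical sketch (the enum, class, and method names are all assumptions, not HBase's actual API) of the server-side switch the javadoc describes; option 2 above would amount to deleting the MASTER branch:

```java
// Hypothetical sketch only; illustrates a config-driven choice of which
// nodes a server hands back as bootstrap nodes for the client registry.
import java.util.Arrays;
import java.util.List;

enum BootstrapNodeSource {
  MASTER,        // return masters as bootstrap nodes (the mode under debate)
  REGION_SERVER  // return region servers as bootstrap nodes
}

final class BootstrapNodesProvider {
  private final BootstrapNodeSource source;

  BootstrapNodesProvider(BootstrapNodeSource source) {
    this.source = source;
  }

  List<String> getBootstrapNodes(List<String> masters, List<String> regionServers) {
    return source == BootstrapNodeSource.MASTER ? masters : regionServers;
  }

  public static void main(String[] args) {
    BootstrapNodesProvider p = new BootstrapNodesProvider(BootstrapNodeSource.MASTER);
    System.out.println(p.getBootstrapNodes(
        Arrays.asList("master1:16000"), Arrays.asList("rs1:16020", "rs2:16020")));
  }
}
```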



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-26197) Fix some obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26197:
-
Summary: Fix some obvious bugs in MultiByteBuff.put  (was: Fix obvious bugs 
in MultiByteBuff.put)

> Fix some obvious bugs in MultiByteBuff.put
> --
>
> Key: HBASE-26197
> URL: https://issues.apache.org/jira/browse/HBASE-26197
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.5
>Reporter: chenglei
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26197) Fix obvious bugs in MultiByteBuff.put

2021-08-13 Thread chenglei (Jira)
chenglei created HBASE-26197:


 Summary: Fix obvious bugs in MultiByteBuff.put
 Key: HBASE-26197
 URL: https://issues.apache.org/jira/browse/HBASE-26197
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.4.5, 3.0.0-alpha-1
Reporter: chenglei






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26143) The default value of 'hbase.hregion.memstore.mslab.indexchunksize.percent' should be 0

2021-08-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26143:
-
Summary: The default value of 
'hbase.hregion.memstore.mslab.indexchunksize.percent' should be 0  (was: The 
default value of 'hbase.hregion.memstore.mslab.indexchunksize.percent' should 
depend on MemStore type)

> The default value of 'hbase.hregion.memstore.mslab.indexchunksize.percent' 
> should be 0
> --
>
> Key: HBASE-26143
> URL: https://issues.apache.org/jira/browse/HBASE-26143
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-1, 2.4.0
>Reporter: chenglei
>Priority: Major
>
> The default value of {{hbase.hregion.memstore.mslab.indexchunksize.percent}} 
> introduced by HBASE-24892 is 0.1, but the default {{DefaultMemStore}} has no 
> IndexChunk, so {{ChunkCreator.indexChunksPool}} is useless there (IndexChunk 
> is only used by {{CompactingMemStore}}). Therefore 
> {{hbase.hregion.memstore.mslab.indexchunksize.percent}} should be 0 when we 
> use {{DefaultMemStore}}, to save memory space. Only when we use 
> {{CompactingMemStore}} and {{CellChunkMap}} is it meaningful for the user to 
> set {{hbase.hregion.memstore.mslab.indexchunksize.percent}}. 
> However, because of an existing bug in {{ChunkCreator}}, this depends on 
> HBASE-26142

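To illustrate, until the default changes, a user running the default {{DefaultMemStore}} can reclaim that memory by setting the value explicitly (a minimal sketch; the configuration key comes from this issue, while the surrounding code is illustrative):

{code:java}
// Minimal sketch: explicitly disable the index chunk pool, which this
// issue argues should be the default when DefaultMemStore is in use.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class IndexChunkConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setDouble("hbase.hregion.memstore.mslab.indexchunksize.percent", 0.0);
    System.out.println(conf.get("hbase.hregion.memstore.mslab.indexchunksize.percent"));
  }
}
{code}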


--
This message was sent by Atlassian Jira
(v8.3.4#803005)