[GitHub] [hbase] Apache-HBase commented on pull request #5379: HBASE-28055 Performance improvement for scan over several stores.

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5379:
URL: https://github.com/apache/hbase/pull/5379#issuecomment-1700406741

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 50s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 28s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 41s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 22s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 34s |  hbase-server: The patch 
generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   8m 59s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.5.  |
   | -1 :x: |  spotless  |   0m 36s |  patch has 32 errors when running 
spotless:check, run spotless:apply to fix.  |
   | +1 :green_heart: |  spotbugs  |   1m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  31m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5379/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5379 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux f50b066f35d1 5.4.0-152-generic #169-Ubuntu SMP Tue Jun 6 
22:23:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 5527dd9453 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5379/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | spotless | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5379/1/artifact/yetus-general-check/output/patch-spotless.txt
 |
   | Max. process+thread count | 77 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5379/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Work started] (HBASE-28055) Performance improvement for scan over several stores.

2023-08-30 Thread Sergey Soldatov (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-28055 started by Sergey Soldatov.
---
> Performance improvement for scan over several stores. 
> --
>
> Key: HBASE-28055
> URL: https://issues.apache.org/jira/browse/HBASE-28055
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-4, 2.5.5
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>
> During the fix of HBASE-19863, an additional check for fake cells that 
> trigger a reseek was added. It turns out that this check produces unnecessary 
> reseeks, because matcher.compareKeyForNextColumn should be used only with 
> indexed keys. Later [~larsh] suggested doing a simple check for OLD_TIMESTAMP 
> instead, which looks like a better solution.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28055) Performance improvement for scan over several stores.

2023-08-30 Thread Sergey Soldatov (Jira)
Sergey Soldatov created HBASE-28055:
---

 Summary: Performance improvement for scan over several stores. 
 Key: HBASE-28055
 URL: https://issues.apache.org/jira/browse/HBASE-28055
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.5.5, 3.0.0-alpha-4
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov


During the fix of HBASE-19863, an additional check for fake cells that trigger 
a reseek was added. It turns out that this check produces unnecessary reseeks, 
because matcher.compareKeyForNextColumn should be used only with indexed keys. 
Later [~larsh] suggested doing a simple check for OLD_TIMESTAMP instead, which 
looks like a better solution.
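
To illustrate the proposed change, here is a minimal sketch (purely 
illustrative, not the actual StoreScanner/ScanQueryMatcher code): gate the 
expensive compareKeyForNextColumn-style comparison behind a cheap timestamp 
check, so regular cells never pay for it. The sentinel constant and helper 
names below are hypothetical stand-ins.

{code:java}
// Illustrative sketch only, not the actual HBase scanner code.
// Assumption: fake "last on row/column" index cells carry a sentinel timestamp
// (OLDEST_TIMESTAMP below stands in for that marker).
public class FakeCellCheckSketch {

  static final long OLDEST_TIMESTAMP = Long.MIN_VALUE; // hypothetical sentinel

  /** The cheap check suggested in the description: look at the timestamp only. */
  static boolean isFakeIndexCell(long cellTimestamp) {
    return cellTimestamp == OLDEST_TIMESTAMP;
  }

  /**
   * Decide whether a reseek is needed. The expensive path (a full key
   * comparison such as matcher.compareKeyForNextColumn) is consulted only for
   * fake index cells, instead of being applied to every cell.
   */
  static boolean needsReseek(long cellTimestamp, boolean comparatorSaysReseek) {
    return isFakeIndexCell(cellTimestamp) && comparatorSaysReseek;
  }

  public static void main(String[] args) {
    System.out.println(needsReseek(1693214243056L, true));   // regular cell -> false
    System.out.println(needsReseek(OLDEST_TIMESTAMP, true)); // fake cell -> true
  }
}
{code}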





[GitHub] [hbase-connectors] Reidddddd merged pull request #122: HBASE-28054 Add spotless in hbase-connectors pre commit check

2023-08-30 Thread via GitHub


Reidddddd merged PR #122:
URL: https://github.com/apache/hbase-connectors/pull/122





[jira] [Updated] (HBASE-28051) The javadoc about RegionProcedureStore.delete is incorrect

2023-08-30 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-28051:
--
Summary: The javadoc about RegionProcedureStore.delete is incorrect  (was: 
The annotation about RegionProcedureStore.delete is not right)

> The javadoc about RegionProcedureStore.delete is incorrect
> --
>
> Key: HBASE-28051
> URL: https://issues.apache.org/jira/browse/HBASE-28051
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.4.13
>Reporter: guluo
>Assignee: guluo
>Priority: Trivial
> Attachments: image-2023-08-30-21-54-59-999.png, 
> image-2023-08-30-21-57-32-393.png
>
>
> As shown in the following figure.
> !image-2023-08-30-21-54-59-999.png!
>  
> Actually, we would fill the {color:#ff}*proc:d*{color} column with an 
> empty byte array when calling RegionProcedureStore.delete().
> !image-2023-08-30-21-57-32-393.png!
>  
>  
>  
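
For readers who have not looked at the store layout, the following sketch 
(illustrative only, using the public client API rather than the 
RegionProcedureStore internals; the table and column names are hypothetical) 
contrasts the two behaviours the description distinguishes: overwriting the 
cell with an empty byte array, which is what delete() reportedly does to 
proc:d, versus actually deleting the cell, which is what the current javadoc 
suggests.

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class EmptyValueVsDelete {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("demo"))) {
      byte[] row = Bytes.toBytes("pid-1");
      byte[] family = Bytes.toBytes("proc");
      byte[] qualifier = Bytes.toBytes("d");

      // What the description says actually happens: the cell stays in place,
      // but its value becomes an empty byte array.
      table.put(new Put(row).addColumn(family, qualifier, new byte[0]));

      // What the (incorrect) javadoc suggests: the cell is removed outright.
      table.delete(new Delete(row).addColumns(family, qualifier));
    }
  }
}
{code}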





[jira] [Commented] (HBASE-28048) RSProcedureDispatcher to abort executing request after configurable retries

2023-08-30 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760676#comment-17760676
 ] 

Duo Zhang commented on HBASE-28048:
---

If we want to recover, then the only safe way is to kill the region server. 
Just giving up and trying another region server may cause an even more serious 
problem, double assignment, which is much more difficult to diagnose and also 
much more difficult to fix.

Introducing a liveness check is a good idea. Maybe we could add a feature to 
the canary where we kill the unhealthy region server after checking its 
availability?

Thanks.

> RSProcedureDispatcher to abort executing request after configurable retries
> ---
>
> Key: HBASE-28048
> URL: https://issues.apache.org/jira/browse/HBASE-28048
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha-4, 2.4.17, 2.5.5
>Reporter: Viraj Jasani
>Priority: Major
> Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1
>
>
> In a recent incident, we observed that RSProcedureDispatcher continues 
> executing region open/close procedures with unbounded retries even in the 
> presence of known failures like GSS initiate failure:
>  
> {code:java}
> 2023-08-25 02:21:02,821 WARN [ispatcher-pool-40777] 
> procedure.RSProcedureDispatcher - request to rs1,61020,1692930044498 failed 
> due to java.io.IOException: Call to address=rs1:61020 failed on local 
> exception: java.io.IOException: 
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS 
> initiate failed, try=0, retrying... {code}
>  
>  
> If the remote execution results in IOException, the dispatcher attempts to 
> schedule the procedure for further retries:
>  
> {code:java}
>     private boolean scheduleForRetry(IOException e) {
>       LOG.debug("Request to {} failed, try={}", serverName, 
> numberOfAttemptsSoFar, e);
>       // Should we wait a little before retrying? If the server is starting 
> it's yes.
>       ...
>       ...
>       ...
>       numberOfAttemptsSoFar++;
>       // Add some backoff here as the attempts rise otherwise if a stuck 
> condition, will fill logs
>       // with failed attempts. None of our backoff classes -- RetryCounter or 
> ClientBackoffPolicy
>       // -- fit here nicely so just do something simple; increment by 
> rsRpcRetryInterval millis *
>       // retry^2 on each try
>       // up to max of 10 seconds (don't want to back off too much in case of 
> situation change).
>       submitTask(this,
>         Math.min(rsRpcRetryInterval * (this.numberOfAttemptsSoFar * 
> this.numberOfAttemptsSoFar),
>           10 * 1000),
>         TimeUnit.MILLISECONDS);
>       return true;
>     }
>  {code}
>  
>  
> Even though we try to provide backoff while retrying, max wait time is 10s:
>  
> {code:java}
> submitTask(this,
>   Math.min(rsRpcRetryInterval * (this.numberOfAttemptsSoFar * 
> this.numberOfAttemptsSoFar),
> 10 * 1000),
>   TimeUnit.MILLISECONDS); {code}
>  
>  
> This results in an endless loop of retries, until either the underlying issue 
> is fixed (e.g. the krb issue in this case) or the regionserver is killed and 
> the ongoing open/close region procedure (and perhaps the entire SCP) for the 
> affected regionserver is sidelined manually.
> {code:java}
> 2023-08-25 03:04:18,918 WARN  [ispatcher-pool-41274] 
> procedure.RSProcedureDispatcher - request to rs1,61020,1692930044498 failed 
> due to java.io.IOException: Call to address=rs1:61020 failed on local 
> exception: java.io.IOException: 
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS 
> initiate failed, try=217, retrying...
> 2023-08-25 03:04:18,916 WARN  [ispatcher-pool-41280] 
> procedure.RSProcedureDispatcher - request to rs1,61020,1692930044498 failed 
> due to java.io.IOException: Call to address=rs1:61020 failed on local 
> exception: java.io.IOException: 
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS 
> initiate failed, try=193, retrying...
> 2023-08-25 03:04:28,968 WARN  [ispatcher-pool-41315] 
> procedure.RSProcedureDispatcher - request to rs1,61020,1692930044498 failed 
> due to java.io.IOException: Call to address=rs1:61020 failed on local 
> exception: java.io.IOException: 
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS 
> initiate failed, try=266, retrying...
> 2023-08-25 03:04:28,969 WARN  [ispatcher-pool-41240] 
> procedure.RSProcedureDispatcher - request to rs1,61020,1692930044498 failed 
> due to java.io.IOException: 

[jira] [Commented] (HBASE-28048) RSProcedureDispatcher to abort executing request after configurable retries

2023-08-30 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760627#comment-17760627
 ] 

Andrew Kyle Purtell commented on HBASE-28048:
-

bq. Let's assume we are moving all regions from server A to server B. If server 
A is not reachable, and we fail all TRSP for region moves from A to B, the only 
alternative that the operator or software would be left with is stopping server 
A non-gracefully so that new SCP for server A can be processed by master.

We have this same problem for any region state transitions, any TRSP. 
[~vjasani] [~zhangduo] 

In some cases in our production we are seeing retries > 10 minutes to an 
unresponsive or dead regionserver. It's too much, too long. It cannot be 
required for an operator to step in every time to manually schedule a SCP for 
the unresponsive server. TRSP should abort itself, or the parent procedure of 
the TRSP should abort it, if the target server does not respond within a 
reasonable time bound. I am thinking 1 minute. The clock is ticking on the RIT 
while we are retrying RPCs to an unresponsive server. The time required to 
detect that the server is unresponsive should be fairly short, so the total RIT 
time remains fairly short. 

To start with, TRSP should not retry an effectively infinite number of times. 
If the total retry time is more than a minute or two, it should give up. Then, 
depending on the region state, either another server is chosen or the 
unresponsive server is fenced and killed with a forced SCP, which grabs the 
lease on the RS WAL to split it, killing the RS as desired.

We should maybe also consider adding active probes, liveness checks, and a 
predictive component (like Φ-accrual failure detection) so the master can 
identify sick or unresponsive regionservers before they impact production, and 
proactively fence and kill them with an SCP. 
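
As a back-of-the-envelope check on that time bound, here is a small arithmetic 
sketch (not HBase code) of the backoff quoted in the issue description below, 
delay = min(rsRpcRetryInterval * attempt^2, 10 s). The 100 ms interval and the 
60 s budget are assumptions chosen for illustration only.

{code:java}
public class DispatcherBackoffSketch {
  public static void main(String[] args) {
    final long rsRpcRetryIntervalMs = 100;   // assumed value, configurable in practice
    final long maxDelayMs = 10_000;          // the 10s cap from the quoted code
    final long retryBudgetMs = 60_000;       // the ~1 minute bound discussed above

    long elapsedMs = 0;
    int attempt = 0;
    while (elapsedMs <= retryBudgetMs) {
      attempt++;
      long delayMs = Math.min(rsRpcRetryIntervalMs * attempt * attempt, maxDelayMs);
      elapsedMs += delayMs;
      System.out.printf("try=%d delay=%dms elapsed=%dms%n", attempt, delayMs, elapsedMs);
    }
    // The per-attempt delay saturates at 10s after about ten tries, so without
    // an overall budget the dispatcher keeps retrying every ~10s indefinitely;
    // with a one-minute budget it would give up after roughly a dozen attempts.
    System.out.println("Would give up after attempt " + attempt);
  }
}
{code}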

> RSProcedureDispatcher to abort executing request after configurable retries
> ---
>
> Key: HBASE-28048
> URL: https://issues.apache.org/jira/browse/HBASE-28048
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha-4, 2.4.17, 2.5.5
>Reporter: Viraj Jasani
>Priority: Major
> Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1
>
>
> In a recent incident, we observed that RSProcedureDispatcher continues 
> executing region open/close procedures with unbounded retries even in the 
> presence of known failures like GSS initiate failure:
>  
> {code:java}
> 2023-08-25 02:21:02,821 WARN [ispatcher-pool-40777] 
> procedure.RSProcedureDispatcher - request to rs1,61020,1692930044498 failed 
> due to java.io.IOException: Call to address=rs1:61020 failed on local 
> exception: java.io.IOException: 
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS 
> initiate failed, try=0, retrying... {code}
>  
>  
> If the remote execution results in IOException, the dispatcher attempts to 
> schedule the procedure for further retries:
>  
> {code:java}
>     private boolean scheduleForRetry(IOException e) {
>       LOG.debug("Request to {} failed, try={}", serverName, 
> numberOfAttemptsSoFar, e);
>       // Should we wait a little before retrying? If the server is starting 
> it's yes.
>       ...
>       ...
>       ...
>       numberOfAttemptsSoFar++;
>       // Add some backoff here as the attempts rise otherwise if a stuck 
> condition, will fill logs
>       // with failed attempts. None of our backoff classes -- RetryCounter or 
> ClientBackoffPolicy
>       // -- fit here nicely so just do something simple; increment by 
> rsRpcRetryInterval millis *
>       // retry^2 on each try
>       // up to max of 10 seconds (don't want to back off too much in case of 
> situation change).
>       submitTask(this,
>         Math.min(rsRpcRetryInterval * (this.numberOfAttemptsSoFar * 
> this.numberOfAttemptsSoFar),
>           10 * 1000),
>         TimeUnit.MILLISECONDS);
>       return true;
>     }
>  {code}
>  
>  
> Even though we try to provide backoff while retrying, max wait time is 10s:
>  
> {code:java}
> submitTask(this,
>   Math.min(rsRpcRetryInterval * (this.numberOfAttemptsSoFar * 
> this.numberOfAttemptsSoFar),
> 10 * 1000),
>   TimeUnit.MILLISECONDS); {code}
>  
>  
> This results in an endless loop of retries, until either the underlying issue 
> is fixed (e.g. the krb issue in this case) or the regionserver is killed and 
> the ongoing open/close region procedure (and perhaps the entire SCP) for the 
> affected regionserver is sidelined manually.
> {code:java}
> 2023-08-25 03:04:18,918 WARN  [ispatcher-pool-41274] 
> procedure.RSProcedureDispatcher - request to rs1,61020,1692930044498 failed 
> due to java.io.IOException: Call to address=rs1:61020 failed on local 
> 

[jira] [Commented] (HBASE-27966) HBase Master/RS JVM metrics populated incorrectly

2023-08-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760616#comment-17760616
 ] 

Hudson commented on HBASE-27966:


Results for branch branch-2.5
[build #395 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/395/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/395/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/395/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/395/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/395/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HBase Master/RS JVM metrics populated incorrectly
> -
>
> Key: HBASE-27966
> URL: https://issues.apache.org/jira/browse/HBASE-27966
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.0.0-alpha-4
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 2.6.0, 2.5.6, 3.0.0-beta-1
>
> Attachments: test_patch.txt
>
>
> HBase Master/RS JVM metrics are populated incorrectly due to a regression, 
> which prevents the Ambari metrics system from capturing them.
> Based on my analysis the issue affects all releases after 2.0.0-alpha-4 and 
> seems to be caused by HBASE-18846.
> I have compared the JVM metrics across 3 versions of HBase and am attaching 
> the results below:
> HBase: 1.1.2
> {code:java}
> {
> "name" : "Hadoop:service=HBase,name=JvmMetrics",
> "modelerType" : "JvmMetrics",
> "tag.Context" : "jvm",
> "tag.ProcessName" : "RegionServer",
> "tag.SessionId" : "",
> "tag.Hostname" : "HOSTNAME",
> "MemNonHeapUsedM" : 196.05664,
> "MemNonHeapCommittedM" : 347.60547,
> "MemNonHeapMaxM" : 4336.0,
> "MemHeapUsedM" : 7207.315,
> "MemHeapCommittedM" : 66080.0,
> "MemHeapMaxM" : 66080.0,
> "MemMaxM" : 66080.0,
> "GcCount" : 3953,
> "GcTimeMillis" : 662520,
> "ThreadsNew" : 0,
> "ThreadsRunnable" : 214,
> "ThreadsBlocked" : 0,
> "ThreadsWaiting" : 626,
> "ThreadsTimedWaiting" : 78,
> "ThreadsTerminated" : 0,
> "LogFatal" : 0,
> "LogError" : 0,
> "LogWarn" : 0,
> "LogInfo" : 0
>   },
> {code}
> HBase 2.0.2
> {code:java}
> {
> "name" : "Hadoop:service=HBase,name=JvmMetrics",
> "modelerType" : "JvmMetrics",
> "tag.Context" : "jvm",
> "tag.ProcessName" : "IO",
> "tag.SessionId" : "",
> "tag.Hostname" : "HOSTNAME",
> "MemNonHeapUsedM" : 203.86688,
> "MemNonHeapCommittedM" : 740.6953,
> "MemNonHeapMaxM" : -1.0,
> "MemHeapUsedM" : 14879.477,
> "MemHeapCommittedM" : 31744.0,
> "MemHeapMaxM" : 31744.0,
> "MemMaxM" : 31744.0,
> "GcCount" : 75922,
> "GcTimeMillis" : 5134691,
> "ThreadsNew" : 0,
> "ThreadsRunnable" : 90,
> "ThreadsBlocked" : 3,
> "ThreadsWaiting" : 158,
> "ThreadsTimedWaiting" : 36,
> "ThreadsTerminated" : 0,
> "LogFatal" : 0,
> "LogError" : 0,
> "LogWarn" : 0,
> "LogInfo" : 0
>   },
> {code}
> HBase: 2.5.2
> {code:java}
> {
>   "name": "Hadoop:service=HBase,name=JvmMetrics",
>   "modelerType": "JvmMetrics",
>   "tag.Context": "jvm",
>   "tag.ProcessName": "IO",
>   "tag.SessionId": "",
>   "tag.Hostname": "HOSTNAME",
>   "MemNonHeapUsedM": 192.9798,
>   "MemNonHeapCommittedM": 198.4375,
>   "MemNonHeapMaxM": -1.0,
>   "MemHeapUsedM": 773.23584,
>   "MemHeapCommittedM": 1004.0,
>   "MemHeapMaxM": 1024.0,
>   "MemMaxM": 1024.0,
>   "GcCount": 2048,
>   "GcTimeMillis": 25440,
>   "ThreadsNew": 0,
>   "ThreadsRunnable": 22,
>   "ThreadsBlocked": 0,
>   "ThreadsWaiting": 121,
>   "ThreadsTimedWaiting": 49,
>   "ThreadsTerminated": 0,
>   "LogFatal": 0,
>   "LogError": 0,
>   "LogWarn": 0,
>   "LogInfo": 0
>  },
> {code}
> It can be observed that from 2.0.x onwards the field "tag.ProcessName" is 
> populated as "IO" instead of the expected "RegionServer" or "Master".
> Ambari relies on this process name field to create a metric 
> 
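
A quick way to check a running process for this symptom (a sketch under 
assumptions: default info-server port 16030 for a RegionServer, 16010 for the 
Master, and the standard /jmx servlet) is to fetch the JvmMetrics bean and 
inspect tag.ProcessName in the returned JSON:

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CheckJvmMetricsTag {
  public static void main(String[] args) throws Exception {
    // Adjust host/port for your deployment; 16030 is the usual RS info port.
    String url = "http://localhost:16030/jmx?qry=Hadoop:service=HBase,name=JvmMetrics";
    HttpResponse<String> resp = HttpClient.newHttpClient()
      .send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
            HttpResponse.BodyHandlers.ofString());
    // On affected versions the JSON shows "tag.ProcessName" : "IO" instead of
    // "RegionServer" (or "Master" when querying the master's info port).
    System.out.println(resp.body());
  }
}
{code}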

[GitHub] [hbase-connectors] Apache-HBase commented on pull request #123: [DO NOT MERGE] HBASE-28054 Test spotless fail

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #123:
URL: https://github.com/apache/hbase-connectors/pull/123#issuecomment-1699762633

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hbase.apache.org/job/HBase-Connectors-PreCommit/job/PR-123/1/console 
in case of problems.
   





[GitHub] [hbase-connectors] NihalJain commented on pull request #122: HBASE-28054 Add spotless in hbase-connectors pre commit check

2023-08-30 Thread via GitHub


NihalJain commented on PR #122:
URL: https://github.com/apache/hbase-connectors/pull/122#issuecomment-1699677789

   Hi @Apache9, @Reidddddd Could you please review?





[GitHub] [hbase-connectors] Apache-HBase commented on pull request #122: HBASE-28054 Add spotless in hbase-connectors pre commit check

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #122:
URL: https://github.com/apache/hbase-connectors/pull/122#issuecomment-1699673447

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hbase.apache.org/job/HBase-Connectors-PreCommit/job/PR-122/1/console 
in case of problems.
   





[GitHub] [hbase-connectors] NihalJain commented on pull request #122: HBASE-28054 Add spotless in hbase-connectors pre commit check

2023-08-30 Thread via GitHub


NihalJain commented on PR #122:
URL: https://github.com/apache/hbase-connectors/pull/122#issuecomment-1699672530

   Ran spotless locally and ensured it failed with a bad patch
   ```
   | Vote |Subsystem |  Runtime   | Comment
   
   +---
   |  |  || Prechecks
   +---
   +---
   |  |  || HBASE-28054 Compile Tests
   +---
   |   0  |  mvndep  |   0m 10s   | Maven dependency ordering for branch
   |  +1  |spotless  |   0m 19s   | branch has no errors when running
   |  |  || spotless:check.
   +---
   |  |  || Patch Compile Tests
   +---
   |   0  |  mvndep  |   0m 09s   | Maven dependency ordering for patch
   |  -1  |spotless  |   0m 19s   | patch has 26 errors when running
   |  |  || spotless:check, run spotless:apply to 
fix.
   +---
   |  |  || Other Tests
   +---
   |  |  |   1m 25s   |
   ```
   
   Output of spotless run file:
   ```
   ➜  hbase-connectors git:(HBASE-28054) ✗ cat 
/private/tmp/yetus-21136.6492/patch-spotless.txt
   Thu Aug 31 00:09:40 IST 2023
   cd /Users/nihaljain/code/os/hbase-connectors
   mvn --offline --batch-mode spotless:check
   [INFO] Scanning for projects...
   [INFO] 

   [INFO] Reactor Build Order:
   [INFO]
   [INFO] Apache HBase Connectors
[pom]
   [INFO] Apache HBase - Kafka   
[pom]
   [INFO] Apache HBase - Model Objects for Kafka Proxy   
[jar]
   [INFO] Apache HBase - Kafka Proxy 
[jar]
   [INFO] Apache HBase - Spark   
[pom]
   [INFO] Apache HBase - Spark Protocol  
[jar]
   [INFO] Apache HBase - Spark Protocol (Shaded) 
[jar]
   [INFO] Apache HBase - Spark Connector 
[jar]
   [INFO] Apache HBase - Spark Integration Tests 
[jar]
   [INFO] Apache HBase Connectors - Assembly 
[pom]
   [INFO]
   [INFO] < org.apache.hbase.connectors:hbase-connectors 
>
   [INFO] Building Apache HBase Connectors 1.0.1-SNAPSHOT   
[1/10]
   [INFO] [ pom 
]-
   [INFO]
   [INFO] --- spotless-maven-plugin:2.27.2:check (default-cli) @ 
hbase-connectors ---
   [INFO] Sorting file 
/var/folders/xb/j3j5fp5153g06gh5sdkmhf7cgq/T/pom2175271538862894342.xml
   [INFO] Pom file is already sorted, exiting
   [INFO]
   [INFO] -< org.apache.hbase.connectors:kafka 
>--
   [INFO] Building Apache HBase - Kafka 1.0.1-SNAPSHOT  
[2/10]
   [INFO] [ pom 
]-
   [INFO]
   [INFO] --- spotless-maven-plugin:2.27.2:check (default-cli) @ kafka ---
   [INFO] Sorting file 
/var/folders/xb/j3j5fp5153g06gh5sdkmhf7cgq/T/pom4621269930024559741.xml
   [INFO] Pom file is already sorted, exiting
   [INFO]
   [INFO] < org.apache.hbase.connectors.kafka:hbase-kafka-model 
>-
   [INFO] Building Apache HBase - Model Objects for Kafka Proxy 1.0.1-SNAPSHOT 
[3/10]
   [INFO] [ jar 
]-
   [INFO]
   [INFO] --- spotless-maven-plugin:2.27.2:check (default-cli) @ 
hbase-kafka-model ---
   [INFO] Sorting file 
/var/folders/xb/j3j5fp5153g06gh5sdkmhf7cgq/T/pom503402107018101105.xml
   [INFO] Pom file is already sorted, exiting
   [INFO]
   [INFO] < org.apache.hbase.connectors.kafka:hbase-kafka-proxy 
>-
   [INFO] Building Apache HBase - Kafka Proxy 1.0.1-SNAPSHOT
[4/10]
   [INFO] [ jar 
]-
   [INFO]
   [INFO] --- spotless-maven-plugin:2.27.2:check (default-cli) @ 
hbase-kafka-proxy ---
   [INFO] Sorting file 
/var/folders/xb/j3j5fp5153g06gh5sdkmhf7cgq/T/pom677669518756705814.xml
   [INFO] Pom file is already sorted, exiting
   [INFO]
   [INFO] -< 

[jira] [Updated] (HBASE-28054) [hbase-connectors] Add spotless in hbase-connectors pre commit check

2023-08-30 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-28054:
---
Status: Patch Available  (was: In Progress)

> [hbase-connectors] Add spotless in hbase-connectors pre commit check
> 
>
> Key: HBASE-28054
> URL: https://issues.apache.org/jira/browse/HBASE-28054
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, community, hbase-connectors, jenkins
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
>






[GitHub] [hbase] Apache-HBase commented on pull request #5377: HBASE-28051 The annotation about RegionProcedureStore.delete is not right

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5377:
URL: https://github.com/apache/hbase/pull/5377#issuecomment-1699641050

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 35s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 58s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 13s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 54s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 219m  2s |  hbase-server in the patch failed.  |
   |  |   | 240m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5377/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5377 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 1a43fb6658d5 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 5527dd9453 |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5377/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5377/1/testReport/
 |
   | Max. process+thread count | 4456 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5377/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5377: HBASE-28051 The annotation about RegionProcedureStore.delete is not right

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5377:
URL: https://github.com/apache/hbase/pull/5377#issuecomment-1699633815

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  2s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 12s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 44s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  6s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 211m 19s |  hbase-server in the patch passed.  
|
   |  |   | 234m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5377/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5377 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 28d5acd588b1 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 5527dd9453 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5377/1/testReport/
 |
   | Max. process+thread count | 4711 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5377/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HBASE-28054) [hbase-connectors] Add spotless in hbase-connectors pre commit check

2023-08-30 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-28054:
---
Component/s: build
 community
 hbase-connectors
 jenkins

> [hbase-connectors] Add spotless in hbase-connectors pre commit check
> 
>
> Key: HBASE-28054
> URL: https://issues.apache.org/jira/browse/HBASE-28054
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, community, hbase-connectors, jenkins
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
>






[jira] [Updated] (HBASE-28054) [hbase-connectors] Add spotless in hbase-connectors pre commit check

2023-08-30 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-28054:
---
Summary: [hbase-connectors] Add spotless in hbase-connectors pre commit 
check  (was: [hbase-connectors] Add spotless in hbase-connectors pre commit 
build)

> [hbase-connectors] Add spotless in hbase-connectors pre commit check
> 
>
> Key: HBASE-28054
> URL: https://issues.apache.org/jira/browse/HBASE-28054
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
>






[jira] [Work started] (HBASE-28054) [hbase-connectors] Add spotless in hbase-connectors pre commit build

2023-08-30 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-28054 started by Nihal Jain.
--
> [hbase-connectors] Add spotless in hbase-connectors pre commit build
> 
>
> Key: HBASE-28054
> URL: https://issues.apache.org/jira/browse/HBASE-28054
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
>






[jira] [Assigned] (HBASE-27177) [hbase-connectors] Add checkstyle check in hbase-connectors pre commit build

2023-08-30 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain reassigned HBASE-27177:
--

Assignee: (was: Nihal Jain)

> [hbase-connectors] Add checkstyle check in hbase-connectors pre commit build
> 
>
> Key: HBASE-27177
> URL: https://issues.apache.org/jira/browse/HBASE-27177
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, hbase-connectors, jenkins
>Reporter: Duo Zhang
>Priority: Major
>






[jira] [Updated] (HBASE-27177) [hbase-connectors] Add checkstyle check in hbase-connectors pre commit build

2023-08-30 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-27177:
---
Summary: [hbase-connectors] Add checkstyle check in hbase-connectors pre 
commit build  (was: [hbase-connectors] Add spotless in hbase-connectors pre 
commit build)

> [hbase-connectors] Add checkstyle check in hbase-connectors pre commit build
> 
>
> Key: HBASE-27177
> URL: https://issues.apache.org/jira/browse/HBASE-27177
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, hbase-connectors, jenkins
>Reporter: Duo Zhang
>Assignee: Nihal Jain
>Priority: Major
>






[jira] [Commented] (HBASE-27177) [hbase-connectors] Add spotless in hbase-connectors pre commit build

2023-08-30 Thread Nihal Jain (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760535#comment-17760535
 ] 

Nihal Jain commented on HBASE-27177:


It seems this issue was for checkstyle; I will add spotless as part of HBASE-28054.

> [hbase-connectors] Add spotless in hbase-connectors pre commit build
> 
>
> Key: HBASE-27177
> URL: https://issues.apache.org/jira/browse/HBASE-27177
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, hbase-connectors, jenkins
>Reporter: Duo Zhang
>Assignee: Nihal Jain
>Priority: Major
>






[GitHub] [hbase] ankitsinghal commented on a diff in pull request #5371: HBASE-28044 Reduce frequency of saving backing map in persistence cache

2023-08-30 Thread via GitHub


ankitsinghal commented on code in PR #5371:
URL: https://github.com/apache/hbase/pull/5371#discussion_r1310571851


##
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java:
##
@@ -1700,17 +1853,17 @@ public BucketEntry writeToCache(final IOEngine 
ioEngine, final BucketAllocator a
   HFileBlock block = (HFileBlock) data;
   ByteBuff sliceBuf = block.getBufferReadOnly();
   block.getMetaData(metaBuff);
-  ioEngine.write(sliceBuf, offset);
   // adds the cache time after the block and metadata part
   if (isCachePersistent) {
-ioEngine.write(metaBuff, offset + len - metaBuff.limit() - 
Long.BYTES);
 ByteBuffer buffer = ByteBuffer.allocate(Long.BYTES);
 buffer.putLong(bucketEntry.getCachedTime());
 buffer.rewind();
-ioEngine.write(buffer, (offset + len - Long.BYTES));
+ioEngine.write(buffer, offset);
+ioEngine.write(sliceBuf, offset + Long.BYTES);

Review Comment:
   I'm wondering, since we are changing the order of reads/writes, whether they 
are backward compatible if someone already has a bucket cache persisted across 
a rolling restart?
   
   Also, I see the reads are updated only in FileIOEngine; don't we need to 
update the other engines too?
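
   One generic way to handle the compatibility concern raised here (a sketch of 
the general technique only, not what this PR does; the names are hypothetical) 
is to tag the persisted metadata with a layout version and either branch on it 
or discard the old cache when restoring:
   ```java
   // Sketch: a layout-version header for a persisted cache file.
   import java.io.*;
   
   public class LayoutVersionSketch {
     static final int LAYOUT_V1 = 1; // e.g. cached time written after block + metadata
     static final int LAYOUT_V2 = 2; // e.g. cached time written before the block
   
     static void writeHeader(DataOutput out) throws IOException {
       out.writeInt(LAYOUT_V2);
     }
   
     static int readLayoutVersion(DataInput in) throws IOException {
       int version = in.readInt();
       if (version != LAYOUT_V1 && version != LAYOUT_V2) {
         throw new IOException("Unknown persisted cache layout version " + version);
       }
       return version; // caller picks the matching read path, or drops the cache
     }
   
     public static void main(String[] args) throws IOException {
       ByteArrayOutputStream buf = new ByteArrayOutputStream();
       writeHeader(new DataOutputStream(buf));
       System.out.println(readLayoutVersion(
           new DataInputStream(new ByteArrayInputStream(buf.toByteArray()))));
     }
   }
   ```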






[jira] [Created] (HBASE-28054) [hbase-connectors] Add spotless in hbase-connectors pre commit build

2023-08-30 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28054:
--

 Summary: [hbase-connectors] Add spotless in hbase-connectors pre 
commit build
 Key: HBASE-28054
 URL: https://issues.apache.org/jira/browse/HBASE-28054
 Project: HBase
  Issue Type: Sub-task
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Assigned] (HBASE-27177) [hbase-connectors] Add checkstyle check in hbase-connectors pre commit build

2023-08-30 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain reassigned HBASE-27177:
--

Assignee: Nihal Jain

> [hbase-connectors] Add checkstyle check in hbase-connectors pre commit build
> 
>
> Key: HBASE-27177
> URL: https://issues.apache.org/jira/browse/HBASE-27177
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, hbase-connectors, jenkins
>Reporter: Duo Zhang
>Assignee: Nihal Jain
>Priority: Major
>






[jira] [Updated] (HBASE-27177) [hbase-connectors] Add spotless in hbase-connectors pre commit build

2023-08-30 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-27177:
---
Summary: [hbase-connectors] Add spotless in hbase-connectors pre commit 
build  (was: [hbase-connectors] Add checkstyle check in hbase-connectors pre 
commit build)

> [hbase-connectors] Add spotless in hbase-connectors pre commit build
> 
>
> Key: HBASE-27177
> URL: https://issues.apache.org/jira/browse/HBASE-27177
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, hbase-connectors, jenkins
>Reporter: Duo Zhang
>Assignee: Nihal Jain
>Priority: Major
>






[jira] [Assigned] (HBASE-27978) [hbase-operator-tools] Add spotless in hbase-operator-tools pre-commit build

2023-08-30 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain reassigned HBASE-27978:
--

Assignee: Nihal Jain

> [hbase-operator-tools] Add spotless in hbase-operator-tools pre-commit build
> 
>
> Key: HBASE-27978
> URL: https://issues.apache.org/jira/browse/HBASE-27978
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, hbase-operator-tools
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
>






[GitHub] [hbase] ankitsinghal commented on a diff in pull request #5371: HBASE-28044 Reduce frequency of saving backing map in persistence cache

2023-08-30 Thread via GitHub


ankitsinghal commented on code in PR #5371:
URL: https://github.com/apache/hbase/pull/5371#discussion_r1310496448


##
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java:
##
@@ -1285,12 +1320,104 @@ void persistToFile() throws IOException {
   LOG.warn("Failed to commit cache persistent file. We might lose cached 
blocks if "
 + "RS crashes/restarts before we successfully checkpoint again.");
 }
+LOG.debug("Saving current state of bucket cache index map took {}ms.",
+  EnvironmentEdgeManager.currentTime() - startTime);
+  }
+
+  private void recordTransaction(BlockCacheKey key, BucketEntry bucketEntry,
+BucketCacheProtos.TransactionType type) {
+if (persistencePath != null) {
+  File path = new File(persistencePath + "tx-" + System.nanoTime());
+  long startTime = EnvironmentEdgeManager.currentTime();
+  try (FileOutputStream fos = new FileOutputStream(path, false)) {
+fos.write(ProtobufMagic.PB_MAGIC);
+BucketProtoUtils.toPB(this, key, bucketEntry, 
type).writeDelimitedTo(fos);
+txsCount.incrementAndGet();
+fos.flush();
+  } catch (Exception e) {
+LOG.error("Failed to record cache transaction {} for key {}. In the 
event of a crash, "
+  + "this key would require a re-cache.", type.name(), key, e);
+  }
+  LOG.debug("Cache transaction recording took {}ms",
+EnvironmentEdgeManager.currentTime() - startTime);
+}

Review Comment:
   We have this in the critical path of reads; doing it synchronously could 
hurt read performance in both cases, eviction and addition of blocks. This will 
also generate a large number of files, since we are creating a new file for 
every transaction. Or are we doing this only in a very special case?
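
   For context on the alternative hinted at here, below is a sketch of one way 
to take the per-record persistence off the read path (illustrative only, not 
the PR's implementation; class and field names are hypothetical): readers just 
enqueue a small record, and a single background thread batches the records into 
one append-only file.
   ```java
   import java.io.DataOutputStream;
   import java.io.FileOutputStream;
   import java.io.IOException;
   import java.util.ArrayList;
   import java.util.List;
   import java.util.concurrent.BlockingQueue;
   import java.util.concurrent.LinkedBlockingQueue;
   import java.util.concurrent.TimeUnit;
   
   public class AsyncTxRecorder implements AutoCloseable {
     record Tx(String key, boolean addition) {}
   
     private final BlockingQueue<Tx> queue = new LinkedBlockingQueue<>(10_000);
     private final Thread drainer;
     private volatile boolean running = true;
   
     public AsyncTxRecorder(String path) throws IOException {
       DataOutputStream out = new DataOutputStream(new FileOutputStream(path, true));
       drainer = new Thread(() -> {
         List<Tx> batch = new ArrayList<>();
         try (out) {
           while (running || !queue.isEmpty()) {
             Tx first = queue.poll(100, TimeUnit.MILLISECONDS);
             if (first == null) continue;
             batch.add(first);
             queue.drainTo(batch);            // grab whatever else is queued
             for (Tx tx : batch) {
               out.writeBoolean(tx.addition());
               out.writeUTF(tx.key());
             }
             out.flush();                     // one flush per batch, not per record
             batch.clear();
           }
         } catch (IOException | InterruptedException e) {
           Thread.currentThread().interrupt();
         }
       }, "cache-tx-recorder");
       drainer.start();
     }
   
     /** Called from the read path: non-blocking, drops the record if the queue is full. */
     public void record(String key, boolean addition) {
       queue.offer(new Tx(key, addition));
     }
   
     @Override
     public void close() throws InterruptedException {
       running = false;
       drainer.join();
     }
   }
   ```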






[GitHub] [hbase] Apache-HBase commented on pull request #5376: HBASE-27999 Implement cache prefetch aware load balancer

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5376:
URL: https://github.com/apache/hbase/pull/5376#issuecomment-1699462765

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-27389 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 25s |  HBASE-27389 passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  HBASE-27389 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 31s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  HBASE-27389 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 15s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   9m 23s |  hbase-balancer in the patch 
passed.  |
   | -1 :x: |  unit  | 238m 44s |  hbase-server in the patch failed.  |
   |  |   | 273m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5376 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7989ea11055c 5.4.0-152-generic #169-Ubuntu SMP Tue Jun 6 
22:23:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27389 / 5e2cc6363b |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/testReport/
 |
   | Max. process+thread count | 4604 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-balancer hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5376: HBASE-27999 Implement cache prefetch aware load balancer

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5376:
URL: https://github.com/apache/hbase/pull/5376#issuecomment-1699449773

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 13s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-27389 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 29s |  HBASE-27389 passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  HBASE-27389 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 41s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  HBASE-27389 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 19s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 38s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 18s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   9m  8s |  hbase-balancer in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 229m 15s |  hbase-server in the patch passed.  
|
   |  |   | 266m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5376 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 22e85db7d1f6 5.4.0-153-generic #170-Ubuntu SMP Fri Jun 16 
13:43:31 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27389 / 5e2cc6363b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/testReport/
 |
   | Max. process+thread count | 4702 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-balancer hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HBASE-28053) ServerCrashProcedure seems to fail when using Hadoop3.3.1+

2023-08-30 Thread aplio (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aplio updated HBASE-28053:
--
Description: 
HBase Cluster Issue with Server Crash Procedure After Region Server Goes Down

We are running an HBase cluster with version 2.5.5 (HBase jar sourced from the 
[HBase download page|https://hbase.apache.org/downloads.html] under 
hadoop3-bin) paired with Hadoop version 3.3.2. When a region server went down 
and initiated a serverCrashProcedure, we encountered an exception. This 
exception prevented our cluster from recovering.

Below is a snippet of the exception:
{code:java}
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(300)) - Splitting 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K (16082bytes)
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverDFSFileLease(86)) - Recover lease on dfs file 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
2023-08-28 21:02:52,164 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverLease(175)) - Recovered lease, attempt=0 on 
file=hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
 after 0ms
2023-08-28 21:02:52,167 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(423)) - Processed 0 edits across 0 Regions in 4 ms; 
skipped=0; 
WAL=hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K, length=16082, corrupted=false, cancelled=false
2023-08-28 21:02:52,167 ERROR 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] 
handler.RSProcedureHandler (RSProcedureHandler.java:process(53)) - pid=5848252
java.lang.NoSuchMethodError: 'org.apache.hadoop.hdfs.protocol.DatanodeInfo[] 
org.apache.hadoop.hdfs.protocol.LocatedBlock.getLocations()'
at 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks.reorderBlocks(HFileSystem.java:428)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:367){code}
Upon investigation, this seems to be a consequence of the changes introduced in 
Hadoop 3.3.1 under HDFS-15255. The getLocations method of LocatedBlock has been 
modified from returning a DatanodeInfo[] to a DatanodeStorageInfo[]. However, 
HBase 2.5.5 still references DatanodeInfo[] in HFileSystem.java:428, leading to 
the aforementioned exception. You can view the relevant HBase code 
[here|https://github.com/apache/hbase/blob/7ebd4381261fefd78fc2acf258a95184f4147cee/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java#L428].

A potential solution we identified is to rebuild HBase using the patch 
available at the repository below, which appears to rectify the issue (at least 
for now):
[https://github.com/aplio/hbase/tree/monkeypatch/fix-serverClashProcedure-caused-by-hbase-3-dataNodeInfo-change]

 

The following issue helped us investigate and fix this:

https://issues.apache.org/jira/browse/HBASE-26198

 

I'd like to submit a PR to the HBase documentation stating that Hadoop 3.3.1 
and later versions are not compatible with HBase (specifically version 2.5.5), 
provided that this bug is confirmed (or if my observations are accurate).

  was:
HBase Cluster Issue with Server Crash Procedure After Region Server Goes Down

We are running an HBase cluster with version 2.5.5 (HBase jar sourced from the 
[HBase download page|https://hbase.apache.org/downloads.html] under 
hadoop3-bin) paired with Hadoop version 3.3.2. When a region server went down 
and initiated a serverCrashProcedure, we encountered an exception. This 
exception prevented our cluster from recovering.

Below is a snippet of the exception:
{code:java}
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(300)) - Splitting 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K (16082bytes)
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverDFSFileLease(86)) - Recover lease on dfs file 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
2023-08-28 21:02:52,164 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverLease(175)) - Recovered lease, attempt=0 on 

[jira] [Updated] (HBASE-28053) ServerCrashProcedure seems to fail when using Hadoop3.3.1+

2023-08-30 Thread aplio (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aplio updated HBASE-28053:
--
Description: 
HBase Cluster Issue with Server Crash Procedure After Region Server Goes Down

We are running an HBase cluster with version 2.5.5 (HBase jar sourced from the 
[HBase download page|https://hbase.apache.org/downloads.html] under 
hadoop3-bin) paired with Hadoop version 3.3.2. When a region server went down 
and initiated a serverCrashProcedure, we encountered an exception. This 
exception prevented our cluster from recovering.

Below is a snippet of the exception:
{code:java}
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(300)) - Splitting 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K (16082bytes)
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverDFSFileLease(86)) - Recover lease on dfs file 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
2023-08-28 21:02:52,164 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverLease(175)) - Recovered lease, attempt=0 on 
file=hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
 after 0ms
2023-08-28 21:02:52,167 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(423)) - Processed 0 edits across 0 Regions in 4 ms; 
skipped=0; 
WAL=hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K, length=16082, corrupted=false, cancelled=false
2023-08-28 21:02:52,167 ERROR 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] 
handler.RSProcedureHandler (RSProcedureHandler.java:process(53)) - pid=5848252
java.lang.NoSuchMethodError: 'org.apache.hadoop.hdfs.protocol.DatanodeInfo[] 
org.apache.hadoop.hdfs.protocol.LocatedBlock.getLocations()'
at 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks.reorderBlocks(HFileSystem.java:428)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:367){code}
Upon investigation, this seems to be a consequence of the changes introduced in 
Hadoop 3.3.1 under HDFS-15255. The getLocations method of LocatedBlock has been 
modified from returning a DatanodeInfo[] to a DatanodeStorageInfo[]. However, 
HBase 2.5.5 still references DatanodeInfo[] in HFileSystem.java:428, leading to 
the aforementioned exception. You can view the relevant HBase code 
[here|https://github.com/apache/hbase/blob/7ebd4381261fefd78fc2acf258a95184f4147cee/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java#L428].

A potential workaround we identified is to rebuild HBase with the patch available 
in the repository below; this appears to rectify the issue, at least for now.
[https://github.com/aplio/hbase/tree/monkeypatch/fix-serverClashProcedure-caused-by-hbase-3-dataNodeInfo-change]

 

The following issue helped us investigate and fix the problem:

https://issues.apache.org/jira/browse/HBASE-26198

  was:
HBase Cluster Issue with Server Crash Procedure After Region Server Goes Down

We are running an HBase cluster with version 2.5.5 (HBase jar sourced from the 
[HBase download page|https://hbase.apache.org/downloads.html] under 
hadoop3-bin) paired with Hadoop version 3.3.2. When a region server went down 
and initiated a serverCrashProcedure, we encountered an exception. This 
exception prevented our cluster from recovering.

Below is a snippet of the exception:

```
{code:java}
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(300)) - Splitting 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K (16082bytes)
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverDFSFileLease(86)) - Recover lease on dfs file 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
2023-08-28 21:02:52,164 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverLease(175)) - Recovered lease, attempt=0 on 
file=hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
 after 0ms
2023-08-28 21:02:52,167 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(423)) - 

[jira] [Updated] (HBASE-28053) ServerCrashProcedure seems to fail when using Hadoop3.3.1+

2023-08-30 Thread aplio (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aplio updated HBASE-28053:
--
Description: 
HBase Cluster Issue with Server Crash Procedure After Region Server Goes Down

We are running an HBase cluster with version 2.5.5 (HBase jar sourced from the 
[HBase download page|https://hbase.apache.org/downloads.html] under 
hadoop3-bin) paired with Hadoop version 3.3.2. When a region server went down 
and initiated a serverCrashProcedure, we encountered an exception. This 
exception prevented our cluster from recovering.

Below is a snippet of the exception:

```
{code:java}
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(300)) - Splitting 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K (16082bytes)
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverDFSFileLease(86)) - Recover lease on dfs file 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
2023-08-28 21:02:52,164 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverLease(175)) - Recovered lease, attempt=0 on 
file=hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
 after 0ms
2023-08-28 21:02:52,167 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(423)) - Processed 0 edits across 0 Regions in 4 ms; 
skipped=0; 
WAL=hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K, length=16082, corrupted=false, cancelled=false
2023-08-28 21:02:52,167 ERROR 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] 
handler.RSProcedureHandler (RSProcedureHandler.java:process(53)) - pid=5848252
java.lang.NoSuchMethodError: 'org.apache.hadoop.hdfs.protocol.DatanodeInfo[] 
org.apache.hadoop.hdfs.protocol.LocatedBlock.getLocations()'
at 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks.reorderBlocks(HFileSystem.java:428)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:367){code}
Upon investigation, this seems to be a consequence of the changes introduced in 
Hadoop 3.3.1 under HDFS-15255. The getLocations method of LocatedBlock has been 
modified from returning a DatanodeInfo[] to a DatanodeStorageInfo[]. However, 
HBase 2.5.5 still references DatanodeInfo[] in HFileSystem.java:428, leading to 
the aforementioned exception. You can view the relevant HBase code 
[here|https://github.com/apache/hbase/blob/7ebd4381261fefd78fc2acf258a95184f4147cee/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java#L428].

A potential workaround we identified is to rebuild HBase with the patch available 
in the repository below; this appears to rectify the issue, at least for now.
[https://github.com/aplio/hbase/tree/monkeypatch/fix-serverClashProcedure-caused-by-hbase-3-dataNodeInfo-change]

 

The following issue helped us investigate and fix the problem:

https://issues.apache.org/jira/browse/HBASE-26198

  was:
HBase Cluster Issue with Server Crash Procedure After Region Server Goes Down

We are running an HBase cluster with version 2.5.5 (HBase jar sourced from the 
[HBase download page|https://hbase.apache.org/downloads.html] under 
hadoop3-bin) paired with Hadoop version 3.3.2. When a region server went down 
and initiated a serverCrashProcedure, we encountered an exception. This 
exception prevented our cluster from recovering.

Below is a snippet of the exception:

```
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(300)) - Splitting 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K (16082bytes)
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverDFSFileLease(86)) - Recover lease on dfs file 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
2023-08-28 21:02:52,164 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverLease(175)) - Recovered lease, attempt=0 on 
file=hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
 after 0ms
2023-08-28 21:02:52,167 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(423)) - 

[GitHub] [hbase] Apache-HBase commented on pull request #5378: HBASE-28052 Removing the useless parameters from ProcedureExecutor.loadProcedures

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5378:
URL: https://github.com/apache/hbase/pull/5378#issuecomment-1699410500

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 20s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 10s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 47s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 29s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 23s |  hbase-procedure generated 0 new + 
43 unchanged - 1 fixed = 43 total (was 44)  |
   | +1 :green_heart: |  checkstyle  |   0m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 36s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.5.  |
   | +1 :green_heart: |  spotless  |   0m 47s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  30m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5378/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5378 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 4330ddc34a56 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 5527dd9453 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 79 (vs. ulimit of 3) |
   | modules | C: hbase-procedure U: hbase-procedure |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5378/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (HBASE-28053) ServerCrashProcedure seems to fail when using Hadoop3.3.1+

2023-08-30 Thread aplio (Jira)
aplio created HBASE-28053:
-

 Summary: ServerCrashProcedure seems to fail when using Hadoop3.3.1+
 Key: HBASE-28053
 URL: https://issues.apache.org/jira/browse/HBASE-28053
 Project: HBase
  Issue Type: Bug
  Components: hadoop3, wal
Reporter: aplio


HBase Cluster Issue with Server Crash Procedure After Region Server Goes Down

We are running an HBase cluster with version 2.5.5 (HBase jar sourced from the 
[HBase download page|https://hbase.apache.org/downloads.html] under 
hadoop3-bin) paired with Hadoop version 3.3.2. When a region server went down 
and initiated a serverCrashProcedure, we encountered an exception. This 
exception prevented our cluster from recovering.

Below is a snippet of the exception:

```
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(300)) - Splitting 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K (16082bytes)
2023-08-28 21:02:52,163 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverDFSFileLease(86)) - Recover lease on dfs file 
hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
2023-08-28 21:02:52,164 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] util.RecoverLeaseFSUtils 
(RecoverLeaseFSUtils.java:recoverLease(175)) - Recovered lease, attempt=0 on 
file=hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056
 after 0ms
2023-08-28 21:02:52,167 INFO 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] wal.WALSplitter 
(WALSplitter.java:splitWAL(423)) - Processed 0 edits across 0 Regions in 4 ms; 
skipped=0; 
WAL=hdfs://hbase:8020/hbase/WALs/HOSTNAME_HERE,16020,1693214237545-splitting/HOSTNAME_HERE%2C16020%2C1693214237545.1693214243056,
 size=15.7 K, length=16082, corrupted=false, cancelled=false
2023-08-28 21:02:52,167 ERROR 
[RS_LOG_REPLAY_OPS-regionserver/HOSTNAME_HERE:16020-1] 
handler.RSProcedureHandler (RSProcedureHandler.java:process(53)) - pid=5848252
java.lang.NoSuchMethodError: 'org.apache.hadoop.hdfs.protocol.DatanodeInfo[] 
org.apache.hadoop.hdfs.protocol.LocatedBlock.getLocations()'
at 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks.reorderBlocks(HFileSystem.java:428)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:367)
```

Upon investigation, this seems to be a consequence of the changes introduced in 
Hadoop 3.3.1 under HDFS-15255. The getLocations method of LocatedBlock has been 
modified from returning a DatanodeInfo[] to a DatanodeStorageInfo[]. However, 
HBase 2.5.5 still references DatanodeInfo[] in HFileSystem.java:428, leading to 
the aforementioned exception. You can view the relevant HBase code 
[here|https://github.com/apache/hbase/blob/7ebd4381261fefd78fc2acf258a95184f4147cee/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java#L428].

A potential workaround we identified is to rebuild HBase with the patch available 
in the repository below; this appears to rectify the issue, at least for now.
https://github.com/aplio/hbase/tree/monkeypatch/fix-serverClashProcedure-caused-by-hbase-3-dataNodeInfo-change



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #5378: HBASE-28052 Removing the useless parameters from ProcedureExecutor.loadProcedures

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5378:
URL: https://github.com/apache/hbase/pull/5378#issuecomment-1699399424

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 14s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 15s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 33s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 52s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 33s |  hbase-procedure in the patch 
passed.  |
   |  |   |  24m 47s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5378/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5378 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 9096af784d87 5.4.0-153-generic #170-Ubuntu SMP Fri Jun 16 
13:43:31 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 5527dd9453 |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5378/1/testReport/
 |
   | Max. process+thread count | 247 (vs. ulimit of 3) |
   | modules | C: hbase-procedure U: hbase-procedure |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5378/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5378: HBASE-28052 Removing the useless parameters from ProcedureExecutor.loadProcedures

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5378:
URL: https://github.com/apache/hbase/pull/5378#issuecomment-1699391089

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 54s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 39s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 42s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 26s |  hbase-procedure in the patch 
passed.  |
   |  |   |  19m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5378/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5378 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 4dc154e7505b 5.4.0-156-generic #173-Ubuntu SMP Tue Jul 11 
07:25:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 5527dd9453 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5378/1/testReport/
 |
   | Max. process+thread count | 268 (vs. ulimit of 3) |
   | modules | C: hbase-procedure U: hbase-procedure |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5378/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5377: HBASE-28051 The annotation about RegionProcedureStore.delete is not right

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5377:
URL: https://github.com/apache/hbase/pull/5377#issuecomment-1699335446

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 51s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 25s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 43s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 30s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 23s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   9m  5s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.5.  |
   | +1 :green_heart: |  spotless  |   0m 43s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 11s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  31m 42s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5377/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5377 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 13e7b96ce9d6 5.4.0-156-generic #173-Ubuntu SMP Tue Jul 11 
07:25:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 5527dd9453 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 79 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5377/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (HBASE-28052) Removing the useless parameters from ProcedureExecutor.loadProcedures

2023-08-30 Thread guluo (Jira)
guluo created HBASE-28052:
-

 Summary: Removing the useless parameters from 
ProcedureExecutor.loadProcedures
 Key: HBASE-28052
 URL: https://issues.apache.org/jira/browse/HBASE-28052
 Project: HBase
  Issue Type: Improvement
  Components: proc-v2
Affects Versions: 2.4.13
Reporter: guluo


In this method, the parameter abortOnCorruption is useless, so it is recommended 
to remove it. A simplified sketch of the proposed cleanup is shown below.
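
Illustratively, the change amounts to dropping an argument that the method body never reads. The sketch below uses simplified, made-up names; it is not the actual ProcedureExecutor code:
{code:java}
// A rough, self-contained illustration of removing an unused parameter.
import java.util.Iterator;
import java.util.List;

class LoadProceduresSketch {
  // Before: the boolean flag is accepted but never read in the method body.
  static void loadProcedures(Iterator<String> procIter, boolean abortOnCorruption) {
    procIter.forEachRemaining(p -> System.out.println("loaded " + p));
  }

  // After: the unused parameter is dropped; behaviour is unchanged.
  static void loadProcedures(Iterator<String> procIter) {
    procIter.forEachRemaining(p -> System.out.println("loaded " + p));
  }

  public static void main(String[] args) {
    loadProcedures(List.of("proc-1", "proc-2").iterator());
  }
}
{code}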



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #5371: HBASE-28044 Reduce frequency of saving backing map in persistence cache

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5371:
URL: https://github.com/apache/hbase/pull/5371#issuecomment-1699325323

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 25s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-27389 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 23s |  HBASE-27389 passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  HBASE-27389 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 52s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  HBASE-27389 passed  |
   | -0 :warning: |  patch  |   5m 36s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 15s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 41s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 29s |  hbase-protocol-shaded in the patch 
passed.  |
   | -1 :x: |  unit  | 255m 33s |  hbase-server in the patch failed.  |
   |  |   | 280m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5371/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5371 |
   | Optional Tests | unit javac javadoc shadedjars compile |
   | uname | Linux d5731bc2c6ae 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27389 / 5e2cc6363b |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5371/5/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5371/5/testReport/
 |
   | Max. process+thread count | 4641 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5371/5/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5371: HBASE-28044 Reduce frequency of saving backing map in persistence cache

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5371:
URL: https://github.com/apache/hbase/pull/5371#issuecomment-1699257891

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-27389 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 43s |  HBASE-27389 passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  HBASE-27389 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 55s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  HBASE-27389 passed  |
   | -0 :warning: |  patch  |   5m 40s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 16s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 54s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 33s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 216m 54s |  hbase-server in the patch passed.  
|
   |  |   | 241m 14s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5371/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5371 |
   | Optional Tests | unit javac javadoc shadedjars compile |
   | uname | Linux 314e88ffa382 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27389 / 5e2cc6363b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5371/5/testReport/
 |
   | Max. process+thread count | 4833 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5371/5/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] guluo2016 opened a new pull request, #5377: HBASE-28051 The annotation about RegionProcedureStore.delete is not right

2023-08-30 Thread via GitHub


guluo2016 opened a new pull request, #5377:
URL: https://github.com/apache/hbase/pull/5377

   For details, see HBASE-28051.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (HBASE-28051) The annotation about RegionProcedureStore.delete is not right

2023-08-30 Thread guluo (Jira)
guluo created HBASE-28051:
-

 Summary: The annotation about RegionProcedureStore.delete is not 
right
 Key: HBASE-28051
 URL: https://issues.apache.org/jira/browse/HBASE-28051
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.13
Reporter: guluo
Assignee: guluo
 Attachments: image-2023-08-30-21-54-59-999.png, 
image-2023-08-30-21-57-32-393.png

As shown in the following figure, the annotation is misleading.
!image-2023-08-30-21-54-59-999.png!
 
Actually, we fill the {color:#ff}*proc:d*{color} column with an empty 
byte array when calling RegionProcedureStore.delete(), as shown below.

!image-2023-08-30-21-57-32-393.png!
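
For clarity, the behaviour described above can be sketched with the plain client API. This is only an illustration (the family and qualifier names, and the row key, are assumptions for the example, not the actual RegionProcedureStore internals):
{code:java}
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class EmptyProcColumnSketch {
  public static void main(String[] args) {
    byte[] family = Bytes.toBytes("proc"); // assumed family name, for illustration only
    byte[] qualifier = Bytes.toBytes("d"); // assumed qualifier, for illustration only
    // Instead of deleting the row, write an empty byte array into proc:d as a marker.
    Put put = new Put(Bytes.toBytes(1234L)) // illustrative row key (procedure id)
      .addColumn(family, qualifier, HConstants.EMPTY_BYTE_ARRAY);
    System.out.println(put); // the cell carries a zero-length value
  }
}
{code}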

 

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] wchevreuil commented on a diff in pull request #5370: HBASE-28038 Add TLS settings to ZooKeeper client

2023-08-30 Thread via GitHub


wchevreuil commented on code in PR #5370:
URL: https://github.com/apache/hbase/pull/5370#discussion_r1310281661


##
hbase-common/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java:
##
@@ -330,4 +341,27 @@ public static String 
getClientZKQuorumServersString(Configuration conf) {
 final String[] serverHosts = StringUtils.getStrings(clientQuromServers);
 return buildZKQuorumServerString(serverHosts, clientZkClientPort);
   }
+
+  private static void setZooKeeperClientSystemProperties(String prefix, 
Configuration conf) {

Review Comment:
   nit: call it setZooKeeperClientTLSSystemProperties instead, since the method 
just cares about the TLS system properties?
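
   For illustration, the renamed helper could look roughly like the sketch below. The HBase configuration key names and the ZooKeeper system property names here are assumptions for the example, not taken from the patch:
   ```java
   import org.apache.hadoop.conf.Configuration;

   final class ZkTlsPropsSketch {
     // Only translates TLS-related settings, hence the narrower method name.
     static void setZooKeeperClientTLSSystemProperties(String prefix, Configuration conf) {
       String keyStore = conf.get(prefix + ".ssl.keystore.location");
       if (keyStore != null) {
         System.setProperty("zookeeper.ssl.keyStore.location", keyStore);
       }
       String trustStore = conf.get(prefix + ".ssl.truststore.location");
       if (trustStore != null) {
         System.setProperty("zookeeper.ssl.trustStore.location", trustStore);
       }
       if (conf.getBoolean(prefix + ".client.secure", false)) {
         System.setProperty("zookeeper.client.secure", "true");
       }
     }
   }
   ```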



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] ragarkar commented on a diff in pull request #5376: HBASE-27999 Implement cache prefetch aware load balancer

2023-08-30 Thread via GitHub


ragarkar commented on code in PR #5376:
URL: https://github.com/apache/hbase/pull/5376#discussion_r1310267557


##
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java:
##
@@ -1477,6 +1477,7 @@ private void disableCache() {
   // If persistent ioengine and a path, we will serialize out the 
backingMap.
   this.backingMap.clear();
   this.fullyCachedFiles.clear();
+  this.regionCachedSizeMap.clear();

Review Comment:
   Done. Added code to clear the regionCachedSizeMap



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5376: HBASE-27999 Implement cache prefetch aware load balancer

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5376:
URL: https://github.com/apache/hbase/pull/5376#issuecomment-1699051669

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-27389 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 47s |  HBASE-27389 passed  |
   | +1 :green_heart: |  compile  |   3m 18s |  HBASE-27389 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  HBASE-27389 passed  |
   | +1 :green_heart: |  spotless  |   0m 42s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 17s |  HBASE-27389 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 24s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 18s |  hbase-balancer generated 2 new + 14 
unchanged - 0 fixed = 16 total (was 14)  |
   | -0 :warning: |  checkstyle  |   0m  8s |  hbase-balancer: The patch 
generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  10m 18s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.5.  |
   | +1 :green_heart: |  spotless  |   0m 41s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 23s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  38m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5376 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 1fdcb56a612c 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27389 / 5e2cc6363b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | javac | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/artifact/yetus-general-check/output/diff-compile-javac-hbase-balancer.txt
 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-balancer.txt
 |
   | Max. process+thread count | 78 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-balancer hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] ragarkar commented on a diff in pull request #5376: HBASE-27999 Implement cache prefetch aware load balancer

2023-08-30 Thread via GitHub


ragarkar commented on code in PR #5376:
URL: https://github.com/apache/hbase/pull/5376#discussion_r1310115419


##
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java:
##
@@ -2906,6 +2907,25 @@ public boolean ensureSomeRegionServersAvailable(final 
int num) throws IOExceptio
 return startedServer;
   }
 
+  /**
+   * Waits for all the regions of a table to be prefetched fully
+   * @param table Table to be wait on.
+   */

Review Comment:
   Removed these functions as they are not needed. I had written them for a 
test, but that test is not included in the current patch as I still need to 
work on it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5371: HBASE-28044 Reduce frequency of saving backing map in persistence cache

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5371:
URL: https://github.com/apache/hbase/pull/5371#issuecomment-1698926631

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 25s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-27389 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 42s |  HBASE-27389 passed  |
   | +1 :green_heart: |  compile  |   3m  4s |  HBASE-27389 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  HBASE-27389 passed  |
   | +1 :green_heart: |  spotless  |   0m 40s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  HBASE-27389 passed  |
   | -0 :warning: |  patch  |   1m 36s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  2s |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m  2s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  10m  8s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.5.  |
   | +1 :green_heart: |  hbaseprotoc  |   1m  3s |  the patch passed  |
   | +1 :green_heart: |  spotless  |   0m 40s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   3m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  40m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5371/5/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5371 |
   | Optional Tests | dupname asflicense cc hbaseprotoc spotless prototool 
javac spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 3e67a65f979d 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27389 / 5e2cc6363b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 80 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5371/5/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5376: HBASE-27999 Implement cache prefetch aware load balancer

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5376:
URL: https://github.com/apache/hbase/pull/5376#issuecomment-1698853202

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-27389 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 14s |  HBASE-27389 passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  HBASE-27389 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 31s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  HBASE-27389 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 27s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   9m 21s |  hbase-balancer in the patch 
passed.  |
   | -1 :x: |  unit  | 237m  3s |  hbase-server in the patch failed.  |
   |  |   | 271m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5376 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 68eb1f79af0e 5.4.0-152-generic #169-Ubuntu SMP Tue Jun 6 
22:23:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27389 / 5e2cc6363b |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/3/testReport/
 |
   | Max. process+thread count | 4668 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-balancer hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5376: HBASE-27999 Implement cache prefetch aware load balancer

2023-08-30 Thread via GitHub


Apache-HBase commented on PR #5376:
URL: https://github.com/apache/hbase/pull/5376#issuecomment-1698843236

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 12s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-27389 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 44s |  HBASE-27389 passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  HBASE-27389 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 40s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  HBASE-27389 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 19s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 38s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 18s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   9m  8s |  hbase-balancer in the patch 
passed.  |
   | -1 :x: |  unit  | 229m 11s |  hbase-server in the patch failed.  |
   |  |   | 265m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5376 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 4f9310a089c6 5.4.0-153-generic #170-Ubuntu SMP Fri Jun 16 
13:43:31 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27389 / 5e2cc6363b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/3/testReport/
 |
   | Max. process+thread count | 4576 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-balancer hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5376/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] wchevreuil commented on a diff in pull request #5376: HBASE-27999 Implement cache prefetch aware load balancer

2023-08-30 Thread via GitHub


wchevreuil commented on code in PR #5376:
URL: https://github.com/apache/hbase/pull/5376#discussion_r1309959061


##
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java:
##
@@ -1477,6 +1477,7 @@ private void disableCache() {
   // If persistent ioengine and a path, we will serialize out the 
backingMap.
   this.backingMap.clear();
   this.fullyCachedFiles.clear();
+  this.regionCachedSizeMap.clear();

Review Comment:
   I think we also need to do this 
[here](https://github.com/apache/hbase/pull/5376/files#diff-b75abcdb76c582e16144df3a9bf2ddbc8fd0814c06190c33503a2c1cb365273cL342)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] wchevreuil commented on a diff in pull request #5376: HBASE-27999 Implement cache prefetch aware load balancer

2023-08-30 Thread via GitHub


wchevreuil commented on code in PR #5376:
URL: https://github.com/apache/hbase/pull/5376#discussion_r1309943247


##
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java:
##
@@ -3642,6 +3662,110 @@ public boolean evaluate() throws IOException {
 };
   }
 
+  /**
+   * Returns a {@Link Predicate} for checking that all the regions for a table 
are prefetched
+   */
+  public Waiter.Predicate
+predicateAllRegionsForTableArePrefetched(final TableName tableName) {
+return new ExplainingPredicate() {
+  @Override
+  public String explainFailure() throws IOException {
+return "Not all the regions for the table " + 
tableName.getNameAsString()
+  + " are prefetched";
+  }
+
+  @Override
+  public boolean evaluate() throws IOException {
+List regions = getMiniHBaseCluster().getRegions(tableName);
+int totalRegionCount = regions.size();
+AtomicInteger prefetchedRegionCount = new AtomicInteger();
+for (HRegion r : regions) {
+  
getMiniHBaseCluster().getClusterMetrics().getLiveServerMetrics().forEach((sn, 
sm) -> {
+sm.getRegionMetrics().forEach((rn, rm) -> {
+  String regionNameAsString = 
r.getRegionInfo().getRegionNameAsString();
+  String regionString = rm.getNameAsString();
+  if (regionNameAsString.equals(regionString)) {
+if (rm.getCurrentRegionCachedRatio() == 1.0f) {
+  prefetchedRegionCount.getAndIncrement();
+}
+  }
+});
+  });
+}
+return getAdmin().tableExists(tableName) && totalRegionCount == 
prefetchedRegionCount.get();
+  }
+};
+  }
+
+  /**
+   * Returns a {@Link Predicate} for checking that at least one region for the 
table is prefetched
+   */
+  public Waiter.Predicate 
predicateAtLeastOneRegionIsPrefetchedOnServer(
+final TableName tableName, final ServerName serverName) {
+return new ExplainingPredicate() {
+  @Override
+  public String explainFailure() throws IOException {
+return "No Regions for table " + tableName.getNameAsString() + " 
prefetched on server "
+  + serverName.getAddress();
+  }
+
+  @Override
+  public boolean evaluate() throws IOException {
+List regions = getMiniHBaseCluster().getRegions(tableName);
+AtomicInteger prefetchedRegionCount = new AtomicInteger();
+ServerMetrics sm =
+  
getMiniHBaseCluster().getClusterMetrics().getLiveServerMetrics().get(serverName);
+for (HRegion r : regions) {
+  sm.getRegionMetrics().forEach((rn, rm) -> {
+if (
+  
r.getRegionInfo().getRegionNameAsString().equals(rm.getNameAsString())
+&& rm.getCurrentRegionCachedRatio() == 1.0f
+) {
+  prefetchedRegionCount.getAndIncrement();
+}
+  });
+}
+return getAdmin().tableExists(tableName) && 
prefetchedRegionCount.get() > 0;
+  }
+};
+  }
+
+  /**
+   * Returns a {@Link Predicate} for checking that more than half of the 
regions for the table are
+   * prefetched
+   */
+  public Waiter.Predicate
+predicateMajorityRegionsArePrefetched(final TableName tableName) {
+return new ExplainingPredicate() {
+  @Override
+  public String explainFailure() throws IOException {
+return "No Regions for table " + tableName.getNameAsString() + " 
prefetched";
+  }
+
+  @Override
+  public boolean evaluate() throws IOException {
+List regions = getMiniHBaseCluster().getRegions(tableName);
+int totalRegionCount = regions.size();
+AtomicInteger prefetchedRegionCount = new AtomicInteger();
+for (HRegion r : regions) {
+  
getMiniHBaseCluster().getClusterMetrics().getLiveServerMetrics().forEach((sn, 
sm) -> {
+sm.getRegionMetrics().forEach((rn, rm) -> {
+  String regionNameAsString = 
r.getRegionInfo().getRegionNameAsString();
+  String regionString = rm.getNameAsString();
+  if (regionNameAsString.equals(regionString)) {
+if (rm.getCurrentRegionCachedRatio() == 1.0f) {
+  prefetchedRegionCount.getAndIncrement();
+}
+  }
+});
+  });
+}
+return getAdmin().tableExists(tableName)
+  && (float) prefetchedRegionCount.get() / totalRegionCount > 0.5f;
+  }
+};
+  }
+
   /**

Review Comment:
   These utility methods are all specific for this new balance function, so it 
should not be here in HBaseTestingUtil. Please move it to your tests classes. 
Maybe make an abstract parent class which all your tests would extend?



##
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java:
##
@@ -2906,6 +2907,25 @@ public boolean ensureSomeRegionServersAvailable(final 
int num) throws IOExceptio
 

[jira] [Commented] (HBASE-27966) HBase Master/RS JVM metrics populated incorrectly

2023-08-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760251#comment-17760251
 ] 

Hudson commented on HBASE-27966:


Results for branch branch-2
[build #872 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/872/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/872/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/872/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/872/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/872/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HBase Master/RS JVM metrics populated incorrectly
> -
>
> Key: HBASE-27966
> URL: https://issues.apache.org/jira/browse/HBASE-27966
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.0.0-alpha-4
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 2.6.0, 2.5.6, 3.0.0-beta-1
>
> Attachments: test_patch.txt
>
>
> HBase Master/RS JVM metrics populated incorrectly due to regression causing 
> ambari metrics system to not able to capture them.
> Based on my analysis the issue is relevant for all release post 2.0.0-alpha-4 
> and seems to be caused due to HBASE-18846.
> Have been able to compare the JVM metrics across 3 versions of HBase and 
> attaching results of same below:
> HBase: 1.1.2
> {code:java}
> {
> "name" : "Hadoop:service=HBase,name=JvmMetrics",
> "modelerType" : "JvmMetrics",
> "tag.Context" : "jvm",
> "tag.ProcessName" : "RegionServer",
> "tag.SessionId" : "",
> "tag.Hostname" : "HOSTNAME",
> "MemNonHeapUsedM" : 196.05664,
> "MemNonHeapCommittedM" : 347.60547,
> "MemNonHeapMaxM" : 4336.0,
> "MemHeapUsedM" : 7207.315,
> "MemHeapCommittedM" : 66080.0,
> "MemHeapMaxM" : 66080.0,
> "MemMaxM" : 66080.0,
> "GcCount" : 3953,
> "GcTimeMillis" : 662520,
> "ThreadsNew" : 0,
> "ThreadsRunnable" : 214,
> "ThreadsBlocked" : 0,
> "ThreadsWaiting" : 626,
> "ThreadsTimedWaiting" : 78,
> "ThreadsTerminated" : 0,
> "LogFatal" : 0,
> "LogError" : 0,
> "LogWarn" : 0,
> "LogInfo" : 0
>   },
> {code}
> HBase 2.0.2
> {code:java}
> {
> "name" : "Hadoop:service=HBase,name=JvmMetrics",
> "modelerType" : "JvmMetrics",
> "tag.Context" : "jvm",
> "tag.ProcessName" : "IO",
> "tag.SessionId" : "",
> "tag.Hostname" : "HOSTNAME",
> "MemNonHeapUsedM" : 203.86688,
> "MemNonHeapCommittedM" : 740.6953,
> "MemNonHeapMaxM" : -1.0,
> "MemHeapUsedM" : 14879.477,
> "MemHeapCommittedM" : 31744.0,
> "MemHeapMaxM" : 31744.0,
> "MemMaxM" : 31744.0,
> "GcCount" : 75922,
> "GcTimeMillis" : 5134691,
> "ThreadsNew" : 0,
> "ThreadsRunnable" : 90,
> "ThreadsBlocked" : 3,
> "ThreadsWaiting" : 158,
> "ThreadsTimedWaiting" : 36,
> "ThreadsTerminated" : 0,
> "LogFatal" : 0,
> "LogError" : 0,
> "LogWarn" : 0,
> "LogInfo" : 0
>   },
> {code}
> HBase: 2.5.2
> {code:java}
> {
>   "name": "Hadoop:service=HBase,name=JvmMetrics",
>   "modelerType": "JvmMetrics",
>   "tag.Context": "jvm",
>   "tag.ProcessName": "IO",
>   "tag.SessionId": "",
>   "tag.Hostname": "HOSTNAME",
>   "MemNonHeapUsedM": 192.9798,
>   "MemNonHeapCommittedM": 198.4375,
>   "MemNonHeapMaxM": -1.0,
>   "MemHeapUsedM": 773.23584,
>   "MemHeapCommittedM": 1004.0,
>   "MemHeapMaxM": 1024.0,
>   "MemMaxM": 1024.0,
>   "GcCount": 2048,
>   "GcTimeMillis": 25440,
>   "ThreadsNew": 0,
>   "ThreadsRunnable": 22,
>   "ThreadsBlocked": 0,
>   "ThreadsWaiting": 121,
>   "ThreadsTimedWaiting": 49,
>   "ThreadsTerminated": 0,
>   "LogFatal": 0,
>   "LogError": 0,
>   "LogWarn": 0,
>   "LogInfo": 0
>  },
> {code}
> It can be observed that 2.0.x onwards the field "tag.ProcessName" is 
> populating as "IO" instead of expected "RegionServer" or "Master".
> Ambari relies on this field process name to create a metric 
>