[GitHub] [hbase] zhuyaogai commented on pull request #5228: HBASE-27853 Add client side table metrics for rpc calls and request latency.

2023-09-07 Thread via GitHub


zhuyaogai commented on PR #5228:
URL: https://github.com/apache/hbase/pull/5228#issuecomment-1711128960

   > Thanks for making those changes. There are some warnings in the pre-commit
   > hooks (checkstyle, etc). Can you fix them?
   > 
   > I'm going to be out of office until Sept 5, so probably won't be able to
   > re-review this until then. Just FYI, I'll get back to it soon.
   
   @bbeaudreault hi, could you continue the code review for me if you are 
available? Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5384: WIP HBASE-28065 Corrupt HFile data is mishandled in several cases

2023-09-07 Thread via GitHub


bbeaudreault commented on code in PR #5384:
URL: https://github.com/apache/hbase/pull/5384#discussion_r1319156850


##
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:
##
@@ -1697,8 +1722,27 @@ protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
   headerBuf = HEAP.allocate(hdrSize);
   readAtOffset(is, headerBuf, hdrSize, false, offset, pread);
   headerBuf.rewind();
+
+  // The caller didn't provide an anticipated block size and headerBuf was null, this is
+  // probably the first time this HDFS block has been read. The value we just read has not
+  // had HBase checksum validation; assume it was not protected by HDFS checksum either.
+  // Sanity check the value. If it doesn't seem right, either trigger fall-back to hdfs
+  // checksum or abort the read.
+  //
+  // TODO: Should we also check the value vs. some multiple of fileContext.getBlocksize() ?
+  onDiskSizeWithHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);
+  if (!checkOnDiskSizeWithHeader(onDiskSizeWithHeader)) {
+    if (verifyChecksum) {
+      invalidateNextBlockHeader();
+      span.addEvent("Falling back to HDFS checksumming.", attributesBuilder.build());
+      return null;
+    } else {
+      throw new IOException("Read invalid onDiskSizeWithHeader=" + onDiskSizeWithHeader);
+    }
+  }
+} else {
+  onDiskSizeWithHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);

Review Comment:
   another thought here -- i think we can still possibly cache corrupt headers. 
below, we read from disk `[data block with header and checksum] + [next 
header]`.  we then validate the checksum on the first part, which may be 
correct. but we do nothing to validate the header before caching it.
   
   im not sure we _can_ checksum just the header, so maybe we just have to live 
with a possibly-corrupt cache. however, we could at least validate the 
onDiskSizeWithHeader from that next header before caching it. so i might 
suggest we add one more call to getOnDiskSizeWithHeader + 
checkOnDiskSizeWithHeader in `cacheNextBlockHeader` maybe
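   
   A minimal sketch of that extra validation, assuming a `cacheNextBlockHeader` signature roughly like the one below (the actual parameters in the PR may differ):
   
   ```java
   // Hypothetical sketch: sanity-check the trailing header's size field before
   // caching it. getOnDiskSizeWithHeader / checkOnDiskSizeWithHeader are the
   // helpers discussed in this PR; the method signature here is assumed.
   private void cacheNextBlockHeader(long nextBlockOffset, ByteBuff onDiskBlock,
     int onDiskSizeWithHeader, int hdrSize, boolean checksumSupport) {
     ByteBuff nextHeader = onDiskBlock.duplicate().position(onDiskSizeWithHeader)
       .limit(onDiskSizeWithHeader + hdrSize);
     int nextOnDiskSize = getOnDiskSizeWithHeader(nextHeader, checksumSupport);
     if (!checkOnDiskSizeWithHeader(nextOnDiskSize)) {
       return; // don't cache a header whose size field fails the sanity check
     }
     // ... cache (nextBlockOffset, nextHeader) as today ...
   }
   ```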



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5384: WIP HBASE-28065 Corrupt HFile data is mishandled in several cases

2023-09-07 Thread via GitHub


bbeaudreault commented on code in PR #5384:
URL: https://github.com/apache/hbase/pull/5384#discussion_r1319153181


##
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:
##
@@ -1721,14 +1765,36 @@ protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
 if (headerBuf == null) {
   headerBuf = onDiskBlock.duplicate().position(0).limit(hdrSize);
 }
-// Do a few checks before we go instantiate HFileBlock.
-assert onDiskSizeWithHeader > this.hdrSize;
-verifyOnDiskSizeMatchesHeader(onDiskSizeWithHeader, headerBuf, offset, checksumSupport);
+
 ByteBuff curBlock = onDiskBlock.duplicate().position(0).limit(onDiskSizeWithHeader);
 // Verify checksum of the data before using it for building HFileBlock.
 if (verifyChecksum && !validateChecksum(offset, curBlock, hdrSize)) {
+  invalidateNextBlockHeader();
+  span.addEvent("Falling back to HDFS checksumming.", attributesBuilder.build());
   return null;
 }
+
+// TODO: is this check necessary or can we proceed with a provided value regardless of

Review Comment:
   so this is to validate that if a caller passes in a size, it matches what we 
read. i suppose this is good, but maybe depends on how much we trust the 
caller? not sure where the passed in value comes from at the moment
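   
   A hedged sketch of that cross-check, using the helper names from this PR (the exact placement is an assumption; the error message mirrors the HBASE-26780 report):
   
   ```java
   // Sketch only: if the caller supplied a size, compare it against the size
   // parsed from the on-disk header; a mismatch means a bad caller or corrupt data.
   int sizeFromHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);
   if (onDiskSizeWithHeader != -1 && onDiskSizeWithHeader != sizeFromHeader) {
     throw new IOException(
       "Passed in onDiskSizeWithHeader=" + onDiskSizeWithHeader + " != " + sizeFromHeader);
   }
   ```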



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5384: WIP HBASE-28065 Corrupt HFile data is mishandled in several cases

2023-09-07 Thread via GitHub


bbeaudreault commented on code in PR #5384:
URL: https://github.com/apache/hbase/pull/5384#discussion_r1319150874


##
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:
##
@@ -1667,7 +1688,11 @@ protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
   final AttributesBuilder attributesBuilder = Attributes.builder();
   Optional.of(Context.current()).map(val -> val.get(CONTEXT_KEY))
     .ifPresent(c -> c.accept(attributesBuilder));
-  int onDiskSizeWithHeader = checkAndGetSizeAsInt(onDiskSizeWithHeaderL, hdrSize);
+  if (!checkOnDiskSizeWithHeader(onDiskSizeWithHeaderL)) {

Review Comment:
   This code is definitely an improvement, but I still find it a bit hard to 
follow. It seems like we want to do the following:
   
   1. safely convert the passed value from long to int
   2. if the value is -1, try to read it from the header
   3. if there is no cached header, fetch that first
   4. finally we have an onDiskSizeWithHeader through one means or another, and now we should validate it
   
   Any chance we can lay the code out like that, rather than do validation in 
multiple places? Something like this:
   
   ```
   int onDiskSizeWithHeader = checkAndGetSizeAsInt(onDiskSizeWithHeaderL); // throw exception if value is greater than int.max
   ByteBuff headerBuf = getCachedHeader(offset);

   // make sure we have a size
   if (onDiskSizeWithHeader == -1) {
     if (headerBuf == null) {
       // allocate and read headerBuf as today
     }
     onDiskSizeWithHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);
   }

   // now we have a size, validate it
   if (!checkOnDiskSizeWithHeader(onDiskSizeWithHeader)) {
     if (verifyChecksum) {
       invalidateNextBlockHeader();
       span.addEvent("Falling back to HDFS checksumming.", attributesBuilder.build());
       return null;
     } else {
       throw new IOException("Read invalid onDiskSizeWithHeader=" + onDiskSizeWithHeader);
     }
   }
   ```
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5384: WIP HBASE-28065 Corrupt HFile data is mishandled in several cases

2023-09-07 Thread via GitHub


bbeaudreault commented on code in PR #5384:
URL: https://github.com/apache/hbase/pull/5384#discussion_r1319142361


##
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:
##
@@ -1697,8 +1722,27 @@ protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
   headerBuf = HEAP.allocate(hdrSize);
   readAtOffset(is, headerBuf, hdrSize, false, offset, pread);
   headerBuf.rewind();
+
+  // The caller didn't provide an anticipated block size and headerBuf was null, this is
+  // probably the first time this HDFS block has been read. The value we just read has not
+  // had HBase checksum validation; assume it was not protected by HDFS checksum either.
+  // Sanity check the value. If it doesn't seem right, either trigger fall-back to hdfs
+  // checksum or abort the read.
+  //
+  // TODO: Should we also check the value vs. some multiple of fileContext.getBlocksize() ?
+  onDiskSizeWithHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);
+  if (!checkOnDiskSizeWithHeader(onDiskSizeWithHeader)) {
+    if (verifyChecksum) {
+      invalidateNextBlockHeader();
+      span.addEvent("Falling back to HDFS checksumming.", attributesBuilder.build());
+      return null;
+    } else {
+      throw new IOException("Read invalid onDiskSizeWithHeader=" + onDiskSizeWithHeader);
+    }
+  }
+} else {
+  onDiskSizeWithHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);

Review Comment:
   i think this here is when we pull it from the cached header. is there a 
reason not to do the same validation as above for that?
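   
   A sketch of what that might look like, reusing the same helpers on the cached-header path (exact placement in the PR is an assumption):
   
   ```java
   } else {
     // Cached-header path: parse the size and apply the same sanity check
     // as the fresh-read path above.
     onDiskSizeWithHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);
     if (!checkOnDiskSizeWithHeader(onDiskSizeWithHeader)) {
       invalidateNextBlockHeader();
       return null; // fall back to HDFS checksumming, as above
     }
   }
   ```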



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-28055) Performance improvement for scan over several stores.

2023-09-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762876#comment-17762876
 ] 

Hudson commented on HBASE-28055:


Results for branch branch-2.4
[build #613 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/613/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/613/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/613/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/613/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/613/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/613//console].


> Performance improvement for scan over several stores. 
> --
>
> Key: HBASE-28055
> URL: https://issues.apache.org/jira/browse/HBASE-28055
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-4, 2.5.5
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1
>
>
> During the fix of HBASE-19863, an additional check for fake cells that 
> trigger reseek was added. It turns out that this check produces unnecessary 
> reseeks, because matcher.compareKeyForNextColumn should be used only with 
> indexed keys. Later [~larsh] suggested doing a simple check for OLD_TIMESTAMP, 
> which looks like a better solution.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28055) Performance improvement for scan over several stores.

2023-09-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762869#comment-17762869
 ] 

Hudson commented on HBASE-28055:


Results for branch branch-2
[build #876 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/876/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/876/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/876/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/876/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/876/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/876//console].


> Performance improvement for scan over several stores. 
> --
>
> Key: HBASE-28055
> URL: https://issues.apache.org/jira/browse/HBASE-28055
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-4, 2.5.5
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1
>
>
> During the fix of HBASE-19863, an additional check for fake cells that 
> trigger reseek was added. It turns out that this check produces unnecessary 
> reseeks, because matcher.compareKeyForNextColumn should be used only with 
> indexed keys. Later [~larsh] suggested doing a simple check for OLD_TIMESTAMP, 
> which looks like a better solution.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28055) Performance improvement for scan over several stores.

2023-09-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762860#comment-17762860
 ] 

Hudson commented on HBASE-28055:


Results for branch branch-3
[build #41 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/41/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/41/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/41/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/41/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Performance improvement for scan over several stores. 
> --
>
> Key: HBASE-28055
> URL: https://issues.apache.org/jira/browse/HBASE-28055
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-4, 2.5.5
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1
>
>
> During the fix of HBASE-19863, an additional check for fake cells that 
> trigger reseek was added. It turns out that this check produces unnecessary 
> reseeks, because matcher.compareKeyForNextColumn should be used only with 
> indexed keys. Later [~larsh] suggested doing a simple check for OLD_TIMESTAMP, 
> which looks like a better solution.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28063) Add documentation to HBase book

2023-09-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762841#comment-17762841
 ] 

Hudson commented on HBASE-28063:


Results for branch master
[build #900 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/900/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/900/General_20Nightly_20Build_20Report/]




(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/900/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/900/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add documentation to HBase book
> ---
>
> Key: HBASE-28063
> URL: https://issues.apache.org/jira/browse/HBASE-28063
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, Zookeeper
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
> Fix For: 3.0.0-beta-1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-26754) hbase master crash after running couple of days with error STUCK Region-In-Transition rit=FAILED_OPEN, location=null, table=hbase:meta, region=xxxxxxxxxx

2023-09-07 Thread kaushik mandal (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kaushik mandal updated HBASE-26754:
---
Component/s: master

> hbase master crash after running couple of days with error STUCK 
> Region-In-Transition rit=FAILED_OPEN, location=null, table=hbase:meta, 
> region=xx
> -
>
> Key: HBASE-26754
> URL: https://issues.apache.org/jira/browse/HBASE-26754
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.4.8
>Reporter: kaushik mandal
>Priority: Major
>
> hbase master not responding after running for a couple of days, and the 
> region servers keep restarting.
> We are seeing the below warnings in the master and region server logs:
>  
>  
> WARN [ProcExecTimeout] assignment.AssignmentManager: STUCK 
> Region-In-Transition rit=FAILED_OPEN, location=null, table=hbase:meta, 
> region=x
> [master/-infra-x-hbase-master-0:16000.Chore.3] master.HMaster: Not 
> running balancer because processing dead regionserver(s): 2022-02-07 
> 19:54:11,512 INFO [ReadOnlyZKClient-xx-zookeeper:2181@0x2fcc92d9] 
> zookeeper.ZooKeeper: Initiating client connection, 
> connectString=-zookeeper:2181 sessionTimeout=9 
> watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$158/0x00010057b440@48d2e00b
>  
>  
> WARN [ProcExecTimeout] assignment.AssignmentManager: STUCK 
> Region-In-Transition rit=FAILED_OPEN, location=null, table=hbase:meta, 
> region=1588230740 2022-02-07 19:54:15,643 INFO 
> [hconnection-0x31420403-shared-pool7-t9731] client.RpcRetryingCallerImpl: 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3223)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1414)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2947)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3272)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) , 
> details=row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, 
> hostname=xxx-infra--hbase-regionserver-0.xxx-infra--hbase-regionserver.default.svc.cluster.local,16020,1644089730940,
>  seqNum=-1
>  
> from region server logs
> 2022-02-05 19:39:16,722 WARN 
> [RpcServer.default.FPBQ.Fifo.handler=109,queue=5,port=16020] 
> regionserver.RSRpcServices: Client tried to access missing scanner 0 
> 2022-02-05 19:39:16,722 WARN 
> [RpcServer.default.FPBQ.Fifo.handler=25,queue=12,port=16020] 
> regionserver.RSRpcServices: Client tried to access missing scanner 0 
> 2022-02-05 19:39:16,721 WARN 
> [RpcServer.default.FPBQ.Fifo.handler=24,queue=11,port=16020] 
> regionserver.RSRpcServices: Client tried to access missing scanner 0 
> 2022-02-05 19:39:16,721 WARN 
> [RpcServer.default.FPBQ.Fifo.handler=112,queue=8,port=16020] 
> regionserver.RSRpcServices: Client tried to access missing scanner 0 
> 2022-02-05 19:39:16,721 WARN 
> [RpcServer.default.FPBQ.Fifo.handler=40,queue=1,port=16020] 
> regionserver.RSRpcServices: Client tried to access missing scanner 0 ==> 
> /opt/hbase-2.0.1/logs/SecurityAuth.audit <== 2022-02-05 19:39:17,882 INFO 
> SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for hdfs 
> (auth:) 2022-02-05 19:39:17,882 INFO 
> SecurityLogger.org.apache.hadoop.hbase.Server: Connection from 10.42.0.124 
> port: 44876 with unknown version info 2022-02-05 19:40:18,307 INFO 
> SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for hdfs 
> (auth:) 2022-02-05 19:40:18,307 INFO 
> SecurityLogger.org.apache.hadoop.hbase.Server: Connection from 10.42.0.124 
> port: 51098 with unknown version info ==> 
> /opt/hbase-2.0.1/logs/hbase--regionserver--infra-x-hbase-regionserver-0.log
>  <== 2022-02-05 19:40:32,848 INFO [LruBlockCacheStatsExecutor] 
> hfile.LruBlockCache: totalSize=300.98 KB, freeSize=399.71 MB, max=400 MB, 
> blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, 
> cachingHits=0, cachingHitsRatio=0,evictions=29, evicted=0, evictedPerRun=0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28067) Hbase 2.4.13 vulnerable to CVE-2022-26612

2023-09-07 Thread kaushik mandal (Jira)
kaushik mandal created HBASE-28067:
--

 Summary: Hbase 2.4.13 vulnerable to CVE-2022-26612
 Key: HBASE-28067
 URL: https://issues.apache.org/jira/browse/HBASE-28067
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.4.13
Reporter: kaushik mandal


HBase 2.4.13 uses hadoop-common-2.10.0.jar, which is vulnerable to 
CVE-2022-26612.

When we replaced hadoop-common-2.10.0.jar with 3.2.3, we got a 
version-incompatibility issue and, as a result, the hbase shell command failed.

Is there any HBase version that is compatible with hadoop-common 3.2.3 or 
above?

Or is there any HBase version available where the above CVE is addressed?

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28055) Performance improvement for scan over several stores.

2023-09-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762704#comment-17762704
 ] 

Hudson commented on HBASE-28055:


Results for branch branch-2.5
[build #398 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/398/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/398/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/398/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/398/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/398/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/398//console].


> Performance improvement for scan over several stores. 
> --
>
> Key: HBASE-28055
> URL: https://issues.apache.org/jira/browse/HBASE-28055
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-4, 2.5.5
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1
>
>
> During the fix of HBASE-19863, an additional check for fake cells that 
> trigger reseek was added. It turns out that this check produces unnecessary 
> reseeks, because matcher.compareKeyForNextColumn should be used only with 
> indexed keys. Later [~larsh] suggested doing a simple check for OLD_TIMESTAMP, 
> which looks like a better solution.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] NihalJain commented on pull request #5346: HBASE-27991 fixing ClassCastException in multithread client run

2023-09-07 Thread via GitHub


NihalJain commented on PR #5346:
URL: https://github.com/apache/hbase/pull/5346#issuecomment-1709880884

   > I suspect https://issues.apache.org/jira/browse/HBASE-22244 / 
https://github.com/apache/hbase/pull/155 might have broken compatibility on 
what can be passed to ExecutorService as the failure of this example client 
shows that an old client code which was passing ForkJoinPool no longer works 
and requires a ThreadPoolExecutor only.
   
   > I have been able to write a UT to confirm this. Raised 
https://issues.apache.org/jira/browse/HBASE-28035, raised a UT PR to 
demonstrate same at #5365
   
   Since we have a JIRA for the issue which causes this class to fail, I am good 
with this change going in as is. We can look at the issue which introduced this 
bug as part of HBASE-28035.
   
   +1 to this change.
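   
   For illustration, a minimal, self-contained sketch of the suspected failure mode (not the actual HBase code path): a cast that a ThreadPoolExecutor-backed pool satisfies but a ForkJoinPool does not.
   
   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.ForkJoinPool;
   import java.util.concurrent.ThreadPoolExecutor;
   
   public class PoolCastDemo {
     public static void main(String[] args) {
       ExecutorService ok = Executors.newFixedThreadPool(4);
       ExecutorService fjp = new ForkJoinPool();
   
       // Works: newFixedThreadPool returns a ThreadPoolExecutor.
       ThreadPoolExecutor tpe = (ThreadPoolExecutor) ok;
   
       // Throws ClassCastException: ForkJoinPool is not a ThreadPoolExecutor,
       // matching the failure the old example client hit.
       ThreadPoolExecutor broken = (ThreadPoolExecutor) fjp;
     }
   }
   ```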


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-12222) FuzzyRowFilter unpredictable with jagged rowkeys

2023-09-07 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-12222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762672#comment-17762672
 ] 

Nick Dimiduk commented on HBASE-12222:
--

[~dethi] Given how picky the current implementation is, we may be better served 
by introducing a new filter implementation that defines a more strict contract. 

> FuzzyRowFilter unpredictable with jagged rowkeys
> 
>
> Key: HBASE-12222
> URL: https://issues.apache.org/jira/browse/HBASE-12222
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Reporter: Nick Dimiduk
>Priority: Major
> Attachments: 1_tests.patch
>
>
> FuzzyRowFilter getNextCellHint doesn't take into account jagged rowkeys and 
> produces surprising results. For example, given a table of
> {noformat}
> 0
> 0/0
> 0/1
> 0/2
> 1
> 1/0
> 1/1
> 1/2
> 2
> 2/0
> 2/1
> 2/2
> {noformat}
> and FuzzyPrefixFilter like "?/2", {1, 0, 0}
> I would expect
> {noformat}
> 0/2
> 1/2
> 2/2
> {noformat}
> The results include the rows '0', '1', and '2'
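
For reference, a minimal sketch of the reported setup (mask semantics per FuzzyRowFilter's API: 0 marks a fixed byte, 1 a wildcard byte; the scan wiring is assumed):

{code:java}
import java.util.Arrays;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class FuzzyScanExample {
  public static Scan buildScan() {
    // Fuzzy pair "?/2" with mask {1, 0, 0}: byte 0 is a wildcard, the
    // trailing "/2" is fixed. Expected matches: 0/2, 1/2, 2/2; the report
    // says the jagged (shorter) rows '0', '1' and '2' also come back.
    FuzzyRowFilter filter = new FuzzyRowFilter(
      Arrays.asList(new Pair<>(Bytes.toBytes("?/2"), new byte[] { 1, 0, 0 })));
    return new Scan().setFilter(filter);
  }
}
{code}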



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27976) [hbase-operator-tools] Add spotless for hbase-operator-tools

2023-09-07 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762671#comment-17762671
 ] 

Nick Dimiduk commented on HBASE-27976:
--

Thank you [~nihaljain.cs]!

> [hbase-operator-tools] Add spotless for hbase-operator-tools
> 
>
> Key: HBASE-27976
> URL: https://issues.apache.org/jira/browse/HBASE-27976
> Project: HBase
>  Issue Type: Umbrella
>  Components: build, community, hbase-operator-tools
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: hbase-operator-tools-1.3.0
>
>
> HBase code repo has spotless plugin to check and fix spotless issues 
> seamlessly, making it easier for developers to fix issues in case the build 
> fails due to code formatting.
> The goal of this Jira is to integrate spotless with hbase-operator-tools.
>  * As a 1st step, will try to add a plugin to run spotless check via maven
>  * Next, will fix all spotless issues as part of the same task or another (as 
> community suggests)
>  * Finally, will integrate the same into the pre-commit build to not let PRs 
> with spotless issues get in. (Would need some support/direction on how to do 
> this as I am not much familiar with Jenkins and related code.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #5352: HBASE-26780 HFileBlock.verifyOnDiskSizeMatchesHeader throw IOException: Passed in onDiskSizeWithHeader= A != 33

2023-09-07 Thread via GitHub


Apache-HBase commented on PR #5352:
URL: https://github.com/apache/hbase/pull/5352#issuecomment-1709791349

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 39s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 49s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 41s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 45s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 215m  0s |  hbase-server in the patch failed.  |
   |  |   | 236m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5352/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/5352 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e200b5e06c6d 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 97d512be7c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5352/5/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
   | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5352/5/testReport/ |
   | Max. process+thread count | 4246 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5352/5/console |
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5352: HBASE-26780 HFileBlock.verifyOnDiskSizeMatchesHeader throw IOException: Passed in onDiskSizeWithHeader= A != 33

2023-09-07 Thread via GitHub


Apache-HBase commented on PR #5352:
URL: https://github.com/apache/hbase/pull/5352#issuecomment-1709791235

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 47s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 45s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 215m 51s |  hbase-server in the patch passed.  |
   |  |   | 236m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5352/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/5352 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e2b6ebf00dbc 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 97d512be7c |
   | Default Java | Temurin-1.8.0_352-b08 |
   | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5352/5/testReport/ |
   | Max. process+thread count | 4624 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5352/5/console |
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase-operator-tools] petersomogyi commented on pull request #135: HBASE-27961 Running assigns/unassigns command with large number of files/regions throws CallTimeoutException

2023-09-07 Thread via GitHub


petersomogyi commented on PR #135:
URL: 
https://github.com/apache/hbase-operator-tools/pull/135#issuecomment-1709789938

   The outputs should be consistent. I agree the `[425, 426, 427, 428, 429]` 
output looks better. 🙂 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27114) Upgrade scalatest maven plugin for thread-safety

2023-09-07 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762662#comment-17762662
 ] 

Nick Dimiduk commented on HBASE-27114:
--

Or is this a bug in the assembly plugin, that it doesn't propagate the meaning 
of {{useAllReactorProjects}} properly?

> Upgrade scalatest maven plugin for thread-safety
> 
>
> Key: HBASE-27114
> URL: https://issues.apache.org/jira/browse/HBASE-27114
> Project: HBase
>  Issue Type: Task
>  Components: build, spark
>Affects Versions: hbase-connectors-1.0.1, hbase-connectors-1.1.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: hbase-connectors-1.0.1
>
>
> The {{master}} branch on the connectors repo warns when {{--threads}} is 
> issued, the complaint being the scalatest-maven-plugin. Looks like the latest 
> version resolves the complaint. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27114) Upgrade scalatest maven plugin for thread-safety

2023-09-07 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-27114.
--
Fix Version/s: hbase-connectors-1.0.1
   Resolution: Fixed

[~busbey] I think we need a new ticket for the issue you've described.

> Upgrade scalatest maven plugin for thread-safety
> 
>
> Key: HBASE-27114
> URL: https://issues.apache.org/jira/browse/HBASE-27114
> Project: HBase
>  Issue Type: Task
>  Components: build, spark
>Affects Versions: hbase-connectors-1.0.1, hbase-connectors-1.1.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: hbase-connectors-1.0.1
>
>
> The {{master}} branch on the connectors repo warns when {{--threads}} is 
> issued, the complaint being the scalatest-maven-plugin. Looks like the latest 
> version resolves the complaint. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-20693) Refactor rest and thrift jsp's and extract header and footer

2023-09-07 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762656#comment-17762656
 ] 

Nick Dimiduk commented on HBASE-20693:
--

Heya [~nihaljain.cs] you have a +1 but do you mind putting this up as a PR on 
github? Thanks.

> Refactor rest and thrift jsp's and extract header and footer
> 
>
> Key: HBASE-20693
> URL: https://issues.apache.org/jira/browse/HBASE-20693
> Project: HBase
>  Issue Type: Improvement
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
> Fix For: 3.0.0-beta-1
>
> Attachments: HBASE-20693.master.001.patch, rest_home_after_patch.png, 
> thrift_home_after_patch.png, thrift_log_level_after_patch.png, 
> thrift_log_level_before_patch.png
>
>
> Log Level page design was changed to include header and footers in 
> HBASE-20577. Since, thrift and rest do not have header and footer jsp's, the 
> log level page will be as it were before HBASE-20577 i.e without the 
> navigation bar. This JIRA will refactor rest and thrift and extract 
> 'header.jsp' and 'footer.jsp' from them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] ndimiduk commented on a diff in pull request #5215: HBASE-27814 Add support for dump and process metrics servlet in REST …

2023-09-07 Thread via GitHub


ndimiduk commented on code in PR #5215:
URL: https://github.com/apache/hbase/pull/5215#discussion_r1318297654


##
hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTDumpServlet.java:
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.PrintStream;
+import java.io.PrintWriter;
+import java.util.Date;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.http.HttpServer;
+import org.apache.hadoop.hbase.monitoring.StateDumpServlet;
+import org.apache.hadoop.hbase.util.LogMonitoring;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class RESTDumpServlet extends StateDumpServlet {
+  private static final long serialVersionUID = 1L;
+  private static final String LINE = "===";
+
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
+    if (!HttpServer.isInstrumentationAccessAllowed(getServletContext(), request, response)) {
+      return;
+    }
+
+    RESTServer restServer = (RESTServer) getServletContext().getAttribute(RESTServer.REST_SERVER);
+    assert restServer != null : "No REST Server in context!";

Review Comment:
   Why `assert`? Does this code ever get run with the `-ea` JVM flag? We don't 
have unit tests for this stuff...
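   
   Since `assert` is a no-op unless the JVM runs with `-ea`, one alternative (a sketch, not the PR's code) is an explicit check that fails fast in every configuration:
   
   ```java
   // Sketch: replace the assert with an unconditional null check.
   RESTServer restServer =
     (RESTServer) getServletContext().getAttribute(RESTServer.REST_SERVER);
   if (restServer == null) {
     throw new IllegalStateException("No REST Server in context!");
   }
   ```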



##
hbase-rest/src/main/resources/hbase-webapps/rest/processRest.jsp:
##
@@ -0,0 +1,184 @@
+<%--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+--%>
+<%@ page contentType="text/html;charset=UTF-8"
+  import="java.util.Date"
+  import="java.util.List"
+  import="javax.management.ObjectName"
+  import="java.lang.management.ManagementFactory"
+  import="java.lang.management.MemoryPoolMXBean"
+  import="java.lang.management.RuntimeMXBean"
+  import="java.lang.management.GarbageCollectorMXBean"
+  import="org.apache.hadoop.hbase.util.JSONMetricUtil"
+  import="org.apache.hadoop.hbase.procedure2.util.StringUtils"
+  import="org.apache.hadoop.util.StringUtils.TraditionalBinaryPrefix"
+%>
+
+<%
+RuntimeMXBean runtimeBean = ManagementFactory.getRuntimeMXBean();
+ObjectName jvmMetrics = new ObjectName("Hadoop:service=HBase,name=JvmMetrics");
+
+// There are always two GC collectors
+List<GarbageCollectorMXBean> gcBeans = JSONMetricUtil.getGcCollectorBeans();
+GarbageCollectorMXBean collector1 = null;
+GarbageCollectorMXBean collector2 = null;
+try {
+  collector1 = gcBeans.get(0);
+  collector2 = gcBeans.get(1);
+} catch (IndexOutOfBoundsException e) {}
+List<MemoryPoolMXBean> mPools = JSONMetricUtil.getMemoryPools();
+pageContext.setAttribute("pageTitle", "Process info for PID: " + JSONMetricUtil.getProcessPID());
+%>
+<%-- [The remainder of this JSP's HTML markup was stripped by the mail archive.
+     It renders the process command line (JSONMetricUtil.getCommmand()), a
+     summary table of Started / Uptime / PID / Owner populated from runtimeBean
+     and JSONMetricUtil, and a Threads section (ThreadsNew, ...); the diff
+     excerpt is truncated here in the archived message.] --%>

[jira] [Commented] (HBASE-27827) Introduce kubernetes deployment

2023-09-07 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762652#comment-17762652
 ] 

Nick Dimiduk commented on HBASE-27827:
--

Hi [~zhangduo] thanks for checking. No one was really looking at the patches 
and I've been away for the last couple months. Let me spend some time in the 
coming days to refresh my memory and the patches.

> Introduce kubernetes deployment
> ---
>
> Key: HBASE-27827
> URL: https://issues.apache.org/jira/browse/HBASE-27827
> Project: HBase
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
>
> As per the 
> [discussion|https://lists.apache.org/thread/fgxyk4y32xnhzr5prdmhfjkfpk15g5jx] 
> on the dev list, introduce a basic harness for deploying ZooKeeper, HDFS, and 
> HBase on Kubernetes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27096) Exclude files which are not in source control as much as possible when running spotless

2023-09-07 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762631#comment-17762631
 ] 

Nick Dimiduk commented on HBASE-27096:
--

[~nihaljain.cs] We need to apply the change from HBASE-27102 on the 
hbase-operator-tools repo as well?

> Exclude files which are not in source control as much as possible when 
> running spotless
> ---
>
> Key: HBASE-27096
> URL: https://issues.apache.org/jira/browse/HBASE-27096
> Project: HBase
>  Issue Type: Improvement
>  Components: build, pom
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> In HBASE-27084, we added spotless:check to mvn verify stage, but the general 
> format section in spotless will include some files which are not in source 
> control, such as .idea/* or .settings/*, and then fail the 'mvn install'.
> Usually it could be fixed by a spotless:apply call, but the IDE may 
> regenerate the file and fail the build again.
> Let's add more exclude patterns to make it work better.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-26780) HFileBlock.verifyOnDiskSizeMatchesHeader throw IOException: Passed in onDiskSizeWithHeader= A != B

2023-09-07 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762626#comment-17762626
 ] 

Nick Dimiduk commented on HBASE-26780:
--

[~cribbee] Thanks for the stack trace. I think that my patch on HBASE-28065 
will indeed improve the handling of the example you posted. It should help 
because it (1) re-orders the sequence of actions to perform checksum validation 
before accessing the hfileblock header value, and (2) propagates such a failure 
back up to be retried with HDFS checksum validation enabled. Such a file may 
still be broken, which my patch cannot fix, but at least we can be sure that 
the data was read with checksum validation enabled.
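
A hedged outline of that re-ordering, stitched together from the diffs in the PR discussion above (not the exact patch):

{code:java}
// 1. Read the block bytes, then validate the HBase checksum FIRST.
if (verifyChecksum && !validateChecksum(offset, curBlock, hdrSize)) {
  invalidateNextBlockHeader();
  return null; // caller retries the read with HDFS checksum validation enabled
}
// 2. Only now trust fields parsed from the block header.
int sizeFromHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);
{code}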

> HFileBlock.verifyOnDiskSizeMatchesHeader throw IOException: Passed in 
> onDiskSizeWithHeader= A != B
> --
>
> Key: HBASE-26780
> URL: https://issues.apache.org/jira/browse/HBASE-26780
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 2.2.2
>Reporter: yuzhang
>Priority: Major
> Attachments: IOException.png
>
>
> When I scan a region, HBase throw IOException: Passed in 
> onDiskSizeWithHeader= A != B
> The HFile mentioned Error message can be access normally.
> it recover by command – move region. I guess that onDiskSizeWithHeader of 
> HFileBlock has been changed. And RS get the correct BlockHeader Info after 
> region reopened.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)