[jira] [Commented] (HBASE-28890) RefCnt Leak error when caching index blocks at write time
[ https://issues.apache.org/jira/browse/HBASE-28890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886776#comment-17886776 ] Hudson commented on HBASE-28890: Results for branch branch-2.5 [build #604 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/604/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/604/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/604/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/604/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/604/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/604/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > RefCnt Leak error when caching index blocks at write time > - > > Key: HBASE-28890 > URL: https://issues.apache.org/jira/browse/HBASE-28890 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-beta-1, 2.7.0, 2.6.1, 2.5.10 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0, 2.7.0, 2.5.11, 2.6.2 > > > Following [~bbeaudreault]'s work on HBASE-27170, which added the (very useful) > refcount leak detector, we sometimes see these reports on some branch-2 based > deployments: > {noformat} > 2024-09-25 10:06:42,413 ERROR > org.apache.hbase.thirdparty.io.netty.util.ResourceLeakDetector: LEAK: > RefCnt.release() was not called before it's garbage-collected. See > https://netty.io/wiki/reference-counted-objects.html for more information. > Recent access records: > Created at: > org.apache.hadoop.hbase.nio.RefCnt.<init>(RefCnt.java:59) > org.apache.hadoop.hbase.nio.RefCnt.create(RefCnt.java:54) > org.apache.hadoop.hbase.nio.ByteBuff.wrap(ByteBuff.java:550) > > org.apache.hadoop.hbase.io.ByteBuffAllocator.allocate(ByteBuffAllocator.java:357) > > org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.cloneUncompressedBufferWithHeader(HFileBlock.java:1153) > > org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.getBlockForCaching(HFileBlock.java:1215) > > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexWriter.lambda$writeIndexBlocks$0(HFileBlockIndex.java:997) > java.base/java.util.Optional.ifPresent(Optional.java:178) > > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexWriter.writeIndexBlocks(HFileBlockIndex.java:996) > > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.close(HFileWriterImpl.java:635) > > org.apache.hadoop.hbase.regionserver.StoreFileWriter.close(StoreFileWriter.java:378) > > org.apache.hadoop.hbase.regionserver.StoreFlusher.finalizeWriter(StoreFlusher.java:69) > > 
org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:74) > > org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:831) > > org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2033) > > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2878) > > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2620) > > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2592) > > org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2462) > > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:602) > > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:572) > > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$1000(MemStoreFlusher.java:65) > > org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:344) > {noformat} > It turns out that we always convert the block to an "on-heap" one,
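The stack trace above shows the buffer being allocated inside getBlockForCaching() at write time and then never released once the cache holds its own copy. The reference-counting contract can be illustrated with a minimal, self-contained sketch; the class and method names below (RefCountedBuffer, cloneForCaching) are invented stand-ins, not the real org.apache.hadoop.hbase.nio.RefCnt API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal stand-in for an HBase RefCnt-backed buffer (hypothetical names).
class RefCountedBuffer {
    private final AtomicInteger refCnt = new AtomicInteger(1);

    boolean release() {
        int v = refCnt.decrementAndGet();
        if (v < 0) {
            throw new IllegalStateException("released more times than retained");
        }
        return v == 0; // true once the underlying memory may be freed
    }

    int refCnt() {
        return refCnt.get();
    }
}

public class WriteTimeCacheSketch {
    // Mimics the getBlockForCaching() pattern: allocates a buffer that the
    // caller becomes responsible for releasing.
    static RefCountedBuffer cloneForCaching() {
        return new RefCountedBuffer();
    }

    public static void main(String[] args) {
        // Fixed pattern: release the writer-side buffer once the block cache
        // holds its own (on-heap) copy, so refCnt reaches 0. If the release is
        // skipped, the leak detector fires when the buffer is garbage-collected.
        RefCountedBuffer block = cloneForCaching();
        try {
            // cache.cacheBlock(cacheKey, onHeapCopyOf(block)); // elided
        } finally {
            boolean freed = block.release();
            assert freed;
        }
        System.out.println("refCnt after release: " + block.refCnt());
    }
}
```

The essential point is the ownership rule: whoever is handed a ref-counted buffer without passing it on must call release() on every code path, including exception paths.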
[jira] [Commented] (HBASE-28895) Bump Avro dependency version to 1.11.4
[ https://issues.apache.org/jira/browse/HBASE-28895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886743#comment-17886743 ] Hudson commented on HBASE-28895: Results for branch master [build #1177 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1177/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1177/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1177/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Bump Avro dependency version to 1.11.4 > -- > > Key: HBASE-28895 > URL: https://issues.apache.org/jira/browse/HBASE-28895 > Project: HBase > Issue Type: Task >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: hbase-connectors-1.1.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28846) Change the default Hadoop3 version to 3.4.0, and add tests to make sure HBase works with earlier supported Hadoop versions
[ https://issues.apache.org/jira/browse/HBASE-28846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886730#comment-17886730 ] Hudson commented on HBASE-28846: Results for branch HBASE-28846 [build #65 on builds.a.o|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/65/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/65/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/65/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop ${HADOOP_VERSION} backward compatibility checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/65/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop ${HADOOP_VERSION} backward compatibility checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/65/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop ${HADOOP_VERSION} backward compatibility checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/65/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test for 3.3.5 {color} (/) {color:green}+1 client integration test for 3.3.6 {color} (/) {color:green}+1 client integration test for 3.4.0 {color} > Change the default Hadoop3 version to 3.4.0, and add tests to make sure HBase > works with earlier supported Hadoop versions > -- > > Key: HBASE-28846 > URL: https://issues.apache.org/jira/browse/HBASE-28846 > Project: HBase > Issue Type: Improvement > Components: hadoop3, test >Affects Versions: 2.6.0, 3.0.0-beta-1, 4.0.0-alpha-1, 2.7.0 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Labels: pull-request-available > > Discussed on the mailing list: > https://lists.apache.org/thread/orc62x0v2ktvj26ltvrqpfgzr94ncswn
[jira] [Commented] (HBASE-28893) RefCnt Leak error when closing a HalfStoreFileReader
[ https://issues.apache.org/jira/browse/HBASE-28893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886592#comment-17886592 ] Hudson commented on HBASE-28893: Results for branch branch-2 [build #1160 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RefCnt Leak error when closing a HalfStoreFileReader > > > Key: HBASE-28893 > URL: https://issues.apache.org/jira/browse/HBASE-28893 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-beta-1, 2.7.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Fix For: 3.0.0, 2.7.0 > > > In HBASE-28596 we added the ability for references to be resolved to the > original file blocks in the bucket cache. 
As part of this, we had to modify > the HalfStoreFileReader.close method to create a scanner and seek to the boundary > cell, in order to get the related offset and calculate the limiting offset for > blocks we want to evict. We missed closing the scanner instance there, which > then causes the refcount leaks reported below: > {noformat} > 2024-09-25 14:24:51,292 ERROR > org.apache.hbase.thirdparty.io.netty.util.ResourceLeakDetector: LEAK: > RefCnt.release() was not called before it's garbage-collected. See > https://netty.io/wiki/reference-counted-objects.html for more information. > Recent access records: > Created at: > org.apache.hadoop.hbase.nio.RefCnt.<init>(RefCnt.java:59) > org.apache.hadoop.hbase.nio.RefCnt.create(RefCnt.java:54) > org.apache.hadoop.hbase.nio.ByteBuff.wrap(ByteBuff.java:550) > > org.apache.hadoop.hbase.io.ByteBuffAllocator.allocate(ByteBuffAllocator.java:357) > > org.apache.hadoop.hbase.io.hfile.bucket.FileIOEngine.read(FileIOEngine.java:134) > > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:666) > > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:98) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.getCachedBlock(HFileReaderImpl.java:1102) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1287) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1248) > > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:318) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:670) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:623) > > org.apache.hadoop.hbase.io.HalfStoreFileReader.close(HalfStoreFileReader.java:368) > > org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2352) > > 
org.apache.hadoop.hbase.regionserver.HStore.closeAndArchiveCompactedFiles(HStore.java:2314) > > org.apache.hadoop.hbase.regionserver.CompactedHFilesDischargeHandler.process(CompactedHFilesDischargeHandler.java:41) > > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > {noformat}
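The fix pattern for an unclosed scanner like this is try-with-resources: the short-lived scanner created only to locate the boundary offset is closed on every path, releasing whatever blocks the seek pinned. A minimal sketch, with invented names (BoundaryScanner) standing in for the real HFileScanner:

```java
// Hypothetical stand-in for a scanner that pins ref-counted blocks and must
// be closed to release them (not the real HFileScanner API).
class BoundaryScanner implements AutoCloseable {
    static int openScanners = 0;

    BoundaryScanner() {
        openScanners++; // the real scanner retains cached blocks via RefCnt
    }

    long seekTo(byte[] boundaryKey) {
        return 0L; // pretend this returns the block offset at the boundary cell
    }

    @Override
    public void close() {
        openScanners--; // releases whatever the seek pinned
    }
}

public class HalfReaderCloseSketch {
    // Fixed pattern: the scanner used to compute the eviction-limit offset is
    // closed via try-with-resources instead of being abandoned.
    static long boundaryOffset(byte[] boundaryKey) {
        try (BoundaryScanner scanner = new BoundaryScanner()) {
            return scanner.seekTo(boundaryKey);
        }
    }

    public static void main(String[] args) {
        boundaryOffset(new byte[] { 0x01 });
        assert BoundaryScanner.openScanners == 0; // nothing left pinned
    }
}
```

Because close() runs even when seekTo() throws, the pinned blocks are released on exception paths too, which is exactly what the abandoned-scanner version missed.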
[jira] [Commented] (HBASE-28884) SFT's BrokenStoreFileCleaner may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-28884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886590#comment-17886590 ] Hudson commented on HBASE-28884: Results for branch branch-2 [build #1160 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > SFT's BrokenStoreFileCleaner may cause data loss > > > Key: HBASE-28884 > URL: https://issues.apache.org/jira/browse/HBASE-28884 > Project: HBase > Issue Type: Bug > Components: SFT >Affects Versions: 2.6.0, 3.0.0-beta-1, 2.7.0, 2.5.10 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.5.11, 2.6.2 > > > With BrokenStoreFileCleaner enabled, one of our customers ran > into a data loss situation, probably due to a race condition between regions > getting moved out of the regionserver while the BrokenStoreFileCleaner was > checking this region's files' eligibility for deletion. We saw that the > file got deleted by the given region server around the same time the region > got closed on this region server. I believe a race condition during region > close is possible here: > 1) In BrokenStoreFileCleaner, for each region online on the given RS, we get > the list of files in the store dirs, then iterate through it [1]; > 2) For each file listed, we perform several checks, including this one [2] > that checks if the file is "active". > The problem is, if the region for the file we are checking got closed between > point #1 and #2, then by the time we check whether the file is active in [2], the store > may have already been closed as part of the region closure, so this check > would consider the file deletable. > One simple solution is to check whether the store's region is still open before > proceeding with deleting the file. > [1] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L99 > [2] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L133
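The proposed guard, re-checking that the file's region is still open immediately before deleting, can be sketched as follows. All names here are hypothetical stand-ins; the real BrokenStoreFileCleaner works with HStoreFile and region objects rather than strings:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed fix: the "file looks inactive" verdict
// only counts if the owning region is still online on this server.
public class BrokenFileCleanerSketch {
    // Stand-in for the region server's online-region registry.
    static final Set<String> onlineRegions = ConcurrentHashMap.newKeySet();

    static boolean shouldDelete(String regionName, boolean fileLooksInactive) {
        // Without the second condition, a region closed between listing the
        // store files (step #1) and the "is the file active?" check (step #2)
        // makes every file of the now-closed store look deletable.
        return fileLooksInactive && onlineRegions.contains(regionName);
    }

    public static void main(String[] args) {
        onlineRegions.add("region-A");
        assert shouldDelete("region-A", true);  // region open: verdict is meaningful

        onlineRegions.remove("region-A");       // region moved away mid-scan
        assert !shouldDelete("region-A", true); // guard averts the deletion
    }
}
```

Note this narrows the race window rather than eliminating it; the region can still close between the guard and the delete, so it is a pragmatic mitigation in the spirit of the description above.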
[jira] [Commented] (HBASE-28890) RefCnt Leak error when caching index blocks at write time
[ https://issues.apache.org/jira/browse/HBASE-28890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886591#comment-17886591 ] Hudson commented on HBASE-28890: Results for branch branch-2 [build #1160 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1160/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > RefCnt Leak error when caching index blocks at write time
[jira] [Commented] (HBASE-28890) RefCnt Leak error when caching index blocks at write time
[ https://issues.apache.org/jira/browse/HBASE-28890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886561#comment-17886561 ] Hudson commented on HBASE-28890: Results for branch branch-2.6 [build #214 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > RefCnt Leak error when caching index blocks at write time
[jira] [Commented] (HBASE-28884) SFT's BrokenStoreFileCleaner may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-28884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886560#comment-17886560 ] Hudson commented on HBASE-28884: Results for branch branch-2.6 [build #214 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/214/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > SFT's BrokenStoreFileCleaner may cause data loss
[jira] [Commented] (HBASE-28846) Change the default Hadoop3 version to 3.4.0, and add tests to make sure HBase works with earlier supported Hadoop versions
[ https://issues.apache.org/jira/browse/HBASE-28846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886539#comment-17886539 ] Hudson commented on HBASE-28846: Results for branch HBASE-28846 [build #64 on builds.a.o|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/64/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/64/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/64/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop ${HADOOP_VERSION} backward compatibility checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/64/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop ${HADOOP_VERSION} backward compatibility checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/64/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop ${HADOOP_VERSION} backward compatibility checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/64/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test for 3.3.5 {color} (/) {color:green}+1 client integration test for 3.3.6 {color} (/) {color:green}+1 client integration test for 3.4.0 {color} > Change the default Hadoop3 version to 3.4.0, and add tests to make sure HBase > works with earlier supported Hadoop versions
[jira] [Commented] (HBASE-28884) SFT's BrokenStoreFileCleaner may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-28884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886503#comment-17886503 ] Hudson commented on HBASE-28884: Results for branch branch-2.5 [build #603 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/603/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/603/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/603/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/603/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/603/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/603/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > SFT's BrokenStoreFileCleaner may cause data loss > > > Key: HBASE-28884 > URL: https://issues.apache.org/jira/browse/HBASE-28884 > Project: HBase > Issue Type: Bug > Components: SFT >Affects Versions: 2.6.0, 3.0.0-beta-1, 2.7.0, 2.5.10 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.5.11, 2.6.2 > > > With the BrokenStoreFileCleaner enabled, one of our customers ran into a data loss situation, probably due to a race condition between a region getting moved off the region server while the BrokenStoreFileCleaner was checking that region's files' eligibility for deletion. We saw that the file got deleted by the given region server around the same time the region got closed on that region server. I believe a race condition during region close is possible here: > 1) In BrokenStoreFileCleaner, for each region online on the given RS, we get the list of files in the store dirs, then iterate through it [1]; > 2) For each file listed, we perform several checks, including this one [2], which checks whether the file is "active". > The problem is that if the region for the file we are checking got closed between points #1 and #2, by the time we check whether the file is active in [2] the store may have already been closed as part of the region closure, so this check would consider the file deletable. > One simple solution is to check whether the store's region is still open before proceeding with deleting the file. > [1] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L99 > [2] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L133 -- This message was sent by Atlassian Jira (v8.20.10#820010)
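The simple fix proposed in the description — re-checking that the store's region is still open right before deleting — can be sketched as follows. This is a minimal, self-contained illustration: the `Region`, `maybeDeleteBrokenFile`, and `deleted` names are hypothetical stand-ins, not the actual HBase API.

```java
import java.util.ArrayList;
import java.util.List;

public class RegionOpenCheckSketch {
    // Hypothetical stand-in for a region that may be closed concurrently.
    static class Region {
        volatile boolean open = true;
    }

    static List<String> deleted = new ArrayList<>();

    // Guard the delete with a re-check that the region is still open, so a
    // file whose region closed between listing (step 1) and the "active"
    // check (step 2) is skipped instead of deleted.
    static void maybeDeleteBrokenFile(Region region, String file, boolean fileIsActive) {
        if (fileIsActive) {
            return; // active files are never deleted
        }
        if (!region.open) {
            return; // region closed mid-scan: skip rather than risk data loss
        }
        deleted.add(file); // stand-in for the actual file deletion
    }

    public static void main(String[] args) {
        Region r = new Region();
        maybeDeleteBrokenFile(r, "broken-1", false); // deleted
        r.open = false;                              // region closes concurrently
        maybeDeleteBrokenFile(r, "broken-2", false); // skipped: region is closed
        System.out.println(deleted);                 // only broken-1 was deleted
    }
}
```

Note that a plain re-check only narrows the race window; the actual fix in HBASE-28884 may coordinate with region close differently.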
[jira] [Commented] (HBASE-28893) RefCnt Leak error when closing a HalfStoreFileReader
[ https://issues.apache.org/jira/browse/HBASE-28893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886466#comment-17886466 ] Hudson commented on HBASE-28893: Results for branch branch-3 [build #303 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/303/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/303/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/303/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RefCnt Leak error when closing a HalfStoreFileReader > > > Key: HBASE-28893 > URL: https://issues.apache.org/jira/browse/HBASE-28893 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-beta-1, 2.7.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Fix For: 3.0.0, 2.7.0 > > > In HBASE-28596 we added the ability for references to be resolved to the original file's blocks in the bucket cache. As part of this, we had to modify the HalfStoreFileReader.close method to create a scanner and seek to the boundary cell, in order to get the related offset and calculate the limiting offset for the blocks we want to evict. We missed closing the scanner instance there, which then causes the refcount leaks reported below: > {noformat} > 2024-09-25 14:24:51,292 ERROR > org.apache.hbase.thirdparty.io.netty.util.ResourceLeakDetector: LEAK: > RefCnt.release() was not called before it's garbage-collected. See > https://netty.io/wiki/reference-counted-objects.html for more information. 
> Recent access records: > Created at: > org.apache.hadoop.hbase.nio.RefCnt.(RefCnt.java:59) > org.apache.hadoop.hbase.nio.RefCnt.create(RefCnt.java:54) > org.apache.hadoop.hbase.nio.ByteBuff.wrap(ByteBuff.java:550) > > org.apache.hadoop.hbase.io.ByteBuffAllocator.allocate(ByteBuffAllocator.java:357) > > org.apache.hadoop.hbase.io.hfile.bucket.FileIOEngine.read(FileIOEngine.java:134) > > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:666) > > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:98) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.getCachedBlock(HFileReaderImpl.java:1102) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1287) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1248) > > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:318) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:670) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:623) > > org.apache.hadoop.hbase.io.HalfStoreFileReader.close(HalfStoreFileReader.java:368) > > org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2352) > > org.apache.hadoop.hbase.regionserver.HStore.closeAndArchiveCompactedFiles(HStore.java:2314) > > org.apache.hadoop.hbase.regionserver.CompactedHFilesDischargeHandler.process(CompactedHFilesDischargeHandler.java:41) > > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
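The fix amounts to guaranteeing the temporary scanner is closed on every path out of `close`. In Java, try-with-resources is the idiomatic way to do that for a resource holding reference-counted buffers. The sketch below is self-contained and hypothetical — `Scanner` here is a stand-in, not the actual HFileScanner API:

```java
public class ScannerCloseSketch {
    static int openScanners = 0;

    // Hypothetical stand-in for a scanner that holds ref-counted buffers
    // until it is closed.
    static class Scanner implements AutoCloseable {
        Scanner() { openScanners++; }
        long seekToBoundary() { return 42L; } // stand-in: returns an offset
        @Override public void close() { openScanners--; }
    }

    // Leaky version, mirroring the bug: the scanner is never closed.
    static long leakyOffsetLookup() {
        Scanner s = new Scanner();
        return s.seekToBoundary();
    }

    // Fixed version: try-with-resources closes the scanner even on early
    // return or exception.
    static long safeOffsetLookup() {
        try (Scanner s = new Scanner()) {
            return s.seekToBoundary();
        }
    }

    public static void main(String[] args) {
        leakyOffsetLookup(); // leaves one scanner open (the leak)
        safeOffsetLookup();  // opens and closes its scanner
        System.out.println("open scanners after both calls: " + openScanners); // 1
    }
}
```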
[jira] [Commented] (HBASE-28884) SFT's BrokenStoreFileCleaner may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-28884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886465#comment-17886465 ] Hudson commented on HBASE-28884: Results for branch branch-3 [build #303 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/303/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/303/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/303/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > SFT's BrokenStoreFileCleaner may cause data loss > > > Key: HBASE-28884 > URL: https://issues.apache.org/jira/browse/HBASE-28884 > Project: HBase > Issue Type: Bug > Components: SFT >Affects Versions: 2.6.0, 3.0.0-beta-1, 2.7.0, 2.5.10 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.5.11, 2.6.2 > > > With the BrokenStoreFileCleaner enabled, one of our customers ran into a data loss situation, probably due to a race condition between a region getting moved off the region server while the BrokenStoreFileCleaner was checking that region's files' eligibility for deletion. We saw that the file got deleted by the given region server around the same time the region got closed on that region server. 
I believe a race condition during region close is possible here: > 1) In BrokenStoreFileCleaner, for each region online on the given RS, we get the list of files in the store dirs, then iterate through it [1]; > 2) For each file listed, we perform several checks, including this one [2], which checks whether the file is "active". > The problem is that if the region for the file we are checking got closed between points #1 and #2, by the time we check whether the file is active in [2] the store may have already been closed as part of the region closure, so this check would consider the file deletable. > One simple solution is to check whether the store's region is still open before proceeding with deleting the file. > [1] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L99 > [2] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L133 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28893) RefCnt Leak error when closing a HalfStoreFileReader
[ https://issues.apache.org/jira/browse/HBASE-28893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886458#comment-17886458 ] Hudson commented on HBASE-28893: Results for branch master [build #1176 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1176/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1176/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1176/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RefCnt Leak error when closing a HalfStoreFileReader > > > Key: HBASE-28893 > URL: https://issues.apache.org/jira/browse/HBASE-28893 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-beta-1, 2.7.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Fix For: 3.0.0, 2.7.0 > > > In HBASE-28596 we added the ability for references to be resolved to the original file's blocks in the bucket cache. As part of this, we had to modify the HalfStoreFileReader.close method to create a scanner and seek to the boundary cell, in order to get the related offset and calculate the limiting offset for the blocks we want to evict. We missed closing the scanner instance there, which then causes the refcount leaks reported below: > {noformat} > 2024-09-25 14:24:51,292 ERROR > org.apache.hbase.thirdparty.io.netty.util.ResourceLeakDetector: LEAK: > RefCnt.release() was not called before it's garbage-collected. See > https://netty.io/wiki/reference-counted-objects.html for more information. 
> Recent access records: > Created at: > org.apache.hadoop.hbase.nio.RefCnt.(RefCnt.java:59) > org.apache.hadoop.hbase.nio.RefCnt.create(RefCnt.java:54) > org.apache.hadoop.hbase.nio.ByteBuff.wrap(ByteBuff.java:550) > > org.apache.hadoop.hbase.io.ByteBuffAllocator.allocate(ByteBuffAllocator.java:357) > > org.apache.hadoop.hbase.io.hfile.bucket.FileIOEngine.read(FileIOEngine.java:134) > > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:666) > > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:98) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.getCachedBlock(HFileReaderImpl.java:1102) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1287) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1248) > > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:318) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:670) > > org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:623) > > org.apache.hadoop.hbase.io.HalfStoreFileReader.close(HalfStoreFileReader.java:368) > > org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2352) > > org.apache.hadoop.hbase.regionserver.HStore.closeAndArchiveCompactedFiles(HStore.java:2314) > > org.apache.hadoop.hbase.regionserver.CompactedHFilesDischargeHandler.process(CompactedHFilesDischargeHandler.java:41) > > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28884) SFT's BrokenStoreFileCleaner may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-28884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886457#comment-17886457 ] Hudson commented on HBASE-28884: Results for branch master [build #1176 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1176/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1176/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1176/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > SFT's BrokenStoreFileCleaner may cause data loss > > > Key: HBASE-28884 > URL: https://issues.apache.org/jira/browse/HBASE-28884 > Project: HBase > Issue Type: Bug > Components: SFT >Affects Versions: 2.6.0, 3.0.0-beta-1, 2.7.0, 2.5.10 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.5.11, 2.6.2 > > > With the BrokenStoreFileCleaner enabled, one of our customers ran into a data loss situation, probably due to a race condition between a region getting moved off the region server while the BrokenStoreFileCleaner was checking that region's files' eligibility for deletion. We saw that the file got deleted by the given region server around the same time the region got closed on that region server. 
I believe a race condition during region close is possible here: > 1) In BrokenStoreFileCleaner, for each region online on the given RS, we get the list of files in the store dirs, then iterate through it [1]; > 2) For each file listed, we perform several checks, including this one [2], which checks whether the file is "active". > The problem is that if the region for the file we are checking got closed between points #1 and #2, by the time we check whether the file is active in [2] the store may have already been closed as part of the region closure, so this check would consider the file deletable. > One simple solution is to check whether the store's region is still open before proceeding with deleting the file. > [1] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L99 > [2] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L133 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28846) Change the default Hadoop3 version to 3.4.0, and add tests to make sure HBase works with earlier supported Hadoop versions
[ https://issues.apache.org/jira/browse/HBASE-28846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886337#comment-17886337 ] Hudson commented on HBASE-28846: Results for branch HBASE-28846 [build #63 on builds.a.o|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/63/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 jdk17 hadoop ${HADOOP_VERSION} backward compatibility checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/63/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop ${HADOOP_VERSION} backward compatibility checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/63/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop ${HADOOP_VERSION} backward compatibility checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/Test%20script%20for%20nightly%20script/job/HBASE-28846/63/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test for 3.3.5 {color} (/) {color:green}+1 client integration test for 3.3.6 {color} (/) {color:green}+1 client integration test for 3.4.0 {color} > Change the default Hadoop3 version to 3.4.0, and add tests to make sure HBase > works with earlier supported Hadoop versions > -- > > Key: HBASE-28846 > URL: https://issues.apache.org/jira/browse/HBASE-28846 > Project: HBase > Issue Type: Improvement > Components: hadoop3, test >Affects Versions: 2.6.0, 3.0.0-beta-1, 4.0.0-alpha-1, 2.7.0 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Labels: pull-request-available > > Discussed on the mailing list: > https://lists.apache.org/thread/orc62x0v2ktvj26ltvrqpfgzr94ncswn -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28803) HBase Master stuck due to improper handling of WALSyncTimeoutException within UncheckedIOException
[ https://issues.apache.org/jira/browse/HBASE-28803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886307#comment-17886307 ] Hudson commented on HBASE-28803: Results for branch branch-2 [build #1159 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1159/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1159/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1159/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1159/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1159/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1159/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > HBase Master stuck due to improper handling of WALSyncTimeoutException within > UncheckedIOException > -- > > Key: HBASE-28803 > URL: https://issues.apache.org/jira/browse/HBASE-28803 > Project: HBase > Issue Type: Bug > Components: master, wal >Affects Versions: 2.6.0, 3.0.0-alpha-4 >Reporter: Peter Somogyi >Assignee: Nick Dimiduk >Priority: Critical > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > One of our test clusters got stuck during a rolling restart due to a WAL.sync timeout. The issue did not result in the Master aborting, because the WALSyncTimeoutException was wrapped in an UncheckedIOException, which prevented the proper exception handling mechanism from being triggered. As a result, the Master was hanging for a long time and procedures were stuck. This was a 2.4-based HBase with HBASE-27230. > {noformat} > 2024-08-17 17:23:07,567 ERROR > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore: Failed > to delete pid=2027 > org.apache.hadoop.hbase.regionserver.wal.WALSyncTimeoutIOException: > org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync > result after 30 ms for txid=4347, WAL system stuck? 
> at > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.blockOnSync(AbstractFSWAL.java:848) > at > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:718) > at org.apache.hadoop.hbase.regionserver.HRegion.sync(HRegion.java:8902) > at > org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8469) > at > org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4523) > at > org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447) > at > org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4377) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4853) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4847) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4843) > at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.lambda$delete$8(RegionProcedureStore.java:379) > at > org.apache.hadoop.hbase.master.region.MasterRegion.update(MasterRegion.java:141) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:379) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:410) > at > org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) > at > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) > at > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) > at > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) > Caused by: org.apache.hadoop.h
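The failure mode above is a general one: code that matches on an exception type directly will miss it when it arrives wrapped in a `java.io.UncheckedIOException`. A hedged sketch of cause-chain unwrapping follows; the `WalSyncTimeoutException` class here is a local stand-in for HBase's WALSyncTimeoutIOException, not the real class.

```java
import java.io.IOException;
import java.io.UncheckedIOException;

public class UnwrapSketch {
    // Local stand-in for
    // org.apache.hadoop.hbase.regionserver.wal.WALSyncTimeoutIOException.
    static class WalSyncTimeoutException extends IOException {
        WalSyncTimeoutException(String msg) { super(msg); }
    }

    // Walk the cause chain so a timeout wrapped in UncheckedIOException
    // (or any other wrapper) is still recognized by the abort path.
    static boolean isWalSyncTimeout(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (cur instanceof WalSyncTimeoutException) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Throwable wrapped = new UncheckedIOException(
            new WalSyncTimeoutException("WAL system stuck?"));
        System.out.println(isWalSyncTimeout(wrapped));                      // true
        System.out.println(isWalSyncTimeout(new IOException("unrelated"))); // false
    }
}
```

A direct `wrapped instanceof WalSyncTimeoutException` check returns false here, which is exactly why the Master never triggered its abort handling.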
[jira] [Commented] (HBASE-28879) Bump hbase-thirdparty to 4.1.9
[ https://issues.apache.org/jira/browse/HBASE-28879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886250#comment-17886250 ] Hudson commented on HBASE-28879: Results for branch branch-3 [build #302 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Bump hbase-thirdparty to 4.1.9 > -- > > Key: HBASE-28879 > URL: https://issues.apache.org/jira/browse/HBASE-28879 > Project: HBase > Issue Type: Task > Components: dependencies, thirdparty >Reporter: Duo Zhang >Assignee: Nick Dimiduk >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28803) HBase Master stuck due to improper handling of WALSyncTimeoutException within UncheckedIOException
[ https://issues.apache.org/jira/browse/HBASE-28803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886251#comment-17886251 ] Hudson commented on HBASE-28803: Results for branch branch-3 [build #302 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > HBase Master stuck due to improper handling of WALSyncTimeoutException within > UncheckedIOException > -- > > Key: HBASE-28803 > URL: https://issues.apache.org/jira/browse/HBASE-28803 > Project: HBase > Issue Type: Bug > Components: master, wal >Affects Versions: 2.6.0, 3.0.0-alpha-4 >Reporter: Peter Somogyi >Assignee: Nick Dimiduk >Priority: Critical > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > One of our test clusters got stuck during a rolling restart due to a WAL.sync timeout. The issue did not result in the Master aborting, because the WALSyncTimeoutException was wrapped in an UncheckedIOException, which prevented the proper exception handling mechanism from being triggered. As a result, the Master was hanging for a long time and procedures were stuck. This was a 2.4-based HBase with HBASE-27230. 
> {noformat} > 2024-08-17 17:23:07,567 ERROR > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore: Failed > to delete pid=2027 > org.apache.hadoop.hbase.regionserver.wal.WALSyncTimeoutIOException: > org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync > result after 30 ms for txid=4347, WAL system stuck? > at > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.blockOnSync(AbstractFSWAL.java:848) > at > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:718) > at org.apache.hadoop.hbase.regionserver.HRegion.sync(HRegion.java:8902) > at > org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8469) > at > org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4523) > at > org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447) > at > org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4377) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4853) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4847) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4843) > at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.lambda$delete$8(RegionProcedureStore.java:379) > at > org.apache.hadoop.hbase.master.region.MasterRegion.update(MasterRegion.java:141) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:379) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:410) > at > org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) > at > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) > at > 
org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) > at > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) > Caused by: org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to > get sync result after 30 ms for txid=4347, WAL system stuck? > at > org.apache.hadoop.hbase.regionserver.wal.SyncFuture.get(SyncFuture.java:171) > at > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.blockOnSync(AbstractFSWAL.java:844) > ... 18 more > 2024-08-17 17:23:07,568 ERROR > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread: Ignoring pid=-1, > state=WAITING_TIMEOUT; > org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: > org.apache.hadoop.hbase.regionserver.wal.WALSyncTimeoutIOException: > org.apache.hadoop.hbase.exc
[jira] [Commented] (HBASE-24177) MetricsTable#updateFlushTime is wrong
[ https://issues.apache.org/jira/browse/HBASE-24177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886249#comment-17886249 ] Hudson commented on HBASE-24177: Results for branch branch-3 [build #302 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > MetricsTable#updateFlushTime is wrong > - > > Key: HBASE-24177 > URL: https://issues.apache.org/jira/browse/HBASE-24177 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 2.2.1 >Reporter: ramkrishna.s.vasudevan >Assignee: Gaurav Kanade >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.5 > > Attachments: after.png, before.png > > > MetricsRegionServer does an update on the MetricsRegionServerSource, MetricsTable, etc. While doing updateFlushTime, the time taken for the flush is correctly updated in the RegionServerSource, but at the MetricsTable level we update the memstore size instead of the time. > This applies from version 1.1 onwards. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28890) RefCnt Leak error when caching index blocks at write time
[ https://issues.apache.org/jira/browse/HBASE-28890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886252#comment-17886252 ] Hudson commented on HBASE-28890: Results for branch branch-3 [build #302 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/302/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RefCnt Leak error when caching index blocks at write time > - > > Key: HBASE-28890 > URL: https://issues.apache.org/jira/browse/HBASE-28890 > Project: HBase > Issue Type: Bug >Affects Versions: 2.6.0, 3.0.0-beta-1, 2.7.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0 > > > Following [~bbeaudreault]'s work from HBASE-27170, which added the (very useful) refcount leak detector, we sometimes see these reports on some branch-2 based deployments: > {noformat} > 2024-09-25 10:06:42,413 ERROR > org.apache.hbase.thirdparty.io.netty.util.ResourceLeakDetector: LEAK: > RefCnt.release() was not called before it's garbage-collected. See > https://netty.io/wiki/reference-counted-objects.html for more information. 
> Recent access records: > Created at: > org.apache.hadoop.hbase.nio.RefCnt.(RefCnt.java:59) > org.apache.hadoop.hbase.nio.RefCnt.create(RefCnt.java:54) > org.apache.hadoop.hbase.nio.ByteBuff.wrap(ByteBuff.java:550) > > org.apache.hadoop.hbase.io.ByteBuffAllocator.allocate(ByteBuffAllocator.java:357) > > org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.cloneUncompressedBufferWithHeader(HFileBlock.java:1153) > > org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.getBlockForCaching(HFileBlock.java:1215) > > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexWriter.lambda$writeIndexBlocks$0(HFileBlockIndex.java:997) > java.base/java.util.Optional.ifPresent(Optional.java:178) > > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexWriter.writeIndexBlocks(HFileBlockIndex.java:996) > > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.close(HFileWriterImpl.java:635) > > org.apache.hadoop.hbase.regionserver.StoreFileWriter.close(StoreFileWriter.java:378) > > org.apache.hadoop.hbase.regionserver.StoreFlusher.finalizeWriter(StoreFlusher.java:69) > > org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:74) > > org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:831) > > org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2033) > > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2878) > > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2620) > > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2592) > > org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2462) > > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:602) > > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:572) > > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$1000(MemStoreFlusher.java:65) > > 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:344) > {noformat} > It turns out that we always convert the block to an "on-heap" one, inside > LruBlockCache.cacheBlock, so when the index block is a SharedMemHFileBlock, > the blockForCaching instance in the code > [here|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java#L1076] > becomes eligible for GC without releasing buffers/decreasing refcount > (leak), right after the BlockIndexWriter.writeIndexBlocks call returns. -- This message was sent by Atlassian Jira (v8.20.10#820010)
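The leak pattern this report describes can be sketched in a few lines. The following is a hypothetical miniature, not the real HFileBlock/LruBlockCache code: the cache stores an on-heap *copy* of the block it is handed, so it never takes ownership of the original, and unless the writer releases its own reference after the cacheBlock call, the refcount never reaches zero:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for a ref-counted HFile block (hypothetical class, not HBase's).
class RefCountedBlock {
    final AtomicInteger refCnt = new AtomicInteger(1);
    void release() { refCnt.decrementAndGet(); }
    boolean leaked() { return refCnt.get() > 0; }
    // The copy owns its own, independent refcount.
    RefCountedBlock onHeapCopy() { return new RefCountedBlock(); }
}

// Stand-in for LruBlockCache: it always caches an on-heap copy, so it never
// takes ownership of the block it was handed.
class Cache {
    RefCountedBlock cacheBlock(RefCountedBlock b) { return b.onHeapCopy(); }
}

class IndexWriterSketch {
    // Models the writeIndexBlocks path: returns true if the block leaked.
    static boolean writeIndexBlocks(Cache cache, boolean releaseAfterCaching) {
        RefCountedBlock blockForCaching = new RefCountedBlock();
        cache.cacheBlock(blockForCaching);
        if (releaseAfterCaching) {
            blockForCaching.release(); // the fix: the caller kept ownership
        }
        return blockForCaching.leaked(); // true reproduces the reported leak
    }
}
```

With `releaseAfterCaching` false the block is dropped with a positive refcount, which is exactly the condition the leak detector flags at GC time.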
[jira] [Commented] (HBASE-28733) Add "2.6 Documentation" to the website
[ https://issues.apache.org/jira/browse/HBASE-28733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886225#comment-17886225 ] Hudson commented on HBASE-28733: Results for branch master [build #1175 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Add "2.6 Documentation" to the website > -- > > Key: HBASE-28733 > URL: https://issues.apache.org/jira/browse/HBASE-28733 > Project: HBase > Issue Type: Task > Components: community, documentation >Reporter: Nick Dimiduk >Assignee: Dávid Paksy >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1 > > > We have released 2.6 but the website has not been updated with the new API > docs. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-24177) MetricsTable#updateFlushTime is wrong
[ https://issues.apache.org/jira/browse/HBASE-24177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886221#comment-17886221 ] Hudson commented on HBASE-24177: Results for branch master [build #1175 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > MetricsTable#updateFlushTime is wrong > - > > Key: HBASE-24177 > URL: https://issues.apache.org/jira/browse/HBASE-24177 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 2.2.1 >Reporter: ramkrishna.s.vasudevan >Assignee: Gaurav Kanade >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.5 > > Attachments: after.png, before.png > > > MetricsRegionServer does an update on the MetricsRegionServerSource, > MetricsTable etc. > While doing updateFlushTime, the time taken for flush is rightly updated in > the RegionServerSource but at the MetricsTable level we update the > memstoresize instead of the time. > This applies from 1.1 version onwards. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28890) RefCnt Leak error when caching index blocks at write time
[ https://issues.apache.org/jira/browse/HBASE-28890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886224#comment-17886224 ] Hudson commented on HBASE-28890: Results for branch master [build #1175 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RefCnt Leak error when caching index blocks at write time > - > > Key: HBASE-28890 > URL: https://issues.apache.org/jira/browse/HBASE-28890 > Project: HBase > Issue Type: Bug >Affects Versions: 2.6.0, 3.0.0-beta-1, 2.7.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0 > > > Following [~bbeaudreault]'s work from HBASE-27170 that added the (very useful) > refcount leak detector, we sometimes see these reports on some branch-2 based > deployments: > {noformat} > 2024-09-25 10:06:42,413 ERROR > org.apache.hbase.thirdparty.io.netty.util.ResourceLeakDetector: LEAK: > RefCnt.release() was not called before it's garbage-collected. See > https://netty.io/wiki/reference-counted-objects.html for more information. 
> Recent access records: > Created at: > org.apache.hadoop.hbase.nio.RefCnt.(RefCnt.java:59) > org.apache.hadoop.hbase.nio.RefCnt.create(RefCnt.java:54) > org.apache.hadoop.hbase.nio.ByteBuff.wrap(ByteBuff.java:550) > > org.apache.hadoop.hbase.io.ByteBuffAllocator.allocate(ByteBuffAllocator.java:357) > > org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.cloneUncompressedBufferWithHeader(HFileBlock.java:1153) > > org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.getBlockForCaching(HFileBlock.java:1215) > > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexWriter.lambda$writeIndexBlocks$0(HFileBlockIndex.java:997) > java.base/java.util.Optional.ifPresent(Optional.java:178) > > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexWriter.writeIndexBlocks(HFileBlockIndex.java:996) > > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.close(HFileWriterImpl.java:635) > > org.apache.hadoop.hbase.regionserver.StoreFileWriter.close(StoreFileWriter.java:378) > > org.apache.hadoop.hbase.regionserver.StoreFlusher.finalizeWriter(StoreFlusher.java:69) > > org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:74) > > org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:831) > > org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2033) > > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2878) > > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2620) > > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2592) > > org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2462) > > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:602) > > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:572) > > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$1000(MemStoreFlusher.java:65) > > 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:344) > {noformat} > It turns out that we always convert the block to an "on-heap" one, inside > LruBlockCache.cacheBlock, so when the index block is a SharedMemHFileBlock, > the blockForCaching instance in the code > [here|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java#L1076] > becomes eligible for GC without releasing buffers/decreasing refcount > (leak), right after the BlockIndexWriter.writeIndexBlocks call returns. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28803) HBase Master stuck due to improper handling of WALSyncTimeoutException within UncheckedIOException
[ https://issues.apache.org/jira/browse/HBASE-28803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886223#comment-17886223 ] Hudson commented on HBASE-28803: Results for branch master [build #1175 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > HBase Master stuck due to improper handling of WALSyncTimeoutException within > UncheckedIOException > -- > > Key: HBASE-28803 > URL: https://issues.apache.org/jira/browse/HBASE-28803 > Project: HBase > Issue Type: Bug > Components: master, wal >Affects Versions: 2.6.0, 3.0.0-alpha-4 >Reporter: Peter Somogyi >Assignee: Nick Dimiduk >Priority: Critical > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > One of our test clusters got stuck during a rolling restart due to a WAL.sync > timeout. This issue did not result in the Master aborting because the > WALSyncTimeoutException was wrapped in an UncheckedIOException, which > prevented the proper exception handling mechanism from being triggered. As a > result, the Master was hanging for a long time and procedures were stuck. > This was a 2.4-based HBase with HBASE-27230. 
> {noformat} > 2024-08-17 17:23:07,567 ERROR > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore: Failed > to delete pid=2027 > org.apache.hadoop.hbase.regionserver.wal.WALSyncTimeoutIOException: > org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync > result after 30 ms for txid=4347, WAL system stuck? > at > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.blockOnSync(AbstractFSWAL.java:848) > at > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:718) > at org.apache.hadoop.hbase.regionserver.HRegion.sync(HRegion.java:8902) > at > org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8469) > at > org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4523) > at > org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447) > at > org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4377) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4853) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4847) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4843) > at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.lambda$delete$8(RegionProcedureStore.java:379) > at > org.apache.hadoop.hbase.master.region.MasterRegion.update(MasterRegion.java:141) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:379) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:410) > at > org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) > at > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) > at > 
org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) > at > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) > Caused by: org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to > get sync result after 30 ms for txid=4347, WAL system stuck? > at > org.apache.hadoop.hbase.regionserver.wal.SyncFuture.get(SyncFuture.java:171) > at > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.blockOnSync(AbstractFSWAL.java:844) > ... 18 more > 2024-08-17 17:23:07,568 ERROR > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread: Ignoring pid=-1, > state=WAITING_TIMEOUT; > org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: > org.apache.hadoop.hbase.regionserver.wal.WALSyncTimeoutIOException: > org.apache.hadoop.hbase.exceptions.
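The failure mode described in this issue, a fatal exception missed because it arrived wrapped, can be sketched as follows. `UncheckedIOException` is the real JDK class; the other names are stand-ins for the HBase types, not the actual fix. An abort check that tests only the top-level throwable misses the wrapped timeout, while walking the cause chain finds it:

```java
import java.io.IOException;

// Stand-in for WALSyncTimeoutIOException (hypothetical name).
class WalSyncTimeoutSketch extends IOException {}

class AbortCheck {
    // Broken: only inspects the top-level exception, so a
    // WalSyncTimeoutSketch wrapped in java.io.UncheckedIOException slips by.
    static boolean shouldAbortNaive(Throwable t) {
        return t instanceof WalSyncTimeoutSketch;
    }

    // Fixed: walk the cause chain before deciding whether to abort.
    static boolean shouldAbort(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof WalSyncTimeoutSketch) {
                return true;
            }
        }
        return false;
    }
}
```

This is why wrapping checked exceptions in unchecked ones is risky: every `instanceof`-based handler upstream must now unwrap causes, or the signal is lost.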
[jira] [Commented] (HBASE-28879) Bump hbase-thirdparty to 4.1.9
[ https://issues.apache.org/jira/browse/HBASE-28879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886222#comment-17886222 ] Hudson commented on HBASE-28879: Results for branch master [build #1175 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1175/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Bump hbase-thirdparty to 4.1.9 > -- > > Key: HBASE-28879 > URL: https://issues.apache.org/jira/browse/HBASE-28879 > Project: HBase > Issue Type: Task > Components: dependencies, thirdparty >Reporter: Duo Zhang >Assignee: Nick Dimiduk >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28803) HBase Master stuck due to improper handling of WALSyncTimeoutException within UncheckedIOException
[ https://issues.apache.org/jira/browse/HBASE-28803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886079#comment-17886079 ] Hudson commented on HBASE-28803: Results for branch branch-2.6 [build #212 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > HBase Master stuck due to improper handling of WALSyncTimeoutException within > UncheckedIOException > -- > > Key: HBASE-28803 > URL: https://issues.apache.org/jira/browse/HBASE-28803 > Project: HBase > Issue Type: Bug > Components: master, wal >Affects Versions: 2.6.0, 3.0.0-alpha-4 >Reporter: Peter Somogyi >Assignee: Nick Dimiduk >Priority: Critical > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > One of our test clusters stuck during a rolling restart due to a WAL.sync > timeout. This issue did not result in the Master aborting because the > WALSyncTimeoutException was wrapped in an UncheckedIOException, which > prevented the proper exception handling mechanism from being triggered. As a > result, the Master was handing for a long time and procedures were stuck. > This was a 2.4 based HBase with HBASE-27230. > {noformat} > 2024-08-17 17:23:07,567 ERROR > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore: Failed > to delete pid=2027 > org.apache.hadoop.hbase.regionserver.wal.WALSyncTimeoutIOException: > org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync > result after 30 ms for txid=4347, WAL system stuck? 
> at > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.blockOnSync(AbstractFSWAL.java:848) > at > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:718) > at org.apache.hadoop.hbase.regionserver.HRegion.sync(HRegion.java:8902) > at > org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8469) > at > org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4523) > at > org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447) > at > org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4377) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4853) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4847) > at > org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4843) > at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.lambda$delete$8(RegionProcedureStore.java:379) > at > org.apache.hadoop.hbase.master.region.MasterRegion.update(MasterRegion.java:141) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:379) > at > org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:410) > at > org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) > at > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) > at > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) > at > org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) > Caused by: org.ap
[jira] [Commented] (HBASE-28879) Bump hbase-thirdparty to 4.1.9
[ https://issues.apache.org/jira/browse/HBASE-28879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886078#comment-17886078 ] Hudson commented on HBASE-28879: Results for branch branch-2.6 [build #212 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/212/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Bump hbase-thirdparty to 4.1.9 > -- > > Key: HBASE-28879 > URL: https://issues.apache.org/jira/browse/HBASE-28879 > Project: HBase > Issue Type: Task > Components: dependencies, thirdparty >Reporter: Duo Zhang >Assignee: Nick Dimiduk >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28879) Bump hbase-thirdparty to 4.1.9
[ https://issues.apache.org/jira/browse/HBASE-28879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886090#comment-17886090 ] Hudson commented on HBASE-28879: Results for branch branch-2 [build #1158 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1158/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1158/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1158/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1158/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1158/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1158/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Bump hbase-thirdparty to 4.1.9 > -- > > Key: HBASE-28879 > URL: https://issues.apache.org/jira/browse/HBASE-28879 > Project: HBase > Issue Type: Task > Components: dependencies, thirdparty >Reporter: Duo Zhang >Assignee: Nick Dimiduk >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers
[ https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885734#comment-17885734 ] Hudson commented on HBASE-22658: Results for branch master [build #1173 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > region_mover.rb should choose same rsgroup servers as target servers > -- > > Key: HBASE-22658 > URL: https://issues.apache.org/jira/browse/HBASE-22658 > Project: HBase > Issue Type: Bug > Components: rsgroup, shell >Affects Versions: 1.4.10 >Reporter: liang.feng >Priority: Major > Labels: gracefulshutdown, region_mover, rsgroup > Fix For: 1.5.0, 1.4.11 > > Attachments: HBASE-22658.branch-1.002.patch, > HBASE-22658.branch-1.patch, with_patch.png, without_patch.png > > > There are many retries when I am using graceful_stop.sh to shut down a region > server after using rsgroup, because the target server is in a different rsgroup. > This makes it slow to gracefully shut down a regionserver. So I think that > region_mover.rb should only choose same-rsgroup servers as target servers. > The region mover is implemented in JRuby in hbase1.x and in Java > in hbase2.x. 
I tried to modify the RegionMover.java class to use the same > logic in hbase2.x, but mvn package failed because the hbase-server and > hbase-rsgroup modules would need to depend on each other, and Maven threw a "The > projects in the reactor contain a cyclic reference" error. I couldn't solve it, so I > just uploaded a patch for hbase1.x. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
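The behavior the reporter asks for can be illustrated independently of the build-cycle problem. A minimal sketch with hypothetical types (not the actual RegionMover or RSGroup APIs): when draining a server, restrict the candidate target servers to the ones in the same rsgroup instead of the whole cluster:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class RsGroupTargets {
    // Given a map of server name -> rsgroup name, return the eligible move
    // targets for a draining server: same group, excluding the server itself.
    static List<String> targetsInSameGroup(String drainingServer,
                                           Map<String, String> serverToGroup) {
        String group = serverToGroup.get(drainingServer);
        return serverToGroup.entrySet().stream()
            .filter(e -> e.getValue().equals(group))   // same rsgroup only
            .map(Map.Entry::getKey)
            .filter(s -> !s.equals(drainingServer))    // not the draining server
            .collect(Collectors.toList());
    }
}
```

Filtering this way avoids the retries described above: the mover never proposes a target that the rsgroup-aware balancer would reject.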
[jira] [Commented] (HBASE-2284) fsWriteLatency metric may be incorrectly reported
[ https://issues.apache.org/jira/browse/HBASE-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885732#comment-17885732 ] Hudson commented on HBASE-2284: --- Results for branch master [build #1173 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > fsWriteLatency metric may be incorrectly reported > -- > > Key: HBASE-2284 > URL: https://issues.apache.org/jira/browse/HBASE-2284 > Project: HBase > Issue Type: Bug >Reporter: Kannan Muthukkaruppan >Assignee: Kannan Muthukkaruppan >Priority: Minor > Fix For: 0.20.4, 0.90.0 > > Attachments: 2284_0.20.patch > > > fsWriteLatency metric is computed by maintaining writeTime & writeOps in > HLog. If an HLog.append() carries multiple edits, then "writeTime" is > computed incorrectly for the subsequent edits because doWrite() is called for > each of the edits with the same start time argument ("now"). > This also causes a lot of false WARN spews to the log. Only one of the edits > might have taken a long time, but every edit after that in a given > HLog.append() operation will also raise these warning messages. 
> {code} > 2010-03-03 11:00:42,247 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC > Server handler 51 on 60020 took 1814ms appending an edit to hlog; > editcount=302227 > 2010-03-03 11:00:42,247 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC > Server handler 51 on 60020 took 1814ms appending an edit to hlog; > editcount=302228 > 2010-03-03 11:00:42,247 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC > Server handler 51 on 60020 took 1814ms appending an edit to hlog; > editcount=302229 > 2010-03-03 11:00:42,247 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC > Server handler 51 on 60020 took 1814ms appending an edit to hlog; > editcount=302230 > 2010-03-03 11:00:42,247 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC > Server handler 51 on 60020 took 1814ms appending an edit to hlog; > editcount=302231 > {code} > Will submit a patch shortly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
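The per-edit timing bug described in this issue can be shown with a simplified sketch (hypothetical code, not the real HLog): when one append carries several edits, timing every edit from the same start timestamp charges the full append latency to each edit, which inflates the metric and repeats the slow-append warning once per edit. Advancing the start time per edit fixes both:

```java
class AppendTiming {
    // Buggy shape: every edit reports elapsed time since the append started,
    // because the same "start" is reused for each doWrite() call.
    static long[] perEditLatencySharedStart(long start, long[] editEndTimes) {
        long[] out = new long[editEndTimes.length];
        for (int i = 0; i < editEndTimes.length; i++) {
            out[i] = editEndTimes[i] - start; // "start" never advances
        }
        return out;
    }

    // Fixed shape: each edit is timed from the end of the previous edit, so
    // only the genuinely slow edit reports a large latency.
    static long[] perEditLatency(long start, long[] editEndTimes) {
        long[] out = new long[editEndTimes.length];
        long prev = start;
        for (int i = 0; i < editEndTimes.length; i++) {
            out[i] = editEndTimes[i] - prev;
            prev = editEndTimes[i];
        }
        return out;
    }
}
```

With edits finishing at 1800, 1810, and 1820 ms after the append starts, the shared-start version reports three latencies over 1800 ms (three WARN lines, as in the log above), while the fixed version reports one slow edit and two fast ones.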
[jira] [Commented] (HBASE-23601) OutputSink.WriterThread exception gets stuck and repeated indefinitely
[ https://issues.apache.org/jira/browse/HBASE-23601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885736#comment-17885736 ] Hudson commented on HBASE-23601: Results for branch master [build #1173 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > OutputSink.WriterThread exception gets stuck and repeated indefinitely > -- > > Key: HBASE-23601 > URL: https://issues.apache.org/jira/browse/HBASE-23601 > Project: HBase > Issue Type: Bug > Components: read replicas >Affects Versions: 2.2.2 >Reporter: Szabolcs Bukros >Assignee: Szabolcs Bukros >Priority: Major > Fix For: 2.3.0, 3.0.0-beta-2 > > > When a WriterThread runs into an exception (e.g. NotServingRegionException), > the exception is stored in the controller. It is never removed and cannot be > overwritten either. > > {code:java} > public void run() { > try { > doRun(); > } catch (Throwable t) { > LOG.error("Exiting thread", t); > controller.writerThreadError(t); > } > }{code} > Because of this, every time PipelineController.checkForErrors() is called, the > same old exception is rethrown. > > For example in RegionReplicaReplicationEndpoint.replicate there is a while > loop that does the actual replicating. Every time it loops, it calls > checkForErrors(), catches the rethrown exception, logs it but does nothing > about it. 
This results in ~2 GB of log files in ~5 minutes in my experience. > > My proposal would be to clean up the stored exception once it reaches > RegionReplicaReplicationEndpoint.replicate, and to make sure we restart the > WriterThread that died throwing it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
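The cleanup proposed above can be sketched as follows. This is a minimal illustration under assumed names, not the actual HBase PipelineController API: `PipelineControllerSketch` and `drainError` are hypothetical stand-ins for storing a writer-thread error and handing it to the caller exactly once, so the replicate loop stops rethrowing the same stale exception.

```java
// Hypothetical sketch of the proposed fix: drain the stored error once it has
// been observed, instead of rethrowing it forever. Names are illustrative and
// do not match the real HBase classes.
import java.util.concurrent.atomic.AtomicReference;

public class PipelineControllerSketch {
  private final AtomicReference<Throwable> error = new AtomicReference<>();

  // Called by a dying WriterThread (mirrors controller.writerThreadError(t)).
  public void writerThreadError(Throwable t) {
    error.compareAndSet(null, t); // keep the first error, as the original does
  }

  // Proposed behavior: return the error and clear it, so the replicate() loop
  // sees it once instead of logging it indefinitely.
  public Throwable drainError() {
    return error.getAndSet(null);
  }
}
```

A caller that previously looped on `checkForErrors()` would instead drain the error, handle or log it once, and restart the dead writer thread.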
[jira] [Commented] (HBASE-22484) Javadoc Warnings: Fix warnings coming due to @result tag in TestCoprocessorWhitelistMasterObserver
[ https://issues.apache.org/jira/browse/HBASE-22484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885733#comment-17885733 ] Hudson commented on HBASE-22484: Results for branch master [build #1173 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Javadoc Warnings: Fix warnings coming due to @result tag in > TestCoprocessorWhitelistMasterObserver > -- > > Key: HBASE-22484 > URL: https://issues.apache.org/jira/browse/HBASE-22484 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.0-alpha-1 >Reporter: Murtaza Hassan >Assignee: Murtaza Hassan >Priority: Trivial > Labels: beginner > Fix For: 3.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-22740) [RSGroup] Forward-port HBASE-22658 to master branch
[ https://issues.apache.org/jira/browse/HBASE-22740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885735#comment-17885735 ] Hudson commented on HBASE-22740: Results for branch master [build #1173 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1173/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [RSGroup] Forward-port HBASE-22658 to master branch > --- > > Key: HBASE-22740 > URL: https://issues.apache.org/jira/browse/HBASE-22740 > Project: HBase > Issue Type: Bug > Components: rsgroup >Affects Versions: 2.0.6, 2.2.3, 2.1.9 >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Fix For: 3.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers
[ https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885728#comment-17885728 ] Hudson commented on HBASE-22658: Results for branch branch-3 [build #300 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > region_mover.rb should choose same rsgroup servers as target servers > -- > > Key: HBASE-22658 > URL: https://issues.apache.org/jira/browse/HBASE-22658 > Project: HBase > Issue Type: Bug > Components: rsgroup, shell >Affects Versions: 1.4.10 >Reporter: liang.feng >Priority: Major > Labels: gracefulshutdown, region_mover, rsgroup > Fix For: 1.5.0, 1.4.11 > > Attachments: HBASE-22658.branch-1.002.patch, > HBASE-22658.branch-1.patch, with_patch.png, without_patch.png > > > There are many retries when I use graceful_stop.sh to shut down a region > server after enabling rsgroups, because the chosen target server may be in a > different rsgroup. This makes it slow to gracefully shut down a > regionserver. So I think that region_mover.rb should only choose servers in > the same rsgroup as target servers. The region mover is implemented in JRuby > in HBase 1.x and in Java in HBase 2.x. 
I tried to modify the RegionMover.java class to use the same > logic in HBase 2.x, but mvn package failed because the hbase-server and > hbase-rsgroup modules would need to depend on each other, and Maven then > throws "The projects in the reactor contain a cyclic reference". I couldn't > solve that, so I only uploaded a patch for HBase 1.x. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
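The rsgroup-aware target selection described above can be sketched generically. This is an illustrative sketch, not the real RSGroup admin or RegionMover API: the server-to-group mapping is assumed to be available from elsewhere, and all names here are hypothetical.

```java
// Hypothetical sketch: restrict region-move targets to servers in the same
// rsgroup as the server being drained, which is what the issue proposes
// region_mover.rb should do.
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;

public class RsGroupTargetFilter {
  public static List<String> sameGroupTargets(
      String sourceServer, Collection<String> candidates, Map<String, String> serverToGroup) {
    String group = serverToGroup.get(sourceServer);
    return candidates.stream()
        .filter(s -> !s.equals(sourceServer))                      // never move to the draining server
        .filter(s -> Objects.equals(serverToGroup.get(s), group))  // keep same-rsgroup servers only
        .collect(Collectors.toList());
  }
}
```

With such a filter in place, the graceful shutdown never attempts a move that the balancer would immediately retry because the target belongs to a different rsgroup.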
[jira] [Commented] (HBASE-2284) fsWriteLatency metric may be incorrectly reported
[ https://issues.apache.org/jira/browse/HBASE-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885726#comment-17885726 ] Hudson commented on HBASE-2284: --- Results for branch branch-3 [build #300 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > fsWriteLatency metric may be incorrectly reported > -- > > Key: HBASE-2284 > URL: https://issues.apache.org/jira/browse/HBASE-2284 > Project: HBase > Issue Type: Bug >Reporter: Kannan Muthukkaruppan >Assignee: Kannan Muthukkaruppan >Priority: Minor > Fix For: 0.20.4, 0.90.0 > > Attachments: 2284_0.20.patch > > > fsWriteLatency metric is computed by maintaining writeTime & writeOps in > HLog. If an HLog.append() carries multiple edits, then "writeTime" is > computed incorrectly for the subsequent edits because doWrite() is called for > each of the edits with the same start time argument ("now"). > This also causes a lot of false WARN spews to the log. Only one of the edits > might have taken a long time, but every edit after that in a given > HLog.append() operation will also raise these warning messages. 
> {code} > 2010-03-03 11:00:42,247 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC > Server handler 51 on 60020 took 1814ms appending an edit to hlog; > editcount=302227 > 2010-03-03 11:00:42,247 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC > Server handler 51 on 60020 took 1814ms appending an edit to hlog; > editcount=302228 > 2010-03-03 11:00:42,247 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC > Server handler 51 on 60020 took 1814ms appending an edit to hlog; > editcount=302229 > 2010-03-03 11:00:42,247 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC > Server handler 51 on 60020 took 1814ms appending an edit to hlog; > editcount=302230 > 2010-03-03 11:00:42,247 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC > Server handler 51 on 60020 took 1814ms appending an edit to hlog; > editcount=302231 > {code} > Will submit a patch shortly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
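The fix described above amounts to giving each edit its own start time rather than reusing the single "now" passed to doWrite(). A sketch under hypothetical names (this is not the actual HLog code):

```java
// Sketch of per-edit timing: each edit's write latency is measured from its
// own start time, so writeTime and the slow-append WARN reflect only the edit
// that was actually slow. Names are illustrative stand-ins for HLog internals.
import java.util.List;

public class PerEditTimingSketch {
  /** Accumulated write time, mirroring the fsWriteLatency bookkeeping. */
  long writeTime;

  interface Writer { void doWrite(); }

  public long appendEdits(List<Writer> edits) {
    for (Writer edit : edits) {
      long start = System.currentTimeMillis(); // fresh start per edit, not one shared "now"
      edit.doWrite();
      long took = System.currentTimeMillis() - start;
      writeTime += took;
      if (took > 1000) {
        // with per-edit timing, only the genuinely slow edit raises the WARN
        System.err.println("took " + took + "ms appending an edit to hlog");
      }
    }
    return writeTime;
  }
}
```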
[jira] [Commented] (HBASE-22484) Javadoc Warnings: Fix warnings coming due to @result tag in TestCoprocessorWhitelistMasterObserver
[ https://issues.apache.org/jira/browse/HBASE-22484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885727#comment-17885727 ] Hudson commented on HBASE-22484: Results for branch branch-3 [build #300 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Javadoc Warnings: Fix warnings coming due to @result tag in > TestCoprocessorWhitelistMasterObserver > -- > > Key: HBASE-22484 > URL: https://issues.apache.org/jira/browse/HBASE-22484 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.0-alpha-1 >Reporter: Murtaza Hassan >Assignee: Murtaza Hassan >Priority: Trivial > Labels: beginner > Fix For: 3.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-23601) OutputSink.WriterThread exception gets stuck and repeated indefinitely
[ https://issues.apache.org/jira/browse/HBASE-23601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885730#comment-17885730 ] Hudson commented on HBASE-23601: Results for branch branch-3 [build #300 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > OutputSink.WriterThread exception gets stuck and repeated indefinitely > -- > > Key: HBASE-23601 > URL: https://issues.apache.org/jira/browse/HBASE-23601 > Project: HBase > Issue Type: Bug > Components: read replicas >Affects Versions: 2.2.2 >Reporter: Szabolcs Bukros >Assignee: Szabolcs Bukros >Priority: Major > Fix For: 2.3.0, 3.0.0-beta-2 > > > When a WriterThread runs into an exception (e.g. NotServingRegionException), > the exception is stored in the controller. It is never removed and cannot be > overwritten either. > > {code:java} > public void run() { > try { > doRun(); > } catch (Throwable t) { > LOG.error("Exiting thread", t); > controller.writerThreadError(t); > } > }{code} > As a result, every time PipelineController.checkForErrors() is called, the > same old exception is rethrown. > > For example, in RegionReplicaReplicationEndpoint.replicate there is a while > loop that does the actual replicating. Every time it loops, it calls > checkForErrors(), catches the rethrown exception, logs it, but does nothing > about it. 
This results in ~2 GB of log files in ~5 minutes in my experience. > > My proposal would be to clean up the stored exception once it reaches > RegionReplicaReplicationEndpoint.replicate, and to make sure we restart the > WriterThread that died throwing it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-22740) [RSGroup] Forward-port HBASE-22658 to master branch
[ https://issues.apache.org/jira/browse/HBASE-22740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885729#comment-17885729 ] Hudson commented on HBASE-22740: Results for branch branch-3 [build #300 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/300/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [RSGroup] Forward-port HBASE-22658 to master branch > --- > > Key: HBASE-22740 > URL: https://issues.apache.org/jira/browse/HBASE-22740 > Project: HBase > Issue Type: Bug > Components: rsgroup >Affects Versions: 2.0.6, 2.2.3, 2.1.9 >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Fix For: 3.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-21540) Setting property "hbase.systemtables.compacting.memstore.type" to lowercase "basic" or "eager" causes an exception
[ https://issues.apache.org/jira/browse/HBASE-21540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885602#comment-17885602 ] Hudson commented on HBASE-21540: Results for branch master [build #1172 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1172/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1172/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- Something went wrong running this stage, please [check relevant console output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1172//console]. (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Setting property "hbase.systemtables.compacting.memstore.type" to lowercase > "basic" or "eager" causes an exception > --- > > Key: HBASE-21540 > URL: https://issues.apache.org/jira/browse/HBASE-21540 > Project: HBase > Issue Type: Bug > Components: conf >Affects Versions: 2.0.0 >Reporter: lixiaobao >Assignee: lixiaobao >Priority: Major > Fix For: 2.3.0, 2.2.2, 2.1.8, 3.0.0-beta-2 > > Attachments: HBASE-21540-and-ut.patch, HBASE-21540-v2.patch, > HBASE-21540.master.001.patch > > > Setting the property > "hbase.systemtables.compacting.memstore.type" to a lowercase (rather than > uppercase) "basic" or "eager" causes the exception > "java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic | eager" > {code:java} > if (this.getTableName().isSystemTable()) { > inMemoryCompaction = > MemoryCompactionPolicy.valueOf(conf.get("hbase.systemtables.compacting.memstore.type", > "NONE").toUpperCase()); > } else { > inMemoryCompaction = family.getInMemoryCompaction(); > }{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
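The failure is ordinary Enum.valueOf case sensitivity, which the toUpperCase() normalization shown in the quoted snippet addresses. A minimal standalone demonstration, using a local stand-in enum rather than the real org.apache.hadoop.hbase.MemoryCompactionPolicy:

```java
// Demonstrates why a lowercase configured value fails: Enum.valueOf matches
// constant names exactly, so the configured string must be normalized to
// uppercase before the lookup. The nested enum is a stand-in for HBase's
// MemoryCompactionPolicy.
import java.util.Locale;

public class MemstoreTypeParse {
  enum MemoryCompactionPolicy { NONE, BASIC, EAGER }

  public static MemoryCompactionPolicy parse(String configured) {
    // mirrors conf.get(..., "NONE").toUpperCase() in the quoted fix
    return MemoryCompactionPolicy.valueOf(configured.toUpperCase(Locale.ROOT));
  }
}
```

Without the normalization, `MemoryCompactionPolicy.valueOf("basic")` throws the IllegalArgumentException quoted in the issue; with it, "basic", "BASIC", and "Basic" all resolve to the same constant.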
[jira] [Commented] (HBASE-28585) copy_tables_desc.rb script should handle scenarios where the namespace does not exist in the target cluster
[ https://issues.apache.org/jira/browse/HBASE-28585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885599#comment-17885599 ] Hudson commented on HBASE-28585: Results for branch branch-3 [build #299 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/299/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/299/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- Something went wrong running this stage, please [check relevant console output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/299//console]. (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > copy_tables_desc.rb script should handle scenarios where the namespace does > not exist in the target cluster > --- > > Key: HBASE-28585 > URL: https://issues.apache.org/jira/browse/HBASE-28585 > Project: HBase > Issue Type: Improvement > Components: jruby, scripts >Affects Versions: 2.4.17 >Reporter: wenhao >Assignee: wenhao >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2 > > > When utilizing the {{copy_tables_desc.rb}} script to duplicate tables to a > target cluster, if the specified table's namespace is nonexistent in the > target cluster, the script fails to execute successfully. It is recommended > to incorporate logic within the script for detecting and handling scenarios > where the namespace does not exist. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28884) SFT's BrokenStoreFileCleaner may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-28884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885598#comment-17885598 ] Hudson commented on HBASE-28884: Results for branch branch-3 [build #299 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/299/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/299/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- Something went wrong running this stage, please [check relevant console output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/299//console]. (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > SFT's BrokenStoreFileCleaner may cause data loss > > > Key: HBASE-28884 > URL: https://issues.apache.org/jira/browse/HBASE-28884 > Project: HBase > Issue Type: Bug >Affects Versions: 2.6.0, 3.0.0-beta-1, 2.7.0, 2.5.10 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > > With the BrokenStoreFileCleaner enabled, one of our customers ran into a > data loss situation, probably due to a race condition between a region being > moved off the regionserver and the BrokenStoreFileCleaner checking that > region's files' eligibility for deletion. We have seen that the file was > deleted by the given region server around the same time the region was > closed on that region server. 
I believe a race condition during region > close is possible here: > 1) In BrokenStoreFileCleaner, for each region online on the given RS, we get > the list of files in the store dirs, then iterate through it [1]; > 2) For each file listed, we perform several checks, including this one [2] > that checks if the file is "active" > The problem is, if the region for the file we are checking got closed between > point #1 and #2, by the time we check if the file is active in [2], the store > may have already been closed as part of the region closure, so this check > would consider the file as deletable. > One simple solution is to check if the store's region is still open before > proceeding with deleting the file. > [1] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L99 > [2] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L133 -- This message was sent by Atlassian Jira (v8.20.10#820010)
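The suggested guard, re-checking that the region is still open immediately before deleting, can be sketched as below. The predicate-based shape is a hypothetical simplification of BrokenStoreFileCleaner's actual checks, not the real API:

```java
// Hypothetical sketch of the proposed fix: re-verify that the file's region is
// still online on this regionserver right before the delete decision, so a
// region that was moved away between listing (#1) and checking (#2) is skipped
// rather than having its files judged against a closed store.
import java.util.function.Predicate;

public class BrokenFileGuardSketch {
  public static boolean shouldDelete(String regionName, String file,
      Predicate<String> isRegionOnline, Predicate<String> isFileActive) {
    if (!isRegionOnline.test(regionName)) {
      return false; // region closed or moved: store state is unreliable, never delete
    }
    return !isFileActive.test(file); // only non-active files of open regions are deletable
  }
}
```

The key property is that a closed store, whose "active" check can no longer be trusted, never leads to a deletion.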
[jira] [Commented] (HBASE-28884) SFT's BrokenStoreFileCleaner may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-28884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885601#comment-17885601 ] Hudson commented on HBASE-28884: Results for branch master [build #1172 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1172/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1172/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- Something went wrong running this stage, please [check relevant console output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1172//console]. (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > SFT's BrokenStoreFileCleaner may cause data loss > > > Key: HBASE-28884 > URL: https://issues.apache.org/jira/browse/HBASE-28884 > Project: HBase > Issue Type: Bug >Affects Versions: 2.6.0, 3.0.0-beta-1, 2.7.0, 2.5.10 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > > With the BrokenStoreFileCleaner enabled, one of our customers ran into a > data loss situation, probably due to a race condition between a region being > moved off the regionserver and the BrokenStoreFileCleaner checking that > region's files' eligibility for deletion. We have seen that the file was > deleted by the given region server around the same time the region was > closed on that region server. 
I believe a race condition during region > close is possible here: > 1) In BrokenStoreFileCleaner, for each region online on the given RS, we get > the list of files in the store dirs, then iterate through it [1]; > 2) For each file listed, we perform several checks, including this one [2] > that checks if the file is "active" > The problem is, if the region for the file we are checking got closed between > point #1 and #2, by the time we check if the file is active in [2], the store > may have already been closed as part of the region closure, so this check > would consider the file as deletable. > One simple solution is to check if the store's region is still open before > proceeding with deleting the file. > [1] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L99 > [2] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L133 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-21540) Setting property "hbase.systemtables.compacting.memstore.type" to lowercase "basic" or "eager" causes an exception
[ https://issues.apache.org/jira/browse/HBASE-21540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885600#comment-17885600 ] Hudson commented on HBASE-21540: Results for branch branch-3 [build #299 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/299/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/299/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- Something went wrong running this stage, please [check relevant console output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/299//console]. (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Setting property "hbase.systemtables.compacting.memstore.type" to lowercase > "basic" or "eager" causes an exception > --- > > Key: HBASE-21540 > URL: https://issues.apache.org/jira/browse/HBASE-21540 > Project: HBase > Issue Type: Bug > Components: conf >Affects Versions: 2.0.0 >Reporter: lixiaobao >Assignee: lixiaobao >Priority: Major > Fix For: 2.3.0, 2.2.2, 2.1.8, 3.0.0-beta-2 > > Attachments: HBASE-21540-and-ut.patch, HBASE-21540-v2.patch, > HBASE-21540.master.001.patch > > > Setting the property > "hbase.systemtables.compacting.memstore.type" to a lowercase (rather than > uppercase) "basic" or "eager" causes the exception > "java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic | eager" > {code:java} > if (this.getTableName().isSystemTable()) { > inMemoryCompaction = > MemoryCompactionPolicy.valueOf(conf.get("hbase.systemtables.compacting.memstore.type", > "NONE").toUpperCase()); > } else { > inMemoryCompaction = family.getInMemoryCompaction(); > }{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28585) copy_tables_desc.rb script should handle scenarios where the namespace does not exist in the target cluster
[ https://issues.apache.org/jira/browse/HBASE-28585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885541#comment-17885541 ] Hudson commented on HBASE-28585: Results for branch branch-2 [build #1156 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > copy_tables_desc.rb script should handle scenarios where the namespace does > not exist in the target cluster > --- > > Key: HBASE-28585 > URL: https://issues.apache.org/jira/browse/HBASE-28585 > Project: HBase > Issue Type: Improvement > Components: jruby, scripts >Affects Versions: 2.4.17 >Reporter: wenhao >Assignee: wenhao >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2 > > > When utilizing the {{copy_tables_desc.rb}} script to duplicate tables to a > target cluster, if the specified table's namespace is nonexistent in the > target cluster, the script fails to execute successfully. It is recommended > to incorporate logic within the script for detecting and handling scenarios > where the namespace does not exist. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-20693) Refactor thrift jsp's and extract header and footer
[ https://issues.apache.org/jira/browse/HBASE-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885542#comment-17885542 ] Hudson commented on HBASE-20693: Results for branch branch-2 [build #1156 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Refactor thrift jsp's and extract header and footer > --- > > Key: HBASE-20693 > URL: https://issues.apache.org/jira/browse/HBASE-20693 > Project: HBase > Issue Type: Improvement > Components: Thrift, UI >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2 > > Attachments: HBASE-20693.master.001.patch, rest_home_after_patch.png, > thrift_home_after_patch.png, thrift_log_level_after_patch.png, > thrift_log_level_before_patch.png > > > The Log Level page design was changed to include a header and footer in > HBASE-20577. Since thrift and rest do not have header and footer JSPs, their > log level pages remain as they were before HBASE-20577, i.e. without the > navigation bar. This JIRA will refactor rest and thrift and extract > 'header.jsp' and 'footer.jsp' from them. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-18382) [Thrift] Add transport type info to info server
[ https://issues.apache.org/jira/browse/HBASE-18382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885539#comment-17885539 ] Hudson commented on HBASE-18382: Results for branch branch-2 [build #1156 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [Thrift] Add transport type info to info server > > > Key: HBASE-18382 > URL: https://issues.apache.org/jira/browse/HBASE-18382 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Lars George >Assignee: Beata Sudi >Priority: Minor > Labels: beginner > Fix For: 3.0.0-alpha-1 > > > It would be really helpful to know if the Thrift server was started using the > HTTP or binary transport. 
Any additional info, like QOP settings for SASL > etc. would be great too. Right now the UI is very limited and shows > {{true/false}} for, for example, {{Compact Transport}}. I'd suggest > changing this to show something more useful like this: > {noformat} > Thrift Impl Type: non-blocking > Protocol: Binary > Transport: Framed > QOP: Authentication & Confidential > {noformat} > or > {noformat} > Protocol: Binary + HTTP > Transport: Standard > QOP: none > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
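The readable labels proposed above could be derived from the server's existing boolean settings along these lines. This is a hedged sketch only: the method and flag names are illustrative, not actual hbase-thrift identifiers, though the label mapping follows standard Thrift semantics (TCompactProtocol vs. TBinaryProtocol, TFramedTransport vs. plain socket transport).

```java
// Sketch: turn raw boolean settings into the human-readable labels suggested
// above. All names here are assumptions for illustration, not HBase code.
public class ThriftInfoLabels {
    // Protocol label: TCompactProtocol vs. TBinaryProtocol, optionally over HTTP.
    static String protocolLabel(boolean compact, boolean http) {
        String base = compact ? "Compact" : "Binary";
        return http ? base + " + HTTP" : base;
    }

    // Transport label: TFramedTransport vs. the standard socket transport.
    static String transportLabel(boolean framed) {
        return framed ? "Framed" : "Standard";
    }

    public static void main(String[] args) {
        System.out.println("Protocol: " + protocolLabel(false, false)); // Binary
        System.out.println("Transport: " + transportLabel(true));       // Framed
    }
}
```

The point is simply that a couple of one-line mappings replace the bare {{true/false}} flags with the self-describing strings shown in the {noformat} examples.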
[jira] [Commented] (HBASE-28888) Backport "HBASE-18382 [Thrift] Add transport type info to info server" to branch-2
[ https://issues.apache.org/jira/browse/HBASE-28888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885538#comment-17885538 ] Hudson commented on HBASE-28888: Results for branch branch-2 [build #1156 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Backport "HBASE-18382 [Thrift] Add transport type info to info server" to > branch-2 > -- > > Key: HBASE-28888 > URL: https://issues.apache.org/jira/browse/HBASE-28888 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Lars George >Assignee: Nihal Jain >Priority: Minor > Labels: beginner, pull-request-available > Fix For: 2.7.0 > > > It would be really helpful to know if the Thrift server was started using the > HTTP or binary transport. 
Any additional info, like QOP settings for SASL > etc. would be great too. Right now the UI is very limited and shows > {{true/false}} for, for example, {{Compact Transport}}. I'd suggest > changing this to show something more useful like this: > {noformat} > Thrift Impl Type: non-blocking > Protocol: Binary > Transport: Framed > QOP: Authentication & Confidential > {noformat} > or > {noformat} > Protocol: Binary + HTTP > Transport: Standard > QOP: none > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28645) Add build information to the REST server version endpoint
[ https://issues.apache.org/jira/browse/HBASE-28645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885540#comment-17885540 ] Hudson commented on HBASE-28645: Results for branch branch-2 [build #1156 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Add build information to the REST server version endpoint > - > > Key: HBASE-28645 > URL: https://issues.apache.org/jira/browse/HBASE-28645 > Project: HBase > Issue Type: New Feature > Components: REST >Reporter: Istvan Toth >Assignee: Dávid Paksy >Priority: Minor > Labels: pull-request-available > Fix For: 3.0.0-beta-2, 2.6.1, 2.5.11 > > > There is currently no way to check the REST server version / build number > remotely. 
> The */version/cluster* endpoint takes the version from master (fair enough), > and the */version/rest* does not include the build information. > We should add a version field to the /version/rest endpoint, which reports > the version of the REST server component. > We should also log this at startup, just like we log the cluster version now. > We may have to add and store the version in the hbase-rest code during build, > similarly to how we do it for the other components. -- This message was sent by Atlassian Jira (v8.20.10#820010)
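The "add and store the version during build" step described above is commonly done with a properties file filtered at build time and read at runtime. A minimal hedged sketch of the runtime side, using only the JDK; the property keys and the idea of a filtered resource are assumptions for illustration, not the actual hbase-rest layout:

```java
// Sketch: read build-time version info from a properties stream and format it
// the way a /version/rest-style response (or a startup log line) could report
// it. The keys "version" and "revision" are illustrative assumptions.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class RestVersionInfo {
    // Loads the properties resource; wraps the checked IOException so callers
    // in this sketch stay simple.
    static Properties load(InputStream in) {
        Properties props = new Properties();
        try {
            props.load(in);
        } catch (IOException e) {
            throw new java.io.UncheckedIOException(e);
        }
        return props;
    }

    // The string the endpoint and the startup log could both report.
    static String describe(Properties p) {
        return p.getProperty("version", "unknown")
            + " (revision " + p.getProperty("revision", "unknown") + ")";
    }

    public static void main(String[] args) {
        // Stands in for a properties resource filtered by the build.
        String generated = "version=2.6.1\nrevision=abc123\n";
        Properties p = load(new ByteArrayInputStream(
            generated.getBytes(StandardCharsets.UTF_8)));
        System.out.println(describe(p)); // 2.6.1 (revision abc123)
    }
}
```

Defaulting to "unknown" keeps the endpoint usable even when the build did not produce the resource.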
[jira] [Commented] (HBASE-28884) SFT's BrokenStoreFileCleaner may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-28884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885543#comment-17885543 ] Hudson commented on HBASE-28884: Results for branch branch-2 [build #1156 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1156/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > SFT's BrokenStoreFileCleaner may cause data loss > > > Key: HBASE-28884 > URL: https://issues.apache.org/jira/browse/HBASE-28884 > Project: HBase > Issue Type: Bug >Affects Versions: 2.6.0, 3.0.0-beta-1, 2.7.0, 2.5.10 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > > With this BrokenStoreFileCleaner enabled, one of our customers ran > into a data loss situation, probably due to a race condition between a region > getting moved out of the regionserver and the BrokenStoreFileCleaner > checking this region's files' eligibility for deletion. We have seen that the > file got deleted by the given region server around the same time the region > got closed on this region server. I believe a race condition during region > close is possible here: > 1) In BrokenStoreFileCleaner, for each region online on the given RS, we get > the list of files in the store dirs, then iterate through it [1]; > 2) For each file listed, we perform several checks, including this one [2] > that checks if the file is "active". > The problem is, if the region for the file we are checking got closed between > points #1 and #2, by the time we check if the file is active in [2], the store > may have already been closed as part of the region closure, so this check > would consider the file deletable. > One simple solution is to check if the store's region is still open before > proceeding with deleting the file. > [1] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L99 > [2] > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BrokenStoreFileCleaner.java#L133 -- This message was sent by Atlassian Jira (v8.20.10#820010)
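The "simple solution" described above (re-check that the store's region is still open immediately before trusting the "not active" answer) can be sketched in plain Java, independent of HBase classes; every name below is illustrative, not the real BrokenStoreFileCleaner API:

```java
// Sketch of the proposed guard against the region-close race: a file may only
// be deleted when it is not active AND its region is still online, because a
// closed store makes every file look inactive. Names are illustrative.
import java.util.concurrent.atomic.AtomicBoolean;

public class BrokenFileCleanerSketch {
    // Stand-in for a region whose open/closed state can flip concurrently
    // while the cleaner iterates the file list.
    static final class RegionLike {
        final AtomicBoolean open = new AtomicBoolean(true);
        boolean isOpen() { return open.get(); }
    }

    // True only when deletion is safe: the file is inactive and the region is
    // still open, so the "inactive" answer can be trusted.
    static boolean safeToDelete(RegionLike region, boolean fileActive) {
        if (fileActive) {
            return false; // file is referenced by an open store
        }
        // Without this re-check, a region closed between listing (#1) and
        // checking (#2) makes the file look deletable: the data-loss window.
        return region.isOpen();
    }

    public static void main(String[] args) {
        RegionLike region = new RegionLike();
        System.out.println(safeToDelete(region, false)); // true: open, inactive
        region.open.set(false); // region moved away mid-scan
        System.out.println(safeToDelete(region, false)); // false: skip deletion
    }
}
```

Note the ordering matters: the region-open check must come after the activity check, so that a close happening between steps #1 and #2 causes the file to be skipped rather than deleted.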
[jira] [Commented] (HBASE-28645) Add build information to the REST server version endpoint
[ https://issues.apache.org/jira/browse/HBASE-28645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885533#comment-17885533 ] Hudson commented on HBASE-28645: Results for branch branch-2.6 [build #210 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/210/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/210/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/210/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/210/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/210/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/210/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Add build information to the REST server version endpoint > - > > Key: HBASE-28645 > URL: https://issues.apache.org/jira/browse/HBASE-28645 > Project: HBase > Issue Type: New Feature > Components: REST >Reporter: Istvan Toth >Assignee: Dávid Paksy >Priority: Minor > Labels: pull-request-available > Fix For: 3.0.0-beta-2, 2.6.1, 2.5.11 > > > There is currently no way to check the REST server version / build number > remotely. 
> The */version/cluster* endpoint takes the version from master (fair enough), > and the */version/rest* does not include the build information. > We should add a version field to the /version/rest endpoint, which reports > the version of the REST server component. > We should also log this at startup, just like we log the cluster version now. > We may have to add and store the version in the hbase-rest code during build, > similarly to how we do it for the other components. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28645) Add build information to the REST server version endpoint
[ https://issues.apache.org/jira/browse/HBASE-28645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885475#comment-17885475 ] Hudson commented on HBASE-28645: Results for branch branch-2.5 [build #601 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/601/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/601/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/601/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/601/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/601/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/601/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Add build information to the REST server version endpoint > - > > Key: HBASE-28645 > URL: https://issues.apache.org/jira/browse/HBASE-28645 > Project: HBase > Issue Type: New Feature > Components: REST >Reporter: Istvan Toth >Assignee: Dávid Paksy >Priority: Minor > Labels: pull-request-available > Fix For: 3.0.0-beta-2, 2.6.1, 2.5.11 > > > There is currently no way to check the REST server version / build number > remotely. 
> The */version/cluster* endpoint takes the version from master (fair enough), > and the */version/rest* does not include the build information. > We should add a version field to the /version/rest endpoint, which reports > the version of the REST server component. > We should also log this at startup, just like we log the cluster version now. > We may have to add and store the version in the hbase-rest code during build, > similarly to how we do it for the other components. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28645) Add build information to the REST server version endpoint
[ https://issues.apache.org/jira/browse/HBASE-28645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885451#comment-17885451 ] Hudson commented on HBASE-28645: Results for branch branch-3 [build #298 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/298/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/298/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/298/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Add build information to the REST server version endpoint > - > > Key: HBASE-28645 > URL: https://issues.apache.org/jira/browse/HBASE-28645 > Project: HBase > Issue Type: New Feature > Components: REST >Reporter: Istvan Toth >Assignee: Dávid Paksy >Priority: Minor > Labels: pull-request-available > Fix For: 3.0.0-beta-2, 2.6.1, 2.5.11 > > > There is currently no way to check the REST server version / build number > remotely. > The */version/cluster* endpoint takes the version from master (fair enough), > and the */version/rest* does not include the build information. > We should add a version field to the /version/rest endpoint, which reports > the version of the REST server component. > We should also log this at startup, just like we log the cluster version now. > We may have to add and store the version in the hbase-rest code during build, > similarly to how we do it for the other components. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28645) Add build information to the REST server version endpoint
[ https://issues.apache.org/jira/browse/HBASE-28645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885447#comment-17885447 ] Hudson commented on HBASE-28645: Results for branch master [build #1171 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1171/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1171/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1171/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Add build information to the REST server version endpoint > - > > Key: HBASE-28645 > URL: https://issues.apache.org/jira/browse/HBASE-28645 > Project: HBase > Issue Type: New Feature > Components: REST >Reporter: Istvan Toth >Assignee: Dávid Paksy >Priority: Minor > Labels: pull-request-available > Fix For: 3.0.0-beta-2, 2.6.1, 2.5.11 > > > There is currently no way to check the REST server version / build number > remotely. > The */version/cluster* endpoint takes the version from master (fair enough), > and the */version/rest* does not include the build information. > We should add a version field to the /version/rest endpoint, which reports > the version of the REST server component. > We should also log this at startup, just like we log the cluster version now. > We may have to add and store the version in the hbase-rest code during build, > similarly to how we do it for the other components. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28585) copy_tables_desc.rb script should handle scenarios where the namespace does not exist in the target cluster
[ https://issues.apache.org/jira/browse/HBASE-28585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885446#comment-17885446 ] Hudson commented on HBASE-28585: Results for branch master [build #1171 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1171/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1171/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1171/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > copy_tables_desc.rb script should handle scenarios where the namespace does > not exist in the target cluster > --- > > Key: HBASE-28585 > URL: https://issues.apache.org/jira/browse/HBASE-28585 > Project: HBase > Issue Type: Improvement > Components: jruby, scripts >Affects Versions: 2.4.17 >Reporter: wenhao >Assignee: wenhao >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2 > > > When utilizing the {{copy_tables_desc.rb}} script to duplicate tables to a > target cluster, if the specified table's namespace is nonexistent in the > target cluster, the script fails to execute successfully. It is recommended > to incorporate logic within the script for detecting and handling scenarios > where the namespace does not exist. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28887) Fix broken link to mailing lists page in reference guide
[ https://issues.apache.org/jira/browse/HBASE-28887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885445#comment-17885445 ] Hudson commented on HBASE-28887: Results for branch master [build #1171 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1171/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1171/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1171/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Fix broken link to mailing lists page in reference guide > > > Key: HBASE-28887 > URL: https://issues.apache.org/jira/browse/HBASE-28887 > Project: HBase > Issue Type: Task > Components: documentation >Affects Versions: 4.0.0-alpha-1 >Reporter: Dávid Paksy >Assignee: Dávid Paksy >Priority: Minor > Labels: pull-request-available > Fix For: 4.0.0-alpha-1 > > > The Reference Guide (book) link to the mailing lists page is broken. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28187) NPE when flushing a non-existing column family
[ https://issues.apache.org/jira/browse/HBASE-28187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885240#comment-17885240 ] Hudson commented on HBASE-28187: Results for branch branch-2 [build #1155 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > NPE when flushing a non-existing column family > -- > > Key: HBASE-28187 > URL: https://issues.apache.org/jira/browse/HBASE-28187 > Project: HBase > Issue Type: Bug > Components: Client, regionserver >Affects Versions: 2.6.0, 2.4.17, 2.5.5 >Reporter: Ke Han >Assignee: guluo >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > Flushing a column family that doesn't exist in the table will cause an NPE error in > both the shell and the HMaster logs. > h1. Reproduce > Start up an HBase 2.5.9 cluster; executing the following commands with the hbase > shell on the HMaster node will lead to the NPE. (Can be reproduced deterministically) > {code:java} > create 'table', {NAME => 'cf1', VERSIONS => 2, COMPRESSION => 'GZ', > BLOOMFILTER => 'ROWCOL'}, {NAME => 'cf2', VERSIONS => 4, COMPRESSION => > 'NONE', BLOOMFILTER => 'ROWCOL'} > incr 'table', 'row1', 'cf1:cell', 2 > flush 'table', 'cf3'{code} > The shell outputs > {code:java} > hbase:006:0> create 'table', {NAME => 'cf1', VERSIONS => 2, COMPRESSION => > 'GZ', BLOOMFILTER => 'ROWCOL'}, {NAME => 'cf2', VERSIONS => 4, COMPRESSION => > 'NONE', BLOOMFILTER => 'ROWCOL'} > Created table table > Took 2.1238 seconds > > => Hbase::Table - table > hbase:007:0> > hbase:008:0> incr 'table', 'row1', 'cf1:cell', 2 > COUNTER VALUE = 2 > Took 0.0131 seconds > > hbase:009:0> > hbase:010:0> flush 'table', 'cf3' > ERROR: java.io.IOException: java.lang.NullPointerException > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:479) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124) > at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102) > at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82) > Caused by: > org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: > java.lang.NullPointerException > at > 
org.apache.hadoop.hbase.procedure.flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool.waitForOutstandingTasks(RegionServerFlushTableProcedureManager.java:274) > at > org.apache.hadoop.hbase.procedure.flush.FlushTableSubprocedure.flushRegions(FlushTableSubprocedure.java:115) > at > org.apache.hadoop.hbase.procedure.flush.FlushTableSubprocedure.acquireBarrier(FlushTableSubprocedure.java:126) > at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:160) > at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:46) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) > For usage try 'help "flush"' > Took 12.1713 seconds > {code} > > According to the _flush (flush.rb)_ command specification, user can flush a > specific column family. >
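The stack trace above shows the request reaching the flush subprocedure with a family the table does not have. The kind of validation that avoids such an NPE is straightforward: reject the request up front instead of passing a null store down the flush path. A hedged, self-contained sketch (not the actual HBase fix; the method and names are illustrative):

```java
// Sketch: validate the requested column family against the table's known
// families before scheduling a flush, so an unknown family yields a clear
// client-facing error rather than a server-side NullPointerException.
import java.util.Set;

public class FlushFamilyCheckSketch {
    // Throws a descriptive exception when the family is not part of the table.
    static void checkFamily(Set<String> tableFamilies, String requested) {
        if (!tableFamilies.contains(requested)) {
            throw new IllegalArgumentException(
                "Column family " + requested + " does not exist");
        }
    }

    public static void main(String[] args) {
        Set<String> families = Set.of("cf1", "cf2");
        checkFamily(families, "cf1"); // ok: cf1 exists
        try {
            checkFamily(families, "cf3"); // the repro's failing case
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Column family cf3 does not exist
        }
    }
}
```

With a check like this, the shell's `flush 'table', 'cf3'` would fail fast with a readable message instead of the wrapped NullPointerException shown in the transcript.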
[jira] [Commented] (HBASE-28705) BackupLogCleaner cleans required WALs when using multiple backuproots
[ https://issues.apache.org/jira/browse/HBASE-28705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885239#comment-17885239 ] Hudson commented on HBASE-28705: Results for branch branch-2 [build #1155 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1155/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > BackupLogCleaner cleans required WALs when using multiple backuproots > - > > Key: HBASE-28705 > URL: https://issues.apache.org/jira/browse/HBASE-28705 > Project: HBase > Issue Type: Bug > Components: backup&restore >Affects Versions: 2.6.0, 3.0.0 >Reporter: Dieter De Paepe >Assignee: Dieter De Paepe >Priority: Blocker > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > The BackupLogCleaner is responsible for avoiding the deletion of WAL/logs > that still need to be included in a future backup. > The logic to decide which files can be deleted does not work correctly when > multiple backup roots are used. Each backup root has a different chain of > backups (full, incremental1, incremental2, ...). So, if any chain requires a > log, it should be preserved. This is not the case. > The result is that logs could be incorrectly deleted, resulting in data loss > in backups. > PR incoming with test & fix. -- This message was sent by Atlassian Jira (v8.20.10#820010)
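The invariant described above ("if any chain requires a log, it should be preserved") means a WAL is deletable only when every backup root's chain has already covered it, not just one. A hedged sketch of that rule, independent of the actual BackupLogCleaner API (the timestamp model is a simplification for illustration):

```java
// Sketch: with multiple backup roots, a WAL is deletable only if EVERY root's
// backup chain has already captured it. Checking a single root (the bug) can
// delete a WAL another root still needs.
import java.util.Collection;
import java.util.List;

public class BackupLogCleanerSketch {
    // newestBackupTsPerRoot: for each backup root, the newest backup timestamp;
    // WALs strictly older than that are covered by that root's chain.
    static boolean deletable(long walTs, Collection<Long> newestBackupTsPerRoot) {
        if (newestBackupTsPerRoot.isEmpty()) {
            return false; // no backups yet: keep everything
        }
        for (long rootTs : newestBackupTsPerRoot) {
            if (walTs >= rootTs) {
                return false; // this root's chain still needs the WAL
            }
        }
        return true; // covered by every chain
    }

    public static void main(String[] args) {
        // Root A is backed up through ts=100, root B only through ts=50.
        List<Long> roots = List.of(100L, 50L);
        System.out.println(deletable(40L, roots)); // true: both chains cover it
        System.out.println(deletable(75L, roots)); // false: root B still needs it
    }
}
```

The second call is exactly the failure mode: a check against root A alone would report the ts=75 WAL as deletable even though root B's chain has not captured it yet.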
[jira] [Commented] (HBASE-28187) NPE when flushing a non-existing column family
[ https://issues.apache.org/jira/browse/HBASE-28187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885215#comment-17885215 ] Hudson commented on HBASE-28187: Results for branch branch-2.6 [build #209 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > NPE when flushing a non-existing column family > -- > > Key: HBASE-28187 > URL: https://issues.apache.org/jira/browse/HBASE-28187 > Project: HBase > Issue Type: Bug > Components: Client, regionserver >Affects Versions: 2.6.0, 2.4.17, 2.5.5 >Reporter: Ke Han >Assignee: guluo >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > Flushing a column family that doesn't exist in the table will cause an NPE error in > both the shell and the HMaster logs. > h1. Reproduce > Start up an HBase 2.5.9 cluster; executing the following commands with the hbase > shell on the HMaster node will lead to the NPE. (Can be reproduced deterministically) > {code:java} > create 'table', {NAME => 'cf1', VERSIONS => 2, COMPRESSION => 'GZ', > BLOOMFILTER => 'ROWCOL'}, {NAME => 'cf2', VERSIONS => 4, COMPRESSION => > 'NONE', BLOOMFILTER => 'ROWCOL'} > incr 'table', 'row1', 'cf1:cell', 2 > flush 'table', 'cf3'{code} > The shell outputs > {code:java} > hbase:006:0> create 'table', {NAME => 'cf1', VERSIONS => 2, COMPRESSION => > 'GZ', BLOOMFILTER => 'ROWCOL'}, {NAME => 'cf2', VERSIONS => 4, COMPRESSION => > 'NONE', BLOOMFILTER => 'ROWCOL'} > Created table table > Took 2.1238 seconds > > => Hbase::Table - table > hbase:007:0> > hbase:008:0> incr 'table', 'row1', 'cf1:cell', 2 > COUNTER VALUE = 2 > Took 0.0131 seconds > > hbase:009:0> > hbase:010:0> flush 'table', 'cf3' > ERROR: java.io.IOException: java.lang.NullPointerException > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:479) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124) > at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102) > at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82) > Caused by: > org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: > java.lang.NullPointerException > at > 
org.apache.hadoop.hbase.procedure.flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool.waitForOutstandingTasks(RegionServerFlushTableProcedureManager.java:274) > at > org.apache.hadoop.hbase.procedure.flush.FlushTableSubprocedure.flushRegions(FlushTableSubprocedure.java:115) > at > org.apache.hadoop.hbase.procedure.flush.FlushTableSubprocedure.acquireBarrier(FlushTableSubprocedure.java:126) > at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:160) > at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:46) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) > For usage try 'help "flush"' > Took 12.1713 seconds > {code} > > According to the _flush (flush.rb)_ command specification, user can flush a > specific column fam
[jira] [Commented] (HBASE-28705) BackupLogCleaner cleans required WALs when using multiple backuproots
[ https://issues.apache.org/jira/browse/HBASE-28705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885214#comment-17885214 ] Hudson commented on HBASE-28705: Results for branch branch-2.6 [build #209 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/209/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > BackupLogCleaner cleans required WALs when using multiple backuproots > - > > Key: HBASE-28705 > URL: https://issues.apache.org/jira/browse/HBASE-28705 > Project: HBase > Issue Type: Bug > Components: backup&restore >Affects Versions: 2.6.0, 3.0.0 >Reporter: Dieter De Paepe >Assignee: Dieter De Paepe >Priority: Blocker > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > The BackupLogCleaner is responsible for avoiding the deletion of WAL/logs > that still need to be included in a future backup. > The logic to decide which files can be deleted does not work correctly when > multiple backup roots are used. Each backup root has a different chain of > backups (full, incremental1, incremental2, ...). So, if any chain requires a > log, it should be preserved. This is not the case. > The result is that logs could be incorrectly deleted, resulting in data loss > in backups. > PR incoming with test & fix. -- This message was sent by Atlassian Jira (v8.20.10#820010)
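The preservation rule the issue describes (a WAL must survive if *any* backup root's chain still needs it) can be sketched as follows. This is an editorial illustration, not the actual BackupLogCleaner code; the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: a WAL is deletable only if EVERY backup root's chain has
 *  already backed up past its timestamp. Deciding per-root and deleting
 *  when one root no longer needs the log is the bug described above. */
public class BackupLogCleanerSketch {
  // walTs: write time of the WAL file. oldestNeededTsPerRoot: for each
  // backup root, the oldest WAL timestamp its next backup still needs.
  static boolean isDeletable(long walTs, Map<String, Long> oldestNeededTsPerRoot) {
    for (long needed : oldestNeededTsPerRoot.values()) {
      if (walTs >= needed) {
        return false; // some chain still needs this WAL -> preserve it
      }
    }
    return true; // no chain needs it; safe to delete
  }

  public static void main(String[] args) {
    Map<String, Long> roots = new HashMap<>();
    roots.put("root1", 100L); // root1 has backed up everything before ts 100
    roots.put("root2", 50L);  // root2 only before ts 50
    System.out.println(isDeletable(40L, roots)); // older than all chains need
    System.out.println(isDeletable(70L, roots)); // root2 still needs it
  }
}
```

The key design point is the conjunction over all roots: the deletability decision must be the intersection, not a per-root decision.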
[jira] [Commented] (HBASE-28705) BackupLogCleaner cleans required WALs when using multiple backuproots
[ https://issues.apache.org/jira/browse/HBASE-28705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885143#comment-17885143 ] Hudson commented on HBASE-28705: Results for branch master [build #1170 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1170/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1170/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1170/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > BackupLogCleaner cleans required WALs when using multiple backuproots > - > > Key: HBASE-28705 > URL: https://issues.apache.org/jira/browse/HBASE-28705 > Project: HBase > Issue Type: Bug > Components: backup&restore >Affects Versions: 2.6.0, 3.0.0 >Reporter: Dieter De Paepe >Assignee: Dieter De Paepe >Priority: Blocker > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > The BackupLogCleaner is responsible for avoiding the deletion of WAL/logs > that still need to be included in a future backup. > The logic to decide which files can be deleted does not work correctly when > multiple backup roots are used. Each backup root has a different chain of > backups (full, incremental1, incremental2, ...). So, if any chain requires a > log, it should be preserved. This is not the case. > The result is that logs could be incorrectly deleted, resulting in data loss > in backups. > PR incoming with test & fix. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28642) Hide old PR comments when posting new
[ https://issues.apache.org/jira/browse/HBASE-28642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885139#comment-17885139 ] Hudson commented on HBASE-28642: Results for branch branch-2.5 [build #600 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/600/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/600/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/600/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/600/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/600/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/600/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Hide old PR comments when posting new > - > > Key: HBASE-28642 > URL: https://issues.apache.org/jira/browse/HBASE-28642 > Project: HBase > Issue Type: Task > Components: build, community >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1, 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > > It would be really nice if the build bot would hide the old commits when it > posts new ones. 
When a PR has been open for a while, we end up with more > build-bot activity than human activity and it's easy to lose human comments. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28642) Hide old PR comments when posting new
[ https://issues.apache.org/jira/browse/HBASE-28642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885136#comment-17885136 ] Hudson commented on HBASE-28642: Results for branch branch-3 [build #297 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/297/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/297/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/297/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Hide old PR comments when posting new > - > > Key: HBASE-28642 > URL: https://issues.apache.org/jira/browse/HBASE-28642 > Project: HBase > Issue Type: Task > Components: build, community >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1, 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > > It would be really nice if the build bot would hide the old commits when it > posts new ones. When a PR has been open for a while, we end up with more > build-bot activity than human activity and it's easy to lose human comments. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28705) BackupLogCleaner cleans required WALs when using multiple backuproots
[ https://issues.apache.org/jira/browse/HBASE-28705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885137#comment-17885137 ] Hudson commented on HBASE-28705: Results for branch branch-3 [build #297 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/297/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/297/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/297/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > BackupLogCleaner cleans required WALs when using multiple backuproots > - > > Key: HBASE-28705 > URL: https://issues.apache.org/jira/browse/HBASE-28705 > Project: HBase > Issue Type: Bug > Components: backup&restore >Affects Versions: 2.6.0, 3.0.0 >Reporter: Dieter De Paepe >Assignee: Dieter De Paepe >Priority: Blocker > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > The BackupLogCleaner is responsible for avoiding the deletion of WAL/logs > that still need to be included in a future backup. > The logic to decide which files can be deleted does not work correctly when > multiple backup roots are used. Each backup root has a different chain of > backups (full, incremental1, incremental2, ...). So, if any chain requires a > log, it should be preserved. This is not the case. > The result is that logs could be incorrectly deleted, resulting in data loss > in backups. > PR incoming with test & fix. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28187) NPE when flushing a non-existing column family
[ https://issues.apache.org/jira/browse/HBASE-28187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885135#comment-17885135 ] Hudson commented on HBASE-28187: Results for branch branch-3 [build #297 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/297/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/297/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/297/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > NPE when flushing a non-existing column family > -- > > Key: HBASE-28187 > URL: https://issues.apache.org/jira/browse/HBASE-28187 > Project: HBase > Issue Type: Bug > Components: Client, regionserver >Affects Versions: 2.6.0, 2.4.17, 2.5.5 >Reporter: Ke Han >Assignee: guluo >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > Flushing a column family that doesn't exist in the table causes an NPE error in > both the shell and the HMaster logs. > h1. Reproduce > Start up an HBase 2.5.9 cluster; executing the following commands with the hbase > shell on the HMaster node leads to an NPE. 
(Can be reproduced deterministically) > {code:java} > create 'table', {NAME => 'cf1', VERSIONS => 2, COMPRESSION => 'GZ', > BLOOMFILTER => 'ROWCOL'}, {NAME => 'cf2', VERSIONS => 4, COMPRESSION => > 'NONE', BLOOMFILTER => 'ROWCOL'} > incr 'table', 'row1', 'cf1:cell', 2 > flush 'table', 'cf3'{code} > The shell outputs > {code:java} > hbase:006:0> create 'table', {NAME => 'cf1', VERSIONS => 2, COMPRESSION => > 'GZ', BLOOMFILTER => 'ROWCOL'}, {NAME => 'cf2', VERSIONS => 4, COMPRESSION => > 'NONE', BLOOMFILTER => 'ROWCOL'} > Created table table > Took 2.1238 seconds > > => Hbase::Table - table > hbase:007:0> > hbase:008:0> incr 'table', 'row1', 'cf1:cell', 2 > COUNTER VALUE = 2 > Took 0.0131 seconds > > hbase:009:0> > hbase:010:0> flush 'table', 'cf3' > ERROR: java.io.IOException: java.lang.NullPointerException > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:479) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124) > at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102) > at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82) > Caused by: > org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: > java.lang.NullPointerException > at > org.apache.hadoop.hbase.procedure.flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool.waitForOutstandingTasks(RegionServerFlushTableProcedureManager.java:274) > at > org.apache.hadoop.hbase.procedure.flush.FlushTableSubprocedure.flushRegions(FlushTableSubprocedure.java:115) > at > org.apache.hadoop.hbase.procedure.flush.FlushTableSubprocedure.acquireBarrier(FlushTableSubprocedure.java:126) > at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:160) > at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:46) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) > For usage try 'help "flush"' > Took 12.1713 seconds > {code} > > According to the _flush (flush.rb)_ command specification, a user can flush a > specific column family. > {code:java} > Flush all regions in passed table or pass a region row to > flush an individual region or a region server name whose format > is 'host,port,startcode', to flush all its regions. > You can also flush a single column family for all regions within a table, > or for an specific region only. > For example: > hbase> flush 'TABLENAME' > hbase> flush 'TABLENAME','FAMILYNAME' {code} > In the above case, *cf3* is an incorrect input (a non-existing column family). If > a user tries to flush it, the expected output is: > # HBase rejects this operation > # returns a prompt saying the column family doesn't exist
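The expected behavior above amounts to validating the requested family before scheduling the flush, so the server rejects `flush 'table', 'cf3'` with a clear error instead of an NPE. A minimal sketch of that direction, with hypothetical names (not the actual HBASE-28187 patch):

```java
import java.util.Set;

/** Sketch: reject a flush of a non-existing column family up front
 *  with a descriptive error rather than failing later with an NPE. */
public class FlushFamilyCheckSketch {
  static void preFlushCheck(Set<String> tableFamilies, String requestedFamily) {
    // A null family means "flush all families", which is always allowed.
    if (requestedFamily != null && !tableFamilies.contains(requestedFamily)) {
      throw new IllegalArgumentException(
        "Column family '" + requestedFamily + "' does not exist in table");
    }
  }

  public static void main(String[] args) {
    Set<String> families = Set.of("cf1", "cf2");
    preFlushCheck(families, "cf1"); // existing family: accepted
    try {
      preFlushCheck(families, "cf3"); // non-existing family: rejected cleanly
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```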
[jira] [Commented] (HBASE-28642) Hide old PR comments when posting new
[ https://issues.apache.org/jira/browse/HBASE-28642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884880#comment-17884880 ] Hudson commented on HBASE-28642: Results for branch branch-2 [build #1154 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1154/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1154/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1154/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1154/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1154/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1154/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Hide old PR comments when posting new > - > > Key: HBASE-28642 > URL: https://issues.apache.org/jira/browse/HBASE-28642 > Project: HBase > Issue Type: Task > Components: build, community >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1, 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > > It would be really nice if the build bot would hide the old commits when it > posts new ones. 
When a PR has been open for a while, we end up with more > build-bot activity than human activity and it's easy to lose human comments. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28642) Hide old PR comments when posting new
[ https://issues.apache.org/jira/browse/HBASE-28642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884847#comment-17884847 ] Hudson commented on HBASE-28642: Results for branch branch-2.6 [build #208 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Hide old PR comments when posting new > - > > Key: HBASE-28642 > URL: https://issues.apache.org/jira/browse/HBASE-28642 > Project: HBase > Issue Type: Task > Components: build, community >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1, 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > > It would be really nice if the build bot would hide the old commits when it > posts new ones. 
When a PR has been open for a while, we end up with more > build-bot activity than human activity and it's easy to lose human comments. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27903) Skip submitting Split/Merge procedure when split/merge is disabled at table level
[ https://issues.apache.org/jira/browse/HBASE-27903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884845#comment-17884845 ] Hudson commented on HBASE-27903: Results for branch branch-2.6 [build #208 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Skip submitting Split/Merge procedure when split/merge is disabled at table > level > - > > Key: HBASE-27903 > URL: https://issues.apache.org/jira/browse/HBASE-27903 > Project: HBase > Issue Type: Improvement > Components: Admin >Reporter: Ashok shetty >Assignee: Nihal Jain >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > *Scenario* > If split/merge is disabled at table level, the master will submit a > SplitTableRegionProcedure/MergeTableRegionsProcedure and then roll it back as > execution fails during pre-checks. > *Improvement* > The master can check this early and skip submitting the > SplitTableRegionProcedure/MergeTableRegionsProcedure when the split/merge switch > is disabled at table level. > *Steps* > {code:java} > create 'testCreateTableWithMergeDisableParameter', 'f1', {MERGE_ENABLED => > false} > list_regions 'testCreateTableWithMergeDisableParameter' > merge_region > 'd21cdc5d488e8036017696c46cffd9b1','6382c8f731a4f0379b6e98ece4b06e3e' > {code} > {code:java} > create 'testcreatetablewithsplitdisableparameter', 'f1', {SPLIT_ENABLED => > false} > split 'testcreatetablewithsplitdisableparameter','30'{code} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
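The improvement described above is an early guard: consult the table-level switch before submitting a procedure, instead of submitting one that only rolls back later in its pre-checks. An illustrative sketch (names here are not the actual HBase master code):

```java
/** Sketch: fail fast when the table-level split/merge switch is off,
 *  so no SplitTableRegionProcedure/MergeTableRegionsProcedure is ever
 *  queued just to be rolled back. */
public class SplitMergeSwitchSketch {
  enum Op { SPLIT, MERGE }

  static void checkAllowed(boolean splitEnabled, boolean mergeEnabled, Op op) {
    if (op == Op.SPLIT && !splitEnabled) {
      throw new IllegalStateException("SPLIT is disabled for this table");
    }
    if (op == Op.MERGE && !mergeEnabled) {
      throw new IllegalStateException("MERGE is disabled for this table");
    }
    // Only past this point would the master submit the procedure.
  }
}
```

The benefit is purely operational: the request is rejected synchronously with a clear message, and the procedure store never records a doomed procedure.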
[jira] [Commented] (HBASE-28842) TestRequestAttributes should fail when expected
[ https://issues.apache.org/jira/browse/HBASE-28842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884846#comment-17884846 ] Hudson commented on HBASE-28842: Results for branch branch-2.6 [build #208 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/208/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > TestRequestAttributes should fail when expected > --- > > Key: HBASE-28842 > URL: https://issues.apache.org/jira/browse/HBASE-28842 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 3.0.0 >Reporter: Evelyn Boland >Assignee: Evelyn Boland >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1, 2.7.0, 3.0.0-beta-2, 2.6.1 > > > Problem: > The tests in the TestRequestAttributes class pass even when they should fail. > I've included an example of a test that should fail but does not below. > Fix: > Throw an IOException in the AttributesCoprocessor when the map of expected > request attributes does not match the map of given request attributes. > > Test: > We set 2+ request attributes on the Get request but always return 0 request > attributes from AttributesCoprocessor::getRequestAttributesForRowKey method. > Yet the test passes even though the map of expected request attributes never > matches the map of given request attributes. 
> {code:java} > @Category({ ClientTests.class, MediumTests.class }) > public class TestRequestAttributes { > @ClassRule > public static final HBaseClassTestRule CLASS_RULE = > HBaseClassTestRule.forClass(TestRequestAttributes.class); > private static final byte[] ROW_KEY1 = Bytes.toBytes("1"); > private static final Map<String, byte[]> CONNECTION_ATTRIBUTES = new HashMap<>(); > private static final Map<byte[], Map<String, byte[]>> ROW_KEY_TO_REQUEST_ATTRIBUTES = > new HashMap<>(); > static { > CONNECTION_ATTRIBUTES.put("clientId", Bytes.toBytes("foo")); > ROW_KEY_TO_REQUEST_ATTRIBUTES.put(ROW_KEY1, addRandomRequestAttributes()); > } > private static final ExecutorService EXECUTOR_SERVICE = > Executors.newFixedThreadPool(100); > private static final byte[] FAMILY = Bytes.toBytes("0"); > private static final TableName TABLE_NAME = > TableName.valueOf("testRequestAttributes"); > private static final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil(); > private static SingleProcessHBaseCluster cluster; > @BeforeClass > public static void setUp() throws Exception { > cluster = TEST_UTIL.startMiniCluster(1); > Table table = TEST_UTIL.createTable(TABLE_NAME, new byte[][] { FAMILY }, > 1, > HConstants.DEFAULT_BLOCKSIZE, AttributesCoprocessor.class.getName()); > table.close(); > } > @AfterClass > public static void afterClass() throws Exception { > cluster.close(); > TEST_UTIL.shutdownMiniCluster(); > } > @Test > public void testRequestAttributesGet() throws IOException { > Configuration conf = TEST_UTIL.getConfiguration(); > try ( > Connection conn = ConnectionFactory.createConnection(conf, null, > AuthUtil.loginClient(conf), > CONNECTION_ATTRIBUTES); > Table table = configureRequestAttributes(conn.getTableBuilder(TABLE_NAME, > EXECUTOR_SERVICE), > ROW_KEY_TO_REQUEST_ATTRIBUTES.get(ROW_KEY1)).build()) { > table.get(new Get(ROW_KEY1)); > } > } > private static Map<String, byte[]> addRandomRequestAttributes() { > Map<String, byte[]> requestAttributes = new HashMap<>(); > int j = Math.max(2, (int) (10 * Math.random())); > for (int i = 0; i < j; i++) { > requestAttributes.put(String.valueOf(i), > 
Bytes.toBytes(UUID.randomUUID().toString())); > } > return requestAttributes; > } > public static class AttributesCop
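The proposed fix, having the coprocessor throw an IOException when the received request attributes do not match the expected ones, can be sketched as a standalone helper. The class and method names below are illustrative, not the actual AttributesCoprocessor code:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Map;

/** Sketch: compare expected vs. received request attributes and throw,
 *  so the client-side test fails instead of silently passing. */
public class AttributeCheckSketch {
  static void validate(Map<String, byte[]> expected, Map<String, byte[]> actual)
      throws IOException {
    if (expected.size() != actual.size()) {
      throw new IOException("Attribute count mismatch: expected "
        + expected.size() + ", got " + actual.size());
    }
    for (Map.Entry<String, byte[]> e : expected.entrySet()) {
      // byte[] needs content comparison, not reference equality
      if (!Arrays.equals(e.getValue(), actual.get(e.getKey()))) {
        throw new IOException("Mismatch for attribute " + e.getKey());
      }
    }
  }
}
```

Throwing from the server-side hook surfaces the mismatch as an RPC failure, which is exactly what makes the test fail when it should.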
[jira] [Commented] (HBASE-20653) Add missing observer hooks for region server group to MasterObserver
[ https://issues.apache.org/jira/browse/HBASE-20653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884792#comment-17884792 ] Hudson commented on HBASE-20653: Results for branch branch-2.5 [build #599 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Add missing observer hooks for region server group to MasterObserver > > > Key: HBASE-20653 > URL: https://issues.apache.org/jira/browse/HBASE-20653 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Nihal Jain >Priority: Major > Fix For: 3.0.0-alpha-1 > > Attachments: HBASE-20653.master.001.patch, > HBASE-20653.master.002.patch, HBASE-20653.master.003.patch, > HBASE-20653.master.004.patch > > > Currently the following region server group operations don't have > corresponding hook in MasterObserver : > * getRSGroupInfo > * getRSGroupInfoOfServer > * getRSGroupInfoOfTable > * listRSGroup > This JIRA is to > * add them to MasterObserver > * add pre/post hook calls in RSGroupAdminEndpoint thru > master.getMasterCoprocessorHost for the above operations > * add corresponding tests to TestRSGroups (in similar manner to that of > HBASE-20627) -- This message was sent by Atlassian Jira (v8.20.10#820010)
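The pre/post hook pattern this issue asks to extend to the rsgroup operations can be sketched generically. The interface and method names below are illustrative, not the exact MasterObserver signatures:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the observer pre/post hook pattern: every registered observer
 *  is invoked before and after the wrapped operation, so security or audit
 *  coprocessors can intercept calls such as getRSGroupInfo. */
public class ObserverHookSketch {
  interface GroupObserver {
    default void preGetRSGroupInfo(String group) {}
    default void postGetRSGroupInfo(String group) {}
  }

  private final List<GroupObserver> observers = new ArrayList<>();

  void register(GroupObserver o) {
    observers.add(o);
  }

  String getRSGroupInfo(String group) {
    observers.forEach(o -> o.preGetRSGroupInfo(group));   // pre hook: may veto/audit
    String info = "info:" + group;                        // the actual operation
    observers.forEach(o -> o.postGetRSGroupInfo(group));  // post hook: observe result
    return info;
  }
}
```

Without the pre hook, an access-control observer has no interception point for the operation at all, which is the gap the issue describes.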
[jira] [Commented] (HBASE-28867) Backport "HBASE-20653 Add missing observer hooks for region server group to MasterObserver" to branch-2
[ https://issues.apache.org/jira/browse/HBASE-28867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884791#comment-17884791 ] Hudson commented on HBASE-28867: Results for branch branch-2.5 [build #599 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Backport "HBASE-20653 Add missing observer hooks for region server group to > MasterObserver" to branch-2 > --- > > Key: HBASE-28867 > URL: https://issues.apache.org/jira/browse/HBASE-28867 > Project: HBase > Issue Type: Bug > Components: rsgroup >Affects Versions: 2.6.0, 2.7.0, 2.5.10 >Reporter: Ted Yu >Assignee: Nihal Jain >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 2.6.1, 2.5.11 > > > Currently the following region server group operations don't have > corresponding hook in MasterObserver : > * getRSGroupInfo > * getRSGroupInfoOfServer > * getRSGroupInfoOfTable > * listRSGroup > This JIRA is to > * add them to MasterObserver > * add pre/post hook calls in RSGroupAdminEndpoint thru > master.getMasterCoprocessorHost for the above operations > * add corresponding tests to TestRSGroups (in similar manner to that of > HBASE-20627) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28868) Add missing permission check for updateRSGroupConfig in branch-2
[ https://issues.apache.org/jira/browse/HBASE-28868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884790#comment-17884790 ] Hudson commented on HBASE-28868: Results for branch branch-2.5 [build #599 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/599/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Add missing permission check for updateRSGroupConfig in branch-2 > > > Key: HBASE-28868 > URL: https://issues.apache.org/jira/browse/HBASE-28868 > Project: HBase > Issue Type: Task > Components: rsgroup >Affects Versions: 2.7.0 >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.0, 2.6.1, 2.5.11 > > > Found this during HBASE-28867, we do not have security check for > updateRSGroupConfig in branch-2. See > [https://github.com/apache/hbase/blob/0dc334f572329be7eb2455cec3519fc820c04c25/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java#L450] > Same check exists in master > [https://github.com/apache/hbase/blob/52082bc5b80a60406bfaaa630ed5cb23027436c1/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java#L2279] > -- This message was sent by Atlassian Jira (v8.20.10#820010)
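The missing check amounts to gating the mutation behind an ADMIN permission test, as AccessController already does on master. The sketch below is an editor's illustration under assumptions: the enum, method names, and the use of `java.nio.file.AccessDeniedException` are hypothetical stand-ins for HBase's security types.

```java
import java.nio.file.AccessDeniedException;
import java.util.Set;

// Hypothetical sketch: require ADMIN before updateRSGroupConfig mutates
// anything. Names are illustrative, not the HBase AccessController API.
public class UpdateConfigPermissionSketch {
  enum Action { READ, WRITE, ADMIN }

  static void requirePermission(Set<Action> granted, Action needed)
      throws AccessDeniedException {
    if (!granted.contains(needed)) {
      throw new AccessDeniedException("missing permission: " + needed);
    }
  }

  static String updateRSGroupConfig(Set<Action> granted, String group)
      throws AccessDeniedException {
    requirePermission(granted, Action.ADMIN); // gate first, mutate second
    return "updated " + group;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(updateRSGroupConfig(Set.of(Action.ADMIN), "default"));
    try {
      updateRSGroupConfig(Set.of(Action.READ), "default");
    } catch (AccessDeniedException e) {
      System.out.println("denied: " + e.getMessage());
    }
  }
}
```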
[jira] [Commented] (HBASE-28187) NPE when flushing a non-existing column family
[ https://issues.apache.org/jira/browse/HBASE-28187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884765#comment-17884765 ] Hudson commented on HBASE-28187: Results for branch master [build #1169 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1169/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1169/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1169/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > NPE when flushing a non-existing column family > -- > > Key: HBASE-28187 > URL: https://issues.apache.org/jira/browse/HBASE-28187 > Project: HBase > Issue Type: Bug > Components: Client, regionserver >Affects Versions: 2.6.0, 2.4.17, 2.5.5 >Reporter: Ke Han >Assignee: guluo >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > Flush a columnfamily that doesn't exist in the table will cause NPE ERROR in > both shell and the HMaster logs. > h1. Reproduce > Start up HBase 2.5.9 cluster, executing the following commands with hbase > shell in HMaster node will lead to NPE. 
(Can be reproduced deterministically) > {code:java} > create 'table', {NAME => 'cf1', VERSIONS => 2, COMPRESSION => 'GZ', > BLOOMFILTER => 'ROWCOL'}, {NAME => 'cf2', VERSIONS => 4, COMPRESSION => > 'NONE', BLOOMFILTER => 'ROWCOL'} > incr 'table', 'row1', 'cf1:cell', 2 > flush 'table', 'cf3'{code} > The shell outputs > {code:java} > hbase:006:0> create 'table', {NAME => 'cf1', VERSIONS => 2, COMPRESSION => > 'GZ', BLOOMFILTER => 'ROWCOL'}, {NAME => 'cf2', VERSIONS => 4, COMPRESSION => > 'NONE', BLOOMFILTER => 'ROWCOL'} > Created table table > Took 2.1238 seconds > > => Hbase::Table - table > hbase:007:0> > hbase:008:0> incr 'table', 'row1', 'cf1:cell', 2 > COUNTER VALUE = 2 > Took 0.0131 seconds > > hbase:009:0> > hbase:010:0> flush 'table', 'cf3' > ERROR: java.io.IOException: java.lang.NullPointerException > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:479) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124) > at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102) > at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82) > Caused by: > org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: > java.lang.NullPointerException > at > org.apache.hadoop.hbase.procedure.flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool.waitForOutstandingTasks(RegionServerFlushTableProcedureManager.java:274) > at > org.apache.hadoop.hbase.procedure.flush.FlushTableSubprocedure.flushRegions(FlushTableSubprocedure.java:115) > at > org.apache.hadoop.hbase.procedure.flush.FlushTableSubprocedure.acquireBarrier(FlushTableSubprocedure.java:126) > at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:160) > at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:46) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) > For usage try 'help "flush"' > Took 12.1713 seconds > {code} > > According to the _flush (flush.rb)_ command specification, a user can flush a > specific column family. > {code:java} > Flush all regions in passed table or pass a region row to > flush an individual region or a region server name whose format > is 'host,port,startcode', to flush all its regions. > You can also flush a single column family for all regions within a table, > or for an specific region only. > For example: > hbase> flush 'TABLENAME' > hbase> flush 'TABLENAME','FAMILYNAME' {code} > In the above case, *cf3* is an incorrect input (non-existing column family). If > the user tries to flush it, the expected output is: > # HBase rejects this operation > # returns a prompt saying the column family doesn't exist > {_}"{_}{_}{+}ERRO
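The expected behaviour (reject the flush and name the unknown family, rather than letting a null store dereference surface as an NPE) can be sketched outside HBase. This is an editor's illustration under assumptions: the class, map, and method names below are hypothetical and are not the actual HBase flush-procedure code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the NPE pattern arises when a lookup for an unknown
// family returns null and the flush path dereferences it. Guarding the lookup
// and failing fast with a descriptive error is the reported expectation.
public class FlushFamilyCheck {
  static final Map<String, Object> STORES = new HashMap<>();
  static {
    STORES.put("cf1", new Object()); // families that exist in the table
    STORES.put("cf2", new Object());
  }

  /** Returns true if the family can be flushed; throws for unknown families. */
  static boolean validateFamilyForFlush(String family) {
    Object store = STORES.get(family); // null for a non-existing family
    if (store == null) {
      throw new IllegalArgumentException(
        "Column family '" + family + "' does not exist");
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(validateFamilyForFlush("cf1")); // known family: flushes
    try {
      validateFamilyForFlush("cf3"); // unknown family: rejected, no NPE
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```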
[jira] [Commented] (HBASE-28839) Exception handling during retrieval of bucket-cache from persistence.
[ https://issues.apache.org/jira/browse/HBASE-28839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884520#comment-17884520 ] Hudson commented on HBASE-28839: Results for branch branch-2 [build #1153 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1153/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1153/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1153/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1153/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1153/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1153/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Exception handling during retrieval of bucket-cache from persistence. 
> - > > Key: HBASE-28839 > URL: https://issues.apache.org/jira/browse/HBASE-28839 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 3.0.0-beta-1, 2.7.0 >Reporter: Janardhan Hungund >Assignee: Janardhan Hungund >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0, 2.7.0 > > > During the retrieval of bucket cache from the persistence file during the > startup, it was observed that, if an exception, other than, the IOException > occurs, the bucket cache internal members remain uninitialised and cause the > bucket to remain unusable. The exception is not logged in the trace file and > the retrieval thread exits without initialising the bucket-cache. > Also, the NullPointerExceptions are seen when, trying to use the cache. > {code:java} > 2024-09-10 14:33:30,020 ERROR > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache: WriterThread encountered > error > java.lang.NullPointerException > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache$RAMQueueEntry.writeToCache(BucketCache.java:1975) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.doDrain(BucketCache.java:1298) > {code} > > {code:java} > 2024-09-13 07:01:05,964 ERROR > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Error getting metrics > from source RegionServer,sub=Server > java.lang.NullPointerException > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getFreeSize(BucketCache.java:1819) > at > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getFreeSize(CombinedBlockCache.java:179) > at > org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapperImpl.getBlockCacheFreeSize(MetricsRegionServerWrapperImpl.java:308) > at > org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceImpl.addGaugesToMetricsRecordBuilder(MetricsRegionServerSourceImpl.java:525) > at > org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceImpl.getMetrics(MetricsRegionServerSourceImpl.java:333) > {code} > All type of exceptions need to be handled 
gracefully. > All types of exceptions must be logged to the trace file. > The bucket cache needs to be reinitialised and made usable. > Thanks, > Janardhan -- This message was sent by Atlassian Jira (v8.20.10#820010)
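The three requirements above (catch everything, log it, fall back to a usable cache) can be sketched as a narrow pattern. This is an editor's illustration under assumptions: the method and class names are hypothetical, not the real BucketCache retrieval code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the requested handling: any failure while restoring
// the cache from its persistence file is logged and answered by a fresh,
// fully initialised cache, so no internal member is left null.
public class CacheRetrievalSketch {
  static Map<String, byte[]> retrieveFromPersistence() {
    // Simulate a non-IOException failure during deserialisation.
    throw new IllegalStateException("corrupt persistence file");
  }

  static Map<String, byte[]> loadOrReinitialise() {
    try {
      return retrieveFromPersistence();
    } catch (Throwable t) { // catch all exception types, not just IOException
      System.err.println("Cache retrieval failed, reinitialising: " + t);
      return new HashMap<>(); // empty but usable cache; never null members
    }
  }

  public static void main(String[] args) {
    Map<String, byte[]> cache = loadOrReinitialise();
    System.out.println("cache usable: " + cache.isEmpty());
  }
}
```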
[jira] [Commented] (HBASE-28868) Add missing permission check for updateRSGroupConfig in branch-2
[ https://issues.apache.org/jira/browse/HBASE-28868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884500#comment-17884500 ] Hudson commented on HBASE-28868: Results for branch branch-2.6 [build #207 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-20653) Add missing observer hooks for region server group to MasterObserver
[ https://issues.apache.org/jira/browse/HBASE-20653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884499#comment-17884499 ] Hudson commented on HBASE-20653: Results for branch branch-2.6 [build #207 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28867) Backport "HBASE-20653 Add missing observer hooks for region server group to MasterObserver" to branch-2
[ https://issues.apache.org/jira/browse/HBASE-28867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884498#comment-17884498 ] Hudson commented on HBASE-28867: Results for branch branch-2.6 [build #207 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/207/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28839) Exception handling during retrieval of bucket-cache from persistence.
[ https://issues.apache.org/jira/browse/HBASE-28839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884389#comment-17884389 ] Hudson commented on HBASE-28839: Results for branch master [build #1168 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1168/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1168/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1168/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Exception handling during retrieval of bucket-cache from persistence. > - > > Key: HBASE-28839 > URL: https://issues.apache.org/jira/browse/HBASE-28839 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 3.0.0-beta-1, 2.7.0 >Reporter: Janardhan Hungund >Assignee: Janardhan Hungund >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0, 2.7.0 > > > During the retrieval of bucket cache from the persistence file during the > startup, it was observed that, if an exception, other than, the IOException > occurs, the bucket cache internal members remain uninitialised and cause the > bucket to remain unusable. The exception is not logged in the trace file and > the retrieval thread exits without initialising the bucket-cache. > Also, the NullPointerExceptions are seen when, trying to use the cache. 
[jira] [Commented] (HBASE-28842) TestRequestAttributes should fail when expected
[ https://issues.apache.org/jira/browse/HBASE-28842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884376#comment-17884376 ] Hudson commented on HBASE-28842: Results for branch branch-3 [build #296 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/296/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/296/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/296/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > TestRequestAttributes should fail when expected > --- > > Key: HBASE-28842 > URL: https://issues.apache.org/jira/browse/HBASE-28842 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 3.0.0 >Reporter: Evelyn Boland >Assignee: Evelyn Boland >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1, 2.7.0, 3.0.0-beta-2, 2.6.1 > > > Problem: > The tests in the TestRequestAttributes class pass even when they should fail. > I've included an example of a test that should fail but does not below. > Fix: > Throw an IOException in the AttributesCoprocessor when the map of expected > request attributes does not match the map of given request attributes. > > Test: > We set 2+ request attributes on the Get request but always return 0 request > attributes from AttributesCoprocessor::getRequestAttributesForRowKey method. > Yet the test passes even though the map of expected request attributes never > matches the map of given request attributes. 
> {code:java} > @Category({ ClientTests.class, MediumTests.class }) > public class TestRequestAttributes { > @ClassRule > public static final HBaseClassTestRule CLASS_RULE = > HBaseClassTestRule.forClass(TestRequestAttributes.class); > private static final byte[] ROW_KEY1 = Bytes.toBytes("1"); > private static final Map> > ROW_KEY_TO_REQUEST_ATTRIBUTES = > new HashMap<>(); > static { > CONNECTION_ATTRIBUTES.put("clientId", Bytes.toBytes("foo")); > ROW_KEY_TO_REQUEST_ATTRIBUTES.put(ROW_KEY1, addRandomRequestAttributes()); > } > private static final ExecutorService EXECUTOR_SERVICE = > Executors.newFixedThreadPool(100); > private static final byte[] FAMILY = Bytes.toBytes("0"); > private static final TableName TABLE_NAME = > TableName.valueOf("testRequestAttributes"); > private static final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil(); > private static SingleProcessHBaseCluster cluster; > @BeforeClass > public static void setUp() throws Exception { > cluster = TEST_UTIL.startMiniCluster(1); > Table table = TEST_UTIL.createTable(TABLE_NAME, new byte[][] { FAMILY }, > 1, > HConstants.DEFAULT_BLOCKSIZE, AttributesCoprocessor.class.getName()); > table.close(); > } > @AfterClass > public static void afterClass() throws Exception { > cluster.close(); > TEST_UTIL.shutdownMiniCluster(); > } > @Test > public void testRequestAttributesGet() throws IOException { > Configuration conf = TEST_UTIL.getConfiguration(); > try ( > Connection conn = ConnectionFactory.createConnection(conf, null, > AuthUtil.loginClient(conf), > CONNECTION_ATTRIBUTES); > Table table = configureRequestAttributes(conn.getTableBuilder(TABLE_NAME, > EXECUTOR_SERVICE), > ROW_KEY_TO_REQUEST_ATTRIBUTES.get(ROW_KEY1)).build()) { > table.get(new Get(ROW_KEY1)); > } > } > private static Map addRandomRequestAttributes() { > Map requestAttributes = new HashMap<>(); > int j = Math.max(2, (int) (10 * Math.random())); > for (int i = 0; i < j; i++) { > requestAttributes.put(String.valueOf(i), > 
Bytes.toBytes(UUID.randomUUID().toString())); > } > return requestAttributes; > } > public static class AttributesCoprocessor implements RegionObserver, > RegionCoprocessor { > @Override > public Optional getRegionObserver() { > return Optional.of(this); > } > @Override > public void preGetOp(ObserverContext c, Get > get, > List result) throws IOException { > > validateRequestAttributes(getRequestAttributesForRowKey(get.getRow(; > } > private Map getRequestAttributesForRowKey(byte[] rowKey) { > return Collections.emptyMap(); // This line helps demonstrate the bug > } > private boolean validateRequestAttributes(Map > requestAttributes) { > RpcCall rpcCall = RpcServer.getCurrentC
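The fix the report describes (throw an IOException on a mismatch so the test actually fails) can be sketched compactly. This is an editor's illustration under assumptions: the map shapes and class name below are simplified stand-ins, not the real AttributesCoprocessor types.

```java
import java.io.IOException;
import java.util.Map;

// Hypothetical sketch of the described fix: the coprocessor-side validation
// throws (failing the calling test) on a mismatch, instead of returning a
// boolean that nothing checks.
public class AttributeValidationSketch {
  static void validateRequestAttributes(Map<String, String> expected,
      Map<String, String> actual) throws IOException {
    if (!expected.equals(actual)) {
      throw new IOException("request attribute mismatch: expected=" + expected
        + " actual=" + actual);
    }
  }

  public static void main(String[] args) throws IOException {
    validateRequestAttributes(Map.of("clientId", "foo"), Map.of("clientId", "foo"));
    System.out.println("match ok");
    try {
      // Mirrors the bug report: attributes were set but none came back.
      validateRequestAttributes(Map.of("clientId", "foo"), Map.of());
    } catch (IOException e) {
      System.out.println("mismatch rejected");
    }
  }
}
```

With this shape, the demonstration test in the description would propagate the IOException through preGetOp and fail as expected.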
[jira] [Commented] (HBASE-28839) Exception handling during retrieval of bucket-cache from persistence.
[ https://issues.apache.org/jira/browse/HBASE-28839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884377#comment-17884377 ] Hudson commented on HBASE-28839: Results for branch branch-3 [build #296 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/296/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/296/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/296/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Exception handling during retrieval of bucket-cache from persistence. > - > > Key: HBASE-28839 > URL: https://issues.apache.org/jira/browse/HBASE-28839 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 3.0.0-beta-1, 2.7.0 >Reporter: Janardhan Hungund >Assignee: Janardhan Hungund >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0, 2.7.0 > > > During the retrieval of bucket cache from the persistence file during the > startup, it was observed that, if an exception, other than, the IOException > occurs, the bucket cache internal members remain uninitialised and cause the > bucket to remain unusable. The exception is not logged in the trace file and > the retrieval thread exits without initialising the bucket-cache. > Also, the NullPointerExceptions are seen when, trying to use the cache. 
[jira] [Commented] (HBASE-28867) Backport "HBASE-20653 Add missing observer hooks for region server group to MasterObserver" to branch-2
[ https://issues.apache.org/jira/browse/HBASE-28867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884138#comment-17884138 ] Hudson commented on HBASE-28867: Results for branch branch-2 [build #1152 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Backport "HBASE-20653 Add missing observer hooks for region server group to > MasterObserver" to branch-2 > --- > > Key: HBASE-28867 > URL: https://issues.apache.org/jira/browse/HBASE-28867 > Project: HBase > Issue Type: Bug > Components: rsgroup >Affects Versions: 2.6.0, 2.7.0, 2.5.10 >Reporter: Ted Yu >Assignee: Nihal Jain >Priority: Major > Labels: pull-request-available > Fix For: 2.6.1 > > > Currently the following region server group operations don't have > corresponding hook in MasterObserver : > * getRSGroupInfo > * getRSGroupInfoOfServer > * getRSGroupInfoOfTable > * listRSGroup > This JIRA is to > * add them to MasterObserver > * add pre/post hook calls in RSGroupAdminEndpoint thru > master.getMasterCoprocessorHost for the above operations > * add corresponding tests to TestRSGroups (in similar manner to that of > HBASE-20627) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-20653) Add missing observer hooks for region server group to MasterObserver
[ https://issues.apache.org/jira/browse/HBASE-20653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884139#comment-17884139 ] Hudson commented on HBASE-20653: Results for branch branch-2 [build #1152 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Add missing observer hooks for region server group to MasterObserver > > > Key: HBASE-20653 > URL: https://issues.apache.org/jira/browse/HBASE-20653 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Nihal Jain >Priority: Major > Fix For: 3.0.0-alpha-1 > > Attachments: HBASE-20653.master.001.patch, > HBASE-20653.master.002.patch, HBASE-20653.master.003.patch, > HBASE-20653.master.004.patch > > > Currently the following region server group operations don't have > corresponding hook in MasterObserver : > * getRSGroupInfo > * getRSGroupInfoOfServer > * getRSGroupInfoOfTable > * listRSGroup > This JIRA is to > * add them to MasterObserver > * add pre/post hook calls in RSGroupAdminEndpoint thru > master.getMasterCoprocessorHost for the above operations > * add corresponding tests to TestRSGroups (in similar manner to that of > HBASE-20627) -- This message was sent by Atlassian Jira (v8.20.10#820010)
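The pre/post hook pattern the issue asks to extend to the rsgroup read operations can be sketched with a self-contained model. The names below (`GroupObserver`, `RSGroupCoprocessorHost`) are illustrative stand-ins, not HBase's actual `MasterObserver` or coprocessor-host API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical observer interface: default no-op hooks, as MasterObserver
// hooks are typically optional for implementors.
interface GroupObserver {
    default void preGetRSGroupInfo(String groupName) {}
    default void postGetRSGroupInfo(String groupName) {}
}

// Hypothetical host that wraps the actual operation with pre/post calls on
// every registered observer, mirroring the master.getMasterCoprocessorHost()
// usage described in the issue.
class RSGroupCoprocessorHost {
    private final List<GroupObserver> observers = new ArrayList<>();

    void register(GroupObserver observer) { observers.add(observer); }

    <T> T getRSGroupInfo(String groupName, Supplier<T> operation) {
        observers.forEach(o -> o.preGetRSGroupInfo(groupName));
        T result = operation.get();
        observers.forEach(o -> o.postGetRSGroupInfo(groupName));
        return result;
    }
}
```

A security coprocessor can then veto the call from the pre-hook by throwing, which is exactly what the missing hooks make impossible today.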
[jira] [Commented] (HBASE-27903) Skip submitting Split/Merge procedure when split/merge is disabled at table level
[ https://issues.apache.org/jira/browse/HBASE-27903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884140#comment-17884140 ] Hudson commented on HBASE-27903: Results for branch branch-2 [build #1152 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Skip submitting Split/Merge procedure when split/merge is disabled at table > level > - > > Key: HBASE-27903 > URL: https://issues.apache.org/jira/browse/HBASE-27903 > Project: HBase > Issue Type: Improvement > Components: Admin >Reporter: Ashok shetty >Assignee: Nihal Jain >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2 > > > *Scenario* > If split/merge is disabled at table level , master will submit a > SplitTableRegionProcedure/MergeTableRegionsProcedure , and rollback it as > execution fails during pre-checks . > *Improvement* > Master can check it early and no need to submit > SplitTableRegionProcedure/MergeTableRegionsProcedure when split/merge switch > is disabled at Table level. > *Steps* > {code:java} > create 'testCreateTableWithMergeDisableParameter', 'f1', {MERGE_ENABLED => > false} > list_regions 'testCreateTableWithMergeDisableParameter' > merge_region > 'd21cdc5d488e8036017696c46cffd9b1','6382c8f731a4f0379b6e98ece4b06e3e' > {code} > {code:java} > create 'testcreatetablewithsplitdisableparameter', 'f1', {SPLIT_ENABLED => > false} > split 'testcreatetablewithsplitdisableparameter','30'{code} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
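The proposed improvement is a fast-fail check before any procedure is created. A minimal sketch of that idea, where `TableDescriptorLite` and `MasterLite` are hypothetical stand-ins for HBase's `TableDescriptor` and the master's submission path:

```java
// Hypothetical sketch of the early check: reject a split or merge request up
// front when the table disables it, instead of submitting a procedure that is
// doomed to roll back during its pre-checks.
class TableDescriptorLite {
    final boolean splitEnabled;  // stand-in for SPLIT_ENABLED
    final boolean mergeEnabled;  // stand-in for MERGE_ENABLED
    TableDescriptorLite(boolean splitEnabled, boolean mergeEnabled) {
        this.splitEnabled = splitEnabled;
        this.mergeEnabled = mergeEnabled;
    }
}

class MasterLite {
    /** Returns a procedure id, or throws before any procedure is created. */
    long submitSplit(TableDescriptorLite table) {
        if (!table.splitEnabled) {
            throw new IllegalStateException("split is disabled for this table");
        }
        return 1L; // pretend procedure id
    }

    long submitMerge(TableDescriptorLite table) {
        if (!table.mergeEnabled) {
            throw new IllegalStateException("merge is disabled for this table");
        }
        return 2L; // pretend procedure id
    }
}
```

The client gets an immediate error, and the master avoids allocating and rolling back a procedure.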
[jira] [Commented] (HBASE-28868) Add missing permission check for updateRSGroupConfig in branch-2
[ https://issues.apache.org/jira/browse/HBASE-28868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884137#comment-17884137 ] Hudson commented on HBASE-28868: Results for branch branch-2 [build #1152 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Add missing permission check for updateRSGroupConfig in branch-2 > > > Key: HBASE-28868 > URL: https://issues.apache.org/jira/browse/HBASE-28868 > Project: HBase > Issue Type: Task > Components: rsgroup >Affects Versions: 2.7.0 >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.0, 2.6.1, 2.5.11 > > > Found this during HBASE-28867, we do not have security check for > updateRSGroupConfig in branch-2. See > [https://github.com/apache/hbase/blob/0dc334f572329be7eb2455cec3519fc820c04c25/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java#L450] > Same check exists in master > [https://github.com/apache/hbase/blob/52082bc5b80a60406bfaaa630ed5cb23027436c1/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java#L2279] > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28770) Support partial results in AggregateImplementation and AsyncAggregationClient
[ https://issues.apache.org/jira/browse/HBASE-28770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884136#comment-17884136 ] Hudson commented on HBASE-28770: Results for branch branch-2 [build #1152 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Support partial results in AggregateImplementation and AsyncAggregationClient > - > > Key: HBASE-28770 > URL: https://issues.apache.org/jira/browse/HBASE-28770 > Project: HBase > Issue Type: Improvement > Components: Client, Coprocessors, Quotas >Affects Versions: 2.6.0 >Reporter: Charles Connell >Assignee: Charles Connell >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > Currently there is a gap in the coverage of HBase's quota-based workload > throttling. Requests sent by {{[Async]AggregationClient}} reach > {{AggregateImplementation}}. This then executes Scans in a way that bypasses > the quota system. We see issues with this at HubSpot where clusters suffer > under this load and we don't have a good way to protect them. > In this ticket I'm teaching {{AggregateImplementation}} to optionally stop > scanning when a throttle is violated, and send back just the results it has > accumulated so far. In addition, it will send back a row key to > {{AsyncAggregationClient}}. When the client gets a response with a row key, > it will sleep in order to satisfy the throttle, and then send a new request > with a scan starting at that row key. This will have the effect of continuing > the work where the last request stopped. > This feature will be unconditionally enabled by {{AsyncAggregationClient}} > once this ticket is finished. {{AggregateImplementation}} will not assume > that clients support partial results, however, so it can keep supporting > older clients. For clients that do not support partial results, throttles > will not be respected, and results will always be complete. > This feature was [first proposed on the mailing > list|https://lists.apache.org/thread/1vqnxb71z7swq2cogz4qg3cn6b10xp4v]. > Builds on work in HBASE-28346. -- This message was sent by Atlassian Jira (v8.20.10#820010)
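The resume protocol described above (server stops at the throttle and returns partial results plus a row key; client waits, then re-requests from that key) can be modeled in a self-contained sketch. This is an illustration of the control flow only, not the actual `AggregateImplementation`/`AsyncAggregationClient` API:

```java
// Hypothetical model of the partial-result protocol: the "server" stops after
// a per-call budget of rows (standing in for the quota throttle) and hands
// back a resume key; the client keeps re-requesting from that key until the
// scan is complete.
class ThrottledAggregator {
    private final int[] rows;
    private final int rowsPerCall; // stand-in for the throttle budget

    ThrottledAggregator(int[] rows, int rowsPerCall) {
        this.rows = rows;
        this.rowsPerCall = rowsPerCall;
    }

    /** Returns {partialSum, resumeIndex}; resumeIndex == -1 means done. */
    long[] sumFrom(int startRow) {
        long sum = 0;
        int i = startRow;
        for (int n = 0; n < rowsPerCall && i < rows.length; n++, i++) {
            sum += rows[i];
        }
        return new long[] { sum, i < rows.length ? i : -1 };
    }
}

class AggregationClientLite {
    /** Client loop: accumulate partials, resuming where the server stopped. */
    static long sum(ThrottledAggregator server) {
        long total = 0;
        int resume = 0;
        while (resume != -1) {
            long[] response = server.sumFrom(resume);
            total += response[0];
            resume = (int) response[1];
            // A real client would sleep here to satisfy the throttle before
            // issuing the next request.
        }
        return total;
    }
}
```

The final total is identical to an unthrottled scan; only the work is spread across multiple requests.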
[jira] [Commented] (HBASE-28842) TestRequestAttributes should fail when expected
[ https://issues.apache.org/jira/browse/HBASE-28842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884141#comment-17884141 ] Hudson commented on HBASE-28842: Results for branch branch-2 [build #1152 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1152/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > TestRequestAttributes should fail when expected > --- > > Key: HBASE-28842 > URL: https://issues.apache.org/jira/browse/HBASE-28842 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 3.0.0 >Reporter: Evelyn Boland >Assignee: Evelyn Boland >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1, 2.7.0, 3.0.0-beta-2, 2.6.1 > > > Problem: > The tests in the TestRequestAttributes class pass even when they should fail. 
> I've included an example of a test that should fail but does not below. > Fix: > Throw an IOException in the AttributesCoprocessor when the map of expected > request attributes does not match the map of given request attributes. > > Test: > We set 2+ request attributes on the Get request but always return 0 request > attributes from AttributesCoprocessor::getRequestAttributesForRowKey method. > Yet the test passes even though the map of expected request attributes never > matches the map of given request attributes. > {code:java} > @Category({ ClientTests.class, MediumTests.class }) > public class TestRequestAttributes { > @ClassRule > public static final HBaseClassTestRule CLASS_RULE = > HBaseClassTestRule.forClass(TestRequestAttributes.class); > private static final byte[] ROW_KEY1 = Bytes.toBytes("1"); > private static final Map<byte[], Map<String, byte[]>> > ROW_KEY_TO_REQUEST_ATTRIBUTES = > new HashMap<>(); > static { > CONNECTION_ATTRIBUTES.put("clientId", Bytes.toBytes("foo")); > ROW_KEY_TO_REQUEST_ATTRIBUTES.put(ROW_KEY1, addRandomRequestAttributes()); > } > private static final ExecutorService EXECUTOR_SERVICE = > Executors.newFixedThreadPool(100); > private static final byte[] FAMILY = Bytes.toBytes("0"); > private static final TableName TABLE_NAME = > TableName.valueOf("testRequestAttributes"); > private static final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil(); > private static SingleProcessHBaseCluster cluster; > @BeforeClass > public static void setUp() throws Exception { > cluster = TEST_UTIL.startMiniCluster(1); > Table table = TEST_UTIL.createTable(TABLE_NAME, new byte[][] { FAMILY }, > 1, > HConstants.DEFAULT_BLOCKSIZE, AttributesCoprocessor.class.getName()); > table.close(); > } > @AfterClass > public static void afterClass() throws Exception { > cluster.close(); > TEST_UTIL.shutdownMiniCluster(); > } > @Test > public void testRequestAttributesGet() throws IOException { > Configuration conf = TEST_UTIL.getConfiguration(); > try ( > Connection conn = 
ConnectionFactory.createConnection(conf, null, > AuthUtil.loginClient(conf), > CONNECTION_ATTRIBUTES); > Table table = configureRequestAttributes(conn.getTableBuilder(TABLE_NAME, > EXECUTOR_SERVICE), > ROW_KEY_TO_REQUEST_ATTRIBUTES.get(ROW_KEY1)).build()) { > table.get(new Get(ROW_KEY1)); > } > } > private static Map<String, byte[]> addRandomRequestAttributes() { > Map<String, byte[]> requestAttributes = new HashMap<>(); > int j = Math.max(2, (int) (10 * Math.random())); > for (int i = 0; i < j; i++) { > requestAttributes.put(String.valueOf(i), > Bytes.toBytes(UUID.randomUUID().toString())); > } > return requestAttributes; > } > public static class AttributesCoprocesso
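The fix described above (throw an `IOException` when the given request attributes don't match the expected ones) hinges on comparing maps of `byte[]` values by content, since `Map.equals` would compare the arrays by reference. A minimal sketch, with a hypothetical `AttributeValidator` standing in for the coprocessor-side check:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Map;

// Hypothetical stand-in for the check the fix adds to AttributesCoprocessor:
// fail loudly (IOException) on mismatch so the test actually fails.
class AttributeValidator {
    static void validate(Map<String, byte[]> expected,
                         Map<String, byte[]> actual) throws IOException {
        // byte[] values need a content comparison via Arrays.equals.
        boolean matches = expected.size() == actual.size()
            && expected.entrySet().stream().allMatch(
                e -> Arrays.equals(e.getValue(), actual.get(e.getKey())));
        if (!matches) {
            throw new IOException("Request attributes do not match expected");
        }
    }
}
```

The thrown exception propagates back to the client call in the test, so a mismatch can no longer pass silently.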
[jira] [Commented] (HBASE-28770) Support partial results in AggregateImplementation and AsyncAggregationClient
[ https://issues.apache.org/jira/browse/HBASE-28770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884122#comment-17884122 ] Hudson commented on HBASE-28770: Results for branch branch-2.6 [build #206 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/206/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/206/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/206/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/206/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/206/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/206/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Support partial results in AggregateImplementation and AsyncAggregationClient > - > > Key: HBASE-28770 > URL: https://issues.apache.org/jira/browse/HBASE-28770 > Project: HBase > Issue Type: Improvement > Components: Client, Coprocessors, Quotas >Affects Versions: 2.6.0 >Reporter: Charles Connell >Assignee: Charles Connell >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1 > > > Currently there is a gap in the coverage of HBase's quota-based workload > throttling. Requests sent by {{[Async]AggregationClient}} reach > {{AggregateImplementation}}. This then executes Scans in a way that bypasses > the quota system. We see issues with this at HubSpot where clusters suffer > under this load and we don't have a good way to protect them. > In this ticket I'm teaching {{AggregateImplementation}} to optionally stop > scanning when a throttle is violated, and send back just the results it has > accumulated so far. In addition, it will send back a row key to > {{AsyncAggregationClient}}. When the client gets a response with a row key, > it will sleep in order to satisfy the throttle, and then send a new request > with a scan starting at that row key. This will have the effect of continuing > the work where the last request stopped. > This feature will be unconditionally enabled by {{AsyncAggregationClient}} > once this ticket is finished. {{AggregateImplementation}} will not assume > that clients support partial results, however, so it can keep supporting > older clients. For clients that do not support partial results, throttles > will not be respected, and results will always be complete. > This feature was [first proposed on the mailing > list|https://lists.apache.org/thread/1vqnxb71z7swq2cogz4qg3cn6b10xp4v]. > Builds on work in HBASE-28346. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28816) The description of "hbase.superuser" is confusing
[ https://issues.apache.org/jira/browse/HBASE-28816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883979#comment-17883979 ] Hudson commented on HBASE-28816: Results for branch branch-2.5 [build #598 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/598/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/598/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/598/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/598/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/598/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/598/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The description of "hbase.superuser" is confusing > - > > Key: HBASE-28816 > URL: https://issues.apache.org/jira/browse/HBASE-28816 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: YUBI LEE >Assignee: YUBI LEE >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > > With "hbase.superuser" configuration, you can also set groups. 
However, it > should be prefixed with "@" but there is no such explanation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
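The "@" prefix convention the issue wants documented can be illustrated with an hbase-site.xml fragment (the user and group names here are examples):

```xml
<property>
  <name>hbase.superuser</name>
  <!-- plain entries are users; entries prefixed with "@" are groups -->
  <value>hbase,admin_user,@hbase_admins</value>
</property>
```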
[jira] [Commented] (HBASE-28816) The description of "hbase.superuser" is confusing
[ https://issues.apache.org/jira/browse/HBASE-28816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883974#comment-17883974 ] Hudson commented on HBASE-28816: Results for branch master [build #1167 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1167/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1167/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1167/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The description of "hbase.superuser" is confusing > - > > Key: HBASE-28816 > URL: https://issues.apache.org/jira/browse/HBASE-28816 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: YUBI LEE >Assignee: YUBI LEE >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > > With "hbase.superuser" configuration, you can also set groups. However, it > should be prefixed with "@" but there is no such explanation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28816) The description of "hbase.superuser" is confusing
[ https://issues.apache.org/jira/browse/HBASE-28816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883928#comment-17883928 ] Hudson commented on HBASE-28816: Results for branch branch-3 [build #295 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/295/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/295/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/295/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The description of "hbase.superuser" is confusing > - > > Key: HBASE-28816 > URL: https://issues.apache.org/jira/browse/HBASE-28816 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: YUBI LEE >Assignee: YUBI LEE >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > > With "hbase.superuser" configuration, you can also set groups. However, it > should be prefixed with "@" but there is no such explanation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28816) The description of "hbase.superuser" is confusing
[ https://issues.apache.org/jira/browse/HBASE-28816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883721#comment-17883721 ] Hudson commented on HBASE-28816: Results for branch branch-2 [build #1150 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1150/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1150/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1150/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1150/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1150/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1150/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The description of "hbase.superuser" is confusing > - > > Key: HBASE-28816 > URL: https://issues.apache.org/jira/browse/HBASE-28816 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: YUBI LEE >Assignee: YUBI LEE >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > > With "hbase.superuser" configuration, you can also set groups. 
However, it > should be prefixed with "@" but there is no such explanation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28816) The description of "hbase.superuser" is confusing
[ https://issues.apache.org/jira/browse/HBASE-28816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883681#comment-17883681 ] Hudson commented on HBASE-28816: Results for branch branch-2.6 [build #204 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/204/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/204/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/204/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/204/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/204/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/204/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The description of "hbase.superuser" is confusing > - > > Key: HBASE-28816 > URL: https://issues.apache.org/jira/browse/HBASE-28816 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: YUBI LEE >Assignee: YUBI LEE >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > > With "hbase.superuser" configuration, you can also set groups. 
However, it > should be prefixed with "@" but there is no such explanation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28850) Only return from ReplicationSink.replicationEntries while all background tasks are finished
[ https://issues.apache.org/jira/browse/HBASE-28850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883426#comment-17883426 ] Hudson commented on HBASE-28850: Results for branch branch-2.5 [build #597 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/597/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/597/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/597/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/597/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/597/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/597/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Only return from ReplicationSink.replicationEntries while all background > tasks are finished > --- > > Key: HBASE-28850 > URL: https://issues.apache.org/jira/browse/HBASE-28850 > Project: HBase > Issue Type: Improvement > Components: Replication, rpc >Affects Versions: 2.6.0, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
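The behaviour named in the title, returning from the sink's replicate call only after every background task it spawned has finished, is commonly implemented by joining on the spawned futures. A hedged, self-contained sketch (`SinkLite` is a hypothetical stand-in, not the actual `ReplicationSink` code):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: block until all background futures complete before
// returning, so the RPC response is not sent while edits are still being
// applied in the background.
class SinkLite {
    static void replicateEntries(List<CompletableFuture<Void>> backgroundTasks) {
        CompletableFuture
            .allOf(backgroundTasks.toArray(new CompletableFuture[0]))
            .join(); // propagates failures and waits for completion
    }
}
```

This gives the source cluster a reliable signal: once the RPC returns, all shipped edits have been applied (or the call failed).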
[jira] [Commented] (HBASE-28821) Optimise bucket cache persistence by reusing backmap entry object.
[ https://issues.apache.org/jira/browse/HBASE-28821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883157#comment-17883157 ] Hudson commented on HBASE-28821: Results for branch branch-2 [build #1149 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Optimise bucket cache persistence by reusing backmap entry object. 
> -- > > Key: HBASE-28821 > URL: https://issues.apache.org/jira/browse/HBASE-28821 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 4.0.0-alpha-1, 3.0.0-beta-2 >Reporter: Janardhan Hungund >Assignee: Janardhan Hungund >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1, 2.7.0, 3.0.0-beta-2 > > > During the persistence of backing map entries into the backing map file, we > create a new BackingMapEntry.Builder for each entry in the backing map. This > can be optimised by reusing a single BackingMapEntry.Builder object to build > each entry during serialisation. > This Jira tracks that optimisation: avoiding the creation of multiple builder objects. > Thanks, > Janardhan -- This message was sent by Atlassian Jira (v8.20.10#820010)
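The builder-reuse pattern described in that issue can be sketched roughly as below. This is a minimal illustration only, not the actual HBase BucketProtoUtils code: `EntryBuilder`, `serializeAll`, and the allocation counter are hypothetical stand-ins for a protobuf-style message builder that is cleared and refilled once per backing-map entry instead of being reallocated.

```java
import java.util.ArrayList;
import java.util.List;

public class BuilderReuse {
  // Minimal stand-in for a protobuf-style message builder (hypothetical).
  static final class EntryBuilder {
    static int allocations = 0; // counts how many builder objects were created
    private String key;
    private long offset;

    EntryBuilder() { allocations++; }
    EntryBuilder setKey(String k) { this.key = k; return this; }
    EntryBuilder setOffset(long o) { this.offset = o; return this; }
    String build() { return key + "@" + offset; }
    // Reset all fields so the same builder instance can serve the next entry.
    EntryBuilder clear() { key = null; offset = 0; return this; }
  }

  // Serialize every entry while allocating exactly one builder object.
  static List<String> serializeAll(List<String> keys) {
    List<String> out = new ArrayList<>();
    EntryBuilder builder = new EntryBuilder(); // single builder, reused below
    long offset = 0;
    for (String key : keys) {
      out.add(builder.clear().setKey(key).setOffset(offset++).build());
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(serializeAll(List.of("a", "b", "c"))
      + " allocations=" + EntryBuilder.allocations);
  }
}
```

Real protobuf builders expose an analogous `clear()` that resets the message to its defaults, which is what makes this reuse safe per entry.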
[jira] [Commented] (HBASE-28840) Optimise memory utilisation retrieval of bucket-cache from persistence.
[ https://issues.apache.org/jira/browse/HBASE-28840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883158#comment-17883158 ] Hudson commented on HBASE-28840: Results for branch branch-2 [build #1149 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Optimise memory utilisation retrieval of bucket-cache from persistence. 
> --- > > Key: HBASE-28840 > URL: https://issues.apache.org/jira/browse/HBASE-28840 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 3.0.0-beta-1, 4.0.0-alpha-1, 2.7.0 >Reporter: Janardhan Hungund >Assignee: Janardhan Hungund >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2 > > > During the persistence of the bucket-cache backing map to a file, the backing map > is divided into multiple smaller chunks and persisted to the file. This > chunking avoids high memory utilisation during persistence, since only > a small subset of backing map entries needs to be persisted in one chunk. > However, during the retrieval of the backing map at server startup, > we accumulate all these chunks into a list and then process each chunk to > recreate the in-memory backing map. Since all the chunks are fetched from > the persistence file before being processed, the memory requirement is higher. > The retrieval of the bucket-cache from the persistence file can be optimised to > process one chunk at a time and so avoid high memory utilisation. > Thanks, > Janardhan -- This message was sent by Atlassian Jira (v8.20.10#820010)
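The chunk-at-a-time retrieval described in that issue can be sketched as follows. This is a hedged illustration, not the actual BucketCache recovery code: the persisted chunks are modeled as an `Iterator` of maps (the real code reads protobuf chunks from the persistence file), and `ChunkedRetrieval`/`restore` are hypothetical names. The point is that only one chunk is ever resident while the in-memory backing map is rebuilt, instead of first accumulating all chunks into a list.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class ChunkedRetrieval {
  // Rebuild the in-memory backing map by draining one chunk at a time,
  // so memory holds at most one chunk (plus the growing result map).
  static Map<String, Long> restore(Iterator<Map<String, Long>> chunks) {
    Map<String, Long> backingMap = new HashMap<>();
    while (chunks.hasNext()) {
      Map<String, Long> chunk = chunks.next(); // only this chunk is held
      backingMap.putAll(chunk);                // merge it, then drop the reference
    }
    return backingMap;
  }

  public static void main(String[] args) {
    Iterator<Map<String, Long>> chunks =
      List.of(Map.of("blockA", 0L), Map.of("blockB", 64L)).iterator();
    System.out.println(restore(chunks));
  }
}
```

Contrast this with the pre-fix shape, which would collect every chunk into a `List<Map<String, Long>>` before merging, keeping all chunks live simultaneously during startup.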
[jira] [Commented] (HBASE-28850) Only return from ReplicationSink.replicationEntries while all background tasks are finished
[ https://issues.apache.org/jira/browse/HBASE-28850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883159#comment-17883159 ] Hudson commented on HBASE-28850: Results for branch branch-2 [build #1149 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1149/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Only return from ReplicationSink.replicationEntries while all background > tasks are finished > --- > > Key: HBASE-28850 > URL: https://issues.apache.org/jira/browse/HBASE-28850 > Project: HBase > Issue Type: Improvement > Components: Replication, rpc >Affects Versions: 2.6.0, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28850) Only return from ReplicationSink.replicationEntries while all background tasks are finished
[ https://issues.apache.org/jira/browse/HBASE-28850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883153#comment-17883153 ] Hudson commented on HBASE-28850: Results for branch branch-2.6 [build #203 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/203/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/203/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/203/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/203/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/203/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/203/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Only return from ReplicationSink.replicationEntries while all background > tasks are finished > --- > > Key: HBASE-28850 > URL: https://issues.apache.org/jira/browse/HBASE-28850 > Project: HBase > Issue Type: Improvement > Components: Replication, rpc >Affects Versions: 2.6.0, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28840) Optimise memory utilisation retrieval of bucket-cache from persistence.
[ https://issues.apache.org/jira/browse/HBASE-28840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883090#comment-17883090 ] Hudson commented on HBASE-28840: Results for branch master [build #1166 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1166/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1166/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1166/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Optimise memory utilisation retrieval of bucket-cache from persistence. > --- > > Key: HBASE-28840 > URL: https://issues.apache.org/jira/browse/HBASE-28840 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 3.0.0-beta-1, 4.0.0-alpha-1, 2.7.0 >Reporter: Janardhan Hungund >Assignee: Janardhan Hungund >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2 > > > During the persistence of the bucket-cache backing map to a file, the backing map > is divided into multiple smaller chunks and persisted to the file. This > chunking avoids high memory utilisation during persistence, since only > a small subset of backing map entries needs to be persisted in one chunk. > However, during the retrieval of the backing map at server startup, > we accumulate all these chunks into a list and then process each chunk to > recreate the in-memory backing map. Since all the chunks are fetched from > the persistence file before being processed, the memory requirement is higher. 
> The retrieval of the bucket-cache from the persistence file can be optimised to > process one chunk at a time and so avoid high memory utilisation. > Thanks, > Janardhan -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28850) Only return from ReplicationSink.replicationEntries while all background tasks are finished
[ https://issues.apache.org/jira/browse/HBASE-28850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883092#comment-17883092 ] Hudson commented on HBASE-28850: Results for branch master [build #1166 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1166/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1166/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1166/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Only return from ReplicationSink.replicationEntries while all background > tasks are finished > --- > > Key: HBASE-28850 > URL: https://issues.apache.org/jira/browse/HBASE-28850 > Project: HBase > Issue Type: Improvement > Components: Replication, rpc >Affects Versions: 2.6.0, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28842) TestRequestAttributes should fail when expected
[ https://issues.apache.org/jira/browse/HBASE-28842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883091#comment-17883091 ] Hudson commented on HBASE-28842: Results for branch master [build #1166 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1166/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1166/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1166/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > TestRequestAttributes should fail when expected > --- > > Key: HBASE-28842 > URL: https://issues.apache.org/jira/browse/HBASE-28842 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 3.0.0 >Reporter: Evelyn Boland >Assignee: Evelyn Boland >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1, 2.7.0, 3.0.0-beta-2, 2.6.1 > > > Problem: > The tests in the TestRequestAttributes class pass even when they should fail. > I've included an example of a test that should fail but does not below. > Fix: > Throw an IOException in the AttributesCoprocessor when the map of expected > request attributes does not match the map of given request attributes. > > Test: > We set 2+ request attributes on the Get request but always return 0 request > attributes from AttributesCoprocessor::getRequestAttributesForRowKey method. > Yet the test passes even though the map of expected request attributes never > matches the map of given request attributes. 
> {code:java}
> @Category({ ClientTests.class, MediumTests.class })
> public class TestRequestAttributes {
>   @ClassRule
>   public static final HBaseClassTestRule CLASS_RULE =
>     HBaseClassTestRule.forClass(TestRequestAttributes.class);
>
>   private static final byte[] ROW_KEY1 = Bytes.toBytes("1");
>   private static final Map<byte[], Map<String, byte[]>> ROW_KEY_TO_REQUEST_ATTRIBUTES =
>     new HashMap<>();
>   static {
>     CONNECTION_ATTRIBUTES.put("clientId", Bytes.toBytes("foo"));
>     ROW_KEY_TO_REQUEST_ATTRIBUTES.put(ROW_KEY1, addRandomRequestAttributes());
>   }
>   private static final ExecutorService EXECUTOR_SERVICE = Executors.newFixedThreadPool(100);
>   private static final byte[] FAMILY = Bytes.toBytes("0");
>   private static final TableName TABLE_NAME = TableName.valueOf("testRequestAttributes");
>   private static final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
>   private static SingleProcessHBaseCluster cluster;
>
>   @BeforeClass
>   public static void setUp() throws Exception {
>     cluster = TEST_UTIL.startMiniCluster(1);
>     Table table = TEST_UTIL.createTable(TABLE_NAME, new byte[][] { FAMILY }, 1,
>       HConstants.DEFAULT_BLOCKSIZE, AttributesCoprocessor.class.getName());
>     table.close();
>   }
>
>   @AfterClass
>   public static void afterClass() throws Exception {
>     cluster.close();
>     TEST_UTIL.shutdownMiniCluster();
>   }
>
>   @Test
>   public void testRequestAttributesGet() throws IOException {
>     Configuration conf = TEST_UTIL.getConfiguration();
>     try (
>       Connection conn = ConnectionFactory.createConnection(conf, null, AuthUtil.loginClient(conf),
>         CONNECTION_ATTRIBUTES);
>       Table table = configureRequestAttributes(conn.getTableBuilder(TABLE_NAME, EXECUTOR_SERVICE),
>         ROW_KEY_TO_REQUEST_ATTRIBUTES.get(ROW_KEY1)).build()) {
>       table.get(new Get(ROW_KEY1));
>     }
>   }
>
>   private static Map<String, byte[]> addRandomRequestAttributes() {
>     Map<String, byte[]> requestAttributes = new HashMap<>();
>     int j = Math.max(2, (int) (10 * Math.random()));
>     for (int i = 0; i < j; i++) {
>       requestAttributes.put(String.valueOf(i), Bytes.toBytes(UUID.randomUUID().toString()));
>     }
>     return requestAttributes;
>   }
>
>   public static class AttributesCoprocessor implements RegionObserver, RegionCoprocessor {
>     @Override
>     public Optional<RegionObserver> getRegionObserver() {
>       return Optional.of(this);
>     }
>
>     @Override
>     public void preGetOp(ObserverContext<RegionCoprocessorEnvironment> c, Get get,
>       List<Cell> result) throws IOException {
>       validateRequestAttributes(getRequestAttributesForRowKey(get.getRow()));
>     }
>
>     private Map<String, byte[]> getRequestAttributesForRowKey(byte[] rowKey) {
>       return Collections.emptyMap(); // This line helps demonstrate the bug
>     }
>
>     private boolean validateRequestAttributes(Map<String, byte[]> requestAttributes) {
>       RpcCall rpcCall = RpcServer.getCurrentC
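The fix the issue describes, throwing an `IOException` on mismatch instead of silently returning a boolean, can be sketched roughly as below. This is a hedged stand-in, not the actual patch: the class name, the two-map signature, and the messages are illustrative, and the `RpcCall`-based lookup of the observed attributes is omitted.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Map;

public class AttributeValidation {
  // Throw rather than return false, so a mismatch actually fails the Get
  // (and therefore the test) instead of being ignored.
  static void validateRequestAttributes(Map<String, byte[]> expected,
      Map<String, byte[]> actual) throws IOException {
    if (expected.size() != actual.size()) {
      throw new IOException("attribute count mismatch: expected " + expected.size()
        + " but got " + actual.size());
    }
    for (Map.Entry<String, byte[]> e : expected.entrySet()) {
      byte[] got = actual.get(e.getKey());
      if (got == null || !Arrays.equals(e.getValue(), got)) {
        throw new IOException("attribute mismatch for key " + e.getKey());
      }
    }
  }

  public static void main(String[] args) throws IOException {
    Map<String, byte[]> expected = Map.of("clientId", "foo".getBytes());
    validateRequestAttributes(expected, expected); // matching maps: no exception
    try {
      // Empty map, as returned by the buggy getRequestAttributesForRowKey:
      validateRequestAttributes(expected, Map.of());
      System.out.println("BUG: mismatch not detected");
    } catch (IOException e) {
      System.out.println("mismatch detected: " + e.getMessage());
    }
  }
}
```

With this shape, the empty map returned by `getRequestAttributesForRowKey` would surface as an exception from `preGetOp`, making the test fail as the reporter expects.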
[jira] [Commented] (HBASE-28850) Only return from ReplicationSink.replicationEntries while all background tasks are finished
[ https://issues.apache.org/jira/browse/HBASE-28850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883076#comment-17883076 ] Hudson commented on HBASE-28850: Results for branch branch-3 [build #294 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/294/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/294/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk17 hadoop3 checks{color} -- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/294/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Only return from ReplicationSink.replicationEntries while all background > tasks are finished > --- > > Key: HBASE-28850 > URL: https://issues.apache.org/jira/browse/HBASE-28850 > Project: HBase > Issue Type: Improvement > Components: Replication, rpc >Affects Versions: 2.6.0, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.11 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)